5 THE CONTEST FOR ARTIFICIAL INTELLIGENCE SUPREMACY

Artificial intelligence (AI), a software-based capability already being harnessed by businesses, governments, and militaries all around the world, is poised over the next decade to become the most important, valuable, and dangerous technology ever developed by humans. AI will profoundly impact, and be impacted by, Cold War 2.0. The worst fears of the current cold war, and some of the world's most life-changing hopes, rest with this relatively new technology. There will be constraints put on AI by governments, but these will look very different depending on whether they are levied by the democracies or the autocracies. This will be a critical fault line in Cold War 2.0.

AI is an “accelerator technology,” in that it not only progresses rapidly itself, but it serves as an accelerant turbocharging all other technology innovation as well. The other principal technologies discussed in the next three chapters, namely semiconductor chips, quantum computing, and biotechnology, are also accelerators, with each of them (and especially AI) releasing powerful waves of competitive displacement through the economy and national security structures. Whichever camp—the autocracies or the democracies—can best capture the added value of these accelerators will prevail over the other in Cold War 2.0.

ACCELERATOR TECHNOLOGIES

Chapter 1 argued that technology and innovation will play a central role in determining whether the democracies or the autocracies win Cold War 2.0. Four accelerator technologies will be critical to this determination: artificial intelligence, semiconductor chips, quantum computing, and biotechnology. Each of these is an accelerator technology for three reasons: (1) each is profoundly transformative, standing alone; (2) each will accelerate developments and progress in each of the others, and in a range of other important technologies; and (3) each reminds humans of the limitless potential of new technology and innovation. Each will also drive competitive displacement in the civilian and military domains. Whichever camp (i.e., the democracies or the autocracies) masters these accelerator technologies before the other, and then sustains that lead through perpetual innovation, will achieve a more robust economy and a more powerful military. Cold War 2.0 will have a winner and loser as a result of technology, just as was the case with Cold War 1.

History has known a number of accelerator technologies, but actually not that many, considering that literally thousands of technologies have come and gone over the millennia. Other accelerator technologies include fire (not so much "invented" as tamed), agriculture, writing, Gutenberg's printing press with its movable type, the steam engine, and the Internet. These are good examples, but perhaps the most powerful accelerator technology of the past 200 years was the telegraph. Prior to its invention in the 1840s, information could only be transmitted from one person to another as fast as the sender's paper-based document could be carried by horse, wagon, or ship. The information did not travel faster than the substrate on which it was expressed.

Consider that prior to the telegraph, important news that unfolded in London could only be shared with someone in New York about fourteen days later, the time it generally took an ocean steamer to cross the Atlantic. Then in 1858 the first telegraph cable was laid under the vast ocean. Thereafter, telegraph messages took only minutes to get from London to New York (and vice versa). What an accelerator technology! Commercial, political, and scientific news could be shared across the Atlantic virtually in real time. This single invention profoundly changed the way people looked upon the earth; the globe suddenly began to feel small. The telegraph changed how people did business internationally, how people fought wars, how businesspeople organized companies and commercial ventures, and how people could travel (and still stay in touch with business associates and family members back home), to name just a few ramifications. The Internet was certainly important when it was unveiled some forty years ago, but the telegraph (sometimes referred to as the Victorian Internet) was arguably more so, because the sense of collapsing distance that the Internet offers was first felt roughly 150 years earlier, when the initial cross-Atlantic telegrams were sent. Similar "firsts" are being experienced by millions of users of ChatGPT when they have the new AI system write them an essay, poem, or story on demand.

Accelerator technologies are invariably "dual-use," in that they can be deployed in both the civilian and military domains. The control panel of a modern clothes washer is run by a semiconductor chip (SC). Literally the same SC can be repurposed to run the guidance system on a military drone; the drone itself is the quintessential dual-use technology. This dual use of SCs has been witnessed recently on the battlefield in Ukraine. Russia is running out of new SCs. As a backstop, it is taking SCs out of clothes washers and plugging them into drones. They are not perfect, but they are good enough. It is hard to think of anything more "dual-use" than that!

Given the dual-use nature of accelerator technologies, and certain of the other important technologies, it is to be expected that militaries in major powers around the world will work diligently to ensure that the academic community that researches these technologies, and the civilian and military-industrial companies that design, develop, and deploy them, are at all times healthy and producing state-of-the-art versions of products encompassing them. This is why governments, ministries of defense, and the militaries themselves fund innovation and purchase prototypes in these domains.

The precise cadence of the release of products containing the most advanced versions of the accelerator technologies is always difficult to estimate. Forecasting the exact future with respect to any technology, let alone the accelerators that will lead to profound competitive displacement, is an exercise that usually ends in embarrassment. Over the last 150 years there have been some fascinating predictions about technology that have proven ludicrously wrong.

Moreover, militaries themselves are often greatly off the mark when assessing the future use of a nascent technology. Robert Fulton, the American steamship pioneer, approached none other than Napoleon Bonaparte to finance a steam-powered naval vessel, and Napoleon rebuffed him for what the celebrated military genius believed was a nonsensical idea. Similarly, the US Army Air Corps took a look at the embryonic rockets being built by Robert Goddard in 1936 and concluded they would have no military application, not even as targets for aerial gunners, let alone as offensive weapons.

People, even experts, are unable to predict the precise pace of change in technology innovation because everyone typically underestimates the long-term impact of a technology, while innovators often grossly overestimate its short-term impact. Fully autonomous, self-driving cars would be crowding the streets of most cities in huge numbers by now if the estimates from fifteen years ago had come true. On the other hand, once self-driving technology becomes viable and accepted, most vehicles will be driven autonomously most of the time, given the huge economic and time savings it will generate. At that point the autonomous vehicle will competitively displace the traditional car with a human driver.

Another surprising feature of accelerator technologies is that certain aspects of them invariably end up being regulated by government, for the simple reason that their power and performance make them somewhat dangerous. AI, for instance, presents serious risk factors in both civilian and military use. Semiconductor chips seem less risky on their face, except when they are implemented in a dangerous setting (e.g., a nuclear power plant control room or the cockpit of a commercial airliner) and they don't work as they should. If the use of quantum computers is limited to only the richer countries of the world, they could exacerbate an already dangerous computing divide (perhaps "chasm"?) between the democracies and the Global South. And if biotechnology is not regulated carefully, some human life-ending virus might be released; the world's recent performance with COVID-19, which was quite a mild virus in the scheme of things, does not bode well for how it would handle a really deadly one. It remains to be seen whether the democracies and the autocracies, in the course of their Cold War 2.0 entanglements, are able to address risks such as these posed by the accelerator technologies.

ARTIFICIAL INTELLIGENCE

Artificial intelligence will be central to all the protagonists involved in Cold War 2.0, for the simple reason that it will become a core technology—perhaps the core technology—of the 21st century. In addition, as an accelerator technology, AI is having and will continue to have enormous impact on a range of other innovation. For instance, in terms of biotechnology, AI is impressively speeding up new drug research, by finding molecule combinations that humans would never dream of on their own. (What’s fascinating about some of the chemical compounds proposed by AI is that scientists don’t actually know why they work, but they do.)1

Computers have already had a massive impact on literally every aspect of society. AI will amplify exponentially that influence. In the high-technology horse race that is driving Cold War 2.0, AI is certainly the stallion to keep an eye on. As a result, governments in both the autocracies and the democracies will have to learn how to ensure adequate production and deployment of AI assets, and they will certainly try to regulate and control—including in some cases block—the flow of AI assets from one Cold War 2.0 camp to the other.

It is useful to consider where AI fits into the history of innovation. For tens of thousands of years, humans limited their tool-making to devices that extended their own muscular strength. A rock scraper allowed humans to clean the fat off an animal’s hide more effectively than when people used only bare hands. A knife delivered a more effective blow when humans were hunting than if they simply punched an animal with bare fists. A shovel could dig at three or four times the pace of using fingers alone. With a lever and a pulley system, much more weight could be lifted than with bare arms. With wheels humans could transport loads that would break a bare back.

The great advance of the Industrial Revolution was the invention and improvement of self-powered machines that were hundreds of times superior to manual tools. The automobile and truck could move much faster than human legs, and even faster than the horse. The railroad, sometimes called the "iron horse," replaced the equine for long-haul routes. The steam engine could pump water without interruption, and even without wind, which was a limiting factor for the windmills that had been invented hundreds of years before. Still, none of these industrial-age technologies were intended to extend, let alone substitute for, human senses (especially sight and hearing) or human brain-derived cognitive capabilities.

That all changed with the invention of the computer, in and around the time of World War II. The first serious electronic computer in the United States, ENIAC, was built at the University of Pennsylvania for the US Army during World War II to perform gargantuan mathematical calculations, initially artillery firing tables and, soon after, calculations for the nuclear weapons program. The British, building on the code-breaking work of Alan Turing and his colleagues at Bletchley Park, also invented computing machines during that war to help decipher encoded Nazi messages. These wartime computers played important roles in helping the Allies win World War II. AI will have the same impact on Cold War 2.0.

The generations of computers from the 1940s until about thirty years ago (when the first modern AI systems appeared) assisted humans with calculations; indeed, these computers quickly surpassed the human ability to add, subtract, multiply, and divide, because they were able to perform these and many other mathematical functions millions of times faster than humans. In a sense these computers weren't smarter than humans (although they seemed to be); rather, they just performed very basic processes at blazing speed.

Then came the big breakthrough, in Toronto, some four decades ago. The methodology used to develop most of today's AI programs was unleashed when University of Toronto professor Geoffrey Hinton, working with colleagues and graduate students, published seminal research on how to teach computers to conduct "deep learning."2 Essentially, using some complex mathematics, AI software could be taught, through "machine learning," how to digest large amounts of data and then draw lessons from that data: what a "cat" looks like; how to translate English words, sentences, and paragraphs into French; and how to determine on Thursday night whether a restaurant will have enough lettuce to get through the weekend rush, or whether it should order some more on Friday morning.
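To make the idea of learning from data concrete, here is a deliberately simplified sketch of the restaurant-lettuce example, written in Python with the open-source scikit-learn library. The numbers are invented and the model is a simple statistical one, not the deep neural networks Hinton pioneered, but the learn-from-examples principle is the same.

```python
# A deliberately simplified illustration of "learning from data": predict how many
# heads of lettuce a restaurant will need for the weekend, based on past weekends.
# (Hypothetical numbers; a real deep-learning system would use neural networks and
# far more data, but the learn-from-examples principle is the same.)
from sklearn.linear_model import LinearRegression

# Each row of training data: [reservations booked by Thursday, outdoor temperature in C]
past_weekends = [[120, 18], [95, 12], [150, 24], [80, 10], [170, 27], [110, 16]]
lettuce_used = [34, 27, 43, 22, 49, 31]      # heads of lettuce actually consumed

model = LinearRegression()
model.fit(past_weekends, lettuce_used)        # "machine learning": fit patterns in the data

this_weekend = [[140, 22]]                    # Thursday-night snapshot for the coming weekend
predicted = model.predict(this_weekend)[0]
print(f"Predicted lettuce needed: {predicted:.0f} heads")
# If the prediction exceeds what is in the walk-in fridge, order more on Friday morning.
```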

The result was that starting in the 1990s, certain computers that were loaded with AI software and trained on large sets of data started to excel at performing functions that previously only humans could do. These AI computers started beating humans at games, such as chess and the Asian strategy game called Go. Then a truly amazing thing happened. The second-generation chess computer didn’t need to be “taught” by having it memorize thousands of chess games played by the old masters; rather, this chess-playing computer was merely taught the rules of the game, and, presto, it still won games against really good human chess players.

Other forms of civilian AI began to pour out of AI labs, and they insinuated themselves into many critical workflows in most advanced societies. In banking, AI powered the systems that detect credit card fraud by inferring from a cardholder's recent shopping history whether a card has been stolen. In medicine, AI systems appeared that could help a human pathologist decide whether a patient's X-ray showed a tumor that was cancerous or benign. Apple put an AI-driven Siri assistant into its smartphone, while Amazon put similar functionality into its Alexa speaker. There was also a marvelous real-time language translator on the smartphone that let the user exchange sentences with the baggage clerks at a foreign airport with an ease and efficiency that no paper-based phrase book could ever match. The age of AI was upon us.

Most recently, several new AI programs have come to market that have really caught the attention of computer users all over the world. OpenAI has released several versions of chatbots (such as ChatGPT) that are uncanny in their ability to respond to requests to generate prose of any length (as short or as long as the user would like) about any subject. The same company's DALL-E allows users to request, in plain language, a certain image ("please draw a bear eating a French crepe full of honey"), and presto, moments later the AI-generated illustration appears. Beyond the functionality of these latest AI applications is the phenomenon of just how quickly they are improving. GPT-3.5 scored in the 10th percentile on the Uniform Bar Exam (the standardized test used to license lawyers), but GPT-4, released only a few months later, took the same test and scored in the 90th percentile. This is competitive displacement on steroids.

One interesting dimension of OpenAI is that it has received billions of dollars of investment from Microsoft, much of it in the form of brute computing power from Microsoft's vast network of cloud computing centers. This is required because it takes a very significant amount of computing power to train an AI program (built on a large language model) on huge datasets, which Microsoft also has in abundance, given its ability to scour the Internet through Bing and related online services. The building blocks of those vast "computer farms," in turn, are tens of thousands of expensive, AI-oriented semiconductor chips sourced from companies like Nvidia.

Returning to what AI does (rather than how it does it), use cases for AI in the civilian domain have proliferated massively over the past decade. A gambler with a gambling problem can ask the local casino to prevent them from entering the premises. No problem. The casino has a facial recognition system at the front entrance. It "learns" the particular gambler's face, and so long as that gambler arrives without a beard or balaclava covering their face, the computer will detect them upon entry and alert human security to stop them from going into the establishment. Or, say your company has just launched a new movie and wants to know what moviegoers are saying about it on social media. No problem. There is AI software that performs "sentiment analysis" by crawling social media looking for mentions of your film. When it finds such a Facebook or Twitter post, the AI software reads the relevant sentences and, with some further nifty mathematical algorithms, can report back by summarizing in a few short paragraphs how hundreds of thousands of moviegoers feel about your new film. These are but two examples; the actual number of AI applications in the civilian world is growing exponentially. AI is well on its way to becoming the defining innovation of the 21st century, and it will play a central role in Cold War 2.0.
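As a concrete illustration of the sentiment-analysis idea, here is a minimal sketch in Python using the open-source Hugging Face transformers library and an off-the-shelf model. The example posts are invented, and a production system would crawl and aggregate posts at far greater scale before summarizing them.

```python
# A minimal sketch of "sentiment analysis" over social-media posts, using the
# open-source Hugging Face transformers library and an off-the-shelf model.
# (Illustrative only; the example posts below are invented.)
from collections import Counter
from transformers import pipeline

classifier = pipeline("sentiment-analysis")   # downloads a small pretrained model

posts = [
    "Saw the new film last night and absolutely loved every minute of it!",
    "Two hours of my life I will never get back.",
    "Gorgeous visuals, but the plot made no sense.",
]

results = classifier(posts)                   # one label plus a confidence score per post
tally = Counter(r["label"] for r in results)
print(tally)                                  # e.g., Counter({'NEGATIVE': 2, 'POSITIVE': 1})
for post, r in zip(posts, results):
    print(f"{r['label']:>8} ({r['score']:.2f})  {post}")
```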

AI WITH CHINESE CHARACTERISTICS

As recently as a few years ago, there was a lot of hype that China was becoming the leader in AI. In particular, a Chinese venture capital investor who had worked at Google, Kai-Fu Lee, returned to China and began to invest in a number of AI start-ups. Lee then wrote a book about why China would beat America in the AI race.3 Most importantly, he argued, researchers and AI companies in China had access to much more data than their counterparts in the United States, because there were so many more Internet users in China (and China had looser rules around the use of third-party data). It looked like China would finally have a technology in which it could leap ahead of the United States.

What is missing from Lee's book is a discussion of the leading application of AI technology in China, namely as a critical component in the surveillance systems used to monitor, and frankly oppress, large swaths of the population. For example, facial recognition, a core form of AI, coupled with biometric systems likewise turbocharged by AI, is used to constantly track the location and actions of millions of Chinese citizens. Similar systems are used to run the "social score" system in China, where everyone's public behavior is tracked and measured against government-imposed norms, with consequences if a citizen's actions are considered antisocial in some manner. The faces of young children in school are also monitored during certain classes to determine whether they are suitably engaged in the subject matter, such as lectures on "Xi Jinping Thought." If the facial recognition system determines they are daydreaming, then a stern talk with the child, and typically the parents, follows. And this is just early days for AI-driven surveillance technology in China.

Notwithstanding the proliferation of AI surveillance systems in China, it is now clear that Lee's prediction that China would lead the AI technology race is not working out. Instead, with the release in March 2023 of GPT-4, discussed above, it is apparent that the Americans are clearly out in front of the Chinese (and the rest of the world) when it comes to AI. This was made painfully clear to the Chinese when, a few weeks after the release of GPT-4, Baidu (China's equivalent of the Google search engine) gave a hugely anticipated demo of Ernie Bot, Baidu's response to ChatGPT. It wasn't an awful demo, as some have reported, but it was certainly not the major breakthrough, or even tactical victory, that Lee would have hoped for.4 First of all, Baidu's CEO, Robin Li, didn't give a live demo; rather, the various features and requests that Li walked through were all prerecorded videos. That doesn't show a lot of confidence in his own product.

One of the fundamental problems of Ernie Bot, like other Chinese AI programs, is that there is content a user cannot query because of Chinese censorship rules. For instance, a user cannot ask about the actions of the Beijing government in massacring hundreds of protesters in Tiananmen Square in 1989. In effect, all politically harmful or even sensitive material is banned from the Chinese chatbot. Going forward, the oversight provided by censors will prove more and more difficult for Chinese AI developers to manage, because they will have to engineer around ever stricter rules on expression. Moreover, a user will never really be able to learn what, exactly, the censorship rules have blocked out of the system.

The underwhelming response to Baidu's release of its competitor to ChatGPT is reminiscent of what happened upon the release of a large language model AI by the Chinese Institute of Computing Technology in mid-2022; reaction from the Chinese marketplace was very lukewarm, and no reports have surfaced of any third party picking it up and integrating it into their systems. As for the other Chinese Internet giants that presumably would be rushing out ChatGPT-type functionality, such as Alibaba and Tencent, they have yet to do so. Taking all these elements together, it is fair to conclude that the Chinese are quite a way behind the Americans when it comes to deployment of AI. Once again, the main problem seems to be that China has put a lot of AI eggs into a single basket, namely Robin Li's Baidu, and when that team didn't come forward on time with the requisite functionality, a large cloud settled over the entire AI domain in China.

In the United States, and certainly when all the democracies are considered in the aggregate, there are typically three, sometimes six, and often even more teams in the race. That way, if one or two disappoint with their technology, the cadence of their innovation, or the acceptance of their products in the marketplace, others remain in contention. Moreover, it is the competition between these players in the same domain that helps fuel their drive to success; in effect, the economic system of the democracies reinforces that there is a race, and that the bulk of the spoils will go to the winner as dictated by the dynamics of competitive displacement. This is precisely what is currently happening in the AI domain in America, with the result that superior AI technology is coming out of multiple labs and companies. For example, fairly soon after the release of ChatGPT by OpenAI/Microsoft, Google released Bard, its equivalent product. Bard is considered not as advanced as ChatGPT, and its less impressive showing caused Google to shake up its AI business units. In effect, competitive displacement in all its raw power is playing out in real time. This is incredibly important, because supremacy in AI will drive outsized economic benefits, as well as an important advantage in national security.

MILITARY ARTIFICIAL INTELLIGENCE

It is naive to think that a technology as powerful as AI would not be quickly scooped up by the military and be put to use in their world. AI is a fundamentally dual-use technology, and chapter 1 highlighted that dual-use inventions invariably do well in both the civilian and military domains. Indeed, AI is already firmly embedded in all manner of operations that are key to military success on the battlefield. Here is a general rundown of how and where AI is used by armed forces:

Intelligence, Surveillance, and Reconnaissance (or ISR, as it's referred to in the military). Computer vision is an important subspecialty of AI. Computers powered by AI ISR capability are much better than humans at reviewing huge volumes of photos, videos, and other images and detecting various objects in them, be they certain people, weapons systems, buildings, high-value targets, or anything else with military value. Equally, ISR systems can scan millions of text, voice, email, and other conversations in real time to distill the few nuggets of value that might make the difference in understanding an enemy's battle plan. It's not that people cannot do these things, but that mere humans cannot do them at the scale, and in the compressed time frame, that properly configured AI computers can.
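For a sense of what automated image triage looks like in practice, here is a minimal civilian-world sketch in Python using an off-the-shelf detector from the open-source torchvision library. The filename is hypothetical, and real ISR systems use custom models trained on specialized imagery; this only illustrates the detect-and-filter step.

```python
# A minimal sketch of automated image triage: run a pretrained object detector
# over an image and surface only high-confidence detections for a human analyst.
# (Purely illustrative; the filename is hypothetical and the model is a generic
# civilian detector trained on everyday object classes.)
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")  # pretrained detector
model.eval()

image = convert_image_dtype(read_image("overhead_frame_0417.png"), torch.float)
with torch.no_grad():
    detections = model([image])[0]    # dict with boxes, labels, confidence scores

# Keep only confident detections for human review.
for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
    if score > 0.8:
        print(f"object class {int(label)} at {box.tolist()} (confidence {score:.2f})")
```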

War-Fighting Systems. In a busy, sometimes frenetic battle space that includes infantry on the ground, air force assets above ground, and naval vessels just offshore, there might be 200 to 300 discrete data items to try to keep track of. Eventually the objective would be to keep track of each soldier as well, increasing the number of critical data items to 2,000 to 3,000. A commanding officer will be hard-pressed to take in all the information flowing into headquarters from thousands of sensors deployed in or surveying the battle space, especially when, in the heat of battle, time is of the essence. An AI system becomes critical because it alone is able to collect all this data, process it, organize it into manageable streams or chunks, and serve it up to the commander so that prompt decisions can be made. Increasingly, though, the AI system will also be making an ever-larger number of decisions on its own, not just collecting and processing the data points but also giving orders to soldiers and platoon commanders, because the harried, time-starved commanding officer simply cannot do so in the time allotted.
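The core software problem here is triage: ranking thousands of incoming reports so that only the most urgent reach a human. A toy sketch in Python, with invented sources, scores, and thresholds, shows the shape of it:

```python
# A toy sketch of the data-triage problem a battle-management AI addresses:
# many sensor reports arrive, and only the most urgent should be shown to a
# human commander. All names, scores, and thresholds here are invented.
from dataclasses import dataclass

@dataclass
class SensorReport:
    source: str            # e.g., "drone-17", "radar-3"
    track_id: str
    threat_score: float    # 0.0 to 1.0, as estimated by an upstream model
    seconds_to_contact: float

def triage(reports: list[SensorReport], top_n: int = 10) -> list[SensorReport]:
    """Rank reports so the most threatening and most imminent surface first."""
    return sorted(
        reports,
        key=lambda r: r.threat_score / max(r.seconds_to_contact, 1.0),
        reverse=True,
    )[:top_n]

incoming = [
    SensorReport("drone-17", "T-0041", 0.92, 45.0),
    SensorReport("radar-3", "T-0102", 0.35, 600.0),
    SensorReport("sat-2", "T-0007", 0.81, 120.0),
]
for r in triage(incoming, top_n=2):
    print(f"{r.track_id}: threat {r.threat_score:.2f}, {r.seconds_to_contact:.0f}s to contact ({r.source})")
```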

Air Defense/Missile Defense Systems. Consider the previous scenario, but now the enemy has launched several hundred cruise, hypersonic, and ballistic missiles, as well as hundreds of drones, some in swarm formations. This scenario is described in some detail at the beginning of the introduction to this book, relative to a hypothetical attack by China on Taiwan. Again, one person, or even a team of officers sitting in a C4 center on the ground, in an aircraft, or on a naval vessel, cannot effectively deal with the incoming onslaught of data-rich information. An antimissile defense system like the American Aegis (for naval vessels) or Patriot (for ground-based combat), equipped with an AI control system, can detect the incoming fires, track them, determine their targets and time to targets, and then display operational options for the commander; or, if time is just that tight, the system can be tasked to launch countermeasures, including interceptor missiles, autonomously. This ability to operate comprehensive and timely air defense is one of the signature military uses of AI.
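Why autonomy sometimes wins that argument comes down to arithmetic. The following back-of-the-envelope sketch, with invented ranges, speeds, and decision budgets, shows how quickly the window for a human decision can close:

```python
# Back-of-the-envelope timing behind an automated air-defense decision: given a
# track's range and speed, how long until impact, and is that long enough for a
# human to stay in the loop? All numbers below are invented for illustration.
def seconds_to_impact(range_km: float, speed_m_per_s: float) -> float:
    return (range_km * 1000.0) / speed_m_per_s

HUMAN_DECISION_BUDGET_S = 45.0   # assumed minimum time for a human-in-the-loop decision

tracks = {
    "subsonic cruise missile": (80.0, 240.0),        # (detection range km, speed m/s)
    "ballistic missile (terminal)": (60.0, 2000.0),
    "hypersonic glide vehicle": (100.0, 1700.0),
}

for name, (rng, spd) in tracks.items():
    t = seconds_to_impact(rng, spd)
    mode = "human approves launch" if t > HUMAN_DECISION_BUDGET_S else "autonomous engagement"
    print(f"{name:30s} impact in {t:6.1f} s -> {mode}")
# 80 km at 240 m/s leaves about 333 s; 100 km at 1,700 m/s leaves about 59 s;
# 60 km at 2,000 m/s leaves only 30 s, which is where autonomy takes over.
```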

Cybersecurity. The enemy has launched massive hacking and malware attacks on a democracy's critical banking, electrical, and government infrastructure, likely as a prelude to a kinetic attack. A total of about 1,500 key sites are being targeted. Prior to AI, such a wide-ranging cyber strike would simply overwhelm the individual IT departments of these organizations. With AI-driven cyber defense systems, the nature of the specific cyber weapons can be determined in short order, and appropriate defensive software code can be deployed online in very little time. Again, the role for humans is much diminished in the actual process of getting solutions out to the targeted sites; human controllers are largely there to ensure that the AI is working and doing its job.
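One common building block of AI-driven cyber defense is anomaly detection: a model learns what normal network behavior looks like and flags everything that deviates, so human analysts review only the outliers. A minimal sketch with the open-source scikit-learn library, using invented telemetry, follows:

```python
# A minimal sketch of AI-assisted cyber defense via anomaly detection: fit an
# unsupervised model on normal traffic, then flag outliers for human analysts.
# Feature values are invented; real systems use far richer telemetry.
from sklearn.ensemble import IsolationForest

# Each row: [requests per minute, MB sent out, failed logins, new destinations]
normal_traffic = [
    [120, 3.1, 0, 2], [135, 2.8, 1, 3], [110, 3.5, 0, 1],
    [128, 3.0, 0, 2], [140, 2.9, 1, 2], [118, 3.2, 0, 3],
]
detector = IsolationForest(contamination=0.1, random_state=0).fit(normal_traffic)

live_traffic = [
    [125, 3.0, 0, 2],        # looks routine
    [950, 48.0, 37, 60],     # looks like data exfiltration plus credential stuffing
]
for row, verdict in zip(live_traffic, detector.predict(live_traffic)):
    status = "ANOMALY: escalate to analyst" if verdict == -1 else "normal"
    print(row, "->", status)
```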

Training/Simulation. It is very difficult to train military personnel, and the necessary civilian team members as well, for the kinds of scenarios noted in the three examples above in what would have been called “live-fire” exercises. Instead, what will be critical is to have computers equipped with AI systems simulate such attacks, so that personnel can be trained on how to oversee operational AI systems that are engaged to respond to attacks. AI-based training systems will also be used in all other areas of the military where operational and munitions functions are being learned. It is simply too expensive to train soldiers on real systems, especially with live-fire ordnance. Accordingly, a key indirect determinant on the battlefield will be how well one side’s AI-based training simulator worked relative to the other side’s equivalent system.

War Gaming. Considering strategic, and even tactical, options beforehand in order to prepare armed forces for scenarios they might face in the near future has been done for decades. Now, though, by using AI to help with the gaming parameters and option analysis, much more meaningful mock-ups can be run, and far better lessons can be learned from the exercises. Moreover, these AI-powered war-gaming systems will also be running in real time during the battle. They will be updated constantly with new data from the battlefield, and their human operators will coax out of them real-time recommendations for modifying battle plans and terms of engagement. There will be, in a real sense, an “AI digital twin” to what is going on in the real world, so that commanders have another source of advice for crafting strategy and for executing on tactics.

Logistics. However high-tech war fighting may become, at the end of the day it is still critical to get the right military assets to the right place in the world (or the right place on the battlefield) at the right time. For a major armed conflict this can be a daunting exercise. An AI logistics system can track munitions depletion in real time, for example, so that as an artillery shell is fired, a new one is automatically being assembled thousands of miles away and prepared for shipment to the battlefield. Again, as with much of military AI, the system is built from a commercial application that performs essentially the same function for a large retail chain (the minute a shirt is purchased from one of its stores, the factory organizes sending a replacement shirt to the same store the next day, and so on). Moreover, after a few days of battle, the AI system can predict future munitions depletion rates, so that new supplies are ordered well in advance and there are never shortages on the front line.
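The underlying arithmetic is the classic reorder-point calculation from commercial supply chains, automated and scaled up. A toy sketch in Python, with invented figures, shows the shape of it:

```python
# A toy sketch of AI-adjacent logistics arithmetic: estimate the munitions burn
# rate from recent fighting and trigger resupply before stocks run out, allowing
# for shipping lead time. All numbers below are invented for illustration.
shells_fired_per_day = [5200, 6100, 5800, 7400]   # observed depletion, last four days
stock_on_hand = 50_000                            # shells currently at forward depots
lead_time_days = 6                                # factory-to-front shipping time
safety_margin_days = 3                            # buffer against surges in fighting

burn_rate = sum(shells_fired_per_day) / len(shells_fired_per_day)   # simple average forecast
days_of_supply = stock_on_hand / burn_rate
reorder_point_days = lead_time_days + safety_margin_days

print(f"Average burn rate: {burn_rate:,.0f} shells/day")
print(f"Days of supply remaining: {days_of_supply:.1f}")
if days_of_supply <= reorder_point_days:
    order_qty = int(burn_rate * reorder_point_days)
    print(f"Reorder now: {order_qty:,} shells (covers lead time plus safety margin)")
else:
    print("Stock sufficient; no order needed yet")
```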

Maintenance. The United States Air Force has an AI software product from C3.ai that is used to predict when a plane, or one of its key components, will require maintenance. This might not sound important, but something else learned from the Ukraine War was that about 20 percent of the equipment provided by democracies to Ukraine was actually not fit to be sent into battle. (The Russian army’s equipment suffers from lack of maintenance through corruption, as maintenance money is scooped up by crooked officers.) The armies in the democracies don’t have this problem, but providing their equipment the proper amount of maintenance is still a challenge, a task AI can certainly help with.

The war in Ukraine is teaching certain lessons about the use of AI in a military environment.5 AI, as used by reconnaissance drones as well as ISR satellites and C4 planes, is making it very difficult for soldiers (let alone command posts or supply depots) to hide anywhere on the battlefield. In turn, AI-guided precision weapons, like HIMARS, are making it much easier to destroy the command posts, supply depots, and the like that are found on the battlefield by the various AI-connected radars and other sensors. Accordingly, munitions like artillery shells must be well dispersed, and artillery batteries must be mobile. This also means that massed artillery fire in major wars is becoming a luxury of the past, and therefore artillery must itself become as smart as its rocket and missile cousins. Indeed, China is testing AI-controlled artillery to deliver more bang for the buck (literally and figuratively).6

Interestingly, while not widely appreciated, artillery was in fact critical in the early phases of the war in Ukraine; in addition to the importance of the Javelin and other anti-tank weapons noted above, two Ukrainian artillery batteries were key in stopping the Russian column advancing on Kyiv in the first few weeks of the war. Firing thousands of artillery shells a day, though, simply becomes untenable in the mid to long term (as the Ukrainian army discovered), and basic industrial capacity constraints will contribute to the drive to adopt smart artillery practices, including sparing civilians from being inadvertently hit by dumb artillery shells. Indeed, it is an open question whether one day it will be a war crime for a military to use non-smart artillery when the smart variety is available, even if it costs more per round.

CIVILIAN AI CAPACITY

Most AI breakthroughs have been coming from universities with exceptional STEM experts in the various fields relevant to AI, including psychology, computer science, mathematics, linguistics, and cognitive science. These innovations are then operationalized by tech companies large and small. Then some of these AI systems are absorbed by the large defense contractors (who now have thousands of software programmers working for them in research and development) and cloned for the military. It is, therefore, a “whole of tech community” exercise to have state-of-the-art AI functionality and performance powering weapons systems that can deter, or defeat, enemy forces. Accordingly, to get a sense of the respective strength in AI of the militaries of the autocrats and the democracies, a review of various civilian metrics is helpful, as follows.

In terms of the leading AI development teams in the world, it would be logical for them to be within the large cloud computing firms, or closely affiliated with them, as the cloud companies have the massive datasets and gargantuan computer power required to train the large language and image models necessary for state-of-the-art AI. Alibaba is the Amazon of China, but is currently being broken up into six groups, reportedly at the behest of the Chinese CCP/government. Its Cloud Intelligence Group will continue its AI efforts, but again, the CCP crackdown on big tech in China has not helped Alibaba. The few big “horizontal” players are noted below, and are all headquartered in America and China, while some of the AI “vertical” expert companies—for example, focusing on AI in the energy or health sectors, etc.—are based outside of the US and China, but mainly in other democracies like Canada and the UK. Also telling, though, is the sheer volume of the American funding of AI by private capital, which towers over all other countries:

TABLE 1—HORIZONTAL AI COMPANIES

United States (annual sales / market capitalization):

Apple: $394 billion / $2.91 trillion
Google: $279 billion / $1.56 trillion
Microsoft: $198 billion / $2.51 trillion
Facebook: $116 billion / $0.71 trillion
AWS: $80 billion / $1.29 trillion (Amazon, entire)
IBM: $60 billion / $0.12 trillion
Oracle: $42 billion / $0.33 trillion
Salesforce: $26 billion / $0.20 trillion

China (annual sales / market capitalization):

Alibaba: $129 billion / $0.229 trillion
Tencent: $82 billion / $0.429 trillion
Baidu: $18 billion / $0.043 trillion

TABLE 2—VERTICAL AI COMPANIES

National share of vertical AI companies:7

US: 40 percent
UK: 7 percent
India: 6 percent
China: 5 percent
Canada: 4 percent
Other: mainly democracies

TABLE 3—EARLY-STAGE AI COMPANIES

National share of funding of early-stage AI companies (to the end of 2022):8

US: $88 billion
China: $42 billion
UK: $8.9 billion
Israel: $4.3 billion
Canada: $3.8 billion
Japan: $2.9 billion
Germany: $2.3 billion
France: $2.1 billion
Singapore: $1.5 billion
India: $1.2 billion

It is also noteworthy that neither Russia nor other autocracies (besides China) have AI companies of any note.

MILITARY AI CAPACITY

The large military prime defense contractors play the central role in implementing AI in modern weapons systems. The list is dominated by five American defense contractors: Lockheed Martin, RTX (formerly Raytheon), Boeing, Northrop Grumman, and General Dynamics. Their role is central for two reasons. First, it is not enough simply to write the AI software code once; it must be constantly maintained and upgraded, and expert software engineering talent is required for this onerous role. Second, these large defense contractors play the crucial role of integrating the software of smaller third-party contractors into specific weapons systems. For example, Lockheed Martin is developing a digital battle-management system in a joint venture with Nvidia. In such a relationship, Nvidia brings deep technical expertise in graphics and AI technologies, while Lockheed Martin brings the invaluable experience of knowing how all the protocols and interfaces work to properly connect the new system to the multiple sensors (from aircraft, radars, satellites, naval vessels, ground units, etc.) and other data systems that all need to be seamlessly integrated into a single, high-performance system.

It is not surprising, therefore, that out of a total workforce of 115,000 employees at Lockheed Martin, there are 60,000 engineers and scientists, of which about 12,000 are software engineers and data scientists, many of whom specialize in AI. At RTX, the figures are even more software-oriented: 195,000 staff in total, of whom about 60,000 are engineers, and roughly 75 percent of those are software engineers, or about 45,000 software specialists, again including a large number of AI experts. In terms of AI specifically, here is a sampling of recent job postings at the top four US defense contractors that highlight the AI-related skill sets currently being recruited for.

Lockheed Martin: In June 2023 it had 220 openings for software-related jobs in the US; it was also recruiting on LinkedIn for a chief scientist, artificial intelligence and machine learning, whose posting made a particular callout for a project on "Neuro-Symbolic Reasoning applied to sense making in autonomy and battle management." It was recruiting for an AI/machine learning software engineer to work on the Trident II D5 fleet ballistic missile, and for others to work on sensor fusion/AI algorithm design and system integration; on the Cognitive and Advanced Strategic Solutions (CASS) team within Lockheed Martin Space; on AI/ML in computer vision; on AI in the Unmanned Aircraft Systems (drones) product line; on one of the world's first quantum computers; on a machine-aided vision program with the Sensors and Spectrum Warfare team; and on AI/ML in large-scale radar products.

RTX (formerly Raytheon): Job openings for staff to work on dependable AI-enabled intelligent systems; to apply machine learning to optimization problems; to drive adoption of AI at Collins Aerospace; and to integrate AI into the technology stack at Collins Aerospace.

Northrop Grumman: AI engineering manager to join the Artificial Intelligence and Analytics Department of NG Mission Systems, to probe large quantities of sensor data with signal processing and advanced data analytics to exploit complex phenomenology, using AI and machine learning; and staff for radar exploitation and signal processing in the same department, developing algorithms for a novel radar capability.

Boeing: Software manager to lead a team working with AI and machine learning on aerospace, satellite, and autonomous programs, including advanced training and simulation, autonomy, cyber security, electro-optical/infrared sensing, disruptive computing; data scientist to test novel machine learning approaches to analyzing unstructured data, for Boeing Intelligence and Analytics division; AI/ML software engineers to work on next generation of AI-enabled autonomous aircraft.

CONSTRAINTS ON ARTIFICIAL INTELLIGENCE

AI systems are proving to be so powerful that in both the military and civilian domains there have been demands for constraints to be put on them. With respect to military AI systems, a number of commentators have argued for restrictions to be placed on so-called killer drones.9 These are drones that would be equipped with facial recognition AI and programmed to hunt down a particular enemy individual who has been targeted for execution. Once the drone finds this individual, it would fire some form of weapon to kill the person, all without further human intervention by the military unit ultimately responsible for the drone. In a similar vein, there has also been commentary proposing a broader prohibition on all lethal autonomous weapons; again, like killer drones, these would be military assets that release their munitions based solely on what the AI system detects as an appropriate target, without any human intervention in the ultimate firing decision.

Two important points need to be made about these proposals. First, the democracies should not implement any such rules of constraint unilaterally; that is, without the autocracies agreeing to similar constraints on their AI weapons systems. As with any weapons system, conventional, digital, nuclear, or otherwise, unilateral disarmament is a bad idea. And where mutual constraints are agreed to, very rigorous compliance, inspection, and neutral oversight mechanisms should be agreed to as well. The second point is that great care should be exercised when it comes to AI systems that are exclusively defensive in nature. Defensive weapons systems, whether they exhibit AI capabilities or not, are generally very useful for supporting the peace, while offensive systems are more problematic from the perspective of maintaining the peace. Therefore, the democracies should be very slow, and extremely careful, to agree to putting constraints on wholly defensive AI-based weapons systems, like the Patriot and Aegis ADSs highlighted at the beginning of the introduction.

In the civilian realm there have also been a number of initiatives in recent years calling for constraints on AI systems. Singapore was first out of the gate with its Model AI Governance Framework in 2019. In 2022 three members of Congress introduced an Algorithmic Accountability Act. Other democracies are considering similar legal regimes, such as Canada's Bill C-27. The European Parliament has made the most progress and is close to finalizing its position on the Artificial Intelligence Act, which it will then have to negotiate with the other European institutions. None of these initiatives has yet produced a working legal regime governing AI.

It is worth noting two overarching thoughts about these types of potential constraints on AI. First, they need to be structured and implemented in a way that doesn’t impede research and development. Thus, if a democracy goes down this path, the regulatory effort should be focused exclusively on commercial distribution of the AI product to the public or into the stream of commerce. The second point is that ideally such domestic constraints are limited to those measures that the autocracies agree to as well, so that there would be a level playing field in the sale of these systems into one another’s markets; but even if an autocracy agrees to pass a law equivalent to the one passed in a democracy, the important question remains: How is that law enforced in the autocracy when it doesn’t have a general system of the rule of law, with the required independent judges and the like? This might lead to the conclusion that the most likely alternative, when it comes to constraints on civilian AI systems, would simply be to disallow entry into the democracies of AI systems designed, developed, and manufactured in the autocracies. This would be consistent with the general approach of adding AI systems to the list of matters that are dealt with through the overall tech decoupling of the democracies and autocracies.

Relatedly, in March 2023 a group of more than 1,000 researchers and entrepreneurs in the tech sector of the democracies signed a letter (crafted by the Future of Life Institute)10 calling for a six-month moratorium on all development work on AI systems like ChatGPT, because they were worried that these kinds of AI pose risks that need to be assessed and addressed before further work is done on them. Frankly, and in a manner consistent with the comments above, such a hiatus would not be a good idea. From the perspective of Cold War 2.0, the autocracies certainly won't be halting their R&D into AI systems for any stretch of time. Nor is it realistic to think all AI researchers will voluntarily give up their jobs, and paychecks, for six months and, what, go sit on a beach for half a year?

The pause called for by the Future of Life Institute is clearly not a sensible way forward. It turns out that even Elon Musk, perhaps the highest-profile signer of the Future of Life letter, seems to be proceeding with AI efforts of his own.11 Rather, the legislatures of the relevant democracies should have their science and technology subcommittees hold hearings into AI, and specifically into the risks it poses, but even then the urge to regulate AI should be tempered with restraint, given that there is simply insufficient evidence of harm at this point to warrant a heavy-handed approach to lawmaking. There might be one day, but that day has yet to come. In the meantime, in order to ensure that they prevail against the autocracies in Cold War 2.0, the democracies should not handicap themselves by shutting down AI research unilaterally.

More recently Sam Altman, the CEO of OpenAI (the developer of ChatGPT), appeared before the US Congress and said he would welcome regulation of AI. Again, a cautionary note is in order when a leader of a specific industry calls for regulation of that very industry. Government regulation, if too heavy-handed, can quickly become a barrier to entry, especially for smaller or newer entrants into the domain. A company like OpenAI, courtesy of its partnership with Microsoft, can afford just about any regulatory burden imposed by even a well-meaning Congress. That same regulation, though, could easily stifle the next new AI start-up with a revolutionary idea, technology, or business model. Or worse, the regulation could have the effect of exiling the thwarted founders of such a start-up to another country, particularly an autocracy, presumably one offering a lot of money for the company's migration. Whenever regulation is proposed over a fairly specific industry domain, the question must always be asked: Will this measure serve to protect the public interest fairly, or will it fail its essential purpose because it shuts down the process of competitive displacement? If it is likely to do the latter, the regulation is too overbearing and needs to be refashioned, or at least pruned materially. Particularly in the Cold War 2.0 world of global rivalry, the democracies cannot afford to hamper competitive displacement in their market for software development, which is the secret sauce of so much of their success.

There is one other potential constraint on AI software worth mentioning briefly. In October 2022, the Biden administration placed hard-hitting controls on the export to China of high-performance semiconductor chips made in America, or made with equipment made in America. Washington has been considering similar restrictions on the export of AI to China, as well as a prohibition on Americans (including individuals, companies, banks, and investment funds of all types) investing in Chinese AI companies and research entities. Given the thoroughly dual-use nature of AI, the White House is having some difficulty defining with precision what types of AI such restrictions would apply to.

Clearly any such restrictions cannot apply to all AI, as that would capture thousands of different variations of AI and machine learning software, and there is the prospect of thousands more such programs coming to market in the next few years. There is a scenario where, in ten to fifteen years, virtually all software has at least some AI embedded in it. Therefore, simply defining the newly restricted category as "AI and machine learning" is much too broad. One approach, instead, is to define the prohibited software by function, which might include any AI actually designed to work in a military setting; any AI delivered to or used by an arm of the military or a research institute affiliated with the PLA; or any case where the exporter initially delivered the AI software to a civilian entity but knew, or ought to have known (given the surrounding circumstances), that its customer was going to transfer the AI software to a military entity in China. Frankly, the uncertainty attached to any such system will likely mean that few entities will risk exporting American AI software to China for any purpose. This would mean that there would be, in effect, a technology decoupling between the democracies and the autocracies in respect to AI software.