Artificial Intelligence Shock

Terrence Sejnowski & Fotis Sotiropoulos

We live in the future of Future Shock, the 1970 book Alvin Toffler wrote when computers were just beginning to affect businesses and jobs. The first personal computers became popular in the 1980s, automating word processing and spreadsheets. Commerce on the internet took off in the 1990s, transforming almost every aspect of our lives, from access to knowledge and commerce to entertainment and our social and political lives. Since Apple introduced the iPhone in 2007, mobile connectivity has become ubiquitous. The march of Moore’s Law, which predicted a doubling of computing power roughly every 18 months, has continued to this day. This exponential progress in computing has sent shock waves through society at a relentless pace.
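The scale implied by that doubling is easy to underestimate. A back-of-the-envelope calculation (an illustrative sketch, not from the article, using the 18-month doubling period cited above) makes it concrete:

```python
# Illustrative calculation: cumulative growth implied by a doubling of
# computing power every 18 months (1.5 years), per Moore's Law as cited.
def moores_law_factor(years: float, doubling_period_years: float = 1.5) -> float:
    """Growth factor after `years` of steady doubling every `doubling_period_years`."""
    return 2.0 ** (years / doubling_period_years)

# From 1970 (publication of Future Shock) to 2019: about 32.7 doublings,
# a growth factor on the order of several billion.
print(f"{moores_law_factor(2019 - 1970):.3g}")
```

Roughly a billionfold increase over the span of a single career, which is why each decade's technology feels discontinuous with the last.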

As Toffler predicted, we are now living in an age of information, one whose impact is only now becoming apparent. Artificial intelligence (AI), launched as a field in the mid-20th century, has come of age, powered by learning algorithms and fueled by the big data exploding across the internet. Deep learning has made it possible for AI to recognize speech and objects in images and to translate between languages¹. Less than a decade has passed since these breakthroughs, yet AI shock is already reverberating across the planet. Futurists predict the loss of jobs and the replacement of humans by AI, but a more likely scenario in the short run is the amplification of human cognitive abilities across a broad range of activities, including commerce, financial markets, medicine, science, and the military. What impact will all of these changes have on the world order?

President Trump recently signed an executive order designed to stimulate the development of AI technologies in the United States. Through what is being called the “American AI Initiative,” some $75 million of the Department of Defense’s annual budget would be shifted to a newly created office focusing on these technologies, as a way to keep pace with China and other countries that are making AI a national priority.

This acknowledgement of AI at the federal level may be too little, too late. Much larger investments are being made by China, South Korea, France, Canada, and many other countries. We are once again playing catch-up, just as we were following Russia’s launch of the Sputnik satellite in 1957, which led to the establishment of not only NASA to lead us into space, but also DARPA, the Defense Advanced Research Projects Agency, to develop and spur breakthrough technologies for national security. In comparison, the new executive order is akin to putting a Band-Aid on a ruptured artery.

China is already well on its way to becoming the dominant world leader in AI. Consider that two years ago it unveiled a detailed program, complete with a price tag of $150 billion in economic development, focused on artificial intelligence and big data. Immediately thereafter, two Chinese cities promised to invest $7 billion in the effort. Moreover, South Korea, France, Canada, and other countries have also beaten us to the punch by making huge investments in their own AI industries.

Just a few months ago, we had ringside seats on AI activities in China. Terry gave a keynote talk at the Snowball Summit in Beijing, an annual meeting for people working in high-level finance, economics, technology, and administration. The talk was streamed to an audience of eight million Chinese viewers interested in the prospects for AI in the investment arena. He also addressed a much smaller group of elite entrepreneurs at the Great Wall Club organized by Wen Chu, who is spearheading the national AI effort. China is mobilizing its workforce for the transition to AI at both the macroscale and the microscale. Fotis witnessed firsthand the scale of China’s investments, as a national priority, in healthcare². During his visit to the impressive building housing the Jinan International Medicine Center in Jinan City, and his subsequent meeting with that city’s mayor, he learned that the JIMC had already indexed data from five million individuals across dozens of hospitals in Shandong province. Equally impressive, the facility is designed to hold data on 50 million people.

This is just one of four national centers of its kind across China, comprising the National Human Genetic Resources Sharing Service Platform. Such massive population-scale data sets will fuel dramatic advances in AI research, enabling precision medicine and personalized therapies that will be hard for the US or any other nation to match. They will firmly position China at the forefront of the global race for AI supremacy, at first in the healthcare sector and, it is not a stretch to predict, in many other sectors as well. And the stakes are high. As Russian President Vladimir Putin succinctly put it: “The one who becomes the leader in this sphere will be the ruler of the world.”³

Indeed, the Information Revolution, and with it the era of intelligent machines, is in full swing. AI will determine the geopolitical superpowers of the future, drive economic development, and redefine the very essence of what it means to be human. Andrew McAfee, a faculty member at the MIT Sloan School of Management who has studied the impact of technology on economies for years, has said, with no exaggeration, that “digital technologies are doing to brainpower what the steam engine and related technologies did for human muscle power during the Industrial Revolution.”⁴ Our current geological era has been characterized as the Anthropocene, but the future may very well become known as the “Technocene,” when AI technologies fused with biological intelligence usher in a new phase of cultural evolution.

Yet even though humanity has lived through and successfully adjusted to several disruptive technological transitions, never before has the role of human work as the main driver of wealth creation been as unclear as it is today. During the Industrial Revolution, machines could effectively replace human muscle, but human cognition and intelligence continued to drive innovation and economic development, creating many opportunities for new jobs. The AI era could change all this, as deep learning algorithms have the potential to replicate complex human cognitive functions. Estimates suggest that AI technologies could eliminate anywhere from 10 to 50 percent of human jobs over the next few decades while also creating many new ones, just as the Industrial Revolution sent farmers to factories.

Discussion about how to “robot-proof”⁵ all these jobs is not enough; we need less talk and more real and tangible solutions, both top down and bottom up. Just as China is investing billions in big data and AI at the national level, with a clear vision of becoming the global AI leader by 2030⁶, so too must the US if we hope even to compete in this global race. In the 1960s, the US invested $100 billion in today’s dollars to create advanced microelectronics, materials science, and a thriving aerospace industry. Investments were also made to revamp science and engineering education. When Neil Armstrong stepped onto the moon in 1969, the average age of an engineer at NASA was 26. This investment paid off: these industries have thrived, and we are still benefiting from them. But the generation of NASA engineers who created the Space Age has since retired, and STEM education in the US has fallen far behind. We need someone with the vision and leadership of John F. Kennedy to inspire the nation today to make similar investments in the science and engineering of AI.

The solution, however, requires more than government investment. It must also incorporate innovative partnerships among government, universities, and high-tech industry. To be sure, the role that higher education institutions need to play in this new era of intelligent machines cannot be overstated. Educational paradigms must be established that prepare students to work and creatively coexist with AI systems by cultivating higher-order human cognitive abilities that machines are less likely to surpass soon: critical thinking, the ability to work with complex interconnected systems, entrepreneurship, compassion, and cross-cultural understanding. Some level of proficiency in computing will become as much a prerequisite for students across all disciplines as fluency in a language, as will innovative new degrees and programs that fuse computer science and engineering with the humanities, social sciences, law, business, and medicine.

Some universities around the country are already adapting their educational programs to this future. At Stony Brook University, for instance, we recently established the Institute for AI-Driven Discovery and Innovation⁷, with a central theme of human-machine symbiosis, based on the idea that AI technologies should amplify human intelligence instead of replacing it. Our vision is a new kind of humanities-trained student who is proficient in the basics of machine learning and data science but who also possesses the higher-level cognitive skills that machines will not be able to acquire and that a humanities education can cultivate. Innovative educational approaches will be at the heart of our efforts: new curricula and vertically integrated design projects that bring together teams of students from engineering, the humanities, and other disciplines, early in their educational journey, to tackle challenging, industry-relevant projects with an emphasis on the societal impacts of technology and entrepreneurship.

The advances in machine learning that made modern AI possible came from researchers trained at universities, but this talent pool has been drained by far better pay and computing resources at high-tech companies, which recognized much earlier than governments the impact AI would have on their businesses and the economy. The good news is that these companies are still competitive on the world stage; the bad news is that the seed corn needed to train the next generation has left our universities. We need to make universities more attractive to the best and brightest faculty. This can happen if new AI centers and programs are established, such as the Institute for AI-Driven Discovery and Innovation at Stony Brook, which can compete for state and federal research support to build computing infrastructure and retain faculty.

Six decades ago, when Russia launched the Sputnik satellite, America responded at full throttle, legislating the National Defense Education Act, which provided significant resources to universities for science and math education. The global AI race is the 21st century’s Sputnik moment, only with much higher stakes. A new National Defense Digital Education Act, along with creative industry-academia-government partnerships and innovative educational paradigms implemented at scale, will all need to be part of this national strategy if the US is to compete effectively against the AI superpowers emerging in China and elsewhere.

Terrence J. Sejnowski holds the Francis Crick Chair at the Salk Institute for Biological Studies and is a Distinguished Professor at the University of California, San Diego. He was a member of the advisory committee for the Obama administration’s BRAIN Initiative and is president of the Neural Information Processing Systems (NIPS) Foundation. He has published 12 books, including (with Patricia Churchland) The Computational Brain (25th Anniversary Edition, MIT Press). His most recent book is The Deep Learning Revolution (MIT Press, 2018).

Fotis Sotiropoulos, PhD, serves as Dean of the College of Engineering and Applied Sciences at Stony Brook University. He is leading university-wide initiatives in Engineering-Driven Medicine and Artificial Intelligence and is at the forefront of efforts to expand diversity and invent the future of engineering education in the era of exponential technologies. Sotiropoulos is State University of New York Distinguished Professor and Fellow of the American Physical Society and the American Society of Mechanical Engineers. He has authored over 200 peer-reviewed journal papers and book chapters in simulation-based fluid mechanics for wind energy, river hydraulics, aquatic biology, and cardiovascular bioengineering. See full bio here: https://www.stonybrook.edu/commcms/ceas/about/office-of-the-dean/about-the-dean

1. Sejnowski, T. J., The Deep Learning Revolution, Cambridge, MA: MIT Press (2018).

2. https://www.reuters.com/article/us-china-medtech-breakingviews/breakingviews-really-big-data-gives-china-medical-ai-edge-idUSKBN1K808W

3. https://www.cnbc.com/2017/09/04/putin-leader-in-artificial-intelligence-will-rule-world.html

4. https://hbr.org/2015/06/the-great-decoupling

5. http://robot-proof.com/#title

6. https://multimedia.scmp.com/news/china/article/2166148/china-2025-artificial-intelligence/index.html

7. https://news.stonybrook.edu/university/leading-the-future-of-ai-university-marries-human-ingenuity-with-machines/?spotlight=hero
