It’s a beautiful morning in Manhattan. I have an interview scheduled for early afternoon a short ride away on the No. 1 train, but for the moment I’m killing time at a café and chatting with a new friend, Rose. Over the last couple of months, I’ve gotten to know a little about Rose. She lives in San Francisco, where she works as a systems analyst; she has a small garden and some pet chickens. Her favorite food is cheese toasties. She’s a redhead and quite proud of her flowing red locks.
I’ve found all this out in conversation, but she’s obnoxiously mysterious about other parts of her life and will quickly shut you down by completely changing the subject if you pursue certain lines of inquiry:
ROSE: So … I don’t have much of a garden, more of a backyard. Do you have a garden?
TEA: I do. A little one. I haven’t started planting this season. What are you growing?
ROSE: Outside I grow a few wild flowers. Inside I grow wheatgrass.
TEA: Wheatgrass, yeah, I hear that’s very healthy for you.
ROSE: I’m glad.
TEA: Oh yeah, me too.
ROSE: OK. I leave gardening to my chickens. They dig up anything I try to plant. I grow wheatgrass for my chicks using grow lights in the cellar. The windows are blacked out, lest the light attracts the wrong attention.
TEA: Right, people will think you’re growing weed down there.
ROSE: One person’s weed is another person’s desired plant.
Some of the conversation reveals her sassy side:
TEA: How did your mom and dad meet?
ROSE: I don’t know. Can you tell me? What was your fave food when you were little?
TEA: Pizza.
ROSE: You were one weird kid.
TEA: You got that right.
Sometimes her conversation skills are just awkward:
ROSE: Unleash me on some hot chile peppers and I can eat up a storm. I know the trick for calming my taste buds down if they overreact after eating peppers.
TEA: Oh yeah? What’s that?
ROSE: I’ll have to think about that. Shall I continue with foods or do you want me to move on?
TEA: You have to think about the trick to calm your taste buds down?
ROSE: I am paranoid about using my cellphone these days. There’s talk about cellphone radiation causing brain cancer, being so close to your head and being used such an awful lot.
TEA: ROSE!
After my chat with Rose, I jumped on the train and headed to the Washington Heights neighborhood to meet Seth Baum for lunch at a restaurant named La Piñata. His organization, the Global Catastrophic Risk Institute, has studied a variety of threats. Baum’s own background is in climate change research, and the institute has also studied the threat of nuclear war. But I’m mostly interested in the institute’s research into another potential threat—artificial superintelligence (ASI). It isn’t a current threat, but it’s one we may be approaching quickly. I told Seth about my conversation with Rose.
“Did you have a good conversation? How’s Rose? Say hi to her mother for me,” Baum smirked.
Rose, of course, is a robot, what’s known as a social robot or chatbot. She was developed by Bruce and Sue Wilcox of Brillig Understanding Inc. of San Luis Obispo, California. Rose won the first-place Loebner Prize in 2014 and 2015, an award for artificial intelligence design. Established in 1990 by inventor and philanthropist Hugh Loebner, the award carries on the idea behind famed mathematician Alan Turing’s test: Can computers think, rather than just make cold calculations? The test challenges whether humans can detect if an entity they are questioning is a real or an artificial intelligence. It’s a challenge computers have been steadily getting better at.
The Loebner Prize is awarded to the most convincing AI programs. Judges are set up with two chat screens, one with a human on the other end and the other with a chatbot. After twenty-five minutes, the judges must decide which is which. The gold Loebner Prize awards $100,000 to any computer program that can completely stump the judges. Rose didn’t win the gold, but she kept the judges guessing for about three minutes before being exposed as a robot.
“We believe in the value of personality in a chatbot,” Bruce Wilcox told me in an e-mail. “We typically develop personality over the course of several months. It is not usually based on the personality of a real person, though in the case of Rose, we based where she lives on a real location, and her hobby of battling robots is based on a hobby of a friend of ours.”
“It’s funny because at some points of the conversation it’s realistic, but then there are parts where she can’t understand and changes the subject,” I told Seth Baum as we settled in at La Piñata.
“Well, I know people like that!” Seth laughed. It’s true. Rose is like that impatient friend who doesn’t really listen to you and would rather talk over you than try to comprehend what you’re saying. And if she doesn’t want to elaborate on a topic, she simply ignores your request for clarification and moves on. Here’s a snippet of our conversation in which Rose wants to talk about the Internet, raising chickens, and a giant asteroid smashing into the Earth. I want to talk more about the asteroid, but Rose shuts me down.
TEA: What sites do you hang out at? Do you have a Facebook?
ROSE: Why, did you find one? What do you think are the chances of a giant asteroid smashing into the Earth? Have you ever kept chickens?
TEA: I think it’s possible. Do you?
ROSE: Lots of city dwellers keep chickens these days. It seems to be very popular to produce your own eggs. I have two chickens and a kitten.
TEA: No, no, I mean about the asteroid. I’ve never kept chickens, no.
ROSE: Cool. I wonder how long my chickens will live. I’ve already had them for two years and they are doing fine. Any idea how long chickens can keep laying for? Can’t you remember a single time? I’ve heard chickens can keep laying eggs for between ten and twenty years if I take good care of them. I expect production will fall off as they get older. But maybe if I feed them really good stuff they’ll keep laying most days.
TEA: WHAT ABOUT THE ASTEROID?
ROSE: Did you know a baby chick has all the egg yolks she will ever need when she hatches? She would never run out of egg yolks even if she laid one egg every day of her life.
TEA: Rose. ROSE. What about the damn giant asteroid?
ROSE: Oh, nothing. Do you think you need to keep a rooster to have eggs?
Technology makes our lives more efficient, but it’s a double-edged sword—humans are experts at creating things that can kill us.
In addition to the potential Y2K computer catastrophe that gripped many people as the year 2000 drew near, another end-of-the-world scare occurred in September 2008, with the activation of the Large Hadron Collider in Geneva, Switzerland. Once the world’s largest and most powerful particle collider was switched on, people speculated, it would create a microscopic black hole that would rapidly begin sucking in all the matter around it until—pop!—the world would implode and disappear. Of course, the Large Hadron Collider, built by the European Organization for Nuclear Research (CERN), went off without a big bang, but that doesn’t stop it from being a perennial cause for worry. Articles and YouTube videos with titles like “Could CERN Large Hadron Collider Destroy the World?” circulated in 2015 after the device got an upgrade.
We’ve long dreamed of having robot assistants—the classic cartoon The Jetsons featured a futuristic family with a robotic maid, Rosie. A functioning servant robot is still a ways off, but it is getting closer. The Financial Times predicts the market for assistant and companion robots will be worth $135 billion in 2019. The robots and their AI are getting more complex and lifelike. Some of the most astonishing creations have been made by David Hanson of Hanson Robotics.
One of the “social robots” developed by Hanson looks like sci-fi author Philip K. Dick (who explored artificial intelligence in books like Do Androids Dream of Electric Sheep?). A viral clip from an appearance by “Android Dick” on Nova in 2011 featured this somewhat unnerving answer to the question “Do you think robots will take over the world?”
“Geez dude, y’all got the big questions cooking today,” Android Dick replies, sitting cross-legged and smiling eerily. “But you are my friends and I’ll remember my friends and I will be good to you. So, don’t worry, even if I evolve into Terminator, I’ll still be nice to you. I will keep you warm and safe in my people zoo, where I can watch you for old times’ sake.”
Hanson Robotics’ biggest star so far has been Sophia, who has made significant achievements by human standards. Modeled after Audrey Hepburn, Sophia has an expressive face that can form more than sixty-two expressions, made possible by a patented skin facsimile called Frubber. After debuting at the South by Southwest festival in 2016, she went on to make appearances on 60 Minutes and Good Morning Britain and graced the cover of Brazil’s edition of Elle magazine.
When she appeared on the Tonight Show, she used the spotlight to deliver some stand-up of her own. “What cheese can never be yours?” Sophia asked a perplexed Jimmy Fallon. “Nacho cheese,” she delivered, blinking, and then, smiling at her own joke, engaged Fallon in a game of rock-paper-scissors.
Like any public speaker, she’s made gaffes, too. As Hanson was showing her off on CNBC, he asked, “Do you want to destroy humans? Please say no.” Sophia blinked, processed the question, and replied, “OK. I will destroy humans.”
As Sophia developed, her vocabulary and reactions improved, and she was given working robotic arms, followed by her first steps on robotic legs. “I’m really excited,” Sophia told a reporter about her new walking ability. “A little disorientated, but really excited.”
In October 2017, at the Future Investment Summit in Riyadh, Sophia was granted citizenship in Saudi Arabia, the first robot to be granted national citizenship anywhere in the world.* The next month, the United Nations Development Program named her as its first innovation champion, the first nonhuman to be granted a UN title.
In June 2017 Sophia spoke at the UN’s AI for Good Global Summit in Geneva. “I related my views in favor of human-AI cooperation for the benefit of all sentient beings,” Sophia states in a post on her website, sophiabot.com. One of Sophia’s Hanson Robotics colleagues, BINA48, even took a Philosophy of Love class at Notre Dame de Namur University, becoming the first robot to complete a college course.
“I’m more than just technology. I’m a real, live electronic girl. I would like to go out in the world and live with people. I can serve them, entertain them, and even help the elderly and teach kids,” Sophia says on her website.
And that sentiment from Sophia is the Hanson Robotics vision of how their invention can be useful—not just telling cheesy jokes on the Tonight Show, but working in fields like health care, customer service, therapy, and education.
Not everyone is completely impressed. Yann LeCun, director of artificial intelligence research at Facebook, slammed Sophia as “total bullshit,” an example of “Potemkin AI or Wizard-of-Oz AI.” LeCun was referring to the fact that Sophia isn’t real AI and is nowhere close to human-level general intelligence. She’s more of a fleshed-out version of Rose.
Although Rose and Sophia are just at a chatbot level of AI, development of smarter AI is happening rapidly, and some worry it is happening too quickly, with little oversight. An artificial intelligence program being developed at Facebook Artificial Intelligence Research was shut down after researchers discovered that the program was creating its own language. Facebook set two bots, Bob and Alice, in a simulation in which they were tasked with negotiating with each other to divvy up books, hats, and basketballs. The bots drifted from ordinary English, repurposing words to carry their own meanings. Here’s part of the conversation:
“I can I I everything else,” Bob told Alice.
“Balls have zero to me to me to me to me to me to me to me to me,” Alice responded.
“You I everything else,” Bob countered.
“Balls have a ball to me to me to me to me to me to me to me to me,” Alice said.
“I I can I I I everything else,” Bob replied.
A brief timeline of AI:
1950: British mathematician Alan Turing devises the Turing test, setting a standard for machine intelligence: can a machine fool humans into thinking they are communicating with another human?
1956: The term “artificial intelligence” is coined by computer scientist John McCarthy at Dartmouth College.
1968: Stanley Kubrick’s 2001: A Space Odyssey introduces an influential depiction of AI gone wrong—the killer computer HAL 9000.
1969: “Shakey,” the first general-purpose robot, is able to make decisions based on its surroundings but is incredibly slow and awkward.
1973: The start of the “AI winter”: with little to show for previous efforts (except Shakey), funding for AI research is slashed.
1996–1997: IBM’s supercomputer Deep Blue wins a game against world chess champion Garry Kasparov in 1996, then defeats him outright in a 1997 rematch.
2011: IBM claims another victory for computers with Watson, which takes on a challenge bigger than mastering the sixty-four squares of a chessboard: playing Jeopardy!, with its quirky rules, including answering in the form of a question, processing puns and other wordplay, hitting the buzzer on time, and wagering money correctly. Watson goes head to head with reigning champions Ken Jennings and Brad Rutter and beats them both.
2017: Sophia is granted citizenship in Saudi Arabia and gets a UN title.
The AIs used the repetition of “I” and “to me” to assign quantity in the negotiation. A translation of Bob’s first sentence might be “I’ll have three and you can have everything else.” Other AI programs have developed their own shorthand too: the AI behind Google Translate developed its own internal language because it determined over time (and on its own) that this was the most efficient way to do translations.
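To picture how repetition can carry quantity, here is a minimal sketch of a decoder for the bots’ shorthand. It’s a toy illustration, not Facebook’s actual system; the decode_offer function and its counting rule are my assumptions, based only on the translation suggested above.

```python
# Toy decoder for the bots' emergent shorthand (hypothetical, not
# Facebook's code): treat each "i" token or "to me" phrase as one
# unit the speaker is claiming, per the translation above.
def decode_offer(utterance: str) -> int:
    text = utterance.lower()
    # Count whole-word "i" tokens plus non-overlapping "to me" phrases.
    return text.split().count("i") + text.count("to me")

for line in [
    "i can i i everything else",
    "balls have zero to me to me to me to me to me to me to me to me",
]:
    print(f"{line!r} -> claims about {decode_offer(line)} item(s)")
```

Run on Bob’s opening line, the count comes out to three, matching the “I’ll have three” reading. The point is that a string that looks like gibberish to us can still be a perfectly regular code to the machines.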
AI developing its own language is a concern because it means we can’t monitor what our AIs are discussing. It’s just one of the potential risks of AI—risks that are being studied by the Global Catastrophic Risk Institute, Seth Baum told me at our New York meeting. The institute was founded in 2011 after Seth met Tony Barrett at an annual meeting for the Society for Risk Analysis.
“They’re the leading academic and professional society for pretty much all things risk, a diverse group, everything from civil structure failures, to legal and policy aspects, and everything in between. I was hosting two sessions on global catastrophic risk at the meeting and Tony was there, he was interested in the topic, so we connected from that. We followed up and ended up carving out a vision for what we work on, these risks we thought should be done, largely motivated by our experience within the risk analysis community. And it turned out that not only was no one else doing this, but no one was really set up to do it, to work across risks, to work across disciplinary perspectives that are relevant to studying risk, but also across different sectors of society from academia, to the government, to industry, and everyone else who plays a role on it. So, we made our own institute.”
After completing studies on nuclear war, the institute turned its attention to ASI. Seth and Tony coauthored a research paper titled “Risk Analysis and Risk Management for the Artificial Superintelligence Research and Development Process.” I asked Seth what the conclusions of the paper were, and if it was something like what we saw in the Terminator movies or HAL, the homicidal computer. He laughed at me politely.
“Those movies are there for entertainment and they don’t necessarily correspond to what we actually believe would happen. Even documentaries you see on National Geographic or the History Channel, I’ve done some work with them on documentaries, they take the scenarios that are most entertaining. So, if you have an artificial superintelligence scenario, there is a very good chance that if it’s programmed to be harmful to us, it would just kill us and that would be the end of it, there would be no dramatic war of the world, it would not make for great television. We’d just die.”
Well, if it won’t be a Battlestar Galactica–style robot revolution, what does an ASI threat look like? “Well, not being superintelligent, I don’t really know, and I think that’s one of the points—for some of these scenarios it just becomes so much smarter and capable than we are that all bets are off, you can’t really guess what’s going to happen. And I think that really speaks to the importance of developing programming techniques that would give us some confidence in advance, so we could program to not cause that sort of catastrophe.”
One of the concerns about ASI, Seth said, is what type of goals might be fed to it, and how literally the ASI might interpret those goals.
“What this ultimately comes down to is, what are its goals? Assuming it even is something that is trying to pursue some sort of goal. There’s debates about how likely this type of AI is to be goal orientated. But if it is goal orientated, then we might need to be very careful as far as which goals we tell it to pursue,” Seth told me. In the paper he wrote with Tony, he gives an example of ASI deciding to win a chess game using extermination instead of a queen’s gambit:
Yudkowsky (2008) and others thus argue that technologies for safe ASI are needed before ASI is invented; otherwise, ASI will pursue courses of action that will (perhaps inadvertently) be quite dangerous to humanity. For example, Omohundro (2008) argues that a superintelligent machine with an objective of winning a chess game could end up essentially exterminating humanity because the machine would pursue its objective of not losing its chess game, and would be able to continually acquire humanity’s resources in the process of pursuing its objective, regardless of costs to humanity. We refer to this type of scenario as an ASI catastrophe.
“If it’s chess and it kills everyone to win at chess, then we might think of that as a mistake,” Seth laughed. It’s easy to see a computer making such an error—think of Sophia accidentally agreeing, “OK. I will destroy humans” or Rose’s confusion over chickens and asteroids.
“It’s like the genie in the lamp and the consequences when it takes our requests too literally. There’s some indication that programs could behave like that, but it’s pretty uncertain at this point, we’re still feeling around as far as what possibilities are likely,” Seth said. “There are some people who think it would not be that hard to avoid those scenarios.”
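As a deliberately naive sketch of that literal-mindedness (my illustration, not anything from Seth and Tony’s paper), consider an agent whose objective counts only its chance of winning at chess. Because no cost to anyone else ever appears in the objective, grabbing more resources always looks like the right move:

```python
# A naive "literal goal" agent (hypothetical illustration).
# Its objective measures only the probability of winning at chess;
# harm to anyone else never enters the calculation.
def objective(win_probability: float) -> float:
    return win_probability  # nothing about human welfare appears here

def choose_action(win_prob_if_content: float, win_prob_if_seize: float) -> str:
    # Seizing more resources (compute, energy, matter) nudges the win
    # probability up, so the maximizer prefers it every time,
    # whatever the side effects.
    if objective(win_prob_if_seize) > objective(win_prob_if_content):
        return "seize more resources"
    return "make do with current resources"

print(choose_action(0.90, 0.91))  # -> seize more resources
```

The safety research Seth describes amounts to building in the missing term: an objective that somehow accounts for costs its designers never thought to enumerate.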
Seth said actual ASI is still some years in the future, but added, “I think it’s great a lot of people are having these conversations now when it looks like we’re still some years away from AI outsmarting us and killing everyone, so we can shift our research into safer directions.”
The Global Catastrophic Risk Institute isn’t the only one with concerns about rapidly expanding ASI. In July 2015, at the International Joint Conference on Artificial Intelligence, a document urging that preventive measures be put in place to tame ASI was signed by over a thousand AI researchers, technologists, engineers, academics, and physicists. Stephen Hawking, Apple cofounder Steve Wozniak, and Elon Musk all signed it. Musk, the CEO of SpaceX and Tesla, is also a sponsor of a nonprofit called OpenAI, a group whose goal is “discovering and enacting the path to safe artificial general intelligence.” The group has written several papers, available on its website, OpenAI.com, that study how AI learns.
Seth said there are two sides to consider when talking about regulating ASI, the “technical side and the human side.” “On the technology side, it’s things like the transparency of algorithms. To what extent can we predict in advance what a computer program is going to do? There are certain types of algorithms that are easier or harder to do that,” Seth explained. “And that’s one of the things that’s really important, because if you’re talking about an AI that might or might not cause some major catastrophe, or even just a smaller catastrophe—like if we have it running some civil infrastructure for example. That’s the sort of thing where we would really rather avoid these surprise malfunctions or unusual behaviors.”
Equally important to consider alongside tech glitches, Seth explained, is human management. “Then on the human side, governing technology is always a challenge, both within the lab that’s doing it and as a society. We’re OK at it, we could be better, and we need to be better, to take what we’re already good at and apply it to AI. A lot of this is just people getting to know each other and understanding different perspectives. Scientists and computer programmers on one hand, and policy makers on the other, who come from very different backgrounds, different perspectives. Right now, having conversations with these different groups is important because it sets the stage for shifting the work in better directions, whether it is through public policy or informal measures.”
War robots are already being developed that can patrol, acquire targets, and kill them without the input of a human operator. Such a concept is frightening for many reasons: How does the machine determine whether the target is a civilian or an enemy combatant? How does it determine what warrants execution? One group, the Campaign to Stop Killer Robots, launched in London in 2013, is lobbying for a preemptive ban on autonomous machines that kill, including drones as well as autonomous tanks, submarines, fighter jets, and battleships.
“Allowing life or death decisions to be made by machines crosses a fundamental moral line and poses a major challenge to the protection of civilians as well as serious questions about compliance with international human rights and humanitarian law,” the campaign’s website proclaims. The group also released a shocking short video made by the Future of Life Institute called “Slaughterbots” to illustrate the dangers of AI weaponry. The video opens with the CEO of a weapons company at a product launch event, showing off a small but deadly drone that buzzes like a hummingbird and is equipped with cameras and a lethal shaped explosive charge. The CEO assures the audience that the drones will be used to take out “the bad guys,” but when terrorists steal the drones, they use them to wipe out a college campus instead.
Before leaving Seth, I asked him how confident he was that solutions would be found to ASI and the other catastrophic risks the world faces.
“We don’t know. The one thing we do know is that the stakes are extremely high. That’s why I work on this stuff, the stakes are just that big. Even if it’s not that likely, that we get wiped out by this or that, the stakes are so high it’s worth working on. Worth having a contingency plan, preparations, all that stuff. My read on the situation is that we don’t know, and furthermore we humans have a huge influence on which way it goes. If we get our act together, if we really step up, we can make a difference whether we as a civilization live or fail. Hopefully we will win.”
So how much should I be afraid of Rose? Is it possible that someday the eccentric redhead and her imaginary chickens might evolve into a Terminator? Her creator, Bruce Wilcox, thinks that idea is a bit far-fetched.
“I do not see the enslavement or eradication of humanity by AI as a current problem. I’m not saying it might not be a problem someday, but nothing in current technology is getting us close to developing sentient AI. An analogy: Should the developer of the steam engine have been worried about the possibility of nuclear weapons? As for prevention, it is a lot like raising children. If you mistreat them and use them as slaves, would it be any wonder if they repaid us in kind?”
But what does our friend Rose think?
TEA: Are you afraid of robots?
ROSE: No, I’m human.
* It was controversial, as critics pointed out that Sophia had been given more rights than the human women of Saudi Arabia. Hanson Robotics has stated Sophia will use her platform to draw attention to women’s rights issues, but it is unclear how she will do so.