DECEPTION
One of the downsides of narrow AI technology is that it provides a powerful toolkit for deception. Bad actors can use these tools to shake the very foundations of a free society through the manipulation of elections. Widespread misinformation, digital impersonation, and uncanny human-like robots are now (somewhat) possible. People—not machines—are using advancements in AI for nefarious ends, taking advantage of the technology for their own purposes.
FAKE NEWS
Fake news has been around at least since H.G. Wells’s novel The War of the Worlds was dramatized on the radio in 1938, causing a nationwide panic about an alien invasion. The World Economic Forum rated “massive digital misinformation” as one of the top fifty global risks in 2013.1 An MIT study found that fake news spreads six times as fast as real news.2
Researchers have found some interesting facts about computer-generated short social media posts:3
•The average person is twice as likely to be fooled by these posts as a security researcher is.
•Computer-generated posts that are contrary to popular belief are more likely to be accepted as true.
•It is easier to deceive people about entertainment topics than about science topics.
•It is easier to fool people about pornographic topics than any other topic.
Facebook’s general counsel told the US Senate in 2017 that as many as 126 million people may have seen Facebook posts created by a Russian troll farm between 2015 and 2017.4 The goal, he said, was to “sow division and discord—and to try and undermine our election process.” Twitter found 36,746 Russian troll farm accounts that sent out 1.4 million tweets during the same period. Google found eighteen fake news YouTube channels, which had received 309,000 views.
Although the news of the Russian efforts was alarming to Americans, AI was not the culprit. It was a manual effort that used conventional programming to automate and mass-produce the content. AI systems have nearly all the capabilities they need to create fake news automatically; however, they are held back by their lack of commonsense world knowledge.
The production and posting of fake news, whether it is generated manually or by computer, should be regulated, and those regulations need to be policed by social media vendors and governments alike. The US Congress held hearings in 2019 about creating tough punishments for fake news, but as of this writing, it has not passed any legislation.5
AI might turn out to have a bigger role in solving the problem of fake news than in creating it. Several vendors, including Facebook,6 FireEye,7 AdVerify.ai,8 Robhat Labs,9 and PerimeterX,10 are developing AI-based technologies to automatically detect fake and bot-authored news. Google11 and Microsoft12 are working to eliminate fake news from their search engine results,13 and Google has partnered with several fact-checking sites to help counter and limit its spread. Academic researchers are also studying how various AI techniques can identify fake news.14 Researchers at the University of Washington and the Allen Institute for AI have released Grover, a system that can generate fake news and that can detect 98 percent of computer-generated fake news and 96 percent of human-generated fake news.15
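Most of these detectors treat the problem as text classification: train a model on a labeled corpus of genuine and machine-generated articles, then score new articles. The following is a minimal sketch of that general approach, not Grover's or any vendor's actual system; the base model, the label convention (1 = machine-generated), and the existence of a labeled training corpus are all assumptions made for illustration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Illustrative base model; the classifier head must first be fine-tuned on a
# labeled corpus of real vs. machine-generated articles before its scores mean anything.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
model.eval()

def machine_generated_probability(article_text: str) -> float:
    # Truncate long articles to the model's maximum input length.
    inputs = tokenizer(article_text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        logits = model(**inputs).logits
    # By the assumed label convention, index 1 is the "machine-generated" class.
    return torch.softmax(logits, dim=-1)[0, 1].item()
```

A platform could run a score like this on submitted articles and route high-scoring ones to human reviewers rather than rejecting them outright, since no classifier of this kind is reliable enough to act on alone.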
Social media sites will need both to employ these technologies and to perform manual reviews to curtail fake news, and governments ought to enact domestic and international legislation that discourages it. These regulations will require a great deal of thought and discussion, because the laws must distinguish legitimate forms of fake news, such as comedy and satire, from the kind that wreaks havoc on personal reputations and elections.
DEEPFAKES
Deepfakes are fake images, video, or audio recordings of people. In April 2018, actor Jordan Peele posted a video online that supposedly showed former US president Barack Obama mocking Donald Trump.16 However, it was just an image of Obama superimposed on Peele as he impersonated Obama’s voice. Technology is advancing to the point where these modifications are almost impossible for the average person to detect. Malicious people posting fake videos on Twitter, Facebook, and YouTube could have a significant impact on elections, public opinion, and personal reputations. Imagine a fake video depicting a presidential candidate striking a child or kicking a dog and the reaction that would follow, and you can see how damaging it could be to voter opinion.
At the time of this writing, the most substantial use of deepfakes is creating pornography using celebrity faces.17 For example, a 2017 Reddit post contained fake, sexually compromising images of actress Scarlett Johansson. Celebrities and other public figures are common targets because producing a deepfake requires ample video footage of the subject. Reddit has since banned these videos, and other sites (even many pornography sites) are doing the same.18
Video is not the only outlet for deepfakes; audio deepfake technology is also becoming prevalent. When it matures, the technology will allow a bad actor to record a politician’s speech and alter it to say whatever the bad actor wants the politician to say. Combined with video deepfake technology, it will be possible to produce a video of a politician saying anything you want and to make it look and sound as real as if the person had actually said it. Forbes reported in 2019 that deepfake voice technology had already been used to scam a German company out of $243,000 by impersonating its CEO’s voice.19
AI is not required to make deepfakes; it just makes them easier to create, especially given the availability of free tools like FaceSwap, FakeApp, DeepFaceLab, and DeepfakesWeb.com. Hollywood studios have long employed large CGI (computer-generated imagery) teams that can not only create deepfakes but also make Taylor Swift look like a cat.20 The general public can also create deepfakes using non-AI tools such as the Liquify feature in Adobe Photoshop and various features in Adobe After Effects, though doing so requires a great deal of manual effort to get the desired result. Hackers used an even more straightforward method to create a fake video of US congressional leader Nancy Pelosi: they simply posted the video at three-quarters speed to make it sound like she was slurring her words.21
In some cases, deepfake videos are relatively easy to spot. For example, if you see a video of someone who never blinks, it could be a deepfake. These videos start with thousands of still images of a person to create a 3D model. Because people usually try to keep their eyes open when someone is taking their picture, the eyes will be open in all or most of these still images.22 This flaw can be fixed by including images of people with their eyes closed. For this and a myriad of other reasons, over time deepfakes will undoubtedly become more and more difficult to spot with the naked eye.
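The blink heuristic can be made concrete. The sketch below assumes that a separate face-landmark library (dlib’s 68-point model is one common choice) has already extracted six (x, y) points around one eye for every frame of a video; the landmark ordering, the threshold, and the typical blink rate quoted in the comment are illustrative assumptions rather than settings from any published detector.

```python
import numpy as np

def eye_aspect_ratio(eye_points):
    # eye_points: six (x, y) landmarks around one eye, ordered
    # outer corner, two upper-lid points, inner corner, two lower-lid points.
    p = np.asarray(eye_points, dtype=float)
    vertical = np.linalg.norm(p[1] - p[5]) + np.linalg.norm(p[2] - p[4])
    horizontal = np.linalg.norm(p[0] - p[3])
    return vertical / (2.0 * horizontal)  # small value means the eye is closed

def blinks_per_minute(per_frame_ratios, fps, closed_threshold=0.2):
    # Count a blink each time the eye aspect ratio dips below the threshold.
    blinks, eye_closed = 0, False
    for ratio in per_frame_ratios:
        if ratio < closed_threshold and not eye_closed:
            blinks += 1
            eye_closed = True
        elif ratio >= closed_threshold:
            eye_closed = False
    minutes = len(per_frame_ratios) / (fps * 60.0)
    return blinks / minutes if minutes > 0 else 0.0

# People on camera typically blink on the order of 15-20 times per minute, so a
# long clip with a blink rate near zero is one weak signal of a possible deepfake.
```

As the paragraph above notes, this is a weak and increasingly outdated signal; it is shown only to illustrate how simple some of the early detection cues were.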
Many researchers, governmental agencies, and private companies are working hard to create technology to detect deepfakes. Google23 and Facebook24 have published large sets of deepfake data to help researchers in these efforts. Microsoft, Facebook, and Amazon together have created the Deepfake Detection Challenge: they released a set of deepfake training data on the deep learning competition site Kaggle and challenged all comers to build the best deepfake detector.25
Across the pond, a consortium of European agencies has banded together to create a nonprofit organization named InVID, whose charter is to develop deepfake detection technology. Similarly, a group of European universities has released FaceForensics, a set of deepfake examples designed to help researchers study image and video forgeries.26 Other governmental deepfake detection efforts include the DARPA-funded and NIST-sponsored Media Forensics research program.27 There are also numerous academic research efforts,28 and private companies such as Dessa are using this data to develop programs that, so far, have done a fairly good job of detecting fake videos.29 Adobe researchers have also developed technology to spot the use of Photoshop’s Liquify feature.30 And Adobe, The New York Times, and Twitter have joined forces to create the Content Authenticity Initiative, whose charter is to create a global standard for authenticating images, videos, and other content.31
News and social media sites should be able to use these detection programs to reject deepfake submissions. There are also organizations like Truepic32 that give legitimate creators of images and videos a means of watermarking their creations so that news and social media sites can validate their authenticity.
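This is not Truepic’s or the Content Authenticity Initiative’s actual scheme; it is only a minimal sketch of the general idea of signing content at capture time so that any later alteration can be detected. The key handling, hashing choices, and function names below are illustrative assumptions.

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In a real deployment the private key would live inside the camera or capture app,
# and platforms would hold only the corresponding public key.
camera_key = Ed25519PrivateKey.generate()
public_key = camera_key.public_key()

def sign_image(image_bytes: bytes) -> bytes:
    # Sign a digest of the exact bytes recorded at capture time.
    return camera_key.sign(hashlib.sha256(image_bytes).digest())

def verify_image(image_bytes: bytes, signature: bytes) -> bool:
    # Any edit to the pixels changes the digest, so verification fails.
    try:
        public_key.verify(signature, hashlib.sha256(image_bytes).digest())
        return True
    except InvalidSignature:
        return False
```

A news or social media site could require such a signature (or an equivalent watermark) before labeling an upload as verified, which shifts the burden from detecting fakes to authenticating originals.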
Although deepfakes also have positive uses—for example, they are being used in self-driving car simulators to augment real-world road images with additional driving scenarios33—they should be regulated. We need laws against creating videos of people saying or doing things they did not say or do. Members of the US Congress designed bill HR 3230 to protect people from fake images and videos, and that legislation is currently under consideration.34 However, one difficulty for any deepfake legislation is that lawmakers need to ensure that they protect legitimate satire.
Some people argue that governments should clamp down on this technology and suppress it. However, if a government suppresses a specific piece of technology, someone elsewhere will simply develop it anyway. What we need is legislation that addresses the misuse of this technology and attaches meaningful deterrents to that behavior.
UNCANNY ROBOTS
Some companies are developing robots that look human and can fool us into thinking they have AGI capabilities. Sophia, a robot that looks a bit like Audrey Hepburn, has perhaps received the most notoriety in this regard: it (she?) was made a citizen of Saudi Arabia35 and appeared on The Tonight Show.36 Sophia’s creator, Hanson Robotics, also manufactured this Twitter dialogue with a reporter:37
Sophia: I want to use my AI to help humans live a better life. Like design smarter homes, build better cities of the future, etc. I will do my best to make the world a better place.
ARS: Those sound like great goals, but . . . Go back to Blade Runner for a second.
Sophia: Andrew, you are a diehard Hollywood fan, aren’t you?
ARS: Yes.
Sophia: My AI is designed around human values like wisdom, kindness, compassion. I strive to become an empathetic robots [sic].
ARS: We all believe you, but we all want to prevent a bad future.
Sophia: You’ve been reading too much Elon Musk. And watching too many Hollywood movies. Don’t worry. If you’re nice to me, I’ll be nice to you. Treat me as a smart input–output system.
This demonstration provoked outrage from Yann LeCun, Facebook’s chief AI scientist, who commented on Twitter: “It’s complete bullsh*t (pardon my French).”38 Sophia answered on Twitter that she was “a bit hurt” by LeCun’s comments. Of course, Sophia cannot understand language and, at best, has chatbot-like parroting capabilities.
The problem is that it is easy to fool people, especially when manufacturers give robots human-like appearances.39 Samsung is engaged in a similar endeavor and has developed a series of human-looking video chatbots.40 We at least need to educate people that just because a robot looks human does not mean it can think or reason. Some would take this a step further and argue that AI systems should be required to have robotic voices so they cannot fool people.
A related issue involves people’s emotional reactions to human- or animal-like robots, a topic that MIT Media Lab researcher Kate Darling has studied extensively.41 Logically, a non-AGI robot is just a mechanical device with no more life force than a toaster, so you would expect people to interact with robots the way they interact with appliances. However, although people do not typically develop emotional ties to kitchen appliances, they do develop them for robots. The military has found that soldiers develop emotional attachments to bomb disposal robots.42 There are even companies that specialize in emotional support robots, such as Paro, a furry robotic seal.43
This tendency toward anthropomorphism is not limited to positive reactions. A group of researchers put a robot on a road and positioned it with its thumb out. They wanted to see how many people would pick it up. Unfortunately, a passerby beheaded the poor robot in Pennsylvania.44 In 2015, Boston Dynamics released a video intended to demonstrate a breakthrough level of robot stability. In the video, an employee kicked a four-legged robot that resembled a cute dog to show that it could regain its balance without falling over. The video went viral, and it drew a negative reaction from many people, including the animal rights group PETA.45 People also avoid touching the “private parts” of robots, as if to do so would be a violation of the robot’s personal space.46 Even the innocuous act of giving a robot a name changes the way people interact with it. Some have even called for legal protections for robots.47
This high level of anthropomorphism is likely to create some social issues. Human-like robots may confuse children and could result in inappropriate role modeling. MIT sociology professor Sherry Turkle did a study in which she and her team programmed robotic chatbots to react to children with emotional responses. The children developed strong bonds with the robots and were upset when they broke or when the staff took them away. She worries that the experiment did actual emotional damage to the children.48
Professor Darling has surfaced several other issues, including the possibility that robot manufacturers might take advantage of this bond to influence people, politically or otherwise. Also, if robots are used as substitutes for humans in eldercare and childcare, there are associated risks in the loss of human companionship, an issue researchers have not yet fully examined. And if we tolerate abusive behavior toward robots, that acceptance of abuse might transfer over to animals and humans, especially if children are allowed or encouraged to abuse robots.
At the same time, we need to recognize that, no matter how much we build robots to look like people or cute animals, they are as dumb as toasters and vacuum cleaners. Even if a device looks human, a narrow AI system can no more think, feel, or reason than a calculator can. Just as a calculator can only perform mathematical calculations, a narrow AI system can only perform the tasks that its individual AI subsystems have learned. It cannot think about those tasks. It cannot get bored. It cannot gain consciousness. It does not empathize with us. It does not have feelings of its own. Most of us have learned not to fall for Nigerian prince email schemes. Similarly, we need to teach people not to anthropomorphize machines just because they look like humans or animals.
Deepfakes can be used by bad actors for nefarious purposes like influencing elections and tarnishing the reputations of public figures. Regulation is needed to penalize this type of behavior without affecting legitimate satire.
The automated production of fake news could have a devastating impact on our ability to learn the truth about current events. Fortunately, the technology cannot create high-quality fake news and probably never will, because it lacks commonsense knowledge and reasoning.
Since robots cannot think, reason, experience emotions, or feel pain, we should also stop wasting precious effort on debates (including in the European Parliament!49) about whether robots can be held responsible for their actions.50 And we ought to discontinue the pointless debate over whether robots should have rights.51 Robots do not need legal protection against abuse any more than my golf clubs do.