Chapter 11. With Power Comes Responsibility

In the previous chapters, we looked at how to leverage psychology to build more intuitive, human-centered products and experiences. We identified and explored some key principles from psychology that can be used as a guide for designing for how people actually are, instead of forcing them to conform to technology. This knowledge can be quite powerful for designers, but with power comes responsibility. While there’s nothing inherently wrong with leveraging insights from behavioral and cognitive psychology to create better designs, it’s critical that we consider how products and services can undermine the goals of the people using them, why accountability matters for those who create them, and how we can slow down and be more intentional.

How Technology Shapes Behavior

The first step in making more responsible design decisions is to acknowledge and understand the ways in which the human mind is susceptible to persuasive technology and how behavior can be shaped. There are a number of studies that provide a glimpse into the fundamentals of behavior shaping, but perhaps none are as influential or foundational as those conducted by American psychologist, behaviorist, author, inventor, and social philosopher B. F. Skinner. Through a process he called “operant conditioning,” Skinner studied how behaviors could be learned and modified by creating an association between a particular behavior and a consequence. Using a laboratory apparatus that came to be named after him (Figure 11-1), Skinner studied how the behavior of animals could be shaped by teaching them to perform desired actions in response to specific stimuli in an isolated environment. His earliest experiments involved placing a hungry rat into the chamber and observing it while it discovered that a food pellet would be dispensed when it came into contact with a lever on one side.1 After a few chance occurrences, the rat quickly learned the association between pushing the lever and receiving food, and each time it was put in the box it would go straight to the lever—a clear demonstration of how positive reinforcement increases the likelihood of behavior being repeated. Skinner also experimented with negative reinforcement by placing a rat inside the chamber and subjecting it to an unpleasant electrical current, which would be turned off when the lever was pressed. Much like his previous experiments that rewarded the rats with food, the animal learned to avoid the current quickly by going straight to the lever once placed in the box.

Figure 11-1. B. F. Skinner’s operant conditioning chamber, also known as the “Skinner box” (source: Skinner, 1938)

Skinner later discovered that different patterns of reinforcement affected the speed and frequency at which the animals would perform the desired behavior.2 For example, rats that were rewarded with food each time they pressed the lever would press it only when they became hungry, and rats that were rewarded too infrequently would stop pressing the lever altogether. By contrast, rats that were rewarded with food in unpredictable patterns would repeatedly press the lever and continue doing so without reinforcement for the longest time. In other words, the rats’ behavior could most effectively be shaped by reinforcing it at variable times, as opposed to every time or not frequently enough. Too much or too little reinforcement led to the animals losing interest, but random reinforcement led to impulsive, repeated behavior.
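The persistence that variable reinforcement produces can be illustrated with a toy simulation. This is a sketch under stated assumptions, not a model from Skinner’s work: the function names and the give-up rule (the “animal” tolerates a dry spell up to roughly twice the longest unrewarded streak it experienced during training) are hypothetical simplifications.

```python
import random

def train_and_extinguish(reward_fn, training_presses=200, seed=42):
    """Train a simulated animal on a reinforcement schedule, then cut
    rewards off entirely and estimate how long it keeps pressing.
    Toy assumption: the animal gives up once a dry spell exceeds twice
    the longest unrewarded streak it saw during training, plus one."""
    rng = random.Random(seed)
    streak = longest_streak = 0
    for _ in range(training_presses):
        if reward_fn(rng):
            streak = 0  # reward resets the unrewarded streak
        else:
            streak += 1
            longest_streak = max(longest_streak, streak)
    # Extinction phase: no rewards ever again.
    return 2 * longest_streak + 1  # presses before giving up

# Continuous schedule: every press rewarded -> extinction is obvious fast.
continuous = train_and_extinguish(lambda rng: True)
# Variable-ratio schedule: ~1 in 4 presses rewarded -> dry spells feel normal.
variable = train_and_extinguish(lambda rng: rng.random() < 0.25)

print(f"presses after rewards stop — continuous: {continuous}, variable: {variable}")
```

Under the continuous schedule the animal has never experienced an unrewarded press, so the moment rewards stop, the change is detectable and pressing ceases almost immediately; under the variable schedule, long unrewarded streaks are indistinguishable from normal operation, so the behavior persists far longer.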

Fast-forward to today, and it’s clear that Skinner’s research has been applied beyond the isolated box that bears his name. It can also be observed with human subjects in casinos around the world, where you’ll find slot machines that have perfected operant conditioning. These machines are an excellent modern-day example of the Skinner box: gamblers pay to pull a lever, occasionally being rewarded for doing so. In her book Addiction by Design,3 cultural anthropologist Natasha Dow Schüll explores the world of machine-aided gambling and describes how slot machines are designed to mesmerize people into a state of “continuous productivity” in order to extract maximum value through a continual feedback loop. Additionally, each player’s activity is often recorded into a data system that builds a risk profile, informing casino staff how much that player can lose and still feel satisfied. When a player approaches their algorithmically calculated “pain point,” casinos often dispatch a “luck ambassador” to supplement the holding power of the slot machine by dispensing meal coupons, show tickets, gambling vouchers, and other incentives. It’s a stimulus-response loop optimized to keep people in front of the machines, repeatedly pulling the levers and spending money—all while being tracked in order to maximize time on device.

Digital products and services have also been known to employ various methods with the goal of shaping human behavior, and we can see examples in many of the apps we use every day. Everything from keeping you on a site for as long as possible to nudging you to make a purchase or tempting you to share content is behavior that can be shaped through reinforcement at the right time. Let’s take a closer look at some of the more common methods technology employs to shape behavior, whether intentionally or unintentionally.

Defaults

Default settings matter when it comes to choice architecture because most people never change them. These settings therefore have incredible power to steer decisions, even when people are unaware of what’s being decided for them. For example, a 2011 study found that Facebook’s default privacy settings (Figure 11-5) matched users’ expectations only 37% of the time, leading to their content and personal information being visible to more people than they expected.6

Figure 11-5. Facebook’s privacy settings (source: Facebook, 2020)

Despite these potential mismatches, studies suggest that default options often lead people to rationalize their acceptance and reject alternatives.7
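The outsized effect of a default can be sketched in a few lines. The numbers here are illustrative assumptions, not findings from the studies above: suppose only about 10% of users ever open the settings screen, and those who do choose the private option.

```python
import random

def sharing_rate(default_public, adjust_rate=0.10, users=10_000, seed=1):
    """Fraction of users whose content ends up public, given a default.
    Hypothetical assumption: only ~10% of users ever change settings,
    and those who do pick the private option."""
    rng = random.Random(seed)
    public = 0
    for _ in range(users):
        adjusts = rng.random() < adjust_rate
        # Users who adjust choose private; everyone else keeps the default.
        public += default_public and not adjusts
    return public / users

print(f"public-by-default:  {sharing_rate(True):.0%} of content public")
print(f"private-by-default: {sharing_rate(False):.0%} of content public")
```

The behavior of the large majority is determined entirely by whichever default the designer picked, which is precisely why defaults are such a powerful lever in choice architecture.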

(Lack of) Friction

Another way to shape behavior with digital products and services is to remove as much friction as possible—especially friction around actions you want people to take. In other words, the easier and more convenient you make an action, the more likely people will be to perform that action and form a habit around it. Take, for example, Amazon Dash buttons (Figure 11-6), small electronic devices that enabled customers to order frequently used products simply by pressing a button, without even visiting the Amazon website or app. The physical buttons have since been deprecated in favor of digital-only versions, but this example illustrates just how far companies will go to shape behavior by attempting to remove as many obstacles as possible.

Figure 11-6. An example of Amazon’s now-deprecated Dash button (source: Amazon, 2019)
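Conceptually, a Dash button collapses an entire checkout flow into one action: product, quantity, payment, and address are all resolved in advance. The sketch below is a hypothetical model of that idea; the class, product ID, and callback are illustrative, not Amazon’s actual API.

```python
from dataclasses import dataclass

@dataclass
class DashButton:
    """Toy model of a one-press reorder device: every decision that
    normally creates friction (product, quantity, payment, shipping)
    is configured ahead of time, leaving a single press as the only
    remaining action."""
    product_id: str
    quantity: int = 1

    def press(self, place_order):
        # A real device also debounced presses and ignored new presses
        # while an order was in flight; omitted here for brevity.
        return place_order(self.product_id, self.quantity)

orders = []
button = DashButton(product_id="detergent-64oz")  # hypothetical product ID
button.press(lambda pid, qty: orders.append((pid, qty)))
print(orders)  # one press produced one pre-configured order
```

The design choice worth noting is that all the friction is moved to a one-time setup step, so the repeated action costs almost nothing—exactly the condition under which habits form.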

Reciprocity

Reciprocation, or the tendency to repay the gestures of others, is a strong impulse we share as human beings. It’s a social norm we’ve come to value and even rely on as a species. It’s also a strong determining factor of human behavior that can be exploited, intentionally or not. Technology can tap into our impulse to reciprocate the gestures of others and shape our behavior as a result. Take, for example, LinkedIn, which notifies people when others have endorsed them for a skill (Figure 11-7). More often than not, this leads to the recipient of the endorsement not only accepting the gesture but also feeling obliged to respond with their own endorsement. The end result is more time spent on the platform by both people, and more profit for LinkedIn.

Figure 11-7. LinkedIn skill endorsement notification (source: LinkedIn, 2020)

Dark Patterns

Dark patterns are yet another way technology can be used to influence behavior: deceptive interface techniques that trick people into performing actions they didn’t intend, whether to increase engagement or to push them toward tasks that aren’t in their best interest (making a larger purchase, sharing unnecessary information, accepting marketing communications, etc.). Unfortunately, these deceptive techniques can be found all over the internet. In a 2019 study, researchers from Princeton University and the University of Chicago analyzed about 11,000 shopping websites looking for evidence of dark patterns. Their findings were nothing short of alarming: they identified 1,818 instances of dark patterns, with the more popular sites in the sample being more likely to feature them.8 To illustrate, consider 6pm.com, which makes use of the scarcity pattern to indicate that limited quantities of a product are available, increasing its perceived desirability. The company does this by displaying a low-stock message when people choose product options, to make it always seem that the item is in imminent danger of selling out (Figure 11-8).

Figure 11-8. An example of the scarcity dark pattern (source: 6pm.com, 2019)
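The difference between an honest low-stock indicator and the scarcity dark pattern comes down to whether the message is tied to real inventory. The sketch below is a hypothetical illustration (the function name and threshold are assumptions, not 6pm.com’s implementation):

```python
def stock_message(units_in_stock, honest=True, threshold=5):
    """Sketch of a low-stock banner. The honest version reflects real
    inventory; the dark-pattern version manufactures urgency regardless
    of how much stock actually exists."""
    if honest:
        if units_in_stock <= threshold:
            return f"Only {units_in_stock} left in stock!"
        return "In stock"
    # Scarcity dark pattern: always claim near-depletion.
    return f"Only {min(units_in_stock, threshold)} left in stock!"

print(stock_message(500, honest=True))   # plenty of stock, says so
print(stock_message(500, honest=False))  # same inventory, urgency anyway
```

The two versions are indistinguishable to the shopper, which is what makes the pattern deceptive: the urgency cue carries no information about actual scarcity.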

These are only some of the more common methods by which technology can be used to shape behavior in subtle ways. Data collected about user behavior can be used to fine-tune how a system responds to an individual, and these methods are constantly increasing in sophistication and accuracy, while the psychological hardware we share as humans remains the same. Now, more than ever, it’s important that designers consider the ethics of influencing behavior.

Why Ethics Matter

Now let’s explore why exploitative technology should matter to those in the technology industry. Digital technology grows more deeply embedded in our daily lives with each passing year. Since the arrival of the smartphone and other “smart” devices, we’ve become more and more reliant on the miniaturized computers we keep in our pockets, wear on our wrists, embed in our clothing, or carry in our bags. Everything from transportation and accommodation to food and consumer goods is just a few taps and swipes away, all thanks to these convenient little digital companions. The convenience these devices bring us is liberating and empowering, but it is not without consequence. Sometimes companies with the best of intentions create technology that ultimately produces unintended results.

Good Intentions, Unintended Consequences

Companies seldom set out to create harmful products and services. When Facebook introduced the “like” button in 2009, they probably didn’t intend for it to become such an addictive feedback mechanism, providing a small dopamine hit of social affirmation to users who found themselves returning to the app time and time again to measure their self-worth. They probably also didn’t intend for people to spend so many hours mindlessly scrolling through their news feeds once infinite scrolling was introduced. Snapchat probably didn’t intend for its filters to change how many people see themselves and present themselves to others, or to drive some to pursue cosmetic surgery in an effort to recreate the look provided by the filters in the app. They surely didn’t intend for their disappearing videos to be used for sexual harassment or to become a haven for sexual predators. Sadly, I could fill a whole chapter with examples like these—but I think you get the point. It’s hard to imagine any of these companies intended the negative consequences that resulted from the services they provided or features they introduced. And yet those consequences did occur, and the harm created by these examples and countless others is not excusable just because it was unintended by the creators.

Things have moved so fast in the technology industry that we haven’t always had time to see the things that have been broken in the process. Now the research is starting to catch up and enlighten us about the lasting effects of “progress.” It appears that the mere presence of our smartphones reduces our available cognitive capacity, even when the devices are turned off.9 Additionally, links have been made between social media use and its disturbing effects on some of society’s most vulnerable: increases in depression and loneliness in young adults10 and a rise in suicide-related outcomes or deaths among adolescents.11 Unfortunate side effects like these continue to surface as researchers take a closer look at the ways in which technology is impacting people’s lives and society as a whole.

The Ethical Imperative

Human vulnerabilities often get exploited on digital platforms that lose sight of the human problems that they once sought to solve. The same technology that enables us to so easily purchase, connect, or consume can also distract us, affect our behavior, and impact the relationships we have with others around us. Psychology and its application in user experience design play a critical role in all of this: behavior design is useful for keeping people “hooked,” but at what cost? When did “daily active users” or “time on site” become a more meaningful metric than whether a product is actually helping people achieve their goals or facilitating meaningful connections?

Ethics must be an integral part of the design process, because without this check and balance, there may be no one advocating for the end user within the companies and organizations creating technology. The commercial imperatives to increase time on site, streamline the consumption of media and advertising, or extract valuable data don’t match up with human objectives of accomplishing a task, staying connected with friends or family, and so on. In other words, the corporate goals of the business and the human goals of the end user are seldom aligned, and more often than not designers are a conduit between them. If behavior can be shaped by technology, who holds the companies that build technology to account for the decisions they make?

It’s time that designers confront this tension and accept that it’s our responsibility to create products and experiences that support and align with the goals and well-being of users. In other words, we should build technology that augments the human experience rather than replacing it with virtual interaction and rewards. The first step in making ethical design decisions is to acknowledge how the human mind can be exploited. We must then take accountability for the technology we help to create and ensure it respects people’s time, attention, and overall digital well-being. No longer is “moving fast and breaking things” an acceptable means of building technology—instead, we must slow down and be intentional with the technology we create, and consider how it’s impacting people’s lives.

Slow Down and Be Intentional

To ensure we are building products and services that support the goals of the people using them, it’s imperative that ethics are integrated into the design process. The following are a few common approaches to ensuring the human part of “human-centered design” remains at the forefront.

1 B. F. Skinner, The Behavior of Organisms: An Experimental Analysis (New York: Appleton-Century, 1938).

2 C. B. Ferster and B. F. Skinner, Schedules of Reinforcement (New York: Appleton-Century-Crofts, 1957).

3 Natasha Dow Schüll, Addiction by Design: Machine Gambling in Las Vegas (Princeton, NJ: Princeton University Press, 2012).

4 Michael Winnick, “Putting a Finger on Our Phone Obsession,” dscout, June 16, 2016, https://blog.dscout.com/mobile-touches.

5 Catalina L. Toma and Jeffrey T. Hancock, “Self-Affirmation Underlies Facebook Use,” Personality and Social Psychology Bulletin 39, no. 3 (2013): 321–31.

6 Yabing Liu, Krishna P. Gummadi, Balachander Krishnamurthy, and Alan Mislove, “Analyzing Facebook Privacy Settings: User Expectations vs. Reality,” in IMC ’11: Proceedings of the 2011 ACM SIGCOMM Internet Measurement Conference (New York: Association for Computing Machinery, 2011), 61–70.

7 Isaac Dinner, Eric Johnson, Daniel Goldstein, and Kaiya Liu, “Partitioning Default Effects: Why People Choose Not to Choose,” Journal of Experimental Psychology: Applied 17, no. 4 (2011): 332–41.

8 Arunesh Mathur, Gunes Acar, Michael J. Friedman, Elena Lucherini, Jonathan Mayer, Marshini Chetty, and Arvind Narayanan, “Dark Patterns at Scale: Findings from a Crawl of 11K Shopping Websites,” in Proceedings of the ACM on Human-Computer Interaction, vol. 3 (New York: Association for Computing Machinery, 2019), 1–32.

9 Adrian Ward, Kristen Duke, Ayelet Gneezy, and Maarten Bos, “Brain Drain: The Mere Presence of One’s Own Smartphone Reduces Available Cognitive Capacity,” Journal of the Association for Consumer Research 2, no. 2 (2017): 140–54.

10 Melissa Hunt, Rachel Marx, Courtney Lipson, and Jordyn Young, “No More FOMO: Limiting Social Media Decreases Loneliness and Depression,” Journal of Social and Clinical Psychology 37, no. 10 (2018): 751–68.

11 Jean Twenge, Thomas Joiner, Megan Rogers, and Gabrielle Martin, “Increases in Depressive Symptoms, Suicide-Related Outcomes, and Suicide Rates Among U.S. Adolescents After 2010 and Links to Increased New Media Screen Time,” Clinical Psychological Science 6, no. 1 (2018): 3–17.