8 The Vulnerability Loop

Imagine that you and a stranger ask each other the following two sets of questions.

SET A

What was the best gift you ever received and why?

Describe the last pet you owned.

Where did you go to high school? What was your high school like?

Who is your favorite actor or actress?

SET B

If a crystal ball could tell you the truth about yourself, your life, the future, or anything else, what would you want to know?

Is there something that you’ve dreamed of doing for a long time? Why haven’t you done it?

What is the greatest accomplishment of your life?

When did you last sing to yourself? To someone else?

At first glance, the two sets of questions have a lot in common. Both ask you to disclose personal information, to tell stories, to share. However, if you were to do this experiment (its full form contains thirty-six questions), you would notice two differences. The first is that as you went through Set B, you would feel a bit apprehensive. Your heart rate would increase. You would be more uncomfortable. You would blush, hesitate, and perhaps laugh out of nervousness. (It is not easy, after all, to tell a stranger something important you’ve dreamed of doing all your life.)

The second difference is that Set B would make you and the stranger feel closer to each other—around 24 percent closer than Set A, according to experimenters.* While Set A allows you to stay in your comfort zone, Set B generates confession, discomfort, and authenticity that break down barriers between people and tip them into a deeper connection. While Set A generates information, Set B generates something more powerful: vulnerability.

At some level, we intuitively know that vulnerability tends to spark cooperation and trust. But we may not realize how powerfully and reliably this process works, particularly when it comes to group interactions. So it’s useful to meet Dr. Jeff Polzer, a professor of organizational behavior at Harvard who has spent a large chunk of his career examining how small, seemingly insignificant social exchanges can create cascade effects in groups.

“People tend to think of vulnerability in a touchy-feely way, but that’s not what’s happening,” Polzer says. “It’s about sending a really clear signal that you have weaknesses, that you could use help. And if that behavior becomes a model for others, then you can set the insecurities aside and get to work, start to trust each other and help each other. If you never have that vulnerable moment, on the other hand, then people will try to cover up their weaknesses, and every little microtask becomes a place where insecurities manifest themselves.”

Polzer points out that vulnerability is less about the sender than the receiver. “The second person is the key,” he says. “Do they pick it up and reveal their own weaknesses, or do they cover up and pretend they don’t have any? It makes a huge difference in the outcome.” Polzer has become skilled at spotting the moment when the signal travels through the group. “You can actually see the people relax and connect and start to trust. The group picks up the idea and says, ‘Okay, this is the mode we’re going to be in,’ and it starts behaving along those lines, according to the norm that it’s okay to admit weakness and help each other.”

The interaction he describes can be called a vulnerability loop. A shared exchange of openness, it’s the most basic building block of cooperation and trust. Vulnerability loops seem swift and spontaneous from a distance, but when you look closely, they all follow the same discrete steps:

1. Person A sends a signal of vulnerability.

2. Person B detects this signal.

3. Person B responds by signaling their own vulnerability.

4. Person A detects this signal.

5. A norm is established; closeness and trust increase.

Consider the situation of Al Haynes on Flight 232. He was the captain of the plane, the source of power and authority to whom everyone looked for reassurance and direction. When the explosion knocked out the controls, his first instinct was to play that role—to grab the yoke and say, “I got it.” (Later he would call those three words “the dumbest thing I’ve ever said in my life.”) Had he continued interacting with his crew in this way, Flight 232 would have likely crashed. But he did not continue on that path. He was able to do something even more difficult: to send a signal of vulnerability, to communicate to his crew that he needed them. It took just four words:

Anybody have any ideas?

Likewise, when pilot trainer Denny Fitch entered the cockpit, he could have attempted to issue commands and take charge—after all, he knew as much about emergency procedures as Haynes did, if not more. Instead, he did the opposite: He explicitly put himself beneath Haynes and the crew, signaling his role as helper:

Tell me what you want, and I’ll help you.

Each of these small signals took only a few seconds to deliver. But they were vital, because they shifted the dynamic, allowing two people who had been separate to function as one.

It’s useful to zoom in on this shift. As it happens, scientists have designed an experiment to do exactly that, called the Give-Some Game. It works like this: You and another person, whom you’ve never met, each get four tokens. Each token is worth a dollar if you keep it but two dollars if you give it to the other person. The game consists of one decision: How many tokens do you give the other person?

This is not a simple decision. If you give all four tokens and the other person gives none, you end up with nothing. If you’re like most people, you end up giving an average of 2.5 tokens to a stranger—slightly biased toward cooperation. What gets interesting, however, is how people tend to behave when their vulnerability levels are increased a few notches.
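The payoff structure behind this dilemma can be sketched in a few lines. (A minimal sketch: the function name and the four-token default are mine, taken from the description above.)

```python
def give_some_payoff(you_give: int, they_give: int, tokens: int = 4) -> int:
    """Your payoff in the Give-Some Game: $1 per token you keep,
    plus $2 per token the other person gives you."""
    assert 0 <= you_give <= tokens and 0 <= they_give <= tokens
    return (tokens - you_give) + 2 * they_give

# Mutual hoarding yields $4 each; mutual full giving yields $8 each.
# Giving everything to a non-giver leaves you with nothing.
```

The sketch makes the tension visible: total value is maximized when both players give everything, but each token given is a unit of exposure to the other person's choice.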

In one experiment, subjects were asked to deliver a short presentation to a roomful of people who had been instructed by experimenters to remain stone-faced and silent. They played the Give-Some Game afterward. You might imagine that the subjects who endured this difficult experience would respond by becoming less cooperative, but the opposite turned out to be true: the speakers’ cooperation levels increased by 50 percent. That moment of vulnerability did not reduce willingness to cooperate but boosted it. The inverse was also true: Increasing people’s sense of power—that is, tweaking a situation to make them feel more invulnerable—dramatically diminished their willingness to cooperate.

The link between vulnerability and cooperation applies not only to individuals but also to groups. In an experiment by David DeSteno of Northeastern University, participants were asked to perform a long, tedious task on a computer that was rigged to crash just as they were completing it. Then one of their fellow participants (who was actually a confederate of the researchers) would walk over, notice the problem, and generously spend time “fixing” the computer, thereby rescuing the participant from having to reload the data. Afterward the participants played the Give-Some Game. As you might expect, the subjects were significantly more cooperative with the person who fixed their computer. But here’s the thing: They were equally cooperative with complete strangers. In other words, the feelings of trust and closeness sparked by the vulnerability loop were transferred in full strength to someone who simply happened to be in the room. The vulnerability loop, it turns out, is contagious.

“We feel like trust is stable, but every single moment your brain is tracking your environment, and running a calculation whether you can trust the people around you and bond with them,” says DeSteno. “Trust comes down to context. And what drives it is the sense that you’re vulnerable, that you need others and can’t do it on your own.”

Normally, we think about trust and vulnerability the way we think about standing on solid ground and leaping into the unknown: first we build trust, then we leap. But science is showing us that we’ve got it backward. Vulnerability doesn’t come after trust—it precedes it. Leaping into the unknown, when done alongside others, causes the solid ground of trust to materialize beneath our feet.

Question: How would you go about finding ten large red balloons deployed at secret locations throughout the United States?

This is not an easy question. It was dreamed up by scientists from the Defense Advanced Research Projects Agency (DARPA), a division of the U.S. Department of Defense tasked with helping America’s military prepare for future technological challenges. The Red Balloon Challenge, which DARPA announced on October 29, 2009, was designed to mimic real-life dilemmas like terrorism and disease control, and offered a $40,000 prize to the first group to accurately locate all ten balloons. The immensity of the task—ten balloons in 3.1 million square miles—led some to wonder if DARPA had gone too far. A senior analyst for the National Geospatial-Intelligence Agency declared it “impossible.”

Within days of the announcement, hundreds of groups signed up, representing a diverse cross-section of America’s brightest minds: hackers, social media entrepreneurs, tech companies, and research universities. The vast majority took a logical approach to the problem: They built tools to attack it. They constructed search engines to analyze satellite photography, tapped into existing social and business networks, launched publicity campaigns, built open-source intelligence software, and nurtured communities of searchers on social media.

The team from MIT Media Lab, on the other hand, didn’t do any of that stuff because they didn’t find out about the challenge until four days before launch. A group of students, led by postdoctoral fellow Riley Crane, realized they had no time to assemble a team or create technology or do anything that resembled an organized approach. So instead they took a different tack. They built a website that consisted of the following invitation:

When you sign up to join the MIT Red Balloon Challenge Team, you’ll be provided with a personalized invitation link, like http://balloon.mit.edu/yournamehere

Have all your friends sign up using your personalized invitation. If anyone you invite, or anyone they invite, or anyone they invite (…and so on) wins money, so will you!

We’re giving $2000 per balloon to the first person to send us the correct coordinates, but that’s not all—we’re also giving $1000 to the person who invited them. Then we’re giving $500 [to] whoever invited the inviter, and $250 to whoever invited them, and so on…(see how it works).

Compared to the sophisticated tools and technology deployed by other groups, the MIT team’s approach was laughably primitive. They had no organizational structure or strategy or software, not even a map of the United States to help locate the balloons. This wasn’t a well-equipped team; it was closer to a hastily scrawled plea shoved into a bottle and lobbed into the ocean of the Internet: “If you find this, please help!”

On the morning of December 3, two days before the balloon launch, MIT switched on the website. For a few hours, nothing happened. Then, at 3:42 P.M. on December 3, people began to join. Connections first bloomed out of Boston, then exploded, radiating to Chicago, Los Angeles, San Francisco, Minneapolis, Denver, Texas, and far beyond, including Europe. Viewed in time lapse, the spread of connections resembled the spontaneous assembly of a gigantic nervous system, with hundreds of new people joining the effort with each passing hour.

At precisely 10:00 A.M. Eastern on December 5, DARPA launched the balloons in secret locations ranging from Union Square in downtown San Francisco to a baseball field outside Houston, Texas, to a woodland park near Christiana, Delaware. Thousands of teams swung into action, and the organizers settled in for a long wait: They estimated it would take up to a week for a team to accurately locate all ten balloons.

Eight hours, fifty-two minutes, and forty-one seconds later, it was over. The MIT team had found all ten balloons and had done so with the help of 4,665 people—or as DARPA organizer Peter Lee put it, “a huge amount of participation from shockingly little money.” Their primitive, last-minute, message-in-a-bottle method had defeated better-equipped attempts, creating a fast, deep wave of motivated teamwork and cooperation.

The reason was simple. All the other teams used a logical, incentive-based message: Join us on this project, and you might win money. This signal sounds motivating, but it doesn’t really encourage cooperation—in fact, it does the opposite. If you tell others about the search, you are slightly reducing your chances of winning prize money. (After all, if others find the balloon and you don’t, they’ll receive the entire reward.) These teams were asking for participants’ vulnerability, while remaining invulnerable themselves.

The MIT team, on the other hand, signaled its own vulnerability by promising that everyone connected to finding a red balloon would share in the reward. Then it provided people with the opportunity to create networks of vulnerability by reaching out to their friends, then asking them to reach out to their friends. The team did not dictate what participants should do or how they should do it, or give them specific tasks to complete or technology to use. It simply gave out the link and let people do with it what they pleased. And what they pleased, it turned out, was to connect with lots of other people. Each invitation created another vulnerability loop that drove cooperation—Hey, I’m doing this crazy balloon-hunting project and I need your help.

What made the difference in cooperation, in other words, wasn’t how many people a person reached or how good their balloon-search technology was—it wasn’t really about a given individual at all. It was rather about how effectively people created relationships of mutual risk. The Red Balloon Challenge wasn’t even really a technology contest. It was, like all endeavors that seek to create cooperation, a vulnerability-sharing contest.

The story of the Red Balloon Challenge strikes us as surprising, because most of us instinctively see vulnerability as a condition to be hidden. But science shows that when it comes to creating cooperation, vulnerability is not a risk but a psychological requirement.

“What are groups really for?” Polzer asks. “The idea is that we can combine our strengths and use our skills in a complementary way. Being vulnerable gets the static out of the way and lets us do the job together, without worrying or hesitating. It lets us work as one unit.”

After talking to Polzer and other scientists who study trust, I began to see vulnerability loops in other places I visited. Sometimes they were small, quick exchanges. A pro baseball coach began a season-opening speech to his players by saying, “I was so nervous about talking to you today,” and the players responded by smiling sympathetically—they were nervous too. Sometimes these loops took the form of physical objects, like the Failure Wall that Dun & Bradstreet Credibility Corporation built, a whiteboard where people could share moments where they’d fallen short.

Sometimes they were habits of seemingly invulnerable leaders, such as Apple founder Steve Jobs’s penchant for beginning conversations with the phrase, “Here’s a dopey idea.” (“And sometimes they were,” recalls Jonathan Ive, Apple’s senior vice president of design, in his memorial to Jobs. “Really dopey. Sometimes they were truly dreadful.”) Each loop was different, yet they shared a deeper pattern: an acknowledgment of limits, a keen awareness of the group nature of the endeavor. The signal being sent was the same: You have a role here. I need you.

“That’s why good teams tend to do a lot of extreme stuff together,” DeSteno says. “A constant stream of vulnerability gives them a much richer, more reliable estimate on what their trustworthiness is, and brings them closer, so they can take still more risks. It builds on itself.”

The mechanism of cooperation can be summed up as follows: Exchanges of vulnerability, which we naturally tend to avoid, are the pathway through which trusting cooperation is built. This idea is useful because it gives us a glimpse inside the machinery of teamwork. Cooperation, as we’ll see, does not simply descend out of the blue. It is a group muscle that is built according to a specific pattern of repeated interaction, and that pattern is always the same: a circle of people engaged in the risky, occasionally painful, ultimately rewarding process of being vulnerable together.

More immediately, the idea of vulnerability loops is useful because it helps illuminate connections between seemingly disparate worlds. For example, why are certain groups of comedians so successful? How is the world’s most notorious band of jewel thieves structured? And what does carrying around a really heavy log have to do with creating the best Special Forces teams on the planet?


* The questions were developed by psychologists Arthur and Elaine Aron. In its full form, the Experimental Generation of Interpersonal Closeness also includes four minutes of silent gazing into each other’s eyes. The original experiment was done with seventy-one pairs of strangers, and one pair ended up marrying. (They invited the entire lab to the ceremony.)