CHAPTER 5

Dissent to Win

CUDOS to a magician

Meyer R. Schkolnick was born on the Fourth of July, 1910, in South Philadelphia. He performed magic at birthday parties as a teenager and considered a career as a performer, adopting the stage name “Robert Merlin.” Then a friend convinced him that a teen magician naming himself after Merlin was too on the nose, so he performed as Robert Merton. When Robert K. Merton (the middle initial distinguishes him from his son, economist and Nobel laureate Robert C. Merton) died in 2003, the New York Times called him “one of the most influential sociologists of the 20th century.”

The founders of Heterodox Academy, in the BBS paper, specifically recognized Merton’s 1942 and 1973 papers, in which he established norms for the scientific community known by the acronym CUDOS: “An ideologically balanced science that routinely resorted to adversarial collaborations to resolve empirical disputes would bear a striking resemblance to Robert Merton’s ideal-type model of a self-correcting epistemic community, one organized around the norms of CUDOS.” Per the BBS paper, CUDOS stands for

Communism (data belong to the group),

Universalism (apply uniform standards to claims and evidence, regardless of where they came from),

Disinterestedness (vigilance against potential conflicts that can influence the group’s evaluation), and

Organized Skepticism (discussion among the group to encourage engagement and dissent).

If you want to pick a role model for designing a group’s practical rules of engagement, you can’t do better than Merton. To start, he coined the phrase “role model,” along with “self-fulfilling prophecy,” “reference group,” “unintended consequences,” and “focus group.” He founded the sociology of science and was the first sociologist awarded the National Medal of Science.

Merton began his academic career in the 1930s, studying the history of institutional influences on the scientific community. To him, it was a story of many periods of scientific advancement spurred on by geopolitical influences, but also periods of struggle to maintain independence from those influences. His life spanned both world wars and the Cold War, during which he studied and witnessed nationalist movements in which people “arrayed their political selves in the garb of scientists,” explicitly evaluating scientific knowledge based on political and national affiliations.

In 1942, Merton wrote about the normative structure of science. He tinkered with the paper over the next thirty-one years, publishing the final version as part of a book in 1973. This twelve-page paper is an excellent manual for developing rules of engagement for any truthseeking group. I recognized its application to my poker group and to the professional and workplace groups I’ve encountered in speaking and consulting. Each element of CUDOS—communism, universalism, disinterestedness, and organized skepticism—can be broadly applied and adapted to push a group toward objectivity. When there is a drift toward confirmation and away from exploring accuracy, it’s likely the result of the failure to nurture one of Merton’s norms. Not surprisingly, Merton’s paper would make an excellent career guide for anyone seeking to be a profitable bettor, or a profitable decision-maker, period.

Mertonian communism: more is more

The Mertonian norm of communism (obviously, not the political system) refers to the communal ownership of data within groups. Merton argued that, in academia, an individual researcher’s data must eventually be shared with the scientific community at large for knowledge to advance. “Secrecy is the antithesis of this norm; full and open communication its enactment.” In science, this means that the community has an agreement that research results cannot properly be reviewed without access to the data and a detailed description of the experimental design and methods. Researchers are entitled to keep data private until they publish, but once they do, they should throw the doors open to give the community every opportunity to make a proper assessment. Any attempt at accuracy is bound to fall short if the truthseeking group has only limited access to potentially pertinent information. Without all the facts, accuracy suffers.

This ideal of scientific sharing was similarly described by physicist Richard Feynman in a 1974 lecture as “a kind of utter honesty—a kind of leaning over backwards. For example, if you’re doing an experiment, you should report everything that you think might make it invalid—not only what you think is right about it: other causes that could possibly explain your results . . .”

It is unrealistic to think we can perfectly achieve Feynman’s ideal; even scientists struggle with it. Within our own decision pod, we should strive to abide by the rule that “more is more.” Get all the information out there. Indulge the broadest definition of what could conceivably be relevant. Reward the process of pulling the skeletons of our own reasoning out of the closet. As a rule of thumb, if we have an urge to leave out a detail because it makes us uncomfortable or requires even more clarification to explain away, those are exactly the details we must share. The mere fact of our hesitation and discomfort is a signal that such information may be critical to providing a complete and balanced account. Likewise, as members of a group evaluating a decision, we should take such hesitation as a signal to explore further.

To the extent we regard self-governance in the United States as a truthseeking experiment, we have established that openness in the sharing of information is a cornerstone of making and accounting for decisions by the government. The free-press and free-speech guarantees of the Constitution recognize the importance of self-expression, but they also exist because we need mechanisms to assure that information makes it to the public. The government serves the people, so the people own the data and have a right to have the data shared with them. Statutes like the Freedom of Information Act have the same purpose. Without free access to information, it is impossible to make reasoned assessments of our government.

Sharing data and information, like the other elements of a truthseeking charter, is done by agreement. Academics agree to share results. The government shares information by agreement with the people. Without an agreement, we can’t and shouldn’t compel others to share information they don’t want to share. We all have a right of privacy. Companies and other entities have rights to trade secrets and to protect their intellectual property. But within our group, an agreement to share details pertinent to assessing the quality of a decision is part of a productive truthseeking charter.

If the group is discussing a decision and it doesn’t have all the details, it might be because the person providing them doesn’t realize the relevance of some of the data. Or it could mean the person telling the story has a bias toward encouraging a certain narrative that they likely aren’t even aware of. After all, as Jonathan Haidt points out, we are all our own best PR agents, spinning a narrative that shines the most flattering light on us.

We have all experienced situations where we get two accounts of the same event, but the versions are dramatically different because they are informed by different facts and perspectives. This is known as the Rashomon Effect, named for the 1950 cinematic classic Rashomon, directed by Akira Kurosawa. The central element of the otherwise simple plot is how incompleteness becomes a tool for bias. In the film, four people give separate, drastically different accounts of a scene they all observed: the seduction (or rape) of a woman by a bandit, the bandit’s duel with her husband (if there was a duel), and the husband’s death (from losing the duel, murder, or suicide).

Even without conflicting versions, the Rashomon Effect reminds us that we can’t assume one version of a story is accurate or complete. We can’t count on someone else to provide the other side of the story, or any individual’s version to provide a full and objective accounting of all the relevant information. That’s why, within a decision group, it is helpful to commit to this Mertonian norm on both sides of the discussion. When presenting a decision for discussion, we should be mindful of details we might be omitting and be extra-safe by adding anything that could possibly be relevant. On the evaluation side, we must query each other to extract those details when necessary.

My consultation with the CEO who traced his company’s problems to firing the president demonstrated the value of a commitment to data sharing. After he described what happened, I requested a lot more information. As he got into the details of the hiring process for that executive and of the approaches he had taken to dealing with the president’s deficiencies on the job, those details prompted further questions, which, in turn, led to still more details being shared. He had identified what he thought was a bad decision, a judgment that seemed justified by his initial description of the situation. Once we got every detail of the decision’s many dimensions out in the open, we reached a different conclusion: the decision to fire the president had been quite reasonable strategically. It just happened to turn out badly.

Be a data sharer. That’s what experts do. In fact, that’s one of the reasons experts become experts. They understand that sharing data is the best way to move toward accuracy because it extracts the highest-fidelity insight from their listeners.

You should hear the amount of detail a top poker player puts into the description of a hand when they are workshopping that hand with another player. A layperson would think, “That seems like a lot of irrelevant, nitpicky detail. Why are they saying all that stuff?” When two expert poker players get together to trade views and opinions about hands, the detail is extraordinary: the positions of everyone acting in the hand; the size of the bets and the size of the pot after each action; what they know about how their opponents have played in past encounters; how they were playing in the particular game they were in; how they were playing in the most recent hands in that game (particularly whether they were winning or losing recently); how many chips each person had throughout the hand; what their opponents know about them, etc., etc. What the experts recognize is that the more detail you provide, the better the assessment of decision quality you get. And because the same types of details are always expected, expert players essentially work from a template, so there is less opportunity to convey only the information that might lead the listener down a garden path to a desired conclusion.

Hall of Fame football coach John Madden, in a documentary about Vince Lombardi, told a story about how, as a young assistant coach, he attended a coaching clinic where Lombardi spoke about one play: the power sweep, a running play that he made famous with the Green Bay Packers in the 1960s. Lombardi held the audience spellbound as he described that one play for eight hours. Madden said, “I went in there cocky, thinking I knew everything there was to know about football, and he spent eight hours talking about this one play. . . . I realized then that I actually knew nothing about football.”

We are naturally reluctant to share information that could encourage others to find fault in our decision-making. My group made this easier by making me feel good about committing myself to improvement. When I shared details that cast me in what I perceived to be a bad light, I got a positive self-image update from the approval of players I respected. In my consulting, I’ve encouraged companies to make sure they don’t define “winning” solely by results or providing a self-enhancing narrative. If part of corporate success consists of providing the most accurate, objective, and detailed evaluation of what’s going on, employees will compete to win on those terms. That will reward better habits of mind.

Agree to be a data sharer and reward others in your decision group for telling more of the story.

Universalism: don’t shoot the message

The well-known advice “don’t shoot the messenger” is actually good shorthand for the reasons why we want to protect and encourage dissenting ideas. Plutarch’s Life of Lucullus provided an early, literal example: the king of Armenia got advance notice that Lucullus’s troops were approaching. He killed the messenger for delivering that message and, henceforth, messengers stopped reporting such intelligence. Obviously, if you don’t like the message, you shouldn’t take it out on the messenger.

The Mertonian norm of universalism is the converse: “Truth-claims, whatever their source, are to be subjected to preestablished impersonal criteria.” It means the acceptance or rejection of ideas must not “depend on the personal or social attributes of their protagonist.” “Don’t shoot the message,” for some reason, hasn’t gotten the same historical or literary attention, but it addresses an equally important decision-making issue: don’t disparage or ignore an idea just because you don’t like who or where it came from.

When we have a negative opinion about the person delivering the message, we close our minds to what they are saying and miss a lot of learning opportunities because of it. Likewise, when we have a positive opinion of the messenger, we tend to accept the message without much vetting. Both are bad.

Whether the situation involves facts, ideas, beliefs, opinions, or predictions, the substance of the information has merit (or lack of merit) separate from where it came from. If you’re deciding whether the earth is round, it doesn’t matter if the idea came from your best friend or George Washington or Benito Mussolini. The accuracy of the statement should be evaluated independent of its source.

I learned an early lesson in my poker career about universalism. I started playing poker using that list of hands my brother Howard wrote on a napkin. I treated this initial advice like a holy document. Therefore, when I saw someone playing hands off-list, I immediately labeled them as a bad player. When I saw such a player subsequently execute a strategy I didn’t have as part of my game, I dismissed it. Doing that across the board (especially when I was labeling these players as “bad” based on one view of a sound beginner’s strategy) was an expensive failure of universalism. For so many things going on at the table in the first years of my poker career, I shot the message.

I was guilty of the same thing David Letterman admitted in his explanation to Lauren Conrad. He spent a long time assuming people around him were idiots before considering the alternative hypothesis, “Maybe I’m the idiot.” In poker, I was the idiot.

As I learned that Howard’s list was just a safe way to get me started and not the Magna Carta scrawled in crayon on a napkin, I developed an exercise to practice and reinforce universalism. When I had the impulse to dismiss someone as a bad player, I made myself find something that they did well. It was an exercise I could do for myself, and I could get help from my group in analyzing the strategies I thought those players might be executing well. That commitment led to many benefits.

Of course, I learned some new and profitable strategies and tactics. I also developed a more complete picture of other players’ strategies. Even when I determined that the strategic choices of that player weren’t, in the end, profitable, I had a deeper understanding of my opponent’s game, which helped me devise counter-strategies. I had started thinking more deeply about the way my opponents thought. And in some instances, I recognized that I had underestimated the skills of certain players who I initially thought were profitable for me to play against. That led me to make more objective decisions about game selection. And my poker group benefited from this exercise as well because, in workshopping the strategies with each other, we multiplied the number of playing techniques we could observe and discuss. Admitting that the people I played against had things to teach me was hard, and my group helped me feel proud of myself when I resisted the urge to just complain about how lucky my opponents were.

Nearly any group can create an exercise to develop and reinforce the open-mindedness universalism requires. As an example, with politics so polarized, we forget the obvious truth that no one has only good ideas or only bad ideas. Liberals would do well to take some time to read and watch more conservative news sources, and conservatives would do well to take some time to read and watch more liberal news sources—not with the goal of confirming that the other side is a collection of idiots who have nothing of value to say but to specifically and purposely find things they agree with. When we do this, we learn things we wouldn’t otherwise have learned. Our views will likely become moderated in the same way judges from opposite sides of the political aisle moderate each other. Even if, in the end, we don’t find much to agree with, we will understand the opposing position better—and ours as well. We’ll be practicing what John Stuart Mill preached.

Another way to disentangle the message from the messenger is to imagine the message coming from a source we value much more or much less. If we hear an account from someone we like, imagine if someone we didn’t like told us the same story, and vice versa. This can be incorporated into an exploratory group’s work, asking each other, “How would we feel about this if we heard it from a much different source?” We can take this process of vetting information in the group further by intentionally omitting, at the start, where or from whom we heard the idea. Leading off our story by identifying the messenger could interfere with the group’s commitment to universalism, biasing them to agree with or discredit the message depending on their opinion of the messenger. So leave the source out to start, giving the group the maximum opportunity to form an impression without shooting (or celebrating) the message based on their opinion of the messenger, apart from the messenger’s actual expertise and credibility.

John Stuart Mill made it clear that the only way to gain knowledge and approach truth is by examining every variety of opinion. We learn things we didn’t know. We calibrate better. Even when the result of that examination confirms our initial position, we understand that position better if we open ourselves to every side of the issue. That requires open-mindedness to the messages that come from places we don’t like.

Disinterestedness: we all have a conflict of interest, and it’s contagious

Back in the 1960s, the scientific community was at odds about whether sugar or fat was the culprit in the increasing rates of heart disease. In 1967, three Harvard scientists conducted a comprehensive review of the research to date, published in the New England Journal of Medicine, that firmly pointed the finger at fat. The paper was, not surprisingly, influential in the debate on diet and heart disease. After all, the NEJM is and was a prestigious publication and the researchers were, all three, from Harvard. Blaming fat and exonerating sugar shaped the diets of hundreds of millions of people for decades, a shift in eating habits that has been linked to the massive increase in rates of obesity and diabetes.

The influence of this paper and its negative effects on America’s eating habits and health provides a stunning demonstration of the imperative of disinterestedness. It was recently discovered that a trade group representing the sugar industry had paid the three Harvard scientists to write the paper, according to an article published in JAMA Internal Medicine in September 2016. Not surprisingly, consistent with the agenda of the sugar industry that had paid them, the researchers attacked the methodology of studies finding a link between sugar and heart disease and defended studies finding no link. The scientists’ attacks on and defenses of the methodology of studies on fat and heart disease followed the same pro-sugar pattern.

The scientists involved are all dead. If we could ask them, it’s possible they would say they never consciously knew they were being influenced. Given human nature, they would likely, at the least, have defended the truth of what they wrote and denied that the sugar industry dictated or influenced their thinking on the subject. Regardless, had the conflict of interest been disclosed, the scientific community would have viewed their conclusions with much more skepticism, taking into account the possibility of bias due to the researchers’ financial interest. At the time, the NEJM did not require such disclosures. (That policy changed in 1984.) That omission prevented an accurate assessment of the findings, resulting in serious harm to the health of the nation.

We tend to think about conflicts of interest in the financial sense, like the researchers getting paid by the sugar industry. But conflicts of interest come in many flavors. Our brains have built-in conflicts of interest, interpreting the world around us to confirm our beliefs, to avoid having to admit ignorance or error, to take credit for good results following our decisions, to find reasons bad results following our decisions were due to factors outside our control, to compare well with our peers, and to live in a world where the way things turn out makes sense. We are not naturally disinterested. We don’t process information independent of the way we wish the world to be.

Remember the thought experiment I suggested at the beginning of the book about what the headlines would have looked like if Pete Carroll’s pass call had won the 2015 Super Bowl? Those headlines would have been about his brilliance. People would have analyzed Carroll’s decision differently. Knowing how something turned out creates a conflict of interest that expresses itself as resulting.

Richard Feynman recognized that in physics—a branch of science that most of us consider as objective as 2 + 2 = 4—there is still demonstrable outcome bias. He found that if those analyzing data knew, or could even just intuit, the hypothesis being tested, the analysis would be more likely to support that hypothesis. The measurements might be objective, but slicing and dicing the data is vulnerable to bias, even unconscious bias. According to Robert MacCoun and physics Nobel laureate Saul Perlmutter in a 2015 Nature article, outcome-blind analysis has spread to several areas of particle physics and cosmology, where it “is often considered the only way to trust many results.” Because the idea—introducing a random variable so that those analyzing the data could not surmise the outcome the researcher might be hoping for—is hardly known in biological, psychological, and social sciences, the authors concluded these methods “might improve trust and integrity in many sciences, including those with high-stakes analyses that are easily plagued by bias.” Outcome blindness enforces disinterestedness.
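To make that idea concrete, here is a minimal sketch, in Python, of the kind of blinding procedure MacCoun and Perlmutter describe. The numbers, the size of the hidden offset, and the function names are purely illustrative assumptions, not details from their article: a third party shifts the data by a secret random amount, the analysts lock in every choice about how to slice and dice the data while blind to the answer, and only then is the shift removed.

import random

def blind(measurements, seed):
    # A third party generates a secret offset and shifts every measurement,
    # so the analysts cannot tell whether the blinded result favors the hypothesis.
    offset = random.Random(seed).uniform(-10.0, 10.0)
    return [m + offset for m in measurements], offset

def unblind(blinded_estimate, offset):
    # The offset is subtracted only after the analysis choices are frozen,
    # revealing the true estimate.
    return blinded_estimate - offset

# Illustrative data; the analysts see only the blinded values.
raw = [1.9, 2.4, 2.1, 1.7, 2.3]
blinded, secret = blind(raw, seed=7)
blinded_mean = sum(blinded) / len(blinded)   # what the analysts work with
true_mean = unblind(blinded_mean, secret)    # equals the mean of the raw data

In a real collaboration the secret offset would be held by someone outside the analysis team; the point of the sketch is only that every decision about handling the data gets committed to before anyone can see whether the answer is the one they were hoping for.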

We can apply this idea of outcome blindness to the way we communicate information as we workshop decisions about our far more ambiguous pursuits—like describing a poker hand, or a family argument, or the results of a market test for a new product. If the group is going to help us make and evaluate decisions in an unbiased way, we don’t want to infect them the way the data analysts were infected when they could surmise the hypothesis being tested. Telling someone how a story ends encourages them to be resulters, to interpret the details to fit that outcome. If I won a hand, it was more likely my group would assess my strategy as good. If I lost, the reverse would be true. Win a case at trial, and the strategy is brilliant. Lose, and mistakes were made. We treat outcomes as good signals for decision quality, as if we were playing chess. If the outcome is known, it will bias the assessment of decision quality to align with the outcome quality.

If the group is blind to the outcome, it produces a higher-fidelity evaluation of decision quality. The best way to do this is to deconstruct decisions before an outcome is known. Attorneys can evaluate trial strategy before the verdict comes in. Sales teams can evaluate strategy before learning whether they’ve closed the sale. Traders can vet process prior to positions being established or prior to options expiring. Once the outcome is known, make it a habit when seeking advice to give the details without revealing how things turned out. In poker, it isn’t practical to analyze hands before knowing how they turn out, since the results come within seconds of the decisions. To address this, expert poker players routinely omit the outcome when seeking advice about their play.

This became such a natural habit that I didn’t realize, until I started conducting poker seminars for players newer to the game, that this was not the norm for everyone. When I used hands I had played as illustrations, I would describe the hand up to the decision point I was discussing and no further, leaving off how the hand ended. This was, after all, how I had been trained by my poker group. When we finished the discussion, it was jarring to watch a roomful of people look at me like I had left them teetering on the edge of a cliff.

“Wait! How did the hand turn out?”

I gave them the red pill: “It doesn’t matter.”

Of course, we don’t have to be describing a poker hand to use this strategy to promote disinterestedness. Anyone can provide the narrative only up to the point of the decision under consideration, leaving off the outcome so as not to infect their listeners with bias. And outcomes aren’t the only problem. Beliefs are also contagious. If our listeners know what we believe to be true, they will likely work pretty hard to justify our beliefs, often without even knowing they are doing it. Telling them what we believe creates an ideological conflict of interest for them. So when trying to vet some piece of information, some fact or opinion, we would do well to shield our listeners from our own opinion while we seek theirs.

Simply put, the group is less likely to succumb to ideological conflicts of interest when they don’t know what the interest is. That’s MacCoun and Perlmutter’s point.

Another way a group can de-bias members is to reward them for skill in debating opposing points of view and finding merit in opposing positions. When members of the group disagree, a debate may be of only marginal value because the people debating are biased toward confirming their position, often creating a stalemate. If two people disagree, a referee can get them to each argue the other’s position with the goal of being the best debater. This shifts each debater’s interest toward open-mindedness about the opposing opinion rather than confirmation of their original position. They can’t win the debate if they can’t forcefully and credibly argue the other side. The key is for the group to have a charter that rewards objective consideration of alternative hypotheses, so that winning the debate feels better than supporting the pre-existing position. The group’s reinforcement ought to discourage us from arguing a straw man when we take the other side, and encourage us to feel good about winning the debate. This is one of the reasons it’s good for a group to have at least three members, two to disagree and one to referee.

What I’ve generally found is that two people whose positions on an issue are far apart will move toward the middle after a debate or skilled explanation of the opposing position. Engaging in this type of exchange creates an understanding of and appreciation for other points of view much deeper and more powerful than just listening to the other perspective. Ultimately, it gives us a deeper understanding of our own position. Once again, we are reminded of John Stuart Mill’s assertion that this kind of open-mindedness is the only way to learn.

Organized skepticism: real skeptics make arguments and friends

Skepticism gets a bum rap because it tends to be associated with negative character traits. Someone who disagrees could be considered “disagreeable.” Someone who dissents may be creating “dissension.” Maybe part of it is that “skeptical” sounds like “cynical.” Yet true skepticism is consistent with good manners, civil discourse, and friendly communications.

Skepticism is about approaching the world by asking why things might not be true rather than why they are true. It’s a recognition that, while there is an objective truth, not everything we believe about the world is true. Thinking in bets embodies skepticism by encouraging us to examine what we do and don’t know and what our level of confidence is in our beliefs and predictions. This moves us closer to what is objectively true.

A productive decision group would do well to organize around skepticism, which should also guide how the group communicates, because true skepticism isn’t confrontational. Thinking in bets demands skepticism. Without embracing uncertainty, we can’t rationally bet on our beliefs. And we need to be particularly skeptical of information that agrees with us, because we know we are biased to just accept and applaud confirming evidence. If we don’t “lean over backwards” (as Richard Feynman famously said) to figure out where we could be wrong, we are going to make some pretty bad bets.

If we embrace uncertainty and wrap that into the way we communicate with the group, confrontational dissent evaporates because we start from a place of not being sure. Just as we can wrap our uncertainty into the way we express our beliefs (“I’m 60% sure the waiter is going to mess up my order”), when we implement the norm of skepticism, we naturally modulate the expression of dissent with others. Expressing disagreement is, after all, just another way to express our own beliefs, which we acknowledge are probabilistic in nature. It follows that we should overtly express the uncertainty in any dissenting belief. No longer do we dissent with declarations of “You’re wrong!” Rather, we engage by saying, “I’m not sure about that,” or even just asking, “Are you sure about that?” or “Have you considered this other way of thinking about it?” We engage this way simply because that is faithful to uncertainty. Organized skepticism invites people into a cooperative exploration. People are more open to hearing differing perspectives expressed this way.

Skepticism should be encouraged and, where possible, operationalized. The term “devil’s advocate” developed centuries ago from the Catholic Church’s practice, during the canonization process, of hiring someone to present arguments against sainthood. Just as the CIA has red teams and the State Department has its Dissent Channel, we can incorporate dissent into our business and personal lives. We can create a pod whose job (literally, in business, or figuratively, in our personal life) is to present the other side, to argue why a strategy might be ill-advised, why a prediction might be off, or why an idea might be ill informed. In so doing, the red team naturally raises alternate hypotheses. Likewise, companies can implement an anonymous dissent channel, giving any employee, from the mail room to the boardroom, a venue to express dissenting opinions, alternative strategies, novel ideas, and points of view that may disagree with the prevailing viewpoint of the company without fear of repercussions. The company should do its best to reward this constructive dissent by taking the suggestions seriously or the expression of diverse viewpoints won’t be reinforced.

Less formally, look for opportunities to recruit a devil’s advocate on an ad hoc basis. When seeking advice, we can ask specific questions to encourage the other person to figure out reasons why we might be wrong. That way, they won’t be as reluctant to challenge the action we want to pursue; we’re asking for it, so it’s not oppositional for them to disagree or give us advice contrary to what they think we want to hear.

Make no mistake: the process of seeing ourselves and the world more accurately and objectively is hard and makes us think about things we generally avoid. The group needs rules of engagement that don’t make this harder by letting members get away with being nasty or dismissive. And we need to be aware that even a softer serve of dissent to those who have not agreed to the truthseeking charter can be perceived as confrontational. See David Letterman for details.

Communicating with the world beyond our group

This chapter has focused primarily on forming truthseeking groups on our own initiative or being part of such groups. Unless we have control over the culture around us,* those of us more actively seeking dissent will generally be in the minority when we are away from our group. That doesn’t mean that truthseeking is off-limits in those settings. It just means we have to take the most constructive, civil elements of truthseeking communication and introduce them carefully. There are several ways to communicate to maximize our ability to engage in a truthseeking way with anyone.

First, express uncertainty. Uncertainty not only improves truthseeking within groups but also invites everyone around us to share helpful information and dissenting opinions. Fear of being wrong (or of having to suggest someone else is wrong), reinforced by the social contract of mutual confirmation, often causes people to withhold valuable insights and opinions from us. If we start by making clear our own uncertainty, our audience is more likely to understand that any discussion that follows will not involve right versus wrong, maximizing our truthseeking exchanges with those outside our chartered group.

Second, lead with assent. For example, listen for the things you agree with, state those and be specific, and then follow with “and” instead of “but.” If there is one thing we have learned thus far, it is that we like having our ideas affirmed. If we want to engage someone with whom we have some disagreement (inside or outside our group), they will be more open and less defensive if we start with the areas of agreement, and there surely will be some. It is rare that we disagree with everything that someone has to say. By putting into practice the strategies that promote universalism, actively looking for the ideas that we agree with, we will more naturally engage people in the process of learning with us. We will also be more open-minded to what others have to say, enhancing our ability to calibrate our own beliefs.

When we lead with assent, our listeners will be more open to any dissent that might follow. In addition, when the new information is presented as supplementing rather than negating what has come before, our listeners will be much more open to what we have to say. The simplest rhetorical touches can make a difference. If someone expresses a belief or prediction that doesn’t sound well calibrated and we have relevant information, try to say and, as in, “I agree with you that [insert specific concepts and ideas we agree with], AND . . .” After “and,” add the additional information. In the same exchange, if we said, “I agree with you that [insert specific concepts and ideas you agree with], BUT . . . ,” that challenge puts people on the defensive. “And” is an offer to contribute. “But” is a denial and repudiation of what came before.

We can think of this broadly as an attempt to avoid the language of “no.” In the performance art of improvisation, the first rule is that when someone starts a scene, you should respond with “yes, and . . .” “Yes” means you are accepting the construct of the situation. “And” means you are adding to it. That’s an excellent guideline in any situation in which you want to encourage exploratory thought. The important thing is to try to find areas of agreement to maintain the spirit of partnership in seeking the truth. In expressing potentially contradictory or dissenting information, our language ideally minimizes the element of disagreement.

Third, ask for a temporary agreement to engage in truthseeking. If someone is off-loading emotion to us, we can ask them if they are just looking to vent or if they are looking for advice. If they aren’t looking for advice, that’s fine. The rules of engagement have been made clear. Sometimes, people just want to vent. I certainly do. It’s in our nature. We want to be supportive of the people around us, and that includes comforting them when they just need some understanding and sympathy. But sometimes they’ll say they are looking for advice, and that is potentially an agreement to opt in to some truthseeking. (Even then, tread lightly because people may say they want advice when what they really want is to be affirmed.)

This type of temporary agreement is really just a reverse version of the kind of temporary opting out that we did in my poker group when someone just had to vent about an especially intense, still-raw loss. Flipping that on its head, it doesn’t have to be offensive to ask, “Do you want to just let it all out, or are you thinking of what to do about it next?”

Finally, focus on the future. As I said at the beginning of this book, we are generally pretty good at identifying the positive goals we are striving for; our problem is in the execution of the decisions along the way to reaching those goals. People dislike engaging with their poor execution. That requires taking responsibility for what is often a bad outcome, which, as David Letterman found out, will shut down the conversation. Rather than rehashing what has already happened, try instead to engage about what the person might do so that things will turn out better going forward. Whether it’s our kids, other family members, friends, relationship partners, coworkers, or even ourselves, we share the common trait of generally being more rational about the future than the past. It’s harder to get defensive about something that hasn’t happened yet.

Imagine if David Letterman had said, “It’s too bad you have all these kooky people creating all that drama in your life. Have you thought about how you might get rid of all this drama in the future?” If Lauren Conrad had said something “dramatic,” like “I’ve got so many problems I can’t even think about the future,” or “I’m stuck with these people so there’s nothing I can do about it,” that obviously would be a good time to end the discussion. But the more likely result is that she would have engaged. And that focus on the future could get her to circle back to figure out why all the drama occurred; she wouldn’t be able to sensibly answer the question about the future without examining the past. When we validate the other person’s experience of the past and refocus on exploration of the future, they can get to their past decisions on their own.

This is a good approach to communicating with our children, who, with their developing egos, don’t necessarily need a red pill shoved down their throats. A child isn’t equipped to consent to the challenges of truthseeking exchanges. But they can be nudged. I know, in The Matrix, Morpheus took Neo to visit the Oracle and, while waiting in the lobby, he saw children bending spoons with their minds and engaging in other precocious red-pill behavior. But real-life kids are sensitive to feeling judged. And no real-life parent wants a kid with the ability to mentally send cutlery flying across the room.

My son was expert at explaining away bad test scores as the teacher’s fault. I had to be careful not to Letterman him. Instead, I would tell him, “It must be hard to have a teacher like that. Do you think there’s anything you can do to improve your grade in the future?” That at once provided validation and led to productive discussions about subjects like developing strategies for preparing for future tests and setting up meetings with the teacher to figure out what the teacher was looking for in assignments. Meeting with the teacher also created a good impression that would likely be reflected in future grades. Ultimately, even with our own kids’ decisions, rehashing outcomes can create defensiveness. The future, on the other hand, can always be better if we can get them to focus on things in their control.

These methods of communicating with people outside our truthseeking group focus on future goals and future actions. When they work, they take the other person on a short trip into the future, beyond the frustrations of the present and toward ways to improve the things they can control. Accountability to a truthseeking group is also, in some ways, a time-travel portal. Because we know we will have to answer to the group, we start thinking in advance of how that will go. Anticipating and rehearsing those rational discussions can improve our initial decision-making and analysis, at a time when we might not otherwise be so rational.

That leads to the final decision strategy of this book: ways to use time-travel techniques for better decision-making. By recruiting past and future versions of yourself, you can become your own buddy.