Cognitive biases come in several flavors. These mental “bugs” can affect your decision making, memory, perception, rationality, and so on. There are a lot of them. Wikipedia lists some ninety common cognitive biases. I’ve met some folks who surely had a few more than that.
Here are some of my personal favorites:
Anchoring: Just seeing a number will affect how you then predict or decide some quantity. For instance, if I keep mentioning something about having 100 books for sale, I’ve primed you with that number. If I now offer you a book for $85, you’ll tend to anchor on the 100, and the 85 sounds like a bargain.
Fundamental Attribution Error: We tend to ascribe other people’s behavior to their personality instead of looking at the situation and the context in which the behavior occurs. We might excuse our own actions more easily (“I was tired; I felt a cold coming on.”). But people who are normal in all respects can be driven to extraordinary actions, including theft, murder, and mayhem, especially in times of war or personal crisis. It doesn’t even take such extreme conditions; as we saw earlier, context is everything. Remember that behavior is often more a reaction to a context than a product of fundamental personality traits.
Self-Serving Bias: This is the tendency to believe that if the project is a success, I’m responsible; if it tanked, then I’m not. This behavior is probably a protective mechanism, but remember that you’re part of the system—whether it turns out well or not.
Need for Closure: We are not comfortable with doubt and uncertainty—so much so that we’ll go to great lengths to resolve open issues, remove uncertainty, and reach closure. But uncertainty can be a good thing: it leaves your choices open. Forcing premature closure, as in Big Design Up Front (BDUF),[78] cuts off your options and leaves you vulnerable to errors. Artificially declaring a decision, such as the end date of a project, doesn’t remove the inherent uncertainty; it just masks it.
Confirmation Bias: We all look for choice facts to fit our own preconceptions and pet theories. You could argue that this entire book (and most books) is one giant example of the author’s confirmation bias.
Exposure Effect: We tend to prefer things just because they are familiar. This includes tools, techniques, or methods that aren’t working well anymore or that are even actively causing harm.
Hawthorne Effect: Researchers have noticed that people have a tendency to change their behavior when they know they are being studied. You’ll see this when you introduce a new practice or a new tool on a team. At first, while everyone is watching—and everyone knows they are being watched—results look great. Discipline is high, and the excitement of something new fuels the effort. But then the novelty wears off, the spotlight moves away, and everyone slides inexorably back to previous behaviors.
False Memory: It’s actually pretty easy for your brain to confuse imagined events with real memories. We’re susceptible to the power of suggestion; as we saw earlier, memory isn’t written to some static store in the brain. Instead, it’s an active process—so much so that every read is a write. Your memories are constantly rewritten in light of your current context: age, experience, worldview, focus, and so on. That incident at your sixth birthday party? It probably didn’t happen that way, and it may not have happened at all.
Symbolic Reduction: As we saw earlier, L-mode is anxious to provide a quick symbol to represent a complex object or system, which loses at least the nuances and sometimes even the truth of the matter.
Nominal Fallacy: A kind of symbolic reduction problem, this is the idea that labeling a thing means you can explain or understand it. But a label is just that: a label. Naming alone does not offer any useful understanding. “Oh, he’s ADHD” doesn’t enhance understanding any more than “She’s a Republican” or “They’re from Elbonia.”
And all this is just the beginning. Our irrational nature could fill several books.[79]
It’s tough to make predictions, especially about the future.
➤ Yogi Berra, Philosopher
Symbolic reduction is an especially pernicious problem because it’s so deeply ingrained in our usual analytical, programmatic thinking. Indeed, the only way the brain can keep up with the complexity of reality is to reduce large, complex systems to simple, easily manipulated symbols. This is an essential mechanism in the brain and a very useful one in computer programming and knowledge-based work. But if you take it for granted, you fall into the symbolic reduction fallacy.
We’ve seen examples of the symbolic reduction fallacy before; for instance, when you’re trying to draw a human hand, the L-mode reduces the complexity of light, shadow, and texture to “five lines and a stick.” That reduction can be thought of as taking complex reality and treating it as if it were composed of very basic, archetypal elements: platonic solids.
Named for Plato, these ideal forms supply a sort of universal, commonly understood set of building blocks.
Think of a kid’s building block set with cubes, blocks, cones, archways, and columns. From these basic shapes, you can construct a wide array of larger structures. Plato’s ideal forms work in a similar fashion; they are simplified building blocks of reality. But this approach of reducing reality into an idealized form leaves a hole, called the platonic fold. An awful lot can hide in this hole, and we get blindsided by these kinds of unexpected events.
The future hides in the platonic fold.
The concept of the platonic fold, described in The Black Swan: The Impact of the Highly Improbable [Tal07], emphasizes that humans are really bad at trying to extrapolate future events from previous events. We assume that events form a more or less stable, linear progression, with easily defined cause and effect.
They don’t. That’s why we fail to predict the future in so many cases. In fact, because of our blind spots—including the platonic fold—it turns out that all consequential events in history come from the wholly unexpected.
That’s where the book’s titular “black swan” comes from. For many years, it was assumed that swans could only be white. Because no one had ever seen a black swan, its existence was thought to be impossible by the scientific community—until a black swan showed up.
As a group, we tend to miss important developments because we’re focused on the wrong thing or are asking the wrong questions. For example, I was cleaning my office last year when I stumbled upon a stack of magazines dating from the early to mid-1990s (I also found a 14.4 modem in the middle of a tangle of active cables, but that’s another story).
Unexpected events change the game.
The magazines made a convenient time capsule. Cover after cover fanned the ferocious debate over the most important issue of the day: who would win the desktop wars? Would the interface to conquer the desktop be based on Open Look or on Motif?
It was the wrong question, as it turned out, and Windows—which wasn’t even considered one of the contenders—took over. Then came the middleware war: which would win, RMI or CORBA?
It was the wrong question again, because the growth of the Web largely made the issue moot. The Web was a classic black swan, an unanticipated development that changed the rules of the game completely. And on it went: pages and pages of analysis and speculation, forecasting and fretting, almost always over the wrong question. Our biases make it nearly impossible to predict the future and very difficult to navigate in the present.
As you can see, just because you “think so” doesn’t make it right. Recognizing and overcoming your own cognitive bias is surely easier said than done. But here are a few suggestions that might help.
“Astronomically unlikely coincidences happen daily.”[80] Recently, we’ve witnessed all manner of devastation from 500-year floods to 100-year storms, but geologically speaking that’s just a drop in the bucket—these events are not that rare. They may freak people out because they haven’t happened within their memory or the memory of their parents (or even grandparents). But that doesn’t mean they can’t happen, and it doesn’t prevent them from happening three times in a row.
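The arithmetic behind that last claim (often attributed to the mathematician J. E. Littlewood) is easy to sketch. The one-event-per-second rate and eight waking hours a day are the usual illustrative assumptions, not figures from the source:

```python
# Littlewood's "miracle" arithmetic (illustrative assumptions):
# if you notice roughly one event per second during 8 waking hours a day,
# how many days until you've experienced a million events?
events_per_second = 1
waking_seconds_per_day = 8 * 60 * 60  # 28,800 noticed events per day
days = 1_000_000 / (events_per_second * waking_seconds_per_day)
print(f"{days:.0f} days")  # about 35 days, roughly a month
```

So a one-in-a-million "miracle" comes due about every month, simply because you experience so many events.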
In 2004, your odds of being killed by lightning in the United States were around 1 in 6,383,844.[81] That sounds like pretty good odds, right? But forty-six people died that year from lightning, despite the six-million-to-one odds. And you had sixteen times greater odds of dying from falling out of bed, although that’s probably not something you’d think of as particularly dangerous. Even though it’s rare, it still happens. On a more positive note, you can expect to experience a one-in-a-million miracle about once a month.[82]
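The quoted odds fall out of simple division, as a quick sanity check shows. The 2004 US population figure below is my approximation, not a number from the source:

```python
# Rough check of the lightning figure: odds of dying ≈ population / deaths.
# The population value is an approximation (assumed, not from the source).
us_population_2004 = 293_000_000
lightning_deaths_2004 = 46

odds = us_population_2004 / lightning_deaths_2004
print(f"Roughly 1 in {odds:,.0f}")  # close to the quoted 1 in 6,383,844
```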
The black swan cautions us not to discount unobserved or rare phenomena as impossible.
Truly random events form a mix of values that are clumped together as well as lone values; homogeneity and randomness are different things. It’s perfectly valid in a completely random sample to have three Category Five hurricanes in a row, for instance.
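A quick simulation makes the clumping point concrete. Assuming a hypothetical 10 percent chance of a Category Five season in any given year (an illustrative number, not real hurricane data), streaks of three in a row still turn up in a surprising share of 100-year windows:

```python
import random

def has_run(seq, value, length):
    """Return True if seq contains `length` consecutive occurrences of value."""
    count = 0
    for item in seq:
        count = count + 1 if item == value else 0
        if count >= length:
            return True
    return False

random.seed(42)  # fixed seed so the simulation is repeatable
P_CAT5 = 0.10    # hypothetical per-year probability of a Category Five season
trials = 10_000

# Count how many simulated 100-year records contain 3 such seasons in a row.
hits = sum(
    has_run([random.random() < P_CAT5 for _ in range(100)], True, 3)
    for _ in range(trials)
)
print(f"100-year windows containing a 3-in-a-row streak: {hits / trials:.0%}")
```

Even with independent draws and a modest per-year probability, roughly one window in eleven contains such a streak; randomness clumps.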
Recipe 18 | Watch the outliers: “rarely” doesn’t mean “never.” |
Look into the platonic fold, and think about what you might be missing. Any one of those minor elements that you overlooked can be the one that changes history.
Take time to examine the “crazy” outliers or those “impossible,” astronomically unlikely events. If any of those actually did happen, what would it mean to you? What would you do differently because of it? What concerns wouldn’t matter anymore, and which would become prominent? Remember, these are still unlikely events, so don’t start stocking up on canned food or Hazmat suits just yet. But never say never.
Our need for closure means we are driven to try to eliminate uncertainty—ready or not. But fixing on a decision prematurely reduces your options, perhaps to the point of eliminating the successful choice.
On a software project, as with an exploratory or inventive project in any discipline, it’s a given that you’ll learn a little bit more every day. You’ll learn more about the users, the project itself, your team, and the technology, as shown in the following figure.
That means you’ll be at your peak of intelligence at the very end of the project and at your most ignorant at the very beginning. So, do you want to make decisions early on? No; you want to defer closure for as long as possible in order to make a better decision later. But that means critical issues may stay unsettled for a long time, which makes many people acutely uncomfortable.
Resist the pressure. Know that you will reach a decision, and the matter will be settled, just not today.
Recipe 19 | Be comfortable with uncertainty. |
Agile software development embraces the idea of working with uncertainty. Early on, you don’t know what the project end date will really be. You’re not 100 percent certain which features will be present in the next iteration. You don’t know how many iterations there will be. And that’s perfectly OK: that’s the sort of uncertainty you want to be comfortable with. You’ll find answers as you go along, and by the end, everything will have been answered.
You can, of course, take some concrete steps to try to reduce uncertainty. You might talk the matter over with peers, google around for more information, or build a prototype—that sort of thing. But although these steps might help a little or a lot, they’re not a cure. There will always be elements that are just plain uncertain, and that’s not a bad thing. Chip away at it, but don’t be in a rush to nail down details if it’s not ready yet. Be comfortable with the fact that you don’t know.
For something you don’t know but that has to be known by others, such as a go-live date, you can express it as a “target” date along with an indication of your confidence in the estimate. That is, you might report a target date such as Oct. 1, with a 37 percent chance of making that date. But be careful when reporting a date with an 80 percent probability. Folks may tend to hear that as “nearly certain” without appreciating there’s a 20 percent chance it won’t happen. At least you’re being up front about the inherent uncertainty.
Guess with explicit probabilities.
But realize that it can be really, really hard for other folks in the organization to be comfortable with these ideas. They are programmed to seek closure at all costs and will try to do so at every turn. Educate them as best you can, but be prepared for resistance.
Finally, remember that you don’t remember very well. Memory is unreliable, and old memories will change over time, which just reassures you that your misconceptions and prejudices are valid. Don’t rely exclusively on your memory. The Chinese proverb is correct: the palest ink is better than the best memory.
Recipe 20 | Trust ink over memory; every mental read is a write. |
Instead, augment your memory with some kind of reality check. Whether it’s notes that you keep or a conversation with someone else with their own memories, you need something to help keep your memories from drifting too far from reality.
Next Actions