And there we have the crux of many matters of human decision making and behavior—perceived consequences or the lack thereof.
We do what we do because we expect the things we want to occur as the result. Likewise, we don’t do what we don’t do because we expect that if we did, the results would be, “umm, well … unpleasant.”
Expectations about the future, particularly how we imagine we will feel, serve as the cornerstone for deciding whether we drink that third glass of wine, run that mile, or say something provocative to our boss. Now, in each of these workaday cases, we can come very close to a precise prediction about what will happen—we will wake up with a headache, we will feel more energized, or we will suffer grating wrath.
Yet, what do we do when we can’t be so sure of the consequences? How do we choose our actions when the information available in the present fails to be enough to know what to expect? (Like say when it comes to predicting whether the value of a stock, bond, or commodity will go up.)
Particularly when it comes to markets, we turn to the mechanisms of statistics and probability—those certain kinds of numbers that make us feel we have measured the future, but in reality only deceive us into thinking we know what we need to do. Reality points to a very big gap between where the numbers leave off and exceptional performance begins! Traditional trading education repeatedly advises students to “analyze what confluence of circumstances you are looking for, know what outcomes they have led to in the past and when they re-occur, take the trade.” Likewise, if you don’t see the same situation, do nothing.
Peter Bernstein wrote in his market classic, Against the Gods: The Remarkable Story of Risk, that mankind’s modern times began when we learned to understand, measure, and weigh the consequences of risk. Normally (if there is such a thing), markets—bonds, stocks, commodities—don’t all trade in the same direction at the same time. Stocks go up while bonds go down. Markets that appear to trade with no relationship to one another—say, AAPL and Spanish government debt—might show a correlation of .1. Take the stocks of big U.S.-based technology companies, and it wouldn’t be uncommon to find correlations of .7 or .8. Offsetting risk in one market simply required being active in another, relatively uncorrelated one.
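For the MBAs who want to see the mechanics, here is a minimal Python sketch of what those correlation figures mean. The data is synthetic (two hypothetical "tech stocks" built from a shared market factor, plus one unrelated series), so the exact numbers are illustrative, not real market statistics.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic daily returns: two "tech stocks" share a common market
# factor (so they co-move), while a third series is unrelated.
market = rng.normal(0, 0.01, 250)
tech_a = market + rng.normal(0, 0.005, 250)
tech_b = market + rng.normal(0, 0.005, 250)
unrelated = rng.normal(0, 0.01, 250)

def correlation(x, y):
    """Pearson correlation coefficient of two return series."""
    return float(np.corrcoef(x, y)[0, 1])

print(round(correlation(tech_a, tech_b), 2))    # high, around .8
print(round(correlation(tech_a, unrelated), 2)) # near zero
```

The shared factor is what drives the .7–.8 readings; drop it and the correlation collapses toward the .1 of unrelated markets—which is exactly the relationship diversification strategies count on holding.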
The MBAs here of course understand this; the psychology gang, I realize that you may or may not.
Yet, no less than the CFO of Goldman Sachs himself called the violent market swings of August 2007 a 25-standard-deviation event. According to the discipline of probability, what we saw with our own eyes could not happen—not in our lifetimes or the lifetimes of all of our ancestors and all of our children, grandchildren, and their great-grandchildren. Then a mere 13 months later, markets stunned the entire planet when every single one went simultaneously in one direction—another thing that, statistically speaking, could not happen.
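It is worth pausing to see just how absurd that claim is on its own terms. Assuming the bell curve that the probability gospel preaches, the chance of a single 25-standard-deviation move can be computed directly; the sketch below does so with the standard normal tail formula (this is arithmetic about the model, not a claim about real markets).

```python
import math

def tail_probability(sigmas):
    """P(a standard normal draw exceeds `sigmas` standard deviations),
    computed from the complementary error function."""
    return 0.5 * math.erfc(sigmas / math.sqrt(2))

p = tail_probability(25)
print(p)  # roughly 3e-138: effectively "cannot happen"
```

At one observation per trading day, the expected wait for such a move would dwarf the age of the universe many times over—and yet August 2007 delivered several of them in a row. Either the markets broke the laws of probability, or the bell curve was the wrong model all along.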
Theoretically, given our 21st-century capacity to capture every minute detail of a pattern (and to react to it within milliseconds), the earlier excruciatingly painful whipsaws of 1929 or 1987 wouldn’t re-occur. Yet, in the 21st century, in each successively larger billion-dollar bonfire, from 1997 to 2001 to 2008, the world elite of Bernstein’s measurers had indeed measured not only once, not only twice, but a hundred times.
Today in the spring of 2011, the gut-wrenching days of 2008 may be fading from our memories, but one thing is for sure. Despite widespread blame of alleged greedy bankers, it makes no sense to believe that they expected to lose money. It makes no sense that dedicated lifelong employees who invested all of their retirement accounts in the stock of their own companies, BSC and LEH in particular, expected, probabilistically or otherwise, their companies, monies, and lives to go up in the smoke of billion-dollar bonfires. Indictments of greed are overrated as useful explanations or contributions to solutions.
In fact, many were looking at numbers that had been analyzed six ways from Sunday and that still showed money coming in the door—practically right up until the moment that it stopped. A few outsiders “interpreted” the numbers a different way and made literally billions of dollars. Reportedly, Matthew Tannin of Bear Stearns had a sense—not a number—that caused him to alert his boss to the possibility that the numbers weren’t telling the whole truth—numbers that landed them both in court.
Nevertheless, PhDs in fields like physics, game theory, and theoretical math at esteemed firms like Renaissance spend every day dreaming up new ways to slice and dice the latest probabilities hidden in whatever the current mood of the markets seems to be. But you’ve got to wonder: if it truly was a matter of uncovering the market data equivalent of E = mc², then wouldn’t they have found it by now? Or, doesn’t the fact that they keep looking, in and of itself, prove that at best any probabilistic viewpoint is only temporarily relevant? And if only temporarily relevant, how do they detect when the relevance ends?
Logically, if you have a probability that you know will apply for only a limited period of time and by definition that probability tells you that you have some significant chance of being wrong, even while it still applies, how much do you really know?
So, my question to you is: “Just because we can, does it mean we should?” Or, even more to the point, just because we can dazzle one another with complex mathematical feats, does it mean we have been more “rational” because we have been numerical?
With the few exceptions like Nassim Taleb and his Black Swans or Benoit Mandelbrot, almost everyone who purports to be an expert on predicting markets preaches the probability gospel.
Mandelbrot discovered fractal geometry and showed convincingly in his books, The (Mis)Behavior of Markets and Fractals and Scaling in Finance, that the uneven reality of markets matches up much better with patterns that you find in cauliflower or broccoli than with bell curves.
But given the relative paucity of applications of his work, it would appear his ideas ran into the resistance of what, time and time again, academic experimenting has demonstrated—that we greatly prefer to know exactly what our odds are. We feel more confident and less anxious when we know, or think we know, our exact chances. This preference is known as “ambiguity aversion”: decision theorists have repeatedly shown that game players greatly prefer to draw from an urn holding 50 red and 50 black balls over an urn holding 100 balls in an unknown mix of colors. Daniel Ellsberg famously demonstrated it, and others, including John Maynard Keynes, are reported to have anticipated the same result.
Renee, somewhat to Michael’s surprise, raised her hand. “Ms. Shull, aren’t you saying that managing money ends up being a lot like playing poker? You have got the cards and their odds, but that isn’t really the game?”
Yes indeed, Renee, poker offers a great example of what I’m talking about. (Or maybe I should say winning at poker does!) For instance, listen to inexperienced poker players talk about the game. They will wax on about it being only about the odds of the cards. No matter what you say, they will just talk about the numbers. But ask the traders I know who also are skilled poker players and every one of them will tell you that winning is NOT about the stats. The novices want to believe that, but if you watch high-stakes poker, where by definition the players have a proven ability to win, you often see sunglasses and baseball caps. What does obscuring your vision have to do with a purely numbers game? Stories abound of another oddity in poker—playing without ever looking at one’s cards! If it is only about the numbers, how could anyone ever do that?
Indeed, poker provides numerous market decision parallels.
It is impossible to know the future.
After all, being the future, it has not happened yet.
“Live in the moment” may be the mantra of many a hedonist and Eastern philosopher alike, but in the ever-quickening pace of a world on a non-stop grid, most lives roll from one decision about the future to the next. In the moment or in the macro, starting about the time our parents let us stay home alone for the first time, we begin very clearly deciding what is and what isn’t worth it and what will be the likely outcome of a choice. Yes, we get a little lackadaisical about it during the hormone rush of high school—even so, whether we should or shouldn’t play soccer, join the drama club, date that wild-child guy, try to get into Harvard, sleep through Art History 101 simply because we can—it all boils down to what we perceive and therefore believe the future will bring if we choose A, B, or C. Our imaginations paint a picture of what life will feel like if we do this or if we do that.
As time passes, what we imagine may turn out to confirm that “there is nothing new under the sun,” and then again, it might not. In fact there may indeed be a better than 51% probability that the future will look like the past, but what about the other 49%? Unless time travel becomes a reality, we have literally no way of knowing the “anythings” that can happen in the next moment, month, or month after that.
Things that have never happened before happen all the time. In this day and age of nanosecond global communication, at a very minimum, everything happens faster and more people know about it instantaneously. This means that reactions occur more quickly and just the dimension of speed creates phenomena never seen before. To a great degree, that technological change alone added fuel to the trillion dollar travesties of the recent past.
Data crunching had indeed lured bankers to get more creative. Lots of money was floating around and it seemed to need to go somewhere. So almost all of a sudden, it became a good idea to lend money with no documentation and to give mortgages to people without bothering about down payments. The numbers they used to predict what would happen indicated that while, yes, there would be defaults, the number or character of those defaults wouldn’t create a problem. The amount of number crunching fueling these creative statistical analyses probably surpassed the totality of mortgage number crunching for the previous 50 years.
But alas, as we all know all too well—the numbers lied.
The same applies to life’s unspeakable tragedies and everyday annoyances. On Monday evening September 10, 2001, in New York, the suffocating humidity of afternoon thunderstorms or Donald Rumsfeld on TV talking about the military budget gave no clue about the terrorist warfare about to ensue.
In the less catastrophic mundane circumstances of most days, we break our fingers, dent our cars, and even catch colds—all when we aren’t expecting it. We thought we were on our way to soccer, going to win the game, and then study for the GMAT. Instead, we are at the body shop alone with a splint on our finger and pain pills in our pocket, while our buddies brood about their loss over a beer.
Of course, insurance companies have a rather detailed idea of how many broken fingers, dented fenders, and cases of the flu will occur across a large group of people. Ask an insurance executive, however, exactly who is going to dent their car, and he or she will conclude that mental health risk should be added to your profile!
In March 2011, an earthquake led to a tsunami that led to a partial nuclear and market meltdown in Japan. Systematic or purely numbers-based systems got whipsawed into a money-losing month. Taleb categorizes such events in the lake of the Black Swans, but do we really have to live with that? Do we just have to expect that we can never know more and predict more accurately? I submit that we don’t—if we are brave enough to look beyond the numbers.
The truth is: Probabilities tell us something—just not everything.
And as anyone who is a stickler about honesty will say, knowingly omitting crucial information, even without directly lying, amounts to telling a lie. Omissions can and do easily mislead.
Michael had to bristle a bit. What about all the successful computerized strategies running in the markets today? He himself had been approached to join a firm that wanted his help in reverse-engineering market data to deduce the probabilities of oncoming human perception in a variety of market scenarios.
But before he could raise his hand, it was almost like Ms. Shull read his mind and responded.
Believing in the ultimate power of numbers, some money managers market the idea that they have programmed neural networks into their market-stalking computers. Sure, it sounds sexy and sophisticated; but at a minimum, it overstates reality and, at a maximum, it can’t be true.
Despite Paul Allen’s newly announced atlas of the brain, we would be hard pressed to find a neuroscientist on the planet who can tell you precisely how a neural network operates or, more importantly, what chemical, electrical, and other processes specifically give rise to thought. To the layperson, it sounds like we know this, because we get news flash after news flash regarding this or that part of the brain being responsible for this or that. In reality, that knowledge typically comes from associating areas of the brain with tasks through an fMRI machine, which can’t technically prove causation and can’t even be regarded as a finely calibrated tool!
My point? If we don’t actually know how something works, then how can we presume to duplicate it?
Even more importantly, however, recent research reveals the whole idea of a neural network as the model for the brain to be grossly incorrect. Known as the “neuron doctrine” in academia, the idea actually is an artifact of early research, like the original geocentric idea of the solar system. In fact, neurons make up only about 15% of the brain. The rest has until very recently been relegated to the term “junk.”
Indulge me in a bit of neuroscience here as I think it will help you be able to absorb the gravity of the need to rethink our dependence on numbers—and, in fact, rethink thinking altogether.
Neurons and synapses work with an electrical charge passed through a liquid chemical substrate—what in this day and age we all know as neurotransmitters. To date, all of our mental capacities have been assumed to emerge from this electro-chemical communication. The other class of brain tissue, known as glial cells, was thought to simply clean up any extra fluid or voltage.
The key word there is “were.” Now we know that the cells formerly assigned to janitor duty communicate without electricity, in a model more like a broadcast than a node-to-node network. It gets even better, or at least even more revolutionary. These cells not only sense the electricity coursing through the neurons and synapses but have the power to change, modify, or even control it! R. Douglas Fields recently wrote in his book, The Other Brain, “Glia are the key to understanding this new view of the brain.”
Work done on Einstein’s brain helps prove Fields’s point. A careful counting of Einstein’s neurons versus 11 other male brains showed essentially the same number of neurons in all 12. The difference in Einstein’s brain tissue showed up in the glia. The 11 comparative samples, from men in their middle to older ages, had one glial cell for every two neurons. In Einstein’s brain the ratio was 1-to-1, or twice as many “neural glue” cells. According to Fields, the biggest difference in the neuron-to-glia ratio existed in areas known for abstract concepts and complex thinking. Glia clearly not only aren’t junk, but they may be the greater arbiter of a type of intelligence we all recognize.
In short, numbers in fact are relatively easy. They are clean and clear and, as we have agreed, make us feel secure. But time and time again, we have manipulated them and they have manipulated us into false senses of security. Maybe we just have to step up and admit we shouldn’t trust them anywhere nearly as much as we do.
Maybe the other quant buzz word of 2011, “machine learning,” holds the missing mathematical clue. After all, IBM’s Watson beat two human Jeopardy champions, relying purely on the 0s and 1s that underlie all computing firepower. Isn’t that proof enough that numbers alone in the end will still win? No, indeed it is not. First of all, there was a known answer to every question Watson answered. Watson had “read” all of Wikipedia and enormous amounts of historical and current texts. “He” had analyzed plays and movies. Yes, this anthropomorphized computer performed brilliantly, but every question already had a known answer.
More importantly, what tends to escape everyone’s notice is how IBM’s Watson emulated the actual human decision-making process. Can anyone think what I mean by that statement? I’ll give you a clue: Watson calculated a number for it—his confidence—before he decided to answer.
Confidence, it turns out, is the “it.” The government releases a monthly, numerically interpreted version of it, but as an entity, confidence proves to be elusive, ephemeral, and instantaneously changeable. In any team sport, or for any athlete in general, it can reverse and reverse again within moments. The play goes well and the running back scores, and “it” appears. They run the kick back for a responding touchdown, and “it” disappears.
Yet Watson, an inorganic, electrically driven player depended on “it,” or the simulation of “it,” to win at Jeopardy. If his calculated confidence level, the “it”, wasn’t high enough, he didn’t ring the Jeopardy bell. Is that machine learning, or are machines emulating humans?
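The decision rule itself is almost embarrassingly simple to sketch. The snippet below is a hypothetical illustration of the buzz-or-stay-silent logic described above, not IBM's actual implementation; the threshold value and function names are my own assumptions.

```python
# Hypothetical sketch of a Watson-style answer rule: score the
# candidate answers, then "ring the bell" only if the best one's
# confidence clears a threshold. Threshold chosen for illustration.
BUZZ_THRESHOLD = 0.65

def decide_to_buzz(candidates):
    """candidates: dict mapping answer -> confidence score in [0, 1].
    Returns the answer to give, or None to stay silent."""
    if not candidates:
        return None
    best, confidence = max(candidates.items(), key=lambda kv: kv[1])
    return best if confidence >= BUZZ_THRESHOLD else None

print(decide_to_buzz({"Toronto": 0.32, "Chicago": 0.28}))  # None: too unsure
print(decide_to_buzz({"Jupiter": 0.91, "Saturn": 0.05}))   # "Jupiter"
```

Strip away the natural-language processing and what remains is a machine that declines to act when it doesn’t “feel” sure enough—which is precisely the human behavior the lecture is pointing at.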
I submit that if a computer figures out how confident it “feels” about something, then indeed the discovery says more about us than it does about our ability to give a computer enough information to deduce answers to known questions.
The bottom line? We all know that we can get the numbers to say just about anything we want. What comes before (the context) and what we infer afterward from our models make the difference in what we expect.
Wouldn’t we therefore be able to extract a powerful advantage if we spent more time logically analyzing what the numbers cannot tell us?