Chapter Two

Silicon Valley Is a State of Mind

“Data! Data! Data!” he cried impatiently. “I can’t make bricks without clay!”

—Arthur Conan Doyle, The Adventure of the Copper Beeches

The library will endure; it is the universe. As for us, everything has not been written; we are not turning into phantoms. We walk the corridors, searching the shelves and rearranging them, looking for lines of meaning amid leagues of cacophony and incoherence, reading the history of the past and our future, collecting our thoughts and collecting the thoughts of others, and every so often glimpsing mirrors, in which we may recognize creatures of the information.

—James Gleick, The Information (after Borges’s “The Library of Babel”)

In a 2013 conference call with Wall Street analysts, Mark Zuckerberg told investors that, along with prioritizing increased connectivity across the globe and emphasizing a knowledge economy, Facebook was committed to a new vision called “understanding the world.” He described what this “understanding” would soon look like: “Every day, people post billions of pieces of content and connections into the graph [the social graph, Facebook’s map of people, content, and the connections between them], and in doing this, they’re helping to build the clearest model of everything there is to know in the world.”

This is only one of many grandiose claims coming out of Silicon Valley right now. Google’s mission statement is famously to “organize the world’s information and make it universally accessible and useful.” In a 2013 interview in Fortune, Jeremiah Robison, the vice president of software at Jawbone, explained that the goal with their fitness tracking device Jawbone UP was “to understand the science of behavior change.” Jack Dorsey, of Twitter and Square Inc. fame, told a room full of entrepreneurs that start-ups are following in the footsteps of Gandhi and the Founding Fathers. In a 2014 press release, Uber’s CEO and cofounder Travis Kalanick announced that with Uber’s “growth and expansion, the company has evolved from being a scrappy Silicon Valley tech start-up to being a way of life for millions of people in cities around the world.” With this work, “we fight the good fight.”

In contrast to the persistent narrative of American economic malaise and political gridlock, the Silicon Valley vision provides an infusion of hope and optimism. As a result, the reigning ethos of Silicon Valley has grown popular in American cultural life, assuming a greater role as our attachment to our technological devices has grown and more and more of our daily life happens on the Internet. Like any community, Silicon Valley has a strong shared culture and outlook, one that it credits for its success. Its well-worn mantras have now seeped into mainstream discourse: everything from the “sharing economy” to “leapfrogging” to “fail forward” to the “lean start-up.” Despite the different phrases, the ideology remains the same. The promise is that technology will solve it—whatever it is. And the solution is sure to be revolutionary. No one in Silicon Valley ever launches a start-up by saying, “We will make small, incremental changes in the field of X based on all of the small, incremental changes that have been made over the last five decades.” Everything is a disruption: a clean break from the past that leans far into the future.

The culture has upended the way we educate our children, the way we do business, and the way we conceive of ourselves as citizens. In the process, Silicon Valley either belittles humanities-based education or renders it irrelevant to the work of the twenty-first century. Influential venture capitalist Marc Andreessen reflected this way of thinking when he argued in his 2014 blog post “Culture Clash” that the liberal arts are behind the cultural curve. “For people who aren’t deep into math and science and technology,” he wrote, “it is going to get far harder to understand the world going forward.” PayPal founder and investor Peter Thiel went so far as to establish the Thiel Fellowship, which pays young entrepreneurs to forgo university studies so that they might fast-track their start-up projects.

So what is valued in this state of mind? Let’s critique some of the main assumptions at play in this ideology so we can better understand the way it is changing our notions of an intellectual life. In a “Silicon Valley” state of mind, sensemaking has never been more lacking or more urgently needed.

Assumptions behind Disruptive Innovation

There is a lot of talk about “disruption” in Silicon Valley. Successful entrepreneurs upend traditional ways of doing things; they “disrupt” a market, rather than simply selling a product. When we unpack the assumptions implicit in this “disruption,” we gain a keen insight into the way that Silicon Valley thinks about innovation and progress. Disrupting an industry, in Silicon Valley parlance, suggests a clean break between “before” and “after.” This reflects scientific thinking, wherein a hypothesis is presented and regarded as operational and “true” until it is falsified or superseded. As long as that hypothesis holds up to scrutiny, it takes precedence over all previous work. That status is, of course, temporary; a new hypothesis will eventually take its place. It’s a discipline that’s always moving forward.

This way of thinking stands in sharp contrast to the intellectual tradition of the humanities, which does not suggest clean breaks in knowledge, and does not regard past experience as outmoded or outdated. Instead, it focuses on the ways in which dominant powers and attitudes shape contemporary culture, and the possibility of recovery of knowledge and understanding that have been obscured (either intentionally or not) by the passage of time or the distance of space. As T. S. Eliot wrote in his 1940 poem “East Coker”: “There is only the fight to recover what has been lost / And found and lost again and again.”

But Silicon Valley culture feels that the humanities have little relevance to professional life. To disrupt is to reject what has come before. Silicon Valley wants a radical break with accumulated knowledge. Because this “disruption” reflects a widely held belief that innovation requires a fearless willingness to change and a break with the past, it is almost exclusively associated with youth. Silicon Valley celebrates inexperience because it makes it easier to take risks. Mark Zuckerberg summed up a popular attitude when he told attendees at a Stanford event in 2007 that “young people are just smarter.” Echoing this idea, venture capitalist Vinod Khosla told the audience at a 2011 tech event in Bangalore that “people over forty-five basically die in terms of new ideas.” In this environment, rejection of traditional intellectual life is de rigueur. An unnamed analyst told New Yorker journalist George Packer, “If you’re an engineer in Silicon Valley, you have no incentive to read The Economist.”

One way this attitude manifests itself is in an obsession with quantification, which for the youth of Silicon Valley is a stand-in for the knowledge that comes with wisdom and experience. Quantification takes many forms, among them the “quantified self” movement, whose adherents use devices to track and quantify aspects of their behavior. It also reflects a broad trend in American society toward quantification: in health care, in education, in government, in our personal lives. This is now familiar to us through the term big data.

Assumptions behind Big Data

Big data concerns itself with correlation, not causation. It can establish a statistically significant relationship, but it cannot explain why it is so. And increasingly large data sets raise the risk of misleading statistically significant correlations: the more variables a data set contains, the more pairs of them will correlate by chance alone, so it can appear that there are many needles in a very large haystack. Big data offers information without explanations for it. As economist and journalist Tim Harford put it in a 2014 article for the Financial Times, “Big data does not solve the problem that has obsessed statisticians and scientists for centuries: the problem of insight, of inferring what is going on, and figuring out how we might intervene to change a system for the better.”
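To make the haystack problem concrete, here is a minimal sketch in Python; the variable counts are invented for illustration. Test enough purely random variables against a target, and a predictable share of them will pass a conventional significance test by chance alone.

```python
# A sketch of the "needles in a haystack" problem: purely random,
# unrelated variables still produce "statistically significant"
# correlations when enough of them are tested. Numbers are illustrative.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_observations = 100   # e.g., weeks of data
n_variables = 5000     # e.g., candidate search terms

target = rng.normal(size=n_observations)                 # the signal to "predict"
noise = rng.normal(size=(n_variables, n_observations))   # pure random noise

# Count variables whose correlation with the target has p < 0.05.
false_positives = sum(1 for row in noise if pearsonr(row, target)[1] < 0.05)

# At a 5% threshold we expect roughly 250 spurious "hits" (5% of 5,000),
# even though no variable has any real relationship to the target.
print(f"{false_positives} of {n_variables} random variables correlate 'significantly'")
```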

What happens when big data is used instead of traditional research methods, rather than alongside them? The case of Google Flu Trends provides us with a salient example. In 2008, researchers at Google wanted to explore the idea of using search terms to predict widespread outbreaks of illness. By isolating and tracking flu-related queries on Google, the researchers postulated that they could detect a flu outbreak sooner than the surveillance data from the Centers for Disease Control and Prevention could reveal it. Using a technique they dubbed “nowcasting,” the researchers put their theory into action and published the results in Nature. By all accounts, it seemed like a grand success. The Google queries were identifying flu outbreaks two weeks sooner than the data coming from the CDC.

And then Google Flu Trends started to fail. It missed the entire H1N1 pandemic in 2009 and wildly overestimated flu outbreaks in the 2012–13 season. In a two-year period ending in 2013, scientists estimated that Google Flu Trends predictions ran too high in 100 out of 108 weeks. What went wrong? Among other problems, Google’s algorithm was vulnerable to queries that were related to flu season but not to actual flu outbreaks. Searches like “high school basketball” and “chicken soup” set off a flu warning: correlation by chance or by seasonal coincidence, with no real causal relationship to a case of the flu. This is because big data doesn’t care about explaining why. Instead, it reflects an empiricist’s mindset. Big data wants to remove human bias from the equation, embracing inductive pattern-finding and jettisoning theory-driven modes of inquiry. With enough data, the numbers speak for themselves and you don’t need theory. But, as we discovered in the case of Google Flu Trends, deeper analysis is required for correlations to have implications, and to establish causality. Big data cannot simply shake off its reliance on traditional research methods; its meaning still comes out of its interpretation. Try as Silicon Valley might, big data will never be neutral.
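The mechanism behind this failure can be sketched in a few lines. The following toy model (emphatically not Google’s actual algorithm; every number here is invented) fits a flu “nowcast” on a proxy term that merely co-occurs with winter flu, then shows the fit collapsing when an outbreak arrives off-season, as H1N1 did in 2009.

```python
# A toy illustration of seasonal confounding: a "winter-ness" proxy term
# tracks flu in the training years, then fails for an off-season outbreak.
import numpy as np

weeks = np.arange(104)                    # two years of weekly data
winter = np.cos(2 * np.pi * weeks / 52)   # seasonal cycle, peaking mid-winter

rng = np.random.default_rng(1)
flu = np.clip(winter, 0, None) + 0.1 * rng.normal(size=104)  # flu peaks in winter
soup_searches = np.clip(winter, 0, None) * 0.9  # "chicken soup" tracks winter, not flu

# Fit a one-variable linear "nowcast" of flu activity from soup searches.
slope, intercept = np.polyfit(soup_searches, flu, 1)

# An off-season outbreak: flu activity is high, but the seasonal proxy is
# quiet, so the correlation-only model predicts almost nothing.
summer_flu, summer_soup = 1.0, 0.05
print("predicted:", slope * summer_soup + intercept, "actual:", summer_flu)
```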

Despite examples like Google Flu Trends that expose the limitations of big data, Silicon Valley’s data evangelists continue to proselytize. They base their arguments on a now-legendary 2008 Wired magazine article entitled “The End of Theory” by Chris Anderson. According to the article’s argument, the way we explained systems in the past—through models and hypotheses—is becoming an increasingly irrelevant, crude approximation to the truth. In 2008, the Internet, smartphones, and CRM software were already delivering a superabundance of data. “The numbers speak for themselves,” Anderson wrote as he quoted business leaders like Peter Norvig, director of research at Google. “All models are wrong, and increasingly you can succeed without them.” Ultimately, Anderson took Norvig’s ideas and went to town with them:

This is a world where massive amounts of data and applied mathematics replace every other tool that might be brought to bear. Out with every theory of human behavior, from linguistics to sociology. Forget taxonomy, ontology, and psychology. Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity. With enough data, the numbers speak for themselves.

These companies have embraced a teleology of data, in which more data always yields more: better results for consumers, a more accurate understanding of consumers’ needs and wants, and better outcomes for society at large. But is bigger really better?

Understanding the world with a sample size in the millions is a radical departure from other types of inquiry. Big data may tell us something about people, but it can tell us precious little about individual persons. How much truth about a situation can Silicon Valley tell us, for example, if it doesn’t acknowledge that human behavior is always embedded in a context?

Nineteenth-century pragmatist William James critiqued this naïve approach to data when he responded to the reductionists of his era. In his 1890 book, The Principles of Psychology, James wrote: “No one ever had a simple sensation by itself. Consciousness… is of a teeming multiplicity of objects and relations.” A white swan seems red in a red light; to understand the color of swans, we also have to understand the properties of light. Facts, in other words, always live in a context, and hacking them into discrete data points renders them meaningless and incomplete.

Assumptions behind Frictionless Technology

One popular concept in Silicon Valley is “frictionless” technology; it has become the standard for innovation in the Valley. Technology is deemed frictionless when it operates smoothly and intuitively, without demanding conscious thought or emotional engagement from its users. In these instances, technology becomes a seamless part of everyday life. But what do these sorts of technologies mean for human thought and effort? Should we take the role of technology in our lives for granted, or are there times and situations where we want a more thoughtful engagement with the technology we use? As Silicon Valley’s way of thinking about frictionless technology grows popular, it will shape the kind of innovation that people think is exciting, and the work that merits funding and study, narrowing—rather than expanding—our sense of possibility.

In a 2010 interview in the Wall Street Journal, Google’s then-CEO Eric Schmidt argued, “Most people don’t want Google to answer their questions… they want Google to tell them what they should be doing next.” This reflects a subtle shift in the culture of the Internet, and in Western culture and public life more broadly, that should raise red flags. When we search on Google or post on Facebook, the ever-changing algorithms that underpin these platforms shape the information we get about our friends, about what’s going on in the world, about our health and well-being. In ways that people overlook, Silicon Valley shapes the information we get access to, all in the name of tailoring it more effectively to our needs and preferences.

One oft-made point is that this personalization leads to polarization. By feeding people content that reflects their views and shielding them from people they may disagree with, filtering mechanisms make the public sphere less and less dynamic. Internet activist Eli Pariser dubbed this the “filter bubble.”

The dangers of “frictionless technology” lie not in what it can or cannot do for us, but in how it shapes our thinking. Why seek out new information, learn something different, or push the boundaries of debate and previously accepted ideas, when algorithms can serve up exactly what reflects our already-established outlooks and preferences? This is what journalists, commentators, and political analysts have dubbed the “post-truth era.” In a Silicon Valley state of mind, we care less about actively seeking out the truth than we do about engaging in discourse and experiences that make us feel affirmed and acknowledged.

It goes without saying that there are tremendous benefits to all of the innovations happening in both the real Silicon Valley and its culture at large. No one is arguing for doing away with the cutting-edge technology or the spirit of entrepreneurship that makes this culture such a prominent player in our global economy. The critique here is of Silicon Valley’s quiet, creeping costs to our intellectual life. The humanities, or our tradition of describing the rich reality of our world—its history, politics, philosophy, and art—are being denigrated by every assumption at play in Silicon Valley.

When we believe that technology will save us, that we have nothing to learn from the past or that numbers can speak for themselves, we are falling prey to dangerous siren songs. We are seeking out silver bullet answers instead of engaging in the hard work of piecing together the truth.

Sensemaking is a corrective to all of these misguided assumptions of Silicon Valley. Even with the magnificent computational power now at our disposal, there is no alternative to sitting with problems, stewing in them, and struggling through them with the help of careful, patient human observation. In the chapters to follow, I will reveal the road map for how to do just that.