Given that the technology industry has long been fed by the technology education industry, where I’m originally from, I assumed that the matters we addressed at the source would make their way downstream. Just a few years after I had joined the MIT faculty, then-president Charles M. Vest released a statement following the official 1999 report on the status of women faculty in the MIT School of Science:
I learned two particularly important lessons from this report and from discussions while it was being crafted. First, I have always believed that contemporary gender discrimination within universities is part reality and part perception. True, but I now understand that reality is by far the greater part of the balance. Second, I, like most of my male colleagues, believe that we are highly supportive of our junior women faculty members. This also is true. They generally are content and well supported in many, though not all dimensions. However, I sat bolt upright in my chair when a senior woman, who has felt unfairly treated for some time, said “I also felt very positive when I was young.”
I naively thought that with this statement made public by MIT, and subsequently reported by The New York Times, the great injustice of gender discrimination was henceforth officially banished from the technology world. So I was more than flabbergasted when I arrived in Silicon Valley over a decade later and was introduced to a room full of the “top UX designers in Silicon Valley,” only to find two women present. By the time I left Silicon Valley, any gathering I organized or participated in as a speaker had improved to fifty-fifty, because I had found that to be the most effective way to improve the overall quality of an event for everyone attending. So it only felt logical to me.
As I began to dig deeper into the statistics for tech, I became concerned. I learned that there was only 21 percent representation by women in tech, whereas the overall population of women in the United States is roughly 50 percent—an obvious imbalance. In 2014, the US Equal Employment Opportunity Commission reported that the high-tech sector employed 7.4 percent African Americans, 8 percent Hispanics, and 14 percent Asian Americans, whereas the average overall representation in the private sector was 14.4 percent for African Americans, 13.9 percent for Hispanics, and 5.8 percent for Asian Americans. As an Asian American, I couldn’t help but notice a study by the Center for Employment Equity reporting that, despite the relatively high proportion of Asian Americans in tech, managerial and executive jobs are more likely to be occupied by white men—or “pale males,” as the new lingo goes. Outside the United States—in China, for example—the tech industry skews male as well, which indicates that the problem isn’t limited to the paleness of one’s skin.
Even more concerning than the simple disproportion in these figures is how such imbalances affect the quality of life for workers outside of the majority in the tech industry at all levels. The Kapor Center for Social Impact researched the main reasons why these folks leave the tech industry, citing discrimination, bullying, sexual harassment, and racism as the top causes. The study also found that women and people of color were the most likely to experience harassment and be passed over for promotions. To connect back to my MIT story, in hindsight I can see that MIT and most universities across the world never aligned with Dr. Vest’s ideal “reset.” And so the system that has fed much of tech with engineering talent—which has a gender imbalance to begin with—has simply done what all systems do when they’re biased in a certain direction. They just continue along the default path. And leaders who are awake to those lost opportunities to create higher-quality work environments can choose to do something about it or just let it be.
From a systems perspective, we can predict that an imbalance in the tech industry of this magnitude will likely perpetuate without self-correction. Tech companies need to run at full speed to keep up with Moorean time scales, which fosters the pressure to optimize for “culture fit” among potential hires—meaning people who are “just like us.” That way, a new person will take less time to onboard (because they are “like us”), create less day-to-day friction (because they are “like us”), and follow the boss (because they are “like boss”). And these people, in turn, will hire more people like themselves—unless there are explicit systemwide interventions and incentives or penalties that can break this cycle. Whether it’s the friends you went to college with who have similar tastes, or the people in your neighborhood who moved there for similar reasons, or your professional circle that’s already sorted itself for maximal camaraderie, our tendency is to reduce friction and choose sameness over difference.
So it shouldn’t surprise anyone that the tech industry is filled with people who are more likely to think alike and come from similar backgrounds, because the need to move fast will always outweigh a slower, considered approach. But when there is a “we” that defines us, there is a corresponding category of “not like us” that is naturally excluded because “they” think differently and will slow us down. The Temple of Tech is no different from the Temple of Finance or any other specialized profession that seeks to foster its own culture. The boundaries of any temple will nurture safe cultures of like-minded people who prefer to avoid the friction they might feel whenever they’re not with their own tribe. The difference is that, although we should care about inclusivity in any sphere, the techies exert disproportionate influence as they operate at a whole different Moorean speed and scale.
An “oops” by a professional in any company can negatively impact many people, but an “oops” in a computational system can impact all connected customers within a few milliseconds of a single keystroke. When the biases of a business monoculture lie at the foundation of an “oops”—like in an emailed reply-to-all from Finance that subtly puts down anyone who doesn’t know what EBITDA means—that can be unfortunate. But when the biases of a tech product team are deployed to millions of users simultaneously—such as copy in an on-screen button label that will read as insensitive to non–pale male customers—that takes it to the next level. Fortunately, the feedback loop of social media is swift and relentless when aimed at companies’ missteps, but there are even worse “oops” that can sit deeper within a company’s culture. For example, when an internal hiring tool was programmed by an Amazon team of majority pale male AI experts who’d leveraged past data from hiring decisions by likely majority pale male managers, the computational system demerited résumés that mentioned attending women’s colleges or used the word “women’s.” So what can easily be pointed to as a “computer program error” needs to be considered as more of a “culture error” if we are to truly prioritize accountability.
An imbalanced system will produce imbalanced outcomes. When applying that thought to the tech industry, we can expect imbalanced products to be produced for the foreseeable future. With the players in the Temple of Tech running at computational speeds and scale, we can expect the velocity and level of imbalance to be unparalleled—and eventually fully automated. Beyond the social implications of why that’s not a good thing in terms of equity and justice, from an organizational perspective it represents a suboptimal path for achieving breakthrough innovations. A culture of sameness, devoid of catalysts for innovation, is a losing strategy for a business to achieve outstanding growth. It’s also a source of risk to your business’s ongoing performance when you ship an insensitive “oops” in your product that could have been avoided if the team had been more inclusive of diverse backgrounds and viewpoints in the first place. A work environment where no one is afraid to speak up with a different point of view is how costly mistakes are most quickly avoided. But you need different kinds of people to hear different kinds of perspectives.
Sara Wachter-Boettcher’s landmark book Technically Wrong documents the many ways the technology industry has unconsciously let the biases of its primarily pale, cisgender male culture impact its products. The result is everything from menstruation apps that refer to users as “girls” to shopping apps that push notifications to women to shop for a Valentine’s Day gift to delight “him.” Or when a popular social media company released a real-time image filter to add slant eyes to any face to approximate an Asian caricature—just months after it had released a real-time image filter to darken a light-skinned face to look black—such unacceptable mistakes trigger PR backlash that, while costly, may not lead to the hiring investments necessary to make better product decisions. So this can be expected to continue as a natural outcome of the imbalance inherent to the tech industry, which also goes to the heart of how startups are funded, managed, and overseen by their boards—making those compromises ripe for disruption.
Smartly, there is a wave of pragmatic business leaders in tech who see new opportunities in adopting more inclusive approaches to creating their products. They know that failing to serve the broadest possible range of customers represents a kind of old-world ignorance—and leads to lost business opportunities. Instead, they’re actively working to diversify their company cultures to serve their customers better by, for one thing, addressing the gender pay gap in the tech industry. There’s interest in moving from the narrow-mindedness of the “culture fit” approach to instead valuing differences as “culture add”—that is, bringing new voices and ways of thinking into an organization as a positive asset. For all of Google’s diversity and inclusion challenges—as illustrated by the firing of an employee for his internal antidiversity memo—the company has been steadily investing in a promising area called “product inclusion.” Headed by business leader Annie Jean-Baptiste, the initiative takes as its central thesis that diverse teams make better products. As a result, Google is examining everything from supplier diversity in sourcing equipment and services to an “inclusive images competition” to populate their image databases with more diverse representation. We will see more of these kinds of sanguine efforts as consumers demand higher standards not only in the quality of their products and the ethics of how they get made, but also in the character of the companies they do business or share data with.
The challenge of righting the imbalance in the tech industry presents an immediate opportunity for new business growth and disruptive innovation. Accepting this challenge can feel at times daunting, even impossible, when considering the deep roots of inequality in tech education, and the even deeper roots of wealth inequality that span multiple generations in our country. Yet I’m increasingly optimistic, because I believe that the most leverageable starting point is for the power of computation to be comprehensible to more people—specifically people who were previously unaware of its implications because it lives in an invisible universe. It wasn’t your fault that you didn’t know much about it, as you couldn’t have noticed it in the first place. But now you know it’s there, and everywhere. And when you consider the endless, untapped opportunities at hand whenever we cross national, race, gender, cultural, religious, age, and socioeconomic boundaries in thinking about future products and services, there are undoubtedly plenty of innovations to be born. This is called “greenfield” or “white space” in the jargon of my businessy colleagues—a space where there’s little prior development or ownership, so newcomers still have a chance to enter, grow their influence, and build new business opportunities.
Imagine rebooting tech education by following the example of President Maria Klawe at Harvey Mudd College, which has achieved gender parity in its computer science program. Or consider how Advanced Placement computer science exams have seen record increases in female, black, and Latino students, in part by shifting the emphasis away from pure computer programming and toward addressing meaningful problems through code. Or consider the incredibly diverse learning ecosystem of WordPress, with its network of open-source volunteers of all ages who informally teach each other how to code and use digital technologies to gain practical skills to put food on their families’ tables. All it takes is for the sameness of tech to tap into the full diversity of humanity, and we can potentially rid it of the imbalances that have been created at the speed of Moore’s law. Sound impossible? Absolutely. But computation enables the impossible, and if we fully harness its capabilities, we humans can re-create the Temple of Tech as one that is welcoming to all.
Given that we can easily deploy computational products that are incomplete and instrumented, we have the opportunity to get back a large amount of data that tells us how to modify and improve those products. Success will usually look like harvesting lots of user data as the best means to improve the statistical accuracy of your data-driven conclusions. So with computational products, it’s easy to become biased toward broadly observing the behavior of thousands of users instead of consciously investing time to delve deeply into just a few individuals’ experiences with your products. Why? The simple reason is that it’s so much easier (and thus cheaper) to take the instrumented approach to studying aggregate behavior, due to the computational power available to us today. It also makes you look super smart when you can rattle off a convincing, scientific-sounding factoid like “7.2 percent versus 1.2 percent.” By contrast, it’s much more low-tech (and thus more expensive) to study individual people’s behavior through methods developed by anthropologists—in essence, ethnography. It’s a positive sign of customer empathy to share insights after spending time with that one nontechie customer named James who is having a few challenges with your product.
Here’s the problem: the “scientific” response given as numerical data will usually get the majority of affirming nods compared with the stories of James’s challenges and the roadblocks he’s currently facing. That’s because a quantified viewpoint appears like fact—an important signal extracted from the noise—while a qualified viewpoint appears like a noisy customer who just doesn’t “get it” and can be discounted because he falls outside the 7.2 percent. In actuality neither the aggregate data nor the individual’s story constitutes fact, because both contexts involve people. Human beings are by nature unpredictable, so anything involving predictions of human behavior is ultimately going to be a guess. One kind of guess uses quantitative data and the other uses qualitative data. We pay big money for high-quality guesses as one of the means to lower risk in decision making, but no guess can give a 100 percent guarantee of success. That’s why it’s a guess, not a fact. And the best way to guess better, as any investor knows all too well, is to create a portfolio of bets so that all the casino chips aren’t placed on only one of the guesses.
It’s gotten so easy to harvest quantitative data in the computational era that part of the challenge today will be to rip techie folks away from their standing desks with giant monitors and copious snacks to do an old-fashioned, face-to-face customer visit. This is especially difficult since effortless access to quantitative data is a primary benefit and de facto outcome of the computational era, so to folks who live every day in the future, it may feel like going in the opposite direction. Furthermore, if it costs only five dollars a month to gather and analyze data on the millions of online customers using your products, it can seem unnecessarily expensive and inefficient to invest in working one-on-one with a single customer, which can cost hundreds of dollars per month. To draw on another finance industry analogy, the best investors will not only carefully analyze the funds they are engaged with, but they will also fly out and pay a site visit to the fund managers for that extra bit of professional due diligence. So if the due diligence of the investing world sets the highest standards, then talking with a real customer from time to time makes good business sense.
This is the lesson of good ethnography: to understand a cultural phenomenon, you need to get as close as possible to “first-source” information, instead of relying on second- or thirdhand information. Furthermore, to truly understand first-source information, you need to invest time in knowing and understanding the cultural context that surrounds it. Cultural anthropologist Clifford Geertz defined the ultimate goal of ethnography as “thick description,” as opposed to “thin description.” Thin description focuses merely on the superficial details, whereas thick description goes much deeper than immediate observations and attempts to capture the many layers beneath just the surface.
For instance, from my own experience working on WordPress products, I know it’s not uncommon to hear a thin description like, “Ninety percent of people spend most of their time checking their blog’s viewing stats,” which leads to the conclusion that it’s an important feature that needs to be improved. But such an analysis is quickly disrupted by a thick description from a user telling you that their blog’s viewing stats are the first page they land on in WordPress, and since the stats page shows zero views, they’re not motivated to continue using their blog. So the problem isn’t improving the stats page; the problem is enabling a blogger to write content that garners views and builds a readership base. It’s easy to let impressive-sounding aggregate data present a persuasive case for action that can miss an underlying, bigger problem. So when presented with quantitative data, it’s important to demand what tech ethnographer Tricia Wang calls “thick data,” in contrast with “big data.” Gathering thick data takes time, and interpreting it well can take even longer. You need to marinate in the thick data that you gather to fully capture the many contexts of your fellow human beings, or else there will be little benefit from your added investment. The allure and ease of quantitatively processing big data will constantly pull you away from the time commitments required to comprehend thick data.
As a fellow busy person, I confess that I like to hide behind my computer screen and sit in my comfy task chair, because I can efficiently get a lot of work done and my rhythm doesn’t have to get disrupted by upending my surroundings. But ever since I started to actively work face-to-face with customers—as inspired by Intuit founder Scott Cook’s habit of “going home with the customer,” watching them install and run his software system, a practice he began back when he first launched Quicken—I now fully realize that the time is well worth the investment. It raises the stakes in your work of serving customers. It can be a terribly uncomfortable thing to do, because you will quickly know when you’ve let down a fellow human being with the decisions you’ve made in your product. And when gathering thick data on your own, be careful of how easily you can become biased into believing that your one customer’s problems are every customer’s problems. By now you’ve embraced imperfection, so just go for it.
The one piece of advice I’d give for “going thick” is to try not to focus on the specific problems your customer is facing with your system. Instead, keep in mind their overall goal for wanting to work with you in the first place. For example, I recall how in the nineties Japanese copy machine makers were designing elaborate user interfaces to manage paper jams, only to be blindsided by organizations going paperless as a better way to share information. I liken this to how customer support ends up managing a lot of “flat tire” situations: we immediately want to spend all our time making a tire that can’t go flat—or, more often, getting really good at fixing flat tires. Meanwhile, we can forget to ask where the customer was going in the first place: “What was their destination, and what hopes and dreams were associated with it?” By starting from that motivation question as the driving force behind thick data, you’ll remain more strategic as you immerse yourself in firsthand information. Remember that you’re looking for subtle, human details that are impossible to capture with charts and numbers, so try to rely on your ability to smell and feel a situation. Be what AI cannot be.
I learned a version of this lesson as an undergraduate researcher at the MIT AI lab working for a visiting engineer from Digital Equipment Corporation, or DEC as it was affectionately called until it vanished like many early computing companies. She told me an unforgettable story about how a major soup company had invested a fortune in creating an “expert system” (the first generation of AI) to make soup in their factories just like the human operators. The soup company’s problem was that their best factory operators were all getting older and they weren’t sure how to deal with them all eventually retiring. So the soup-making experts were carefully observed, and all their actions and ways of thinking were then encoded as IF-THEN rules. The day finally came for the factory to fire up the au levain AI system and make some soup. But the results were disappointing—the soup tasted terrible, in fact. Now, at the time I was a big fan of expert systems and was shocked to hear about this failure, so of course I asked the visiting engineer if they’d figured it out. “It was really quite simple and funny,” she said. “They asked one of the old guys to explain why the soup tasted bad. He stepped forward, leaned over the soup bowl, and sniffed it a few times loudly. His response was, ‘It smells bad.’” I love this example because it still holds true today. Indeed, complex systems have many intangible aspects that are easily dismissed with even the most computationally advanced techniques. Being human is still pretty cool, so f*** the AIs.
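To make the punch line concrete, here is a minimal sketch of what such an expert system’s rules might have looked like—the function name, rules, and thresholds below are entirely hypothetical, invented only for illustration and not taken from the actual soup company’s system:

```python
# A hypothetical expert-system fragment: human know-how transcribed as IF-THEN rules.
# Every rule the engineers could get the veteran operators to articulate is here;
# anything they couldn't put into words -- like how the soup smells -- simply isn't.

def soup_quality(temperature_c: float, salt_percent: float, simmer_minutes: int) -> str:
    if temperature_c < 85:
        return "reject: too cold to develop flavor"
    if salt_percent > 1.2:
        return "reject: too salty"
    if simmer_minutes < 30:
        return "reject: under-simmered"
    return "accept"  # passes every written rule, yet might still smell bad

print(soup_quality(temperature_c=92, salt_percent=0.9, simmer_minutes=45))  # prints "accept"
```

A batch that passes every encoded rule can still fail the nose test, because no rule was ever written for the one signal the experts trusted most.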
So with your nose pointed forward into the future, keep in mind the three traps that have always gotten me in trouble. You’ll bump into them as you get more computationally adept and more opinionated about design—and also, simply by getting older like me, you might start to believe your own blah-blah-blah. I’ll be brief so you can rush ahead to the end of the book, as I know you’re almost there!
Thinking like a classical engineer, and believing there’s got to be only one way to build it right. Henry Ford believed, like a good engineer, that everyone would want a Model T available in only one simple, pragmatic configuration, and painted black. Alfred P. Sloan at General Motors believed that there should be many kinds of cars for many kinds of people to give them what they wanted. Ford lost and GM won by having a better nose.
Thinking like a classical designer, and believing your solution is the one that all will bow down to and adopt. The standards that elite institutions uphold as the cultural compass for the classical design world are underwritten by subjective decisions and invisible wealth networks that facilitate what gets remembered versus forgotten. The Temple of Design narrative of the “genius designer” is not a reliable pathway to success—it’s seductive, but it’s stupid. Use this nose less.
Thinking like a senior leader, and believing that what worked well in the past is obviously applicable yet again. I’ve trained myself to catch it when I say, “Back when I was at X, I did Y, and this problem we’re facing is the exact same thing again. I know how to solve this one. Follow me!” I stop myself here. That’s because I know we live in the computational era, where we can’t expect what worked ten years ago to apply right now. This is what entrepreneur Barry O’Reilly calls having the rigor to “unlearn” from past successes, or you will miss new ones. So when in doubt, go get a brand new nose.
We can address the many imbalances that have seeped into the technology universe by focusing on the human element of our work. Computational machines are master copycats and are powered by the quantitative data at their foundations. So we’ll need to pay attention to our completely normal “human nature” of relying on our usual biases, aka “wisdom.” The computational era will easily turn out poorly for all of us if we don’t balance out all of our quantitative data with more qualitative data. So start now and broaden your data portfolio. Invest like a smarter-than-average boss. And gather as many observations as you can from people who are unlike yourself—because triangulation works best when you have the most diverse set of sources with which to tune and retune your data-informed guesses. Rather than just trying to protect your nose from a thick and occasionally unpleasant smell, we live in a time that requires your full sensorial attention and your inquisitive curiosity. Go thick. Smell hard.
When the cofounder of Google talks passionately about his fear of what AI can bring, it isn’t just a ploy to boost Google’s share price. It’s because those who have been (literally) plugged in to the power of Moore’s law know something about its impact that the general population doesn’t know. When Harvard scholar Jill Lepore says this about the transformation of US politics:
Identity politics is market research, which has been driving American politics since the 1930s. What platforms like Facebook have done is automate it.
the key word is “automate”—because machines run loops, machines get large, machines are living. It’s not like turning a steel crank by hand to make a plastic toy figurine move about. It’s like pressing a button and watching the toy get up, wave at you, and then start answering all your emails for the rest of your life. When a young person says on camera that Cambridge Analytica and Facebook together were able to sway the 2016 US election, we look at them and think, No way. Because there can’t be that many human workers in the world who could process millions of pieces of information at a low enough cost. But you’re aware of computation now, and understand that the present is significantly different than even the recent past (just over a year ago).
Automation in Moorean terms is very different from simple machines that wash our clothes or vacuums that scoot about our floors picking up dirt. It’s the Moorean-scale processing network that spans every aspect of our lives that carries the sum total of our past data histories. That thought quickly moves from wonder to concern when we consider how all that data is laden with biases, in some cases spanning centuries. What happens as a result? We get crime prediction algorithms that tell us where crime will happen, so officers are sent to neighborhoods where crime has historically been high—that is, underprivileged neighborhoods. And we get crime-sentencing algorithms like COMPAS, which are likely to be harsher on black defendants because they are based on past sentencing data and biases. When asked about AI and its ramifications, comedian D. L. Hughley optimistically replied: “You can’t teach machines racism.” Unfortunately, his assessment is incorrect, because AI has already learned about racism—from us.
Let’s recall again how the new form of artificial intelligence differs from the way it was engineered in the past. Back in the day, we would define different IF-THEN patterns and mathematical formulas, like in a Microsoft Excel spreadsheet, to describe the relation between inputs and outputs. When the inference was wrong, we’d look at the IF-THEN logic that we’d encoded to see if we were missing something and/or we’d look at the mathematical formulas to see if an extra adjustment was needed. But in the new world of machine intelligence, you pour data into the neural networks and then a magic black box gets created: you give it some inputs, and outputs magically appear. You’ve made a machine that is intelligent without having explicitly written any program per se. And when you can feed it with tons of data, you can take a quantum leap in terms of what machine intelligence can do with newer deep-learning algorithms—and the results get significantly better with the availability of more and more data.
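To make that difference tangible, here is a minimal sketch of my own—not drawn from any real product—where the first function stands in for the old au levain style of explicit IF-THEN logic that a person wrote and can read, and the fitted model stands in for the à la levure style whose “logic” is just a pile of learned numbers. It assumes the scikit-learn library is available, and the loan scenario, function names, and thresholds are all invented for illustration:

```python
from sklearn.linear_model import LogisticRegression  # assumed to be installed

# Old way: a rule a human wrote down, and a human can trace.
# (Income and debt are in thousands of dollars; thresholds are invented.)
def approve_loan_by_rule(income: float, debt: float) -> bool:
    return income > 50 and debt < 10

# New way: pour in past decisions (which may carry past biases) and fit a model.
past_applicants = [[60, 5], [20, 15], [80, 2], [25, 12]]   # [income, debt]
past_decisions = [1, 0, 1, 0]                              # 1 = approved, 0 = denied
model = LogisticRegression().fit(past_applicants, past_decisions)

print(approve_loan_by_rule(55, 8))          # True, traceable to the rule above
print(model.predict([[55, 8]])[0])          # an answer that emerges from fitted weights
print(model.coef_, model.intercept_)        # the model's "reasoning" is only these numbers
```

The rule-based version can be argued with line by line; the fitted version can only be argued with through the data you feed it, which is exactly why the biases hiding in that data matter so much.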
Machine learning feeds off the past. So if it hasn’t happened before, it can’t happen in the future—which is why if we keep perpetuating the same behavior, AI will ultimately automate and amplify existing trends and biases. In other words, if AI’s masters are bad, then AI will be bad. But when systems are largely running on autopilot like the new AIs, will media backlash lead to an “oops” being attributed to AI error rather than human error? We must never forget that every such error is ultimately a human error, and when humans start to correct those errors, the machines are more likely to observe us and learn from us too. But they’re unlikely to make those corrections on their own unless they’ve been exposed to examples set by a sufficient number of humans who can provide the right corrective behavioral data to rebalance their numerical brains.
In our machine intelligence era, an apt analogy is how children inevitably copy what their parents do. Oftentimes they can’t help growing up to become like their parents—no matter how hard they try otherwise. While in the old days solving a complex problem by writing a computer program could take months or years, now machine intelligence can rapidly conjure an equivalent computational machine when fed past data from which it can model past behavior. Automating a past outcome can happen instantly and with increasingly less human intervention. So rather than imagining that all the data we generate gets printed out onto pieces of paper in a giant room somewhere at Google, with a staff of twenty running around trying to cross-reference all the information, think instead of how machines run loops, get large, and are living. The logical outcome of that computational power—a rising army of billions of zombie automatons—will tirelessly absorb all the information we generate and exponentially improve at copying us. The AIs are not to blame when they do bad things. We will be the ones to blame for what they do in the service of us.
With revelations that systems like Facebook are able to alter how an individual behaves, we’re reaching a critical moment in how we want to coexist with computational systems. If we expose ourselves to technology that is programmed to be sexist, misogynistic, homophobic, racist, and so forth, we shouldn’t be surprised to see things like The Wall Street Journal’s “Blue Feed, Red Feed,” which shows you what you will see if Facebook tags you as liberal or conservative. What you see is what you might easily become today. Try scrolling through the feed of a stranger sitting right next to you, and you’ll see that their online reality can be a lot different from yours. That’s because today we get the news we “want”—we get to be hedonically stimulated to confirm what we think is the truth, thus validating how smart we are: I get the Daily Me, and you get the Daily You. And then, when occasionally exposed to an “opposing point of view,” we have no other recourse but to assume the other side’s ignorance compared with our own superior views. Meanwhile, all along the machines are feeding this to us because we programmed them to do so. They’re watching how we react positively or negatively to what we’re fed, and in turn learn our individual extremes for good and bad.
But it’s not too late to make computational machines that have a broader understanding of the human experience. We can easily start the process by first understanding ourselves better—this journey is under way for me right now as I dig deeply into the world of “inclusive design.” This approach to leveraging diversity is the key to making better products. I’m embarrassed to say that at first I didn’t fully understand why the area interested me so much, but in hindsight I realize it’s because I could smell something. The rise of computational design, and the incredible business value it has brought with it, was also creating imbalances in ways that were not immediately apparent to the people and companies at the center of it all. The fact that AI à la levure was odorless really bothered me.
Fortunately there’s change afoot, led by inclusive design expert Kat Holmes, with ideas that began when she was at Microsoft and are now spreading across the world via her 2018 book, Mismatch: How Inclusion Shapes Design. I first encountered Holmes’s work a year prior to Mismatch and featured it in the “Design in Tech Report” back when it was just starting to take off. Now leading user experience design at Google, Holmes is poised to reshape the cloud in ways that I’m excited to see come true. Holmes’s three design principles for addressing imbalance are simple enough to put into practice, and yet deep enough to spend a lifetime trying to master. They are:
“Recognize exclusion.” Make a conscious effort to notice when someone or a group of people is being excluded. You’ll need to deliberately step into uncomfortable situations when doing so, but it’s an easier task when you consider that those being excluded already felt uncomfortable in the first place.
“Learn from human diversity.” Go thick, and go into neighborhoods and cultures that are unlike your own. That means you need to leave the safety and comfort of your home or workplace and place yourself in danger or discomfort—which is a hard sell at first, but your return on investment will be high.
“Solve for one, extend to many.” Construct solutions that break your biases and help you find new markets. Innovation is what achieves growth, and innovating is about bringing new perspectives to existing problems—and it gets even better when entirely new problems are introduced that wouldn’t have been obvious without a different point of view.
Kat Holmes’s framework helps to disrupt our natural biases to exclude with the positive intent to concentrate, focus, and deepen the solutions that we want to design for ourselves. Anything that we feel comfortable with is going to be laden with biases, and because computation has been built by techies, we can expect it to be laden with techie biases. Computation isn’t the only medium infested with biases. Just before digital cameras, there was chemical photography—which was tuned for lighter skin tones. Or think about this: if you ask a Temple of Design acolyte to name ten masters of the Bauhaus school, they will surely name ten men—even though the Bauhaus was half men and half women. Or look to the number of movies directed by women, or the number of Fortune 500 CEOs who are women or decidedly “unpale.” What’s different about computation? You know it—it’s incomplete. We can reshape it. We can improve it. We just need to start immediately.
Kat Holmes often points out the origin of the word “exclude”: derived from Latin excludere, where ex- means “out” and claudere means “to shut.” In the people world that translates to literally shutting out a group of people from a special club that lets everyone else in. It’s hard to spin exclusion in a more positive way because it is unfair by its very nature—which you know if you’ve ever felt excluded or “shut out” yourself. But exclusion makes complete sense when considered from the point of view of business, because to have an “unfair advantage” is considered to be a winning weapon when competition is fierce. Having something that your competition does not incentivizes you to shut them out and adopt what’s called in the computing world a “closed” approach.
It’s a common practice in industry to make closed systems, because when successful they provide the invaluable ability to exercise full control. When launching the first Mac computer in 1984, Apple famously went with a closed computing system that wasn’t easy to extend like the competing IBM PC standard of the time. As a result, Apple was able to control the entire user experience in ways that no other computing brand could achieve. This strategy of a closed system approach played out again later with Apple’s launch of the iPhone, and the rest is history. Meanwhile, there was an emerging mobile OS project called Android that chose the unusual path of making all of its computer code (“source code”) openly available. Today, there are more devices powered by Android than by Apple’s operating systems. As of 2019, Apple’s approach has started to weaken, and the company is being forced to participate outside its own closed universe. Are its unfair advantages eroding?
“Open source” is the official term for computer code that is open and accessible to anyone to modify for their own purposes. It’s the opposite of “shutting out” others—instead, it’s about including anyone and everyone. A few well-known open source projects include the operating system Linux (on which Android is built), Firefox (a popular web browser), WordPress (the website management system powering over a third of the web’s sites), and PHP (the popular computer language that powers WordPress). The term “open source software” was coined by Christine Peterson in 1998 as a way to better embody the community values that are inherent in it, as opposed to the prevailing term of the time, “free software,” which could connote lesser quality. Having had the opportunity to witness the WordPress community firsthand through my work at Automattic, I don’t recall ever encountering a more welcoming, inclusive, and world-spanning group of people in my entire life. And over time, I came to realize that PHP in the WordPress universe stood for “People Helping People,” given the way each local community welcomes anyone who wants to learn computation, with no strings or costs attached to getting involved as a contributor. In open source, the software is the community and not just the code.
In contrast, “closed source software” governs the majority of apps and services that you use every day. You’ll never be able to examine what the programming code is actually doing, and if you want it to work differently it’s impossible for you to make changes to the software. This includes your Facebook apps and most of everything running on your phone or desktop computer or online. Now, even if you could access the source code of all your apps, that doesn’t mean you would automatically be able to understand what’s there. The same goes for a complicated open source system like WordPress. But it’s a fact that your Facebook app shuts you out of the opportunity to look under the hood, whereas WordPress is fully inclusive at the source code level if you ever want to change any aspect of how it works for you.
Another way to think about the difference between closed source and open source is to consider the distinction between “cooperation” and “collaboration.” Cooperation is about working with another party at arm’s length, whereas collaboration is about having arms hugged around each other. The advantage of collaboration over cooperation is that mutual benefits result from working together, with all parties making compromises of varying degrees. In the absence of the ability to collaborate, the only recourse for governments to rein in the Temple of Tech today is to attempt to regulate it. Interestingly, if all software by Temple of Techsters were fully open source, then there would be no need for governments to take their current course of action against them. Why? Because the source code would be open to inspection for the kinds of violations we are all concerned about today, like what they’re doing with all the data they are gathering about us. It’s harder to do evil when there aren’t opaque walls shutting everyone else out. An open systems approach is an alternative to government regulation, and so I expect we’ll see more of this approach when politicians who can speak machine, like you, get elected. Maybe it’s time for you to run for office? I hereby open source an OPEN campaign slogan for you, with a little recursive twist: “OPEN Promotes Equity, Naturally.”
There is a downside to open source: there are no secrets to be kept anywhere. In a world where everyone seeks to collaborate and inflict no harm, full transparency can mean “sharing is caring” and lasting harmony. However, there will always be a few bad actors looking for ways to manipulate a situation in their favor, for reasons that can only be explained as human nature. So open source is not always the way to go. For example, you would never want to publish all the source code for your personal electronic banking system that can easily access all of your finances. An open source approach might be commendable if such an act of sharing let others make a similar system for themselves, but you can bet that all your money would soon vanish if your source code included sensitive information, like bank account numbers and passwords. Or if Facebook’s algorithms were all open sourced, then an entity with malicious intent could rewrite the timeline code and easily manipulate your timeline. And of course, there will always be competitive advantages to justify why a business would want to keep its code private: to keep its unfair advantages over its rivals.
Nonetheless, businesses are recognizing the value of open source. Microsoft surprised the world with its acquisition of GitHub, the world’s largest community of open source software development. To grasp the magnitude of that acquisition, just ask your programmer friends about it—some may not even know that Microsoft owns GitHub, because Microsoft has chosen not to alter or rebrand how it currently operates. Besides Android, another example at Google is the Chrome web browser, which runs on an open source engine. Now anybody can make a web browser on top of it—and even Microsoft has announced it will be switching over to Chrome’s engine. Relatedly, Apple’s web browser Safari is an example of a hybrid open/closed system: it shares its lineage with the same open source engine as Google, but the rest of its code is not accessible to the public. When using open source in your product, be sure to check the licensing rules—some licenses let you use the code without any restrictions, while others require you to openly share the rest of your code if you use theirs. The former is often referred to as an “MIT license,” which gives you a lot of freedom; the latter is a GNU “GPL license,” which is more about giving others a lot of freedom.
Let’s not forget about the other kind of programming that has less to do with shareable computer code—I’m talking specifically about AI à la levure. Newer machine intelligence systems aren’t composed of readable computer code but are instead packaged as opaque black boxes of numbers and data with no clear logical flow. It’s long been a concern that these methods are so complex that we don’t really know how they work—they’re not legible by human beings because they’re essentially piles of raw numbers. The severely closed nature of these systems—which inherit biases from the data they are trained on—has set off alarm bells about the need to address their inherent opacity. New work is now emerging on computational ways to inspect these opaque AIs so that they behave more like “gray boxes” that might give us more insight into how they work. And if we can’t figure out how they work, there are also efforts under way for AIs to start asking why they’re being told to do something, so they might build the equivalent of a conscience. We should expect and demand more efforts around both understanding AIs and teaching ethics to AIs while channeling the machines’ never-ending diligence to loop forever until they succeed.
Frankly, it’s easy to be terrified of AI as it’s becoming an increasingly common topic in popular media. We’re not far away from hearing about a cleaning robot that refuses to listen to you, or an app-enabled pacemaker that extorts you, or a cybercrime cartel that has replaced your entire online presence with a bot you can’t control—none of these has happened yet, but all are entirely possible with existing technology. If and when such calamities do occur, just remember that computation right now is just one of two things: readable source code or black boxes of numbers. It’s au levain or à la levure. Both are made by human beings as IF-THEN logic or data-powered black boxes, and they’re either openly shared or hidden behind closed doors.
When they’re open technologies, we have the opportunity to share, collaborate, and learn together. And when we share similar values with the cocreators of the open source code, we are less fearful of the technology. If you join an open source community, you’ll feel the responsibility to do the right thing by everyone in it. Most of the communities don’t require you to be an expert computer programmer, and even rudimentary “machine speakers” are more than welcome. In case you would like to get involved in one of the many open source communities to grow your machine-speaking abilities, I’ve compiled a list on howtospeakmachine.com. Why do you want to get involved? OPEN Promotes Equity, Naturally.
Shortly after four a.m. on December 6, 2015, I went out for a jog on El Camino Real in Palo Alto. It was like every early morning run for me—not too cold, not too hot. Dry. Safe. Along a route I knew well. On my mind was a six a.m. phone call I needed to make after my run. There were no cars out and about, but when I was making my way across a crosswalk on a six-lane road, the light began to turn red, so I sped up. The other side of the street was dark, as it always was. I made it across the street before the light turned red, feeling slightly victorious, when my right foot caught the edge of the sidewalk.
And I tripped.
I landed on the sidewalk flat on my face, arm, and knee. My head was ringing. It was still quiet and dark, with nobody else around. I touched my face with my left hand. It felt wet and I quickly surmised that I was bleeding. My right hand couldn’t move, and I soon realized that something had happened to my right elbow, as it couldn’t go straight. My elbow felt a bit like little Lego pieces. I was scared.
A few cars passed by hastily. I was wearing random black startup swag. No phone with me. And no wallet or ID. I had my new Apple Watch to measure my steps . . . but it couldn’t help me call for help as it was still the first version. I started to shiver in shock. I knew I needed to get back to my Airbnb—which was roughly ten California-size blocks away. There was nobody else on the streets. I also thought that nobody was likely to help me as I was dressed in a dark hoodie, bleeding from my face, and really . . . what commuter in a hurry would want to stop to help this creature fresh out of a horror movie? As I looked up into the dark sky, I felt oddly at ease because it struck me how insignificant I was—just another random organism on the surface of the planet, of no more significance than anyone else. It was a wonderfully humbling feeling. I felt peacefully in pain.
My MIT-trained engineering mind kicked in all of a sudden. Inexplicably, I imagined myself to be a Mars autonomous rover that had a few broken parts and needed to get back to base for repair. Engineers know well how those units are equipped with many redundant systems in case any systems fail. And so I imagined that there must be some way I could get back to my Airbnb too. This mental image of “becoming a machine” helped me completely forget about the pain. The adrenalin probably helped too.
I soon realized that I couldn’t go more than a few steps without passing out, so I would simply get up, take a few steps, and then lie on the ground. I made continual progress in this manner. When I started to turn off the broad concrete fairway of El Camino Real, the soft, relaxing feeling of placing my face against the grass on a few neighbors’ lawns kept me motivated to continue forward.
Fortunately, I got back to the Airbnb and to my phone, located the nearest hospital with a call to my assistant, tried to clean off some of the blood, and called an Uber. When I got to the ER around five thirty a.m., I was handed a clipboard and pencil. My writing arm was broken and I am a righty, but I quickly started to adapt as a lefty to the best of my abilities and filled out the form with the penpersonship of a second grader.
I anxiously waited in one of the inpatient rooms for about an hour, wanting nothing more than to see a doctor. Finally, a doctorlike person walked in and looked at my torn face where my front teeth had broken through my upper lip. “You look terrible!” he said. I figured he couldn’t be a doctor with such a bedside manner, but then again I couldn’t be sure. He then said, “Can you move your neck?” I did so. With a serious look, he then exclaimed, “You’re lucky!” And at that moment, I felt immediately relieved. I agreed, “I’m lucky!” I thought to myself how bad it would have been if I couldn’t move my neck—I would have been stuck on the sidewalk and unable to walk. I would have been an entirely broken machine. I felt so grateful. He then stitched up my face.
Half an hour later a nurse came in, looked at me, and asked, “What happened?”
I replied, “I was jogging and tripped.”
In a serious, scolding tone, he said, “Exercise isn’t good for you. Don’t you know that?”
I tried to smile but the anesthesia from the stitches didn’t let me.
He then said, “Were you wearing a fluorescent vest or a light?”
“No,” I said.
“You could have been hit by a car!”
Another flash of joy came as I thought, Yeah. I really could have been hit by a car. For two years I had been jogging in the morning darkness wearing all black—which seemed brainless in hindsight. I could easily picture myself getting hit by one of those popular, speedy (and silent) electric cars in Silicon Valley due to my carelessness. So I felt even more relieved and happy!
The next day, as I was being wheeled into the operating room and with the anesthesia just starting to kick in, my epiphany came. Just as I started to see the twilight pink sending me off to sleep, I felt a jolt of realization that the computational era needed to address the imbalance between technology and humans.
Although my path to recovery involved a great deal of technology, it was my religious moment of awakening that drove me to pay careful attention to all the people in my direct surroundings. So instead of marveling at the latest technology that was repairing me, I observed the many compassionate human beings working alongside the machines. The doctors, nurses, technicians, receptionists, my Airbnb hosts Betty and Benny who took care of me after my operation, cleaners, food service workers, the flight attendant who lifted my roll-about bag into the overhead bin when I realized I couldn’t do it myself, and a whole host of nonmachines who eventually got me back to work. I also was conscious of the people who indirectly impacted me, like all the engineering, design, and product folks who I’d never had a chance to meet, and all the invisible teams out there who had shipped the machines (and parts) that helped to repair me. My recovery took ten months in fits and starts, and it definitely did not transpire at Moorean speed. But I would do it all over again because the journey was absolutely worth it.
Recognizing one’s own humanity while recognizing the humanity of others is the kind of gift that technology cannot give to you. I wouldn’t wish sickness on anybody, and yet whenever someone asks me why I care so much about inclusion, I suggest that they break a bone in their body. Their response is usually a polite “No, thanks.” And yet I’ll continue to insist that they do, because that was what enabled me to fully acknowledge the privilege I have been given and earned. During my recovery I was often filled with an overwhelming sense of being lucky to have been born into a family with parents who sacrificed everything they had so I could go to a special place like MIT to learn everything I’d ever need to know about machines. They never had access to the kind of health care that I enjoyed in my special position in society—and finding this sense of gratitude, along with the accountability it brings toward the humans around us who have enabled us, is what has planted itself firmly in the foreground for me.
I guess that’s the one last thought I’d like to leave with you as you head out with your new language skills, now that you know how to speak machine too. As a fellow computational thinker, never forget: mind the humans. We are the ones who brought the computational era into existence. And we’re grappling with what that means for computational products and services today that ship incomplete and instrumented. Now more than ever we need to think and work inclusively in order to directly address the imbalances that will be automated if we don’t consciously create new paths.
It all comes back to the difference between cooperation and collaboration:
COOPERATION = working together independently
COLLABORATION = working together dependently
Working cooperatively is easier than working collaboratively, because to cooperate you don’t really have to understand the other party deeply. For most of the history of the computer’s evolution, the majority world of everyday humans has been learning to cooperate and cope with machines that keep changing for some unknown reason. Meanwhile, it’s the comparatively smaller group of computational thinkers in Silicon Valley and the like that has instead been working collaboratively with the omniscience lurking in the cloud, and knowingly at Moorean speeds. And we’re trusting them all to collaborate well on behalf of humanity. Maybe until you read this book you couldn’t easily collaborate with computers, or with their human collaborators directly. That’s understandable because you hadn’t yet taken the time to visit their invisible universe to learn their history, customs, and norms. You couldn’t speak the language of the machine at all. Now you do. If just a “bit.”
Our machines run loops. Our machines operate at infinitely large and infinitesimally small scales. Our machines are becoming alive. Our machines are incomplete and imperfect, like us. Our machines are increasingly instrumented and know what we’re up to.
Our machines are automating imbalance all over the world on our watch. Mind the machines. Mind the humans. Let’s go.