Google occupies a special spot in the firmament of Silicon Valley.1 It’s not only one of the region’s most successful companies—it also defines a cultural ideal. Silicon Valley presents itself as a playground for weird geniuses, as a place where creativity, commerce, and a little bit of counterculture fuse to form a new synthesis capable of generating extraordinary wealth. Few companies appear to embody this synthesis better than Google.
Yet over the years, Google has changed. And its transformations have paralleled the broader shifts of the region as a whole, as a generation of companies that saw themselves as eccentric underdogs evolved into corporate leviathans. At Google, this dissonance was felt especially intensely. It became a source of internal tensions, which in turn helped make the company a hot spot for white-collar worker organizing.
There is no better vantage point than Google to observe how Silicon Valley has changed in recent years. We talked to an engineer who spent nine years at the company, and experienced many of these changes firsthand. What happens when Silicon Valley’s golden child grows up? What does it look like for a company to have a midlife crisis?
I didn’t really have a computer growing up. Then, when I was in high school, my parents bought one for their business. You could use the modem to dial into the BBS [bulletin board system] of the local public library, and connect to the internet from there. One of the first things that I remember thinking was, Oh, the internet is really cool!
It was around that time that I started programming. The library had a book about Perl.2 So I taught myself Perl, and soon I was making websites for local businesses. That was my first tech job, I guess.
Yeah, I went to college in 1999. At the time, the dot-com boom was going strong. There was a lot of optimism in my undergrad class. The computer science major was bigger than it had been in previous years.
I definitely felt behind my peers. I had always been at the top of my class in math and science, but I didn’t have a whole lot of programming experience. The homework was hard.
When you’re first learning programming and something goes wrong, you don’t really know how to tell where it’s going wrong or why. It’s an intuition you have to develop over time. You learn where to look or what to push on to figure out why this particular piece of code isn’t working the way you expected it to work. And, even for experienced programmers, you never know how long that process is going to take. Sometimes you figure it out in a few minutes, sometimes it’s a few hours. Sometimes you never figure it out, and you have to start over from scratch.
As I mentioned, I started college in 1999, during the dot-com boom. By the time I graduated in 2003, the bubble had popped. Given what I was hearing about the job market, I decided to go to grad school.
I wasn’t sure. The actual experience of being a Ph.D. student was definitely hard for me. I felt again like, Oh my God, these other people are so much smarter than me.
When it came time to identify a research topic and write a thesis proposal, I really struggled. I think that was the hardest part of the whole process. I didn’t have a lot of academics among my family or friends. I didn’t know where to start. By the time I got through the thesis proposal, I was drained. The whole thing had left me feeling pretty burned-out—maybe about as burned-out as I’ve ever been.
My adviser had a very large stable of grad students. One summer, she didn’t have funding for all of us, so I ended up working with a different professor on a research project that eventually became a startup: something called reCAPTCHA.
Anybody who’s been around the internet for long enough has seen a CAPTCHA. These days, it’s the little thing that pops up with a checkbox that says “I am not a robot.” And sometimes it asks you to prove it by clicking images that have a taxi or a traffic light or whatever.
The professor I was working for invented the original CAPTCHA for Yahoo. Back in the day, Yahoo had a bunch of people signing up for free email accounts and then using them to send spam. The CAPTCHA was supposed to put a check on that.
The idea was that you’d display this distorted text and tell the user to type it. A computer could generate these tests very easily and know what the right answer was. But at the time it was hard for computers to read the distorted text. So the CAPTCHA prevented people from writing programs to automatically create a hundred thousand Yahoo email accounts for sending spam.
CAPTCHAs started to get used everywhere on the web. At some point we did the math and figured out, “Wow, people are filling in millions, maybe billions of these a day. They are collectively wasting a huge chunk of time typing in these obnoxious characters. Why don’t we try to do some good for the world and use CAPTCHAs to digitize books?”
It’s the same idea as the original CAPTCHA. But instead of displaying random words, you’re displaying words from old books or newspapers or magazines that optical character recognition software has trouble reading. So we get humans to read the words and tell us what they are.
We would display two words. One was a word that we actually knew. The other word was taken from a scanned book, and maybe we had some guesses. We would use the word we knew to confirm that the person was actually a human. Then, assuming that they passed the first word, we would count their answer for the other word as a vote for the correct spelling of that word.
If they happened to agree with the optical character recognition software, then, great—it was probably right. If they disagreed, then maybe you send the word out to a couple more people and try to get some agreement on what the word is.
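The two-word scheme described above can be sketched in a few lines of code. This is a toy model, not reCAPTCHA's actual implementation: the class name, the vote-counting structure, and the agreement threshold are all invented for illustration. The idea it captures is the one from the interview: a correct answer on the known control word admits the user as human, and their answer for the unknown word is then counted as one vote toward its correct spelling.

```python
from collections import Counter


class TwoWordCaptchaSketch:
    """Toy model of the two-word reCAPTCHA scheme (names and
    thresholds are illustrative, not the real implementation)."""

    def __init__(self, agreement_threshold=2):
        # How many users must agree before we trust a spelling.
        self.agreement_threshold = agreement_threshold
        # Maps an unknown-word id to a tally of submitted spellings.
        self.votes = {}

    def submit(self, control_answer, expected_control,
               unknown_id, unknown_answer):
        """Return True if the user passed the control word.

        A failed control word rejects the whole submission; a passed
        one records a vote for the unknown word's spelling."""
        if control_answer.strip().lower() != expected_control.lower():
            return False
        tally = self.votes.setdefault(unknown_id, Counter())
        tally[unknown_answer.strip().lower()] += 1
        return True

    def resolved_spelling(self, unknown_id):
        """Return the agreed spelling once enough users concur,
        otherwise None (meaning: send the word out to more people)."""
        tally = self.votes.get(unknown_id)
        if not tally:
            return None
        answer, count = tally.most_common(1)[0]
        return answer if count >= self.agreement_threshold else None
```

In this sketch, a word whose votes disagree simply stays unresolved, which mirrors the interview's description of sending a disputed word out to a couple more people until there is agreement.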
Well, our idea was that there must be places that have old works that they want to digitize. We could partner with them. And that became the business model. It started out as an academic research project but, by the end of that summer—this was 2007—we had decided to make it into an actual company.
We ended up getting a contract with The New York Times. We started digitizing old issues of the paper that were in the public domain. So we started with 1922 or 1923 and then kept going backward. We went in reverse chronological order because the older the scans, the harder they were to read.
That was a really fun project to work on. We were a small team, maybe six people, and we never had an office. It felt like being at another university research lab.
Then, in 2009, we found out that Google was considering acquiring us.
They wanted to use it to digitize Google Books.
At that point, Google had been scanning books in the public domain from libraries for many years. And, it wasn’t announced publicly yet, but Google was getting into the e-book business.
They wanted an e-book service that could compete with the Amazon Kindle. And part of their plan for doing that was to say, “Hey, we can offer higher-quality versions of all these old public domain books than what Amazon has.”
At the time, Amazon was claiming to have four hundred thousand e-books. Three hundred thousand of those were public domain books that had been scanned by somebody and gone through one pass of optical character recognition. There would be typos or misspellings or wrong words everywhere. Not really joyful to read.
The idea was that Google could have a similar catalog of these older public domain works but that they would actually be readable. Honestly, I think it was largely a marketing thing. To be able to say, “We have five hundred thousand e-books and nobody else has five hundred thousand e-books.”
Yeah. Google had a team, and still does, that worked on optical character recognition software. The project was open-source and called Tesseract. Tesseract was closely tied to the Google Books team. We met with them during our first few weeks, and sat next to them.
Tesseract was okay, but it wasn’t as good as the commercial software we had been using for reCAPTCHA, which was called ABBYY. So Google wanted reCAPTCHA to improve the text quality.
It was exciting. We had a lot of code that was specific to The New York Times. They had a particular format of articles and sections and so on. Now we were doing books, which is a very different sort of thing.
Also, the scale was completely new. At Google, we were working with millions of books. And we had way more access to computing power, obviously.
Yes, and reCAPTCHA was an old-school startup, before there was any of this cloud stuff. The front ends that served the CAPTCHAs were hosted on four servers: two on the East Coast and two on the West Coast. Eventually we added three servers in Europe for latency reasons—because of a big client that wanted low latency for their European users.
So we went from having a handful of servers that we had to manage ourselves to having as many resources as we wanted. In the first year or two at Google, we easily scaled up our traffic by six to eight times what it had been before.
When we got acquired, we were serving maybe four or five thousand CAPTCHAs per second. Which is not bad. Facebook used reCAPTCHA. So did Ticketmaster, Twitter, and a bunch of sites that were a big deal ten years ago and that nobody remembers anymore.
But within a year at Google, we were easily double or triple that. Not due to us doing any special marketing or anything. It was just organic growth from the sites that were already using us, plus others saying, “Okay, they’re part of Google now, so they’re not going to just disappear.” Which I guess is different than what people say when startups get acquired by Google these days!
Yeah.
By the time we arrived, Google Books had been going on for years. It was first announced back in 2004. All of the book scanning was done in collaboration with libraries. Harvard and the University of Michigan were the two largest ones in the U.S.
The way it worked was that books that weren't checked out would get trucked off from the library to a Google scan center. There, people turned the pages by hand while overhead cameras photographed them, so the scanning process was nondestructive.
I did get to see a scan center at one point. It’s one of the first situations that I became aware of where Google was using TVCs.3 Google wasn’t directly employing the people scanning the books—there was some third party that was responsible for the scan center operations at any given place.
Libraries liked the project, because their whole point was the preservation of written material. So preserving that material digitally for future generations seemed good. And the libraries got the scans of the books to do whatever they wanted to with them.
My best guess is that Larry Page just thought it would be cool. He probably decided it was worth doing because compared to the scale of Google, the amount of resources required was not huge. And it made sense given Google’s culture and mission back in 2004. Google was a search engine. The reasoning that I always heard was, “So you can search the web, but there’s a whole bunch of human knowledge that’s stuck in dead-tree form. Why can’t you search all of that as well?”
Many different things changed at Google, both culturally and engineering-wise, over the nine years I was there.
The Google of nine years ago felt much closer to Larry and Sergey’s original vision. It was honest techno-utopianism. Google Books was a great example of that. “We’re just going to try and scan all the world’s books because we did some numbers on the back of an envelope and it seemed like we could.” And they did actually scan 20 percent of all the books that had ever been published!4
When I started, Eric Schmidt was in charge. He seemed fine with having a loose, university-style atmosphere. Different teams worked on different things. Some of them succeeded and some of them didn’t. But the company made a ridiculous amount of money from Search so it didn’t really matter that much.
Then Larry Page became CEO in 2011, and things started to change.
He introduced a lot more structure and hierarchy. Previously, there were relatively few divisions. Most projects were under Search—one might be web search, another might be book search. Larry reorganized the company into major product areas. Search was a division, but not the only one; there was also Android, Cloud, and so on. And he put a senior vice president in charge of each of them.
Right away, there was less of that university atmosphere where you could just walk up and talk to anybody about their project and maybe help them out with it. Now the decisions were coming down from the senior vice president who was in charge of that product area. And those decisions were being driven by business objectives: overall, the company started caring more about the business and less about whimsical projects. Google started to become more like a typical big company.
I can’t think of a specific moment where there was a sudden influx. But percentage-wise, more and more of Google became temporary workers. It’s now a majority—Google employs more TVCs than full-time employees.5 It was probably 10 or 20 percent when I started.
At first, the introduction of TVCs seemed justified. Google is not in the business of hiring people to do everything. It doesn’t have the time. So having some third party manage that seemed like it made sense. But over time, Google has gone from, “There’s this special project that needs a few hundred people who have skill sets that no Googler currently has,” to taking full-time positions and turning them into temporary or contract positions.
Recruiters. I had some awareness of this when I arrived, because it was in the middle of the Great Recession. Before, recruiters were full-time employees. But after the financial crisis, they scaled down hiring and basically fired a bunch of people because they had nothing to do. Then when they spun hiring back up again, résumé screeners and college recruiting coordinators were hired on a temporary basis. I remember older employees who had been there longer than me grumbling about it. They used to know the recruiting folks in this office or that office. Then they started turning over every year or two.
It makes everyone’s life worse. That’s the point. I worked on one project where we hired a third-party design firm to do the web design for a data visualization that we were gonna release publicly. And that was incredibly obnoxious from an engineering perspective because they can’t see our code base. So I’m writing some stuff and they’re writing some stuff and when we stick it together it’s a giant mess because we’re not all developing in the same place.
The first thing that occurs to me is Google Plus.
Google Plus was meant to be a social networking competitor of Facebook. From the start, a decision was made that people were going to have to use their real names on Google Plus. A bunch of Googlers then pointed out that this policy was problematic for a bunch of reasons. Trans people may be known by different names in different contexts. Sex workers might not feel safe using their real name. More generally, anybody who doesn’t want to be automatically doxing6 themselves for the opinions they post on the internet might not want to use their real name.
Their main argument was that anonymous discourse on the internet is toxic. The idea was that if you made people sign up with their real name, there would be less bad online behavior, less trolling.
I very specifically remember an exec making an analogy to a restaurant. When you go to a nice restaurant, you have to wear a shirt and pants. If you want to eat at home, you can eat wearing whatever you want. But as members of polite society, we accept certain restrictions.
The analogy landed extremely poorly on a bunch of people internally. These execs have millions of dollars and are basically public figures. Of course they don’t have a problem with using their real names, so they couldn’t possibly imagine why anyone wouldn’t want to. Also, from a logical standpoint, their argument didn’t make a lot of sense. You can come up with an alias that looks like a real name and post the most toxic stuff in the world. It’s not violating the names policy that’s the problem—it’s the behavior you’re engaging in.
It took a while, but Googlers were able to push back and get the policy changed. It ended up in the state that I think should have been the state initially: you can type anything you want into the name field, so long as it's not offensive and you're not impersonating anyone. In fact, I remember one particular instance in which we disabled Neil Gaiman's account for impersonating Neil Gaiman! He escalated on Twitter, and Googlers went and fixed it.
Yes. Although in the subsequent years, many more rifts manifested and grew wider. Because back then, the feedback mechanism within Google was still working. There was still a measure of trust.
The Damore memo was definitely a turning point.
In July 2017, James Damore wrote and circulated his memo. He was fired the following month. Soon after, there were all these leaks of conversations within Google that got sent to his lawyers—screenshots of email threads or internal Google Plus threads. A lot of the posts had nothing to do with the memo. In many cases, they were written years before the memo. But Damore’s legal counsel used them to make Google look like one big evil leftist conspiracy.
Then the leaks ended up on right-wing sites. A bunch of Googlers found themselves getting doxed and getting death threats. A lot of people were pretty scared. People’s photos were getting posted on 4chan and Stormfront and 8chan and all these other terrible sites.7 A bunch of well-known alt-right provocateurs, including Vox Day and Milo Yiannopoulos and various others on Breitbart, were involved.
We were completely blindsided. There had never been any culture of leaking internal posts to score political points before. Or of people getting doxed and threatened. And the company didn’t know how to deal with it.
Google has a physical security group that is very responsive. If there is an earthquake or a natural disaster or something, they call all the Googlers in the area to ensure they’re safe and to provide help if needed. But for this kind of online attack, they didn’t have a clue what to do.
Initially, there was no official support from Google for the Googlers who were affected. We got sent some useless stock resources that told us not to use our real name and address online—ironic, given the Google Plus controversy. Nobody seemed to have any idea what the hell was going on.
Google’s lawyers made the argument that the court should redact employees’ names because they weren’t relevant to the lawsuit. Eventually the judge agreed, and the docs that were leaked were retroactively redacted from the official court website.
But by then the screenshots were all over the most toxic parts of the right-wing internet. You can’t remove stuff once it’s gotten out there. Some Googlers put together a letter to management asking for more resources for keeping the workplace safe. Basic things, like having codes of conduct on internal mailing lists that were unmoderated. But the letter was largely ignored.
I don’t know the numbers. Nobody I worked closely with. On some mailing lists, there were certainly people who took his firing as proof that Google was biased against conservative employees. Of course, the downside of Google’s mailing list culture is that it’s easy for twenty or thirty people to troll every thread.
Tech has an eclectic mix of political beliefs.
I would say that most rank-and-file people in tech tend to be on the liberal or socialist side of the spectrum. They believe in democratic institutions and government and things like that. But then you also have very libertarian people. For them, governments are bad at understanding technology, so any regulation will be unhelpful or misguided or even straight-up malicious. In their view, it's useless, or worse, for governments to even try.
Then you have the actual executives of these companies, who are often socially liberal but very fiscally conservative. They’re multimillionaires or billionaires, so they would rather not pay taxes. They do everything that they can to reduce the taxes that the corporation pays and the taxes that they personally pay, because it’s a huge chunk of their net worth.
The politics of tech mostly falls into this tripartite division.
As long as I can remember, there was always a basic recognition within Google that big tech companies have real power—that their decisions can affect the geopolitics of the whole world. In 2010, very early in my tenure at Google, the company pulled out of China because the Chinese Communist Party was hacking into Gmail accounts belonging to dissidents and reporters.8 Up to that point, Google had been offering censored search results on Google.cn. In response to the hacking, the company said they would start providing uncensored search results or nothing at all—which quickly became nothing at all.
So it was always clear that what we did mattered. And that recognition was what motivated the rank-and-file campaign around the Google Plus real-names policy: people saw that there were downsides to the policy that would negatively affect certain groups.
But I would say that 2016 and the aftermath brought these issues into much sharper focus. Algorithmic news feeds, fake news, content that’s misleading or scammy or worse—Cambridge Analytica is one famous example.9
Overall, there was more and more of an understanding within Google and within the tech industry more generally of the consequences of what our companies were building. And it felt like a real departure from the old techno-utopian idea that if you just provide access to information, everything will turn out great. People on the internet are jerks. You have to design your systems with the assumption that hostile actors are going to try to use them to do bad things in various ways. And those actors aren’t always just individual assholes. They’re often part of large, well-coordinated groups. We’re in the middle of a planetary information war.
This returns to our discussion earlier about the reorganization of the company that started after Larry Page became CEO in 2011, and which continued when Sundar Pichai took over in 2015.
The way the company was restructured into different divisions with distinct product areas changed the incentives when it came to pursuing controversial projects like reopening Search in China or working with the U.S. Department of Defense.
Take the Department of Defense. One of the divisions is Google Cloud. They want to be number one in cloud computing. They want to beat Amazon and Microsoft and the other competitors in the market. So for the senior vice president in charge of that division, it’s a no-brainer to take military contracts. At the end of the day, what matters is increasing revenue for that division.
The early Google was different. Back then, it was clear that 90 percent of Google was Search, and everything else was free fun stuff that would eventually redirect people to Search. So you could make the argument that if Google engages in projects that compromise its credibility, people will trust Google less, and Search revenue will go down. Now that the company is split up into these separate fiefdoms, it’s harder to make that case. Cloud doesn’t really care if they take a controversial contract that undermines trust in Search.
Yeah. It’s more hierarchical and has less of that academic feel. The number of engineers and product managers and designers that you can have working on your project is driven by the business case for that project. It’s far less of the freewheeling atmosphere of, “Sure, we can have ten or fifty people working on this experimental thing without knowing whether there’s revenue there or not.”
So there are fewer organic projects growing out of the curiosity of small teams. The direction is coming from the top and reflects specific business objectives, such as the need to break into this market or beat this competitor.
In the Google Plus situation, there was an escalation path and a dialogue between rank-and-file workers and upper management. It was mediated by a senior engineer on the project who served as a kind of liaison. He would answer the questions about the real-names policy at TGIF, Google’s weekly all-hands meeting, with a level of candor and humanness that the other execs did not really exude.
It was clear that he understood the reason people had problems. He was willing to compromise—even if there were challenges, even if it was going to take a while. He also had credibility on both sides: as one of the project’s technical leaders, he was trusted by the rank-and-file engineers, but he was also trusted by upper management. Management was used to respecting his technical decisions, so they respected his arguments about other aspects of the project as well.
He left Google a couple of years ago. When he did, we lost a good liaison between the two sides. But as Google has gotten larger, I also think there’s a growing feeling among the executives that this kind of back-and-forth isn’t worth it. They feel impatient. They don’t have time.
Sundar has said on more than one occasion that Google doesn’t run the company by referendum. Which is not something that anybody has actually asked for! It’s a very strange response to employee concerns.
The point is not necessarily to make every decision democratically but to at least help employees understand the reasons why a decision has been made. Then they’re free to disagree, and can refuse to work on the project, or even leave the company. But these days, the answers from management just come across as business-speaky and vague. They try to placate people without actually showing that they’ve understood the substance of the concerns that have been raised. That makes it hard to feel heard, or even to know your own feelings about a specific project.
Dragonfly is one where I could see an ethical gray area. We were building a search engine that gave the Chinese government the ability to censor certain topics and pages, and to surveil specific citizens and their searches.
On the other hand, people in China currently use Baidu, which is not very good.11 It returns all kinds of wrong answers about medical information that they search for. We know that’s a problem. We know they’re not going to get effective treatment. Baidu is bad for their health. So you could argue that if Google provided better search results with better medical knowledge, the Chinese people using our search engine would be healthier and live longer lives.
I could see plausible arguments on either side. I could even line up on the side of Dragonfly being a net good if Google leadership had shown signs that they had understood and thought about these ethical issues ahead of time instead of after the fact—only after people raised concerns. After you've already built the prototype is not really the time to start thinking about the ethical ramifications. And the arguments that were actually presented by the executives were very bad. Like, as a college freshman I would've been able to tell that they weren't valid arguments.
A lot of what was missing was the mediation aspect. With Google Plus, we had somebody who could act as a go-between. We had an escalation path for concerns. You could send an email and get a response.
I have never once received a response to an email that I wrote to a Google executive who is on the board now. It just doesn’t happen. They’re busy people. Maybe they read it, maybe they don’t. Either way, it’s not a useful mechanism for feedback. And as the company and the number of controversies have grown so much larger, the all-hands meeting has become much less useful. You can’t have a dialogue if all you get to do is ask one question every week or two.
It's also become harder to know who to even ask. When Dragonfly first became widely known internally, it wasn't clear who was running the project. This felt intentional: the execs went into panic mode when Dragonfly was discovered, so they stonewalled. It wasn't clear who you could ask questions of other than Sundar, and that remained the case for the first month or so that we knew about it. It is extremely weird when the only escalation path is going up the org chart to your CEO.
I have a friend whose opinion is that Google strongly believes in doing the right thing—so long as it doesn’t cost Google money.
Honestly, I don’t know what the right level of cynicism is. With the JEDI contract, Google probably wouldn’t have won anyway because Amazon is so heavily favored.12 So, when the employee advocacy started, the execs might have figured they could placate the workers by not competing for something that they weren’t going to win anyway.
There’s definitely been a major loss of trust on both sides. One way this manifests is through leaking: information that would have previously remained confidential keeps getting leaked to media outlets.
This creates a vicious cycle. Execs feel like they can’t say anything useful because anything they say might end up on Twitter. And workers don’t feel listened to because the execs aren’t saying anything useful—which then makes them more likely to try methods of pressure that don’t involve keeping the conversation inside the company.
If I were Google leadership I don’t know how I would break this cycle. It’s probably mathematically impossible at this point.
Media pressure is currently among the most useful forms of pressure that workers can exert on Google. They try to inflict a PR hit on the company for doing controversial things.
This can also affect hiring and retention. If engineers with many job prospects see Google as a place that's doing uncool or unethical work, they'll simply take another job elsewhere. It'll be harder for Google to attract talent, and in some cases to retain the talent it has, because people object to these projects.
Google certainly is its own separate world in terms of company culture. I get the feeling from folks at Amazon or Microsoft or other places that they have fewer company-wide forums in which rank-and-file employees can express their displeasure about something.
To be clear, these forums aren’t just about social or political or product issues. There are many mailing lists that anybody can join. There are mailing lists for people who like skiing and people who like video games and people who like music. There are mailing lists for people who are trying to go walk their dogs together every Thursday or whatever.
So Google’s culture does seem somewhat unique in that way. The mailing lists make it easy to quickly organize a couple hundred to a couple thousand people around an issue. You saw that with all of the worker campaigns, going back to Google Plus. The feeling that I get from workers at other companies is that this sort of culture doesn’t exist elsewhere.
At some point, it felt like the controversies were stacking up faster than we could handle them. I could have made the decision to ignore them and just go heads-down on my engineering work. For a while, I tried.
Over the years, even as my feelings about the company grew more complicated, I had felt an ethical duty to stay and to continue doing what I could to push for changes in the direction of certain projects. I knew that I could apply more pressure from within the company than from outside. But eventually it felt like there was no way that I could usefully participate in that process. I lost faith that my opinions would be reflected in product decisions anymore. So I decided to leave.
Other people made different choices. Some people resigned much sooner. Some people are still around. One reason I felt all right about leaving, in fact, was because we’ve got a deep bench now. It’s far from over. There are a lot of people inside who are going to keep pushing.