TECH CEO HUBRIS
In Greek mythology, King Minos of Crete commissions the master craftsman Daedalus to build a subterranean labyrinth in which he can keep the Minotaur – a human-bull hybrid, born to his wife. Once its construction is complete, the king imprisons Daedalus and his son Icarus in a tower so that he can keep the existence of the labyrinth secret. Planning their escape, Daedalus makes wings from feathers held together with string and wax, and teaches Icarus to fly. As they get ready to take off, Daedalus reminds his son not to fly too low over the sea in case the spray saturates the feathers, or too high in case the sun melts the wax. Icarus, of course, ignores him. So enraptured by the experience of flight that he forgets his own mortality, he soars into the sky; the wax holding the wings together melts and he plunges to his death.
The story of Icarus can be read as a warning of the dangers of hubris: excessive pride that leads to over-reaching, and ends with a disastrous fall to earth. It is commonly used in writing about tech company founders and CEOs; Elizabeth Holmes and Adam Neumann are two recent examples.
Holmes’s company Theranos promised revolutionary blood tests that could be administered with a single finger-prick. Just a few drops of blood would be enough for a Theranos lab to screen for hundreds of medical conditions, from STDs to diabetes to cancer. After raising some $700 million from investors including Rupert Murdoch, Carlos Slim and the Walton family, Theranos spent ten years developing its blood tests in great secrecy. It did not publish papers about its research in peer-reviewed journals. Demonstrations were only given to people who were prepared to sign a non-disclosure agreement and be accompanied to the toilet when visiting its Palo Alto headquarters. Upstairs, Holmes’s conference room was modelled on the Oval Office, with the same configuration of furniture and bulletproof windows. She had a private jet and a detail of bodyguards who referred to her by the code name ‘Eagle One’.
In 2015, forty Theranos testing centres opened in branches of the pharmacy chain Walgreens in Phoenix, Arizona. The company appeared to be on the verge of realising its vision to make comprehensive blood screening available at a fraction of the usual cost. The trouble was, the tests didn’t work. Investigative journalism revealed that Theranos hadn’t succeeded in developing new screening technology after all; instead, they had resorted to diluting their tiny blood samples and analysing them using standard equipment manufactured by Siemens. The test results were highly unreliable as a result, which led to Walgreens customers who used Theranos’s service being wrongly advised to stop taking medication they still needed, or to have medical procedures that weren’t necessary. The company, valued at $10 billion, collapsed, and Holmes was indicted on multiple counts of fraud.
While Adam Neumann has not committed fraud, he has destroyed significantly more shareholder value than Holmes as co-founder and CEO of upscale office rental company WeWork. Early in 2019, Japan’s SoftBank ploughed $2 billion into WeWork at a valuation of $47 billion, with the expectation that it would IPO at a valuation of between $60 and $100 billion later that year. For context, $47 billion is almost double the market capitalisation of Tesco or Barclays Bank at the time of writing. Barely a year later, after failing to IPO, WeWork is wholly owned by SoftBank, which now values it at $2.9 billion – $44 billion less than previously.
Neumann was renowned for his personal charisma and eccentricity. It took him only twenty-eight minutes to persuade SoftBank to make its initial $4.4 billion investment in WeWork in 2016. He served tequila shots after redundancy announcements and smoked weed on a rented Gulfstream jet, leaving a spare portion on board for the return journey, stashed in a cereal box. He personally trademarked the term ‘We’, and then sold it back to the company for $5.9 million. WeWork’s IPO paperwork recorded the company mission as being ‘to elevate the world’s consciousness’ – a tall order for a serviced office business, even if it does offer its tenants free flat whites.
Unlike Holmes, Neumann didn’t mislead anyone about his company’s product or capabilities. But he did convince investors that there was something magical about WeWork that meant it belonged alongside companies with proprietary technologies. Tech companies’ valuations are predicated on their marginal costs being close to zero: creating a million new Twitter accounts or a million new copies of Microsoft Office is basically free. As a result, the only practical limit on the growth of a tech company is demand. WeWork, however, is completely different: satisfying more demand means leasing and fitting out more buildings. By 2017, it had used five million kilograms of aluminium to create the office dividers in its 253 locations; it bought a million square metres of white oak flooring in that year alone. It is hard to soar into the sky when you are carrying all that weight.
Tech CEOs’ desire to defy gravity can be literal as well as metaphorical; the goal for Elon Musk’s SpaceX is to reach, geo-engineer and colonise Mars, as he finds the thought of a future in which we are not a ‘multiplanet species’ ‘incredibly depressing’. The same goes for their desire to transcend mortality. Musk’s vision of living forever involves brain-computer interfaces that will enable people to upload their minds to the cloud; other tech CEOs, including Jeff Bezos of Amazon, Google founders Larry Page and Sergey Brin, and PayPal’s Peter Thiel have invested in cryogenic facilities, stem cell treatments, experimental blood transfusion procedures and anti-ageing drugs. Their underlying assumption is that humanity’s planetary and biological limitations are engineering problems that can be methodically solved.
What is it about the tech sector that seems to encourage hubris? It may have something to do with being remarkably successful at a young age. Steve Jobs was a billionaire at forty; Jeff Bezos at thirty-five; Bill Gates at thirty-one; Larry Page at thirty. Mark Zuckerberg and Evan Spiegel both reached the milestone before their twenty-sixth birthday. If you achieve so much at such an early stage of life, perhaps it’s not surprising that you think you can do literally anything.
Their hubris must also have something to do with their forgetfulness or ignorance of history. The basis of all the world’s largest tech companies is massive US government investment during the Cold War: the work of the Defence Advanced Research Projects Agency (DARPA) laid the foundations for the internet, on which Apple, Amazon, Microsoft, Google, Facebook, Snap and every other big tech company has been built. And yet it is rare to hear tech CEOs credit the government in their own origin stories – they are more likely to express libertarian views, even while their accountants work to maximise subsidies and minimise corporation tax bills. To return to the myth that began this chapter, we might think of DARPA as Daedalus: once Icarus has taken off, he forgets all about his father the master craftsman, instead attributing the miracle of flight to his own genius.
There are many more examples of hubristic technological visions, ranging from Musk selling 20,000 flamethrowers to help Americans defend themselves in the event of a zombie apocalypse, to Thiel’s support for ‘seasteading’ – the plan to build new, privately owned city states on concrete barges in the ocean. It’s enjoyable to poke fun at them and to experience schadenfreude when their plans come crashing down to earth. However, their hubris also has a less spectacular and much more serious side.
The hubris of tech CEOs gives us a world in which the role of the state and intergovernmental organisations is increasingly subsumed by private foundations – a system sometimes referred to as ‘philanthrocapitalism’. Bill Gates is sympathetic to the account of inequality laid out by the economist Thomas Piketty, who in his hugely influential book Capital in the Twenty-First Century shows that return on capital always exceeds economic growth, leading to socially damaging concentrations of wealth. Nevertheless, Gates opposes Piketty’s proposal of a global wealth tax to address inequality. As Linsey McGoey writes, we can only conclude that Gates believes that he and his co-trustees of the Gates Foundation, Melinda Gates and Warren Buffett, are better than governments and international agencies at spending money to alleviate poverty, feed the hungry, raise educational attainment and promote health. He must also believe that the results achieved by private foundations justify their lack of transparency and accountability, and the damage they do to popular support for government spending on public services.
This is not to criticise the work of the Gates Foundation – its endowment is clearly a more socially beneficial use of $47 billion than, say, buying all the shares in WeWork. Rather, I want to highlight Gates’s hubris in determining that his personal goodwill is an acceptable substitute for political systems. In the event that the world’s governments manage to implement a wealth tax, surely he should pay up like the rest of us.
The hubris of tech CEOs also leads to a world in which norms of corporate governance are dissolving. One example is the prevalence of dual-class stock structures, which give some shareholders voting rights that are disproportionate to their equity holding. At an early stage of a company’s life, having different classes of shares can protect founders with long-term visions from being forced to take a direction they disagree with by investors with short-term financial motivations. This was certainly a consideration for us at Bought By Many. We were trying to address longstanding deficiencies in insurance, and didn’t want investors to be able to make us charge higher prices or cut investment in product development. In the event, we managed to keep more than 50 per cent of the equity in the company for long enough that we never needed extra protection, but creating a different share class that gave founders outsize voting rights would have been helpful if we had needed to raise more capital. However, dual-class structures are now being used to entrench founder control at a much later stage, when companies are preparing to float on the stock market.
At WeWork, a dual-class stock structure gave Adam Neumann twenty times the voting rights of ordinary shareholders. Furthermore, its IPO papers gave his wife and business partner Rebekah Paltrow Neumann the exclusive right to choose his successor in the event of his death. Incredibly, Adam and Rebekah even intended future generations of Neumanns to retain a controlling stake in the company. Speaking to WeWork employees in 2019, Neumann said: ‘It’s important that one day, maybe in 100 years, maybe in 300 years, a great-great-granddaughter of mine will walk into that room and say, “Hey, you don’t know me; I actually control the place. The way you’re acting is not how we built it.”’ Neumann’s assertion was that only his biological descendants would be capable of holding the company to its consciousness-raising mission.
WeWork’s dual-class structure is not an outlier: Pinterest’s founders receive twenty votes for each one vote by a holder of ordinary shares, Snap issued non-voting shares at its IPO in 2017 and Larry Page and Sergey Brin still control more than 50 per cent of the voting rights at Google. But the most significant example of all is Facebook. Not only is Mark Zuckerberg both chairman and CEO; he also holds a majority of the voting stock. He can, at his discretion, complete an acquisition, merger or disposal that Facebook’s other shareholders unanimously oppose, or block one that they support. He can appoint or remove members of Facebook’s board at will, but cannot himself be dismissed. His management of the company is therefore completely unchecked – he is accountable only to himself. Even Facebook’s own filings with the US Securities and Exchange Commission refer to this arrangement as one of ‘concentrated control’. When he dies, control will pass intact to whomever he has nominated.
How does Zuckerberg justify ‘concentrated control’? He claims that as Facebook is what he calls ‘a controlled company’, it can resist ‘the whims of short-term shareholders’ and instead ‘serve our community’ by making ‘decisions that don’t always pay off right away’, such as the rejection of take-over approaches, investments in security that adversely impact profitability or the acquisition of Instagram. He does not argue that Facebook’s mission of ‘giv[ing] people the power to build community and bring the world closer together’ justifies extraordinary means, but claims that ‘concentrated control’ is a superior form of corporate governance. He appears to believe that conventional governance does nothing more than promote the immediate material interests of shareholders at the expense of a company’s users. Like Bill Gates, Zuckerberg also makes a philanthrocapitalist argument in favour of ‘concentrated control’: Facebook’s dual-class stock structure enables the vast majority of his personal wealth to be channelled into third-sector and charitable initiatives that promote education, public health and social justice.
The irony of Zuckerberg’s attachment to ‘concentrated control’ is that he is otherwise a quintessential liberal. I am not using the word loosely, as a synonym for ‘progressive’, but to denote a particular political ideology that emerged in the seventeenth and eighteenth centuries, and is still dominant in the West. Historically, liberalism is associated with thinkers like John Stuart Mill, Alexis de Tocqueville and Jeremy Bentham, while its more recent sages are philosophers like John Rawls and Martha Nussbaum. While there is much debate among political theorists about what liberalism is and isn’t, one of the things they all agree on is that liberals can’t stand absolute power, because of the danger it poses to human freedom. Whenever they see power being concentrated, liberals become suspicious and instinctively want to constrain it through laws and systems of checks and balances. Otherwise impeccably liberal, Zuckerberg seems blind to the fact that his ‘concentrated control’ of Facebook is an illiberal affront. And that’s hubris.
However, it is when hubris is combined with liberal politics that Facebook is at its most dangerous. To understand how it has enabled authoritarian repression and ethnic cleansing, we need to understand what I will call ‘Zuckian liberalism’.
Zuckian Liberalism
When looking for a synthesis of Zuckerberg’s politics, commentators have generally referred to a 5,700-word post called ‘Building Global Community’, which he published in 2017. In it, Zuckerberg articulated the role he wanted Facebook to play in the liberal cause of ‘spreading prosperity and freedom, promoting peace and understanding, lifting people out of poverty and accelerating science’. His critics were unimpressed: John Naughton dismissed the post as ‘astonishingly naïve’, while Shoshana Zuboff regarded it as a smokescreen for Facebook’s version of surveillance capitalism.
But there is much more to Zuckerberg’s thought than this post. In fact, a team at Marquette University in Milwaukee has collected more than 1,000 transcripts of things he has said or written since 2004 in a comprehensive digital archive called ‘The Zuckerberg Files’. It’s from mining this resource that I draw my conclusions about Zuckerberg’s liberal ideology and his commitment to Facebook’s stated mission. Over the course of the next few pages, I will outline why you should believe him when he says he wants to give power to people and bring the world closer together.
So what exactly is Zuckian liberalism? The easiest place to start in answering that question is by saying what it’s not. Although it takes for granted that freedom of economic exchange across borders is a good thing, Zuckian liberalism is not the same as neoliberalism – the ideology of deregulation and free markets developed by Friedrich Hayek and later embraced by Margaret Thatcher and Ronald Reagan. Nor is it the libertarianism associated with figures from Silicon Valley counterculture like John Perry Barlow, which rejects the usefulness of government and advocates utopian experiments like the Burning Man festival and seasteading. Instead, Zuckian liberalism is a more classical version, which would be recognisable as liberalism by the ideology’s canonical thinkers. It’s based on four interlinked ideas:
1. Humans have a plurality of values.
2. They are also rational.
3. Therefore, encouraging freedom of expression . . .
4. . . . leads to greater mutual understanding and progress.
Using Zuckerberg’s own words, we can elaborate these ideas. Technology is ‘a huge lever for improving people’s lives’, so Facebook has a ‘moral responsibility’ to provide universally accessible ‘social infrastructure’ that enables ‘meaningful connections’ and ‘meaningful interactions’, offline as well as online. In a world of ‘wildly different social and cultural norms’, Facebook must be ‘a platform for all ideas’ that ‘gives every person a voice’, ‘helping promote diversity and a plurality of opinions’ while avoiding judgements about what constitutes acceptable speech. So fundamental is the importance of freedom of expression, that even the speech rights of ‘people who deny that the Holocaust happened’ must be defended. Since ‘most people are pretty open-minded’ when engaged in a dialogue, this will ultimately ‘build common understanding’, allowing problems such as Islamophobia to be overcome through a focus on ‘what unites us’. Once ‘all people have the power to share their experiences, the entire world will make progress’ towards becoming a ‘global community that works for everyone’.
Zuckerberg’s perspective is intimately related to the history of liberal thought. The view of the past as the story of human progress can be traced back to the German liberal philosopher Hegel (1770–1831), although Zuckerberg’s Facebook posts suggest his immediate influences are books by modern-day admirers of Hegel, like The Better Angels of Our Nature by Steven Pinker and Sapiens by Yuval Noah Harari. The idea that connections between people of different nationalities contribute more to world peace than international relations between states appears in the writing of the liberal political theorist Benjamin Constant (1767–1830) and the radical liberal politician Richard Cobden (1804–65). The mutual interdependence implied by Zuckerberg’s ‘global community’ echoes what New Liberals like L. T. Hobhouse (1864–1929), J. A. Hobson (1858–1940) and Herbert Croly (1869–1930) called ‘organicism’. Influenced by advances in biology, organicism saw political communities as living organisms, meaning that if one part of the ‘body politic’ was injured or diseased, it would inevitably harm the whole – an idea that provided intellectual justification for public health reforms and the introduction of state pensions and unemployment benefits in the early twentieth century.
What’s more, Zuckian liberalism emphasises the value of ‘offline’ community, as seen in clubs, associations, town hall meetings and so on. In doing so, it draws on two classic liberal texts: Alexis de Tocqueville’s Democracy in America (1835), which celebrated civic culture, and Robert Putnam’s Bowling Alone (2000), which lamented its decline. Zuckian liberalism is also pluralist, meaning it affirms that people with diverse values, beliefs and lifestyles can peacefully co-exist. Pluralism is rooted in the liberal philosophy of Isaiah Berlin (1909–97), Martha Nussbaum (1947–) and John Rawls (1921–2002), who is clearly a direct influence on Zuckerberg. In A Theory of Justice, Rawls proposed a thought experiment called the ‘Original Position’, asking the reader to imagine themselves before birth, without any awareness of their gender, race, nationality, abilities or preferences – and then to consider what sort of political system they would want to be born into. (Spoiler: you are meant to realise that liberal democracy is the fairest system of all). Zuckerberg has described using Rawls’s thought experiment as a tool to help him make decisions, as has his long-time lieutenant Andrew ‘Boz’ Bosworth, who used it to explain to staff why Facebook would not be fact-checking ads or posts by candidates in the 2020 US presidential election. Another core Zuckian concept is ‘enough common understanding’, a paraphrase of Rawls’s ‘overlapping consensus’ – the idea that it’s possible for groups with very different values to agree on common principles as the basis for political institutions. Essentially, Zuckian liberalism claims that if ‘enough common understanding’ is built between Facebook users, Facebook can form the basis of a global political community and divisive nation-states can wither away.
Finally, the Zuckian liberal case for freedom of expression is a version of the arguments made by John Stuart Mill in On Liberty, one of the most important texts in the history of liberalism. Mill argues that people are ‘improved by free and equal discussion’, and that ‘the collision of adverse opinions’ enables those involved in a lively debate to get progressively closer to the truth. As a result, it’s critical that people have ‘absolute freedom’ of ‘expressing and publishing’ ‘opinion and sentiment on all subjects’, without fear of being censored or punished. For Mill, it is particularly important to protect minority perspectives, so that ‘the tyranny of prevailing opinion and feeling’ can be avoided. Under Zuckerberg’s leadership, Facebook has sought to enshrine these liberal principles for the digital age.
For many readers of this book, these ideas will sound eminently sensible; Western cultural norms are so permeated by liberalism that they may seem self-evident. But here’s the rub: the arguments presented in On Liberty are from 1859. The mid-nineteenth century was liberalism’s youthful and optimistic heyday. At that point, when schooling was not yet compulsory, when just one in five men – and no women – had the vote and when urban poverty was widespread, there was every reason for liberals to believe that state education, the expansion of political and civic freedoms, and advances in material prosperity would bring an end to social ills like prejudice, intolerance and sectarianism.
But it was not to be. By the first decade of the twentieth century, there was already a ‘Crisis of Liberalism’; a 1902 essay of that title by Célestin Bouglé wondered how far liberals should tolerate freely-chosen racial and religious bigotry before using the state’s power to suppress it. Hobson’s 1909 book of the same name argued that liberals were losing a struggle with conservative and reactionary forces – or, to put it in today’s terms, a ‘culture war’. Even with the Battle of the Somme, Hiroshima and countless other modern atrocities still in the future, the limitations of classical liberalism were already apparent.
The digital age has underlined these limitations. Mill’s notion of ‘harm’ was narrow: it encompassed physical injury and being dispossessed of property – a definition that is not sufficient in a world where people may be harmed by cyberbullying, trolling, revenge porn or livestreamed violence. Mill thought the collision of opinions revealed the truth because it helped the ‘disinterested bystander’ see things more clearly. But in our own time, social media feed algorithms filter out the debates in which we would be ‘disinterested bystanders’, meaning we see only the ones in which we are already engaged participants. Mill was ahead of his time, rejecting convention in his personal relationships and speaking up passionately in favour of equality for women, but it’s a mistake to assume his ideas about free speech are timeless truths that can be applied today without re-evaluation.
I believe that Zuckian liberalism is needed to explain many of Facebook’s actions; the imperatives of capitalism aren’t sufficient by themselves. Let’s consider the $19 billion acquisition of WhatsApp as an example. Shoshana Zuboff attributes it to Facebook’s desire to control ‘the gargantuan flows of human behaviour’ that ‘pour through’ the application. I don’t find that explanation convincing: end-to-end encryption of WhatsApp means that most of this data is not accessible to Facebook; furthermore, WhatsApp contains no media inventory and therefore generates no advertising revenue. Explaining the rationale for the acquisition, Zuckerberg situated it within the context of Facebook’s goal to ‘connect the entire world’ and ‘build the infrastructure for global community’, which is consistent with the argument he makes in texts that appear in The Zuckerberg Files. It is worth noting that when he made these remarks, Zuckerberg’s audience was composed of investment analysts from banks such as UBS, JP Morgan and Nomura Securities. If Zuckian liberalism is a smokescreen or a decoy, it would serve no purpose in such an overtly capitalist setting. Instead, its presence demonstrates that it is Facebook’s guiding ideology. That the company’s vice-president of global affairs, Sir Nick Clegg, and its vice-president of public policy in the EMEA region, Lord Allan of Hallam, are both former Liberal Democrat MPs is not a coincidence.
Zuckian liberalism is also the key to understanding why Facebook is so indiscriminate in its distribution of reach power, the distinctive new form of power wielded by ordinary users of digital technology. Many of the worst unintended consequences of Facebook’s actions stem from the Zuckian liberal conviction ‘that on balance people are good, and that therefore amplifying [their capacity with technology] has positive effects’. It seems that when one is ideologically committed to the idea of human goodness and progress, one fails to imagine how the Graph API might ‘amplify the capacity’ of app developers with sinister intent, how Lookalike Audiences might enable marginal political parties to grow as rapidly as e-commerce start-ups or how encrypted group WhatsApp messaging might be used to incite and co-ordinate violence.
Presented with evidence of these unintended consequences, Zuckian liberalism is remarkably durable. In a post on Facebook’s role as a channel for disinformation during the 2016 US presidential election, Zuckerberg reaffirmed his faith in human virtue and progress, concluding, ‘In my experience, people are good, and even if you may not feel that way today, believing in people leads to better results over the long term.’ Despite earning negligible revenue from Myanmar and incurring significant reputational costs, Facebook continues to operate there. Even in acknowledging that the business he created had facilitated the ethnic cleansing of the Rohingya, Zuckerberg returned to his liberal optimism about human progress, finding it ‘heartening’ that millennials ‘identify the most with not their nationality or even their ethnicity . . . [but] as a citizen of the world’. His conclusion is not that he must reckon with the consequences of Facebook’s indiscriminate distribution of reach power, but that the rise of a younger generation of enlightened global citizens will enable humanity to transcend national and ethnic conflict. For Zuckian liberals, human cruelty is like the colonisation of Mars or mortality: an engineering problem waiting to be solved. As a result, we should expect that Facebook’s products will continue to be used to inflict suffering.
The hubris of tech CEOs makes a potent cocktail when it is mixed with a different kind of politics. While Mark Zuckerberg wants to transcend the nation-state, other tech CEOs want to entrench it. For them, the world isn’t a place where humanity comes together to make progress; it’s the arena of ceaseless competition between great powers. In international relations, this view is known as realpolitik or realism. While liberals believe that co-operation yields the best results, realists play a zero-sum game that can only end with winners and losers, and in which you have to pick a side.
For realist CEOs, the proper role of American tech companies is to advance American strategic interests. At times, that may be aligned with liberal values – for example, when America’s foreign policy agenda includes promoting democracy and free trade – but at other times, it may be in tension with them. During the so-called ‘War on Terror’, many tech companies appear to have made their peace with helping the US government illiberally monitor the private communications of its own citizens, as the NSA whistle-blower Edward Snowden later revealed. At a time of intensifying economic and geopolitical competition with China, the former Google CEO and executive chairman Eric Schmidt is in a new job as chair of the Pentagon’s Defence Innovation Board. There, he warns that it would be a strategic mistake for America and its allies to allow Huawei to build 5G infrastructure for them – not because of the risk of being spied on by the Chinese state, but because it would allow a Chinese company to consolidate its advantage over Western rivals in a critical emerging technology.
Google seems to have taken a more liberal turn since Schmidt’s departure. In 2018 it withdrew from Project Maven, a Pentagon initiative that used artificial intelligence to analyse aerial imagery. In an open letter to the CEO Sundar Pichai, Google employees expressed concern that Maven would be used to improve the accuracy of drone strikes, asserting that ‘Google should not be in the business of war’ – perhaps forgetting that Google owes its existence to DARPA. Luckily for the US government, the CEO of Palantir Technologies, Alex Karp, took a different view and was more than happy to step up to the plate.
Founded in the wake of 9/11 with seed investment from the CIA’s venture capital fund, Palantir is often referred to as a ‘big data’ company, but that’s slightly misleading. Unlike Google or Facebook, Palantir doesn’t collect data itself – instead, it provides software tools and analytical consulting services to help organisations get insights from their own data. Having initially focused on US defence and national security, it has since diversified into other branches of government like immigration and law enforcement, and into industries like financial services, manufacturing and pharmaceuticals. It turns out that the same tools and techniques used to predict the locations of roadside bombs in Iraq and uncover international cyber espionage networks can forecast fraudulent benefit claims and detect insider trading at investment banks.
Palantir’s critics regard this as another example of technology and data being ‘weaponised’: products developed to fight America’s adversaries are being turned against American citizens, on American soil. In addition to Project Maven, a particularly controversial example is their involvement with so-called ‘predictive policing’ – the use of big data analytics to deploy law enforcement personnel in anticipation of crimes being committed. Eerily reminiscent of the film Minority Report, predictive policing can lead to self-reinforcing cycles of racial and class discrimination. A further example is Palantir’s work building the software that enables US Immigration and Customs Enforcement (ICE) to identify illegal immigrants and compile the evidence needed to deport them. Data from a variety of sources including the FBI, the Drug Enforcement Administration and private security contractors is pooled on Palantir’s platform, empowering ICE agents to do their case work more effectively.
This is very different from the use of Facebook Lookalike Audiences to recruit supporters to far-right groups, or of encrypted WhatsApp messaging to incite racial hatred. Better-targeted bombs, higher numbers of arrests and faster deportations are not unintended consequences; they are the kinds of outcomes Palantir expects its technology to achieve. So how does Karp justify this? If we scrutinise public statements he has made in interviews and presentations, what we find is realism. He describes Palantir’s business as ‘tech that saves lives and protects lives’ by ‘finding people who are up to . . . bad things’. These ‘bad things’ might be ‘in the anti-terrorism area, in cyber’ or they might be ‘in financial malfeasance [or] mortgage fraud’, but it is the ‘mission impact of our government work’ that makes Karp ‘most proud’. It seems to infuriate him that many tech company CEOs see Silicon Valley as an ‘island’ rather than recognising that it is ‘part of the United States proper . . . part of a larger whole that made your company possible, that’s protecting you against terror’ – a fight that they ‘ought to be involved in’, so that ‘Western values win’. Terrorism is not the only threat to these values. Like Schmidt, Karp sees Chinese technology as undermining American hegemony and wants American tech companies to set aside their moral qualms and ‘bring our A game’ to the contest.
For Karp, Adam Neumann’s talk of ‘elevating the world’s consciousness’ or Mark Zuckerberg’s of ‘building global community’ is dangerous mumbo jumbo. The world is a battleground. There are winners and losers, and you are either with us or against us.
Unlike most of Palantir’s critics, I don’t think the company’s products are morally objectionable per se – I can see no essential difference between them and those of Palantir’s competitors in the boringly named business intelligence (BI) software category. When I hear Karp describe the process of integrating different sources of data to identify ‘bad people’, it sounds functionally identical to the ‘single customer view’ projects that were all the rage when I worked in retail banking nearly twenty years ago. Along with telephone and utility companies, the banks built their IT systems in silos, based on account types. That meant it was easy to see the customers who had a specific type of account, but almost impossible to see the types of account held by a specific customer – as you will know if you ever tried asking the same customer service agent about your credit card and your mortgage at that time. BI software companies like Tableau and SAP evolved in response to this problem, along with products like Microsoft’s Power BI. Today, these companies pitch for the same contracts as Palantir.
With that context in mind, objections to Palantir’s involvement with Project Maven, predictive policing and immigration enforcement start to look like objections to the policies the technology is being used to implement, rather than to the technology itself. And of course, accountability for those policies rests with political representatives, who can be petitioned by their constituents and kicked out of office at the ballot box. Palantir is the wrong target.
There are other reasons to be sceptical of claims that Palantir’s software is immoral. Unlike Facebook ads, Gotham, its product for finding ‘bad people’, abounds with controls. There are features to ensure that data collected for one purpose can’t be appropriated for different purposes or viewed without the right permissions, and that users can be held accountable for the analysis they conduct. Karp also likes to draw attention to another Palantir product, Foundry. In the manufacturing, automotive and aerospace sectors, it helps technicians use big data to determine the best time to replace engine parts before they fail (among other things). Unlike most applications of machine learning by tech companies, Karp claims, Foundry doesn’t displace American jobs – it protects them, by enhancing workers’ capabilities.
For the foreseeable future, humans will remain far superior to robots and algorithms at performing most tasks, meaning that the greatest productivity gains will come from humans and AI working in concert – not from trying to automate people’s jobs out of existence. So instead of accentuating the concentration of wealth in Silicon Valley at the expense of everywhere else, Palantir improves the economic prospects of America’s industrial heartlands, reducing geographic inequality.
Karp might be exaggerating the robustness of Gotham’s controls and Foundry’s positive social impact. But even if he is, I still don’t think big data analytics is delegitimated by Palantir’s work for the Pentagon, US police departments or ICE, any more than geodemographic targeting is delegitimated by Cambridge Analytica. After all, these same techniques give us less annoying interactions with our banks, in addition to projects like Opportunity Insights (discussed in Chapter Four), where the benefits to society are clear. I also don’t think Palantir produces the form of digital power that should concern us most – reach power. While Facebook distributes power indiscriminately, Palantir only amplifies the capacities of the already powerful: commercial enterprise clients, not to mention the governments of the United States and its allies. That’s not straightforwardly a good thing, but it’s much less likely to produce disastrous unintended consequences.
Does that mean we shouldn’t be concerned about tech companies like Palantir that aim to work in the national interest? Unfortunately not. In contrast to most of the tech CEOs we have met in this chapter, Karp and Schmidt are not like Icarus. To understand them, we might imagine Icarus has an obedient and dutiful brother – let’s call him Aleric. Aleric is fully aware of the opportunities his father Daedalus has given him; he is profoundly grateful, and proud to offer his service in return. Ethical questions are moot for Aleric; if his father says something must be done, it must be done – whatever it takes. Aleric may not always agree with his father, but filial loyalty is a higher good. People who oppose Daedalus’ agenda, meanwhile, are plain wrong – at best they inadvertently aid the enemy; at worst they are evil. In this way, Aleric acts as Daedalus’ proxy.
For Americans and citizens of America’s allies who are generally supportive of American hegemony, tech companies taking their lead from the American government may not be so alarming. But what about people who are on the receiving end of American hard power and don’t have any democratic leverage – undocumented immigrants from Mexico, for example, or families displaced by the conflict in Syria? And what happens when the American presidency stops promoting settled liberal norms like press freedom and the rule of law and instead deliberately undermines them, snubbing democratic leaders and praising authoritarian ones? During the early stages of the coronavirus outbreak in the UK, the NHS chose Palantir Foundry to support a ‘single customer view’-type project, drawing criticism from privacy campaigners and parliamentarians. The argument made by groups like No Tech For Tyrants was twofold: firstly, that the bar for allowing a private company to analyse highly sensitive health records should have been set much higher; and secondly, that it was simply wrong for the UK government to work with a business whose products were implicated in the oppression of migrants and ethnic minorities. However, awareness of the realist politics of Alex Karp points us in a different direction. It is not unthinkable that an American president might see a benefit in having NHS data covertly analysed – to gain advantage in a post-Brexit trade negotiation over pharmaceuticals, for example. If Palantir were asked to undertake that analysis in service of the national interest, there is plenty in Karp’s public statements to suggest that he would comply. The hubris of the son can be in obeying the instructions of the father, as well as in ignoring them.
If Zuckian liberalism is predisposed to underestimate the evil in the world, Karpian realism is primed to exaggerate it. A broad definition of ‘bad guys’ as ‘everyone who threatens America’s interests’ already puts disgruntled employees, people who lie on loan application forms and residents of deprived neighbourhoods into a category with enemy combatants, foreign intelligence agents and mass murderers. The presidency of Donald Trump shows how easily a populist take on those interests can include judges, journalists, civil servants and political opponents in the same category. It’s clear that for the power created by digital technology to be channelled in the most socially beneficial way, we need something more than the political ideologies of tech company CEOs, whether they are liberals or realists.