CHAPTER 7

THE DARK TRIAD


By August 2014, just two months after we launched the app, Cambridge Analytica had collected the complete Facebook accounts of more than 87 million users, mostly from America. They soon exhausted the list of MTurk users and had to engage another company, Qualtrics, a survey platform based in Utah. Almost immediately, CA became one of their top clients and started receiving bags of Qualtrics-branded goodies. Jucikas would sashay around wearing an I ♥ QUALTRICS T-shirt under his otherwise perfectly tailored Savile Row suit, which everyone found both amusing and ridiculous. CA would get invoices sent from Provo, billing them each time for twenty thousand new users in their “Facebook Data Harvest Project.”

As soon as CA started collecting this Facebook data, executives from Palantir started making inquiries. Their interest was apparently piqued when they found out how much data the team was gathering—and that Facebook was just letting CA do it. The executives CA met with wanted to know how the project worked, and soon they approached our team about getting access to the data themselves.

Palantir was still doing work for the NSA and GCHQ. Staffers there told CA that working with Cambridge Analytica could potentially open an interesting legal loophole. At a meeting in the summer of 2014 at Palantir’s U.K. head office, in Soho Square, it was pointed out that government security agencies, along with contractors like Palantir, couldn’t legally mass-harvest personal data on American citizens, but—here’s the catch—polling companies, social networks, and private companies could. And despite the ban on directly surveilling Americans, I was told that U.S. intelligence agencies were nonetheless able to make use of information on American citizens that was “freely volunteered” by U.S. individuals or companies. After hearing this, Nix leaned in and said, “So you mean American polling companies…like us.” He grinned. I didn’t think anyone was actually being serious, but I soon realized that I underestimated everyone’s interest in accessing this data.

Some of the staff working at Palantir realized that Facebook had the potential to become the best discreet surveillance tool imaginable for the NSA—that is, if that data was “freely volunteered” by another entity. To be clear, these conversations were speculative, and it is unclear if Palantir itself was actually aware of the particulars of these discussions, or if the company received any CA data. The staff suggested to Nix that if Cambridge Analytica gave them access to the harvested data, they could then, at least in theory, legally pass it along to the NSA. In this vein, Nix told me we urgently needed to make an arrangement with Palantir staff happen, “for the defense of our democracy.” But that, of course, was not why Nix gave them full access to the private data of tens of millions of American citizens. Nix’s dream, as he had confided in our very first meeting, was to become the “Palantir of propaganda.”

One lead data scientist from Palantir began making regular trips to the Cambridge Analytica office to work with the data science team on building profiling models. He was occasionally accompanied by colleagues, but the entire arrangement was kept secret from the rest of the CA teams—and perhaps Palantir itself. I can’t speculate about why, but the Palantir staff received Cambridge Analytica database logins and emails with fairly obvious pseudonyms like “Dr. Freddie Mac” (after the mortgage company that was bailed out by the federal government in the 2008 housing crisis). I do know that after Palantir data scientists started building their own Facebook harvesting apps and scrapers, Nix asked them to stay after hours to keep working on applications that could replicate the Facebook data Kogan was getting—without the need for Kogan. It was no longer simply Facebook apps that were being used. Cambridge Analytica began testing innocuous-looking browser extensions, such as calculators and calendars, that pulled the user’s Facebook session cookies, which in turn allowed the company to log in to Facebook as the target user to harvest their data and that of their friends. These extensions were all submitted to—and approved in—the independent review processes of several popular Web browsers.

It wasn’t clear whether these Palantir executives were visiting CA officially or “unofficially,” and Palantir has since asserted that it was only a single staff member who worked at CA in a “personal capacity.” I honestly didn’t know who or what to believe at this point. As he often did with contractors on projects in Africa, Nix would bring to the office bags filled with U.S. currency and pay contractors in cash. As contractors would work, Nix would sit at his desk flicking through the green bills, counting them into small piles, each worth thousands of dollars. Sometimes contractors were given tens of thousands of dollars each week.

Many years before, Nix had been rejected by Britain’s foreign intelligence service, MI6. He often joked about it, saying it had happened because he wasn’t boring enough to blend into a crowd, but the rejection had obviously stung him. Now he almost didn’t care who got access to CA’s data; he would have shown it to anyone, just to hear how amazing he was.


BY LATE SPRING 2014, Mercer’s investment had spurred a hiring spree of psychologists, data scientists, and researchers. Nix brought on a new team of managers to organize the fast-growing research operations. Although I remained the titular director of research, the new operations managers were now given direct oversight and planning of this rapidly growing exercise. New projects seemed to pop up each day, and sometimes it was unclear how or why projects were being approved to go to field. I complained to Nix that I was losing track of who was doing what, but he didn’t see the problem. Nix simply couldn’t see beyond prestige and money. He told me that most people would be grateful to be given less responsibility and work to do but still allowed to keep their title.

At this point, I did start to feel weird about everything, but whenever I spoke with other people at the firm, we all managed to calm one another down and rationalize everything. Nix would talk about shady things, but that’s just who he was and no one took him seriously. And after Mercer installed Bannon, I overlooked or explained away things that, in hindsight, were obvious red flags. Bannon had his “niche” political interests, but Mercer seemed to be too serious a character to dabble in Bannon’s trashy political sideshows. The potential for our work to benefit Mercer’s financial interests made so much more sense as an explanation for why he would spend all this money on something so highly speculative. Mercer literally gave CA tens of millions of dollars before the firm had acquired any data or built any software in America. From any investor’s perspective, this would have been a high-risk seed capital investment. But CA also knew Mercer was not dumb or reckless, and he would have calculated the risk carefully. At the time, many on the team simply assumed that to justify taking such a high financial risk on our ideas, Mercer must have expected that the research had the chance of making tons of money at his hedge fund. In other words, the firm was not there to build an alt-right insurgency, it was there to help Mercer make money, and Nix’s conspicuous love of money reinforced everyone’s assumptions.

Of course, we know now that none of that happened. I don’t know what else to say other than I was more naïve than I thought I was at the time. Even though I had a great deal of experience for my age, I was only twenty-four and clearly still had a lot of learning to do. When I joined SCL, I was there to help the firm explore areas like counter-radicalization in order to help Britain, America, and their allies defend themselves against new threats emerging online. I began to get accustomed to the unusual environment of this line of work, which normalized a lot of things that would seem weird to a casual observer. Information operations is not your average nine-to-five desk job, and the people or situations you encounter are all a bit odd. And anytime someone would ask about the ethics of a surreptitious project in a far-off country, they would be mocked for their naïveté about how the rest of the world “really worked.”

It was the first time I was allowed to explore ideas without the constraints of petty internal politics or people snubbing an idea just because it had never been tried before. As much as Nix was a dick, he did give me a lot of leeway to try out new ideas. After Kogan joined, I had professors at the University of Cambridge constantly fawning over the groundbreaking potential that the project could have for advancing psychology and sociology, which made me feel like I was on a mission. And if their colleagues at universities like Harvard or Stanford were also getting interested in our work, I thought that surely we must be onto something. The institute that Kogan proposed really was an inspiring idea to me, and I saw how unlocking this data for researchers around the world could contribute so much value to so many fields. As corny as this might sound, it really felt like I was working on something important—not just for Mercer or the company, but for science. However, I let this feeling distract me to the point of allowing myself to excuse the inexcusable. I told myself that truly learning about society includes delving into uncomfortable questions about our darker sides. How could we understand racial bias, authoritarianism, or misogyny if we did not explore them? What I did not appreciate is the fine line between exploring something and actually creating it.

Bannon had assumed control of the company, and he was an ambitious and surprisingly sophisticated cultural warrior. He felt that the identity politics of Democrats, with their focus on racial or ethnic blocs of voters, was actually less powerful than that of Republicans, who often insisted that American identity went beyond skin color, religious preference, or gender. A white man living in a trailer park doesn’t see himself as a member of a privileged class, though others may see him that way just because he’s white. Every mind contains multitudes. And Bannon’s new job was to figure out how to target people accordingly.

I told Bannon that the most striking thing CA had noticed was how many Americans felt closeted—and not just gay people. This first came up in focus groups and later was confirmed in quantitative research done via online panels. Straight white men, particularly ones who were older, had grown up with a value set that granted them certain social privileges. Straight white men did not have to moderate their speech around women or people of color, because casual racism and misogyny were normalized behaviors. As social norms in America evolved, these privileges began to erode and many of these men were experiencing challenges to their behavior for the first time. At the workplace, “casual flirting” with female secretaries now imperiled your job, and talking about the “thugs” in the African American part of town could get you shunned by peers. These encounters were often uncomfortable and threatening to their identity as “regular men.”

Men who were not used to moderating their impulses, body language, and speech began to resent what they saw as the unfair mental and emotional labor it took to change and constantly correct how they presented in public. What I found interesting was how similar the discourse that emerged from these groups of angry straight men was to liberation discourse from gay communities. These men began to experience the burden of the closet, and they did not like the feeling of having to change who they felt they were in order to “pass” in society. Although there were very different reasons for the closeting of gays and the closeting of racists and misogynists, these straight white men nonetheless felt a subjective experience of oppression in their own minds. And they were ready to emerge from the closet and return to a time when America was great—for them.

“Think about it,” I said to Bannon. “The message at a Tea Party rally is the same as at a Gay Pride parade: Don’t tread on me! Let me be who I am!” Embittered conservatives felt like they couldn’t be “real men” anymore, because women wouldn’t date men who behaved the way men had behaved for millennia. They had to hide their true selves to please society—and they were pissed about it. In their minds, feminism had locked “real men” in the closet. It was humiliating, and Bannon knew that there was no force more powerful than a humiliated man. It was a state of mind he was eager to explore (and exploit).

The incel community, just coming to the fore when Cambridge Analytica was being established, was the kind of group he had in mind. Incels, or “involuntary celibates,” were men who felt ignored and chastised by a society—particularly women—that did not value average men anymore. An offshoot of the Men’s Rights Movement, the incel community was in part propelled by the increasing economic inequality depriving young millennial men of access to the same kinds of well-paying jobs their fathers had. This economic deprivation was coupled with increasingly unattainable body image standards for men in conventional and social media (without the same public recognition of male body issues or gendered pressures as for women) and the growing importance placed on physical looks in a dating scene increasingly defined by swiping left or right on a split-second glance at a photo. And as women had become more economically independent, they could afford to be more selective about their partners. Deprived of good looks and a respectable paycheck, “average men” faced a hard reality of constant romantic rejection.

Some of these men began congregating on forums like 4chan, which grew into a repository of memes, weird fantasy fandoms, niche porn, pop culture, and the countercultural reactions of frustrated youth in an increasingly atomized society. In the early 2010s, nihilistic discussions began among young men who were resigned to lives of loneliness. A new vocabulary emerged to describe their circumstances, including “betas” (inferior men), “alphas” (superior men), “vocels” (voluntary celibates), MGTOW (Men Going Their Own Way, walking away from women), “incels” (involuntary celibates), and “robots” (incels with Asperger’s).

Irrespective of the privileges afforded to them as straight white men, these groups lacked identity, direction, and a sense of self-worth and grabbed on to anything that instilled a feeling of belonging and solidarity. Self-defining as the “beta” males of society, many incels would talk about accepting the “black pill”—a moment of reckoning with what they believed were certain innate truths about sexual and romantic attraction. Forums would include topics such as “suicide fuel,” which were examples from their daily lives of rejection that reinforced their feelings of hopelessness and ugliness. For many incels, this angry desperation had morphed into extreme misogyny.

The doctrine of the black pill was bleak and rigid, stating that only physical looks matter to women, and that certain features, including race, fall into a hierarchy of sexual desirability. Incels would share graphs and observations signaling an innate advantage for white men, as women from all races would accept a white partner, and a strong disadvantage for Asian men. To be fat or poor or old or disabled or a person of color was to be a member of America’s most unwanted. Nonwhite incels would use terms like “JBW”—“just be white”—as a way of trying to explain or mitigate what they saw as their innate racial disadvantages. There was a surprising amount of open recognition of white privilege, but incel discourse would frame this privilege as part of the inherent racial superiority of white men, at least in the context of sexual selection.

Ongoing jokes and memes would be shared about resisting their life sentences and waging a Beta Rebellion or Beta Uprising to fight for the redistribution of sex for the betas. But lurking behind the strange humor was the rage of a life of rejection. In scrolling through these narratives of victimhood, my mind turned back to the narratives of extreme jihadist recruitment media, with the same naïve romanticism of oppressed men breaking the shackles of a vapid society to transform themselves into glorified heroes of rebellion. Likewise, these incels perversely looked to society’s “winners,” like Donald Trump and Milo Yiannopoulos, who in their warped view represented the epitome of the same hypercompetitive alphas who brutalized them, to lead the charge. Many of these seething young men were ready to burn society to the ground. Bannon sought to give them an outlet via Breitbart, but his ambition didn’t stop there. He saw these young men as the early recruits in his future insurgency.

When Cambridge Analytica launched, in the summer of 2014, Bannon’s goal was to change politics by changing culture; Facebook data, algorithms, and narratives were his weapons. First we used focus groups and qualitative observation to unpack the perceptions of a given population and learn what people cared about—term limits, the deep state, draining the swamp, guns, and the concept of walls to keep out immigrants were all explored in 2014, several years before the Trump campaign. We then came up with hypotheses for how to sway opinions. CA tested these hypotheses with target segments in online panels or experiments to see whether they performed as the team expected, based on the data. We also pulled Facebook profiles, looking for patterns in order to build a neural network algorithm that would help us make predictions.
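The last step in that pipeline, train a model on the users for whom you hold both survey answers and profile data, then score everyone else, can be sketched in a few lines. The sketch below is a hypothetical illustration with invented, synthetic data and a generic scikit-learn network; it is not Cambridge Analytica’s code, feature set, or trait definitions.

```python
# A minimal, hypothetical sketch of the general modeling pattern described above:
# fit a small neural network on the subset of users who answered a survey, then
# score everyone else. Feature names, labels, and data here are invented; this is
# not Cambridge Analytica's actual code or feature set.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for page-like data: one row per user, one column per page,
# 1 if the user liked that page. Real footprints are far sparser and wider.
n_surveyed, n_unsurveyed, n_pages = 2_000, 10_000, 300
likes_surveyed = rng.integers(0, 2, size=(n_surveyed, n_pages))
likes_unsurveyed = rng.integers(0, 2, size=(n_unsurveyed, n_pages))

# Synthetic survey outcome for the surveyed users (e.g., scoring above some
# questionnaire threshold). Here it is random noise, for illustration only.
survey_label = rng.integers(0, 2, size=n_surveyed)

# Fit a small feed-forward network on the surveyed users...
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
model.fit(likes_surveyed, survey_label)

# ...then extrapolate scores to users who never took the survey.
predicted_scores = model.predict_proba(likes_unsurveyed)[:, 1]
print(predicted_scores[:5])
```

On real data the survey-labeled sample is only a small fraction of the profiles being scored, which is what made the panel-plus-harvest design attractive in the first place.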

A small minority of people exhibit traits of narcissism (extreme self-centeredness), Machiavellianism (ruthless self-interest), and psychopathy (emotional detachment). In contrast to the Big Five traits found in everyone to some degree as part of normal psychology—openness, conscientiousness, extroversion, agreeableness, and neuroticism—these “dark triad” traits are maladaptive, meaning that those who exhibit them are generally more prone to antisocial behavior, including criminal acts. From the data CA collected, the team was able to identify people who exhibited neuroticism and dark-triad traits, and those who were more prone to impulsive anger or conspiratorial thinking than average citizens. Cambridge Analytica would target them, introducing narratives via Facebook groups, ads, or articles that the firm knew from internal testing were likely to inflame the very narrow segments of people with these traits. CA wanted to provoke people, to get them to engage.

Cambridge Analytica did this because of a specific feature of Facebook’s algorithm at the time. When someone follows pages of generic brands like Walmart or some prime-time sitcom, nothing much changes in his newsfeed. But liking an extreme group, such as the Proud Boys or the Incel Liberation Army, marks the user as distinct from others in such a way that a recommendation engine will prioritize these topics for personalization. Which means the site’s algorithm will start to funnel the user similar stories and pages—all to increase engagement. For Facebook, rising engagement is the only metric that matters, as more engagement means more screen time to be exposed to advertisements.
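A toy example can show why a single fringe “like” matters so much to that kind of engagement-driven personalization. The sketch below is a simplified, hypothetical illustration, not Facebook’s actual ranking system: it simply weights each candidate story’s engagement score by how distinctive the liked topic is, using inverse-popularity weighting.

```python
# A toy illustration (not Facebook's actual ranking code) of why one niche,
# extreme page "like" can dominate personalization: in a simple engagement-driven
# recommender, rare interests carry far more signal than interests shared with
# everyone, so similar fringe content gets pushed up. All values are invented.
import math

# Hypothetical share of all users who like each page.
page_popularity = {
    "Walmart": 0.30,
    "Prime-time sitcom": 0.25,
    "Fringe political group": 0.001,
}

# Candidate stories, tagged with the page/topic they relate to and a generic
# platform-wide engagement score (clicks, comments, watch time).
candidate_stories = [
    {"title": "Rollback deals this week", "topic": "Walmart", "engagement": 0.6},
    {"title": "Season finale recap", "topic": "Prime-time sitcom", "engagement": 0.7},
    {"title": "They are coming for you", "topic": "Fringe political group", "engagement": 0.5},
]

def rank_for_user(user_likes):
    """Score stories as engagement weighted by how distinctive the liked topic is
    (inverse log popularity, as in IDF weighting), highest first."""
    scored = []
    for story in candidate_stories:
        if story["topic"] in user_likes:
            distinctiveness = math.log(1.0 / page_popularity[story["topic"]])
            scored.append((story["engagement"] * distinctiveness, story["title"]))
    return sorted(scored, reverse=True)

# A user who likes all three pages: the fringe story wins despite lower raw engagement.
for score, title in rank_for_user({"Walmart", "Prime-time sitcom", "Fringe political group"}):
    print(f"{score:.2f}  {title}")
```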

This is the darker side of Silicon Valley’s much-celebrated metric of “user engagement.” By focusing so heavily on greater engagement, social media tends to parasitize our brain’s adaptive mechanisms. As it happens, the most engaging content on social media is often horrible or enraging. According to evolutionary psychologists, in order to survive in premodern times, humans developed a disproportionate attentiveness toward potential threats. The reason we instinctively pay more attention to the blood and gore of a rotting corpse on the ground than to the beautiful sky above is that the former was what helped us survive. In other words, we evolved to pay keen attention to potential threats. There’s a good reason you can’t turn away from grisly videos: You’re human.

Social media platforms also use designs that activate “ludic loops” and “variable reinforcement schedules” in our brains. These are patterns of frequent but irregular rewards that create anticipation, but where the end reward is too unpredictable and fleeting to plan around. This establishes a self-reinforcing cycle of uncertainty, anticipation, and feedback. The randomness of a slot machine prevents the player from being able to strategize or plan, so the only way to get a reward is to keep playing. The rewards are designed to be just frequent enough to reengage you after a losing streak and keep you going. In gambling, a casino makes money from the number of turns a player takes. On social media, a platform makes money from the number of clicks a user performs. This is why there are infinite scrolls on newsfeeds—there is very little difference between a user endlessly swiping for more content and a gambler pulling the slot machine lever over and over.
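The slot-machine dynamic is easy to simulate. The sketch below uses an arbitrary one-in-eight reward probability to show the signature of a variable-ratio schedule: rewards arrive at a predictable average rate but with no pattern that any individual pull, or swipe, can exploit.

```python
# A small simulation of the variable-ratio reward schedule described above:
# rewards arrive unpredictably, so there is no point at which "one more pull"
# (or one more swipe of the feed) stops feeling potentially worthwhile.
# The 1-in-8 reward probability is an arbitrary choice for illustration.
import random

random.seed(42)
REWARD_PROBABILITY = 1 / 8   # each pull/swipe independently pays off ~12% of the time

def pulls_until_reward():
    """Count how many pulls it takes to hit the next reward."""
    pulls = 0
    while True:
        pulls += 1
        if random.random() < REWARD_PROBABILITY:
            return pulls

gaps = [pulls_until_reward() for _ in range(10_000)]
print("average gap between rewards:", sum(gaps) / len(gaps))   # ~8 pulls on average
print("longest dry streak observed:", max(gaps))                # but wildly variable
print("first ten gaps:", gaps[:10])                             # no pattern to plan around
```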


IN THE SUMMER OF 2014, Cambridge Analytica began developing fake pages on Facebook and other platforms that looked like real forums, groups, and news sources. This was an extremely common tactic that Cambridge Analytica’s parent firm SCL had used throughout its counterinsurgency operations in other parts of the world. It is unclear who inside the firm actually gave the final order to set up these disinformation operations, but for many of the old guard who had spent years working on projects around the world, none of this seemed unusual. They were simply treating the American population in the exact same way they would treat the Pakistani or Yemeni populations on projects for American or British clients. The firm did this at the local level, creating right-wing pages with vague names like Smith County Patriots or I Love My Country. Because of the way Facebook’s recommendation algorithm worked, these pages would pop up in the feeds of people who had already liked similar content. When users joined CA’s fake groups, it would post videos and articles that would further provoke and inflame them. Conversations would rage on the group page, with people commiserating about how terrible or unfair something was. CA broke down social barriers, cultivating relationships across groups. And all the while it was testing and refining messages, to achieve maximum engagement.

Now CA had users who (1) self-identified as part of an extreme group, (2) were a captive audience, and (3) could be manipulated with data. Lots of reporting on Cambridge Analytica gave the impression that everyone was targeted. In fact, not that many people were targeted at all. CA didn’t need to create a big target universe, because most elections are zero-sum games: If you get one more vote than the other guy or girl, you win the election. Cambridge Analytica needed to infect only a narrow sliver of the population, and then it could watch the narrative spread.

Once a group reached a certain number of members, CA would set up a physical event. CA teams would choose small venues—a coffee shop or bar—to make the crowd feel larger. Let’s say you have a thousand people in a group, which is modest in Facebook terms. Even if only a small fraction shows up, that’s still a few dozen people. A group of forty makes for a huge crowd in the local coffee shop. People would show up and find a fellowship of anger and paranoia. This naturally led them to feel like they were part of a giant movement, and it allowed them to further feed off one another’s paranoia and fears of conspiracy. Sometimes a Cambridge Analytica staffer would act as a “confederate”—a tactic commonly used by militaries to stir up anxieties in target groups. But most of the time, these situations unfolded organically. The invitees were selected because of their traits, so Cambridge Analytica knew generally how they would react to one another. The meetings took place in counties all across the United States, starting with the early Republican primary states, and people would get more and more fired up at what they saw as “us vs. them.” What began as their digital fantasy, sitting alone in their bedrooms late at night clicking on links, was becoming their new reality. The narrative was right in front of them, talking to them, live in the flesh. Whether or not it was real no longer mattered; that it felt real was enough.

Cambridge Analytica ultimately became a digitized, scaled, and automated version of a tactic the United States and its allies have used in other countries. When I first started at SCL, the firm had been working on counter-narcotics programs in a South American country. The strategy was, in part, to identify targets to disrupt narcotics organizations from within. The first thing the firm would do was find the lowest-hanging fruit, meaning people who its psychologists reasoned would be more likely to become more erratic or paranoid. Then the firm would work on suggesting ideas to them: “The bosses are stealing from you” or “They’re going to let you take the fall.” The goal was to turn them against the organization, and sometimes, if a person hears something enough times, they come to believe it.

Once those initial individuals were sufficiently exposed to these new narratives, it would be time to have them meet one another so that they could form a group which could then organize. They would share rumors, working one another into deeper paranoia. That was when you introduced the next tier: people whose initial resistance to rumors had started to weaken. And this is how you gradually destabilize an organization from the inside. CA wanted to do the same to America, using social media as the spearhead. Once a county-based group begins self-organizing, you introduce them to a similar group in the next county over. Then you do it again. In time, you’ve created a statewide movement of neurotic, conspiratorial citizens. The alt-right.

Internal tests also showed that the digital and social ad content being piloted by CA was effective at garnering online engagement. Those being targeted online with test advertisements had their social profiles matched to their voting records, so the firm knew their names and “real world” identities. The firm then began to use the engagement rates of these ads to explore the potential impact on voter turnout. One internal memo highlighted the results from an experiment involving registered voters who had not voted in the two previous elections. CA estimated that if only 25 percent of the infrequent voters who began clicking on this new CA content eventually turned out to vote, they could increase statewide turnout for the Republicans in several key states by around 1 percent, which is often the margin of victory in tight races. Steve Bannon loved this. But he wanted CA to go further—and darker. He wanted to test the malleability of the American psyche. He urged us to include what were in effect racially biased questions in our research, to see just how far we could push people. The firm started testing questions about black people—whether they were capable of succeeding in America without the help of whites, for example, or whether they were genetically predetermined to fail. Bannon believed that the civil rights movement had limited “free thinking” in America. He was determined to liberate people by revealing what he saw as the forbidden truths about race.
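The turnout arithmetic in that memo is simple to reconstruct with hypothetical numbers (the figures below, other than the 25 percent conversion assumption, are invented for illustration):

```python
# Back-of-the-envelope version of the turnout arithmetic described above, using
# invented numbers: a modest conversion rate applied to a large pool of targeted
# infrequent voters can move statewide turnout by about a point.
targeted_infrequent_voters = 250_000      # hypothetical pool reached with test content
assumed_conversion_rate = 0.25            # the 25 percent figure cited in the memo
typical_statewide_electorate = 6_000_000  # hypothetical turnout in a mid-sized state

new_votes = targeted_infrequent_voters * assumed_conversion_rate
turnout_shift = new_votes / typical_statewide_electorate
print(f"additional votes: {new_votes:,.0f}")   # 62,500
print(f"turnout shift: {turnout_shift:.1%}")   # ~1.0%, roughly the margin in tight races
```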

Bannon suspected that there were swaths of Americans who felt silenced by the threat of being labeled “racist.” Cambridge Analytica’s findings confirmed his suspicion: America is filled with racists who remain silent for fear of social shunning. But Bannon wasn’t just focused on his emerging alt-right movement; he also had Democrats in mind.

While “typical Democrats” talk a good game when it comes to supporting racial minorities, Bannon detected an underlying paternalism that betrayed their professed wokeness. The party, he felt, was full of “limousine liberals”—a term coined in the New York mayoral race of 1969 and instantly seized on by populists to denigrate do-gooder Democrats. These were the white Democrats who supported school busing but sent their own kids to majority-white private schools, or who professed to care about the inner city but lived in gated communities. “The Dems always treat blacks like children,” Bannon said on one call. “They put them in projects…give them welfare…affirmative action…send white kids to hand out food in Africa. But Dems are always afraid to ask the question: Why do those people need so much babysitting?”

What he meant was that white Democrats revealed their prejudices against minorities without realizing it. He posited that although these Democrats think that they like African Americans, they do not respect African Americans, and that many Democratic policies stemmed from an implicit acknowledgment that those people cannot help themselves. Speechwriter Michael Gerson perfectly encapsulated this idea with a phrase he coined for then–presidential candidate George W. Bush in 1999: “the soft bigotry of low expectations.” According to this argument, Democrats were hand-holders, enablers of bad behavior and poor testing results because they didn’t actually believe that minority students could do as well as their non-minority peers.

Bannon had a starker, more aggressive take on this idea: He believed the Democrats were simply using American minorities for their own political ends. He was convinced that the social compact that emerged after the civil rights movement, where Democrats benefited from African American votes in exchange for government aid, was not born out of any moral enlightenment, but instead out of shrewd calculation. In his framing, the only way the Democrats could defend what he saw as the inconvenient truths of this social compact was through political correctness. Democrats subjected “rationalists” to social shame when they spoke out about this “race reality.”

“Race realism” is the most recent spin on age-old tropes and theories that certain ethnic groups are genetically superior to others. Race realists believe, for example, that black Americans score lower on standardized tests not because the tests are skewed, or because of the long history of oppression and prejudice that blacks must overcome, but because they’re inherently less intelligent than white Americans. It’s a pseudoscientific notion, embraced by white supremacists, with roots in the centuries-old “scientific racism” that underlies, among other disasters of human history, slavery, apartheid, and the Holocaust. The alt-right, led by Bannon and Breitbart, adopted race realism as a cornerstone philosophy.

If Bannon was to succeed in his quest to liberate his “free thinkers,” he needed a way of inoculating people against political correctness. Cambridge Analytica began studying not only overt racism but racism in its many other incarnations. When we think about racism, we often think of overt hatred. But racism can persist in different ways. Racism can be aversive, where a person consciously or subconsciously avoids a racial group (e.g., gated communities, sexual and romantic avoidance, etc.), and racism can be symbolic, where a person holds negative evaluations of a racial group (e.g., stereotypes, double standards, etc.). However, because the label “racism” can hold such social stigma in modern America, we found that white people often ignore or discount their internalized prejudices and react strongly to any implication that they hold such beliefs.

This is what is known as “white fragility”: White people in North American society enjoy environments insulated from racial disadvantages, which fosters an expectation of racial comfort while lowering their ability to tolerate racial stress. In our research, we saw that white fragility prevented people from confronting their latent prejudices. This cognitive dissonance also meant that subjects would often amplify positive statements toward minorities in their responses, in an effort to preserve their self-concept of “not being racist.” For example, when presented with a series of hypothetical biographies with photos, some respondents who scored higher in prior implicit racial bias testing would rate minority biographies higher than identical white biographies. See? I scored the black person higher, because I am not racist.

This cognitive dissonance created an opening: Many respondents were reacting to their own racism not out of concern about how they may be contributing to structural oppression, but rather to protect their own social status. For Bannon, this was enough to convince him that his theory about Democrats was true—that they just pay lip service to minorities, but deep down they are just as racist as anyone else in America. The difference was who was living in what “reality.”


BANNON ENVISIONED A VEHICLE to help white racists move past all this and become liberated “free thinkers.” In 2005, when Bannon started at IGE, the Hong Kong–based gaming company, the firm employed a factory of low-wage Chinese gamers to play World of Warcraft in order to win items in the game. Instead of trading them, or selling them through the game’s interface, which was allowed, IGE would sell the digital assets to Western players for a profit. This activity was largely seen by other players as cheating, and a civil suit and backlash online against the firm ensued. It’s possible this was Bannon’s early exposure to the rage of online communities; some of the commentary was reportedly “anti-Chinese vitriol.” Bannon became a regular reader of Reddit and 4chan and began to see the hidden anger that comes out when people are anonymous online. To him, they were revealing their true selves, unfiltered by a “political correctness” that was preventing them from speaking these “truths” in public. It was through the process of reading these forums that Bannon realized he could harness them and their anonymous swarms of resentment and harassment.

This was especially true after Gamergate, in the late summer of 2014. In many ways, Gamergate created a conceptual framework for Bannon’s alt-right movement, as he knew there was an undercurrent populated by millions of intense and angry young men. Trolling and cyberbullying became key tools of the alt-right. But Bannon went deeper and had Cambridge Analytica scale and deploy many of the same tactics that domestic abusers and bullies use to erode stress resilience in their victims. Bannon transformed CA into a tool for automated bullying and scaled psychological abuse. The firm started this journey by identifying a series of cognitive biases that it hypothesized would interact with latent racial bias. Over the course of many experiments, we concocted an arsenal of psychological tools that could be deployed systematically via social media, blogs, groups, and forums.

Bannon’s first request of our team was to study who felt oppressed by political correctness. Cambridge Analytica found that, because people often overestimate how much others notice them, spotlighting socially uncomfortable situations was an effective prime for eliciting bias in target cohorts, such as when you get in trouble for mispronouncing a foreign-sounding name. One of the most effective messages the firm tested was getting subjects to “imagine an America where you can’t pronounce anyone’s name.” Subjects would be shown a series of uncommon names and then asked, “How hard is it to pronounce this name? Can you recall a time when people were laughing at someone who messed up an ethnic name? Do some people use political correctness to make others feel dumb or to get ahead?”

People reacted strongly to the notion that “liberals” were seeking new ways to mock and shame them, along with the idea that political correctness was a method of persecution. An effective Cambridge Analytica technique was to show subjects blogs that made fun of white people like them, such as People of Walmart. Bannon had been observing online communities on places like 4chan and Reddit for years, and he knew how often subgroups of angry young white men would share content of “liberal elites” mocking “regular” Americans. There had always been publications that parodied the “hicks” of flyover country, but social media represented an extraordinary opportunity to rub “regular” Americans’ noses in the snobbery of coastal elites.

Cambridge Analytica began to use this content to touch on an implied belief about racial competition for attention and resources—that race relations were a zero-sum game. The more they take, the less you have, and they use political correctness so you cannot speak out. This framing of political correctness as an identity threat catalyzed a “boomerang” effect in people where counternarratives would actually strengthen, not weaken, the prior bias or belief. This means that when targets would see clips containing criticism of racist statements by candidates or celebrities, this exposure would have the effect of further entrenching the target’s racialized views, rather than causing them to question those beliefs. In this way, if you could frame racialized views through the lens of identity prior to exposure to a counternarrative, that counternarrative would be interpreted as an attack on identity instead. What was so useful for Bannon was that it in effect inoculated target groups from counternarratives criticizing ethno-nationalism. It created a wicked reinforcement cycle in which the cohort would strengthen their racialized views when they were exposed to criticism. This may be in part because the area of the brain that is most highly activated when we process strongly held beliefs is the same area that is involved when we think about who we are and our identity. Later, when Donald Trump was aggressively criticized in the media for racist or misogynist statements, these critiques likely created a similar effect, where the criticism of Trump strengthened the resolve of supporters who would internalize the critique as a threat to their very identity.

By making people angry in this way, CA was following a fairly wide corpus of research showing that anger interferes with information seeking. This is why people can “jump to conclusions” in a fit of rage, even if later they regret their decisions. In one experiment, CA would show people on online panels pictures of simple bar graphs about uncontroversial things (e.g., the usage rates of mobile phones or sales of a car type) and the majority would be able to read the graph correctly. However, unbeknownst to the respondents, the data behind these graphs had actually been derived from politically controversial topics, such as income inequality, climate change, or deaths from gun violence. When the labels of the same graphs were later switched to their actual controversial topic, respondents who were made angry by identity threats were more likely to misread the relabeled graphs that they had previously understood.

What CA observed was that when respondents were angry, their need for complete and rational explanations was also significantly reduced. Anger put people in a frame of mind in which they were more indiscriminately punitive, particularly toward out-groups. They would also underestimate the risk of negative outcomes. This led CA to discover that even if a hypothetical trade war with China or Mexico meant the loss of American jobs and profits, people primed with anger would tolerate that domestic economic damage if it meant they could punish immigrant groups and urban liberals.

Bannon was convinced that if you showed people what political correctness “really meant,” they would wake up to the truth. So Cambridge Analytica started asking subjects if the thought of their daughter marrying a Mexican immigrant made them feel uncomfortable. For subjects who denied discomfort with the idea, a prompt would then follow: “Did you feel like you had to say that?” Subjects would be given permission to change their answers, and many did. After the Facebook data was collected, CA began exploring ways of taking this further by pulling photos of daughters of white men in order to pair them with photos of black men—to show white men what political correctness “really looked like.”

Cambridge Analytica’s research panels also identified that there were relationships between target attitudes and a psychological effect called the just-world hypothesis (JWH). This is a cognitive bias where some people rely on a presumption of a fair world: The world is a fair place where bad things “happen for a reason” or will be offset by some sort of “moral balancing” in the universe. We found that people who displayed the JWH bias were, for example, more prone to victim-blaming in hypothetical scenarios of sexual assault. If the world is fair, then random bad things should not happen to innocent people, and therefore there must have been a fault in the victim’s behavior. Finding ways to blame victims is psychologically prophylactic for some people because it helps them cope with anxiety induced by uncontrollable environmental threats while maintaining a comforting view that the world will still be fair to them.

Cambridge Analytica found that JWH was related to many attitudes, but that it had a special relationship with racial bias. People who displayed JWH were more likely to agree with the idea that minorities were to blame for socioeconomic disparities between races. In other words, blacks have had all this time to achieve for themselves, but they have nothing to show for it. Maybe it wasn’t racist to suggest that minorities were not able to create their own success, subjects were told—maybe it was just realistic.

CA then discovered that for those with evangelical worldviews in particular, a “just world” exists because God rewards people with success if they follow his rules. In other words, people who live good lives won’t get preexisting conditions, and they will succeed in life, even if they are black. Cambridge Analytica began feeding these cohorts narratives with an expanded religious valence. “God is fair and just, right? Wealthy people are blessed by God for a reason, right? Because He is fair. If minorities complain about receiving less, perhaps there is a reason—because He is fair. Or are you daring to question God?”

This gave CA a way to cultivate more punitive views toward “the other.” If the world is fair and governed by a just God, then refugees are suffering for a reason. Over time, subjects would increasingly discount examples of valid refugee claims under U.S. law and instead focus on how and why the claimants should be punished. And in some cases, the stronger the refugee claim, the harsher the responses. The targets were less and less concerned with hypothetical refugees and more concerned with maintaining the consistency of their worldview. If you are strongly invested in the idea that the world is just, evidence to the contrary can feel deeply threatening.

For Bannon’s free thinkers, race reality was not only becoming their reality, it was becoming God’s reality—a connection with a long history in America. From the time slaves were first brought to America, preachers drew from the book of Ephesians to justify the practice, quoting the line “Servants, be obedient to them that are your masters” as evidence that slave ownership was godly. In the early nineteenth century, Episcopal bishop Stephen Elliott suggested that those who wished to end slavery were behaving in an ungodly way. They should, he wrote, “consider whether, by their interference with this institution, they may not be checking and impeding a work which is manifestly providential,” as millions of “semi-barbarous people” had “learned the way to Heaven and…have been made to know their Savior through the means of African slavery!” In the post–Civil War South, states enacted “black codes” that curtailed black citizens’ newfound freedom. In cities such as Memphis and New Orleans, white politicians and city officials used fearmongering to provoke bloody riots that took dozens of black lives. Jim Crow laws, enacted in the late nineteenth and early twentieth centuries, ensured that for decades to come, the races would remain segregated in public spaces. Poll taxes rendered many blacks in the South all but unable to vote. And the Ku Klux Klan, which had virtually disappeared just after the Civil War, enjoyed a resurgence in the early twentieth century, in part by presenting itself as a national patriotic organization.

The Civil Rights Act of 1964 and the Voting Rights Act of 1965 represented a huge leap forward for the rights of American blacks. These sweeping sets of laws promised to right many of the wrongs that had been perpetrated against the black community for so many years by ensuring voting rights, mandating desegregation of public facilities, and instituting equal employment opportunity and nondiscrimination in federal programs. They also opened a new chapter in the politics of shamelessly stoking white fear.

In the late 1960s, Richard Nixon’s “southern strategy” fueled racial fear and tensions in order to shift white voters’ allegiance from the Democrats to the GOP. Nixon ran his 1968 presidential campaign on the twin pillars of “states’ rights” and “law and order”—both of which were obvious, racially coded dog whistles. In his 1980 campaign, Ronald Reagan repeatedly invoked the “welfare queen”—a black woman who supposedly was able to buy a Cadillac on government assistance. In 1988, George H. W. Bush’s campaign ran the infamous Willie Horton ad, terrifying white voters with visions of wild-haired black criminals running amok.

Steve Bannon aimed to affirm the ugliest biases in the American psyche and convince those who possessed them that they were the victims, that they had been forced to suppress their true feelings for too long. Deep within America’s soul lurked an explosive tension. Bannon had long sensed this, and now he had the data to prove it. History, Bannon was convinced, would prove to be on his side, and the right tools would hasten his prophecy. Young people, with their lack of opportunities stemming from a corpulent state and a corrupt finance system, were primed to rebel. They just did not know it yet. Bannon wanted them to understand their role in his prophecy of revolution—that they would lead a generational “turning” of history and become the “artists” who would redraw a new society filled with meaning and purpose after its “great unraveling.” Major figures in history, he said, were artists: Franco and Hitler were painters, while Stalin, Mao, and bin Laden were all poets. He understood that movements adopt a new aesthetic for society. Bannon asked why dictators always lock up the poets and artists first. Because they are often artists themselves. And for Bannon, this movement was primed to become his great performance. He would fulfill his prophecy by making real the narratives of his favorite books, like The Fourth Turning, which predicts an impending crisis followed by a forgotten generation rising up in rebellion, or The Camp of the Saints, where Western civilization collapses from the weight of caravans of immigrant invaders.

But Bannon needed an army to unleash chaos. For him, this was an insurgency, and to inspire total loyalty and total engagement, he was prepared to use any narrative that worked. The exploitation of cognitive biases, for Bannon, was simply a means of “de-programming” his targets from the “conditioning” they had endured growing up in a vapid and meaningless society. Bannon wanted his targets to “discover themselves” and “become who they really were.” But the tools created at Cambridge Analytica in 2014 were not about self-actualization; they were used to accentuate people’s innermost demons in order to build what Bannon called his “movement.” By targeting people with specific psychological vulnerabilities, the firm victimized them into joining what was nothing more than a cult led by false prophets, where reason and facts would have little effect on its new followers, digitally isolated as they now were from inconvenient narratives.

In the last discussion I ever had with Bannon, he told me that to fundamentally change society, “you have to break everything.” And that’s what he wanted to do—to fracture “the establishment.” Bannon faulted “big government” and “big capitalism” for suppressing the randomness that is essential to human experience. He wanted to liberate the people from a controlling administrative state that made choices for them and thus removed purpose from their lives. He wanted to bring about chaos to end the tyranny of certainty within the administrative state. Steve Bannon did not want, and would not tolerate, the state dictating America’s destiny.