For most of the twentieth century there were no female musicians in the New York Philharmonic Orchestra. There were a couple of blips in the 1950s and 60s, when a woman or two was hired, but those aside, the proportion of women sat stubbornly at zero. But then all of a sudden, something changed: from the 1970s onwards, the numbers of female players started to go up. And up.
Turnover in orchestras is extremely low. The size of an orchestra is fairly static (around one hundred players), and when you’re hired, it’s often for life; it’s rare that a musician is fired. So there was something remarkable going on when the proportion of women in this orchestra grew from effectively 0% to 10% in a decade.
That something was blind auditions.1 Instituted in the early 1970s following a lawsuit, blind auditions are what they sound like: the hiring committee can’t see who is playing in the audition, because there is a screen between them and the player.2 The screens had an immediate impact. By the early 1980s, women began to make up 50% of the share of new hires. Today, the proportion of female musicians in the New York Philharmonic stands at over 45%.3
The simple step of installing a screen turned the audition process for the New York Philharmonic into a meritocracy. But in this, it is an outlier: for the vast majority of hiring decisions around the world, meritocracy is an insidious myth. It is a myth that provides cover to institutional white male bias. And, dishearteningly, it is a myth that proves remarkably resistant to all the evidence, going back decades, that shows it up as the fantasy it most certainly is. If we want to kill this myth off, we’re clearly going to have to do more than just collect data.
The idea that meritocracy is a myth is not a popular one. Around the industrialised world, people believe that not only is meritocracy the way things should work, it’s the way things do work.4 Despite evidence suggesting that, if anything, the US is less meritocratic than other industrialised countries,5 Americans in particular hold on to meritocracy as an article of faith, and employment and promotion strategies over the past few decades have increasingly been designed as if meritocracy is a reality. A survey of US firms found that 95% used performance evaluations in 2002 (compared to 45% in 1971) and 90% had a merit-based pay plan in place.6
The problem is, there is little evidence that these approaches actually work. In fact, there is strong evidence that they don’t. An analysis of 248 performance reviews collected from a variety of US-based tech companies found that women receive negative personality criticism that men simply don’t.7 Women are told to watch their tone, to step back. They are called bossy, abrasive, strident, aggressive, emotional and irrational. Out of all these words, only aggressive appeared in men’s reviews at all – ‘twice with an exhortation to be more of it’. More damningly, several studies of performance-related bonuses or salary increases have found that white men are rewarded at a higher rate than equally performing women and ethnic minorities, with one study of a financial corporation uncovering a 25% difference in performance-based bonuses between women and men in the same job.8
The myth of meritocracy achieves its apotheosis in America’s tech industry. According to a 2016 survey, the number one concern of tech start-up founders was ‘hiring good people’, while having a diverse workforce ranked seventh on the list of ten business priorities.9 One in four founders said they weren’t interested in diversity or work-life balance at all. Which, taken together, points to a belief that if you want to find ‘the best people’, addressing structural bias is unnecessary. A belief in meritocracy is all you need.
Actually, a belief in meritocracy may be all you need – to introduce bias, that is. Studies have shown that a belief in your own personal objectivity, or a belief that you are not sexist, makes you less objective and more likely to behave in a sexist way.10 Men (women were not found to exhibit this bias) who believe that they are objective in hiring decisions are more likely to hire a male applicant than an identically described female applicant. And in organisations which are explicitly presented as meritocratic, managers favour male employees over equally qualified female employees.
Tech’s love affair with the myth of meritocracy is ironic for an industry so in thrall to the potential of Big Data, because this is a rare case where the data actually exists. But if in Silicon Valley meritocracy is a religion, its God is a white male Harvard dropout. And so are most of his disciples: women make up only a quarter of the tech industry’s employees and 11% of its executives.11 This is despite women earning more than half of all undergraduate degrees in the US, half of all undergraduate degrees in chemistry, and almost half in maths.12
More than 40% of women leave tech companies after ten years compared to 17% of men.13 A report by the Center for Talent Innovation found that women didn’t leave for family reasons or because they didn’t enjoy the work.14 They left because of ‘workplace conditions’, ‘undermining behaviour from managers’, and ‘a sense of feeling stalled in one’s career’. A feature for the Los Angeles Times similarly found that women left because they were repeatedly passed over for promotion and had their projects dismissed.15 Does this sound like a meritocracy? Or does it look more like institutionalised bias?
That the myth of meritocracy survives in the face of such statistics is testament to the power of the male default: in the same way that men picture a man 80% of the time they think of a ‘person’, it’s possible that many men in the tech industry simply don’t notice how male-dominated it is. But it’s also testament to the attractiveness of a myth that tells the people who benefit from it that all their achievements are down to their own personal merit. It is no accident that those who are most likely to believe in the myth of meritocracy are young, upper-class, white Americans.16
If white upper-class Americans are most likely to believe in the myth of meritocracy, it should come as no surprise that academia is, like tech, a strong follower of the religion. The upper ranks of academia – particularly those of science, technology, engineering and maths (STEM) – are dominated by white, middle- and upper-class men. It is a perfect Petri dish for the myth of meritocracy to flourish in. Accordingly, a recent study found that male academics – particularly those in STEM – rated fake research claiming that academia had no gender bias higher than real research which showed it did.17 Also accordingly, gender bias is in fact plentiful – and well documented.
Numerous studies from around the world have found that female students and academics are significantly less likely than comparable male candidates to receive funding, be granted meetings with professors, be offered mentoring, or even to get the job.18 Where mothers are seen as less competent and often paid less, being a father can work in a man’s favour (a gendered bias that is by no means restricted to academia).19 But despite the abundance of data showing that academia is in fact far from meritocratic, universities continue to proceed as if male and female students, and male and female academics, are operating on a level playing field.
Career progression in academia depends largely on how much you get published in peer-reviewed journals, but getting published is not the same feat for men as it is for women. A number of studies have found that female-authored papers are accepted more often or rated higher under double-blind review (when neither the author nor the reviewer is identifiable).20,21 And although the evidence varies on this point, given the abundant male bias that has been identified in academia, there seems little reason not to institute this form of blind academic audition. Nevertheless, most journals and conferences carry on without adopting this practice.
Of course, female academics do get published, but that’s only half the battle. Citation is often a key metric in determining research impact, which in turn determines career progression, and several studies have found that women are systematically cited less than men.22 Over the past twenty years, men have self-cited 70% more than women23 – and women tend to cite other women more than men do,24 meaning that the publication gap is something of a vicious circle: fewer women getting published leads to a citations gap, which in turn means fewer women progress as they should in their careers, and around again we go. The citations gap is further compounded by male-default thinking: as a result of the widespread academic practice of using initials rather than full names, the gender of academics is often not immediately obvious, leading female academics to be assumed to be male. One analysis found that female scholars are cited as if they are male (by colleagues who have assumed the P stands for Paul rather than Pauline) more than ten times more often than vice versa.25
Writing for the New York Times, economist Justin Wolfers noted a related male-default habit among journalists, who routinely refer to the male contributor as the lead author even when the lead author was in fact a woman.26 This lazy product of male-default thinking is inexcusable in a media report, but it’s even more unacceptable in academia, and yet here too it proliferates. In economics, joint papers are the norm – and joint papers contain a hidden male bias. Men receive the same level of credit for both solo and joint papers, but, unless they are writing with other female economists, women receive less than half as much credit for co-authored papers as men do. This, a US study contends, explains why, although female economists publish as much as male economists, male economists are twice as likely to receive tenure.27 Male-default thinking may also be behind the finding that research perceived to have been done by men is associated with ‘greater scientific quality’:28 this could be a product of pure sexism, but it could also be a result of the mode of thinking that sees male as universal and female as niche. It would certainly go some way to explaining why women are less likely to appear on course syllabuses.29
Of course before a woman gets to face all these hidden hurdles, she must have found the time to do the research in the first place, and that is by no means a given. We’ve already discussed how women’s unpaid workload outside of paid employment impacts on their ability to do research. But their unpaid workload inside the workplace doesn’t help either. When students have an emotional problem, it is their female professors, not their male professors, that they turn to.30 Students are also more likely to request extensions, grade boosts, and rule-bending of female academics.31 In isolation, a request of this kind isn’t likely to take up much time or mental energy – but such requests add up, and they constitute a drain on female academics’ time that male academics mostly aren’t even aware of, and that universities don’t account for.
Women are also asked to do more undervalued admin work than their male colleagues32 – and they say yes, because they are penalised for being ‘unlikeable’ if they say no. (This is a problem across a range of workplaces: women, and in particular ethnic minority women, do the ‘housekeeping’ – taking notes, getting the coffee, cleaning up after everyone – in the office as well as at home.33) Women’s ability to publish is also impacted by their being more likely than their male colleagues to get loaded with extra teaching hours,34 and, like ‘honorary’ admin posts, teaching is viewed as less important, less serious, less valuable, than research. And we run into another vicious circle here: women’s teaching load prevents them from publishing enough, which results in more teaching hours, and so on.
The inequity of women being loaded with less valued work is compounded by the system for evaluating this work, because it is itself systematically biased against women. Teaching evaluation forms are widely used in higher education and they represent another example of a situation where we have the data, but are simply ignoring it. Decades of research35 in numerous countries show that teaching evaluation forms are worse than useless at actually evaluating teaching and are in fact ‘biased against female instructors by an amount that is large and statistically significant’.36 They are, however, pretty good at evaluating gender bias. One of these biases is our old friend ‘men are the default human’, which shows up in objections to female lecturers straying away from a focus on white men. ‘I didn’t come out of this course with any more information except gender and race struggles, than I came in with,’ complained one student who apparently felt that gender and race were not relevant to the topic at hand: US confederation.37
Falling into the trap we encountered in the introduction, of not realising that ‘people’ is as likely to mean ‘women’ as it is to mean ‘men’, another student complained that, ‘Although Andrea stated on the first day she would teach a peoples [sic] perspective it was not illustrated how much was going to be focused on first nation and women’s history.’ Incidentally, it’s worth taking the implication that this lecturer focused almost exclusively on ‘first nations and women’s history’ with a pinch of salt: a friend of mine got a similarly unhappy review from a male student for focusing ‘too much’ on feminism in her political philosophy lectures. She had spoken about feminism once in ten classes.
Less effective male professors routinely receive higher student evaluations than more effective female professors. Students believe that male professors return marked work more quickly – even when that is impossible, because it’s an online course delivered by a single lecturer, with half the students led to believe the professor is male and half female. Female professors are penalised if they aren’t deemed sufficiently warm and accessible. But if they are warm and accessible they can be penalised for not appearing authoritative or professional. Appearing authoritative and knowledgeable as a woman can likewise result in student disapproval, because this violates gendered expectations.38 Meanwhile men are rewarded for being accessible at a level that is simply expected in women, and therefore only noticed if it’s absent.
An analysis39 of 14 million reviews on the website RateMyProfessors.com found that female professors are more likely to be described as ‘mean’, ‘harsh’, ‘unfair’, ‘strict’ and ‘annoying’. And it’s getting worse: female instructors have stopped reading their evaluations in droves, ‘as student comments have become increasingly aggressive and at times violent’. A female political history lecturer at a Canadian university received the following useful feedback from her student: ‘I like how your nipples show through your bra. Thanks.’40 The lecturer in question now wears ‘lightly padded bras’ exclusively.
The teaching evaluation study that revealed women are more likely to be ‘mean’ also found that male professors are more likely to be described as ‘brilliant’, ‘intelligent’, ‘smart’ and a ‘genius’. But were these men actually more in possession of raw talent than their female counterparts? Or is it just that these words are not as gender neutral as they appear? Think of a genius. Chances are, you pictured a man. It’s OK – we all have these unconscious biases. I pictured Einstein – that famous one of him sticking his tongue out, his hair all over the place. And the reality is that this bias (that I like to call ‘brilliance bias’) means that male professors are routinely considered more knowledgeable, more objective, more innately talented. And career progression that rests on teaching evaluations completely fails to account for it.
Brilliance bias is in no small part a result of a data gap: we have written so many female geniuses out of history, they just don’t come to mind as easily. The result is that when ‘brilliance’ is considered a requirement for a job, what is really meant is ‘a penis’. Several studies have found that the more a field is culturally understood to require ‘brilliance’ or ‘raw talent’ to succeed – think philosophy, maths, physics, music composition, computer science – the fewer women there will be studying and working in it.41 We just don’t see women as naturally brilliant. In fact, we seem to see femininity as inversely associated with brilliance: a recent study where participants were shown photos of male and female science faculty at elite US universities also found that appearance had no impact on how likely it was that a man would be judged to be a scientist.42 When it came to women, however, the more stereotypically feminine they looked, the less likely it was that people would think they were a scientist.
We teach brilliance bias to children from an early age. A recent US study found that when girls start primary school at the age of five, they are as likely as five-year-old boys to think women could be ‘really really smart’.43 But by the time they turn six, something changes. They start doubting their gender. So much so, in fact, that they start limiting themselves: if a game is presented to them as intended for ‘children who are really, really smart’, five-year-old girls are as likely to want to play it as boys – but six-year-old girls are suddenly uninterested. Schools are teaching little girls that brilliance doesn’t belong to them. No wonder that by the time they’re filling out university evaluation forms, students are primed to see their female teachers as less qualified.
Schools are also teaching brilliance bias to boys. As we saw in the introduction, following decades of ‘draw a scientist’ studies where children overwhelmingly drew men, a recent ‘draw a scientist’ meta-analysis was celebrated across the media as showing that finally we were becoming less sexist.44 Where in the 1960s only 1% of children drew female scientists, 28% do now. This is of course an improvement, but it is still far off reality. In the UK, women actually outnumber men in a huge range of science degrees: 86% of those studying polymers, 57% of those studying genetics, and 56% of those studying microbiology are female.45
And in any case, the results are actually more complicated than the headlines suggest and still provide damning evidence that data gaps in school curriculums are teaching children biases. When children start school they draw roughly equal percentages of male and female scientists, averaged out across boys and girls. By the time children are seven or eight, male scientists significantly outnumber female scientists. By the age of fourteen, children are drawing four times as many male scientists as female scientists. So although more female scientists are being drawn, much of the increase has been in younger children before the education system teaches them data-gap-informed gender biases.
There was also a significant gender difference in the change. Between 1985 and 2016, the average percentage of female scientists drawn by girls rose from 33% to 58%. The respective figures for boys were 2.4% and 13%. This discrepancy may shed some light on a 2016 study which found that while female students ranked their peers according to actual ability, male biology students consistently ranked their fellow male students as more intelligent than better-performing female students.46 Brilliance bias is one hell of a drug. And it doesn’t only lead to students mis-evaluating their teachers or each other: there is also evidence that teachers are mis-evaluating their students.
Several studies conducted over the past decade or so show that letters of recommendation are another seemingly gender-neutral part of a hiring process that is in fact anything but.47 One US study found that female candidates are described with more communal (warm; kind; nurturing) and less active (ambitious; self-confident) language than men. And having communal characteristics included in your letter of recommendation makes it less likely that you will get the job,48 particularly if you’re a woman: while ‘team-player’ is taken as a leadership quality in men, for women the term ‘can make a woman seem like a follower’.49 Letters of recommendation for women have also been found to emphasise teaching (lower status) over research (higher status);50 to include more terms that raise doubt (hedges; faint praise);51 and to be less likely to include standout adjectives like ‘remarkable’ and ‘outstanding’. Women were more often described with ‘grindstone’ terms like ‘hard-working’.
There is a data gap at the heart of universities using teaching evaluations and letters of recommendation as if they are gender neutral in effect as well as in application, although like the meritocracy data gap more broadly, it is not a gap that arises from a lack of data so much as a refusal to engage with it. Despite all the evidence, letters of recommendation and teaching evaluations continue to be heavily weighted and used widely in hiring, promoting and firing, as if they are objective tests of worth.52 In the UK, student evaluations are set to become even more important, when the Teaching Excellence Framework (TEF) is introduced in 2020. The TEF will be used to determine how much funding a university can receive, and the National Students Survey will be considered ‘a key metric of teaching success’. Women can expect to be penalised heavily in this new Teaching Excellence world.
The lack of meritocracy in academia is a problem that should concern all of us if we care about the quality of the research that comes out of the academy, because studies show that female academics are more likely than men to challenge male-default analysis in their work.53 This means that the more women who are publishing, the faster the gender data gap in research will close. And we should care about the quality of academic research. This is not an esoteric question, relevant only to those who inhabit the ivory towers. The research produced by the academy has a significant impact on government policy, on medical practice, on occupational health legislation. The research produced by the academy has a direct impact on all of our lives. It matters that women are not forgotten here.
Given the evidence that children learn brilliance bias at school, it should be fairly easy to stop teaching them this. And in fact a recent study found that female students perform better in science when the images in their textbooks include female scientists.54 So to stop teaching girls that brilliance doesn’t belong to them, we just need to stop misrepresenting women. Easy.
It’s much harder to correct for brilliance bias once it’s already been learnt, however, and once children who’ve been taught it grow up and enter the world of work, they often start perpetuating it themselves. This is bad enough when it comes to human-on-human recruitment, but with the rise of algorithm-driven recruiting the problem is set to get worse, because there is every reason to suspect that this bias is being unwittingly hardwired into the very code to which we’re outsourcing our decision-making.
In 1984 American tech journalist Steven Levy published his bestselling book Hackers: Heroes of the Computer Revolution. Levy’s heroes were all brilliant. They were all single-minded. They were all men. They also didn’t get laid much. ‘You would hack, and you would live by the Hacker Ethic, and you knew that horribly inefficient and wasteful things like women burned too many cycles, occupied too much memory space,’ Levy explained. ‘Women, even today, are considered grossly unpredictable,’ one of his heroes told him. ‘How can a [default male] hacker tolerate such an imperfect being?’
Two paragraphs after having reported such blatant misogyny, Levy nevertheless found himself at a loss to explain why this culture was more or less ‘exclusively male’. ‘The sad fact was that there never was a star-quality female hacker’, he wrote. ‘No one knows why.’ I don’t know, Steve, we can probably take a wild guess.
By failing to make the obvious connection between an openly misogynistic culture and the mysterious lack of women, Levy contributed to the myth of innately talented hackers being implicitly male. And, today, it’s hard to think of a profession more in thrall to brilliance bias than computer science. ‘Where are the girls that love to program?’ asked a high-school teacher who took part in a summer programme for advanced-placement computer-science teachers at Carnegie Mellon; ‘I have any number of boys who really really love computers,’ he mused.55 ‘Several parents have told me their sons would be on the computer programming all night if they could. I have yet to run into a girl like that.’
This may be true, but as one of his fellow teachers pointed out, failing to exhibit this behaviour doesn’t mean that his female students don’t love computer science. Recalling her own student experience, she explained how she ‘fell in love’ with programming when she took her first course in college. But she didn’t stay up all night, or even spend a majority of her time programming. ‘Staying up all night doing something is a sign of single-mindedness and possibly immaturity as well as love for the subject. The girls may show their love for computers and computer science very differently. If you are looking for this type of obsessive behavior, then you are looking for a typically young, male behavior. While some girls will exhibit it, most won’t.’
Beyond its failure to account for female socialisation (girls are penalised for being antisocial in a way boys aren’t), the odd thing about framing an aptitude for computer science around typically male behaviour is that coding was originally seen as a woman’s game. In fact, women were the original ‘computers’, doing complex maths problems by hand for the military before the machine that took their name replaced them.56
Even after they were replaced by a machine, it took years before they were replaced by men. ENIAC, the world’s first fully functional digital computer, was unveiled in 1946, having been programmed by six women.57 During the 1940s and 50s, women remained the dominant sex in programming,58 and in 1967 Cosmopolitan magazine published ‘The Computer Girls’, an article encouraging women into programming.59 ‘It’s just like planning a dinner,’ explained computing pioneer Grace Hopper. ‘You have to plan ahead and schedule everything so that it’s ready when you need it. Programming requires patience and the ability to handle detail. Women are ‘naturals’ at computer programming.’
But it was in fact around this time that employers were starting to realise that programming was not the low-skilled clerical job they had once thought. It wasn’t like typing or filing. It required advanced problem-solving skills. And, brilliance bias being more powerful than objective reality (given women were already doing the programming, they clearly had these skills), industry leaders started training men. And then they developed hiring tools that seemed objective, but were actually covertly biased against women. Rather like the teaching evaluations in use in universities today, these tests have been criticised as telling employers ‘less about an applicant’s suitability for the job than his or her possession of frequently stereotyped characteristics’.60 It’s hard to know whether these hiring tools were developed as a result of a gender data gap (not realising that the characteristics they were looking for were male-biased) or a result of direct discrimination, but what is undeniable is that they were biased towards men.
Multiple-choice aptitude tests which required ‘little nuance or context-specific problem solving’ focused instead on the kind of mathematical trivia that even then industry leaders were seeing as increasingly irrelevant to programming. What they were mainly good at testing was the type of maths skills men were, at the time, more likely to have studied at school. They were also quite good at testing how well networked an applicant was: the answers were frequently available through all-male networks like college fraternities and Elks lodges (a US-based fraternal order).61
Personality profiles formalised the programmer stereotype nodded to by the computer-science teacher at the Carnegie Mellon programme: the geeky loner with poor social and hygiene skills. A widely quoted 1967 psychological paper had identified a ‘disinterest in people’ and a dislike of ‘activities involving close personal interaction’ as a ‘striking characteristic of programmers’.62 As a result, companies sought these people out, they became the top programmers of their generation, and the psychological profile became a self-fulfilling prophecy.
This being the case, it should not surprise us to find this kind of hidden bias enjoying a resurgence today courtesy of the secretive algorithms that have become increasingly involved in the hiring process. Writing for the Guardian, Cathy O’Neil, the American data scientist and author of Weapons of Math Destruction, explains how online tech-hiring platform Gild (which has now been bought and brought in-house by investment firm Citadel63) enables employers to go well beyond a job applicant’s CV, by combing through their ‘social data’.64 That is, the trace they leave behind them online. This data is used to rank candidates by ‘social capital’, which basically refers to how integral a programmer is to the digital community. This can be measured through how much time they spend sharing and developing code on development platforms like GitHub or Stack Overflow. But the mountains of data Gild sifts through also reveal other patterns.
For example, according to Gild’s data, frequenting a particular Japanese manga site is a ‘solid predictor of strong coding’.65 Programmers who visit this site therefore receive higher scores. Which all sounds very exciting, but as O’Neil points out, awarding marks for this rings immediate alarm bells for anyone who cares about diversity. Women, who as we have seen do 75% of the world’s unpaid care work, may not have the spare leisure time to spend hours chatting about manga online. O’Neil also points out that ‘if, like most of techdom, that manga site is dominated by males and has a sexist tone, a good number of the women in the industry will probably avoid it’. In short, Gild seems to be something like the algorithm form of the male computer-science teacher from the Carnegie programme.
Gild undoubtedly did not intend to create an algorithm that discriminated against women. They were intending to remove human biases. But if you aren’t aware of how those biases operate, if you aren’t collecting data and taking a little time to produce evidence-based processes, you will continue to blindly perpetuate old injustices. And so by not considering the ways in which women’s lives differ from men’s, both on and offline, Gild’s coders inadvertently created an algorithm with a hidden bias against women.
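The mechanism is easy to see in miniature. The sketch below is purely hypothetical – Gild’s actual model is proprietary, and none of these names or weights come from it – but it shows how a scorer that never sees gender can still rank two equally productive candidates differently, once a single input (here, activity on a male-dominated forum) correlates with gender.

```python
# Hypothetical sketch – not Gild's code. A 'social capital' scorer that
# never sees gender can still disadvantage women if one of its features
# (time spent on a male-dominated forum) is correlated with gender.

def social_capital_score(candidate):
    # Toy weights, chosen only for illustration.
    return 0.5 * candidate["github_commits"] + 10 * candidate["forum_visits"]

# Two equally productive programmers; only the proxy feature differs.
dev_a = {"github_commits": 400, "forum_visits": 1}  # frequents the forum
dev_b = {"github_commits": 400, "forum_visits": 0}  # avoids it

ranked = sorted([("a", dev_a), ("b", dev_b)],
                key=lambda kv: social_capital_score(kv[1]), reverse=True)
print([name for name, _ in ranked])  # ['a', 'b'] – identical skill, different rank
```

Note that deleting a ‘gender’ column wouldn’t help, because there isn’t one: the bias lives in the proxy feature, which is exactly why it can only be caught by auditing the model’s inputs and outcomes against the data the chapter argues companies aren’t collecting.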
But that’s not even the most troubling bit. The most troubling bit is that we have no idea how bad the problem actually is. Most algorithms of this kind are kept secret and protected as proprietary code. This means that we don’t know how these decisions are being made and what biases they are hiding. The only reason we know about this potential bias in Gild’s algorithm is because one of its creators happened to tell us. This, therefore, is a double gender data gap: first in the knowledge of the coders designing the algorithm, and second, in the knowledge of society at large, about just how discriminatory these AIs are.
Employment procedures that are unwittingly biased towards men are an issue in promotion as well as hiring. A classic example comes from Google, where women weren’t nominating themselves for promotion at the same rate as men. This is unsurprising: women are conditioned to be modest, and are penalised when they step outside of this prescribed gender norm.66 But Google was surprised. And, to do them credit, they set about trying to fix it. Unfortunately the way they went about fixing it was quintessential male-default thinking.
It’s not clear whether Google didn’t have or didn’t care about the data on the cultural expectations that are imposed on women, but either way, their solution was not to fix the male-biased system: it was to fix the women. Senior women at Google started hosting workshops ‘to encourage women to nominate themselves’, Laszlo Bock, head of people operations, told the New York Times in 2012.67 In other words, they held workshops to encourage women to be more like men. But why should we accept that the way men do things, the way men see themselves, is the correct way? Recent research shows that while women tend to assess their intelligence accurately, men of average intelligence think they are more intelligent than two-thirds of people.68 This being the case, perhaps it wasn’t that women’s rates of putting themselves up for promotion were too low. Perhaps it was that men’s were too high.
Bock claimed Google’s workshops as a success (he told the New York Times that women are now promoted proportionally to men), but if that is the case, why the reluctance to provide the data to prove it? When the US Department of Labor conducted an analysis of Google’s pay practices in 2017 it found ‘systemic compensation disparities against women pretty much across the entire workforce’, with ‘six to seven standard deviations between pay for men and women in nearly every job category’.69 Google has since repeatedly refused to hand over fuller pay data to the Labor Department, fighting in court for months to avoid the demand. There was no pay imbalance, they insisted.
For a company built almost entirely on data, Google’s reluctance to engage here may seem surprising. It shouldn’t be. Software engineer Tracy Chou has been investigating the number of female engineers in the US tech industry since 2013 and has found that ‘[e]very company has some way of hiding or muddling the data’.70 They also don’t seem interested in measuring whether or not their ‘initiatives to make the work environment more female-friendly, or to encourage more women to go into or stay in computing’, are actually successful. There’s ‘no way of judging whether they’re successful or worth mimicking, because there are no success metrics attached to any of them’, explains Chou. And the result is that ‘nobody is having honest conversations about the issue’.
It’s not entirely clear why the tech industry is so afraid of sex-disaggregated employment data, but its love affair with the myth of meritocracy might have something to do with it: if all you need to get the ‘best people’ is to believe in meritocracy, what use is data to you? The irony is, if these so-called meritocratic institutions actually valued science over religion, they could make use of the evidence-based solutions that do already exist. Take quotas, which, contrary to popular misconception, a London School of Economics study recently found ‘weed out incompetent men’ rather than promote unqualified women.71
They could also collect and analyse data on their hiring procedures to see whether these are as gender neutral as they think. MIT did this, and their analysis of over thirty years of data found that women were disadvantaged by ‘usual departmental hiring processes’, and that ‘exceptional women candidates might very well not be found by conventional departmental search committee methods’.72 Unless search committees specifically asked department heads for names of outstanding female candidates, they tended not to put women forward. Many women who were eventually hired when special efforts were made to find female candidates would not have applied for the job without encouragement. In line with the LSE findings, the paper also found that standards were not lowered during periods when special effort was made to hire women: in fact, if anything, the women who were hired ‘are somewhat more successful than their male peers’.
The good news is that when organisations do look at the data and attempt to act on it, the results can be dramatic. When a European company advertised for a technical position using a stock photo of a man alongside copy that emphasised ‘aggressiveness and competitiveness’ only 5% of the applicants were women. When they changed the ad to a stock photo of a woman and focused the text on enthusiasm and innovation, the number of women applying shot up to 40%.73 Digital design company Made by Many found a similar shift when they changed the wording of their ad for a senior design role to focus more on teamwork and user experience and less on bombastic single-minded egotism.74 The role was the same, but the framing was different – and the number of female applicants more than doubled.
These are just two anecdotes, but there is plenty of evidence that the wording of an ad can affect women’s likelihood of applying for a job. A study of 4,000 job ads found that women were put off from applying for jobs that used wording associated with masculine stereotypes such as ‘aggressive’, ‘ambitious’ or ‘persistent’.75 Significantly, women didn’t consciously note the language or realise it was having this impact on them. They rationalised the lack of appeal, putting it down to personal reasons – which goes to show that you don’t have to realise you’re being discriminated against to in fact be discriminated against.
Several tech start-ups have also taken a leaf out of the New York Philharmonic’s book and developed blind recruitment systems.76 GapJumpers gives job applicants mini assignments designed for a specific post, and the top-performing applicants are sent to hiring managers without any identifying information. The result? Around 60% of those selected end up coming from under-represented backgrounds. Tech recruiter Speak with a Geek found a similarly dramatic result when they presented the same 5,000 candidates to the same group of employers on two different occasions. The first time, details like names, experience and background were provided, and 5% of those selected for interview were women. The second time, those details were suppressed – and the proportion of women selected for interview was 54%.
While blind recruitment might work for the initial hiring process, it is less easy to see how it could be incorporated into promotions. But there is a solution here too: accountability and transparency. One tech company made managers truly accountable for their decisions on salary increases by collecting data on all their decisions and, crucially, appointing a committee to monitor this data.77 Five years after adopting this system, the pay gap had all but disappeared.