THE COMPANIES
THE NEW TITANS
The AI revolution is being led by companies that don’t follow, and often don’t answer to, the usual corporate rules. This has created some fundamental problems, often of an ethical nature, and it will continue to do so.
Many previous scientific revolutions started in universities and government laboratories. Penicillin was discovered, for example, by Sir Alexander Fleming in 1928 at St Mary’s Hospital Medical School, then part of the University of London. It was a serendipitous discovery, and the medical revolution that followed undoubtedly changed the planet.
As a second example, the double-helix structure of DNA was discovered by James Watson and Francis Crick in 1953 at the University of Cambridge, using data collected by Maurice Wilkins and Rosalind Franklin at King’s College London. This was an important step in unlocking the mysteries of life. The genetic revolution that followed has only just begun to change our lives.
Perhaps most relevant to our story, the first general-purpose digital computer was built in 1945 at the University of Pennsylvania’s Moore School of Electrical Engineering. It was a 30-ton hulk made up of 18,000 vacuum tubes, over 7,000 crystal diodes, 1,500 relays and 10,000 capacitors, and it consumed 150 kilowatts of electric power. But it was a thousand times faster than the electro-mechanical calculators it replaced. The computing revolution that followed has most definitely changed our lives.
While artificial intelligence also started out in universities – at places like MIT, Stanford and Edinburgh during the 1960s – it is technology companies such as Google, Facebook, Amazon and IBM, as well as younger upstarts like Palantir, OpenAI and Vicarious, that are driving much of the AI revolution today. These companies have the computing power, data sets and engineering teams that underpin many of the breakthroughs, and that numerous academic researchers like me unashamedly envy.
The largest of these companies are rightly known as ‘Big Tech’. But not because they employ lots of people. Indeed, for every million dollars of turnover, they employ roughly one-hundredth as many people as companies in other sectors. McDonald’s, for example, employs over 120 times as many people per million dollars of turnover as Facebook does.
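To make the arithmetic concrete, here is a minimal sketch. The revenue and headcount figures are rough, circa-2017 public numbers, and the McDonald’s count includes franchise staff – both assumptions made purely for illustration:

```python
# Back-of-the-envelope comparison of employees per million dollars of
# turnover. Figures are rough circa-2017 numbers, and the McDonald's
# headcount includes franchise staff -- assumptions for illustration.
facebook_revenue_usd_m, facebook_staff = 40_650, 25_000
mcdonalds_revenue_usd_m, mcdonalds_staff = 22_800, 1_900_000

fb = facebook_staff / facebook_revenue_usd_m      # ~0.6 employees per $1M
mcd = mcdonalds_staff / mcdonalds_revenue_usd_m   # ~83 employees per $1M

print(f"Facebook:   {fb:.2f} employees per $1M of turnover")
print(f"McDonald's: {mcd:.1f} employees per $1M of turnover")
print(f"Ratio: roughly {mcd / fb:.0f}x")          # ~135x
```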
The impressive market capitalisation of the Big Tech companies is one reason for the moniker ‘big’. Their market values are truly spectacular: a concentration of wealth that the world has never seen before. The first trillion-dollar company in the history of the planet was Apple. It crossed over to a market cap of 13 figures in August 2018. Two years later, Apple had doubled in value. Apple is now worth more than the whole of the FTSE 100 Index, the 100 most valuable companies listed on the stock market in the United Kingdom.
Since Apple became a trillion-dollar company, three other technology companies have joined the four-comma club: Amazon, Microsoft and Alphabet (the parent company of Google). Facebook is likely to join these trillion-dollar stocks shortly. The immense wealth of these companies gives them immense power and influence. Governments around the world are struggling to contain them. In 2019, Amazon had a turnover of over $280 billion. That is more than the GDP of many small countries. Amazon’s turnover is, for example, more than the GDP of Portugal ($231 billion in 2020), and nearly 50 per cent more than that of Greece ($189 billion). Amazon’s turnover puts the productivity of its 1 million employees on a par with the 5 million people of Finland, who together generated $271 billion of wealth in 2020.
The Big Tech companies dominate their markets. Google answers eight out of every nine search queries worldwide. If it weren’t effectively locked out of China, it would probably answer even more. The other Big Tech companies are also dominant in their own spaces. Two billion out of the nearly 8 billion people on the planet use Facebook. In the United States, Amazon is responsible for around half of all e-commerce. And in China, Alibaba’s payment platform Alipay is used for about half of all online transactions.
The founders of technology companies, large and small, are unsurprisingly celebrated like rock stars. We know many of them by their first names. Bill and Paul. Larry and Sergey. Mark and Jeff. But they are modern-day robber barons just like Mellon, Carnegie and Rockefeller, the leaders of the technological revolution of their time.
Many of these founders wield huge power. This goes well beyond the power that CEOs of companies in other sectors typically possess. In part, this is to be welcomed. It has enabled innovation and allowed technology companies to move fast. But in moving fast, many things have been broken.
One reason for such power is the unconventional share structures of technology companies. Even when they have given up majority ownership of their companies, many have retained absolute or near-absolute decision-making power. They can easily push back against any resistance.
As an example, Facebook’s Class B shares have ten times the voting rights of the Class A shares. And Mark Zuckerberg owns 75 per cent of Facebook’s Class B shares. He has therefore been able to ignore calls from other shareholders for Facebook to reform. And for reasons that are hard to understand, the Securities and Exchange Commission has no problem with founders like Zuckerberg being both Facebook’s CEO and the chair of its board.
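A toy calculation shows how such a dual-class structure converts a minority economic stake into outright voting control. The share counts below are invented round numbers, not Facebook’s actual figures:

```python
# Toy illustration of dual-class voting power. Share counts are
# invented round numbers; only the structure (Class B carries ten
# votes per share, and the founder holds 75% of Class B) mirrors
# the arrangement described above.
class_a = 2_400_000_000      # hypothetical Class A shares, 1 vote each
class_b = 500_000_000        # hypothetical Class B shares, 10 votes each
founder_b = 0.75 * class_b   # the founder's Class B holding

total_votes = class_a * 1 + class_b * 10
founder_votes = founder_b * 10

economic_stake = founder_b / (class_a + class_b)
voting_power = founder_votes / total_votes

print(f"Economic ownership: {economic_stake:.0%}")  # ~13%
print(f"Voting power:       {voting_power:.0%}")    # ~51%
```

With numbers like these, a founder holding around 13 per cent of the equity still commands a majority of the votes – and no coalition of ordinary shareholders can outvote him.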
Perhaps most egregious was the listing on the New York Stock Exchange of Snap Inc., the company behind Snapchat, in March 2017. This IPO sold shares to the public that had no voting rights at all. Despite this, the IPO raised $500 million. And the stock closed its first day up 44 per cent. What was the Securities and Exchange Commission thinking? How did we go from executives of publicly listed companies being accountable to the shareholders, to executives being unaccountable to anyone but themselves? And how did investors learn to not care?
NOTHING VENTURED
Another reason technology companies are so powerful is that the market doesn’t expect them to be profitable. Unicorns like Uber, Snapchat and Spotify have, for example, never turned a profit. Even those technology companies that do turn a profit are not expected to return much of it to investors. This is ironic, as they can better afford to pay dividends to their shareholders than many of the companies that traditionally do.
Most Big Tech companies are sitting on large cash mountains. It is estimated that US companies have over $1 trillion of profits waiting in offshore accounts for a tax break to bring them home. Apple is one of the worst offenders, with around $250 billion sitting offshore. But at least it pays out a small cash dividend, giving shareholders a yield of around 1 per cent. Amazon is sitting on around $86 billion in cash and has annual profits of around $33 billion, yet has never returned any of this to its investors in the form of a dividend.
Even those technology companies that haven’t been able to return a profit have had little difficulty in raising billions of dollars from investors. Uber, for example, got over $20 billion when it went public in 2019. In that same year, it lost $1 out of every $4 it earned in its revenue of $14 billion. Indeed, it’s not clear to me whether Uber can ever be profitable.
The typical view within technology companies is that it is better to invest for growth and market share than to turn a profit in the short term. There is some truth to this growth-at-all-costs strategy – Amazon has demonstrated its long-term value. But for every Amazon, there are poorly managed companies like Pets.com, the poster child of the dotcom crash, that were never going to be sustainable.
The problem begins with the ease with which technology companies can raise money. Venture capital distorts markets. Why haven’t drivers banded together and formed a cooperative to compete against Uber? Digital technologies are meant to get rid of friction: ideally, we would use technology to connect drivers seamlessly with passengers, creating a frictionless marketplace. That is the brilliance of Uber’s business model. But at the end of the day, Uber is largely a tax sitting between those wanting a ride and those who can offer one, stealing much of the value out of the system. Uber drivers who earn so little that they have to sleep in their cars are certainly not getting sufficient value out of the market. In the long term, the ultimate winner in the ride-sharing market is neither the passenger nor the driver. The winner is simply the venture fund with the deepest pockets.
A driver cooperative can’t compete against a business like Uber that doesn’t need to make money. And certainly not against a business like Uber that doesn’t even need to break even, but will happily lose money year after year until its competitors have been driven out of business.
Such factors have made technology companies an alarming force in the development and deployment of artificial intelligence. Awash with cheap money. Lacking in transparency and accountability. Engineered to disrupt markets. And driven by idiosyncratic founders with near-absolute power. It’s hard to think of a more dangerous cocktail.
SUPER-INTELLIGENCE
One somewhat distant concern about artificial intelligence is the threat posed by the emergence of super-intelligence. From what I can tell, most of my colleagues, other researchers working in AI, are not greatly worried about the idea that we might one day build super-intelligent machines. But this possibility has tortured many people outside the field – like the philosopher Nick Bostrom.1
One of Bostrom’s fears is that super-intelligence poses an existential threat to humanity. For example, what if we build a super-intelligent machine and ask it to make paperclips? Might it not use its ‘superior’ intelligence to take over the planet and turn everything, including us, into paperclips?
This is what is called a ‘value alignment problem’. The values of this super-intelligent paperclip-making machine are not properly aligned with those of humankind. It’s very difficult to specify precisely what we would like a super-intelligence to do. Suppose we want to eliminate cancer. ‘Easy,’ a super-intelligence might decide: ‘I simply need to get rid of all hosts of cancer.’ And so it would set about killing every living thing!
One reason I don’t have existential fears about some non-human super-intelligence is that we already have non-human super-intelligence on Earth. We already have a machine more intelligent than any one of us. A machine with more power and resources at its disposal than any individual. It’s called a company.
Companies marshal the collective intelligence of their employees to do things that individuals alone cannot do. No individual on their own can design and build a modern microprocessor. But Intel can. No individual on their own can design and build a nuclear power station. But General Electric can.
Probably no individual on their own will build an artificial general intelligence, an intelligent machine that matches or even exceeds any human intelligence. But it is highly likely that a company will, at some point in the future, be able to do so. Indeed, as I say, companies already are a form of super-intelligence.
That brings me neatly back to the problem of value alignment. This seems precisely to be one of the major problems we face today with these super-intelligent companies. Their parts – the employees, the board, the shareholders – may be intelligent, ethical and responsible. But the behaviours that emerge out of their combined super-intelligent efforts may not be ethical and responsible. So how do we ensure that corporate values are aligned with the public good?
THE CLIMATE EMERGENCY
If you will indulge me, I’m going to make a little detour to the topic of the climate emergency. Here we have perhaps the clearest example of an issue where the values of companies have not been aligned with the public good. On a positive note, however, there is some evidence that in recent years those corporate values are starting to align with the public good.
More than a century ago, in 1899, the Swedish meteorologist Nils Ekholm suggested that the burning of coal could eventually double the concentration of atmospheric CO2, and that this would ‘undoubtedly cause a very obvious rise of the mean temperature of the Earth’. Back then, the fear was mostly of a new ice age. Such an increase in temperature was therefore considered desirable: a way of preventing this eventuality.
However, by the 1970s and 1980s, climate change had become a subject of serious scientific concern. These concerns culminated in the setting up of the Intergovernmental Panel on Climate Change (IPCC) by the United Nations in 1988. The IPCC was established to provide a scientific position on climate change, as well as on its political and economic impacts.
Meanwhile, oil companies such as Exxon and Shell were also researching climate change. In July 1977, one of Exxon’s senior scientists, James Black, reported to the company’s executives that there was general scientific agreement that the burning of fossil fuels was the most likely cause of global climate change. Five years later, the manager of Exxon’s Environmental Affairs Program, M.B. Glaser, sent an internal report to management that estimated that fossil fuels and deforestation would double the carbon dioxide concentrations in the Earth’s atmosphere by 2090. In fact, CO2 concentrations in the Earth’s atmosphere have already increased by around 50 per cent. The best estimates now are that a doubling will occur even sooner, perhaps by 2060.
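The arithmetic behind these estimates is simple. Here is a rough sketch, assuming the standard pre-industrial baseline of 280 parts per million and a constant annual increase in line with recent decades (the growth path is, of course, an assumption):

```python
# Rough extrapolation of atmospheric CO2 toward a doubling of the
# pre-industrial concentration. The constant growth rate is an
# assumption based on recent decades.
preindustrial_ppm = 280.0
current_ppm = 415.0          # approximate level circa 2021
growth_ppm_per_year = 2.5    # recent average annual increase

doubling_ppm = 2 * preindustrial_ppm   # 560 ppm
increase_so_far = (current_ppm - preindustrial_ppm) / preindustrial_ppm
years_to_double = (doubling_ppm - current_ppm) / growth_ppm_per_year

print(f"Increase so far: {increase_so_far:.0%}")         # ~48%
print(f"Doubling year:   {2021 + years_to_double:.0f}")  # ~2079
```

At a constant rate of increase, the doubling arrives late this century; estimates closer to 2060 assume the annual increase itself keeps rising, as it has for most of the past half-century.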
Glaser’s report suggested that a ‘doubling of the current [CO2] concentration could increase average global temperature by about 1.3 degrees Celsius to 3.1 degrees Celsius’, that ‘there could be considerable adverse impact including the flooding of some coastal land masses as a result of a rise in sea level due to melting of the Antarctic ice sheet’, and that ‘mitigation of the “greenhouse effect” will require major reductions in fossil fuel combustion’.
Despite these warnings, Exxon invested significant resources in climate change denial during the decades that followed. Exxon was, for example, a founding member of the Global Climate Coalition, a collection of businesses opposed to the regulation of emissions of greenhouse gases. And Exxon gave over $20 million to organisations denying climate change.
It wasn’t until 2007 that ExxonMobil (as the company had become) publicly acknowledged the risks of climate change. The company’s vice president for public affairs, Kenneth Cohen, told the Wall Street Journal that ‘we know enough now – or, society knows enough now – that the risk is serious and action should be taken’. However, it would take seven more years before ExxonMobil published its first report detailing those risks.
Clearly, the values of ExxonMobil were not aligned with those of the wider public. But there is some evidence that corporate values are starting to shift in a favourable direction. ExxonMobil has, for instance, invested around $100 million in green technologies. The company now supports a revenue-neutral tax on carbon, and has lobbied for the United States to remain in the Paris Climate Agreement.
Many other companies are starting to act on climate change. The news was rather overshadowed by the unfolding pandemic, but in February 2020 the oil and gas giant BP promised to become carbon-neutral by 2050 or sooner. Also in February 2020, Australia’s second-largest metals and mining company, Rio Tinto, announced it would spend $1 billion to reach net zero emissions by 2050.
In July 2020, Australia’s largest energy provider, and the country’s biggest carbon emitter, AGL Energy, announced a target of net zero emissions by 2050. It even tied the long-term bonuses of its executives to that goal. And in September 2020, the world’s largest mining company, BHP, joined the club, planning to be net zero by 2050. While politicians are in general failing to act with sufficient urgency on the climate emergency, it is good to see that many companies are finally prepared to lead the way.
Indeed, it is essential that companies act. Since the founding of the IPCC in 1988, just 100 companies have been responsible for 71 per cent of greenhouse gas emissions globally. ExxonMobil is the fifth-most polluting out of these 100 corporations: it alone produces 2 per cent of all global emissions. We can make personal changes to reduce our carbon footprint, but none of that will matter if these 100 companies don’t act more responsibly.
BAD BEHAVIOUR
Let’s return to the tech sector. There is plentiful evidence that technology companies, just like the 100 companies responsible for the majority of greenhouse gas emissions, have a value alignment problem. You could write a whole book about the failures of technology companies to be good corporate citizens. I’ll just give you a few examples, but new ones are uncovered almost every day.
Let’s begin with Facebook’s newsfeed algorithm. This is an example of a value alignment problem on many levels. On the software level, its algorithm is clearly misaligned with the public good. All Facebook wants to do is maximise user engagement. Of course, user engagement is hard to measure, so Facebook has decided instead to maximise clicks. This has caused many issues. Filter bubbles. Fake news. Clickbait. Political extremism. Even genocide.2
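To see how this misalignment plays out at the software level, consider a toy ranker. Everything here – the posts, the click probabilities, the scoring rule – is invented for illustration; it is a sketch of the proxy problem, not Facebook’s actual code:

```python
# Toy newsfeed ranker that optimises a proxy (predicted clicks)
# rather than genuine user benefit. All posts and probabilities
# are invented for illustration.
posts = [
    {'title': 'Local council publishes budget report', 'p_click': 0.02},
    {'title': "You won't BELIEVE what happened next!", 'p_click': 0.30},
    {'title': 'Friend shares holiday photos',          'p_click': 0.10},
]

# Ranking purely by predicted click probability: outrage and
# clickbait float to the top, whatever their value to the reader.
for post in sorted(posts, key=lambda p: p['p_click'], reverse=True):
    print(f"{post['p_click']:.0%}  {post['title']}")
```

Nothing in such an objective penalises outrage, misinformation or addictiveness; the misalignment is baked into what is measured.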
Facebook’s newsfeed algorithm is also an example of a value alignment problem at the corporate level. How could it be that Facebook decided that clicks were the overall goal? In September 2020, Tim Kendall, who was ‘Director of Monetization’ for Facebook from 2006 until 2010, told a Congressional committee:
We sought to mine as much attention as humanly possible . . . We took a page from Big Tobacco’s playbook, working to make our offering addictive at the outset . . . We initially used engagement as sort of a proxy for user benefit. But we also started to realize that engagement could also mean [users] were sufficiently sucked in that they couldn’t work in their own best long-term interest to get off the platform . . . We started to see real-life consequences, but they weren’t given much weight. Engagement always won, it always trumped.3
In 2018, as evidence of the harmful effects of Facebook’s newsfeed algorithm became impossible to ignore, Mark Zuckerberg announced a major overhaul: the newsfeed would now emphasise ‘meaningful social interactions’ over ‘relevant content’. The changes prioritised content produced by a user’s friends and family over ‘public content’, such as videos, photos or posts shared by businesses and media outlets.
Facebook’s corporate values are arguably in opposition to the public good in a number of other areas too. In October 2016, for example, the investigative news outlet ProPublica published a story under the headline ‘Facebook Lets Advertisers Exclude Users by Race’.4 The story exposed how Facebook’s micro-targeting tools let advertisers direct adverts at its users5 according to their race and other categories.
Adverts for housing or employment that discriminate against people based on race, gender or other protected features are prohibited by US federal law. The Fair Housing Act of 1968 bans adverts that discriminate ‘based on race, color, religion, sex, handicap, familial status, or national origin’. And the Civil Rights Act of 1964 prohibits job adverts which discriminate ‘based on race, color, religion, sex and national origin’.
Despite the outcry that followed ProPublica’s story, Facebook continued to let advertisers target their adverts by race. One year later, in November 2017, ProPublica ran the headline ‘Facebook (Still) Letting Housing Advertisers Exclude Users by Race’.6 Nothing much had changed. As a computer programmer myself, I can’t believe that it takes years to remove some functionality from the part of Facebook’s code that sells adverts. Facebook has 45,000 employees to throw at the problem. I can only conclude that the company doesn’t care. And that the regulator didn’t make it care.
I could pick on many other technology companies that have demonstrated values misaligned with the public good. Take Google’s YouTube, for instance. In 2019, Google was fined $170 million by the US Federal Trade Commission (FTC) and New York’s attorney-general for violating children’s privacy on YouTube. The Children’s Online Privacy Protection Act (COPPA) of 1998 protects children under the age of 13, meaning parental consent is required before a company can collect any information about a child.
Google knowingly violated COPPA by collecting information about young viewers of YouTube. There are over 5 million subscribers to its ‘Kids Channel’, most of whom, it seems fair to guess, are children. And many of the 18.9 million subscribers to its Peppa Pig channel are also probably children. But Google collects information about these subscribers to keep them engaged on YouTube for longer, and to sell adverts.
Google boasted to toy companies such as Mattel and Hasbro that ‘YouTube was unanimously voted as the favorite website for kids 2-12’, and that ‘93% of tweens visit YouTube to watch videos’. Google even told some advertisers that they did not have to comply with COPPA because ‘YouTube did not have viewers under 13’. YouTube’s terms of service do indeed require you to be over 12 years old to use the service. But anyone with kids knows what a lie it is for Google to claim that YouTube does not have child viewers.
The $170-million fine was the largest fine the FTC has so far levelled against Google. It is, however, only a fraction of the $5-billion fine the FTC imposed on Facebook earlier in 2019, in response to the privacy violations around Cambridge Analytica. This matched the $5-billion fine that the EU imposed on Google for antitrust violations connected to its Android software.
The case the FTC brought against Google is not the end of the YouTube matter. A new lawsuit was filed in a UK court in September 2020, claiming that YouTube knowingly violated the United Kingdom’s child privacy laws; it is seeking damages of over $3 billion. You have to wonder how big the fines need to be for Big Tech to care.
CORPORATE VALUES
Technology companies often have rather ‘wacky’ values. Some of this is marketing. But it also tells us something about their goals, and how they intend to pursue those goals.
Paul Buchheit – lead developer of Gmail and Google employee number 23 – came up with Google’s early motto, ‘Don’t be evil’. In a 2007 interview, he suggested, with not a hint of irony, that it’s ‘a bit of a jab at a lot of the other companies, especially our competitors, who at the time, in our opinion, were kind of exploiting the users to some extent’. He also claimed that he ‘wanted something that, once you put it in there, would be hard to take out’.7
At least this part of his prediction has come true: ‘Don’t be evil’ is still in Google’s code of conduct. But only just. In 2018, ‘Don’t be evil’ was relegated from the start to the final sentence on the 17th and last page of Google’s rather long and rambling code of conduct.
Facebook’s motto was perhaps even more troubling: ‘Move fast and break things’. Fortunately, Facebook realised that breaking too much in society could be problematic, and so in 2014 the motto was changed to ‘Move fast with stable infrastructure’. Not so catchy, but it does suggest a little more responsibility. This is to be welcomed as, for most of its corporate life, moving fast and breaking things seemed an excellent summary of Facebook’s troubled operations.
Amazon’s mission slogan is also revealing: ‘We strive to offer our customers the lowest possible prices, the best available selection, and the utmost convenience. To be Earth’s most customer-centric company, where customers can find and discover anything they might want to buy online.’ Amazon is clearly laser-focused on its customers – and on no one else. It’s not surprising, then, that news stories regularly reveal how workers in its distribution centres are being poorly treated. Or how suppliers are being squeezed by the tech giant.
Microsoft’s mission is ‘to empower every person and organization on the planet to achieve more’. But GoDaddy, the domain name registrar and web-hosting site, is even more direct: ‘We are here to help our customers kick ass.’ Clearly, value alignment in Big Tech is a problem we need to address.
GOOGLE’S PRINCIPLES
Following the controversy around Project Maven, Google drew up seven principles that would guide its use of AI.8 These principles were made public in June 2018.
We believe that AI should:
1. Be socially beneficial.
2. Avoid creating or reinforcing unfair bias.
3. Be built and tested for safety.
4. Be accountable to people.
5. Incorporate privacy design principles.
6. Uphold high standards of scientific excellence.
7. Be made available for uses that accord with these principles.
In addition to the above principles, Google also promised not to ‘design or deploy AI in the following application areas’:
1. Technologies that cause or are likely to cause overall harm . . .
2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
3. Technologies that gather or use information for surveillance violating internationally accepted norms.
4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.
The last promise seems entirely unnecessary. Contravening international law is already prohibited. By law. Indeed, the fact that many of these ethical principles needed to be spelled out is worrying. Creating unfair bias. Building technology that is unsafe, or lacking in accountability. Causing harm to your customers. Damaging human rights. No business should be doing any of this in the first place.
Even if Google’s AI principles sound good on paper, a couple of significant problems remain. In particular, who is going to police Google? And how can we be sure the principles are actually followed? Nine months after the AI principles were first published, in March 2019, Google established the Advanced Technology External Advisory Council (ATEAC) to consider the complex challenges arising out of its AI principles. The council had eight members. It was slated to meet four times each year to consider ethical concerns around issues like facial recognition, the fairness of machine-learning algorithms, and the use of AI in military applications.
There was, however, an immediate outcry about this AI ethics board, both from within and outside Google. One of the eight council members was Kay Coles James, president of the Heritage Foundation, a conservative think-tank with close ties to the administration of then president Donald Trump. Thousands of Google employees signed a petition calling for her removal over what they described as ‘anti-trans, anti-LGBTQ and anti-immigrant’ comments she had made.
One board member, Alessandro Acquisti, a professor of Information Technology and Public Policy at Carnegie Mellon University, quickly resigned, tweeting: ‘While I’m devoted to research grappling with key ethical issues of fairness, rights and inclusion in AI, I don’t believe this is the right forum for me to engage in this important work.’
Another board member tweeted: ‘Believe it or not, I know worse about one of the other people.’ This was hardly a glowing endorsement of Kay Coles James. Indeed, it was more of an indictment of another (unnamed) member of the board.
One week after announcing the external ethics board, and after a second board member had resigned, Google accepted the inevitable. It closed the board down and promised to think again about how it policed its AI principles. At the time of writing, in February 2022, Google has yet to announce a replacement.
IBM’S THINKING
Around the same time that Google was grappling with its AI principles, IBM offered up its own guidelines for the responsible use of AI. It proposed these principles both for itself and as a roadmap by which the rest of industry could address concerns around AI and ethics.
IBM’s Principles for Trust and Transparency
1. The purpose of AI is to augment human intelligence.
2. Data and insights belong to their creator.
3. New technology, including AI systems, must be transparent and explainable.9
Unfortunately, all three principles seem poorly thought out. It’s as though IBM had a meeting to decide on some AI principles, but the meeting wasn’t scheduled for long enough to do a proper job. Let me go through each principle in turn.
Yes, AI can ‘augment’ human intelligence. AI can help us be better chess players, radiologists or composers. But augmenting human intelligence has never been AI’s sole purpose. There are many places where we actually want artificial intelligence to replace human intelligence entirely. For example, we don’t want humans doing a dangerous job like clearing a minefield, when we can get robots to do it instead. Less dramatically, we don’t want human intelligence doing a dull job like picking items in a warehouse, when we can get robot intelligence to do it for us.
There are also places where we want artificial intelligence not to augment human intelligence but to work in ways entirely unlike it. The best way to surpass human intelligence might not be to try to extend what we can do, but to go about things in completely new ways. Aeroplanes don’t augment the natural flight of birds. They go about heavier-than-air motion in an entirely different way.
We’ve built many tools and technologies that don’t ‘augment’ us. X-ray machines don’t augment our human senses. They provide an entirely new way for us to view the world. Similarly, AI won’t always simply extend human intelligence. In many settings, it will take us to totally new places. It is misguided, then, to say that AI is only ever going to ‘augment’ us, or that AI is never going to take over some human activities completely. At best, it is naive; at worst, it is disingenuous. It comes across to me as a clumsy attempt to distract attention from fears about jobs being replaced.
Asserting that data and insights belong to the creators – IBM’s second ethical principle – may seem like a breath of fresh air in an industry notorious for stealing data from people and violating their privacy. However, international laws surrounding intellectual property already govern who owns such things. We don’t need any new AI principles to assert these rights. Again, this seems a somewhat clumsy attempt to distract from the industry’s failure to respect such data rights previously. Indeed, claiming that ‘data and insights belong to their creator’ is itself an ethical and legal minefield. What happens when that creator is an AI? IBM, do you really want to open that Pandora’s box?
Finally, IBM’s third ethical principle is that AI systems must be transparent and explainable. While it’s often a good idea for AI systems to be transparent and explainable, there are many settings where this simply might not be possible. Let’s not forget that the alternative – human decision-making – is often not very transparent. Transparency can even be a bad thing: it may give bad actors the information they need to game or attack a system.
As for explanation: we cannot get a computer-vision system today to explain how its algorithms recognise a stop sign. But then, even the best neurobiologist struggles to explain how the human eye and brain actually see a stop sign. It’s entirely possible that we’ll never be able to explain adequately either human or artificial sight.
Transparency and explainability help build trust, but, as I shall argue later, there are many other components of trust, such as fairness and robustness. Transparency and explainability are a means to an end – in this case, trust. But – and this is where IBM gets it wrong – they are not an end in themselves.
What good would there be in an AI system that transparently explained that it was not offering you the job because you were a woman? Unless you have money and lawyers, there would likely be nothing you could do about this harm. A much better third principle would have been that any new technology, AI included, should be designed and engineered to be worthy of your trust.
RETHINKING THE CORPORATION
Given all these concerns about technology companies, and their faltering steps to develop and deploy AI responsibly, it is worth considering how we might change things for the better. It is easy to forget that the corporation is a relatively new invention, and very much a product of the Industrial Revolution. Most publicly listed companies came into being only very recently. And many will be overtaken by technological change in the near future.
Fifty years ago, companies on the S&P 500 Index lasted around 60 years. Today, most companies on the index last only around two decades. It is predicted that three-quarters of the companies on the S&P 500 today will have disappeared in ten years’ time.
It is also worth remembering that the corporation is an entirely human-made institution. It is designed in large part to permit society to profit from technological change. Corporations provide the scale and coordination to build new technologies. Limited liability lets directors take risks with new technologies and new markets without incurring personal debt. And venture capital, along with bond and equity markets, gives companies access to funds that enable them to invest in new technologies and expand into new markets.
Twenty years ago, only two of the top five companies in the world were tech companies. The industrial heavyweight General Electric was the most valuable publicly listed company, followed by Cisco Systems, ExxonMobil, Pfizer and Microsoft. Today, all five of the most valuable listed companies are digital technology companies: Apple, Microsoft, Amazon, Alphabet and Facebook. Close behind them are Tencent and Alibaba.
Perhaps it is time to think, then, about how we might reinvent the idea of the corporation to better suit the ongoing digital revolution. How can we ensure that corporations are more aligned with the public good? How do we better share the spoils of innovation?
There was another invention of the Industrial Revolution that was designed to meet many of these ends. But, sadly, it seems to be dying away. This was the mutual society. Unfortunately, mutuals have some competitive disadvantages. For example, until recently, mutuals were unable to raise capital other than by retaining past profits. This has put them at a severe disadvantage to publicly listed companies.
A newer invention may be part of the answer: the B (or benefit) corporation. This is a purpose-driven for-profit business that creates benefit for all stakeholders, not just shareholders. It balances purpose and profit by considering the interests of workers, customers, suppliers, the community and the environment alongside those of its shareholders.
There are over 3,300 certified B corporations today across 150 industries in over 70 countries. Household names like Ben & Jerry’s (ice cream makers) and Patagonia (outdoor apparel) are certified B corporations. But as yet there is only one B corporation I’m aware of that is focused on AI.
Lemonade is a for-profit B corporation that is using artificial intelligence to disrupt the staid insurance industry. It offers homeowners, renters and pet owners insurance in the United States. It returns underwriting profits to non-profits selected by its community during its annual giveback. The company aims to do well in its business by doing good in its community.
Lemonade uses AI chatbots and machine learning to supercharge and automate much of the customer experience. It takes just 90 seconds to get insured, and a mere three minutes to pay many claims. If you believe the marketing, the company is much loved by its customers. Perhaps we need a lot more such B corporations developing AI responsibly and giving back to their communities?10
There was one other high-profile non-profit set up to develop AI responsibly. OpenAI was founded in San Francisco in late 2015 by Elon Musk and some other investors. They collectively pledged to invest a cool $1 billion in OpenAI to ensure that AI benefits all of humanity. In 2019, Microsoft invested an additional $1 billion. But at the same time, OpenAI stopped being a non-profit company.
In May 2020, OpenAI announced the world’s largest neural network, a 175-billion-parameter language neural network called GPT-3. Many AI experts were blown away by the neural network’s ability to generate stories, write poems, even generate simple computer code. Three months later, in September 2020, OpenAI announced it had exclusively licensed GPT-3 to Microsoft.
I’m still waiting for OpenAI to announce that its name has changed to ClosedAI. It’s hard to see OpenAI delivering on its goal of ensuring that AI benefits all of humanity. In the meantime, we’ll continue to worry about the ethics of the companies, as well as the people, building AI.
So far I have focused on the people and companies building AI, as their goals and desires are reflected in some of the AI that is being created today. I turn next to some of the pressing ethical issues surrounding AI’s development and deployment.