By Electra Japonas1
1Founder and CEO, The Law Boutique
AI is not perfect, but it’s here to stay. From Alexa’s skills to Spotify’s music recommendations, AI is silently improving our daily lives. The use of AI in Google’s search algorithm has made searches more efficient through predictive technology. Uber’s use of AI gives us accurate estimates of when we can expect our driver or food delivery. AI has even transformed Amazon’s supply chain through the integration of data from maintenance, manufacturing and inventory tracking. It is undoubtedly leading the next wave of the tech revolution, and financial services firms are at the forefront.
At the time of writing, there are more than 2,000 AI startups across 70 countries, which between them have raised more than $27 billion.1 The financial services industry is leading the way in both the creation and the adoption of AI – from asset management, investment management and customer engagement to fraud detection, theft prevention, regulatory compliance and stock prediction. In financial services, decisions about individuals’ creditworthiness have traditionally been made using a transparent process with defined rules and relatively limited data sets. That transparency, however, may not always be achievable when decisions are driven by AI and big data.2
Most AI companies go to great lengths to create objective algorithms. However, even the most mindful expert trainers are susceptible to cultural, geographical or educational influences which can severely impact the underlying assumptions that inform machine-learning code and skew results. This type of unconscious bias poses an important challenge, because bias does not need to be intentional to land financial institutions in hot water.3 Courts and regulators are focusing more on discriminatory effects of credit decisions and policies than on a financial institution’s rationale or motivation. This means that if bias is seen to result in an alleged discriminatory act, there is no need to show that discrimination was intentional to establish liability – only that the discrimination occurred.
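To make the “effects, not intent” point concrete, here is a minimal sketch in Python of how a disparate-impact check might be run over a credit model’s decisions. The column names, groups and the 0.8 rule-of-thumb threshold are illustrative assumptions, not a description of any particular institution’s method; the point is simply that the test looks only at outcomes by group, exactly as an effects-based standard of liability would.

import pandas as pd

def disparate_impact(decisions: pd.DataFrame,
                     group_col: str = "group",
                     outcome_col: str = "approved",
                     reference: str = "reference") -> pd.Series:
    # Approval rate of each group divided by the reference group's rate.
    # A ratio well below 1.0 (0.8 is a common rule of thumb) signals a
    # discriminatory *effect*, regardless of anyone's intent.
    rates = decisions.groupby(group_col)[outcome_col].mean()
    return rates / rates[reference]

# Hypothetical decisions produced by a credit model.
df = pd.DataFrame({
    "group":    ["reference"] * 5 + ["protected"] * 5,
    "approved": [1, 1, 1, 0, 1,     1, 0, 0, 0, 1],
})

print(disparate_impact(df))
# protected    0.50  <- well below 0.8, so the effect warrants investigation
# reference    1.00

Monitoring an outcome metric of this kind alongside the model, rather than pointing to the absence of intent, is the sort of oversight an effects-based standard implies.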
As an experienced commercial and data protection lawyer with expertise in deep tech, I look at this from both a legal and a business standpoint. I see:
And when I look at examples from non-financial sectors, such as the historic (and current) use of AI in policing, I see clear lessons that financial institutions can learn in advance for their own understanding, deployment and oversight of AI.
In 2002, the Wilmington, Delaware, Police Department made headlines when it thought it would be a good idea to employ the so-called “jump-out squads” technique.
A jump-out squad is described as follows: “…they descend on corners, burst out of marked and unmarked vehicles and make arrests in seconds. Up to twenty officers make up each squad. Police routinely line the people on the corners against a wall and pat them down for weapons…Then the police take the men’s names and addresses, snap their pictures and send them on their way.”4 The practice was justified as a Terry stop (from the 1968 Supreme Court decision Terry v. Ohio, which allows officers to stop, question and frisk people they consider suspicious or people in high-crime areas), and most of the 200 people who had their pictures taken during this roll-out were young black males.
Fast forward a decade or so, add a dose of facial recognition and associated AI technologies, and we are in the realm of films like Minority Report. Except that today, AI is routinely used by law enforcement agencies globally, in many ways and for various reasons, including “crime forecasting” or “predictive policing”. A good example of this is COMPAS,5 an algorithm widely used in the US to guide sentencing by predicting the likelihood of criminal reoffending. In probably the most notorious case of AI prejudice to date, in May 2016 the US news organization ProPublica reported that COMPAS consistently predicts that black defendants pose a higher risk of reoffending than they actually do – and the reverse for white defendants.6,7
But it’s not just racial bias that’s problematic.
Biases find their way into the AI systems that we design and use to make decisions within governments and businesses alike. An IBM study from 2018 confirms that AI systems are only as good as the data we put into them.19,20 More than 180 human biases have been defined and classified, which means that the data selected, prioritized and converted into algorithms by humans can contain implicit racial, gender or ideological biases. The fact that AI marginalizes some groups is bad enough. But the real-world implications of biased AI go much further and are much more harmful. AI can increase the already troubling racial imbalances in the justice system. And applied to financial services, woven so thoroughly into the fabric of our lives, the impacts are just as serious and troubling – if not more so.
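One common way such implicit bias enters a model, even when the protected attribute is deliberately excluded, is through proxy variables. The synthetic Python sketch below uses entirely made-up features and numbers, purely for illustration: a postcode-like feature reintroduces a protected attribute the model was never given, so the historic bias in the labels survives into the model’s decisions.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical protected attribute, deliberately excluded from the model,
# and a postcode-like feature that happens to correlate strongly with it.
protected = rng.integers(0, 2, n)
postcode  = protected + rng.normal(0, 0.3, n)
income    = rng.normal(50, 10, n)

# Historic labels that were themselves biased against group 1.
label = ((income > 48) & ~((protected == 1) & (rng.random(n) < 0.4))).astype(int)

# The model never sees 'protected', but the proxy carries it in anyway.
X = np.column_stack([income, postcode])
pred = LogisticRegression().fit(X, label).predict(X)

print("approval rate, group 0:", pred[protected == 0].mean())
print("approval rate, group 1:", pred[protected == 1].mean())
# The gap persists even though the protected attribute was never a feature.

Dropping the sensitive column, in other words, is not the same as removing the bias; it only hides it from view.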
One of the main legal concerns surrounding AI systems is that they generally handle large amounts of data, which are often personal or sensitive in nature. Data protection measures need to be put in place by AI providers and by the companies that implement AI in their business, to safeguard both the information input by users and the databases queried by the software. The General Data Protection Regulation (GDPR) has brought in stringent requirements to which companies must adhere:
The Purpose Limitation principle, set out in Article 5(1)(b) of the GDPR, says that data should be collected for specified, explicit and legitimate purposes and must not be processed in a way that is incompatible with the purposes for which it was collected.21
In my view, there is an incompatibility here, as data that is initially collected for one reason will almost inevitably be used for another when fed into algorithmic models. There is also the operational tendency to collect all the data that can be obtained, which means that some data may be collected for no specific purpose at all. Repurposing of data – whereby data already collected is used to improve the performance of a service and to generate new types of data – may result in a situation where, in practice, it is not clear to the individual from the outset what purposes their data will be used for.
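One practical way to make the purpose limitation principle operational is to carry the collection purpose around with the data and refuse processing for anything else. The Python sketch below is a simplified illustration of that idea, with invented class and purpose names; a real implementation would sit inside an organization’s data governance tooling and would need to support the GDPR’s compatibility assessment rather than a blunt yes/no.

from dataclasses import dataclass, field

class PurposeLimitationError(Exception):
    pass

@dataclass
class PersonalRecord:
    # A record that carries the purposes it was collected for (Article 5(1)(b)).
    data: dict
    collected_for: set = field(default_factory=set)

def process(record: PersonalRecord, purpose: str) -> dict:
    # Refuse to process the record for a purpose it was not collected for.
    if purpose not in record.collected_for:
        raise PurposeLimitationError(
            f"collected for {record.collected_for}, not for '{purpose}'"
        )
    return record.data  # hand off to the actual processing pipeline

record = PersonalRecord(data={"salary": 42_000}, collected_for={"credit_scoring"})
process(record, "credit_scoring")           # allowed: matches the collection purpose

try:
    process(record, "new_marketing_model")  # repurposing is blocked
except PurposeLimitationError as err:
    print(err)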
Board members and executives in financial services should carefully assess the AI solutions implemented as part of their business. The assessment should not rely only on what the law allows a company to do today; a holistic view should be taken instead. A commitment to data ethics as a core value is a wise move: although we may not yet have established rules and legislation to guide our every decision, it is imperative to document the rationale behind those decisions and to demonstrate that thought has gone into assessing the impact on the rights and freedoms of the people these AI solutions will serve.
Governments across the world have been reluctant to regulate AI, but this may be changing. A 2018 report from the Computers, Privacy and Data Protection Conference suggested that the European Commission is “considering the possibility of legislating for Artificial Intelligence”. Karolina Mojzesowicz, deputy head of the Data Protection Unit at the European Commission, said that the Commission is “assessing whether national and EU frameworks are fit for purpose for the new challenges”. The Commission is exploring, for instance, whether to specify “how big a margin of error is acceptable in automated decisions and machine learning”.28,29
There is also a push within the tech industry for more regulation. In July 2018, Microsoft’s president and chief legal officer, Brad Smith, asserted that there should be “public regulation” of facial recognition technology to address the risk of bias and discrimination in the technology, the risk of intrusions into privacy, and the potential for mass surveillance to impinge on democratic freedom.
Regulators around the world are grappling with how to address AI. What happens when an autonomous car and a bus collide? Or when a smart contract system incorrectly records a negotiated mortgage or personal loan agreement? Or when AI-enabled due diligence misses the point? The emerging consensus on approach involves a number of steps:
So whilst financial institutions grapple with the strategic and executional aspects of what, when and how to deploy powerful AI technologies, my advice is that the important “Why?” and “Should we?” questions – with their direct legal and reputational ramifications – must be at least as high up the board agenda, if not higher.