CHAPTER 56
AI and the Law: Challenges and Risks for the Financial Services Sector

By Electra Japonas1

1Founder and CEO, The Law Boutique

AI is not perfect, but it’s here to stay. From Alexa’s skills to Spotify’s music recommendations, AI is silently improving our daily lives. AI in Google’s search algorithm has made searching more efficient through predictive technology. Uber’s use of AI gives us accurate estimates of when to expect our driver or food delivery. AI has even transformed Amazon’s supply chain through the integration of data from maintenance, manufacturing and inventory tracking. It is undoubtedly leading the next wave of the tech revolution, and financial services firms are at the forefront.

At the time of writing, there are more than 2,000 AI startups across 70 countries, and together they have raised more than $27 billion.1 The financial services industry is leading the way in both the creation and the adoption of AI – from managing assets to safeguarding against theft, managing investments, customer engagement, fraud detection, regulatory compliance and stock prediction. In financial services, decisions about individuals’ creditworthiness have traditionally been made using a transparent process with defined rules and relatively limited data sets. This transparency, however, may not always be achievable when AI models draw on big data.2

Most AI companies go to great lengths to create objective algorithms. However, even the most mindful expert trainers are susceptible to cultural, geographical or educational influences which can severely impact the underlying assumptions that inform machine-learning code and skew results. This type of unconscious bias poses an important challenge, because bias does not need to be intentional to land financial institutions in hot water.3 Courts and regulators are focusing more on discriminatory effects of credit decisions and policies than on a financial institution’s rationale or motivation. This means that if bias is seen to result in an alleged discriminatory act, there is no need to show that discrimination was intentional to establish liability – only that the discrimination occurred.
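Because liability can turn on discriminatory effects rather than intent, institutions increasingly monitor the outcomes of their credit models directly. The sketch below is a minimal, hypothetical example of such an effects-based check in Python: it compares approval rates across groups and flags any ratio below the commonly cited four-fifths benchmark. The data, group names and threshold are illustrative assumptions, not a prescribed legal test.

```python
# A minimal, hypothetical sketch of an effects-based fairness check on
# credit decisions. The data, group labels and the 80% threshold (the
# "four-fifths rule", used here as a rough benchmark) are illustrative only.

def approval_rate(decisions):
    """Share of applications that were approved (decisions are booleans)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(decisions_by_group, reference_group):
    """Ratio of each group's approval rate to the reference group's rate."""
    ref_rate = approval_rate(decisions_by_group[reference_group])
    return {
        group: approval_rate(decisions) / ref_rate
        for group, decisions in decisions_by_group.items()
    }

# Hypothetical model outputs: True = loan approved, False = declined.
outcomes = {
    "group_a": [True, True, True, False, True, True, False, True],
    "group_b": [True, False, False, True, False, False, True, False],
}

ratios = disparate_impact_ratio(outcomes, reference_group="group_a")
for group, ratio in ratios.items():
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: {ratio:.2f} ({flag})")
```

The point of a check like this is that it looks only at outcomes – exactly the lens courts and regulators are increasingly applying – without asking whether anyone intended the disparity.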

As an experienced commercial lawyer specializing in deep tech and data protection, I look at this from both a legal and a business standpoint.

When I look at examples from non-financial sectors – for example, the historic (and current) use of AI in policing – I see clear lessons that financial institutions can learn in advance as they shape their own understanding, deployment and oversight of AI.

Sci-Fi — or Real Life?

In 2002, the Wilmington, Delaware Police Department made headlines when it thought it would be a good idea to employ the so-called “jump-out squads” technique.

A jump-out squad is described as follows: “…they descend on corners, burst out of marked and unmarked vehicles and make arrests in seconds. Up to twenty officers make up each squad. Police routinely line the people on the corners against a wall and pat them down for weapons…Then the police take the men’s names and addresses, snap their pictures and send them on their way.”4 Justified as a Terry stop (from a 1968 Supreme Court decision, Terry v. Ohio, that allows officers to stop, question and frisk people they think are suspicious or people in high-crime areas), most of the 200 people who had their pictures taken during this policy roll-out were young black males.

Fast forward a decade or so, add in a dose of facial recognition and associated AI technologies, and we are in the realm of films like Minority Report. Except that today, AI is routinely used by law enforcement agencies globally, in many ways and for various reasons, including “crime forecasting” or “predictive policing”. A good example of this is COMPAS,5 an algorithm widely used in the US to guide sentencing by predicting the likelihood of criminal reoffending. In probably the most notorious case of AI prejudice to date, in May 2016 the US news organization ProPublica reported that COMPAS consistently predicts that black defendants pose a higher risk of reoffending than they actually do – and the reverse for white defendants.6,7
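ProPublica’s analysis turned on error rates: among defendants who did not go on to reoffend, how often were they nonetheless scored as high risk, and did that rate differ by race? The sketch below is a simplified, hypothetical version of that kind of comparison; every record in it is invented purely to illustrate the calculation.

```python
# A hypothetical sketch of an error-rate comparison in the spirit of the
# ProPublica COMPAS analysis. All records below are invented.

records = [
    # (group, predicted_high_risk, actually_reoffended)
    ("group_a", True, False),
    ("group_a", True, True),
    ("group_a", False, False),
    ("group_a", True, False),
    ("group_b", False, False),
    ("group_b", True, True),
    ("group_b", False, False),
    ("group_b", False, True),
]

def false_positive_rate(rows):
    """Among people who did not reoffend, the share scored as high risk."""
    non_reoffenders = [r for r in rows if not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for group in ("group_a", "group_b"):
    rows = [r for r in records if r[0] == group]
    print(f"{group}: false positive rate = {false_positive_rate(rows):.2f}")
```

A model can be well calibrated overall and still produce very different false positive rates for different groups – which is why effect-based measures like this matter as much as headline accuracy.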

Racial Bias — the Tip of the Iceberg?

But it’s not just racial bias that’s problematic.

Biases find their way into the AI systems we design and use to make decisions within governments and businesses alike. An IBM study from 2018 confirms that AI systems are only as good as the data we put into them.19,20 More than 180 human biases have been defined and classified, which means that data selected, prioritized and converted into algorithms by humans can contain implicit racial, gender or ideological biases. The fact that AI marginalizes some groups is bad enough. But the real-world implications of biased AI go much further and are much more harmful. AI can increase the already troubling racial imbalances in the justice system. And applied to financial services, which are woven so thoroughly into the fabric of our lives, the impacts are just as serious and troubling – if not more so.

Legal and Ethical Issues for Your Watch List

One of the main legal concerns surrounding AI systems is that they generally handle large amounts of data, often personal or sensitive in nature. Data protection measures need to be put in place both by AI providers and by the companies that implement AI in their business, to safeguard any information inputted by users as well as the databases queried by the software. The General Data Protection Regulation (GDPR) has brought in stringent requirements to which companies must adhere:

  • Companies that implement AI into their businesses, and technology companies that build and sell AI tools, have a legal obligation to be compliant “by design”. This means that the very reliability of AI to provide accurate and meaningful results – particularly in cases where it is relied upon for legal outcomes – is imperative.
  • But as it is a developing technology, margins of error should be assumed and calculated, and manual monitoring of automated results must, by law, still take place (a minimal illustration of such monitoring appears after this list).
  • Protection should be put in place against any malicious cyberattacks which could, e.g. manipulate the information or reduce accuracy.
  • If an algorithm is programmed to make crucial decisions, such as approve someone’s eligibility for insurance cover or a loan, any ethical ramifications should be assessed. This could include the introduction of discrimination by the programmers or trainers of AI.
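As a minimal illustration of the monitoring point above, the hypothetical sketch below treats the model’s output as a recommendation and routes low-confidence or adverse decisions to a human reviewer. The confidence threshold, field names and routing rule are assumptions made for illustration, not anything the GDPR prescribes.

```python
# A minimal, hypothetical sketch of routing automated decisions to manual
# review. The 0.9 confidence threshold and the decision structure are
# illustrative assumptions, not a regulatory requirement.

from dataclasses import dataclass

@dataclass
class Decision:
    applicant_id: str
    approve: bool
    confidence: float  # the model's own confidence estimate, between 0 and 1

def route(decision: Decision, review_threshold: float = 0.9) -> str:
    """Send low-confidence or adverse automated decisions to a human reviewer."""
    if decision.confidence < review_threshold or not decision.approve:
        return "manual_review"
    return "auto_approve"

print(route(Decision("A-001", approve=True, confidence=0.97)))   # auto_approve
print(route(Decision("A-002", approve=False, confidence=0.95)))  # manual_review
print(route(Decision("A-003", approve=True, confidence=0.62)))   # manual_review
```

The design choice here is deliberately conservative: anything the model is unsure about, and anything adverse to the individual, gets a human in the loop.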

The GDPR – a Deeper Dive on Key Data Principles

The Purpose Limitation principle set out at Article 5(1)(b) of the GDPR says that data should be collected for specified, explicit and legitimate purposes and that it mustn’t be processed in a way that is incompatible with the purposes for which it was collected.21

In my view, there is an incompatibility here, as data that may initially be collected for one reason will almost undoubtedly be used for another when fed into algorithmic modelling. There is also the operational tendency to collect all data that can be obtained, which means that some data may be collected without a specific purpose. Repurposing of data – whereby data collected is used to improve the performance of a service and to generate new types of data – may result in a situation where, practically, it is not clear to the individual from the outset what purpose the data will be used for.

  1. Lawful Basis: All firms need to consider whether they are able to satisfy the lawful bases for processing personal data. For example, if data has been collected relying on a person’s consent, then fresh consent would be required if the same data set is to be used for a different purpose: the initial consent covers only the explicit purpose set out (and agreed to) at the original collection. A sketch of such a purpose check appears after this list.
  2. The Data Minimization principle: Personal data needs to be adequate, relevant and limited to what is necessary. This is a huge issue for AI as the principle of data minimization is completely opposed to the concept and use of big data.
  3. Individual Rights: Under the GDPR, data subjects have the right to have their data deleted or rectified. They can also object to the processing of their data. This is clearly extremely difficult, if not impossible, once the data has been fed into an algorithm and decisions made on the basis of that data.
  4. Data Protection Impact Assessments (DPIA): Companies must do a DPIA before they begin any type of processing that is “likely to result in a high risk” to the rights and freedoms of individuals. In particular, the GDPR says you must do a DPIA if you plan to use systematic and extensive profiling with significant effects; process special category or criminal offence data on a large scale; or systematically monitor publicly accessible places on a large scale. But with the self-evolving nature of AI and the often unpredictable output these AI algorithms can produce, how can companies ensure that they are complying with this obligation?22,23
  5. Automated Decision-making: The GDPR does not prevent automated decision-making or profiling, but it does give individuals a qualified right not to be subject to purely automated decision-making. Can AI decision-making ever be implemented in line with the GDPR requirement that automated decision-making always retain a human element?24,25,26,27
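As a minimal illustration of the lawful-basis point in item 1, the hypothetical sketch below records the purpose a person consented to and refuses processing for any other purpose. The record structure, purpose names and user IDs are invented for illustration; real consent management is considerably more involved.

```python
# A hypothetical sketch of checking recorded consent before reusing a data
# set for a new purpose. The record structure and purpose names are
# illustrative assumptions only.

consent_records = {
    "user_123": {"purposes": {"credit_application"}, "withdrawn": False},
    "user_456": {"purposes": {"credit_application", "model_improvement"},
                 "withdrawn": False},
}

def may_process(user_id: str, purpose: str) -> bool:
    """Allow processing only if consent covers this specific purpose."""
    record = consent_records.get(user_id)
    if record is None or record["withdrawn"]:
        return False
    return purpose in record["purposes"]

# Reusing application data to train a model needs its own recorded purpose.
print(may_process("user_123", "model_improvement"))  # False: fresh consent needed
print(may_process("user_456", "model_improvement"))  # True
```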

Board members and executives in financial services should carefully assess the AI solutions that are implemented as part of their business. The assessment should not rely only on what the law allows a company to do today; a holistic view should be taken instead. A commitment to data ethics as a core value is a wise move: although we may not yet have the established rules and legislation to guide our every decision, documenting the rationale behind decisions, and demonstrating that thought has gone into assessing the impact on the rights and freedoms of the people who will be served by these AI solutions, is imperative.

Self-Regulation – A Viable Strategy?

Governments across the world have been reluctant to regulate AI, but this may be changing. A 2018 report from the Computers, Privacy and Data Protection Conference suggested that the European Commission is “considering the possibility of legislating for Artificial Intelligence”. Karolina Mojzesowicz, deputy head of the Data Protection Unit at the European Commission, said that the Commission is “assessing whether national and EU frameworks are fit for purpose for the new challenges”. The Commission is exploring, for instance, whether to specify “how big a margin of error is acceptable in automated decisions and machine learning”.28,29

There is also a push from within the tech industry for more law. In July 2018, Microsoft’s president and chief legal officer, Brad Smith, asserted that there should be “public regulation” of facial recognition technology to address the risk of bias and discrimination in facial recognition systems, the risk of intrusions into privacy, and the potential for mass surveillance to impinge on democratic freedom.

Regulators around the world are grappling with how to address AI. What happens when an autonomous car and a bus collide? Or when smart contract systems incorrectly record a negotiated mortgage or personal loan agreement? Or when AI-enabled due diligence misses the point? The emerging consensus on approach involves a number of steps:

  1. Establishing governmental advisory centres of AI excellence.
  2. Adapting existing regulatory frameworks to cater for AI where possible.
  3. Introducing some system of registration for particular types of AI.

So whilst financial institutions grapple with the strategic and executional aspects of what, when and how to deploy powerful AI technologies, my advice is that the important “Why?” and “Should we?” questions – with their direct application to legal and reputational ramifications – must be at least as high up the board agenda, if not higher.

Notes

  1. https://www.venturescanner.com/blog/2019/artificial-intelligence-report-highlights-q2-2019.
  2. https://www.whitecase.com/publications/insight/ai-financial-services.
  3. Ibid.
  4. http://www.talkleft.com/story/2002/08/26/652/40763/civilliberties/Delaware-s-New-Jump-Squads.
  5. https://www.theatlantic.com/technology/archive/2018/01/equivant-compas-algorithm/550646/.
  6. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
  7. https://www.propublica.org/article/bias-in-criminal-risk-scores-is-mathematically-inevitable-researchers-say.
  8. https://www.theguardian.com/technology/2017/apr/13/ai-programs-exhibit-racist-and-sexist-biases-research-reveals.
  9. https://www.sciencemag.org/news/2017/04/even-artificial-intelligence-can-acquire-biases-against-race-and-gender.
  10. https://www.pnas.org/content/115/16/E3635.
  11. https://thesocietypages.org/socimages/2009/05/29/nikon-camera-says-asians-are-always-blinking/.
  12. https://www.theverge.com/2018/1/12/16882408/google-racist-gorillas-photo-recognition-algorithm-ai.
  13. https://www.wired.com/story/when-it-comes-to-gorillas-google-photos-remains-blind.
  14. https://law-campbell.libguides.com/lawandtech/AI.
  15. https://www.abc.net.au/news/2017-06-30/bilnd-recruitment-trial-to-improve-gender-equality-failing-study/8664888.
  16. https://www.personneltoday.com/hr/is-blind-recruitment-truly-gender-blind/.
  17. https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html.
  18. https://www.theguardian.com/news/2019/apr/11/china-hi-tech-war-on-muslim-minority-xinjiang-uighurs-surveillance-face-recognition.
  19. www.digitaljournal.com/tech-and-science/technology/a-i-systems-are-only-as-good-as-the-data-we-put-into-them/article/531246.
  20. https://www.forbes.com/sites/bernardmarr/2019/01/29/3-steps-to-tackle-the-problem-of-bias-in-artificial-intelligence/#3a109d627a12.
  21. siarticles.com/bundles/Article/pre/pdf/84451.pdf.
  22. https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/data-protection-impact-assessments-dpias/when-do-we-need-to-do-a-dpia/; https://www.itgovernance.co.uk/blog/gdpr-six-key-stages-of-the-data-protection-impact-assessment-dpia.
  23. https://www.dataprotection.ie/en/organisations/know-your-obligations/data-protection-impact-assessments.
  24. https://ec.europa.eu/info/law/law-topic/data-protection/reform/rights-citizens/my-rights/can-i-be-subject-automated-individual-decision-making-including-profiling_en.
  25. https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/individual-rights/rights-related-to-automated-decision-making-including-profiling/.
  26. https://www.itgovernance.co.uk/blog/gdpr-automated-decision-making-and-profiling-what-are-the-requirements.
  27. https://privacylawblog.fieldfisher.com/2017/let-s-sort-out-this-profiling-and-consent-debate-once-and-for-all.
  28. https://www.gigacycle.co.uk/news/eu-considers-the-possibility-of-legislating-for-artificial-intelligence/.
  29. https://www.forbes.com/sites/washingtonbytes/2019/02/08/the-eu-should-not-regulate-artificial-intelligence-as-a-separate-technology/#4f0c147c52c9.