CHAPTER 58
Regulation of AI within the Financial Services Sector

By Tim Molton

Associate, MJM Limited

Governments are invariably reactive when legislating for technological advances, rather than anticipating how emerging technologies might develop and consequently be adopted across different industries. This is true even of relatively modest developments, but it is particularly true of those which are highly complex and have the potential to disrupt global markets, such as artificial intelligence (AI).

The same could be said of financial services professionals, many of whom take a laissez-faire approach to technology, making little effort to learn about or engage in discussions surrounding emerging technologies. There is an expectation (albeit predominantly among the pre-millennial generation) that new technologies will not ultimately be adopted en masse within the financial services sector, but will merely be used by a small number of businesses with limited efficacy before being superseded or becoming redundant.

This is demonstrated by the banks’ ongoing struggle with their legacy systems and the view that wholesale (and extremely costly) changes to accommodate innovative technology are not a palatable solution – hence the rise of the challenger banks and other FinTechs. Indeed, the emergence of AI in banking is due largely to innovative FinTech startups rather than the established institutions, although many banks have partnered with AI companies to offer improved services to their customers. This rise in the use of AI has raised many questions about how (and indeed whether) it ought to be regulated.

The Need for Regulation

There are a number of reasons why the regulation of AI in financial services is seen as necessary. For instance, developing AI requires vast volumes of data, all of which could be vulnerable to malicious actors seeking to misuse them. AI is also able to analyse customer data almost instantly and can recommend investments and other financial products at a fraction of the cost of a human. This potential for industry-wide redundancies is both an economic and a social concern for governments. Barack Obama, in his 2017 farewell address, stated that “The next wave of economic dislocation won’t come from overseas…[but] from the relentless pace of automation that makes many good, middle-class jobs obsolete”. Financial advisors and administrative intermediaries are likely to be anxious at the thought of advanced AI being used in their field of work, and with good reason.

In an era of mobile payments, customers are demanding more from financial institutions. Businesses such as Moneybox and Betterment have recognized this demand and have incorporated AI into their services, reducing both time and monetary costs. Since the 2008 global financial crisis, regulators have focused on putting measures in place to ensure good corporate governance in traditional financial institutions, yet the FinTechs are taking advantage of open banking and emerging technologies to disrupt the financial markets free of many of these constraints, such as reporting obligations and licensing requirements.

Further, there is potential for major disruption and manipulation of the markets with the use of AI. Machine learning can facilitate predictive decision-making, for example, in relation to market patterns and behaviour. Such predictive analytics could result in a self-fulfilling prophecy, whereby the “wisdom of the crowd” is actually based on AI market predictions which directly influence investor behaviour. Algorithmic “robo” trading has already been blamed for market crashes, such as the flash crash of 6 May 2010, in which around USD 1 trillion was temporarily wiped off the value of US stocks in a matter of minutes. A lack of understanding of the role of sophisticated AI could clearly have dire consequences, and so while different regulators express differing views, there is a general appetite at least to monitor its development, with policies to stem from the findings of such oversight.
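To make the mechanics concrete, the toy Python sketch below shows a rule-based trading signal of the simplest kind: a moving-average crossover. The price series, thresholds and tolerance band are invented purely for illustration, and real systems are vastly more sophisticated, but the sketch shows how a small dip can mechanically trigger sell orders – and, when many algorithms follow correlated rules, a cascade of them.

```python
# A toy, hypothetical "robo" trading rule: a moving-average crossover.
# All prices and thresholds below are invented for illustration only.

def moving_average(prices: list[float], window: int) -> float:
    """Mean of the most recent `window` prices."""
    return sum(prices[-window:]) / window

def trading_signal(prices: list[float], short: int = 5, long: int = 20) -> str:
    """Buy when the short-term mean rises above the long-term mean,
    sell when it falls below; a 1% band reduces noise trading."""
    if len(prices) < long:
        return "HOLD"  # not enough history to form a prediction
    short_ma = moving_average(prices, short)
    long_ma = moving_average(prices, long)
    if short_ma > long_ma * 1.01:
        return "BUY"
    if short_ma < long_ma * 0.99:
        return "SELL"
    return "HOLD"

# When thousands of algorithms run near-identical rules, a small dip can
# trigger a wave of correlated SELL orders: the self-fulfilling dynamic
# described above.
prices = [100 + i * 0.1 for i in range(30)] + [102.5, 101.0, 99.0, 97.5]
print(trading_signal(prices))  # prints "SELL" once the dip crosses the band
```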

Common Technical Standards

Most commentators would agree that common technical standards need to be implemented if oversight is to be meaningful and effective. These standards can relate to a wide range of issues such as data privacy, security, product safety, accuracy and ethics (e.g. managing social biases), and they will be crucial to the success of AI in the long term. Common standards will help to facilitate integration of technologies and ensure transparency, security and consumer protection. Such technical and algorithmic consistency across offerings will make it easier for regulators to assess whether a company’s offering objectively meets the required standard. Common standards will also provide a benchmark for the legislature and the judiciary when determining how to deal with AI in a legal capacity.

Regulatory Measures

There have, at the time of writing, been a number of steps taken by governments to acknowledge the potential impact of AI and the need to (at least) monitor the progress of the technology. For instance, on 10 April 2018, the UK and 24 other EU Member States signed the Declaration of Cooperation on Artificial Intelligence, which was followed (albeit belatedly) on 8 April 2019 by the European Commission’s Ethics Guidelines for Trustworthy AI. The latter provides that AI should be lawful, ethical and robust, but what constitutes lawfulness will undoubtedly be subject to change in the short term.

The EU also saw the General Data Protection Regulation (GDPR) come into effect on 25 May 2018; it recognizes the use of automation and provides that individuals have the right not to be subject to a decision based solely on automated processing where that decision produces legal or similarly significant effects (Article 22). This clearly demonstrates the regulators’ desire to ensure that the rights and liberties of individuals are not usurped by emerging, innovative technology.
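By way of illustration only (and not as legal advice), the Python sketch below shows one way a lender might structure its decision flow so that adverse, legally significant outcomes are never based solely on automated processing: clearly positive cases may be automated with an audit trail, while adverse or borderline cases are escalated to a human underwriter. The names, scores and thresholds are entirely hypothetical.

```python
# An illustrative compliance pattern in the spirit of GDPR Article 22:
# no adverse, legally significant decision is taken solely by the model.
# All names, scores and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class LoanApplication:
    applicant_id: str
    amount: float
    model_score: float  # output of an upstream credit model, in [0, 1]

def decide(application: LoanApplication) -> str:
    """Route decisions: clearly positive cases may be automated (with an
    audit trail); anything adverse or borderline goes to a human, so the
    final decision is never based solely on automated processing."""
    if application.model_score >= 0.85:
        return "APPROVED (automated, logged for audit)"
    return "REFERRED for meaningful human review"

print(decide(LoanApplication("A-001", 10_000.0, 0.91)))  # APPROVED ...
print(decide(LoanApplication("A-002", 10_000.0, 0.40)))  # REFERRED ...
```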

In the UK, the House of Lords Select Committee on Artificial Intelligence published its report in April 2018, highlighting the long-term educational needs surrounding AI and addressing concerns surrounding the mass processing of customer data. Despite concluding that there was no obvious need for further legislation at that stage, the Committee was concerned that the data sets used to train AI systems are often poorly representative of the wider population, and so AI systems learning from those data could make unfair decisions which reflect the wider prejudices of society.1 Indeed, the perpetuation of bias is a well-publicized concern of professionals, consumers and politicians, and appropriate regulation to prevent it will be crucial to the future success of AI-driven financial services.
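One simple example of the kind of check such regulation might mandate is a comparison of a model’s approval rates across demographic groups (often called demographic parity). The sketch below uses fabricated records and an illustrative (not regulatory) tolerance threshold, and flags a model whose outcomes diverge sharply between groups for further review of its training data.

```python
# A minimal demographic-parity check on fabricated model outputs.
# Groups, records and the tolerance threshold are purely illustrative.

from collections import defaultdict

decisions = [  # (demographic group, did the model approve the application?)
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved  # True counts as 1

rates = {g: approvals[g] / totals[g] for g in totals}
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# A wide gap between groups flags the model for investigation: it may be
# reproducing bias present in unrepresentative training data.
gap = max(rates.values()) - min(rates.values())
if gap > 0.2:  # tolerance is illustrative, not a regulatory figure
    print(f"Approval-rate gap of {gap:.0%} exceeds tolerance; review the data.")
```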

Questions of Liability

A significant and complicated issue to consider when regulating AI is that of liability. There will be cases that come before the courts in due course which raise new questions about the liability for loss suffered by humans as a result (at least in part) of AI services or products. Where there is a clear causal link between the damage suffered and the AI service, questions about who is ultimately liable for the loss will need to be addressed.

For instance, if a human relies upon the investment advice of a robo-advisor and suffers significant losses as a result, could liability be attributed to the service provider, the developer or even the AI product itself (although we are a long way from attaching legal status to AI products)? This is something to which the legislature and the judiciary will have to turn their minds.

Future Regulation

As regulations are implemented and companies take comfort from the success of innovators, many of which will have tested the market by means of regulatory sandboxes, new businesses will emerge seeking to mirror and enhance the offering of the first wave. This will ultimately lead to increased competition and, in theory, a more cost-effective and efficient product/service offering to customers.

However, as AI becomes more advanced and more broadly adopted throughout the financial services sector, governments and regulators must be adequately prepared and well-resourced to ensure that the technology is used safely and in the interests of consumers, without stifling innovation. This means receiving intelligible and constructive input from industry professionals, technology experts, lawyers and consumers as to the potential uses and pitfalls of such technology.

Clearly, there are significant challenges in dealing with such complex and far-reaching technology as it becomes ubiquitous. Regulators must be willing and able to grapple with the technical, social, economic and legal consequences of AI use in the financial services sector, and ensure that the rights of individuals (including the right to privacy and the right to work) are not usurped as a consequence of global adoption.

Note

  1. The House of Lords Select Committee on Artificial Intelligence (2018) Report of Session 2017–19, AI in the UK: ready, willing and able? Paragraph 119.