By Reema Patel
Head of Public Engagement, Ada Lovelace Institute
Trust, many have argued, is the “secret sauce” that underpins finance and banking. The 2008 financial crisis was among the many incidents that illustrated how central credibility and trust are to the smooth and effective operation of the financial services industry. We should not lose sight of the fact that finance, and money itself, can be understood as a “social technology”: trust is central to its ability to serve its proper function. Famously, however, Onora O’Neill has critiqued a focus on trust alone, arguing that it is trustworthiness we should attend to. She says:
We should aim for more trust in trustworthy things but not in untrustworthy things…intelligently placed and intelligently refused trust should be the proper aim for society.
—Onora O’Neill
This argument has often been interpreted as a call for technology that is trustworthy. But in O’Neill’s famous talk, “What we don’t understand about trust,” she makes clear that trustworthiness is not a property of the “thing” – the technology – but of the people and organizations developing it. She says:
Trust is well placed if it’s directed to matters in which the other party is reliable, competent and honest – so, trustworthy. Can you trust the corner shop to sell fresh bread? Can you trust your postman to deliver letters?…Trust is badly placed if it’s directed to matters in which others are dishonest or incompetent or unreliable.
—Onora O’Neill
For our purposes, we need to ask whether we can trust the designer, the developer and the public servant to design, use and deploy the technology (and in particular, FinTech) appropriately – in an honest, competent and reliable fashion. Industry clearly has an essential role to play in setting high standards, building trustworthiness and shaping AI for good.
Although trust and trustworthiness matter, arguably even this is not enough to secure artificial intelligence (AI) that works in the public interest. Relying on citizens to determine trustworthiness is particularly problematic given that there is often a lack of transparency about benefits and harms, as well as a lack of choice, when it comes to the use of data and AI. You might not think Facebook is trustworthy, but you might still feel that giving it up would limit your life or cause you some material harm. And when it comes to interacting with public services, or accessing financial services such as mortgages, banking or low-cost credit, there may be even less choice.
Increasingly, the notion that society must be “in the loop” through increased public involvement in the governance of the technology is starting to take hold. Understanding how best those standards can be set requires drawing upon the notion of legitimacy and the creation of a social licence. We understand legitimacy as the broad base of public support that allows companies, designers, public servants and others to deliver beneficial outcomes for people and for wider society through the use of AI. Legitimacy is about ensuring that there is a reasonable settlement between those who use AI and data, and those who are directly affected by that use.
For legitimacy to be secured, we need ethical innovation. Ethics and innovation are often seen as uneasy bedfellows, and yet there is an enormous gap in enabling what is often described as ethical, “mission-led” innovation – innovation that is purposeful, in service of humanity, and that supports us collectively to make the most of our resources and capacities as part of wider society. As I have already flagged, finance raises particular challenges for those seeking to design a trustworthy and more legitimate approach to the use of AI. We are dealing with a complex system that is evolving at pace – one that is emergent, interactive, unpredictable and non-linear, and that poses profound systemic risk to people and society if issues are not spotted early. However, retreat from technological innovation is no answer either: a “techlash” puts innovation that works for people and society at risk, preventing us from realizing the benefits of financial innovation.
There are, however, numerous barriers to change that prevent us from realizing these benefits.
In 2018, the Ada Lovelace Institute worked in partnership with the Finance Innovation Lab, convening technologists and startup entrepreneurs to better understand how to support ethical innovation in the financial services sector. Our participants told us that, in developing and designing AI and tech-based interventions, having a clear ethical purpose and mission had helped to motivate and inspire support for their work. They felt this particularly powerfully in 2018, when a tangible “cultural moment” meant that ethics was being taken more seriously as a standard part of the tech development process. Participants saw a parallel with a similar moment for their own industry – the 2007/08 Global Financial Crisis – which also underlined the urgency of change in our financial system.
We also heard from these finance entrepreneurs about the benefits that recent legislation and regulation – such as PSD2, open banking and GDPR – had brought by providing certainty and setting standards in the market that they could comply with. Numerous participants spoke particularly favourably of the opportunity to participate in sandboxes run by the Financial Conduct Authority (FCA) – safe spaces that protect groups and individuals while balancing that protection with the need to support a thriving innovation sector. They similarly recognized that the pace of technological development – in machine learning, for example, and in frameworks such as Angular – had “made their businesses fly”. And whilst changing behaviour is thought to be incredibly difficult, there was also recognition that FinTechs might be best placed to have the tools to uncover how to influence it.
During the course of our workshop, finance entrepreneurs identified four practical interventions through which more trustworthy and ethical innovation in the financial services sector might be enabled. These are as follows.
First, there is a need to build capacity within design and development, so as to enable greater reflexivity and responsiveness to people – who, it was recognized, are likely to be far more diverse than those developing the technology. It is important to consider agile and flexible approaches, techniques such as UX journey design, open innovation and open co-design, and the use of public deliberation when designing AI and technology in the financial services industry. Initiatives such as data trusts, kitemarking, ethical scoring and ranking systems, and the application of a code of ethical standards might be other practical ways to strengthen FinTech’s alignment with public values – a simple scoring sketch is given below.
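To make the idea of an ethical scoring and ranking system concrete, here is a minimal sketch in Python. The criteria, weights and product names are all hypothetical – invented for illustration, not an established kitemark or industry rubric.

```python
from dataclasses import dataclass
from typing import Dict, List

# Hypothetical rubric: these criteria and weights are invented for
# illustration and are not an established kitemark or industry standard.
CRITERIA_WEIGHTS: Dict[str, float] = {
    "transparency": 0.30,       # are decision criteria published?
    "data_minimisation": 0.25,  # is only necessary data collected?
    "redress": 0.25,            # can affected users contest decisions?
    "inclusion": 0.20,          # are underserved groups considered in design?
}

@dataclass
class ProductAssessment:
    name: str
    scores: Dict[str, float]  # criterion -> score in [0, 1]

def ethical_score(assessment: ProductAssessment) -> float:
    """Weighted average of per-criterion scores, in [0, 1]."""
    return sum(
        weight * assessment.scores.get(criterion, 0.0)
        for criterion, weight in CRITERIA_WEIGHTS.items()
    )

def rank_products(assessments: List[ProductAssessment]) -> List[ProductAssessment]:
    """Rank products from most to least aligned with the rubric."""
    return sorted(assessments, key=ethical_score, reverse=True)

if __name__ == "__main__":
    products = [
        ProductAssessment("LendFast", {"transparency": 0.4, "data_minimisation": 0.6,
                                       "redress": 0.5, "inclusion": 0.3}),
        ProductAssessment("FairCredit", {"transparency": 0.9, "data_minimisation": 0.8,
                                         "redress": 0.7, "inclusion": 0.8}),
    ]
    for product in rank_products(products):
        print(f"{product.name}: {ethical_score(product):.2f}")
```

The substance of any such scheme lies, of course, in who sets the criteria and weights, and how scores are evidenced – which is precisely where public deliberation and open co-design come in.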
Second, media backlash and “techlash” have served to undermine ethical innovation. This often stems from a lack of understanding of finance itself, as well as a widespread public reluctance to engage in discussion about related matters: there is generally a sense that it is taboo to talk about money, or even about how money is made from the use of data and AI, which is divorced from the realities of everyday practice. Changing public narratives in this space, in the deepest sense, is not simply a matter of changing norms about the technology, but about finance and money itself.
Third, a lack of transparency about, for example, whom an organization is lending to, who is being rejected and the criteria being applied risks obscuring the fact that technology might benefit some groups at the expense of others: there is almost invariably a distributional impact. It was clear that there is a debate to be had about the winners and losers from these technologies, but also a lack of willingness to have a conversation about what is acceptable and what is not. In addition, it is essential that those designing technologies are themselves diverse, and thus better able to understand and respond to the lived experience of underserved groups and minorities. Given the complexity of the system – driven by the pace, uncertainty and unpredictability of the changes under way – interventions designed through an exclusively formulaic process will be of limited effect. A particularly strong common theme was the need for shared resources, capacity building and collaboration across industry, regulation and research, so that developers can better understand what ethical good practice looks like and embed it in a more anticipatory manner, upstream of any public “techlash”. A simple illustration of measuring distributional impact follows.
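As a minimal sketch of how distributional impact might be surfaced, the following Python snippet computes approval rates by applicant group and a simple adverse impact ratio. The groups and outcomes are invented for the example – not data from any real lender – and a ratio well below 1.0 would merely flag a skew worth probing, not prove unfairness.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

# Illustrative loan decisions as (applicant_group, approved) pairs. The
# groups and outcomes are invented for the example, not real lending data.
decisions: List[Tuple[str, bool]] = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

def approval_rates(decisions: List[Tuple[str, bool]]) -> Dict[str, float]:
    """Approval rate per group: approvals / total applications."""
    totals: Dict[str, int] = defaultdict(int)
    approved: Dict[str, int] = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        approved[group] += int(outcome)
    return {group: approved[group] / totals[group] for group in totals}

rates = approval_rates(decisions)
print(rates)  # here: {'group_a': 0.75, 'group_b': 0.25}

# One common summary statistic: the ratio of the lowest to the highest
# approval rate (an "adverse impact ratio"). Values well below 1.0 flag a
# distributional skew that deserves scrutiny and explanation.
impact_ratio = min(rates.values()) / max(rates.values())
print(f"adverse impact ratio: {impact_ratio:.2f}")  # here: 0.33
```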
Last, but absolutely not least, those we spoke to at the front line of designing these technologies highlighted that more is required to support FinTechs and others in understanding, practically, how best to meet ethical standards – beyond blind adherence to “tick box” processes. This support might well be provided by independent bodies or by a peer support network distinct from the regulator. There was also a recognition that regulators are often under-resourced to deliver their roles effectively, with calls for increased resourcing for the FCA to enable it to operate effectively.