By Shân M. Millie, Founder, Bright Blue Hare
Let’s say you’ve skipped to this section of the book because you’re in (re)insurance; maybe you’re an executive on the board, an independent non-executive/external director, or leading a critical function in the firm. Almost certainly you’ve grown up in insurance, and are technically adept in underwriting, finance or actuarial work. This is a business that (a) productizes data-driven insights, measurement, tracking and mitigation, and (b) specializes in imagining and quantifying risks that have not been invented yet. Technology is constantly on the board agenda in one way or another, so no one can say that your data and technology foundations are shaky. But technical literacy means something different in the 2020s, and as the insistence on individual accountability for leaders across financial services strengthens,1 this matters to individual leaders.
So to a long agenda of “everyday” issues – results; COR; analysts’ ratings; economic, political and social turbulence; investment returns and the rest – insurance executives must now add the requirement to develop, articulate and (increasingly) defend decision-making on that cluster of technologies labelled “artificial intelligence” and on algo-driven business. This chapter will not list use cases, or survey scenarios (covered thoroughly in this section). Instead, it tries to sit where you’re sitting and consider the questions to ask yourselves and the firm, to go beyond “response”, to leadership.
Insurance has been a primary funder of InsurTech from the outset, and an enthusiastic investor in and incubator of AI technologies, startups and scaleups. That’s not leadership, it’s a business strategy. And it’s not enough for insurance, mainly because of how important insurance is. Its role as a social protection mechanism cannot be overstated: if it didn’t already exist, a system to recover from the things we really cannot control – illness, accident, death, natural disasters – would have to be invented.
When you think about what insurance firms do at the most basic level – organize capital, resources and experts to be there for the customer on their worst day (or at least a very bad one) – it’s much easier to compare it to a health or emergency service than to, say, a wealth manager. Whether the customer is a shipping firm, a factory owner, or an individual wanting to protect their health and family, insurance participates in highly charged and critical moments of vulnerability. In health insurance, this can even mean life and death. The very importance of insurance means that different, arguably higher, standards apply, as these examples illustrate:
Civic society has decided to limit insurers’ use of powerful genetic testing technologies in life insurance. Moratoria on insurers using genetic information are in force in Australia, the UK and some European Union member states, provided the sum assured is below a certain threshold. Canada’s Genetic Non-Discrimination Act (2017) imposes a complete ban on underwriting based on any disclosed genetic test results.2
Super-accurate, personalized property risk ratings (and premia) developed by UK household insurers using proprietary data sets, sophisticated modelling and predictive analytics created a situation of effective exclusion: the people who needed the protection most were being priced out – or not priced at all. Society, in this case the UK government, decided this was not acceptable. The UK insurance industry created Flood Re, a scheme whereby every household policyholder in the UK pays a small extra premium each year towards a reinsurance pool, which then allows households deemed to be in areas of high flood risk to be offered insurance at normal rates.
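The pooling mechanics can be sketched in a few lines, with wholly hypothetical figures (the levy, premiums and book composition below are illustrative, not actual Flood Re parameters):

```python
# Illustrative sketch of a Flood Re-style levy pool.
# All figures are hypothetical, not actual Flood Re parameters.

def pool_subsidy(policies, levy):
    """Collect a flat levy from every household policy; the pooled fund
    subsidises the gap between risk-based and capped premiums for
    high-flood-risk homes."""
    fund = levy * len(policies)
    # Premium shortfall the pool must cover across the high-risk homes.
    shortfall = sum(p["risk_premium"] - p["capped_premium"]
                    for p in policies if p["high_risk"])
    return fund, shortfall

# Hypothetical book: 1,000 households, 5% in high flood-risk areas.
policies = [{"high_risk": False, "risk_premium": 300, "capped_premium": 300}
            for _ in range(950)]
policies += [{"high_risk": True, "risk_premium": 1500, "capped_premium": 400}
             for _ in range(50)]

fund, shortfall = pool_subsidy(policies, levy=60)
print(fund, shortfall)  # 60000 collected vs 55000 needed: the pool covers the gap
```

The point of the sketch is the cross-subsidy: a small, near-invisible levy on everyone keeps cover affordable for the few whom "perfect" risk-based pricing would exclude.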
Because insurance plays such a vital role, what’s expected of it is different. Further, its toxic reputational legacy charges legitimate scrutiny with suspicion, a suspicion often at the heart of the brand storytelling new players use to differentiate themselves. At the time of writing, Swedish property insurance startup Hedvig is making headlines with a million-dollar funding round and its “Nice Insurance” meme, further described as building a “modern full-stack Insurance company” by “not being inherently greedy”.3 My professional experience is that insurance is full of skilled, empathetic and conscientious people. But euphemistic jargon like “dual pricing” and “loyalty penalty” really doesn’t wash outside our professional bubble, does it? Insurance starts from such a dire position of general “non-trust” that hyper-vigilance is required on AI.4
The confident prescriptions for your firm’s success (or imminent demise), dependent on “enthusiastic adoption” of AI and coming at you from all directions, belie an inconvenient truth: we’re at the very beginning of getting used to life and business with widespread AI. There is no blueprint; even AI technologists disagree vociferously with each other on definitions and on the timing of “the singularity” (the hypothesized future merger of human and machine intelligence to create something greater than either).5 There is general agreement that, having experienced two “AI winters” (see the AI milestones timeline at the end of this chapter), and despite deal velocity and mind-boggling valuations echoing the dot-com bubbles of the past, we’re now at a tipping point where algo-driven activities are embedding into and affecting every area of human life. It is erroneous (dangerous, even) to think that (a) AI is a source of absolute and totally predictable “truth”, and (b) its design and control is no-one’s business except the technical AI builders and specialists. These two quotes alone should hammer home why that really is not good enough:
Programs are not products; they are processes and we will never be sure what a process does until we run it – as occurred recently when Amazon’s facial recognition software misidentified 28 members of Congress as criminal suspects.
David Fisk, Emeritus Professor, Centre for Systems Engineering and Innovation, Imperial College6
Many AI techniques remain untested in financial crisis scenarios. There have been several instances in which the algorithms implemented by financial firms appeared to act in ways quite unforeseen by their developers, leading to errors and flash crashes (notably the pound’s flash crash following the Brexit referendum in 2016).
Bonnie G. Buchanan, PhD, FRSA, Artificial Intelligence in Finance, Alan Turing Institute7
Insurance leaders in the 2020s need to have a working understanding of AI technologies that is sufficient to confidently ask – and answer – the business-critical question: Just because we can, should we?
Let’s look at three examples common in insurance as food for thought:
Insurance has enthusiastically adopted behavioural analytics techniques for predictive and habit-changing purposes as a core competency and business process, notably in auto and health insurance. It turns out that humans may not work that way after all: in a recent study, researchers analysed the data of 382 Singapore residents who, in the hope of getting an insurance discount, agreed to let an app monitor and rate their driving. So far, so “run of the mill”. The research found that driving scores were noticeably worse on trips people took right after reviewing their ratings, compared to trips taken when people hadn’t reviewed them. What’s going on here? The research concluded that “the best approach is to provide individualized feedback because no single approach is going to work well for everybody.”8
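The comparison at the heart of that finding can be illustrated with invented data (hypothetical trips and scores below, not the Singapore study’s dataset or method):

```python
# Hypothetical illustration of the comparison described above: are driving
# scores worse on trips taken right after a driver reviews their rating?
# Invented data; not the study's dataset or analysis.
from statistics import mean

trips = [
    {"after_review": True,  "score": 62},
    {"after_review": True,  "score": 58},
    {"after_review": False, "score": 74},
    {"after_review": False, "score": 71},
    {"after_review": False, "score": 69},
]

post_review = mean(t["score"] for t in trips if t["after_review"])
other = mean(t["score"] for t in trips if not t["after_review"])
print(post_review, other)  # scores dip right after feedback in this toy sample
```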
For a while, one could have been forgiven for thinking so, or even for confidently asserting it in public forums (as I’ve personally seen insurance leaders do). How times change. There is a clear and powerful shift ongoing in public and civic society: the novelty of bartering personal data in return for utility – on Google, social media, or with other commercial entities like insurers – is wearing off. Data privacy, discrimination and cyber risks have entered common discourse, for many reasons including legislation (e.g. the General Data Protection Regulation) and informed advocacy.9 The more we experience algo-driven activities in action – election campaigns, recruitment, facial recognition used for repression – the more “information asymmetry” changes from acceptable business “advantage” into discrimination and control, and the clearer the lasting consequences of being on the wrong end of that imbalance become.
In the search for that game-changing “killer app”, personalization has been held up as both what insurance should be built around and what customers really want. The Flood Re and genetics-in-life-insurance examples mentioned earlier illustrate how highly accurate models create “risk pools of one”. Your challenge at the board table is: when does personalization (or “positive selection”, or “natural segmentation”) become the architecting of exclusion? Insurance ethics expert Duncan Minty10 is clear:
I believe the future shape of insurance will not be formed around personalisation because it is a solution that ultimately serves the market far more than it serves the consumer. It involves too much push, and not enough pull. It’s built upon inherent partiality, and will progressively feel exclusionary, rather than complete and inclusionary. Therein lies its fatal flaw.
AI is not a plug-and-play piece of “kit”, and leadership here should not be measured by the size of your data science team; yet boards routinely insist on using this type of “prove we’re doing something” approach. The World Economic Forum’s (WEF) New Physics of Financial Services report11 prescribes the following:
Taking the last point first, this speaks to being able to articulate your purpose as a business: your “Why (are you)?”. You and your people are there for a reason, and, ultimately, that reason is the customer. It’s not a trivial matter to pinpoint and describe; it’s even more arduous to live, because that requires defining exactly how your firm lives its purpose. Without the detail, all you have is a slogan, not an authentic promise and ambition: one around which your organizational culture is built and nurtured, that inspires outstanding performance from your people, and that flows through decision-making everywhere in the firm. According to the FCA, “A focus on culture is the responsibility of everyone in the firm. It should be a collaborative effort, by all areas and at all levels – and industry must take responsibility for delivering the standards it aspires to.”12 Codifying purpose – clearly and transparently – tells you where you stand on AI. Addressing all four of the WEF’s action areas, questions to ask include:
If ethical design and explainable AI (XAI) haven’t made the agenda yet, it’s only a matter of time. Whether you’re developing a leadership position or not, I believe the requirement for organizations to meet codified standards on their use of AI – explicitly addressing design, transparency (in use) and accountability (for the use, effects and results of programmed algorithms) – will be commonplace by 2025, manifested in kitemarks, mandatory audits and all the mechanics of insurance-specific regulatory oversight and accountability. Corporate carbon emissions reporting via greenhouse gas protocols may well be the blueprint here, with much faster progress to “Scope 3” equivalents for corporate use of AI – i.e. accountabilities extending beyond internal operations deep into your entire value chain, including partners and investors. Below are selected, non-exhaustive lists of (a) definitions, and (b) key developments in ethical design.
AI Ethics: A set of values, principles and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technologies.

Ethically Aligned Design/Ethical Design:
- Ethics by Design: The technical/algorithmic integration of ethical reasoning capabilities as part of the behaviour of an artificial autonomous system.
- Ethics in Design: The regulatory and engineering methods that support the analysis and evaluation of the ethical implications of AI systems, as these integrate or replace traditional social structures.
- Ethics for Design: The codes of conduct, standards and certification processes that ensure the integrity of developers and users as they research, design, construct, employ and manage artificially intelligent systems.

Explainable AI (XAI): A developing subfield of AI, focused on explaining complex AI models to humans in a systematic and interpretable manner, increasing the transparency of black-box algorithms by providing explanations for the predictions they make.
*Source: Dignum, V. Ethics Inf Technol (2018) 20: 1. https://doi.org/10.1007/s10676-018-9450-z.
**See the chapter “Introduction on AI Approaches in Capital Markets” by Aric Whitewood, for more detail.
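To make the XAI definition above concrete, here is a minimal sketch of one widely used model-agnostic technique, permutation importance, using a toy hand-rolled model (illustrative only; the model and data are invented, not drawn from any insurer’s systems):

```python
# Minimal sketch of a model-agnostic XAI technique: permutation importance.
# Shuffle one input feature at a time and see how much the model's error
# grows; a large increase reveals that the model leans on that feature.
import random

random.seed(0)

# Toy "model": predictions driven almost entirely by feature 0.
def model(x):
    return 5.0 * x[0] + 0.5 * x[1]

# Toy dataset whose targets come from the same relationship.
X = [[random.random(), random.random()] for _ in range(200)]
y = [model(x) for x in X]

def mse(X, y):
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature):
    """Error increase after shuffling one feature column."""
    col = [x[feature] for x in X]
    random.shuffle(col)
    X_perm = [list(x) for x in X]
    for row, v in zip(X_perm, col):
        row[feature] = v
    return mse(X_perm, y) - mse(X, y)

imp0 = permutation_importance(X, y, 0)
imp1 = permutation_importance(X, y, 1)
print(imp0 > imp1)  # prints True: feature 0 dominates, matching the model's design
```

The attraction for a board is that the technique needs no access to the model’s internals: it interrogates any black box from the outside, which is exactly the kind of transparency mechanism XAI auditing is likely to formalize.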
ECPAIS (Ethics Certification Program for Autonomous & Intelligent Systems): Certifications and processes for autonomous and smart systems (e.g. smart homes, companion robots, autonomous vehicles).

OCEANIS (Open Community for Ethics in Autonomous and Intelligent Systems): Awareness-building platform focused on algorithms, sensors, big data, ubiquitous networking and technologies used in autonomous and intelligent systems across all industry sectors.

EAD (Ethically Aligned Design): A manual of practical recommendations created by the IEEE for policymakers, technologists and academics, to advance public discussion and establish standards and policies for ethical and social implementations.

AI Commons: A non-profit made up of AI practitioners, academics, NGOs, AI industry players, entrepreneurs and others, connecting “problem owners” with “solvers”.

CXI (Council on Extended Intelligence): Joint IEEE/MIT Media Lab initiative comprised of individuals who “prioritize people and the planet over profit and productivity.”

FEAT (Monetary Authority of Singapore: Fairness, Ethics, Accountability, Transparency): Principles-based guidelines for data use and data protection, introduced in 2017.
Insurance at its best blends data, technology, human judgement and applied empathy to produce responsive, relevant, cost-effective solutions to some of the most challenging situations in life and business. If it chooses, purposeful, meaningful leadership on AI could be the answer to our sector’s deeply tarnished reputation with the customer, repositioning insurance as an “agent of trustworthiness” par excellence. You have a choice: how much of your resources will you allow to be directed at explaining why not, instead of organizing your firm to help prevent that “worst day”, and certainly to help customers cope with and recover from it?
Dignum, V., “Ethics in Artificial Intelligence”, Ethics Inf Technol (2018) 20:1: https://doi.org/10.1007/s10676-018-9450-z.
Marcus, Gary and Davis, Ernest, “How to Build Artificial Intelligence We Can Trust”, NY Times Opinion, 6 September 2019.
Tegmark, Max, Life 3.0: Being Human in the Age of Artificial Intelligence, 2017.
The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, First Edition. IEEE, 2019.
Leslie, D., Understanding Artificial Intelligence Ethics and Safety: A guide for the responsible design and implementation of AI systems in the public sector. The Alan Turing Institute, 2019: https://doi.org/10.5281/zenodo.3240529.