• 8 •
Catastrophes and insurance

Peter Taylor

This chapter explores the way financial losses associated with catastrophes can be mitigated by insurance. It covers what insurers mean by catastrophe and risk, and how computer modelling techniques have tamed the problem of quantitative estimation of many hitherto intractable extreme risks. Having assessed where these techniques work well, it explains why they can be expected to fall short in describing emerging global catastrophic risks such as threats from biotechnology. The chapter ends with some pointers to new techniques, which offer some promise in assessing such emerging risks.

8.1 Introduction

Catastrophic risks annually cause tens of thousands of deaths and tens of billions of dollars' worth of losses. The figures available from the insurance industry (see, for instance, the Swiss Re [2007] Sigma report) show that mortality has been fairly consistent, whilst the number of recognized catastrophic events, and even more so the size of financial losses, has increased. The steep rise in financial losses, and with it the number of recognized ‘catastrophes’, primarily comes from the increase in asset values in areas exposed to natural catastrophe. However, the figures disguise the size of losses affecting those unable to buy insurance and the relative size of losses in developing countries. For instance, Swiss Re estimated that of the $46 billion of losses due to catastrophe in 2006, which was a very mild year for catastrophe losses, only some $16 billion was covered by insurance. In 2005, a much heavier year for losses, Swiss Re estimated catastrophe losses at $230 billion, of which $83 billion was insured. Of the $230 billion, Swiss Re estimated that $210 billion was due to natural catastrophes and, of this, some $173 billion was due to the US hurricanes, notably Katrina ($135 billion). The huge damage from the Pakistan earthquake, though, caused relatively low losses in monetary terms (around $5 billion, mostly uninsured), reflecting the low asset values in less-developed countries.

In capitalist economies, insurance is the principal method of mitigating potential financial loss from external events. However, in most cases, insurance does not directly mitigate the underlying causes and risks themselves, unlike, say, a flood prevention scheme. Huge losses in recent years from asbestos, from the collapse of share prices in 2000/2001, the 9/11 terrorist attack, and then the 2004/2005 US hurricanes have tested the global insurance industry to the limit. But disasters cause premiums to rise, and where premiums rise capital follows.

Losses from hurricanes, though, pale beside the potential losses from risks that are now emerging in the world as technological, industrial, and social changes accelerate. Whether the well-publicized risks of global warming, the misunderstood risks of genetic engineering, the largely unrecognized risks of nanotechnology and machine intelligence, or the risks brought about by the fragility to shocks of our connected society, we are voyaging into a new era of risk management. Financial loss will, as ever, be an important consequence of these risks, and we can expect insurance to continue to play a role in mitigating these losses alongside capital markets and governments. Indeed, the responsiveness of the global insurance industry to rapid change in risks may well prove more effective than regulation, international cooperation, or legislation.

Insurance against catastrophes has been available for many years – we need only think of the San Francisco 1906 earthquake, when Cuthbert Heath sent the telegram ‘Pay all our policyholders in full irrespective of the terms of their policies’ back to Lloyd’s of London, an act that created long-standing confidence in the insurance markets as providers of catastrophe cover. For much of this time, assessing the risks from natural hazards such as earthquakes and hurricanes was largely guesswork, based on market shares of historic worst losses rather than any independent assessment of the chance of a catastrophe and its financial consequence. In recent years, though, catastrophe risk management has come of age with major investments in computer-based modelling. Through the use of these models, the insurance industry now understands the effects of many natural catastrophe perils to within an order of magnitude. The recent book by Erik Banks (see Suggestions for further reading) offers a thorough, up-to-date reference on the insurance of property against natural catastrophe. Whatever doubts exist concerning the accuracy of these models – and many in the industry do have concerns, as we shall see – there is no questioning that models are now an essential part of the armoury of any carrier of catastrophe risk.

Models notwithstanding, there is still a swathe of risks that commercial insurers will not carry. They fall into two types: (1) where the risk is uneconomic, such as houses on a flood plain, and (2) where the uncertainty of the outcomes is too great, such as terrorism. In these cases, governments in developed countries may step in to underwrite the risk, as we saw with TRIA (Terrorism Risk Insurance Act) in the United States following 9/11. An analysis1 of uninsured risks revealed that in some cases risks remain uninsured for a further reason – that the government will bail them out! There are also cases where underwriters will carry the risk, but policyholders find them too expensive. In these cases, people will go without insurance even if insurance is a legal requirement, as with young male UK drivers.

Another concern is whether the insurance industry is able to cope with the sheer size of the catastrophes. Following the huge losses of 9/11, a major earthquake or windstorm would have caused the collapse of many re-insurers and threatened the entire industry. However, this did not occur, and some loss-free years built up balance sheets to a respectable level. But then we had the reminders of the multiple Florida hurricanes in 2004, and hurricane Katrina (and others!) in 2005, after which the high prices for hurricane insurance have attracted capital market money to bolster traditional re-insurance funds. So we are already seeing financial markets merging to underwrite these extreme risks – albeit ‘at a price’. With the doom-mongering of increased weather volatility due to global warming, we can expect to see inter-governmental action, such as the Ethiopian drought insurance bond; governments taking on the role of insurers of last resort, as we saw with the UK Pool Re arrangement; governments bearing the risk themselves through schemes such as the US FEMA flood scheme; or indeed stepping in with relief when a disaster occurs.

8.2 Catastrophes

What are catastrophic events? A catastrophe to an individual is not necessarily a catastrophe to a company, still less a catastrophe for society. In insurance, for instance, a nominal threshold of $5 million is used by the Property Claims Service (PCS) in the United States to define a catastrophe. It would be remarkable for a loss of $5 million to constitute a ‘global catastrophe’!

We can map the semantic minefield by characterizing three types of catastrophic risk as treated in insurance (see Table 8.1): physical catastrophes, such as windstorm and earthquake, whether due to natural hazards or man-made accidental or intentional cause; liability catastrophes, whether intentional such as terrorism or accidental such as asbestosis; and systemic underlying causes leading to large-scale losses, such as the dotcom stock market collapse.

Although many of these catastrophes are insured today, some are not, notably emerging risks from technology and socio-economic collapse. These types of risk present huge challenges to insurers as they are potentially catastrophic losses and yet lack an evidential loss history.

Table 8.1 Three Types of Catastrophic Risk as Treated in Insurance


Catastrophe risks can occur in unrelated combination within a year or in clusters, such as a series of earthquakes and even the series of Florida hurricanes seen in 2004. Multiple catastrophic events in a year would seem to be exceptionally rare until we consider that the more extreme an event the more likely it is to trigger another event. This can happen, for example, in natural catastrophes where an earthquake could trigger a submarine slide, which causes a tsunami or triggers a landslip, which destroys a dam, which in turn floods a city. Such high-end correlations are particularly worrying when they might induce man-made catastrophes such as financial collapse, infrastructure failure, or terrorist attack. We return to this question of high-end correlations later in the chapter.

You might think that events are less predictable the more extreme they become. Bizarre as it is, this is not necessarily the case. It is known from statistics that a wide class of systems show, as we look at the extreme tail, a regular ‘extreme value’ behaviour. This has, understandably, been particularly important in Holland (de Haan, 1990), where tide level statistics along the Dutch coast since 1880 were used to set the dike height to a 1 in 10,000-year exceedance level. This compares to the general 1 in 30-year exceedance level for most New Orleans dikes prior to Hurricane Katrina (Kabat et al., 2005)!

You might also have thought that the more extreme an event is the more obvious must be its cause, but this does not seem to be true in general either. Earthquakes, stock market crashes, and avalanches all exhibit sudden large failures without clear ‘exogenous’ (external) causes. Indeed, it is characteristic of many complex systems to exhibit ‘endogenous’ failures following from their intrinsic structure (see, for instance, Sornette et al., 2003).

In a wider sense, there is the problem of predictability. Many large insurance losses have come from ‘nowhere’ – they simply were not recognized in advance as realistic threats. For instance, despite the UK experience with IRA bombing in the 1990s, and sporadic terrorist attacks around the world, no one in the insurance industry foresaw concerted attacks on the World Trade Center and the Pentagon on 11 September 2001.

Then there is the problem of latency. Asbestos was considered for years to be a wonder material2 whose benefits were thought to outweigh any health concerns. Although recognized early on, the ‘latent’ health hazards of asbestos did not receive serious attention until studies of its long-term consequences emerged in the 1970s. For drugs, we now have clinical trials to protect people from unforeseen consequences, yet material science is largely unregulated. Amongst the many new developments in nanotechnology, could there be latent modern versions of asbestosis?

8.3 What the business world thinks

You would expect the business world to be keen to minimize financial adversity, so it is of interest to know what business sees as the big risks.

A recent survey of perceived risk by Swiss Re (see Swiss Re, 2006, based on interviews in late 2005) of global corporate executives across a wide range of industries identified computer-based risk as the highest priority risk in all major countries by level of concern and second in priority as an emerging risk. Also, perhaps surprisingly, terrorism came tenth, and even natural disasters made only seventh. However, the bulk of the recognized risks were well within the traditional zones of business discomfort, such as corporate governance, regulatory regimes, and accounting rules.

The World Economic Forum (WEF) solicits expert opinion from business leaders, economists, and academics to maintain a finger on the pulse of risk and trends. For instance, the 2006 WEF Global Risks report (World Economic Forum, 2006) classified risks by likelihood and severity, with the most severe risks being those with losses greater than $1 trillion, mortality greater than 1 million, or adverse growth impact greater than 2%. They were as follows.

1. US current account deficit was considered a severe threat to the world economy in both the short term (1–10% chance) and the long term (<1% chance).

2. Oil price shock was considered a short-term severe threat of low likelihood (<1%).

3. Japan earthquake was rated as a 1–10% likelihood. No other natural hazards were considered sufficiently severe.

4. Pandemics, with avian flu as an example, were rated as a 1–10% chance.

5. Developing world disease: the spread of HIV/AIDS and TB epidemics was similarly considered a severe and high-likelihood threat (1–20%).

6. Organized crime counterfeiting was considered to offer severe outcomes (long term) due to vulnerability of IT networks, but rated low frequency (<1%).

7. International terrorism was considered potentially severe, through a conventional simultaneous attack (short term, estimated at <1%) or a non-conventional attack on a major city in the longer term (1–10%).

No technological risks were considered severe, nor was climate change. Most of the risks classified as severe were considered of low likelihood (<1%) and all were based on subjective consensual estimates.

The more recent 2007 WEF Global Risks report (World Economic Forum, 2007) shows a somewhat different complexion with risk potential generally increased, most notably the uncertainty in the global economy from trade protectionism and over-inflated asset values (see Fig. 8.1). The report also takes a stronger line on the need for intergovernmental action and awareness.

It seems that many of the risks coming over the next 5–20 years from advances in biotechnology, nanotechnology, machine intelligence, the resurgence of nuclear power, and socio-economic fragility all sit below the radar of the business world today. Those that are in their sights, such as nanotechnology, are assessed subjectively and largely disregarded.

And that is one of the key problems when looking at global catastrophic risk and business. These risks are too big and too remote to be treated seriously.

8.4 Insurance

Insurance is about one party taking on another’s financial risk. Given what we have just seen of our inability to predict losses, and given the potential for dispute over claims, it is remarkable that insurance even exists, yet it does! Through the protection offered by insurance, people can take on the risks of ownership of property and the creation of businesses. The principles of ownership and its financial protection that we have in the capitalist West, though, do not apply to many countries; so, for instance, commercial insurance did not exist in Soviet Russia. Groups with a common interest, such as farmers, can share their common risks either implicitly by membership of a collective, as in Soviet Russia, or explicitly by contributing premiums to a mutual fund. Although mutuals were historically of importance as they often initiated insurance companies, insurance is now almost entirely dominated by commercial risk-taking.


Fig. 8.1 World Economic Forum 2007 – The 23 core global risks: likelihood with severity by economic loss.

The principles of insurance were set down over 300 years ago in London by shipowners at the same time as the theory of probability was being formulated to respond to the financial demands of the Parisian gaming tables. Over these years a legal, accounting, regulatory, and expert infrastructure has built up to make insurance an efficient and effective form of financial risk transfer.

To see how insurance works, let us start with a person or company owning property or having a legal liability in respect of others. They may choose to take their chances of avoiding losses by luck but will generally prefer to protect against the consequences of any financial losses due to a peril such as fire or accident. In some cases, such as employer’s liability, governments require by law that insurance be bought. Looking to help out are insurers who promise (backed normally by capital or a pledge of capital) to pay for these losses in return for a payment of money called a ‘premium’. The way this deal is formulated is through a contract of insurance that describes what risks are covered. Insurers will only stay in business over a period of years if premiums exceed claims plus expenses. Insurers will nonetheless try to run their businesses with as little capital as they can get away with, so government regulators exist to ensure they have sufficient funds. In recent years, regulators such as the Financial Services Authority in the United Kingdom have put in place stringent quantitative tests on the full range of risks within an insurer: underwriting risks, such as the chance of losing a lot of money in one year due to a catastrophe; credit risks, such as the default of debtors; market risks, such as failure of the market to provide profitable business; and operational risks from poor systems and controls.

Let us take a simple example: your house. In deciding the premium to insure your house for a year, an underwriter will apply a ‘buildings rate’ for your type of house and location to the rebuild cost of the house, and then apply a ‘contents rate’, reflecting your home’s location and safety features against fire and burglary, to the value of contents. The rate is the underwriter’s estimate of the chance of loss – in simple terms, a rate of 0.2% is equivalent to expecting a total loss once in 500 years. So, for example, the insurer might think you live in a particularly safe area and have good fire and burglary protection, and so charge you, say, a 0.1% rate on buildings and 0.5% on contents. Thus, if your house’s rebuild cost was estimated at £500,000 and your contents at £100,000, you would pay £500 for buildings and £500 for contents, a total of £1000 a year.
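As a minimal sketch, the rate-times-value calculation above can be written out directly; the function name and all figures simply restate the worked example in the text and are purely illustrative.

```python
# Minimal sketch of the rate-based premium described above; the figures
# restate the worked example in the text and are purely illustrative.

def premium(rebuild_cost, contents_value, buildings_rate, contents_rate):
    """Premium = rate applied to insured value for each element, then summed."""
    return rebuild_cost * buildings_rate + contents_value * contents_rate

# 0.1% on £500,000 of buildings plus 0.5% on £100,000 of contents:
print(premium(500_000, 100_000, 0.001, 0.005))  # 1000.0, i.e., £1000 a year
```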

Most insurance works this way. A set of ‘risk factors’ such as exposure to fire, subsidence, flood, or burglary are combined – typically by addition – in the construction of the premium. The rate for each of these factors comes primarily from claims experience – this type of property in this type of area has this proportion of losses over the years. That yields an average price. Insurers, though, need to guard against bad years and to do this they will try to underwrite enough of these types of risk, so that a ‘law of large numbers’ or ‘regression to the mean’ reduces the volatility of the losses in relation to the total premium received. Better still, they can diversify their portfolio of risks so that any correlations of losses within a particular class (e.g., a dry winter causes earth shrinkage and subsidence of properties built on clay soils) can be counteracted by uncorrelated classes.
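The volatility-reduction effect can be illustrated with a toy binomial model – an assumption standing in for real claims experience – in which each of n independent policies suffers a fixed-size loss with probability p; the standard deviation of total claims relative to expected claims then falls as 1/sqrt(n).

```python
import math

# Toy illustration of the 'law of large numbers' effect described above:
# n independent policies, each losing a fixed amount with probability p.
# Relative volatility of total claims (binomial model) shrinks as 1/sqrt(n).

def relative_volatility(n, p):
    """Std dev of total claims divided by expected total claims."""
    return math.sqrt(n * p * (1 - p)) / (n * p)

for n in (100, 10_000, 1_000_000):
    print(f"{n:>9} policies: relative volatility {relative_volatility(n, 0.002):.3f}")
```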

If only all risks were like this, but they are not. There may be no decent claims history, the conditions in the future may not resemble the past, there may be a possibility of a few rare but extremely large losses, it may not be possible to reduce volatility by writing a lot of the same type of risk, and it may not be possible to diversify the risk portfolio. One or more of these circumstances can apply. For example, lines of business where we have low claims experience and doubt over the future include ‘political risks’ (protecting a financial asset against a political action such as confiscation). The examples most relevant to this chapter are ‘catastrophe’ risks, which typically have low claims experience, large losses, and limited ability to reduce volatility. To understand how underwriters deal with these, we need to revisit what the pricing of risk and indeed risk itself are all about.

8.5 Pricing the risk

The primary challenge for underwriters is to set the premium to charge the customer – the price of the risk. In constructing this premium, an underwriter will usually consider the following elements:

1. Loss costs, being the expected cost of claims to the policy.

2. Acquisition costs, such as brokerage and profit commissions.

3. Expenses, being what it costs to run the underwriting operation.

4. Capital costs, being the cost of supplying the capital required by regulators to cover the possible losses according to their criterion (e.g., the United Kingdom’s FSA currently requires capital to meet a 1-in-200-year or more annual chance of loss).

5. Uncertainty cost, being an additional subjective charge in respect of the uncertainty of this line of business. This can in some lines of business, such as political risk, be the dominant factor.

6. Profit, being the profit margin required of the business. This can sometimes be set net of expected investment income from the cash flow of receiving premiums before having to pay out claims, which for ‘liability’ contracts can be many years.
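One hedged way to see how the six elements listed above might fit together is a simple additive build-up with the profit margin applied at the end; the loading structure and every figure below are illustrative assumptions, not a market-standard formula.

```python
# Illustrative build-up of a premium from the six elements listed above.
# The additive structure and all figures are assumptions for exposition.

def build_premium(loss_cost, acquisition, expenses, capital_cost,
                  uncertainty_load, profit_margin):
    # Sum the cost elements, then apply the profit margin to the total.
    base = loss_cost + acquisition + expenses + capital_cost + uncertainty_load
    return base * (1 + profit_margin)

# e.g., £500 of expected claims with hypothetical loadings and a 10% margin:
print(build_premium(500, 75, 60, 100, 50, 0.10))
```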

Usually the biggest element of price is the loss cost or ‘pure technical rate’. We saw above how this was set for household building cover. The traditional method is to model the history of claims, suitably adjusted to current prices, with frequency/severity probability distribution combinations, and then trend these in time into the future, essentially a model of the past playing forward. The claims can be those either on the particular contract or on a large set of contracts with similar characteristics.

Setting prices from rates is – like much of the basic mathematics of insurance – essentially a linear model, even though non-linearity becomes pronounced when large losses happen. As an illustration of non-linearity, insurers made allowance for some inflation of rebuild/repair costs when underwriting for US windstorm, yet the actual ‘loss amplification’ in Hurricane Katrina was far greater than had been anticipated. Another popular use of linearity has been linear regression in modelling risk correlations. In extreme cases, though, this assumption, too, can fail. A recent and expensive example was when the dotcom bubble burst in April 2000. The huge loss of stock value to millions of Americans triggered allegations of impropriety and legal actions against investment banks; class actions against the directors and officers of many high-technology companies whose stock price had collapsed, as with Enron, WorldCom, and Global Crossing; and the demise of the accountants Arthur Andersen, discredited when the Enron story came to light. Each of these events has led to massive claims against the insurance industry. Instead of linear correlations, we need now to deploy the mathematics of copulas.3 This phenomenon is also familiar from physical damage, where a damaged asset can in turn enhance the damage to another asset, either directly, such as when debris from a collapsed building creates havoc on its neighbours (‘collateral damage’), or indirectly, such as when loss of power exacerbates communication functions and recovery efforts (‘dependency damage’).

As well as pricing risk, underwriters have to guard against accumulation of risk. In catastrophe risks the simplest measure of accumulations is called the ‘aggregate’ – the cost of total ruin when everything is destroyed. Aggregates represent the worst possible outcome and are an upper limit on an underwriter’s exposure, but have unusual arithmetic properties. (As an example, the aggregate exposure of California is typically lower than the sum of the aggregate exposures in each of the Cresta zones into which California is divided for earthquake assessment. The reason for this is that many insurance policies cover property in more than one zone but have a limit of loss across the zones. Conversely, for fine-grained geographical partitions, such as postcodes, the sum across two postal codes can be higher than the aggregate of each. The reason for this is that risks typically have a per location [per policy when dealing with re-insurance] deductible!)
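The zone-limit effect in the parenthetical example above can be sketched as follows; the exposures and limit are hypothetical figures chosen only to show the sub-additivity.

```python
# Sketch of why aggregates are not additive across zones: a policy covering
# two zones but subject to one overall limit. All figures are hypothetical.

def aggregate(exposures, limit):
    """Worst-case insured loss: total exposure capped at the policy limit."""
    return min(sum(exposures), limit)

zone_a, zone_b = 80, 70      # exposures ($m) in two earthquake zones
limit = 100                  # single limit of loss across both zones ($m)

sum_of_zone_aggregates = aggregate([zone_a], limit) + aggregate([zone_b], limit)
combined_aggregate = aggregate([zone_a, zone_b], limit)
print(sum_of_zone_aggregates, combined_aggregate)  # 150 100
```

The combined aggregate (100) is below the sum of the per-zone aggregates (150) because the overall limit bites only when the zones are taken together.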

8.6 Catastrophe loss models

For infrequent and large catastrophe perils such as earthquakes and severe windstorms, the claims history is sparse and, whilst useful for checking the results of models, offers insufficient data to support reliable claims analysis. Instead, underwriters have adopted computer-based catastrophe loss models, typically from proprietary expert suppliers such as RMS (Risk Management Solutions), AIR (Applied Insurance Research), and EQECAT.

The way these loss models work is well-described in several books and papers, such as the recent UK actuarial report on loss models (GIRO, 2006). From there we present Fig. 8.2, which shows the steps involved.

Quoting directly from that report:

Catastrophe models have a number of basic modules:

• Event module

A database of stochastic events (the event set) with each event defined by its physical parameters, location, and annual probability/frequency of occurrence.

• Hazard module

This module determines the hazard of each event at each location. The hazard is the consequence of the event that causes damage – for a hurricane it is the wind at ground level, for an earthquake, the ground shaking.

• Inventory (or exposure) module

A detailed exposure database of the insured systems and structures. As well as location this will include further details such as age, occupancy, and construction.

• Vulnerability module

Vulnerability can be defined as the degree of loss to a particular system or structure resulting from exposure to a given hazard (often expressed as a percentage of sum insured).

• Financial analysis module

This module uses a database of policy conditions (limits, excess, sub limits, coverage terms) to translate this loss into an insured loss.

Of these modules, two, the inventory and financial analysis modules, rely primarily on data input by the user of the models. The other three modules represent the engine of the catastrophe model, with the event and hazard modules being based on seismological and meteorological assessment and the vulnerability module on engineering assessment. (GIRO, 2006, p. 6)
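The five modules quoted above can be caricatured as a toy pipeline; every event, frequency, hazard function, damage curve, and policy term below is a stand-in assumption, not part of any proprietary model.

```python
# Toy pass through the five catastrophe-model modules described above.
# All numbers and functional forms are illustrative assumptions.

# Event module: stochastic event set with annual frequencies.
events = [
    {"name": "storm_small", "intensity": 0.4, "freq": 0.20},
    {"name": "storm_big",   "intensity": 0.9, "freq": 0.02},
]

# Inventory module: insured locations and sums insured.
inventory = [{"site": "A", "sum_insured": 1_000_000},
             {"site": "B", "sum_insured": 2_000_000}]

def hazard(event, site):
    # Hazard module: hazard level the event produces at the site
    # (here, crudely, the same intensity everywhere).
    return event["intensity"]

def vulnerability(hazard_level):
    # Vulnerability module: damage ratio as a function of hazard level.
    return hazard_level ** 2

def financial(ground_up, deductible=50_000, limit=1_500_000):
    # Financial analysis module: apply policy terms to get the insured loss.
    return min(max(ground_up - deductible, 0.0), limit)

insured_loss = {}
for ev in events:
    ground_up = sum(vulnerability(hazard(ev, loc["site"])) * loc["sum_insured"]
                    for loc in inventory)
    insured_loss[ev["name"]] = financial(ground_up)
print(insured_loss)
```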


Fig. 8.2 Generic components of a loss model.


Fig. 8.3 Exceedance probability loss curve.

The model simulates a catastrophic event such as a hurricane by giving it a geographical extent and peril characteristics so that it ‘damages’ – as would a real hurricane – the buildings according to ‘damageability’ profiles for occupancy, construction, and location. This causes losses, which are then applied to the insurance policies in order to calculate the accumulated loss to the insurer. The aim is to produce an estimate of the probability of loss in a year called the occurrence exceedance probability (OEP), which estimates the chance of exceeding a given level of loss in any one year, as shown in Fig. 8.3. When the probability is with respect to all possible losses in a given year, then the graph is called an aggregate exceedance probability curve (AEP).
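A minimal sketch of how an OEP value is obtained, assuming a hypothetical event-loss table and independent Poisson event arrivals: the chance that at least one event in a year exceeds a loss level x is 1 − exp(−λ(x)), where λ(x) is the combined annual frequency of events whose loss exceeds x.

```python
import math

# Sketch of an occurrence exceedance probability (OEP) calculation from a
# hypothetical event-loss table, assuming Poisson event arrivals.

event_losses = [  # (annual frequency, insured loss in $)
    (0.20, 5e6),
    (0.05, 50e6),
    (0.01, 200e6),
]

def oep(x):
    """Chance that at least one event in a year causes a loss above x."""
    rate = sum(freq for freq, loss in event_losses if loss > x)
    return 1.0 - math.exp(-rate)

for threshold in (1e6, 10e6, 100e6):
    print(f"OEP({threshold:.0e}) = {oep(threshold):.4f}")
```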

You will have worked out that just calculating the losses on a set of events does not yield a smooth curve. You might also have asked yourself how the ‘annual’ bit gets in. You might even have wondered how the damage is chosen, because surely in real life there is a range of damage even for otherwise similar buildings. Well, it turns out that different loss modelling companies have different ways of choosing the damage percentages and of combining these events, and these choices determine the way the exceedance probability distributions are calculated.4 Whatever their particular solutions, though, we end up with a two-dimensional estimate of risk through the ‘exceedance probability (EP) curve’.

8.7 What is risk?

[R]isk is either a condition of, or a measure of, exposure to misfortune – more concretely, exposure to unpredictable losses. However, as a measure, risk is not one-dimensional – it has three distinct aspects or ‘facets’ related to the anticipated values of unpredictable losses. The three facets are Expected Loss, Variability of Loss Values, and Uncertainty about the Accuracy of Mental Models intended to predict losses.

Ted Yellman, 2000

Although none of us can be sure whether tomorrow will be like the past or whether a particular insurable interest will respond to a peril as a representative of its type, the assumption of such external consistencies underlies the construction of rating models used by insurers. The parameters of these models can be influenced by past claims, by additional information on safety factors and construction, and by views on future medical costs and court judgments. Put together, these are big assumptions to make, and with catastrophe risks the level of uncertainty about the chance and size of loss is of primary importance, to the extent that such risks can be deemed uninsurable. Is there a way to represent this further level of uncertainty? How does it relate to the ‘EP curves’ we have just seen?

Kaplan and Garrick (1981) defined quantitative risk in terms of three elements – probability for likelihood, evaluation measure for consequence, and ‘level 2 risk’ for the uncertainty in the curves representing the first two elements. Yellman (see quote in this section) has taken this further by elaborating the ‘level 2 risk’ as uncertainty of the likelihood and adversity relationships.

We might represent these ideas by the EP curve in Fig. 8.4. When dealing with insurance, ‘Likelihood’ is taken as probability density and ‘Adversity’ as loss. Jumping further up the abstraction scale, these ideas can be extended to qualitative assessments, where instead of defined numerical measures of probability and loss we look at categoric (low, medium, high) measures of Likelihood and Adversity. The loss curves now look like those shown in Figs. 8.5 and 8.6. Putting these ideas together, we can represent these elements of risk in terms of fuzzy exceedance probability curves as shown in Fig. 8.7.

Another related distinction is made in many texts on risk (e.g., see Woo, 1999) between intrinsic or ‘aleatory’ (from the Latin for dice) uncertainty and avoidable or ‘epistemic’ (implying it follows from our lack of knowledge) risk. The classification of risk we are following looks at the way models predict outcomes in the form of a relationship between chance and loss. We can have many different parameterizations of a model and, indeed, many different models. These two types of risk are known in insurance as ‘process risk’ and ‘model risk’, respectively.


Fig. 8.4 Qualitative loss curve.


Fig. 8.5 Qualitative risk assessment chart – Treasury Board of Canada.

These distinctions chime very much with the way underwriters in practice perceive risk and set premiums. There is a saying in catastrophe re-insurance that ‘nothing is less than 1 on line’, meaning the vagaries of life are such that you should never price high-level risk at less than the chance of a total loss once in a hundred years (1%). So, whatever the computer models might tell the underwriter, the underwriter will typically allow for the ‘uncertainty’ dimension of risk. In commercial property insurance this add-on factor has taken on a pseudo-scientific flavour, which well illustrates how intuition may find an expression with whatever tools are available.


Fig. 8.6 Qualitative risk assessment chart – World Economic Forum 2006.


Fig. 8.7 Illustrative qualitative loss curves.

8.8 Price and probability

Armed with a recognition of the three dimensions of risk – chance, loss, and uncertainty – the question arises as to whether the price of an insurance contract, or indeed some other financial instrument related to the future, is indicative of the probability of a particular outcome. In insurance it is common to ‘layer’ risks as ‘excess of loss’ to demarcate the various parts of the EP curve. When this is done, we can indeed generally say that the element of price due to loss costs (see above) represents the mean of the losses to that layer and, for a given shape of curve, tells us the probability under the curve. The problem is whether that separation into a ‘pure technical’ price can be made, and generally it cannot as we move into the extreme tail, because the third dimension – uncertainty – dominates the price. For some financial instruments, such as weather futures, this probability prediction is much easier to make as the price is directly related to the chance of exceedance of some measure (such as degree days). For commodity prices, though, the relationship is generally too opaque to draw any such direct link from price to probability of event.
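The layering idea can be sketched with a toy discrete loss distribution; the attachment point, limit, and probabilities below are hypothetical figures, and the loss cost computed is the mean of the losses falling into the layer, as described above.

```python
# Sketch of 'excess of loss' layering: the loss cost of a layer is the mean
# of the losses falling into it. The scenarios below are hypothetical.

def layer_loss(loss, attachment, limit):
    """Portion of a loss falling in the layer 'limit xs attachment'."""
    return min(max(loss - attachment, 0.0), limit)

# Toy annual loss distribution: (probability, loss in $).
scenarios = [(0.90, 0.0), (0.07, 20e6), (0.02, 60e6), (0.01, 150e6)]

attachment, limit = 50e6, 50e6   # a '$50m xs $50m' layer
loss_cost = sum(p * layer_loss(loss, attachment, limit)
                for p, loss in scenarios)
print(loss_cost)   # expected annual loss to the layer
```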

8.9 The age of uncertainty

We have seen that catastrophe insurance is expressed in a firmly probabilistic way through the EP curve, yet we have also seen that this misses many of the most important aspects of uncertainty.

Choice of model and choice of parameters can make a big difference to the probabilistic predictions of loss we use in insurance. In a game of chance, the only risk is process risk, so that the uncertainty resides solely with the probability distribution describing the process. It is often thought that insurance is like this, but it is not: it deals with the vagaries of the real world. We attempt to approach an understanding of that real world with models, and so for insurance there is additional uncertainty from incorrect or incorrectly configured models.

In practice, though, incorporating uncertainty will not be that easy to achieve. Modellers may not wish to move from the certainties of a single EP curve to the demands of sensitivity testing and the subjectivities of qualitative risk assessment. Underwriters in turn may find adding further levels of explicit uncertainty uncomfortable. Regulators, too, may not wish to have the apparent scientific purity of the loss curve cast into more doubt, giving insurers more not less latitude! On the plus side, though, this approach will align the tradition of expert underwriting, which allows for many risk factors and uncertainties, with the rigour of analytical models such as modern catastrophe loss models.

One way to deal with ‘parameter’ risk is to find the dependence of the model on its source assumptions of damage and cost, and of the chance of events. Sensitivity testing and subjective parameterizations would allow for a diffuse but more realistic EP curve.
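A minimal sketch of such sensitivity testing, assuming a toy Poisson-rate/Pareto-severity loss model with illustrative (uncalibrated) parameter ranges, shows how a single EP value spreads into a band once parameter uncertainty is admitted:

```python
import random

def ep_pareto(loss, rate, alpha, threshold=1.0):
    """Annual exceedance probability for a Poisson event rate with
    Pareto severities: EP(x) ≈ rate * P(severity > x)."""
    return rate * (threshold / loss) ** alpha if loss > threshold else rate

# Uncertain parameters (illustrative ranges, not calibrated values):
random.seed(0)
samples = [(random.uniform(0.5, 1.5),   # event rate per year
            random.uniform(0.8, 1.2))   # Pareto tail index
           for _ in range(500)]

# Spread of exceedance probabilities at a loss of 100 units:
eps = sorted(ep_pareto(100.0, r, a) for r, a in samples)
low, high = eps[int(0.05 * len(eps))], eps[int(0.95 * len(eps))]
print(f"5th-95th percentile EP at loss 100: {low:.4f} to {high:.4f}")
```

Plotting such bands at every loss level gives the ‘diffuse but more realistic EP curve’ described above.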

This leaves ‘model’ risk – what can we do about this? The common solution is to try multiple models and compare the results to get a feel for the spread caused by assumptions. The other way is to make an adjustment to reflect our opinion of the adequacy or coverage of the model, but this is today largely a subjective assessment.

There is a way we can consider treating parameter and model risk, and that is to construct adjusted EP curves that represent them. Suppose we could run several different models and obtain several different EP curves. Suppose, moreover, that we could rank these models with different weightings. That would allow us to create a revised EP curve, the weighted ‘convolution’ of the various models.
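Such a weighted blend of model EP curves can be sketched as follows; the three models and their weights are hypothetical stand-ins for real catastrophe models:

```python
def blend_ep(curves, weights, loss):
    """Weighted mixture of several models' exceedance probabilities
    at a given loss level, normalized by total weight."""
    total = sum(weights)
    return sum(w * ep(loss) for ep, w in zip(curves, weights)) / total

# Three hypothetical models giving different tail probabilities:
model_a = lambda x: min(1.0, 10.0 / x)           # heavier tail
model_b = lambda x: min(1.0, (10.0 / x) ** 1.5)
model_c = lambda x: min(1.0, (10.0 / x) ** 2)    # thinner tail

# Rank the models: here we trust model_b twice as much as the others.
weights = [1.0, 2.0, 1.0]
print(blend_ep([model_a, model_b, model_c], weights, loss=100.0))
```

Evaluating the blend across all loss levels yields the revised EP curve; the weightings themselves remain a subjective judgement about model adequacy.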

In the areas of emerging risk, parameter and model risk, not process risk, play a central role in the risk assessment as we have little or no evidential basis on which to decide between models or parameterizations.

But are we going far enough? Can we be so sure the future will be a repeat of the present? What about factors outside our domain of experience? Is it possible that for many risks we are unable to produce a probability distribution even allowing for model and parameter risk? Is insurance really faced with ‘black swan’ phenomena (‘black swan’ refers to the failure of the inductive inference that all swans are white, which collapsed when black swans were discovered in Australia), where factors outside our models are the prime driver of risk?

What techniques can we call upon to deal with these further levels of uncertainty?

8.10 New techniques

We have some tools at our disposal to deal with these challenges.

8.10.1 Qualitative risk assessment

Qualitative risk assessment, as shown in the figures in the chapter, is the primary way in which most risks are initially assessed. Connecting these qualitative tools to probability and loss estimates is a way in which we can couple the intuitions and judgements of everyday sense with the analytical techniques used in probabilistic loss modelling.

8.10.2 Complexity science

Complexity science is revealing surprising order in what was hitherto the most intractable of systems. Consider, for instance, wildfires in California, which have caused big losses to the insurance industry in recent years. An analysis of wildfires in different parts of the world (Malamud et al., 1998) shows several remarkable phenomena at work: first, that wildfires exhibit negative linear behaviour on a log-log graph of frequency and severity; second, that quite different parts of the world have comparable gradients for these lines; and third, that where humans interfere, they can create unintended consequences and actually increase the risk, as it appears that forest management by stamping out small fires has actually made large fires more severe in southern California. Such log-log negative linear plots correspond to inverse power probability density functions (pdfs) (Sornette, 2004), and this behaviour is quite typical of many complex systems as popularized in the book Ubiquity by Mark Buchanan (see Suggestions for further reading).
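The log-log regularity can be checked with a short calculation. The frequency-size figures below are synthetic, chosen only to mimic the inverse power behaviour that Malamud et al. report, not taken from their data:

```python
import math

# Synthetic frequency-size pairs following an inverse power law.
sizes = [1, 10, 100, 1000, 10000]    # burned area (arbitrary units)
freqs = [2000, 150, 11, 0.8, 0.06]   # events per year exceeding each size

# Least-squares slope on the log-log graph:
xs = [math.log10(s) for s in sizes]
ys = [math.log10(f) for f in freqs]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
print(f"log-log gradient: {slope:.2f}")  # negative, as the chapter describes
```

A straight negative line on these axes is precisely the signature of an inverse power pdf, and comparing such gradients across regions is how the cross-regional similarity was observed.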

8.10.3 Extreme value statistics

In extreme value statistics, similar regularities have emerged in the most surprising of areas – the extreme values we might historically have treated as awkward outliers. Can it be coincidence that complexity theory predicts inverse power law behaviour, extreme value theory predicts an inverse power pdf, and that empirically we find physical extremes of tides, rainfall, wind, and large losses in insurance showing Pareto (inverse power pdf) distribution behaviour?
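One concrete extreme-value tool is the Hill estimator of the Pareto tail index. The sketch below applies it to synthetic Pareto-distributed losses; the sample size, seed, and choice of k are arbitrary assumptions for the example:

```python
import math
import random

# Synthetic losses drawn from a Pareto distribution with tail index 1.5.
random.seed(1)
alpha_true = 1.5
losses = [random.paretovariate(alpha_true) for _ in range(10_000)]

def hill_tail_index(data, k):
    """Estimate the Pareto tail index from the k largest observations,
    using the smallest of them as the threshold (Hill estimator)."""
    top = sorted(data)[-k:]
    threshold = top[0]
    return k / sum(math.log(x / threshold) for x in top)

print(f"estimated tail index: {hill_tail_index(losses, 500):.2f}")
```

Applied to historical catastrophe losses rather than synthetic data, the same estimator gives an empirical handle on how heavy the tail really is.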

8.11 Conclusion: against the gods?

Global catastrophic risks are extensive, severe, and unprecedented. Insurance and business generally are not geared up to handling risks of this scale or type. Insurance can handle natural catastrophes such as earthquakes and windstorms, financial catastrophes such as stock market failures to some extent, and political catastrophes to a marginal extent. Insurance is best when there is an evidential basis and precedent for legal coverage. Business is best when the capital available matches the capital at risk and the return reflects the risk of loss of this capital. Global catastrophic risks unfortunately fail to meet any of these criteria. Nonetheless, the loss modelling techniques developed for the insurance industry coupled with our deeper understanding of uncertainty and new techniques give good reason to suppose we can deal with these risks as we have with others in the past. Do we believe the fatalist cliché that ‘risk is the currency of the gods’ or can we go ‘against the gods’ by thinking the causes and consequences of these emerging risks through, and then estimating their chances, magnitudes, and uncertainties? The history of insurance indicates that we should have a go!

Acknowledgement

I thank Ian Nicol for his careful reading of the text and identification and correction of many errors.

Suggestions for further reading

Banks, E. (2006). Catastrophic Risk (New York: John Wiley). Wiley Finance Series. This is a thorough and up-to-date text on the insurance and re-insurance of catastrophic risk. It explains clearly and simply the way computer models generate exceedance probability curves to estimate the chance of loss for such risks.

Buchanan, M. (2001). Ubiquity (London: Phoenix). This is a popular account – one of several now available including the same author’s Small Worlds – of the ‘inverse power’ regularities somewhat surprisingly found to exist widely in complex systems. This is of particular interest to insurers as the long-tail probability distribution most often found for catastrophe risks is the Pareto distribution, which is ‘inverse power’.

GIRO (2006). Report of the Catastrophe Modelling Working Party (London: Institute of Actuaries). This specialist publication provides a critical survey of the modelling methodology and commercially available models used in the insurance industry.

References

De Haan, L. (1990). Fighting the arch enemy with mathematics. Statistica Neerlandica, 44, 45–68.

Kabat, P., van Vierssen, W., Veraart, J., Vellinga, P., and Aerts, J. (2005). Climate proofing the Netherlands. Nature, 438, 283–284.

Kaplan, S. and Garrick, B.J. (1981). On the quantitative definition of risk. Risk Anal., 1(1), 11.

Malamud, B.D., Morein, G., and Turcotte, D.L. (1998). Forest fires – an example of self-organised critical behaviour. Science, 281, 1840–1842.

Sornette, D. (2004). Critical Phenomena in Natural Sciences – Chaos, Fractals, Self organization and Disorder: Concepts and Tools, 2nd edition (Berlin: Springer).

Sornette, D., Malevergne, Y., and Muzy, J.F. (2003). Volatility fingerprints of large shocks: endogeneous versus exogeneous. Risk Magazine.

Swiss Re. (2006). Swiss Re corporate survey 2006 report. Zurich: Swiss Re.

Swiss Re. (2007). Natural catastrophes and man-made disasters 2006. Sigma report no 2/2007. Zurich: Swiss Re.

Woo, G. (1999). The Mathematics of Natural Catastrophes (London: Imperial College Press).

World Economic Forum (2006). Global Risks 2006 (Geneva: World Economic Forum).

World Economic Forum (2007). Global Risks 2007 (Geneva: World Economic Forum).

Yellman, T.W. (2000). The three facets of risk. AIAA-2000-5594. In World Aviation Conference, San Diego, CA, 10–12 October 2000 (Seattle, WA: Boeing Commercial Airplane Group).