CHAPTER 21
Risk Measurement for a Credit Portfolio: Part Two

INTRODUCTION

In the previous chapter, we discussed the covariance approach to calculating portfolio credit losses. The main limitations of the covariance approach are that we need to have faith that the Beta distribution is a good description of the loss distribution, and it is difficult to include details such as correlated changes in credit grades and changes in exposure amount. This chapter discusses four popular alternative credit-portfolio models:

• The actuarial model

• The Merton-based simulation model

• The macroeconomic default model

• The macroeconomic cash-flow model

At the end, we show that most of the models can be combined into a unified model.

THE ACTUARIAL CREDIT-PORTFOLIO MODEL

The actuarial credit model is an approach for estimating the loss distribution for a portfolio of loans. It is called an actuarial approach because it directly uses the statistics of historical losses rather than trying to define an underlying mechanism for the cause of losses. The actuarial credit model has been implemented by Credit Suisse Financial Products in their CreditRisk+™ software.1 The approach is quite complex, and you would probably not want to build such a model from scratch; however, as a risk professional, you should be aware of the principles underlying this approach.

The approach begins by grouping the loans in the portfolio according to their sector and size. The sectors can be by industry or geography. The size is measured according to the dollar amount of the loss given default. The dollar amount is the usual percentage LGD multiplied by the exposure at default:

LGD$ = LGD × EAD

For a given “condition of the world,” the losses within a group are assumed to be conditionally independent of each other. This means that the default correlation is zero within the group for that “condition of the world.” Such a group of loans, with independent default events of equal size, will have a binomial loss distribution.

With N loans in the group, and an average probability of default equal to p, the binomial distribution gives the following probability of having k defaults:

P(k) = C(N, k) × p^k × (1 − p)^(N − k)

Here, C(N, k) is the binomial coefficient, given by the factorial expression:

C(N, k) = N! / (k! × (N − k)!)

As N becomes large and p becomes small (with the expected number of defaults, Np, held fixed), the binomial distribution converges towards the Poisson distribution. The Poisson distribution is given by:

P(k) = e^(−Np) × (Np)^k / k!

This distribution has a mean of Np and a standard deviation equal to the square root of Np. The Poisson distribution is useful because it requires only one parameter (the expected number of defaults, Np), and it is relatively easy to derive analytical expressions for combinations of Poisson distributions. The actuarial approach therefore approximates the loss distribution of the group of loans in the given “condition of the world” as a Poisson distribution.
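
To see the convergence numerically, here is a short Python sketch (the group size and default probability are illustrative values, not taken from the text) comparing the binomial probabilities with their Poisson approximation:

```python
# Compare the binomial loss distribution for a homogeneous group of loans
# with its Poisson approximation. N and p are illustrative values.
from scipy.stats import binom, poisson

N = 1000    # number of loans in the group
p = 0.005   # average probability of default per loan
mu = N * p  # expected number of defaults (the Poisson mean)

print(" k   binomial   poisson")
for k in range(11):
    print(f"{k:2d}   {binom.pmf(k, N, p):.6f}   {poisson.pmf(k, mu):.6f}")
```

For these values the two distributions agree closely, with differences appearing only in the third or fourth decimal place.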

Having established the loss distribution for one condition of the world, the next step is to include variations in the world’s condition. This is reflected in variations in the mean probability, p. We can think of this as saying that in a recession, for all loans, the probability of default will increase.

The mean probability is assumed to have a Gamma distribution. This distribution has long tails similar to those observed for credit losses, and again, it is possible to manipulate the Gamma distribution to derive analytic results. If we assume that the mean default rate in the Poisson distribution has a Gamma distribution, it can be shown that the overall losses across all conditions of the world will have the form of a negative binomial distribution. The probability of having k losses is calculated from the negative binomial distribution as follows:

P(k) = [Γ(α + k) / (k! × Γ(α))] × q^k × (1 − q)^α,   where q = σ² / (μ + σ²)

Here, α is determined by the mean (μ) and standard deviation (σ) of the Gamma distribution:

α = μ² / σ²

The mean of the Gamma distribution is chosen to be the mean probability of default for the group across all conditions of the world. There are several approaches for selecting the standard deviation of the Gamma distribution. One approach is to select the standard deviation to ensure that the final result has the same kurtosis as historically observed defaults. Another approach is to look at the historical default volatility within a sector and try to determine how much of the volatility was caused by systematic changes in the underlying mean probability of default (the Gamma distribution), and how much was caused by sampling from a population with a constant probability of default (the Poisson distribution).
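
The Gamma–Poisson mixture is easy to check by simulation. The sketch below (a minimal illustration; the mean and standard deviation of the Gamma distribution are assumed values) draws a mean default rate from the Gamma distribution for each condition of the world and then draws the number of defaults from a Poisson distribution with that mean. The simulated counts display the mean and the fattened variance of the negative binomial distribution:

```python
# Sample group default counts from the Gamma-Poisson mixture that
# underlies the negative binomial distribution. Parameters are
# illustrative only.
import numpy as np

rng = np.random.default_rng(seed=0)

mu = 5.0      # mean number of defaults across all conditions of the world
sigma = 3.0   # standard deviation of the Gamma-distributed mean

alpha = mu**2 / sigma**2   # Gamma shape parameter
theta = sigma**2 / mu      # Gamma scale parameter

trials = 100_000
lam = rng.gamma(shape=alpha, scale=theta, size=trials)  # rate per condition
k = rng.poisson(lam)                                    # defaults per condition

print(f"simulated mean:     {k.mean():.3f}  (theory: {mu})")
print(f"simulated variance: {k.var():.3f}  (theory: {mu + sigma**2})")
```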

Up to this point, we have discussed the method for calculating the probability distribution for a single group of loans. The groups are combined back into a portfolio by assuming that the losses from each group are independent of the losses in the other groups.

The amount of loss from group g equals the number of defaults (k) multiplied by the dollar amount of loss given default for that group:

Lg = k × LGD$

The probability of having such a loss amount is given by the group’s negative binomial distribution for k:

P(Lg) = P(k)

To estimate the portfolio’s loss distribution, we now need to step through every possible combination of k for each group.

For a given combination, the loss amount for the portfolio as a whole is the sum of the losses from each group:

LPortfolio = Σg Lg

The probability of having this combination is the product of the probabilities of each group’s having its given level of loss:

PCombination = Πg P(kg)

For example, consider two groups, A and B, each with two possible loss levels and the following probabilities:

PA(k = 1) = 70%, PA(k = 2) = 30%
PB(k = 1) = 80%, PB(k = 2) = 20%

If the LGD$ for A is $25 and for B is $16, the corresponding losses are as follows:

LA(k = 1) = $25, LA(k = 2) = $50
LB(k = 1) = $16, LB(k = 2) = $32

The four possible losses for the portfolio are as follows:

P1 = 70% × 80%, L1 = $25 + $16
P2 = 70% × 20%, L2 = $25 + $32
P3 = 30% × 80%, L3 = $50 + $16
P4 = 30% × 20%, L4 = $50 + $32

Finally, we sort the combinations according to the size of the loss, and calculate the cumulative probability to give us the cumulative-probability function for the losses.
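
The worked example can be reproduced in a few lines of Python; the sketch below enumerates the combinations, sums the group losses, and sorts them to build the cumulative-probability function:

```python
# Build the portfolio loss distribution for the two-group example above
# by enumerating every combination of group outcomes.
from itertools import product

# (probability, loss) pairs for each possible k in each group
group_a = [(0.70, 25.0), (0.30, 50.0)]  # LGD$ = $25, k = 1 or 2
group_b = [(0.80, 16.0), (0.20, 32.0)]  # LGD$ = $16, k = 1 or 2

combos = []
for (pa, la), (pb, lb) in product(group_a, group_b):
    combos.append((la + lb, pa * pb))   # groups are assumed independent

combos.sort()                           # sort by size of loss
cumulative = 0.0
for loss, prob in combos:
    cumulative += prob
    print(f"loss ${loss:5.0f}   probability {prob:.2f}   cumulative {cumulative:.2f}")
```

Running this gives losses of $41, $57, $66, and $82 with cumulative probabilities of 0.56, 0.70, 0.94, and 1.00.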

If we had x possible loss levels for each group and y groups, the total number of combinations that we would have to calculate is x^y. If there are many groups or many levels of loss to be considered, the number of combinations quickly becomes too large for efficient computation. As a practical alternative, the CSFP software uses a recurrence relationship for the probabilities. The relationship is mathematically complex but computationally efficient.

The overall process for the actuarial approach is summarized in Figure 21-1.

The main difficulty in using the actuarial approach is defining the standard deviations of the Gamma functions that effectively drive the correlations. A second

FIGURE 21-1 The Calculation Process for the Actuarial Approach


difficulty is that there is no easy way to link credit and market risks in the actuarial model. Both of these difficulties are overcome in the Merton simulation approach.

THE MERTON-BASED SIMULATION MODELS

In Chapter 19, we used the Merton approach to estimate a company’s probability of default based on changes in the value of its assets. In Chapter 20, we described the covariance credit-portfolio model, in which we used the Merton approach to calculate the probability that two companies would default at the same time. Here we describe another use of the Merton model: simulating correlated defaults for a portfolio of loans.

The Merton-based simulation models create random values for each company’s assets, and if the value is too low, the model simulates a default. For all the companies in the loan portfolio, the changes in the asset values are correlated, and as we will show, this produces correlated defaults.

There are three main advantages to this approach:

• There is no need to assume a probability distribution for the losses because a distribution is produced by the simulation.

• It is relatively easy to include uncertainties in not only the number of defaults but also in the exposure at default, loss given default, and changes in value due to changes in credit grade.

• It allows the simulation of market variables, such as interest rates, in parallel with the simulation of asset values. This allows us to calculate the credit exposure for derivatives and correctly correlate the exposure with counterparty defaults. It also opens the way to calculating credit risk and market risk in the same framework, thereby allowing them to be properly correlated.

The approach has been adopted in several forms by different banks and software companies. Models are commercially available from KMV Corporation in their PortfolioManager™ software2 and from RiskMetrics in their CreditMetrics™ software.3

Calculating Losses Due to Default Using the Merton-Based Approach

Let us start by looking at how the approach works if we just want to consider uncertainty in the default rate. Later, we will add uncertainty in the loss given default and the final credit grade. If we are just interested in uncertainty in the default rate, the process is as follows.

For each company in the portfolio, we need to find the probability of default, the exposure at default (EAD), and the loss given default (LGD). For now, we assume EAD and LGD are fixed. From the probability of default for each company, we calculate the critical distance to default using the inverse cumulative-probability function:

C = Φ−1(P)

This calculation was described in Chapter 20 when we used the Merton model to calculate the joint default probability.

Next, we calculate the correlation between the asset values of each company. The most obvious way to do this is to use the correlation between the equity values, but there is a more commonly used alternative, which we discuss later.

Once we know all the parameter values, we can begin the simulation. The first step is to create a set of random numbers that have the same correlation as the company asset values. If we had just two companies, we could use the following approach:

z1 = n1
z2 = ρ × n1 + √(1 − ρ²) × n2

Here, z1 and z2 represent the changes in the asset values of companies 1 and 2, n1 and n2 are independent random numbers drawn from a Standard Normal distribution, and ρ is the correlation between the asset values of the two companies. For more than two companies, correlated random values for the assets are created using Cholesky decomposition or Eigenvalue decomposition, as we discussed in the chapter on Monte Carlo VaR.

After creating the random number for each company, we test to see if it is less than the critical value. If it is, we say that the company has defaulted and record the loss given default. The losses from all companies are then summed to give the portfolio’s loss.

The process is repeated several thousand times with different sets of random asset values until we have enough results to create the loss distribution. The maximum probable loss and the economic capital can then be read from the distribution. For example, if we carried out 10,000 trials, the 10-basis-point maximum probable loss would be estimated from the tenth-worst result. Figure 21-2 summarizes the process for calculating the economic capital using the Merton-based simulation approach.
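
A minimal sketch of the default-only simulation is shown below. The three-company portfolio, its default probabilities, LGD$ amounts, and asset correlations are all illustrative; a production model would take them from the portfolio data:

```python
# Minimal Merton-based default simulation with fixed EAD and LGD.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(seed=0)

prob_default = np.array([0.02, 0.01, 0.05])  # PD per company (illustrative)
lgd_dollar = np.array([100.0, 250.0, 60.0])  # fixed LGD$ per company
corr = np.array([[1.0, 0.3, 0.2],
                 [0.3, 1.0, 0.4],
                 [0.2, 0.4, 1.0]])           # asset-value correlations

c = norm.ppf(prob_default)       # critical thresholds, C = Phi^-1(P)
chol = np.linalg.cholesky(corr)  # Cholesky factor of the correlation matrix

trials = 10_000
n = rng.standard_normal((trials, len(prob_default)))
z = n @ chol.T                   # correlated asset-value changes
losses = np.where(z < c, lgd_dollar, 0.0).sum(axis=1)  # default if below C

losses.sort()
print(f"expected loss:               {losses.mean():.1f}")
print(f"99.9% maximum probable loss: {losses[int(0.999 * trials)]:.1f}")
```

With 10,000 trials, the 99.9th percentile is the tenth-worst result, as described above.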

In the description above, we noted that there were alternative ways of creating the correlation between asset values. One approach is to set the correlation equal to the correlation between the equity values. The main problem with this is that if we have N companies in the portfolio, and we calculate the correlation between each pair of companies, we need to create an N-by-N correlation matrix with N × (N − 1)/2 unique correlations. If N is large, the method becomes slow.

A practical alternative is to use equity indices, rather than individual equity prices, to calculate the asset correlations. This greatly reduces the number of correlated random numbers that must be created by the Eigenvalue decomposition. In the simplest case, the asset value for each individual company (zi) could be modeled according to its Beta with a single market index (m) and a company-specific idiosyncratic term, εi:

FIGURE 21-2 The Calculation Process for the Merton Simulation Approach for Defaults


zi = βi × m + εi

With this model, it would only be necessary to produce a single random value for the market index, and then N uncorrelated idiosyncratic terms. The model used in practice is slightly more complex than this.

In practice, the value of each company is set to depend on a series of indices. One index, for example, could represent energy prices, and another could represent the performance of high-tech companies. A high-tech company in the energy field would then be modeled as having its assets depend on both indices:

zi = βi,energy × menergy + βi,tech × mtech + εi

These weights can be found by regressing the company’s equity returns against the returns on the indices.
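
A sketch of this factor model for two companies and two indices follows. The betas are illustrative, and the rescaling in the last line is an added assumption (not shown in the equations above) to keep each zi Standard Normal so that the default thresholds C = Φ−1(P) still apply:

```python
# Two-index factor model: one random draw per index, plus one
# idiosyncratic draw per company. Betas are illustrative and would in
# practice come from regressing equity returns on the indices.
import numpy as np

rng = np.random.default_rng(seed=0)

beta = np.array([[0.5, 0.3],   # company 1: loadings on (energy, tech)
                 [0.1, 0.7]])  # company 2: loadings on (energy, tech)

trials = 10_000
m = rng.standard_normal((trials, 2))                # the two index values
eps = rng.standard_normal((trials, beta.shape[0]))  # idiosyncratic terms

z = m @ beta.T + eps  # zi = beta_i,energy*m_energy + beta_i,tech*m_tech + eps_i

# rescale so each zi has unit variance (assumes independent indices)
z /= np.sqrt((beta**2).sum(axis=1) + 1.0)
```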

Including Uncertainty in the Exposure at Default and Loss in the Event of Default

Until this point, we have only modeled the uncertainty in default rates. In the simulation model, it is relatively easy to add uncertainties in the loss given default (LGD) and exposure at default (EAD). Uncertainty in the LGD can be included by randomly picking an amount of loss after a company has been classified as defaulted. The loss can be purely random, with a suitable standard deviation and probability distribution (as discussed in the section on parameterizing LGD), or it could be a function of the extent to which the assets fall below the critical threshold. This would create correlation between the LGD experienced by different companies.

Uncertainty in the exposure at default can be modeled by a simple random number for a product such as a line of credit. For a derivative product such as a swap, the exposure may be modeled as a function of simulated market rates, as discussed in the section on measuring the credit exposure of derivatives. The simulated market rates would be created in the same Eigenvalue or Cholesky decomposition that creates the correlated indices. Figure 21-3 adds the process for including uncertainty in LGD and EAD.

Including Losses Due to Downgrades

It is also relatively easy to extend the Merton simulation model to include the effect of credit losses due not only to defaults but also to downgrades. A downgrade occurs if the credit-rating agencies believe that a company’s probability of default has increased. If a downgrade occurs, investors are less willing to hold the loan because of the risk of nonpayment. Therefore, the value of the loan falls, even though there has not yet been a default. For a single loan, this phenomenon was discussed in Chapter 18, in which we described losses due to both default and downgrades. Now, we wish to calculate these losses for a whole portfolio and ensure that changes in grades are correctly correlated with each other and with defaults.

From the migration matrix in Table 18-1, we know the probability of a grade change. This is reproduced in Table 21-1 for a company that is rated BBB at the start of the year. We can see that there is an 89% probability that the company will still be BBB at the end of the year. There is also a 4.8% chance of being upgraded to single-A, and a 4.4% chance of being downgraded to BB.

Given these probabilities, we can define bands for a variable with a Standard Normal distribution, such that the probability of falling in a band equals the probability of migrating to a specific grade. These bands are illustrated in Figure 21-4.

The position of the critical threshold between each band corresponds to the cumulative probability of falling below that threshold. The threshold is calculated from the inverse of the cumulative Normal distribution:

Threshold = Φ−1 (Probability of Falling Below)
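
The band thresholds can be computed directly from a row of the migration matrix, as in the sketch below. The migration probabilities shown are approximate values in the spirit of Table 21-1, not the table’s exact entries:

```python
# Compute grade-migration thresholds for a BBB company. Probabilities
# (approximate, for illustration) are cumulated from the worst outcome
# (default) upwards; each threshold is Phi^-1 of the cumulative
# probability of falling below it.
import numpy as np
from scipy.stats import norm

grades = ["D", "CCC", "B", "BB", "BBB", "A", "AA", "AAA"]
probs = np.array([0.0024, 0.0016, 0.0081, 0.0444,
                  0.8924, 0.0483, 0.0026, 0.0003])
probs = probs / probs.sum()  # normalize to sum to one

cum = np.cumsum(probs)[:-1]  # cumulative probability below each threshold
thresholds = norm.ppf(cum)   # critical thresholds between bands

for low, high, t in zip(grades[:-1], grades[1:], thresholds):
    print(f"threshold between {low} and {high}: {t:+.3f}")
```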

FIGURE 21-3 The Calculation Process Used to Include Uncertainty in Exposure and Loss in the Event of Default


TABLE 21-1 Probability of Grade Migration for a Company Initially Rated BBB


FIGURE 21-4 Illustration of Probability of Grade Migration for a Company Initially Rated BBB


For a BBB company, the thresholds are given in Table 21-2.

If we now randomly select a value from a Standard Normal probability function, the probability of falling in a specified band will equal the probability of migration to the band.

So far, this is not very useful; we have simply created a way of simulating probabilities that we already know. The useful part of this analysis is that if we have two companies, we can correlate the random values that we use to predict rating changes. Each company will have the correct probability of a grade change, and because the random numbers are correlated, they will tend to change grades at the same time.

TABLE 21-2 Critical Thresholds for Grade Migration for a BBB Company


The obvious way to do this is to use the same approach as the Merton default model and assume that rating changes correspond to changes in the asset value, and that the random values represent the net asset values. With this assumption, we can bring the probability of grade changes into the same model as the probability of default. For each company, we create a Standard Normal variable, z, representing the asset value, with the variables for different companies correlated according to the equity correlations. We then test z to see whether it has crossed any of the critical thresholds. If it has, we record a loss equal to the difference in value between a loan of the initial grade and a loan of the final grade.
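
The sketch below puts these pieces together for two BBB-rated companies: correlated Standard Normal draws are bucketed against the migration thresholds, producing correlated grade changes. The migration probabilities and the asset correlation are illustrative:

```python
# Simulate correlated grade migrations for two BBB companies.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(seed=0)

grades = ["D", "CCC", "B", "BB", "BBB", "A", "AA", "AAA"]
probs = np.array([0.0024, 0.0016, 0.0081, 0.0444,
                  0.8924, 0.0483, 0.0026, 0.0003])
thresholds = norm.ppf(np.cumsum(probs / probs.sum())[:-1])

rho = 0.3         # asset-value correlation (illustrative)
trials = 100_000
n = rng.standard_normal((trials, 2))
z1 = n[:, 0]
z2 = rho * n[:, 0] + np.sqrt(1.0 - rho**2) * n[:, 1]

# np.searchsorted maps each z onto the band (final grade) it falls in
band1 = np.searchsorted(thresholds, z1)
band2 = np.searchsorted(thresholds, z2)

bbb = grades.index("BBB")
both_down = ((band1 < bbb) & (band2 < bbb)).mean()
print(f"P(both companies downgraded or defaulted): {both_down:.4f}")
```

In a full model, each final grade would be mapped to a revaluation of the loan, and the value changes would be summed across the portfolio.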

THE MACROECONOMIC DEFAULT MODEL

In the actuarial model, we assumed that for the given condition of the world, all defaults were independent, and we calculated the loss distribution for that condition of the world. We then said that the condition of the world can change, and that the probability of default for each loan changes as the world changes. The macroeconomic default model is very similar in that in any given economic condition, defaults are considered to be independent, but the probability of default for all loans is expected to change when the economy changes.

One form of the macroeconomic default model was developed by McKinsey and Company, and is implemented in their CreditPortfolioView™ software.4 Here, we describe not only the McKinsey model, but also other ways in which these concepts can be used.

In general, the macroeconomic default model works as follows. For each company, the probability of default is found using one of the methods discussed in Chapter 19. The probability used should be the “cycle-neutral” probability of default. The cycle-neutral probability of default is the average probability of default for that type of company across all phases of the business cycle, from boom to recession.

The next step is to create a model for the overall economy that links economic conditions to the overall probability of default for all loans. The simplest such model would be to say that the overall probability of default is equal to a constant plus a multiple of GDP growth:

POverall = a + b × G

Here, a and b are constants, and G is the GDP growth rate. A slightly more complex model would be to use a logit function:

POverall = 1 / (1 + e^−(a + b × G))

Here, a and b are again constants, but will have different values than in the previous equation. The values for a and b can be found by carrying out a regression between historical GDP growth and loss information. The loss information may be the bank’s own aggregate losses each year, or may be national data, such as the number of bond defaults in each past year.

The next step is to create a model that can randomly create different scenarios for economic conditions such as GDP. A simple model is to say that future GDP growth will equal current GDP growth plus a Normally-distributed random number:

Gk = G0 + σG × εk,   εk ~ N(0, 1)

Here, Gk is the growth for random scenario number k, and σG is the standard deviation of the growth rate from one year to the next.

Now we have the probability of each company’s defaulting, a model for how the probability changes in different economic conditions, and a model to create different conditions. These can be combined into a simulation model as follows.

In the simulation model, we randomly create a macroscenario, we calculate the default rate for the whole economy in that scenario, and then modify the default rate of individual companies according to the overall change in defaults:

PCompany,k = P̄Company × (POverall,k / P̄Overall)

Here, PCompany,k is the probability of default for the company in scenario k, POverall,k is the overall default rate in that scenario, and P̄Company and P̄Overall are the corresponding cycle-neutral probabilities of default. In this process, correlation between defaults is created by the common change in the whole portfolio’s probability of default.

Once we know the probability of default for each company in this scenario, we “flip a coin” to decide whether or not the company will actually default. The flipping of the coin has to work in such a way that the probability of default equals PCompany,k. The usual way to do this is to create a random number from a uniform distribution between zero and one. If the random number is less than PCompany,k, we say that the company has defaulted and we record the loss.5 We do this for all companies in this scenario and calculate the total loss for the portfolio. We then repeat the whole process for several thousand different scenarios, giving a distribution of possible losses. The process is illustrated in Figure 21-5.
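
A minimal sketch of this loop is shown below. The logit coefficients, GDP parameters, and portfolio data are all illustrative:

```python
# Minimal macroeconomic default model: simulate GDP scenarios, map each
# scenario to an overall default rate with the logit function, scale the
# cycle-neutral company PDs, and 'flip a coin' for each company.
import numpy as np

rng = np.random.default_rng(seed=0)

a, b = -4.0, -20.0         # logit coefficients (illustrative; b < 0 makes
                           # defaults rise when GDP growth falls)
g0, sigma_g = 0.02, 0.015  # current GDP growth and its annual volatility

pd_neutral = np.array([0.02, 0.01, 0.05])    # cycle-neutral company PDs
lgd_dollar = np.array([100.0, 250.0, 60.0])  # fixed LGD$ per company
p_overall_neutral = 1.0 / (1.0 + np.exp(-(a + b * g0)))

trials = 10_000
g = g0 + sigma_g * rng.standard_normal(trials)  # random GDP scenarios
p_overall = 1.0 / (1.0 + np.exp(-(a + b * g)))  # overall default rate

# scale each company's PD by the ratio of the scenario default rate to
# the cycle-neutral default rate
p_company = pd_neutral * (p_overall / p_overall_neutral)[:, None]

defaults = rng.uniform(size=p_company.shape) < p_company  # coin flips
losses = (defaults * lgd_dollar).sum(axis=1)

losses.sort()
print(f"expected loss:               {losses.mean():.2f}")
print(f"99.9% maximum probable loss: {losses[int(0.999 * trials)]:.2f}")
```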

The description above outlines a basic macroeconomic default model. Many enhancements can be added. One enhancement would be to describe the economy with several variables, such as GDP and inflation. These variables would then be correlated using the usual Eigenvalue decomposition. Another enhancement would be to add the possibility of grade migration. The cycle-neutral grade-migration probabilities are taken from the migration matrix, and the probabilities are then modified to cause more downgrades when the economy is doing badly. The final enhancement that we discuss here is a modification to the assumption that the probability of default for all companies must move in the same way.

FIGURE 21-5 Calculation for the Macroeconomic Default Model


In the basic model, we said that a company’s probability of default would be given by the following equation:

PCompany,k = P̄Company × (POverall,k / P̄Overall)

This forces us to assume that the probability of default for all companies changes by the same degree. As an alternative, recall that in Chapter 19, one of the models for predicting a company’s probability of default was the logistic regression:

PCompany = 1 / (1 + e^−(a + b1X1 + b2X2 + … + bnXn))

Here, Xi represents data on the customers, such as their financial ratios. When carrying out this regression, we can add a term to describe the economy:

PCompany = 1 / (1 + e^−(a + b1X1 + b2X2 + … + bnXn + c × G))

By adding the extra term, the model should be able to predict more accurately a company’s probability of default. We can also use this model directly in the simulation as follows:

PCompany,k = 1 / (1 + e^−(a + b1X1 + b2X2 + … + bnXn + c × Gk))

This allows us to model each company’s response to changes in the economy individually.

THE MACROECONOMIC CASH-FLOW MODEL

In Chapter 19, we described one technique for calculating the probability of default, which was to use a cash-flow model in which the cash flows for a project company changed according to different macroeconomic conditions. This approach can also be used to calculate the loss from a portfolio of projects. The key is to feed the same scenario into each model in parallel, then calculate the loss for each project and sum the losses to get the portfolio’s loss in that scenario. This is discussed in “Risk Measurement for Project Finance Guarantees,” Marrison, C.I., The Journal of Project Finance, Volume 7, Number 2, pp. 43-53, 2001, and illustrated in Figure 21-6.

UNIFIED SIMULATION MODEL

From the discussions of the different types of models, you probably got the sense that there were several recurring themes and that, at some level, the models must be similar. A nice discussion of the differences and similarities between the analytic versions of the covariance, actuarial, and Merton models is given in Koyluoglu, Ugur and Hickman, Andrew, “Reconcilable Differences,” Risk, October 1998, pp. 56–62.

For the simulation models, the underlying principles are so similar that they can all be combined into one model. It is relatively easy to build a model that simultaneously creates correlated scenarios for equity indices, macroeconomic variables (e.g., GDP, FX, inflation), and other market variables (e.g., oil prices, implied volatility).

FIGURE 21-6 Calculation of Portfolio Losses Using a Macroeconomic Model and Cash-Flow Models


The equity indices can be fed into the Merton model, the macrovariables can be fed into the macrodefault model, and the macrovariables and market variables can be fed into the cash-flow model. The changes in the market variables can also be used to estimate the exposure amount for derivatives, and can even be used to drive changes in the value of the traded instruments and ALM portfolio. This combines credit risk with market risk.

You can use this unified approach to bring together results for risks that have been modeled in different ways. For example, it is usually easiest to model the defaults of retail customers and small businesses using the logistic regression of the macrodefault model. For large, publicly traded companies, it is more accurate to use the Merton approach, and for project finance deals, it is best to use the cash-flow approach.

Obviously, for market risks, the approach is completely different from that for credit risk because we use Value-at-Risk (VaR). However, even VaR can be driven by the Monte Carlo simulation of macroeconomic factors and market rates.

The key is to create a single scenario in which the changes in all the variables are correlated (typically using Eigenvalue decomposition). That single scenario is fed separately into each portfolio model. Each model holds a different part of the portfolio and gives the loss for that part of the portfolio. The overall loss in that scenario is the sum of the losses in each model. The structure of the unified model is illustrated in Figure 21-7.
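
In code, this structure reduces to a simple skeleton: one correlated scenario per trial, passed to every sub-model, with the losses summed. The sub-model functions below are hypothetical placeholders, not real implementations:

```python
# Skeleton of a unified simulation model. Each sub-model prices its own
# part of the portfolio against the same shared scenario.
import numpy as np

rng = np.random.default_rng(seed=0)

def merton_losses(scenario):
    return 0.0  # placeholder: correlated-asset default/migration test

def macro_default_losses(scenario):
    return 0.0  # placeholder: logistic macrodefault model (retail/SME)

def cash_flow_losses(scenario):
    return 0.0  # placeholder: project-finance cash-flow models

def market_var_losses(scenario):
    return 0.0  # placeholder: Monte Carlo VaR on traded and ALM books

sub_models = [merton_losses, macro_default_losses,
              cash_flow_losses, market_var_losses]

def run(trials, chol):
    """chol: Cholesky factor of the correlation matrix covering all
    factors (equity indices, macrovariables, and market variables)."""
    losses = np.empty(trials)
    for t in range(trials):
        scenario = chol @ rng.standard_normal(chol.shape[0])  # one shared draw
        losses[t] = sum(model(scenario) for model in sub_models)
    return np.sort(losses)
```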

FIGURE 21-7 Structure of a Unified Simulation Model


SUMMARY

In this chapter, we completed our discussion of the common approaches used to quantify the risk in credit portfolios. Having established the amount of capital consumed by credit risk, we now use that result in calculating the risk-adjusted return on capital for loans.

NOTES

1. Credit Suisse Financial Products, “CreditRisk+: A Credit Risk Management Framework,” 1997.

2. Kealhofer, Stephen, “Managing Default Risk in Derivative Portfolios,” in Derivative Credit Risk: Advances in Measurement and Management, Renaissance Risk Publications, London, 1995.

3. Gupton, Greg, Christopher Finger, and Mickey Bhatia, “CreditMetrics Technical Document,” Morgan Guaranty Trust Co., 1997.

4. Wilson, Tom, “Portfolio Credit Risk,” Risk, September 1997 (Part I) and October 1997 (Part II).

5. In Excel, this can be done with the formula “=IF(RAND()<P,1,0)”.