CHAPTER 19
Estimating Parameter Values for Single Facilities

INTRODUCTION

In the previous chapter, we discussed the framework and equations to calculate the expected loss and unexpected loss for a single facility. These equations depended critically on three parameters: the probability of default (PD), the loss in the event of default (LIED), and the exposure at default (EAD). As we will find later, these three parameters are also important for calculating regulatory capital under the new guidelines from the Basel Committee.

In this chapter, we discuss the methods that banks use to find values for these three parameters, and we show example calculations of EL and UL. Most of the methods rely on the analysis of historical information; therefore, at the end of the chapter there will be a short discussion on the types of data that should be recorded by banks.

ESTIMATING THE PROBABILITY OF DEFAULT

Traditionally, the likelihood of a customer’s repaying a loan was determined in a conversation between the bank staff and the customer, possibly supplemented by discreet inquiries at the country club to ensure that the customer was trustworthy. This situation still persists in some regional banks and less-developed countries, but leading banks are now maximizing the amount of objectivity used to assess borrowers. The approaches to estimating the credit quality of a borrower can be grouped into four categories: expert credit grading, quantitative scores based on customer data, equity-based credit scoring, and cash-flow simulation. Each will be explored in more detail below.

Expert Credit Grading

There are three steps to estimating the probability of default through expert grading. The first step is to define a series of buckets or grades into which customers of differing credit quality can be assigned. The second step is to assign each customer to one of the grades. The final step is to look at historical data for all the customers in each grade and calculate their average probability of default. The most difficult of these three steps is assigning each customer to a grade.

The highest grade may be defined to contain customers who are “exceptionally strong companies or individuals who are very unlikely to default;” the lower grades may contain customers who “have a significant chance of default.” Credit-rating agencies use around 20 grades, as shown in Table 19-1. The default rating may be further broken down according to the amount that is expected to be recovered from the defaulted company.

TABLE 19-1 Ratings used by Standard & Poor’s, Fitch, and Moody’s

image

Typically, banks have around eight grades. They often think of the highest grade as corresponding to a credit-rating-agency grade of AAA, and the lowest as corresponding to a default. Traditionally, banks assigned a grade that reflected the expected loss, i.e., the combined probability of the counterparty’s defaulting (PD) and the loss in the event of default (LIED). Now banks are moving to rating the counterparty according to the PD, and using a separate rating to reflect the LIED.

Customers are assigned to each grade using expert opinion. The experts are the credit-rating staff of the bank or the rating agency. They base their opinions on all the information that they can gather about the customers. They gather quantitative balance-sheet information, such as the total assets and historical profitability; they also gather qualitative information such as the customers’ planned use of the funds and their business strategies relative to their competitors.

From many years of experience, and by studying previous defaults, the experts have an intuitive sense of the quantitative and qualitative indicators of trouble. For example, they know that a company whose annual sales are less than its assets is likely to default.

For large, single transactions, such as loans to large corporations, banks will rely heavily on the opinion of experts. However, experts are expensive because it takes many years of experience to train them. An alternative is to use an expert system. An expert system is a database of rules and questions that tries to mirror the credit expert’s decision process. Based on the answers, there are a series of decisions and further questions that finally produce a credit grade. Although an expert system is systematic, it is still qualitative. Expert systems have been used successfully for credit analysis, but are not widespread.
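As a purely illustrative sketch, a fragment of such a rule base can be encoded as a chain of questions and decisions; the questions, thresholds, and grades below are invented for the example and are not taken from any actual bank's system.

```python
def expert_system_grade(answers):
    """Walk a small rule base and return a credit grade.

    answers: dict of facts about the borrower, e.g.
    {"years_profitable": 6, "debt_to_equity": 1.2, "management_experienced": True}
    """
    if answers["years_profitable"] < 2:
        return "Grade 7 - refer to credit officer"
    if answers["debt_to_equity"] > 3.0:
        return "Grade 6 - high leverage"
    if not answers["management_experienced"]:
        return "Grade 5 - management risk"
    return "Grade 3 - acceptable"

print(expert_system_grade({"years_profitable": 6,
                           "debt_to_equity": 1.2,
                           "management_experienced": True}))
```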

For large-volume, small exposures, such as retail loans, the bank reduces costs by relying mostly on quantitative data, and only using expert opinion if the results of the quantitative analysis put the customer in the gray area between being accepted and rejected.

Quantitative Scores Based on Customer Data

Quantitative scoring seeks to assign grades based on the measurable characteristics of borrowers that, at times, may include some subjective variables such as the quality of the management team of a company. The quantitative rating models are often called scorecards because they produce a score based on the given information. Table 19-2 shows the types of information typically used in a model to rate corporations, and Table 19-3 shows information used to rate individuals.

To link customer characteristics to later default behavior, data is also required to show when a customer misses any payments, defaults, or becomes bankrupt. At such a time of default, the outstanding balance should be recorded, and then every future recovery from the customer should be recorded with its timing and, if possible, the administrative expense associated with the collection. For traded credits, such as bonds and syndicated loans, a record should be kept of their trading prices. As we see later, this is useful in calculating the loss given default. By linking customer characteristics with default behavior, we can create models that predict default.

TABLE 19-2 Information Used to Rate Corporations

image

TABLE 19-3 Information Used to Rate Retail Customers

image

So far as possible, the variables used in the default model should be chosen so that they stay within a reasonable range. For example, it is better to use equity over assets rather than assets over equity, because as equity drops to zero, the latter ratio goes to infinity.

The variables used should also be relatively independent from one another. An advanced technique to ensure this is to transform the variables into their principal components (obtained from an eigenvalue decomposition of their correlation matrix).
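As a rough sketch of this technique, the transformation can be written in a few lines of Python; the matrix X of candidate ratios and the function name are illustrative.

```python
import numpy as np

def decorrelate(X):
    """Transform candidate variables into principal components.

    X is an (n_customers x n_variables) array of candidate ratios.
    The returned columns are mutually uncorrelated, so each can be
    tested for predictive power without double-counting information.
    """
    X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each ratio
    cov = np.cov(X, rowvar=False)              # covariance of the standardized ratios
    eigvals, eigvecs = np.linalg.eigh(cov)     # principal components
    order = np.argsort(eigvals)[::-1]          # largest variance first
    return X @ eigvecs[:, order]
```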

The number of variables in the model should be limited to those that are strongly predictive. Also, there should be an intuitive explanation as to why each variable in the model is meaningful in predicting default; for example, low profitability would intuitively signal a higher probability of default. If the variables used in the model are not intuitive, the model will probably not be accepted by the credit officers who are the ultimate users.

There are two common approaches: discriminant analysis and logistic regression. These are discussed below.

Discriminant Analysis

Discriminant analysis attempts to classify customers into two groups: those that will default and those that will not. It does this by assigning a score to each customer. The score is the weighted sum of the customer data:

Score = w1X1 + w2X2 + . . . + wNXN

Here, wi is the weight on data type i, and Xi is one piece of customer data. The values for the weights are chosen to maximize the difference between the average score of the customers that later defaulted and the average score of the customers who did not default. The actual optimization process to find the weights is quite complex. The most famous discriminant scorecard is Altman’s Z Score.1 For publicly owned manufacturing firms, the Z Score was found to be as follows:

Z = 1.2X1 + 1.4X2 + 3.3X3 + 0.6X4 + 1.0X5

where:

X1 = Working Capital/Total Assets

X2 = Retained Earnings/Total Assets

X3 = Earnings Before Interest and Taxes/Total Assets

X4 = Market Value of Equity/Total Assets

X5 = Sales/Total Assets

Typical ratios for the bankrupt and nonbankrupt companies in the study were as follows:

image

A company scoring less than 1.81 was “very likely” to go bankrupt later. A company scoring more than 2.99 was “unlikely” to go bankrupt. The scores in between were considered inconclusive.

This approach has been adopted by many banks. Some banks use the equation exactly as it was created by Altman, but most use Altman’s approach on their own customer data to get scoring models that are tailored to the bank.

To obtain the probability of default from the scores, we group companies according to their scores at the beginning of a year, and then calculate the percentage of companies within each group who defaulted by the end of the year.
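A minimal sketch of this scoring and calibration step is shown below; the data layout is illustrative, and the score bands are the cut-offs quoted above.

```python
def altman_z(x1, x2, x3, x4, x5):
    """Altman Z score for a publicly owned manufacturing firm."""
    return 1.2*x1 + 1.4*x2 + 3.3*x3 + 0.6*x4 + 1.0*x5

def default_rate_by_band(companies):
    """Group companies by Z-score band at the start of the year and
    report the fraction in each band that defaulted by year-end.

    companies: list of (ratios, defaulted) pairs, where ratios is
    (x1, x2, x3, x4, x5) and defaulted is True/False.
    """
    bands = {"Z < 1.81": [], "1.81 to 2.99": [], "Z > 2.99": []}
    for ratios, defaulted in companies:
        z = altman_z(*ratios)
        if z < 1.81:
            bands["Z < 1.81"].append(defaulted)
        elif z <= 2.99:
            bands["1.81 to 2.99"].append(defaulted)
        else:
            bands["Z > 2.99"].append(defaulted)
    return {band: (sum(flags) / len(flags) if flags else None)
            for band, flags in bands.items()}
```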

Logistic Regression

Logistic regression is very similar to discriminant analysis except that it goes one step further by relating the score directly to the probability of default. Logistic regression uses a logit function as follows:

PC = 1 / (1 + e^YC)

Here, PC is the customer’s probability of default, and YC is a single number describing the credit quality of the customer. YC is a constant, plus a weighted sum of the observable customer data:

YC = w0 + w1X1 + w2X2 + . . . + wNXN

When YC is negative, the probability of default is close to 100%. When YC is a positive number, the probability drops towards 0. The probability transitions from 1 to 0 with an “S-curve,” as in Figure 19-1.

To create the best model, we want to find the set of weights that produces the best fit between PC and the observed defaults. We would like PC to be close to 100% for a customer that later defaults and close to 0 if the customer does not default. This can be accomplished using maximum likelihood estimation (MLE).

FIGURE 19-1 Illustration of the Logit Curve

image

In MLE, we define the likelihood function LC for the customer, to be equal to PC if the customer did default, and 1 – PC if the customer did not default:

LC = PC if default

LC = 1 – PC if no default

We then create a single number, J, that is the product of the likelihood function for all customers:

J = LCompany1 × LCompany2 × . . . × LCompanyN

J will be maximized if we choose the weights2 in YC such that for every company, whenever there is a default, PC is close to 1, and when there is no default, PC is close to 0. If we can choose the weights such that J equals 1, we have a perfect model that predicts with 100% accuracy whether or not a customer will default. In reality, it is very unlikely that we will achieve a perfect model, and we settle for the set of weights that makes J as close as possible to 1.

The final result is a model of the form:

PC = 1 / (1 + e^YC),  YC = w0 + w1X1 + w2X2 + . . . + wNXN

where the values for all the weights are fixed. Now, given the data (Xi) for any new company, we can estimate its probability of default.
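A minimal sketch of the fitting step is shown below; it maximizes the logarithm of J with a general-purpose optimizer, and the array names and data layout are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def fit_logit(X, defaults):
    """Choose the weights that maximize J, the product of the
    per-customer likelihoods L_C (done in logs for numerical stability).

    X: (n_customers x n_variables) array; defaults: array of 0/1 flags.
    Returns the fitted weights, with the constant term first.
    """
    X1 = np.column_stack([np.ones(len(X)), X])     # prepend the constant term

    def neg_log_J(w):
        y = X1 @ w                                 # Y_C for every customer
        p = 1.0 / (1.0 + np.exp(y))                # P_C = 1 / (1 + e^(Y_C))
        p = np.clip(p, 1e-12, 1 - 1e-12)
        log_L = np.where(defaults == 1, np.log(p), np.log(1 - p))
        return -log_L.sum()                        # minimizing -log(J) maximizes J

    result = minimize(neg_log_J, np.zeros(X1.shape[1]))
    return result.x

# For a new company with ratio vector x, the estimated probability of
# default is 1 / (1 + exp(w[0] + w[1:] @ x)).
```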

Testing Quantitative Scorecards

An important final step in building quantitative models is testing. The models should be tested to see if they work reliably. One way to do this is to use them in practice and see if they are useful in predicting default. Although this is the ultimate test, it can be an expensive way to find mistakes.

The usual testing procedure is to use hold-out samples. Before building the models, the historical customer data is separated randomly into two sets: the model set and the test set. The model set is used to calculate the weights. The final model is then run on the data in the test set to see whether it can predict defaults.

The results of the test can be presented as a power curve. The power curve is constructed by sorting the customers according to their scores, and then constructing a graph with the percentage of all the customers on the x-axis and the percentage of all the defaults on the y-axis. For this graph, x and y are given by the following equations:

x = k/N

y = (I1 + I2 + . . . + Ik)/ND

Here, k is the cumulative number of customers, N is the total number of customers, and ND is the total number of defaulted customers in the sample. Ic is an indicator that equals 1 if company c failed, and equals 0 otherwise. A perfect model is one in which the scores are perfectly correlated with default, and the power curve rises quickly to 100%, as in Figure 19-2. A completely random model will not predict default, giving a 45-degree line, as in Figure 19-3.

Most models will have a performance curve somewhere between Figure 19-2 and Figure 19-3.
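A minimal sketch of the power-curve construction is shown below, assuming each test-set customer has a model score (lower meaning riskier) and a default flag; the names are illustrative.

```python
def power_curve(scores, defaulted):
    """Return the (x, y) points of the power curve.

    scores: model scores for the test-set customers.
    defaulted: matching list of True/False default flags.
    Customers are sorted from riskiest to safest score, so a powerful
    model accumulates the defaults early and the curve rises quickly.
    """
    order = sorted(range(len(scores)), key=lambda c: scores[c])  # riskiest first
    n, n_d = len(scores), sum(defaulted)
    xs, ys, captured = [0.0], [0.0], 0
    for k, c in enumerate(order, start=1):
        captured += 1 if defaulted[c] else 0   # running sum of the indicators I_c
        xs.append(k / n)                       # x = k / N
        ys.append(captured / n_d)              # y = sum of I_c / N_D
    return xs, ys
```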

FIGURE 19-2 Illustration of Power Curve for a Perfect Model

image

FIGURE 19-3 Illustration of Power Curve for a Random Model

image

Equity-Based Credit Scoring

The scoring methods described above relied mostly on examination of the inner workings of the company and its balance sheet. A completely different approach is based on work by Merton3 and has been enhanced by the company KMV.4 Merton observed that holding the debt of a risky company was equivalent to holding the debt of a risk-free company plus being short a put option on the assets of the company. The put option arises because if the value of the assets falls below the value of the debt, the shareholders can put the assets to debt holders, and in return, receive the right not to repay the full amount of the debt. In this analogy, the underlying for the put option is the company assets, and the strike price is the amount of debt.

This observation led Merton to develop a pricing model for risky debt and allowed the calculation of the probability of default. This calculation is illustrated in Figure 19-4.

It is relatively difficult to observe directly the total value of a company’s assets, but it is reasonable to assume that the value of the assets equals the value of the debt plus equity, and the value of the debt is approximately stable. This assumption allows us to say that changes in asset value equal changes in the equity price. This approach is attractive because equity information is readily available for publicly traded companies, and it reflects the market’s collective opinion on the strength of the company.

We can then use the volatility of the equity price to predict the probability that the asset value will fall below the debt value, causing the company to default. If we assume that the equity value (E) has a Normal probability distribution, the probability that the equity value will be less than zero is given by the following:

FIGURE 19-4 Relationship between Asset Volatility and Probability of Default

image

P(E < 0) = ∫_{−∞}^{0} p(E, Ē, σE) dE

Here, p(E, Ē, σE) is the Normal probability-density function with a mean equal to the current equity price (Ē) and a standard deviation equal to the standard deviation of the equity (σE). This integral is equivalent to the integral of the Standard Normal distribution (φ) from negative infinity up to −Ē/σE (for proof, see Appendix A):

P(E < 0) = ∫_{−∞}^{−Ē/σE} φ(z) dz = Φ(−Ē/σE)

Here, φ is the Normal probability-density function with a mean of zero and a standard deviation of one. Φ is the cumulative Normal probability function.

Most spreadsheets and statistical packages have commands to calculate these functions. For example, in Microsoft’s Excel spreadsheet, φ(x) is calculated from the command “=normdist(x,0,1,0)”. Φ(x) is calculated from the command “=normdist(x,0,1,1)”.

The value Ē/σE is called the critical value or the distance to default. It is the number of standard deviations between the current price and zero. With these simplifying assumptions, for any given distance to default we can calculate the probability of default, as in Table 19-4. This table also shows the ratings that correspond to each distance to default.
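A minimal sketch of this simplified calculation, using the scipy equivalents of the spreadsheet functions above; the figures in the closing comment are illustrative.

```python
from scipy.stats import norm

def merton_style_pd(equity_value, equity_volatility):
    """Probability that the equity value falls below zero under the
    simplifying Normal assumption: PD = Phi(-distance to default).

    equity_volatility is the standard deviation of the equity value,
    in the same currency units as equity_value.
    """
    distance_to_default = equity_value / equity_volatility
    return norm.cdf(-distance_to_default)   # Phi(-E/sigma_E)

# norm.pdf(x) and norm.cdf(x) play the roles of phi(x) and Phi(x).
# Illustrative example: equity of $500M with a $200M standard deviation
# gives a distance to default of 2.5 and PD = norm.cdf(-2.5).
```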

Unfortunately, there are a few practical problems that require modifications of the above approach to create accurate estimates for the probability of default. One problem is that equity prices have a distribution that is closer to log-Normal than Normal. A related problem is that in reality, the value of the debt is not stable, and changes in the equity price do not capture all of the changes in the asset value. This is especially the case as a company moves towards default because the equity value has a floor of zero.

TABLE 19-4 Idealized Relationship Between Distance to Default and Probability of Default

image

A practical alternative is to treat the distance to default and the predicted probabilities as just another type of score. As with the other types of credit score, the score is calibrated to the probability of default as follows:

• Find historical data on companies.

• Estimate what their scores would have been based on the equity volatility at the time.

• Group the companies according to their scores.

• Calculate what proportion of the group defaulted over the next year.

The greatest advantage of credit ratings based on equity prices is that they automatically incorporate the latest market data, and therefore, they very quickly respond when a company starts to get into problems.

Cash-Flow Simulation

The methods described above rely on having historical data on company financial ratios or stock prices. For new ventures, there is no historical data. However, if the deal is tightly structured, as in project finance, we can evaluate the risk using a cash-flow simulation.

Project finance is used for large projects, such as the building of a power station, toll roads, or a telecoms infrastructure. In project finance, a stand-alone project company is established by one or more parent companies. This project company raises funds in the form of debt and equity, and builds the infrastructure needed for the project. The debt and equity holders are then paid from the profits of the project. If the profits are insufficient, the debt holders have no recourse to the parent companies who were involved in setting up the project company.

Such deals are carefully planned and tightly structured so it is clear who will be paid in each circumstance. Because the operations of the project company are so well defined, it is possible to build a cash-flow model that predicts what the company’s profits will be under different economic circumstances.

With this cash-flow model, we can apply Monte Carlo simulation to obtain cash-flow statistics, including occasions in which the cumulative cash flows are so negative that default occurs. This approach is discussed in Marrison, C.I., "Risk Measurement for Project Finance Guarantees," The Journal of Project Finance, Vol. 7, No. 2, 2001, pp. 43–53.

As a simplified example, consider a project-finance deal to build an oil refinery. The refinery is built using money from shareholders and debt holders. The shareholders and debt holders are repaid from the operating profit. In year T, the operating profit is the income from selling refined oil, minus the cost of crude oil and the operating cost:

Operating ProfitT = RT × VR,T − CT × VC,T − OT

Here, RT and CT are the prices per barrel of refined and crude oil. VR,T is the volume of refined oil produced, which depends on the plant's efficiency and the volume of crude oil, VC,T. OT represents the operating expenses.

The volumes and costs per barrel will depend on the structure of the project-finance deal. For example, the costs could float according to the market price of oil, or there could be a purchase contract in which another company guarantees to buy the oil at a fixed price. For this example, let us assume that the costs vary with the market prices for refined and crude oil. There is then a risk if the difference between the prices becomes too small for the operating profit to repay the debt holders.

This risk can be quantified by building a random model to simulate oil prices and then calculating the operating profit each year for each random scenario. A simple stochastic model for random oil prices would be as follows:

CT+1 = a + b × CT + c × ε1,T

ST+1 = d + e × ST + f × ε2,T

RT+1 = CT+1 + ST+1

S is the "crack-spread" between the prices of crude and refined oil; ε1 and ε2 are random numbers; and a, b, c, d, e, and f are constants found from regressions on historical oil prices. The initial values for the prices would be today's prices.

The model works as follows. We use the random models to create a scenario for prices. With these prices, we calculate the operating profit and test to see if it is sufficient to pay the outstanding debt. Any remaining profit after paying the debt is given to the shareholders or retained into the next year, according to the project contract. We then step forward in time and create another random set of prices and repeat the calculations. This is repeated for the life of the planned project. Then, we go back to year 0 and repeat the whole process again, typically 1000 times. At the end, we can calculate the probability of default as the number of scenarios in which debt payments were missed, divided by the total number of scenarios tested.
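A minimal sketch of this simulation loop is shown below, using the two-equation price model above; the constants, volumes, debt service, and scenario count are illustrative placeholders rather than calibrated values, and retained profits, taxes, and other contract features are ignored.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_default_probability(c0, s0, debt_service,
                                 years=10, scenarios=1000,
                                 a=2.0, b=0.9, c=3.0, d=0.5, e=0.9, f=1.0,
                                 crude_volume=10e6, efficiency=0.95,
                                 operating_cost=50e6):
    """Estimate the probability of default as the fraction of scenarios
    in which the operating profit is ever too small to pay the debt.

    c0, s0: today's crude price and crack-spread ($/barrel).
    debt_service: required annual payment to the debt holders ($).
    Prices follow C[T+1] = a + b*C[T] + c*eps1 and S[T+1] = d + e*S[T] + f*eps2,
    with the refined price R = C + S, as in the text.
    """
    defaults = 0
    for _ in range(scenarios):
        crude, spread = c0, s0
        for _ in range(years):
            crude = a + b * crude + c * rng.standard_normal()
            spread = d + e * spread + f * rng.standard_normal()
            refined = crude + spread
            refined_volume = efficiency * crude_volume
            profit = refined * refined_volume - crude * crude_volume - operating_cost
            if profit < debt_service:        # debt payment missed this year
                defaults += 1
                break                        # record the default, start the next scenario
    return defaults / scenarios
```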

The model described above was very simple and only had two uncertain parameters: the oil prices. Other random factors in the project could be the efficiency of the plant, the operating expenses, and foreign exchange rates if, as is common, some of the debt is denominated in a different currency. A full model, including all random factors, taxes, and management options, can become very complex.

The simulation can be used to not only give the probability of default, but also the exposure at default, the loss given default, and the net present value of the losses. The structure of the cash-flow model is illustrated in Figure 19-5.

Notice that in Figure 19-5, there is a module called the event model. This is an optional addition that can be added if there are significant discrete events that could affect the project but are not part of the project. For an international project, the event model can include the event of the country’s defaulting. This can be driven by currency devaluations or collapses in GDP generated by the macroeconomic simulation. If a country default occurs, we record the appropriate losses, and that becomes part of the payment probability distribution.

Now let us move from the probability of default to the exposure at default.

FIGURE 19-5 Illustration of the Structure of a Cash-Flow Simulation

image

ESTIMATING THE EXPOSURE AT DEFAULT

The exposure at default (EAD) is the outstanding amount at the time of default. For a loan, the exposure amount is set by the amortization rate. The EAD then only depends on how much the customer has paid back by the time of default. Typically, the exposure for a loan is assumed to be fixed for each year and equal to the average outstanding for the year. This methodology neglects some of the uncertainty, but greatly simplifies the analysis.

For derivatives contracts, the EAD is calculated by simulation as described in the previous chapter.

For a line of credit, the EAD depends on how much the customer draws on the line before defaulting. This customer behavior is estimated based on historical information. The procedure entails collecting information on defaulted companies: noting how much they had drawn on the line by the time they defaulted, and noting how much they had drawn on the line, the limit on the line, and their credit rating one year before default.

Many banks have carried out internal analyses of EAD for lines of credit, but few of the results have been published. The article by Elliot Asarnow and James Marker, “Historical Performance of the US Corporate Loan Market: 1988–1993,” The Journal of Commercial Lending, Vol. 10, No. 2, Spring 1995, pp. 13–32, calculated the EAD as the average use of the line plus the additional use at the time of default:

EAD = L × [ū + (1 − ū) × ed]

Here, L is the dollar amount of the total line of credit, ū is the average percentage of use, and ed is the additional use of the normally unused line at the time of default. Asarnow and Marker found that the EAD depended on the initial credit grade as shown in Table 19-5.

In applying these results to the assessment of a line of credit, a typical assumption would be to replace the average exposure for the grade with the actual current exposure for the line of credit being assessed. As an example, consider a BBB company that has currently drawn 42% of its line. The additional use at default would be expected to be 38% (58% times 65%), making the total EAD for this company equal to 80% (42% plus 38%).
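A minimal sketch of this adjustment; the 65% additional-usage figure is the estimate for BBB credits quoted above, and the other inputs are illustrative.

```python
def expected_ead(line_limit, drawn_fraction, additional_use_at_default):
    """Expected exposure at default for a line of credit.

    drawn_fraction: portion of the line currently drawn (e.g., 0.42).
    additional_use_at_default: expected draw on the unused portion at
    default for the credit grade (e.g., 0.65 for BBB).
    """
    ead_fraction = drawn_fraction + (1 - drawn_fraction) * additional_use_at_default
    return line_limit * ead_fraction

# Example from the text: a $100M line, 42% drawn, BBB grade
# expected_ead(100e6, 0.42, 0.65) gives about $80M, i.e., roughly 80% of the line.
```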

ESTIMATING THE LOSS IN THE EVENT OF DEFAULT

The loss in the event of default (LIED) is the percentage of the exposure amount that the bank loses if the customer defaults. It is the exposure at default, plus all administrative costs associated with the default, minus the net present value of any recoveries, expressed as a percentage of the exposure at default:

TABLE 19-5 Exposure at Default for Lines of Credit

image

LIED = (EAD + Administration Costs − NPV of Recoveries) / EAD

The definition above is most useful for illiquid securities, such as bank loans, where the bank takes many months to recover whatever it can from the defaulted company. An alternative definition for liquid securities, such as bonds, is to say that the LIED is the percentage drop in the market value of the bond after default:

LIED = (Value Before Default − Value After Default) / Value Before Default

Theoretically, for any one security, the LIED calculated with each of the above definitions should be the same because the value of the bond after default should equal the NPV of the recoveries, minus the administration costs.

In this section, we review three empirical studies that estimated LIED as a function of the security’s collateral, structure, and industry. As we review the studies, we also use a simple technique for estimating the standard deviation of LIED.

We require the standard deviation of LIED to estimate accurately the unexpected loss. We often face the situation in which we have an estimate for the average LIED but no information on the standard deviation. In this case, we can estimate the standard deviation from the following equation:

σLIED = A × √(L̄ × (1 − L̄))

Here, L̄ is the average LIED and A is a constant. The term √(L̄ × (1 − L̄)) is the largest possible standard deviation for LIED, given that the average is L̄. This largest standard deviation happens if the LIED after each default equals either 0 or 100%. Note that the standard deviation of LIED is the same as the standard deviation of recoveries. Also, the worst case for LIED is the same as the worst case for recoveries:

σLIED = σRecovery,  √(L̄ × (1 − L̄)) = √(R̄ × (1 − R̄))

The three studies discussed below show both the average LIED and the standard deviation of LIED for different types of collateral, structure, and industry. This allows us to estimate A by comparing the actual standard deviation with the worst case:

A = σLIED / √(L̄ × (1 − L̄))

Let us now look at the empirical studies.

Lee V. Carty, David Hamilton, et al. (Bankrupt Bank Loan Recoveries, Moody’s Investors Services, Special Comment, June 1998), looked at how much had been recovered from hundreds of defaulted bank loans. One of their results was an estimate of the probability-density function for the recovery rates.

The distribution was estimated in two ways: first, using the NPV of recoveries, and then by using the change in the price of traded loans. The results are shown in Figure 19-6 and Figure 19-7. They show that recovery rates are highly variable, and the distribution is strongly skewed. The mean and standard deviation for Figure 19-6 are 87% and 23%, and for Figure 19-7 are 70% and 23%. The difference in the means is most likely because the discount rate used in calculating the NPV of the recoveries was lower than the discount rates used by the market.

Carty, Hamilton, et al. also examined the effect that collateral has on loss rates, and found the results shown in Table 19-6. The results indicate that the highest recovery was 90% for loans secured by cash. Unsecured loans only recovered 79%. Surprisingly, unsecured loans have a higher recovery rate than loans secured by stock in subsidiaries; there could be a fundamental reason for this, or it may be due to the small sample size. (There were only 19 unsecured loans in the study.) Table 19-6 also shows that the factor A was on average equal to 0.66; i.e., if we only knew the average recovery rate, a reasonable estimate for the standard deviation would be 0.66 times the worst case of √(R̄ × (1 − R̄)).
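A minimal sketch of how this shortcut is applied when only an average LIED is available; the 0.66 factor is the average A from Table 19-6, and the example numbers in the comment are illustrative.

```python
import math

def lied_std_estimate(average_lied, a_factor=0.66):
    """Estimate the standard deviation of LIED from its average alone.

    The worst case is sqrt(L * (1 - L)); the factor A scales it down
    to a typical observed level (0.66 in the loan data above).
    """
    return a_factor * math.sqrt(average_lied * (1 - average_lied))

# Example: if the average recovery is 70%, the average LIED is 30%, and
# the estimated standard deviation is 0.66 * sqrt(0.3 * 0.7), about 30%.
```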

FIGURE 19-6 Distribution of Loss-Recovery Rates Estimated from Recovery Experience

image

FIGURE 19-7 Distribution of Recovery Rates Estimated from Prices

image

A study by Standard & Poor’s Karen Van de Castle and David Keismann (“Recovering Your Money: Insights Into Losses from Defaults,” Standard & Poor’s Credit Week, June 16, 1999, pp. 29–34) gives the recovery rates of bank loans and different classes of bonds. The results, in Table 19-7, show that recoveries on bank loans are higher than for bonds. Of the bonds, senior secured bonds have the highest recovery rate (84%) and junior subordinated bonds have the lowest recovery rate (14%). This clearly shows us that the structure has a great effect on the risk.

TABLE 19-6 The Effect of Collateral on Recovery Rate

image

TABLE 19-7 The Effect of Structure on Recovery Rate

image

The industry of the borrower also affects the recovery rate. This may be due to the quality of the collateral that is available in each industry. Table 19-8 shows the recovery rates reported by Edward I. Altman and Vellore M. Kishore (“Almost Everything You Wanted to Know about Recoveries on Defaulted Bonds,” Financial Analysts Journal, November/December 1996, pp. 57–64). The table also shows that the actual standard deviation that is experienced for LIED is on average 0.48 times the theoretically worst case.

TABLE 19-8 The Effect of Industry on Recovery Rate

image

EXAMPLE CALCULATION OF EL & UL FOR A LOAN

To demonstrate the use of these results, let us work through an example. Consider a 1-year line of credit of $100 million to a BBB-rated public utility, with 40% utilization. From the tables above, the probability of default is 0.22%, and the average additional exposure at default for a BBB corporation is expected to be 65% of the unused portion of the line. The average recovery for a utility is 70% with a standard deviation of 19%.

As derived earlier, the expected loss is given by:

EL = P × S̄ × Ē

Here, P is the probability of default, S̄ is the average loss in the event of default (severity), and Ē is the average exposure at default.

Assuming that changes in exposure and severity are uncorrelated, the unexpected loss is given by:

UL = √(Ē² × S̄² × P(1 − P) + Ē² × σS² × P + σE² × S̄² × P + σE² × σS² × P)

σS is the standard deviation of severity (equal to the standard deviation of LIED). σE is the standard deviation of exposure for the line of credit. Here, the only thing we do not have a value for is the standard deviation of exposure. There are no published results for the standard deviation of additional exposure at default for a line of credit. In a practical application, this would be obtained by studying the bank’s historical data. Here, let us assume that the standard deviation of exposure follows the same pattern as LIED, i.e., that the standard deviation of the additional exposure is approximately half the worst case that could occur given the average additional exposure ēd:

σA = 0.5 × √(ēd × (1 − ēd)) = 0.5 × √(0.65 × 0.35) = 0.24

The volatility of the overall exposure is the volatility of the additional exposure, multiplied by the amount of the unused line:

σE = σA (1 − D)LMax

= 0.24 × (1 − 40%) × $100M

= $14M

Here, D is the percentage of the line that has already been drawn down (40% in our example), and LMax is the total line ($100 million in our example). We now have all the elements required to calculate UL:

image

For the BBB company, we have the results shown in Table 19-9.
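A minimal sketch of the arithmetic for this example, using the EL and UL expressions above with the inputs quoted in the text; because of rounding along the way, the results may differ slightly from Table 19-9.

```python
import math

# Inputs from the example: $100M line to a BBB utility, 40% drawn
p = 0.0022                          # probability of default for BBB
line, drawn = 100e6, 0.40
e_d = 0.65                          # additional use of the unused line at default
recovery_mean, recovery_std = 0.70, 0.19

ead_mean = line * (drawn + (1 - drawn) * e_d)      # average exposure at default
severity_mean = 1 - recovery_mean                  # average LIED
severity_std = recovery_std                        # std of LIED = std of recovery
sigma_a = 0.5 * math.sqrt(e_d * (1 - e_d))         # assumed std of additional usage
ead_std = sigma_a * (1 - drawn) * line             # std of exposure

el = p * severity_mean * ead_mean
ul = math.sqrt(ead_mean**2 * severity_mean**2 * p * (1 - p)
               + ead_mean**2 * severity_std**2 * p
               + ead_std**2 * severity_mean**2 * p
               + ead_std**2 * severity_std**2 * p)
print(f"EL = ${el/1e6:.3f}M, UL = ${ul/1e6:.2f}M")
```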

We can repeat this calculation for the same line of credit but to companies of different grades. The results are shown in Table 19-10 and plotted in Figure 19-8 (assuming AAA companies have a one-basis-point default rate). Figure 19-8 shows that EL and UL tend to converge as the credit quality decreases.

TABLE 19-9 Example of Calculation of Expected and Unexpected Loss for a BBB Company

image

INFORMATION REQUIREMENTS

All of the methods described above require large amounts of historical data on company characteristics and later default behavior. The first step in model building is the collection of this data. This data should be easily accessible to model builders and the collection should be established as soon as possible to start accumulating history. Three types of information must be collected: information on the customer and facility at the time the loan was granted (as in Table 19-2 and Table 19-3), information on the results of the models used to approve the facility, and information on later default behavior.

TABLE 19-10 Example of Calculation of Expected and Unexpected Loss for Several Different Company Credit Grades

image

FIGURE 19-8 Example of Calculation of Expected and Unexpected Loss for Several Different Company Credit Grades

image

The information on the models should be collected to back test the models. It should include information such as the credit rating, predicted exposure at default, and predicted loss in the event of default.

The information on later default behavior is compared with the predicted behavior, and is used to build models relating the customer information to the default behavior. Table 19-11 shows the minimum data requirements for recording default behavior.

TABLE 19-11 Requirements for Collecting Historical Data

image

SUMMARY

In this chapter, we discussed the methods banks employ to find values for the probability of default, the loss in the event of default, and the exposure at default. In the next few chapters, we move from measuring the risk of isolated credit facilities to measuring the credit risk of an entire portfolio.

APPENDIX A: TRANSFORMATION OF A NORMAL PROBABILITY DISTRIBUTION INTO A STANDARD NORMAL DISTRIBUTION

The Standard Normal distribution is a Normal distribution with a mean of zero and a standard deviation of one. It is often easier to use than a general Normal distribution. This appendix addresses the problem of transforming from a general Normal distribution to the Standard Normal distribution.

The probability-density function for a Normal distribution with mean Ē and standard deviation σE is as follows:

p(E) = [1 / (σE × √(2π))] × exp(−(E − Ē)² / (2σE²))

The probability of E being less than zero is given by integrating the probability-density function:

P(E < 0) = ∫_{−∞}^{0} p(E) dE

To transform to a Standard distribution, let us define a variable z:

z = (E − Ē) / σE

Given this definition, note the following:

E = Ē + σE z

dE = σE dz

when E = 0, z = −Ē/σE; when E = −∞, z = −∞

We can now write the probability as follows:

P(E < 0) = ∫_{−∞}^{−Ē/σE} p(Ē + σE z) σE dz

By substituting z into the probability-density equation, we can write it in terms of the Standard Normal distribution:

σE × p(Ē + σE z) = [1 / √(2π)] × exp(−z² / 2) = φ(z)

φ(z) is the symbol for the Standard Normal probability-density function. The probability can now be written as follows:

P(E < 0) = ∫_{−∞}^{−Ē/σE} φ(z) dz = Φ(−Ē/σE)

Φ is the symbol for the cumulative Standard Normal probability function.

NOTES

1. Altman E.I., 1968, “Financial Ratios, Discriminant Analysis and the Prediction of Corporate Bankruptcy,” Journal of Finance 23 pp. 189–209.

2. The weights are chosen using an optimization method such as the “solver” function in Microsoft’s Excel.

3. Merton, Robert, “On the Pricing of Corporate Debt: The Risk Structure of Interest Rates,” Journal of Finance, vol. 29, 1974.

4. Kealhoffer, Stephen, “Managing Default Risk in Derivative Portfolios,” in Derivative Credit Risk: Advances in Measurement and Management, Renaissance Risk Publications, London 1995.