The previous chapter outlined the difference between the two primary ALM risks: liquidity and interest rates. This chapter will examine the measurement of interest-rate risk in greater detail.
The purposes of measuring ALM interest-rate risk are to establish the amount of economic capital to be held against such risks, and to show managers how the risks can be reduced. Risks can be reduced by buying or selling interest-rate-sensitive instruments, such as bonds and swaps, or by restructuring the products that the bank offers, e.g., promoting floating-rate mortgages over fixed-rate mortgages or encouraging more fixed deposits.
Although ALM risk is a form of market risk, it cannot be effectively measured using the trading-VaR framework. This VaR framework is inadequate for two reasons. First, the ALM cash flows are complex functions of customer behavior. Second, interest-rate movements over long time horizons are not well modeled by the simple assumptions used for VaR. Therefore, banks use three alternative approaches to measure ALM interest-rate risk, as listed below:
• Gap reports
• Rate-shift scenarios
• Simulation methods similar to Monte Carlo VaR
These approaches are now examined in greater detail.
Gap reports have been in use for many years to monitor interest-rate risk. They can also be used to measure liquidity risk. They characterize the balance sheet as a fixed series of cash flows. The "gap" is the difference between the cash flows from assets and liabilities.
Gap reports are useful because they are relatively easy to create, and they give a very intuitive appreciation of the overall position of the bank. Gap reports can also be used to estimate the duration of the cash flows, and therefore allow us to get an approximate measure of the risk. This measure is only approximate because gap reports do not include information on the way customers exercise their implicit options in different interest environments.
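As a minimal illustration, the gap in each maturity bucket is simply the asset cash flows minus the liability cash flows. The buckets and balances below are hypothetical, not figures from this chapter:

```python
# Hypothetical gap report: cash flows in $M, bucketed by maturity.
# All balances here are illustrative assumptions.
buckets = ["0-3m", "3-12m", "1-5y", "5-10y"]
assets = {"0-3m": 5.0, "3-12m": 10.0, "1-5y": 40.0, "5-10y": 45.0}
liabilities = {"0-3m": 60.0, "3-12m": 20.0, "1-5y": 15.0, "5-10y": 5.0}

# The "gap" in each bucket is the asset cash flow minus the liability cash flow.
gap = {b: assets[b] - liabilities[b] for b in buckets}
print(gap)
```

A large negative gap in the short buckets is the typical bank profile: short-term funding supporting longer-term assets.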
There are three types of gap reports: contractual maturity, repricing frequency, and effective maturity. Each is explained in greater detail below; we then show how a crude estimate of required economic capital can be based on the gap report.
A contractual-maturity gap report indicates when cash flows are contracted to be paid. For liabilities, it is the time when payments would be due from the bank, assuming that customers did not roll over their accounts. For example, the contractual maturity for checking accounts is zero because customers have the right to withdraw their funds immediately. The contractual maturity for a portfolio of three-month certificates of deposit would (on average) be a ladder of equal payments from zero to three months. This ladder occurs because new deposits are continuously expiring and originating.
The contractual maturity for assets may or may not include assumptions about prepayments. In the simplest reports, all payments are assumed to occur on the last day of the contract. Such assumptions are made to ease the requirements for data gathering and analysis, but cause significant distortions in the risk measurement.
A contractual-maturity gap report is illustrated in Figure 13-1. The x-axis shows maturity; the y-axis shows the amount of assets and liabilities maturing in each time bucket. Assets appear above the line and liabilities below the line. In this figure, the bank has almost $30 billion of demand deposits, and assets whose maturities are spread more evenly over the next 10 years.
The contractual-maturity gap report is useful in showing liquidity characteristics because it shows the mismatch between the cash flows into the bank and the cash flows that could be required out of the bank if customers exercised their rights to withdraw funds immediately. However, the contractual-maturity gap report gives little information on interest-rate risk.
For interest-rate measurement, an improvement on the contractual gap report is a gap report based on repricing characteristics. Repricing refers to when and how the interest payments will be reset. The repricing gap makes no assumptions about customer behavior but begins to capture the interest-rate characteristics of the balance sheet.
FIGURE 13-1 Illustration of Contractual-Maturity Gap Report
The x-axis in this maturity gap report reflects maturity. The y-axis shows the amount of assets and liabilities maturing in each time bucket.
The report matches together all assets and liabilities that have the same interest-rate basis, e.g., prime, 3-month LIBOR, 5-year fixed rate, etc. Accounts based on prime are typically included in the 3-month bucket, but may also have a bucket of their own. Compared with the contractual gap report, the main difference is that medium-term, prime-based assets such as personal loans move from the 2-year and 4-year buckets down to the 3-month bucket. Similarly, any floating-rate mortgages will move down from 10 or 30 years to below the 1-year bucket.
Although the repricing report includes the effect of interest-rate changes, it does not include the effects of customer behavior. This additional interest-rate risk is captured by showing the effective maturity. For example, the effective maturity for a mortgage includes the expected prepayments, and may include an adjustment to approximate the risk arising from the response of prepayments to changes in interest rates. The effective maturity for checking accounts typically includes the assumption that the total amount in the checking accounts will have a core component that will not be withdrawn in the near future. Checking accounts behave more like a ladder of bonds than an overnight loan.
The effective maturity for prime-based accounts is typically assumed to be two to four months, or a ladder of bonds from overnight to six months. The overall effect is as follows: assets have a shorter effective maturity than their contractual maturity, and liabilities have a longer effective maturity than their contractual maturity.
Gap reports give an intuitive view of the balance sheet, but they represent the instruments as fixed cash flows, and therefore do not allow any analysis of the nonlinearity of the value of the customers’ options. To capture this nonlinear risk requires approaches that allow cash flows to change as a function of rates.
It is relatively easy to use gap reports to get a crude estimate of the economic capital required for ALM interest-rate risks. The gap reports give us a series of cash flows. We can treat these cash flows as if they are payments from a bond for which we can calculate duration dollars. Recall that duration dollars measure the sensitivity of the value of a bond to changes in interest rates:

ΔValue ≈ -Duration$ × ΔRate
From historical data, we can estimate the standard deviation of rate changes. For economic-capital purposes, we need the annual standard deviation of rates. This can be calculated using annual data, or from quarterly, monthly, or daily data and then using the square-root-of-T approximation to convert to an annual standard deviation:

σRate, Annual = σRate, Period × √T

Here, T is the number of periods per year, e.g., T = 4 for quarterly data.
It is generally best to use quarterly data as the best compromise between using recent data and including long-term trends such as mean reversion.
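For example, converting a quarterly standard deviation to an annual one with the square-root-of-T approximation (the 40-basis-point figure is an assumed value for illustration):

```python
import math

sigma_quarterly = 0.0040          # assumed quarterly std. dev. of rate changes (40 bp)
periods_per_year = 4
sigma_annual = sigma_quarterly * math.sqrt(periods_per_year)
print(sigma_annual)               # 0.008, i.e., 80 bp
```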
We can now calculate the standard deviation of the change in the ALM portfolio over a year:
σValue, Annual = Duration$ × σRate, Annual
Economic capital is several times the standard deviation of change in value. For example, if the bank’s target creditworthiness is a 10-basis-point probability of default, and we assume the value is Normally distributed, the economic capital should cover 3 standard deviations of value:
Economic Capital ≈ Duration$ × 3 × σRate, Annual
For this analysis, we made several significant assumptions. We assumed that value changes linearly with rate changes (i.e., there is no optionality, and bond convexity is negligible). We also assumed that the duration would be constant over the whole year; i.e., the portfolio composition would remain the same. Finally, we assumed that annual rate changes were Normally distributed. Combined, these assumptions could easily create a 20% to 50% error in the estimation of capital. We now discuss methods that do not require so many assumptions.
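The crude capital estimate can be sketched end to end. The gap cash flows, the flat discount rate, and the rate volatility below are all illustrative assumptions:

```python
# Net gap cash flows ($M) and their maturities (years) - all illustrative.
cash_flows = [(-55.0, 0.25), (-10.0, 0.75), (25.0, 3.0), (40.0, 7.5)]
rate = 0.05                       # assumed flat discount rate

# Duration dollars approximated as the sum of maturity times the
# present value of each cash flow.
duration_dollars = sum(t * cf / (1 + rate) ** t for cf, t in cash_flows)

sigma_rate_annual = 0.008         # assumed annual std. dev. of rate changes
economic_capital = abs(duration_dollars) * 3 * sigma_rate_annual
print(round(economic_capital, 2))
```

The factor of 3 corresponds to the 3-standard-deviation coverage for a 10-basis-point default probability discussed above.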
Rate-shift scenarios attempt to capture the nonlinear behavior of customers. A common scenario test is to shift all rates up by 1%. After shifting the rates, the cash flows are changed according to the behavior expected in the new environment; for example, mortgage prepayments may increase, some of the checking and savings accounts may be withdrawn, and the prime rate may increase after a delay. The NPV of this new set of cash flows is then calculated using the new rates. The analysis is used to show the changes in earnings and value expected under different rate scenarios.
As an example, let us consider a bank with $90 million in savings accounts and $100 million in fixed-rate mortgages. Assume that the current interbank rate is 5%, the savings accounts pay 2%, and the mortgages pay 10%. The expected net income over the next year is $8.2 million:
Interest Income = 10% × $100M - 2% × $90M = $8.2M
If interbank rates move up by 1%, assume that savings customers will expect to be paid an extra 25 basis points, and 10% of them will move from savings accounts to money-market accounts paying 5%. Nothing will happen to the mortgages. In this case the expected income falls to $7.7 million:
Interest Income = 10% × $100M - 2.25% × $81M - 5% × $9M ≈ $7.7M
Now assume that interbank rates fall by 1%. Savings customers are expected to be satisfied with 25 basis points less, but 10% of the mortgages are expected to prepay and refinance at 9%. The expected income in these circumstances is $8.3 million:
Interest Income = 10% × $90M + 9% × $10M - 1.75% × $90M = $8.3M
The example above shows the nonlinear change of income. We can extend this to show changes over several years. By discounting these changes, we can get a measure of the change in value.
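The three scenarios can be reproduced directly from the balances and rates in the example:

```python
# Base case: $100M of mortgages at 10%, $90M of savings at 2%.
income_base = 0.10 * 100 - 0.02 * 90

# Rates up 1%: savings pay 2.25%, and 10% of savings ($9M) move to a
# money-market account paying 5%.
income_up = 0.10 * 100 - 0.0225 * 81 - 0.05 * 9

# Rates down 1%: savings pay 1.75%, and 10% of mortgages ($10M) prepay
# and refinance at 9%.
income_down = 0.10 * 90 + 0.09 * 10 - 0.0175 * 90

print(income_base, income_up, income_down)
```

The asymmetry — income falls more when rates rise than it gains when they fall — is exactly the nonlinear customer behavior that a gap report cannot show.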
An approximate estimate of the economic capital can be obtained by assuming that rates shift up or down by three times their annual standard deviation, and then calculating the cash flows and value changes in each scenario. The economic capital is then estimated as the worse loss from the up and down shifts.
The rate-shift scenarios are useful in giving a measure of the changes in value and income caused by implicit options, but they can miss losses caused by complex changes in interest rates such as a shift up at one time followed by a fall. To capture such effects properly we need a simulation engine that assesses value changes in many scenarios.
The purpose of using simulation methods is to test the nonlinear effects with many complex rate scenarios and obtain a probabilistic measure of the economic capital to be held against ALM interest-rate risks.
The primary simulation method is Monte Carlo evaluation. The process is similar to that used for calculating the Monte Carlo VaR for trading portfolios. However, there are two main differences: the simulation extends out over several years rather than just one day, and the models used are not simply pricing models for financial instruments, but also include models for customer behavior and the behavior of administered rates such as prime.
Monte Carlo simulation can use the same behavior models as the rate-shift scenarios. The difference is that in a simulation, the scenarios are complex, time-varying interest-rate paths rather than simple yield-curve shifts.
The Monte Carlo simulation is carried out as shown in Figure 13-2:
• Randomly create a scenario of future interest rates for the next month.
• Use models for each type of product to estimate the changes in the product’s balance for that rate scenario, e.g., prepayments of loans and withdrawals of deposits.
• Model administered rates, such as prime, to get the interest to be paid on administered-rate products.
FIGURE 13-2 The Process for an ALM Monte Carlo Simulation
• Calculate payments of interest using the rate multiplied by the product’s balance. Calculate payments of principal using the changes in balances. This produces the net income for the month.
• The simulation then randomly creates an interest-rate scenario for the next month and steadily moves forward to a horizon typically around three to five years. At this horizon, the value of each product is assumed to be equal to the remaining balance outstanding.
• The net value of the portfolio for the given scenario is the NPV of all the cash flows generated and the remaining balance.
• The process is then repeated for several hundred new scenarios. The result is a probability distribution for the earnings at each time step and a distribution for the portfolio’s value.
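The steps above can be sketched as a single loop for one deposit product. The rate model, the behavior model, and all parameter values here are simplified assumptions, not the production models the chapter describes:

```python
import random

def simulate_values(n_scenarios=500, months=60, seed=0):
    """Return the NPV of a single deposit product in each random scenario."""
    rng = random.Random(seed)
    values = []
    for _ in range(n_scenarios):
        rate, balance = 0.05, 100.0   # starting rate and balance ($M) - assumptions
        cash_flows = []
        for _ in range(months):
            # Step 1: random rate scenario for the next month (mean-reverting).
            rate = max(rate + 0.2 * (0.05 - rate) / 12 + 0.003 * rng.gauss(0, 1), 0.0)
            # Step 2: behavior model - balances shrink slightly when rates rise.
            new_balance = balance * (1 - 0.5 * (rate - 0.05) / 12)
            # Steps 3-4: interest paid plus principal change = net monthly cash flow.
            cash_flows.append(balance * rate / 12 + (balance - new_balance))
            balance = new_balance
        # At the horizon, the product is valued at its remaining balance.
        cash_flows.append(balance)
        # NPV at a flat 5% (an assumption), discounted monthly.
        values.append(sum(cf / (1 + 0.05 / 12) ** (t + 1)
                          for t, cf in enumerate(cash_flows)))
    return values

values = simulate_values()
print(min(values), max(values))
```

The spread between the minimum and maximum NPVs across scenarios is the raw material for the value distribution discussed below.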
An important component in the simulation approach is the stochastic (i.e., random) model used to generate interest-rate paths. Many models have been developed. The most basic model is the one commonly used in Monte Carlo VaR for trading portfolios. This basic interest-rate model assumes that the interest rate in the next period (rt+1) will equal the current rate (rt), plus a random number with a standard deviation of σ:
rt+1 = rt + σ × zt
zt ~ N(0, 1)
This is inadequate for ALM purposes because over long periods, such as a year, the simulated interest rate can become negative. This model also lacks two features observed in historical interest rates: rates are mean reverting and heteroskedastic (their volatility varies over time).
Two classes of more sophisticated models have been developed for interest rates: general-equilibrium (GE) models and arbitrage-free (AF) models. The AF approach observes the current set of bond prices and deduces the stochastic model that would create those prices. This arbitrage-free approach provides prices that are tightly tied to the prices currently observed in the market, and therefore is generally good for trading.
The GE approach assumes a model for the random process that created the observed history of rates, and then estimates a yield curve that can be used to price bonds. The general equilibrium approach gives a better model of volatility. It therefore is better for risk management and is described here. A general model for the GE approach has a mean-reverting term and a factor that reduces the volatility as rates drop:

rt+1 = rt + κ × (θ - rt) × Δt + σ × rt^γ × √Δt × zt
Here, θ is the level to which interest rates tend to revert over time. κ determines the speed of reversion: if κ is close to 1, the rates revert quickly; if it is close to 0, the model becomes like a random walk. σ gives the relative volatility of the disturbances to the rates. γ determines how significantly the volatility will be reduced as rates drop: if γ were 0, the volatility would not change as rates changed, and the model would reduce to the Vasicek model. If γ is fixed equal to 0.5, the GE equation is called the Cox-Ingersoll-Ross (CIR) model. The factor √Δt scales the volatility according to the size of the time step.
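A minimal simulation of the GE equation follows; the parameter values are illustrative, not fitted ones, and setting γ = 0.5 gives the CIR model:

```python
import math
import random

def simulate_rates(r0=0.05, theta=0.05, kappa=0.2, sigma=0.02, gamma=0.5,
                   dt=1 / 12, n_steps=60, seed=1):
    """Step r[t+1] = r[t] + kappa*(theta - r[t])*dt + sigma*r[t]**gamma*sqrt(dt)*z."""
    rng = random.Random(seed)
    rates = [r0]
    for _ in range(n_steps):
        r = rates[-1]
        dr = (kappa * (theta - r) * dt
              + sigma * r ** gamma * math.sqrt(dt) * rng.gauss(0, 1))
        rates.append(max(r + dr, 0.0))   # floor at zero for the sketch
    return rates

path = simulate_rates()   # gamma=0.5: the CIR model
print(round(path[-1], 4))
```

Because the r^γ term shrinks the disturbances as rates fall, the simulated path avoids the negative rates that plague the simple random-walk model.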
Values for the parameters θ, κ, σ, and γ can be determined from historical rate information using maximum likelihood estimation, as explained in Appendix A to this chapter. The equation below parameterizes the three-month U.S. T-bill rate from January 1990 to January 2001. (Δt is in units of years.)
The random driving term Z3m,t is approximately Normal with a mean of zero and standard deviation of one. A common approach to finding the longer-term rates is to say that they are given by a complex equation depending on θ, κ, σ, γ, and the current value of the short rate r3m,t. An alternative approach is suggested here.
The equations below parameterize the 3-month U.S. T-bill rate, the 1-year U.S. note rate, and the 10-year U.S. T-bond rate:
The correlation between the driving terms Z3m,t, Z1y,t, and Z10y,t is as follows:
ρ3m, 1y = 0.061
ρ3m, 10y = 0.044
ρ1y, 10y = 0.741
This can also be expressed as a correlation matrix:

C = [ 1.000  0.061  0.044 ]
    [ 0.061  1.000  0.741 ]
    [ 0.044  0.741  1.000 ]
We can carry out an Eigenvalue decomposition on this matrix, as described in the chapter on Monte Carlo VaR, giving the eigenvectors E and the diagonal matrix of eigenvalues Λ such that C = E × Λ × E'.
We can then use this to create properly correlated driving terms for the three rates:

Z = E × √Λ × N
Here, N is a vector of n1, n2, and n3, each of which is an uncorrelated, random number from a Standard Normal distribution. Z is the vector of z3m,t, z1y,t, and z10y,t, which are random numbers each with a Standard Normal distribution and with the required correlation between them. The result of this is that all three rates are driven by correlated shocks, and therefore tend to move together.
Figure 13-3 shows an example in which a random scenario for the rates has been created using the above approach. In this example, the rates are simulated in monthly time steps out to 5 years. Notice that the 1-year and 10-year rates tend to move together, but the 3-month rate is more independent.
The rate scenarios created above are used to drive models of customer behavior. If we take the example of a checking account, a typical model for the balances would have a constant-growth term, changes in the balance according to the three-month rate, and changes due to uncorrelated random events:
FIGURE 13-3 Illustration of a Random Scenario for Interest Rates
Here, g is the growth term, a is a constant giving the response of the balance to changes in rates, zb,t is from a Standard Normal distribution, and σb is the annual volatility caused by external events (i.e., the residual from the regression).
To give a numerical example, let us assume that the results of a regression between the rates and the balances finds that each factor has a 10% influence. (In this example, the coefficient on the rate has been scaled by the average rate.)
Using this model for the balances, driven by the rate scenario, we produced the scenario result shown in Figure 13-4. By taking the difference in balance from one time step to the next, we get the cash flows to and from the checking account, including the final repayment of all balances assumed at the five-year horizon. The cash flows for the scenario are shown in Figure 13-5. The spike at the end is the assumed repayment of the remaining balances at the five-year horizon.
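A sketch of such a balance model follows, with the 10% growth and rate-response coefficients from the example; the rate path, starting balance, and residual volatility are stand-in assumptions:

```python
import math
import random

g, a, sigma_b = 0.10, 0.10, 0.10   # growth, rate response, residual vol (annual)
dt = 1 / 12
rng = random.Random(2)

balance, rate = 100.0, 0.05        # starting balance ($M) and 3-month rate - assumptions
balances, cash_flows = [balance], []
for _ in range(60):                # monthly steps to a 5-year horizon
    new_rate = max(rate + 0.003 * rng.gauss(0, 1), 0.0)   # stand-in rate scenario
    # Balance change: growth term, response to the rate change, random shock.
    change = g * dt - a * (new_rate - rate) + sigma_b * math.sqrt(dt) * rng.gauss(0, 1)
    new_balance = balance * (1 + change)
    cash_flows.append(balance - new_balance)   # cash flow from the change in balance
    balance, rate = new_balance, new_rate
    balances.append(balance)
cash_flows.append(balance)   # assumed repayment of the remaining balance at the horizon
print(round(balances[-1], 1))
```

The negative sign on the rate term encodes the assumption that balances fall when rates rise, as customers move funds to higher-yielding accounts.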
This example gives an indication of the process used to estimate earnings volatility for a single product. An evaluation for a full ALM portfolio follows the same procedure, but includes balance and payments models for all products. A more detailed model may also include costs of servicing, which become important when valuing products such as checking accounts. In the section below, we explore how to get from this to a measure of economic capital.
FIGURE 13-4 Illustration of a Random Scenario for Balances
FIGURE 13-5 Illustration of a Random Scenario for Cash Flows
In measuring the economic capital, we want to know how much capital should be held at the beginning of the year, so there is a very high probability that the portfolio’s value will still be positive at the end of the year. For example, an A-rated bank should have approximately a 99.9% chance of the value being positive at the end of the year.
Consider a simulation that gives the NPV of the portfolio's value at the end of 10 years. If we held capital equal to the 99.9%-confidence level of that value distribution, we would have a 99.9% chance of surviving 10 years. This is too much capital for a single-A bank that only needs a 99.9% chance of surviving 1 year. One possibility is to say that if the bank plans to remain rated single A, it needs a 99% probability of surviving 10 years. This would lead us to use the 99%-confidence level of the 10-year NPV distribution as the economic capital.
This is a slightly different definition of economic capital, but should give results that are close to the capital that would be calculated from the 99.9% confidence level over 1 year. Direct calculation of the 1-year capital involves a complex simulation in which we first simulate cash flows up to the 1-year point, and then do a second simulation from the 1-year point to assess the value of the remaining balances. Such a complex nested simulation takes a long time to run, and the extra accuracy rarely justifies the extra effort.
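Reading the capital off the simulated distribution then reduces to taking a percentile. The NPVs below are synthetic stand-ins for a full ALM simulation's output, and the capital here is sized as the gap between the expected value and the 1st-percentile value, one common variant of the definition above:

```python
import random

rng = random.Random(3)
# Stand-in for the NPVs produced by the 10-year ALM simulation ($M).
npvs = sorted(rng.gauss(100.0, 10.0) for _ in range(10_000))

confidence = 0.99                        # 99% chance of surviving the 10 years
worst_case = npvs[int((1 - confidence) * len(npvs))]
expected = sum(npvs) / len(npvs)

# Capital covers the shortfall of the worst case below the expected value.
economic_capital = expected - worst_case
print(round(economic_capital, 1))
```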
In this chapter, we examined the measurement of interest-rate risk in great detail. Knowing how to measure this risk enables us to establish the amount of economic capital to be held against such risks, and shows managers how the risk can be reduced. Next, we discuss funding liquidity risk for ALM, along with how it can be measured and managed.
In building the interest-rate models, we applied maximum likelihood estimation to find the values for the parameters. In maximum likelihood estimation, we seek the model that maximizes the combined probability of the observed events. In the case of interest rates, we seek the model that best describes the observed changes in the time series of historical interest rates.
The process is as follows:
1. We pick a model for the process, e.g., the CIR model.
2. We guess values for the parameters in the model, i.e., θ, κ, σ, and γ.
3. From historical data, we calculate the expected value of the variable in the next time step. For the CIR model, the expected value is given by the following:
E[rt+1] = rt + κ(θ - rt) Δt
4. We also calculate the standard deviation of the error that is expected due to the random driving term. For the CIR model:

Standard Deviation[rt+1] = σ × √(rt × Δt)
5. We calculate the probability density of the observed result happening given the model that we have chosen. For the CIR model, the probability density is the value of the Normal probability-density function for the observed rate, given the expected value and standard deviation:
p[rt+1] = φ[rt+1, E[rt+1], Standard Deviation[rt+1]]
6. We repeat this for all N time steps of historical data, then multiply all the probabilities together to get a single number, J, that measures the goodness of the model:
J[θ, κ, σ, γ] = p(r1) × p(r2) × p(r3) × . . . × p(rN)
7. We then slightly change the parameters in the model and repeat the process until we find the parameter values that maximize J.
In practice, there is a slight modification to the above algorithm because the product of the probabilities for hundreds of data points is a number that is too small for computers to handle. However, maximizing the product of a set of numbers is the same as maximizing the sum of the log of the numbers, so in step 6, the equation for J should be replaced by the following:
J[θ, κ, σ, γ] = ln[p(r1)] + ln[p(r2)] + ln[p(r3)] + . . . + ln[p(rN)]
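Steps 1 through 7 can be sketched end to end. Here we generate a synthetic CIR rate history from known parameters, then recover κ and θ by maximizing the log-likelihood J over a parameter grid; σ is held fixed for brevity, and a real implementation would fit all four parameters with a numerical optimizer:

```python
import math
import random

dt = 1 / 12
true_kappa, true_theta, sigma = 0.5, 0.05, 0.02

# For the sketch: generate a synthetic rate history from a known CIR model.
rng = random.Random(4)
rates = [0.03]
for _ in range(600):
    r = rates[-1]
    dr = true_kappa * (true_theta - r) * dt + sigma * math.sqrt(r * dt) * rng.gauss(0, 1)
    rates.append(max(r + dr, 1e-6))

def log_likelihood(kappa, theta):
    """Steps 3-6: sum of log Normal densities of the observed rate changes."""
    J = 0.0
    for r, r_next in zip(rates, rates[1:]):
        mean = r + kappa * (theta - r) * dt        # step 3: expected value
        sd = sigma * math.sqrt(r * dt)             # step 4: std. dev. of the error
        # Step 5 (in logs): log Normal density of the observed next rate.
        J += -math.log(sd * math.sqrt(2 * math.pi)) - (r_next - mean) ** 2 / (2 * sd ** 2)
    return J

# Step 7: search the parameter grid for the maximum of J.
grid = [(k / 10, t / 1000) for k in range(1, 21) for t in range(20, 101, 5)]
kappa_hat, theta_hat = max(grid, key=lambda p: log_likelihood(*p))
print(kappa_hat, theta_hat)
```

As the chapter notes, working with the sum of log densities rather than the product of densities keeps the numbers in a range computers can handle. The long-run level θ is recovered accurately; the reversion speed κ is notoriously harder to pin down from a finite history.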