CHAPTER 10
Overcoming VaR’s Limitations

INTRODUCTION

While VaR is the single best way to measure risk, it does have several limitations. The most pressing limitations are the following:

• It assumes that the variances of, and correlations between, the market-risk factors are stable.

• It does not give a good description of extreme losses beyond the 99% level.

• It does not account for the additional danger of holding instruments that are illiquid.

One approach to addressing VaR’s limitations is to measure the risk both with VaR and with other, completely different methods, such as stress and scenario testing, as discussed in Chapter 5. However, in this chapter, we discuss approaches that can be used to augment the standard VaR methods. These methods allow the risk manager to improve the measurement of VaR, thereby improving the ability to set capital, measure performance, and identify excessive risks.

This chapter has three sections: an approach to letting variances change over time, several approaches for assessing extreme events, and finally, several approaches to quantify liquidity risk.

ALLOWING VARIANCE TO CHANGE OVER TIME

The usual approach to constructing the covariance matrix is to calculate the variance of the risk factors over the last few months and assume that tomorrow's changes in the risk factors will come from a distribution that has the same variance as experienced historically. We could write this as an equation by saying that the expected variance for tomorrow, E[σ²_{T+1}], is the average of the squared changes in the factor over the last N days:

E[σ²_{T+1}] = (1/N) Σ_{t=T−N+1}^{T} x_t²

Here, x_t is the change in the factor on day t. We assume that the mean change is relatively small and therefore neglect it in the equation.

Although this approach is simple and robust, it is well known by practitioners that the volatility of the market changes over time: sometimes the market is relatively calm, then a crisis will happen, and the volatility will jump up.

GARCH is an approach that allows the estimate of the variance, E[σ²_{T+1}], to vary quickly with recent market moves. GARCH stands for Generalized Autoregressive Conditional Heteroskedasticity, which basically means that the variance on one day is a function of the variance on the previous day. GARCH assumes that the variance is equal to a constant, plus a portion of the previous day's squared change in the risk factor, plus a portion of the previous day's estimated variance:

E[σ²_{T+1}] = ω + α x_T² + β σ²_T

ω, α, and β are constants, and x_T is the latest change in the risk factor. To use this equation, we need to find values for ω, α, and β. The best values for these parameters are those that produce estimates of the future variance that are as close as possible to the variance that is later experienced. In practice, it is not possible to observe variance directly; all we can observe is the market change. Therefore, to find values for ω, α, and β, we estimate the "true" variance on a given day by looking at the market changes for a few days before and after that day. The equation above was for the variance of a single factor. We can also use GARCH to estimate the covariance between two factors, x and y:

E[σ_{xy,T+1}] = ω + α x_T y_T + β σ_{xy,T}

In general, GARCH is difficult to use for more than a few risk factors because as the number of risk factors increases, it becomes difficult to find reliable values for the parameters ω, α, and β that need to be estimated for each variance and covariance.

One simple and practical version of GARCH is achieved by setting the parameters as follows:

ω = 0, α = (1 − λ), β = λ, giving E[σ²_{T+1}] = (1 − λ) x_T² + λ σ²_T

In this case, GARCH reduces to the exponentially weighted moving average (EWMA), as discussed in Appendix C to Chapter 6.
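The GARCH update and its EWMA special case are simple enough to sketch directly. The following is a minimal illustration, not a fitted model: the decay factor λ = 0.94 is the value popularized by RiskMetrics, and the GARCH parameter values in the comments are purely illustrative.

```python
def garch_update(prev_var, x, omega, alpha, beta):
    """One-step GARCH(1,1) forecast:
    sigma^2(T+1) = omega + alpha * x_T^2 + beta * sigma^2(T)."""
    return omega + alpha * x ** 2 + beta * prev_var


def ewma_variance(changes, lam=0.94):
    """EWMA variance: the GARCH special case omega = 0, alpha = 1 - lam,
    beta = lam. Seeded with the squared first observation for simplicity."""
    var = changes[0] ** 2
    for x in changes[1:]:
        var = garch_update(var, x, omega=0.0, alpha=1.0 - lam, beta=lam)
    return var
```

Because the EWMA needs only yesterday's variance estimate and the latest market change, it is cheap to run daily across many risk factors.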

APPROACHES FOR ASSESSING EXTREME EVENTS

The usual implementations of Parametric and Monte Carlo VaR assume that the risk factors have a Normal probability distribution. As discussed in the statistics chapter, most markets, especially poorly developed markets, exhibit many more extreme movements than would be predicted by a Normal distribution with the same standard deviation.

The term used to describe probability distributions that have a kurtosis greater than that of the Normal distribution is leptokurtosis. Leptokurtosis can be considered to be a measure of the fatness of the tails of the distribution. Measuring the effects of leptokurtosis is important because risk factors with a high kurtosis pose greater risks than factors with the same variance but a lower kurtosis. We will describe four techniques that are used to assess the additional risk caused by leptokurtosis:

• Jump Diffusion

• Historical Simulation

• Adjustments to Monte Carlo Simulation

• Extreme Value Theory

Jump Diffusion

The jump-diffusion model assumes that tomorrow’s random change in the risk factor can come from one of two Normal distributions. One distribution describes the typical market movements; the other describes crisis movements. In simplified form, there is a probability of P that the sample will come from the typical distribution and a small probability of (1 – P) that it will come from the crisis distribution:

x ~ N(μ_t, σ_t) with probability P

x ~ N(μ_c, σ_c) with probability (1 − P)

Here, μ is the mean daily return, and σ is the daily standard deviation. μ_t and σ_t describe the typical distribution, and μ_c and σ_c describe the crisis distribution. The result is a combined distribution that has fatter tails than a pure Normal distribution.

The main problem with this approach is that it is difficult to determine the parameter values. In the model above, five parameters must be determined (μ_t, σ_t, μ_c, σ_c, and P).
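To see the fat tails emerge, one can sample from the two-Normal mixture and measure the kurtosis of the draws. A minimal sketch, with hypothetical parameters: 98% of days come from a calm 1%-volatility regime and 2% from a 5%-volatility crisis regime.

```python
import random


def jump_diffusion_sample(p, mu_t, sigma_t, mu_c, sigma_c):
    """Draw one daily change: typical regime with probability p,
    crisis regime with probability 1 - p."""
    if random.random() < p:
        return random.gauss(mu_t, sigma_t)
    return random.gauss(mu_c, sigma_c)


random.seed(0)
draws = [jump_diffusion_sample(0.98, 0.0, 0.01, 0.0, 0.05) for _ in range(100_000)]
mean = sum(draws) / len(draws)
var = sum((x - mean) ** 2 for x in draws) / len(draws)
kurtosis = sum((x - mean) ** 4 for x in draws) / len(draws) / var ** 2
# A pure Normal distribution has kurtosis 3; this mixture's is far higher.
```

For these parameters the theoretical kurtosis of the mixture is about 18, versus 3 for a Normal distribution with the same variance, which is exactly the leptokurtosis the text describes.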

Historical Simulation

Historical simulation does not require an assumption for the form of the probability distribution. It simply takes the price movements that have occurred and uses them to revalue the portfolio directly. This has the advantage of including the full richness of the complex interactions between risk factors. However, historical simulation is strongly backward looking because the changes in the risk factors are determined by the last crisis, not the next crisis.
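A bare-bones version of the calculation, assuming we already hold the portfolio's simulated daily profit-and-loss under each historical day's factor moves (the percentile indexing convention here is one simple choice among several):

```python
def historical_var(pnl_history, confidence=0.99):
    """Historical-simulation VaR: sort the simulated P&L outcomes and read
    off the loss at the chosen percentile (losses entered as negative P&L)."""
    losses = sorted(-pnl for pnl in pnl_history)   # positive numbers = losses
    index = min(round(confidence * len(losses)), len(losses) - 1)
    return losses[index]
```

With only 100 historical scenarios this returns the single worst loss at the 99% level; a longer history gives a less noisy estimate, at the cost of looking further into a possibly irrelevant past.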

Adjustments to Monte Carlo Simulation

Usually, Monte Carlo simulation uses Normal distributions. However, it is also possible to carry out Monte Carlo evaluation using leptokurtic distributions, such as jump diffusion or the Student's t distribution. It is relatively easy to create such distributions for single risk factors, but more difficult to ensure that the correlations between the factors are correct. This problem is reduced a little if eigenvalue decomposition is used, because eigenvalue decomposition specifically isolates the few fundamental risk factors that drive most of the changes. This allows us to concentrate on ensuring that those principal risk factors have appropriate leptokurtic distributions.
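For example, a fat-tailed shock for one principal risk factor can be generated from a Student's t distribution, rescaled so its standard deviation matches the factor's estimated volatility. A sketch under stated assumptions: the construction t = Z/√(χ²_ν/ν) is the standard one, and the rescaling by √(ν/(ν − 2)) assumes ν > 2 so the variance exists.

```python
import math
import random


def student_t_shock(df, target_std):
    """Draw a fat-tailed shock: a Student's t variate with df degrees of
    freedom, rescaled so its standard deviation equals target_std.
    Uses t = Z / sqrt(chi2_df / df); Var(t) = df / (df - 2) for df > 2."""
    z = random.gauss(0.0, 1.0)
    chi2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(df))
    t = z / math.sqrt(chi2 / df)
    return target_std * t / math.sqrt(df / (df - 2.0))
```

Correlating such shocks across many factors is the hard part the text mentions; restricting the fat-tailed treatment to the few leading eigenvectors keeps that problem manageable.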

Extreme Value Theory

Extreme Value Theory (EVT) takes a different approach to calculating VaR and is an alternative to the three common methods. EVT concentrates on estimating the shape of only the tail of a probability distribution. Given this shape, we can find estimates for losses associated with very small probabilities, such as the 99.9% VaR. A typical shape used is the Generalized Pareto Distribution that has the following form:1

Probability(Result > x) = (ax + b)^(−c)

Here, a, b, and c are variables that are chosen so the function fits the data in the tail. The main problem with the approach is that it is only easily applicable to single risk factors. It is also, by definition, difficult to parameterize because there are few observations of extreme events.
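Once a, b, and c have been fitted, the tail formula can be evaluated or inverted directly; inverting it gives the loss level for a chosen tail probability such as 0.1%. A minimal sketch (the parameter values in the comment are purely illustrative, not a real fit):

```python
def gpd_tail_prob(x, a, b, c):
    """Tail probability: Probability(Result > x) = (a*x + b) ** (-c)."""
    return (a * x + b) ** (-c)


def gpd_quantile(p, a, b, c):
    """Invert the tail formula: the loss level exceeded with probability p."""
    return (p ** (-1.0 / c) - b) / a


# Illustrative parameters: with a = 2, b = 1, c = 3, the loss exceeded with
# probability 0.001 (the 99.9% level) is (0.001 ** (-1/3) - 1) / 2 = 4.5.
```

Evaluating losses at very small probabilities like this is exactly where EVT is useful, and exactly where the scarcity of extreme observations makes the fitted a, b, and c least certain.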

LIQUIDITY RISK

The Importance of Measuring Liquidity Risk

Liquidity risks can increase a bank’s losses; therefore, they should be included in the calculation of VaR and economic capital.

There are two kinds of liquidity risk: liquidity risk in trading, and liquidity risk in funding (also known as funding risk). The funding risk is the possibility that the bank will run out of liquid cash to pay its debts. This funding risk is usually considered in the framework for asset liability management and will be discussed in later chapters. This chapter discusses liquidity risk in trading.

The liquidity risk in trading is the risk that a trader will be unable to quickly sell a security at a fair price. This could happen if few people normally trade the given security, e.g., if it was the equity for a small company. It could also happen if the general market is in crisis and few people are interested in buying new securities. This happened in the fall of 1998 after the Russian default, when even U.S. government bonds, normally the most liquid of securities, suddenly became illiquid.

Such a situation can cause losses in several ways. One way would be if the trader urgently needed to sell the position to get cash and repay a debt that was due. In this case, to find interested buyers, the trader would be forced to offer the security at a deep discount to its fair price. Losses due to illiquidity could also occur if the trader planned to sell any position that started making a loss, but then found that it took several days to sell the security at its fair price. This would expose the trader to the possibility of additional losses over that period.

We can view these two possible loss mechanisms as two extreme manifestations of the same problem. In one extreme, the trader sells immediately at an unusually low price. In the other extreme, the trader slowly sells at the current fair price, but risks suffering additional losses.

It is important to recognize the liquidity risk because it can add significantly to losses. Furthermore, if liquidity risk is not included in the risk measurement, it gives incentives to traders to buy illiquid securities. The incentives arise because in the market, illiquid securities offer a higher expected return to compensate for their higher liquidity risk. If this additional risk is not measured, traders will have an incentive to invest in these high-yielding assets, knowing that the bank will not charge them for the additional risk. Our problem now is to quantify that additional risk.

Quantifying Liquidity

The first step in quantifying the liquidity risk is to quantify the liquidity of any given instrument. One extreme approach to testing the liquidity of the market is for the risk manager to order a trader to close out a position and thereby directly observe how long it takes to clear the position, or how much of a discount must be made to close the position immediately. However, such an approach causes significant disruption to the trading operation and does not test the market in crisis conditions.

Another approach is to estimate the number of days required to close out the position. The close-out time is the time required to bring the position to a state where the bank can make no further loss from the position. It is the time taken to either sell or hedge the instrument. The number of days can be based on the size of the position held by the trader compared with the daily traded volume:

Days to Close Out = Position Size / (F × Daily Volume)

F is a factor that gives the percentage of the daily volume that can be sold into the market without significantly shifting the price. If F were set equal to 10%, it would imply that 10% of the daily volume can be sold each day without significantly shifting the market. The Daily Volume can be the average daily volume or the volume in a crisis period. The volume in a crisis period could be approximated as the average volume minus a number of standard deviations. This approach is quite crude but relatively objective, and it is easy to gather the required data.
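The close-out estimate is a one-line calculation. A sketch, with two assumptions of mine: the result is rounded up to whole days, and F defaults to the 10% used in the example above.

```python
import math


def days_to_close(position_size, daily_volume, f=0.10):
    """Days needed to close a position if only a fraction f of the daily
    traded volume can be sold each day without moving the price."""
    return math.ceil(position_size / (f * daily_volume))
```

For example, a $1 million position in a security that trades $2 million a day, with F = 10%, takes five days to close.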

Another alternative for quantifying the liquidity risk is to measure the average bid-ask spread relative to the mid price. The bid is the price that investors are willing to bid (or pay) to own the security. The ask is the price at which owners of the security are asking to sell. The mid is halfway between the bid and the ask. If the bid and ask are close to the mid, it implies that many market participants agree on the fair value of the security and are willing to trade close to that price. If the bid-ask spread is wide, it means that few investors are willing to buy the security at the price the sellers think is fair. If a trader wanted to sell the security immediately, the trader would have to lower the ask price to equal the bid rather than wait for some investor to agree that the high ask price was fair.

Both the close-out time and the bid-ask spread can be used to quantify liquidity risk. We will explore how in the following sections.

Using Close-Out Time to Quantify Liquidity Risk

The most common approach to assessing the liquidity risk is to use the “square-root-of-T” adjustment for VaR. This is also known as “close-out-adjusted VaR.” The result of the approach is that the VaR for a position that takes T days to close is assumed to equal the VaR for an equivalent liquid position that could be closed out in one day times the square root of T:

VaR_T = VaR_1 × √T

VaR_T is the 99th percentile cumulative loss that could be experienced over T days.

The approach assumes that the position will be held for T days, and then on the last day, it will be sold completely. It uses the reasoning that the losses over T days will be the sum of losses over the individual days:

L_T = l_1 + l_2 + . . . + l_T

where L_T is the cumulative loss over T days and l_t is the loss on day t. We can assume with reasonable accuracy that losses are independent and identically distributed (IID), meaning losses are not correlated from day to day, and the standard deviation of losses is the same each day. The variance of the loss over T days is therefore the sum of the variances of the losses on the individual days:

σ²(L_T) = σ²(l_1) + σ²(l_2) + . . . + σ²(l_T)

If we assume that the variance of the losses on each day is the same, then the sum equals T times the variance on the first day:

σ²(L_T) = T × σ²(l_1)

The 99th percentile cumulative loss over T days is therefore:

VaR_T = 2.33 × σ(l_1) × √T = √T × VaR_1

A slightly refined approach is to assume that the position is closed out linearly over T days, so that on day t only the fraction (T − t + 1)/T of the position remains. The variance of each day's loss then shrinks with the square of the remaining fraction:

σ²(L_T) = σ²(l_1) × Σ_{t=1}^{T} ((T − t + 1)/T)² = σ²(l_1) × (T + 1)(2T + 1)/(6T)

To illustrate the difference between this and the simple square-root-of-T adjustment, consider a closeout period of 10 days. The square-root-of-T method gives:

VaR_10 = √10 × VaR_1 ≈ 3.16 × VaR_1

Whereas the linear close-out gives a measure of VaR that is significantly smaller:

VaR_10 = √((11 × 21)/(6 × 10)) × VaR_1 = √3.85 × VaR_1 ≈ 1.96 × VaR_1
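The two adjustments are easy to compare side by side. A minimal sketch, assuming IID daily losses as in the derivation above; the linear version scales each day's loss variance by the square of the fraction of the position still held.

```python
import math


def sqrt_t_var(var_1day, t):
    """Square-root-of-T close-out adjustment: hold fully for T days,
    then sell on the last day."""
    return var_1day * math.sqrt(t)


def linear_closeout_var(var_1day, t):
    """Position unwound in equal parts over T days: on day k the
    fraction k/T remains, so that day's loss variance scales by (k/T)**2."""
    factor = math.sqrt(sum((k / t) ** 2 for k in range(1, t + 1)))
    return var_1day * factor
```

For T = 10 the multipliers are about 3.16 and 1.96, so the linear close-out assumption gives a VaR roughly 40% lower than the simple square-root-of-T rule.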

The problem with both of these closeout-adjusted VaR approaches is that they do not give an apples-to-apples comparison with other risks. They compare the amount that could be lost over one day for a liquid position with the amount that could be lost over T days for the illiquid position. However, it is not the case that a trader holding a position with a 10-day closeout period is taking 3.16 times as much risk as a trader holding an equivalent liquid position. This is because in a 10-day period, the trader with the liquid position gets to gamble, and possibly lose, 10 times, whereas the trader with the illiquid position simply has one long gamble. To properly measure the significance of liquidity, we need to include the trader's intended behavior.

The effects of trading and management styles can be assessed using simulations to produce a true apples-to-apples comparison of the additional risk caused by holding illiquid securities. An approach to this is discussed below.

Using Simulation-Based Techniques to Quantify Liquidity Risk

Consider two positions that are identical, except that one can be sold within a day, and the other cannot be sold for a year. If the trader in charge of each position decides to hold it for a year, then the loss would be the same for each. However, if the trader in charge of the liquid position decides to sell on the first day, the potential loss would be significantly less. This illustrates that the impact of liquidity is a function of both the security's liquidity and the management style. Increased liquidity gives the trader greater options to buy and sell the instrument, but if those options are never exercised, they are worthless.

With this line of thinking, we can suggest an approach that simulates both market movements and the trader's response. We can use this to quantify the amount that could be lost by the end of the year, and therefore the risk. We can then compare the loss suffered on a liquid position with the loss that would be suffered if the position were illiquid. This gives a measure of the relative risk.

This approach is taken in Marrison, C.I., Schuermann, T.D., and Stroughair, J., "Changing Regulatory Capital to Include Liquidity and Management Intervention," The Journal of Risk Finance, August 2000. In this paper, a simulation model is used that includes variability in market values and the bank's response to losses. The bank's response is to reduce its holdings of securities whenever it suffers large losses, and to increase its holdings if it has gains; specifically, the bank tries to keep its risks in line with its remaining capital. The speed with which the bank can adjust its holdings is dictated by the closeout time of the security. The paper shows that a security with a holding period of 250 days requires about twice as much capital as a completely liquid position.

Using the Bid-Ask Spread to Assess Liquidity Risk

The closeout adjustments discussed above assumed that the trader was taking one extreme course of action by gradually closing out the position at the mid price and refusing to give any discount. The other extreme is to assume that the trader will sell out immediately by giving a discount that brings the price down to the bid price. This discount is an additional loss.

This approach was explained in Bangia, A., Diebold, F.X., Schuermann, T., and Stroughair, J., "Liquidity on the Outside," Risk, vol. 12, pp. 68–73, June 1999.

Bid-ask spreads change over time. Bangia et al. use the assumption that the additional drop in the price is half the usual bid-ask spread plus the 99th percentile movement in the spread as shown below:

Additional Drop = ½ × (S̄ + 2.33 × σ_S)

Here, S̄ is the average spread, and σ_S is the standard deviation of the spread. This additional drop is then added to the one-day liquid VaR:

Liquidity Adjusted VaR = VaR + Additional Drop

This approach is most directly applicable for stand-alone instruments or risk factors, but it is also possible to consider it as an adjustment to the variances used in the covariance matrix.
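The spread adjustment is straightforward to sketch, under two assumptions made explicit here: spreads are quoted as fractions of the mid price, and 2.33 standard deviations approximates the 99th percentile spread move (as under Normality).

```python
def liquidity_adjusted_var(liquid_var, mid_price, avg_spread, spread_std, z=2.33):
    """Bangia-et-al-style adjustment: add half of (average spread plus a
    99th-percentile spread move) as an extra drop in the sale price.
    avg_spread and spread_std are fractions of the mid price."""
    additional_drop = 0.5 * mid_price * (avg_spread + z * spread_std)
    return liquid_var + additional_drop
```

For a $100 one-day VaR on a $50 security with a 1% average spread and 0.4% spread volatility, the adjustment adds about $0.48 to the VaR.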

SUMMARY

As we learned in this chapter, VaR has various limitations; however, there are approaches that can be employed to mitigate these limitations and improve the measurement of VaR. In the next chapter, we will detail how market-risk measurement is used in market-risk management.

NOTE

1 Bensalah, Y., "Steps in Applying Extreme Value Theory to Finance," Bank of Canada Working Paper 2000-20, ISSN 1192-5434, November 2000.