Portfolio construction would seem to be a simple task given the availability of portfolio optimization software, which requires the input of only the returns for the holdings in the portfolio and provides the optimal allocations based on this input. The software will provide an efficient frontier curve, which consists of the portfolios (that is, allocation mixes) that result in the highest return for any target level of volatility. (Two efficient frontier curves—one including only stocks and bonds and the other adding alternatives to the mix—are illustrated in Figure 21.1.) If the investor decides an 8 percent annualized volatility is the desired target risk level for the portfolio, the portfolio corresponding to an 8 percent volatility on the efficient frontier curve will be the mix of assets that provides the highest return for an 8 percent volatility. So it would seem that all the investor has to do is choose the list of investments and the desired portfolio volatility level and, presto, the software would provide the mathematically derived optimal percentage allocation for each holding. Not much decision making or heavy lifting required here.
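The mechanics the software performs can be illustrated with a minimal sketch. The numbers below (expected returns, volatilities, and correlation for a two-asset stock/bond mix) are purely hypothetical, and the grid search stands in for the optimizer's mathematics; the point is only to show how "highest return at a target volatility" is computed.

```python
import numpy as np

# Hypothetical annualized inputs -- illustrative only, not real data.
mean = np.array([0.07, 0.04])            # expected returns: stocks, bonds
cov = np.array([[0.0225, 0.0010],        # covariance matrix implied by
                [0.0010, 0.0049]])       # 15% and 7% vols, low correlation

best = None
for w in np.linspace(0.0, 1.0, 101):     # stock weight from 0% to 100%
    weights = np.array([w, 1.0 - w])
    ret = weights @ mean                               # portfolio return
    vol = np.sqrt(weights @ cov @ weights)             # portfolio volatility
    # Keep the highest-return mix that stays within the 8% volatility target.
    if vol <= 0.08 and (best is None or ret > best[1]):
        best = (w, ret, vol)

w, ret, vol = best
print(f"stocks {w:.0%}, return {ret:.2%}, vol {vol:.2%}")
```

Note that the output is driven entirely by the input assumptions, which is precisely the weakness the rest of this section examines.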
Source: EDHEC-Risk Institute. Reproduced with kind permission.
Although portfolio optimization provides an easy and seemingly scientific approach to portfolio allocation, it is based on two critically flawed implicit assumptions:
One very common problem is that the length of the available track records for funds in the portfolio is too short to be representative of varied market conditions. This problem is compounded by the practical consideration that a portfolio optimization analysis is constrained to the shortest track record included. Insofar as a portfolio includes funds with track records as short as a few years, the choice is between restricting the analysis to a short past period that includes all (or nearly all) funds or restricting it to only a portion of the portfolio (that is, those funds with track records exceeding a certain minimum length).
Because of track-record-based limitations of available data, portfolio optimizations are prone to overfit allocations to the most recent market cycle. In effect, the so-called optimal allocation will be the one that performed best in recent years. When there are market transitions, however, the investments that worked best in recent years may well be among the worst future performers. In these instances, portfolio optimization will not merely be useless, but will actually lead to worse-than-random results. For example, at the start of 2000, because of their stellar performance in the recent preceding years, long-biased equity strategies, particularly those focused on technology, would have been assigned much larger than normal allocations in an optimization—exactly at the time they were about to become among the worst-performing assets. Similarly, a portfolio optimization run at the start of 2008 would have assigned heavier weights to those strategies that proved to be most exposed to the financial meltdown later in the year (e.g., long exposure to credit risk, illiquid securities, emerging markets, etc.).
Limitations in available data will also make correlation calculations less reliable. In many instances, correlations between assets can vary widely over time, and the correlation for an insufficient-length period may reflect only part of the normal range. Also, over shorter periods, there is a significant likelihood that even unrelated funds might appear correlated simply due to chance (e.g., both witnessing large gains or losses in the same month or two for unrelated reasons).
Even when there is extensive data available, the implicit assumption that this past data can be used as a proxy for future expectations is a highly tenuous one. For example, as of 2012, Treasury bonds had been in a bull market for over 30 years, a fact that would increase their optimized weighting in any portfolio in which they were an asset. Yet, ironically, the fact that T-bonds had been in such a long-term advance made their prospects for future returns less favorable, not more favorable, since the resulting low interest rate levels suggested far more limited scope for a further decline in interest rates (that is, rise in bond prices). See Chapter 6 for a more detailed discussion of this example.
The question must always be asked: Are the factors responsible for past returns still likely to be valid for the future? If they are not, portfolio optimization would yield, at best, meaningless and, at worst, misleading results.
The implicit premise that volatility is a reasonable proxy for risk is often entirely unfounded because major risks are frequently not manifested in the track record. Also, higher volatility may sometimes be due to outsized gains, which do not imply symmetrical risk. See Chapter 4 for a detailed discussion of the confusion between risk and volatility. The use of volatility as a proxy is most appropriate for highly liquid strategies, such as futures and foreign exchange (FX) trading, where event risk is not a factor, as it is for many hedge fund strategies.
Portfolio optimization provides a mathematically precise answer to the wrong question. The question it answers is: What is the optimal allocation mix for a portfolio, assuming future returns, volatilities, and correlations look like the past? The question we would like to answer is: What is the optimal allocation mix given our best assessment of prospective returns, risks, and proclivity to simultaneous losses for the investments in the portfolio? These two questions are most definitely not the same. Frequently, past returns are unrepresentative of potential future returns, past track records do not reflect known major risks, and correlations may not accurately represent tendencies toward simultaneous losses. Portfolio optimization provides an exact solution to the problem of allocating in the theoretical world. Unfortunately, we invest in the real world, and the two are often strikingly different. Consequently, a manual approach that takes into consideration key factors, including those not visible in past track records, is preferable to the easy-to-generate seeming precision of a portfolio optimization. It is better to get an approximate answer for the appropriate assumptions than an exact answer for the wrong assumptions.1
Some of the principles that underlie sound portfolio construction have already been discussed in detail in earlier chapters. Where this is the case, we simply summarize these concepts here as they pertain to portfolio allocation and indicate the reference chapter.
Investors often focus most on returns without taking into account that risk exposure will directly impact returns. A manager who doubles position sizes will double returns, but will also double risk. It would be absurd to consider such a doubling of return as representing much better performance. A focus on return/risk will avoid such nonsensical comparison biases. What if a higher-return/risk manager has a lower than desired return level? In this case, the return can be increased by leverage, while still maintaining risk at a lower level than for a lower-return/risk manager with an acceptable return level. If equivalent in qualitative terms and in terms of diversification with other portfolio holdings, a higher-return/risk manager would always be preferred.
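The leverage argument can be checked with simple arithmetic. The two managers below are hypothetical, and the sketch ignores financing costs of leverage for simplicity:

```python
# Hypothetical managers -- illustrative numbers only.
ret_a, vol_a = 0.06, 0.04    # Manager A: return/risk = 1.5
ret_b, vol_b = 0.10, 0.10    # Manager B: return/risk = 1.0

# Leverage Manager A so that A's return matches Manager B's.
# (Financing costs of leverage are ignored in this sketch.)
leverage = ret_b / ret_a                 # ~1.67x
lev_ret = leverage * ret_a               # 10% -- same return as B
lev_vol = leverage * vol_a               # ~6.7% -- still well below B's 10%

assert abs(lev_ret - ret_b) < 1e-12 and lev_vol < vol_b
```

The leveraged version of the higher-return/risk manager matches the lower-return/risk manager's return at roughly two-thirds of the risk.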
Volatility is only one type of risk and, in some cases, may not even represent a risk as viewed from the investor’s perspective of risk, which is based on the probability and magnitude of loss. Many of the most important risks may not be reflected in the track record. The one exception where the use of volatility as a proxy for risk is roughly appropriate (although not in all cases) is for highly liquid strategies (e.g., futures and FX trading).
Frequently, managers may compile impressive performance records by taking substantial market exposure in a benign market environment. If widely varying market exposure is part of the investment process, then good performance in a bull market can be considered skill. If, however, the manager has consistently assumed substantial market exposure and the track record coincides with a rising market, then past returns may reflect the market more than manager skill.
Although, on average, the benefits of diversification are often only moderate beyond 10 diversified holdings, this view misses the point that the main value of greater diversification is mitigating worst-case outcomes (“tail risk” in the industry vernacular). Substantially greater diversification is therefore still beneficial, provided that added investments are considered of equivalent quality and are not more correlated to other holdings.
Many fund of funds managers follow a top-down philosophy to achieve diversification: They decide how much to allocate to each hedge fund category (e.g., long/short equity, event driven, global macro, etc.) and then select the individual managers within each strategy category. There are numerous logical inconsistencies with a top-down approach:
Category labels are inconsistent and potentially misleading as indicators of differentiation. If the goal is to select well-diversified managers, it makes much more sense to focus on the individual investment statistics (e.g., correlation, beta) and qualitative comparisons of strategies than on category labels, which are unavoidably arbitrary and inconsistent.
Selecting managers for a portfolio is different from selecting managers as stand-alone investments. The portfolio impact of adding a manager depends on both the manager’s individual performance and the manager’s correlation to other portfolio holdings. A manager who has a low or inverse correlation to other managers may be preferable as a portfolio addition to a qualitatively equivalent manager with a higher return/risk ratio. As another example, an inversely correlated manager with very high volatility may well reduce rather than increase portfolio volatility (see Chapter 20).
As a general guideline, a fund of funds portfolio manager or a multimanager investor should target a low average pair correlation. A pair correlation is the correlation between any two investments in the portfolio. The number of pairs in a portfolio is equal to N × (N − 1)/2, where N equals the total number of investments. For example, if there are 20 managers in a fund of funds portfolio, there would be (20 × 19)/2 = 190 pair correlations. The correlation matrix detailed in the next section provides a convenient way to look at all of the portfolio’s pair correlations.
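The pair-count formula can be verified directly, here by enumerating the pairs with the standard library:

```python
from itertools import combinations

def num_pairs(n):
    """Number of distinct pair correlations among n investments: n(n-1)/2."""
    return n * (n - 1) // 2

# 20 managers, as in the example above.
funds = [f"Fund {i}" for i in range(1, 21)]
pairs = list(combinations(funds, 2))    # every distinct two-fund pairing
assert len(pairs) == num_pairs(20) == 190
```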
It is also instructive to look directly at the coincidence of losses between different funds in a portfolio. A tool for detecting patterns of simultaneous losses in a portfolio is described later in this chapter in the section “Going Beyond Correlation.” The fund of funds manager should seek to minimize the number of funds in a portfolio that exhibit a strong tendency to lose money at the same time.
Assume that all the managers in a portfolio are deemed to be of approximately equal quality and all are equivalently diversified versus the other managers. How should the assets be allocated? Given the foregoing simplified example, it might appear that equal allocation would be the logical choice. Actually, equal allocation can be folly, even assuming investments of equivalent merit. The absurdity of an equal allocation approach is illustrated by the following fictitious tale of two partners who co-manage a fund but have a basic disagreement on how the fund should be traded.
Carol and Andrew are partners as managers of a fund that employs a systematic futures trading strategy. Both are happy with the trading system they have developed, but they have a problem. They are currently trading their system using a 14 percent margin-to-equity ratio, a middle-of-the-road exposure level for commodity trading advisors (CTAs). Carol is very conservative and is most concerned about keeping equity drawdowns small. Andrew, however, feels they are being too cautious and should increase exposure.
One day, Carol says, “Investors are more concerned about equity drawdowns than return, and, frankly, so am I. I think we should cut our exposure in half to 7 percent margin to equity.”
“Are you crazy?” Andrew shoots back. “We are already trading at too low an exposure level. Our worst drawdown so far has only been 10 percent. I think we should double our margin-to-equity level. We will double our returns, and most investors will still be fine with a maximum drawdown of 20 percent.”
Carol is so exasperated that she almost can’t decide where to begin. “Who says our future worst drawdown will be only as large as our past maximum drawdown? What if it is twice as large? Then with your suggestion, we would be down 40 percent and out of business!”
They decide to keep the status quo, but neither partner is satisfied. They have the same argument repeatedly over the ensuing weeks, but neither partner can budge the other. Finally, they decide to split apart, with each maintaining rights to use the system they co-developed.
Carol and Andrew each start their own funds. Both continue to use the exact same system without any alteration. The only difference is that Carol trades the system with 7 percent margin to equity, while Andrew uses 28 percent margin to equity.
Now consider a fund of funds manager who uses an equal allocation approach and adds Carol’s or Andrew’s fund to the portfolio. Both funds represent the exact same strategy and both will experience near identical return/risk performance. While equal allocation may sound like a neutral approach, it would result in sizing the same investment four times as large if Andrew’s fund is chosen instead of Carol’s. By any logic, an allocation to Andrew should be one-quarter the size of an allocation to Carol, in which case both return and risk would be equalized for what are equivalent investments.
The risk level at which a fund is run is based on the manager’s subjective preferences. There is no reason why an investor needs to buy into the same risk levels. If different investments have different risk levels, then allocation levels should be adjusted accordingly. If two funds are deemed to be equivalently attractive as holdings and one has twice the risk of the other (with risk measured in whatever way is deemed most appropriate), then it should receive half the allocation. The key point is that the starting baseline allocation should be based on equal risk, not equal assets. Of course, other factors, such as relative quantitative and qualitative assessments, as well as diversification with other holdings, should also influence the allocation size. But all else being equal, higher-risk investments should get proportionally smaller allocations.
An equal allocation fund will tend to be more volatile as higher-risk holdings exert a disproportionate impact. In contrast, a risk-based allocation approach will mitigate portfolio volatility by holding proportionally smaller allocations in higher-risk investments.
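An equal-risk baseline can be sketched by weighting each fund inversely to its risk. Here the Carol/Andrew margin-to-equity figures stand in as the risk measure; any preferred measure could be substituted:

```python
# Risk-based sizing for the Carol/Andrew example: risk is proxied here by
# margin-to-equity (any preferred risk measure could be substituted).
risks = {"Carol": 0.07, "Andrew": 0.28}

# Weight each fund in inverse proportion to its risk.
inv = {name: 1.0 / r for name, r in risks.items()}
total = sum(inv.values())
weights = {name: v / total for name, v in inv.items()}

# Carol gets 80%, Andrew 20% -- a 4:1 ratio that exactly offsets Andrew's
# 4x risk, so each fund's weight * risk contribution is equal.
contribs = {name: weights[name] * risks[name] for name in risks}
```

With these weights, the two versions of the identical strategy contribute identical risk to the portfolio, which is the neutrality that equal dollar allocation fails to deliver.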
If a fund of funds portfolio is intended to be used as a diversifier to traditional investments rather than just as a stand-alone investment, it should seek to be net profitable in the majority of bear market months. To enhance the likelihood of achieving this goal, seek managers who have been net profitable across the down market months that occurred during their track records.
In comparing a portfolio of investments, it is highly useful to view correlations between the investments as a group rather than one pair at a time. A correlation matrix summarizes all pair correlations between a set of investments (or any other data). An illustration of a correlation matrix is shown in Figure 21.2. Note that both the horizontal and vertical labels are the same. To find the correlation between any given pair of investments, simply look at the cell that is the intersection of those two investments. For example, to check the correlation between Fund C and Fund E, look at the intersection of the row for Fund C and the column for Fund E, or equivalently, the intersection of the row for Fund E and the column for Fund C. Both show a correlation of 0.09. The upper diagonal half of the correlation matrix duplicates the data of the lower half of the matrix. For this reason, frequently only the lower diagonal half of the correlation matrix is shown. The diagonal values of the correlation matrix would always be 1.0, as each cell in the diagonal is the intersection of a fund and itself. Since this is a trivial case, these cells are frequently left blank. It may be useful to highlight correlation values above some threshold. For example, Figure 21.2 shades all correlation values greater than 0.7. The average of all the correlation pairs in the correlation matrix provides a good summary indication of the degree of diversification in the portfolio. The lower the average pair correlation, the better.
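A correlation matrix, the average pair correlation, and the threshold flagging can all be produced in a few lines. The five funds and their monthly returns below are randomly generated placeholders, not the data behind Figure 21.2:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Hypothetical monthly returns for five funds (36 months of random data).
returns = pd.DataFrame(
    rng.normal(0.01, 0.03, size=(36, 5)),
    columns=["Fund A", "Fund B", "Fund C", "Fund D", "Fund E"])

corr = returns.corr()          # full symmetric correlation matrix, 1.0 diagonal

# Average pair correlation: mean of the strictly lower triangular half,
# which avoids double-counting the duplicated upper half and the diagonal.
n = corr.shape[0]
lower = corr.values[np.tril_indices(n, k=-1)]
avg_pair_corr = lower.mean()

# Flag pairs above a 0.7 threshold (the shading in Figure 21.2).
high = [(corr.index[i], corr.columns[j])
        for i in range(n) for j in range(i)
        if corr.iat[i, j] > 0.7]
```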
Although correlation is a useful tool for flagging investments that may be prone to witness simultaneous losses, for reasons detailed in Chapter 9, moderate or even high correlation between two funds does not necessarily imply that they will experience losses at the same time, nor does low correlation assure that this is not the case. Examining how each fund behaves when other funds in the portfolio experience declines provides a more focused analysis than correlation in directly addressing the key concern: simultaneous losses.
The coincident negative return (CNR) matrix provides a convenient format for assessing the degree to which investments in the portfolio are vulnerable to losses when other portfolio holdings experience a decline. The CNR matrix would look similar to the conventional correlation matrix, but would differ in the following two essential ways:
There would be one parameter input required to calculate the CNR matrix: the minimum loss threshold (T) to define a losing period (month for monthly data). The default value for T would be zero; that is, any loss would represent a losing month. There is, however, a good reason to use a higher threshold: It may be more pertinent to focus on significant simultaneous losses rather than all simultaneous losses. For example, we may not care whether Fund E declined in months when Fund C was down a minimal amount. If T were set to a value of 0.5 percent, then the CNR matrix would show the percentage of times the row managers lost at least 0.5 percent when the column managers lost at least 0.5 percent.
The significance of what percentage of the time a manager loses when another manager loses depends on whether this percentage is more or less than the manager’s average percentage of losses. Thus, it may be more meaningful to use a normalized version of the CNR matrix, wherein each percentage is divided by a manager’s average percentage of losses worse than the threshold (T). For example, if Manager A is down in 30 percent of all months but in only 20 percent of the months when Manager B is down (for simplicity of exposition we assume T = 0), then the normalized CNR statistic would be 66.7 percent (20%/30% = 0.667), indicating it is less likely for Manager A to lose when Manager B is down than in a randomly selected month. If, however, Manager A is down in 50 percent of the months when Manager B is down, then the normalized CNR statistic would be 166.7 percent (50%/30% = 1.667), indicating it is more likely for Manager A to lose when Manager B is down than in a randomly selected month.
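Since the CNR is the author's own construct (see the note that follows), no standard implementation exists; the sketch below is one speculative way to compute the statistic as described, applied to the Manager A/Manager B example with T = 0:

```python
import numpy as np

def cnr(a, b, t=0.0):
    """Fraction of periods in which `a` lost at least `t`, counted only over
    the periods in which `b` lost at least `t` (a loss of "at least t" is
    taken here as a return <= -t)."""
    a, b = np.asarray(a), np.asarray(b)
    b_down = b <= -t
    return np.mean(a[b_down] <= -t)

def normalized_cnr(a, b, t=0.0):
    """CNR divided by `a`'s unconditional loss frequency. Values above 1.0
    mean `a` loses more often than usual when `b` is down."""
    base = np.mean(np.asarray(a) <= -t)
    return cnr(a, b, t) / base

# Hypothetical monthly returns mirroring the text's example proportions.
mgr_a = [-0.02, 0.01, -0.01, 0.03, -0.03, 0.02]   # down 3 of 6 months (50%)
mgr_b = [-0.01, -0.02, 0.01, 0.02, -0.01, 0.03]   # down in months 1, 2, and 5
```

Here Manager A is down in two of Manager B's three losing months (a CNR of 66.7 percent) versus an unconditional 50 percent, giving a normalized CNR above 1.0, i.e., A is more likely than usual to lose when B loses.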
Note: The CNR is my own invention and not available on any existing software. It will, however, be included in the Schwager Analytics Module currently being developed by Gate 39 Media as a module for their Clarity Portfolio Viewer system, an add-on scheduled for release in the second quarter of 2013. Interested readers can get more information at: www.gate39media.com/schwager-analytics. For the sake of disclosure, I have a financial interest in this product.
Efficient portfolio allocation can dictate decisions that would be irrational for stand-alone investments. Lower return/risk managers might sometimes be preferred over higher return/risk managers for a portfolio allocation if they are inversely correlated to the portfolio. The key consideration is which manager will provide a portfolio with the highest expected return/risk characteristics, not which manager has the highest expected return/risk characteristics.
Portfolio optimization applies mathematical precision to the portfolio allocation process, but it is often based on faulty assumptions. The typical implicit assumption in portfolio optimization is that past returns, volatilities, and correlations are reasonable estimates for future expected levels. The problem is that this assumption is often deeply flawed, particularly in the case of returns. At major market transition points, portfolio optimization may often yield worse-than-random results.
Equal allocation is often viewed as the default neutral portfolio allocation. In reality, however, because managers often differ widely in the risk they assume, an equal allocation portfolio will inadvertently allocate far more risk to some managers than others. A more sensible neutral approach is to allocate in terms of equal risk, which ironically will imply that some managers will get much larger allocations than others.
1 The foregoing discussion of portfolio optimization refers to its application using past data as the assumed representative data for the various investments. Portfolio optimization, however, does provide a useful tool for determining optimal allocations implied by given assumptions. But, of course, in this latter case, the results are only as good as the assumptions.