Capital Attribution: Exponentially Increasing and Imprecise


Just about everybody who writes about capital has constructed a picture of the difference between the mean and an extremum of losses (Chart 1: capital), implicitly for a static pool of assets.   It is the unifying axiom of economic capital allocations, and it is simply not helpful in calculating capital.   Implicit in this approach is that capital is related to a Value-at-Risk type estimate, in this case a credit VaR. Yet a lot of details are invariably left unresolved.  Do you model this over the life of the loan, or 1 year?  Using mark-to-market or cash flow variability?  99.0%, 99.5%, 99.9% or 99.97% extremums?   Is this stand-alone capital, or is the business line's correlation with other corporate activities captured?  While everyone knows these issues have to be addressed prior to implementation, the fact that so few people wish to stick their neck out on these critical assumptions suggests that while many people are talking about capital, few are calculating numbers from these exercises that have real implications for pricing, incentive compensation, and performance measurement.  Did you ever wonder why the meticulously detailed CreditMetrics and CreditRisk+  contain no real estimates of commercial capital (e.g., 6.43% for 3 year B rated loans)?  It's not because they do not have data, it's because the process of generating a real number would bring forth the sad fact that the standard error from this process is sufficiently large to make it irrelevant!   There is an overriding faith that things will work themselves out.  To many involved, the inability to match theory to data is like the early days of quantum electrodynamics (QED), where initially calculations needed ad hoc adjustments to match the data, but eventually Feynman, Schwinger and Tomonaga figured out how to get it right.  Technical details.  
Unlike physics, however, in economics there are very few nice theories that eventually matched the data, and there are many cases where unfounded optimism in new tools was never vindicated (e.g., game theory circa 1950, Keynesian demand management circa 1960, monetary policy circa 1980). 

Uncertainty Bounds

A survey by the First Manhattan Consulting Group in 1997 showed considerable variability in both expected loss and capital across banks (Table 1: Capital). Capital for B rated loans averaged 444 basis points but ranged from 185 to 700, while expected losses averaged 44 basis points and ranged from 18 to 88. These data, which are very similar to surveys by other private parties, appear to be reasonable estimates of the state of agreement in this area. As actual pricing is nowhere near this varied, there is clearly a disconnect. The most probable answer is that most banks price to the market, and use capital and expected loss estimates as ex post analytical tools, not as drivers of pricing or strategic decisions. Given the state of uncertainty in this field, this is probably a wise bit of prudent incrementalism. 


Confusion in the capital allocation process comes from the spurious precision that results from applying an extremum to a default probability. The usual result is a targeted 99.95% annual nondefault rate. As a practical matter such precision is counterproductive and misleading. You can't calibrate a system to this level of accuracy, so it can't be relevant (though many physicists, who have no intuition for the data, wouldn't know this). The sensitivity of such estimates to assumptions about time horizon, volatility, correlations, marginal volatility contribution and distribution is so large that it is impractical to expect this number to achieve any meaningful consensus. A firm would either have to wholly delegate the power of capital estimation to a single person or expect an endless debate. S&P looks for 4-5 times lifetime expected losses on auto ABS as the total subordination necessary to get to AAA. You have to admire the recognition of imprecision in that sort of rule of thumb. 
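To make the precision point concrete, here is a minimal sketch of how much the capital estimate swings purely with the choice of extremum, even holding everything else fixed. The 8% annual loss volatility is an assumption for illustration, and the Gaussian distribution is itself one of the contestable inputs:

```python
from statistics import NormalDist

def gaussian_capital(annual_vol, quantile):
    """Capital as a one-sided Gaussian extremum of annual losses,
    measured in excess of the mean: z(quantile) * volatility."""
    return NormalDist().inv_cdf(quantile) * annual_vol

# Same portfolio, four popular extremums -- the answer moves by half again
# as much between the loosest and tightest targets.
for q in (0.990, 0.995, 0.999, 0.9997):
    print(f"{q:.2%} extremum, 8% vol: {gaussian_capital(0.08, q):.1%}")
```

Swapping in a fatter-tailed distribution, a longer horizon, or different correlations moves the number at least as much again, which is the consensus problem in miniature.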

A Basic Capital Result

The spread variability graph represents the 99.9% extremum of annualized credit spread changes, using data from 1992-98 (chart 3: capital). As this period does not include a recession, inference for a worst-case scenario is clearly on the low end. Yet the important point is that spread volatility appears to be convex over risk grades. That is, the volatility of spreads is exponentially increasing as we move down the credit risk spectrum, which implies that the contribution of credit spread variability to an extremum measure of unexpected loss is also exponentially increasing as we move down the credit risk spectrum. 
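As a toy illustration of what "exponentially increasing" means here (the numbers below are hypothetical stand-ins, not the values from the chart), exponential growth over grades shows up as roughly constant multiplicative steps between adjacent grades:

```python
# Hypothetical 99.9% annualized spread-change extremums by grade, in basis
# points; illustrative only, not the chart's data.
extremums = {"A": 30, "BBB": 60, "BB": 120, "B": 240}

# Exponential growth across grades means roughly constant ratios between
# adjacent grades, i.e., log-extremums increase roughly linearly.
vals = list(extremums.values())
ratios = [b / a for a, b in zip(vals, vals[1:])]
print(ratios)  # -> [2.0, 2.0, 2.0] in this illustration
```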

A time series of returns from Merrill Lynch's High Yield bond index and Lehman Bros. Aggregate bond index can also inform us on the general properties of a credit VaR.  If we look at (chart 10: Correlations), we can see the variability in the total return of these indices over time.  Quarterly data is best for bond funds, since there is autocorrelation in yield movements, and thus extrapolation from monthly to annual is not simply a matter of the square root of 12.  But I digress.  If we look at these series we see two things.  First, the S&P bank equity index varies much more closely with the Merrill Lynch High Yield index than with the Lehman Aggregate, implying the average credit risk of a bank is more like that of high yield bonds than of investment grade (the Lehman Aggregate is primarily Treasuries and investment grade securities).  Second, if we take the annualized standard deviations of these returns and multiply them by 3.09 to approximate a 99.9% scenario (assuming a Gaussian distribution), the extremums of the investment grade and High Yield indices are 14.7% and 23.1% respectively.  If we try to take out interest rate risk by looking at the High Yield return minus the Investment Grade return--a fund hedged by Libor--the 99.9% extremum is still around 23%.  This says that hedging interest rate risk for junk credits adds more spread risk than it removes in rate risk (think about hedging high yield bonds with Treasuries last August, ouch!).  The nice thing about this approach is that index movements are a composite of spread movements, credit migrations and defaults.  If we look at chart 2: Other we can see the credit spread movements from 1988-98, which includes a recessionary period.  This confirms our earlier calculation.  Specifically, the B spread to Treasuries rose by 500 basis points in 1990 in a matter of months.  
An instance of a 500 basis point move over the past 11 years implies a 99.9% scenario of at least 1,000 basis points, which implies capital well over 20% for B rated credits.   Both these estimates suggest very high levels of capital should be applied to BB and B rated credits if we use an annualized extremum consistent with a target debt rating. 
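The arithmetic above can be sketched as follows. The quarterly return series is hypothetical (the Merrill Lynch and Lehman index data are not reproduced here), and the 2.5 years of spread duration in the back-of-envelope check is an assumption:

```python
import math
import statistics

def gaussian_999_extremum(quarterly_returns):
    """Annualize quarterly return volatility (sqrt-of-time scaling over 4
    quarters) and multiply by ~3.09, the one-sided 99.9% Gaussian quantile."""
    annual_vol = statistics.stdev(quarterly_returns) * math.sqrt(4)
    return 3.09 * annual_vol

def spread_shock_loss(spread_duration, shock):
    """Back-of-envelope price impact of a spread widening:
    duration times the shock."""
    return spread_duration * shock

# Hypothetical quarterly total returns for a high-yield index (illustrative)
hy = [0.031, -0.012, 0.044, 0.008, -0.035, 0.026, 0.051, -0.019]
print(f"99.9% extremum: {gaussian_999_extremum(hy):.1%}")

# A 1,000 bp widening on ~2.5 years of spread duration implies roughly a
# 25% price decline -- capital "well over 20%" for B rated credits.
print(f"shock loss: {spread_shock_loss(2.5, 0.10):.0%}")  # prints "shock loss: 25%"
```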

The last bit of information comes from some actual estimates of capital using credit VaR type algorithms.   Using JPMorgan's CreditManager, software that implements CreditMetrics, we can estimate extremums for various risk grades (chart 5: capital).   We loaded in 1,000 obligors and, given a transition matrix and assumptions on recovery rates (75%) and spreads by credit grade, measured the extremums relative to the mean for the portfolio.  We used the base transition matrix and spreads that came with the software, which were very similar to table 1: transitions, and general spreads (chart 5: default rates).   Spreads were held constant, however, so this exercise is not a true mark-to-market variability; in practice the market value of a portfolio will fluctuate even if bonds do not migrate out of their current credit grades.  Extremums, i.e., unexpected losses, are exponentially increasing in this exercise, just as they were in the analysis of spread variability. 
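A heavily stripped-down version of this kind of exercise can be sketched as follows. The transition probabilities and end-grade values are hypothetical stand-ins, not CreditManager's inputs, and asset correlation is omitted for brevity, which understates the tail:

```python
import random

random.seed(0)

# One-year migration for a B rated obligor: stay B, downgrade to CCC, or
# default. Probabilities are illustrative assumptions.
grades = ["B", "CCC", "D"]
trans_B = [0.90, 0.04, 0.06]

# End-of-year value per unit face by ending grade: a crude revaluation where
# wider spreads mean lower value, and default recovers 75% as in the text.
value = {"B": 1.00, "CCC": 0.92, "D": 0.75}

def simulate_losses(n_obligors=1000, n_sims=2000):
    """Simulate portfolio loss rates: migrate each obligor independently,
    revalue, and record the loss versus par."""
    losses = []
    for _ in range(n_sims):
        end = random.choices(grades, trans_B, k=n_obligors)
        port = sum(value[g] for g in end) / n_obligors
        losses.append(1.0 - port)
    return sorted(losses)

losses = simulate_losses()
expected = sum(losses) / len(losses)              # expected loss
extremum = losses[int(0.999 * len(losses)) - 1]   # ~99.9th percentile loss
capital = extremum - expected                     # extremum relative to mean
```

With independent obligors the extremum sits close to the mean; correlated migrations (as in the real CreditMetrics machinery) fatten the tail considerably, but the exponential pattern over grades is the same.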

To demonstrate that this is not my peculiar implementation of an algorithm, a paper by some researchers at a Japanese bank shows the same pattern: expected and unexpected losses increase exponentially over risk grades. Their algorithm used a Markov transition matrix, and the matrices were calibrated to internal data at Sakura Bank in Japan (chart 6: capital).  The simulations generated the extremum losses over the life of the portfolio, and the approach is therefore a hybrid between CreditRisk+ (cash flow variability only) and CreditMetrics (Markov transitions).  Note that expected losses rise exponentially, as they do at all banks.  The capital calculation, however, is not necessarily true capital; it is an estimate of capital made by applying a statistical algorithm and picking an extremum.  The point here is that statistical algorithms that use a PDF extremum approach must generate capital estimates that increase exponentially over credit grades, just as expected losses do.   

The exponential increase of capital, in lockstep with the exponential increase in expected loss, is a consistent feature of these statistically based algorithms.  CreditRisk+ and Wilson's CreditPortfolioView give similar answers.   Default-driven algorithms that ignore spread volatility produce exponentially increasing capital, and spread volatility alone leads to the same conclusion.  Added together the result is doubly strong: capital estimates are exponentially increasing in expected loss. 

It is a fact that along any risk metric, expected losses are nonlinear.  One sees this in both consumer and commercial gradations.  I would argue that unexpected losses are nonlinear as well; the above evidence corroborates this, and I do not know many who would gainsay the point.   

Bottom Line

For me the primary implication is that it is much more fruitful to work on backing out simple functions based on expected loss and product type, which of course necessitates deriving the expected loss for all the different gradations of product types (grade 3 media lending, tier 1 auto, a middle market loan secured by an art collection, etc.).  When trying to allocate capital to a large, complex banking operation with many different products, these are the first order areas of importance.  Even if one is solely monitoring one product, such as auto loans or large corporate lending, expected loss is still the most important area of focus.  We should focus on dividing a portfolio into expected loss buckets, and then map these into capital using market data such as Asset Backed Security subordination or credit spreads.   The result is a set of mappings from expected loss into capital, by product.  Focusing on expected losses has an immediate impact on pricing and strategic decisions through the effect on net spreads.  Most importantly, during the essential and unavoidable multi-year period of calibrating capital, you are providing tangible value-added to your organization.   
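A minimal sketch of such a mapping, assuming the 4-5x subordination-style multiple cited earlier (the 4.5 midpoint and the expected-loss buckets below are illustrative assumptions, not derived figures):

```python
def capital_from_expected_loss(lifetime_el, multiple=4.5):
    """Map lifetime expected loss into capital via a subordination-style
    multiple; 4.5 is an assumed midpoint of the 4-5x rule of thumb."""
    return multiple * lifetime_el

# Hypothetical lifetime expected-loss buckets by product (illustrative only)
buckets = {
    "tier 1 auto": 0.005,
    "middle market": 0.012,
    "B rated corporate": 0.045,
}
capital = {k: capital_from_expected_loss(v) for k, v in buckets.items()}
for product, cap in capital.items():
    print(f"{product}: {cap:.1%} capital")
```

Because expected loss itself rises exponentially over grades, even this simple linear multiple reproduces exponentially increasing capital over grades, consistent with the rest of the evidence here, while keeping the debatable part (the multiple) in one visible place.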
