A Review of the Empirical Data Relevant to Commercial Loan Portfolios

The sections below each address a different theme in bank loan risk, all of which bear on capital estimation.  This empirical section pertains only to the US experience.  By keeping the hyperlinks external to the text and using the "back" arrow, one can view the data supporting a particular statement without interrupting the flow of the text.

Defaults over Time

The most important thing to know about banks and debt markets is the history of their stock market and charge-off performance.  In the U.S., bank charge-offs have averaged 0.44% of total loans and leases outstanding since 1932, and a higher 0.67% over the more recent period of 1970-96. Chart 8: Default Rates and Chart 3: Correlations show the highly nonstationary nature of charge-offs over this century. Charge-offs were very low throughout the 50's and slowly trended upward over the next few decades, peaking in 1990. The early period was populated by lenders with fresh memories of the Great Depression, while the 80's were affected by significant moral hazard problems as deregulation in the US occurred in a haphazard fashion. Chart 4: Correlations suggests that the great increase in bank stock prices since 1990 may reflect an inference that the banking industry has turned the corner on charge-offs: after 30 years of slow but exponential growth, these rates have at least stopped growing and have perhaps changed direction toward lower long-term levels.  Chart 3: Correlations clearly shows that from 1970 to 1990 stock prices for banks were stagnant; given the rise in charge-offs, we can see why.  Chart 3: Default Rates shows that, along with bank portfolios, publicly rated debt became riskier in an analogous fashion, with few defaults in the 70's relative to the 80's and 90's. Most of this increase in riskiness was due to an increase in the default probability of speculative grade debt (rated below BBB-), as the relative contribution from investment grade borrowers was negligible.  Chart 7: Correlations shows that the Dun & Bradstreet measure of small business failures also increased in the 80's and 90's, which is consistent with the bank charge-off experience. These data suggest that inferences based on data prior to 1970, or perhaps even 1980, are not appropriate for today's environment, due to these secular trends. The relatively low bankruptcy period of 1940-69 is currently regarded as anomalous, not representative of the future.

Another interesting feature of charge-offs is that the level of small business failure rates is only weakly correlated with recessions. Changes in the D&B failure rate exhibit a much closer correlation with recessionary periods than do their levels (Chart 8: Correlations). The levels of failure and charge-off rates are so swamped by secular trends that they bear little relation to recessionary periods. Thus, to assess current conditions, we should look less to deviations from lengthy historical norms and more to changes from immediately prior periods.
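A minimal sketch of this point, using hypothetical numbers that stand in for the D&B failure series and the recession dates (none of the figures below are from the actual data): a trending failure-rate series correlates only weakly with a recession indicator in levels, while its year-over-year changes correlate much more closely.

```python
# Toy illustration: levels of a trending series vs. first differences, each
# correlated against a 0/1 recession indicator. All numbers are hypothetical.
import numpy as np

years = np.arange(1970, 1998)
recession = np.isin(years, [1974, 1975, 1980, 1981, 1982, 1990, 1991]).astype(float)
trend = 0.05 * (years - 1970)                       # secular upward drift in failures
rng = np.random.default_rng(0)
failure_rate = trend + 0.2 * recession + rng.normal(0.0, 0.03, years.size)

levels_corr = np.corrcoef(failure_rate, recession)[0, 1]
changes_corr = np.corrcoef(np.diff(failure_rate), recession[1:])[0, 1]
print(f"levels vs. recession indicator:  {levels_corr:+.2f}")
print(f"changes vs. recession indicator: {changes_corr:+.2f}")
```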

Chart 7: Default Rates shows the variability of nonaccrual loans as a percent of total loans and leases for banks and savings institutions since 1984. Both series peaked in 1990 at around 3% and are now running at around 1%. Historically, 20-50% of nonaccruals turn into charge-offs over this period, with significantly lower charge-offs as a percentage of nonaccruals for savings institutions than for banks. This highlights the importance of refining a set of assumptions for a particular portfolio, in that different recovery rates and default rates affect debt portfolios differently: unrated debt pools do not mimic each other, let alone the publicly rated universe analyzed so frequently. This is expanded upon below in the section entitled Bank vs. Public Debt.

Default Rates by Credit Grade

Default rate studies by S&P and Moody's over the past 10 years have greatly improved our ability to translate ratings into default estimates. While agency ratings are certainly imperfect, cumulative loss curves clearly show an ordinal ranking consistent with their assigned risk. Studies have been done over 1920-90, 1970-95, and 1981-97, and choosing a relevant time period matters more than choosing between S&P and Moody's. Using the 1980-97 period from S&P, we see that investment grade issuers had only a 0.08% 1-year default rate, which is so low as to be measured with considerable proportional error (Table 1: Default Rates). The average annual default rate over 5-year periods for securities initially rated investment grade is 0.17%; the longer horizon captures more defaults as companies are downgraded from A to BBB to B and so on, and in the process perhaps yields a better estimate of the probability of default.  In either case the expected loss for a diversified investment grade portfolio is best described as noise (only 3 companies since 1938 had investment grade ratings at the time of default, while 22 had investment grade ratings as of January 1 of the year they defaulted, out of 800 corporate bond defaults in this period). Speculative grade, junk, or high yield debt had average annual default rates of 3.84% over 1 year and 3.47% over five years, which suggests that somewhere around 3.6% is probably a good estimate of annual default rates for these securities. Moody's shows similar results except for B rated bonds. The Moody's sample estimate using 1970-90 data is 8.1% over 1 year, but when annualized over 5 years this rate drops to 4.9%, as compared to S&P estimates of 4.8% and 4.0% for the 1-year and 5-year average annual default rates, respectively. Interestingly, the Moody's overall noninvestment grade default rate is not in as much disagreement with S&P. Thus the 1-year Moody's rate of 8.1% appears to be an outlier, and a rate around 4-5% seems more appropriate for B rated public debt.
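The annualization used above converts a multi-year cumulative default rate into a geometric average annual rate. A minimal sketch of the arithmetic follows; the cumulative figure is a placeholder chosen for illustration, not a quote from the S&P or Moody's studies.

```python
# Convert a T-year cumulative default rate into an average annual (geometric) rate.
def average_annual_default_rate(cumulative_rate: float, years: int) -> float:
    """Solve (1 - d)**years = 1 - cumulative_rate for d."""
    return 1.0 - (1.0 - cumulative_rate) ** (1.0 / years)

# Example: a hypothetical 22% five-year cumulative default rate annualizes to
# roughly 4.9% per year, the kind of figure compared in the text.
print(f"{average_annual_default_rate(0.22, 5):.3%}")
```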

Kealhofer, Kwok and Weng (1998) show that because these are historical averages and the underlying process generating defaults is highly asymmetric, these estimates of default rates by ratings band have very high standard errors. For example, assuming a 0.35 correlation, the B rated average default rate of 8% could be anywhere from 4% to 12% at the 95% confidence bands. It is worth keeping in mind that default rates by loan grade are point estimates measured with considerable uncertainty.
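A minimal sketch of why the bands are so wide, under assumptions of my own (a one-factor Gaussian model in its large-portfolio limit, an 8% unconditional default rate, a 0.35 asset correlation, and 20-year sample averages) rather than the Kealhofer, Kwok and Weng methodology itself:

```python
# Simulate many 20-year histories of realized default rates when defaults share a
# common systematic factor, and look at the spread of the 20-year historical averages.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
pd_true, rho, years_per_sample, n_samples = 0.08, 0.35, 20, 50_000

c = norm.ppf(pd_true)                                    # default threshold
z = rng.standard_normal((n_samples, years_per_sample))   # one systematic factor per year
# Large-portfolio (Vasicek) realized default rate for each year:
annual_rate = norm.cdf((c - np.sqrt(rho) * z) / np.sqrt(1 - rho))
sample_means = annual_rate.mean(axis=1)                  # 20-year historical averages

lo, hi = np.percentile(sample_means, [2.5, 97.5])
print(f"true rate {pd_true:.1%}; 95% band of 20-year averages [{lo:.1%}, {hi:.1%}]")
```

Because a single systematic factor drives all obligors in a given year, adding more names does not tighten the estimate at all in this limit; only more years of data help, and only slowly.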

The difference between Moody's and S&P experience is highlighted by their cumulative default curves: Chart 4: Default Rates.

Correlations

It is useful to understand correlations as well as default rates; in fact, many argue that only correlations matter, since the rest is theoretically diversifiable. This distinction highlights a particularly relevant difference between debt and equity holders. While equity holders may be able to diversify their holdings, debt holders are made unambiguously worse off by increased volatility in the asset underlying a debt security: higher volatility means a higher probability of default, and there is no offsetting upside for debt.

To put loan default correlations in perspective, we included some other components of bank portfolios. Credit card losses (Chart 2: Correlations) behave in a cyclical fashion, rising during recessions, though also occasionally rising during nonrecessionary periods (e.g., 1986 and 1996). Note that this time series is much more muted than that for commercial loans. Credit card losses averaged about 300 basis points over the 1971-97 period, including a couple of increases which are probably explained by secular changes in the industry (e.g., the advent of monoline credit card companies), and in recessions losses increased by about 200 basis points. In contrast, bank charge-off rates moved from around 0.1% in the 50's to over 1.0% in the early 90's, a proportionally much larger change (Chart 4: Correlations). Further, annual default rate variability is much greater for public debt, with high yield default rates moving from around 2% for most of the seventies to roughly 10% in the spike years of 1970 and 1990 (Chart 3: Default Rates). Consumer debt has higher average loss rates but less volatility than commercial debt, though both are cyclical.

Net interest margin (NIM) is a major focus of banks, yet this series shows very little correlation with the business cycle (Chart 3: Correlations). In broad terms, however, there is a strong correlation between bank stock performance and NIM growth: the growth in NIM from 1948 to 1970 and from 1990 to 1994 was matched by strong growth in bank stocks, while the stagnant period of 1970-90 was accompanied by flat bank stock prices. The NIM is a component of accounting earnings, which reflect changes in bank asset and liability values only weakly, even before considering that CFOs manage earnings targets, which they certainly do. One should not expect the NIM to change drastically, on a relative basis, in a recession.

Default rates (Chart 3: Default Rates), changes in small business failures (Chart 8: Correlations), credit spreads (Chart 11: Correlations or Chart 2: Other), and upgrades/downgrades of public debt all move with the business cycle. Yet the evidence on the cyclicality of recovery rates is somewhat mixed (Chart 2: Recovery Rates). While Asarnow and Edwards (Citibank) found virtually no correlation with the business cycle, Carey and Altman & Kishore found that recovery rates did fall during the 1990 recession. Further, D&B's liability-per-failure amount in constant dollars does increase during recessions (Chart 3: Recovery Rates).  I think the issue here is that the Citibank data are distorted by the fact that the timing of recoveries at banks is affected by the desire to smooth earnings; further, Citibank was not as meticulous as the other studies in tracking recoveries to the year in which they occurred.  The net result is that recovery rates do appear to be cyclical, with higher losses (lower recoveries) in recessions. This implies that the rating agencies' practice of assuming recovery rates 10% below unbiased estimates (see Chart 1: Recovery Rates) is a reasonable adjustment, since in times of stress recovery rates can be expected to fall.

Industry correlations within the US display a cyclical component. Using 22 major industries and measuring their correlation with the S&P index over the trailing 24 months, we see that, on a rolling basis, correlations with the S&P aggregate index rise during recessions (Chart 5: Correlations). If we compare the average correlation with the S&P to the year-over-year change in the S&P index, we see a strong relationship between the two series, with correlations falling during healthy times for the S&P index (Chart 6: Correlations). Correlations between industry groups average around 45%, as opposed to around 60% for correlations with the S&P. In stressed times, correlations between industries rise by roughly 0.25 on average (an additive, not proportional, increase).
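A minimal sketch of the rolling-correlation calculation described above, using pandas on simulated monthly returns; the series, column names, and factor loading are placeholders of my own, not the actual 22-industry data.

```python
# 24-month rolling correlation of each (simulated) industry with a market index,
# averaged across industries at each date.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
dates = pd.date_range("1975-01-01", periods=288, freq="MS")
market = pd.Series(rng.normal(0.01, 0.04, len(dates)), index=dates, name="SP500")
industries = pd.DataFrame(
    {f"industry_{i:02d}": 0.8 * market + rng.normal(0.0, 0.03, len(dates)) for i in range(22)},
    index=dates,
)

rolling_corr = industries.rolling(24).corr(market)   # each column vs. the index
avg_corr = rolling_corr.mean(axis=1)                 # average across the 22 industries
print(avg_corr.dropna().tail())
```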

It has long been known in financial markets that during times of crisis, FX rate correlations increase dramatically. The Asian crisis is perhaps the best recent case, though the 1992 ERM crisis is also a relevant example. Yet, as mentioned above, it is often unappreciated that industry correlations within the US also rise. This has major implications for calculating capital. If we use current correlations, which are a weighted average of historical correlations over the past couple of years (nonrecessionary times), we will underestimate the correlations we should reasonably expect during the next economic downturn.  Yet another interpretation exists: correlations do not increase during a recession; rather, a recession is simply defined as a period in which correlations are high. That is, if we randomly generated 100 cross-sectional time series, the simulation with the highest correlation would tend to have the worst performance; correlations and recessions exist in an "if and only if" relation where one implies the other, and causation is ambiguous. The distinction is not merely academic. If correlations really do rise in recessions (not just measured or observed correlations), we should add roughly 0.2 to our correlation assumptions relative to currently estimated correlations; if, on the other hand, a recession is just a necessary correlate of a period of high correlations, such an add-on is not appropriate. In this latter view the correlation relationship is simply an epiphenomenon, which implies that we should use correlations estimated over an entire economic cycle, not just the past two years.

The effect of correlations on a portfolio's volatility can be seen in Chart 1: Correlations. Note that along the axis where correlations are 0, the law of large numbers allows diversification to greatly reduce portfolio volatility, and after about 100 obligors most of the diversification benefit has been achieved. In the extreme case of correlations equal to 1, by contrast, increasing the number of obligors does not reduce portfolio volatility at all. With correlations between 0 and 1 there is a large increase in volatility over the 0 to 0.2 range, and then an almost linear rise in portfolio volatility from 0.2 to 0.8, even for a portfolio with many obligors. For a well-diversified portfolio the increase in risk tracks correlation almost one-for-one: an increase in correlation from 0.4 to 0.5 raises portfolio variance by 25% (portfolio volatility by roughly 12%), irrespective of individual underwriting criteria. Thus although the Markowitzian formula tends to have little predictive power in explaining returns on a risk-adjusted basis, it still captures an important point of risk management: greater diversification reduces risk. Reducing risk through diversification is still a free lunch, and measuring and managing this risk factor should continue to be a strategic priority for senior portfolio managers.
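A minimal sketch of the relationship behind Chart 1, under the standard simplifying assumptions of equally weighted obligors with identical stand-alone volatility and a single common pairwise correlation (the 30% stand-alone volatility is illustrative):

```python
# Portfolio volatility for n equally weighted obligors with stand-alone volatility
# sigma and common pairwise correlation rho:
#   sigma_p = sigma * sqrt(1/n + (1 - 1/n) * rho)
import numpy as np

def portfolio_vol(sigma: float, n: int, rho: float) -> float:
    return sigma * np.sqrt(1.0 / n + (1.0 - 1.0 / n) * rho)

sigma = 0.30  # illustrative stand-alone volatility
for rho in (0.0, 0.2, 0.4, 0.5, 1.0):
    row = ", ".join(f"n={n}: {portfolio_vol(sigma, n, rho):.3f}" for n in (1, 10, 100, 1000))
    print(f"rho={rho:.1f}: {row}")
```

At rho = 0 the volatility falls rapidly with the number of obligors; at rho = 1 it never falls; and for a large portfolio the move from 0.4 to 0.5 raises volatility by roughly 12% (variance by 25%), consistent with the point above.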

Credit Spreads

Credit spread data for various investment grade securities show a highly cyclical pattern, but sometimes the spread increase precedes a recession (1990), and sometimes spreads increase throughout and after a recession (1975).  Chart 11: Correlations shows the Baa-Aaa industrial spread as a percent of the 5-year Treasury rate over the past several decades.  More relevant to banking portfolios are credit spreads for noninvestment grade loans, which are more difficult to obtain. Using data from DLJ we were able to reconstruct the spreads on publicly rated B and BB bonds over the past 10 years (Chart 2: Other). Note the rise during the 1990 recession, when spreads jumped dramatically during the Iraq-Kuwait crisis and in response to draconian regulatory changes concerning which institutions could own junk bonds. We also have data from Citibank on bank debt spreads for B and BB credits, which show a much more moderate fluctuation during this period, around 25-50 basis points, as opposed to the 800 basis point move seen in the public B-rated debt market.  In the third quarter of 1998, the Russian crisis and other problems helped push spreads on high yield debt up 200 basis points even as US default rates remained low.  Again, bank debt of similar risk--that is, of similar average spread--moved only 25 basis points.  This suggests that spread volatility for publicly traded debt is roughly ten times that of private bank debt.

One way to gauge which spread movement is more relevant (public vs. private) is to compare the Lehman Brothers High-Yield Bond Index to the S&P Bank Index and the Lehman Aggregate Index (Chart 10: Correlations). Note that the Bank index swung in lock step with the High-Yield Bond Index during the extreme moves of 1990-91 and 1998, suggesting that the publicly traded spread movement is more applicable to bank portfolio values than the slight variation in bank spreads. Yet this only shows a correlation between equity values and credit spreads, and while equity holders may discount cash flows more heavily, this does not necessarily affect the probability of default.  Thus even though the volatility in high yield bond spreads does not translate directly into volatility for middle market borrowers, bank values are affected by a factor common to the high yield bond sector.

This brings into focus one of the major unresolved areas of modeling credit risk. Clearly spreads affect the value of fixed income securities, including those in bank portfolios. Yet is it meaningful to include this spread volatility in a capital estimation? A 1,000 basis point move would impact the value of the portfolio by more than 20%.  When added to estimates of fluctuations in value from downgrades and defaults, we get capital figures well into the 30% range and beyond.  If we want to truly measure mark-to-market variability, spread risk is indeed significant, irrespective of upgrades and downgrades.
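To see where a figure like "more than 20%" can come from, the sketch below reprices a simple par loan after a 1,000 basis point spread widening; the 3-year tenor, 9% initial yield, and annual-pay structure are illustrative assumptions of mine, not figures from the text.

```python
# Reprice a par loan under a flat yield before and after a spread shock.
def price(coupon: float, yield_: float, years: int, face: float = 100.0) -> float:
    cash_flows = [coupon * face] * years
    cash_flows[-1] += face                      # principal repaid at maturity
    return sum(cf / (1 + yield_) ** t for t, cf in enumerate(cash_flows, start=1))

base_yield = 0.09                  # e.g., Treasury plus the initial spread
shocked_yield = base_yield + 0.10  # a 1,000 basis point spread widening
p0 = price(0.09, base_yield, 3)    # par at issue
p1 = price(0.09, shocked_yield, 3)
print(f"value change: {(p1 / p0 - 1):.1%}")     # roughly a -21% move for these inputs
```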

Recovery Rates

Before examining expected losses, we must address expected recovery rates. Data from Carey, Moody's, S&P, Fitch, and Altman were used to generate average estimates of recovery rates on senior secured bank loans (74%) and senior secured public debt (58%), which are approximately 10% above what the rating agencies "assume" in the evaluation of CLO transactions (Chart 1: Recovery Rates). Banks appear to have roughly a 16 percentage point higher average recovery rate than public bond holders on equivalent obligors. We can also see that recovery rates vary by seniority, but not by initial credit quality: subordinated debt has recovery rates of around 30%, relative to the 58% for senior secured public debt (Table 3: Recovery Rates).
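A minimal sketch of how these recovery assumptions feed into loss severity and expected loss: the recovery rates are the figures quoted above, while the 3.6% annual default rate is the speculative-grade estimate from earlier in this section, applied here purely for illustration.

```python
# Loss given default (LGD) and expected annual loss under the recovery figures above.
recovery = {
    "senior secured bank loan":   0.74,
    "senior secured public debt": 0.58,
    "subordinated debt":          0.30,
}
annual_default_rate = 0.036  # illustrative speculative-grade figure from the text

for instrument, rec in recovery.items():
    lgd = 1.0 - rec
    print(f"{instrument}: LGD {lgd:.0%}, expected annual loss {annual_default_rate * lgd:.2%}")
```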

Industry also affects recovery rates (Table 1: Recovery Rates). Waldman, Kane and Altman (Salomon Bros., High Yield Research, 3/26/96) find that these differences are statistically significant for only two industries out of thirty: Utilities and Chemicals (which includes Petroleum, Rubber and Plastics). The other industries may have significant differences, but since so few defaults have occurred it is impossible to infer whether most differences between industries are due to random or systematic factors.  It seems prudent to ignore industry effects and hope that these differences will be captured by the "secured" status of loans in industries with higher recovery rates.

Annual Loss Rates

The combination of default and recovery rates gives us expected loss rates over time. These loss curves are especially informative when put into annualized terms (Chart 5: Default Rates). Altman estimated loss curves using data from 1970-90, and we used the Moody's idealized loss curves because they meld default and recovery rate information in a thoughtful way and are actually used in validating CBOs.  S&P default rates over the 1970-97 period were combined with a recovery rate estimate of 58% to generate these figures.  These S&P annualized loss curves actually decrease over tenor for B rated loans, from 3% to 2% between 1 and 10 years, while BB annual loss rates increase and level off at 1% at 5 years. The loss rates for BBB loans are around 0.25% in both studies. In contrast, Altman found dramatically increasing loss rates for B rated loans from 1 to 5 years, from 0.5% to 3.5%, while his BB loans approximated ours somewhat more closely at the longer maturities, rising from 0.1% to 1% over 5 years. Since S&P and Moody's give explicit rating upgrades to bank, secured, and senior debt based on recovery rate assumptions, it is the annual loss rates associated with ratings that should interest us most.  Further, Moody's Binomial Expansion Technique for CBOs suggests that target debt ratings are achieved by targeting a loss rate.  That is, the rating agencies seem to map ratings into losses first and foremost, and so should our intuition.
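A minimal sketch of the construction described above: cumulative default rates by tenor are combined with a recovery assumption and converted to average annual loss rates. The 58% recovery comes from the text; the cumulative default curve is a hypothetical B-rated placeholder chosen to show the qualitative shape, not the S&P data.

```python
# Combine cumulative default rates with a recovery assumption, then annualize.
cumulative_default = {1: 0.07, 3: 0.17, 5: 0.24, 10: 0.35}   # hypothetical B-rated curve
recovery = 0.58

for tenor, cum_pd in cumulative_default.items():
    cum_loss = cum_pd * (1.0 - recovery)
    annual_loss = 1.0 - (1.0 - cum_loss) ** (1.0 / tenor)     # geometric annualization
    print(f"{tenor:>2}-year: cumulative loss {cum_loss:.1%}, annualized {annual_loss:.2%}")
```

Note that the annualized rate declines with tenor for this curve, the same pattern described above for B rated loans.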


The estimation of annual loss rates is paramount.  To calculate a RAROC for a loan, one cannot assume the current spread will be realized in full: the full spread is earned only if there is no default, and since these loans are not riskless, that is an invalid and very material assumption.  The expected loss must be subtracted from the current spread, which highlights the importance of knowing which period is relevant for default estimates (e.g., was 1990 an anomaly?) and which dataset is relevant for recovery rates (senior? secured? syndicated bank debt?).
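A minimal sketch of netting expected loss out of the spread before computing a RAROC-style return; every input below (spread, default rate, loss given default, capital allocation) is an illustrative assumption, not a recommendation from the text.

```python
# Risk-adjusted return on capital for a single loan, with expected loss netted out.
spread = 0.0250             # contractual spread over funding cost
annual_default_rate = 0.02  # illustrative, e.g. a BB-area estimate
loss_given_default = 0.26   # 1 - 74% bank-loan recovery from the text
capital = 0.08              # allocated capital per dollar of exposure

expected_loss = annual_default_rate * loss_given_default
risk_adjusted_return = (spread - expected_loss) / capital
print(f"expected loss {expected_loss:.2%}, risk-adjusted return on capital {risk_adjusted_return:.1%}")
```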

Transition Matrices

Transition matrices from S&P and Moody's tell a similar story (Tables 1&2: Transition Matrices). The probability of remaining in the same grade declines as credit quality declines: the probability of staying AAA over a year is around 88% in both studies, while it is around 75% for B rated debt. This makes intuitive sense, in that not only should we expect default rates for lower rated securities to be higher, but the volatility of the credit state is higher as well. Further, the transition probabilities themselves vary over time: the 1990 transition matrix is shifted significantly to the right (i.e., toward downgrades) relative to other years. Note also that the probability of being upgraded more than one grade (e.g., BBB to AA) is virtually zero, while the probability of being downgraded tends to decline exponentially, so that the probability of moving down two grades is about 1/4 of that of moving down one grade, the probability of moving down three grades is about 1/8 of that, and so on, although there does appear to be a skip over the CCC grade: the probability of moving to D is greater than the probability of moving to CCC, regardless of where one starts above CCC. The transitions are in the family of absorbing-state Markov chains, with maturity and default as the absorbing states.
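A minimal sketch of the Markov-chain view: a one-year matrix raised to a power gives multi-year transition probabilities, with default as an absorbing state. The coarse 4-state matrix below is hypothetical, not the S&P or Moody's estimates, and for simplicity it omits the matured/withdrawn state.

```python
# Multi-year transition probabilities from a one-year matrix via matrix powers.
import numpy as np

states = ["IG", "BB", "B", "D"]
one_year = np.array([
    [0.93, 0.05, 0.015, 0.005],
    [0.06, 0.83, 0.08,  0.03 ],
    [0.01, 0.07, 0.84,  0.08 ],
    [0.00, 0.00, 0.00,  1.00 ],   # default is absorbing
])
assert np.allclose(one_year.sum(axis=1), 1.0)  # rows are probability distributions

five_year = np.linalg.matrix_power(one_year, 5)
for state, row in zip(states, five_year):
    print(state, np.round(row, 3))
```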

KMV provides an independent validation of these general traits of transition matrices (Table 3: Transition Matrices). Higher credit grades are less subject to upgrades and downgrades, and upgrade probabilities tail off more quickly than downgrade probabilities. Their estimates show much more migration, however, with diagonal probabilities generally 30-40% lower, and it remains an open question which is the better view of the true transition tendency.

Internal bank data tends to corroborate the S&P and Moody's data.  KMV would surely argue that banks, like rating agencies, are subject to the same "status quo" bias, and perhaps they are. 

It is important to point out that, given the many elements of a transition matrix, many anomalies will occur where, for example, the sample average probability of moving down two grades is greater than that of moving down one grade for a particular credit grade. This is especially true of industry-specific transition matrices, which necessarily contain smaller samples. These should be interpreted as statistical anomalies, and thus one should always smooth such data prior to incorporating them in an exercise.  This is corroborated by looking at 5-year transitions, which show a predictable monotonic decline in transition probability as one moves away from the diagonal (Tables 5&6: Transition Matrices).  Because the longer time period uses more data, the anomalies in the annual matrices reveal themselves to be statistical flukes.
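One simple way to do such smoothing (an approach of my own choosing, not one prescribed in the text) is to shrink a sparse industry-specific matrix toward the better-populated all-industry matrix and renormalize the rows; in practice the blending weight could reflect the relative sample sizes.

```python
# Shrink a noisy industry-specific transition matrix toward the aggregate matrix.
import numpy as np

def smooth(industry: np.ndarray, overall: np.ndarray, weight: float = 0.5) -> np.ndarray:
    """Blend the two matrices, then renormalize each row to sum to one."""
    blended = weight * industry + (1.0 - weight) * overall
    return blended / blended.sum(axis=1, keepdims=True)

# Tiny 3-state example with hypothetical probabilities: the industry estimate has a
# row where down-two exceeds down-one (a small-sample artifact); the blend pulls it
# back toward the aggregate.
overall = np.array([[0.90, 0.08, 0.02],
                    [0.05, 0.85, 0.10],
                    [0.00, 0.00, 1.00]])
industry = np.array([[0.88, 0.04, 0.08],
                     [0.05, 0.85, 0.10],
                     [0.00, 0.00, 1.00]])
print(np.round(smooth(industry, overall), 3))
```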

Move to next section on capital allocation

Back to outline