
Reprinted from Bank Accounting and Finance, Fall 1998

Eric Falkenstein is senior vice president for capital allocation at KeyCorp, Cleveland.

The nitty-gritty details of assigning capital to different activities can derail the most sophisticated risk-management process. This article outlines the process and benefits of assigning capital and suggests some ways to approach interest-rate and commercial loan capital allocations.


Integrating Quantitative Risk Management through Economic Risk Capital

Eric Falkenstein

Bank risk management is at a unique crossroads. With the advent of powerful computers, we are able to store and analyze data in a way that was not feasible previously. Not surprisingly, this increase in ability has been accompanied by the growing popularity of new analytical risk tools such as value at risk (VAR) for market risk and CreditMetrics™ or CreditRisk+ for credit risk. Furthermore, securitizations, collateralized loan obligations, and other credit derivatives provide validation and relevance for risk measures. With this information and these markets, quantitative firmwide risk management is feasible. Exploiting this competitive tool will be extremely valuable in the future, as derivatives and regulatory changes will translate the ability to quantify risk directly into a bank's bottom line.

Of course, banks have used quantitative methods since the Medicis, but today’s ability to aggregate risk information across a variety of activities in a meaningful way is unprecedented. Consider the common method of firmwide risk management, which relies on a variety of qualitative and quantitative information and intends to give senior management an overview of risk. Most quantitative information is presented piecemeal, scarcely aggregated within a single portfolio, let alone across different activities. It is impractical to expect senior management to aggregate this kind of information in a meaningful way. Today, we are awash in data; the key is turning data into knowledge. Risk is not represented by information, but by information organized in a meaningful way. For a risk measure, this means a number that is calibrated and validated.

"Today, we are awash in data; the key is turning data into knowledge."

A firmwide quantitative system for risk management also exploits economies of scale. For example, a small bank can manage its exposures through extensive discussions with its senior management and careful examination of relevant financial statements. For banks with thousands of customers, extensive conversations with senior management must be delegated, and the senior risk manager needs a comprehensible synopsis with which to analyze the quality of the portfolio and underwriting standards. In this situation, there is no way to avoid some method of aggregation, and aggregation invariably is quantitative.


Any risk-management system encompasses several areas: establishing underwriting standards, formal reporting of asset quality, position limits such as concentration rules, and performance-based compensation. What is new is the ability to meaningfully assess an equity charge to augment this list of traditional risk-management functions. Michael Jensen has argued forcefully that leveraged buyouts (LBOs) were successful in the 1980s primarily because they forced management to economize on capital; increases in debt disciplined management by forcing managers to give back to investors free cash flow instead of wasting it on value-destroying projects that served to build insiders’ empires at the expense of firm value.1 Now it is recognized that equity is expensive precisely because insiders have no hard obligation to pay it back (unlike debt service). For example, Stern Stewart’s concept of economic value added, or EVA™, has helped popularize the notion of holding insiders accountable for the cost of equity capital.2 The next step in this development is to apportion equity within an organization according to the risk of the various operations. That is, it is a major advance to hold management accountable for the cost of equity capital for the entire firm, but allocating this equity charge to various lines of business within a bank takes this concept to the next level. This risk-capital charge can be worked into underwriting, reporting and compensation and allows one to assess the desirability of keeping assets on or off the balance sheet. Further, this exercise can help the business line directly by replacing rationing with pricing (there are no bad assets, only bad prices). Throughout this process, risk managers become less like policemen and more like partners with the business line. In fact, a good sign of a successful risk-management operation is its integration with the revenue side of the bank.

Thus, the new focus of firmwide risk management is economic risk-capital apportioning. This sort of risk measurement allows unambiguous comparison of one activity to another. There are three main drivers of this approach:

  • Ultimately, total risk should be measured by one consistent number;
  • If total equity is to be apportioned, a risk-based allocation rule dominates alternatives;
  • Regulators.

One Number Should Represent Total Risk

To compare Bank A to Bank B, the total risk of both banks must be represented by one number. For example, consider two portfolios with different balances in banks A and B (Exhibit 1). 

Exhibit 1
Hypothetical Portfolios of Banks A and B

                  Credit Card    Commercial Real Estate    Total Earning Assets    Total Risk
    Bank A           $10B                 $5B                      $15B                 ??
    Bank B            $5B                $10B                      $15B                 ??


Now if we were only considering credit cards, we could easily say that Bank A is riskier than Bank B. But given that the banks are active in two business lines, it is not clear how to rank order Banks A and B in terms of risk. If one thinks that all lines of business are equally risky, then Banks A and B are equally risky, since they have identical diversification and total balances. Yet if the commercial real estate business is much riskier than the credit card business, Bank B is riskier. It all depends on how to weigh these two business risks; the information is incomplete. We have to implicitly weigh the importance of credit card and commercial real estate, as well as estimate their covariances, in order to rank the riskiness of the two banks. If the risk-management group does not do this, those reviewing the data will make ad hoc rankings. To avoid spelling out the precise weighting of different risk factors, for example default risk versus loss in event of default, is to leave the final risk measurement subjective and undocumented.
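To see how the weighting becomes mechanical once it is made explicit, the two business lines' risks can be combined with an assumed correlation, as in portfolio theory. The loss volatilities (2% for credit card, 6% for commercial real estate) and the 0.3 correlation below are purely hypothetical; the point is only that once these assumptions are written down, the ranking of Banks A and B follows automatically:

```python
import math

def portfolio_risk(balances, vols, corr):
    """Combine two business-line risks into one dollar risk number.

    balances: dollar exposure per line; vols: loss volatility per line
    (fraction of balance); corr: correlation between the two lines.
    All parameters here are hypothetical illustrations.
    """
    a = balances[0] * vols[0]
    b = balances[1] * vols[1]
    return math.sqrt(a * a + b * b + 2 * corr * a * b)

# Hypothetical: credit card loss vol 2% of balance, CRE loss vol 6%
bank_a = portfolio_risk([10e9, 5e9], [0.02, 0.06], 0.3)
bank_b = portfolio_risk([5e9, 10e9], [0.02, 0.06], 0.3)
print(bank_a < bank_b)  # True: the CRE-heavy bank is riskier under these weights
```

Under a different (also defensible) set of volatilities the ranking could reverse, which is exactly why the weights must be documented rather than left implicit.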

"To avoid spelling out the precise weighting of different risk factors is to leave the final risk measurement subjective and undocumented."

Consistency is also important. Many assessments grade risks from 1 to 10, but, unless these scales are comparable, comparison between units is impossible. For example, suppose real estate loans average a score of "5" and middle-market loans also average a "5." If a 5 in real estate is riskier than a 5 in middle-market loans, then the risk measurement is incomplete.

Every business’s risk is a function of many different attributes; the key is to assign a total number to this amalgam of information. Aggregating these attributes requires making assumptions, but making these subjective assumptions explicit does not increase subjectivity (although it may highlight it). A single risk measure based on assumptions and uncertainties may not be perfect, but by presenting a final number with supporting documentation, risk managers can help senior management understand and use the risk measure. In order for risk to underlie a rank-ordering (that is, a direct comparison between two areas), total risk must be measured by a single, consistent number.

To Apportion Total Equity, a Risk-Based Allocation Rule Is Needed

By charging lines of business for equity, an organization ensures that managers economize on capital and evaluate projects in light of the cost of equity. No shareholder wants businesses content with any positive cash flow, regardless of the size of that cash flow relative to the equity base employed. Corporations that charge internal divisions for equity in order to garner the benefits of an LBO without losing control must allocate this equity logically.

Suppose there are two business lines, A and B, and total equity for the corporation is $100. Since equity is required in proportion to risk, economic risk capital can be a common yardstick. If the risk number for A is 15 and for B is 25, risk capital can be apportioned pro rata: business line A gets $37.50 of risk capital and B gets $62.50. What are the alternatives? One is to use regulatory capital, which assigns equity regardless of credit quality and only by product type: a subprime auto loan gets the same equity assignment as an investment-grade large-corporate loan. Another alternative is noninterest expense, which again ignores differences in loan quality, among other factors.
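The pro-rata rule is a one-liner; a minimal sketch using the $100 / 15 / 25 figures from the text:

```python
def allocate_equity(total_equity, risks):
    """Apportion total equity pro rata to stand-alone risk numbers."""
    total_risk = sum(risks.values())
    return {line: total_equity * r / total_risk for line, r in risks.items()}

allocation = allocate_equity(100.0, {"A": 15, "B": 25})
print(allocation)  # {'A': 37.5, 'B': 62.5}
```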

To hold business lines accountable for the cost of equity, one must consider the relative riskiness of the different businesses. This ensures a sense of fairness, a political prerequisite for any attempt to apportion equity. Further, since required capital is an increasing function of risk, a risk-based equity allocation is imperative for accurate return on equity (ROE) evaluation.


Regulators

Regulations can be a force for good in an industry, especially if regulators realize that it is impossible to mandate cookie-cutter solutions to difficult problems. The S&L debacle, which was most assuredly exacerbated by regulatory ground rules and the disastrous attempts to redefine the rules in time of crisis (that is, goodwill amortization), should result in a much wiser regulatory body going forward. In this environment, our regulators are now implementing risk-based capital standards for market-risk activities (recommended by the Bank for International Settlements [BIS]) for the major banks. This approach allows banks to use their internal models for measuring risk capital and to capture subtleties such as various correlations that would be otherwise impossible for a regulator to spell out in a directive. This process, which may become the template for future risk-based capital standards for credit activities, greatly favors the bank with a well-developed internal capital model.3 In any case, it is always optimal to be at least two steps ahead of government regulations so that one can find opportunities in regulations, rather than be constrained by them.


Uses of Economic Risk Capital

There are three main uses of economic risk capital: pricing, performance evaluation, and regulatory arbitrage. Performance evaluation can lead to more informed incentive compensation plans, better evaluation of a business’s value added, and a better ability to assess the merits of securitization.


Pricing

All standard microeconomics textbooks show that one maximizes profits by taking into account two bits of information: the marginal cost curve and the demand curve. For costs, the "marginal" component implies that one uses only marginal or incremental costs when determining optimal pricing. This may seem counterintuitive, in that conceivably an optimal price will not cover total costs, which include fixed costs. While this is true, it just means that in certain cases the optimal price minimizes losses instead of maximizing profits. While this is unfortunate, using both marginal and fixed costs in the pricing decision would produce an even larger loss.

The best way to incorporate capital into pricing is to think of it as a costly resource, just like internal funding or a loan-loss provision. Usually one adds a pretax credit for equity (which earns, say, the Fed Funds rate), and a posttax expense (at what one determines to be the cost of equity capital, usually between 12% and 15%). The worst way to incorporate capital is to determine the spread on the product that produces a hurdle rate ROE. This only brings one to a breakeven price that adds no value to the bank. In addition, ROE-based pricing models ignore the demand curve, which is the other bit of essential pricing information. More precisely, one needs to know the elasticity of the demand curve, which measures the tradeoff between the loss in volume with any increase in spread. One needs to know the market, what competitors are charging, and how they might react in order to price optimally.
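As an illustration of treating capital as a costly resource, the following sketch prices a hypothetical loan with a pretax credit for equity invested at the funds rate and an after-tax charge at the cost of equity. The rates, tax rate, and capital ratio are illustrative assumptions, not prescriptions:

```python
def loan_economics(balance, spread, capital_ratio,
                   funds_rate=0.05, cost_of_equity=0.13, tax_rate=0.35):
    """After-tax economic profit on a loan, treating equity as a costly
    resource: a pretax credit for equity invested at the funds rate,
    then an after-tax charge at the cost of equity. All figures hypothetical.
    """
    capital = balance * capital_ratio
    pretax = balance * spread + capital * funds_rate  # spread income plus equity credit
    after_tax = pretax * (1 - tax_rate)
    return after_tax - capital * cost_of_equity       # economic profit after capital charge

# A hypothetical $1MM loan at a 200 bp spread holding 6% capital
print(round(loan_economics(1_000_000, 0.02, 0.06), 2))  # 7150.0
```

Note that this is a cost calculation only; as the text stresses, the demand curve (elasticity, competitor pricing) is the other half of the pricing decision.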

A risk-capital number is not sufficient to determine pricing; it is only part of the picture. Capital is a marginal cost, but so are other important items such as marginal noninterest expense and loan provisioning. A capital allocation expert should not be expected to generate optimal pricing sheets; rather, the information should be allocated out internally just like other costs, so that pricing decisions in the field are made with full recognition of total marginal costs. Ultimately, optimal pricing is as much an incentive compensation issue as it is a cost issue. For example, if someone making pricing decisions is compensated or evaluated on volume, or on ROE, perverse results should be expected.  From the narrow perspective of capital allocation, all we can do is note that without accurate costs for equity capital, the ultimate pricing decision will be worse than it would otherwise be.

Performance Evaluation

Capital is a costly resource, at least twice as costly as debt funding. Determining how much is used is essential in evaluating the value added of any activity. Two approaches are most useful. First, an ROE is often quite useful in determining whether a business is adding value: if its ROE is above the cost of equity capital (say, 12%), it is adding value. This is clearly a short-run bit of information; there would be few businesses remaining if banks exited every time an ROE dipped below a hurdle rate. Unlike a simple income measure, however, a risk-based ROE is not blind to off-balance-sheet risks and does not confuse positive income with true value added.

Second, a value-added component helps you understand the importance of the business. While ROE tells you if the business is currently adding value, it doesn’t tell you how much value. A business with an ROE of 25% may be much more valuable than one with an ROE of 50% if it employs far more capital. For that evaluation, you need to look at after-tax net income after capital costs. Basically, this is net income after a charge for the cost of equity capital, called economic value added (EVA™). It measures true value added, since it contains both income and capital information. A common approach is to portray both an ROE and net income for a line of business; EVA neatly combines the two into one number, thus serving as an undistorted picture of incremental value added for performance evaluation and incentive compensation.
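The ROE-versus-EVA distinction can be seen with two hypothetical businesses: a small desk with a spectacular ROE and a large business with a moderate one. The income and equity figures are invented for illustration:

```python
def eva(net_income_after_tax, equity, cost_of_equity=0.12):
    """Economic value added: income net of a charge for equity capital."""
    return net_income_after_tax - equity * cost_of_equity

def roe(net_income_after_tax, equity):
    return net_income_after_tax / equity

small = {"ni": 5.0, "equity": 10.0}    # hypothetical: ROE 50%
large = {"ni": 50.0, "equity": 200.0}  # hypothetical: ROE 25%

print(roe(small["ni"], small["equity"]) > roe(large["ni"], large["equity"]))  # True
print(eva(large["ni"], large["equity"]) > eva(small["ni"], small["equity"]))  # True
```

The small desk wins on ROE, but the large business adds far more absolute value (EVA of 26 versus 3.8 in these units), which is the point of combining income and capital in one number.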

Regulatory Arbitrage

Regulators will always treat assets more uniformly than cutting-edge risk managers would. This is because they must write rules for a large, heterogeneous universe with less direct access to risk information. For example, currently all loans to commercial counterparties are considered equally risky, regardless of their balance sheets and earnings. Thus high-quality counterparties will require less economic risk capital than regulations demand, and if regulatory capital is guiding your capital structure, it often pays to move these assets off the balance sheet. Alternatively, you can keep the assets on the balance sheet but remove some of the regulatory weight through credit-linked notes or guarantees (that is, reduce risk-weighted assets but not gross assets). The key is to remember that economic risk capital is akin to the unrated piece required when moving assets off the balance sheet: the low-level recourse that a bank retains after transferring a senior piece to another entity.


Implementation

Though apportioning equity capital is tremendously valuable, one must acknowledge that the method is in its infancy. This brings special difficulties to its implementation but also greater rewards for doing it right. Worrying about which precise method to use and avoiding the process is like worrying about which pew to sit in before you choose your house of worship. Further, there is a learning curve each institution will face; it appears to take a few years to get workable results. The important point is not to focus on the metric, but instead on the goal. As methods converge, institutions that have been using an "almost correct" approach will be in a good position to amend their calculations, because any good economic equity method requires basically the same set of inputs. The main difficulty in initial implementation is getting a management information system (MIS) to extract the relevant data so that one can estimate risk capital; corrections to the precise capital algorithm are relatively straightforward once this infrastructure is in place.

While the precise calculations of firmwide quantification are outside the scope of this article, I can outline the defining characteristics of the algorithm. It should measure a worst-case scenario for the portfolio using relevant pricing models and historical data, and it should take into consideration correlations with other activities in the bank. This does not mean a theoretical worst-case scenario (usually losing everything) but an empirically based worst-case scenario. It is often useful to concentrate on a 99th percentile event, since this forces one to address the specifics of the asymmetry in the loss profile. Arguments about specific measures of risk capital are mainly innocuous, since it is the relative riskiness of these activities that should matter most. For most portfolios, the relative rankings will be similar using either a 99.5% or 99.9% scenario.4
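To see why rankings tend to be stable across nearby confidence levels, one can compare empirical tail quantiles of two simulated loss distributions. The exponential distributions below are purely illustrative stand-ins for skewed loss profiles, not a model of any real portfolio:

```python
import random

def percentile(losses, q):
    """Empirical q-quantile of a simulated loss distribution."""
    s = sorted(losses)
    return s[min(len(s) - 1, int(q * len(s)))]

random.seed(0)
# Hypothetical skewed loss distributions for two portfolios
port_x = [random.expovariate(1 / 2.0) for _ in range(100_000)]  # mean loss 2
port_y = [random.expovariate(1 / 3.0) for _ in range(100_000)]  # mean loss 3

for q in (0.995, 0.999):
    print(q, percentile(port_x, q) < percentile(port_y, q))  # ranking is the same at both levels
```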

"The main obstacles to firmwide risk management are not deep concepts, but nitty-gritty details of how to assign capital to different activities."

While many consultants are eager to help one use their economic risk-capital information, few desire to help construct it. The main obstacles to firmwide risk management are not deep concepts, but nitty-gritty details of how to assign capital to different activities. As any chess player knows, strategy not mated to tactics is doomed to failure. Therefore, I will describe how to examine interest-rate and commercial loan risk, while giving a more cursory view of other risks. No article can be an off-the-shelf template for firmwide implementation of a risk-management system, since it is crucial to tailor items to each bank’s particular idiosyncrasies. Instead, I will show how to apply the basic principles with concrete examples.


Interest-Rate Risk

Since credit risk is measured in terms of its effect not on earnings but on solvency, which is a measure of firm value, it is necessary to put interest-rate risk in similar terms. For interest-rate risk, the implication is clear: move away from earnings measures of risk toward value at risk, not simply for the available-for-sale portfolio but also for held-to-maturity assets. Perhaps the most important point of a value-at-risk (VAR) method for the balance sheet is the recognition that a security’s losses are real even when they are not being marked to market. For example, if the long end of the yield curve increases while the short end stays put, the 30-year bond loses value but expected nominal earnings from that bond are unchanged over the next 12 months. What is the loss? In effect, the loss is the discounted value of the opportunity cost of being stuck in low-yielding securities in a future with higher expected rates. This affects the firm’s value today, which affects the risk inherent in firm equity. To ignore this because it does not affect year-ahead earnings is incorrect. Another way to look at this is to note that changes in value in a VAR are simply the present-valued changes in earnings, without an arbitrary cutoff at 12 or 24 months.
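The 30-year bond example can be made concrete with a simple annual-coupon pricing sketch; the 6% coupon and the 100 basis-point rise in long yields are hypothetical:

```python
def bond_price(face, coupon_rate, yield_rate, years):
    """Price a fixed-coupon bond with annual payments by discounting cash flows."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + yield_rate) ** t for t in range(1, years + 1))
    return pv_coupons + face / (1 + yield_rate) ** years

# A 30-year 6% bond: long rates rise 100 bp while the short end stays put
before = bond_price(100, 0.06, 0.06, 30)  # priced at par
after = bond_price(100, 0.06, 0.07, 30)
print(round(before - after, 2))  # about 12.4: an economic loss today,
                                 # though next year's coupon income is unchanged
```

The 12-point drop never shows up in a 12-month earnings forecast, which is exactly the blind spot a value-based measure removes.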

Superior risk measures present two main implementation challenges. To estimate the change in value of a mortgage portfolio or interest-rate cap, one needs a sufficient pricing algorithm, usually packaged in a derivatives pricing system such as those offered by QRM, RADAR, or Algorithmics, or in less expensive spreadsheet derivative calculators such as Financial CAD, Tech Hackers, or FEA. Since mortgage-based products present special difficulties and, therefore, require the most expensive pricing software, it may be sufficient to map mortgages into buckets and then estimate their risk by examining the performance of various agency mortgage-backed securities, such as the newly issued GNMA 7.0% 15-year fixed-rate mortgage. This covers the pricing knowledge, that is, how to translate shifts in yields, etc., into changes in product values. The next bits of information required are the volatilities and correlations of the various risk factors, such as a three-standard-deviation move in the five-year Treasury yield. While this may seem like a very demanding process, the alternative is to generate several earnings scenarios and let the CFO implicitly calculate the probabilities for himself or herself.

A second reason to move from earnings scenarios, such as a 200 basis-point shock, to VAR is that earnings scenarios are limited in the number of risks they can portray. As banks have recently found, there are other risks to their balance sheets than parallel movements up and down; the recent flattening of the yield curve has lowered the net interest margin of most banks. Thus, one must consider flattenings, steepenings, Treasury-swap spread shifts, prepayment movements, etc. Again, to leave these scenarios spelled out but disaggregated is to leave the analysis incomplete. VAR weighs all these scenarios probabilistically, does not arbitrarily truncate the effect on earnings at 12 months, and present-values the effect of the changes on the value of the bank.

The precise method to calculate a VAR will not be addressed here,5 but the unifying theme to a VAR calculation is product scope and risk-factor comprehensiveness. In fact, these factors are also important for credit risk. Product scope means understanding how to value—from first principles, not broker quotes—all the relevant derivative securities in one’s bank, such as caps, index-amortizing swaps, and callable bonds. Risk-factor comprehensiveness means addressing yield curve shift and twist risk, option risk, prepayment risk, various spread risks, etc., for all these various securities. For example, to calculate the risk of a Treasury position one needs to know how to price a Treasury from the yield curve and how much the Treasury yield curve can be expected to move over a reasonable worst-case scenario. For a cap, one needs to know how to price this instrument and then revalue it over reasonable worst-case scenarios for swap rates and implied volatility movements.

Deposits deserve special mention, since a bank’s firmwide VAR is highly influenced by assumptions about them. The duration of deposits (that is, the sensitivity of deposit value to changes in interest rates) is affected by the lagged response of deposit rates to changes in the yield curve, as well as by the average life of deposits and how that life changes with rates. Most studies of deposit duration give a large band of reasonable estimates, often from two to five years. This issue arises with other products as well, such as the prepayment assumptions on many loans (especially residential mortgages and subprime loans). This uncertainty should not be ignored, but quantified. If you suspect that anywhere from two to five years is the correct duration of deposits, then this is a real risk to your balance sheet, although for all intents and purposes an unhedgeable one. That is, parameter uncertainty creates what traders call basis risk: variability in a net position that cannot be hedged. This risk necessitates capital. Thus, while you might target a firmwide duration of equity of four years, uncertainty about the deposit-duration assumption implies an actual duration anywhere from three to five years. This adds uncertainty and, hence, volatility to the bank’s value.
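A first-order (duration-only) sketch shows how large the deposit-duration band can be in value terms; the $10B deposit base and 100 basis-point shift are hypothetical:

```python
def value_change(balance, duration, rate_shift):
    """First-order change in value: dV is approximately -duration * d(yield) * balance."""
    return -duration * rate_shift * balance

deposits = 10e9   # hypothetical deposit base
shift = 0.01      # 100 basis-point rate rise
# Duration studies put deposit duration anywhere from two to five years
band = abs(value_change(deposits, 5.0, shift) - value_change(deposits, 2.0, shift))
print(band)  # a $300MM valuation band from parameter uncertainty alone
```

That $300MM band cannot be hedged away by picking a point estimate; it is basis risk and, as the text argues, it requires capital.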

I have found that those in charge of areas with risks that cannot be hedged tend to find measuring such risks pointless. If this risk is something the bank cannot change, and even expert opinions are quite varied, why bother measuring the uncertainty? Why not just pick a number and stick with it? It is true that for day-to-day management, uncertainty about the duration of demand deposits is irrelevant: the asset/liability strategy has to assume a point estimate for deposit duration and convexity. From a broader perspective of strategic evaluation (such as risk-adjusted return on capital, or RAROC) and pricing, however, this risk makes a difference. It is an unavoidable cost to be sure, but a cost of business nonetheless: a cost of equity capital. This highlights the importance of apportioning economic capital; otherwise, the business’s true costs are not recorded, and the business may think it has an attractive ROE when in fact it does not.

Interestingly, quantification of interest-rate risk does not imply the end of liability sensitivity for most banks. Since credit and interest-rate risks are not perfectly correlated and the yield curve usually exhibits a steep slope up to three years, a firmwide view will put the optimal interest-rate position in most yield-curve environments at something other than neutral (although less so the flatter the curve). Further, the VAR approach should highlight the attractiveness of moving away from simply riding the curve (that is, assuming a basic liability-sensitive position) to taking on other uncorrelated yield-curve positions through basis swaps, volatility plays, and second-factor yield curve strategies. In these cases, the power of Markowitzian diversification really pays off.


Credit Risk

The largest distinction in bank credit risk is between commercial and consumer loans. These loans behave very differently, with consumer loans displaying higher but more predictable losses. Further, the obligors have different risk attributes: business customers have balance sheet and earnings information, while consumers have credit-score information. Consumer loans turn over more quickly, allowing quicker evaluation, while a commercial loan portfolio takes much longer to evaluate (a full credit cycle). For both types of loans, however, risk buckets and transition matrices should be used to evaluate risk. Loans should be segmented into risk buckets, such as grades, and then the probability of moving from bucket 1 to bucket 2, from bucket 6 to default, etc., should be estimated. This makes it possible to calculate expected losses and the variability of those losses.
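A transition-matrix sketch with hypothetical probabilities shows how bucketed grades produce cumulative default estimates; default is modeled as an absorbing state, and multiyear probabilities come from powers of the one-year matrix:

```python
# States: grade 1, grade 2, default (absorbing). All probabilities hypothetical.
transition = [
    [0.90, 0.08, 0.02],
    [0.05, 0.85, 0.10],
    [0.00, 0.00, 1.00],
]

def mat_mul(a, b):
    """Plain matrix multiplication for small transition matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

# Cumulative default probability over n years = (T^n)[grade][default]
two_year = mat_mul(transition, transition)
print(round(two_year[0][2], 4))  # 0.046: grade-1 cumulative two-year default probability
```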

For commercial loans, the first distinction is a risk grade, as in BB+ or A- public debt, which implies an ability to map internal loan grades into publicly traded grades. So, one must know whether to map loans rated "4" internally into BB+ or BB-. A quantitative guide, such as that provided by Zeta Services, LPC, or KMV, can be used to gain confidence that loans that are not public debt are mapped appropriately.6   At most banks, most loans fall into the non-investment-grade category. Non-investment-grade debt has an annual loss rate of around 3%, well above most banks’ loss experience on these loans (about 0.6% annually, industrywide, over the cycle). Banks’ lower loss rates highlight the distinction between passively managed debt and actively managed debt as well as the greater collateral coverage of bank loans. Various loan covenants and auxiliary relationships between the bank and the borrower do not exist between publicly traded debt holders and debt issuers. Any mapping into public debt should take this into consideration. A recent Moody’s report discusses these issues and, in general, finds that bank debt should be rated one notch higher than its corresponding public debt for a similar obligor (while expected loss rates may be much different, the volatilities of these loss rates are more similar).7

Why map loans into Moody’s and S&P grades? Most banks simply do not have the type of historical information contained in these benchmarks. Very few banks can analyze their internal grading system over an entire credit cycle; probably none have had their current ranking system in place over a couple of cycles. Using only one or two cycles is troubling, because each recession is different and extrapolating from one instance is unreliable. The best we can do is to examine the behavior of bonds over the past 25 years, since years before 1970 are not representative of current markets.8 For commercial loans, using only an expansionary period (for example, 1992 to present) to estimate expected losses and their variability would generate wildly inaccurate numbers.

"Once loans are mapped into grades, the data should be sliced into time-to-maturity buckets."

Once loans are mapped into grades, the data should be sliced into time-to-maturity buckets. There is a risk premium between risk grades (such as AAA and B), and this premium increases with time-to-maturity, or tenor. For example, in one study the annual cumulative loss rate on a one-year B loan is 99 basis points; on a five-year B loan it increases to 514 basis points. Therefore, a five-year B loan has more risk, even over a one-year horizon, than a one-year B loan, and it should be counted accordingly.9

One then adds information on the estimated loss in the event of default (LIED), which usually varies across collateral types and business lines. Clearly, if the LIED for one loan is 40% and for another is 80%, then, all else constant, the loan with an 80% LIED is riskier. Distinctions can be made for collateral types such as real estate, cash, accounts receivable, etc.

With an obligor rating for the probability of default and a facility rating for the loss in the event of default, we have moved from measuring commercial loans along a single risk dimension (for example, B versus A) to two dimensions (five-year B versus two-year A) to three dimensions (five-year B with a 35% LIED versus two-year BBB with a 60% LIED). These are essential inputs to any meaningful quantification of commercial loan risk. As mentioned above, your choice of a one-year horizon or the life of the loan, or of a 99.9% or 99.5% confidence interval, is less important than having good information on tenor, default probability, and LIED. The ultimate algorithm is irrelevant when the inputs are poor.

In addition to these inputs, you should consider a measure of LIED volatility. Although a LIED may be estimated at 40%, it could turn out to be 0% or 100%. As a practical matter, however, LIED volatility is material only for investment-grade loans, for two reasons. First, estimates of the recovery rates of loans and bonds do not show large, systematic variability over the cycle; thus, we should not expect recession years to generate higher-than-expected LIEDs. Cyclical forces appear primarily in default rates. This implies that average LIEDs over a cycle are still the average LIEDs during a stress period. Second, for non-investment-grade loans, the expected default rate is high enough that the law of large numbers comes into play: the distribution of the average around the mean becomes tighter as the number of defaults increases. So if you expect 10 loans to default, and the LIED has an expected value of 40% and a standard deviation of 10%, the standard deviation of the average LIED across those 10 loans is only about 3%, and across 100 loans only 1%. Most banks have many non-investment-grade loans; thus, the mean should be a pretty stable estimate of the actual performance of the aggregate. For investment-grade loans, this law is not as relevant, since we should expect only a handful of these to default; for them, LIED volatility is relevant. An important qualification is that if any loan is extremely large as a percentage of loans outstanding, the variability of the LIED for that loan becomes relevant. The main point here is that if your bank lends primarily to non-investment-grade borrowers, LIED volatility is a statistical refinement with little benefit and large cost.
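The law-of-large-numbers arithmetic in this paragraph is simply the standard deviation divided by the square root of the number of independent defaults:

```python
import math

def lied_portfolio_std(lied_std, expected_defaults):
    """Standard deviation of the average LIED across independent defaults."""
    return lied_std / math.sqrt(expected_defaults)

print(round(lied_portfolio_std(0.10, 10), 4))   # 0.0316: the roughly 3% in the text
print(round(lied_portfolio_std(0.10, 100), 4))  # 0.01
```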

Further refinements can address different amounts of seasoning in a particular loan, amortization type of the loan, and a qualitative assessment of underwriting quality for each line of business. One should also account for off-balance-sheet exposures such as letters of credit and unused lines by mapping these into loan-equivalent amounts.

Mapping these base risk units into capital estimates is not easy, but it is feasible. Correlation and credit-grade migration issues will have to be addressed. JP Morgan has introduced a method of quantifying credit risk based on one-year volatility of value (CreditMetricsTM); Credit Suisse Financial Products has recently come out with a slightly different methodology (CreditRisk+). It is useful to have a quantitative expert working for your firm who understands the relevant statistics, especially Markov transition matrices.

A Graphical Overview

Exhibit 2 provides a schematic diagram of the capital allocation process. Transaction-level detail pertaining to both the obligor and the facility is pulled in and bucketed. This is basically where most risk reporting systems stop: with portfolio summary data, which remains the most important part of the process. To be more precise about what portfolio data is necessary, think of the following benchmark: everything needed to complete a securitization. By taking the data to this level, you can aggregate risk into a meaningful number that values a worst-case scenario, allows apples-to-apples comparison of risk, can serve as the basis of RAROC calculations, and probably anticipates future regulations. Also, the concept of determining the worst-case scenario within a portfolio can help make the portfolio summary process more meaningful. For example, it may not be obvious that a portfolio summary showing only grade distributions is inadequate until one realizes that, depending on the tenor of the loans, risk capital could vary from 2% to 8%. The statistical algorithm will be much less tangible to most bankers; while they can delegate it and regard it as a black box, the relevant line-of-business heads should be able to understand the basics of any model.
[Exhibit 2: Schematic diagram of the capital allocation process]


Measuring consumer loan risk is in many ways similar to measuring commercial loan risk. Unfortunately, the lower variability of consumer losses and the steady flow of net charge-offs can give the impression that one can simply extrapolate current loss rates for a product without losing much forecast precision. While simple extrapolation is much less misleading for consumer loans than for commercial loans, it is important to bucket consumer cohorts appropriately in order to forecast the net losses that will feed any capital assessment.

The key, again, is splitting the data into homogeneous risk buckets, as many as is material and feasible. In fact, this method should be used throughout the bank, making material distinctions as long as the benefits outweigh the costs of getting the information. The most obvious first distinction is between major product types such as credit card, mortgage, and installment debt. Then, within these categories, one would distinguish among the particular products. Next one could use risk bands based on bureau, behavior, and custom scores as well as loan-to-value ratios for secured products. Finally, and perhaps most important, one should segment by seasoning, since consumer losses display a predictable curve, with losses peaking between 8 and 24 months after origination depending on the product. A 1-month-old consumer loan has much more risk than a 30-month-old loan, regardless of credit score. One should have a sense of the expected net charge-offs within the lowest-level risk bands, which should be used as a guide to measure the relative riskiness of the portfolio.
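The bucketing logic above can be sketched with a simple key function. The score cutoffs, seasoning bands, and field names here are hypothetical, chosen only to illustrate the product/score/seasoning hierarchy:

```python
from collections import defaultdict

def bucket_key(loan: dict) -> tuple:
    """Assign a loan to a homogeneous risk bucket: (product, score band, seasoning band)."""
    if loan["score"] >= 720:
        band = "low"
    elif loan["score"] >= 660:
        band = "medium"
    else:
        band = "high"
    # Seasoning bands track the loss curve: losses peak 8-24 months after origination.
    if loan["age_months"] < 8:
        season = "0-7m"
    elif loan["age_months"] < 24:
        season = "8-23m"
    else:
        season = "24m+"
    return (loan["product"], band, season)

loans = [
    {"product": "card", "score": 700, "age_months": 3, "balance": 5000},
    {"product": "card", "score": 700, "age_months": 12, "balance": 4000},
    {"product": "mortgage", "score": 740, "age_months": 30, "balance": 150000},
]
buckets = defaultdict(float)
for loan in loans:
    buckets[bucket_key(loan)] += loan["balance"]
print(dict(buckets))
```

Expected net charge-off rates would then be attached at this lowest bucket level and rolled up.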

By breaking loans into static pools (that is, seasoning buckets) for each risk bucket, one can adjust for portfolio growth, a common issue. A growing portfolio will tend to have younger loans with lower losses (since few consumer loans default within 12 months). Therefore, a rapidly growing consumer portfolio can mask deteriorating credit quality, and a shrinking portfolio’s past performance will often overestimate future losses.
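A static-pool (vintage) calculation makes the growth distortion concrete. The vintages and loss figures below are hypothetical, but they show how a blended loss ratio on a fast-growing book understates the loss rate of the seasoned cohorts:

```python
def static_pool_loss_rate(losses_to_date: float, original_balance: float) -> float:
    """Cumulative losses as a share of the pool's *original* balance,
    immune to dilution from new, unseasoned originations."""
    return losses_to_date / original_balance

# Hypothetical vintages of a fast-growing consumer book (balances in $MM).
vintages = {
    "1996": {"orig": 100.0, "losses": 4.0},   # fully seasoned: 4.0% loss rate
    "1997": {"orig": 200.0, "losses": 2.0},   # partially seasoned
    "1998": {"orig": 400.0, "losses": 0.4},   # too young to show losses yet
}

blended = sum(v["losses"] for v in vintages.values()) / sum(v["orig"] for v in vintages.values())
print(f"blended loss rate: {blended:.1%}")  # about 0.9% -- growth masks the seasoned 4%
for year, v in vintages.items():
    print(year, f"{static_pool_loss_rate(v['losses'], v['orig']):.1%}")
```

The blended ratio looks benign only because the young 1998 vintage dominates the denominator; the static-pool view shows what the book will do once it seasons.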

The audit department can help this process in two ways. First, many risk estimates, such as those for operating risk, come as ranges. Operating risk is often assessed as a percentage of noninterest expense or total fixed assets, meant to cover the assortment of disasters that can occur due to poor controls and communication, fraud, and the like. These numbers, ultimately a very crude guide, should be based on empirical data. A useful modification is to use audit or other qualitative reviews to shade the estimates up or down. For example, say you allocate 5% of noninterest expense toward capital. You can decide to allocate 4% if the business line has a lower-than-average audit score but 6% if the business line has a higher-than-average assessment. Since the audit presumably measures the compliance and other operational risks of the business line, this is appropriate. As another example, First Manhattan Consulting Group did a survey of losses in fiduciary asset services. It found that discretionary asset management carried losses of between 5 and 9 basis points per dollar of assets under management. One could use audit scores to allocate 5 or 9 basis points against discretionary assets under management, depending on the audit score. Your audit group will appreciate this added bite to their evaluations, and this measurement will encourage mitigation of operating risks.
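The audit-shaded allocation in the example above is a one-line rule. This sketch follows the article's 4%/5%/6% illustration and assumes, as the article implies, that a lower audit score means a cleaner audit:

```python
def operating_risk_capital(noninterest_expense: float,
                           audit_score: float,
                           avg_score: float) -> float:
    """Base allocation of 5% of noninterest expense, shaded by audit results.
    Convention (assumed): lower score = cleaner audit = less capital."""
    if audit_score < avg_score:
        rate = 0.04  # better-than-average audit
    elif audit_score > avg_score:
        rate = 0.06  # worse-than-average audit
    else:
        rate = 0.05  # average
    return rate * noninterest_expense

# A business line with $100MM of noninterest expense:
print(operating_risk_capital(100.0, audit_score=2, avg_score=3))  # 4.0 ($MM)
print(operating_risk_capital(100.0, audit_score=4, avg_score=3))  # 6.0 ($MM)
```

The same shading pattern applies to the fiduciary example, allocating 5 or 9 basis points of discretionary assets under management depending on the audit score.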

Second, audit can assist in overcoming political problems with implementing risk allocations. For example, assume your commercial group does not generate clean data and is indifferent to attempts to map commercial ratings to external benchmarks. If mapping the ratings is an agenda item for the auditing team, recalcitrant employees will be forced to put down in writing their reasons for not generating this information—on a document that goes to the board of directors. It is one thing to brush off a staff request, quite another to defend on the record a refusal to generate salient information. This is a big stick, and nonrevenue producers like risk managers need big sticks.


Since the Barings debacle, much attention has been given to trading risk. Indeed, in my experience regulators spend more time analyzing the interest-rate risk of a Section 20 subsidiary than a bank’s balance sheet, even though the latter usually contains 100 times the risk. Nonetheless, the determination of this area’s risk should be a priority, because it is amenable to a VAR calculation and can serve as a useful benchmark in any firmwide process. It is useful to measure the market-risk component of trading risk capital with a combination of VAR and loss limits, not just VAR. This is because, for most trading desks, an annualized VAR generates a number well above a trader’s loss limit. If the VAR is working well and profitability reporting is accurate, a trader should be stopped out at his or her loss limit, well before reaching his or her annualized VAR (about 14 times the daily VAR). VAR should be thought of as a loan outstanding, and the difference between the actual VAR and the loss limit (or VAR limit) should be thought of as an unused line. In this way, a trader has an incentive both to continually minimize VAR and to minimize his or her option to access the bank’s capital (the unused limit).
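The scaling and the loan-outstanding analogy can be sketched briefly. Daily VAR is annualized by the square root of time; roughly 200 trading days reproduces the article's "about 14 times" factor (the exact day count is an assumption):

```python
import math

def annualized_var(daily_var: float, trading_days: int = 200) -> float:
    """Scale daily VAR by sqrt(time); ~200 days gives the 'about 14x' factor."""
    return daily_var * math.sqrt(trading_days)

def capital_usage(current_var: float, loss_limit: float) -> dict:
    """Treat VAR as the 'loan outstanding' and the headroom to the loss limit
    as an 'unused line' against the bank's capital."""
    return {"outstanding": current_var,
            "unused_line": max(loss_limit - current_var, 0.0)}

print(round(annualized_var(1.0), 1))  # 14.1 -- in units of daily VAR
print(capital_usage(current_var=2.0, loss_limit=5.0))
```

Charging for both components gives the trader an incentive to shrink the position (outstanding) and to hand back unneeded limit (unused line).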

Unfortunately, an unavoidable pas de deux between risk managers and the business line makes independent risk management difficult. Business managers make money primarily by exploiting an informational advantage, and this habit becomes ingrained in their approach to all requests for information. In an environment where a business manager succeeds by playing a game of sophisticated poker, it is only natural for this same business manager to be unwilling to share extensive information with a risk manager. Therefore, the risk-management group must acquire a risk tool kit that makes it possible to offer a quid pro quo to the business side. Information is bartered more freely if line managers see that it is to their advantage to give their narrow and detailed information to someone who can process this information using the new tools of risk capital. If the risk manager can show how the risk of a particular business appears vis-à-vis other activities using historical information from S&P, for example, the line manager has a powerful incentive to share information.

The keys to any firmwide risk measurement tool are the basic bits of information that feed the final tally. Only when the organization uses this information for real evaluation and compensation will people take the basic information seriously. The method described above, which relies primarily on quantitative data, also can educate business managers (who may be more comfortable with subjective skills like managing relationships and people) to understand the key elements of risk in their business. In this way, all sorts of people—not only risk managers—will be asking the right questions.

I would be remiss not to mention that fixed assets, goodwill, and other sundry items also use capital. These are risky in the same sense that loans and investment portfolios are risky; after all, no lender would allow 100% loan-to-value financing of goodwill. They add up to a significant portion of total capital allocated at a bank, usually well over 25%.


Twenty-five years ago, academics believed that risk was converging on a single number, beta, and that measuring risk was a matter of finding the covariance of value with the S&P 500. The empirical failure of beta as a measure of risk is highlighted by the advent of a smorgasbord of equity benchmarks (for example, mid-cap aggressive growth). Likewise, the riskiness of various bank products has become more, not less, complicated. Risk is like an onion, with many different layers. A credit card portfolio should be sliced differently from a commercial loan portfolio, so the risk manager needs a different understanding of the various product attributes.

Despite this, an underlying unifying theme is again emerging. While no one person can fully understand all risks of all products, the calculation of risk capital from the lowest levels implies that only the most relevant information is passed along. Thus, lower-level risk managers focus on volatility of value estimates, concentrating especially on potential tail events. The senior risk manager then only needs to know how to evaluate the various risk capital estimates, asking about the relevant risk bands and what data was used to estimate worst-case scenarios and correlations and back-testing these estimates if possible with either actual or analogous data.

Surprisingly, all risks are not relevant to risk capital. For example, bank robberies are still a risk to the banking system, and considerable resources must be spent on minimizing these occurrences. Yet these events are small and diversified enough not to be relevant to the computation of economic risk capital. This example shows that considerable amounts of audit, compliance, and security operations are outside the scope of firmwide risk capital. These aspects of risk management are really a part of general operating efficiency, as opposed to risks that are relevant to capital adequacy.

Firmwide risk management has some indirect benefits as well. Directly targeting a firmwide risk measure is probably the best way to minimize the probability of a firm-destroying scenario, such as a Barings debacle, since a healthy risk capital measurement system would be asking the right questions and making sure that everything is on the radar screen. Measuring and managing risk through attention to detail brings forth information and action that indirectly minimizes those one-in-a-million disasters.

While the overall riskiness of a financial institution is a complex combination of many different risks, risk capital can unify the various risks in a meaningful way. Setting up a risk capital project is the best way to guide a focused, comprehensive risk-management system. Further, the effects it can have on performance evaluation, incentive compensation, pricing, and strategic decisions give the business lines an incentive to use information provided by risk managers, something that occurs less often than we would like to admit.



1Michael Jensen, "Active Investors, LBOs and the Privatization of Bankruptcy," Journal of Applied Corporate Finance, no. 2 (1993): 35-44.

2Dennis G. Uyemura, "EVA: A Top-Down Approach to Risk Management," The Journal of Lending and Credit Risk Management (February 1997).

3Federal Reserve System Task Force on Internal Credit Risk Models, "Credit Risk Models at Major U.S. Banking Institutions: Current State of the Art and Implications for Assessments of Capital Adequacy" (May 1998).

4However, using one standard deviation (67%) moves versus a 99% extreme event will probably affect relative risk rankings.

5There is voluminous literature. For references, see

6 See http//

7Pamela Stumpp, et al., "A Sense of Security: Moody’s Approach to Evaluating Bank Loan Structure and Collateral" (October 1997).

8Lea Carty and Dana Lieberman, "Corporate Bond Defaults and Default Rates, 1938-95," Moody’s Investors Service (January 1996).

9Edward Altman and Anthony Saunders, "Credit Risk Management Developments Over the Last 20 Years," working paper, New York University (1996).