Topics 46-48 Flashcards

1
Q

Explain the process of model validation

A
  • The rating model validation process includes a series of formal activities and tools to determine the accuracy of the estimates for the key risk components as well as the model’s predictive power. The overall validation process can be divided between quantitative and qualitative validation.
  • Quantitative validation includes comparing ex post results of risk measures to ex ante estimates, parameter calibrations, benchmarking, and stress tests.
  • Qualitative validation focuses on non-numerical issues pertaining to model development such as logic, methodology, controls, documentation, and information technology.
2
Q

Describe best practices for the roles of internal organizational units in the validation process

A

Best practices for the roles of internal organizational units in the validation process include:

  1. Senior management needs to examine the recommendations arising from the validation process, together with the reports prepared by the internal audit group.
  2. Smaller financial institutions require, at a minimum, a manager who is appointed to direct and oversee the validation process.
  3. The validation group must be independent of the groups that develop and maintain the rating models and of the group(s) dealing with credit risk. The validation group should also be independent of the lending group and the rating assignment group. Ultimately, the validation group should not report to any of those groups.
  4. Should it not be feasible for the validation group to be independent from designing and developing rating systems, then the internal audit group should be involved to ensure that the validation group is executing its duties with independence. In such a case, the validation group must be independent of the internal audit group.
  5. In general, all staff involved in the validation process must have sufficient training to perform their duties properly.
  6. Internal ratings must be discussed when management reports to or meets with the credit control group.
  7. The internal audit group must examine the independence of the validation group and ensure that the validation group staff is sufficiently qualified.
  8. Given that validation is mainly done using documentation received by groups dealing with model development and implementation, the quality of the documentation is important. Controls must be in place to ensure that there is sufficient breadth, transparency, and depth in the documentation provided.
3
Q

Elements of Qualitative Validation

A

Qualitative and quantitative validation are complements, although greater emphasis is placed on qualitative validation given its holistic nature. In other words, neither a positive nor a negative conclusion from quantitative validation alone is sufficient to reach an overall conclusion.

  • There are five key areas regarding rating systems that are analyzed during the qualitative validation process:
  1. Obtaining probabilities of default (PD). Using statistical models created from actual historical data allows for the determination of the PD for separate rating classes through the calibration of results with the historical data. An ex post validation of the calibration of the model can be done with data obtained during the use of the model.
  2. Completeness of rating system. All relevant information should be considered when determining creditworthiness and the resulting rating. Given that most default risk models include only a few borrower characteristics to determine creditworthiness, the validation process needs to provide assurance over the completeness of factors used for credit granting purposes.
  3. Objectivity of rating system. Objectivity is achieved when the rating system can clearly define creditworthiness factors with the least amount of interpretation required. A judgment-based rating model would likely be fraught with biases (with low discriminatory power of ratings); therefore, it requires features such as strict (but reasonable) guidelines, proper staff training, and continual benchmarking. A statistical-based ratings model analyzes borrower characteristics based on actual data, so it is a much more objective model.
  4. Acceptance of rating system. Acceptance by users (e.g., lenders and analysts) is crucial, so the validation process must provide assurance that the models are easily understood and shared by the users. Heuristic models (i.e., expert systems) are more easily accepted since they mirror past experience and the credit assessments tend to be consistent with cultural norms. In contrast, fuzzy logic models and artificial neural networks are less easily accepted given the high technical knowledge demands to understand them and the high complexity that creates challenges when interpreting the output.
  5. Consistency of rating system. The validation process must ensure that the models make sense and are appropriate for their intended use. For example, statistical models may produce relationships between variables that are nonsensical, so the process of eliminating such variables increases consistency.

Additionally, the validation process must deal with the continuity of validation processes, which includes periodic analysis of model performance and stability, analysis of model relationships, and comparisons of model outputs versus actual outcomes. In addition, the validation of statistical models must evaluate the completeness of documentation with focus on documenting the statistical foundations. Finally, validation must consider external benchmarks such as how rating systems are used by competitors.

4
Q

Elements of Quantitative Validation

A

Quantitative validation comprises the following areas:

  1. Sample representativeness. Sample representativeness is demonstrated when a sample from a population is taken and its characteristics match those of the total population. A key problem is that some loan portfolios (in certain niche areas or industries) have very low default rates, which frequently results in an overly small sample of defaulting entities. The validation process would use bootstrap procedures that randomly create samples through an iterative process that combines items from a default group and items from a non-default group (see the sketch after this list).
  2. Discriminatory power. Discriminatory power is the relative ability of a rating model to accurately differentiate between defaulting and non-defaulting entities for a given forecast period. The forecast period is usually 12 months for PD estimation purposes but is longer for rating validation purposes. It also involves classifying borrowers by risk level on an overall basis or by specific attributes such as industry sector, size, or geographical location.
  3. Dynamic properties. Dynamic properties include rating system stability and attributes of migration matrices. In fact, the use of migration matrices assists in determining ratings stability. Migration matrices are introduced after a minimum two-year operational period for the rating model. Ideal attributes of annual migration matrices include (1) ascending order of transition rates to default as rating classes deteriorate, (2) stable ratings over time (e.g., high values on the diagonal and low values off the diagonal), and (3) gradual rating movements as opposed to abrupt and large movements (e.g., migration rates of +/- one class are higher than those of +/- two classes). Should the validation process determine the migration matrices to be stable, then the conclusion is that ratings move slowly given their relative insensitivity to credit cycles and other temporary events.
  4. Calibration. Calibration looks at the relative ability to estimate PD. Validating calibration occurs at a very early stage, and because of the limited usefulness in using statistical tools to validate calibration, benchmarking could be used as a supplement to validate estimates of probability of default (PD), loss given default (LGD), and exposure at default (EAD). The benchmarking process compares a financial institution’s ratings and estimates to those of other comparable sources; there is flexibility permitted in choosing the most suitable benchmark.
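
As a rough illustration of the bootstrap idea in item 1, the sketch below (in Python) repeatedly resamples, with replacement, from a small default group and a larger non-default group. The function name, the 50/50 default mix, and the example figures are assumptions for illustration only.

```python
import random

def bootstrap_samples(defaulted, non_defaulted, n_samples, sample_size, default_share=0.5):
    """Build resampled validation sets by drawing (with replacement) from a small
    group of defaulted borrowers and a larger non-defaulted group, so each sample
    contains enough defaulters to be statistically useful."""
    n_def = max(1, int(sample_size * default_share))
    n_non = sample_size - n_def
    samples = []
    for _ in range(n_samples):
        sample = ([(b, 1) for b in random.choices(defaulted, k=n_def)] +
                  [(b, 0) for b in random.choices(non_defaulted, k=n_non)])
        random.shuffle(sample)
        samples.append(sample)
    return samples

# Illustrative use: 8 defaulted IDs, 200 non-defaulted IDs, 100 samples of 50 borrowers each
samples = bootstrap_samples(list(range(8)), list(range(100, 300)), n_samples=100, sample_size=50)
```
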
5
Q

Describe challenges related to data quality

A
  • Strong data quality is crucial when performing quantitative validation. General challenges involved with data quality include (1) completeness, (2) availability, (3) sample representativeness, (4) consistency and integrity, and (5) data cleaning procedures.
  • It is difficult to create samples from a population over a long period using the same lending technology. Lending technology refers to the information, rules, and regulations used in credit origination and monitoring. In practice, it is almost impossible to have credit rules and regulations remain stable for even five years of a credit cycle. Changes occur because of technological breakthroughs that allow for more efficient handling of the credit function, market changes and new segments that require significant changes to credit policies, and merger and acquisition activity. Unfortunately, the changes result in less consistency between the data used to create the rating model and the population to which the model is applied.
  • The time horizon of the data may be problematic because the data should be created from a full credit cycle. If it is less than a full cycle, the estimates will be biased by the favorable or unfavorable stages during the selected period within the cycle.
6
Q

Explain how to validate the calibration of a rating model.

A

The validation process examines the deviations of actual default rates from the expected (ex ante) PD estimates.

The Basel Committee (2005a) suggests the following tests for calibration:

  • Binomial test.
  • Chi-square test (or Hosmer-Lemeshow).
  • Normal test.
  • Traffic lights approach.

The binomial test looks at a single rating category at a time, while the chi-square test looks at multiple rating categories at a time. The normal test looks at a single rating category for more than one period, based on a normal distribution of the time-averaged default rates.

Two key assumptions of the normal test are (1) the mean default rate has minimal variance over time and (2) default events are independent. The traffic lights approach involves backtesting in a single rating category for multiple periods. Because each of the tests has some shortcomings, the overall conclusion is that no truly strong calibration tests exist at this time.
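
As a rough illustration of the binomial test named above, the sketch below (in Python) computes a one-sided p-value for a single rating grade, assuming independent defaults; the grade size, forecast PD, and observed default count are invented for illustration.

```python
from math import comb

def binomial_test_pvalue(n_obligors, n_defaults, pd_forecast):
    """One-sided binomial test of calibration for a single rating grade: the
    probability of observing at least n_defaults defaults if the forecast PD is
    correct and default events are independent."""
    return sum(comb(n_obligors, k) * pd_forecast**k * (1 - pd_forecast)**(n_obligors - k)
               for k in range(n_defaults, n_obligors + 1))

# Illustrative grade: 500 obligors, forecast PD of 2%, 16 observed defaults
p_value = binomial_test_pvalue(500, 16, 0.02)
print(f"p-value = {p_value:.4f}")  # a small p-value suggests the forecast PD is too low
```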

7
Q

Explain how to validate the discriminatory power of a rating model.

A

Validating discriminatory power can be done using the following four methods as outlined by the Basel Committee (2005a):

  • Statistical tests (e.g., Fisher’s r², Wilks’ λ, and Hosmer-Lemeshow).
  • Migration matrices.
  • Accuracy indices (e.g., Lorenz’s concentration curves and Gini ratios).
  • Classification tests (e.g., binomial test, Type I and II errors, chi-square test, and normality test).
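
As a rough illustration of an accuracy index, the sketch below (in Python) computes a Gini (accuracy) ratio from model scores, assuming a higher score indicates a riskier borrower; the function name and data are invented for illustration.

```python
def gini_ratio(scores, defaults):
    """Accuracy ratio (Gini) = 2*AUC - 1, where AUC is the probability that a
    randomly drawn defaulter scores higher than a randomly drawn non-defaulter.
    0 means no discriminatory power; 1 means perfect separation."""
    defaulter_scores = [s for s, d in zip(scores, defaults) if d == 1]
    survivor_scores = [s for s, d in zip(scores, defaults) if d == 0]
    wins = sum((ds > ss) + 0.5 * (ds == ss)
               for ds in defaulter_scores for ss in survivor_scores)
    auc = wins / (len(defaulter_scores) * len(survivor_scores))
    return 2 * auc - 1

# Illustrative data: six borrowers' risk scores; 1 = defaulted within the horizon
print(gini_ratio([0.9, 0.7, 0.8, 0.3, 0.2, 0.4], [1, 1, 0, 0, 0, 1]))  # ~0.56
```
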
8
Q

Identify and explain errors in modeling assumptions that can introduce model risk

A

When quantifying the risk of simple financial instruments such as stocks and bonds, model risk is less of a concern. These simple instruments exhibit less volatility in price and sensitivities relative to complex financial instruments, so their market values tend to be good indicators of asset values. However, model risk is a significantly more important consideration when quantifying the risk exposures of complex financial instruments, including instruments with embedded options, exotic over-the-counter (OTC) derivatives, synthetic credit derivatives, and many structured products.

Losses from model errors can be due to errors in assumptions, carelessness, fraud, or intentional mistakes that undervalue risk or overvalue profit. The six common model errors are as follows:

  • Assuming constant volatility.
  • Assuming a normal distribution of returns. Practice has shown, however, that returns typically do not follow a normal distribution, because distributions in fact have fat tails (i.e., unexpected large outliers).
  • Underestimating the number of risk factors. For more complex products, including many exotic derivatives (e.g., Bermuda options), models need to incorporate multiple risk factors.
  • Assuming perfect capital markets.
  • Assuming adequate liquidity.
  • Misapplying a model.
9
Q

Explain how model risk can arise in the implementation of a model

A

Common Model Implementation Errors

Implementation error could occur, for example, when models that require Monte Carlo simulations are not allowed to run a sufficient number of simulations. For the implementation of models, important considerations should include how frequently the model parameters need to be refreshed, including volatilities and correlations. Similarly, the treatment of outliers should also be considered.

Common Valuation and Estimation Errors

Models also rely on the accuracy of inputs and values fed into the model, and are therefore subject to human error. Human error is particularly of concern in new or developing markets where adequate controls have not been fully defined and implemented.

Common valuation and estimation errors include:

  1. Inaccurate data.
  2. Incorrect sampling period length.
  3. Liquidity and valuation problems.
10
Q

Explain methods and procedures risk managers can use to mitigate model risk

A

Model risk can be mitigated either through investing in research to improve the model or through an independent vetting process. Investing in research leads to developing better and more accurate statistical tools, both internally and externally. Independent vetting includes the independent oversight of profit and loss calculations as well as the model selection and construction process. Vetting consists of the following six phases:

  1. Documentation. Documentation should contain the assumptions of the underlying model and include the mathematical formulas used in the model.
  2. Model soundness. Vetting should ensure that the model used is appropriate for the financial instrument being valued.
  3. Independent access to rates. To facilitate independent parameter estimation, the model vetter should ensure that the middle office has access to independent financial rates.
  4. Benchmark selection. The vetting process should include selecting the appropriate benchmark based on assumptions made. Results from the benchmark test should be compared with the results from the model test.
  5. Health check and stress test. Models should be vetted to ensure they contain all necessary properties and parameters. Models should also be stress tested to determine the range of values for which the model provides accurate pricing.
  6. Incorporate model risk into the risk management framework. Model risk should be considered in the formal risk management governance and framework of an institution. In addition, models need to be periodically reevaluated for relevance and accuracy. Empirical evidence suggests that simple, robust models work better than more complex and less robust models.
11
Q

Explain the impact of model risk and poor risk governance in the 1998 collapse of Long Term Capital Management

A

LTCM’s collapse highlighted several flaws in its regulatory value at risk (VaR) calculations:

  1. The fund’s calculated 10-day VaR period was too short. A time horizon for economic capital should be long enough to allow the firm to raise new capital, which is longer than the 10-day assumption.
  2. The fund’s VaR models did not incorporate liquidity assumptions. The assumption of perfectly liquid markets proved to be incorrect when the fund experienced liquidity droughts.
  3. The fund’s risk models did not incorporate correlation and volatility risks. This weakness was especially evident when markets moved to a correlation of close to +1 and volatility increased significantly above historical and model predicted levels.
12
Q

Explain the impact of model risk and poor risk governance in the 2012 London Whale trading loss

A

In 2012, JPMorgan Chase (JPM) and its Chief Investment Office (CIO) sustained severe losses due to risky synthetic credit derivatives trades executed by its London office. The losses from the London Whale trade and the subsequent investigations highlighted a poor risk culture at JPM, giving rise to both model and operational risks across the firm. Risk limits were routinely ignored and limit breaches were disregarded.

13
Q

Define, compare, and contrast risk capital, economic capital, and regulatory capital

A

Risk capital provides protection against risk (i.e., unexpected losses). In other words, it can be defined as a (financial) buffer to shield a firm from the economic impact of risks taken.

In short, risk capital provides assurance to the firm’s stakeholders that their invested funds are safe. In most cases, risk capital and economic capital are treated synonymously, although an alternative definition of economic capital exists:

economic capital = risk capital + strategic risk capital

On the other hand, there are at least three distinct differences between risk capital and regulatory capital as follows:

  1. Unlike risk capital, regulatory capital is relevant only for regulated industries such as banking and insurance.
  2. Regulatory capital is computed using general benchmarks that apply to the industry. The result is a minimum required amount of capital adequacy that is usually far below the firm’s risk capital.
  3. Assuming that risk capital and regulatory capital are the same for the overall firm, the amounts may be different within the various divisions of the firm.

Given that Basel III requirements are sufficiently robust, it is probable that in certain areas (e.g., securitization), regulatory capital will be substantially higher than risk/economic capital. Although the two amounts may conflict, risk/economic capital must be computed in order to determine the economic viability of an activity or division. Assuming that regulatory capital is substantially higher than risk/economic capital for a given activity, then that activity will potentially move over to shadow banking (i.e., unregulated activities by regulated financial institutions) in order to provide more favorable pricing.

14
Q

Explain methods and motivations for using economic capital approaches to allocate risk capital

A

From the perspective of financial institutions, the motivations for using economic capital are as follows:

  • Capital is used extensively to cushion risk.
  • Financial institutions must be creditworthy.
  • There is difficulty in providing an external assessment of a financial institution’s creditworthiness. It is challenging to provide an accurate credit assessment of a financial institution because its risk profile is likely to be constantly evolving. Therefore, having a sufficient store of economic capital could mitigate this problem and provide assurance of financial stability.
  • Profitability is greatly impacted by the cost of capital. Economic capital is similar to equity capital in the sense that the invested funds do not need to be repaid in the same manner as debt capital, for instance. In other words, economic capital serves as a reserve or a financial cushion in case of an economic downturn. As a result, economic capital is more expensive to hold than debt capital, thereby increasing the cost of capital and reducing the financial institution’s profits. A proper balance between holding sufficient economic capital and partaking in risky transactions is necessary.
15
Q

Describe the RAROC (risk-adjusted return on capital) methodology and its use in capital budgeting

A

Benefits of RAROC include:

  1. Performance measurement using economic profits instead of accounting profits. Accounting profits include historical and arbitrary measures such as depreciation, which may be less relevant.
  2. Use in computing increases in shareholder value as part of incentive compensation (e.g., scorecards) within the firm and its divisions. The flexibility of RAROC may also allow for deferred/contingent compensation or clawbacks for subsequent poor performance.
  3. Use in portfolio management for buy and sell decisions and use in capital management in estimating the incremental value-added through a new investment or discontinuing an existing investment.
  4. Using risk-based pricing, which will allow proper pricing that takes into account the economic risks undertaken by a firm in a given transaction. Each transaction must consider the expected loss and the cost of economic capital allocated. Many firms use the “marginal economic capital requirement” portion of the RAROC equation for the purposes of pricing and determining incremental shareholder value.
16
Q

Compute and interpret the RAROC for a project, loan, or loan portfolio, and use RAROC to compare business unit performance

A

The RAROC for a project or loan can be defined as risk-adjusted return divided by risk-adjusted capital. The basic RAROC equation is as follows:

RAROC = after-tax expected risk-adjusted net income / economic capital

The underlying principles of the RAROC equation are similar to two other common measures of risk/return:

  • the Sharpe ratio,
  • the net present value (NPV)

The discount rate for the NPV is a risk-adjusted expected return that uses beta (captures systematic risk only) from the capital asset pricing model (CAPM). In contrast to NPV, RAROC takes into account both systematic and unsystematic risk in its earnings figure.

A more detailed RAROC equation can be used for capital budgeting decisions. One commonly cited form (formulations vary somewhat across sources) is:
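
RAROC = (expected revenues - costs - expected losses - taxes + return on risk capital +/- transfers) / economic capital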

*Note that for capital budgeting projects, expected revenues and losses should be used in the numerator since the analysis is being performed on an ex ante (or before the fact) basis. In contrast, for performance evaluation purposes on an ex post (or after the fact) basis, realized (or actual) revenues and losses should be used.
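
A minimal numerical sketch of the ex ante calculation (in Python), using the expanded numerator above; the function name and all dollar figures are invented for illustration.

```python
def raroc(expected_revenues, costs, expected_losses, taxes,
          return_on_risk_capital, transfers, economic_capital):
    """Ex ante RAROC: risk-adjusted expected net income divided by the economic
    capital allocated to the activity."""
    net_income = (expected_revenues - costs - expected_losses - taxes
                  + return_on_risk_capital - transfers)
    return net_income / economic_capital

# Illustrative loan (all figures in $ millions): revenues 10, costs 3, expected losses 2,
# taxes 1.5, return on risk capital 0.5, no transfers, economic capital 25
print(f"RAROC = {raroc(10.0, 3.0, 2.0, 1.5, 0.5, 0.0, 25.0):.1%}")  # RAROC = 16.0%
```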

17
Q

Strategic risk capital

A

Strategic risk capital pertains to the uncertainty surrounding the success and profitability of certain investments. An unsuccessful investment could result in financial losses and a negative reputational impact on the firm. Strategic risk capital includes goodwill and burned-out capital.

  • Goodwill is the excess of the purchase price over the fair value (or replacement value) of the net assets recorded on the balance sheet. A premium price may exist because of the existence of valuable but unrecorded intangible assets.
  • Burned-out capital represents the risk of amounts spent during the start-up phase of a venture that may be lost if the venture is not pursued because of low projected risk-adjusted returns. The venture may refer to a recent acquisition or an internally generated project. Burned-out capital is amortized over time as the strategic failure risk decreases.
  • Economic capital is designed to provide a cushion against unexpected losses at a specified confidence level. The confidence level at which economic capital is set can be viewed as the probability that the firm will be able to absorb unexpected losses over a specified period.
19
Q

Explain challenges that arise when using RAROC for performance measurement, including choosing a time horizon, measuring default probability, and choosing a confidence level.

A

Time Horizon

There is a lot of subjectivity in selecting the time horizon for RAROC calculation purposes. A longer time horizon could be selected to account for the full business cycle; it may not always increase the risk capital required, since the confidence level required to maintain a firm’s solvency will fall as the time horizon is increased. A key consideration with the selection of a time horizon is the fact that risk and return data for periods over one year are likely to be of questionable reliability.

Default Probability

A point-in-time (PIT) probability of default could be used to compute short-term expected losses and to price financial instruments with credit risk exposure. A through-the-cycle (TTC) probability of default is more commonly used for computations involving economic capital, profitability, and strategic decisions.

Confidence Level

In computing economic capital, the confidence level chosen must correspond with the firm’s desired credit rating. A high rating such as AA or AAA would require a confidence level in excess of 99.95%, for example. Choosing a lower confidence level will reduce the amount of risk capital required/allocated and it will impact the risk-adjusted performance measures. The reduction may be dramatic if the firm is primarily exposed to operational, credit, and settlement risks where large losses are rare.

20
Q

Calculate the hurdle rate and apply this rate in making business decisions using RAROC

A

Similar to internal rate of return (IRR) analysis, the use of a hurdle rate (i.e., after-tax weighted average cost of equity capital) is compared to RAROC in making business decisions. In general, the hurdle rate should be revised perhaps once or twice a year or when it has moved by over 10%.

Once the hurdle rate and the RAROC are calculated, the following rules apply:

  • If RAROC > hurdle rate, there is value creation from the project and it should be accepted.
  • If RAROC < hurdle rate, there is value destruction from the project and it should be rejected/discontinued.

Obviously, a shortcoming of the above rules is that higher return projects that have a RAROC > hurdle rate (accepted projects) also come with high risk that could ultimately result in losses and reduce the value of the firm. In addition, lower return projects that have a RAROC < hurdle rate (rejected projects) also come with low risk that could provide steady returns and increase the value of the firm. As a result, an adjusted RAROC measure should be computed.

21
Q

Compute the adjusted RAROC for a project to determine its viability

A

Alternative way to calculate ARAROC:

ARAROC = (RAROC - Rf) / Beta

If this ARAROC is greater than the market’s excess return (the expected market return minus the risk-free rate), then accept the project.
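
A minimal numerical sketch of the adjusted RAROC decision rule (in Python); the RAROC, risk-free rate, equity beta, and market return figures are invented for illustration.

```python
def adjusted_raroc(raroc, risk_free_rate, beta_equity):
    """Adjusted RAROC: strip out the risk-free rate and scale by the equity beta so
    the result can be compared with the market's excess return."""
    return (raroc - risk_free_rate) / beta_equity

# Illustrative figures: RAROC 16%, risk-free rate 3%, equity beta 1.25, market return 9%
araroc = adjusted_raroc(0.16, 0.03, 1.25)
accept = araroc > (0.09 - 0.03)  # compare with the excess market return of 6%
print(f"ARAROC = {araroc:.1%}, accept project: {accept}")  # ARAROC = 10.4%, accept: True
```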

22
Q

Explain challenges in modeling diversification benefits, including aggregating a firm’s risk capital and allocating economic capital to different business lines

A

For example, assume the following information pertaining to a business unit that engages in only two activities, A and B:

  • Activity A alone requires $50 of risk capital
  • Activity B alone requires $60 of risk capital
  • Activities A and B together require a total of $90 of risk capital

Stand-alone capital looks at each activity independently and ignores any diversification benefits. Therefore, the stand-alone capital for Activities A and B are $50 and $60, respectively. The stand-alone capital for the business unit is $90.

Fully diversified capital takes into consideration the diversification benefits, which equal $20 ($50 + $60 - $90). For simplicity, the diversification benefit can be allocated on a pro-rata basis as follows: ($20 x $50) / $110 = $9.1 is allocated to Activity A and ($20 x $60) / $110 = $10.9 is allocated to Activity B. Therefore, Activities A and B have fully diversified capital of $40.9 and $49.1, respectively. Fully diversified capital should be used to determine a firm’s solvency and to determine the minimum amount of risk capital required for a given activity.

Marginal capital is the extra capital needed as a result of a new activity added to the business unit. Diversification benefits are fully considered. The marginal risk capital for Activity A is $30 ($90 total - $60 for Activity B) and the marginal risk capital for Activity B is $40 ($90 total - $50 for Activity A). Total marginal risk capital ($70) is below the full risk capital of the business unit ($90). The general method for computing the marginal capital of a new activity is to start with the total risk capital required for the business unit and subtract the risk capital required for all of the other activities. Marginal capital is useful for making active portfolio management and business mix decisions; such decisions need to fully consider diversification benefits.

In a performance measurement context, stand-alone risk capital is useful to determine incentive pay and fully diversified risk capital is useful to determine the incremental benefit due to diversification. In allocating the diversification benefits, caution must be taken especially since correlations between the risk factors usually change over time. In a more extreme situation such as a market crisis, correlations could move to -1 or +1, thereby reducing diversification benefits.
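
A minimal sketch (in Python) reproducing the arithmetic above: pro-rata fully diversified capital and marginal capital for each activity. The function and variable names are illustrative.

```python
def capital_allocation(stand_alone, combined):
    """Given stand-alone risk capital per activity and the combined requirement,
    allocate the diversification benefit pro rata (fully diversified capital) and
    compute each activity's marginal capital."""
    total_stand_alone = sum(stand_alone.values())
    benefit = total_stand_alone - combined
    fully_diversified = {a: c - benefit * c / total_stand_alone for a, c in stand_alone.items()}
    marginal = {a: combined - (total_stand_alone - c) for a, c in stand_alone.items()}
    return fully_diversified, marginal

# Figures from the example above: A = $50, B = $60, together = $90
fd, mc = capital_allocation({"A": 50.0, "B": 60.0}, 90.0)
print({a: round(v, 1) for a, v in fd.items()})  # {'A': 40.9, 'B': 49.1}
print(mc)                                       # {'A': 30.0, 'B': 40.0}
```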

23
Q

Explain best practices in implementing an approach that uses RAROC to allocate economic capital.

A

Senior Management

The management team (including the CEO) needs to be actively involved with the implementation of a RAROC approach within the firm and promote it as a means of measuring shareholder value creation.

Communication and Education

The RAROC process needs to be clearly explained to all levels of management of the firm in order to have sufficient “buy in” from management.

Ongoing Consultation

There are key metrics that impact the computation of economic capital. A committee consisting of members from the various business units as well as the risk management group should review these metrics periodically in order to promote fairness in the capital allocation process.

Data Quality Control

Information systems collect data (e.g., risk exposures and positions) required to perform the RAROC calculations. The data collection process should be centralized with built-in edit and reasonability checks to increase the accuracy of the data.

Complement RAROC with Qualitative Factors

A qualitative assessment of each business unit could be performed using a four-quadrant analysis. The horizontal axis would represent the expected RAROC return and the vertical axis would represent the quality of the earnings based on the importance of the business unit’s activities to the overall firm, growth opportunities, long-run stability and volatility of earnings, and any synergies with other business units. There are four resulting possibilities:

  • Low quality of earnings, low quantity of earnings: the firm should try to correct, reduce, or shut down the activities of any of its business units in this category.
  • Low quality of earnings, high quantity of earnings (managed growth): the firm should maintain any business units that currently produce high returns but have low strategic importance to the firm.
  • High quality of earnings, low quantity of earnings (investment): the firm should maintain any business units that currently produce low returns but have high strategic value and high growth potential.
  • High quality of earnings, high quantity of earnings: the firm should allocate the most resources to business units in this category.

Active Capital Management

Business units should submit their limit requests (e.g., economic capital, leverage, liquidity, risk-weighted assets) quarterly to the RAROC team. The RAROC team performs the relevant analysis and sets the limits in a collaborative manner that allows business units to express any objections. Senior management will then make a final decision. The treasury group will ensure the limits make sense in the context of funding limits. The restriction placed on a firm’s growth due to leverage limitations helps promote the optimal use of the limited amount of capital available.