Topics 46-48 Flashcards
Explain the process of model validation
- The rating model validation process includes a series of formal activities and tools to determine the accuracy of the estimates for the key risk components as well as the model’s predictive power. The overall validation process can be divided between quantitative and qualitative validation.
- Quantitative validation includes comparing ex post results of risk measures to ex ante estimates, parameter calibrations, benchmarking, and stress tests.
- Qualitative validation focuses on non-numerical issues pertaining to model development such as logic, methodology, controls, documentation, and information technology.
Describe best practices for the roles of internal organizational units in the validation process
Best practices for the roles of internal organizational units in the validation process include:
- Senior management needs to examine the recommendations that arise from the validation process together with analyzing the reports that are prepared by the internal audit group.
- Smaller financial institutions require, at a minimum, a manager who is appointed to direct and oversee the validation process.
- The validation group must be independent from the groups that are developing and maintaining the rating models and the group(s) dealing with credit risk. The validation group should also be independent of the lending group and the rating assignment group. Ultimately, the validation group should not report to any of those groups.
- Should it not be feasible for the validation group to be independent from designing and developing rating systems, then the internal audit group should be involved to ensure that the validation group is executing its duties with independence. In such a case, the validation group must be independent of the internal audit group.
- In general, all staff involved in the validation process must have sufficient training to perform their duties properly.
- Internal ratings must be discussed when management reports to or meets with the credit control group.
- The internal audit group must examine the independence of the validation group and ensure that the validation group staff is sufficiently qualified.
- Given that validation is mainly done using documentation received by groups dealing with model development and implementation, the quality of the documentation is important. Controls must be in place to ensure that there is sufficient breadth, transparency, and depth in the documentation provided.
Elements of Qualitative Validation
Qualitative and quantitative validation are complements, although greater emphasis is placed on qualitative validation given its holistic nature. In other words, neither a positive nor a negative conclusion on quantitative validation is sufficient to reach an overall conclusion.
- There are five key areas regarding rating systems that are analyzed during the qualitative validation process:
- Obtaining probabilities of default (PD). Using statistical models created from actual historical data allows for the determination of the PD for separate rating classes through the calibration of results with the historical data. An ex post validation of the calibration of the model can be done with data obtained during the use of the model.
- Completeness of rating system. All relevant information should be considered when determining creditworthiness and the resulting rating. Given that most default risk models include only a few borrower characteristics to determine creditworthiness, the validation process needs to provide assurance over the completeness of factors used for credit granting purposes.
- Objectivity of rating system. Objectivity is achieved when the rating system can clearly define creditworthiness factors with the least amount of interpretation required. A judgment-based rating model would likely be fraught with biases (with low discriminatory power of ratings); therefore, it requires features such as strict (but reasonable) guidelines, proper staff training, and continual benchmarking. A statistical-based ratings model analyzes borrower characteristics based on actual data, so it is a much more objective model.
- Acceptance of rating system. Acceptance by users (e.g., lenders and analysts) is crucial, so the validation process must provide assurance that the models are easily understood and shared by the users. Heuristic models (i.e., expert systems) are more easily accepted since they mirror past experience and the credit assessments tend to be consistent with cultural norms. In contrast, fuzzy logic models and artificial neural networks are less easily accepted given the high technical knowledge demands to understand them and the high complexity that creates challenges when interpreting the output.
- Consistency of rating system. The validation process must ensure that the models make sense and are appropriate for their intended use. For example, statistical models may produce relationships between variables that are nonsensical, so the process of eliminating such variables increases consistency.
Additionally, the validation process must deal with the continuity of validation processes, which includes periodic analysis of model performance and stability, analysis of model relationships, and comparisons of model outputs versus actual outcomes. In addition, the validation of statistical models must evaluate the completeness of documentation with focus on documenting the statistical foundations. Finally, validation must consider external benchmarks such as how rating systems are used by competitors.
Elements of Quantitative Validation
Quantitative validation comprises the following areas:
- Sample representativeness. Sample representativeness is demonstrated when a sample from a population is taken and its characteristics match those of the total population. A key problem is that some loan portfolios (in certain niche areas or industries) have very low default rates, which frequently results in an overly low sample size for defaulting entities. The validation process would use bootstrap procedures that randomly create samples through an iterative process that combines items from a default group and items from a non-default group.
- Discriminatory power. Discriminatory power is the relative ability of a rating model to accurately differentiate between defaulting and non-defaulting entities for a given forecast period. The forecast period is usually 12 months for PD estimation purposes but is longer for rating validation purposes. It also involves classifying borrowers by risk level on an overall basis or by specific attributes such as industry sector, size, or geographical location.
- Dynamic properties. Dynamic properties include rating system stability and attributes of migration matrices. In fact, the use of migration matrices assists in determining ratings stability. Migration matrices are introduced after a minimum two-year operational period for the rating model. Ideal attributes of annual migration matrices include (1) ascending order of transition rates to default as rating classes deteriorate, (2) stable ratings over time (e.g., high values being on the diagonal and low values being off-diagonal), and (3) gradual rating movements as opposed to abrupt and large movements (e.g., migration rates of ±1 class are higher than those of ±2 classes). Should the validation process determine the migration matrices to be stable, then the conclusion is that ratings move slowly given their relative insensitivity to credit cycles and other temporary events.
- Calibration. Calibration assesses the model's ability to estimate PD accurately. Validating calibration occurs at a very early stage, and because statistical tools are of limited usefulness in validating calibration, benchmarking can be used as a supplement to validate estimates of probability of default (PD), loss given default (LGD), and exposure at default (EAD). The benchmarking process compares a financial institution's ratings and estimates to those of other comparable sources; there is flexibility permitted in choosing the most suitable benchmark.
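The three ideal migration-matrix attributes listed above can be checked mechanically. A minimal Python sketch, using a hypothetical three-class (plus default) annual migration matrix; all numbers are made up for illustration:

```python
# Hypothetical annual migration matrix (rows sum to 1).
# Rows: current class A, B, C. Columns: next-year class A, B, C, default (D).
M = [
    # A      B      C      D (default)
    [0.92, 0.06, 0.015, 0.005],  # from A
    [0.05, 0.88, 0.050, 0.020],  # from B
    [0.01, 0.07, 0.840, 0.080],  # from C
]

default_col = [row[-1] for row in M]

# (1) Transition rates to default should rise as the rating class deteriorates.
monotone_defaults = all(a < b for a, b in zip(default_col, default_col[1:]))

# (2) Stability: the diagonal (rating unchanged) should dominate each row.
diagonal_dominant = all(row[i] == max(row) for i, row in enumerate(M))

# (3) Gradual movement: one-notch migrations should exceed two-notch ones.
gradual = M[0][1] > M[0][2] and M[2][1] > M[2][0]

print(monotone_defaults, diagonal_dominant, gradual)  # → True True True
```

A matrix failing any of these checks would flag either instability in the rating system or excessive sensitivity to temporary credit-cycle effects.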
Describe challenges related to data quality
- Strong data quality is crucial when performing quantitative validation. General challenges involved with data quality include (1) completeness, (2) availability, (3) sample representativeness, (4) consistency and integrity, and (5) data cleaning procedures.
- It is difficult to create samples from a population over a long period using the same lending technology. Lending technology refers to the information, rules, and regulations used in credit origination and monitoring. In practice, it is almost impossible to have credit rules and regulations remain stable for even five years of a credit cycle. Changes occur because of technological breakthroughs that allow for more efficient handling of the credit function, market changes and new segments that require significant changes to credit policies, and merger and acquisition activity. Unfortunately, the changes result in less consistency between the data used to create the rating model and the population to which the model is applied.
- The time horizon of the data may be problematic because the data should be created from a full credit cycle. If it is less than a full cycle, the estimates will be biased by the favorable or unfavorable stages during the selected period within the cycle.
Explain how to validate the calibration of a rating model.
The validation process looks at the variances from the expected PDs and the actual default rates.
The Basel Committee (2005a) suggests the following tests for calibration:
- Binomial test.
- Chi-square test (or Hosmer-Lemeshow).
- Normal test.
- Traffic lights approach.
The binomial test looks at a single rating category at a time, while the chi-square test looks at multiple rating categories at a time. The normal test looks at a single rating category for more than one period, based on a normal distribution of the time-averaged default rates.
Two key assumptions of these tests are (1) that the mean default rate has minimal variance over time and (2) that default events are independent. The traffic lights approach involves backtesting in a single rating category for multiple periods. Because each of the tests has some shortcomings, the overall conclusion is that no truly strong calibration tests exist at this time.
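As an illustration of the binomial test, the sketch below computes a one-sided p-value for a single rating category, assuming independent defaults (assumption (2) above); the portfolio numbers are hypothetical:

```python
from math import comb

def binomial_pvalue(n, d, pd_est):
    """One-sided p-value: probability of observing d or more defaults
    among n obligors if the true PD equals pd_est and defaults are
    independent (a key, and often violated, assumption)."""
    return sum(comb(n, k) * pd_est**k * (1 - pd_est)**(n - k)
               for k in range(d, n + 1))

# Illustrative numbers: 500 obligors, estimated PD of 2%, 16 observed defaults.
p = binomial_pvalue(500, 16, 0.02)
print(round(p, 4))
```

A small p-value suggests the estimated PD is too low for that rating category; because defaults are in fact correlated, the test tends to reject too often, which is one of the shortcomings noted above.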
Explain how to validate the discriminatory power of a rating model.
Validating discriminatory power can be done using the following four methods as outlined by the Basel Committee (2005a):
- Statistical tests (e.g., Fisher's r², Wilks' λ, and Hosmer-Lemeshow).
- Migration matrices.
- Accuracy indices (e.g., Lorenz's concentration curves and Gini ratios).
- Classification tests (e.g., binomial test, Type I and II errors, chi-square test, and normality test).
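Accuracy indices can be sketched directly: the area under the ROC curve (AUC) is the probability that a randomly chosen defaulter receives a higher risk score than a randomly chosen non-defaulter, and the Gini ratio (accuracy ratio) equals 2 × AUC − 1. The scores and default flags below are made up for illustration:

```python
def auc(scores, defaults):
    """Probability that a randomly chosen defaulter has a higher risk
    score than a randomly chosen non-defaulter (ties count half)."""
    bad = [s for s, d in zip(scores, defaults) if d]
    good = [s for s, d in zip(scores, defaults) if not d]
    wins = sum((b > g) + 0.5 * (b == g) for b in bad for g in good)
    return wins / (len(bad) * len(good))

# Illustrative risk scores (higher = riskier) and default flags.
scores   = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]
defaults = [1,   1,   0,   1,   0,   0,   0,   0]

a = auc(scores, defaults)
gini = 2 * a - 1  # accuracy ratio / Gini coefficient
print(a, gini)
```

Higher Gini means stronger discriminatory power: a model no better than random scores near 0, while a model that perfectly separates defaulters from non-defaulters scores 1.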
Identify and explain errors in modeling assumptions that can introduce model risk
When quantifying the risk of simple financial instruments such as stocks and bonds, model risk is less of a concern. These simple instruments exhibit less volatility in price and sensitivities relative to complex financial instruments, and therefore their market values tend to be good indicators of asset values. However, model risk is a significantly more important consideration when quantifying the risk exposures of complex financial instruments, including instruments with embedded options, exotic over-the-counter (OTC) derivatives, synthetic credit derivatives, and many structured products.
Losses from model errors can be due to errors in assumptions, carelessness, fraud, or intentional mistakes that undervalue risk or overvalue profit. The six common model errors are as follows:
- Assuming constant volatility.
- Assuming a normal distribution of returns. Practice has shown, however, that returns typically do not follow a normal distribution, because distributions in fact have fat tails (i.e., unexpected large outliers).
- Underestimating the number of risk factors. For more complex products, including many exotic derivatives (e.g., Bermuda options), models need to incorporate multiple risk factors.
- Assuming perfect capital markets.
- Assuming adequate liquidity.
- Misapplying a model.
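The fat-tail point can be quantified: under a normal distribution, a 4-standard-deviation daily loss should be a roughly once-in-a-century event, yet markets produce such moves far more often. A small sketch (the 252-trading-days-per-year figure is a conventional assumption):

```python
from math import erfc, sqrt

def normal_tail(z):
    """P(X > z) for a standard normal random variable."""
    return 0.5 * erfc(z / sqrt(2))

# Under normality, how rare is a 4-standard-deviation daily loss?
p4 = normal_tail(4)
years_between = 1 / (p4 * 252)  # assuming 252 trading days per year
print(p4, round(years_between))  # roughly one such day every ~125 years
```

Because empirical return distributions have fat tails, 4-sigma days occur every few years rather than every century or so, which is why models that assume normality undervalue tail risk.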
Explain how model risk can arise in the implementation of a model
Common Model Implementation Errors
Implementation error could occur, for example, when models that require Monte Carlo simulations are not allowed to run a sufficient number of simulations. For the implementation of models, important considerations should include how frequently the model parameters need to be refreshed, including volatilities and correlations. Similarly, the treatment of outliers should also be considered.
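The cost of running too few simulations can be made concrete: the standard error of a Monte Carlo estimate shrinks only with the square root of the number of paths. A minimal sketch pricing a European call under geometric Brownian motion, with illustrative (not source-given) parameters:

```python
import random
from math import exp, sqrt

def mc_call_price(n_paths, s0=100.0, k=100.0, r=0.02, sigma=0.2, t=1.0, seed=42):
    """Monte Carlo price of a European call under geometric Brownian
    motion; returns (estimate, standard error). Parameters illustrative."""
    rng = random.Random(seed)
    disc = exp(-r * t)
    payoffs = []
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        st = s0 * exp((r - 0.5 * sigma**2) * t + sigma * sqrt(t) * z)
        payoffs.append(disc * max(st - k, 0.0))
    mean = sum(payoffs) / n_paths
    var = sum((p - mean) ** 2 for p in payoffs) / (n_paths - 1)
    return mean, sqrt(var / n_paths)

for n in (1_000, 100_000):
    price, se = mc_call_price(n)
    print(f"{n:>7} paths: price ~ {price:.3f} +/- {se:.3f}")
```

Going from 1,000 to 100,000 paths cuts the standard error by roughly a factor of 10; a model run with too few paths can report a price whose noise swamps the quantity being measured.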
Common Valuation and Estimation Errors
Models also rely on the accuracy of inputs and values fed into the model, and are therefore subject to human error. Human error is particularly of concern in new or developing markets where adequate controls have not been fully defined and implemented.
Common valuation and estimation errors include:
- Inaccurate data.
- Incorrect sampling period length.
- Liquidity and valuation problems.
Explain methods and procedures risk managers can use to mitigate model risk
Model risk can be mitigated either through investing in research to improve the model or through an independent vetting process. Investing in research leads to developing better and more accurate statistical tools, both internally and externally. Independent vetting includes the independent oversight of profit and loss calculations as well as the model selection and construction process. Vetting consists of the following six phases:
- Documentation. Documentation should contain the assumptions of the underlying model and include the mathematical formulas used in the model.
- Model soundness. Vetting should ensure that the model used is appropriate for the financial instrument being valued.
- Independent access to rates. To facilitate independent parameter estimation, the model vetter should ensure that the middle office has access to independent financial rates.
- Benchmark selection. The vetting process should include selecting the appropriate benchmark based on assumptions made. Results from the benchmark test should be compared with the results from the model test.
- Health check and stress test. Models should be vetted to ensure they contain all necessary properties and parameters. Models should also be stress tested to determine the range of values for which the model provides accurate pricing.
- Incorporate model risk into the risk management framework. Model risk should be considered in the formal risk management governance and framework of an institution. In addition, models need to be periodically reevaluated for relevance and accuracy. Empirical evidence suggests that simple, robust models work better than more complex and less robust models.
Explain the impact of model risk and poor risk governance in the 1998 collapse of Long Term Capital Management
LTCM’s collapse highlighted several flaws in its regulatory value at risk (VaR) calculations:
- The fund’s calculated 10-day VaR period was too short. A time horizon for economic capital should be long enough to allow the firm to raise new capital, which requires considerably longer than 10 days.
- The fund’s VaR models did not incorporate liquidity assumptions. The assumption of perfectly liquid markets proved to be incorrect when the fund experienced liquidity droughts.
- The fund’s risk models did not incorporate correlation and volatility risks. This weakness was especially evident when markets moved to a correlation of close to +1 and volatility increased significantly above historical and model predicted levels.
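A common way to stretch a 10-day VaR toward a capital-raising horizon is the square-root-of-time rule; a sketch with an illustrative VaR figure. Note that the rule assumes i.i.d. returns and constant volatility, exactly the assumptions that broke down for LTCM, so it should be treated as a floor rather than a reliable estimate:

```python
from math import sqrt

def scale_var(var_10d, horizon_days):
    """Scale a 10-day VaR to a longer horizon via the square-root-of-time
    rule. Assumes i.i.d. returns and constant volatility, so it
    understates risk when volatility spikes and correlations move to +1."""
    return var_10d * sqrt(horizon_days / 10)

var_10d = 100.0  # illustrative 10-day VaR, in $mm
for days in (10, 60, 250):
    print(days, round(scale_var(var_10d, days), 1))
```

Even this crude scaling shows that a one-year horizon implies capital five times the 10-day figure; liquidity droughts and volatility spikes push the true requirement higher still.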
Explain the impact of model risk and poor risk governance in the 2012 London Whale trading loss
In 2012, JPMorgan Chase (JPM) and its Chief Investment Office (CIO) sustained severe losses due to risky synthetic credit derivatives trades executed by its London office. The losses from the London Whale trade and the subsequent investigations highlighted a poor risk culture at JPM, giving rise to both model and operational risks across the firm. Risk limits were routinely ignored and limit breaches were disregarded.
Define, compare, and contrast risk capital, economic capital, and regulatory capital
Risk capital provides protection against risk (i.e., unexpected losses). In other words, it can be defined as a (financial) buffer to shield a firm from the economic impact of risks taken.
In short, risk capital provides assurance to the firm’s stakeholders that their invested funds are safe. In most cases, risk capital and economic capital are treated synonymously, although an alternative definition of economic capital exists:
economic capital = risk capital + strategic risk capital
On the other hand, there are at least three distinct differences between risk capital and regulatory capital as follows:
- Unlike risk capital, regulatory capital is relevant only for regulated industries such as banking and insurance.
- Regulatory capital is computed using general benchmarks that apply to the industry. The result is a minimum required amount of capital adequacy that is usually far below the firm’s risk capital.
- Even if risk capital and regulatory capital are the same for the firm overall, the amounts may differ within the firm’s various divisions.
Given that Basel III requirements are sufficiently robust, it is probable that in certain areas (e.g., securitization), regulatory capital will be substantially higher than risk/economic capital. Although the two amounts may conflict, risk/economic capital must be computed in order to determine the economic viability of an activity or division. Assuming that regulatory capital is substantially higher than risk/economic capital for a given activity, then that activity will potentially move over to shadow banking (i.e., unregulated activities by regulated financial institutions) in order to provide more favorable pricing.
Explain methods and motivations for using economic capital approaches to allocate risk capital
From the perspective of financial institutions, the motivations for using economic capital are as follows:
- Capital is used extensively to cushion risk.
- Financial institutions must be creditworthy.
- External assessment of a financial institution’s creditworthiness is difficult. It is challenging to provide an accurate credit assessment because the institution’s risk profile is likely to be constantly evolving. Therefore, having a sufficient store of economic capital could mitigate this problem and provide assurance of financial stability.
- Profitability is greatly impacted by the cost of capital. Economic capital is similar to equity capital in the sense that the invested funds do not need to be repaid in the same manner as debt capital, for instance. In other words, economic capital serves as a reserve or a financial cushion in case of an economic downturn. As a result, economic capital is more expensive to hold than debt capital, thereby increasing the cost of capital and reducing the financial institution’s profits. A proper balance between holding sufficient economic capital and partaking in risky transactions is necessary.
Describe the RAROC (risk-adjusted return on capital) methodology and its use in capital budgeting
Benefits of RAROC include:
- Performance measurement using economic profits instead of accounting profits. Accounting profits include historical and arbitrary measures such as depreciation, which may be less relevant.
- Use in computing increases in shareholder value as part of incentive compensation (e.g., scorecards) within the firm and its divisions. The flexibility of RAROC may also allow for deferred/contingent compensation or clawbacks for subsequent poor performance.
- Use in portfolio management for buy and sell decisions and use in capital management in estimating the incremental value-added through a new investment or discontinuing an existing investment.
- Using risk-based pricing, which will allow proper pricing that takes into account the economic risks undertaken by a firm in a given transaction. Each transaction must consider the expected loss and the cost of economic capital allocated. Many firms use the “marginal economic capital requirement” portion of the RAROC equation for the purposes of pricing and determining incremental shareholder value.
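A textbook-style RAROC computation can be sketched as follows; the exact formula varies by firm, and the parameter names, the return credited on economic capital, and all numbers below are illustrative assumptions rather than a prescribed standard:

```python
def raroc(revenues, costs, expected_loss, economic_capital,
          risk_free_rate=0.03, tax_rate=0.0):
    """Textbook-style RAROC sketch: risk-adjusted return divided by
    allocated economic capital. Crediting a risk-free return on the
    capital itself is a common convention, not the only one used."""
    risk_adjusted_return = (revenues - costs - expected_loss
                            + risk_free_rate * economic_capital)
    return risk_adjusted_return * (1 - tax_rate) / economic_capital

# Illustrative loan book: $10mm revenues, $4mm costs, $2mm expected
# losses, $25mm allocated economic capital.
print(round(raroc(10.0, 4.0, 2.0, 25.0), 4))  # → 0.19
```

In capital budgeting, a transaction or division is typically accepted when its RAROC exceeds the firm's hurdle rate (roughly, its cost of equity capital); note that the numerator deducts expected loss, while unexpected loss is what the economic capital in the denominator covers.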