1 Flashcards
Which of the following statements are true?
I. Risk governance structures distribute rights and responsibilities among stakeholders in the corporation
II. Cybernetics is the multidisciplinary study of cyber risk and control systems underlying information systems in an organization
III. Corporate governance is a subset of the larger subject of risk governance
IV. The Cadbury report was issued in the early 90s and was one of the early frameworks for corporate governance
(a) II and III (b) I and IV (c) I, II and IV (d) All of the above
The correct answer is choice ‘b’
Governance structures specify the policies, principles and procedures for making decisions about corporate direction. They distribute rights and responsibilities among stakeholders that typically include executive management, employees, the board, etc. Statement I is therefore correct.
Per Wikipedia, “Cybernetics is a transdisciplinary approach for exploring regulatory systems, their structures, constraints, and possibilities. In the 21st century, the term is often used in a rather loose way to imply ‘control of any system using technology’.” Governance literature has been influenced by cybernetics, which is not the same thing as information security or cyber security. Statement II is incorrect.
Corporate governance includes risk governance, and not the other way round. Therefore statement III is incorrect.
The Cadbury Report, titled Financial Aspects of Corporate Governance, was a report issued in the UK in December 1992 by “The Committee on the Financial Aspects of Corporate Governance”. The report is eponymous with the chair of the committee, and set out recommendations on the arrangement of company boards and accounting systems to mitigate corporate governance risks and failures. Statement IV is therefore correct.
Which of the following is a way to classify risk governance structures:
(a) Committee based, regulation based and board mandated
(b) Top-down and Bottom-up
(c) Reactive, Preventative and Active
(d) Active and Passive
The correct answer is choice ‘c’
This is a tricky question in the sense that no risk management professional can be expected to know the answer unless they have read Chapter 2 of the PRMIA handbook. It appears here as something you need to know purely for the sake of the exam.
PRMIA’s handbook classifies governance structures as reactive, preventative and active. Reactive structures involve monitoring signals after the event, leading to corrective actions. Preventative structures are forward-looking and anticipate issues before they arise. Active structures include considerations of operational efficiency and not just governance. All other answers are made-up phrases and are incorrect.
In reality, corporations employ all structures together without worrying about the boundary between the three, and these distinctions do not exist except in textbooks.
Which of the following are a CRO's responsibilities:
I. Statutory financial reporting
II. Reporting to the audit committee
III. Compliance with risk regulatory standards
IV. Operational risk
(a) III and IV (b) I and II (c) II and IV (d) All of the above
The correct answer is choice ‘a’
Statutory financial reporting is the responsibility of the Chief Financial Officer, not the Chief Risk Officer. The head of internal audit reports to the audit committee of the board, not the CRO. Therefore statements I and II are incorrect.
The CRO is generally expected to drive risk and compliance with related regulatory standards. Market risk, credit risk and operational risk groups report into the CRO, so statements III and IV are correct.
Which of the following statements are correct?
I. A reliance upon conditional probabilities and a-priori views of probabilities is called the ‘frequentist’ view
II. Knightian uncertainty refers to things that might happen but for which probabilities cannot be evaluated
III. Risk mitigation and risk elimination are approaches to reacting to identified risks
IV. Confidence accounting is a reference to the accounting frauds that were seen in the past decade as a reflection of failed governance processes
(a) II and III
(b) I and IV
(c) II, III and IV
(d) All of the above
The correct answer is choice ‘a’
In statistics, which is relevant to risk management, a distinction is often drawn between ‘frequentists’ and ‘Bayesians’. Frequentists rely upon data to draw conclusions as to probabilities. Bayesians consider conditional probabilities, ie, take into account what things are already known, and inject sometimes subjective a-priori probabilities into the calculations. Statement I describes Bayesians, and not frequentists. In reality however, the difference is merely academic. Risk managers use whichever technique best applies to the given situation without making it about ideology.
The difference between ‘Knightian uncertainty’ and ‘Risk’ is similarly academic. Knightian uncertainty refers to risk that cannot be measured or calculated. ‘Risk’ on the other hand refers to things for which past data exists and calculations of exposure can be made. To give an example in the context of the financial world, the risk from a pandemic creating systemic failures from a failure of payment and settlement systems and the like is ‘Knightian uncertainty’, but the market risk from equity price movements can be modeled (albeit with limitations) and is calculable. Statement II is therefore correct.
Once a risk is identified, it can be mitigated, accepted, avoided or eliminated, or transferred by way of insurance. Therefore statement III is correct.
Statement IV is incorrect: confidence accounting is a proposal to present key figures in financial statements as ranges with associated confidence levels rather than as point estimates; it is not a reference to accounting frauds.
Which of the following statements are correct in relation to the financial system just prior to the current financial crisis:
I. The system was robust against small random shocks, but not against large scale disturbances to key hubs in the network
II. Financial innovation helped reduce the complexity of the financial network
III. Knightian uncertainty refers to risk that can be quantified and measured
IV. Feedback effects under stress accentuated liquidity problems
(a) I, II and IV
(b) III and IV
(c) I and IV
(d) II and III
The correct answer is choice ‘c’
Prior to the crisis, the financial network was robust against small random shocks but vulnerable to large scale disturbances at its key hubs, and feedback effects under stress accentuated liquidity problems. Statements I and IV are therefore correct. Financial innovation (securitization, derivatives and the like) increased, rather than reduced, the complexity of the financial network, so statement II is incorrect. Statement III is incorrect because Knightian uncertainty refers to risk that cannot be quantified or measured.
Which of the following statements is true:
I. Basel II requires banks to conduct stress testing in respect of their credit exposures in addition to stress testing for market risk exposures
II. Basel II requires pooled probabilities of default (and not individual PDs for each exposure) to be used for credit risk capital calculations
(a) I & II
(b) I
(c) II
(d) Neither statement is true
The correct answer is choice ‘a’
Both statements are accurate. Basel II requires pooled probabilities of default to be applied to risk buckets that contain similar exposures. Also, stress testing is mandatory for both market and credit risk.
Which of the following is a measure of the level of capital that an institution needs to hold in order to maintain a desired credit rating?
(a) Economic capital
(b) Book value
(c) Regulatory capital
(d) Shareholders’ equity
The correct answer is choice ‘a’
Economic capital is a measure of the level of capital needed to maintain a desired credit rating. Regulatory capital is the amount of capital required to be held by regulation, and this may be quite different from economic capital. Book value is an accounting measure reflecting assets minus liabilities as measured per accounting rules; it is often expressed per share. Shareholders’ equity is a narrower term covering the amount of capital attributable to the shareholders, and includes paid up capital and reserves but not long term debt or other non-equity funding.
Which of the following statements are true:
I. Capital adequacy implies the ability of a firm to remain a going concern
II. Regulatory capital and economic capital are identical as they target the same objectives
III. The role of economic capital is to provide a buffer against expected losses
IV. Conservative estimates of economic capital are based upon a confidence level of 100%
(a) I
(b) I, III and IV
(c) III
(d) I and III
The correct answer is choice ‘a’
Statement I is true - capital adequacy indeed is a reference to the ability of the firm to stay a ‘going concern’. (Going concern is an accounting term that means the ability of the firm to continue in business without the stress of liquidation.)
Statement II is not true because even though the stated objective of regulatory capital requirements is similar to the purposes for which economic capital is calculated, regulatory capital calculations are based upon a large number of ad-hoc estimates and parameters that are ‘hard-coded’ into regulation, while economic capital is generally calculated for internal purposes and uses an institution’s own estimates and models. They are rarely identical.
Statement III is not true as the purpose of economic capital is to provide a buffer against unexpected losses. Expected losses are covered by the P&L (or credit reserves), and not capital.
Statement IV is incorrect as even though economic capital may be calculated at very high confidence levels, that is never 100% which would require running a ‘risk-free’ business, which would mean there are no profits either. The level of confidence is set at a level which is an acceptable balance between the interests of the equity providers and the debt holders.
When combining separate bottom up estimates of market, credit and operational risk measures, a most conservative economic capital estimate results from which of the following assumptions:
(a) Assuming that market, credit and operational risk estimates are perfectly negatively correlated
(b) Assuming that market, credit and operational risk estimates are perfectly positively correlated
(c) Assuming that the resulting distributions have a correlation between 0 and 1
(d) Assuming that market, credit and operational risk estimates are uncorrelated
The correct answer is choice ‘b’
If the risks are considered perfectly positively correlated, ie assumed to have a correlation equal to 1, the standard deviations can simply be added together. This gives the most conservative estimate of combined risk for capital calculation purposes. In practice, this is the assumption used most often.
If risks are uncorrelated, ie correlation is assumed to be zero, variances can be added, ie the combined standard deviation is the square root of the sum of the squares of the individual standard deviations. This obviously gives a number lower than that given when correlation is assumed to be +1.
Similarly, assumptions of negative correlation, or any correlation other than +1 will give a standard deviation number that is smaller and therefore less conservative. Choice ‘b’ is the correct answer.
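To make the arithmetic concrete, here is a minimal Python sketch; the standalone estimates for the three risk types are hypothetical figures used purely for illustration.

```python
import math

# Hypothetical standalone economic capital estimates for market,
# credit and operational risk (in $ millions).
ec = [100.0, 60.0, 40.0]

# Perfect positive correlation (rho = +1): standard deviations simply
# add, giving the most conservative combined estimate.
ec_perfectly_correlated = sum(ec)

# Zero correlation: variances add, so the combined estimate is the
# square root of the sum of squares - always lower than the simple sum.
ec_uncorrelated = math.sqrt(sum(x ** 2 for x in ec))

print(ec_perfectly_correlated)  # 200.0 - most conservative
print(ec_uncorrelated)          # ~123.3 - less conservative
```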
The Options Theoretic approach to calculating economic capital considers the value of capital as being equivalent to a call option with a strike price equal to:
(a) The market value of the debt
(b) The value of the assets
(c) The value of the firm
(d) The notional value of the debt
The correct answer is choice ‘d’
The Options Theoretic approach to calculating economic capital is a top-down approach that considers the value of capital as being equivalent to a call option with a strike price equal to the notional value of the debt - ie, the shareholders have a call option on the assets of the firm which they can acquire by paying the debt holders a value equal to their notional claim (ie the face value of the debt). Therefore Choice ‘d’ is the correct answer and the other choices are incorrect.
Economic capital under the Earnings Volatility approach is calculated as:
(a) Earnings under the worst case scenario at a given confidence level/Required rate of return for the firm
(b) [Expected earnings less Earnings under the worst case scenario at a given confidence level]/Required rate of return for the firm
(c) Expected earnings/Required rate of return for the firm
(d) Expected earnings/Specific risk premium for the firm
The correct answer is choice ‘b’
The Earnings Volatility approach to calculating economic capital is a top down approach that considers economic capital to be the capital required to make up for the worst case fall in earnings, and calculates EC as the worst case decrease in earnings capitalized at the rate of return expected of the firm. The worst case decrease in earnings, or earnings-at-risk, can only be stated at a given confidence level, and is equal to Expected earnings less Earnings under the worst case scenario.
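As a worked illustration, the sketch below uses made-up inputs (the figures are assumptions, not numbers from the handbook).

```python
# Hypothetical inputs for the Earnings Volatility approach.
expected_earnings = 500.0    # expected annual earnings
worst_case_earnings = 200.0  # earnings in the worst case at, say, 99% confidence
required_return = 0.15       # rate of return expected of the firm

# Earnings-at-risk: the worst case fall in earnings at the given confidence level.
earnings_at_risk = expected_earnings - worst_case_earnings

# Economic capital: earnings-at-risk capitalized at the required rate of return.
economic_capital = earnings_at_risk / required_return
print(economic_capital)  # (500 - 200) / 0.15 = 2000.0
```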
Which of the following is not one of the ‘three pillars’ specified in the Basel accord:
(a) Minimum capital requirements
(b) National regulation
(c) Supervisory review
(d) Market discipline
The correct answer is choice ‘b’
The three pillars are minimum capital requirements, supervisory review and market discipline. National regulation is not a pillar described under the accord. Choice ‘b’ is the correct answer.
According to the Basel framework, shareholders’ equity and reserves are considered a part of:
(a) Tier 1 capital
(b) Tier 2 capital
(c) Tier 3 capital
(d) All of the above
The correct answer is choice ‘a’
According to the Basel II framework, Tier 1 capital, also called core capital or basic equity, includes equity capital and disclosed reserves.
Tier 2 capital, also called supplementary capital, includes undisclosed reserves, revaluation reserves, general provisions/general loan-loss reserves, hybrid debt capital instruments and subordinated term debt.
Tier 3 capital, or short term subordinated debt, is intended to cover market risk only, and only at the discretion of the national authority.
According to the Basel framework, reserves resulting from the upward revaluation of assets are considered a part of:
(a) Tier 2 capital
(b) Tier 1 capital
(c) Tier 3 capital
(d) All of the above
The correct answer is choice ‘a’
According to the Basel II framework, Tier 1 capital, also called core capital or basic equity, includes equity capital and disclosed reserves.
Tier 2 capital, also called supplementary capital, includes undisclosed reserves, revaluation reserves, general provisions/general loan-loss reserves, hybrid debt capital instruments and subordinated term debt.
Tier 3 capital, or short term subordinated debt, is intended to cover market risk only, and only at the discretion of the national authority.
According to the Basel II framework, subordinated term debt that was originally issued 4 years ago with a maturity of 6 years is considered a part of:
(a) Tier 3 capital
(b) Tier 1 capital
(c) Tier 2 capital
(d) None of the above
The correct answer is choice ‘c’
According to the Basel II framework, Tier 1 capital, also called core capital or basic equity, includes equity capital and disclosed reserves.
Tier 2 capital, also called supplementary capital, includes undisclosed reserves, revaluation reserves, general provisions/general loan-loss reserves, hybrid debt capital instruments and subordinated term debt issued originally for 5 years or longer.
Tier 3 capital, or short term subordinated debt, is intended to cover market risk only, and only at the discretion of the national authority. This only includes short term subordinated debt originally issued for 2 or more years.
Which of the following is NOT an approach used to allocate economic capital to underlying business units:
(a) Fixed ratio economic capital contributions
(b) Incremental economic capital contributions
(c) Stand alone economic capital contributions
(d) Marginal economic capital contributions
The correct answer is choice ‘a’
Other than Choice ‘a’, all others represent valid approaches to allocate economic capital to underlying business units. There is no such thing as a ‘fixed ratio economic capital contribution’.
The sum of the stand alone economic capital of all the business units of a bank is:
(a) unrelated to the economic capital for the firm as a whole
(b) less than the economic capital for the firm as a whole
(c) equal to the economic capital for the firm as a whole
(d) more than the economic capital for the firm as a whole
The correct answer is choice ‘d’
Economic capital is sub-additive: because the correlation between the risks of the different business units is less than perfect, the total economic capital for the firm will be less than the sum of the EC for the individual business units. Therefore Choice ‘d’ is the correct answer.
In practice, correlations are difficult to estimate reliably, and banks often use estimates and corroborate their capital calculations with reference to a number of data points.
The standalone economic capital estimates for the three uncorrelated business units of a bank are $100, $200 and $150 respectively. What is the combined economic capital for the bank?
(a) 72500
(b) 450
(c) 269
(d) 21
The correct answer is choice ‘c’
Since the business units are uncorrelated, we can get the combined EC as equal to the square root of the sum of the squares of the individual EC estimates. Therefore Choice ‘c’ is the correct answer. [=SQRT(100^2+200^2+150^2)]
The standalone economic capital estimates for the three business units of a bank are $100, $200 and $150 respectively. What is the combined economic capital for the bank, assuming the risks of the three business units are perfectly correlated?
(a) 21
(b) 269
(c) 72500
(d) 450
The correct answer is choice ‘d’
Since the business units are perfectly correlated, we can get the combined EC as equal to the sum of the individual EC estimates. Therefore Choice ‘d’ is the correct answer.
A key problem with return on equity as a measure of comparative performance is:
(a) that return on equity measures do not account for interest and taxes
(b) that return on equity is not adjusted for cash flows being different from accounting earnings
(c) that return on equity is not adjusted for risk
(d) that return on equity ignores the effect of leverage on returns to shareholders
The correct answer is choice ‘c’
The major problem with using return on equity as a measure of performance is that return on equity is not adjusted for risk. Therefore, a riskier investment will always come out ahead when compared to a less risky investment when using return on equity as a performance metric.
Return on equity does not ignore the effect of leverage (though return on assets does) because it considers the income attributable to equity, including income from leveraged investments.
Return on equity is generally measured after interest and taxes at the company wide level, though at business unit level it may use earnings before interest and taxes. However this does not create a problem so long as all performance being compared is calculated in the same way.
Cash flows being different from accounting earnings can create liquidity issues, but this does not affect the effectiveness of ROE as a measure of performance.
As opposed to traditional accounting based measures, risk adjusted performance measures use which of the following approaches to measure performance:
(a) adjust returns based on the level of risk undertaken to earn that return
(b) adjust both return and the capital employed to account for the risk undertaken
(c) adjust capital employed to reflect the risk undertaken
(d) Any or all of the above
The correct answer is choice ‘d’
Performance measurement at a very basic level involves comparing the return earned to the capital invested to earn that return. Risk adjusted performance measures (RAPMs) come in various varieties - and the key difference between RAPMs and traditional measures such as return on equity, return on assets etc is that RAPMs account for the risk undertaken. They may do so by either adjusting the return, or the capital, or both. They are classified as RAROCs (risk adjusted return on capital), RORACs (return on risk adjusted capital) and RARORACs (risk adjusted return on risk adjusted capital).
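The three flavours can be illustrated with a short sketch. The income, expected loss and capital figures below are hypothetical, and the precise definition of the risk adjustments varies by institution.

```python
# Hypothetical inputs for one business unit.
income = 12.0            # accounting earnings of the unit
expected_loss = 2.0      # risk adjustment to the return (eg, expected credit losses)
book_capital = 100.0     # unadjusted capital employed
economic_capital = 60.0  # risk adjusted capital (eg, economic capital at 99.9%)

raroc = (income - expected_loss) / book_capital        # adjust the return only
rorac = income / economic_capital                      # adjust the capital only
rarorac = (income - expected_loss) / economic_capital  # adjust both

print(raroc, rorac, rarorac)  # 0.1 0.2 ~0.167
```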
For a group of assets known to be positively correlated, what is the impact on economic capital calculations if we assume the assets to be independent (or uncorrelated)?
(a) Economic capital estimates remain the same
(b) The impact on economic capital cannot be determined in the absence of volatility information
(c) Estimates of economic capital go down
(d) Estimates of economic capital go up
The correct answer is choice ‘c’
By assuming the assets to be independent, we are reducing the correlation from a positive number to zero. Reducing asset correlations reduces the combined standard deviation of the assets, and therefore reduces economic capital. Therefore Choice ‘c’ is the correct answer.
Note that this question could also be phrased in terms of the impact on VaR estimates, and the answer would still be the same. Both VaR and economic capital are a multiple of standard deviation, and if standard deviation goes down, both VaR and economic capital estimates will reduce.
A financial institution is considering shedding a business unit to reduce its economic capital requirements. Which of the following is an appropriate measure of the resulting reduction in capital requirements?
(a) Marginal capital for the business unit in consideration
(b) Percentage of total gross income contributed by the business unit in question
(c) Proportionate capital for the business unit in consideration
(d) Incremental capital for the business unit in consideration
The correct answer is choice ‘d’
Incremental capital (or incremental VaR, depending upon the context), is a measure of the change in the capital (or VaR) requirements if a certain change is made to a portfolio. It uses the ‘before’ and ‘after’ approach, ie find out what the capital requirement or VaR will be without the change, and what it will be after the change. The difference is the incremental capital or incremental VaR. It helps measure the change in risk as a result of a particular action, eg a change in a position.
Marginal capital or VaR on the other hand is a method to break down the capital requirement or the VaR so that it can be assigned to individual positions within the portfolio. The total of marginal capital or marginal VaR for all the positions in a portfolio adds up to the total capital requirements or total VaR. Note that marginal VaR is also called component VaR.
Therefore incremental capital is the correct answer to this question. The other choices are incorrect. In the exam, the question may be phrased differently, so try to keep in mind the difference between incremental and marginal capital, which can be a bit confusing given what these terms mean in plain English.
Which of the following best describes economic capital?
(a) Economic capital is a form of provision for market risk losses should adverse conditions arise
(b) Economic capital is the amount of regulatory capital that minimizes the cost of capital for the firm
(c) Economic capital reflects the amount of capital required to maintain a firm’s target credit rating
(d) Economic capital is the amount of regulatory capital mandated for financial institutions in the OECD countries
The correct answer is choice ‘c’
Economic capital is often calculated with a view to maintaining the credit ratings for a firm. It is the capital available to absorb unexpected losses, and credit ratings are also based upon a certain probability of default. Economic capital is often calculated at a level equal to the confidence required for the desired credit rating. For example, if the probability of default for a AA rating is 0.02%, and the firm desires to hold an AA rating, then economic capital maintained at a confidence level of 99.98% would allow for such a rating. In this case, economic capital set at the 99.98% level can be thought of as the level of losses that would not be exceeded with a 99.98% probability, and would help get the firm its desired credit rating.
If E denotes the expected value of a loan portfolio at the end of one year and U the value of the portfolio in the worst case scenario at the 99% confidence level, which of the following expressions correctly describes economic capital required in respect of credit risk?
(a) E - U
(b) U
(c) U/E
(d) E
The correct answer is choice ‘a’
Economic capital in respect of credit risk is intended to absorb unexpected losses. Unexpected losses are the losses above and beyond expected losses and up to the level of confidence that economic capital is being calculated for. The capital required to cover unexpected losses in this case is E - U, and therefore Choice ‘a’ is the correct answer.
A loan portfolio’s full notional value is $100, and its value in a worst case scenario at the 99% level of confidence is $65. Expected losses on the portfolio are estimated at 10%. What is the level of economic capital required to cushion unexpected losses?
(a) 35
(b) 10
(c) 65
(d) 25
The correct answer is choice ‘d’
Expected value = $90 ($100 - 10%)
Value at 99% confidence level = $65
Therefore economic capital required at this level of confidence = $90 - $65 = $25.
The VaR of a portfolio at the 99% confidence level is $250,000 when mean return is assumed to be zero. If the assumption of zero returns is changed to an assumption of returns of $10,000, what is the revised VaR?
(a) 226740
(b) 260000
(c) 273260
(d) 240000
The correct answer is choice ‘d’
The exact formula for VaR is VaR = -(zα σ + μ), where zα is the z-multiple for the desired confidence level, σ is the standard deviation, and μ is the mean return. zα is always a negative number, at least so long as the desired confidence level is greater than 50%, and μ is often assumed to be zero because for the short time periods over which market risk VaR is calculated its value is very close to zero.
Therefore in practice the formula for VaR just becomes -zα σ, and since zα is negative, we normally just multiply the z factor without the negative sign by the standard deviation to get the VaR.
For this question, there are two ways to get the answer. If we use the formula, we know that -zα σ = 250,000 (as μ = 0), and therefore -(zα σ + μ) = 250,000 - 10,000 = $240,000.
The other, easier way to think about this is that if the mean changes, then the distribution’s shape stays exactly the same, and the entire distribution shifts to the right by $10,000 as the mean moves up by $10,000. Therefore the VaR cutoff, which was previously at -250,000 on the graph also moves up by 10k to -240,000, and therefore $240,000 is the correct answer.
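The same result can be verified numerically. The sketch below assumes normally distributed returns and uses scipy; the only inputs are the ones given in the question.

```python
from scipy.stats import norm

confidence = 0.99
z = norm.ppf(1 - confidence)  # z-multiple, about -2.326 (a negative number)

# Back out sigma from the zero-mean VaR of $250,000, then apply
# VaR = -(z * sigma + mu) with the revised mean of $10,000.
var_zero_mean = 250_000.0
sigma = var_zero_mean / -z

mu = 10_000.0
var_with_mean = -(z * sigma + mu)
print(round(var_with_mean))  # 240000
```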
What would be the correct order of steps to addressing data quality problems in an organization?
(a) Design the future state, perform a gap analysis, analyze the current state and implement the future state
(b) Assess the current state, design the future state, determine gaps and the actions required to be implemented to eliminate the gaps
(c) Articulate goals, do a ‘strategy-fit’ analysis and plan for action
(d) Call in external consultants
The correct answer is choice ‘b’
The correct order of steps to addressing data quality problems in an organization would include:
- Assessing the current state
- Designing the future state, and
- Planning and implementation which would include identifying the gaps between the current and the desired future state, and implementation to address the gaps.
A bank’s detailed portfolio data on positions held in a particular security across the bank does not agree with the aggregate total position for that security for the bank. What data quality attribute is missing in this situation?
(a) Data completeness
(b) Auditability
(c) Data extensibility
(d) Data integrity
The correct answer is choice ‘d’
The term ‘data quality’ has multiple elements, ie, data must have multiple attributes such as completeness, timeliness, auditability etc in order to be considered of a high quality. Because this is not an exact science, every expert or textbook will have a different view of what goes into data quality. For our purposes however, we will stick to what the PRMIA study material specifies, and according to the study material the following are the elements that can be considered attributes that make for quality data:
- Integration
- Integrity
- Completeness
- Accessibility
- Flexibility
- Extensibility
- Timeliness
- Auditability
I am not going to describe each of these here as that would merely repeat the study material, but suffice it to say that the breakdown of a number into its constituents should tie to the aggregate total. If that is not true, then the data lacks integrity - and therefore Choice ‘d’ is the correct answer. The other choices address other aspects of data quality but not this one, and therefore are not correct.
Which of the following statements are true?
I. Retail Risk Based Pricing involves using borrower specific data to arrive at both credit adjudication and pricing decisions
II. An integrated ‘Risk Information Management Environment’ includes two elements - people and processes
III. A Logical Data Model (LDM) lays down the relationships between data elements that an organization stores
IV. Reference Data and Metadata refer to the same thing
(a) I and III
(b) II and IV
(c) I, II and III
(d) All of the above
The correct answer is choice ‘a’
Statement I is correct. Retail Risk Based Pricing (RRBP) involves the use of borrower specific data (such as FICO scores, average balances etc) to arrive at credit decisions. These ‘retail’ credit decisions may include decisions on whether to grant a line of credit, a mortgage, issue a credit card, or any of the various other retail activities a bank may be dealing with. At the same time, this data can also be used to price the product, in addition to providing a yes or no credit decision so that risky borrowers are charged more than less risky borrowers.
Statement II is not correct, because an integrated Risk Information Management Environment includes three elements - people, processes and technology (and not just people and processes).
Statement III is correct. An LDM is a blue print of an organization’s data, and describes the relationships between the various data elements.
Statement IV is not correct because reference data and metadata are not the same thing. Reference data refers to relatively static data, such as customer name (while actual transactions may not be so static). Metadata refers to data about data, and is stored in a data dictionary.
Which of the following is not an approach proposed by the Basel II framework to compute operational risk capital?
(a) Factor based approach
(b) Advanced measurement approach
(c) Standardized approach
(d) Basic indicator approach
The correct answer is choice ‘a’
Basel II proposes three approaches to compute operational risk capital - the basic indicator approach (BIA), the standardized approach (SA) and the advanced measurement approach (AMA). There is no operational risk approach called the factor based approach.
Which of the following data sources are expected to influence operational risk capital under the AMA:
I. Internal Loss Data (ILD)
II. External Loss Data (ELD)
III. Scenario Data (SD)
IV. Business Environment and Internal Control Factors (BEICF)
(a) I, II and III only
(b) III only
(c) I and II
(d) All of the above
The correct answer is choice ‘d’
All four data sources are expected to be utilized as inputs as appropriate for operational risk calculations under the advanced measurement approach. Of these, the last one, BEICF, is slightly different from the rest as it does not yield data points that become the basis of curve fitting or other statistical computations underlying capital calculations. It includes items such as KRIs, risk assessments etc, and allows the risk manager to assess the qualitative aspects of loss data.
The Basel framework does not permit which of the following Units of Measure (UoM) for operational risk modeling:
I. UoM based on legal entity
II. UoM based on event type
III. UoM based on geography
IV. UoM based on line of business
(a) I and IV (b) II only (c) III only (d) None of the above
The correct answer is choice ‘d’
Units of Measure for operational risk are homogenous groupings of risks to allow sensible modeling decisions to be made. For example, some risks may be fat-tailed, such as the risk of regulatory fines. Other risks may have finite tails - for example, damage to physical assets (DPA) risk may be limited to the value of the asset in question.
Additionally, risk reporting may need to be done on a line of business, legal entity or regional basis, and in order to be able to do so, the right level of granularity needs to be captured in the risk modeling exercise. The level of granularity applied is called the ‘unit of measure’ (UoM), and it is okay to adopt all of the choices listed above as the dimensions that describe the unit of measure.
Note that it is entirely possible, even likely, to use legal entity, risk type, region, business and other dimensions simultaneously, though doing so is likely to result in an extremely large number of UoM combinations. That can be addressed by subsequently grouping the more granular UoMs into larger UoMs, which may ultimately be used for frequency and severity estimation.
Which of the following statements are true:
I. The set of UoMs used for frequency and severity modeling should be identical
II. UoMs can be grouped together into larger combined UoMs using judgment based on the knowledge of the business
III. UoMs can be grouped together into combined UoMs using statistical techniques
IV. One may use separate sets of UoMs for frequency and severity modeling
(a) I, II and III
(b) II, III and IV
(c) IV only
(d) All of the above
The correct answer is choice ‘b’
One may use separate UoMs for frequency and severity modeling, for example, a combined UoM may be used for estimating the frequency of cyber attacks in a scenario, while the severity may be modeled using a more granular line-of-business UoM. Therefore statement I is false, while statement IV is true.
Statement II is correct, UoMs can be grouped together into larger units based on the facts relating to the business, controls and the business environment. Similarly, UoMs can be grouped together based on statistical clustering techniques using the ‘distance’ between the units of measure and combining UoMs that are closer to each other. In addition, it is also possible to combine both business knowledge and statistical algorithms to combine UoMs.
Which of the following distributions is generally not used for frequency modeling for operational risk:
(a) Binomial
(b) Poisson
(c) Gamma
(d) Negative binomial
The correct answer is choice ‘c’
Frequency modeling is performed using discrete distributions whose outcomes are non-negative integers - this allows the number of events per period of time to be modeled. Of the distributions listed above, Poisson, negative binomial and binomial can be used for modeling frequency distributions. The Poisson and negative binomial distributions are encountered the most in practice.
The gamma distribution is a continuous distribution and cannot be used for frequency modeling.
For a given mean, which distribution would you prefer for frequency modeling where operational risk events are considered dependent, or in other words are seen as clustering together (as opposed to being independent)?
(a) Binomial
(b) Negative binomial
(c) Poisson
(d) Gamma
The correct answer is choice ‘b’
An interesting property that distinguishes the three most used distributions for modeling event frequency is that for a given mean, their variances differ. The ratio of variance to mean (the variance-mean ratio, calculated as variance/mean) can then be used to decide the kind of distribution to use. Both the variance and the mean can be estimated from available data points from the internal or external loss databases, or the scenario exercise.
The variance-mean ratio reflects how dispersed a distribution is. (In the PRMIA handbook, the variance to mean ratio has been described as the “Q-Factor”.)
The Poisson distribution has its mean equal to its variance, and therefore the variance to mean ratio is 1. For the negative binomial distribution, this ratio is always greater than 1, which means there is greater dispersion compared to the mean - or more intervals with low counts as well as more intervals with high counts. For the binomial distribution, the variance to mean ratio is less than one, which means it is less dispersed than the Poisson distribution with values closer to the mean.
In a situation where operational risk events are seen as clustering together, or dependent, the variance will be higher and it would be more appropriate to use the negative binomial distribution.
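The dispersion argument is easy to check with scipy. The three distributions below are parameterized (with assumed values) to share a mean of 10, yet their variance-to-mean ratios straddle 1.

```python
from scipy.stats import poisson, nbinom, binom

# Poisson with mean 10: variance equals the mean, so VMR = 1.
vmr_poisson = poisson.var(10) / poisson.mean(10)

# Negative binomial with the same mean (n*(1-p)/p = 10): VMR > 1.
n, p = 5, 1 / 3
vmr_nbinom = nbinom.var(n, p) / nbinom.mean(n, p)

# Binomial with the same mean (20 trials x 0.5 = 10): VMR < 1.
vmr_binom = binom.var(20, 0.5) / binom.mean(20, 0.5)

print(vmr_poisson, vmr_nbinom, vmr_binom)  # 1.0 3.0 0.5
```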
Which of the following is closest to the description of a ‘risk functional’?
(a) A risk functional is the distribution that models the severity of a risk
(b) A risk functional is a model distribution that is an approximation of the true loss distribution of a risk
(c) A risk functional assigns a penalty value for the difference between a model distribution and a risk’s severity distribution
(d) Risk functional refers to the Kolmogorov-Smirnov distance
The correct answer is choice ‘c’
For operational risk modeling, both frequency and severity distributions need to be modeled. Modeling severity involves finding an analytical distribution, such as the log-normal or another, that best approximates the distribution represented by known data - whether from the internal loss database, the external loss database or scenario data. A ‘risk functional’ is a measure of the deviation of the model distribution from the risk’s actual severity distribution. It assigns a penalty value for the deviation, using a statistical measure, such as the KS distance (Kolmogorov-Smirnov distance).
The problem of finding the right distribution then becomes the problem of optimizing the risk functional. For example, if F is the model distribution, G is the actual, or empirical, severity distribution, and we are using the KS test, then the Risk Functional R is defined as follows:
R = sup_x | F(x) - G(x) |
Note that sup_x stands for ‘supremum’, which is a more technical way of saying ‘maximum’. In other words, we are calculating the maximum absolute KS distance between the two distributions. (The KS distance is the maximum vertical distance between the CDFs of the two distributions, evaluated at the same points.)
Once the risk functional is identified, we can minimize it to determine the best fitting distribution for severity.
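A minimal sketch of evaluating the risk functional for one candidate model follows; the simulated loss dataset and the lognormal candidate are both assumptions used purely for illustration.

```python
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(42)
losses = rng.lognormal(mean=10.0, sigma=2.0, size=500)  # stand-in for a loss dataset

# Candidate model distribution F: a lognormal fitted to the data.
shape, loc, scale = lognorm.fit(losses, floc=0)

# Empirical severity distribution G, evaluated at the sorted loss points.
x = np.sort(losses)
G = np.arange(1, len(x) + 1) / len(x)

# Risk functional R = sup_x |F(x) - G(x)|, ie the KS distance,
# approximated here at the observed points.
F = lognorm.cdf(x, shape, loc=loc, scale=scale)
R = np.max(np.abs(F - G))
print(R)
```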
Which of the following statements are true:
I. Heavy tailed parametric distributions are a good choice for severity modeling in operational risk.
II. Heavy tailed body-tail distributions are a good choice for severity modeling in operational risk.
III. Log-likelihood is a means to estimate parameters for a distribution.
IV. Body-tail distributions allow modeling small losses differently from large ones.
(a) I and IV
(b) II and III
(c) II, III and IV
(d) All of the above
The correct answer is choice ‘d’
When modeling for operational risk, we are generally concerned with tail losses - this is because the horizon for operational risk is 1 year at the 99.9th percentile. Since the 99.9th percentile is in the tail region, we would like to ensure that the tails are modeled as accurately as possible. Operational risk distributions are modeled using heavy tailed distributions.
Heavy tailed parametric distributions such as the log-normal, Pareto and others are a good choice for modeling risk severity; therefore statement I is correct.
Body-tail distributions are combinations of parametric distributions, with different types of distributions being used to model the body and the tail - this provides flexibility because small and medium losses up to a threshold can be modeled using one distribution, and losses beyond the threshold can be modeled using a different distribution that is a better estimate of the tail. Statement II is therefore correct.
A log-likelihood function simplifies the optimization of a regular likelihood function. We generally maximize a likelihood function (or minimize the risk functional) with a view to estimating the parameters of the underlying distribution. If the likelihood function is complex, it may be mathematically easier to optimize the log of the function - as that changes exponents and multiplications to additions, while behaving in the same way as the underlying function. Therefore statement III is correct: log-likelihood is a means to estimate parameters for a distribution.
Statement IV is correct as body-tail distributions allow modeling different parts of the distribution differently from each other.
The difference between true severity and the best approximation of the true severity is called:
(a) Approximation error
(b) Total error
(c) Estimation error
(d) Fitting error
The correct answer is choice ‘a’
This question relates to fitting a distribution to the true severity of the operational risk loss we are trying to model. The difference between the severity as represented by our model and the true severity has two components. To understand this, consider the three data points below:
a. The true severity,
b. The best approximation of the true severity in the model space, and
c. The fit based on the dataset.
- True severity is what we are trying to model.
- The model space refers to the collection of analytical distributions (log-normal, Burr etc) that we are considering to arrive at the estimate of the severity.
- The ‘best approximation of the true severity in the model space’ is reached by estimating the parameters of the distribution that optimizes the risk functional.
- The ‘fit’ is the actual parameter estimates we settle for with the distribution we have determined best fits the true estimate of our severity. When estimating parameters, we have various methods available for estimation - the least squares method, the maximum likelihood method, for example, and we can get different estimates depending upon the method we choose to use.
Our severity model will be different from the true severity, and the total difference can be split into two types of errors:
- Fitting error, represented by ‘c - b’ above: The difference between the fit based on the dataset and the best approximation of the true severity is called ‘fitting error’, ie, a measure of the extent to which we could have estimated the parameters better.
- Approximation error, represented by ‘b - a’ above: the difference between the true severity and the best approximation of the true severity that can be achieved within the model space.
One can reduce the approximation error by expanding the model space by adding more distributions. This will reduce the approximation error, but generally has the effect of increasing the fitting error because the complexity of the model space increases, and there are more ways to fit to the true severity.
Which of the following are valid methods for selecting an appropriate model from the model space for severity estimation:
I. Cross-validation method
II. Bootstrap method
III. Complexity penalty method
IV. Maximum likelihood estimation method
(a) I and IV (b) II and III (c) I, II and III (d) All of the above
The correct answer is choice ‘d’
Once we have a number of distributions in the model space, the task is to select the “best” distribution that is likely to be a good estimate of true severity. We have a number of distributions to pick from, an empirical dataset (from internal or external losses), and we can estimate the parameters for the different distributions. We then have to decide which distribution to pick, and that generally requires considering both approximation and fitting errors.
There are four methods that are generally used for selecting a model:
- The cross-validation method: This method divides the available data into two parts - the training set, and the validation set (the validation set is also called the ‘testing set’). Parameter estimation for each distribution is done using the training set, and differences are then calculated based on the validation set. Though the temptation may be to use the entire data set to estimate the parameters, that is likely to result in what may appear to be an excellent fit to the data on which it is based, but without any validation. So we estimate the parameters based on one part of the data (the training set), and check the differences we get from the remaining data (the validation set).
- Complexity penalty method: This is similar to the cross-validation method, but with an additional consideration of the complexity of the model. More complex models are likely to produce a more exact fit than simpler models, but this may be spurious - and therefore a ‘penalty’ is added to the more complex models so as to favor simplicity over complexity. The ‘complexity’ of a model may be measured by the number of parameters it has; for example, a log-normal distribution has only two parameters while a body-tail distribution combining two different distributions may have many more.
- The bootstrap method: The bootstrap method estimates fitting error by drawing samples from the empirical loss dataset, or the fit already obtained, and then estimating parameters for each draw which are compared using some statistical technique. If the samples are drawn from the loss dataset, the technique is called a non-parametric bootstrap, and if the sample is drawn from an estimated model distribution, it is called a parametric bootstrap.
- Using goodness of fit statistics: The candidate fits can be compared using a goodness of fit statistic, such as the KS distance, and the best one selected. Maximum likelihood estimation is a technique that chooses parameter values so as to maximize the likelihood of the observed data. It is a general purpose statistical technique that can be used for parameter estimation, as well as for deciding which distribution to use from the model space.
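A rough sketch of the cross-validation idea: fit each candidate distribution on a training set and score it on held-out data with the KS distance. The loss data and the candidate set below are assumptions for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
losses = rng.lognormal(mean=8.0, sigma=1.5, size=400)  # stand-in loss dataset

# Training set for parameter estimation; validation ('testing') set for scoring.
train, test = losses[:300], losses[300:]

candidates = {"lognorm": stats.lognorm, "gamma": stats.gamma, "pareto": stats.pareto}

for name, dist in candidates.items():
    params = dist.fit(train, floc=0)  # estimate parameters on the training set only
    # KS distance of the fitted model against the held-out validation data.
    ks = stats.kstest(test, dist.cdf, args=params).statistic
    print(name, round(ks, 4))
# The candidate with the smallest validation KS distance would be preferred.
```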
Which of the following is the most important problem to solve for fitting a severity distribution for operational risk capital:
(a) The fit obtained should reduce the combination of the fitting and approximation errors to a minimum
(b) Determine plausible scenarios to fill the data gaps in the internal and external loss data
(c) The risk functional’s minimization should lead to a good estimate of the 0.999 quantile
(d) Empirical loss data needs to be extended to the ranges below the reporting threshold and above large value losses
The correct answer is choice ‘c’
Ultimately, the objective of the operational risk severity estimation exercise is to calculate the 99.9th percentile loss over a one year horizon; and everything else we do with data, collecting loss information, modeling, curve fitting etc revolves around this objective. If we cannot estimate the 99.9th percentile loss accurately, then not much else matters. Therefore Choice ‘c’ is the correct answer.
Minimizing the combination of fitting and approximation errors is one of the things we do with a view to better estimating the operational loss distribution. Likewise, empirical loss data generally is range bound because corporations do not require employees to log losses less than a threshold, and high value losses are generally rare. This problem is addressed by extrapolating both large and small losses, something that impacts the performance of our model. Likewise, one of the objectives of scenario analysis is to fill data gaps by generating plausible scenarios. Yet while all these are real issues to address, the primary problem we are trying to solve is estimating the 0.999 quantile.
When fitting a distribution in excess of a threshold as part of the body-tail distribution method described by the equation below, how is the parameter ‘p’ calculated?
F(x) = p × F(Body)(x) for x ≤ T, and F(x) = p + (1 - p) × F(Tail)(x) for x > T
Here, F(x) is the severity distribution. F(Tail) and F(Body) are the parametric distributions selected for the tail and the body, with F(Body) scaled so that F(Body)(T) = 1 and F(Tail) fitted to exceedances over T, and T is the threshold in excess of which the tail is considered to begin.
The correct answer is choice ‘d’
p = k/N. If there are N observations of which k are up to T, then p = k/N allows us to have a continuous unbroken curve which gets increasingly weighted towards the distribution selected for the tail as we move towards the ‘right’, ie the higher values of losses.
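A sketch of this spliced CDF with assumed body and tail distributions (a lognormal body and a generalized Pareto tail, both with made-up parameters) shows how p = k/N keeps the curve continuous at the threshold.

```python
import numpy as np
from scipy.stats import lognorm, genpareto

T = 1_000_000.0   # threshold where the tail begins
k, N = 950, 1000  # k of N observed losses fall at or below T
p = k / N         # weight on the body

body = lognorm(s=2.0, scale=50_000.0)     # assumed body distribution
tail = genpareto(c=0.4, scale=400_000.0)  # assumed tail distribution (on exceedances)

def spliced_cdf(x):
    x = np.asarray(x, dtype=float)
    below = p * body.cdf(x) / body.cdf(T)  # body rescaled so that F(T) = p
    above = p + (1 - p) * tail.cdf(x - T)  # tail takes over beyond T
    return np.where(x <= T, below, above)

print(spliced_cdf([T, 2 * T, 10 * T]))  # starts at F(T) = 0.95, then climbs toward 1
```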
Which of the following risks and reasons justify the use of scenario analysis in operational risk modeling:
I. Risks for which no internal loss data is available
II. Risks that are foreseeable but have no precedent, internally or externally
III. Risks for which objective assessments can be made by experts
IV. Risks that are known to exist, but for which no reliable external or internal losses can be analyzed
V. Reducing the complexity of having to fit statistical models to internal and external loss data
VI. Managing the capital estimation process so as to produce estimates in line with management’s desired capital buffers.
(a) I, II, III and IV
(b) I, II and III
(c) V
(d) All of the above
The correct answer is choice ‘a’
All the reasons and risks presented above are valid reasons for using scenario analysis, except V and VI - ie, the need to reduce the complexity of calculations is not a valid reason for using scenario analysis. Similarly, making operational risk capital estimates match management’s desired capital allocation targets is also not a valid reason. Capital calculations are intended to provide adequate capital for managing the risk from operations, regardless of what management may desire them to be.
Which of the following statements are true:
I. If no loss data is available, good quality scenarios can be used to model operational risk
II. Scenario data can be mixed with observed loss data for modeling severity and frequency estimates
III. Severity estimates should not be created by fitting models to scenario generated loss data points alone
IV. Scenario assessments should only be used as modifiers to ILD or ELD severity models.
(a) III and IV
(b) I
(c) I and II
(d) All statements are true
The correct answer is choice ‘c’
There are multiple ways to incorporate scenario analysis for modeling operational risk capital - and the exact approach used depends upon the quantity of loss data available, and the quality of scenario assessments. Generally:
- If there is no past loss data available, scenarios are the only practical means to model operational risk loss distributions. Both frequency and severity estimates can be modeled based on scenario data.
- If there is plenty of past data available, scenarios can be used as a modifier for estimates that are based solely on data (for example, consider the MAX of the loss estimates at the desired quantile as provided by the data, and as indicated by scenarios)
- If high quality scenario data is available, and there is sufficient past data, one could mix scenario assessments with the loss data and fit the combined data set to create the loss distribution. Alternatively, both could be fitted with severity estimates and then the two severities could be parametrically combined.
In short, there is considerable flexibility in how scenarios can be used.
Statement I is therefore correct, and so is statement II as both indicate valid uses of scenarios.
Statement III is not correct because it may be okay to create severity estimates based on scenario data alone.
Statement IV is not correct because while using scenarios as modifiers to other means of estimation is acceptable, that is not the only use of scenarios.
An operational loss severity distribution is estimated using 4 data points from a scenario. The management institutes additional controls to reduce the severity of the loss if the risk is realized, and as a result the estimated loss from a 1-in-10-year event is halved. The 1-in-100-year loss estimate however remains the same. What would be the impact on the 99.9th percentile capital required for this risk as a result of the improvement in controls?
(a) The capital required will decrease
(b) The capital required will increase
(c) The capital required will stay the same
(d) Can’t say based on the information provided
The correct answer is choice ‘b’
This situation represents one of the paradoxes in estimating severity that one needs to be aware of - the improvement in controls reduces the weight of the body/middle of the distribution and moves it towards the tails (as the total probability under the curve must stay at 100%), and the distribution becomes more heavy tailed. As a result, the 99.9th percentile loss actually increases instead of decreasing, creating a counterintuitive result. Therefore the correct answer is that the capital required will increase.
If scenario analysis produces such a result, the analyst must question if the 1 in 100 year loss severity is still accurate. If the new control has reduced the severity in the body of the distribution, the question as to why the more extreme losses have not changed should be raised.
For a hypothetical UoM, the number of losses in two non-overlapping datasets is 24 and 32 respectively. The Pareto tail parameters for the two datasets calculated using the maximum likelihood estimation method are 2 and 3. What is an estimate of the tail parameter of the combined dataset?
(a) 2.23
(b) 2.57
(c) 3
(d) Cannot be determined
The correct answer is choice ‘b’
For a number of processes, including many in finance, while a distribution such as the normal distribution is a good approximation of the distribution near the modal value of the variable, the same normal distribution may not be a good estimate of the tails. For this reason, the Pareto distribution is one of the distributions that is often used to model the tails of another distribution. Generally, if you have a set of observations, and you discard all observations below a threshold, you are left with what are called ‘exceedances’. The threshold needs to be reasonably far out in the tail. If from each value of the exceedances you subtract the threshold value, the resulting dataset is estimated by the generalized Pareto distribution.
Therefore 2.57 [=2(24/(24+32)) + 3(32/(24+32))] is the correct answer.
Which of the following statements is true:
I. When averaging quantiles of two Pareto distributions, the quantiles of the averaged models are equal to the geometric average of the quantiles of the original models based upon the number of data items in each original model.
II. When modeling severity distributions, we can only use distributions which have fewer parameters than the number of datapoints we are modeling from.
III. If an internal loss data based model covers the same risks as a scenario based model, they can be combined using the weighted average of their parameters.
IV. If an internal loss model and a scenario based model address different risks, the models can be combined by taking their sums.
(a) II and III
(b) I and II
(c) III and IV
(d) All statements are true
The correct answer is choice ‘d’
Statement I is true, the quantiles of the averaged models are equal to the geometric average of the quantiles of the original models.
Statement II is correct, the number of data points from which model parameters are estimated must be greater than the number of parameters. So if a distribution, say Poisson, has one parameter, we need at least two data points to estimate the parameter. Other complex distributions may have multiple parameters for shape, scale and other things, and the minimum number of observations required will be greater than the number of parameters.
Statement III is true, if the ILD data and scenarios cover the same risk, they are essentially different perspectives on the same risk, and therefore should be combined as weighted averages.
But if they cover completely different risks, the models will need to be added together, not averaged - which is why Statement IV is true.
Which of the following are valid approaches to leveraging external loss data for modeling operational risks:
I. Both internal and external losses can be fitted with distributions, and a weighted average approach using these distributions is relied upon for capital calculations.
II. External loss data is used to inform scenario modeling.
III. External loss data is combined with internal loss data points, and distributions fitted to the combined data set.
IV. External loss data is used to replace internal loss data points to create a higher quality data set to fit distributions.
(a) I, II and III
(b) I and III
(c) II and IV
(d) All of the above
The correct answer is choice ‘a’
Internal loss data is generally of the highest quality as it is relevant, and is ‘real’ as it has occurred to the organization. External loss data suffers from a significant limitation: the risk profiles of the banks to which the data relates are generally not known due to anonymization, and may well not be applicable to the bank performing the calculations. Therefore, replacing internal loss data with external loss data is not a good idea. Statement IV is therefore incorrect.
Which of the following steps are required for computing the aggregate distribution for a UoM for operational risk once loss frequency and severity curves have been estimated:
I. Simulate number of losses based on the frequency distribution
II. Simulate the dollar value of the losses from the severity distribution
III. Simulate random number from the copula used to model dependence between the UoMs
IV. Compute dependent losses from aggregate distribution curves
(a) None of the above
(b) III and IV
(c) I and II
(d) All of the above
The correct answer is choice ‘c’
A recap would be in order here: calculating operational risk capital is a multi-step process.
First, we fit curves to estimate the parameters of our chosen distribution types for frequency (eg, Poisson) and severity (eg, lognormal). Note that these curves are fitted at the UoM level - which is the lowest level of granularity at which modeling is carried out. Since there are many UoMs, there are many frequency and severity distributions. However, what we are interested in is the loss distribution for the entire bank, from which the 99.9th percentile loss can be calculated. Getting there from the multiple frequency and severity distributions is a two step process:
- Step 1: Calculate the aggregate loss distribution for each UoM. Each aggregate loss distribution is based upon an underlying frequency and severity distribution.
- Step 2: Combine the multiple loss distributions after considering the dependence between the different UoMs. The ‘dependence’ recognizes that the various UoMs are not completely independent, ie the loss distributions are not additive, and that there is a sort of diversification benefit: not all types of losses can occur at once, and the joint probabilities of the different losses make the combined loss less than the sum of the parts.
Step 1 requires simulating a number, say n, of the number of losses that occur in a given year from a frequency distribution. Then n losses are picked from the severity distribution, and the total loss for the year is a summation of these losses. This becomes one data point. This process of simulating the number of losses and then identifying that number of losses is carried out a large number of times to get the aggregate loss distribution for a UoM.
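A minimal sketch of Step 1 in Python, assuming a Poisson frequency distribution and a lognormal severity distribution with illustrative parameters (none of these values come from the text):

```python
import numpy as np

rng = np.random.default_rng(42)

lam = 25.0             # Poisson frequency parameter (assumed)
mu, sigma = 10.0, 2.0  # lognormal severity parameters (assumed)
n_sims = 100_000       # number of simulated years

annual_losses = np.empty(n_sims)
for i in range(n_sims):
    n = rng.poisson(lam)                  # number of losses in the year
    losses = rng.lognormal(mu, sigma, n)  # n losses from the severity curve
    annual_losses[i] = losses.sum()       # one data point of the aggregate

# The empirical aggregate loss distribution for this UoM:
print("99.9th percentile annual loss:", np.percentile(annual_losses, 99.9))
```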
Step 2 requires taking the different loss distributions from Step 1 and combining them considering the dependence between the events. The correlations between the losses are described by a ‘copula’, and combined together mathematically to get a single loss distribution for the entire bank. This allows the 99.9th percentile loss to be calculated.
Which of the following steps are required for computing the total loss distribution for a bank for operational risk once individual UoM level loss distributions have been computed from the underlying frequency and severity curves:
I. Simulate number of losses based on the frequency distribution
II. Simulate the dollar value of the losses from the severity distribution
III. Simulate random number from the copula used to model dependence between the UoMs
IV. Compute dependent losses from aggregate distribution curves
(a) III and IV
(b) I and II
(c) None of the above
(d) All of the above
The correct answer is choice ‘a’
A recap would be in order here: calculating operational risk capital is a multi-step process.
First, we fit curves to estimate the parameters of our chosen distribution types for frequency (eg, Poisson) and severity (eg, lognormal). Note that these curves are fitted at the UoM level - which is the lowest level of granularity at which modeling is carried out. Since there are many UoMs, there are many frequency and severity distributions. However, what we are interested in is the loss distribution for the entire bank, from which the 99.9th percentile loss can be calculated. Getting there from the multiple frequency and severity distributions is a two step process:
- Step 1: Calculate the aggregate loss distribution for each UoM. Each aggregate loss distribution is based upon an underlying frequency and severity distribution.
- Step 2: Combine the multiple loss distributions after considering the dependence between the different UoMs. The ‘dependence’ recognizes that the various UoMs are not completely independent, ie the loss distributions are not additive, and that there is a sort of diversification benefit: not all types of losses can occur at once, and the joint probabilities of the different losses make the combined loss less than the sum of the parts.
Step 1 requires simulating a number, say n, of the number of losses that occur in a given year from a frequency distribution. Then n losses are picked from the severity distribution, and the total loss for the year is a summation of these losses. This becomes one data point. This process of simulating the number of losses and then identifying that number of losses is carried out a large number of times to get the aggregate loss distribution for a UoM.
Step 2 requires taking the different loss distributions from Step 1 and combining them considering the dependence between the events. The correlations between the losses are described by a ‘copula’, and combined together mathematically to get a single loss distribution for the entire bank. This allows the 99.9th percentile loss to be calculated. In this question, the UoM level loss distributions (statements I and II) have already been computed, so the steps that remain are simulating from the copula and computing the dependent losses - ie statements III and IV.
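For concreteness, here is a minimal sketch of steps III and IV under an assumed Gaussian copula, with two UoMs whose Step 1 aggregate loss distributions are represented by placeholder empirical arrays:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

# Placeholder empirical aggregate loss distributions per UoM (from Step 1)
uom1 = rng.lognormal(12.0, 1.5, 100_000)
uom2 = rng.lognormal(11.0, 2.0, 100_000)

rho, n_sims = 0.3, 100_000   # assumed dependence between the two UoMs

# Step III: simulate correlated random numbers from the Gaussian copula
z1 = rng.standard_normal(n_sims)
z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_sims)
u1, u2 = norm.cdf(z1), norm.cdf(z2)

# Step IV: read off dependent losses from each UoM's aggregate curve
# via the empirical inverse CDF, then add them up
total = np.quantile(uom1, u1) + np.quantile(uom2, u2)
print("Bank-wide 99.9th percentile loss:", np.percentile(total, 99.9))
```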
Which of the following statements are correct:
I. A training set is a set of data used to create a model, while a control set is a set of data used to prove that the model actually works
II. Cleansing, aggregating or ensuring data integrity is a task for the IT department, and is not a risk manager’s responsibility
III. Lack of information on the quality of underlying securities and assets was a major cause of the collapse in the CDO markets during the credit crisis that started in 2007
IV. The problem of lack of historical data can be addressed reasonably satisfactorily by using analytical approaches
(a) I, III and IV
(b) I and III
(c) II and IV
(d) All of the above
The correct answer is choice ‘b’
Statement I is correct. Data is often divided into two sets - a ‘training set’ that is used to create and fine-tune the model, while the ‘control set’ is used to prove that the model works on sample data. Back testing is then performed using actual data that becomes available over time, or that may already be available as historical data.
Statement II is incorrect. A risk manager often spends a great deal of time in managing data, and ensuring that the data being used is accurate enough for the purpose it is being used for. A risk manager can expect to spend a good part of his or her team’s time in cleansing data. While he or she can try to get the IT processes and systems to produce correct data in the first place so it requires minimal subsequent cleansing or validation, this task is likely to remain a key part of a risk manager’s role for quite some time in the future given the challenges nearly all organizations face in managing risk data.
Statement III is correct. There was not enough granular data available on the underlying components of some of the derivative debt securities whose markets dried up during the crisis that began in 2007. Investors became increasingly unsure of the value of these securities, such as CDOs, leading to market seizure and firesale prices.
Statement IV is not correct. There is no easy solution to the lack of enough historical data, which is used to create as well as test models, and construct stress scenarios. Analytical approaches are not a good enough substitute for real market data. During the recent crisis, many instruments had rather short histories and there was not enough data available, and risk managers and portfolio managers relied upon analytical approaches to value and price them. Many of the assumptions that underpinned these approaches were untested in the real world and turned out to be incorrect.
When compared to a high severity low frequency risk, the operational risk capital requirement for a low severity high frequency risk is likely to be:
(a) Higher
(b) Zero
(c) Lower
(d) Unaffected by differences in frequency or severity
The correct answer is choice ‘c’
High frequency and low severity risks, for example the risks of fraud losses for a credit card issuer, may have high expected losses, but low unexpected losses. In other words, we can generally expect these losses to stay within a small expected and known range. The capital requirement will be the worst case losses at a given confidence level less expected losses, and in such cases this can be expected to be low.
On the other hand, medium severity medium frequency risks, such as the risks of unexpected legal claims or ‘fat-finger’ trading errors, will have low expected losses but a high level of unexpected losses. Thus the capital requirement for such risks will be high.
It is also worthwhile mentioning high severity and low frequency risks - for example a rogue trader circumventing all controls and bringing the bank down, or a terrorist strike or natural disaster creating other losses - which will probably have zero expected losses and high unexpected losses, but only at very high levels of confidence. In other words, operational risk capital is unlikely to provide for such events, and these would lie in the part of the tail that is not covered by most levels of confidence when calculating operational risk capital.
Note that risk capital is required for only unexpected losses as expected losses are to be borne by P&L reserves. Therefore the operational risk capital requirements for a low severity high frequency risk is likely to be low when compared to other risks that are lower frequency but higher severity.
When compared to a low severity high frequency risk, the operational risk capital requirement for a medium severity medium frequency risk is likely to be:
(a) Zero
(b) Lower
(c) Higher
(d) Unaffected by differences in frequency or severity
The correct answer is choice ‘c’
High frequency and low severity risks, for example the risks of fraud losses for a credit card issuer, may have high expected losses, but low unexpected losses. In other words, we can generally expect these losses to stay within a small expected and known range. The capital requirement will be the worst case losses at a given confidence level less expected losses, and in such cases this can be expected to be low.
On the other hand, medium severity medium frequency risks, such as the risks of unexpected legal claims or ‘fat-finger’ trading errors, will have low expected losses but a high level of unexpected losses. Thus the capital requirement for such risks will be high.
It is also worthwhile mentioning high severity and low frequency risks - for example a rogue trader circumventing all controls and bringing the bank down, or a terrorist strike or natural disaster creating other losses - which will probably have zero expected losses and high unexpected losses, but only at very high levels of confidence. In other words, operational risk capital is unlikely to provide for such events, and these would lie in the part of the tail that is not covered by most levels of confidence when calculating operational risk capital.
Note that risk capital is required only for unexpected losses, as expected losses are to be borne by P&L reserves. Therefore the operational risk capital requirement for a medium severity medium frequency risk is likely to be higher than that for a low severity high frequency risk.
When compared to a medium severity medium frequency risk, the operational risk capital requirement for a high severity very low frequency risk is likely to be:
(a) Higher
(b) Lower
(c) Zero
(d) Unaffected by differences in frequency or severity
The correct answer is choice ‘c’
Medium severity medium frequency risks, such as the risks of unexpected legal claims or ‘fat-finger’ trading errors, will have low expected losses but a high level of unexpected losses. Thus the capital requirement for such risks will be high.
High severity and low frequency risks - for example a rogue trader circumventing all controls and bringing the bank down, or a terrorist strike or natural disaster creating other losses - will probably have zero expected losses and high unexpected losses, but only at very high levels of confidence. In other words, operational risk capital is unlikely to provide for such events, as these lie in the part of the tail that is not covered by most levels of confidence used when calculating operational risk capital. The capital requirement for a high severity very low frequency risk is therefore likely to be zero.
In respect of operational risk capital calculations, the Basel II accord recommends a confidence level and time horizon of:
(a) 99% confidence level over a 10 year time horizon
(b) 99.9% confidence level over a 1 year time horizon
(c) 99% confidence level over a 1 year time horizon
(d) 99.9% confidence level over a 10 day time horizon
The correct answer is choice ‘b’
Choice ‘b’ represents the Basel II requirement, all other choices are incorrect.
For a back office function processing 15,000 transactions a day with an error rate of 10 basis points, what is the annual expected loss frequency (assume 250 days in a year)?
(a) 375
(b) 0.06
(c) 37500
(d) 3750
The correct answer is choice ‘d’
An error rate of 10 basis points means the number of errors expected in a day will be 15 (recall that 100 basis points = 1%). Therefore the total number of errors expected in a year will be 15 x 250 = 3750. Choice ‘d’ is the correct answer.
Once the frequency and severity distributions for loss events have been determined, which of the following is an accurate description of the process to determine a full loss distribution for operational risk?
(a) A firm wide operational risk distribution is set to be equal to the product of the frequency and severity distributions
(b) A firm wide operational risk distribution is generated using Monte Carlo simulations
(c) The frequency distribution alone forms the basis for the loss distribution for operational risk
(d) A firm wide operational risk distribution is generated by adding together the frequency and severity distributions
The correct answer is choice ‘b’
Once the frequency distribution has been determined (for example, using the binomial, Poisson or negative binomial distributions) and the severity distribution has also been determined (for example, using the lognormal, gamma or other functions), the loss distribution can be produced by a Monte Carlo simulation using successive drawings from each of these two distributions. It is assumed that severity and frequency are independent of each other. The result is a distribution of losses for operational risk, from which the Op Risk VaR can be determined using the appropriate percentile.
The frequency distribution for operational risk loss events can be modeled by which of the following distributions: I. The binomial distribution II. The Poisson distribution III. The negative binomial distribution IV. The omega distribution (a) I, III and IV (b) I, II, III and IV (c) I, II and III (d) I and III
The correct answer is choice ‘c’
The binomial, Poisson and the negative binomial distributions can all be used to model the loss event frequency distribution. The omega distribution is not used for this purpose, therefore Choice ‘c’ is the correct answer.
Also note that the negative binomial distribution generally provides the best model fit because it has more parameters than the binomial or Poisson distributions. In practice, however, the Poisson distribution is most often used, for reasons of practicality and because the key model risk in such situations does not arise from the choice of an incorrect underlying distribution.
Which of the following statements is true:
I. Confidence levels for economic capital calculations are driven by desired credit ratings
II. Loss distributions for operational risk are affected more by the severity distribution than the frequency distribution
III. The Advanced Measurement Approach (AMA) referred to in the Basel II standard is a type of a Loss Distribution Approach (LDA)
IV. The loss distribution for operational risk under the LDA (Loss Distribution Approach) is estimated by separately estimating the frequency and severity distributions.
(a) I, II and IV
(b) I, III and IV
(c) I and II
(d) III and IV
The correct answer is choice ‘a’
Statement I is correct. Economic capital is the capital available to absorb unexpected losses, and credit ratings are also based upon a certain probability of default. Economic capital is often calculated at a level equal to the confidence required for the desired credit rating. For example, if the probability of default for a AA rating is 0.02%, then economic capital maintained at a 99.98% confidence level would allow for such a rating. Economic capital set at a 99.98% level can be thought of as the level of losses that would not be exceeded with a 99.98% probability.
Loss distributions are the product of the severity and frequency distributions, each of which are estimated separately. The total loss distribution is affected far more by the severity distribution than by the frequency distribution, therefore statement II is correct.
The Loss Distribution Approach (LDA) is one of the ways in which the requirements of the AMA can be satisfied, and not the other way round. Therefore statement III is incorrect.
Statement IV is correct, as the total loss distribution is estimated using separate estimates of the loss frequency and severity distributions.
The loss severity distribution for operational risk loss events is generally modeled by which of the following distributions:
I. The lognormal distribution
II. The gamma density function
III. Generalized hyperbolic distributions
IV. Lognormal mixtures
(a) I and III
(b) I, II and III
(c) II and III
(d) I, II, III and IV
The correct answer is choice ‘d’
All of the distributions referred to in the question can be used to model the loss severity distribution for op risk. Therefore Choice ‘d’ is the correct answer.
When modeling severity of operational risk losses using extreme value theory (EVT), practitioners often use which of the following distributions to model loss severity: I. The 'Peaks-over-threshold' (POT) model II. Generalized Pareto distributions III. Lognormal mixtures IV. Generalized hyperbolic distributions (a) I, II, III and IV (b) I, II and III (c) II and III (d) I and II
The correct answer is choice ‘d’
The peaks-over-threshold model is used when losses over a given threshold are recorded, as is often the case when using data based on external public sources where only large loss events tend to find a place. The generalized Pareto distribution is also used when attempting to model loss severity using EVT. Lognormal mixtures and generalized hyperbolic distributions are not used as extreme value distributions.
Which of the following is not a permitted approach under Basel II for calculating operational risk capital
(a) the internal measurement approach
(b) the standardized approach
(c) the basic indicator approach
(d) the advanced measurement approach
The correct answer is choice ‘a’
The Basel II framework allows the use of the basic indicator approach, the standardized approach and the advanced measurement approaches for operational risk. There is no approach called the ‘internal measurement approach’ permitted for operational risk. Choice ‘a’ is therefore the correct answer.
Under the basic indicator approach to determining operational risk capital, operational risk capital is equal to:
(a) 15% of the average gross income (considering only the positive years) of the past three years
(b) 25% of the average gross income (considering only the positive years) of the past three years
(c) 15% of the average net income (considering only the positive years) of the past three years
(d) 15% of the average gross income of the past five years
The correct answer is choice ‘a’
Choice ‘a’ is the correct answer. According to the Basel II document, banks using the Basic Indicator Approach must hold capital for operational risk equal to the average over the previous three years of a fixed percentage (denoted alpha, and currently 15%) of positive annual gross income. Figures for any year in which annual gross income is negative or zero should be excluded from both the numerator and denominator when calculating the average.
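A minimal sketch of this calculation with hypothetical gross income figures; note how the negative year is excluded from both the numerator and the denominator:

```python
ALPHA = 0.15  # fixed percentage set by Basel II

def bia_capital(gross_income_3yrs):
    """Basic indicator approach: alpha times the average of the positive
    annual gross income figures over the previous three years."""
    positive = [gi for gi in gross_income_3yrs if gi > 0]
    return ALPHA * sum(positive) / len(positive) if positive else 0.0

# Hypothetical figures: the negative year is dropped entirely
print(bia_capital([120.0, -30.0, 150.0]))  # 0.15 * (120 + 150) / 2 = 20.25
```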
Under the standardized approach to determining operational risk capital, operations risk capital is equal to:
(a) 15% of the average gross income (considering only the positive years) of the past three years
(b) a varying percentage, determined by the national regulator, of the gross revenue of each of the bank’s business lines
(c) a fixed percentage of the latest gross income of the bank
(d) a fixed percentage (different for each business line) of the gross income of the eight specified business lines, averaged over three years
The correct answer is choice ‘d’
Choice ‘d’ is the correct answer, as laid down in the Basel II document. The other choices are incorrect.
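A minimal sketch of the standardized approach with hypothetical gross income figures. The beta factors below are the fixed percentages Basel II assigns to the eight business lines; within a year negative charges offset positive ones, and a negative yearly total is floored at zero before the three-year average is taken:

```python
# Basel II beta factors per business line
BETAS = {
    "corporate finance": 0.18, "trading & sales": 0.18,
    "retail banking": 0.12, "commercial banking": 0.15,
    "payment & settlement": 0.18, "agency services": 0.15,
    "asset management": 0.12, "retail brokerage": 0.12,
}

def tsa_capital(gi_by_year):
    """gi_by_year: three dicts mapping business line -> gross income.
    Negative charges offset positive ones within a year; a negative
    yearly total is floored at zero before averaging over 3 years."""
    yearly = [max(sum(BETAS[line] * gi for line, gi in year.items()), 0.0)
              for year in gi_by_year]
    return sum(yearly) / 3

# Hypothetical figures for two business lines over three years
years = [{"retail banking": 100.0, "trading & sales": -40.0},
         {"retail banking": 110.0, "trading & sales": 30.0},
         {"retail banking": 120.0, "trading & sales": 50.0}]
print(tsa_capital(years))  # (4.8 + 18.6 + 23.4) / 3 = 15.6
```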
When building an operational loss distribution by combining a loss frequency distribution and a loss severity distribution, it is assumed that:
I. The severity of losses is conditional upon the number of loss events
II. The frequency of losses is independent from the severity of the losses
III. Both the frequency and severity of loss events are dependent upon the state of internal controls in the bank
(a) II and III
(b) I and II
(c) II
(d) I, II and III
The correct answer is choice ‘c’
When an operational loss frequency distribution (which, for example, may be based upon a Poisson distribution) is combined with a loss severity distribution (for example, based upon a lognormal distribution), it is assumed that the frequency of losses and the severity of the losses are completely independent and do not impact each other. Therefore statement II is correct, and the others are not valid assumptions underlying the operational loss distribution.
Which of the following is a cause of model risk in risk management?
(a) Misspecification of the model
(b) Programming errors
(c) Incorrect parameter estimation
(d) All of the above
The correct answer is choice ‘d’
Model risk is the risk that a model built for estimating a variable will produce erroneous estimates. Model risk is caused by a number of factors, including:
a) Misspecifying the model: For example, using a normal distribution when it is not justified.
b) Model misuse: For example, using a model built to estimate bond prices to estimate equity prices
c) Parameter estimation errors: In particular, parameters that are subjectively determined can be subject to significant parameter estimation errors
d) Programming errors: Errors in coding the model as part of computer implementation may not be detected by end users
e) Data errors: Errors in data used for building the model may also introduce model risk
For a bank using the advanced measurement approach to measuring operational risk, which of the following brings the greatest ‘model risk’ to its estimates:
(a) Choice of incorrect parameters for loss severity distributions
(b) Choice of an incorrect distribution for loss event frequencies
(c) Insufficient number of simulations when building the loss distribution
(d) Aggregation risk, from selecting an incorrect value of estimated correlations between different operational risk estimates
The correct answer is choice ‘d’
The greatest model risk when calculating operational risk capital comes from incorrect assumptions about correlations between different operational risks for which standalone risk calculations have been made. Generally, the correlation can be expected to be positive, and would therefore vary between 0 and 1. These two values determine the ‘bounds’ between which the total operational risk capital would lie, and these bounds are generally quite far apart. Therefore the total value of the operational risk capital is very sensitive to the value chosen for the correlation, and this is the source of the biggest model risk under the AMA.
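To see why the correlation assumption dominates, consider a simplified illustration; the normal-style aggregation formula below is used purely to show the bounds and is not the AMA method itself (standalone figures assumed):

```python
import numpy as np

standalone = np.array([100.0, 80.0, 60.0])  # standalone capital per UoM (assumed)

def aggregate(capitals, rho):
    """Total capital under a normal-style aggregation with a single
    pairwise correlation rho applied to all UoM pairs."""
    corr = np.full((len(capitals), len(capitals)), rho)
    np.fill_diagonal(corr, 1.0)
    return float(np.sqrt(capitals @ corr @ capitals))

print(aggregate(standalone, 1.0))  # upper bound: the simple sum, 240
print(aggregate(standalone, 0.0))  # lower bound: sqrt of sum of squares, ~141
```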
Which of the following should be included when calculating the Gross Income indicator used to calculate operational risk capital under the basic indicator and standardized approaches under Basel II?
(a) Net non-interest income
(b) Fees paid to outsourcing service providers
(c) Insurance income
(d) Operating expenses
The correct answer is choice ‘a’
Gross income is defined by Basel II (see para 650 of the Basel standard) as net interest income plus net non-interest income. It is intended that this measure should: (i) be gross of any provisions (e.g. for unpaid interest); (ii) be gross of operating expenses, including fees paid to outsourcing service providers; (iii) exclude realised profits/losses from the sale of securities in the banking book; and (iv) exclude extraordinary or irregular items as well as income derived from insurance.
What this means is that gross income is calculated without deducting any provisions or operating expenses from net interest plus non-interest income; and does not include any realised profits or losses from the sale of securities in the banking book, and also does not include any extraordinary or irregular item or insurance income.
Therefore operating expenses are not to be deducted for the purposes of calculating gross income, and neither are any provisions. Profits and losses from the sale of banking book securities are not considered part of gross income, nor is any income from insurance or extraordinary items.
Which of the following statements are true:
I. The three pillars under Basel II are market risk, credit risk and operational risk.
II. Basel II is an improvement over Basel I by increasing the risk sensitivity of the minimum capital requirements.
III. Basel II encourages disclosure of capital levels and risks
(a) I and II
(b) II and III
(c) III only
(d) I only
The correct answer is choice ‘b’
The three pillars under Basel II are minimum capital requirements, supervisory review process and market discipline. Therefore statement I is false. The other two statements are accurate. Therefore Choice ‘b’ is the correct answer.
The key difference between ‘top down models’ and ‘bottom up models’ for operational risk assessment is:
(a) Top down approaches to operational risk are based upon an analysis of key risk drivers, while bottom up approaches consider causality in risk scenarios.
(b) Bottom up approaches to operational risk are based upon an analysis of key risk drivers, while top down approaches consider causality in risk scenarios.
(c) Bottom up approaches to operational risk calculate the implied operational risk using available data such as income volatility, capital etc; while top down approaches use causal factors, risk drivers and other factors to get an aggregated estimate of risk.
(d) Top down approaches to operational risk calculate the implied operational risk using available data such as income volatility, capital etc; while bottom up approaches use causal factors, risk drivers and other factors to get an aggregated estimate of risk.
The correct answer is choice ‘d’
Top down approaches rely upon available data such as total capital, income volatility, peer group information etc and attempt to imply the capital attributable to operational risk. They do not consider firm specific scenarios or causal factors. Bottom up approaches on the other hand attempt to determine operational risk capital based upon an identification and quantification of firm specific risks. Bottom up approaches help determine a traditional loss distribution from which capital requirements can be determined at a given level of confidence.
Which of the following statements are true:
I. Top down approaches help focus management attention on the frequency and severity of loss events, while bottom up approaches do not.
II. Top down approaches rely upon high level data while bottom up approaches need firm specific risk data to estimate risk.
III. Scenario analysis can help capture both qualitative and quantitative dimensions of operational risk.
(a) III only
(b) II only
(c) II and III
(d) I only
The correct answer is choice ‘c’
Top down approaches do not consider event frequency and severity, on the other hand they focus on high level available data such as total capital, income volatility, peer group information on risk capital etc. Bottom up approaches focus on severity and frequency distributions for events. Statement I is therefore not correct.
Top down approaches do indeed rely upon high level aggregate data and tend to infer operational risk capital requirements from these. Bottom up approaches look at more detailed firm specific information. Statement II is correct.
Scenario analysis requires estimating losses from risk scenarios, and allows incorporating the judgment and views of managers in addition to any data that might be available from internal or external loss databases. Statement III is correct. Therefore Choice ‘c’ is the correct answer.
According to the implied capital model, operational risk capital is estimated as:
(a) Capital implied from known risk premiums and the firm’s earnings
(b) Operational risk capital held by similar firms, appropriately scaled
(c) Total capital based on the capital asset pricing model
(d) Total capital less market risk capital less credit risk capital
The correct answer is choice ‘d’
Operational risk capital estimated using the implied capital model is merely the capital that is not attributable to market or credit risk. Therefore Choice ‘d’ is the correct answer. All other responses are incorrect.
A bank expects the error rate in transaction data entry for a particular business process to be 0.005%. What is the range of expected errors in a day within +/- 2 standard deviations if there are 2,000,000 such transactions each day?
(a) 80 to 120 errors in a day
(b) 90 to 110 errors in a day
(c) 60 to 80 errors in a day
(d) 0 to 200 errors in a day
The correct answer is choice ‘a’
Error rates are generally modeled using the Poisson distribution. Recall that the Poisson distribution has only one parameter - λ - which is its mean and also its variance.
In the given case, the mean number of errors is 2,000,000 x 0.005% = 100. Since this is the variance as well, the standard deviation is √100 = 10. Therefore the range of outcomes within 2 standard deviations of the mean is 100 +/- (2*10) = 80 to 120 errors in a day.
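A quick check of this computation, with the exact Poisson quantiles shown alongside the two-standard-deviation approximation:

```python
from math import sqrt
from scipy.stats import poisson

lam = 2_000_000 * 0.00005  # mean daily errors: 0.005% of 2,000,000 = 100
sd = sqrt(lam)             # a Poisson's variance equals its mean

print(lam - 2 * sd, lam + 2 * sd)        # 80.0 120.0
print(poisson.ppf([0.025, 0.975], lam))  # exact 95% interval, roughly [80, 120]
```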
The generalized Pareto distribution, when used in the context of operational risk, is used to model:
(a) Expected losses
(b) Tail events
(c) Unexpected losses
(d) Average losses
The correct answer is choice ‘b’
Some risk experts have suggested the use of extreme value theory to model tail risk or extreme events for operational risk. The generalized Pareto model or the Peaks-over-Threshold (POT) model are often used to model extreme value distributions, and therefore Choice ‘b’ is the correct answer.
When modeling operational risk using separate distributions for loss frequency and loss severity, which of the following is true?
(a) Loss severity and loss frequency are considered independent
(b) Loss severity and loss frequency distributions are considered as a bivariate model with positive correlation
(c) Loss severity and loss frequency are modeled as conditional probabilities
(d) Loss severity and loss frequency are modeled using the same units of measurement
The correct answer is choice ‘a’
When an operational loss frequency distribution (which, for example, may be based upon a Poisson distribution) is combined with a loss severity distribution (for example, based upon a lognormal distribution), it is assumed that the frequency of losses and the severity of the losses are completely independent and do not impact each other. Therefore Choice ‘a’ is correct, and the others are not valid assumptions underlying operational loss modeling.
Once each of these distributions has been built, a random number is drawn from each to determine a loss scenario. The process is repeated many times as part of a Monte Carlo simulation to get the loss distribution.
Which of the following will be a loss not covered by operational risk as defined under Basel II?
(a) Systems failure
(b) Strategic planning
(c) Earthquakes
(d) Fat finger losses
The correct answer is choice ‘b’
Operational risk is defined as the risk of loss resulting from inadequate or failed internal processes, people and systems or from external events. This definition includes legal risk, but excludes strategic and reputational risk.
Therefore any losses from poor strategic planning will not be a part of operational risk. Choice ‘b’ is the correct answer.
Note that floods, earthquakes and the like are covered under the definition of operational risk as losses arising from loss or damage to physical assets from natural disaster or other events.
What does a middle office do for a trading desk?
(a) Transaction data entry
(b) Risk analysis
(c) Operations
(d) Reconciliations
The correct answer is choice ‘b’
The ‘middle office’ is a term used for the risk management function, therefore Choice ‘b’ is the correct answer. The other functions describe what the ‘back office’ does (IT, accounting). The ‘front office’ includes the traders.
The definition of operational risk per Basel II includes which of the following: I. Risk of loss resulting from inadequate or failed internal processes, people and systems or from external events II. Legal risk III. Strategic risk IV. Reputational risk (a) I and II (b) I, II, III and IV (c) II and III (d) I and III
The correct answer is choice ‘a’
Operational risk as defined in Basel II specifically excludes strategic and reputational risk. Therefore Choice ‘a’ is the correct answer.
Note that Basel II defines operational risk as follows:
Operational risk is defined as the risk of loss resulting from inadequate or failed internal processes, people and systems or from external events. This definition includes legal risk, but excludes strategic and reputational risk.
Which of the following is true in relation to the application of Extreme Value Theory when applied to operational risk measurement?
I. EVT focuses on extreme losses that are generally not covered by standard distribution assumptions
II. EVT considers the distribution of losses in the tails
III. The Peaks-over-thresholds (POT) and the generalized Pareto distributions are used to model extreme value distributions
IV. EVT is concerned with average losses beyond a given level of confidence
(a) I, II and IV
(b) II and III
(c) I, II and III
(d) I and IV
The correct answer is choice ‘c’
EVT, when used in the context of operational risk measurement, focuses on tail events and attempts to build a distribution of losses beyond what is covered by VaR. Statements I, II and III are correct. Statement IV describes conditional VaR (CVaR) and not EVT.
According to Basel II’s definition of operational loss event types, losses due to acts by third parties intended to defraud, misappropriate property or circumvent the law are classified as:
(a) Internal fraud
(b) External fraud
(c) Execution delivery and system failure
(d) Third party fraud
The correct answer is choice ‘b’
Choice ‘b’ is the correct answer. Refer to the detailed loss event type classification under Basel II (see Annex 9 of the accord). You should know the exact names of all loss event types, and examples of each.
Loss from a lawsuit from an employee due to physical harm caused while at work is categorized per Basel II as:
(a) Damage to physical assets
(b) Unsafe working environment
(c) Execution delivery and process management
(d) Employment practices and workplace safety
The correct answer is choice ‘d’
Choice ‘d’ is the correct answer. Refer to the detailed loss event type classification under Basel II (see Annex 9 of the accord). You should know the exact names of all loss event types, and examples of each.
An error by a third party service provider results in a loss to a client that the bank has to make up. Such a loss would be categorized per Basel II operational risk categories as:
(a) Business disruption and process failure
(b) Outsourcing loss
(c) Abnormal loss
(d) Execution delivery and process management
The correct answer is choice ‘d’
Choice ‘d’ is the correct answer. Refer to the detailed loss event type classification under Basel II (see Annex 9 of the accord). You should know the exact names of all loss event types, and examples of each.
Which of the following event types is hacking damage classified under Basel II operational risk classifications?
(a) Information security
(b) Technology risk
(c) Damage to physical assets
(d) External fraud
The correct answer is choice ‘d’
Choice ‘d’ is the correct answer. All other answers are incorrect.
Refer to the detailed loss event type classification under Basel II (see Annex 9 of the accord).
What percentage of average annual gross income is to be held as capital for operational risk under the basic indicator approach specified under Basel II?
(a) 12%
(b) 8%
(c) 12.5%
(d) 15%
The correct answer is choice ‘d’
Banks using the basic indicator approach must hold 15% of the average annual gross income for the past three years, excluding any year that had a negative gross income. Therefore Choice ‘d’ is the correct answer.
Under the standardized approach to calculating operational risk capital, how many business lines are a bank’s activities divided into per Basel II?
(a) 8
(b) 15
(c) 7
(d) 12
The correct answer is choice ‘a’
In the Standardized Approach, banks’ activities are divided into eight business lines: corporate finance, trading & sales, retail banking, commercial banking, payment & settlement, agency services, asset management, and retail brokerage. Therefore Choice ‘a’ is the correct answer.
Under the standardized approach to calculating operational risk capital under Basel II, negative regulatory capital charges for any of the business units:
(a) Should be included after ignoring the negative sign
(b) Should be excluded from capital calculations
(c) Should be offset against positive capital charges from other business units
(d) Should be ignored completely
The correct answer is choice ‘c’
According to Basel II, in any given year, negative capital charges (resulting from negative gross income) in any business line may offset positive capital charges in other business lines without limit. Therefore Choice ‘c’ is the correct answer.
Which loss event type is the loss of personally identifiable client information classified as under the Basel II framework?
(a) Clients, products and business practices
(b) Information security
(c) External fraud
(d) Technology risk
The correct answer is choice ‘a’
Which loss event type is the failure to timely deliver collateral classified as under the Basel II framework?
(a) Clients, products and business practices
(b) External fraud
(c) Execution, Delivery & Process Management
(d) Information security
The correct answer is choice ‘c’
Which of the following is not a risk faced by a bank from holding a portfolio of residential mortgages?
(a) The risk that mortgage interest rates will rise in the future
(b) The risk that CDS spreads on the bank’s debt will rise making funding more expensive
(c) The risk that the homeowners will not be able to pay their mortgages when they are due
(d) The risk that the homeowners will pay their mortgages off before they are due
The correct answer is choice ‘b’
Choice ‘b’ represents a risk that does not arise from its holdings of mortgages. Therefore Choice ‘b’ is the correct answer.
All the other risks identified are correct - the bank faces interest rate, default and prepayment risks on its mortgages.
A bank prices retail credit loans based on median default rates. Over the long run, it can expect:
(a) Correct pricing of risk in the retail credit portfolio
(b) Underestimation and therefore underpricing of risk in its retail portfolio
(c) Overestimation of risk and overpricing, leading to loss of market share
(d) A reduction in the rate of defaults
The correct answer is choice ‘b’
The key to pricing loans is to make sure that the prices cover expected losses. The correct measure of expected losses is the mean, and not the median. To the extent the median is different from the mean, the loans would be over or underpriced.
The loss curve for credit defaults is a distribution skewed to the right, so its mode is less than its median, which in turn is less than its mean. Since the median is less than the mean, the bank is pricing in lower losses than the mean, which means that over the long run it is underestimating risk and underpricing its loans. Therefore Choice ‘b’ is the correct answer.
If on the other hand for some reason the bank were overpricing risk, its loans would be more expensive than its competitors’ and it would lose market share. In this case, however, this does not apply. Loan pricing decisions are driven by the rate of defaults, and not the other way round, therefore any pricing decisions will not reduce the rate of default.
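A quick numerical illustration with a right-skewed distribution (a lognormal with assumed parameters): the median sits well below the mean, so pricing off the median understates expected losses:

```python
import numpy as np

rng = np.random.default_rng(1)
losses = rng.lognormal(mean=0.0, sigma=1.0, size=1_000_000)  # right-skewed

print("median:", np.median(losses))  # ~1.00 (= exp(0))
print("mean:  ", losses.mean())      # ~1.65 (= exp(0.5)), well above the median
```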
Loss provisioning is intended to cover:
(a) Expected losses
(b) Losses in excess of unexpected losses
(c) Unexpected losses
(d) Both expected and unexpected losses
The correct answer is choice ‘a’
Loss provisioning is intended to cover expected losses. Economic capital is expected to cover unexpected losses. No capital or provisions are set aside for losses in excess of unexpected losses, which will ultimately be borne by equity.
What would be the consequences of a model of economic risk capital calculation that weighs all loans equally regardless of the credit rating of the counterparty?
I. Create an incentive to lend to the riskiest borrowers
II. Create an incentive to lend to the safest borrowers
III. Overstate economic capital requirements
IV. Understate economic capital requirements
(a) I only
(b) III only
(c) I and IV
(d) II and III
The correct answer is choice ‘c’
If capital calculations are done in a standard way regardless of risk (as reflected by credit ratings), then it creates a perverse incentive for the lenders’ employees to lend to the riskiest borrowers that offer the highest expected returns as there is no incentive to ‘save’ on economic capital requirements that are equal for both safe and unsafe borrowers. Therefore statement I is correct.
Given that the portfolio of such an institution is likely to then comprise poor quality borrowers, and economic capital would be based upon ‘average’ expected ratings, it is likely to carry lower economic capital given its exposures. Therefore any such economic risk capital model is likely to understate economic capital requirements. Therefore statement IV is correct.
Which of the following best describes Altman’s Z-score
(a) A standardized z based upon the normal distribution
(b) A regression of probability of survival against a given set of factors
(c) A calculation of default probabilities
(d) A numerical computation based upon accounting ratios
The correct answer is choice ‘d’
Altman’s Z-score does not consider which of the following ratios:
(a) Sales to total assets
(b) Net income to total assets
(c) Working capital to total assets
(d) Market capitalization to debt
The correct answer is choice ‘b’
A computation of Altman’s Z-score considers the following ratios:
- Working capital to total assets
- Retained earnings to total assets
- EBIT to total assets
- Market cap to debt
- Sales to total assets
It does not consider Net Income to total assets, therefore Choice ‘b’ is the correct answer. This makes sense as net income is after interest and taxes, both of which are not relevant for considering the cash flows for debt servicing.
The Altman credit risk score considers:
(a) A historical database of the firms that have defaulted
(b) A combination of accounting measures and market values
(c) A historical database of the firms that have survived
(d) A quadratic approximation of the credit risk based on underlying risk factors
The correct answer is choice ‘b’
A computation of Altman’s Z-score considers the following ratios:
- Working capital to total assets
- Retained earnings to total assets
- EBIT to total assets
- Market cap to debt
- Sales to total assets
Nearly all the numbers above are accounting measures derived straight from the balance sheet or the income statement. Market capitalization is a market driven number. Therefore Choice ‘b’ is the correct answer as the Altman credit risk score considers both accounting and market based measures.
Altman’s score is computationally straightforward and intuitively easy to understand. It was introduced in the late sixties and has proven quite accurate in predicting corporate bankruptcies, which is why it continues to be used extensively.
Which of the following losses can be attributed to credit risk:
I. Losses in a bond’s value from a credit downgrade
II. Losses in a bond’s value from an increase in bond yields
III. Losses arising from a bond issuer’s default
IV. Losses from an increase in corporate bond spreads
(a) II and IV
(b) I, III and IV
(c) I and II
(d) I and III
The correct answer is choice ‘d’
Losses due to credit risk include the loss of value from credit migration and default events (which can be considered a migration to the ‘default’ category). Therefore Choice ‘d’ is the correct answer. Changes in spreads or interest rates are examples of market risk events.
A bank holds a portfolio of corporate bonds. Corporate bond spreads widen, resulting in a loss of value for the portfolio. This loss arises due to:
(a) Credit risk
(b) Market risk
(c) Liquidity risk
(d) Counterparty risk
The correct answer is choice ‘b’
The difference between the yields on corporate bonds and the risk free rate is called the corporate bond spread. Widening of the spread means that corporate bonds yield more, their yield curve shifts upwards, and bond prices fall. A loss from a general widening of spreads is a consequence of holding these interest rate instruments, and is a part of market risk. If the reduction in the value of the portfolio were caused by a change in the credit rating of the bonds held, it would have been a loss arising from credit risk. Counterparty risk and liquidity risk are not relevant for this question. Therefore Choice ‘b’ is the correct answer.
Under the credit migration approach to assessing portfolio credit risk, which of the following are needed to generate a distribution of future portfolio values?
(a) The forward yield curve
(b) A rating migration matrix
(c) A specified risk horizon
(d) All of the above
The correct answer is choice ‘d’
The credit migration approach to assessing portfolio credit risk involves obtaining a distribution of future portfolio values from the ratings migration matrix. First, the frequencies in the matrix are used as probabilities, and expected future values of the securities belonging to each rating category are calculated. These are then discounted to the present using the discount rate appropriate to the ‘future’ rating category. This gives us a forward distribution of the value of each security in the portfolio. These are then combined using the default correlations between the issuers. The default correlation between the issuers is often proxied using asset returns, and recognizing that default occurs when asset values fall below a certain threshold. A distribution for the future value of the portfolio is generated using simulation, and from this distribution the Credit VaR can be calculated.
Thus, we need the migration matrix, the risk horizon from which the present values need to be calculated, and the forward yield curve or the discount curve for each rating category for the risk horizon. Thus, Choice ‘d’ is the correct answer.
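A minimal sketch of this simulation, assuming a toy three-state rating system, an illustrative migration matrix and assumed horizon values per rating; for brevity, the default correlations between issuers are ignored here:

```python
import numpy as np

rng = np.random.default_rng(3)

ratings = ["A", "B", "D"]                      # toy rating states, D = default
migration = np.array([[0.90, 0.08, 0.02],      # illustrative one-year
                      [0.10, 0.80, 0.10],      # migration probabilities
                      [0.00, 0.00, 1.00]])     # default is absorbing
horizon_value = np.array([102.0, 97.0, 40.0])  # assumed bond value per state

def simulate_portfolio(current_ratings, n_sims=100_000):
    """Distribution of future portfolio values from rating migrations,
    ignoring dependence between issuers for simplicity."""
    totals = np.zeros(n_sims)
    for r in current_ratings:
        future = rng.choice(3, size=n_sims, p=migration[ratings.index(r)])
        totals += horizon_value[future]
    return totals

values = simulate_portfolio(["A", "A", "B"])
print("99% credit VaR:", np.mean(values) - np.percentile(values, 1))
```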
If F be the face value of a firm’s debt, V the value of its assets and E the market value of equity, then according to the option pricing approach a default on debt occurs when:
(a) F < V
(b) V < E
(c) F > V
(d) F - E < V
The correct answer is choice ‘c’
According to the option pricing approach developed by Merton, the shareholders of a firm have a put on the assets of the firm where the strike price is equal to the face value of the firm’s debt. This is just a more complicated way of saying that the debt holders are entitled to all the assets of the firm if these assets are insufficient to pay off the debts, and because of limited liability of the shareholders of a corporation this part payment will fully extinguish the debt.
A firm will default on its debt if the value of the assets falls below the face value of the debt.
Which of the following credit risk models relies upon the analysis of credit rating migrations to assess credit risk?
(a) KMV’s EDF based approach
(b) The CreditMetrics approach
(c) The actuarial approach
(d) The contingent claims approach
The correct answer is choice ‘b’
The following is a brief description of the major approaches available to model credit risk, and the analysis that underlies them:
- CreditMetrics: based on the credit migration framework. Considers the probability of migration to other credit ratings and the impact of such migrations on portfolio value.
- CreditPortfolio View: similar to CreditMetrics, but adds the impact of the business cycle to the evaluation.
- The contingent claims approach: uses option theory by considering a debt as a put option on the assets of the firm.
- KMV’s EDF (expected default frequency) based approach: relies on EDFs and distance to default as a measure of credit risk.
- CreditRisk+: Also called the ‘actuarial approach’, considers default as a binary event that either happens or does not happen. This approach does not consider the loss of value from deterioration in credit quality (unless the deterioration implies default).
Which of the following credit risk models includes a consideration of macro economic variables such as unemployment, balance of payments etc to assess credit risk?
(a) The actuarial approach
(b) The CreditMetrics approach
(c) KMV’s EDF based approach
(d) CreditPortfolio View
The correct answer is choice ‘d’
The following is a brief description of the major approaches available to model credit risk, and the analysis that underlies them:
- CreditMetrics: based on the credit migration framework. Considers the probability of migration to other credit ratings and the impact of such migrations on portfolio value.
- CreditPortfolio View: similar to CreditMetrics, but adds the impact of the business cycle to the evaluation.
- The contingent claims approach: uses option theory by considering a debt as a put option on the assets of the firm.
- KMV’s EDF (expected default frequency) based approach: relies on EDFs and distance to default as a measure of credit risk.
- CreditRisk+: Also called the ‘actuarial approach’, considers default as a binary event that either happens or does not happen. This approach does not consider the loss of value from deterioration in credit quality (unless the deterioration implies default).
Which of the following credit risk models considers debt as including a put option on the firm’s assets to assess credit risk?
(a) CreditPortfolio View
(b) The actuarial approach
(c) The CreditMetrics approach
(d) The contingent claims approach
The correct answer is choice ‘d’
The following is a brief description of the major approaches available to model credit risk, and the analysis that underlies them:
- CreditMetrics: based on the credit migration framework. Considers the probability of migration to other credit ratings and the impact of such migrations on portfolio value.
- CreditPortfolio View: similar to CreditMetrics, but adds the impact of the business cycle to the evaluation.
- The contingent claims approach: uses option theory by considering a debt as a put option on the assets of the firm.
- KMV’s EDF (expected default frequency) based approach: relies on EDFs and distance to default as a measure of credit risk.
- CreditRisk+: Also called the ‘actuarial approach’, considers default as a binary event that either happens or does not happen. This approach does not consider the loss of value from deterioration in credit quality (unless the deterioration implies default).
Which of the following credit risk models focuses on default alone and ignores credit migration when assessing credit risk?
(a) CreditPortfolio View
(b) The actuarial approach
(c) The contingent claims approach
(d) The CreditMetrics approach
The correct answer is choice ‘b’
Which of the following does not affect the credit risk facing a lender institution?
(a) The state of the economy
(b) The degree of geographical or sectoral concentration in the loan book
(c) The applicability or otherwise of mark to market accounting to the institution
(d) Credit ratings of individual borrowers
The correct answer is choice ‘c’
The state of the economy, credit quality of individual borrowers and concentration risk are all factors that affect the credit risk facing a lender. Mark to market accounting does not change the credit risk, or the underlying economic reality facing the institution.
What is the risk horizon period used for credit risk as generally used for economic capital calculations and as required by regulation?
(a) 10 years
(b) 1 year
(c) 10 days
(d) 1-day
The correct answer is choice ‘b’
The credit risk horizon for credit VaR, both for economic capital and for regulatory purposes, is generally one year.
Conditional default probabilities modeled under CreditPortfolio view use a:
(a) Probit function
(b) Altman’s z-score
(c) Logit function
(d) Power function
The correct answer is choice ‘c’
Conditional default probabilities are modeled as a logit function under CreditPortfolio View. That ensures the resulting probabilities are ‘well behaved’, ie take a value between 0 and 1. The probability may be expressed as PD = 1/(1 + exp(I)), where I is a country specific index taking various macro economic factors into account.
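A one-line sketch of that logit mapping (the index value and its sign convention are assumed for illustration):

```python
from math import exp

def conditional_pd(index):
    """Logit transform: maps any real-valued macro index into (0, 1)."""
    return 1.0 / (1.0 + exp(index))

print(conditional_pd(2.0))   # higher index (assumed: strong economy) -> low PD
print(conditional_pd(-2.0))  # lower index (assumed: weak economy) -> high PD
```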
Under the CreditPortfolio View model of credit risk, the conditional probability of default will be:
(a) higher than the unconditional probability of default in an economic expansion
(b) lower than the unconditional probability of default in an economic expansion
(c) the same as the unconditional probability of default in an economic expansion
(d) lower than the unconditional probability of default in an economic contraction
The correct answer is choice ‘b’
When the economy is expanding, firms are less likely to default. Therefore the conditional probability of default, given an economic expansion, is likely to be lower than the unconditional probability of default. Therefore Choice ‘b’ is the correct answer and the other statements are incorrect.
Under the CreditPortfolio View approach to credit risk modeling, which of the following best describes the conditional transition matrix:
(a) The conditional transition matrix is the unconditional transition matrix adjusted for probabilities of defaults
(b) The conditional transition matrix is the transition matrix adjusted for the distribution of the firms’ asset returns
(c) The conditional transition matrix is the unconditional transition matrix adjusted for the state of the economy and other macro economic factors being modeled
(d) The conditional transition matrix is the transition matrix adjusted for the risk horizon being different from that of the transition matrix
The correct answer is choice ‘c’
Under the CreditPortfolio View approach, the credit rating transition matrix is adjusted for the state of the economy in a way as to increase the probability of defaults when the economy is not doing well, and vice versa. Therefore Choice ‘c’ is the correct answer. The other choices represent nonsensical options.
The principle underlying the contingent claims approach to measuring credit risk equates the cost of eliminating credit risk for a firm to be equal to:
(a) the cost of a call on the firm’s assets with a strike equal to the value of the debt
(b) the market valuation of the firm’s equity less the value of its liabilities
(c) the value of a put on the firm’s assets with a strike equal to the value of the debt
(d) the probability of the firm’s assets falling below the critical value for default
The correct answer is choice ‘c’
Under the contingent claims approach, a firm will default on its debt when the value of its assets fall to less than the face value of the debt. Debt holders can protect themselves against such an event by buying a put on the assets of the firm, where the strike price is equal to the value of the debt. In other words, Risky Debt + Put on the firm’s assets = Risk free debt. This is because if the value of the assets is greater than the value of the debt, they will be paid in full. If the value of the assets is lower than the value of the debt, they will exercise the put and be paid in full.
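The identity can be checked state by state; a short sketch with an assumed face value shows that the debt payoff plus the put payoff always equals the face value at maturity:

```python
F = 100.0  # face value of the debt (assumed)

for V in (150.0, 100.0, 60.0):    # asset value scenarios at maturity
    debt_payoff = min(V, F)       # debt holders receive assets up to F
    put_payoff = max(F - V, 0.0)  # put on the assets struck at F
    assert debt_payoff + put_payoff == F
    print(V, debt_payoff, put_payoff)
```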
Under the contingent claims approach to measuring credit risk, which of the following factors does NOT affect credit risk:
(a) Volatility of the firm’s asset values
(b) Leverage in the capital structure
(c) Maturity of the debt
(d) Cash flows of the firm
The correct answer is choice ‘d’
Under the contingent claims approach, credit risk is modeled as the value of a put option on the value of the firm’s assets with a strike equal to the face value of the debt and maturity equal to the maturity of the obligation. The cost of credit risk is determined by the leverage ratio, the volatility of the firm’s assets and the maturity of the debt. Cash flows are not a part of the equation. Therefore Choice ‘d’ is the correct answer.
Under the contingent claims approach to credit risk, risk increases when: I. Volatility of the firm's assets increases II. Risk free rate increases III. Maturity of the debt increases (a) I and II (b) I and III (c) II and III (d) I, II and III
The correct answer is choice ‘b’
Under the contingent claims approach, credit risk is evaluated as the value of the put on the firm’s assets with a strike price equal to the face value of the debt and maturity equal to the maturity of the obligation. The Black Scholes model can then be used to value the put, and therefore an increase in volatility or in the time to expiry (ie maturity) will increase the value of the put, ie increase the credit risk. An increase in the risk free rate will actually reduce the value of the put. Therefore statements I and III are correct and Choice ‘b’ is the correct answer.
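A minimal sketch of this logic, using the standard Black-Scholes put formula with illustrative (assumed) inputs:

    import math
    from statistics import NormalDist

    def merton_put(V, D, r, sigma, T):
        # Value of a put on firm assets V with strike D (face value of debt):
        # the cost of eliminating credit risk under the contingent claims approach
        d1 = (math.log(V / D) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
        d2 = d1 - sigma * math.sqrt(T)
        N = NormalDist().cdf
        return D * math.exp(-r * T) * N(-d2) - V * N(-d1)

    base = merton_put(V=100, D=80, r=0.05, sigma=0.2, T=1)
    print(merton_put(100, 80, 0.05, 0.4, 1) > base)   # True: higher volatility raises credit risk
    print(merton_put(100, 80, 0.05, 0.2, 2) > base)   # True: longer maturity raises credit risk
    print(merton_put(100, 80, 0.10, 0.2, 1) < base)   # True: higher risk free rate lowers it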
Under the KMV Moody’s approach to calculating expected default frequencies (EDF), a firm’s default on its obligations is likely when:
(a) asset values reach a level below total liabilities
(b) asset values reach a level between short term debt and total liabilities
(c) asset values reach a level below short term debt
(d) expected asset values one year hence are below total liabilities
The correct answer is choice ‘b’
An observed fact that the KMV approach relies upon is that firms do not generally default at the point where liabilities exceed assets, but when asset values fall somewhere between short term liabilities and total liabilities. In fact, the ‘default point’ in the KMV methodology is defined as the short term debt plus half of the long term debt. The difference between the expected value of the assets in one year and this ‘default point’, when expressed in terms of the standard deviation of asset values, is called the ‘distance-to-default’.
If EV is the expected value of a firm’s assets in a year, DP the ‘default point’ per the KMV approach to credit risk, and σ the standard deviation of future asset returns, then the distance-to-default is given by?
The correct answer is choice ‘b’
The distance-to-default is the number of standard deviations by which expected asset values exceed the default point, ie (EV - DP)/σ.
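A minimal sketch of the calculation, with hypothetical numbers:

    def distance_to_default(ev, short_term_debt, long_term_debt, sigma):
        # Default point per KMV: short term debt plus half of long term debt
        dp = short_term_debt + 0.5 * long_term_debt
        return (ev - dp) / sigma

    # eg expected assets of 120, STD of 60, LTD of 40, asset value sigma of 15
    print(distance_to_default(120, 60, 40, 15))   # (120 - 80) / 15, about 2.67 standard deviations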
Under the KMV Moody’s approach to credit risk measurement, which of the following expressions describes the expected ‘default point’ value of assets at which the firm may be expected to default?
(a) Short term debt + 0.5* Long term debt
(b) Short term debt + Long term debt
(c) 2* Short term debt + Long term debt
(d) Long term debt + 0.5* Short term debt
The correct answer is choice ‘a’
A situation where a firm has more liabilities than assets does not necessarily imply default, so long as the firm is able to pay its obligations when they come due. Therefore, short term debts have a greater bearing on a firm’s default than longer term debt. However, this is not to say that merely having enough to pay off the short term debts (ie debts due within one year) is enough to avoid default. Over time, long term debt also turns into short term debt, and lenders deciding whether to roll over the firm’s liabilities will take the long term debt into account. The KMV approach therefore considers the entire short term debt plus half of the long term debt as the critical value of assets below which default will be triggered.
Under the KMV Moody’s approach to credit risk measurement, how is the distance to default converted to expected default frequencies?
(a) Using a normal distribution
(b) Using a proprietary database based on historical information
(c) Using migration matrices
(d) Using Monte Carlo simulations
The correct answer is choice ‘b’
KMV Moody’s uses a proprietary database to convert the distance to default to expected default frequencies.
Changes in which of the following do not affect the expected default frequencies (EDF) under the KMV Moody’s approach to credit risk?
(a) Changes in the debt level
(b) Changes in asset volatility
(c) Changes in the risk free rate
(d) Changes in the firm’s market capitalization
The correct answer is choice ‘c’
EDFs are derived from the distance to default. The distance to default is the number of standard deviations that expected asset values are away from the default point, which itself is defined as short term debt plus half of the long term debt. Therefore debt levels affect the EDF. Similarly, asset values are estimated using equity prices, so market capitalization affects EDF calculations. Asset volatility is the standard deviation that appears in the denominator of the distance to default calculation, so it affects the EDF too. The risk free rate is not directly factored into any of these calculations (except of course, one could argue that the level of interest rates may impact equity values or the discounted values of future cash flows, but that is a second order effect). Therefore Choice ‘c’ is the correct answer.
Which of the following is true for the actuarial approach to credit risk modeling (CreditRisk+):
(a) The number of defaults is modeled using a binomial distribution where the number of defaults are considered discrete events
(b) The approach considers only default risk, and ignores the risk to portfolio value from credit downgrades
(c) Default correlations between obligors are accounted for using a multivariate normal model
(d) The approach is based upon historical rating transition matrices
The correct answer is choice ‘b’
The actuarial model considers defaults to follow a Poisson distribution with a given mean per period, and these are binary in nature, ie a default happens or it does not happen. The model does not consider the loss of value from credit downgrades, and focuses only on defaults. The model also does not consider default correlations between obligors. Therefore Choice ‘b’ is the correct answer.
CreditRisk+, the actuarial model for calculating portfolio credit risk, is based upon:
(a) the normal distribution
(b) the exponential distribution
(c) the Poisson distribution
(d) the log-normal distribution
The correct answer is choice ‘c’
CreditRisk+ treats default as a binary event, ignoring downgrade risk, capital structures of individual firms in the portfolio or the causes of default. It uses a single parameter, λ or the mean default rate, and derives credit risk based upon the Poisson distribution. Therefore Choice ‘c’ is the correct answer.
Under the actuarial (or CreditRisk+) based modeling of defaults, what is the probability of 4 defaults in a retail portfolio where the number of expected defaults is 2?
(a) 9%
(b) 18%
(c) 4%
(d) 2%
The correct answer is choice ‘a’
The actuarial or CreditRisk+ model considers default as an ‘end of game’ event modeled by a Poisson distribution. The annual number of defaults is a stochastic variable with a mean of μ and standard deviation equal to √μ.
The probability of n defaults is given by P(n) = (μ^n e^(-μ))/n!, and therefore in this case is equal to (2^4 x e^(-2))/4! = 0.0902.
Note that CreditRisk+ is the same methodology as the actuarial approach, and requires using the Poisson distribution.
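The same arithmetic as a short sketch:

    from math import exp, factorial

    def poisson_default_prob(n, mu):
        # Probability of exactly n defaults when the expected number of defaults is mu
        return mu ** n * exp(-mu) / factorial(n)

    print(round(poisson_default_prob(4, 2), 4))   # 0.0902, ie roughly 9%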
The probability of default of a security over a 1 year period is 3%. What is the probability that it would not have defaulted at the end of four years from now?
(a) 11.47%
(b) 12.00%
(c) 88.00%
(d) 88.53%
The correct answer is choice ‘d’
The probability that the security would not default in the next 4 years is equal to the probability of survival raised to the power four. In other words, =(1 - 3%)^4 = 88.53%.
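As a quick check (assuming a constant annual default probability and independence across years):

    p_default_1y = 0.03
    survival_4y = (1 - p_default_1y) ** 4
    print(round(survival_4y, 4))   # 0.8853, ie 88.53%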
If the default hazard rate for a company is 10%, and the spread on its bonds over the risk free rate is 800 bps, what is the expected recovery rate?
(a) 0.00%
(b) 8.00%
(c) 20.00%
(d) 40.00%
The correct answer is choice ‘c’
The recovery rate, the default hazard rate (also called the average default intensity) and the spread on debt are linked by the equation Hazard Rate = Spread/(1 - Recovery Rate). Therefore, the recovery rate implicit in the given data is = 1 - 8%/10% = 20%.
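The same identity can be rearranged either way, as this short sketch shows (function names are illustrative; the second call uses the numbers from the CDS quote question that appears later in this section):

    def implied_recovery(hazard_rate, spread):
        # From hazard = spread / (1 - recovery): recovery = 1 - spread / hazard
        return 1 - spread / hazard_rate

    def implied_hazard(spread, recovery):
        # The same identity solved for the hazard rate instead
        return spread / (1 - recovery)

    print(round(implied_recovery(0.10, 0.08), 4))   # 0.2, ie 20%
    print(round(implied_hazard(0.02, 0.40), 4))     # 0.0333, ie 3.33%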
Which of the following cannot be used as an internal credit rating model to assess an individual borrower:
(a) Altman’s Z-score
(b) Distance to default model
(c) Probit model
(d) Logit model
The correct answer is choice ‘b’
Altman’s Z-score, the Probit and the Logit models can all be used to assess the credit rating of an individual borrower. There is no such model as the ‘distance to default model’, and therefore Choice ‘b’ is the correct answer.
Which of the following statements are true:
I. A high score according to Altman’s Z-Score methodology indicates a lower default risk
II. A high score according to the Probit or Logit models indicates a higher default risk
III. A high score according to Altman’s Z-Score methodology indicates a higher default risk
IV. A high score according to the Probit or Logit models indicates a lower default risk
(a) I and II
(b) III and IV
(c) II and III
(d) I and IV
The correct answer is choice ‘a’
A high score under the probit and logit models indicates a higher default risk, while under Altman’s methodology it indicates a lower default risk. Therefore Choice ‘a’ is the correct answer.
For a corporate issuer, which of the following can be used to calculate market implied default probabilities? I. CDS spreads II. Bond prices III. Credit rating issued by S&P IV. Altman's scoring model (a) I, II and III (b) II and III (c) III and IV (d) I and II
The correct answer is choice ‘d’
Generally, the probability of default is an input into determining the price of a security. However, if we know the market price of a security, we can back out the probability of default that the market is factoring into pricing that security. Market implied default probabilities are the probabilities of default priced into security prices, and can be determined from both bond prices and CDS spreads. Credit ratings issued by a credit agency do not give us ‘market implied default probabilities’, and neither does an internal scoring model like Altman’s as these do not consider actual market prices in any way.
A long position in a credit sensitive bond can be synthetically replicated using:
(a) a long position in a treasury bond and a short position in a CDS
(b) a long position in a treasury bond and a long position in a CDS
(c) a short position in a treasury bond and a short position in a CDS
(d) a short position in a treasury bond and a long position in a CDS
The correct answer is choice ‘a’
A long position in a credit sensitive bond is equivalent to earning the risk free rate and the spread on the bond. The risk free rate can be earned through a long position in a treasury bond, and the spread can be earned in the form of premiums on a CDS, which are received by the protection seller, ie the party short a CDS contract. Therefore we can get the same results as a long bond position using a combination of a long treasury bond and a short position in a CDS. Choice ‘a’ is the correct answer.
The CDS rate on a defaultable bond is approximated by which of the following expressions:
(a) Loss given default x Default hazard rate
(b) Credit spread x Loss given default
(c) Hazard rate x Recovery rate
(d) Hazard rate / (1 - Recovery rate)
The correct answer is choice ‘a’
The CDS rate is approximated by Loss given default x Default hazard rate. Thus Choice ‘a’ is the correct answer.
For a corporate bond, which of the following statements is true:
I. The credit spread is equal to the default rate times the recovery rate
II. The spread widens when the ratings of the corporate experience an upgrade
III. Both recovery rates and probabilities of default are related to the business cycle and move in opposite directions to each other
IV. Corporate bond spreads are affected by both the risk of default and the liquidity of the particular issue
(a) IV only
(b) III and IV
(c) I, II and IV
(d) III only
The correct answer is choice ‘b’
The credit spread is equal to the default rate times the loss given default, or stated another way, default rate times (1 - recovery rate). It is not equal to the default rate times the recovery rate. Therefore statement I is not correct.
When ratings are upgraded by the rating agencies, the spread contracts rather than widens. Therefore statement II is not correct.
Both recovery rates and probabilities of default are related to the business cycle, and they move in opposite directions. Economic recessions witness an increase in the default rate and a decrease in the recovery rate, and economic expansions result in a decrease in the default rate and an increase in the recovery rates when default does happen. Therefore statement III is correct.
Corporate bond spreads incorporate not only the risk of default but also the liquidity of the particular issue. Hence statement IV is correct.
The CDS quote for the bonds of Bank X is 200 bps. Assuming a recovery rate of 40%, calculate the default hazard rate priced in the CDS quote.
(a) 3.33%
(b) 0.80%
(c) 2.00%
(d) 5.00%
The correct answer is choice ‘a’
Hazard rate x Loss given default = CDS quote. In other words, Hazard rate x (1 - recovery rate) = CDS quote. We can therefore calculate the hazard rate for this problem as 200 bps/(1 - 40%) = 3.33%.
Which of the following is the best description of the spread premium puzzle:
(a) The spread premium puzzle refers to observed default rates being much less than implied default rates, leading to lower credit bonds being relatively cheap when compared to their actual default probabilities
(b) The spread premium puzzle refers to AAA corporate bonds being priced at almost the same prices as equivalent treasury bonds without offering the same liquidity or guarantee as treasury bonds
(c) The spread premium puzzle refers to the moral hazard implicit in the monoline insurance market
(d) The spread premium puzzle refers to dollar denominated non-US sovereign bonds being priced at a significant discount to other similar USD denominated assets
The correct answer is choice ‘a’
Choice ‘a’ is the correct answer. The other choices represent nonsensical statements.
A portfolio has two loans, A and B, each worth $1m. The probability of default of loan A is 10% and that of loan B is 15%. The probability of both loans defaulting together is 1%. Calculate the expected loss on the portfolio.
(a) 250000
(b) 240000
(c) 500000
(d) 1000000
The correct answer is choice ‘a’
The easiest way to answer this question is to ignore the joint probability of default as that is irrelevant to expected losses. The joint probability of default impacts the volatility of the losses, but not the expected amount. One way to think about it is to think of asset portfolios, where diversification reduces risk (ie standard deviation) but the expected returns are nothing but the average of the expected returns in the portfolio. Just as the expected returns of the portfolio are not affected by the volatility or correlations (these affect standard deviation), in the same way the joint probability of default does not affect the expected losses. Therefore the expected losses for this portfolio are simply $1m x 10% + $1m x 15% = $250,000.
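A one-line check of this reasoning (assuming, as the question implicitly does, a loss given default of 100%):

    exposures = [1_000_000, 1_000_000]
    pds = [0.10, 0.15]

    # Expected loss is just the sum of exposure x PD; the joint default
    # probability affects the dispersion of losses, not their expectation
    print(sum(e * p for e, p in zip(exposures, pds)))   # 250000.0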
If two bonds with identical credit ratings, coupon and maturity but from different issuers trade at different spreads to treasury rates, which of the following is a possible explanation:
I. The bonds differ in liquidity
II. Events have happened that have changed investor perceptions but these are not yet reflected in the ratings
III. The bonds carry different market risk
IV. The bonds differ in their convexity
(a) II and IV
(b) I and II
(c) III and IV
(d) I, II and IV
The correct answer is choice ‘b’
When two bonds that appear identical in every respect trade at different prices, the difference is often due to differences in liquidity between the two bonds (the less liquid bond will be cheaper and yield more), and also due to the fact that ratings from the major rating agencies do not generally react to day to day changes in the market. The market’s perception of the differences between the two credits will cause a divergence in the prices. This was an extremely visible phenomenon during the credit crisis of 2007-2009, when fixed income security prices changed sharply for many securities without any changes in external credit ratings.
A Bank Holding Company (BHC) is invested in an investment bank and a retail bank. The BHC defaults for certain if either the investment bank or the retail bank defaults. However, the BHC can also default on its own without either the investment bank or the retail bank defaulting. The investment bank and the retail bank’s defaults are independent of each other, with a probability of default of 0.05 each. The BHC’s probability of default is 0.11.
What is the probability of default of both the BHC and the investment bank? What is the probability of the BHC’s default provided both the investment bank and the retail bank survive?
(a) 0.11 and 0
(b) 0.08 and 0.0475
(c) 0.0475 and 0.10
(d) 0.05 and 0.0125
The correct answer is choice ‘d’
Since the BHC always fails when the investment bank fails, the joint probability of default of the two is merely the probability of the investment bank failing, ie 0.05.
The probability of just the BHC failing, given that both the investment bank and the retail bank have survived, will be equal to 0.11 - (0.05 + 0.05 - 0.05 x 0.05) = 0.0125. (The easiest way to understand this would be to consider a Venn diagram, where the area under the largest circle is 0.11, and there are two intersecting circles inside this larger circle, each with an area of 0.05 and their intersection accounting for 0.05 x 0.05. We need to calculate the area outside of the two smaller circles, but within the larger circle representing the BHC.)
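The same arithmetic as a short sketch:

    p_ib = 0.05    # investment bank default probability
    p_rb = 0.05    # retail bank default probability
    p_bhc = 0.11   # overall BHC default probability

    # P(IB or RB defaults), the subsidiaries being independent
    p_either = p_ib + p_rb - p_ib * p_rb   # 0.0975
    print(round(p_bhc - p_either, 4))      # 0.0125: the BHC fails on its own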
If A and B be two debt securities, which of the following is true?
(a) The probability of simultaneous default of A and B is greatest when their default correlation is negative
(b) The probability of simultaneous default of A and B is greatest when their default correlation is 0
(c) The probability of simultaneous default of A and B is greatest when their default correlation is +1
(d) The probability of simultaneous default of A and B is not dependent upon their default correlations, but on their marginal probabilities of default
The correct answer is choice ‘c’
If the marginal probabilities of default of two securities A and B are P(A) and P(B), then the probability of both defaulting together is affected by the default correlation between them: P(A and B) = default correlation x √(P(A)(1 - P(A)) P(B)(1 - P(B))) + P(A) x P(B), which increases with the default correlation and is therefore greatest when the correlation is +1. Marginal probability of default means the probability of default of each security on a standalone basis, ie the probability of default of one security without considering the other.
A bank has lent $1m against a house currently valued at $1.5m, where the annual volatility of house prices is 10% and the home buyer’s probability of default is 5%. What is the probability that the bank will recover less than the principal advanced on this loan, assuming the probability of the home buyer’s default is independent of the value of the house?
(a) More than 5%
(b) Less than 1%
(c) 0
(d) More than 1%
The correct answer is choice ‘b’
The bank will fail to recover the principal advanced on this loan only if the home buyer defaults and the house value falls below $1m, ie the price moves adversely by more than $500k, which is -$500k/$150k = -3.33σ. (Note that $150k is the 1 year volatility in dollars, ie $1.5m x 10%.)
Since the two events are independent, the probability of both happening together is the product of the two probabilities, one of which we know to be 5%. The other is certainly a small number, being a 3.33σ adverse move, so intuitively it is clear that the probability of both things happening together will be less than 1%.
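To make the intuition precise, a short sketch (assuming, as the volatility figure suggests, normally distributed price changes):

    from statistics import NormalDist

    p_default = 0.05
    z = -500_000 / 150_000               # about -3.33 standard deviations
    p_price_fall = NormalDist().cdf(z)   # roughly 0.0004
    print(p_default * p_price_fall)      # about 0.00002, comfortably below 1%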
There are two bonds in a portfolio, each with a market value of $50m. The probability of default of the two bonds are 0.03 and 0.08 respectively, over a one year horizon. If the default correlation is 25%, what is the one year expected loss on this portfolio?
(a) $11m
(b) $5.5m
(c) $5.26m
(d) $1.38m
The correct answer is choice ‘b’
We first calculate the probability of the joint default of both A and B:
P(A and B) = default correlation x √(P(A)(1 - P(A)) P(B)(1 - P(B))) + P(A) x P(B) = 25% x √(0.03 x (1 - 0.03) x 0.08 x (1 - 0.08)) + 0.03 x 0.08 = 0.0140.
The joint distribution is then: P(only A defaults) = 0.03 - 0.014 = 0.016, P(only B defaults) = 0.08 - 0.014 = 0.066, and P(both default) = 0.014. The expected loss is therefore 0.016 x $50m + 0.066 x $50m + 0.014 x $100m = $5.5m, which (as in the earlier two-loan question) is the same as simply $50m x 3% + $50m x 8%, since default correlation affects the dispersion of losses but not their expectation.
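A short sketch of this formula, which also inverts it for the next question (backing out the correlation from the joint default probability):

    from math import sqrt

    def joint_default_prob(pa, pb, rho):
        # P(A and B default) = rho * sqrt(pa(1-pa) * pb(1-pb)) + pa * pb
        return rho * sqrt(pa * (1 - pa) * pb * (1 - pb)) + pa * pb

    def implied_default_correlation(pa, pb, p_joint):
        # The same relationship solved for the default correlation
        return (p_joint - pa * pb) / sqrt(pa * (1 - pa) * pb * (1 - pb))

    print(round(joint_default_prob(0.03, 0.08, 0.25), 4))            # 0.014
    print(round(implied_default_correlation(0.03, 0.08, 0.014), 2))  # 0.25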
There are two bonds in a portfolio, each with a market value of $50m. The probability of default of the two bonds are 0.03 and 0.08 respectively, over a one year horizon. If the probability of the two bonds defaulting simultaneously is 1.4%, what is the default correlation between the two?
(a) 25%
(b) 100%
(c) 40%
(d) 0%
The correct answer is choice ‘a’
Probability of the joint default of both A and B = default correlation x √(P(A)(1 - P(A)) P(B)(1 - P(B))) + P(A) x P(B).
We know all the numbers except the default correlation, and we can solve for it:
Default correlation x √(0.03 x (1 - 0.03) x 0.08 x (1 - 0.08)) + 0.03 x 0.08 = 0.014.
Solving, we get default correlation = 25%
In estimating credit exposure for a line of credit, it is usual to consider:
(a) only the value of credit exposure currently existing against the credit line as the exposure at default.
(b) the full value of the credit line to be the exposure at default as the borrower has an informational advantage that will lead them to borrow fully against the credit line at the time of default.
(c) the present value of the line of credit at the agreed rate of lending.
(d) a fixed fraction of the line of credit to be the exposure at default even though the currently drawn amount is quite different from such a fraction.
The correct answer is choice ‘d’
Choice ‘d’ is the correct answer. Exposures such as those to a line of credit, of which only a part (or none) may be drawn at the time of assessment, present a difficulty when attempting to quantify credit risk. It is not correct to take the entire amount of the line as the exposure at default, and likewise the currently drawn amount is likely to be too low a number to consider.
While the borrower has an information advantage in that he would be aware of the deterioration in credit standing before the bank and would probably draw cash prior to default, it is unlikely that the entire amount of the line of credit would be drawn in all cases. In some cases, none may be drawn. In other cases, the bank would become aware of the situation and curtail or cancel access to the credit line in a timely fashion.
Therefore a fixed proportion of existing credit lines is considered a reasonable approach to estimating the exposure at default.
For a loan portfolio, expected losses are charged against:
(a) Economic credit capital
(b) Regulatory capital
(c) Economic capital
(d) Credit reserves
The correct answer is choice ‘d’
Credit reserves are created in respect of expected losses, which are considered the cost of doing business. Unexpected losses are borne by economic credit capital, which is a part of economic capital. Therefore Choice ‘d’ is the correct answer.