1 Flashcards

1
Q

Which of the following statements are true?
I. Risk governance structures distribute rights and responsibilities among stakeholders in the corporation
II. Cybernetics is the multidisciplinary study of cyber risk and control systems underlying information systems in an organization
III. Corporate governance is a subset of the larger subject of risk governance
IV. The Cadbury report was issued in the early 90s and was one of the early frameworks for corporate governance

(a) II and III
(b) I and IV
(c) I, II and IV
(d) All of the above
A

The correct answer is choice ‘b’

Governance structures specify the policies, principles and procedures for making decisions about corporate direction. They distribute rights and responsibilities among stakeholders, which typically include executive management, employees, the board, etc. Statement I is therefore correct.

“Cybernetics is a transdisciplinary approach for exploring regulatory systems, their structures, constraints, and possibilities. In the 21st century, the term is often used in a rather loose way to imply ‘control of any system using technology’” (Wikipedia). Governance literature has been influenced by cybernetics, which is not the same thing as information security or cyber security. Statement II is incorrect.

Corporate governance includes risk governance, and not the other way round. Therefore statement III is incorrect.

The Cadbury Report, titled Financial Aspects of Corporate Governance, was a report issued in the UK in December 1992 by “The Committee on the Financial Aspects of Corporate Governance”. The report is eponymous with the chair of the committee, and set out recommendations on the arrangement of company boards and accounting systems to mitigate corporate governance risks and failures. Statement IV is therefore correct.

2
Q

Which of the following is a way to classify risk governance structures?

(a) Committee based, regulation based and board mandated
(b) Top-down and Bottom-up
(c) Reactive, Preventative and Active
(d) Active and Passive

A

The correct answer is choice ‘c’

This is a tricky question in the sense that no risk management professional can be expected to know the answer unless they have read Chapter 2 of the PRMIA handbook. This question tests something you would need to know purely for the sake of the exam.

PRMIA’s handbook classifies governance structures as reactive, preventative and active. Reactive structures involve monitoring signals after the event, leading to corrective actions. Preventative structures are forward looking and anticipate issues before they arise. Active structures include considerations of operational efficiency and not just governance. All the other answers are made-up phrases and are incorrect.

In reality, corporations employ all structures together without worrying about the boundary between the three, and these distinctions do not exist except in textbooks.

3
Q
Which of the following are a CRO's responsibilities:
I. Statutory financial reporting
II. Reporting to the audit committee
III. Compliance with risk regulatory standards
IV. Operational risk
(a) III and IV
(b) I and II
(c) II and IV
(d) All of the above
A

The correct answer is choice ‘a’

Statutory financial reporting is the responsibility of the Chief Financial Officer, not the Chief Risk Officer. The head of internal audit reports to the audit committee of the board, not the CRO. Therefore statements I and II are incorrect.

The CRO is generally expected to drive risk and compliance with related regulatory standards. Market risk, credit risk and operational risk groups report into the CRO, so statements III and IV are correct.

4
Q

Which of the following statements are correct?
I. A reliance upon conditional probabilities and a-priori views of probabilities is called the ‘frequentist’ view
II. Knightian uncertainty refers to things that might happen but for which probabilities cannot be evaluated
III. Risk mitigation and risk elimination are approaches to reacting to identified risks
IV. Confidence accounting is a reference to the accounting frauds that were seen in the past decade as a reflection of failed governance processes
(a) II and III
(b) I and IV
(c) II, III and IV
(d) All of the above

A

The correct answer is choice ‘a’

In statistics, which is relevant to risk management, a distinction is often drawn between ‘frequentists’ and ‘Bayesians’. Frequentists rely upon data to draw conclusions as to probabilities. Bayesians consider conditional probabilities, ie, take into account what things are already known, and inject sometimes subjective a-priori probabilities into the calculations. Statement I describes Bayesians, and not frequentists. In reality however, the difference is merely academic. Risk managers use whichever technique best applies to the given situation without making it about ideology.

The difference between ‘Knightian uncertainty’ and ‘Risk’ is similarly academic. Knightian uncertainty refers to risk that cannot be measured or calculated. ‘Risk’ on the other hand refers to things for which past data exists and calculations of exposure can be made. To give an example in the context of the financial world, the risk from a pandemic creating systemic failures from a failure of payment and settlement systems and the like is ‘Knightian uncertainty’, but the market risk from equity price movements can be modeled (albeit with limitations) and is calculable. Statement II is therefore correct.

Once a risk is identified, it can be mitigated, accepted, avoided or eliminated, or transferred by way of insurance. Therefore statement III is correct.

Statement IV is incorrect. Confidence accounting refers to the proposal to present accounting estimates as ranges or distributions at stated confidence levels rather than as single point estimates; it is not a reference to accounting frauds.

5
Q

Which of the following statements are correct in relation to the financial system just prior to the current financial crisis:
I. The system was robust against small random shocks, but not against large scale disturbances to key hubs in the network
II. Financial innovation helped reduce the complexity of the financial network
III. Knightian uncertainty refers to risk that can be quantified and measured
IV. Feedback effects under stress accentuated liquidity problems
(a) I, II and IV
(b) III and IV
(c) I and IV
(d) II and III

A

The correct answer is choice ‘c’

6
Q

Which of the following statements is true:
I. Basel II requires banks to conduct stress testing in respect of their credit exposures in addition to stress testing for market risk exposures
II. Basel II requires pooled probabilities of default (and not individual PDs for each exposure) to be used for credit risk capital calculations
(a) I & II
(b) I
(c) II
(d) Neither statement is true

A

The correct answer is choice ‘a’

Both statements are accurate. Basel II requires pooled probabilities of default to be applied to risk buckets that contain similar exposures. Also, stress testing is mandatory for both market and credit risk.

7
Q

Which of the following is a measure of the level of capital that an institution needs to hold in order to maintain a desired credit rating?

(a) Economic capital
(b) Book value
(c) Regulatory capital
(d) Shareholders’ equity

A

The correct answer is choice ‘a’

Economic capital is a measure of the level of capital needed to maintain a desired credit rating. Regulatory capital is the amount of capital required to be held by regulation, and this may be quite different from economic capital. Book value is an accounting measure reflecting assets minus liabilities as measured per accounting rules; it is often expressed per share. Shareholders’ equity is a narrow term which is the amount of capital attributable to the shareholders, and includes paid up capital and reserves but not long term debt or other non-equity funding.

8
Q

Which of the following statements are true:
I. Capital adequacy implies the ability of a firm to remain a going concern
II. Regulatory capital and economic capital are identical as they target the same objectives
III. The role of economic capital is to provide a buffer against expected losses
IV. Conservative estimates of economic capital are based upon a confidence level of 100%
(a) I
(b) I, III and IV
(c) III
(d) I and III

A

The correct answer is choice ‘a’

Statement I is true - capital adequacy indeed is a reference to the ability of the firm to stay a ‘going concern’. (Going concern is an accounting term that means the ability of the firm to continue in business without the stress of liquidation.)

Statement II is not true because even though the stated objective of regulatory capital requirements is similar to the purposes for which economic capital is calculated, regulatory capital calculations are based upon a large number of ad-hoc estimates and parameters that are ‘hard-coded’ into regulation, while economic capital is generally calculated for internal purposes and uses an institution’s own estimates and models. They are rarely identical.

Statement III is not true as the purpose of economic capital is to provide a buffer against unexpected losses. Expected losses are covered by the P&L (or credit reserves), and not capital.

Statement IV is incorrect as even though economic capital may be calculated at very high confidence levels, that is never 100% which would require running a ‘risk-free’ business, which would mean there are no profits either. The level of confidence is set at a level which is an acceptable balance between the interests of the equity providers and the debt holders.

9
Q

When combining separate bottom up estimates of market, credit and operational risk measures, the most conservative economic capital estimate results from which of the following assumptions:

(a) Assuming that market, credit and operational risk estimates are perfectly negatively correlated
(b) Assuming that market, credit and operational risk estimates are perfectly positively correlated
(c) Assuming that the resulting distributions have a correlation between 0 and 1
(d) Assuming that market, credit and operational risk estimates are uncorrelated

A

The correct answer is choice ‘b’

If the risks are considered perfectly positively correlated, ie assumed to have a correlation equal to 1, the standard deviations can simply be added together. This gives the most conservative estimate of combined risk for capital calculation purposes. In practice, this is the assumption used most often.

If risks are uncorrelated, ie correlation is assumed to be zero, variances can be added or the standard deviation is the root of the sum of the squares of the individual standard deviations. This obviously gives a number lower than that given when correlation is assumed to be +1.

Similarly, assumptions of negative correlation, or any correlation other than +1 will give a standard deviation number that is smaller and therefore less conservative. Choice ‘b’ is the correct answer.
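The effect of the correlation assumption can be sketched numerically. Below is a minimal Python illustration; the 60/80/30 figures are hypothetical stand-alone risk estimates chosen for the example, and a single pairwise correlation is assumed across all risk pairs:

```python
import math

def combined_sd(sds, rho):
    """Combined standard deviation of risks sharing a single pairwise correlation rho."""
    total = sum(s * s for s in sds)
    for i in range(len(sds)):
        for j in range(i + 1, len(sds)):
            total += 2 * rho * sds[i] * sds[j]
    return math.sqrt(total)

sds = [60.0, 80.0, 30.0]       # hypothetical stand-alone risk estimates
print(combined_sd(sds, 1.0))   # perfectly correlated: the simple sum, 170.0
print(combined_sd(sds, 0.0))   # uncorrelated: root-sum-square, about 104.4
print(combined_sd(sds, -0.4))  # negative correlation: smaller still
```

As the correlation falls from +1 toward negative values, the combined figure (and hence the capital estimate) strictly decreases, which is why +1 is the conservative choice.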

10
Q

The Options Theoretic approach to calculating economic capital considers the value of capital as being equivalent to a call option with a strike price equal to:

(a) The market value of the debt
(b) The value of the assets
(c) The value of the firm
(d) The notional value of the debt

A

The correct answer is choice ‘d’

The Options Theoretic approach to calculating economic capital is a top-down approach that considers the value of capital as being equivalent to a call option with a strike price equal to the notional value of the debt - ie, the shareholders have a call option on the assets of the firm which they can acquire by paying the debt holders a value equal to their notional claim (ie the face value of the debt). Therefore Choice ‘d’ is the correct answer and the other choices are incorrect.

11
Q

Economic capital under the Earnings Volatility approach is calculated as:

(a) Earnings under the worst case scenario at a given confidence level/Required rate of return for the firm
(b) [Expected earnings less Earnings under the worst case scenario at a given confidence level]/Required rate of return for the firm
(c) Expected earnings/Required rate of return for the firm
(d) Expected earnings/Specific risk premium for the firm

A

The correct answer is choice ‘b’

The Earnings Volatility approach to calculating economic capital is a top down approach that considers economic capital to be the capital required to make up for the worst case fall in earnings, and calculates EC as the worst case decrease in earnings capitalized at the rate of return expected of the firm. The worst case decrease in earnings, or the earnings-at-risk, can only be stated at a given confidence level, and is equal to the Expected Earnings less Earnings under the worst case scenario.
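As a sketch with made-up figures (expected earnings of 500, worst case earnings of 380 at the chosen confidence level, and a 10% required rate of return, all hypothetical), the calculation looks like:

```python
# All figures below are hypothetical, for illustration only.
expected_earnings = 500.0     # expected earnings for the year
worst_case_earnings = 380.0   # earnings in the worst case at the chosen confidence level
required_return = 0.10        # rate of return expected of the firm

earnings_at_risk = expected_earnings - worst_case_earnings  # 120.0
economic_capital = earnings_at_risk / required_return       # capitalize the shortfall
print(economic_capital)
```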

12
Q

Which of the following is not one of the ‘three pillars’ specified in the Basel accord:

(a) Minimum capital requirements
(b) National regulation
(c) Supervisory review
(d) Market discipline

A

The correct answer is choice ‘b’

The three pillars are minimum capital requirements, supervisory review and market discipline. National regulation is not a pillar described under the accord. Choice ‘b’ is the correct answer.

13
Q

According to the Basel framework, shareholders’ equity and reserves are considered a part of:

(a) Tier 1 capital
(b) Tier 2 capital
(c) Tier 3 capital
(d) All of the above

A

The correct answer is choice ‘a’

According to the Basel II framework, Tier 1 capital, also called core capital or basic equity, includes equity capital and disclosed reserves.

Tier 2 capital, also called supplementary capital, includes undisclosed reserves, revaluation reserves, general provisions/general loan-loss reserves, hybrid debt capital instruments and subordinated term debt.

Tier 3 capital, or short term subordinated debt, is intended only to cover market risk but only at the discretion of their national authority.

14
Q

According to the Basel framework, reserves resulting from the upward revaluation of assets are considered a part of:

(a) Tier 2 capital
(b) Tier 1 capital
(c) Tier 3 capital
(d) All of the above

A

The correct answer is choice ‘a’

According to the Basel II framework, Tier 1 capital, also called core capital or basic equity, includes equity capital and disclosed reserves.

Tier 2 capital, also called supplementary capital, includes undisclosed reserves, revaluation reserves, general provisions/general loan-loss reserves, hybrid debt capital instruments and subordinated term debt.

Tier 3 capital, or short term subordinated debt, is intended only to cover market risk but only at the discretion of their national authority.

15
Q

According to the Basel II framework, subordinated term debt that was originally issued 4 years ago with a maturity of 6 years is considered a part of:

(a) Tier 3 capital
(b) Tier 1 capital
(c) Tier 2 capital
(d) None of the above

A

The correct answer is choice ‘c’

According to the Basel II framework, Tier 1 capital, also called core capital or basic equity, includes equity capital and disclosed reserves.

Tier 2 capital, also called supplementary capital, includes undisclosed reserves, revaluation reserves, general provisions/general loan-loss reserves, hybrid debt capital instruments and subordinated term debt issued originally for 5 years or longer.

Tier 3 capital, or short term subordinated debt, is intended only to cover market risk but only at the discretion of their national authority. This only includes short term subordinated debt originally issued for 2 or more years.

16
Q

Which of the following is NOT an approach used to allocate economic capital to underlying business units:

(a) Fixed ratio economic capital contributions
(b) Incremental economic capital contributions
(c) Stand alone economic capital contributions
(d) Marginal economic capital contributions

A

The correct answer is choice ‘a’

Other than Choice ‘a’, all the other choices represent valid approaches to allocate economic capital to underlying business units. There is no such thing as a ‘fixed ratio economic capital contribution’.

17
Q

The sum of the stand alone economic capital of all the business units of a bank is:

(a) unrelated to the economic capital for the firm as a whole
(b) less than the economic capital for the firm as a whole
(c) equal to the economic capital for the firm as a whole
(d) more than the economic capital for the firm as a whole

A

The correct answer is choice ‘d’

Economic capital is sub-additive, ie, because of the correlation being less than perfect between the risks of the different business units, the total economic capital for the firm will be less than the sum of the EC for the individual business units. Therefore Choice ‘d’ is the correct answer.

In practice, correlations are difficult to estimate reliably, and banks often use estimates and corroborate their capital calculations with reference to a number of data points.

18
Q

The standalone economic capital estimates for the three uncorrelated business units of a bank are $100, $200 and $150 respectively. What is the combined economic capital for the bank?

(a) 72500
(b) 450
(c) 269
(d) 21

A

The correct answer is choice ‘c’

Since the business units are uncorrelated, we can get the combined EC as equal to the square root of the sum of the squares of the individual EC estimates. Therefore Choice ‘c’ is the correct answer. [=SQRT(100^2+200^2+150^2)]
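The result can be checked directly with the standard library (the perfectly correlated case is shown alongside for contrast):

```python
import math

ec = [100, 200, 150]  # stand-alone economic capital per business unit

# Uncorrelated units: variances add, so combined EC is the root-sum-square.
ec_uncorrelated = math.sqrt(sum(x * x for x in ec))
print(round(ec_uncorrelated))  # 269

# Perfectly correlated units: stand-alone figures add directly.
ec_correlated = sum(ec)
print(ec_correlated)  # 450
```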

19
Q

The standalone economic capital estimates for the three business units of a bank are $100, $200 and $150 respectively. What is the combined economic capital for the bank, assuming the risks of the three business units are perfectly correlated?

(a) 21
(b) 269
(c) 72500
(d) 450

A

The correct answer is choice ‘d’

Since the business units are perfectly correlated, we can get the combined EC as equal to the sum of the individual EC estimates. Therefore Choice ‘d’ is the correct answer.

20
Q

A key problem with return on equity as a measure of comparative performance is:

(a) that return on equity measures do not account for interest and taxes
(b) that return on equity are not adjusted for cash flows being different from accounting earnings
(c) that return on equity is not adjusted for risk
(d) that return on equity ignores the effect of leverage on returns to shareholders

A

The correct answer is choice ‘c’

The major problem with using return on equity as a measure of performance is that return on equity is not adjusted for risk. Therefore, a riskier investment will tend to come out ahead when compared to a less risky investment when using return on equity as a performance metric.

Return on equity does not ignore the effect of leverage (though return on assets does) because it considers the income attributable to equity, including income from leveraged investments.

Return on equity is generally measured after interest and taxes at the company-wide level, though at the business unit level it may use earnings before interest and taxes. However, this does not create a problem so long as the performance of all units being compared is measured in the same way.

Cash flows being different from accounting earnings can create liquidity issues, but this does not affect the effectiveness of ROE as a measure of performance.

21
Q

As opposed to traditional accounting based measures, risk adjusted performance measures use which of the following approaches to measure performance:

(a) adjust returns based on the level of risk undertaken to earn that return
(b) adjust both return and the capital employed to account for the risk undertaken
(c) adjust capital employed to reflect the risk undertaken
(d) Any or all of the above

A

The correct answer is choice ‘d’

Performance measurement at a very basic level involves comparing the return earned to the capital invested to earn that return. Risk adjusted performance measures (RAPMs) come in various varieties - and the key difference between RAPMs and traditional measures such as return on equity, return on assets etc is that RAPMs account for the risk undertaken. They may do so by either adjusting the return, or the capital, or both. They are classified as RAROCs (risk adjusted return on capital), RORACs (return on risk adjusted capital) and RARORACs (risk adjusted return on risk adjusted capital).
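The three variants can be sketched with made-up figures. All numbers below are hypothetical, and the exact definition of the "risk-adjusted" return or capital varies by institution; this is only meant to show where the risk adjustment sits in each ratio:

```python
# Illustrative figures only; definitions of "risk-adjusted" vary by institution.
revenue = 100.0
costs = 40.0
expected_losses = 10.0     # deducted to risk-adjust the return
book_capital = 500.0       # unadjusted capital employed
economic_capital = 400.0   # risk-adjusted capital

risk_adjusted_return = revenue - costs - expected_losses

raroc = risk_adjusted_return / book_capital        # risk-adjusted return on (plain) capital
rorac = (revenue - costs) / economic_capital       # (plain) return on risk-adjusted capital
rarorac = risk_adjusted_return / economic_capital  # both sides adjusted
```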

22
Q

For a group of assets known to be positively correlated, what is the impact on economic capital calculations if we assume the assets to be independent (or uncorrelated)?

(a) Economic capital estimates remain the same
(b) The impact on economic capital cannot be determined in the absence of volatility information
(c) Estimates of economic capital go down
(d) Estimates of economic capital go up

A

The correct answer is choice ‘c’

By assuming the assets to be independent, we are reducing the correlation from a positive number to zero. Reducing asset correlations reduces the combined standard deviation of the assets, and therefore reduces economic capital. Therefore Choice ‘c’ is the correct answer.

Note that this question could also be phrased in terms of the impact on VaR estimates, and the answer would still be the same. Both VaR and economic capital are a multiple of standard deviation, and if standard deviation goes down, both VaR and economic capital estimates will reduce.

23
Q

A financial institution is considering shedding a business unit to reduce its economic capital requirements. Which of the following is an appropriate measure of the resulting reduction in capital requirements?

(a) Marginal capital for the business unit in consideration
(b) Percentage of total gross income contributed by the business unit in question
(c) Proportionate capital for the business unit in consideration
(d) Incremental capital for the business unit in consideration

A

The correct answer is choice ‘d’

Incremental capital (or incremental VaR, depending upon the context), is a measure of the change in the capital (or VaR) requirements if a certain change is made to a portfolio. It uses the ‘before’ and ‘after’ approach, ie find out what the capital requirement or VaR will be without the change, and what it will be after the change. The difference is the incremental capital or incremental VaR. It helps measure the change in risk as a result of a particular action, eg a change in a position.

Marginal capital or VaR on the other hand is a method to break down the capital requirement or the VaR so that it can be assigned to individual positions within the portfolio. The total of marginal capital or marginal VaR for all the positions in a portfolio adds up to the total capital requirements or total VaR. Note that marginal VaR is also called component VaR.

Therefore incremental capital is the correct answer to this question. The other choices are incorrect. In the exam, the question may be phrased differently, so try to keep in mind the difference between incremental and marginal capital, which can be a bit confusing given what these terms mean in plain English.
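The distinction can be illustrated with a toy calculation that treats stand-alone economic capital figures like standard deviations combined through a correlation matrix. The figures and correlations below are hypothetical:

```python
import math

def combined_ec(sds, corr):
    """Combined EC treating stand-alone ECs like standard deviations: sqrt(s'Cs)."""
    n = len(sds)
    return math.sqrt(sum(corr[i][j] * sds[i] * sds[j]
                         for i in range(n) for j in range(n)))

sds = [100.0, 200.0, 150.0]  # hypothetical stand-alone ECs per business unit
corr = [[1.0, 0.3, 0.2],
        [0.3, 1.0, 0.5],
        [0.2, 0.5, 1.0]]

total = combined_ec(sds, corr)

# Incremental capital of unit 0: total EC with the unit minus total EC without it.
without_0 = combined_ec(sds[1:], [row[1:] for row in corr[1:]])
incremental_0 = total - without_0

# Marginal (component) capital via Euler allocation: the pieces sum to the total.
components = [sds[i] * sum(corr[i][j] * sds[j] for j in range(len(sds))) / total
              for i in range(len(sds))]
```

Note how the marginal (component) figures add up exactly to the firm-wide total, whereas the incremental figure answers the "before and after" question for shedding one unit.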

24
Q

Which of the following best describes economic capital?

(a) Economic capital is a form of provision for market risk losses should adverse conditions arise
(b) Economic capital is the amount of regulatory capital that minimizes the cost of capital for firm
(c) Economic capital reflects the amount of capital required to maintain a firm’s target credit rating
(d) Economic capital is the amount of regulatory capital mandated for financial institutions in the OECD countries

A

The correct answer is choice ‘c’

Economic capital is often calculated with a view to maintaining the credit ratings for a firm. It is the capital available to absorb unexpected losses, and credit ratings are also based upon a certain probability of default. Economic capital is often calculated at a level equal to the confidence required for the desired credit rating. For example, if the probability of default for a AA rating is 0.02%, and the firm desires to hold a AA rating, then economic capital maintained at a confidence level of 99.98% would allow for such a rating. In this case, economic capital set at the 99.98% level can be thought of as the level of losses that would not be exceeded with a 99.98% probability, and would help the firm obtain its desired credit rating.

25
Q

If E denotes the expected value of a loan portfolio at the end of one year and U the value of the portfolio in the worst case scenario at the 99% confidence level, which of the following expressions correctly describes economic capital required in respect of credit risk?

(a) E - U
(b) U
(c) U/E
(d) E

A

The correct answer is choice ‘a’

Economic capital in respect of credit risk is intended to absorb unexpected losses. Unexpected losses are the losses above and beyond expected losses and up to the level of confidence that economic capital is being calculated for. The capital required to cover unexpected losses in this case is E - U, and therefore Choice ‘a’ is the correct answer.

26
Q

A loan portfolio’s full notional value is $100, and its value in a worst case scenario at the 99% level of confidence is $65. Expected losses on the portfolio are estimated at 10%. What is the level of economic capital required to cushion unexpected losses?

(a) 35
(b) 10
(c) 65
(d) 25

A

The correct answer is choice ‘d’

Expected value = $90 ($100 - 10%)
Value at 99% confidence level = $65
Therefore economic capital required at this level of confidence = $90 - $65 = $25.
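The arithmetic above can be verified directly:

```python
notional = 100.0
expected_loss_rate = 0.10
worst_case_value = 65.0  # portfolio value at the 99% confidence level

expected_value = notional * (1 - expected_loss_rate)  # $90
economic_capital = expected_value - worst_case_value  # $90 - $65 = $25
print(economic_capital)  # 25.0
```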

27
Q

The VaR of a portfolio at the 99% confidence level is $250,000 when mean return is assumed to be zero. If the assumption of zero returns is changed to an assumption of returns of $10,000, what is the revised VaR?

(a) 226740
(b) 260000
(c) 273260
(d) 240000

A

The correct answer is choice ‘d’

The exact formula for VaR is VaR = -(Zα σ + μ), where Zα is the z-multiple for the desired confidence level and μ is the mean return. Zα is always a negative number, at least so long as the desired confidence level is greater than 50%, and μ is often assumed to be zero because over the short time periods for which market risk VaR is calculated its value is very close to zero.

In practice the formula therefore reduces to VaR = -Zα σ, and since Zα is negative, we normally just multiply the z-factor without the negative sign by the standard deviation to get the VaR.

For this question, there are two ways to get the answer. Using the formula, we know that -Zα σ = 250,000 (as μ = 0), and therefore -(Zα σ + μ) = 250,000 - 10,000 = $240,000.

The other, easier way to think about this is that if the mean changes, then the distribution’s shape stays exactly the same, and the entire distribution shifts to the right by $10,000 as the mean moves up by $10,000. Therefore the VaR cutoff, which was previously at -250,000 on the graph also moves up by 10k to -240,000, and therefore $240,000 is the correct answer.
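The shift can also be checked numerically with the standard normal quantile (a sketch using `statistics.NormalDist` from the Python standard library; the implied sigma is back-solved from the stated VaR):

```python
from statistics import NormalDist

z = NormalDist().inv_cdf(0.01)         # z-multiple at 99% confidence, about -2.326
sigma = 250_000 / -z                   # sigma implied by a 250,000 VaR at zero mean

var_zero_mean = -(z * sigma + 0)       # recovers 250,000 by construction
var_with_mean = -(z * sigma + 10_000)  # the whole distribution shifts right by the mean

print(round(var_with_mean))  # 240000
```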

28
Q

What would be the correct order of steps to addressing data quality problems in an organization?

(a) Design the future state, perform a gap analysis, analyze the current state and implement the future state
(b) Assess the current state, design the future state, determine gaps and the actions required to be implemented to eliminate the gaps
(c) Articulate goals, do a ‘strategy-fit’ analysis and plan for action
(d) Call in external consultants

A

The correct answer is choice ‘b’

The correct order of steps to addressing data quality problems in an organization would include:

  1. Assessing the current state
  2. Designing the future state, and
  3. Planning and implementation which would include identifying the gaps between the current and the desired future state, and implementation to address the gaps.
29
Q

A bank’s detailed portfolio data on positions held in a particular security across the bank does not agree with the aggregate total position for that security for the bank. What data quality attribute is missing in this situation?

(a) Data completeness
(b) Auditability
(c) Data extensibility
(d) Data integrity

A

The correct answer is choice ‘d’

The term ‘data quality’ has multiple elements, ie, data in order to be considered of a high quality must have multiple attributes such as completeness, timeliness, auditability etc. Because this is not an exact science, every expert or text book will have a different view of what goes into data quality. For our purposes however, we will stick to what the PRMIA study material specifies, and according to the study material the following are the elements that can be considered attributes that make for quality data:

  1. Integration
  2. Integrity
  3. Completeness
  4. Accessibility
  5. Flexibility
  6. Extensibility
  7. Timeliness
  8. Auditability

I am not going to describe each of these here as that would be repetitive of the study material, but suffice it to say that the break-down of a number into its constituents should tie to the aggregate total. If that is not true, then the data lacks integrity - and therefore Choice ‘d’ is the correct answer. The other choices address other aspects of data quality but not this, and therefore are not correct.

30
Q

Which of the following statements are true?
I. Retail Risk Based Pricing involves using borrower specific data to arrive at both credit adjudication and pricing decisions
II. An integrated ‘Risk Information Management Environment’ includes two elements - people and processes
III. A Logical Data Model (LDM) lays down the relationships between data elements that an organization stores
IV. Reference Data and Metadata refer to the same thing
(a) I and III
(b) II and IV
(c) I, II and III
(d) All of the above

A

The correct answer is choice ‘a’

Statement I is correct. Retail Risk Based Pricing (RRBP) involves the use of borrower specific data (such as FICO scores, average balances etc) to arrive at credit decisions. These ‘retail’ credit decisions may include decisions on whether to grant a line of credit, a mortgage, issue a credit card, or any of the various other retail activities a bank may be dealing with. At the same time, this data can also be used to price the product, in addition to providing a yes or no credit decision so that risky borrowers are charged more than less risky borrowers.

Statement II is not correct, because an integrated Risk Information Management Environment includes three elements - people, processes and technology (and not just people and processes).

Statement III is correct. An LDM is a blue print of an organization’s data, and describes the relationships between the various data elements.

Statement IV is not correct because reference data and metadata are not the same thing. Reference data refers to relatively static data, such as customer name (while actual transactions may not be so static). Metadata refers to data about data, and is stored in a data dictionary.

31
Q

Which of the following is not an approach proposed by the Basel II framework to compute operational risk capital?

(a) Factor based approach
(b) Advanced measurement approach
(c) Standardized approach
(d) Basic indicator approach

A

The correct answer is choice ‘a’

Basel II proposes three approaches to compute operational risk capital - the basic indicator approach (BIA), the standardized approach (TSA) and the advanced measurement approach (AMA). There is no operational risk approach called the factor based approach.

32
Q

Which of the following data sources are expected to influence operational risk capital under the AMA:
I. Internal Loss Data (ILD)
II. External Loss Data (ELD)
III. Scenario Data (SD)
IV. Business Environment and Internal Control Factors (BEICF)

(a) I, II and III only
(b) III only
(c) I and II
(d) All of the above

A

The correct answer is choice ‘d’

All four data sources are expected to be utilized as inputs as appropriate for operational risk calculations under the advanced measurement approach. Of these, the last one, BEICF, is slightly different from the rest as it does not yield data points that become the basis of curve fitting or other statistical computations underlying capital calculations. It includes items such as KRIs, risk assessments etc, and allows the risk manager to assess the qualitative aspects of loss data.

33
Q
The Basel framework does not permit which of the following Units of Measure (UoM) for operational risk modeling:
I. UoM based on legal entity
II. UoM based on event type
III. UoM based on geography
IV. UoM based on line of business
(a) I and IV
(b) II only
(c) III only
(d) None of the above
A

The correct answer is choice ‘d’

Units of Measure for operational risk are homogenous groupings of risks that allow sensible modeling decisions to be made. For example, some risks may be fat-tailed, such as the risk of regulatory fines. Other risks may have finite tails - for example, damage to physical assets (DPA) risk may be limited to the value of the asset in question.

Additionally, risk reporting may need to be done on a line of business, legal entity or regional basis, and in order to be able to do so, the right level of granularity needs to be captured in the risk modeling exercise. The level of granularity applied is called the ‘unit of measure’ (UoM), and it is acceptable to adopt any of the choices listed above as dimensions that describe the unit of measure.

Note that it is entirely possible, even likely, to use legal entity, risk type, region, business and other dimensions simultaneously, though doing so is likely to result in an extremely large number of UoM combinations. That can be addressed by subsequently grouping the more granular UoMs into larger UoMs, which may ultimately be used for frequency and severity estimation.

34
Q

Which of the following statements are true:
I. The set of UoMs used for frequency and severity modeling should be identical
II. UoMs can be grouped together into larger combined UoMs using judgment based on the knowledge of the business
III. UoMs can be grouped together into combined UoMs using statistical techniques
IV. One may use separate sets of UoMs for frequency and severity modeling
(a) I, II and III
(b) II, III and IV
(c) IV only
(d) All of the above

A

The correct answer is choice ‘b’

One may use separate UoMs for frequency and severity modeling, for example, a combined UoM may be used for estimating the frequency of cyber attacks in a scenario, while the severity may be modeled using a more granular line-of-business UoM. Therefore statement I is false, while statement IV is true.

Statement II is correct, UoMs can be grouped together into larger units based on the facts relating to the business, controls and the business environment. Similarly, UoMs can be grouped together based on statistical clustering techniques using the ‘distance’ between the units of measure and combining UoMs that are closer to each other. In addition, it is also possible to combine both business knowledge and statistical algorithms to combine UoMs.

35
Q

Which of the following distributions is generally not used for frequency modeling for operational risk

(a) Binomial
(b) Poisson
(c) Gamma
(d) Negative binomial

A

The correct answer is choice ‘c’

Frequency modeling is performed using discrete distributions whose outcomes are non-negative integers - this allows the number of events per period of time to be modeled. Of the distributions listed above, the Poisson, negative binomial and binomial can all be used for modeling frequency distributions. The Poisson and negative binomial distributions are encountered the most in practice.

The gamma distribution is a continuous distribution and cannot be used for frequency modeling.

36
Q

For a given mean, which distribution would you prefer for frequency modeling where operational risk events are considered dependent, or in other words are seen as clustering together (as opposed to being independent)?

(a) Binomial
(b) Negative binomial
(c) Poisson
(d) Gamma

A

The correct answer is choice ‘b’

An interesting property that distinguishes the three most used distributions for modeling event frequency is that for a given mean, their variances differ. The ratio of variance to mean (the variance-mean ratio, calculated as variance/mean) can then be used to decide the kind of distribution to use. Both the variance and the mean can be estimated from available data points from the internal or external loss databases, or the scenario exercise.

The variance-mean ratio reflects how dispersed a distribution is. (In the PRMIA handbook, the variance to mean ratio has been described as the “Q-Factor”.)

The Poisson distribution has its mean equal to its variance, and therefore the variance to mean ratio is 1. For the negative binomial distribution, this ratio is always greater than 1, which means there is greater dispersion compared to the mean - or more intervals with low counts as well as more intervals with high counts. For the binomial distribution, the variance to mean ratio is less than one, which means it is less dispersed than the Poisson distribution with values closer to the mean.

In a situation where operational risk events are seen as clustering together, or dependent, the variance will be higher and it would be more appropriate to use the negative binomial distribution.
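The variance-mean ratios described above can be checked by simulation. The sketch below uses only the standard library; the sampler implementations, the parameter values (lambda = 4, r = 4, q = 0.5) and the sample size are illustrative assumptions.

```python
import math
import random
from statistics import mean, pvariance

random.seed(42)

def poisson(lam):
    """Knuth's algorithm for a Poisson draw (adequate for small lambda)."""
    L, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= random.random()
        if prod <= L:
            return k
        k += 1

def neg_binomial(r, q):
    """Number of failures before the r-th success (success probability q),
    built as a sum of geometric draws."""
    failures = 0
    for _ in range(r):
        while random.random() > q:
            failures += 1
    return failures

pois = [poisson(4.0) for _ in range(20_000)]
nb = [neg_binomial(4, 0.5) for _ in range(20_000)]  # mean = r(1-q)/q = 4, same as the Poisson

vmr_pois = pvariance(pois) / mean(pois)  # close to 1 for the Poisson
vmr_nb = pvariance(nb) / mean(nb)        # close to 1/q = 2: over-dispersed, more clustering
print(round(vmr_pois, 2), round(vmr_nb, 2))
```

For the same mean of 4, the negative binomial sample shows a variance-mean ratio near 2, illustrating why it suits clustered (dependent) events.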

37
Q

Which of the following is closest to the description of a ‘risk functional’?

(a) A risk functional is the distribution that models the severity of a risk
(b) A risk functional is a model distribution that is an approximation of the true loss distribution of a risk
(c) A risk functional assigns a penalty value for the difference between a model distribution and a risk’s severity distribution
(d) Risk functional refers to the Kolmogorov-Smirnov distance

A

The correct answer is choice ‘c’

For operational risk modeling, both frequency and severity distributions need to be modeled. Modeling severity involves finding an analytical distribution, such as log-normal or other that approximates the distribution best represented by known data - whether from the internal loss database, the external loss database or scenario data. A ‘risk functional’ is a measure of the deviation of the model distribution from the risk’s actual severity distribution. It assigns a penalty value for the deviation, using a statistical measure, such as the KS distance (Kolmogorov-Smirnov distance).

The problem of finding the right distribution then becomes the problem of optimizing the risk functional. For example, if F is the model distribution, and G is the actual, or empirical, severity distribution, and we are using the KS test, then the Risk Functional R is defined as follows:

R = sup_x | F(x) - G(x) |

Note that sup_x stands for ‘supremum’, which is a more technical way of saying ‘maximum’. In other words, we are calculating the maximum absolute KS distance between the two distributions. (Note that the KS distance is the max of the distance between identical percentiles of the two distributions using the CDFs of the two.)

Once the risk functional is identified, we can minimize it to determine the best fitting distribution for severity.
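A minimal sketch of a KS-based risk functional follows: it computes the largest gap between a candidate model CDF and the empirical CDF of the data. The loss values and the exponential candidate (rate 0.45) are made-up illustrations, not figures from the study material.

```python
import math

losses = [1.2, 0.7, 3.4, 2.1, 0.9, 5.6, 1.8, 2.7]  # hypothetical loss data

def model_cdf(x, lam=0.45):
    """Candidate model F: an exponential distribution with an assumed rate."""
    return 1.0 - math.exp(-lam * x)

def risk_functional(data, cdf):
    """KS distance sup |F(x) - G(x)|: the largest gap between the model CDF
    and the empirical CDF, checked on both sides of each empirical jump."""
    xs = sorted(data)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        d = max(d, abs(cdf(x) - (i + 1) / n), abs(cdf(x) - i / n))
    return d

print(round(risk_functional(losses, model_cdf), 3))
```

Minimizing this quantity over candidate distributions and parameters is the optimization the text describes.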

38
Q

Which of the following statements are true:
I. Heavy tailed parametric distributions are a good choice for severity modeling in operational risk.
II. Heavy tailed body-tail distributions are a good choice for severity modeling in operational risk.
III. Log-likelihood is a means to estimate parameters for a distribution.
IV. Body-tail distributions allow modeling small losses differently from large ones.
(a) I and IV
(b) II and III
(c) II, III and IV
(d) All of the above

A

The correct answer is choice ‘d’

When modeling for operational risk, we are generally concerned with tail losses - this is because the horizon for operational risk is 1 year at the 99.9th percentile. Since the 99.9th percentile is in the tail region, we would like to ensure that the tails are modeled as accurately as possible. Operational risk distributions are modeled using heavy tailed distributions.

Heavy tailed parametric distributions such as the log-normal, Pareto and others are a good choice for modeling risk severity, so statement I is correct.

Body-tail distributions are combinations of parametric distributions, with different types of distributions being used to model the body and the tail - this provides flexibility because small and medium losses up to a threshold can be modeled using one distribution, and losses beyond the threshold can be modeled using a different distribution that is a better estimate of the tail. Statement II is therefore correct.

A log-likelihood function simplifies the optimization of a regular likelihood function. We generally maximize a likelihood function (or minimize the risk functional) with a view to estimating the parameters of the underlying distribution. If the likelihood function is complex, it may be mathematically easier to optimize the log of the function, as that turns exponents and multiplications into additions while having the same maximizer as the underlying function. Therefore statement III is correct: log-likelihood is a means to estimate parameters for a distribution.

Statement IV is correct as body-tail distributions allow modeling different parts of the distribution differently from each other.
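As a concrete sketch of statement III: for the lognormal, the maximum (log-)likelihood estimates have a closed form - mu-hat is the mean of the log-losses and sigma-hat their population standard deviation. The simulated losses and true parameters below are assumptions for illustration.

```python
import math
import random

random.seed(7)
true_mu, true_sigma = 10.0, 2.0
losses = [random.lognormvariate(true_mu, true_sigma) for _ in range(5_000)]

logs = [math.log(x) for x in losses]
mu_hat = sum(logs) / len(logs)
sigma_hat = math.sqrt(sum((v - mu_hat) ** 2 for v in logs) / len(logs))

def log_likelihood(mu, sigma):
    """Lognormal log-likelihood: sums replace products, so it is easier to
    optimize while having the same maximizer as the likelihood itself."""
    n = len(logs)
    return (-n * math.log(sigma * math.sqrt(2 * math.pi))
            - sum(logs)                                        # the log(x) Jacobian term
            - sum((v - mu) ** 2 for v in logs) / (2 * sigma ** 2))

print(round(mu_hat, 2), round(sigma_hat, 2))  # close to the true 10.0 and 2.0
```

Any other parameter pair should score a lower log-likelihood than (mu_hat, sigma_hat), which is what "maximum likelihood" means in practice.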

39
Q

The difference between true severity and the best approximation of the true severity is called:

(a) Approximation error
(b) Total error
(c) Estimation error
(d) Fitting error

A

The correct answer is choice ‘a’

This question relates to fitting a distribution to the true severity of the operational risk loss we are trying to model. The difference between the severity as represented by our model and the true severity has two elements. To understand this, consider the three data points below:

a. The true severity,
b. The best approximation of the true severity in the model space, and
c. The fit based on the dataset.

  • True severity is what we are trying to model.
  • The model space refers to the collection of analytical distributions (log-normal, burr etc) that we are considering to arrive at the estimate of the severity.
  • The ‘best approximation of the true severity in the model space’ is reached by estimating the parameters of the distribution that optimizes the risk functional.
  • The ‘fit’ is the actual parameter estimates we settle for with the distribution we have determined best fits the true estimate of our severity. When estimating parameters, we have various methods available for estimation - the least squares method, the maximum likelihood method, for example, and we can get different estimates depending upon the method we choose to use.

Our severity model will be different from the true severity, and the total difference can be split into two types of errors:

  1. Fitting error, represented by ‘c - b’ above: the difference between the fit based on the dataset and the best approximation of the true severity, ie a measure of the extent to which we could have estimated the parameters better.
  2. Approximation error, represented by ‘b - a’ above: the difference between the true severity and the best approximation of the true severity that can be achieved within the model space, ie a measure of how well the chosen model space can represent the true severity.

One can reduce the approximation error by expanding the model space, ie by adding more distributions. However, this generally has the effect of increasing the fitting error, because the complexity of the model space increases and there are more ways to fit to the true severity.

40
Q
Which of the following are valid methods for selecting an appropriate model from the model space for severity estimation:
I. Cross-validation method
II. Bootstrap method
III. Complexity penalty method
IV. Maximum likelihood estimation method
(a) I and IV
(b) II and III
(c) I, II and III
(d) All of the above
A

The correct answer is choice ‘d’

Once we have a number of distributions in the model space, the task is to select the “best” distribution that is likely to be a good estimate of true severity. We have a number of distributions to pick from, an empirical dataset (from internal or external losses), and we can estimate the parameters for the different distributions. We then have to decide which distribution to pick, and that generally requires considering both approximation and fitting errors.

There are three methods that are generally used for selecting a model:

  1. The cross-validation method: This method divides the available data into two parts - the training set, and the validation set (the validation set is also called the ‘testing set’). Parameter estimation for each distribution is done using the training set, and differences are then calculated based on the validation set. Though the temptation may be to use the entire data set to estimate the parameters, that is likely to result in what may appear to be an excellent fit to the data on which it is based, but without any validation. So we estimate the parameters based on one part of the data (the training set), and check the differences we get from the remaining data (the validation set).
  2. Complexity penalty method: This is similar to the cross-validation method, but with an additional consideration of the complexity of the model. More complex models are likely to produce a more exact fit than simpler models, yet this fit may be spurious - and therefore a ‘penalty’ is added to the more complex models so as to favor simplicity over complexity. The ‘complexity’ of a model may be measured by the number of parameters it has; for example, a log-normal distribution has only two parameters while a body-tail distribution combining two different distributions may have many more.
  3. The bootstrap method: The bootstrap method estimates fitting error by drawing samples from the empirical loss dataset, or the fit already obtained, and then estimating parameters for each draw which are compared using some statistical technique. If the samples are drawn from the loss dataset, the technique is called a non-parametric bootstrap, and if the sample is drawn from an estimated model distribution, it is called a parametric bootstrap.
  4. Using goodness of fit statistics: The candidate fits can be compared using a goodness of fit statistic such as the KS distance, and the best one selected. Maximum likelihood estimation is a technique that attempts to maximize the likelihood that the estimate is as close as possible to the true value of the parameter. It is a general purpose statistical technique that can be used for parameter estimation, as well as for deciding which distribution to use from the model space.
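The cross-validation idea above can be sketched compactly: fit each candidate on a training set, score it on a held-out validation set, and keep the lowest score. The simulated loss data, the 50/50 split, the two candidate families and their parameters are all assumptions for illustration.

```python
import math
import random

random.seed(1)
losses = sorted(random.lognormvariate(8.0, 1.5) for _ in range(400))
train, valid = losses[0::2], losses[1::2]  # simple 50/50 train/validation split

def fit_lognormal(data):
    """Fit a lognormal by MLE and return its CDF as a closure."""
    logs = [math.log(x) for x in data]
    mu = sum(logs) / len(logs)
    sigma = math.sqrt(sum((v - mu) ** 2 for v in logs) / len(logs))
    return lambda x: 0.5 * (1 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2))))

def fit_exponential(data):
    """Fit an exponential by MLE and return its CDF as a closure."""
    lam = len(data) / sum(data)
    return lambda x: 1 - math.exp(-lam * x)

def ks_distance(data, cdf):
    """KS distance between a fitted CDF and the empirical CDF of data."""
    xs = sorted(data)
    n = len(xs)
    return max(max(abs(cdf(x) - (i + 1) / n), abs(cdf(x) - i / n))
               for i, x in enumerate(xs))

candidates = {"lognormal": fit_lognormal, "exponential": fit_exponential}
scores = {name: ks_distance(valid, fit(train)) for name, fit in candidates.items()}
best = min(scores, key=scores.get)
print(best)
```

Since the simulated data is lognormal, the lognormal candidate scores the smaller validation-set KS distance and is selected.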
41
Q

Which of the following is the most important problem to solve when fitting a severity distribution for operational risk capital:

(a) The fit obtained should reduce the combination of the fitting and approximation errors to a minimum
(b) Determine plausible scenarios to fill the data gaps in the internal and external loss data
(c) The risk functional’s minimization should lead to a good estimate of the 0.999 quantile
(d) Empirical loss data needs to be extended to the ranges below the reporting threshold and above large value losses

A

The correct answer is choice ‘c’

Ultimately, the objective of the operational risk severity estimation exercise is to calculate the 99.9th percentile loss over a one year horizon; everything else we do with the data (collecting loss information, modeling, curve fitting etc) revolves around this objective. If we cannot estimate the 99.9th percentile loss accurately, then not much else matters. Therefore Choice ‘c’ is the correct answer.

Minimizing the combination of fitting and approximation errors is one of the things we do with a view to better estimating the operational loss distribution. Empirical loss data is generally range bound, because corporations do not require employees to log losses below a threshold, and high value losses are generally rare. This problem is addressed by extrapolating to both large and small losses, something that impacts the performance of our model. Similarly, one of the objectives of scenario analysis is to fill data gaps by generating plausible scenarios. Yet while all of these are real issues to address, the primary problem we are trying to solve is estimating the 0.999 quantile.

42
Q

When fitting a distribution in excess of a threshold as part of the body-tail distribution method described by the equation below, how is the parameter ‘p’ calculated?

F(x) = p × F(Body)(x) for x <= T, and F(x) = p + (1 - p) × F(Tail)(x) for x > T

Here, F(x) is the severity distribution, F(Tail) and F(Body) are the parametric distributions selected for the tail and the body (each rescaled so that the body reaches 1 at T and the tail starts at 0 at T), and T is the threshold in excess of which the tail is considered to begin.

A

The correct answer is choice ‘d’

p = k/N. If there are N observations of which k are up to the threshold T, then p = k/N gives a continuous, unbroken curve which gets increasingly weighted towards the distribution selected for the tail as we move to the ‘right’, ie towards the higher values of losses.
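A hedged sketch of such a spliced ("body-tail") CDF with p = k/N follows: a lognormal body conditioned on x <= T and a Pareto tail above T, so that the curve is continuous with F(T) = p. The threshold, the counts k and N, and all distribution parameters are illustrative assumptions.

```python
import math

T = 100_000.0          # splice threshold (assumed)
k, N = 950, 1000       # k of N observations fall at or below T (assumed)
p = k / N

def lognormal_cdf(x, mu=9.0, sigma=1.2):
    return 0.5 * (1 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2))))

def pareto_cdf(x, alpha=1.5, xm=T):
    return 1 - (xm / x) ** alpha if x >= xm else 0.0

def spliced_cdf(x):
    if x <= T:
        return p * lognormal_cdf(x) / lognormal_cdf(T)   # body renormalized to [0, p]
    return p + (1 - p) * pareto_cdf(x)                   # tail carries the top 1-p mass

print(round(spliced_cdf(T), 3))  # 0.95 = p: continuous at the threshold
```

As x grows past T, the curve is governed entirely by the tail distribution, matching the "increasingly weighted towards the tail" description above.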

43
Q

Which of the following risks and reasons justify the use of scenario analysis in operational risk modeling:
I. Risks for which no internal loss data is available
II. Risks that are foreseeable but have no precedent, internally or externally
III. Risks for which objective assessments can be made by experts
IV. Risks that are known to exist, but for which no reliable external or internal losses can be analyzed
V. Reducing the complexity of having to fit statistical models to internal and external loss data
VI. Managing the capital estimation process so as to produce estimates in line with management’s desired capital buffers.

(a) I, II, III and IV
(b) I, II and III
(c) V
(d) All of the above

A

The correct answer is choice ‘a’

All the reasons and risks presented above are valid reasons for using scenario analysis, except V and VI - ie, the need to reduce the complexity of calculations is not a valid reason for using scenario analysis. Similarly, making operational risk capital estimates match management’s desired capital allocation targets is also not a valid reason. Capital calculations are intended to provide adequate capital for managing the risk from operations, regardless of what management may desire them to be.

44
Q

Which of the following statements are true?
I. If no loss data is available, good quality scenarios can be used to model operational risk
II. Scenario data can be mixed with observed loss data for modeling severity and frequency estimates
III. Severity estimates should not be created by fitting models to scenario generated loss data points alone
IV. Scenario assessments should only be used as modifiers to ILD or ELD severity models.
(a) III and IV
(b) I
(c) I and II
(d) All statements are true

A

The correct answer is choice ‘c’

There are multiple ways to incorporate scenario analysis for modeling operational risk capital - and the exact approach used depends upon the quantity of loss data available, and the quality of scenario assessments. Generally:

  • If there is no past loss data available, scenarios are the only practical means to model operational risk loss distributions. Both frequency and severity estimates can be modeled based on scenario data.
  • If there is plenty of past data available, scenarios can be used as a modifier for estimates that are based solely on data (for example, consider the MAX of the loss estimates at the desired quantile as provided by the data, and as indicated by scenarios)
  • If high quality scenario data is available, and there is sufficient past data, one could mix scenario assessments with the loss data and fit the combined data set to create the loss distribution. Alternatively, both could be fitted with severity estimates and then the two severities could be parametrically combined.

In short, there is considerable flexibility in how scenarios can be used.

Statement I is therefore correct, and so is statement II as both indicate valid uses of scenarios.

Statement III is not correct because it may be okay to create severity estimates based on scenario data alone.

Statement IV is not correct because while using scenarios as modifiers to other means of estimation is acceptable, that is not the only use of scenarios.

45
Q

An operational loss severity distribution is estimated using 4 data points from a scenario. Management institutes additional controls to reduce the severity of the loss if the risk is realized, and as a result the estimated loss from a 1-in-10-year event is halved. The 1-in-100-year loss estimate however remains the same. What would be the impact on the 99.9th percentile capital required for this risk as a result of the improvement in controls?

(a) The capital required will decrease
(b) The capital required will increase
(c) The capital required will stay the same
(d) Can’t say based on the information provided

A

The correct answer is choice ‘b’

This situation represents one of the paradoxes in estimating severity that one needs to be aware of - the improvement in controls reduces the weight of the body/middle of the distribution and moves it towards the tail (as the total probability under the curve must stay at 100%), and the distribution becomes more heavy tailed. As a result, the 99.9th percentile loss actually increases instead of decreasing, creating a counterintuitive result. Therefore the correct answer is that the capital required will increase.

If scenario analysis produces such a result, the analyst must question if the 1 in 100 year loss severity is still accurate. If the new control has reduced the severity in the body of the distribution, the question as to why the more extreme losses have not changed should be raised.

46
Q

For a hypothetical UoM, the number of losses in two non-overlapping datasets is 24 and 32 respectively. The Pareto tail parameters for the two datasets, calculated using the maximum likelihood estimation method, are 2 and 3. What is an estimate of the tail parameter of the combined dataset?

(a) 2.23
(b) 2.57
(c) 3
(d) Cannot be determined

A

The correct answer is choice ‘b’

For a number of processes, including many in finance, while a distribution such as the normal distribution is a good approximation near the modal value of the variable, the same distribution may not be a good estimate of the tails. For this reason, the Pareto distribution is often used to model the tails of another distribution. Generally, if you have a set of observations and you discard all observations below a threshold, you are left with what are called ‘exceedances’. The threshold needs to be reasonably far out in the tail. If from each value of the exceedances you subtract the threshold value, the resulting dataset can be modeled using the generalized Pareto distribution.

The tail parameter of the combined dataset is estimated as the average of the two tail parameters, weighted by the number of losses in each dataset. Therefore 2.57 [=2(24/(24+32)) + 3(32/(24+32))] is the correct answer.
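The weighted-average calculation in the answer above can be verified directly:

```python
# Average the two MLE tail parameters, weighted by the loss count in each dataset
n1, n2 = 24, 32
alpha1, alpha2 = 2.0, 3.0

alpha_combined = (alpha1 * n1 + alpha2 * n2) / (n1 + n2)
print(round(alpha_combined, 2))  # 2.57
```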

47
Q

Which of the following statements is true:
I. When averaging quantiles of two Pareto distributions, the quantiles of the averaged models are equal to the geometric average of the quantiles of the original models based upon the number of data items in each original model.
II. When modeling severity distributions, we can only use distributions which have fewer parameters than the number of datapoints we are modeling from.
III. If an internal loss data based model covers the same risks as a scenario based model, they can be combined using the weighted average of their parameters.
IV. If an internal loss model and a scenario based model address different risks, the models can be combined by taking their sums.

(a) II and III
(b) I and II
(c) III and IV
(d) All statements are true

A

The correct answer is choice ‘d’

Statement I is true, the quantiles of the averaged models are equal to the geometric average of the quantiles of the original models.

Statement II is correct, the number of data points from which model parameters are estimated must be greater than the number of parameters. So if a distribution, say Poisson, has one parameter, we need at least two data points to estimate the parameter. Other complex distributions may have multiple parameters for shape, scale and other things, and the minimum number of observations required will be greater than the number of parameters.

Statement III is true, if the ILD data and scenarios cover the same risk, they are essentially different perspectives on the same risk, and therefore should be combined as weighted averages.

But if they cover completely different risks, the models will need to be added together, not averaged - which is why Statement IV is true.

48
Q

Which of the following are valid approaches to leveraging external loss data for modeling operational risks:
I. Both internal and external losses can be fitted with distributions, and a weighted average approach using these distributions is relied upon for capital calculations.
II. External loss data is used to inform scenario modeling.
III. External loss data is combined with internal loss data points, and distributions fitted to the combined data set.
IV. External loss data is used to replace internal loss data points to create a higher quality data set to fit distributions.
(a) I, II and III
(b) I and III
(c) II and IV
(d) All of the above

A

The correct answer is choice ‘a’

Internal loss data is generally the highest quality as it is relevant, and is ‘real’ as it has occurred to the organization. External loss data suffers from a significant limitation: the risk profiles of the banks to which the data relates are generally not known due to anonymization, and may not be applicable to the bank performing the calculations. Therefore, replacing internal loss data with external loss data is not a good idea, and statement IV is incorrect. Statements I, II and III describe valid approaches to leveraging external loss data.

49
Q

Which of the following steps are required for computing the aggregate distribution for a UoM for operational risk once loss frequency and severity curves have been estimated:
I. Simulate number of losses based on the frequency distribution
II. Simulate the dollar value of the losses from the severity distribution
III. Simulate random number from the copula used to model dependence between the UoMs
IV. Compute dependent losses from aggregate distribution curves
(a) None of the above
(b) III and IV
(c) I and II
(d) All of the above

A

The correct answer is choice ‘c’

A recap would be in order here: calculating operational risk capital is a multi-step process.

First, we fit curves to estimate the parameters of our chosen distribution types for frequency (eg, Poisson) and severity (eg, lognormal). Note that these curves are fitted at the UoM level - the lowest level of granularity at which modeling is carried out. Since there are many UoMs, there are many frequency and severity distributions. However, what we are interested in is the loss distribution for the entire bank, from which the 99.9th percentile loss can be calculated. Getting there from the multiple frequency and severity distributions is a two step process:

  • Step 1: Calculate the aggregate loss distribution for each UoM. Each aggregate loss distribution is based upon an underlying frequency and severity distribution.
  • Step 2: Combine the multiple loss distributions after considering the dependence between the different UoMs. The ‘dependence’ recognizes that the various UoMs are not completely independent, ie the loss distributions are not additive; there is a sort of diversification benefit in the sense that not all types of losses occur at once, and the joint probabilities of the different losses make the combined loss less than the sum of the parts.

Step 1 requires simulating a number, say n, of the number of losses that occur in a given year from a frequency distribution. Then n losses are picked from the severity distribution, and the total loss for the year is a summation of these losses. This becomes one data point. This process of simulating the number of losses and then identifying that number of losses is carried out a large number of times to get the aggregate loss distribution for a UoM.

Step 2 requires taking the different loss distributions from Step 1 and combining them considering the dependence between the events. The dependence between the losses is described by a ‘copula’, and the distributions are combined mathematically to get a single loss distribution for the entire bank. This allows the 99.9th percentile loss to be calculated.
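Step 1 above can be sketched as a small Monte Carlo simulation for a single UoM: draw a Poisson count of losses, draw that many lognormal severities, and sum them to get one annual-loss data point. The frequency and severity parameters and the simulation count are illustrative assumptions.

```python
import math
import random

random.seed(2024)

def poisson(lam):
    """Knuth's algorithm; adequate for the small lambda used here."""
    L, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= random.random()
        if prod <= L:
            return k
        k += 1

def annual_loss(lam=5.0, mu=10.0, sigma=1.0):
    """One simulated year: a Poisson number of losses, each lognormal."""
    n = poisson(lam)
    return sum(random.lognormvariate(mu, sigma) for _ in range(n))

# Repeat many times to build the aggregate loss distribution for the UoM,
# then read off a high quantile of the simulated annual losses
sims = sorted(annual_loss() for _ in range(10_000))
var_999 = sims[int(0.999 * len(sims)) - 1]  # approximate 99.9th percentile loss
print(f"99.9th percentile annual loss: {var_999:,.0f}")
```

Each element of `sims` is one data point of the aggregate distribution; Step 2 would then combine such distributions across UoMs via a copula.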

50
Q

Which of the following steps are required for computing the total loss distribution for a bank for operational risk once individual UoM level loss distributions have been computed from the underlying frequency and severity curves:
I. Simulate number of losses based on the frequency distribution
II. Simulate the dollar value of the losses from the severity distribution
III. Simulate random number from the copula used to model dependence between the UoMs
IV. Compute dependent losses from aggregate distribution curves
(a) III and IV
(b) I and II
(c) None of the above
(d) All of the above

A

The correct answer is choice ‘b’

A recap would be in order here: calculating operational risk capital is a multi-step process.

First, we fit curves to estimate the parameters of our chosen distribution types for frequency (eg, Poisson) and severity (eg, lognormal). Note that these curves are fitted at the UoM level - which is the lowest level of granularity at which modeling is carried out. Since there are many UoMs, there are many frequency and severity distributions. However, what we are interested in is the loss distribution for the entire bank, from which the 99.9th percentile loss can be calculated. Getting there from the multiple frequency and severity distributions we have estimated is a two-step process:

  • Step 1: Calculate the aggregate loss distribution for each UoM. Each loss distribution is based upon an underlying frequency and severity distribution.
  • Step 2: Combine the multiple loss distributions after considering the dependence between the different UoMs. The ‘dependence’ recognizes that the various UoMs are not completely independent, i.e. the loss distributions are not additive, and that there is a diversification benefit: not all types of losses can occur at once, and the joint probabilities of the different losses make the combined total less than the sum of the parts.

Step 1 requires simulating the number of losses, say n, that occur in a given year from the frequency distribution. Then n losses are picked from the severity distribution, and the total loss for the year is the sum of these losses. This becomes one data point. This process of simulating the number of losses and then drawing that number of losses is carried out a large number of times to get the aggregate loss distribution for a UoM.

Step 2 requires taking the different loss distributions from Step 1 and combining them while considering the dependence between the events. The correlations between the losses are described by a ‘copula’, and the distributions are combined mathematically to get a single loss distribution for the entire bank. This allows the 99.9th percentile loss to be calculated.

51
Q

Which of the following statements are correct:
I. A training set is a set of data used to create a model, while a control set is a set of data used to prove that the model actually works
II. Cleansing, aggregating or ensuring data integrity is a task for the IT department, and is not a risk manager’s responsibility
III. Lack of information on the quality of underlying securities and assets was a major cause of the collapse in the CDO markets during the credit crisis that started in 2007
IV. The problem of lack of historical data can be addressed reasonably satisfactorily by using analytical approaches
(a) I, III and IV
(b) I and III
(c) II and IV
(d) All of the above

A

The correct answer is choice ‘b’

Statement I is correct. Data is often divided into two sets - a ‘training set’ that is used to create and fine-tune the model, while the ‘control set’ is used to prove that the model works on sample data. Back testing is then performed using actual data that becomes available over time, or may already be available as historical data.

Statement II is incorrect. A risk manager often spends a great deal of time in managing data, and ensuring that the data being used is accurate enough for the purpose it is being used for. A risk manager can expect to spend a good part of his or her team’s time in cleansing data. While he or she can try to get the IT processes and systems to produce correct data in the first place so it requires minimal subsequent cleansing or validation, this task is likely to remain a key part of a risk manager’s role for quite some time in the future given the challenges nearly all organizations face in managing risk data.

Statement III is correct. There was not enough granular data available on the underlying components of some of the derivative debt securities whose markets dried up during the crisis that began in 2007. Investors became increasingly unsure of the value of these securities, such as CDOs, leading to market seizure and firesale prices.

Statement IV is not correct. There is no easy solution to the lack of enough historical data, which is used to create as well as test models, and construct stress scenarios. Analytical approaches are not a good enough substitute for real market data. During the recent crisis, many instruments had rather short histories and there was not enough data available, and risk managers and portfolio managers relied upon analytical approaches to value and price them. Many of the assumptions that underpinned these approaches were untested in the real world and turned out to be incorrect.

52
Q

When compared to a high severity low frequency risk, the operational risk capital requirement for a low severity high frequency risk is likely to be:

(a) Higher
(b) Zero
(c) Lower
(d) Unaffected by differences in frequency or severity

A

The correct answer is choice ‘c’

High frequency and low severity risks, for example the risks of fraud losses for a credit card issuer, may have high expected losses, but low unexpected losses. In other words, we can generally expect these losses to stay within a small expected and known range. The capital requirement will be the worst case losses at a given confidence level less expected losses, and in such cases this can be expected to be low.

On the other hand, medium severity medium frequency risks, such as the risks of unexpected legal claims, ‘fat-finger’ trading errors, will have low expected losses but a high level of unexpected losses. Thus the capital requirement for such risks will be high.

It is also worthwhile mentioning high severity and low frequency risks - for example a rogue trader circumventing all controls and bringing the bank down, or a terrorist strike or natural disaster creating other losses - will probably have zero expected losses & high unexpected losses but only at very high levels of confidence. In other words, operational risk capital is unlikely to provide for such events and these would lie in the part of the tail that is not covered by most levels of confidence when calculating operational risk capital.

Note that risk capital is required only for unexpected losses, as expected losses are to be borne by P&L reserves. Therefore the operational risk capital requirement for a low severity high frequency risk is likely to be low when compared to other risks that are lower frequency but higher severity.

53
Q

When compared to a low severity high frequency risk, the operational risk capital requirement for a medium severity medium frequency risk is likely to be:

(a) Zero
(b) Lower
(c) Higher
(d) Unaffected by differences in frequency or severity

A

The correct answer is choice ‘c’

High frequency and low severity risks, for example the risks of fraud losses for a credit card issuer, may have high expected losses, but low unexpected losses. In other words, we can generally expect these losses to stay within a small expected and known range. The capital requirement will be the worst case losses at a given confidence level less expected losses, and in such cases this can be expected to be low.

On the other hand, medium severity medium frequency risks, such as the risks of unexpected legal claims, ‘fat-finger’ trading errors, will have low expected losses but a high level of unexpected losses. Thus the capital requirement for such risks will be high.

It is also worthwhile mentioning high severity and low frequency risks - for example a rogue trader circumventing all controls and bringing the bank down, or a terrorist strike or natural disaster creating other losses - will probably have zero expected losses & high unexpected losses but only at very high levels of confidence. In other words, operational risk capital is unlikely to provide for such events and these would lie in the part of the tail that is not covered by most levels of confidence when calculating operational risk capital.

Note that risk capital is required only for unexpected losses, as expected losses are to be borne by P&L reserves. Therefore the operational risk capital requirement for a low severity high frequency risk is likely to be low when compared to other risks that are lower frequency but higher severity.

54
Q

When compared to a medium severity medium frequency risk, the operational risk capital requirement for a high severity very low frequency risk is likely to be:

(a) Higher
(b) Lower
(c) Zero
(d) Unaffected by differences in frequency or severity

A

The correct answer is choice ‘c’

Medium severity medium frequency risks, such as the risks of unexpected legal claims or ‘fat-finger’ trading errors, will have low expected losses but a high level of unexpected losses. Thus the capital requirement for such risks will be high.

It is also worthwhile mentioning high severity and low frequency risks - for example a rogue trader circumventing all controls and bringing the bank down, or a terrorist strike or natural disaster creating other losses - will probably have zero expected losses and high unexpected losses, but only at very high levels of confidence. In other words, operational risk capital is unlikely to provide for such events and these would lie in the part of the tail that is not covered by most levels of confidence when calculating operational risk capital.

55
Q

In respect of operational risk capital calculations, the Basel II accord recommends a confidence level and time horizon of:

(a) 99% confidence level over a 10 year time horizon
(b) 99.9% confidence level over a 1 year time horizon
(c) 99% confidence level over a 1 year time horizon
(d) 99.9% confidence level over a 10 day time horizon

A

The correct answer is choice ‘b’

Choice ‘b’ represents the Basel II requirement, all other choices are incorrect.

56
Q

For a back office function processing 15,000 transactions a day with an error rate of 10 basis points, what is the annual expected loss frequency (assume 250 days in a year)

(a) 375
(b) 0.06
(c) 37500
(d) 3750

A

The correct answer is choice ‘d’

An error rate of 10 basis points means the number of errors expected in a day will be 15 (recall that 100 basis points = 1%). Therefore the total number of errors expected in a year will be 15 x 250 = 3750. Choice ‘d’ is the correct answer.
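The arithmetic above can be checked in a couple of lines of Python:

```python
transactions_per_day = 15_000
error_rate = 10 / 10_000        # 10 basis points = 0.10%
days_per_year = 250

errors_per_day = transactions_per_day * error_rate          # 15 errors a day
annual_expected_frequency = errors_per_day * days_per_year  # 3,750 errors a year
```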

57
Q

Once the frequency and severity distributions for loss events have been determined, which of the following is an accurate description of the process to determine a full loss distribution for operational risk?

(a) A firm wide operational risk distribution is set to be equal to the product of the frequency and severity distributions
(b) A firm wide operational risk distribution is generated using Monte Carlo simulations
(c) The frequency distribution alone forms the basis for the loss distribution for operational risk
(d) A firm wide operational risk distribution is generated by adding together the frequency and severity distributions

A

The correct answer is choice ‘b’

Once the frequency distribution has been determined (for example, using the binomial, Poisson or negative binomial distributions) and the severity distribution has also been determined (for example, using the lognormal, gamma or other functions), the loss distribution can be produced by a Monte Carlo simulation using successive drawings from each of these two distributions. It is assumed that the severity and frequency are independent of each other. The result is a distribution of losses for operational risk, from which the Op Risk VaR can be determined using the appropriate percentile.

58
Q
The frequency distribution for operational risk loss events can be modeled by which of the following distributions:
I. The binomial distribution
II. The Poisson distribution
III. The negative binomial distribution
IV. The omega distribution
 	(a)	I, III and IV
 	(b)	I, II, III and IV
 	(c)	I, II and III
 	(d)	I and III
A

The correct answer is choice ‘c’

The binomial, Poisson and the negative binomial distributions can all be used to model the loss event frequency distribution. The omega distribution is not used for this purpose, therefore Choice ‘c’ is the correct answer.

Also note that the negative binomial distribution provides the best model fit because it has more parameters than the binomial or the Poisson. However, in practice the Poisson distribution is most often used due to reasons of practicality and the fact that the key model risk in such situations does not arise from the choice of an incorrect underlying distribution.

59
Q

Which of the following statements is true:
I. Confidence levels for economic capital calculations are driven by desired credit ratings
II. Loss distributions for operational risk are affected more by the severity distribution than the frequency distribution
III. The Advanced Measurement Approach (AMA) referred to in the Basel II standard is a type of a Loss Distribution Approach (LDA)
IV. The loss distribution for operational risk under the LDA (Loss Distribution Approach) is estimated by separately estimating the frequency and severity distributions.
(a) I, II and IV
(b) I, III and IV
(c) I and II
(d) III and IV

A

The correct answer is choice ‘a’

Statement I is correct. Economic capital is the capital available to absorb unexpected losses, and credit ratings are based upon a certain probability of default. Economic capital is often calculated at a confidence level equal to that required for the desired credit rating. For example, if the probability of default for a AA rating is 0.02%, then economic capital maintained at a 99.98% confidence level would allow for such a rating. Economic capital set at a 99.98% level can be thought of as the level of losses that would not be exceeded with a 99.98% probability.

Loss distributions are the product of the severity and frequency distributions, each of which is estimated separately. The total loss distribution is affected far more by the severity distribution than by the frequency distribution, therefore statement II is correct.

The Loss Distribution Approach (LDA) is one of the ways in which the requirements of the AMA can be satisfied, and not the other way round. Therefore statement III is incorrect.

Statement IV is correct, as the total loss distribution is estimated using separate estimates of the loss frequency and severity distributions.

60
Q

The loss severity distribution for operational risk loss events is generally modeled by which of the following distributions:
I. the lognormal distribution
II. The gamma density function
III. Generalized hyperbolic distributions
IV. Lognormal mixtures

(a) I and III
(b) I, II and III
(c) II and III
(d) I, II, III and IV

A

The correct answer is choice ‘d’

All of the distributions referred to in the question can be used to model the loss severity distribution for op risk. Therefore Choice ‘d’ is the correct answer.

61
Q
When modeling severity of operational risk losses using extreme value theory (EVT), practitioners often use which of the following distributions to model loss severity:
I. The 'Peaks-over-threshold' (POT) model
II. Generalized Pareto distributions
III. Lognormal mixtures
IV. Generalized hyperbolic distributions
 	(a)	I, II, III and IV
 	(b)	I, II and III
 	(c)	II and III
 	(d)	I and II
A

The correct answer is choice ‘d’

The peaks-over-threshold model is used when losses over a given threshold are recorded, as is often the case when using data based on external public sources where only large loss events tend to find a place. The generalized Pareto distribution is also used when attempting to model loss severity using EVT. Lognormal mixtures and generalized hyperbolic distributions are not used as extreme value distributions.
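A minimal sketch of the POT approach in Python, assuming a simulated lognormal loss history in place of a real loss database: losses above a high threshold are extracted and a generalized Pareto distribution is fitted to the exceedances.

```python
import numpy as np
from scipy.stats import genpareto, lognorm

rng = np.random.default_rng(7)

# Simulated loss history (a lognormal stands in for real loss data)
losses = lognorm(s=1.5, scale=50_000).rvs(size=20_000, random_state=rng)

# Peaks-over-threshold: keep only the amounts by which losses exceed
# a high threshold (here the empirical 95th percentile)
threshold = np.percentile(losses, 95)
exceedances = losses[losses > threshold] - threshold

# Fit a generalized Pareto distribution to the exceedances (location fixed at 0)
shape, loc, scale = genpareto.fit(exceedances, floc=0)
```

The fitted GPD can then be used to extrapolate loss quantiles beyond the range of the observed data.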

62
Q

Which of the following is not a permitted approach under Basel II for calculating operational risk capital

(a) the internal measurement approach
(b) the standardized approach
(c) the basic indicator approach
(d) the advanced measurement approach

A

The correct answer is choice ‘a’

The Basel II framework allows the use of the basic indicator approach, the standardized approach and the advanced measurement approaches for operational risk. There is no approach called the ‘internal measurement approach’ permitted for operational risk. Choice ‘a’ is therefore the correct answer.

63
Q

Under the basic indicator approach to determining operational risk capital, operational risk capital is equal to:

(a) 15% of the average gross income (considering only the positive years) of the past three years
(b) 25% of the average gross income (considering only the positive years) of the past three years
(c) 15% of the average net income (considering only the positive years) of the past three years
(d) 15% of the average gross income of the past five years

A

The correct answer is choice ‘a’

Choice ‘a’ is the correct answer. According to the Basel II document, banks using the Basic Indicator Approach must hold capital for operational risk equal to the average over the previous three years of a fixed percentage (denoted alpha, and currently 15%) of positive annual gross income. Figures for any year in which annual gross income is negative or zero should be excluded from both the numerator and denominator when calculating the average.
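The calculation can be sketched in a few lines of Python, using hypothetical gross income figures (in millions):

```python
# Hypothetical gross income for the previous three years (in millions);
# the negative year must be excluded from both numerator and denominator
gross_income = [120.0, -30.0, 90.0]
alpha = 0.15  # the fixed percentage (alpha) set by Basel II

positive_years = [gi for gi in gross_income if gi > 0]
bia_capital = alpha * sum(positive_years) / len(positive_years)
# average of (120, 90) = 105; 15% of 105 = 15.75
```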

64
Q

Under the standardized approach to determining operational risk capital, operational risk capital is equal to:

(a) 15% of the average gross income (considering only the positive years) of the past three years
(b) a varying percentage, determined by the national regulator, of the gross revenue of each of the bank’s business lines
(c) a fixed percentage of the latest gross income of the bank
(d) a fixed percentage (different for each business line) of the gross income of the eight specified business lines, averaged over three years

A

The correct answer is choice ‘d’

Choice ‘d’ is the correct answer, as laid down in the Basel II document. The other choices are incorrect.
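A simplified sketch of the standardized approach calculation, with hypothetical gross income figures (in millions) for two of the eight business lines; the beta factors are the fixed percentages specified in the Basel II accord:

```python
# Basel II beta factors per business line (para 654 of the accord)
betas = {
    "corporate_finance": 0.18, "trading_and_sales": 0.18,
    "retail_banking": 0.12, "commercial_banking": 0.15,
    "payment_and_settlement": 0.18, "agency_services": 0.15,
    "asset_management": 0.12, "retail_brokerage": 0.12,
}

# Hypothetical gross income (in millions) per business line for three years
gi = {
    "retail_banking": [100, 110, 90],
    "trading_and_sales": [40, -10, 30],
}

yearly_charges = []
for year in range(3):
    charge = sum(betas[bl] * gi[bl][year] for bl in gi)
    yearly_charges.append(max(charge, 0))  # a negative total for a year is floored at zero
tsa_capital = sum(yearly_charges) / 3
```

Note how a negative gross income in one business line (trading and sales in year two) offsets positive charges in other lines within the same year.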

65
Q

When building an operational loss distribution by combining a loss frequency distribution and a loss severity distribution, it is assumed that:
I. The severity of losses is conditional upon the number of loss events
II. The frequency of losses is independent from the severity of the losses
III. Both the frequency and severity of loss events are dependent upon the state of internal controls in the bank
(a) II and III
(b) I and II
(c) II
(d) I, II and III

A

The correct answer is choice ‘c’

When an operational loss frequency distribution (which, for example, may be based upon a Poisson distribution) is combined with a loss severity distribution (for example, based upon a lognormal distribution), it is assumed that the frequency of losses and the severity of the losses are completely independent and do not impact each other. Therefore statement II is correct, and the others are not valid assumptions underlying the operational loss distribution.

66
Q

Which of the following is a cause of model risk in risk management?

(a) Misspecification of the model
(b) Programming errors
(c) Incorrect parameter estimation
(d) All of the above

A

The correct answer is choice ‘d’

Model risk is the risk that a model built for estimating a variable will produce erroneous estimates. Model risk is caused by a number of factors, including:

a) Misspecifying the model: For example, using a normal distribution when it is not justified.
b) Model misuse: For example, using a model built to estimate bond prices to estimate equity prices
c) Parameter estimation errors: In particular, parameters that are subjectively determined can be subject to significant parameter estimation errors
d) Programming errors: Errors in coding the model as part of computer implementation may not be detected by end users
e) Data errors: Errors in data used for building the model may also introduce model risk

67
Q

For a bank using the advanced measurement approach to measuring operational risk, which of the following brings the greatest ‘model risk’ to its estimates:

(a) Choice of incorrect parameters for loss severity distributions
(b) Choice of an incorrect distribution for loss event frequencies
(c) Insufficient number of simulations when building the loss distribution
(d) Aggregation risk, from selecting an incorrect value of estimated correlations between different operational risk estimates

A

The correct answer is choice ‘d’

The greatest model risk when calculating operational risk capital comes from incorrect assumptions about correlations between different operational risks for which standalone risk calculations have been made. Generally, the correlation can be expected to be positive, and would therefore vary between 0 and 1. These two values determine the ‘bounds’ between which the total operational risk capital would lie, and these bounds are generally quite far apart. Therefore the total value of the operational risk capital is very sensitive to the value chosen for the correlation, and this is the source of the biggest model risk under the AMA.
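This sensitivity can be illustrated with a small simulation, assuming two made-up lognormal UoM loss distributions: under perfect dependence the 99.9th percentiles simply add, while under independence the combined percentile is noticeably smaller.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Two illustrative UoM-level annual aggregate loss distributions
uom_a = rng.lognormal(10, 2.0, n)
uom_b = rng.lognormal(9, 2.5, n)

q = 99.9
# Perfect dependence (correlation at its upper bound): percentiles are additive
upper_bound = np.percentile(uom_a, q) + np.percentile(uom_b, q)
# Independence: pair the losses randomly before summing, then take the percentile
lower_bound = np.percentile(uom_a + rng.permutation(uom_b), q)
```

Total capital lands somewhere between these bounds depending on the correlation assumption, which is why that assumption dominates model risk under the AMA.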

68
Q

Which of the following should be included when calculating the Gross Income indicator used to calculate operational risk capital under the basic indicator and standardized approaches under Basel II?

(a) Net non-interest income
(b) Fees paid to outsourcing service providers
(c) Insurance income
(d) Operating expenses

A

The correct answer is choice ‘a’

Gross income is defined by Basel II (see para 650 of the Basel standard) as net interest income plus net non-interest income. It is intended that this measure should: (i) be gross of any provisions (e.g. for unpaid interest); (ii) be gross of operating expenses, including fees paid to outsourcing service providers; (iii) exclude realised profits/losses from the sale of securities in the banking book; and (iv) exclude extraordinary or irregular items as well as income derived from insurance.

What this means is that gross income is calculated without deducting any provisions or operating expenses from net interest plus non-interest income; and does not include any realised profits or losses from the sale of securities in the banking book, and also does not include any extraordinary or irregular item or insurance income.

Therefore operating expenses are not to be deducted for the purposes of calculating gross income, and neither are any provisions. Profits and losses from the sale of banking book securities are not considered part of gross income, and neither is any income from insurance or extraordinary items.

69
Q

Which of the following statements are true:
I. The three pillars under Basel II are market risk, credit risk and operational risk.
II. Basel II is an improvement over Basel I by increasing the risk sensitivity of the minimum capital requirements.
III. Basel II encourages disclosure of capital levels and risks
(a) I and II
(b) II and III
(c) III only
(d) I only

A

The correct answer is choice ‘b’

The three pillars under Basel II are minimum capital requirements, supervisory review process and market discipline. Therefore statement I is false. The other two statements are accurate. Therefore Choice ‘b’ is the correct answer.

70
Q

The key difference between ‘top down models’ and ‘bottom up models’ for operational risk assessment is:

(a) Top down approaches to operational risk are based upon an analysis of key risk drivers, while bottom up approaches consider causality in risk scenarios.
(b) Bottom up approaches to operational risk are based upon an analysis of key risk drivers, while top down approaches consider causality in risk scenarios.
(c) Bottom up approaches to operational risk calculate the implied operational risk using available data such as income volatility, capital etc; while top down approaches use causal factors, risk drivers and other factors to get an aggregated estimate of risk.
(d) Top down approaches to operational risk calculate the implied operational risk using available data such as income volatility, capital etc; while bottom up approaches use causal factors, risk drivers and other factors to get an aggregated estimate of risk.

A

The correct answer is choice ‘d’

Top down approaches rely upon available data such as total capital, income volatility, peer group information etc and attempt to imply the capital attributable to operational risk. They do not consider firm specific scenarios or causal factors. Bottom up approaches on the other hand attempt to determine operational risk capital based upon an identification and quantification of firm specific risks. Bottom up approaches help determine a traditional loss distribution from which capital requirements can be determined at a given level of confidence.

71
Q

Which of the following statements are true:
I. Top down approaches help focus management attention on the frequency and severity of loss events, while bottom up approaches do not.
II. Top down approaches rely upon high level data while bottom up approaches need firm specific risk data to estimate risk.
III. Scenario analysis can help capture both qualitative and quantitative dimensions of operational risk.
(a) III only
(b) II only
(c) II and III
(d) I only

A

The correct answer is choice ‘c’

Top down approaches do not consider event frequency and severity, on the other hand they focus on high level available data such as total capital, income volatility, peer group information on risk capital etc. Bottom up approaches focus on severity and frequency distributions for events. Statement I is therefore not correct.

Top down approaches do indeed rely upon high level aggregate data and tend to infer operational risk capital requirements from these. Bottom up approaches look at more detailed firm specific information. Statement II is correct.

Scenario analysis requires estimating losses from risk scenarios, and allows incorporating the judgment and views of managers in addition to any data that might be available from internal or external loss databases. Statement III is correct. Therefore Choice ‘c’ is the correct answer.

72
Q

According to the implied capital model, operational risk capital is estimated as:

(a) Capital implied from known risk premiums and the firm’s earnings
(b) Operational risk capital held by similar firms, appropriately scaled
(c) Total capital based on the capital asset pricing model
(d) Total capital less market risk capital less credit risk capital

A

The correct answer is choice ‘d’

Operational risk capital estimated using the implied capital model is merely the capital that is not attributable to market or credit risk. Therefore Choice ‘d’ is the correct answer. All other responses are incorrect.

73
Q

A bank expects the error rate in transaction data entry for a particular business process to be 0.005%. What is the range of expected errors in a day within +/- 2 standard deviations if there are 2,000,000 such transactions each day?

(a) 80 to 120 errors in a day
(b) 90 to 110 errors in a day
(c) 60 to 80 errors in a day
(d) 0 to 200 errors in a day

A

The correct answer is choice ‘a’

Error rates are generally modeled using the Poisson distribution. Recall that the Poisson distribution has only one parameter - λ - which is its mean and also its variance.

In the given case, the mean number of errors is 2,000,000 x 0.005% = 100. Since this is the variance as well, the standard deviation is √100 = 10. Therefore the range of outcomes within 2 standard deviations of the mean is 100 +/- (2*10) = 80 to 120 errors in a day.
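A quick check of the arithmetic:

```python
mean_errors = 2_000_000 * 0.005 / 100   # 0.005% of 2 million transactions = 100
std_dev = mean_errors ** 0.5            # for a Poisson, variance = mean, so sd = 10
low = mean_errors - 2 * std_dev         # 80
high = mean_errors + 2 * std_dev        # 120
```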

74
Q

The generalized Pareto distribution, when used in the context of operational risk, is used to model:

(a) Expected losses
(b) Tail events
(c) Unexpected losses
(d) Average losses

A

The correct answer is choice ‘b’

Some risk experts have suggested the use of extreme value theory to model tail risk or extreme events for operational risk. The generalized Pareto model or the Peaks-over-Threshold (POT) model are often used to model extreme value distributions, and therefore Choice ‘b’ is the correct answer.

75
Q

When modeling operational risk using separate distributions for loss frequency and loss severity, which of the following is true?

(a) Loss severity and loss frequency are considered independent
(b) Loss severity and loss frequency distributions are considered as a bivariate model with positive correlation
(c) Loss severity and loss frequency are modeled as conditional probabilities
(d) Loss severity and loss frequency are modeled using the same units of measurement

A

The correct answer is choice ‘a’

When modeling an operational loss frequency distribution (which, for example, may be based upon a Poisson distribution) and a loss severity distribution (for example, based upon a lognormal distribution), it is assumed that the frequency of losses and the severity of the losses are completely independent and do not impact each other. Therefore Choice ‘a’ is correct, and the others are not valid assumptions underlying operational loss modeling.

Once each of these distributions has been built, a random number is drawn from each to determine a loss scenario. The process is repeated many times as part of a Monte Carlo simulation to get the loss distribution.

76
Q

Which of the following will be a loss not covered by operational risk as defined under Basel II?

(a) Systems failure
(b) Strategic planning
(c) Earthquakes
(d) Fat finger losses

A

The correct answer is choice ‘b’

Operational risk is defined as the risk of loss resulting from inadequate or failed internal processes, people and systems or from external events. This definition includes legal risk, but excludes strategic and reputational risk.

Therefore any losses from poor strategic planning will not be a part of operational risk. Choice ‘b’ is the correct answer.

Note that floods, earthquakes and the like are covered under the definition of operational risk as losses arising from loss or damage to physical assets from natural disaster or other events.

77
Q

What does a middle office do for a trading desk?

(a) Transaction data entry
(b) Risk analysis
(c) Operations
(d) Reconciliations

A

The correct answer is choice ‘b’

The ‘middle office’ is a term used for the risk management function, therefore Choice ‘b’ is the correct answer. The other choices describe what the ‘back office’ does (IT, operations, accounting). The ‘front office’ includes the traders.

78
Q
The definition of operational risk per Basel II includes which of the following:
I. Risk of loss resulting from inadequate or failed internal processes, people and systems or from external events
II. Legal risk
III. Strategic risk
IV. Reputational risk
 	(a)	I and II
 	(b)	I, II, III and IV
 	(c)	II and III
 	(d)	I and III
A

The correct answer is choice ‘a’

Operational risk as defined in Basel II specifically excludes strategic and reputational risk. Therefore Choice ‘a’ is the correct answer.

Note that Basel II defines operational risk as follows:
Operational risk is defined as the risk of loss resulting from inadequate or failed internal processes, people and systems or from external events. This definition includes legal risk, but excludes strategic and reputational risk.

79
Q

Which of the following is true in relation to the application of Extreme Value Theory when applied to operational risk measurement?
I. EVT focuses on extreme losses that are generally not covered by standard distribution assumptions
II. EVT considers the distribution of losses in the tails
III. The Peaks-over-thresholds (POT) and the generalized Pareto distributions are used to model extreme value distributions
IV. EVT is concerned with average losses beyond a given level of confidence
(a) I, II and IV
(b) II and III
(c) I, II and III
(d) I and IV

A

The correct answer is choice ‘c’

EVT, when used in the context of operational risk measurement, focuses on tail events and attempts to build a distribution of losses beyond what is covered by VaR. Statements I, II and III are correct. Statement IV describes conditional VaR (CVaR) and not EVT.

80
Q

According to Basel II’s definition of operational loss event types, losses due to acts by third parties intended to defraud, misappropriate property or circumvent the law are classified as:

(a) Internal fraud
(b) External fraud
(c) Execution delivery and system failure
(d) Third party fraud

A

The correct answer is choice ‘b’

Choice ‘b’ is the correct answer. Refer to the detailed loss event type classification under Basel II (see Annex 9 of the accord). You should know the exact names of all loss event types, and examples of each.

81
Q

Loss from a lawsuit from an employee due to physical harm caused while at work is categorized per Basel II as:

(a) Damage to physical assets
(b) Unsafe working environment
(c) Execution delivery and process management
(d) Employment practices and workplace safety

A

The correct answer is choice ‘d’

Choice ‘d’ is the correct answer. Refer to the detailed loss event type classification under Basel II (see Annex 9 of the accord). You should know the exact names of all loss event types, and examples of each.

82
Q

An error by a third party service provider results in a loss to a client that the bank has to make up. Such a loss would be categorized per Basel II operational risk categories as:

(a) Business disruption and process failure
(b) Outsourcing loss
(c) Abnormal loss
(d) Execution delivery and process management

A

The correct answer is choice ‘d’

Choice ‘d’ is the correct answer. Refer to the detailed loss event type classification under Basel II (see Annex 9 of the accord). You should know the exact names of all loss event types, and examples of each.

83
Q

Which of the following event types is hacking damage classified under Basel II operational risk classifications?

(a) Information security
(b) Technology risk
(c) Damage to physical assets
(d) External fraud

A

The correct answer is choice ‘d’

Choice ‘d’ is the correct answer. All other answers are incorrect.

Refer to the detailed loss event type classification under Basel II (see Annex 9 of the accord).

84
Q

What percentage of average annual gross income is to be held as capital for operational risk under the basic indicator approach specified under Basel II?

(a) 12%
(b) 8%
(c) 12.5%
(d) 15%

A

The correct answer is choice ‘d’

Banks using the basic indicator approach must hold 15% of the average annual gross income for the past three years, excluding any year that had a negative gross income. Therefore Choice ‘d’ is the correct answer.
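The calculation described above can be sketched as follows; the three years of gross income used in the example are hypothetical figures:

```python
def basic_indicator_capital(gross_income, alpha=0.15):
    """Operational risk capital under the Basic Indicator Approach:
    alpha (15%) times average annual gross income over the past three
    years, excluding any year with zero or negative gross income
    (excluded from both numerator and denominator)."""
    positive_years = [gi for gi in gross_income if gi > 0]
    return alpha * sum(positive_years) / len(positive_years)

# Hypothetical three-year history; the negative year is dropped entirely
print(basic_indicator_capital([120.0, -30.0, 180.0]))  # 0.15 * (120 + 180) / 2 = 22.5
```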

85
Q

Under the standardized approach to calculating operational risk capital, how many business lines are a bank’s activities divided into per Basel II?

(a) 8
(b) 15
(c) 7
(d) 12

A

The correct answer is choice ‘a’

In the Standardized Approach, banks’ activities are divided into eight business lines: corporate finance, trading & sales, retail banking, commercial banking, payment & settlement, agency services, asset management, and retail brokerage. Therefore Choice ‘a’ is the correct answer.

86
Q

Under the standardized approach to calculating operational risk capital under Basel II, negative regulatory capital charges for any of the business units:

(a) Should be included after ignoring the negative sign
(b) Should be excluded from capital calculations
(c) Should be offset against positive capital charges from other business units
(d) Should be ignored completely

A

The correct answer is choice ‘c’

According to Basel II, in any given year, negative capital charges (resulting from negative gross income) in any business line may offset positive capital charges in other business lines without limit. Therefore Choice ‘c’ is the correct answer.
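A sketch of how this offsetting works under the Standardized Approach, using the Basel II beta factors per business line; the gross income inputs are hypothetical, and in a full implementation each yearly aggregate that is negative is floored at zero before the three-year average is taken:

```python
# Basel II beta factors per business line
BETAS = {
    "corporate finance": 0.18, "trading & sales": 0.18, "retail banking": 0.12,
    "commercial banking": 0.15, "payment & settlement": 0.18,
    "agency services": 0.15, "asset management": 0.12, "retail brokerage": 0.12,
}

def standardised_capital(gross_income_by_year):
    """Each element is a dict mapping business line -> gross income for a year.
    Within a year, negative charges offset positive ones without limit;
    a negative yearly total is floored at zero before averaging."""
    yearly_totals = []
    for year in gross_income_by_year:
        total = sum(BETAS[line] * gi for line, gi in year.items())
        yearly_totals.append(max(total, 0.0))  # floor a negative year at zero
    return sum(yearly_totals) / len(yearly_totals)

# Negative gross income in trading & sales offsets the retail banking charge:
# 0.12 * 100 - 0.18 * 50 = 12 - 9 = 3
print(standardised_capital([{"retail banking": 100.0, "trading & sales": -50.0}]))
```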

87
Q

Which loss event type is the loss of personally identifiable client information classified as under the Basel II framework?

(a) Clients, products and business practices
(b) Information security
(c) External fraud
(d) Technology risk

A

The correct answer is choice ‘a’

Under Basel II’s detailed loss event type classification, breaches of privacy and the loss or misuse of confidential client information fall under ‘Clients, products and business practices’.

88
Q

Which loss event type is the failure to timely deliver collateral classified as under the Basel II framework?

(a) Clients, products and business practices
(b) External fraud
(c) Execution, Delivery & Process Management
(d) Information security

A

The correct answer is choice ‘c’

89
Q

Which of the following is not a risk faced by a bank from holding a portfolio of residential mortgages?

(a) The risk that mortgage interest rates will rise in the future
(b) The risk that CDS spreads on the bank’s debt will rise, making funding more expensive
(c) The risk that the homeowners will not be able to pay their mortgage when they are due
(d) The risk that the homeowners will pay the mortgage off before they are due

A

The correct answer is choice ‘b’

Choice ‘b’ represents a risk that does not arise from the bank’s holdings of mortgages. Therefore Choice ‘b’ is the correct answer.

All the other risks identified are correct - the bank faces interest rate, default and prepayment risks on its mortgages.

90
Q

A bank prices retail credit loans based on median default rates. Over the long run, it can expect:

(a) Correct pricing of risk in the retail credit portfolio
(b) Underestimation and therefore underpricing of risk in its retail portfolio
(c) Overestimation of risk and overpricing, leading to loss of market share
(d) A reduction in the rate of defaults

A

The correct answer is choice ‘b’

The key to pricing loans is to make sure that the prices cover expected losses. The correct measure of expected losses is the mean, and not the median. To the extent the median is different from the mean, the loans would be over or underpriced.

The loss curve for credit defaults is a distribution skewed to the right. Therefore its mode is less than its median which is less than its mean. Since the median is less than the mean, the bank is pricing in fewer losses than the mean, which means over the long run it is underestimating risk and underpricing its loans. Therefore Choice ‘b’ is the correct answer.

If on the other hand for some reason the bank were overpricing risk, its loans would be more expensive than its competitors and it would lose market share. In this case however, this does not apply. Loan pricing decisions are driven by the rate of defaults, and not the other way round, therefore any pricing decisions will not reduce the rate of default.
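A quick numerical illustration of the median-versus-mean gap for a right-skewed distribution, using a lognormal as an example loss model (the parameters are illustrative):

```python
import math

# For a lognormal distribution:
#   median = exp(mu), mean = exp(mu + sigma**2 / 2)
# so for any sigma > 0 the median is strictly below the mean.
mu, sigma = 0.0, 1.0  # illustrative parameters
median = math.exp(mu)
mean = math.exp(mu + sigma**2 / 2)
# Pricing at the median covers less than the expected loss
print(median, round(mean, 4))
```

Pricing loans off the median therefore systematically under-covers the expected loss, by roughly the ratio of mean to median.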

91
Q

Loss provisioning is intended to cover:

(a) Expected losses
(b) Losses in excess of unexpected losses
(c) Unexpected losses
(d) Both expected and unexpected losses

A

The correct answer is choice ‘a’

Loss provisioning is intended to cover expected losses. Economic capital is expected to cover unexpected losses. No capital or provisions are set aside for losses in excess of unexpected losses, which will ultimately be borne by equity.

92
Q

What would be the consequences of a model of economic risk capital calculation that weighs all loans equally regardless of the credit rating of the counterparty?
I. Create an incentive to lend to the riskiest borrowers
II. Create an incentive to lend to the safest borrowers
III. Overstate economic capital requirements
IV. Understate economic capital requirements
(a) I only
(b) III only
(c) I and IV
(d) II and III

A

The correct answer is choice ‘c’

If capital calculations are done in a standard way regardless of risk (as reflected by credit ratings), then it creates a perverse incentive for the lenders’ employees to lend to the riskiest borrowers that offer the highest expected returns as there is no incentive to ‘save’ on economic capital requirements that are equal for both safe and unsafe borrowers. Therefore statement I is correct.

Given that the portfolio of such an institution is likely to then comprise poor quality borrowers, and economic capital would be based upon ‘average’ expected ratings, it is likely to carry lower economic capital given its exposures. Therefore any such economic risk capital model is likely to understate economic capital requirements. Therefore statement IV is correct.

93
Q

Which of the following best describes Altman’s Z-score

(a) A standardized z based upon the normal distribution
(b) A regression of probability of survival against a given set of factors
(c) A calculation of default probabilities
(d) A numerical computation based upon accounting ratios

A

The correct answer is choice ‘d’

94
Q

Altman’s Z-score does not consider which of the following ratios:

(a) Sales to total assets
(b) Net income to total assets
(c) Working capital to total assets
(d) Market capitalization to debt

A

The correct answer is choice ‘b’

A computation of Altman’s Z-score considers the following ratios:

  • Working capital to total assets
  • Retained earnings to total assets
  • EBIT to total assets
  • Market cap to debt
  • Sales to total assets

It does not consider Net Income to total assets, therefore Choice ‘b’ is the correct answer. This makes sense as net income is after interest and taxes, neither of which is relevant when considering the cash flows available for debt servicing.

95
Q

The Altman credit risk score considers:

(a) A historical database of the firms that have defaulted
(b) A combination of accounting measures and market values
(c) A historical database of the firms that have survived
(d) A quadratic approximation of the credit risk based on underlying risk factors

A

The correct answer is choice ‘b’

A computation of Altman’s Z-score considers the following ratios:

  • Working capital to total assets
  • Retained earnings to total assets
  • EBIT to total assets
  • Market cap to debt
  • Sales to total assets

Nearly all the numbers above are accounting measures derived straight from the balance sheet or the income statement. Market capitalization is a market driven number. Therefore Choice ‘b’ is the correct answer as the Altman credit risk score considers both accounting and market based measures.

Altman’s score is computationally straightforward and intuitively easy to understand. Though introduced in the late sixties, it has been quite accurate in predicting corporate bankruptcies, which is why it continues to be used extensively.

96
Q

Which of the following losses can be attributed to credit risk:
I. Losses in a bond’s value from a credit downgrade
II. Losses in a bond’s value from an increase in bond yields
III. Losses arising from a bond issuer’s default
IV. Losses from an increase in corporate bond spreads
(a) II and IV
(b) I, III and IV
(c) I and II
(d) I and III

A

The correct answer is choice ‘d’

Losses due to credit risk include the loss of value from credit migration and default events (which can be considered a migration to the ‘default’ category). Therefore Choice ‘d’ is the correct answer. Changes in spreads or interest rates are examples of market risk events.

97
Q

A bank holds a portfolio of corporate bonds. Corporate bond spreads widen, resulting in a loss of value for the portfolio. This loss arises due to:

(a) Credit risk
(b) Market risk
(c) Liquidity risk
(d) Counterparty risk

A

The correct answer is choice ‘b’

The difference between the yields on corporate bonds and the risk free rate is called the corporate bond spread. Widening of the spread means that corporate bonds yield more, and their yield curve shifts upwards, driving down bond prices. The increase in the spread is a consequence of the market risk from holding these interest rate instruments, which is a part of market risk. If the reduction in the value of the portfolio were to be caused by a change in the credit rating of the bonds held, it would have been a loss arising due to credit risk. Counterparty risk and liquidity risk are not relevant for this question. Therefore Choice ‘b’ is the correct answer.

98
Q

Under the credit migration approach to assessing portfolio credit risk, which of the following are needed to generate a distribution of future portfolio values?

(a) The forward yield curve
(b) A rating migration matrix
(c) A specified risk horizon
(d) All of the above

A

The correct answer is choice ‘d’

The credit migration approach to assessing portfolio credit risk involves obtaining a distribution of future portfolio values from the ratings migration matrix. First, the frequencies in the matrix are used as probabilities, and expected future values of the securities belonging to each rating category are calculated. These are then discounted to the present using the discount rate appropriate to the ‘future’ rating category. This gives us a forward distribution of the value of each security in the portfolio. These are then combined using the default correlations between the issuers. The default correlation between the issuers is often proxied using asset returns, and recognizing that default occurs when asset values fall below a certain threshold. A distribution for the future value of the portfolio is generated using simulation, and from this distribution the Credit VaR can be calculated.

Thus, we need the migration matrix, the risk horizon from which the present values need to be calculated, and the forward yield curve or the discount curve for each rating category for the risk horizon. Thus, Choice ‘d’ is the correct answer.
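The mechanics can be sketched for a single bond; the migration probabilities and revalued prices below are hypothetical illustrations, and a real implementation would simulate correlated migrations across many issuers before reading off the Credit VaR:

```python
# Hypothetical one-year migration probabilities for a BBB-rated bond, and
# the bond's revalued price (per 100 face) in each end-of-year rating state,
# each price obtained by discounting at that rating's forward curve.
migration_probs = {"A": 0.06, "BBB": 0.87, "BB": 0.05, "D": 0.02}
value_in_state = {"A": 103.0, "BBB": 101.0, "BB": 96.0, "D": 45.0}

# Expected forward value of the bond under the migration probabilities
mean_value = sum(p * value_in_state[s] for s, p in migration_probs.items())

# Loss relative to the mean in the worst (default) state
worst_case_loss = mean_value - value_in_state["D"]
print(round(mean_value, 2), round(worst_case_loss, 2))
```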

99
Q

If F is the face value of a firm’s debt, V the value of its assets and E the market value of its equity, then according to the option pricing approach a default on debt occurs when:

(a) F < V
(b) V < E
(c) F > V
(d) F - E < V

A

The correct answer is choice ‘c’

According to the option pricing approach developed by Merton, the debt holders of a firm have effectively sold a put on the assets of the firm with a strike price equal to the face value of the firm’s debt (equivalently, the shareholders hold a call on the firm’s assets). This is just a more complicated way of saying that the debt holders are entitled to all the assets of the firm if these assets are insufficient to pay off the debts, and because of the limited liability of the shareholders of a corporation this part payment will fully extinguish the debt.

A firm will default on its debt if the value of the assets falls below the face value of the debt.

100
Q

Which of the following credit risk models relies upon the analysis of credit rating migrations to assess credit risk?

(a) KMV’s EDF based approach
(b) The CreditMetrics approach
(c) The actuarial approach
(d) The contingent claims approach

A

The correct answer is choice ‘b’

The following is a brief description of the major approaches available to model credit risk, and the analysis that underlies them:

  1. CreditMetrics: based on the credit migration framework. Considers the probability of migration to other credit ratings and the impact of such migrations on portfolio value.
  2. CreditPortfolio View: similar to CreditMetrics, but adds the impact of the business cycle to the evaluation.
  3. The contingent claims approach: uses option theory by considering a debt as a put option on the assets of the firm.
  4. KMV’s EDF (expected default frequency) based approach: relies on EDFs and distance to default as a measure of credit risk.
  5. CreditRisk+: Also called the ‘actuarial approach’, considers default as a binary event that either happens or does not happen. This approach does not consider the loss of value from deterioration in credit quality (unless the deterioration implies default).
101
Q

Which of the following credit risk models includes a consideration of macro economic variables such as unemployment, balance of payments etc to assess credit risk?

(a) The actuarial approach
(b) The CreditMetrics approach
(c) KMV’s EDF based approach
(d) CreditPortfolio View

A

The correct answer is choice ‘d’

The following is a brief description of the major approaches available to model credit risk, and the analysis that underlies them:

  1. CreditMetrics: based on the credit migration framework. Considers the probability of migration to other credit ratings and the impact of such migrations on portfolio value.
  2. CreditPortfolio View: similar to CreditMetrics, but adds the impact of the business cycle to the evaluation.
  3. The contingent claims approach: uses option theory by considering a debt as a put option on the assets of the firm.
  4. KMV’s EDF (expected default frequency) based approach: relies on EDFs and distance to default as a measure of credit risk.
  5. CreditRisk+: Also called the ‘actuarial approach’, considers default as a binary event that either happens or does not happen. This approach does not consider the loss of value from deterioration in credit quality (unless the deterioration implies default).
102
Q

Which of the following credit risk models considers debt as including a put option on the firm’s assets to assess credit risk?

(a) CreditPortfolio View
(b) The actuarial approach
(c) The CreditMetrics approach
(d) The contingent claims approach

A

The correct answer is choice ‘d’

The following is a brief description of the major approaches available to model credit risk, and the analysis that underlies them:

  1. CreditMetrics: based on the credit migration framework. Considers the probability of migration to other credit ratings and the impact of such migrations on portfolio value.
  2. CreditPortfolio View: similar to CreditMetrics, but adds the impact of the business cycle to the evaluation.
  3. The contingent claims approach: uses option theory by considering a debt as a put option on the assets of the firm.
  4. KMV’s EDF (expected default frequency) based approach: relies on EDFs and distance to default as a measure of credit risk.
  5. CreditRisk+: Also called the ‘actuarial approach’, considers default as a binary event that either happens or does not happen. This approach does not consider the loss of value from deterioration in credit quality (unless the deterioration implies default).
103
Q

Which of the following credit risk models focuses on default alone and ignores credit migration when assessing credit risk?

(a) CreditPortfolio View
(b) The actuarial approach
(c) The contingent claims approach
(d) The CreditMetrics approach

A

The correct answer is choice ‘b’

The actuarial approach (CreditRisk+) treats default as a binary event and ignores losses in value from credit migration; the other approaches listed all capture migration or changes in asset value short of default.

104
Q

Which of the following does not affect the credit risk facing a lender institution?

(a) The state of the economy
(b) The degree of geographical or sectoral concentration in the loan book
(c) The applicability or otherwise of mark to market accounting to the institution
(d) Credit ratings of individual borrowers

A

The correct answer is choice ‘c’

The state of the economy, credit quality of individual borrowers and concentration risk are all factors that affect the credit risk facing a lender. Mark to market accounting does not change the credit risk, or the underlying economic reality facing the institution.

105
Q

What is the risk horizon period used for credit risk as generally used for economic capital calculations and as required by regulation?

(a) 10 years
(b) 1 year
(c) 10 days
(d) 1-day

A

The correct answer is choice ‘b’

The risk horizon for credit VaR is generally one year, both for economic capital calculations and under regulation.

106
Q

Conditional default probabilities modeled under CreditPortfolio view use a:

(a) Probit function
(b) Altman’s z-score
(c) Logit function
(d) Power function

A

The correct answer is choice ‘c’

Conditional default probabilities are modeled as a logit function under CreditPortfolio View. That ensures the resulting probabilities are ‘well behaved’, ie take a value between 0 and 1. The probability may be expressed as p = 1 / (1 + exp(I)), where I is a country-specific index taking various macro economic factors into account.
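A minimal sketch of the logit mapping; the index values used below are illustrative only:

```python
import math

def conditional_pd(index_value):
    """CreditPortfolio View style logit: p = 1 / (1 + exp(I)), where I is
    the country-specific macroeconomic index. Output is always in (0, 1)."""
    return 1.0 / (1.0 + math.exp(index_value))

# A higher index (stronger economy) maps to a lower conditional default
# probability, and the result can never leave the (0, 1) interval.
print(round(conditional_pd(3.0), 4))   # low PD in a strong economy
print(round(conditional_pd(-3.0), 4))  # high PD in a weak economy
```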

107
Q

Under the CreditPortfolio View model of credit risk, the conditional probability of default will be:

(a) higher than the unconditional probability of default in an economic expansion
(b) lower than the unconditional probability of default in an economic expansion
(c) the same as the unconditional probability of default in an economic expansion
(d) lower than the unconditional probability of default in an economic contraction

A

The correct answer is choice ‘b’

When the economy is expanding, firms are less likely to default. Therefore the conditional probability of default, given an economic expansion, is likely to be lower than the unconditional probability of default. Therefore Choice ‘b’ is the correct answer and the other statements are incorrect.

108
Q

Under the CreditPortfolio View approach to credit risk modeling, which of the following best describes the conditional transition matrix:

(a) The conditional transition matrix is the unconditional transition matrix adjusted for probabilities of defaults
(b) The conditional transition matrix is the transition matrix adjusted for the distribution of the firms

asset returns
(c) The conditional transition matrix is the unconditional transition matrix adjusted for the state of the economy and other macro economic factors being modeled
(d) The conditional transition matrix is the transition matrix adjusted for the risk horizon being different from that of the transition matrix

A

The correct answer is choice ‘c’

Under the CreditPortfolio View approach, the credit rating transition matrix is adjusted for the state of the economy in a way as to increase the probability of defaults when the economy is not doing well, and vice versa. Therefore Choice ‘c’ is the correct answer. The other choices represent nonsensical options.

109
Q

The principle underlying the contingent claims approach to measuring credit risk equates the cost of eliminating credit risk for a firm to:

(a) the cost of a call on the firm’s assets with a strike equal to the value of the debt
(b) the market valuation of the firm’s equity less the value of its liabilities
(c) the value of a put on the firm’s assets with a strike equal to the value of the debt
(d) the probability of the firm’s assets falling below the critical value for default

A

The correct answer is choice ‘c’

Under the contingent claims approach, a firm will default on its debt when the value of its assets falls to less than the face value of the debt. Debt holders can protect themselves against such an event by buying a put on the assets of the firm, where the strike price is equal to the value of the debt. In other words, Risky Debt + Put on the firm’s assets = Risk free debt. This is because if the value of the assets is greater than the value of the debt, the debt holders will be paid in full; if the value of the assets is lower than the value of the debt, they will exercise the put and still be paid in full.

110
Q

Under the contingent claims approach to measuring credit risk, which of the following factors does NOT affect credit risk:

(a) Volatility of the firm’s asset values
(b) Leverage in the capital structure
(c) Maturity of the debt
(d) Cash flows of the firm

A

The correct answer is choice ‘d’

Under the contingent claims approach, credit risk is modeled as the value of a put option on the value of the firm’s assets with a strike equal to the face value of the debt and maturity equal to the maturity of the obligation. The cost of credit risk is determined by the leverage ratio, the volatility of the firm’s assets and the maturity of the debt. Cash flows are not a part of the equation. Therefore Choice ‘d’ is the correct answer.

111
Q
Under the contingent claims approach to credit risk, risk increases when:
I. Volatility of the firm's assets increases
II. Risk free rate increases
III. Maturity of the debt increases
 	(a)	I and II
 	(b)	I and III
 	(c)	II and III
 	(d)	I, II and III
A

The correct answer is choice ‘b’

Under the contingent claims approach, credit risk is evaluated as the value of the put on the firm’s assets with a strike price equal to the face value of the debt and maturity equal to the maturity of the obligation. The Black-Scholes model can then be used to value the put, and an increase in volatility or in the time to expiry (ie maturity) will increase the value of the put, and therefore the credit risk. An increase in the risk free rate will actually reduce the value of the put. Therefore statements I and III are correct and Choice ‘b’ is the correct answer.
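These sensitivities can be checked numerically with a Black-Scholes put on the firm’s assets; the firm parameters below (asset value, debt face value, rate, volatility, maturity) are illustrative assumptions:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def merton_put(V, F, r, sigma, T):
    """Black-Scholes value of a put on firm assets V with strike equal to
    the face value of debt F: the cost of eliminating credit risk."""
    d1 = (math.log(V / F) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return F * math.exp(-r * T) * norm_cdf(-d2) - V * norm_cdf(-d1)

base = merton_put(V=100, F=80, r=0.05, sigma=0.2, T=1.0)
print(merton_put(100, 80, 0.05, 0.4, 1.0) > base)  # higher volatility: more credit risk
print(merton_put(100, 80, 0.05, 0.2, 5.0) > base)  # longer maturity: more credit risk
print(merton_put(100, 80, 0.10, 0.2, 1.0) < base)  # higher risk-free rate: less credit risk
```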

112
Q

Under the KMV Moody’s approach to calculating expected default frequencies (EDF), a firm’s default on its obligations is likely when:

(a) asset values reach a level below total liabilities
(b) asset values reach a level between short term debt and total liabilities
(c) asset values reach a level below short term debt
(d) expected asset values one year hence are below total liabilities

A

The correct answer is choice ‘b’

An observed fact that the KMV approach relies upon is that firms do not default when their liabilities exceed assets, but when asset values are somewhere between short term liabilities and the total liabilities. In fact, the ‘default point’ in the KMV methodology is defined as the short term debt plus half of the long term debt. The difference between expected value of the assets in one year and this ‘default point’, when expressed in terms of standard deviation of the asset values, is called the ‘distance-to-default’.
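A minimal sketch of the calculation described above; the firm’s figures used in the example are hypothetical:

```python
def distance_to_default(expected_assets, asset_sigma, short_term_debt, long_term_debt):
    """KMV distance-to-default: the number of standard deviations between
    the expected asset value and the default point, where the default
    point is short term debt plus half of the long term debt."""
    default_point = short_term_debt + 0.5 * long_term_debt
    return (expected_assets - default_point) / asset_sigma

# Hypothetical firm: expected assets 120, asset sigma 10, STD 60, LTD 40
# Default point = 60 + 0.5 * 40 = 80, so DD = (120 - 80) / 10 = 4.0
print(distance_to_default(120.0, 10.0, 60.0, 40.0))
```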

113
Q

If EV is the expected value of a firm’s assets in a year, DP the ‘default point’ per the KMV approach to credit risk, and σ the standard deviation of future asset returns, then what is the distance-to-default?

A

The correct answer is choice ‘b’

The distance to default is the number of standard deviations by which expected asset values exceed the default point: DD = (EV − DP) / σ

114
Q

Under the KMV Moody’s approach to credit risk measurement, which of the following expressions describes the expected ‘default point’ value of assets at which the firm may be expected to default?

(a) Short term debt + 0.5* Long term debt
(b) Short term debt + Long term debt
(c) 2* Short term debt + Long term debt
(d) Long term debt + 0.5* Short term debt

A

The correct answer is choice ‘a’

A situation where a firm has more liabilities than assets does not necessarily imply default, so long as the firm is able to pay its obligations when they come due. Therefore, short term debts have a greater bearing on a firm’s default than longer term debt. However, this is not to say that merely having enough to pay off the short term debts (ie debts due within one year) is enough to avoid default. Over time, long term debt will also be turning into short term debt, and it may not be possible for the firm to roll over its liabilities without lenders also considering the long term debt. The KMV approach therefore treats the entire short term debt plus half of the long term debt as the critical value of assets below which default will be triggered.

115
Q

Under the KMV Moody’s approach to credit risk measurement, how is the distance to default converted to expected default frequencies?

(a) Using a normal distribution
(b) Using a proprietary database based on historical information
(c) Using migration matrices
(d) Using Monte Carlo simulations

A

The correct answer is choice ‘b’

KMV Moody’s uses a proprietary database to convert the distance to default to expected default probabilities.

116
Q

Changes in which of the following do not affect the expected default frequencies (EDF) under the KMV Moody’s approach to credit risk?

(a) Changes in the debt level
(b) Changes in asset volatility
(c) Changes in the risk free rate
(d) Changes in the firm’s market capitalization

A

The correct answer is choice ‘c’

EDFs are derived from the distance to default. The distance to default is the number of standard deviations that expected asset values are away from the default point, which itself is defined as short term debt plus half of the long term debt. Therefore debt levels affect the EDF. Similarly, asset values are estimated using equity prices, so market capitalization affects EDF calculations. Asset volatility is the standard deviation that appears in the denominator of the distance to default calculation, so it affects the EDF too. The risk free rate is not directly factored into any of these calculations (except that one could argue the level of interest rates may impact equity values or the discounted values of future cash flows, but that is a second order effect). Therefore Choice ‘c’ is the correct answer.

117
Q

Which of the following is true for the actuarial approach to credit risk modeling (CreditRisk+):

(a) The number of defaults is modeled using a binomial distribution where the number of defaults are considered discrete events
(b) The approach considers only default risk, and ignores the risk to portfolio value from credit downgrades
(c) Default correlations between obligors are accounted for using a multivariate normal model
(d) The approach is based upon historical rating transition matrices

A

The correct answer is choice ‘b’

The actuarial model considers defaults to follow a Poisson distribution with a given mean per period, and these are binary in nature, ie a default happens or it does not happen. The model does not consider the loss of value from credit downgrades, and focuses only on defaults. The model also does not consider default correlations between obligors. Therefore Choice ‘b’ is the correct answer.

118
Q

CreditRisk+, the actuarial model for calculating portfolio credit risk, is based upon:

(a) the normal distribution
(b) the exponential distribution
(c) the Poisson distribution
(d) the log-normal distribution

A

The correct answer is choice ‘c’

CreditRisk+ treats default as a binary event, ignoring downgrade risk, capital structures of individual firms in the portfolio or the causes of default. It uses a single parameter, λ or the mean default rate, and derives credit risk based upon the Poisson distribution. Therefore Choice ‘c’ is the correct answer.

119
Q

Under the actuarial (or CreditRisk+) based modeling of defaults, what is the probability of 4 defaults in a retail portfolio where the number of expected defaults is 2?

(a) 9%
(b) 18%
(c) 4%
(d) 2%

A

The correct answer is choice ‘a’

The actuarial or CreditRisk+ model considers default as an ‘end of game’ event modeled by a Poisson distribution. The annual number of defaults is a stochastic variable with a mean of μ and standard deviation equal to √μ.

The probability of n defaults is given by (μ^n × e^−μ) / n!, which in this case equals (2^4 × e^−2) / 4! = 0.0902, or approximately 9%.

Note that CreditRisk+ is the same methodology as the actuarial approach, and requires using the Poisson distribution.
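The Poisson calculation can be reproduced in Python:

```python
import math

def poisson_pmf(n, mu):
    """Probability of exactly n defaults when mu defaults are expected."""
    return mu ** n * math.exp(-mu) / math.factorial(n)

print(round(poisson_pmf(4, 2), 4))  # 0.0902
```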

120
Q

The probability of default of a security over a 1 year period is 3%. What is the probability that it would not have defaulted at the end of four years from now?

(a) 11.47%
(b) 12.00%
(c) 88.00%
(d) 88.53%

A

The correct answer is choice ‘d’

The probability that the security would not default in the next 4 years is equal to the probability of survival raised to the power four. In other words, (1 - 3%)^4 = 88.53%.
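A one-line Python check of the survival probability:

```python
annual_pd = 0.03
survival_probability = (1 - annual_pd) ** 4  # survive all four years
print(round(survival_probability, 4))  # 0.8853
```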

121
Q

If the default hazard rate for a company is 10%, and the spread on its bonds over the risk free rate is 800 bps, what is the expected recovery rate?

(a) 0.00%
(b) 8.00%
(c) 20.00%
(d) 40.00%

A

The correct answer is choice ‘c’

The recovery rate, the default hazard rate (also called the average default intensity) and the spread on debt are linked by the equation Hazard Rate = Spread/(1 - Recovery Rate). Therefore, the recovery rate implicit in the given data is = 1 - 8%/10% = 20%.
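Using the figures from this card, the relationship Hazard Rate = Spread / (1 - Recovery Rate) can be checked with a short Python snippet:

```python
hazard_rate = 0.10   # default hazard rate (average default intensity)
spread = 0.08        # 800 bps spread over the risk free rate

# Rearranging Hazard Rate = Spread / (1 - Recovery Rate):
recovery_rate = 1 - spread / hazard_rate
print(round(recovery_rate, 4))  # 0.2

# Sanity check: plugging the recovery rate back in recovers the hazard rate
assert abs(spread / (1 - recovery_rate) - hazard_rate) < 1e-12
```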

122
Q

Which of the following cannot be used as an internal credit rating model to assess an individual borrower:

(a) Altman’s Z-score
(b) Distance to default model
(c) Probit model
(d) Logit model

A

The correct answer is choice ‘b’

Altman’s Z-score, the Probit and the Logit models can all be used to assess the credit rating of an individual borrower. There is no such model as the ‘distance to default model’, and therefore Choice ‘b’ is the correct answer

123
Q

Which of the following statements are true:
I. A high score according to Altman’s Z-Score methodology indicates a lower default risk
II. A high score according to the Probit or Logit models indicates a higher default risk
III. A high score according to Altman’s Z-Score methodology indicates a higher default risk
IV. A high score according to the Probit or Logit models indicates a lower default risk
(a) I and II
(b) III and IV
(c) II and III
(d) I and IV

A

The correct answer is choice ‘a’

A high score under the probit and logit models indicates a higher default risk, while under Altman’s methodology it indicates a lower default risk. Therefore Choice ‘a’ is the correct answer.

124
Q
For a corporate issuer, which of the following can be used to calculate market implied default probabilities?
I. CDS spreads
II. Bond prices
III. Credit rating issued by S&P
IV. Altman's scoring model
(a) I, II and III
(b) II and III
(c) III and IV
(d) I and II
A

The correct answer is choice ‘d’

Generally, the probability of default is an input into determining the price of a security. However, if we know the market price of a security, we can back out the probability of default that the market is factoring into pricing that security. Market implied default probabilities are the probabilities of default priced into security prices, and can be determined from both bond prices and CDS spreads. Credit ratings issued by a credit agency do not give us ‘market implied default probabilities’, and neither does an internal scoring model like Altman’s as these do not consider actual market prices in any way.

125
Q

A long position in a credit sensitive bond can be synthetically replicated using:

(a) a long position in a treasury bond and a short position in a CDS
(b) a long position in a treasury bond and a long position in a CDS
(c) a short position in a treasury bond and a short position in a CDS
(d) a short position in a treasury bond and a long position in a CDS

A

The correct answer is choice ‘a’

A long position in a credit sensitive bond is equivalent to earning the risk free rate and the spread on the bond. The risk free rate can be earned through a long position in a treasury bond, and the spread can be earned in the form of premiums on a CDS, which are received by the protection seller, ie the party short a CDS contract. Therefore we can get the same results as a long bond position using a combination of a long treasury bond and a short position in a CDS. Choice ‘a’ is the correct answer.

126
Q

The CDS rate on a defaultable bond is approximated by which of the following expressions:

(a) Loss given default x Default hazard rate
(b) Credit spread x Loss given default
(c) Hazard rate x Recovery rate
(d) Hazard rate / (1 - Recovery rate)

A

The CDS rate is approximated by the [Loss given default x Default hazard rate]. Thus Choice ‘a’ is the correct answer.

127
Q

For a corporate bond, which of the following statements is true:
I. The credit spread is equal to the default rate times the recovery rate
II. The spread widens when the ratings of the corporate experience an upgrade
III. Both recovery rates and probabilities of default are related to the business cycle and move in opposite directions to each other
IV. Corporate bond spreads are affected by both the risk of default and the liquidity of the particular issue
(a) IV only
(b) III and IV
(c) I, II and IV
(d) III only

A

The correct answer is choice ‘b’

The credit spread is equal to the default rate times the loss given default, or stated another way, default rate times (1 - recovery rate). It is not equal to the default rate times the recovery rate. Therefore statement I is not correct.

When ratings are upgraded by rating agencies, the spread contracts rather than widens. Therefore statement II is not correct.

Both recovery rates and probabilities of default are related to the business cycle, and they move in opposite directions. Economic recessions witness an increase in the default rate and a decrease in the recovery rate, and economic expansions result in a decrease in the default rate and an increase in the recovery rates when default does happen. Therefore statement III is correct.

Bond spreads incorporate both the risk of default, but also considerations of liquidity in the case of corporate bonds. Hence statement IV is correct.

128
Q

The CDS quote for the bonds of Bank X is 200 bps. Assuming a recovery rate of 40%, calculate the default hazard rate priced in the CDS quote.

(a) 3.33%
(b) 0.80%
(c) 2.00%
(d) 5.00%

A

The correct answer is choice ‘a’

Hazard rate x Loss given default = CDS quote. In other words, Hazard rate x (1 - recovery rate) = CDS quote. We can therefore calculate the hazard rate for this problem as 200 bps/(1 - 40%) = 3.33%.

129
Q

Which of the following is the best description of the spread premium puzzle:

(a) The spread premium puzzle refers to observed default rates being much less than implied default rates, leading to lower credit bonds being relatively cheap when compared to their actual default probabilities
(b) The spread premium puzzle refers to AAA corporate bonds being priced at almost the same prices as equivalent treasury bonds without offering the same liquidity or guarantee as treasury bonds
(c) The spread premium puzzle refers to the moral hazard implicit in the monoline insurance market
(d) The spread premium puzzle refers to dollar denominated non-US sovereign bonds being priced at a significant discount to other similar USD denominated assets

A

The correct answer is choice ‘a’

Choice ‘a’ is the correct answer. The other choices represent nonsensical statements.

130
Q

A portfolio has two loans, A and B, each worth $1m. The probability of default of loan A is 10% and that of loan B is 15%. The probability of both loans defaulting together is 1%. Calculate the expected loss on the portfolio.

(a) 250000
(b) 240000
(c) 500000
(d) 1000000

A

The correct answer is choice ‘a’

The easiest way to answer this question is to ignore the joint probability of default as that is irrelevant to expected losses. The joint probability of default impacts the volatility of the losses, but not the expected amount. One way to think about it is to think of asset portfolios, where diversification reduces risk (ie standard deviation) but the expected returns are nothing but the average of the expected returns in the portfolio. Just as the expected returns of the portfolio are not affected by the volatility or correlations (these affect standard deviation), in the same way the joint probability of default does not affect the expected losses. Therefore the expected losses for this portfolio are simply $1m x 10% + $1m x 15% = $250,000.
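A minimal Python check of the calculation (the joint default probability is deliberately left unused, since it does not affect the expected loss):

```python
exposures = [1_000_000, 1_000_000]  # loans A and B
pds = [0.10, 0.15]                  # probabilities of default

# Expected loss is additive across positions; the joint default probability
# affects only the dispersion of losses, not the mean.
expected_loss = sum(e * pd for e, pd in zip(exposures, pds))
print(expected_loss)  # 250000.0
```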

131
Q

If two bonds with identical credit ratings, coupon and maturity but from different issuers trade at different spreads to treasury rates, which of the following is a possible explanation:
I. The bonds differ in liquidity
II. Events have happened that have changed investor perceptions but these are not yet reflected in the ratings
III. The bonds carry different market risk
IV. The bonds differ in their convexity
(a) II and IV
(b) I and II
(c) III and IV
(d) I, II and IV

A

The correct answer is choice ‘b’

When two bonds that appear identical in every respect trade at different prices, the difference is often due to differences in liquidity between the two bonds (the less liquid bond will be cheaper and yield higher), and also due to the fact that ratings from the major rating agencies do not generally react to day to day changes in the market. The market’s perception of the differences in the two credits will cause a divergence in the prices. This has been an extremely visible phenomenon during the credit crisis of 2007-2009, where fixed income security prices have changed sharply for many securities without any changes in external credit ratings.

132
Q

A Bank Holding Company (BHC) is invested in an investment bank and a retail bank. The BHC defaults for certain if either the investment bank or the retail bank defaults. However, the BHC can also default on its own without either the investment bank or the retail bank defaulting. The investment bank and the retail bank’s defaults are independent of each other, with a probability of default of 0.05 each. The BHC’s probability of default is 0.11.

What is the probability of default of both the BHC and the investment bank? What is the probability of the BHC’s default provided both the investment bank and the retail bank survive?

(a) 0.11 and 0
(b) 0.08 and 0.0475
(c) 0.0475 and 0.10
(d) 0.05 and 0.0125

A

The correct answer is choice ‘d’

Since the BHC always fails when the investment bank fails, the joint probability of default of the two is merely the probability of the investment bank failing, ie 0.05.

The probability of just the BHC failing, given that both the investment bank and the retail bank have survived, is equal to 0.11 - (0.05 + 0.05 - 0.05 × 0.05) = 0.0125. (The easiest way to see this is a Venn diagram: the area of the largest circle is 0.11, and inside it are two intersecting circles of area 0.05 each, with their intersection accounting for 0.05 × 0.05. We need the area inside the larger circle representing the BHC but outside the two smaller circles.)
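The arithmetic can be laid out in Python using the figures from the question:

```python
p_ib = 0.05   # probability of default of the investment bank
p_rb = 0.05   # probability of default of the retail bank (independent)
p_bhc = 0.11  # probability of default of the BHC

# The BHC defaults for certain whenever the investment bank defaults, so
# the joint probability of the BHC and the investment bank defaulting is
# simply the investment bank's own probability of default.
p_joint_bhc_ib = p_ib

# Probability that at least one subsidiary defaults (inclusion-exclusion):
p_either_subsidiary = p_ib + p_rb - p_ib * p_rb  # 0.0975

# Probability the BHC defaults on its own, both subsidiaries surviving:
p_bhc_alone = p_bhc - p_either_subsidiary
print(p_joint_bhc_ib, round(p_bhc_alone, 4))  # 0.05 0.0125
```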

133
Q

If A and B be two debt securities, which of the following is true?

(a) The probability of simultaneous default of A and B is greatest when their default correlation is negative
(b) The probability of simultaneous default of A and B is greatest when their default correlation is 0
(c) The probability of simultaneous default of A and B is greatest when their default correlation is +1
(d) The probability of simultaneous default of A and B is not dependent upon their default correlations, but on their marginal probabilities of default

A

The correct answer is choice ‘c’

If the marginal probabilities of default of two securities A and B are P(A) and P(B), then the probability of both of them defaulting together is P(AB) = default_correlation × √(P(A)(1 − P(A)) × P(B)(1 − P(B))) + P(A) × P(B). This expression increases with the default correlation, so the joint default probability is greatest when the correlation is +1. (Marginal probability of default means the probability of default of each security on a standalone basis, ie the probability of default of one security without considering the other security.)

134
Q

What is the probability that the bank will recover less than the principal advanced on this loan; assuming the probability of the home buyer’s default is independent of the value of the house?

(a) More than 5%
(b) Less than 1%
(c) 0
(d) More than 1%

A

The correct answer is choice ‘b’

The bank will not be able to recover the principal advanced on this loan if the home buyer defaults and the house value falls to less than $1m, ie the price moves adversely by more than $500k, which is −$500k/$150k = −3.33σ. (Note that $150k is the 1 year volatility in dollars, ie $1.5m × 10%.)

The probability of both these things happening together is just the product of the two probabilities, one of which we know to be 5%. The other is also certainly a small number, and intuitively it is clear that the probability of both the things happening together will be less than 1%.
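A quick numerical check of this intuition, using the standard normal CDF via math.erf and the figures quoted in the explanation (5% default probability, and a price fall beyond −3.33σ needed):

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

p_default = 0.05
# Probability the house value falls by more than $500k when the one-year
# volatility is $150k, ie a move beyond -3.33 standard deviations:
p_price_fall = norm_cdf(-500_000 / 150_000)

# Independence lets us multiply the two probabilities:
p_both = p_default * p_price_fall
print(p_both < 0.01)  # True
```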

135
Q

There are two bonds in a portfolio, each with a market value of $50m. The probability of default of the two bonds are 0.03 and 0.08 respectively, over a one year horizon. If the default correlation is 25%, what is the one year expected loss on this portfolio?

(a) $11m
(b) $5.5m
(c) $5.26m
(d) $1.38m

A

The correct answer is choice ‘b’

Expected losses are additive across positions and are unaffected by the default correlation, which influences only the dispersion of losses (ie the unexpected loss). The expected loss is therefore $50m × 0.03 + $50m × 0.08 = $5.5m.

For reference, the joint probability of default of both A and B implied by the 25% correlation is:

default_correlation × √(0.03 × (1 − 0.03) × 0.08 × (1 − 0.08)) + 0.03 × 0.08 = 25% × 0.0463 + 0.0024 = 0.0140
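In Python, with the figures from this card, the joint default probability and the expected loss (which does not depend on the correlation) come out as:

```python
import math

p_a, p_b = 0.03, 0.08   # one-year default probabilities
rho = 0.25              # default correlation
exposure = 50.0         # $m per bond

# Joint default probability: rho * sqrt(pA(1-pA) * pB(1-pB)) + pA*pB
p_joint = rho * math.sqrt(p_a * (1 - p_a) * p_b * (1 - p_b)) + p_a * p_b

# Expected loss is unaffected by the correlation:
expected_loss = exposure * p_a + exposure * p_b
print(round(p_joint, 4), expected_loss)  # 0.014 5.5
```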

136
Q

There are two bonds in a portfolio, each with a market value of $50m. The probability of default of the two bonds are 0.03 and 0.08 respectively, over a one year horizon. If the probability of the two bonds defaulting simultaneously is 1.4%, what is the default correlation between the two?

(a) 25%
(b) 100%
(c) 40%
(d) 0%

A

The correct answer is choice ‘a’

The probability of the joint default of both A and B is given by:
P(AB) = default_correlation × √(P(A)(1 − P(A)) × P(B)(1 − P(B))) + P(A) × P(B)
We know all the numbers except the default correlation, so we can solve for it:
default_correlation × √(0.03 × (1 − 0.03) × 0.08 × (1 − 0.08)) + 0.03 × 0.08 = 0.014

Solving, we get default correlation = 25%
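Rearranging the same formula to back out the correlation, in Python:

```python
import math

p_a, p_b = 0.03, 0.08  # one-year default probabilities
p_joint = 0.014        # joint default probability

# rho = (P(AB) - pA*pB) / sqrt(pA(1-pA) * pB(1-pB))
rho = (p_joint - p_a * p_b) / math.sqrt(p_a * (1 - p_a) * p_b * (1 - p_b))
print(round(rho, 2))  # 0.25
```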

137
Q

In estimating credit exposure for a line of credit, it is usual to consider:

(a) only the value of credit exposure currently existing against the credit line as the exposure at default.
(b) the full value of the credit line to be the exposure at default as the borrower has an informational advantage that will lead them to borrow fully against the credit line at the time of default.
(c) the present value of the line of credit at the agreed rate of lending.
(d) a fixed fraction of the line of credit to be the exposure at default even though the currently drawn amount is quite different from such a fraction.

A

The correct answer is choice ‘d’

Choice ‘d’ is the correct answer. Exposures such as those to a line of credit of which only a part (or none) may be drawn at the time of assessment present a difficulty when attempting to quantify credit risk. It is not correct to take the entire amount of the line as the exposure at default, and likewise the current exposure is likely to be too aggressively low a number to consider.

While the borrower has an information advantage in that he would be aware of the deterioration in credit standing before the bank and would probably draw cash prior to default, it is unlikely that the entire amount of the line of credit would be drawn in all cases. In some cases, none may be drawn. In other cases, the bank would become aware of the situation and curtail or cancel access to the credit line in a timely fashion.

Therefore a fixed proportion of existing credit lines is considered a reasonable approach to estimating the exposure at default.

138
Q

For a loan portfolio, expected losses are charged against:

(a) Economic credit capital
(b) Regulatory capital
(c) Economic capital
(d) Credit reserves

A

The correct answer is choice ‘d’

Credit reserves are created in respect of expected losses, which are considered the cost of doing business. Unexpected losses are borne by economic credit capital, which is a part of economic capital. Therefore Choice ‘d’ is the correct answer.

139
Q

Which of the following statements is true:

(a) Total expected losses are greater than the sum of individual underlying exposures while total unexpected losses are less than the sum of unexpected losses on underlying exposures
(b) Total expected losses are equal to the sum of individual underlying exposures while total unexpected losses are greater than the sum of unexpected losses on underlying exposures
(c) Both total expected losses and total unexpected losses are less than the sum of expected and unexpected losses on underlying exposures respectively
(d) Total expected losses are equal to the sum of expected losses in the individual underlying exposures while total unexpected losses are less than the sum of unexpected losses on underlying exposures

A

The correct answer is choice ‘d’

Total expected losses which are average and anticipated are equal to the sum of expected losses in the underlying exposures. Total unexpected losses, which are the excess of worst case losses at a certain confidence level over the expected losses, benefit from the diversification effect and are lower than the sum of unexpected losses of the underlying exposures

140
Q

For a loan portfolio, unexpected losses are charged against:

(a) Credit reserves
(b) Economic credit capital
(c) Regulatory capital
(d) Economic capital

A

The correct answer is choice ‘b’

Credit reserves are created in respect of expected losses, which are considered the cost of doing business. Unexpected losses are borne by economic credit capital, which is a part of economic capital. This question is a bit nuanced - and ‘economic capital’ would generally be a good answer as well. However, taking a rather beady eyed view of the terminology and distinguishing between ‘economic credit capital’, which is a subset of ‘economic capital’, we can say that ‘economic credit capital’ is the more appropriate answer.

141
Q

Which of the following is not a parameter to be determined by the risk manager that affects the level of economic credit capital:

(a) Definition of credit losses
(b) Confidence level
(c) Probability of default
(d) Risk horizon

A

The correct answer is choice ‘c’

Three parameters define economic credit capital: the risk horizon, ie the time horizon over which the risk is being assessed; the confidence level, ie the quantile of the loss distribution; and the definition of credit losses, ie whether mark-to-market losses are considered in addition to default-only losses. The probability of default is not a parameter within the control of the risk manager, but an input into the capital calculation process that he has to estimate.

142
Q

Which of the following statements are true:
I. Credit VaR often assumes a one year time horizon, as opposed to a shorter time horizon for market risk as credit activities generally span a longer time period.
II. Credit losses in the banking book should be assessed on the basis of mark-to-market mode as opposed to the default-only mode.
III. The confidence level used in the calculation of credit capital is high when the objective is to maintain a high credit rating for the institution.
IV. Credit capital calculations for securities with liquid markets and held for proprietary positions should be based on marking positions to market.
(a) I, III and IV
(b) II and III
(c) I and III
(d) I and II

A

The correct answer is choice ‘a’

Statement I is correct as credit VaR calculations often use a one year time horizon. This is primarily because the cycle in respect of credit related activities, such as loan loss reviews, accounting cycles for borrowers etc last a year.

Statement II is false. There are two ways in which loss assessments in respect of credit risk can be made: default mode, where losses are considered only in respect of default, and no losses are recognized in respect of the deterioration of the creditworthiness of the borrower (which is often expressed through a credit rating transition matrix); and the mark-to-market mode, where losses due to both defaults and credit quality are considered. The default mode is used for the loan book where the institution has lent moneys and generally intends to hold the loan on its books till maturity. The mark to market mode is used for traded securities which are not held to maturity, or are held only for trading.

Statement III is correct. The confidence level, or the quantile of losses, used for maintaining credit ratings tends to be very high as the possibility of the institution’s default needs to be remote.

Statement IV is correct too, for the reasons explained earlier.

143
Q

The capital adequacy ratio applied to risk weighted assets for the calculation of capital requirements for credit risk per Basel II is:

(a) 150%
(b) 12.5%
(c) 8%
(d) 100%

A

The correct answer is choice ‘c’

The capital adequacy ratio, also called the minimum capital requirement for credit risk per Basel II is 8% of risk weighted assets. The other choices are incorrect.

144
Q

Under the internal ratings based approach for risk weighted assets, for which of the following parameters must each institution make internal estimates (as opposed to relying upon values determined by a national supervisor):

(a) Probability of default
(b) Loss given default
(c) Exposure at default
(d) Effective maturity

A

The correct answer is choice ‘a’

Regardless of the approach being followed by a bank (ie, whether foundation IRB or advanced IRB), it must make its own estimates for the probability of default. Banks following the foundation IRB approach may use values set by the supervisor for the other three parameters, though those following the advanced IRB approach may use their own estimates for all four inputs. (This is also the difference between advanced IRB and the foundation IRB approaches.) Therefore Choice ‘a’ is the correct answer.

Also note the four different elements that serve as inputs to the internal ratings based approach in the choices provided.

145
Q

If the full notional value of a debt portfolio is $100m, its expected value in a year is $85m, and the worst value of the portfolio in one year’s time at 99% confidence level is $60m, then what is the credit VaR?

(a) $40m
(b) $25m
(c) $60m
(d) $15m

A

The correct answer is choice ‘b’

Credit VaR is the difference between the expected value of the portfolio and the value of the portfolio at the given confidence level. Therefore the credit VaR is $85m - $ 60m = $25m. Choice ‘b’ is the correct answer.

Note that economic capital and credit VaR are identical at a risk horizon of one year. Therefore if the question asks for economic capital, the answer would be the same.

146
Q

For credit risk calculations, correlation between the asset values of two issuers is often proxied with:

(a) Default correlations
(b) Credit migration matrices
(c) Equity correlations
(d) Transition probabilities

A

The correct answer is choice ‘c’

Asset returns are relevant for credit risk models where a default is related to the value of the assets of the firm falling below the default threshold. When assessing credit risk for portfolios with multiple credit assets, it becomes necessary to know the asset correlations of the different firms. Since this data is rarely available, it is very common to approximate asset correlations using equity prices. Equity correlations are used as proxies for asset correlations.

147
Q

All else remaining the same, an increase in the joint probability of default between two obligors causes the default correlation between the two to:

(a) Stay the same
(b) Increase
(c) Decrease
(d) Cannot be determined from the given information

A

The correct answer is choice ‘b’

The default correlation between two obligors goes up when the joint probability of default increases, all else remaining the same. This follows from the relationship default_correlation = (P(AB) − P(A)P(B)) / √(P(A)(1 − P(A)) × P(B)(1 − P(B))): with the marginal probabilities unchanged, a higher joint probability P(AB) increases the numerator and therefore the correlation.

148
Q

Which of the following statements are true:
I. A transition matrix is the probability of a security migrating from one rating class to another during its lifetime.
II. Marginal default probabilities refer to probabilities of default in a particular period, given survival at the beginning of that period.
III. Marginal default probabilities will always be greater than the corresponding cumulative default probability.
IV. Loss given default is generally greater when recovery rates are low.
(a) I and IV
(b) I, III and IV
(c) II and IV
(d) I and III

A

The correct answer is choice ‘c’

Statement I is incorrect. A transition matrix expresses the probabilities of moving to a given set of ratings at the end of a period (usually one year) conditional upon a given rating at the beginning of the period. It does not make a reference to an individual security and certainly not to the probability of migrating to other ratings during its entire lifetime.

Statement II is correct. Marginal default probabilities are the probability of default in a given year, conditional upon survival at the beginning of that year.

Statement III is incorrect. Cumulative probabilities of default will always be greater than the marginal probabilities of default - except in year 1 when they will be equal.

Statement IV is correct. LGD = 1 - Recovery Rate, therefore a low recovery rate implies higher LGD.

149
Q

A cumulative accuracy plot:

(a) measures accuracy of default probabilities observed empirically
(b) is a measure of the correctness of VaR calculations
(c) measures the accuracy of credit risk estimates
(d) measures rating accuracy

A

The correct answer is choice ‘d’

A cumulative accuracy plot measures the accuracy of credit ratings assigned by rating agencies by considering the relative rankings of obligors according to the ratings given.

150
Q
Which of the following objectives are targeted by rating agencies when assigning ratings:
I. Ratings accuracy
II. Ratings stability
III. High accuracy ratio (AR)
IV. Ranked ratings
(a) I, II and III
(b) III and IV
(c) II and III
(d) I and II
A

The correct answer is choice ‘d’

Rating agencies target both accuracy and stability when they assign ratings. These two objectives can sometimes conflict, so a balance needs to be struck between the two. Rating agencies do not target any particular ‘accuracy ratio’ or rankings.

151
Q

Which of the following describes rating transition matrices published by credit rating firms:

(a) Expected ex-ante frequencies of migration from one credit rating to another over a one year period
(b) Realized frequencies of migration from one credit rating to another over a one year period
(c) Probabilities of default for each credit rating class
(d) Probabilities of ratings transition from one rating to another for a given set of issuers

A

The correct answer is choice ‘b’

Transition matrices are used for building distributions of the value of credit portfolios, and are the realized frequencies of migration from one credit rating to another over a period, generally one year. Therefore Choice ‘b’ is the correct answer.

152
Q
Which of the following need to be assumed to convert a transition probability matrix for a given time period to the transition probability matrix for another length of time:
I. Time invariance
II. Markov property
III. Normal distribution
IV. Zero skewness
(a) III and IV
(b) I and II
(c) I, II and IV
(d) II and III
A

The correct answer is choice ‘b’

Time invariance refers to all time intervals being similar and identical, regardless of the effects of business cycles or other external events. The Markov property is the assumption that there is no ratings momentum, and that transition probabilities are dependent only upon where the rating currently is and where it is going to. Where it has come from, or what the past changes in ratings have been, have no effect on the transition probabilities.

Rating agencies generally provide transition probability matrices for a given period of time, say a year. The risk analyst may need to convert these into matrices for, say, 6 months, 2 years, or whatever time horizon he or she is interested in. Two simplifying assumptions allow this to be done using simple matrix multiplication: time invariance and the Markov property. Thus Choice ‘b’ is the correct answer. The other choices (normal distribution and zero skewness) are nonsensical in this context.

153
Q

An assumption regarding the absence of ratings momentum is referred to as:

(a) Markov property
(b) Herstatt risk
(c) Ratings stability
(d) Time invariance

A

The correct answer is choice ‘a’

Choice ‘a’ is the correct answer. The Markov property is the assumption that there is no ratings momentum, and that transition probabilities are dependent only upon where the rating currently is and where it is going to. Where it has come from, or what the past changes in ratings have been, have no effect on the transition probabilities. (‘Herstatt risk’ refers to settlement risk, and is irrelevant.)

154
Q

If P represents a matrix with ratings transition probabilities for one year, the transition probabilities for 3 years are given by the matrix:

(a) P ^ (-3)
(b) 3 [P ^ (-1)]
(c) 3 [P]
(d) P x P x P

A

The correct answer is choice ‘d’

Assuming time invariance and the Markov property, it is easy to calculate the transition matrix for any time period as P^n, where P is the given transition matrix for one period and n the number of time periods that we need to compute the new transition matrix for. Thus Choice ‘d’ is the correct answer.
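The P^n computation is a one-liner in any linear algebra library. A minimal sketch, using an illustrative three-state matrix (the figures are assumptions, not real agency data):

```python
import numpy as np

# Stylized one-year transition matrix (rows: current rating A, B, D;
# columns: rating one year later; D = default is absorbing).
P = np.array([
    [0.90, 0.08, 0.02],
    [0.10, 0.80, 0.10],
    [0.00, 0.00, 1.00],
])

# Under time invariance and the Markov property, the 3-year matrix is P^3.
P3 = np.linalg.matrix_power(P, 3)   # identical to P @ P @ P

# Rows of a valid transition matrix still sum to 1.
assert np.allclose(P3.sum(axis=1), 1.0)
```

Note how longer horizons push more probability mass into the absorbing default state.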

155
Q

If P is the transition matrix for 1 year, how can we find the transition matrix for 4 months?

(a) By calculating the matrix P x P x P
(b) By calculating the cube root of P
(c) By numerically calculating a matrix M such that M x M x M is equal to P
(d) By dividing P by 3

A

The correct answer is choice ‘c’

Assuming time invariance and the Markov property, it is easy to calculate the transition matrix for any time period as P^n, where P is the given transition matrix for one period and n the number of time periods that we need to compute the new transition matrix for.

However, when the new time period is shorter than the period for which the matrix is available, the only way to derive a transition matrix for the partial period is to numerically calculate a matrix M such that M x M x M equals P. Therefore Choice ‘c’ is the correct answer. There is no direct closed-form ‘cube root’ operation for matrices - the root must be found numerically; dividing P by 3 gives a matrix that is meaningless in this context; and P x P x P gives the transition matrix for 3 years, not for 1/3rd of a year.
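The numerical search for the matrix root can be done with a fractional matrix power. A sketch assuming SciPy is available (for some matrices the result can contain small negative entries, which practitioners floor and renormalize; the figures here are illustrative):

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

# Illustrative one-year transition matrix (assumed figures).
P = np.array([
    [0.90, 0.08, 0.02],
    [0.10, 0.80, 0.10],
    [0.00, 0.00, 1.00],
])

# Find M such that M @ M @ M = P: M is then the 4-month transition matrix.
M = fractional_matrix_power(P, 1 / 3)

# The defining property holds to numerical precision.
assert np.allclose(M @ M @ M, P)
```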

156
Q

Which of the following belong in a credit risk report?

(a) Largest exposures by counterparty
(b) Exposures by country
(c) Exposures by industry
(d) All of the above

A

The correct answer is choice ‘d’

157
Q

Which of the following statements are true:
I. Credit risk and counterparty risk are synonymous
II. Counterparty risk is the contingent risk from a counterparty’s default in derivative transactions
III. Counterparty risk is the risk of a loan default or the risk from moneys lent directly
IV. The exposure at default is difficult to estimate for credit risk as it depends upon market movements
(a) II and III
(b) I and II
(c) III and IV
(d) II

A

The correct answer is choice ‘d’

Credit risk is the risk from a borrower defaulting on moneys lent. Counterparty risk, on the other hand, is the risk that a counterparty to a derivative transaction will be unable to pay at the time the transaction is in-the-money.

Credit risk therefore relates more to the banking book, while counterparty risk relates more to the trading book. The two differ in that for counterparty risk, the amount at risk fluctuates depending upon the value of the underlying derivative. Counterparty risk generally starts at zero, for most swaps and other derivatives are near zero value at inception. Over time, as the prices of the underlying instruments move, one party ends up owing money to the other. A deterioration in the financial situation of the party owing moneys may lead to a loss to the other party, resulting in counterparty risk. Counterparty risk can also arise from stock lending operations and repo trades.

Credit risk on the other hand is the traditional risk of default by a borrower, or a bank’s customer who has taken a loan or has an overdraft or other credit facility.

Statement I is therefore incorrect as credit risk and counterparty risks are different.

Statement II is correct as counterparty risk is ‘contingent’ in the sense it arises only if the transaction with the counterparty ends up being in-the-money, and the counterparty defaults.

Statement III is incorrect. The statement describes credit risk.

Statement IV is incorrect, as the exposure is known for moneys lent. Derivative exposures for the future are difficult to estimate, they can even turn from moneys owed to moneys due as the value of the underlying changes.

158
Q

Which of the following represents a riskier exposure for a bank: A LIBOR based loan, or an Overnight Indexed Swap? Which of the two rates is expected to be higher?

Assume the same counterparty and the same notional.

(a) A LIBOR based loan; OIS rate will be higher
(b) Overnight Index Swap; LIBOR rate will be higher
(c) A LIBOR based loan; LIBOR rate will be higher
(d) Overnight Index Swap; OIS rate will be higher

A

The correct answer is choice ‘c’

A LIBOR based loan requires cash to move from the lender to the borrower in the amount of the notional. The Overnight Index Swap requires only the exchange of interest payments, and therefore represents less risk.

Therefore the LIBOR based loan is a riskier exposure.

The LIBOR is generally higher than the OIS rate. In fact, the difference between the two, the LIBOR-OIS spread, is a standard measure of the risk premium in the market that goes up when the risk of default by counterparty banks is considered high. When the market perceives the risk of default to be high, participants demand a risk premium to take on the default risk, which is considerably lower for the OIS.

159
Q
Which of the following are valid approaches to calculating potential future exposure (PFE) for counterparty risk:
I. Add a percentage of the notional to the mark-to-market value
II. Monte Carlo simulation
III. Maximum Likelihood Estimation
IV. Parametric Estimation
 	(a)	I, III and IV
 	(b)	I and II
 	(c)	III and IV
 	(d)	All of the above
A

The correct answer is choice ‘b’

When a derivative position is entered into, its mark-to-market value is generally close to zero (though the notional may be high). With the passage of time, the derivative’s value fluctuates in an unpredictable way, creating a counterparty exposure that may be difficult to estimate and risk manage. Counterparty risk in such cases is estimated based on Potential Future Exposure, which may be calculated using either:

a) taking the current mark-to-market and adding a certain percentage of the notional, or
b) performing a Monte Carlo simulation, capturing the stochastic nature of the PFE.

Therefore I and II are valid choices. MLE and parametric estimation are not methods for calculating PFE.
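Both approaches can be sketched in a few lines. Everything below is an illustrative assumption - the notional, current mark-to-market, 1% add-on factor and the normal model for future value are placeholders, not a prescribed methodology:

```python
import numpy as np

rng = np.random.default_rng(42)

notional = 10_000_000   # assumed trade size
mtm_now = 150_000       # assumed current mark-to-market

# Approach I: add a percentage of the notional to the current mark-to-market.
addon_pct = 0.01
pfe_addon = mtm_now + addon_pct * notional

# Approach II: Monte Carlo - simulate the future value and take a high
# percentile of its positive part (exposure cannot be negative).
future_values = mtm_now + rng.normal(0.0, 250_000, size=100_000)
exposure = np.maximum(future_values, 0.0)
pfe_mc = np.percentile(exposure, 95)
```

The add-on method is crude but simple; the simulation captures how wide the distribution of future values actually is.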

160
Q

When pricing credit risk for an exposure, which of the following is a better measure than the others:

(a) Expected Exposure (EE)
(b) Mark-to-market
(c) Potential Future Exposure (PFE)
(d) Notional amount

A

The correct answer is choice ‘a’

Exposure for derivative instruments can vary significantly over the lifetime of the instrument, depending upon how the market moves. The potential future exposure represents the extremes, not the most likely outcome. The expected exposure is the most suitable measure for pricing the credit risk. Over time, as multiple transactions are entered into, the expectation (or the mean) will be realized - though individual transactions may have more or less by way of exposure.

The notional amount may not be relevant, though for loans it may be the most important contributor to the expected exposure. Mark-to-market will represent the exposure at a given point in time, but cannot be predicted nor be used to price the credit risk.
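The distinction between EE and PFE can be seen by computing both from the same simulated exposure distribution. A stylized zero-mean normal model; the $1m volatility is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated derivative values at the risk horizon: zero mean, so the trade
# is as likely to be in-the-money as out-of-the-money.
values = rng.normal(0.0, 1_000_000, size=200_000)
exposure = np.maximum(values, 0.0)   # credit exposure only when in-the-money

ee = exposure.mean()                 # Expected Exposure: suited to pricing
pfe = np.percentile(exposure, 95)    # PFE: an extreme, suited to limits
```

The PFE sits far above the EE, which is why pricing off the PFE would systematically overcharge for credit risk.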

161
Q

Which of the following statements is NOT true in relation to the recent financial crisis of 2007-08?

(a) An intention to diversify from their core activities led all market participants to the same activities, which though appearing diversified at the bank’s level, created a concentration risk at the systemic level
(b) Counterparty risk was difficult to gauge as it was impossible to know who the counterparty’s counterparties were
(c) The existence of central counterparties could have limited the damage caused by the financial crisis
(d) Central banks had data on the interconnections between institutions, but poor understanding and analysis meant this data was never analyzed

A

The correct answer is choice ‘d’

Counterparty risk was difficult to gauge as it was impossible to know who the counterparty’s counterparties were - this is true as the chain of financial transactions became excessively long with no central transparency of who owed who what. Bank A’s credit depended upon the health of its counterparties, whose health in turn depended upon other counterparties. Thus Choice ‘b’ is a correct statement.

In an attempt to diversify, banks became more like each other - chasing yield, they piled into securitized products, and chasing diversification, they piled into different types of securitized products. The system as a whole became susceptible to small shocks in the assets underlying this vast edifice of structured products. Therefore Choice ‘a’ represents a correct statement.

Choice ‘d’ does not represent a correct statement. Central banks had little data on the interconnections between institutions. They were aware of the large volumes of OTC transactions, but had no data to figure out who was connected to who, and who had what kind of exposures.

Choice ‘c’ represents a correct statement. Most transactions, other than exchange cleared futures trades (which were a tiny fraction of all trades) were cleared on a bilateral basis. The existence of central counterparties (CCPs) could have limited the impact of the crisis significantly as market participants would not have lost trust in each other, and the ‘collateral damage’ that was witnessed from a fall in housing prices, and thereby mortgage assets, would have been more contained.

162
Q

Identify the correct sequence of events as it unfolded in the credit crisis beginning 2007:
I. Mortgage defaults increased
II. Collapse in prices of unrelated assets as banks tried to create liquidity
III. Banks refused to lend or transact with each other
IV. Asset prices for CDOs collapsed
(a) IV, I, II and III
(b) I, III, IV and II
(c) III, IV, I and II
(d) I, IV, III and II

A

The correct answer is choice ‘d’

According to a paper by the BCBS, the sequence unfolded as follows: mortgage defaults increased (I), which caused the prices of CDOs backed by those mortgages to collapse (IV); banks, unable to assess each other’s exposures, refused to lend to or transact with each other (III); and finally, as banks sold assets to create liquidity, the prices of unrelated assets collapsed as well (II).

163
Q

Which of the following contributed to the systemic failure during the credit crisis that began in 2007?

(a) Stress tests that did not stress enough
(b) Inadequate attention paid to liquidity risk
(c) Moral hazard from the strategy of ‘originate and distribute’
(d) All of the above

A

The correct answer is choice ‘d’

All the factors listed above contributed to systemic failure. Liquidity risk was not on the radar of regulators, and was a second priority for risk managers, and most of the focus was on capital adequacy as liquidity was thought to be an unlikely problem. Liquidity, regardless of capital adequacy, was the primary cause of failure of a number of institutions during the crisis.

Similarly, stress tests proved to be much milder than the shocks that were actually experienced, and the strategy of ‘originate and distribute’ implied that the mortgage and other debt originators had no interest in any due diligence as they intended to package and sell the debt to other investors.

164
Q

A bullet bond and an amortizing loan are issued at the same time with the same maturity and with the same principal. Which of these would have a greater credit exposure halfway through their life?

(a) The bullet bond
(b) Indeterminate with the given information
(c) The amortizing loan
(d) They would have identical exposure half way through their lives

A

The correct answer is choice ‘a’

A bullet bond is a bond that pays coupons covering interest during the life of the bond and the principal at maturity. An amortizing loan pays the interest as well as a part of the principal with every payment. Therefore, the exposure of the amortizing loan continually reduces, and approaches zero towards the end of its life. The bullet bond will always have a higher exposure at any time during its life when compared to an equivalent amortizing loan.
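The difference is easy to verify with a standard annuity amortization schedule. A minimal sketch (assumed figures: $1m principal, 5% annual rate, 10-year term):

```python
principal, rate, years = 1_000_000.0, 0.05, 10

# Bullet bond: the full principal remains outstanding until maturity.
bullet_outstanding = principal

# Amortizing loan: level annual payment from the standard annuity formula.
payment = principal * rate / (1 - (1 + rate) ** -years)

balance = principal
for _ in range(years // 2):          # balance after 5 of 10 payments
    balance = balance * (1 + rate) - payment
```

Halfway through, the amortizing balance is roughly $561k against the bullet’s full $1m, and the gap only widens from there.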

165
Q

Credit exposure for derivatives is measured using

(a) Standard normal distribution
(b) Notional value of the derivative
(c) Forward looking exposure profile of the derivative
(d) Current replacement value

A

The correct answer is choice ‘c’

Current replacement values are a very poor measure of the credit exposure from a derivative contract, because the future value of these instruments is unpredictable, ie is stochastic, and the range of values it can take increases the further ahead in the future we look. Therefore it is common for credit exposures for derivatives to be measured using forward looking exposure profiles, which are distributions of the expected value of the derivative at the time horizon for which credit risk is being measured. To be conservative, a high enough quantile of this distribution is taken as the ‘loan equivalent value’ of the derivative exposure. Choice ‘c’ is the correct answer.

The notional value of derivative contracts generally tends to be quite high and unrelated to their economic value or the counterparty exposure. Therefore notional value is irrelevant.

166
Q

A derivative contract has a negative current replacement value. Which of the following statements is true about its loan equivalent value for credit risk calculations over a 2-year horizon?

(a) The current exposure can be used for loan equivalence calculations as that is an unbiased proxy for the future value.
(b) The credit exposure will be a given quantile of the expected distribution of the value of the derivatives contract in the future.
(c) The notional value of the derivatives contract should be used for loan equivalence calculations.
(d) Since the derivatives contract has a negative current replacement value, exposure will be zero

A

The correct answer is choice ‘b’

The current exposure is negative, so there is no immediate credit exposure. However, since the price of the derivative is volatile, we can reasonably expect the value to be greater than zero at some time in the future. This is a stochastic variable that will have a distribution, not a unique value, in the future representing the credit exposure. Since there is no unique value, a conservative approach is to pick a quantile of the distribution and use that as the future value of the derivative contract, with the assurance that the probability of the credit exposure exceeding that quantile is known and has been consciously selected. This number can then be converted to a loan equivalent amount for credit risk purposes.

167
Q

For a 10 year interest rate swap, what would be the worst time for a counterparty to default (in terms of the maximum likely credit exposure)?

(a) Right after inception
(b) 2 years
(c) 10 years
(d) 7 years

A

The correct answer is choice ‘d’

‘Right after inception’ is incorrect as the interest rate swap (IRS) would be valued at close to zero right after inception and the credit risk would be minimum. Choice ‘c’ (ie 10 years, at maturity) is incorrect as at maturity there would be no more cash flows to exchange, and the replacement value of the contract would again be close to zero.

Therefore the worst time for the counterparty to default is somewhere between inception and maturity - in fact the range of possible outcomes for the contract increases with the passage of time, and we should find the worst time to default to be a later date. However, towards maturity, the value of the contract starts to go towards zero again, and the maximum value is reached around 7 years. 2 years is too early for the maximum to be reached for the 10 year IRS, and therefore Choice ‘d’ is the correct answer.

168
Q

For a FX forward contract, what would be the worst time for a counterparty to default (in terms of the maximum likely credit exposure)

(a) Roughly three-quarters of the way towards maturity
(b) Right after inception
(c) Indeterminate from the given information
(d) At maturity

A

The correct answer is choice ‘d’

With the passage of time, the range of possible values the FX contract can take increases. Therefore the maximum value of the contract, which is when the credit risk would be maximum, would be at maturity. (Note that this is different than an interest rate swap whose value at maturity approaches zero.) Therefore Choice ‘d’ is the correct answer and the others are incorrect.
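A stylized PFE profile makes this concrete: FX-rate uncertainty grows roughly with the square root of time, and nothing settles before maturity, so peak exposure is at maturity. The notional, 10% volatility and 95% quantile multiplier below are assumptions:

```python
import numpy as np

notional, sigma, z = 10_000_000, 0.10, 1.645   # assumed size, FX vol, 95% factor
times = np.linspace(0.1, 1.0, 10)              # fractions of a one-year term

# Stylized exposure profile: grows like sqrt(t) and never amortizes.
pfe_profile = notional * sigma * np.sqrt(times) * z

peak_index = int(pfe_profile.argmax())
```

The profile is monotonically increasing, so the peak lands on the final point - maturity - unlike an interest rate swap, whose profile decays back toward zero as cash flows roll off.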

169
Q

For a given notional amount, which of the following carries the greatest counterparty exposure (assuming the same counterparty credit rating for each):

(a) A futures contract on an equity index
(b) A one year interest rate swap
(c) A one year forward foreign exchange contract
(d) A one year certificate of deposit

A

The correct answer is choice ‘d’

The exposure at default is the greatest for the certificate of deposit as the entire notional amount is exposed to the risk of default. The other choices represent derivatives for which the current replacement value, which would be far less than notional, would be the credit exposure.

Said another way - if the counterparty were to default, the entire money in the CD would be at risk, whereas for the derivative contracts it would only be the replacement value that would be at risk.

170
Q

Which of the following carry greater counterparty risk: a forward contract on a 10 year note, or a commercial paper carrying a AA credit rating with identical maturity and notional?

(a) They both carry the same credit risk
(b) Credit risk can not be compared in these terms
(c) The forward contract has greater credit risk as its future gains are unknown
(d) The commercial paper has greater credit risk as the entire notional is outstanding

A

The correct answer is choice ‘d’

The commercial paper has greater credit risk as the entire notional is outstanding. On the forward contract, only the replacement value of the contract, which normally would be a mere fraction of the notional, would be at risk.

171
Q
Which of the following are ordered correctly in the order of debt seniority in a bankruptcy situation?
I. Equity, Subordinate debt, Senior debt
II. Senior debt, Preferred stock, Equity
III. Secured debt, Accounts payable, Preferred stock
IV. Secured debt, DIP financing, Equity
 	(a)	I
 	(b)	II, III and IV
 	(c)	II and III
 	(d)	I and IV
A

The correct answer is choice ‘c’

In a bankruptcy, equity ranks last. Preferred equity is one level above common equity. Senior debt is paid out before junior debt, and secured debt is paid out first to the extent of the asset securing it (after which any shortfall ranks as unsecured debt). Accounts payable and other short term liabilities are treated like unsecured creditors. Debtor-in-possession (DIP) financing ranks higher than any other claim as it is financing secured after the bankruptcy to continue the business.

Based on the above, statement I does not represent a correct ordering of seniority as equity is paid last. Similarly, DIP financing receives higher priority than even secured debt, and therefore statement IV is incorrect. Statements II and III are ordered correctly.

172
Q

If the loss given default is denoted by L, and the recovery rate by R, then which of the following represents the relationship between loss given default and the recovery rate?

(a) R = 1 / L
(b) L = 1 + R
(c) R = 1 + L
(d) R = 1 - L

A

The correct answer is choice ‘d’

When a default occurs, the proportion of the exposure represented by the recovery rate is recovered. For example, if the recovery rate is 40% for a loan, the actual loss in the event of a default would be $60 for a $100 loan. In other words, the loss given default = 1 - recovery rate. Hence Choice ‘d’ is the correct answer. All other choices are incorrect.

173
Q

Company A issues bonds with a face value of $100m, sold at issuance at $98. Bank B holds $10m in face of these bonds acquired at a price of $70. What is Bank B’s exposure to the debt issued by Company A?

(a) $9.8m
(b) $6.86m
(c) $10m
(d) $7m

A

The correct answer is choice ‘d’

Bank B’s exposure is measured by the price it paid for the bonds, which in this case is $7m ($10m x 70/100). Hence Choice ‘d’ represents the correct answer.

(Note that the question is asking for ‘exposure’ and not the legal claim in the event of default. The legal claim in the event of default would be the full notional of $10m. )

174
Q

Company A issues bonds with a face value of $100m, sold at $98. Bank B holds $10m in face of these bonds acquired at a price of $70. Company A then defaults, and the recovery rate is expected to be 30%. What is Bank B’s loss?

(a) $2.1m
(b) $4m
(c) $4.9m
(d) $7m

A

The correct answer is choice ‘b’

The bank paid $7m for the bonds, and expected recovery is $3m (30% x $10m face). Therefore Bank B’s loss is $4m ($7m - $3m). Choice ‘b’ is the correct answer. All other answers are incorrect.
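The two calculations (questions 173 and 174) can be checked in a couple of lines:

```python
# Bank B: $10m face, bought at a price of 70; expected recovery 30% of face.
face = 10_000_000

exposure = face * 70 / 100   # exposure is what the bank paid: $7m
recovery = face * 30 / 100   # recovery applies to the legal claim (face): $3m
loss = exposure - recovery   # loss on default: $4m
```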

175
Q

The unexpected loss for a credit portfolio at a given VaR estimate is defined as:

(a) Actual Loss - VaR
(b) Actual Loss - Expected Loss
(c) max(Actual Loss - Expected Loss, 0)
(d) VaR - Expected Loss

A

The correct answer is choice ‘d’

Unexpected loss for a credit portfolio refers to the excess of the VaR estimate over the average expected loss. The term ‘unexpected loss’ has this specific meaning in the context of credit risk, and not any other intuitive meaning. So if, for a portfolio worth $100m, expected losses are 4% and the credit VaR at 99% is $12m, then the unexpected loss at that VaR quantile is $8m. This is unrelated to actual realized losses versus expected losses.

Therefore Choice ‘d’ is the correct answer and the others are not.

Unexpected loss is used to determine the capital reserves to be maintained against a credit portfolio at a certain level of confidence.
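The worked numbers above can be restated directly:

```python
# $100m portfolio, 4% expected loss rate, credit VaR at 99% of $12m.
portfolio = 100_000_000
expected_loss = portfolio * 4 / 100   # $4m: charged to the P&L / pricing
credit_var_99 = 12_000_000            # from the portfolio loss distribution

# Unexpected loss = VaR - expected loss: the buffer capital must cover.
unexpected_loss = credit_var_99 - expected_loss
```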

176
Q

Which of the following is not an event of default covered in the ISDA Master Agreement?

I. failure to pay or deliver
II. credit support default
III. merger without assumption
IV. Bankruptcy
 	(a)	II and III
 	(b)	All are considered events of default
 	(c)	I
 	(d)	IV
A

The correct answer is choice ‘b’

Note that events of default under the ISDA MA are caused by one of the parties, who is considered ‘at fault’. In contrast, ‘termination events’ are events for which no one is at fault - for example changes in legislation, illegality etc - that still justify termination of the transactions under the contract.

The ISDA MA describes the following 8 types of events of default:

  1. failure to pay or deliver
  2. breach of agreement
  3. credit support default
  4. misrepresentation
  5. default under specified transaction
  6. cross default
  7. bankruptcy
  8. merger without assumption

All of the options presented in the question are events of default.

177
Q

Which of the following best describes a ‘break clause’ ?

(a) A break clause gives either party to a transaction the right to terminate the transaction at market price at future date(s)
(b) A break clause describes rights and obligations when the derivative contract is broken
(c) A break clause sets out the conditions under which the transaction will be terminated upon non-compliance with the ISDA MA
(d) A break clause determines the process by which amounts due on early termination will be determined

A

The correct answer is choice ‘a’

A break clause, also called a ‘mutual put’, gives either party the right to terminate a transaction at market price at a given date, or dates, in the future. These are usually availed of in longer dated transactions, eg 10 years and over. For example, a 15-year swap might have a mutual put in year 5, and every 2 years thereafter.

178
Q

Under the ISDA MA, which of the following terms best describes the netting applied upon the bankruptcy of a party?

(a) Payment netting
(b) Multilateral netting
(c) Chapter 11
(d) Closeout netting

A

The correct answer is choice ‘d’

Netting is the ability to settle just the net balances when amounts are both owed and due. Netting can take many forms. Payment netting is netting between counterparties that owe moneys to each other in the same currency under the same transaction (or master agreement). Closeout netting is when parties settle a net amount for the value of all outstanding transactions upon the occurrence of an event of default such as bankruptcy. Multilateral netting involves a third party that sets off exposures across counterparties that owe moneys to each other.

Closeout netting under the ISDA master agreement enables a party to terminate transactions early if an Event of Default or Termination Event occurs in respect of the other party. It involves the calculation and netting of the termination values of all transactions to produce a single amount payable between the parties. Closeout netting is therefore the correct answer.

179
Q

What ensures that firms are not able to selectively default on some obligations without being considered in default on the others?

(a) Exchange listing requirements
(b) Cross-default clauses in debt covenants
(c) Chapter 11 regulations
(d) The bankruptcy code

A

The correct answer is choice ‘b’

It is the cross-default clauses in debt agreements that generally provide that a default on one obligation is considered a credit event applying to all debts of the obligor, and therefore we are able to deal with credit risk at the borrower level, and not at the level of the individual security. It also helps avoid situations where borrowers can selectively default on some obligations while continuing to service others.

180
Q

When considering a request for a loan from a retail customer, which of the following factors is relevant for a bank to consider:

(a) The credit worthiness of the retail customer
(b) The other retail loans in its portfolio
(c) The contribution this new loan would bring to total portfolio risk
(d) All of the above

A

The correct answer is choice ‘d’

The credit worthiness of the retail customer is certainly a factor for the bank to consider, as it will need to price the loan to cover the expectation of default. At the same time, it will need to look at the other loans in its portfolio so as to avoid unacceptable concentration risk. A corollary of the same theme is that the bank will need to take a portfolio view of the loan request and consider its contribution to total portfolio risk.

181
Q

Which of the following statements are true:
I. Pre-settlement risk is the risk that one of the parties to a contract might default prior to the maturity date or expiry of the contract.
II. Pre-settlement risk can be partly mitigated by providing for early settlement in the agreements between the counterparties.
III. The current exposure from an OTC derivatives contract is equivalent to its current replacement value.
IV. Loan equivalent exposures are calculated even for exposures that are not loans as a practical matter for calculating credit risk

A

The correct answer is choice ‘b’

Pre-settlement risk is the risk that one of the counterparties defaults prior to the date for the maturity of the transaction in question. This may be an unrelated default, in fact there may have been no default on that particular contract, but the party may have defaulted on its other obligations, or filed for bankruptcy. To deal with such cases and to protect the interests of both the parties, it is common to provide for immediate termination of positions and settlement based on the current replacement value of the contracts. Therefore statements I and II are correct.

Statement III is correct as well - the exposure from an OTC derivative contract derives from its current replacement value, and not the notional. If the current replacement value is negative, then the credit exposure is considered equal to zero.

Statement IV is correct as it is quite common to restate all exposures - those from credit lines, OTC derivatives etc - in loan equivalent terms prior to estimating credit risk.

182
Q
Which of the following can be used to reduce credit exposures to a counterparty:
I. Netting arrangements
II. Collateral requirements
III. Offsetting trades with other counterparties
IV. Credit default swaps
 	(a)	III and IV
 	(b)	I, II, III and IV
 	(c)	I and II
 	(d)	I, II and IV
A

The correct answer is choice ‘d’

Offsetting trades with other counterparties will not reduce credit exposure to a given counterparty. All other choices represent means of reducing credit risk.

183
Q

The risk that a counterparty fails to deliver its obligation upon settlement while having received the leg owed to it is called:

(a) Settlement risk
(b) Credit risk
(c) Replacement risk
(d) Pre-settlement risk

A

The correct answer is choice ‘a’

Choice ‘a’ is the correct answer. Settlement risk, as the name suggests, arises upon settlement when one of the parties delivers its obligation under the transaction and the other does not. Consider a EUR/USD FX forward contract maturing in a month. At maturity, one of the parties will deliver EURs and the other USDs. If one party fails to deliver, it constitutes a very large risk to the other party. This risk is much larger than pre-settlement risk, because the amount at risk is the entire notional and not just the replacement value. Of course, settlement risk exists for a very short period of time, no more than a number of hours or a day.

There is no such thing as ‘replacement risk’, and credit risk is a larger category of which settlement risk is one component. Settlement risk is the most appropriate answer.

184
Q

Which of the following is not a credit event under ISDA definitions?

(a) Rating downgrade
(b) Failure to pay
(c) Obligation accelerations
(d) Restructuring

A

The correct answer is choice ‘a’

According to ISDA, a credit event is an event linked to the deteriorating credit worthiness of an underlying reference entity in a credit derivative. The occurrence of a credit event usually triggers full or partial termination of the transaction and a payment from protection seller to protection buyer. Credit events include

  • bankruptcy,
  • failure to pay,
  • restructuring,
  • obligation acceleration,
  • obligation default and
  • repudiation/moratorium.

A rating downgrade is not a credit event.

185
Q

Which of the following is NOT true in respect of bilateral close out netting:

(a) The net amount due is immediately receivable or payable
(b) Transactions are separated by transaction type and immediately settled separately at each transaction’s replacement value
(c) All transactions are netted against each other
(d) All transactions are immediately closed out upon the occurrence of a credit event for either of the counterparties

A

The correct answer is choice ‘b’

Choice ‘d’, Choice ‘c’ and Choice ‘a’ correctly describe bilateral close out netting as recommended by the ISDA. However Choice ‘b’ is not correct, as it suggests individual settlement of transactions without netting, which defeats the whole point of bilateral close out netting.

186
Q

Which of the following measures can be used to reduce settlement risks:

(a) increasing the timing differences between the two legs of the transaction
(b) escrow arrangements using a central clearing house
(c) providing for physical delivery instead of netted cash settlements
(d) all of the above

A

The correct answer is choice ‘b’

Increasing the timing differences between the two legs of the transaction will increase settlement risk, not reduce it. Using escrow arrangements, such as central clearing houses, to settle transactions (eg the DTCC in the United States) reduces settlement risk. Cash settlements based on netting arrangements reduce settlement risk, while physical delivery combined with gross cash payments increases it.

187
Q

Which of the following is not a tool available to financial institutions for managing credit risk:

(a) Third party guarantees
(b) Cumulative accuracy plot
(c) Collateral
(d) Credit derivatives

A

The correct answer is choice ‘b’

Collateral, limits to avoid credit exposure concentrations, termination rights based upon credit ratings, third party guarantees and credit derivatives are all tools or instruments that financial institutions use to manage their credit risk. A cumulative accuracy plot measures the accuracy of ratings, and is not a tool for managing credit risk.

188
Q

Which of the following statements are true:
I. The sum of unexpected losses for individual loans in a portfolio is equal to the total unexpected loss for the portfolio.
II. The sum of unexpected losses for individual loans in a portfolio is less than the total unexpected loss for the portfolio.
III. The sum of unexpected losses for individual loans in a portfolio is greater than the total unexpected loss for the portfolio.
IV. The unexpected loss for the portfolio is driven by the unexpected losses of the individual loans in the portfolio and the default correlation between these loans.
(a) I, II and III
(b) III and IV
(c) II and IV
(d) I and II

A

The correct answer is choice ‘b’

Unexpected losses (UEL) for individual loans in a portfolio will always sum to greater than the total unexpected loss for the portfolio (unless all the loans are correlated in such a way that they default together). This is akin to the ‘diversification effect’ in market risk, in other words, not all the obligors would default together. So the UEL for the portfolio will always be less than the sum of the UELs for individual loans. Therefore statement III is true.

This ‘diversification effect’ will be affected by the default correlations between the obligors, in cases where the probability of various obligors defaulting together is low, the UEL for the portfolio would be much less than the UEL for the individual loans. Hence statement IV is true.

I and II are false for the reasons explained above.
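As a rough numerical sketch of this diversification effect (the loan ULs and the single pairwise default correlation below are hypothetical), the portfolio UL can be computed as the square root of the correlation-weighted sum of UL cross-products:

```python
import math

# Hypothetical unexpected losses (UL) for three loans, in $m
ul = [2.0, 3.0, 5.0]

def portfolio_ul(ul, rho):
    # UL_p = sqrt( sum_i sum_j rho_ij * UL_i * UL_j ), here using a single
    # pairwise default correlation rho for all distinct pairs
    total = 0.0
    for i in range(len(ul)):
        for j in range(len(ul)):
            r = 1.0 if i == j else rho
            total += r * ul[i] * ul[j]
    return math.sqrt(total)

print(sum(ul))                 # sum of individual ULs
print(portfolio_ul(ul, 0.3))   # diversified portfolio UL: less than the sum
print(portfolio_ul(ul, 1.0))   # perfect correlation: UL_p equals the sum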

189
Q

Which of the following statements is true:
I. Expected credit losses are charged to the unit’s P&L while unexpected losses hit risk capital reserves.
II. Credit portfolio loss distributions are symmetrical
III. For a bank holding $10m in face of a defaulted debt that it acquired for $2m, the bank’s legal claim in the bankruptcy court will be $10m.
IV. The legal claim in bankruptcy court for an over the counter derivatives contract will be the notional value of the contract.
(a) III and IV
(b) I, II and IV
(c) I and III
(d) II and IV

A

The correct answer is choice ‘c’

Statement I is true as expected losses are the ‘cost of doing business’ and charged against the P&L of the unit holding the exposure. When evaluating the business unit, expected losses are taken into account. Unexpected losses however require risk capital reserves to be maintained against them.

Statement II is not true. Credit portfolio loss distributions are not symmetrical, in fact they are highly skewed and have heavy tails.

Statement III is true. The notional, or the face value of a defaulted debt is the basis for a claim in bankruptcy court, and not the market value.

Statement IV is false. In the case of over the counter instruments, the replacement value of the contract represents the amount of the claim, and not the notional amount (which can be very high!).

190
Q

Concentration risk in a credit portfolio arises due to:

(a) A high degree of correlation between the default probabilities of the credit securities in the portfolio
(b) A low degree of correlation between the default probabilities of the credit securities in the portfolio
(c) Independence of individual default losses for the assets in the portfolio
(d) Issuers of the securities in the portfolio being located in the same country

A

The correct answer is choice ‘a’

Concentration risk in a credit portfolio arises due to a high degree of correlation between the default probabilities of the issuers of securities in the portfolio. For example, the fortunes of the issuers in the same industry may be highly correlated, and an investor exposed to multiple such borrowers may face ‘concentration risk’.

A low degree of correlation, or independence of individual defaults in the portfolio actually reduces or even eliminates concentration risk.

The fact that issuers are from the same country may not necessarily give rise to concentration risk - for example, a bank with all US based borrowers in different industries or with different retail exposure types may not face practically any concentration risk. What really matters is the default correlations between the borrowers, for example a lender exposed to cement producers across the globe may face a high degree of concentration risk.

191
Q

An investor enters into a 5-year total return swap with Bank A, with the investor paying a fixed rate of 6% annually on a notional value of $100m to the bank and receiving the returns of the S&P500 index with an identical notional value. The swap is reset monthly, ie the payments are exchanged monthly. On Jan 1 of the fourth year, after settling the last month’s payments, the bank enters bankruptcy. What is the legal claim that the investor has against the bank in the bankruptcy court?

(a) $6m
(b) $0, as all payments on the swap are current
(c) $100m
(d) The replacement value of the swap

A

The correct answer is choice ‘d’

According to ISDA standard definitions, the legal claim for OTC derivatives is the current replacement value of the contract. Therefore Choice ‘d’ is the correct answer. None of the other choices are correct.

192
Q

Which of the following is the most accurate description of EPE (Expected Positive Exposure):

(a) The average of the distribution of positive exposures at a specified future date
(b) The maximum average credit exposure over a period of time
(c) Weighted average of the future positive expected exposure across a time horizon.
(d) The price that would be received to sell an asset or paid to transfer a liability in an orderly transaction between market participants at the measurement date

A

The correct answer is choice ‘c’

When a derivative transaction is entered into, its value generally is close to zero. Over time, as the value of the underlying changes, the transaction acquires a positive or negative value. It is not possible to predict the future value of the transaction in advance, however distributional assumptions can be made and potential exposure can be measured in multiple ways. Of all the possible future exposures, it is generally positive exposures that are relevant to credit risk because that is the only situation where the bank may lose money from a default of the counterparty.

The maximum (generally a quantile eg, the 97.5th quantile) exposure possible over the time of the transaction is the ‘Potential Future Exposure’, or PFE.

The average of the distribution of positive exposures at a specified future date before the maturity of the longest transaction in the portfolio is called ‘Expected Exposure’, or EE.

The weighted average of the future positive Expected Exposure across a time horizon is called the EPE, or the ‘Expected Positive Exposure’.

The price that would be received to sell an asset or paid to transfer a liability in an orderly transaction between market participants at the measurement date - is the ‘fair value’, as defined under FAS 157.

Therefore the correct answer is that EPE is the weighted average of the future positive expected exposure across a time horizon.

193
Q

Which of the following formulae describes CVA (Credit Valuation Adjustment)? All acronyms have their usual meanings (LGD=Loss Given Default, ENE=Expected Negative Exposure, EE=Expected Exposure, PD=Probability of Default, EPE=Expected Positive Exposure, PFE=Potential Future Exposure)

(a)

LGD * EE * PD

(b)

LGD * PFE * PD

(c)

LGD * ENE * PD

(d)

LGD * EPE * PD

A

The correct answer is choice ‘d’

The correct definition of CVA is LGD * EPE * PD. All other answers are incorrect.

CVA reflects the adjustment for counterparty default on derivative and other trading book transactions. This represents the credit charge that needs to be deducted from the expected value of the transaction to determine its true value. It is calculated as the product of the loss given default, the probability of default, and the weighted average of future expected positive exposures across the time horizon for the transaction.

The future exposures need to be discounted to the present, and occasionally the equations for CVA will state that explicitly. Similarly, in some more advanced dynamic models the correlation between EPE and PD is also accounted for. The conceptual idea though remains the same: CVA = LGD × EPE × PD.
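As a minimal sketch of a time-bucketed version of this formula (all exposure, default probability and discount inputs below are hypothetical), CVA can be accumulated as the discounted sum of LGD × exposure × marginal PD per period:

```python
# Hypothetical inputs: expected positive exposure per period ($m), marginal
# default probabilities per period, flat LGD, and discount factors
epe = [4.0, 5.0, 3.0]
pd_marginal = [0.01, 0.015, 0.02]
df = [0.97, 0.94, 0.91]
lgd = 0.6

# CVA as the discounted sum of LGD * EPE(t) * PD(t) over the horizon
cva = sum(lgd * e * p * d for e, p, d in zip(epe, pd_marginal, df))
print(round(cva, 4))
```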

194
Q

Random recovery rates in respect of credit risk can be modeled using:

(a) the omega distribution
(b) the binomial distribution
(c) the beta distribution
(d) the normal distribution

A

The correct answer is choice ‘c’

The beta distribution is commonly used to model recovery rates. It is a distribution for variables whose values lie between 0 and 1, and the parameters of the distribution can be estimated using the mean and standard deviation of the data. Therefore Choice ‘c’ is correct and the others are wrong.
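As a sketch of how the beta parameters can be backed out of the mean and standard deviation (a method-of-moments fit; the recovery rate statistics below are hypothetical):

```python
# Method-of-moments fit of a beta distribution to recovery rate data.
# The sample mean and standard deviation below are hypothetical.
mean, sd = 0.4, 0.25
var = sd ** 2

# For Beta(a, b): mean = a / (a + b), var = a*b / ((a + b)^2 * (a + b + 1)).
# Inverting these moment equations gives the parameters:
common = mean * (1 - mean) / var - 1
a = mean * common
b = (1 - mean) * common
print(round(a, 3), round(b, 3))
```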

195
Q

Which of the following statements is true:
I. Recovery rate assumptions can be easily made fairly accurately given past data available from credit rating agencies.
II. Recovery rate assumptions are difficult to make given the effect of the business cycle, nature of the industry and multiple other factors difficult to model.
III. The standard deviation of observed recovery rates is generally very high, making any estimate likely to differ significantly from realized recovery rates.
IV. Estimation errors for recovery rates are not a concern as they are not directionally biased and will cancel each other out over time.
(a) III and IV
(b) I, II and IV
(c) II and IV
(d) II and III

A

The correct answer is choice ‘d’

Recovery rates vary a great deal from year to year, and are difficult to predict. Therefore statement III is true. Similarly, any attempt to predict these is hamstrung by a high standard error, which can be as high as the historical mean itself. The error does not cancel itself out due to the effect of the business cycle making the error directionally biased. Thus statement IV is false.

Statement II is true as these are all factors that make forecasting recovery rates for any credit risk model rather difficult. Statement I is false because recovery rates are difficult to predict and assumptions are not easy to make.

196
Q

A stock that follows the Wiener process has its future price determined by:
(a) its expected return alone
(b) its standard deviation and past technical movements
(c) its expected return and standard deviation
(d) its current price, expected return and standard deviation

A

The correct answer is choice ‘d’

The change in the price of a security that follows a Wiener process is determined by its standard deviation and expected return. To get the price itself, we need to add this change in price to the current price. Therefore the future price in a Wiener process is determined by all three of current price, expected return and standard deviation.

197
Q

Which of the following statements is true in respect of a non financial manufacturing firm?
I. Market risk is not relevant to the manufacturing firm as it does not take proprietary positions
II. The firm faces market risks as an externality which it must bear and has no control over
III. Market risks can make a comparative assessment of profitability over time difficult
IV. Market risks for a manufacturing firm are not directionally biased and do not increase the overall risk of the firm as they net to zero over a long term time horizon
(a) I and II
(b) III and IV
(c) IV only
(d) III only

A

The correct answer is choice ‘d’

Statement I is incorrect: even though a manufacturing firm does not take proprietary positions, it is still exposed to market risks, for example through commodity prices, interest rates and exchange rates. Statement II is incorrect as the firm is not helpless against market risks; it can manage them, for example by hedging. Statement III is true: movements in market prices can flatter or depress reported profitability across periods for reasons unrelated to the firm’s operations, making comparisons over time difficult. Statement IV is incorrect because market risks are not guaranteed to net to zero over any horizon.

198
Q

The risk management function is best organized as:

(a) reporting directly to the traders, so as to be closest to the point at which risks are being taken
(b) reporting independently of the risk taking functions
(c) integrated with the risk taking functions as risk management should be a pervasive activity carried out at all levels of the organization.
(d) a part of the trading desks and other risk taking teams

A

The correct answer is choice ‘b’

The point that this question is trying to emphasize is the independence of the risk management function. The risk function should be segregated from the risk taking functions so as to maintain independence and objectivity.

Choice ‘d’, Choice ‘a’ and Choice ‘c’ run contrary to this requirement of independence, and are therefore not correct. The risk function should report directly to senior levels, for example directly to the audit committee, and not be a part of the risk taking functions.

199
Q

Which of the following formulae describes Marginal VaR for a portfolio p, where V_i is the value of the i-th asset in the portfolio? (All other notation and symbols have their usual meaning.)
(a) MVaR_i = ∂VaR_p / ∂V_i
(b) MVaR_i = α × cov(R_i, R_p) / σ_p
(c) MVaR_i = (VaR_p / V_p) × β_i
(d) All of the above

A

The correct answer is choice ‘d’

Marginal VaR of a component of a portfolio is the change in the portfolio VaR from a $1 change in the value of the component. It helps a risk analyst who may be trying to identify the best way to influence VaR by changing the components of the portfolio. Marginal VaR is also important for calculating component VaR (for VaR disaggregation), as component VaR is equal to the marginal VaR multiplied by the value of the component in the portfolio.

Marginal VaR is by definition the partial derivative of the portfolio VaR with respect to the value of component i. This is reflected in Choice ‘a’ above. Using the definitions and relationships between correlation, covariance, beta and the volatility of the portfolio and/or the component, we can show that the other two choices are equivalent to Choice ‘a’.

Therefore all the choices present are correct.

200
Q

Which of the formulae below describes incremental VaR where a new position ‘m’ is added to the portfolio? (where p is the portfolio, and V_i is the value of the i-th asset in the portfolio. All other notation and symbols have their usual meaning.)

(a) ∂VaR_p / ∂V_m
(b) VaR_(p+m) − VaR_p
(c) MVaR_m × V_m
(d) VaR_p + VaR_m

A

The correct answer is choice ‘b’

Incremental VaR is the change in portfolio VaR resulting from the addition of a new position to the portfolio. This is accurately described by VaR_(p+m) − VaR_p. The other answers are incorrect, and describe other concepts.

It is important to know and understand the ideas behind MVaR (marginal VaR), CVaR (component VaR) and iVaR (incremental VaR), and the differences between them.

201
Q

Which of the following are true:
I. The total of the component VaRs for all components of a portfolio equals the portfolio VaR.
II. The total of the incremental VaRs for each position in a portfolio equals the portfolio VaR.
III. Marginal VaR and incremental VaR are identical for a $1 change in the portfolio.
IV. The VaR for individual components of a portfolio is sub-additive, ie the portfolio VaR is less than (or in extreme cases equal to) the sum of the individual VaRs.
V. The component VaR for individual components of a portfolio is sub-additive, ie the portfolio VaR is less than the sum of the individual component VaRs.
(a) II and IV
(b) II and V
(c) I and II
(d) I, III and IV

A

The correct answer is choice ‘d’

Statement I is true - component VaR for individual assets in the portfolio add up to the total VaR for the portfolio. This property makes component VaR extremely useful for risk disaggregation and allocation.

Statement II is incorrect: the incremental VaRs for the positions in a portfolio do not add up to the portfolio VaR; in fact their sum would be greater.

Statement III is correct. Marginal VaR for an asset or position in the portfolio is by definition the change in the VaR as a result of a $1 change in that position. Incremental VaR is the change in the VaR for a portfolio from a new position added to the portfolio - and if that position is $1, it would be identical to the marginal VaR.

Statement IV is correct, VaR is sub-additive due to the diversification effect. Adding up the VaRs for all the positions in a portfolio will add up to more than the VaR for the portfolio as a whole (unless all the positions are 100% correlated, which effectively would mean they are all identical securities which means the portfolio has only one asset).

Statement V is incorrect. As explained for Statement I above, the component VaRs add up to the VaR for the portfolio.

202
Q

Which of the following is not a measure of risk sensitivity of some kind?

(a) Delta
(b) CR01
(c) PL01
(d) Convexity

A

The correct answer is choice ‘c’

Measures of risk sensitivity include delta, gamma, vega, PV01, convexity and CR01, among others. They allow approximating the change in the value of a portfolio from a change (generally small) in one of the underlying risk factors.

Risk sensitivity measures are derivatives of the value of the portfolio, calculated with respect to the risk factor. Some risk sensitivity measures are second derivatives, which allow a more precise calculation of the change in the value of the portfolio. Many risk sensitivities are represented by Greek letters, but not all.

Delta (δ) is a measure of the change in portfolio value for a small change in the value of the underlying. Gamma (γ) is a second order derivative that improves the calculation as part of a Taylor expansion. CR01 is a measure of the change due to a 1 basis point change in the credit spread. PL01 is not a measure of any kind of risk sensitivity; it is a made-up term.

203
Q

Which of the following decisions need to be made as part of laying down a system for calculating VaR:
I. The confidence level and horizon
II. Whether portfolio valuation is based upon a delta-gamma approximation or a full revaluation
III. Whether the VaR is to be disclosed in the quarterly financial statements
IV. Whether a 10 day VaR will be calculated based on 10-day return periods, or for 1-day and scaled to 10 days
(a) I and III
(b) II and IV
(c) I, II and IV
(d) All of the above

A

The correct answer is choice ‘c’

While conceptually VaR is a fairly straightforward concept, a number of decisions need to be made to select between the different choices available for the exact mechanism to be used for the calculations.

The Basel framework requires banks to estimate VaR at the 99% confidence level over a 10 day horizon. Yet this is a decision that needs to be explicitly made and documented. Therefore ‘I’ is a correct choice.

At various stages of the calculations, portfolio values need to be determined. The valuation can be done using a ‘full valuation’, where each position is explicitly valued; or the portfolio(s) can be reduced to a handful of risk factors, and risk sensitivities such as delta, gamma, convexity etc be used to value the portfolio. The decision between the two approaches is generally based on computational efficiency, complexity of the portfolio, and the degree of exactness desired. ‘II’ therefore is one of the decisions that needs to be made.

The decision as to disclosing the VaR in financial filings comes after the VaR has been calculated, and is unrelated to the VaR calculation system a bank needs to set up. ‘III’ is therefore not a correct answer.

Though the Basel framework requires a 10-day VaR to be calculated, it also allows the calculation of a 1-day VaR scaled to 10 days using the square root of time rule. The bank needs to decide whether it wishes to scale the VaR based on a 1-day VaR number, or compute VaR for a 10 day period to begin with. ‘IV’ therefore is a decision to be made for setting up the VaR system.

204
Q

Which of the following decisions need to be made as part of laying down a system for calculating VaR:
I. How returns are calculated, eg absolute returns, log returns or relative/percentage returns
II. Whether VaR is calculated based on historical simulation, Monte Carlo, or is computed parametrically
III. Whether binary/digital options are included in the portfolio positions
IV. How volatility is estimated
(a) I and III
(b) II and IV
(c) I, II and IV
(d) All of the above

A

The correct answer is choice ‘c’

While conceptually VaR is a fairly straightforward concept, a number of decisions need to be made to select between the different choices available for the exact mechanism to be used for the calculations.

There is more than one way to calculate returns. Absolute returns may be relevant for risk factors where the size of the movement is unrelated to its current value. For other risk factors, the returns might scale with the size of the existing value of the risk factor, eg equity prices. The right return definition needs to be adopted for each risk factor, therefore ‘I’ is a correct choice.

The risk analyst has a choice between parametric VaR, Monte Carlo simulation, and historical simulation based VaR. ‘II’ therefore is one of the decisions that needs to be made (though historical simulation is the choice most often made).

The decision as to what to include in a portfolio is not a decision that is affected by choices made for VaR calculations. ‘III’ is therefore not a correct answer.

There are multiple ways to calculate volatility - including decisions on how long back in time to go for the data, and whether volatility clustering needs to be accounted for using EWMA or GARCH. Therefore ‘IV’ is a correct answer.

205
Q
Pick the underlying risk factors for a position in an equity index option:
I. Spot value for the index
II. Risk free interest rate
III. Volatility of the underlying
IV. Strike price for the option
 	(a)	I, II and III
 	(b)	II and III
 	(c)	I and IV
 	(d)	All of the above
A

The correct answer is choice ‘a’

The index option is affected by the spot value of the underlying index, as well as by the risk free interest rate, ie the zero rate for the duration of the option. It is also affected by the volatility of the underlying. The strike price is set and fixed at the time the option is purchased, and is therefore not a risk factor.

Therefore other than IV, all other choices are valid risk factors that underlie an equity index option.

Other instruments may have other risk factors - for example, a long forex position will have the spot exchange rate as the only risk factor.

206
Q

The sensitivity (delta) of a portfolio to a single point move in the value of the S&P500 is $100. If the current level of the S&P500 is 2000, and has a one day volatility of 1%, what is the value-at-risk for this portfolio at the 99% confidence and a horizon of 10 days? What is this method of calculating VaR called?

(a) $4,660, Monte Carlo simulation VaR
(b) $14,736, parametric VaR
(c) $4,660, parametric VaR
(d) $14,736, historical simulation VaR

A

The correct answer is choice ‘b’

If the current level of the S&P 500 is 2000, and a single day volatility is 1%, and the delta (ie change in portfolio value from a one point change) is $100, then the 1 day volatility for the portfolio in dollars is 2000 * 1% * $100 = $2,000.

At the 99% confidence level, the value of the inverse cumulative density function for the normal distribution is 2.33 (=NORMSINV(99%), in Excel). Therefore the 1 day VaR will be 2.33 * $2000 = $4,660. Extending it to 10 days using the square root of time rule, we get the 10 day VaR as equal to SQRT(10)*4660 = $14,736.

Since this method of calculating VaR relies upon a delta approximation of a risk factor (in this case the S&P500), it is the parametric approach to calculating VaR (the other methods being historical simulation, and Monte Carlo simulation).

The 2015 Handbook provides an excellent example of parametric (and other) VaR calculations in Chapter 3 of Volume III of Book 3. The spreadsheet used for the illustration can be downloaded from http://www.prmia.org/prm-exam/handbook-resources.
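The arithmetic can be sketched as below; note that using the exact 99% normal quantile (about 2.3263) instead of the rounded 2.33 gives figures slightly below $4,660 and $14,736:

```python
from math import sqrt
from statistics import NormalDist

delta = 100.0   # $ change in portfolio value per index point
spot = 2000.0   # current S&P500 level
vol_1d = 0.01   # one-day volatility

dollar_vol = spot * vol_1d * delta     # $2,000 one-day dollar volatility
z = NormalDist().inv_cdf(0.99)         # ~2.3263 (the card rounds to 2.33)
var_1d = z * dollar_vol
var_10d = sqrt(10) * var_1d            # square root of time scaling
print(round(var_1d), round(var_10d))
```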

207
Q

Which of the following situations are not suitable for applying parametric VaR:
I. Where the portfolio’s valuation is linearly dependent upon risk factors
II. Where the portfolio consists of non-linear products such as options and large moves are involved
III. Where the returns of risk factors are known to be not normally distributed
(a) I and III
(b) II and III
(c) I and II
(d) All of the above

A

The correct answer is choice ‘b’

Parametric VaR relies upon reducing a portfolio’s positions to risk factors, and estimating the first order changes in portfolio values from each of the risk factors. This is called the delta approximation approach. Risk factors include stock index values, or the PV01 for interest rate products, or volatility for options. This approach can be quite accurate and computationally efficient if the portfolio comprises products whose value behaves linearly to changes in risk factors. This includes long and short positions in equities, commodities and the like.

However, where non-linear products such as options are involved and large moves in the risk factors are anticipated, a delta approximation based valuation may not give accurate results, and the VaR may be misstated. Therefore in such situations parametric VaR is not advised (unless it is extended to include second and third level sensitivities which can bring its own share of problems).

Parametric VaR also assumes that the returns of risk factors are normally distributed - an assumption that is violated in times of market stress. So if it is known that the risk factor returns are not normally distributed, it is not advisable to use parametric VaR.

208
Q

Which of the following techniques is used to generate multivariate normal random numbers that are correlated?

(a) Markov process
(b) Simulation
(c) Cholesky decomposition of the correlation matrix
(d) Pseudo random number generator

A

The correct answer is choice ‘c’

A PRNG (a pseudo random number generator of the kind included in statistical packages and Excel) is used to generate random numbers that are not correlated with each other, ie they are independent. A Markov process is a stochastic model that depends only upon its current state. Simulation underlies many financial calculations. None of these directly relate to generating correlated multivariate normal random numbers. That job is done using a Cholesky decomposition of the correlation matrix.

Specifically, a Cholesky decomposition involves the factorization of the correlation matrix into a lower triangular matrix (a square matrix all of whose entries above the diagonal are zero) and its transpose. This can then be combined with random numbers to generate a set of correlated normal random numbers. This technique is used for calculating Monte Carlo VaR.
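A stdlib-only sketch for the two-factor case, where the Cholesky factor of the 2x2 correlation matrix is written out by hand and applied to independent normal draws (the target correlation is hypothetical):

```python
import math
import random

random.seed(42)

rho = 0.6  # target correlation between two risk factors (hypothetical)
n = 100_000

# The Cholesky factor of [[1, rho], [rho, 1]] is [[1, 0], [rho, sqrt(1-rho^2)]];
# applying it to independent normals z1, z2 yields the correlated pair (x1, x2)
xs, ys = [], []
for _ in range(n):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    xs.append(z1)
    ys.append(rho * z1 + math.sqrt(1 - rho ** 2) * z2)

# Sample correlation of the generated pairs should be close to rho
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
corr_hat = cov / (sx * sy)
print(round(corr_hat, 2))
```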

209
Q

Which of the following statements are true in relation to Monte Carlo based VaR calculations:
I. Monte Carlo VaR relies upon a full revaluation of the portfolio for each simulation
II. Monte Carlo VaR relies upon the delta or delta-gamma approximation for valuation
III. Monte Carlo VaR can capture a wide range of distributional assumptions for asset returns
IV. Monte Carlo VaR is less compute intensive than Historical VaR
(a) I and III
(b) II and IV
(c) I, III and IV
(d) All of the above

A

The correct answer is choice ‘a’

Monte Carlo VaR computations generally include the following steps:

  1. Generate multivariate normal random numbers, based upon the correlation matrix of the risk factors
  2. Based upon these correlated random numbers, calculate the new level of the risk factor (eg, an index value, or interest rate)
  3. Use the new level of the risk factor to revalue each of the underlying assets, and calculate the difference from the initial valuation of the portfolio. This is the portfolio P&L.
  4. Use the portfolio P&L to estimate the desired percentile (eg, 99th percentile) to get an estimate of the VaR.

Monte Carlo based VaR calculations rely upon full portfolio revaluations, as opposed to delta/delta-gamma approximations. As a result, they are also computationally more intensive. Because they are not limited by the range of instruments and the properties they can cover, they can capture a wide range of distributional assumptions for asset returns. They also tend to provide more robust estimates for the tail, including portions of the tail that lie beyond the VaR cutoff.

Therefore I and III are true, and the other two are not.
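The steps above can be sketched for a single hypothetical risk factor (an index position revalued in full under each simulated scenario; with one factor, step 1 needs no correlation matrix):

```python
import math
import random

random.seed(7)

# Hypothetical single risk factor: an equity index at 2000 with 1% daily
# volatility; the portfolio holds 50 units and is fully revalued per scenario
spot, vol, units = 2000.0, 0.01, 50.0
n = 100_000

pnl = []
for _ in range(n):
    z = random.gauss(0, 1)                                 # step 1: random draw
    new_spot = spot * math.exp(vol * z - 0.5 * vol ** 2)   # step 2: new factor level
    pnl.append(units * (new_spot - spot))                  # step 3: full revaluation P&L

pnl.sort()
var_99 = -pnl[int(0.01 * n)]                               # step 4: 99th percentile loss
print(round(var_99))
```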

210
Q

Which of the following statements are true in relation to Historical Simulation VaR?
I. Historical Simulation VaR assumes returns are normally distributed but have fat tails
II. It uses full revaluation, as opposed to delta or delta-gamma approximations
III. A correlation matrix is constructed using historical scenarios
IV. It particularly suits new products that may not have a long time series of historical data available

(a)

II

(b)

II and III

(c)

I and IV

(d)

All of the above

A

The correct answer is choice ‘a’

Historical Simulation VaR is conceptually very straightforward: actual prices as seen during the observation period (1 year, 2 years, or other) become the ‘scenarios’ forming the basis of the valuation of the portfolio. For each scenario, full revaluation is performed, and a P&L data set becomes available from which the desired loss quantile can be extracted.

Historical simulation is based upon actually seen prices over a selected historical period, therefore no distributional assumptions are required. The data is what the data is, and is the distribution. Statement I is therefore not correct.

It uses full revaluation for each historical scenario, therefore statement II is correct.

Since the prices are taken from actual historical observations, a correlation matrix is not required at all. Statement III is therefore incorrect (it would be true for Monte Carlo and parametric VaR).

Historical simulation VaR suffers from the limitation that if enough representative data points are not available during the historical observation period from which the scenarios are drawn, the results will be inaccurate. This is likely to be the case for new products. Therefore Statement IV is incorrect.
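A minimal sketch of the approach, using randomly generated returns purely as a stand-in for an observed history:

```python
import random

random.seed(1)

# Stand-in for an observed history of daily returns (randomly generated here
# for illustration; in practice these are actual market observations)
hist_returns = [random.gauss(0, 0.01) for _ in range(500)]

position_value = 1_000_000.0

# Revalue the position under each historical scenario to build the P&L set
pnl = sorted(position_value * r for r in hist_returns)

# 99% VaR: the loss exceeded in only 1% of the historical scenarios
var_99 = -pnl[int(0.01 * len(pnl))]
print(round(var_99))
```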

211
Q

Which of the following best describes the concept of marginal VaR of an asset in a portfolio:

(a) Marginal VaR is the change in the VaR estimate for the portfolio as a result of including the asset in the portfolio.
(b) Marginal VaR describes the change in total VaR resulting from a $1 change in the value of the asset in question.
(c) Marginal VaR is the value of the expected losses on occasions where the VaR estimate is exceeded.
(d) Marginal VaR is the contribution of the asset to portfolio VaR in a way that the sum of such calculations for all the assets in the portfolio adds up to the portfolio VaR.

A

The correct answer is choice ‘b’

Marginal VaR is just the change in total VaR from a $1 change in the value of the asset in the portfolio. All other answers are incorrect. Mathematically, it is the partial derivative MVaR_i = ∂VaR_p / ∂V_i, where VaR_p is the VaR for the portfolio and V_i is the value of the asset in question.

Other answers describe other VaR related concepts such as incremental VaR, Component VaR and Conditional VaR.

212
Q

Which of the following formulae correctly describes Component VaR? (p refers to the portfolio, and i is the i-th constituent of the portfolio. MVaR means Marginal VaR, and other symbols have their usual meanings.)
I. CVaR_i = MVaR_i × V_i
II. CVaR_i = VaR_p × β_i × w_i
III. MVaR_i = ∂VaR_p / ∂V_i

(a) III
(b) I and II
(c) II
(d) I

A

The correct answer is choice ‘b’

The first two formulae describe component VaR. The last formula is the formula for Marginal VaR. Therefore I and II is the correct answer.

Component VaR is a VaR decomposition technique that allows the total VaR for a portfolio to be broken down and attributed to the components of a portfolio. The total of the component VaR for each constituent of a portfolio is equal to the VaR for the portfolio. This property is extremely useful as opposed to the standalone VaR for each constituent taken alone as it can be used for allocating trading budgets.
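A two-asset parametric sketch (all position values, volatilities and the correlation are hypothetical) showing that the component VaRs add up exactly to the portfolio VaR:

```python
import math

# Two-asset parametric example; inputs below are hypothetical
v = [600_000.0, 400_000.0]   # position values
sig = [0.02, 0.03]           # daily volatilities
rho = 0.4                    # correlation between the two assets
alpha = 2.33                 # one-tailed 99% normal multiplier

# Covariance matrix of returns
cov = [[sig[0] ** 2, rho * sig[0] * sig[1]],
       [rho * sig[0] * sig[1], sig[1] ** 2]]

# Portfolio dollar volatility: sqrt(v' C v)
cv = [sum(cov[i][j] * v[j] for j in range(2)) for i in range(2)]
port_vol = math.sqrt(sum(v[i] * cv[i] for i in range(2)))
var_p = alpha * port_vol

# Marginal VaR per $1 of each position, and component VaR = MVaR_i * V_i
mvar = [alpha * cv[i] / port_vol for i in range(2)]
cvar = [mvar[i] * v[i] for i in range(2)]

# The component VaRs add up exactly to the portfolio VaR
print(round(var_p, 2), round(sum(cvar), 2))
```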

213
Q

A stock that follows the Wiener process has its future price determined by:

(a) its standard deviation and past technical movements
(b) its expected return alone
(c) its expected return and standard deviation
(d) its current price, expected return and standard deviation

A

The correct answer is choice ‘d’

The change in the price of a security that follows a Wiener process is determined by its standard deviation and expected return. To get the price itself, we need to add this change in price to the current price. Therefore the future price in a Wiener process is determined by all three of current price, expected return and standard deviation.

214
Q

A risk analyst performing PCA wishes to explain 80% of the variance. The first orthogonal factor has a volatility of 100, the second 40, and the third 30. Assume there are no other factors. Which of the factors will be included in the final analysis?

(a) First and Second
(b) First, Second and Third
(c) First
(d) Insufficient information to answer the question

A

The correct answer is choice ‘c’

The total variance of the system is 100^2 + 40^2 + 30^2 = 12500 (as variance = volatility squared). The first factor alone has a variance of 10,000, or 80%. Therefore only the first factor will be included in the final analysis, and the rest will be ignored.

Interestingly, this example highlights one of the limitations of PCA. Obviously, the second and third factors are material when considering volatility, though the effect of squaring them to get the variance makes them appear less important than they are.
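The arithmetic above can be checked quickly:

```python
# Variance explained by each orthogonal PCA factor, from factor volatilities.
vols = [100, 40, 30]
variances = [v ** 2 for v in vols]   # variance = volatility squared
total = sum(variances)               # 10000 + 1600 + 900 = 12500
shares = [var / total for var in variances]
print(shares)  # → [0.8, 0.128, 0.072]
```

The first factor alone reaches the 80% threshold, so the second and third factors (12.8% and 7.2%) are dropped despite their non-trivial volatilities.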

215
Q

Which of the following are valid criticisms of value at risk:
I. There are many risks that a VaR framework cannot model
II. VaR does not consider liquidity risk
III. VaR does not account for historical market movements
IV. VaR does not consider the risk of contagion
(a) I and III
(b) I, II and IV
(c) II and IV
(d) All of the above

A

The correct answer is choice ‘b’

Risks such as abrupt changes to a firm’s business model caused by legislation, the introduction of capital controls in foreign countries where a firm is invested, geo-political risks etc. are not modelable in the traditional sense. These risks cannot be modeled using VaR. Therefore statement I is correct.

VaR indeed does not consider liquidity risk, it is only concerned with the standard deviation of portfolio returns. Statement II is a valid criticism.

Statement III is not correct, as VaR can consider historical price movements.

Statement IV is correct, as VaR does not consider systemic risk or the risk of contagion.

216
Q

Which of the following was not a policy response introduced by Basel 2.5 in response to the global financial crisis:

(a) Comprehensive Risk Model (CRM)
(b) Comprehensive Capital Analysis and Review (CCAR)
(c) Incremental Risk Charge (IRC)
(d) Stressed VaR (SVaR)

A

The correct answer is choice ‘b’

The CCAR is a supervisory mechanism adopted by the US Federal Reserve Bank to assess capital adequacy for bank holding companies it supervises. It was not a concept introduced by the international Basel framework.

The other three were indeed rules introduced by Basel 2.5, which was ultimately subsumed into Basel III.

Stressed VaR is just the standard 99%/10 day VaR, calculated with the assumption that relevant market factors are under stress.

The Incremental Risk Charge (IRC) is an estimate of default and migration risk of unsecuritized credit products in the
trading book. (Though this may sound like a credit risk term, it relates to market risk - for example, a bond rated A being downgraded to BBB. In the old days, the banking book where loans to customers are held was the primary source of credit risk, but with OTC trading and complex products the trading book also now holds a good deal of credit risk. Both IRC and CRM account for these.)

While IRC considers only non-securitized products, the CRM (Comprehensive Risk Model) considers securitized products such as tranches, CDOs, and correlation based instruments.

The IRC, SVaR and CRM complement standard VaR by covering risks that are not included in a standard VaR model. Their results are therefore added to the VaR for capital adequacy determination.

217
Q

The 99% 10-day VaR for a bank is $200mm. The average VaR for the past 60 days is $250mm, and the bank specific regulatory multiplier is 3. What is the bank’s basic VaR based market risk capital charge?

(a) $250mm
(b) $600mm
(c) $200mm
(d) $750mm

A

The correct answer is choice ‘d’

The current Basel rules for the basic VaR based charge for market risk capital set market risk capital requirements as the maximum of the following two amounts:

  1. 99%/10-day VaR,
  2. Regulatory Multiplier x Average 99%/10-day VaR of the past 60 days

The ‘regulatory multiplier’ is a number between 3 and 4 (inclusive) calculated based on the number of 1% VaR exceedances in the previous 250 days, as determined by backtesting.

  • If the number of exceedances is <= 4, then the regulatory multiplier is 3.
  • If the number of exceedances is between 5 and 9, then the multiplier = 3 + 0.2*(N-4), where N is the number of exceedances.
  • If the number of exceedances is >=10, then the multiplier is 4.

So you can see that in most normal situations the risk capital requirement will be dictated by the multiplier and the prior 60-day average VaR, because the product of these two will almost always be greater than the current 99% VaR.

The correct answer therefore is = max(200mm, 3*250mm) = $750mm.

Interestingly, also note that a 99% VaR should statistically be exceeded 1%*250 days = 2.5 times, which means if the bank’s VaR model is performing as it should, it will still need to use a reg multiplier of 3.
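The capital rule and multiplier schedule described above can be sketched as follows:

```python
def regulatory_multiplier(exceedances):
    # Basel backtesting schedule: multiplier between 3 and 4, based on the
    # number of 1% VaR exceedances over the previous 250 days.
    if exceedances <= 4:
        return 3.0
    if exceedances <= 9:
        return 3.0 + 0.2 * (exceedances - 4)
    return 4.0

def market_risk_charge(current_var, avg_var_60d, exceedances):
    # Capital charge is the greater of the current 99%/10-day VaR and the
    # regulatory multiplier times the 60-day average VaR.
    return max(current_var, regulatory_multiplier(exceedances) * avg_var_60d)

print(market_risk_charge(200, 250, 2))  # → 750.0
```

With the 2-3 exceedances expected of a well-calibrated 99% model, the multiplier stays at its floor of 3, reproducing the $750mm answer above.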

218
Q
Which of the following belong to the family of generalized extreme value distributions:
I. Frechet
II. Gumbel
III. Weibull
IV. Exponential
 	(a)	IV
 	(b)	I, II and III
 	(c)	II and III
 	(d)	All of the above
A

The correct answer is choice ‘b’

Extreme value theory focuses on the extreme and rare events, and in the case of VaR calculations, it is focused on the right tail of the loss distribution. In very simple and non-technical terms, EVT says the following:

  1. Pull a number of large iid random samples from the population,
  2. For each sample, find the maximum,
  3. Then the distribution of these maximum values will follow a Generalized Extreme Value distribution.

(In some ways, it is parallel to the central limit theorem, which says that the mean of a large number of random samples pulled from any population follows a normal distribution, regardless of the distribution of the underlying population.)

Generalized Extreme Value (GEV) distributions have three parameters: ξ (shape parameter), μ (location parameter) and σ (scale parameter). Based upon the value of ξ, a GEV distribution may either be a Frechet, Weibull or a Gumbel. These are the only three types of extreme value distributions.

219
Q
Which of the following are valid approaches for extreme value analysis given a dataset:
I. The Block Maxima approach
II. Least squares approach
III. Maximum likelihood approach
IV. Peak-over-thresholds approach
 	(a)	II and III
 	(b)	I, III and IV
 	(c)	I and IV
 	(d)	All of the above
A

The correct answer is choice ‘c’

For EVT, we use the block maxima or the peaks-over-threshold methods. These provide us the data points that can be fitted to a GEV distribution.

Least squares and maximum likelihood are methods that are used for curve fitting, and they have a variety of applications across risk management.

220
Q

A risk analyst attempting to model the tail of a loss distribution using EVT divides the available dataset into blocks of data, and picks the maximum of each block as a data point to consider.

Which approach is the risk analyst using?

(a) Fourier transformation
(b) Block Maxima approach
(c) Peak-over-thresholds approach
(d) Expected loss approach

A

The correct answer is choice ‘b’

The risk analyst is using the block maxima approach. The data points that result will then be used to fit a GEV distribution.

Expected shortfall refers to the expected losses beyond a specified threshold. The peaks-over-threshold approach is an alternative to the block maxima approach, and involves considering exceedances above a threshold. Fourier transformation is not relevant in this context, and is a nonsensical option.
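The block maxima selection can be sketched as follows (the block size and loss data are illustrative):

```python
def block_maxima(losses, block_size):
    # Split the series into consecutive blocks and keep each block's maximum;
    # the maxima are then the data points fitted to a GEV distribution.
    return [max(losses[i:i + block_size])
            for i in range(0, len(losses), block_size)]

daily_losses = [3, 7, 2, 9, 1, 4, 8, 5, 6, 2, 3, 10]
print(block_maxima(daily_losses, 4))  # → [9, 8, 10]
```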

221
Q

The largest 10 losses over a 250 day observation period are as follows. Calculate the expected shortfall at a 98% confidence level:

20m
19m
19m
17m
16m
13m
11m
10m
9m
9m
 	(a)	16
 	(b)	19.5
 	(c)	18.2
 	(d)	14.3
A

The correct answer is choice ‘c’

For a dataset with 250 observations, the top 2% of the losses will be the top 5 observations. Expected shortfall is the average of the losses beyond the VaR threshold. Therefore the correct answer is (20 + 19 + 19 + 17 + 16)/5 = 18.2m .

Note that Expected Shortfall is also called conditional VaR (cVaR), Expected Tail Loss and Tail average.
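The calculation above can be sketched as:

```python
def expected_shortfall(losses, confidence, observations):
    # ES = average of the worst (1 - confidence) * observations losses.
    n_tail = int(round((1 - confidence) * observations))  # 2% of 250 = 5
    worst = sorted(losses, reverse=True)[:n_tail]
    return sum(worst) / n_tail

top_losses = [20, 19, 19, 17, 16, 13, 11, 10, 9, 9]  # in $mm
print(expected_shortfall(top_losses, 0.98, 250))  # → 18.2
```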

222
Q
Which of the following are considered properties of a 'coherent' risk measure:
I. Monotonicity
II. Homogeneity
III. Translation Invariance
IV. Sub-additivity
 	(a)	II and IV
 	(b)	I and III
 	(c)	II and III
 	(d)	All of the above
A

The correct answer is choice ‘d’

All of the properties described are the properties of a ‘coherent’ risk measure.

Monotonicity means that if a portfolio’s future value is expected to be greater than that of another portfolio, its risk should be lower than that of the other portfolio. For example, if the expected return of an asset (or portfolio) is greater than that of another, the first asset must have a lower risk than the other. Another example: between two options if the first has a strike price lower than the second, then the first option will always have a lower risk if all other parameters are the same. VaR satisfies this property.

Homogeneity is easiest explained by an example: if you double the size of a portfolio, the risk doubles. The linear scaling property of a risk measure is called homogeneity. VaR satisfies this property.

Translation invariance means adding riskless assets to a portfolio reduces total risk. So if cash (which has zero standard deviation and zero correlation with other assets) is added to a portfolio, the risk goes down. A risk measure should satisfy this property, and VaR does.

Sub-additivity means that the total risk for a portfolio should be less than the sum of its parts. This is a property that VaR satisfies most of the time, but not always. As an example, VaR may not be sub-additive for portfolios that have assets with discontinuous payoffs close to the VaR cutoff quantile.

223
Q

Which of the following is not a limitation of the univariate Gaussian model to capture the codependence structure between risk factors used for VaR calculations?

(a) Determining the covariance matrix becomes an extremely difficult task as the number of risk factors increases.
(b) A single covariance matrix is insufficient to describe the fine codependence structure among risk factors as non-linear dependencies or tail correlations are not captured.
(c) It cannot capture linear relationships between risk factors.
(d) The univariate Gaussian model fails to fit to the empirical distributions of risk factors, notably their fat tails and skewness.

A

The correct answer is choice ‘c’

In the univariate Gaussian model, each risk factor is modeled separately independent of the others, and the dependence between the risk factors is captured by the covariance matrix (or its equivalent combination of the correlation matrix and the variance matrix). Risk factors could include interest rates of different tenors, different equity market levels etc.

While this is a simple enough model, it has a number of limitations.

First, it fails to fit to the empirical distributions of risk factors, notably their fat tails and skewness. Second, a single covariance matrix is insufficient to describe the fine codependence structure among risk factors as non-linear dependencies or tail correlations are not captured. Third, determining the covariance matrix becomes an extremely difficult task as the number of risk factors increases. The number of covariances increases by the square of the number of variables.

But an inability to capture linear relationships between the factors is not one of the limitations of the univariate Gaussian approach - in fact it is able to do that quite nicely with covariances.

A way to address these limitations is to model the joint distribution of the risk factors directly, recognizing that correlation is not a static number across the entire range of outcomes and that risk factors can behave differently with each other in different regions of the distribution (for example, in the tails).

224
Q

As the persistence parameter under EWMA is lowered, which of the following would be true:

(a) The model will give lower weight to recent returns
(b) High variance from the recent past will persist for longer
(c) The model will react slower to market shocks
(d) The model will react faster to market shocks

A

The correct answer is choice ‘d’

The persistence parameter, λ, is the coefficient of the prior day’s variance in EWMA calculations. A higher value of the persistence parameter tends to ‘persist’ the prior value of variance for longer. Consider an extreme example - if the persistence parameter is equal to 1, the variance under EWMA will never change in response to returns.

1 - λ is the coefficient of recent market returns. As λ is lowered, 1 - λ increases, giving a greater weight to recent market returns or shocks. Therefore, as λ is lowered, the model will react faster to market shocks and give higher weights to recent returns, and at the same time reduce the weight on prior variance which will tend to persist for a shorter period.
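The EWMA recursion described above can be sketched as follows (the λ values and the shock size are illustrative):

```python
def ewma_variance(prev_variance, prev_return, lam):
    # EWMA recursion: sigma^2_t = lambda * sigma^2_{t-1} + (1 - lambda) * r^2_{t-1}
    return lam * prev_variance + (1 - lam) * prev_return ** 2

# A large return shock moves the variance estimate further when lambda is lower,
# i.e. a lower persistence parameter reacts faster to market shocks.
shock = 0.05                                   # a 5% one-day return
slow = ewma_variance(0.0001, shock, lam=0.97)  # high persistence
fast = ewma_variance(0.0001, shock, lam=0.90)  # low persistence
print(fast > slow)  # → True
```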

225
Q

Financial institutions need to take volatility clustering into account:
I. To avoid taking on an undesirable level of risk
II. To know the right level of capital they need to hold
III. To meet regulatory requirements
IV. To account for mean reversion in returns
(a) I, II and III
(b) I, II and IV
(c) II, III and IV
(d) I & II

A

The correct answer is choice ‘d’

Volatility clustering leads to levels of current volatility that can be significantly different from long run averages. When volatility is running high, institutions need to shed risk, and when it is running low, they can afford to increase returns by taking on more risk for a given amount of capital. An institution’s response to changes in volatility can be either to adjust risk, or capital, or both. Accounting for volatility clustering helps institutions manage their risk and capital and therefore statements I and II are correct.

Regulatory requirements do not require volatility clustering to be taken into account (at least not yet). Therefore statement III is not correct, and neither is IV which is completely unrelated to volatility clustering.

226
Q

As the persistence parameter under GARCH is lowered, which of the following would be true:

(a) The model will react faster to market shocks
(b) High variance from the recent past will persist for longer
(c) The model will give lower weight to recent returns
(d) The model will react slower to market shocks

A

The correct answer is choice ‘a’

The persistence parameter, β, is the coefficient of the prior day’s variance in GARCH calculations (the coefficient of the most recent day’s squared return is the reaction parameter, α). A higher value of the persistence parameter tends to ‘persist’ the prior value of variance for longer. Consider an extreme example - if β is close to 1 (with α close to zero), the variance under GARCH will barely change in response to returns. Conversely, as β is lowered, less weight sits on the prior day’s variance, and the model reacts faster to market shocks.
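A minimal sketch of the GARCH(1,1) recursion and its mean reversion, with illustrative parameter values:

```python
from math import sqrt

def garch_variance(prev_variance, prev_return, omega, alpha, beta):
    # GARCH(1,1): sigma^2_t = omega + alpha * r^2_{t-1} + beta * sigma^2_{t-1},
    # where alpha is the reaction coefficient and beta the persistence coefficient.
    return omega + alpha * prev_return ** 2 + beta * prev_variance

# With alpha + beta < 1 the model mean-reverts to omega / (1 - alpha - beta).
omega, alpha, beta = 0.000002, 0.05, 0.90  # illustrative parameter values
steady_state = omega / (1 - alpha - beta)  # = 0.00004

var = 0.001  # start well above the steady state
for _ in range(500):
    # feed returns at their expected magnitude (r^2 = sigma^2) so the
    # recursion converges deterministically to the unconditional variance
    var = garch_variance(var, sqrt(var), omega, alpha, beta)
print(abs(var - steady_state) < 1e-9)  # → True
```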

227
Q

The EWMA and GARCH approaches to volatility clustering can be applied to VaR calculations using:

(a) analytical VaR
(b) Monte Carlo simulations
(c) historical simulations
(d) all of the above

A

The correct answer is choice ‘d’

The EWMA and GARCH approaches to volatility clustering are independent of the method used to calculate VaR. Therefore Choice ‘d’ is the correct answer.

228
Q

Which of the following statements is true:
I. If the sum of its parameters is less than one, GARCH is a mean reverting model of volatility, while EWMA is never mean reverting
II. Standardized returns under both EWMA and GARCH show less non-normality than non standardized returns
III. Steady state variance under GARCH is affected only by the persistence coefficient
IV. Good risk measures are always sub-additive
(a) I, II and III
(b) II, III and IV
(c) I, II and IV
(d) I & II

A

The correct answer is choice ‘c’

GARCH is a mean reverting model of volatility, with a steady state mean that the model reverts to in the absence of market shocks. EWMA is not mean reverting and volatility under EWMA stays constant in the absence of shocks. Therefore statement I is correct.

Both EWMA and GARCH models are designed to address volatility clustering, which explains much of the non-normality of returns. When returns are standardized by the volatility estimates under either of these methods, they appear far closer to the normal distribution than non-standardized returns. (If they did not, there would be no point in using these techniques.) Statement II is correct.

Steady state variance under GARCH is defined as ω / (1 - α - β), where α and β are the two parameters (called the reaction and persistence coefficient respectively). Clearly, it is affected by more than just the persistence coefficient, therefore statement III is not correct.

Sub-additivity is a very desired property in risk measures. It implies that the risk of the whole is no greater than the sum of the risks of its parts - the whole is smaller than the sum of the parts due to diversification. Statement IV is true.

Therefore the correct answer is Choice ‘c’

229
Q

Conditional VaR refers to:

(a) the value at risk estimate for non-normal distributions
(b) expected average losses above a given VaR estimate
(c) value at risk when certain conditions are satisfied
(d) expected average losses conditional on the VaR estimates not being exceeded

A

The correct answer is choice ‘b’

Conditional VaR is the expected average losses above a given percentile, or a given VaR estimate at the given level of confidence. For example, if we know what the 99% VaR is, we still do not know what we can expect our losses to be if this VaR loss estimate were to be exceeded. Conditional VaR provides the answer to this question by providing an estimate of the average or expected losses beyond the 99% mark. Therefore Choice ‘b’ is the correct answer and the other choices are mostly nonsensical.

230
Q

Which of the following statements is true in relation to a normal mixture distribution:
I. The mixture will always have a kurtosis greater than a normal distribution with the same mean and variance
II. A normal mixture density function is derived by summing two or more normal distributions
III. VaR estimates for normal mixtures can be calculated using a closed form analytic formula
(a) II and III
(b) I, II and III
(c) I and III
(d) I and II

A

The correct answer is choice ‘d’

Normal mixtures have higher peaks, and therefore higher kurtosis than a normal distribution with an equivalent mean and variance. Therefore statement I is correct.

The term ‘normal mixture’ literally means that - the density is derived as a weighted sum of two or more normal densities. Statement II is correct. One interesting thing to note about normal mixtures is that their mean is the weighted average of the component means, and (when the component means coincide, e.g. are all zero) their variance is the weighted average of the component variances. But their kurtosis is higher than that of either of the components. They are more peaked, and have fatter tails, a property that makes them useful in finance.

Unfortunately there is no analytical formula for calculating VaR based on normal mixtures. However, we can back solve for VaR (using Excel’s Solver, for example), given we know the density functions for the underlying normal distributions. Statement III is not correct.
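The back-solving described above can be sketched with a simple bisection on the mixture CDF, here using Python's standard-library NormalDist; the two-regime weights and volatilities are purely illustrative.

```python
from statistics import NormalDist

def mixture_var(weights, means, sigmas, confidence=0.99):
    # Back-solve the VaR of a normal mixture by bisection: find the loss L
    # with P(return <= -L) = 1 - confidence under the mixture distribution.
    def cdf(x):
        return sum(w * NormalDist(m, s).cdf(x)
                   for w, m, s in zip(weights, means, sigmas))
    lo, hi = 0.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if cdf(-mid) > 1 - confidence:
            lo = mid  # tail probability still too large: VaR must be bigger
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical two-regime mixture: calm (90%) and stressed (10%) markets.
var99 = mixture_var([0.9, 0.1], [0.0, 0.0], [0.01, 0.05])
single = NormalDist(0.0, 0.01).inv_cdf(0.99)  # VaR of the calm regime alone
print(var99 > single)  # the mixture's fat tail raises the 99% VaR → True
```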

231
Q

Which of the following statements is true in relation to a normal mixture distribution:
I. Normal mixtures represent one possible solution to the problem of volatility clustering
II. A normal mixture VaR will always be greater than that under the assumption of normally distributed returns
III. Normal mixtures can be applied to situations where a number of different market scenarios with different probabilities can be expected
(a) III
(b) II and III
(c) I and II
(d) I, II and III

A

The correct answer is choice ‘a’

Normal mixtures address fat or heavy tails, not volatility clustering. Therefore statement I is not correct.

Statement II is not correct. Where VaR is calculated at low levels of confidence, VaR based on normal mixtures may be lower than that under a normal assumption. This is no different than for other fat tailed distributions.

Statement III is correct. In situations where multiple market scenarios can unfold with a given probability, and each scenario is normal, we can express the result with a normal mixture where the underlying normal distributions have the probabilities of the different scenarios.

232
Q

What is the combined VaR of two securities that are perfectly positively correlated?

(a) The difference of the two VaRs.
(b) The sum of the individual VaRs of the two securities.
(c) The root of the sum of squares of the individual VaRs of the two securities.
(d) Combined VaR cannot be derived using the available information.

A

The correct answer is choice ‘b’

Choice ‘b’ is the correct answer. When two securities have a correlation of +1, they are effectively the same security. In such cases, the standard deviations of the two securities are additive, which means the VaRs can simply be added together to get the combined VaR. All the other choices are incorrect.

Choice ‘c’ in particular would have been correct if the securities were completely uncorrelated, ie if they had a correlation of zero.

Choice ‘a’ would have been correct if their correlation were -1.
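All three cases follow from the standard two-asset VaR combination formula (assuming normally distributed returns):

```python
from math import sqrt

def combined_var(var1, var2, rho):
    # Two-asset VaR combination under normality:
    # VaR_p = sqrt(VaR1^2 + VaR2^2 + 2 * rho * VaR1 * VaR2)
    return sqrt(var1 ** 2 + var2 ** 2 + 2 * rho * var1 * var2)

print(combined_var(100, 80, 1))    # → 180.0 (perfect positive correlation: the sum)
print(combined_var(100, 80, 0))    # root of sum of squares, about 128.06
print(combined_var(100, 80, -1))   # → 20.0 (perfect negative correlation: the difference)
```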

233
Q

If the systematic VaR for an equity portfolio is $100 and the specific VaR is $80, then which of the following is true in relation to the total VaR:

(a) Total VaR is greater than $180
(b) Total VaR is $20
(c) Total VaR is less than $180
(d) Total VaR is $180

A

The correct answer is choice ‘c’

Choice ‘c’ is correct because VaR is sub-additive in cases where correlation is less than one.

Specific VaR refers to the risk in the portfolio from security selection, ie the risk from holding the specific equities in the portfolio, while systematic risk refers to the market risk. Definitionally, specific risk and systematic risk are uncorrelated, ie their correlation is zero. Since their correlation is zero, combining them produces sqrt($100^2 + $80^2) ≈ $128, which is lower than their standalone aggregate of $180. Total risk includes both specific risk and systematic risk, and can be calculated taking into account the specific and systematic VaRs and their correlation.

234
Q

Which of the following is additive, ie equal to the sum of its components

(a) Specific VaR
(b) Component VaR
(c) Incremental VaR
(d) Conditional VaR

A

The correct answer is choice ‘b’

Component VaR measures the proportion of total VaR that can be allocated to each asset in the portfolio. Each asset’s component VaR is its position value multiplied by its marginal VaR, and these contributions sum exactly to the portfolio VaR - which is what makes the measure additive. Component VaR is used to assess the contribution of each asset in the portfolio to total risk, and its additivity means some sense can be made of each asset’s contribution to total risk.

Specific VaR, incremental VaR and conditional VaR do not share this property - they do not sum to the portfolio VaR - and are therefore not the correct answer.

235
Q

Which of the following would not be a part of the principal component structure of the term structure of futures prices?

(a) Tilt component
(b) Curvature component
(c) Trend component
(d) Parallel component

A

The correct answer is choice ‘d’

The trend component refers to parallel shifts in the term structure, the tilt refers to changes in the shape of the term structure at the long and short ends, and the curvature refers to movements in the medium term part. The phrase ‘parallel component’ has no meaning and is not a part of the principal components in analyzing term structures.

Changes in the term structure can also be analyzed as “level, slope and curvature”, so you should be aware of this terminology as well to refer to the principal components of a term structure analysis.

236
Q

According to the Basel II standard, which of the following conditions must be satisfied before a bank can use ‘mark-to-model’ for securities in its trading book?
I. Marking-to-market is not possible
II. Market inputs for the model should be sourced in line with market prices
III. The model should have been created by the front office
IV. The model should be subject to periodic review to determine the accuracy of its performance
(a) II and III
(b) I, II and IV
(c) III and IV
(d) I, II, III and IV

A

The correct answer is choice ‘b’

According to Basel II, where marking-to-market is not possible, banks may mark-to-model, where this can be demonstrated to be prudent. Marking-to-model is defined as any valuation which has to be benchmarked, extrapolated or otherwise calculated from a market input. When marking to model, an extra degree of conservatism is appropriate. Supervisory authorities will consider the following in assessing whether a mark-to-model valuation is prudent:

  • Senior management should be aware of the elements of the trading book which are subject to mark to model and should understand the materiality of the uncertainty this creates in the reporting of the risk/performance of the business.
  • Market inputs should be sourced, to the extent possible, in line with market prices. The appropriateness of the market inputs for the particular position being valued should be reviewed regularly.
  • Where available, generally accepted valuation methodologies for particular products should be used as far as possible.
  • Where the model is developed by the institution itself, it should be based on appropriate assumptions, which have been assessed and challenged by suitably qualified parties independent of the development process. The model should be developed or approved independently of the front office. It should be independently tested. This includes validating the mathematics, the assumptions and the software implementation.
  • There should be formal change control procedures in place and a secure copy of the model should be held and periodically used to check valuations.
  • Risk management should be aware of the weaknesses of the models used and how best to reflect those in the valuation output.
  • The model should be subject to periodic review to determine the accuracy of its performance (e.g. assessing continued appropriateness of the assumptions, analysis of P&L versus risk factors, comparison of actual close out values to model outputs).
  • Valuation adjustments should be made as appropriate, for example, to cover the uncertainty of the model valuation.

The model should be created independent of the front office, and not by it. Therefore statement III does not represent an appropriate choice. Choice ‘b’ is the correct answer.

237
Q

If an institution has $1000 in assets, and $800 in liabilities, what is the economic capital required to avoid insolvency at a 99% level of confidence? The VaR in respect of the assets at 99% confidence over a one year period is $100.

(a) 1000
(b) 200
(c) 100
(d) 1100

A

The correct answer is choice ‘c’

The economic capital required to avoid insolvency is just the asset VaR, ie $100. This means that if the worst case losses are realized, the institution would need to have a buffer equivalent to those losses which in this case will be $100, and this buffer is the economic capital.

The actual value of liabilities is not relevant as they are considered ‘riskless’ from the institution’s point of view, ie they will be taken at full value. In this particular case, the institution has $200 in capital which is more than the economic capital required.

238
Q

Regulatory arbitrage refers to:

(a) the practice of transferring business and profits to jurisdictions (such as those in other countries) to avoid or reduce capital adequacy requirements
(b) the practice of structuring a financial institution’s business as a bank holding company to arbitrage the differing capital and credit rating requirements for different business lines
(c) the practice of investing and financing decisions being driven by associated regulatory capital requirements as opposed to the true underlying economics of these decisions
(d) All of the above

A

The correct answer is choice ‘c’

239
Q

If the returns of an asset display a strong tendency for mean reversion, what is the relationship between annualized volatility calculated based on daily versus weekly volatilities (using the square root of time rule)?

(a) Weekly volatility will be greater than daily volatility
(b) Either daily or weekly volatility will be greater, depending upon how the week went
(c) Daily volatility will be greater than weekly volatility
(d) Daily and weekly volatilities will be the same

A

The correct answer is choice ‘c’

If returns display mean reversion, annualized volatility computed from daily data will be greater than that computed from weekly data (both annualized using the square root of time rule). Mean reversion implies that deviations from the mean partly cancel out over longer horizons, so the weekly volatility is less than the daily volatility scaled by sqrt(5); annualizing the daily figure therefore produces the larger number.
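A quick numerical sketch of the square root of time rule; the daily and weekly volatilities below are hypothetical, chosen so the weekly figure sits below daily × sqrt(5), as mean reversion implies.

```python
from math import sqrt

# Annualizing volatility with the square root of time rule.
daily_vol = 0.012    # hypothetical daily return volatility
weekly_vol = 0.024   # hypothetical weekly vol; note 0.024 < 0.012 * sqrt(5) ≈ 0.0268

annual_from_daily = daily_vol * sqrt(252)   # ~252 trading days per year
annual_from_weekly = weekly_vol * sqrt(52)  # 52 weeks per year
print(annual_from_daily > annual_from_weekly)  # → True
```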

240
Q

Fill in the blank in the following sentence:
Principal component analysis (PCA) is a statistical tool to decompose a ____________ matrix into its principal components and is useful in risk management to reduce dimensions.
(a) Volatility
(b) Correlation
(c) Positive semi-definite
(d) Covariance

A

The correct answer is choice ‘c’

PCA is a statistical tool that decomposes a positive semi-definite matrix into its principal components. The first few principal components explain nearly all the variation and other components can then be ignored as they are too small in the larger picture. PCA in risk management is applied to a positive semi-definite correlation or covariance matrix to reveal the principal components that cause the variation. By allowing a focus on a few components, PCA reduces dimensionality.

While performing the math of PCA is unlikely to be asked in the PRMIA exam, you should remember that principal components have the additional property of being uncorrelated to each other which makes it useful as it is possible to vary one of the components without having to worry about the effect of that on the other components.
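For a simple 2x2 correlation matrix the eigen-decomposition has a closed form, which illustrates the variance-explained calculation without any linear algebra library; the correlation value is illustrative.

```python
# For the 2x2 correlation matrix [[1, rho], [rho, 1]] the eigen-decomposition
# has a closed form: eigenvalues 1 + rho and 1 - rho, with eigenvectors
# (1, 1)/sqrt(2) and (1, -1)/sqrt(2). The first ("common trend") component
# weights both variables equally.
rho = 0.8  # illustrative correlation between two term-structure rates
eigenvalues = [1 + rho, 1 - rho]

# Each eigenvalue divided by n (here n = 2) gives the proportion of total
# variation explained by that principal component.
n = 2
explained = [ev / n for ev in eigenvalues]
print([round(e, 4) for e in explained])  # → [0.9, 0.1]
```

The higher the correlation in the system, the more variation the first component captures - which is why PCA compresses highly correlated term structures so effectively.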

241
Q

Which of the following statements are true in relation to Principal Component Analysis (PCA) as applied to a system of term structures?
I. The factor weights on the first principal component will show whether there is common trend in the system
II. The factors to be applied to principal components are obtained from eigenvectors of the correlation matrix
III. PCA is a standard method for reducing dimensionality in data when considering a large number of correlated variables
IV. The smallest absolute eigenvalues and their associated eigenvectors are the most useful for explaining most of the variation
(a) I, II and III
(b) I and IV
(c) I and III
(d) II and IV

A

The correct answer is choice ‘a’

If you have not studied PCA prior to the preparing for the PRMIA exams, you will find the handbook to be a bit difficult to understand this topic. However, PRMIA have been asking questions about PCA on their exams so it is important to conceptually know what PCA is (and be able to answer the exam questions which are unlikely to ever require you to do a calculation).

Using PCA, a complex correlated system (eg, the rates in a term structure, which are all correlated to each other) is effectively transformed into a 'principal component representation'. A principal component representation expresses each of the variables in the system as a linear combination of 'principal components'. Principal components are derived from the eigenvectors of the correlation matrix. Remember that a positive semi-definite correlation matrix (which will be an n x n square matrix) has n eigenvectors, each with an eigenvalue. The eigenvectors are orthogonal (ie perpendicular) to each other. The eigenvalues determine which eigenvectors are the principal components - in descending order of the eigenvalues. Therefore the eigenvector with the highest eigenvalue is the first principal component, the one with the second-highest eigenvalue the second principal component, and so on. If there are n variables in the system, then each eigenvalue divided by n gives the proportion of the variation explained by that particular principal component.

In the case of a PCA done on the term structure of interest rates, the first principal component is the 'trend' component, and under a principal component representation its coefficients are given by the first eigenvector. (By the way, remember the next two principal components are tilt and curvature respectively.) If these coefficients are roughly the same (they are, for an interest rate term structure), all rates move up or down together when the value of the first principal component moves. Thus the factor weights on the first component show whether there is a common trend in the system. Statement I is correct.

The factors to be applied to the principal components are indeed obtained from the eigenvectors of the correlation matrix (and not the covariance matrix), and therefore statement II is correct.

PCA is used to reduce dimensionality, this is true and in fact one of the main reasons why PCA is used at all. Therefore statement III is correct.

Statement IV is false as it is the largest eigenvalues and not the smallest that determine the eigenvectors affecting most of the variation (through the principal components they represent).

Thus Choice ‘a’ is the correct answer.

242
Q

Which of the following is true in relation to Principal Component Analysis (PCA)?
I. An n x n positive definite square matrix will have n-1 eigenvectors
II. The eigenvalues for a correlation matrix can be derived from the corresponding values for the covariance matrix
III. Principal components are uncorrelated to each other
IV. PCA is useful as it allows 100% of the variation in a complex system to be explained by the first three principal components
(a) I and III
(b) III and IV
(c) I, II and IV
(d) III

A

The correct answer is choice ‘d’

An n x n positive definite square matrix will have n eigenvectors, and not n - 1. Therefore statement I is incorrect.

A correlation and covariance matrix are related to each other through the matrix of standard deviations. If the covariance matrix is represented by V, the correlation matrix by C and D is the diagonal matrix of standard deviations, then V = DCD. However, there is no simple relationship between the eigenvalues of the two matrices, and it is not possible to derive the eigenvalues for one given the eigenvalues for the other. Therefore statement II is false.

Principal components are uncorrelated to each other. That is correct, and in fact PCA is useful because of this being so. Statement III is therefore true.

PCA does not explain 100% of the variation in a system with just three components - statement IV is false. (Remember though that most (though not 100%) of the variation in a system of term structures is explained by the first three components - trend, tilt and curvature).

Thus Choice ‘d’ is the correct answer.

243
Q

What is the annualized steady state volatility under a GARCH model where alpha is 0.1, beta is 0.8 and omega is 0.00025?

(a) 0.08
(b) 0.1
(c) 0.0025
(d) 0.05

A

The correct answer is choice ‘d’

Steady state variance under the GARCH model is given by the formula ω/(1 - α - β). In this case, steady state variance therefore works out to 0.00025/(1 - 0.1 - 0.8) = 0.0025. Since this is the variance, volatility is the square root of 0.0025, which works out to 0.05.

Thus, 5% (=0.05) is the correct answer, and the others are incorrect.
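A quick sketch of this calculation in Python, using the question's parameters:

```python
import math

# GARCH(1,1) steady-state variance: omega / (1 - alpha - beta)
alpha, beta, omega = 0.1, 0.8, 0.00025

steady_state_variance = omega / (1 - alpha - beta)   # ≈ 0.0025
steady_state_vol = math.sqrt(steady_state_variance)  # ≈ 0.05
print(round(steady_state_vol, 4))
```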

244
Q

Which of the following statements is a correct description of the phrase present value of a basis point?

(a) It refers to the discounted present value of 1/100th of 1% of a future cash flow
(b) It is another name for duration
(c) It is the principal component representation of the duration of a bond
(d) It refers to the present value impact of 1 basis point move in an interest rate on a fixed income security

A

The correct answer is choice ‘d’

This is a trick question, no great science to it. Remember that the ‘present value of a basis point’ refers to PV01, which is the same as BPV (basis point value) referred to in the PRMIA handbook. In other textbooks, the same term is also variously called ‘DV01’ (dollar value of a basis point). Remember these other terms too.

PV01, or the present value of a basis point, is the change in the value of a bond (or other fixed income security) from a 1 basis point change in the yield. PV01 is calculated as (Price * Modified Duration/10,000).

245
Q

The returns for a stock have a monthly volatility of 5%. Calculate the volatility of the stock over a two-month period, assuming returns between months have an autocorrelation of 0.3.

(a) 10%
(b) 5%
(c) 8.062%
(d) 7.071%

A

The correct answer is choice ‘c’

The square root of time rule cannot be applied here because the returns across the periods are not independent. (Recall that the square root of time rule requires returns to be iid, independent and identically distributed.) Here there is autocorrelation at play, which means one period's returns affect the returns of the next period.

This problem can be solved by combining the variance of the returns from the two consecutive periods in the same way as one would combine the variance of different assets that have a given correlation. In such cases we know that:
Variance(A + B) = Variance(A) + Variance(B) + 2 × Correlation × StdDev(A) × StdDev(B).
The standard deviation can be calculated by taking the square root of the variance.

Therefore the combined volatility over the two months will be equal to =SQRT((5%^2) + (5%^2) + 2 × 0.3 × 5% × 5%) = 8.062%. All other answers are incorrect.
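A minimal Python check of this two-period combination, using the question's numbers:

```python
import math

# Combine monthly volatilities with autocorrelation, as for correlated assets:
# var(A + B) = var(A) + var(B) + 2 * rho * sd(A) * sd(B)
sd = 0.05    # monthly volatility
rho = 0.3    # autocorrelation between consecutive months

two_month_var = sd**2 + sd**2 + 2 * rho * sd * sd
two_month_vol = math.sqrt(two_month_var)
print(round(two_month_vol, 5))  # ≈ 0.08062, ie 8.062%
```

The same expression scales VaR directly as well, since VaR is a fixed multiple of the standard deviation.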

246
Q

The 10-day VaR of a diversified portfolio is $100m. What is the 20-day VaR of the same portfolio assuming the market shows a trend and the autocorrelation between consecutive periods is 0.2?

(a) 154.92
(b) 141.42
(c) 200
(d) 100

A

The correct answer is choice ‘a’

The square root of time rule cannot be applied here because the returns across the periods are not independent. (Recall that the square root of time rule requires returns to be iid, independent and identically distributed.) Here there is autocorrelation at play, which means one period's returns affect the returns of the next period.

VaR is merely a multiple of volatility (standard deviation), using the factor for the desired confidence level. If returns were independent, VaR could be combined across time periods using the square root of time rule, and we could easily have calculated the VaR for the 20-day period as $100m × SQRT(20/10) = $141.4m.

But in this case we need to account for the autocorrelation. We can do this in the same way that we combine the VaR of different assets that have a given correlation. We know that:
Variance(A + B) = Variance(A) + Variance(B) + 2 × Correlation × StdDev(A) × StdDev(B).
The standard deviation, which the VaR is a multiple of, can be calculated by taking the square root of the variance.

Therefore the combined VaR over the 20 days will be equal to =SQRT((100^2) + (100^2) + 2 × 0.2 × 100 × 100) = $154.92m. All other answers are incorrect.

247
Q

The daily VaR of an investor’s commodity position is $10m. The annual VaR, assuming daily returns are independent, is ~$158m (using the square root of time rule). Which of the following statements are correct?
I. If daily returns are not independent and show mean-reversion, the actual annual VaR will be higher than $158m.
II. If daily returns are not independent and show mean-reversion, the actual annual VaR will be lower than $158m.
III. If daily returns are not independent and exhibit trending (autocorrelation), the actual annual VaR will be higher than $158m.
IV. If daily returns are not independent and exhibit trending (autocorrelation), the actual annual VaR will be lower than $158m.
(a) II and III
(b) I and IV
(c) II and IV
(d) I and III

A

The correct answer is choice ‘a’

In the case of mean reversion, the actual VaR would be lower than that estimated using the square root of time rule. This is because gains over a period would be followed by losses so that the price can revert to the mean. In such cases, the autocorrelation between subsequent periods is effectively negative. This means the combined VaR over the periods would be lower.

In the case of positive autocorrelation, the actual VaR would be higher than that estimated using the square root of time rule for exactly the opposite reason than that described for the mean-reverting case.

(Recall that Variance(A + B) = Variance(A) + Variance(B) + 2 × Correlation × StdDev(A) × StdDev(B). In cases where correlation is zero, the variances can simply be added together (which is the case for iid observations). In cases where the correlation is negative, the combined variance (and therefore standard deviation and also VaR) will be lower; and where correlation is positive, the combined variance (and therefore standard deviation and also VaR) will be higher.)

248
Q

A risk analyst uses the GARCH model to forecast volatility, and the parameters he uses are ω = 0.001%, α = 0.05 and β = 0.93. Yesterday’s daily volatility was calculated to be 1%. What is the long term annual volatility under the analyst’s model?

(a) 0.25 %
(b) 0.22 %
(c) 7.94 %
(d) 3.54 %

A

The correct answer is choice ‘d’

The long term variance in a GARCH model is given by ω/(1 - α - β). Working in percent-squared units (ie entering ω = 0.001% as 0.001, consistent with a volatility measured in percent), this works out to =SQRT(0.001/(1 - 0.05 - 0.93)) × SQRT(250) = 3.54%. Yesterday's volatility of 1% is irrelevant to the question.
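A sketch of this calculation in Python, keeping ω in percent-squared units as the answer does (so the daily and annual volatilities come out in percent):

```python
import math

# Long-run variance under GARCH(1,1): omega / (1 - alpha - beta).
# omega = 0.001 here is in percent-squared units (ie 0.001%^2).
omega, alpha, beta = 0.001, 0.05, 0.93

daily_vol_pct = math.sqrt(omega / (1 - alpha - beta))  # ≈ 0.224 (percent)
annual_vol_pct = daily_vol_pct * math.sqrt(250)        # ≈ 3.54 (percent)
print(round(annual_vol_pct, 2))
```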

249
Q

A stock’s volatility under EWMA is estimated at 3.5% on a day its price is $10. The next day, the price moves to $11. What is the EWMA estimate of the volatility the next day? Assume the persistence parameter λ = 0.93.

(a) 0.0421
(b) 0.0429
(c) 0.0018
(d) 0.0224

A

The correct answer is choice ‘a’

Recall the formula for updating variance under EWMA: variance(t) = (1 - λ) × return(t-1)^2 + λ × variance(t-1). Therefore the correct answer is =SQRT((1 - 0.93) × (LN(11/10))^2 + 0.93 × (3.5%^2)) = 4.21%. The other answers are incorrect. Note that continuous returns are to be used, ie ln(11/10) and not discrete returns (= 1/10) - though generally the difference between the two is small over short time periods. (If in the exam the answer doesn't exactly match, try using discrete returns.)
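The EWMA update can be checked with a few lines of Python, using the question's numbers:

```python
import math

# EWMA variance update: var(t) = (1 - lam) * r^2 + lam * var(t-1)
lam = 0.93
prev_vol = 0.035
r = math.log(11 / 10)  # continuously compounded return

new_var = (1 - lam) * r**2 + lam * prev_vol**2
new_vol = math.sqrt(new_var)
print(round(new_vol, 4))  # ≈ 0.0421
```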

250
Q

An investor holds a bond portfolio with three bonds with a modified duration of 5, 10 and 12 years respectively. The bonds are currently valued at $100, $120 and $150. If the daily volatility of interest rates is 2%, what is the 1-day VaR of the portfolio at a 95% confidence level?

(a) 163.11
(b) 115.51
(c) 370
(d) 165

A

The correct answer is choice ‘b’

The total value of the portfolio is $370 (=$100 + $120 + $150). The modified duration of the portfolio is the weighted average of the MDs of the different bonds, ie =(5 * 100/370) + (10 * 120/370) + (12 * 150/370) = 9.46.

This means that for every 1% change in interest rates, the value of the portfolio changes by 9.46%. Since the daily volatility of interest rates is 2%, the 95% confidence level move will be 1.65 × 2% = 3.30%. Thus, the 1-day VaR of the portfolio at the 95% confidence level is 3.30% × 9.46 × $370 = $115.51.
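A minimal sketch of this calculation (the exact result is $115.50; the quoted $115.51 comes from first rounding the duration to 9.46):

```python
# Weighted-average modified duration, then VaR (z = 1.65 as in the answer)
values = [100, 120, 150]
durations = [5, 10, 12]

total = sum(values)                                              # 370
port_md = sum(d * v for d, v in zip(durations, values)) / total  # ≈ 9.46

rate_vol = 0.02   # daily volatility of interest rates
z = 1.65          # 95% confidence

var_95 = z * rate_vol * port_md * total
print(round(var_95, 2))
```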

251
Q

Which of the following represent the parameters that define a VaR estimate?

(a) confidence level, the holding period and expected volatility
(b) trading position and distribution assumption
(c) confidence level and the underlying stochastic process
(d) confidence level and the holding period

A

The correct answer is choice ‘d’

VaR is specified by just two parameters - the holding period, and the confidence level. We speak of, for example, a 10-day VaR at the 95% confidence level. No other parameters are required.

252
Q

In setting confidence levels for VaR estimates for internal limit setting, it is generally desirable:

(a) that actual losses exceed the VaR estimates with some reasonably observable frequency that is neither too high nor too low
(b) that actual losses very frequently exceed the VaR estimates
(c) that actual losses exceed the VaR estimates on only the rarest of occasions
(d) that actual losses never exceed the VaR estimates

A

The correct answer is choice ‘a’

If the confidence level for a VaR estimate is set too high, there may never be any exceedances, ie actual losses will never exceed the VaR estimates. For limit setting, we want actual losses to exceed the VaR estimates often enough during the year that the limits are taken seriously, but not so often that they lose meaning. If the VaR estimate is exceeded too many times, or never, then it is unlikely to be taken seriously. Therefore Choice 'a' is the correct answer.

The other answers are incorrect as they either require the VaR to be too high (ie zero or rare excess loss situations) or too low (ie there will be too many cases of excess loss situations to be taken seriously).

253
Q

Which of the following statements are true:
I. It is usual to set a very high confidence level when estimating VaR for capital requirements.
II. For model validation, very high VaR confidence levels are used to minimize excess losses.
III. For limit setting for managing day to day positions, it is usual to set VaR confidence levels that are neither too low to be exceeded too often, nor too high as to be never exceeded.
IV. The Basel accord requirements for market risk capital require the use of a time horizon of 1 year.
(a) I and III
(b) II and III
(c) I and IV
(d) III and IV

A

The correct answer is choice 'a'

Statement I is correct: very high confidence levels are the norm when estimating VaR for capital requirements, since capital is intended to cover losses in all but the most extreme circumstances. Statement II is incorrect: for model validation, lower confidence levels are preferred so that exceedances occur often enough to be observable and the model can actually be tested. Statement III is correct, for the reasons discussed in the previous question. Statement IV is incorrect: the Basel accord requirements for market risk capital are based on a 10-day holding period, not 1 year.

254
Q

The minimum ‘multiplication factor’ to be applied to VaR calculations for calculating the capital requirements for the trading book per Basel II is equal to:

(a) 4
(b) 1
(c) 3
(d) 2

A

The correct answer is choice ‘c’

The minimum multiplication factor specified under Basel II is 3. Therefore the correct answer is Choice ‘c’. The exact requirements are laid down below.

Each bank must meet, on a daily basis, a capital requirement expressed as the higher of (i) its previous day’s value-at-risk number measured according to the parameters specified in this section and (ii) an average of the daily value-at-risk measures on each of the preceding sixty business days, multiplied by a multiplication factor.

The multiplication factor will be set by individual supervisory authorities on the basis of their assessment of the quality of the bank’s risk management system, subject to an absolute minimum of 3. Banks will be required to add to this factor a “plus” directly related to the ex-post performance of the model, thereby introducing a built in positive incentive to maintain the predictive quality of the model. The plus will range from 0 to 1 based on the outcome of so-called “backtesting.”

255
Q

A statement in the annual report of a bank states that the 10-day VaR at the 95% level of confidence at the end of the year is $253m. Which of the following is true:
I. The maximum loss that the bank is exposed to over a 10-day period is $253m.
II. There is a 5% probability that the bank’s losses will not exceed $253m
III. The maximum loss in value that is expected to be equaled or exceeded only 5% of the time is $253m
IV. The bank’s regulatory capital assets are equal to $253m
(a) I and IV
(b) II and IV
(c) III only
(d) I and III

A

The correct answer is choice ‘c’

Statement I is not correct as VaR does not set an upper limit on losses. In this case, the bank expects losses to exceed $253m 5% of the time, and the VaR number does not indicate any theoretical maximum amount of losses.

Statement II is incorrect as there is a 95% (and not 5%) probability that the bank’s losses will not exceed $253m

Statement III is correct and describes VaR.

Statement IV is incorrect, as regulatory capital is a more complex computation for which VaR is only one of various inputs.

256
Q

A risk analyst analyzing the positions for a proprietary trading desk determines that the combined annual variance of the desk’s positions is 0.16. The value of the portfolio is $240m. What is the 10-day stand alone VaR in dollars for the desk at a confidence level of 95%? Assume 250 trading days in a year.

(a) 31488000
(b) 157440000
(c) 6297600
(d) 12595200

A

The correct answer is choice ‘a’

The z value at the 95% confidence level is 1.64. Since the variance is 0.16, the annual volatility is 40%. Therefore the 10-day volatility is 40% × √(10/250) = 8%. The 10-day VaR therefore is 8% × 1.64 × $240m = $31,488,000.
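A quick Python check of this calculation:

```python
import math

# 10-day stand-alone VaR from an annual variance
annual_variance = 0.16
portfolio = 240_000_000
z = 1.64  # 95% one-tailed z, as used in the answer

annual_vol = math.sqrt(annual_variance)         # 0.40
ten_day_vol = annual_vol * math.sqrt(10 / 250)  # 0.08
var_10d = z * ten_day_vol * portfolio
print(round(var_10d))  # ≈ 31,488,000
```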

257
Q

For a security with a daily standard deviation of 2%, calculate the 10-day VaR at the 95% confidence level. Assume expected daily returns to be nil.

(a) 0.1471
(b) 0.02
(c) 0.104
(d) None of the above.

A

The correct answer is choice ‘c’

If the daily standard deviation is 2%, the 10-day standard deviation will be 2%* √10 = 0.063245. The value of Z at the 95% confidence level is 1.64485. Therefore the VaR value is 1.64485 * 0.063245 = 10.4%. The other choices are incorrect.

258
Q

For a security with a daily standard deviation of 2%, calculate the 10-day VaR at the 95% confidence level. Assume expected daily returns to be nil.

(a) 0.02
(b) 0.1471
(c) 0.104
(d) None of the above.

A

The correct answer is choice ‘c’

If the daily standard deviation is 2%, the 10-day standard deviation will be 2%* √10 = 0.063245. The value of Z at the 95% confidence level is 1.64485. Therefore the VaR value is 1.64485 * 0.063245 = 10.4%. The other choices are incorrect.

259
Q

If the 1-day VaR of a portfolio is $25m, what is the 10-day VaR for the portfolio?

(a) $250m
(b) $79.06m
(c) $7.906m
(d) Cannot be determined without the confidence level being specified

A

The correct answer is choice ‘b’

The 10-day VaR is = $25m x SQRT(10) = $79.06m. Choice ‘b’ is the correct answer.

260
Q

If the annual variance for a portfolio is 0.0256, what is the daily volatility assuming there are 250 days in a year.

(a) 0.4048
(b) 0.0016
(c) 0.0101
(d) 0.0006

A

The correct answer is choice ‘c’

If annual variance is 0.0256, then annual volatility (ie standard deviation) is √0.0256. Therefore the daily volatility will be √0.0256/√250 = 1.01%.

261
Q

For identical mean and variance, which of the following distribution assumptions will provide a higher estimate of VaR at a high level of confidence?

(a) A distribution with kurtosis = 8
(b) A distribution with kurtosis = 2
(c) A distribution with kurtosis = 3
(d) A distribution with kurtosis = 0

A

The correct answer is choice ‘a’

A fat tailed distribution has more weight in the tails, and therefore at a high level of confidence the VaR estimate will be higher for a distribution with heavier tails. At relatively lower levels of confidence however, the situation is reversed as the heavier tailed distribution will have a VaR estimate lower than a thinner tailed distribution.

A higher level of kurtosis implies a ‘peaked’ distribution with fatter tails. Among the given choices, a distribution with kurtosis equal to 8 will have the heaviest tails, and therefore a higher VaR estimate. Choice ‘a’ is therefore the correct answer. Also refer to the tutorial about VaR and fat tails.

262
Q

Which of the following distribution assumptions will produce the lowest probability of exceeding an extreme value, assuming identical means and variances?

(a) a normal mixture distribution
(b) t-distribution
(c) a distribution with kurtosis = 5
(d) a normal distribution

A

The correct answer is choice ‘d’

An ‘extreme value’ will be a value that will lie in the tails. We need to determine the distribution that will have the least weight in the tails so that the probability of exceeding this tail value is minimum across the given choices.

The t-distribution, a distribution with kurtosis > 3 and a normal mixture distribution are all distributions with tails fatter than that for a normal distribution. A normal distribution will have the ‘thinnest’ tails among the choices and therefore the lowest probability of exceeding a given tail event value.

263
Q

An assumption of normality when returns data have fat tails leads to:
I. underestimation of VaR at high confidence levels
II. overestimation of VaR at low confidence levels
III. overestimation of VaR at high confidence levels
IV. underestimation of VaR at low confidence levels
(a) I and II
(b) I, II, III and IV
(c) I, II and III
(d) II, III and IV

A

The correct answer is choice ‘a’

When returns are non-normal and have fat tails, an assumption of normality in returns leads to underestimation of VaR at high confidence levels. At the same time, at lower confidence levels the normal distribution may give higher VaR estimates. Therefore Choice ‘a’ is correct. The other choices are incorrect.

264
Q

A Monte Carlo simulation based VaR can be effectively used in which of the following cases:

(a) When returns data cannot be analytically modeled
(b) When returns are discontinuous or display large jumps
(c) Where analytical methods are too complex to effectively use
(d) All of the above

A

The correct answer is choice ‘d’

Monte Carlo simulations can be effectively used in all cases where an analytical estimate of the VaR cannot be made for any reason - which may include complexity of portfolios, discontinuities or non-linearity in returns or just the plain unavailability of closed form analytical models. Therefore Choice ‘d’ is the correct answer.

265
Q

If μ and σ are the expected rate of return and volatility of an asset whose prices are log-normally distributed, and Ψ a random drawing from a standard normal distribution, we can simulate the asset’s returns using the expressions:

(a) μ - Ψ.σ
(b) -μ + Ψ.σ
(c) μ / Ψ.σ
(d) μ + Ψ.σ

A

The correct answer is choice ‘d’

A standard model for representing asset returns in finance is the Geometric Brownian Motion process, and returns according to this model can be estimated by the expression given in Choice ‘d’. Note that prices according to this model are log-normally distributed, and returns are normally distributed.
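A minimal simulation sketch of the expression μ + Ψσ, with hypothetical values for μ and σ; NumPy's generator supplies the standard normal draws Ψ:

```python
import numpy as np

# One-period returns simulated as mu + psi * sigma with psi ~ N(0, 1)
rng = np.random.default_rng(42)
mu, sigma = 0.08, 0.20   # hypothetical expected return and volatility
n = 100_000

psi = rng.standard_normal(n)
returns = mu + psi * sigma

# Sample moments should be close to the inputs
print(round(returns.mean(), 3), round(returns.std(), 3))
```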

266
Q

Which of the following are true:
I. Monte Carlo estimates of VaR can be expected to be identical or very close to those obtained using analytical methods if both are based on the same parameters.
II. Non-normality of returns does not pose a problem if we use Monte Carlo simulations based upon parameters and a distribution assumed to be normal.
III. Historical VaR estimates do not require any distribution assumptions.
IV. Historical simulations by definition limit VaR estimation only to the range of possibilities that have already occurred.
(a) I, II and III
(b) III and IV
(c) I, III and IV
(d) All of the above

A

The correct answer is choice ‘c’

Statement I is true. If a Monte Carlo simulation is based upon the same parameters as used for analytical VaR, and a sufficiently large number of simulations is carried out, we would get essentially the same results as with analytical VaR.

Statement II is false. We cannot use Monte Carlo simulations using parameters based upon a normal assumption when the underlying distribution is not normal. For example, if a return stream is based upon say a uniform distribution, we cannot use a simulation based upon drawings from a normal distribution even though we use the same mean and standard deviation.

Statement III is true. This is the advantage of historical simulations - no assumptions are necessary. (Historical simulations however often suffer from the great disadvantage of the paucity of data that would cover all possibilities.)

Statement IV is true. The results of historical simulations are limited to the data they are based upon.

267
Q

Monte Carlo simulation based VaR is suitable in which of the following scenarios:
I. When no assumption can be made about the distribution of underlying risk factors

II. When underlying risk factors are discontinuous, show heavy tails or are otherwise difficult to model
III. When the portfolio consists of a heterogeneous mix of disparate financial instruments with complex correlations and non-linear payoffs
IV. A picture of the complete distribution is desired in addition to the VaR estimate

(a) I, III and IV
(b) II, III and IV
(c) I, II and III
(d) III and IV

A

The correct answer is choice ‘b’

II, III and IV represent situations where Monte Carlo simulations can be employed. I is not a situation where Monte Carlo can be used, as there is no basis available to simulate the returns. When no distribution assumption is possible, it may be advisable to use historical simulations. Therefore Choice ‘b’ is the correct answer, and the others are incorrect.

268
Q

In the case of historical volatility weighted VaR, a higher current volatility when compared to historical volatility:

(a) will increase the confidence interval
(b) will not affect the VaR estimate
(c) will increase the VaR estimate
(d) will decrease the VaR estimate

A

The correct answer is choice ‘c’

When calculating volatility weighted VaR, returns are adjusted by a factor equal to the current volatility divided by the historical volatility, ie the volatility that existed during the time period the returns were earned. If the current volatility is greater than the historical volatility (also called contemporary volatility), then it has the effect of increasing the magnitude of any past returns (whether positive or negative). This in turn increases the VaR.

Consider an example: if the current volatility is 2%, and a return of -5% was earned at a time when the volatility was 0.8%, then the volatility weighted return would be -12.5% (= -5% × 2%/0.8%). Clearly, this has the effect of increasing the VaR.
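The volatility-weighting step can be sketched as:

```python
# Scale a past return by current volatility / volatility at the time
current_vol = 0.02       # 2%
historical_vol = 0.008   # 0.8% when the return was earned
past_return = -0.05      # -5%

weighted_return = past_return * current_vol / historical_vol
print(round(weighted_return, 4))  # -0.125, ie -12.5%
```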

269
Q

If the duration of a bond yielding 10% is 6 years, and the volatility of the underlying interest rates is 5% per annum, what is the 10-day VaR at 99% confidence of a bond position comprising just this bond with a value of $10m? Assume there are 250 days in a year.

(a) 233000
(b) 279600
(c) 139800
(d) 984000

A

The correct answer is choice ‘c’

The VaR of a fixed income instrument is given by Duration × Interest Rate × Volatility of the interest rate × z-factor corresponding to the confidence level. For this question, VaR = 6 × 10% × 5% × SQRT(10/250) × 2.33 × $10,000,000 = $139,800. Choice 'c' is the correct answer.
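A quick Python check of the bond VaR calculation (the result is in dollars, about $139,800):

```python
import math

# Bond VaR = duration * yield * yield volatility * sqrt(horizon) * z * value
duration = 6
y = 0.10          # yield
yield_vol = 0.05  # annual volatility of the yield (proportional)
value = 10_000_000
z = 2.33          # 99% confidence

var_10d = duration * y * yield_vol * math.sqrt(10 / 250) * z * value
print(round(var_10d))  # ≈ 139,800
```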

270
Q

The estimate of historical VaR at 99% confidence based on a set of data with 100 observations will end up being:

(a) the worst single observation in the data set
(b) the weighted average of the top 2.33 observations
(c) the extrapolated returns of the last 1.64 observations
(d) None of the above

A

The correct answer is choice ‘a’

The VaR in this case will be determined by the worst 1% of observations. Since there are exactly 100 observations, the single worst return becomes the VaR. Therefore Choice 'a' is the correct answer. Choice 'c' and Choice 'b' make no sense. This highlights that at higher confidence levels, fewer and fewer observations impact the VaR if we are using historical simulation based VaR.
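A minimal sketch with hypothetical simulated returns: with exactly 100 observations, the 99% historical VaR is simply the single worst return.

```python
import numpy as np

# 100 hypothetical daily returns; 99% historical VaR = single worst return
rng = np.random.default_rng(0)
returns = rng.normal(0, 0.02, size=100)

hist_var_99 = -np.sort(returns)[0]  # worst return, reported as a positive loss
print(round(hist_var_99, 4))
```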

271
Q

For the purposes of calculating VaR, an FRA can be modeled as a combination of:

(a) a fixed rate bond and a zero coupon bond
(b) two zero coupon bonds
(c) a zero coupon bond and a floating rate note
(d) a zero coupon bond and an interest rate swap

A

The correct answer is choice ‘b’

A forward rate agreement allows one of the parties to borrow an amount at a rate for a length of time, all of which are agreed in advance. Consider a “3 x 6” FRA. This allows a fixed rate borrowing starting at 3 months till the end of 6 months. This is economically equivalent to holding a zero coupon bond till the end of 6 months, and being short another zero coupon bond till the end of 3 months (or the other way round, depending upon which end of the FRA you are on). Therefore Choice ‘b’ is the correct answer.

272
Q

When the volatility of the yield for a bond increases, which of the following statements is true:

(a) The VaR for the bond increases and its value decreases
(b) The VaR for the bond decreases and its value increases
(c) The VaR for the bond decreases and its value is unaffected
(d) The VaR for the bond increases and its value stays the same

A

The correct answer is choice ‘d’

The VaR of a fixed income instrument is given by Duration x Volatility of the interest rate x z-factor corresponding to the confidence level. Therefore as the volatility of the yield goes up, the value at risk for the instrument goes up.

At the same time, the value of the bond is given by the present value of its future cash flows using the current yield curve. This value is unaffected by the volatility of the underlying interest rates. Therefore a change in volatility of interest rates does not affect the value of the bond.

273
Q

For the purposes of calculating VaR, an interest rate swap can be modeled as a combination of:

(a) a fixed rate bond and a zero coupon bond
(b) a zero coupon bond and an interest rate swap
(c) a fixed coupon bond and a floating rate note
(d) two zero coupon bonds

A

The correct answer is choice ‘c’

In an interest rate swap, the parties agree to exchanging interest rate payments, with one party being a fixed interest rate payer and the other paying floating rates. The party receiving fixed rates and paying floating can be considered to be long a fixed rate bond and short a floating rate note. Therefore an IRS can be modeled as a combination of a fixed coupon bond and a floating rate note.

274
Q

For an option position with a delta of 0.3, calculate VaR if the VaR of the underlying is $100.

(a) 33.33
(b) 130
(c) 100
(d) 30

A

The correct answer is choice ‘d’

The first order approximation of the VaR of an option position is nothing but the VaR of the underlying multiplied by the option’s delta. This is intuitive because the delta is the sensitivity of the option price to changes in the prices of the underlying, and in this case since the delta is 0.3 and the underlying’s VaR is $100, the VaR of the options position is 0.3 x $100 = $30. Therefore Choice ‘d’ is the correct answer.

(Note that the second order approximation of the VaR of an options position considers the option gamma too, and VaR reduces if gamma increases.)

275
Q

Between two options positions with the same delta and based upon the same underlying, which would have a smaller VaR?

(a) the position with a lower gamma
(b) the position with a higher theta
(c) both positions would have an identical VaR
(d) the position with a higher gamma

A

The correct answer is choice ‘d’

The second order approximation of the VaR of an options position is given by [Option delta x Underlying’s VaR - Option gamma/2 x (Underlying’s VaR)^2]. Therefore, a higher gamma reduces VaR and a lower gamma increases VaR.
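A sketch of the delta-gamma approximation quoted above, with hypothetical gamma values to show that higher gamma lowers the VaR:

```python
# Delta-gamma approximation: VaR ≈ delta * VaR_u - (gamma / 2) * VaR_u^2
def option_var(delta: float, gamma: float, underlying_var: float) -> float:
    return delta * underlying_var - 0.5 * gamma * underlying_var**2

# Same delta (0.3) and underlying VaR ($100), hypothetical gammas:
low_gamma = option_var(0.3, 0.001, 100.0)   # 30 - 5 = 25
high_gamma = option_var(0.3, 0.002, 100.0)  # 30 - 10 = 20
print(low_gamma, high_gamma)
```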

276
Q

The backtesting of VaR estimates under the Basel accord requires comparing the ex-ante VaR to:

(a) hypothetical profit and loss keeping the positions constant
(b) realized profit and loss for the period
(c) ex-ante VaR calculated for the subsequent periods
(d) the Basel accord does not require banks to backtest VaR estimates

A

The correct answer is choice ‘b’

Basel II requires financial institutions to compare their ex-ante VaR estimates to actual realized P&L. Therefore Choice ‘b’ is the correct answer. A bank may use hypothetical P&L based upon constant positions to validate its model, but that is not required for Basel II.

277
Q

Ex-ante VaR estimates may differ from realized P&L due to:
I. the effect of intra day trading
II. timing differences in the accounting systems
III. incorrect estimation of VaR parameters
IV. security returns exhibiting mean reversion
(a) I, II and IV
(b) I, II and III
(c) I and III
(d) II, III and IV

A

The correct answer is choice ‘b’

Ex-ante VaR calculations can differ from actual realized P&L due to a large number of reasons. I, II and III represent some of them. Mean reversion however has nothing to do with VaR estimates differing from actual P&L. Therefore Choice ‘b’ is the correct answer.

278
Q

A

The correct answer is choice ‘c’

Normal mixtures, EVT and the t-distribution are all possible solutions addressing the issue of heavy tails in financial returns.

EWMA and GARCH address volatility clustering, which is the other problem when doing risk calculations. Therefore Choice ‘c’ is the correct answer as EWMA is not used to address heavy tails but volatility clustering.

279
Q

For an equity portfolio valued at V whose beta is β, the value at risk at a 99% level of confidence is represented by which of the following expressions? Assume σ represents the market volatility.

(a) 1.64 x V x σ / β
(b) 1.64 x β x V x σ
(c) 2.326 x V x σ / β
(d) 2.326 x β x V x σ

A

The correct answer is choice ‘d’

For the PRM exam, it is important to remember the z-multiples for both the 99% and 95% confidence levels (2.326 and 1.645 respectively).

The value at risk for an equity portfolio is its standard deviation multiplied by the appropriate z factor for the given confidence level. If we knew the standard deviation, VaR would be easy to calculate. The standard deviation can be derived using a correlation matrix for all the stocks in the portfolio, which is not a trivial task. So we simplify the calculation using the CAPM and essentially say that the standard deviation of the portfolio is equal to the beta of the portfolio multiplied by the standard deviation of the market.

Therefore VaR in this case is equal to Beta x Mkt Std Dev x Value x z-factor, ie 2.326 x β x V x σ.
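The formula is easy to check with a short snippet (the portfolio value, beta and market volatility below are hypothetical figures, not from the question):

```python
# VaR of an equity portfolio via the CAPM simplification:
# VaR = z x beta x V x sigma(market), as derived above.

def equity_var(value: float, beta: float, market_vol: float, z: float = 2.326) -> float:
    return z * beta * value * market_vol

# Hypothetical figures: a $10m portfolio with beta 1.2 against 15% market vol
print(equity_var(10_000_000, 1.2, 0.15))
```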

280
Q

For a US based investor, what is the 10-day value-at risk at the 95% confidence level of a long spot position of EUR 15m, where the volatility of the underlying exchange rate is 16% annually. The current spot rate for EUR is 1.5. (Assume 250 trading days in a year).

(a) 1184400
(b) 526400
(c) 5922000
(d) 2632000

A

The correct answer is choice ‘a’

The VaR for a spot FX position is merely a function of the standard deviation of the exchange rate. If V is the value of the position (in this case, EUR 15m x 1.5 = USD 22.5m), z the z value associated with the desired level of confidence, and σ the standard deviation of the position, the VaR is given by z x σ x V.

In this case, the 10-day standard deviation is 16% x SQRT(10/250). Therefore the VaR is 1.645 x (15m x 1.5) x 16% x SQRT(10/250) = USD 1.1844m. Choice ‘a’ is the correct answer.
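The arithmetic can be verified with a short snippet using the figures from the question:

```python
# 10-day 95% VaR of a long EUR 15m spot position for a USD investor:
# 16% annual vol, spot rate 1.5, 250 trading days a year.

position_usd = 15_000_000 * 1.5            # EUR 15m at 1.5 = USD 22.5m
vol_10day = 0.16 * (10 / 250) ** 0.5       # scale annual vol by sqrt(10/250)
var_95 = 1.645 * vol_10day * position_usd  # z = 1.645 at 95% confidence
print(round(var_95))                       # USD 1,184,400
```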

281
Q

A portfolio’s 1-day VaR at the 99% confidence level is $250m. What is the annual volatility of the portfolio? (assuming 250 days in the year)

(a) $107.5m
(b) $2,410.3m
(c) $1,699.4m
(d) $3,952.8m

A

The correct answer is choice ‘c’

This is easy to calculate as follows: At the 99% confidence level, the VaR=2.326 * Std Deviation (remember the z values at the 95% and 99% levels, the PRMIA exam may not give you these values). Thus the 1-day standard deviation is $250m/2.326, and the 250-day standard deviation is √250 * ($250m/2.326) = $1,699.4m.

Remember: if you know the VaR, you know the standard deviation. Once you know the standard deviation for any period of time, you can convert it into standard deviation for another period using the square root of time rule. You can also calculate the VaR at a different confidence level too.
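The conversion can be checked numerically with the question's figures:

```python
# Working backwards from a 1-day 99% VaR of $250m to annual volatility,
# using VaR = z x sigma and the square root of time rule.

var_1d_99 = 250_000_000
sigma_1d = var_1d_99 / 2.326           # 1-day standard deviation
sigma_annual = sigma_1d * 250 ** 0.5   # scale to 250 days
print(round(sigma_annual / 1e6, 1))    # about $1,699.4m
```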

282
Q

The VaR of a portfolio at the 99% confidence level is $250,000 when mean return is assumed to be zero. If the assumption of zero returns is changed to an assumption of returns of $10,000, what is the revised VaR?

(a) 226740
(b) 273260
(c) 260000
(d) 240000

A

The correct answer is choice ‘d’

The exact formula for VaR is -(Zασ + μ), where Zα is the z-multiple for the desired confidence level, and μ is the mean return. Zα is always a negative number (at least as long as the desired confidence level is greater than 50%), and μ is often assumed to be zero because over the short time periods for which market risk VaR is calculated, its value is very close to zero.

Therefore in practice the formula for VaR just becomes -Zασ, and since Z is always negative, we normally just multiply the Z factor without the negative sign with the standard deviation to get the VaR.

For this question, there are two ways to get the answer. If we use the formula, we know that -Zασ= 250,000 (as μ=0), and therefore -Zασ - μ = 250,000 - 10,000 = $240,000.

The other, easier way to think about this is that if the mean changes, then the distribution’s shape stays exactly the same, and the entire distribution shifts to the right by $10,000 as the mean moves up by $10,000. Therefore the VaR cutoff, which was previously at -250,000 on the graph also moves up by 10k to -240,000, and therefore $240,000 is the correct answer.

The other choices are intended to confuse by multiplying the z-factor for the 99% confidence level with 10,000 etc.

283
Q

What is the 1-day VaR at the 99% confidence interval for a cash flow of $10m due in 6 months time? The risk free interest rate is 5% per annum and its annual volatility is 15%. Assume a 250 day year.

(a) 85123
(b) 5500
(c) 109031
(d) 1744500

A

The correct answer is choice ‘b’

The $10m cash flow due in 6 months is equivalent to a bond with a present value of $10m / (1.05)^0.5 = $9,759,000. Essentially, the question requires us to calculate the VaR of a bond.

The VaR of a fixed income instrument is given by Duration x Interest Rate x Volatility of the interest rate x z-factor corresponding to the confidence level.

In this case, since the question asks for the value “closest to” the correct answer, we can use the 0.5 year maturity of the cash flow as the duration (the modified duration, 0.5/1.05 = 0.476 years, would change the answer only slightly). The VaR is then 0.5 * 5% * 15% * 2.326 * SQRT(1/250) * 9,759,000 = $5,384, which is closest to $5,500. Therefore Choice ‘b’ is the correct answer. Note that we have to multiply by SQRT(1/250) because the given volatility is annual and the question asks for a daily VaR. All other answers are incorrect.
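The calculation can be checked step by step with the question's figures:

```python
# 1-day 99% VaR of a $10m cash flow due in 6 months: treat it as a
# zero coupon bond and apply
# VaR = duration x rate x rate volatility x z x sqrt(1/250) x PV.

pv = 10_000_000 / 1.05 ** 0.5           # present value, about $9.759m
duration = 0.5                          # 6 months, using the 0.5 year maturity
var = duration * 0.05 * 0.15 * 2.326 * (1 / 250) ** 0.5 * pv
print(round(var))                       # about $5,384, closest to $5,500
```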

284
Q

Calculate the 99% 1-day Value at Risk of a portfolio worth $10m with expected returns of 10% annually and volatility of 20%.

(a) 2326000
(b) 126491
(c) 294218
(d) 290218

A

The correct answer is choice ‘d’

Be wary of questions asking you to calculate VaR where the mean or expected return is different from zero. The usual VaR formula of z-value times standard deviation needs an adjustment for the expected return [ie VaR = z-value x standard deviation - expected return]. In this case, the 1-day standard deviation for the portfolio is SQRT(1/250) x 20% x $10m = $126,491. The VaR is therefore (2.326 * $126,491) - ($10,000,000 * 10% * 1/250) = $290,218.
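The mean-adjusted calculation, spelled out with the question's figures:

```python
# 99% 1-day VaR of a $10m portfolio with 20% annual vol and 10%
# expected annual return: VaR = z x sigma(1-day) - expected 1-day return.

value = 10_000_000
sigma_1d = value * 0.20 * (1 / 250) ** 0.5   # $126,491 daily std deviation
mu_1d = value * 0.10 / 250                   # $4,000 expected daily return
var = 2.326 * sigma_1d - mu_1d
print(round(var))                            # $290,218
```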

285
Q

Consider a portfolio with a large number of uncorrelated assets, each carrying an equal weight in the portfolio. Which of the following statements accurately describes the volatility of the portfolio?

(a) The volatility of the portfolio will be close to zero
(b) The volatility of the portfolio will be equal to the weighted average of the volatility of the assets in the portfolio
(c) The volatility of the portfolio will be equal to the square root of the sum of the variances of the assets in the portfolio weighted by the square of their weights
(d) The volatility of the portfolio is the same as that of the market

A

The correct answer is choice ‘c’

When assets are uncorrelated, variances are additive, but volatilities (standard deviations) are not. In the given situation, the total variance of the portfolio will be equal to the sum of the variances of the assets weighted by the squares of their weights, and the portfolio’s volatility will be the square root of this variance - which is exactly what Choice ‘c’ describes. Thus Choice ‘c’ is the correct answer.

(This is because V(cA + dB) = c^2 V(A) + d^2 V(B) - refer tutorial on combining variances.)
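A minimal numerical sketch of this variance-combination rule (the two-asset weights and volatilities below are hypothetical):

```python
# Volatility of a portfolio of uncorrelated assets: variances add,
# weighted by squared weights; volatility is the square root of the total.

def portfolio_vol(weights, vols):
    """Assumes zero correlation between all assets."""
    variance = sum(w ** 2 * v ** 2 for w, v in zip(weights, vols))
    return variance ** 0.5

# Hypothetical example: two uncorrelated assets, equal weights, 10% vol each
print(portfolio_vol([0.5, 0.5], [0.10, 0.10]))  # about 7.07%, not 10%
```

Note how the portfolio volatility (about 7.07%) is below the 10% weighted average of the individual volatilities: this is the diversification effect at work.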

286
Q

Which of the following assumptions underlie the ‘square root of time’ rule used for computing VaR estimates over different time horizons?
I. the portfolio is static from day to day
II. asset returns are independent and identically distributed (i.i.d.)
III. volatility is constant over time
IV. no serial correlation in the forward projection of volatility
V. negative serial correlations exist in the time series of returns
VI. returns data display volatility clustering
(a) I, II, V and VI
(b) I, II, III and IV
(c) I and II
(d) III, IV, V and VI

A

The correct answer is choice ‘b’

The square root of time rule can be used to convert, say, a 1-day VaR to a 10-day VaR by multiplying the known number by the square root of the ratio of the two horizons (here, √10). However, key assumptions underlie the application of this rule, and statements I to IV correctly state those assumptions.

Statements V and VI are not correct, because the application of the square root of time rule requires the absence of serial correlations, and also the absence of volatility clustering (ie independence). Therefore Choice ‘b’ is the correct answer.

The square root of time rule is also applied to convert volatility or standard deviation for one period to the volatility for a different time period. Remember that VaR is just a multiple of volatility, and therefore the assumptions that apply to the square root of time rule for VaR also apply to the same rule when used in the context of volatilities or standard deviation.

287
Q

The standard error of a Monte Carlo simulation is:

(a) The same as that for a lognormal distribution
(b) Proportional to the inverse of the square root of the sample size
(c) Zero
(d) None of the above

A

The correct answer is choice ‘b’

When we do a Monte Carlo simulation, the statistic we obtain (eg, the expected price) is an estimate of the real variable. The difference between the real value (which would be what we would get if we had access to the entire population) and that estimated by the Monte Carlo simulation is measured by the ‘standard error’, which is the standard deviation of the difference between the ‘real’ value and the simulated value (ie, the ‘error’).

As we increase the number of draws in a Monte Carlo simulation, the closer our estimate will be to the true value of the variable we are trying to estimate. But increasing the sample size does not reduce the error in a linear way, ie doubling the sample size does not halve the error, but reduces it by the inverse of the square root of the sample size. So if we have a sample size of 1000, going up to a sample size of 100,000 will reduce the standard error by a factor of 10 (and not 100), ie, SQRT(1/100) = 1/10. In other words, standard error is proportional to 1/√N, where N is the sample size.
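The 1/√N behavior can be demonstrated empirically; a small sketch that estimates the standard error of a sample mean at two sample sizes (the sample sizes and trial count are arbitrary choices for illustration):

```python
# Standard error of a Monte Carlo estimate shrinks as 1/sqrt(N):
# a 100x larger sample cuts the error roughly 10x. Demonstrated here by
# repeatedly estimating the mean of a standard normal from simulated draws.

import random
random.seed(42)

def mc_std_error(n: int, trials: int = 200) -> float:
    """Std deviation of the sample-mean estimator across repeated runs."""
    estimates = [sum(random.gauss(0, 1) for _ in range(n)) / n
                 for _ in range(trials)]
    mean = sum(estimates) / trials
    return (sum((e - mean) ** 2 for e in estimates) / trials) ** 0.5

se_small, se_large = mc_std_error(100), mc_std_error(10_000)
print(se_small / se_large)  # roughly 10, matching sqrt(10000/100)
```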

288
Q

The accuracy of a VaR estimate based on a Monte carlo simulation of portfolio prices is affected by:
I. The shape of the distribution of portfolio values
II. The number simulations carried out
III. The confidence level selected for the VaR estimate
(a) III
(b) II
(c) II and III
(d) I, II and III

A

The correct answer is choice ‘d’

VaR calculations look at the lower tail of the distribution of future portfolio values: if the desired confidence level is 95%, the cut-off for the VaR calculation will be at the bottom 5%; similarly at 1% for a 99% confidence level. The observations that end up in these bottom ranges will be few and sparse, and therefore their accuracy will generally be lower than, say, around the average, where observations are more concentrated. If the distribution of future portfolio values is not symmetrical and has a long left tail, this problem gets further exacerbated, as there may be even fewer and less reliable simulated values at the 5% or 1% quantiles. Thus the shape of the distribution will affect the accuracy of a VaR estimate. The distribution for a short option position, for example, will have a long left tail, and the VaR number will be quite significantly affected by a few simulations. On the other hand, for a long option position, where the long tail is to the right and the left tail of interest is better defined and bounded at zero, we are more likely to get a good VaR estimate. Therefore Statement I is correct.

The number of simulations carried out directly affects the standard error, which is inversely proportional to the square root of the sample size (ie the number of simulations). The accuracy of the VaR estimate can be increased by increasing the sample size (or reduced by reducing the sample size). Therefore Statement II is correct.

The confidence level selected for the VaR estimate also affects the accuracy of the estimate. To intuitively understand this, consider this extreme example where the desired confidence level is 99.9% and there are 1000 observations. Therefore the VaR will be determined by the last value in the sample, and will therefore be quite fickle and dependent upon what chance produces as the lowest value in the simulation. But if for the same sample the confidence level desired were to be 90%, there would be 100 observations beyond the 90% cut-off and this would be a much more stable and accurate number. Therefore the confidence level selected for the VaR estimate is also a determinant of the accuracy of the VaR estimate derived from the simulation. Statement III is correct.

289
Q

A long bond position is hedged using a short position in the futures market. If the hedge performs as expected, then which of the following statements is most accurate:

(a) the investor will be able to avoid losses but will also forgo the gains on his positions
(b) the investor will be able to avoid losses and will also be able to keep the gains on his positions
(c) the investor will be able to avoid losses
(d) None of the above

A

The correct answer is choice ‘a’

If the hedge performs as expected, then any P&L on the long bond position will be offset by identical losses (or gains) on the hedge.

Since hedges are never perfect, some residual risk will remain - for example basis risk, or the inability to enter into a fractional number of futures contracts. However, the bulk of the risk will be mitigated, and the investor will be able to avoid any losses but will also forgo any gains. Therefore Choice ‘a’ is the correct answer and the rest are incorrect.

290
Q

Which of the following attributes of an investment are affected by changes in leverage:

(a) risk and return
(b) Sharpe ratio
(c) Information ratio
(d) All of the above

A

The correct answer is choice ‘a’

Changing leverage does not affect the Sharpe ratio or the Information ratio. However, leverage magnifies both risk and return. Therefore Choice ‘a’ is the correct answer.

Recall that Sharpe ratio is the ratio of the excess returns (over the risk free rate) of an investment to its standard deviation, and the information ratio is the ratio of the ‘alpha’ returns to the standard deviation of such returns.

291
Q

For an investor with a long position in market index futures, which of the following is a primary risk:

(a) Movement in interest rates underlying the futures prices
(b) Risk that expected dividends will differ from realized dividend yields
(c) Increase or decrease in the level of the underlying index
(d) Basis risk between futures and spot prices

A

The correct answer is choice ‘c’

This question emphasizes the difference between primary and secondary risks. Primary risks are the risks consciously undertaken, ie the risks whose premium the investor is trying to earn. Secondary risks are risks that accompany the primary risks that the investor will either hedge, or will ignore if they are small. It is important to watch out for secondary risks because they could become significant and offset the returns being sought even if the investor’s market view is proved correct.

An investor in market index futures is betting that the index will rise. Index futures prices are largely driven by the spot value of the index, but are also affected by costs of carry. In particular, futures prices will be driven by interest rates, expected dividends, and any other factors that may cause the basis between spot and futures prices to diverge. These risks are secondary risks.

In this question, Choice ‘c’ represents the primary risk, and Choice ‘d’, Choice ‘a’ and Choice ‘b’ are all secondary risks.

292
Q

The diversification effect is responsible for:

(a) the super-additivity property of market risk VaR assessments
(b) VaR being applicable only to short term horizons
(c) total VaR numbers being greater than the sum of the individual VaRs for underlying portfolios
(d) the sub-additivity property of market risk VaR assessments

A

The correct answer is choice ‘d’

Any good risk measure has the property that it is sub-additive, which means the whole is less than the sum of the parts. In the case of VaR, sub-additivity arises due to the diversification effect, or said differently, due to the correlation between different assets being less than one. Therefore Choice ‘d’ is the correct answer.

Super-additivity is just the opposite of sub-additivity, ie, the whole is greater than the sum of the parts. Good risk measures do not have super-additivity. Therefore Choice ‘a’ is incorrect.

293
Q

Which of the following are true:
I. Delta hedges need to be rebalanced frequently as deltas fluctuate with fluctuating prices.
II. Portfolio managers are right to focus on primary risks over secondary risks.
III. Increasing the hedge rebalance frequency reduces residual risks but increases transaction costs.
IV. Vega risk can be hedged using options.
(a) II, III and IV
(b) I and II
(c) I, II, III and IV
(d) I, II and III

A

The correct answer is choice ‘c’

All of the given statements are correct.

For many securities such as bonds, options and other derivatives, value is a non-linear function of prices, so the delta itself changes as prices change, and any hedge initially undertaken quickly becomes mismatched. Therefore delta hedges need to be managed quite actively and kept up to date. Therefore I is true.

Primary risks comprise most of the risk in a position, and therefore portfolio managers are right to focus on them over secondary risks. Therefore II is true.

The greater the hedge rebalance frequency, the lower is the hedge mismatch at any point in time, and therefore residual risks would be lower. However, rebalancing hedges requires rebalance trades to be done, and these involve transaction costs. Generally, a reasonable balance needs to be struck between the frequency of rebalances (a lower frequency increases residual risk, but this residual risk is not directionally biased) and the costs of rebalancing. III is correct.

Vega risk is the risk arising due to changes in prices due to changes in volatility. Options carry vega risk. Therefore any hedges against vega risks can only be obtained using other options positions. (Vega risk may also be hedged using other volatility based products, eg an OTC volatility swap, or a VIX futures type product.)

294
Q

Which of the following introduces model error when basing VaR on a normal distribution with a static mean and standard deviation?

(a) Volatility clustering
(b) Heavy tails
(c) Autocorrelation of squared returns
(d) All of the above

A

The correct answer is choice ‘d’

When VaR is based on an assumption of normality with a static mean and volatility, it means anything that violates these assumptions will introduce model error. Volatility clustering implies a non-static volatility. Heavy tails imply non-normality of the shape of the distribution. Autocorrelation of squared returns implies that returns are not independent and identically distributed. Therefore all of these introduce model error.

295
Q

An asset has a volatility of 10% per year. An investment manager chooses to hedge it with another asset that has a volatility of 9% per year and a correlation of 0.9. Calculate the hedge ratio.

(a) 1.2345
(b) 0.81
(c) 1
(d) 0.9

A

The correct answer is choice ‘c’

The minimum variance hedge ratio answers the question of how much of the hedge to buy to hedge a given position. It minimizes the combined volatility of the primary position and the hedge. The minimum variance hedge ratio is given by ρ(x,y) x [σ(x) / σ(y)], where x is the position being hedged and y the hedging instrument. Effectively, this is the same as the beta of the primary position with respect to the hedge.

In this case, the hedge ratio is = 10%/9% * 0.9 = 1
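As a quick check with the question's figures:

```python
# Minimum variance hedge ratio: rho(x, y) x sigma(x) / sigma(y),
# ie the beta of the position being hedged with respect to the hedge.

def min_variance_hedge_ratio(sigma_x: float, sigma_y: float, rho: float) -> float:
    return rho * sigma_x / sigma_y

# Question's figures: 10% asset vol, 9% hedge vol, correlation 0.9
print(round(min_variance_hedge_ratio(0.10, 0.09, 0.9), 4))  # 1.0
```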

296
Q

Assuming all other factors remain the same, an increase in the volatility of the returns on the assets of a firm causes which of the following outcomes?

(a) A decrease in the value of the implicit put in the debt of the firm
(b) A decrease in the value of the non-callable debt issued by the firm
(c) An increase in the value of the callable debt of the firm
(d) An increase in the value of the equity of the firm

A

The correct answer is choice ‘b’

Some parts of this question draw upon contingent claims framework to the value of a firm. According to this framework, the relationship between the debt and equity holders of a firm can be viewed as follows: The equity holders have a call option on the assets of the firm with a strike price equal to the value of the debt, and the debt holders have sold them this call. This is so because should the value of the assets of the firm fall below the value of the debt, the equity holders can walk away by handing over the assets to the debt holders in full extinguishment of their claims. If the value of the firm’s assets is greater than the value of debt, the equity holders will exercise their call option. At the same time, it is also possible to view the debtholders as holding an asset and having sold a put on the assets of the firm with a strike price equal to the value of the debt. If the value of the assets of the firm were to fall below the value of the debt, they will end up buying the assets at a price equal to the value of the debt.

An increase in the volatility of returns on the assets of a firm increases the volatility of the asset value. This means the likelihood that the asset value will fall below the value of the firm’s debt increases. Callable debt can be viewed as the sum of two securities: a regular bond, on which the firm pays interest and receives principal; and a call option that the debt holders have sold to the firm, allowing the firm to buy back the debt, for which the firm pays an implied ‘premium’ to the holders of the callable debt. The total payment by the company to callable debt holders is therefore equal to the interest payments plus the premium for the option. An increase in asset volatility will increase the value of this option: it becomes more likely that the assets will rise in value, strengthening the company’s credit and lowering its spread, at which point the firm would want to repay the debt and refinance at the new lower rate. Therefore an increase in volatility will increase the ‘premium’ demanded by callable debt holders, raising the total yield and lowering the value of the callable debt. Therefore Choice ‘c’ (An increase in the value of the callable debt of the firm) is incorrect.

An increase in asset volatility will decrease the value of the firm as it is now riskier than before (higher standard deviation, same expected returns). Therefore Choice ‘d’ (An increase in the value of the equity of the firm) is false too.

The value of the implicit put in the debt of the firm will increase and not decrease as the volatility of the underlying assets increases. Therefore Choice ‘a’ (A decrease in the value of the implicit put in the debt of the firm) is incorrect too.

Choice ‘b’ (A decrease in the value of the non-callable debt issued by the firm) is correct because higher asset volatility will increase the riskiness of the company’s debt, making the required yield higher and decreasing its value.

297
Q

If A and B be two uncorrelated securities, VaR(A) and VaR(B) be their values-at-risk, then which of the following is true for a portfolio that includes A and B in any proportion. Assume the prices of A and B are log-normally distributed.

(a) The combined VaR cannot be predicted till the correlation is known
(b) VaR(A+B) = VaR(A) + VaR(B)
(c) VaR(A+B) > VaR(A) + VaR(B)
(d) VaR(A+B) < VaR(A) + VaR(B)

A

The correct answer is choice ‘d’

First of all, if prices are lognormally distributed, that implies the log returns (the differences of the log prices) are normally distributed. To say that prices are lognormally distributed is just another way of saying that returns are normally distributed.

Since the correlation between the two securities is zero, their variances can be added. But standard deviations, or volatilities, cannot be added (the combined volatility is the square root of the sum of the variances). VaR is nothing but a multiple of standard deviation, and therefore it is not additive unless the correlation is 1 (perfect positive correlation, which would imply we are dealing with the same risk). Therefore VaR(A+B) = SQRT(VaR(A)^2 + VaR(B)^2). This implies the combined VaR of a portfolio with these two securities will be less than the sum of the VaRs of the two individual securities.
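A quick numerical sketch of the zero-correlation case (the two VaR figures below are hypothetical):

```python
# Combined VaR of two uncorrelated positions: variances add, so
# VaR(A+B) = sqrt(VaR(A)^2 + VaR(B)^2) < VaR(A) + VaR(B).

def combined_var(var_a: float, var_b: float) -> float:
    """Assumes zero correlation and normally distributed returns."""
    return (var_a ** 2 + var_b ** 2) ** 0.5

# Hypothetical example: two positions with VaRs of $3m and $4m
print(combined_var(3_000_000, 4_000_000))  # $5m, less than the $7m sum
```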

298
Q

Which of the following statements is true in respect of different approaches to calculating VaR?
I. Linear or parametric VaR does not take correlations into account
II. For large portfolios with little or no optionality or other non-linear attributes, parametric VaR is an efficient approach to calculating VaR
III. For large portfolios with complex sources of risk and embedded optionalities, the full revaluation method of calculating VaR should be preferred
IV. Delta normal local revaluation based VaR is suitable for fixed income and option portfolios only
(a) III only
(b) I and IV
(c) I, II, III and IV
(d) II and III

A

The correct answer is choice ‘d’

This question is different in that it uses terminology you will not find in the PRMIA handbook. Yet it is important to understand these as there may be a question based on this slightly different terminology. (It is only the terminology that is different, the concepts are the same.)

If you read the PRMIA handbook, there are three methods of calculating VaR: Analytical or parametric, historical simulation and Monte Carlo simulations. There is one more way of categorizing the methods of calculating VaR, and these are as follows:

  1. Local valuation: This refers to analytical or parametric VaR. This relies upon a neat statistical formula to calculate VaR and assumes a normal distribution. It also relies upon a known covariance matrix between the different components of VaR. Local valuation based VaR is further subdivided into two types:
    a. Linear VaR: Linear VaR is calculated assuming the portfolio is linear, and its value changes just based upon the delta of the portfolio. In such cases, once a change (eg, in stock values) is known, that change is multiplied by the delta alone to get the VaR. Second order effects, such as gamma or convexity are ignored.
    b. Non-linear VaR: Non linear analytical VaR is calculated using both delta and the second derivative, ie gamma or the convexity. This is more accurate if the portfolio is non-linear.

The key thing about ‘local valuation’ VaR is that it does not require us to reprice or completely revalue all instruments in the portfolio. All we have to know is the delta (or the gamma and convexity as well) and multiply that by the number of standard deviations of change in the risk factor that we are interested in. So if we are considering a bond, we don’t have to recalculate the new value of the bond as we can just use the delta. This can be a significant computational advantage for a large financial institution with a large number of positions.

  2. Full revaluation: This refers to a VaR method where the asset in question is fully repriced based on the new value of the risk factor - and this includes both historical and Monte Carlo based VaR methods.

Local valuation, or analytical method based VaR, is computationally easier to calculate, especially if based on just the delta-normal method (ie ignoring second order effects from convexity or gamma). But it will give incorrect results if the portfolio includes substantial non-linearity or other complexities. The full revaluation methods will always give the correct results, but they can be computationally expensive.

Statement I is completely inaccurate - local valuation methods do take correlations into account through the correlation or covariance matrices. Statement IV is false too - ‘delta normal’ VaR refers to VaR calculations based on the delta alone, which do not account for convexity or optionality, so the method is unsuitable for fixed income and option portfolios. Statements II and III are correct. Therefore Choice ‘d’ is the correct answer.

299
Q

: An equity manager holds a portfolio valued at $10m which has a beta of 1.1. He believes the market may see a dip in the coming weeks and wishes to eliminate his market exposure temporarily. Market index futures are available and the current futures notional on these is $50,000 per contract. Which of the following represents the best strategy for the manager to hedge his risk according to his views?

(a) Sell 200 futures contracts
(b) Sell 220 futures contracts
(c) Buy 220 futures contracts
(d) Liquidate his portfolio as soon as possible

A

The correct answer is choice ‘b’

The number of futures contracts to sell is equal to $10m x 1.1 / $50,000 = 220. Liquidating his portfolio would reduce the beta to zero, but would also get rid of the bets he wants to keep. Therefore Choice ‘b’ is the correct answer.
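The contract count can be checked with the question's figures:

```python
# Number of index futures contracts to sell to neutralize market exposure:
# contracts = portfolio value x beta / futures notional per contract.

def contracts_to_sell(value: float, beta: float, notional: float) -> float:
    return value * beta / notional

# Question's figures: $10m portfolio, beta 1.1, $50,000 notional per contract
print(round(contracts_to_sell(10_000_000, 1.1, 50_000)))  # 220
```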

300
Q

When estimating the risk of a portfolio of equities using the portfolio’s beta, which of the following is NOT true:

(a) relies upon the single factor CAPM model
(b) using the beta significantly eases the computational burden of calculating risk
(c) use of the beta assumes that the portfolio is diversified enough so that the specific risks of the individual stocks offset each other
(d) explicitly considers specific risk inherent in the portfolio for risk calculations

A

The correct answer is choice ‘d’

Using the beta for VaR calculations is a significant simplification based on the CAPM and the assumption that any specific risks are diversified away. The one thing a risk model based on the CAPM does not consider is the specific risk of individual stocks, because, as mentioned, these are considered to be offsetting each other so that the portfolio only carries market risk reflected in the beta.

301
Q

Which of the following risks were not covered in detail in most stress tests prior to the current crisis:
I. The behavior of complex structured products under stressed liquidity conditions
II. Pipeline or securitization risk
III. Basis risk in relation to hedging strategies
IV. Counterparty credit risk
V. Contingent risks
VI. Funding liquidity risk
(a) I, II, III, IV and VI
(b) II, III and V
(c) I, IV and VI
(d) All of the above

A

The correct answer is choice ‘d’

The BCBS publication ‘Principles for sound stress testing practices and supervision’ (May 2009) identifies all of the above as risks that were covered in insufficient detail in most stress tests prior to the current crisis. Therefore Choice ‘d’ is the correct answer.

For the PRM exam, you should have read this document. You should also be familiar with all the above risk types as being contributors to the crisis, and know what each of these mean.

302
Q

Which of the following statements is the most appropriate description of feedback effects:

(a) The spread of contagion from the bankruptcy of one participant leading to a similar outcome for other market participants
(b) The lack of a comprehensive view of risk across credit, market and liquidity risks leading to an underestimation of correlations that tend to spike up in the event of a crisis
(c) The amplification of smaller initial shocks to one risk factor creating larger subsequent shocks through system-wide interactions between other risks, creating self-perpetuating downward stresses in the markets
(d) The revision of stress testing scenarios based upon management, business unit and regulatory feedback on the plausibility or otherwise of stress scenarios.

A

The correct answer is choice ‘c’

Choice ‘c’ (The amplification of smaller initial shocks to one risk factor creating larger subsequent shocks through system-wide interactions between other risks, creating self-perpetuating downward stresses in the markets) is the most comprehensive description of ‘feedback effects’, as described in the BCBS document on stress testing. Choice ‘a’ is one manifestation of feedback effects, but does not describe the entire effect. Choice ‘b’ is not a description of ‘feedback effects’, but one of the various weaknesses in stress testing that was seen during the crisis. Choice ‘d’ is plain nonsensical.

The BCBS paper provides a good and succinct description of feedback effects: how mortgage default shocks led to a deterioration of market prices of CDOs, followed by a drying up of the liquidity in these markets. This led to banks having to hold on to assets they intended to securitize (securitization and warehousing risk), and given the absence of transparency on who was exposed to what, banks refusing to lend to each other and a drying up of the wholesale funding market as well. All of this was additionally accompanied by a general flight to quality, with households withdrawing money from money market funds and creating a crisis in that market as well. At each stage, the initial shock was amplified and fed back into the system through interactions that had not been imagined by any market participant or regulator, let alone risk managers.

303
Q

Who has the ultimate responsibility for the overall stress testing programme of an institution?

(a) Business Unit leaders
(b) The Board
(c) The Risk Committee
(d) Senior Management

A

The correct answer is choice ‘b’

According to the first principle set out by the BCBS paper on stress testing, the Board has ultimate responsibility for the overall stress testing programme. Senior management is accountable for the implementation, management and oversight of the programme, but the overall responsibility stays with the Board of Directors of the institution. Therefore Choice ‘b’ is the correct answer.

Additionally, this principle lays down that stress testing should be a part of the overall governance, ie support the strategic choices made as part of business planning; and be integrated with the risk management culture of the bank, ie stress tests should be used as an input for setting the risk appetite, exposure limits, and the capital and liquidity planning processes of the bank.

304
Q

Which of the following statements are true:
I. Stress testing, if exhaustive, can replace traditional risk management tools such as value-at-risk (VaR)
II. Stress tests can be particularly useful in identifying risks with new products
III. Stress testing is distinct from a bank’s ICAAP carried out periodically
IV. Stress testing is a powerful communication tool that can convey risks to decisionmakers in an organization
(a) II and IV
(b) I and III
(c) I, II and III
(d) All of the above

A

The correct answer is choice ‘a’

Stress testing provides an independent and complementary perspective to other risk management tools such as value-at-risk and economic capital. Both serve similar purposes but are not interchangeable. Stress testing, no matter how exhaustively done, cannot replace other tools such as those based on analytical or historical models. It can provide a useful sense check to validate models and assumptions, but is not a replacement for traditional techniques. Therefore statement I is false.

Stress testing can certainly help identify risks with new products for which historical data may be limited, and analytical models may be based upon many unproven assumptions. It can help challenge the risk characteristics of new products where stress situations have not been observed in the past. Therefore statement II is correct.

ICAAP stands for the ‘internal capital adequacy assessment process’ performed by a bank (remember the acronym and its expansion). Stress testing is an integral part of a firm’s ICAAP, and not distinct. It is one of the elements of the internal process. Therefore statement III is false.

Statement IV is correct as stress testing is indeed a powerful tool that can communicate risks throughout the organization as the stress scenarios are easier to comprehend than arcane statistical models. They are also easier to explain to regulators, and are a powerful communication tool.

305
Q

A bank evaluates the impact of large and severe changes in certain risk factors on its risk using a quantitative valuation model. Which of the following best describes this exercise?

(a) Stress testing
(b) Scenario analysis
(c) Sensitivity analysis
(d) Simulation

A

The correct answer is choice ‘b’

It is important to note the difference between sensitivity analysis and stress testing. Sensitivity analysis applies to measuring the effect of changes on the outputs of a model by varying the inputs - generally one input at a time.

In scenario analysis, a number of variables may be changed at the same time to see the impact on the dependent variable. For example, a bank may measure the changes in the value of its mortgage portfolio by varying its assumptions on prepayment expectations, interest rates and other factors, using its modeling software or application. The changes in the inputs may or may not relate to integrated real world situations that may arise. Sensitivity analysis is purely a quantitative exercise, much like calculating the delta of a portfolio.

A stress test may include shocks or large changes to input parameters but it does so as part of a larger stress testing programme that generally considers the interaction of risk factors, past scenarios etc. At its simplest, a stress test may be no different from a sensitivity analysis exercise, but that is generally not what is considered a stress test at large financial institutions.

A stress test may consider multiple scenarios, for example one scenario may include the events witnessed during the Asian crisis, another may include the events of the recent credit crisis. Simulation generally refers to a Monte Carlo or historical simulation, and is often a more limited exercise.

The exercise described in the question is the closest to a scenario analysis, therefore Choice ‘b’ is the correct answer.

It is important to note that all of the choices referred to in this question are related to each other, and the boundaries between them tend to be fuzzy. At what point a complex sensitivity analysis starts resembling a scenario, or a stress test, can always be debated, but such a debate would be more about semantics than of any practical use.

306
Q

Which of the following are attributes of a robust stress testing programme at a bank?

(a) Written policies and procedures
(b) Data of appropriate quality and granularity
(c) Robust systems infrastructure
(d) All of the above

A

The correct answer is choice ‘d’

A bank’s stress testing programme in relation to firm wide stress tests should document the type, frequency and the purpose of the programme, as well as methodologies for defining scenarios and the remedial actions envisaged. Choice ‘a’ is therefore a necessary attribute of a robust stress testing programme.

The programme should be supported by a robust systems infrastructure that allows the execution of periodic as well as ad-hoc stress tests at the right level (business unit, as well as firm-wide) at the right level of detail or granularity. Choice ‘c’ also therefore is a valid choice.

A related element is data quality - without which no stress tests can be credible. Choice 'b' is therefore also necessary, making Choice 'd' the correct answer.

307
Q

Which of the following is not an example of a risk concentration?

(a) Large combined positions in assets affected by different risk factors that are highly correlated
(b) Origination of a large number of SIVs with exposures to the same asset class, where the SIVs are separate legal entities without recourse to the originator
(c) Location of a portfolio's assets in a single country but spread across different industries
(d) Material amounts of treasury obligations held as collateral provided by a single counterparty

A

The correct answer is choice ‘d’

Choice ‘c’ represents a risk concentration due to excessive exposure to a single country, even though spread across different industries as the risk factors (economy, exchange rate, interest rate, political risk etc) are the same for all companies in the country.

Choice ‘a’ represents a risk concentration because even though the risk factors are different, they are highly correlated and therefore effectively behave as one. These undetected correlations proved to be fatal to many financial institutions during the credit crisis.

Choice ‘b’ represents a risk concentration as was borne out by the recent credit crisis. Large banks had to take over the obligations of SIVs they had created, even though the SIVs were separate legal entities with no legally enforceable recourse to the originating bank. This had to be done for moral and reputational reasons, and banks had to absorb the losses of these supposedly separate vehicles.

Choice ‘d’ does not represent a risk concentration, in fact it is not a risk at all because it refers to collateral held, even though the collateral may have been provided by the same counterparty. In this case the risk is to the party providing the collateral (in case the party holding the collateral rehypothecates or sells the collateral and is unable to return it).

308
Q

Which of the following correctly describes a reverse stress test:

(a) A stress test that considers only qualitative factors that go beyond mathematical modeling to examine feedback loops and the effect of macro-economic fundamentals
(b) Stress tests that are prescribed and conducted by a regulator in addition to the tests done by a bank
(c) A stress test that requires a role reversal between risk managers and the risk taking business units in order to determine credible scenarios
(d) Stress tests that start from a known stress test outcome and then ask what events could lead to such an outcome for the bank

A

The correct answer is choice ‘d’

Generally, stress tests consider a shock or a severe scenario in order to determine what would happen if that circumstance were to materialize. They focus on the outcome based upon a set of shocks. In a reverse stress test, the outcome is assumed to be known (generally something as severe as bankruptcy, non-compliance with capital requirements etc), and the test is intended to work out what shocks or events would lead to such an outcome.

Reverse stress tests therefore start from a known stress test outcome (such as breaching regulatory capital ratios, illiquidity or insolvency) and then ask what events could lead to such an outcome for the bank. This can be quite a challenging task. Principle 9 laid out in the BCBS document on stress testing (May 2009) (which is part of the PRM syllabus effective March 1, 2010) lays down the expectations relating to reverse stress tests.

309
Q

Which of the following statements are true:
I. Stress tests should consider simultaneous pressures in funding and asset markets, and the impact of a reduction in liquidity
II. Judging the effectiveness of risk mitigation techniques is not a part of stress testing
III. A reverse stress test is useful for discovering hidden vulnerabilities and inconsistencies in hedging strategies
IV. Reputational risk, which is explicitly excluded from the definition of operational risk under Basel II, should still be considered as part of stress tests.
(a) I and III
(b) II and IV
(c) I, III and IV
(d) All of the above

A

The correct answer is choice ‘c’

All the statements in this question are directly based on the principles for effective stress testing as laid down in the BCBS document on stress testing issued in May 2009. Statement I is correct and is an almost verbatim reproduction of principle 10 as laid down in that document. Statement II is incorrect as it is contrary to principle 11 laid down in the same document. Statement III is correct as discovering hidden vulnerabilities and inconsistencies in hedging strategies is one of the objectives of reverse stress tests. Similarly, even though reputational risk is not really covered under any risk category under Basel II (as it is not a part of market, credit or operational risk), principle 14 of the paper requires stress tests to consider the mitigation of reputational risk's spill-over effects on market confidence.

310
Q

In January, a bank buys a basket of mortgages with a view to securitize them by April. Due to an unexpected lack of investors in the securitization market, it is unable to do so and is left with the exposure to the mortgages on its books. This is an example of:

(a) Market risk
(b) Wrong-way risk
(c) Pipeline and warehousing risk
(d) Basis risk

A

The correct answer is choice ‘c’

This is an example of pipeline and warehousing risk. Generally there is a lag between acquiring assets and securitizing them due to the legal work to be done, the work to be done by the ratings agencies and in finding investors. During this period, the bank is exposed to the underlying assets purchased, and this is the ‘pipeline and warehousing’ risk as these assets are in the pipeline and warehoused for intended subsequent sale. Generally this period tends to be short. However, during the credit crisis this became a significant source of risk as many banks were left exposed to risk they had intended to get rid of, but could not do so as the market dried up. The other choices are all incorrect.

Note that pipeline and warehousing risk is also known as 'securitization risk'. It reflects the fact that funding from securitization cannot always be relied upon.

311
Q

As part of designing a reverse stress test, at what point should a bank’s business plan be considered unviable (ie the point where it can be considered to have failed)?

(a) When the realization of risks leads market participants to lose confidence in the bank as a counterparty or a business worthy of funding
(b) Where EBITDA for the year is forecast to be negative
(c) Where large known losses have been incurred on the bank's positions
(d) When the regulatory capital of the bank has been exhausted

A

The correct answer is choice ‘a’

As part of a reverse stress test, a firm has to identify and assess the scenarios most likely to cause it to fail, or in other words, using the language used by the FSA in the UK, for its current business plan to become unviable. A firm's business plan should be considered to become unviable at the point that crystallizing risks cause the market to lose confidence in it, with the consequence that counterparties and other stakeholders are unwilling to transact with it or provide capital to the firm and, where relevant, that existing counterparties may seek to terminate their contracts. Recent experience suggests that this point is reached well before a firm's regulatory capital is exhausted.

Large known losses, or negative EBITDA (earnings before interest, tax, depreciation and amortization), may be indicators of or contribute to the loss of confidence, but do not of themselves make the current business plan unviable. Therefore Choice 'a' is the correct answer.

312
Q

Which of the following are valid objectives of a reverse stress test:
I. Ensure that a firm can survive for long enough after risks have materialized for it to either regain market confidence, restructure or be sold, or be closed down in an orderly manner,
II. Discover the vulnerabilities of the current business plan,
III. Better integrate business and capital planning,
IV. Create a ‘zero-failure’ environment at the systemic level in the financial sector
(a) I, II and III
(b) II and III
(c) I and IV
(d) All of the above

A

The correct answer is choice ‘a’

Statement I is true. According to the statement CP08/24: Stress and scenario testing (December 2008) issued by the FSA in the UK, an underlying objective of reverse stress tests is to ensure that a firm can survive long enough after risks have crystallized for one of the following to occur:

  • the market decides that its lack of confidence is unfounded and recommences transacting with the firm;
  • the firm down-sizes and re-structures its business;
  • the firm is taken over, or its business is transferred in an orderly manner; or
  • public authorities take the firm over, or wind down its business in an orderly manner.

Statements II and III are true. The same statement clarifies the intention of the reverse stress testing requirement, which is to encourage firms to: explore more fully the vulnerabilities of their business model (including 'tail risks'); make decisions that better integrate business and capital planning; and improve their contingency planning.

Statement IV is incorrect. Since statements I, II and III are valid objectives and statement IV is not, Choice 'a' is the correct answer. The same statement clarifies that the introduction of a reverse-stress test requirement should not be interpreted as indicating that the FSA is pursuing a 'zero-failure' policy. In the FSA's view, such a policy is neither possible, nor desirable.

313
Q

Which of the following statements is true in relation to the Supervisory Capital Assessment Program (SCAP):
I. The SCAP is an annual exercise conducted by the Treasury Department to determine the health of key financial institutions in the US economy
II. The SCAP was essentially a stress test where the stress scenarios were specified by the regulators
III. Capital buffers calculated under the SCAP represented the amount of capital that the institutions covered by SCAP held in excess of Basel II requirements
IV. The SCAP focused on both total Tier 1 capital as well as Tier 1 common capital

A

The correct answer is choice ‘a’

In February 2009, the Federal Reserve (which is the US central bank system) and other US banking regulators embarked on a simultaneous assessment of the capital held by the 19 largest US bank holding companies. This unprecedented exercise was known as the Supervisory Capital Assessment Program (SCAP). The purpose of the exercise was to determine the amount of additional capital (called the 'capital buffer') each of the institutions covered would need to ensure that it would have sufficient capital if the economy weakened more than was then expected. The idea was that these financial institutions would then raise additional capital equal to their respective capital buffers by the fourth quarter of 2009.

Statement I is false on two counts: first, the SCAP was conducted by the US central bank and other regulators, not the Treasury Department (the Treasury Department in the US is the equivalent of the Ministry of Finance in many other countries). Second, the SCAP was a one-time exercise, not an annual one.

Statement II is correct. The regulators prescribed rates of losses on credit assets of different kinds and other macro-economic assumptions, and asked the banks to determine the extent of losses they would need to bear (in addition to calculating them independently too). Therefore the SCAP was a stress test where the scenario was prescribed by the regulators.

Statement III is false. Capital buffer under the SCAP referred to the additional capital the banks would need to have certain ratios of capital, and not ‘excess’ capital.

Statement IV is correct. The SCAP envisaged two capital targets: a Tier 1 capital ratio in excess of 6% at the end of 2010; and a Tier 1 common capital ratio in excess of 4%. Therefore both the total Tier 1 capital and Tier 1 common capital were targeted.

314
Q
Which of the following are elements of 'group risk':
I. Market risk
II. Intra-group exposures
III. Reputational contagion
IV. Complex group structures
 	(a)	I and IV
 	(b)	I and II
 	(c)	II and III
 	(d)	II, III and IV
A

The correct answer is choice ‘d’

The term 'group risk' has been defined in the FSA document 08/24 on stress testing as the risk that a firm may be adversely affected by an occurrence (financial or non-financial) in another group entity or an occurrence that affects the group as a whole. These risks may occur through:

  • reputational contagion,
  • financial contagion,
  • leveraging,
  • double or multiple gearing,
  • concentrations and large exposures (particularly intra-group).

Thus, the insurance sector may be considered a group, and a firm may suffer just because another group firm has had losses or reputational issues.

The FSA statement goes on to identify some elements of group risk as follows:

  • intra-group exposures (credit or operational exposures through outsourcing or service arrangements, as well as more standard business exposures);
  • concentration risks (from credit, market or insurance risks which could put a strain on capital resources across entities simultaneously);
  • contagion (reputational damage, operational or financial pressures); and
  • complex group structures (with dependencies, complex split of responsibilities and accountabilities).
315
Q

If the 99% VaR of a portfolio is $82,000, what is the value of a single standard deviation move in the portfolio?

(a) 50000
(b) 134480
(c) 82000
(d) 35248

A

The correct answer is choice ‘d’

Remember that VaR is merely a multiple of the portfolio’s standard deviation. The multiple is determined by the confidence level, and for a 99% confidence level this multiple is 2.3264 (=-NORMSINV(1%) in Excel). Therefore one standard deviation at this level of confidence would be equal to VaR/2.3264.
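The conversion can be sketched in a few lines, using Python's standard-library inverse normal in place of Excel's NORMSINV (figures from the question):

```python
from statistics import NormalDist

def one_sigma_from_var(var: float, confidence: float) -> float:
    """Back out a one-standard-deviation move from a VaR figure,
    assuming normally distributed P&L: sigma = VaR / z(confidence)."""
    z = NormalDist().inv_cdf(confidence)  # about 2.3264 at 99%
    return var / z

# 99% VaR of $82,000 implies a one-sigma move of roughly $35,248
print(round(one_sigma_from_var(82_000, 0.99)))  # 35248
```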

316
Q

Which of the following statements are true with respect to stress testing:
I. Stress testing results in a dollar estimate of losses
II. The results of stress testing can replace VaR as a measure of risk as they are better grounded in reality
III. Stress testing provides an estimate of losses at a desired level of confidence
IV. Stress testing based on factor shocks can allow modeling extreme events that have not occurred in the past
(a) II and III
(b) II, III and IV
(c) I and IV
(d) I, II and IV

A

The correct answer is choice ‘c’

Any stress test is conducted with a view to producing a dollar estimate of losses, therefore statement I is correct. However, these numbers do not come with any probabilities or confidence levels, unlike VaR, and statement III is incorrect. Stress testing can complement VaR, but not replace it, therefore statement II is not correct. Statement IV is correct as stress tests can be based on both actual historical events and simulated factor shocks (e.g., a factor such as interest rates moving by a large multiple of its standard deviation).

317
Q

Stress testing is useful for which of the following purposes:
I. For providing the risk manager with an intuitive check on his risk estimates
II. Providing a means of communicating risk implications using plausible scenarios that can be easily explained to a non-technical audience
III. Guarding against major errors in the form of model risk
IV. Complying with the requirements of Basel II.
(a) II and IV
(b) I, II, III and IV
(c) IV only
(d) I, II and IV

A

The correct answer is choice ‘b’

318
Q

Which of the following is not an approach used for stress testing:

(a) Monte Carlo simulation
(b) Algorithmic approaches
(c) Hypothetical scenarios
(d) Historical scenarios

A

The correct answer is choice ‘a’

Choice ‘a’ is the correct answer as Monte Carlo simulations are not used to generate stress scenarios. They are applicable to VaR calculations under certain situations, and are not used for stress tests. The other three represent valid approaches to stress testing.

319
Q

The results of ‘desk-level’ stress tests cannot be added together to arrive at institution wide estimates because:

(a) Desk-level stress tests tend to ignore higher level risks that are relevant to the institution but completely outside the control of the individual desks.
(b) Desk-level stress tests tend to focus on extreme movements in risk parameters (such as volatility) without considering economy wide scenarios that may represent more realistic and consistent situations for the institution.
(c) Desk-level stress tests focus on desk specific risks that may be minor or irrelevant in the larger scheme at the institution level.
(d) All of the above

A

The correct answer is choice ‘d’

All the above listed reasons are valid explanations as to why an institution level stress test cannot be estimated by merely summing up the results of the stress tests of the individual desks.

320
Q

Which of the following statements are true:
I. Common scenarios for stress tests include the 1997 Asian crisis, the Russian default in 1998 and other well known economic stress situations.
II. Stress tests provide the assurance that an institution’s worst case losses will be covered.
III. Performing stress tests is highly recommended but is not mandated under Basel II.
IV. Historical events can be modeled quite accurately as they have defined start and end dates.
(a) I only
(b) All of the above
(c) I, III and IV
(d) I and II

A

The correct answer is choice ‘a’

Stress tests can cover known events, but since the future is unknown, and new events may be entirely different from what has happened in the past, they provide no assurance that an institution’s worst case losses would be covered. Hence II is false.

Stress testing is required to be performed as part of Basel II, and therefore III is false.

Historical events do not have sharply defined start and end dates. Often, even after a crisis ends, its after-effects may continue to affect the markets for a long time. In such cases, it may be difficult to define the start and end of the crisis. In many cases, the crisis may persist for months or even years, making it difficult for the risk manager to identify a time period that covers the essence of the crisis, and yet is focused enough to constitute a plausible scenario. Therefore IV is false too. Only I is true.

321
Q

Which of the following statements are true:
I. Shocks to risk factors should be relative rather than absolute if we wish to avoid a change in the sign of the risk factor.
II. Interest rate shocks are generally modeled as absolute shocks.
III. Shocks to volatility are generally modeled as absolute shocks.
IV. Shocks to market spreads are generally modeled as relative shocks.
(a) I and II
(b) II only
(c) II and IV
(d) I, II and III

A

The correct answer is choice ‘a’

Suppose during a historical event interest rates rose from 2% to 2.25%. This can be understood as a change of either 25 basis points, or a change of 12.5%. When applied to the current portfolio when interest rates are 0.50%, we may model this ‘shock’ as either a rise to 0.75%, or 0.5625% (ie a rise of 12.5% over existing levels). The former is called an absolute shock, and the latter a relative shock.

I is true as relative shocks can never change the sign of a risk factor. Yet interest rate changes are modeled as absolute changes as relative shocks can get artificially amplified or attenuated if the current level of interest rates is too different from those that existed during the crisis being modeled. Therefore II is true. III and IV are false as volatility is modeled as a relative shock and spreads are modeled as absolute shocks.
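The distinction can be sketched as follows (the figures are the ones used in the example above):

```python
def absolute_shock(current: float, hist_from: float, hist_to: float) -> float:
    """Apply the historical move as an additive change to the current level."""
    return current + (hist_to - hist_from)

def relative_shock(current: float, hist_from: float, hist_to: float) -> float:
    """Apply the historical move as a proportional change; this can never
    flip the sign of the risk factor."""
    return current * (hist_to / hist_from)

# Historical event: rates rose from 2% to 2.25%; current rates are 0.50%.
print(round(absolute_shock(0.0050, 0.02, 0.0225), 6))  # 0.0075   (0.75%)
print(round(relative_shock(0.0050, 0.02, 0.0225), 6))  # 0.005625 (0.5625%)
```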

322
Q

When doing stress tests based on historical scenarios, if no appropriate historical scenarios exist for a security, it is most INAPPROPRIATE to:

(a) Estimate a shock factor based upon extrapolation
(b) Estimate a shock factor based upon interpolation
(c) Leave the position unshocked
(d) Estimate a shock factor based on other instruments that might be considered as proxies for such a security

A

The correct answer is choice ‘c’

Where a historical shock factor does not exist for a security, for example because the security is new or was only thinly traded earlier, or because a particular emerging market was immature at the time of the historical scenario being considered, it is inappropriate to leave the position unshocked. By and large, the general rule to be followed when carrying out stress testing is to leave no position unshocked. Therefore Choice ‘c’ is the correct answer.

323
Q

When performing portfolio stress tests using hypothetical scenarios, which of the following is not generally a challenge for the risk manager?

(a) Building a positive semi-definite covariance matrix
(b) Building a consistent set of hypothetical shocks to individual risk factors
(c) Evaluating interrelationships between counterparties when considering liquidity risks
(d) Considering back office capacity to deal with increased transaction volumes

A

The correct answer is choice 'd'

Choice 'd' relates to operational risk and process capabilities, which are generally not a concern when evaluating the market risk of a portfolio. Choices 'a', 'b' and 'c' represent real concerns for the risk manager when building stress tests for the value of a portfolio.

Choice 'b' is relevant because certain shocks may be inconsistent with each other, and therefore implausible. For example, an increase in futures prices may be inconsistent with unchanged spot prices and/or interest rates, given the no-arbitrage condition. Choice 'a' is relevant when modeling a covariance matrix in a stressed situation with higher correlations, as a hypothetical covariance matrix which is not positive semi-definite may give absurd results (negative variance). Choice 'c' is relevant as liquidity risks may affect the price that can be realized for positions held.

324
Q

Which of the following are valid techniques used when performing stress testing based on hypothetical test scenarios:
I. Modifying the covariance matrix by changing asset correlations
II. Specifying hypothetical shocks
III. Sensitivity analysis based on changes in selected risk factors
IV. Evaluating systemic liquidity risks
(a) I, II and III
(b) I and II
(c) II, III and IV
(d) I, II, III and IV

A

The correct answer is choice ‘d’

Each of these represents a valid technique for performing stress testing and building stress scenarios. Therefore Choice 'd' is the correct answer. In practice, elements of each of these techniques are used depending upon the portfolio and the exact situation.

325
Q

Which of the following statements are true:
I. Liquidity risks during time of crisis may be exacerbated by large collateral calls continuing over a period of time.
II. Stress tests are always separately modeled from VaR computations which cannot deal with stress scenarios of the kind considered in stress tests.
III. A maximum loss scenario considers the maximum possible loss given a ‘plausibility constraint’ that is based upon the joint probability of such a loss happening
(a) II and III
(b) I, II and III
(c) I and II
(d) I and III

A

The correct answer is choice ‘d’

If VaR is calculated based upon historical simulations, and these simulations are designed so as to include all stress scenarios of interest, then VaR and stress tests can be part of an integrated risk measurement system. Therefore it is not correct to say that stress tests are always modeled separately from VaR, and II is false. I and III are true, and therefore Choice ‘d’ is the correct answer.

326
Q

Which of the following is a valid approach to determining the magnitude of a shock for a given risk factor as part of a historical stress testing exercise?
I. Determine the maximum peak-to-trough change in the risk factor over the defined period of the historical event
II. Determine the minimum peak-to-trough change in the risk factor over the defined period of the historical event
III. Determine the total change in the risk factor between the start date and the finish date of the event regardless of peaks and troughs in between
IV. Determine the maximum single day change in the risk factor and multiply by the number of days covered by the stress event
(a) IV only
(b) II and IV
(c) I and III
(d) I, II and IV

A

The correct answer is choice ‘c’

Stress events rarely play out in a well defined period of time, and looking back it is always difficult to put exact start and end dates on historical stress events. Even after that is done, the question arises as to what magnitude of change in a particular risk factor (for example interest rates, spreads, or exchange rates) is reasonable to consider for the purposes of the stress test.

Statements I and III correctly identify the two approaches that are acceptable and used in practice - the risk manager can either take the maximum adverse move - from peak to trough - in the risk factor, or alternatively he or she could consider the change in the risk factor from the start of the event to the end as defined for the purposes of the stress test. Between the two, the approach mentioned in statement III is considered slightly superior as it produces more believable shocks.

Statement II is incorrect because we never want to consider the minimum, and statement IV is not correct as it is likely to generate a shock of a magnitude that is not plausible. Therefore Choice ‘c’ is the correct answer.
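The two acceptable approaches in statements I and III can be sketched as follows; the risk factor series here is illustrative, not historical data:

```python
# Illustrative series of a risk factor (e.g. an index level) over a
# defined historical stress event window.
levels = [100, 104, 98, 92, 95, 88, 93, 97]

# Approach I: maximum adverse peak-to-trough move within the window
peak, max_drop = levels[0], 0.0
for x in levels:
    peak = max(peak, x)               # highest level seen so far
    max_drop = max(max_drop, peak - x)  # largest fall from that peak

# Approach III: total change from the start to the end of the event,
# regardless of peaks and troughs in between
start_to_end = levels[-1] - levels[0]

print(max_drop)      # 16 (from the 104 peak down to 88)
print(start_to_end)  # -3
```

The gap between the two numbers shows why approach III tends to produce smaller, more believable shocks than the peak-to-trough measure.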

327
Q

The degree distribution of the nodes of the financial network is:

(a) normally distributed
(b) long tailed
(c) best approximated by a beta distribution
(d) non-linear

A

The correct answer is choice ‘b’

The ‘degree’ of a node in a network measures the number of links to other nodes. For the financial network, each market participant can be thought of as a node. The ‘degree distribution’ can be thought of as the histogram of the number of links for each node.

The financial network has a degree distribution with rather long tails - and therefore Choice ‘b’ is the correct answer. The other choices are incorrect. Long tailed networks have the property that they are robust when affected by random disturbances, but susceptible to targeted attacks, for example on key hubs.
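The idea can be illustrated on a toy network (the institution names and links below are invented):

```python
from collections import Counter

# Toy interbank network as an adjacency list: each key is an institution,
# each value is its list of counterparties (illustrative, not real data).
links = {
    "hub": ["a", "b", "c", "d", "e"],   # a highly connected hub
    "a": ["hub"], "b": ["hub"], "c": ["hub"],
    "d": ["hub", "e"], "e": ["hub", "d"],
}

# Degree of a node = number of links; the degree distribution is the
# histogram of these counts across all nodes.
degrees = {node: len(nbrs) for node, nbrs in links.items()}
distribution = Counter(degrees.values())

print(degrees)        # the hub has degree 5, the periphery 1 or 2
print(distribution)   # many low-degree nodes, few high-degree hubs
```

Even in this tiny example the histogram is skewed: most nodes have one or two links while a single hub carries five, which is the long-tailed shape referred to in the answer (and why a targeted attack on the hub is so damaging).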

328
Q

Which of the following statements are true in relation to the current state of the financial network?
I. Interconnectivity between countries has reduced while that between institutions in the same country has increased significantly
II. The degrees of separation between institutions have gone up
III. The average path length connecting any two given institutions has shrunk
IV. Knife-edge dynamics imply that systemic risk arises from the financial system flipping from risk sharing to risk spreading
(a) I and IV
(b) I and II
(c) II and III
(d) III and IV

A

The correct answer is choice ‘d’

Over the past decade or so, systemic risk has been increased by vastly increasing network complexities resulting from greater interconnectivity between institutions as well as countries. Therefore statement I is incorrect.

Statement II is incorrect and statement III is correct because the average path length between institutions, or their degree of separation where they are not directly dealing with each other but through other counterparties to which they are exposed (analogous to 6 degrees of separation, or the ‘small world’ property), has shrunk and not increased.

Statement IV correctly describes knife-edge dynamics, which is another way of saying that the financial network displays a tipping point property.

329
Q

Which of the following correctly describes survivorship bias:

(a) Survivorship bias refers to prudent and conservative risk management
(b) Survivorship bias is the tendency for failed companies, markets or investments to be excluded from performance data.
(c) Survivorship bias is the positive tail risk that ensures survival over the long run
(d) Survivorship bias is the negative skew in returns data resulting from credits that have survived despite a high probability of default

A

The correct answer is choice ‘b’

Survivorship bias is the tendency for failed companies, funds, investments and even entire markets (eg Russian stock market returns after the Communist revolution) to be excluded from performance studies because they no longer exist. Survivorship bias results in past results looking better than they actually were as data points relating to failures are not included.

A risk manager needs to be aware of survivorship bias when basing risk analysis on historical data and should question if failures (eg failed funds, delisted companies etc) have been included in the data he or she is relying upon.
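A minimal numerical illustration, with invented fund returns, shows how excluding failures flatters the average:

```python
# Illustrative annual fund returns (%); the last two funds failed and
# were delisted, so they drop out of a naive performance study.
surviving = [8.0, 6.5, 7.2, 9.1]
failed    = [-35.0, -60.0]

biased   = sum(surviving) / len(surviving)               # survivors only
unbiased = sum(surviving + failed) / (len(surviving) + len(failed))

print(round(biased, 2))    # 7.7   -> history looks healthy
print(round(unbiased, 2))  # -10.7 -> including failures tells another story
```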

330
Q

The systemic manifestation of the liquidity crisis during the current credit crisis took many forms. Which of the following is not one of those forms?

(a) Stress and large withdrawals from the money markets
(b) Drying up of liquidity in the cash market for treasury bonds
(c) Drying up of liquidity in the wholesale money markets
(d) Drying up of liquidity in the corporate bond markets

A

The correct answer is choice ‘b’

The stresses on liquidity that arose as part of the credit crisis beginning in 2007-08 led to a drying up of trading and liquidity in the corporate bond markets, the auction rate securities markets, the wholesale (interbank lending) markets, the money markets, the markets for structured products, and even the otherwise liquid futures and forwards markets (as there was no liquidity available to fund the financing of futures). The one market that was not affected was the market for treasuries; in fact, the flight to quality ensured that this market was very liquid (even though stressed from a pricing perspective as yields plummeted).

331
Q

Which of the following is not a consideration in determining the liquidity needs of a firm (as opposed to determining the time horizon for liquidity risk)?

(a) Collateral
(b) Speed with which new equity can be issued to the owners
(c) The firm’s business model
(d) Off balance sheet items

A

The correct answer is choice ‘b’

Managing liquidity requires understanding and providing for two things: the amount of liquidity needed to pay for all current obligations (in both normal and stressed scenarios), and determining the time horizon over which this liquidity should be available. The first is essentially a function of the business of the firm, and the assets and liabilities resulting from operations. The second considers other factors such as the speed with which new cash can be borrowed (eg, from the repo markets), the consequences of running out of liquidity (eg, perhaps only an overdraft fee as opposed to bankruptcy), etc. In other words, liquidity risk management answers two questions: how much, and for how long.

This question asks for identifying the factors that affect the ‘how much’ part. Choice ‘c’, Choice ‘d’ and Choice ‘a’ do affect the determination of the liquidity needs of the firm. Choice ‘b’ does not affect the liquidity needs, but rather the ‘how long’ part. Choice ‘b’ is therefore the correct answer.

332
Q

Which of the following statements is correct in relation to liquidity risk management?
I. Pricing for products that do not impact the balance sheet need not reflect the cost of maintaining liquidity
II. Time horizons for liquidity risk management are impacted by both regulatory requirements and the speed at which new sources of liquidity can be tapped
III. Collateral management is an important aspect of liquidity risk management
IV. The maturity period of various instruments in the capital structure has a significant impact on liquidity needs
(a) I and II
(b) III and IV
(c) II, III and IV
(d) II and III

A

The correct answer is choice ‘c’

All product pricing should reflect the cost of maintaining the liquidity required to support a product. This is regardless of the accounting treatment for the product, ie irrespective of whether the product is on or off balance sheet. Therefore statement I is incorrect.

The time horizon to consider for liquidity risk management is determined taking into account a number of factors, which include both the speed at which new sources of liquidity can be generated and any applicable regulatory requirements. Statement II is correct.

Managing collateral, both collateral received and collateral posted with counterparties, is an important aspect of liquidity risk management as liquidity problems often manifest themselves in the form of margin calls requiring collateral to be posted. Statement III is therefore correct.

The maturity period of the different sources of capital funding for a bank, for example equity capital, preferred shares, long term debt etc quite clearly has a significant impact on liquidity needs. Stable sources of funds such as equity or preferred capital, or debt that is not maturing shortly help the liquidity position. Statement IV is therefore correct.

333
Q

Which of the following statements is true in relation to collateral management?
I. A collateral management system need not consider the failure by counterparties to return collateral when due
II. The extent to which counterparties may have rehypothecated collateral is not a consideration for a collateral management system
III. Cash is an acceptable substitute for any type of collateral required to be posted
IV. Haircuts do not apply to treasury issued instruments posted as collateral
(a) I, II and III
(b) I, II, III and IV
(c) II and III
(d) None of the statements is true

A

The correct answer is choice ‘d’

Strong management of collateral, both receivable and payable, is emerging as an area requiring significant investment by financial institutions and asset managers in IT infrastructure and business processes. A bank needs to make collateral calls daily, based upon the P&L of the previous day, and likewise receives collateral calls from its counterparties. Just like cash, a bank needs to make sure that it does not run out of collateral to post when a call is received. Interestingly, based upon the agreements between banks and their mutual understanding, often only certain types of instruments qualify as valid collateral - and in such cases even cash is not acceptable if the right type of bond or other agreed security is not available to post. The operational challenges of managing collateral increase manifold due to ‘rehypothecation’, ie when collateral received from one counterparty gets posted on as collateral to another party to whom it is due. In such cases, the bank should have the mechanisms to receive the right assets back in a timely way in case rehypothecated assets have to be returned. The systems should be able to deal with delays and failures without impacting the ability of the bank to post collateral as needed. All of this requires major investments in IT and processes.

Statement I is not true as a bank is bound to post collateral to third parties when needed regardless of the failure of its counterparties to post collateral to it when owed. In the markets, failures by counterparties can and do happen, and a collateral management system needs to account for and keep a buffer for the fact that some collateral when due will not be received.

Statement II is not true as rehypothecation by counterparties of collateral posted increases the chances of the collateral not being received in time. The system should consider the need for liquidity to generate assets that can be posted as collateral when others have failed to return the collateral in a timely way.

Statement III is not correct as cash may not be acceptable to counterparties as collateral. From a practical point of view, they may not have the infrastructure to receive and account for cash as collateral. A Swiss bank, for example, may have an ‘account’ to receive US t-bills as collateral but may not even have a US dollar account to receive cash. Even if it did, the volumes of transactions going back and forth may make tracking and reconciliations impossible. Thus a bank should always make sure that it has the right type of collateral available to post.

Statement IV is incorrect as well, as treasury issued instruments are also subject to haircuts. Their value also fluctuates in response to changes in yields, and therefore they are subject to haircuts as well.

334
Q

Which of the following statements is true?
I. It is sufficient to ensure that a parent entity has sufficient excess liquidity to cover a liquidity shortfall for a subsidiary.
II. If a parent entity has a shortfall of liquidity, it can always rely upon any excess liquidity that its foreign subsidiaries might have.
III. Wholesale funding sources for a bank refer to stable sources of funding provided by the central bank.
IV. Funding diversification refers to diversification of both funding sources and funding tenors.
(a) I and III
(b) I and IV
(c) III and IV
(d) IV

A

The correct answer is choice ‘d’

It is not generally sufficient to ensure the adequacy of liquidity across a group - ie it is not appropriate to just add up the sources and needs for liquidity across multiple entities in a group. This is because there can be restrictions on transferring liquidity between entities, particularly when the entities are located across borders. In cases where transfers of liquidity are permitted, there may be settlement delays in transferring funds from one entity to another. Therefore both statements I and II are incorrect.

Wholesale funding sources refers to the temporary interbank funding sources that need to be rolled over on very short intervals, often as short as overnight. These are not stable sources for long term funding. Statement III is therefore false.

Statement IV is correct as funding diversification refers to diversification of both funding sources and the duration for which the amounts are borrowed, ie tenor diversity.

Statement IV is the only correct statement and therefore Choice ‘d’ is the correct answer.

335
Q

Which of the following is not a possible early warning indicator in relation to the health of a counterparty?

(a) Falling stock price
(b) Credit rating downgrade
(c) A decline in the counterparty’s corporate debt yield
(d) Negative publicity

A

The correct answer is choice ‘c’

Negative publicity, a downgrade in the credit rating, a falling stock price are all pointers to potential credit problems, and the counterparty credit monitoring group of a bank should be using these as possible early indicators of an upcoming credit health problem. A decline in the yield of the debt issued by a counterparty means its spread is declining and the health of the credit is actually improving. Therefore a decline in the counterparty’s corporate debt yield cannot be used as an indicator of potential credit problems.

336
Q

Which of the following statements is true?
I. Real Time Gross Systems (RTGS) for large value payments consume less system liquidity than Deferred Net Systems (DNS)
II. The US Fedwire is an example of a Real Time Gross System
III. Current disclosure requirements in relation to liquidity risk as laid down in the Basel framework require banks to disclose how liquidity stress scenarios were formulated
IV. A CFP (Contingency Funding Plan) provides access to Central Bank financing
(a) II and IV
(b) I, II, III and IV
(c) II
(d) I and III

A

The correct answer is choice ‘c’

For settlement of interbank payments, there are broadly two kinds of systems: RTGS (Real Time Gross Systems) and Deferred Net Systems (DNS). RTGS systems process payments in real time, settlement by settlement, and each transaction is settled by a clearing institution (mostly the central bank) on a gross basis without regard for other settlements affecting the counterparty. DNS systems, on the other hand, allow for debiting or crediting the accounts of counterparties at periodic intervals after netting all payments paid or received since the last settlement. The exact timing of the payments does not matter so long as a bank has sufficient funding on a net basis at settlement time.

Implicit in the DNS system is the extension of credit and liquidity by the central bank to the participating banks, as it is possible for a bank to issue payment instructions even without having funds so long as it can arrange for such funds prior to settlement at the end of the day. In RTGS, a bank needs to have funds to make a payment at any point, and cannot make a payment against moneys expected to be received later intra-day. RTGS systems therefore need more liquidity on the part of the participants, and consume far more liquidity than DNS arrangements.

Of course, the ‘liquidity’ of the DNS arrangement has a cost - which is that someone is taking on settlement risk, and invariably it is the central bank. If a bank under DNS fails to settle, its transactions have to be ‘unwound’, ie all payments made by it have to be rolled back. This can cause other banks to trip, causing further unwinds. RTGS systems do not carry this risk. Therefore statement I is not correct, as RTGS arrangements consume more liquidity than DNS arrangements.
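The difference in liquidity consumption can be sketched on a toy set of payments (illustrative amounts; the RTGS figure here is an upper bound, since it ignores intra-day recycling of payments already received):

```python
# Payments between three banks as (payer, payee, amount) tuples.
payments = [("A", "B", 100), ("B", "A", 90), ("A", "C", 50), ("C", "B", 40)]

# RTGS: every payment must be funded in full as it occurs (upper bound,
# assuming no reuse of funds received earlier in the day).
rtgs_liquidity = {p: 0 for p in "ABC"}
for payer, payee, amt in payments:
    rtgs_liquidity[payer] += amt

# DNS: only each bank's net debit position at settlement must be funded.
net = {p: 0 for p in "ABC"}
for payer, payee, amt in payments:
    net[payer] -= amt
    net[payee] += amt
dns_liquidity = {p: max(0, -v) for p, v in net.items()}

print(rtgs_liquidity)  # {'A': 150, 'B': 90, 'C': 40} -> 280 gross
print(dns_liquidity)   # {'A': 60, 'B': 0, 'C': 0}    -> 60 net
```

The same flows require far less funding under netting, which is exactly the liquidity saving the answer describes, purchased at the cost of settlement risk borne by the clearing institution.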

Statement II is correct. US Fedwire or European TARGET are RTGS while CHIPS is a DNS based payment system.

Statement III is not correct. Current Basel requirements do not require any disclosure in respect of liquidity risk management. A consultative paper was issued by BIS in Dec 2009 for comments from members, but it is far from final. The BIS is still reacting to the liquidity issues that arose during the 2007-09 credit crisis.

Statement IV is not correct as a CFP is like a disaster recovery plan for liquidity, ie it helps a bank plan for and think about what steps would be taken to deal with a liquidity disaster situation. It does not provide any access to central bank financing.

337
Q

Which of the following is true in relation to a Contingency Funding Plan (CFP)?
I. A CFP is like a disaster recovery plan to deal with a liquidity crisis
II. A CFP should consider market stress conditions, but failures of payment systems are not relevant as they fall under the remit of operational risk
III. Reputational damage may result if the market finds out that a firm has had to execute its CFP
IV. Sources of emergency funding considered in the CFP should include the role of the central bank as the lender of last resort
(a) I, II and III
(b) IV
(c) I and III
(d) II and IV

A

The correct answer is choice ‘c’

A CFP is indeed a disaster recovery plan to deal with a liquidity crisis. Therefore statement I is correct.

A CFP should consider market stress conditions, including wide scale failures of payment and settlement systems. Statement II is not correct.

It is true that reputational damage may result if a firm has to activate its CFP - therefore the plan should consider internal and external communications, the timing of information release and the groups within the firm who need to know about the implementation of the plan. Reputational damage can only make any existing liquidity problems worse. Statement III is correct.

Sources of emergency funding should not include funding from the central bank - unless as part of a regular lending facility. Its role as a lender of last resort can not be considered in a CFP. Statement IV is incorrect.

338
Q

Which of the following statements is correct?

(a) Dynamic simulations of liquidity needs require an assumption of counterparty risk remaining constant
(b) Funding liquidity risks present themselves in the form of an adverse market impact on prices from a trade
(c) Market liquidity risk is idiosyncratic while funding liquidity risk is not
(d) Market liquidity risks present themselves in the form of higher bid offer spreads

A

The correct answer is choice ‘d’

Simulations of liquidity needs can be of various types: historical simulations, where the current positions are subjected to the kind of liquidity shocks experienced in the past; static simulations, where a static view of current positions, counterparty credit position, and the business is considered; and dynamic simulations where all factors are dynamically changed including counterparty credit standing, changes to the current portfolio and behavioural aspects of the business. Choice ‘a’ is incorrect as dynamic simulations require no such assumptions.

Liquidity risk is often thought of in terms of market liquidity risk and funding liquidity risk. Market liquidity risk relates to the liquidity for a particular type of asset drying up. For example, during the 2007-2009 crisis a large number of corporate bonds and structured products became extremely illiquid. Market liquidity risk manifests itself in the form of higher bid offer spreads, higher price impact, and a reduction in the normal market size (ie, the ‘normal’ size of trade for which a dealer’s quote is valid). Therefore Choice ‘d’ is correct. Similarly, Choice ‘b’ is incorrect as adverse price impact results from market liquidity risk and not funding liquidity risk.

Market liquidity risk applies to the entire market and all its participants. It is not idiosyncratic. Therefore Choice ‘c’ is incorrect too. Funding liquidity risk on the other hand applies to an individual institution that is under liquidity stress in the sense of not being able to meet its obligations such as margin or collateral calls because of a lack of liquid assets. Thus it is funding liquidity that is idiosyncratic. Market liquidity risk often leads to funding liquidity risks materializing as firms are unable to get to the funds they were relying upon due to assets becoming illiquid.

339
Q

A bank holds $10m of a corporate debt that it has purchased CDS protection against. What is the impact on the short term liquidity of the bank in the event of a default by the corporate on its bonds?

(a) No impact
(b) An immediate reduction in available liquidity
(c) A short term increase in available liquidity
(d) Cannot be determined without information on recovery rates

A

The correct answer is choice ‘c’

The immediate impact of the default would be to improve the liquidity available in the short term due to the payout from the CDS protection.

It is also important to consider the impact on liquidity from the occurrence of a default even in situations where CDS protection may not have been purchased. In such cases, there may be a nearer term payout in the form of the recovery rate. Of course, recovery payments are generally not realized for long periods of time as court cases linger on, but there is a good likelihood that a payment, albeit lower in total, will be realized sooner than the maturity of the bond where the bond is a longer term bond. At the same time, any interest payments, and the final principal payment, which may have been included in liquidity projections, will not occur.

340
Q
Which of the following are considered asset based credit enhancements?
I. Collateral
II. Credit default swaps
III. Close out netting arrangements
IV. Cash reserves
 	(a)	I, II and IV
 	(b)	I and IV
 	(c)	II and IV
 	(d)	I and III
A

The correct answer is choice ‘d’

Credit enhancements come in two varieties: counterparty based, where the exercise of the credit enhancement requires a third party to pay, and this category includes guarantees and CDS contracts; and asset based, where the enhancement is based upon a physical asset in possession, and this category includes collateral and balances owed on other trades or transactions, accessed through close out netting arrangements.

Of the listed choices, I and III are asset based credit enhancements, and II is third party based. Cash reserves are not credit enhancements (unless held as collateral).

341
Q
Which of the following are considered counterparty based credit enhancements?
I. Collateral
II. Credit default swaps
III. Close out netting arrangements
IV. Guarantees
 	(a)	I and III
 	(b)	II and IV
 	(c)	I and IV
 	(d)	I, II and IV
A

The correct answer is choice ‘b’

Credit enhancements come in two varieties: counterparty based, where the exercise of the credit enhancement requires a third party to pay, and this category includes guarantees and CDS contracts; and asset based, where the enhancement is based upon a physical asset in possession, and this category includes collateral and balances owed on other trades or transactions, accessed through close out netting arrangements.

Of the listed choices, I and III are asset based credit enhancements, and II and IV are third party based.

342
Q

Which of the following statements is true?

(a) For an issuer of life insurance policies, longevity risk can lead to reserves falling short of payments due
(b) Under times of liquidity stress, both prepayments of loans extended and expected withdrawals from on-demand deposits will decrease
(c) Deterioration in the balance sheets of key counterparties is a concern for a liquidity manager even though it may not immediately affect a firm
(d) Only the drawn portions of credit facilities extended to clients by a bank count towards its liquidity exposure

A

The correct answer is choice ‘c’

Deterioration in the balance sheets of key counterparties is a concern for a liquidity manager even though it may not immediately affect a firm, and this is true because counterparty failures may lead to liquidity shortfalls for an institution for no fault of its own. It is important for a liquidity risk manager to watch the health of key counterparties where exposure is concentrated and take timely steps to reduce it should the health deteriorate.

Under times of liquidity stress, prepayments of loans extended will decline while withdrawals of demand deposits are likely to increase. Both will not decrease, and therefore Choice ‘b’ is incorrect.

A bank is exposed to the undrawn portions of a line of credit extended to a borrower as the borrower, with superior information on its own finances, is likely to draw upon undrawn lines of credit thereby increasing the bank’s exposure. Therefore Choice ‘d’ is incorrect. Generally, a portion of the undrawn part is counted towards a liquidity outflow.

Longevity risk is the risk facing sellers of annuities that their clients will outlive the assumptions made about their length of life, while mortality risk is the downside risk for a life insurer that clients will die sooner than expected, causing reserves to fall short of what is needed. Therefore Choice ‘a’ is not correct as the opposite is true.

343
Q

Which of the following is a most complete measure of the liquidity gap facing a firm?

(a) Cumulative liquidity gap
(b) Liquidity at Risk
(c) Marginal liquidity gap
(d) Residual liquidity gap

A

The correct answer is choice ‘d’

Marginal liquidity gap measures the expected net change in liquidity over, say, a day. It is just equal to the liquidity inflow minus liquidity outflow. The cumulative liquidity gap measures the aggregate change in liquidity from a point in time, in other words it is just the summation of the marginal liquidity gap for each of the days included in the period under consideration. The residual liquidity gap goes one step further and adds available ‘opening balance’ of liquidity to the cumulative liquidity gap to reveal the days or times when the net liquidity is most at risk.

Liquidity at Risk measures the expected time to survival at a certain confidence level applied to the firm’s cash flows - and is not a measure of the liquidity gap.
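The three gap measures can be sketched as follows, with illustrative daily flows and an assumed opening liquidity buffer:

```python
# Daily projected liquidity flows, in $m (illustrative numbers)
inflows  = [50, 30, 20, 60, 10]
outflows = [40, 45, 35, 30, 40]
opening_liquidity = 25  # assumed opening buffer

# Marginal gap: net change in liquidity each day
marginal = [i - o for i, o in zip(inflows, outflows)]

# Cumulative gap: running sum of the marginal gaps
cumulative, running = [], 0
for m in marginal:
    running += m
    cumulative.append(running)

# Residual gap: opening balance plus the cumulative gap
residual = [opening_liquidity + c for c in cumulative]

print(marginal)       # [10, -15, -15, 30, -30]
print(cumulative)     # [10, -5, -20, 10, -20]
print(residual)       # [35, 20, 5, 35, 5]
print(min(residual))  # 5 -> the days when net liquidity is tightest
```

Only the residual series reveals the days on which the firm comes closest to running out of liquidity, which is why it is the most complete of the three measures.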

344
Q

Which of the following methods cannot be used to calculate Liquidity at Risk?

(a) Monte Carlo simulation
(b) Historical simulation
(c) Scenario analysis
(d) Analytical or parametric approaches

A

The correct answer is choice ‘d’

Analytical or parametric approaches are not useful at all for liquidity at risk calculations because there are no neat distributions available to parameterize the large number of factors that affect the calculations of liquidity inflows and outflows. Historical simulation, Monte Carlo simulation and scenario analysis (which can complement historical scenarios) are all valid choices.
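A minimal Monte Carlo sketch of Liquidity at Risk, under an entirely illustrative assumption of normally distributed daily net cash flows (real models would simulate the many inflow and outflow drivers separately):

```python
import random

random.seed(42)

# Simulate daily net cash flows over a horizon and record the worst
# cumulative shortfall in each path. All parameters are assumptions.
N_SIMS, HORIZON = 10_000, 10
worst_cum_flows = []
for _ in range(N_SIMS):
    cum, worst = 0.0, 0.0
    for _ in range(HORIZON):
        cum += random.gauss(0, 5)   # assumed daily net flow, $m
        worst = min(worst, cum)     # deepest cumulative shortfall so far
    worst_cum_flows.append(worst)

# Liquidity at Risk at 95%: the buffer needed so that only 5% of paths
# run deeper into shortfall.
worst_cum_flows.sort()
lar_95 = -worst_cum_flows[int(0.05 * N_SIMS)]
print(f"Liquidity at Risk (95%): ${lar_95:.1f}m")
```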

345
Q
Which of the following are likely to be useful to a risk manager analyzing liquidity risk for an international bank?
I. Information on liquidity mismatches
II. Funding concentration
III. Lending concentration
IV. A report on illiquid assets
 	(a)	I and II
 	(b)	III and IV
 	(c)	I, II, III and IV
 	(d)	I, II and IV
A

The correct answer is choice ‘c’

All of the listed reports (or information) would be useful to a risk manager analyzing liquidity risk. Therefore Choice ‘c’ is the correct answer. Additionally, reports on assets bought on margin (that may result in collateral or margin calls), trading exposures to different counterparties, and reports on the financial health, share price and credit ratings of key counterparties will also be useful.

346
Q
Which of the following are measures of liquidity risk
I. Liquidity Coverage Ratio
II. Net Stable Funding Ratio
III. Book Value to Share Price
IV. Earnings Per Share
 	(a)	I and II
 	(b)	II and III
 	(c)	I and IV
 	(d)	III and IV
A

The correct answer is choice ‘a’

In December 2009 the BIS came out with a new consultative document on liquidity risk. Given the events of 2007-2009, it has been clear that a key characteristic of the financial crisis was the inaccurate and ineffective management of liquidity risk.

The paper sets out two separate but complementary objectives in respect of liquidity risk management: the first relates to the short-term liquidity risk profile of an institution, and the second is to promote resiliency over longer-term time horizons. The paper identifies the following two ratios - you should be aware of these - though I am not sure if these will show up in the PRMIA exam:

  1. Liquidity Coverage Ratio addresses the ability of an institution to survive an acute liquidity risk stress scenario lasting one month. It is calculated as follows:
    Liquidity Coverage Ratio = Stock of high quality liquid assets/Net cash outflows over a 30-day time period
  2. Net Stable Funding Ratio has been developed to capture structural issues related to funding choices.
    Net Stable Funding Ratio = Available amount of stable funding/Required amount of stable funding

Both ratios should be equal to or greater than 1. The statement contains detailed definitions of what is included in or excluded from each of the terms used in the calculations for each of the ratios. In addition, the standard also describes what the ‘acute’ scenario should include (things such as a 3-notch credit downgrade, a reduction in retail deposits, etc).
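A quick numerical sketch of the two ratios, using invented balances rather than the Basel-defined weightings and haircuts:

```python
# Illustrative balances in $m (assumed numbers, not a worked Basel example)
high_quality_liquid_assets = 120
net_cash_outflows_30d = 100

available_stable_funding = 450
required_stable_funding = 400

# Liquidity Coverage Ratio: survival of a one-month acute stress
lcr = high_quality_liquid_assets / net_cash_outflows_30d

# Net Stable Funding Ratio: longer-term structural funding resilience
nsfr = available_stable_funding / required_stable_funding

# Both ratios must be equal to or greater than 1
print(f"LCR  = {lcr:.2f} ->", "pass" if lcr >= 1 else "fail")
print(f"NSFR = {nsfr:.3f} ->", "pass" if nsfr >= 1 else "fail")
```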

Therefore Choice ‘a’ is the correct answer. Book Value to Share Price and Earnings Per Share are accounting measures unrelated to liquidity.