Chapter 6: Market Value-at-Risk Flashcards

1
Q

What are the VaR approaches?

A
  • When modelling the returns or P&L, we can choose between:
    1. Local valuation: approximates the change in value using local derivatives (sensitivities) of the investment to the market risk factors
    2. Full valuation: fully revalues the investment under the new market conditions
  • When modelling the randomness in returns or P&L, we have the choice between 2 approaches:
    1. Impose a parametric distribution
    2. Simulate an empirical (non-parametric) distribution
2
Q

What are the two approaches to calculate market VaR?

A
  1. Historical simulation: simulates an empirical distribution and can use a full or local valuation method
  2. Model-building approach: imposes a normal distribution on returns and can likewise use a full or local valuation method.
    • When the assumption of normal returns is combined with a local valuation model, we speak of the delta-normal method.
3
Q

What is the basic historical simulation?

A
  • The historical simulation method assumes that we can replay history on current positions to model the future returns of our investments.
  • Each observation gets an equal weight.
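  • A minimal Python sketch of this idea, assuming a single position and a vector of historical daily returns (function and parameter names are illustrative, not from the text):

```python
# Basic (equal-weight) historical simulation VaR: replay past returns on today's position.
import numpy as np

def historical_var(returns, position_value, confidence=0.99):
    # Simulated P&L scenarios: each historical return applied to the current position
    pnl = position_value * np.asarray(returns)
    # VaR is the loss at the (1 - confidence) quantile of the simulated P&L distribution
    return -np.percentile(pnl, 100 * (1 - confidence))

# Example with made-up daily returns
rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, 500)                 # hypothetical history of 500 daily returns
print(historical_var(returns, 1_000_000, 0.99))      # 99% one-day VaR in currency units
```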
4
Q

How can we improve the historical simulation?

A
  1. Accounting for non-stationarity
    • Use a different weighting scheme that gives more weight to more recent observations, or update volatility to account for the observed time-variation in volatility.
  2. Smoothing the tails of the distribution
    • Use extreme value theory that allows us to estimate the tails of the empirical distribution more accurately.
5
Q

How can we account for non-stationarity in historical simulation by using exponentially declining weights?

A
  • More recent returns get a higher weight (Boudoukh et al. hybrid approach).
  • Such a weighting scheme allows for a more accurate reflection of the current reality of returns.
  • Use exponentially declining weights, where the decay parameter λ indicates the memory of the weighting scheme; as λ approaches 1, the scheme converges to equal weighting (see the sketch below).
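  • A minimal sketch of such a weighting scheme, assuming returns are ordered from most recent to oldest; the decay parameter and function name are illustrative:

```python
# Hybrid historical simulation: exponentially declining weights instead of equal weights.
import numpy as np

def hybrid_var(returns, position_value, lam=0.98, confidence=0.99):
    returns = np.asarray(returns)
    n = len(returns)
    # Weight of the i-th most recent observation: lam**i * (1 - lam) / (1 - lam**n)
    weights = lam ** np.arange(n) * (1 - lam) / (1 - lam ** n)
    pnl = position_value * returns
    # Sort P&L from worst to best and accumulate weights until the tail probability is reached
    order = np.argsort(pnl)
    cum_w = np.cumsum(weights[order])
    var_index = np.searchsorted(cum_w, 1 - confidence)
    return -pnl[order][var_index]

# As lam approaches 1 the weights converge to 1/n, i.e. the equal-weight historical simulation.
```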
6
Q

How can we account for non-stationarity in historical simulation through volatility updating?

A
  • A second way to improve the basic historical approach is to scale the historical returns to reflect the current volatility regime: such scaled returns allow for a more accurate reflection of the current reality of returns.
  • When simulating the distribution of future values V_{i,T}, the return of scenario i is scaled by the ratio of the most recent volatility forecast to the volatility forecast of scenario i.
    • These volatility forecasts can be generated by an EWMA or GARCH methodology (see the sketch below).
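  • A minimal sketch of such volatility scaling with EWMA forecasts; the decay parameter and the simplified forecast timing are assumptions for illustration:

```python
# Volatility-updated historical simulation: scale each scenario return by the ratio of
# the current volatility forecast to the volatility forecast of that scenario.
import numpy as np

def ewma_vol(returns, lam=0.94):
    # EWMA variance recursion: var_t = lam * var_{t-1} + (1 - lam) * r_t^2
    var = np.empty(len(returns))
    var[0] = returns[0] ** 2
    for t in range(1, len(returns)):
        var[t] = lam * var[t - 1] + (1 - lam) * returns[t] ** 2
    return np.sqrt(var)

def vol_scaled_var(returns, position_value, confidence=0.99, lam=0.94):
    returns = np.asarray(returns)
    sigma = ewma_vol(returns, lam)
    scaled = returns * sigma[-1] / sigma      # rescale scenario i by (current vol / scenario-i vol)
    pnl = position_value * scaled
    return -np.percentile(pnl, 100 * (1 - confidence))
```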
7
Q

What is the smoothing of tails?

A
  • One of the problems of historical simulation is that the tail of the distribution includes few observations and can therefore have a rather erratic shape.
  • Estimating VaR by picking out one particular value is then likely not very accurate.
  • To improve upon this, we can estimate a VaR based on a parametric tail estimation: this allows us to smooth and extrapolate the tails of the empirical distribution and will lead to more accurate VaR estimates.
  • This is called extreme value theory (EVT) and is based on Gnedenko’s result, which shows that the tails of a wide range of distributions converge to a generalized Pareto distribution (GPD).
  • This approach allows us to parametrize the tail of the distribution of losses and can be used to calculate an EVT VaR (see the sketch below).
  • Extreme value approach and smoothing of tails:
    • Main advantage: improves the accuracy of VaR and is therefore much more robust to different samples.
    • Drawback: it is not robust to model risk: it is only for a cutoff point u deep in the tail that the distribution of the tail converges to a GPD.
    • There are no clear guidelines on choosing this cutoff point u, and VaR estimates can be very different for different levels of u.
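  • A minimal sketch of an EVT VaR using a generalized Pareto fit to the losses beyond a cutoff u; the tail fraction used to set u is exactly the kind of modelling choice the drawback above refers to:

```python
# EVT VaR: fit a generalized Pareto distribution (GPD) to exceedances over a cutoff u
# and use the standard GPD tail-quantile formula.
import numpy as np
from scipy.stats import genpareto

def evt_var(returns, position_value, confidence=0.99, tail_fraction=0.05):
    losses = -position_value * np.asarray(returns)       # positive numbers are losses
    u = np.quantile(losses, 1 - tail_fraction)           # cutoff point u (a modelling choice)
    exceedances = losses[losses > u] - u
    xi, _, beta = genpareto.fit(exceedances, floc=0)     # GPD shape xi and scale beta
    n, n_u = len(losses), len(exceedances)
    # GPD tail quantile for tail probability (1 - confidence)
    return u + (beta / xi) * ((n / n_u * (1 - confidence)) ** (-xi) - 1)
```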
8
Q

How can we evaluate the advanced historical simulation?

A
  • It is the VaR method that is most widely used by financial institutions.
  • This is because little data is needed and because it accounts for historically observed distributional properties of returns; moreover, some simple adjustments can be made to improve the accuracy of our VaR estimate.
  • However, this simulation method also has important drawbacks:
    1. History is not always an accurate guide to what will happen in the future.
    2. There is large sampling variation in VaR, which leads to a low precision of our VaR estimate (EVT VaR tries to improve upon this).
9
Q

How can we verify how accurate our VaR estimate is?

A
  • We want to know how much faith we can put in the estimate. To this end, we calculate confidence bounds on the VaR.
  • For an empirical distribution, we use the results of Kendall and Stuart to construct such confidence bounds (see the sketch below).
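  • A minimal sketch of such bounds, using the asymptotic standard error of a sample quantile, se = sqrt(q(1-q)/n) / f(x_q); estimating the density f with a normal approximation is an assumption made here for illustration only:

```python
# Approximate confidence bounds on the empirical VaR quantile (Kendall & Stuart style).
import numpy as np
from scipy.stats import norm

def var_quantile_bounds(returns, confidence=0.99, level=0.95):
    r = np.asarray(returns)
    q = 1 - confidence                                       # left-tail probability of the quantile
    x_q = np.quantile(r, q)                                  # empirical return quantile
    f_q = norm.pdf(x_q, loc=r.mean(), scale=r.std(ddof=1))   # density estimate at the quantile
    se = np.sqrt(q * (1 - q) / len(r)) / f_q                 # standard error of the quantile
    z = norm.ppf(0.5 + level / 2)
    return x_q - z * se, x_q + z * se   # bounds on the return quantile (flip the sign for VaR)
```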
10
Q

What are bootstrapping bounds?

A
  • Instead of calculating the Kendall and Stuart bounds, we can also simulate confidence intervals by bootstrapping.
  • In particular, by sampling with replacement from the historical returns, we can calculate a series of corresponding VaR estimates.
  • When repeating this many times, we can directly calculate confidence bounds on this sample of VaR estimates.
  • The confidence interval constructed this way is typically narrower than the confidence bounds of the Kendall & Stuart approach.
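  • A minimal sketch of bootstrapped VaR bounds; the number of resamples and the confidence levels are illustrative:

```python
# Bootstrap confidence bounds: resample the return history with replacement,
# recompute the historical VaR each time, and read off percentile bounds.
import numpy as np

def bootstrap_var_bounds(returns, confidence=0.99, n_boot=5000, level=0.95, seed=0):
    rng = np.random.default_rng(seed)
    returns = np.asarray(returns)
    var_estimates = []
    for _ in range(n_boot):
        sample = rng.choice(returns, size=len(returns), replace=True)   # resample with replacement
        var_estimates.append(-np.percentile(sample, 100 * (1 - confidence)))
    lo, hi = np.percentile(var_estimates, [100 * (1 - level) / 2, 100 * (1 + level) / 2])
    return lo, hi
```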
11
Q

What is the normal VaR model?

A
  • The normal method assumes that we can proxy the distribution of returns by a normal distribution.
  • It is a full valuation with a parametric structure, namely normality.
  • While normal VaR only needs an estimate of the variance (and mean), its estimation can become cumbersome when analysing a portfolio VaR.
  • For a portfolio of n assets we need n variances and n(n-1)/2 covariances, a number of estimates that quickly becomes too large (see the sketch below).
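  • A minimal sketch of a normal portfolio VaR, assuming jointly normal asset returns; the weights, means, and covariance matrix are illustrative inputs:

```python
# Parametric (normal) portfolio VaR from portfolio weights, mean returns and covariances.
import numpy as np
from scipy.stats import norm

def normal_portfolio_var(weights, mu, cov, portfolio_value, confidence=0.99):
    w = np.asarray(weights)
    port_mu = w @ np.asarray(mu)                          # portfolio mean return
    port_sigma = np.sqrt(w @ np.asarray(cov) @ w)         # portfolio volatility
    z = norm.ppf(confidence)
    # For n assets the covariance matrix needs n variances and n(n-1)/2 covariances,
    # which is the estimation burden that risk mapping tries to reduce.
    return -(port_mu - z * port_sigma) * portfolio_value
```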
12
Q

What is risk mapping?

A
  • To keep our risk estimation tractable and efficient for the normal VaR method, we need to find ways to reduce the number of parameters to estimate.
  • Solution: applying a local valuation approach instead of a full valuation approach.
  • Such local valuation can be implemented by introducing structure to the return dynamics and the risk exposures of the assets in a portfolio.
  • Intuitively, we use formal models describing the risk-return relationship and then map the instruments onto a limited number of underlying risk factors.
13
Q

What is delta risk mapping?

A
  • Underlying idea is that we do a linear mapping to underlying risk drivers.
  • The linear impact of a change in the risk driver F on the value V is the derivative of V with respect to F.
  • There is a crucial tradeoff between the number of risk factors F and the quality of our model: more factors increases the fit of our model, but comes at a cost of less simplification.
  • The impact of a change in the market return (the risk driver) on the return of a stock can be computed with the market model, via the stock's beta.
  • The impact of a change in the yield (the risk driver) on the return of a bond can be computed with the duration model (see the sketch below).
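  • A minimal sketch of both linear mappings; the beta, duration, volatilities and position sizes are illustrative assumptions:

```python
# Delta (linear) risk mapping: a stock mapped to the market factor via its beta,
# and a bond mapped to the yield factor via its modified duration.
from scipy.stats import norm

z = norm.ppf(0.99)                                        # 99% normal quantile

# Market model: R_stock ~ beta * R_market, so sigma_stock ~ |beta| * sigma_market
beta, sigma_market, stock_value = 1.2, 0.015, 1_000_000
stock_var = z * abs(beta) * sigma_market * stock_value

# Duration model: dP/P ~ -D_mod * dy, so sigma_bond ~ D_mod * sigma_yield
d_mod, sigma_yield, bond_value = 5.0, 0.0010, 1_000_000
bond_var = z * d_mod * sigma_yield * bond_value

print(stock_var, bond_var)
```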
14
Q

What is the cash flow approach for delta risk mapping for bonds?

A
  • While duration is often used to compute the impact of a change in yields, alternative mapping models exist for fixed income instruments.
  • Common mapping techniques aim at mapping the bond positions onto a selected set of standard bonds, while preserving the key characteristics of the initial bond position.
  • One such method maps cash flows to selected standard bonds, while preserving both bond value and bond risk.
  • Decision criteria:
    1. Use interpolation to preserve the present value of the cash flows
    2. Use interpolation to preserve the riskiness of cash flows.
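  • A minimal sketch of such a mapping for a single cash flow placed between two standard vertices, in the spirit of the two criteria above: interpolate the cash flow's volatility, then solve for the allocation that preserves its riskiness. The vertex volatilities and correlation are illustrative assumptions:

```python
# Map a cash flow at maturity t onto standard vertices t1 < t < t2, preserving value and risk.
import numpy as np

def map_cash_flow(t, t1, t2, sigma1, sigma2, rho):
    """Fraction alpha of the present value assigned to vertex t1 (the rest goes to t2)."""
    # Step 1: interpolate the return volatility of the cash flow at maturity t
    w = (t2 - t) / (t2 - t1)
    sigma = w * sigma1 + (1 - w) * sigma2
    # Step 2: preserve riskiness: alpha^2*s1^2 + (1-alpha)^2*s2^2 + 2*alpha*(1-alpha)*rho*s1*s2 = sigma^2
    a = sigma1**2 + sigma2**2 - 2 * rho * sigma1 * sigma2
    b = 2 * rho * sigma1 * sigma2 - 2 * sigma2**2
    c = sigma2**2 - sigma**2
    disc = np.sqrt(b**2 - 4 * a * c)
    candidates = [(-b - disc) / (2 * a), (-b + disc) / (2 * a)]
    return next(x for x in candidates if 0.0 <= x <= 1.0)   # pick the root that is a valid weight

# e.g. a 6-year cash flow mapped onto the 5- and 7-year vertices
print(map_cash_flow(6, 5, 7, sigma1=0.005, sigma2=0.007, rho=0.98))
```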
15
Q

What is the delta-normal VaR?

A
  • The delta-normal method assumes that the primary risk factor (return or P&L) is normally distributed and that there is a linear relation between the financial instrument and its primary risk factor (delta exposure): this linearity means that the instrument inherits the normality of the primary risk factor.
  • For an instrument with value V and risk driver F, the linear impact of a change in the risk driver F on the value V is computed as the derivative of V with respect to F.
  • This is a local valuation with a parametric structure, namely normality.
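  • A minimal sketch of a delta-normal VaR for a single instrument; the delta, risk-factor level, and volatility are illustrative:

```python
# Delta-normal VaR: dV ~ delta * dF, with the risk factor F normally distributed.
from scipy.stats import norm

def delta_normal_var(delta, f_value, sigma_f, confidence=0.99):
    z = norm.ppf(confidence)
    # Linear exposure times the z-quantile of the risk factor's (normal) return distribution
    return abs(delta) * f_value * sigma_f * z

# e.g. an option with delta 0.6 on a stock trading at 100 with 2% daily volatility
print(delta_normal_var(0.6, 100.0, 0.02))
```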
16
Q

What is the delta-normal VaR with options and its weakness?

A
  • While the delta-normal for an option gives a first indication of the risk, excluding higher order gamma effects is problematic.
  • In particular, a nonzero gamma will skew the option distribution such that the normality of the market risk factor is no longer inherited by the option.
  • Computing a delta-normal VaR will then over- or underestimate the true risk in the option position.
    • Nonzero gamma for a long call translates a normally distributed risk factor into a positively skewed option distribution. Delta normal VaR will then overstate true underlying risk.
    • Nonzero gamma for a short call translates a normally distributed risk factor into a negatively skewed option distribution. Delta normal VaR will then understate true underlying risk.
17
Q

How can we account for the delta-gamma exposure of an option?

A
  • When the option position has a non-zero gamma, we should account for both delta and gamma effects in the pricing relation: ΔV ≈ δ·ΔF + ½·γ·(ΔF)² (see the sketch below).
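  • A minimal sketch of a delta-gamma VaR via simulation of the risk factor; the delta, gamma, and volatility inputs are illustrative:

```python
# Delta-gamma VaR: draw normal risk-factor changes and reprice with the second-order
# Taylor expansion dV ~ delta*dF + 0.5*gamma*dF^2, then take the loss quantile.
import numpy as np

def delta_gamma_var(delta, gamma, f_value, sigma_f, confidence=0.99, n_sims=100_000, seed=0):
    rng = np.random.default_rng(seed)
    dF = rng.normal(0.0, sigma_f * f_value, n_sims)     # simulated risk-factor changes
    dV = delta * dF + 0.5 * gamma * dF**2               # second-order (delta-gamma) P&L
    return -np.percentile(dV, 100 * (1 - confidence))

# For a long call (gamma > 0) this VaR is lower than the delta-only VaR, consistent with
# the previous card: ignoring a positive gamma overstates the true risk.
print(delta_gamma_var(delta=0.6, gamma=0.05, f_value=100.0, sigma_f=0.02))
```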
18
Q

What is the Cornish-Fisher approximation?

A
  • An alternative modification to account for non-normality is to use the Cornish-Fisher expansion to estimate the percentile of a non-normal distribution (see the sketch below).
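  • A minimal sketch of a Cornish-Fisher VaR: adjust the normal quantile for the sample skewness and excess kurtosis, then apply it as in the normal model; the return sample is the only input:

```python
# Cornish-Fisher VaR: expand the normal quantile with skewness and excess kurtosis terms.
import numpy as np
from scipy.stats import norm, skew, kurtosis

def cornish_fisher_var(returns, position_value, confidence=0.99):
    r = np.asarray(returns)
    s, k = skew(r), kurtosis(r)                 # sample skewness and excess kurtosis
    z = norm.ppf(1 - confidence)                # lower-tail normal quantile (negative)
    z_cf = (z + (z**2 - 1) * s / 6
              + (z**3 - 3 * z) * k / 24
              - (2 * z**3 - 5 * z) * s**2 / 36) # Cornish-Fisher adjusted quantile
    return -(r.mean() + z_cf * r.std(ddof=1)) * position_value
```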
19
Q

How precise is our normal VaR estimate?

A
  • In comparison to the historical VaR, the precision of a normal VaR is higher, as it is based on more observations: the full sample is used to estimate the mean and variance, rather than a single extreme quantile.
20
Q

How can we evaluate the delta normal VaR?

A
  • There are multiple advantages to using delta-normal VaR:
    • Easily implemented and computationally fast
    • We can derive simple analytical expressions for the VaR and for the portfolio VaR tools
  • Main drawbacks:
    • The normality assumption is highly restrictive: in many cases it leads to an underestimation of true risk
    • The normal distribution is therefore not suitable for complex instruments with nonlinear payoffs.