Topics 52-57 Flashcards
VaR conversion to a different time period and to a different confidence level
VaR, as calculated previously, measured the risk of a loss in asset value over a short time period. Risk managers may, however, be interested in measuring risk over longer time periods, such as a month, quarter, or year. VaR can be converted from a 1-day basis to a longer basis by multiplying the daily VaR by the square root of the number of days (J) in the longer time period (called the square root rule).
For example, to convert to a weekly VaR, multiply the daily VaR by the square root of 5 (i.e., five business days in a week).
VaR can also be converted to different confidence levels. For example, a risk manager may want to convert VaR with a 95% confidence level to VaR with a 99% confidence level. This conversion is done by scaling the current VaR measure by the ratio of the critical z-value at the new confidence level to the critical z-value at the current confidence level.
We can generalize the conversion method as follows:
VaR(new) = VaR(old) x (z_new / z_old) x √(T_new / T_old)
where z is the critical value associated with each confidence level and T is the length of each horizon in days.
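The following minimal sketch applies both conversions to a hypothetical $1 million daily VaR; the z-values are read from the standard normal distribution (scipy is assumed to be available), and all figures are illustrative only.

```python
from math import sqrt
from scipy.stats import norm

daily_var_95 = 1_000_000          # hypothetical 1-day VaR at 95% confidence

# Time conversion: square root rule (5 business days in a week)
weekly_var_95 = daily_var_95 * sqrt(5)

# Confidence conversion: scale by the ratio of standard normal z-values
z_95 = norm.ppf(0.95)             # about 1.645
z_99 = norm.ppf(0.99)             # about 2.326
daily_var_99 = daily_var_95 * (z_99 / z_95)

print(round(weekly_var_95), round(daily_var_99))
```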
The VaR Methods
The three main VaR methods can be divided into two groups: linear methods and full valuation methods.
- Linear methods replace portfolio positions with linear exposures on the appropriate risk factor. For example, the linear exposure used for option positions would be delta, while the linear exposure for bond positions would be duration. This method is used when calculating VaR with the delta-normal method. The delta-normal method is appropriate for large portfolios without significant option-like exposures; it is fast and efficient.
- Full valuation methods fully reprice the portfolio for each scenario encountered over a historical period, or over a great number of hypothetical scenarios developed through historical simulation or Monte Carlo simulation. Computing VaR using full revaluation is more complex than linear methods, but it will generally lead to more accurate estimates of risk in the long run. Full valuation methods, whether based on historical data or on Monte Carlo simulations, are more time consuming and costly; however, they may be the only appropriate methods for large portfolios with substantial option-like exposures, a wider range of risk factors, or a longer-term horizon.
Linear Valuation: The Delta-Normal Valuation Method
The delta-normal approach begins by valuing the portfolio at an initial point as a function of a specific risk factor, S (assume only one risk factor exists):
V0 = V(S0)
The change in portfolio value is then approximated linearly as dV = Δ0 x dS, where Δ0 is the sensitivity of the portfolio to changes in the risk factor, S, evaluated at the initial point. As with any linear relationship, the biggest change in the value of the portfolio will accompany the biggest change in the risk factor. The VaR at a given level of significance, z, can be written as:
VaR = |Δ0| x (z x σ x S0)
where:
- z x σ x S0 = VaRS, the VaR of the risk factor, S
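Below is a small sketch of the delta-normal calculation above for a single risk factor; all inputs (delta, factor level, volatility, confidence level) are hypothetical.

```python
delta = 0.6        # hypothetical portfolio delta with respect to the risk factor
S0 = 100.0         # current level of the risk factor
sigma = 0.02       # daily volatility of the risk factor (2%)
z = 1.645          # z-value for a 95% confidence level

# VaR of the risk factor itself
var_s = z * sigma * S0

# Delta-normal VaR of the position: |delta| x VaR of the factor
var_portfolio = abs(delta) * var_s
print(var_portfolio)   # 0.6 x 1.645 x 0.02 x 100 = 1.974
```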
Generally speaking, VaR developed by a delta-normal method is more accurate over shorter horizons than longer horizons.
Since the delta-normal method is only accurate for linear exposures, non-linear exposures, such as convexity, are not adequately captured with this VaR method. By using a Taylor series expansion, convexity can be accounted for in a fixed income portfolio by using what is known as the delta-gamma method.
The delta-normal method (a.k.a. the variance-covariance method or the analytical method) for estimating VaR requires the assumption of a normal distribution.
The assumption of normality is troublesome because many assets exhibit skewed return distributions (e.g., options), and equity returns frequently exhibit leptokurtosis (fat tails). When a distribution has “fat tails,” VaR will tend to underestimate the loss and its associated probability. Note also that delta-normal VaR is calculated using the historical standard deviation, which may not be appropriate if the composition of the portfolio changes, if the estimation period contained unusual events, or if economic conditions have changed.
Advantages of the delta-normal VaR method include the following:
- Easy to implement.
- Calculations can be performed quickly.
- Conducive to analysis because risk factors, correlations, and volatilities are identified.
Disadvantages of the delta-normal method include the following:
- The need to assume a normal distribution.
- The method is unable to properly account for distributions with fat tails, either because of unidentified time variation in risk or unidentified risk factors and/or correlations.
- Nonlinear relationships of option-like positions are not adequately described by the delta-normal method. VaR is misstated because the instability of the option deltas is not captured.
Converting annual standard deviation into daily and monthly standard deviation
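This card applies the same square root logic in reverse: an annual standard deviation is scaled down by dividing by the square root of the number of periods per year. A brief sketch, assuming 252 trading days and 12 months per year:

```python
from math import sqrt

annual_sigma = 0.20                      # hypothetical 20% annual standard deviation

monthly_sigma = annual_sigma / sqrt(12)  # about 0.0577 (5.77%)
daily_sigma = annual_sigma / sqrt(252)   # about 0.0126 (1.26%), assuming 252 trading days

print(round(monthly_sigma, 4), round(daily_sigma, 4))
```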
Historical Simulation Method: advantages and disadvantages
Advantages of the historical simulation method include the following:
- The model is easy to implement when historical data is readily available.
- Calculations are simple and can be performed quickly.
- The horizon is an explicit choice determined by the intervals of the historical data used.
- Full valuation of portfolio is based on actual prices.
- It is not exposed to model risk.
- It includes all correlations as embedded in market price changes.
Disadvantages of the historical simulation method include the following:
- There may not be enough historical data for all assets.
- Only one path of events is used (the actual history), which includes changes in correlations and volatilities that may have occurred only in that historical period.
- Time variation of risk in the past may not represent variation in the future.
- The model may not recognize changes in volatility and correlations from structural changes.
- It is slow to adapt to new volatilities and correlations as old data carries the same weight as more recent data. However, exponentially weighted moving average (EWMA) models can be used to weight recent observations more heavily.
- A small number of actual observations may lead to insufficiently defined distribution tails.
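A minimal sketch of the historical simulation VaR calculation described above: take the percentile of the historical portfolio returns at the chosen significance level and read off the corresponding loss. The return history here is simulated, purely for illustration.

```python
import numpy as np

# Hypothetical history of daily portfolio returns
np.random.seed(0)
returns = np.random.normal(0.0005, 0.01, 500)

portfolio_value = 10_000_000
confidence = 0.95

# VaR is the loss at the (1 - confidence) percentile of the return history
cutoff = np.percentile(returns, (1 - confidence) * 100)
hist_sim_var = -cutoff * portfolio_value
print(round(hist_sim_var))
```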
Monte Carlo Simulation Method: advantages and disadvantages
Advantages of the Monte Carlo method include the following:
- It is the most powerful model.
- It can account for both linear and nonlinear risks.
- It can include time variation in risk and correlations by aging positions over chosen horizons.
- It is extremely flexible and can incorporate additional risk factors easily.
- Nearly unlimited numbers of scenarios can produce well-described distributions.
Disadvantages of the Monte Carlo method include the following:
- There is a lengthy computation time as the number of valuations escalates quickly.
- It is expensive because of the intellectual and technological skills required.
- It is subject to model risk of the stochastic processes chosen.
- It is subject to sampling variation at lower numbers of simulations.
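A compact sketch of the Monte Carlo idea: simulate a large number of hypothetical returns from an assumed distribution, compute the resulting portfolio P&L, and read VaR from the simulated distribution. All parameters below are hypothetical.

```python
import numpy as np

np.random.seed(42)
n_sims = 100_000
mu, sigma = 0.0005, 0.012        # assumed daily mean and volatility of the risk factor
portfolio_value = 10_000_000

# Simulate one-day returns and the resulting portfolio P&L
sim_returns = np.random.normal(mu, sigma, n_sims)
pnl = portfolio_value * sim_returns

# 99% VaR is the loss at the 1st percentile of simulated P&L
mc_var_99 = -np.percentile(pnl, 1)
print(round(mc_var_99))
```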
Explain how asset return distributions tend to deviate from the normal distribution
Three common deviations from normality that are problematic in modeling risk result from asset returns that are: fat-tailed, skewed, or unstable.
Fat-tailed refers to a distribution with a higher probability of observations occurring in the tails relative to the normal distribution. Skewed refers to a distribution that is asymmetric, with more extreme observations on one side of the mean than the other.
In modeling risk, a number of assumptions are necessary. If the parameters of the model are unstable, they are not constant but vary over time. For example, if interest rates, inflation, and market premiums are changing over time, this will affect the volatility of the returns going forward.
Explain reasons for fat tails in a return distribution and describe their implications. Distinguish between conditional and unconditional distributions.
The phenomenon of “fat tails” is most likely the result of the volatility and/or the mean of the distribution changing over time. If the mean and standard deviation are the same for asset returns for any given day, the distribution of returns is referred to as an unconditional distribution of asset returns. However, different market or economic conditions may cause the mean and variance of the return distribution to change over time. In such cases, the return distribution is referred to as a conditional distribution.
Assume we separate the full data sample into two normally distributed subsets based on market environment with conditional means and variances. Pulling a data sample at different points of time from the full sample could generate fat tails in the unconditional distribution even if the conditional distributions are normally distributed with similar means but different volatilities. If markets are efficient and all available information is reflected in stock prices, it is not likely that the first moments or conditional means of the distribution vary enough to make a difference over time.
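A short simulation of the mechanism described above: pooling draws from two conditionally normal regimes with the same mean but different volatilities produces an unconditional sample with positive excess kurtosis (fat tails), even though each regime on its own is normal. The regime volatilities below are hypothetical.

```python
import numpy as np
from scipy.stats import kurtosis

np.random.seed(1)
n = 100_000

# Two conditional regimes: same mean, low vs. high volatility
low_vol = np.random.normal(0.0, 0.01, n // 2)
high_vol = np.random.normal(0.0, 0.03, n // 2)

# The unconditional sample pools draws from both regimes
unconditional = np.concatenate([low_vol, high_vol])

# Excess kurtosis is ~0 for each regime but clearly positive for the pooled sample
print(round(kurtosis(low_vol), 2), round(kurtosis(high_vol), 2), round(kurtosis(unconditional), 2))
```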
The second possible explanation for “fat tails” is that the second moment or volatility is time-varying. This explanation is much more likely given observed changes in interest rate volatility (e.g., prior to a much-anticipated Federal Reserve announcement). Increased market uncertainty following significant political or economic events results in increased volatility of return distributions.
Describe the implications of regime switching on quantifying volatility
A regime-switching volatility model assumes different market regimes exist with high or low volatility. The conditional distributions of returns are always normal with a constant mean but either have a high or low volatility.
The probability of large deviations from normality occurring are much less likely under the regime-switching model. The regime-switching model captures the conditional normality and may resolve the fat-tail problem and other deviations from normality.
If we assume that volatility varies with time and that asset returns are conditionally normally distributed, then we may be able to tolerate the fat-tail issue.
However, some tools exist that serve to complement VaR by examining the data in the tail of the distribution. For example, stress testing and scenario analysis can examine extreme events by testing how hypothetical and/or past financial shocks will impact VaR. Also, extreme value theory (EVT) can be applied to examine just the tail of the distribution and some classes of EVT apply a separate distribution to the tail. Despite not being able to accurately capture events in the tail, VaR is still useful for approximating the risk level inherent in financial assets.
Explain the various approaches for estimating VaR
A value at risk (VaR) method for estimating risk is typically either a historical-based approach or an implied-volatility-based approach. Under the historical-based approach, the shape of the conditional distribution is estimated based on historical time series data.
Historical-based approaches typically fall into three sub-categories: parametric,
nonparametric, and hybrid.
- The parametric approach requires specific assumptions regarding the asset returns distribution. A parametric model typically assumes asset returns are normally or lognormally distributed with time-varying volatility. The most common example of the parametric method in estimating future volatility is based on calculating historical variance or standard deviation using “mean squared deviation.” For example, future variance can be estimated from a window of the K most recent returns as: σ²t = [Σ(rt-i - r̄)²] / (K - 1), where the sum runs over i = 1 to K and r̄ is the mean return over the window. (The denominator is K - 1 in order to adjust for one degree of freedom related to the conditional mean; in practice, this adjustment makes little difference when large sample sizes are used. A code sketch of this estimator appears after this list.)
- The nonparametric approach is less restrictive in that there are no underlying assumptions of the asset returns distribution. The most common nonparametric approach models volatility using the historical simulation method.
- As the name suggests, the hybrid approach combines techniques of both parametric and nonparametric methods to estimate volatility using historical data.
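As referenced in the parametric bullet above, here is a sketch of the mean squared deviation estimator over a rolling window of the K most recent returns; the return series is simulated for illustration.

```python
import numpy as np

def rolling_volatility(returns, K):
    """Estimate next-period volatility from the K most recent returns
    using mean squared deviation with a K - 1 denominator."""
    window = np.asarray(returns[-K:])
    return np.sqrt(np.sum((window - window.mean()) ** 2) / (K - 1))

# Hypothetical return history
np.random.seed(7)
rets = np.random.normal(0.0, 0.012, 300)
print(round(rolling_volatility(rets, K=75), 5))
```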
The implied-volatility-based approach uses derivative pricing models such as the Black-Scholes-Merton option pricing model to estimate an implied volatility based on current market data rather than historical data.
The delta-normal method is an example of a parametric approach.
Parametric Approaches for VaR
The RiskMetrics® [i.e., exponentially weighted moving average (EWMA) model] and GARCH approaches are both exponential smoothing weighting methods. RiskMetrics® is actually a special case of the GARCH approach. Both exponential smoothing methods are similar to the historical standard deviation approach because all three methods:
- Are parametric.
- Attempt to estimate conditional volatility.
- Use recent historical data.
- Apply a set of weights to past squared returns.
The RiskMetrics® approach is just an EWMA model that uses a pre-specified decay factor for daily data (0.94) and monthly data (0.97).
The only major difference between the historical standard deviation approach and the two exponential smoothing approaches is with respect to the weights placed on historical returns that are used to estimate future volatility. The historical standard deviation approach assumes all K returns in the window are equally weighted. Conversely, the exponential smoothing methods place a higher weight on more recent data, and the weights decline exponentially to zero as returns become older. The rate at which the weights change, or smoothness, is determined by a parameter λ (known as the decay factor) raised to a power. The parameter λ must fall between 0 and 1 (i.e., 0 < λ < 1); however, most models use
parameter estimates between 0.9 and 1 (i.e., 0.9 < λ < 1).
Figure 4 illustrates the weights of the historical volatility for the historical standard deviation approach and the RiskMetrics® approach. Using the RiskMetrics® approach, conditional variance is estimated using the following formula:
σ²t = λσ²t-1 + (1 - λ)r²t-1
Equivalently, the weight placed on the squared return from t periods ago is (1 - λ)λt.
Using the RiskMetrics® approach, calculate the weight for the most current historical return, t = 0, when λ = 0.97.
Calculate the weight for the most recent return using historical standard deviation approach with K = 75.
The weight for the most current historical return, t = 0, when λ = 0.97 is calculated as follows:
(1 - λ)λt = (1 - 0.97)(0.97)⁰ = 0.03
All historical returns are equally weighted. Therefore, the weights will all be equal to
0.0133 (i.e., 1 / K = 1 / 75 = 0.0133).
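The weight calculations above, together with the recursive RiskMetrics® variance update, in a short sketch; the return series is simulated for illustration.

```python
import numpy as np

lam_monthly = 0.97                                # monthly decay factor from the example above
weight_t0 = (1 - lam_monthly) * lam_monthly**0    # = 0.03, matching the calculation above

K = 75
equal_weight = 1 / K                              # = 0.0133 under the historical standard deviation approach

def ewma_variance(returns, lam=0.94):
    """Recursive RiskMetrics update: sigma^2_t = lam * sigma^2_{t-1} + (1 - lam) * r^2_{t-1}."""
    var = returns[0] ** 2                         # simple initialization with the first squared return
    for r in returns[1:]:
        var = lam * var + (1 - lam) * r ** 2
    return var

np.random.seed(3)
rets = np.random.normal(0.0, 0.01, 250)           # simulated daily returns
print(round(weight_t0, 4), round(equal_weight, 4), round(np.sqrt(ewma_variance(rets)), 5))
```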
GARCH
The generalized autoregressive conditional heteroskedasticity (GARCH) model is the more general exponential smoothing approach, estimating conditional variance as:
σ²t = ω + α x r²t-1 + β x σ²t-1
The RiskMetrics® EWMA model is the special case with ω = 0, α = 1 - λ, and β = λ.
Nonparametric vs. Parametric VaR Methods
Three common types of nonparametric methods used to estimate VaR are:
- historical simulation,
- multivariate density estimation, and
- hybrid.
These nonparametric methods exhibit the following advantages and disadvantages over parametric approaches.
Advantages of nonparametric methods compared to parametric methods:
- Nonparametric models do not require assumptions regarding the entire distribution of returns to estimate VaR.
- Fat tails, skewness, and other deviations from some assumed distribution are no longer a concern in the estimation process for nonparametric methods.
- Multivariate density estimation (MDE) allows for weights to vary based on how relevant the data is to the current market environment, regardless of the timing of the most relevant data.
- MDE is very flexible in introducing dependence on economic variables (called state variables or conditioning variables).
- Hybrid approach does not require distribution assumptions because it uses a historical simulation approach with an exponential weighting scheme.
Disadvantages of nonparametric methods compared to parametric methods:
- Data is used more efficiently with parametric methods than nonparametric methods. Therefore, large sample sizes are required to precisely estimate volatility using historical simulation.
- Separating the full sample of data into different market regimes reduces the amount of usable data for historical simulations.
- MDE may lead to data snooping or over-fitting in identifying required assumptions regarding the weighting scheme identification of relevant conditioning variables and the number of observations used to estimate volatility.
- MDE requires a large amount of data that is directly related to the number of conditioning variables used in the model.
Hybrid Approach
Multivariate Density Estimation (MDE)
Explain the process of return aggregation in the context of volatility forecasting methods
When a portfolio is comprised of more than one position using the RiskMetrics® or historical standard deviation approaches, a single VaR measurement can be estimated by assuming asset returns are all normally distributed. The covariance matrix of asset returns is used to calculate portfolio volatility and VaR. The delta-normal method requires the calculation of N variances and [N x (N - 1)]/2 covariances for a portfolio of N positions. The model is subject to estimation error due to the large number of calculations. In addition, some markets are more highly correlated in a downward market, and in such cases, VaR is underestimated.
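A minimal sketch of the covariance-matrix aggregation described above for a two-position portfolio; the weights, volatilities, and correlation are all hypothetical.

```python
import numpy as np

portfolio_value = 50_000_000
weights = np.array([0.6, 0.4])             # hypothetical position weights

vols = np.array([0.012, 0.018])            # daily volatilities of the two positions
corr = 0.3
cov = np.array([[vols[0]**2, corr * vols[0] * vols[1]],
                [corr * vols[0] * vols[1], vols[1]**2]])

# Portfolio volatility from the covariance matrix: sqrt(w' * Cov * w)
port_vol = np.sqrt(weights @ cov @ weights)

z_99 = 2.326
portfolio_var = z_99 * port_vol * portfolio_value
print(round(port_vol, 5), round(portfolio_var))
```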
The historical simulation approach requires an additional step that aggregates each period’s historical returns weighted according to the relative size of each position. The weights are based on the market value of the portfolio positions today, regardless of the actual allocation of positions K days ago in the estimation window. A major advantage of this approach compared to the delta-normal approach is that no parameter estimates are required. Therefore, the model is not subject to estimation error related to correlations and the problem of higher correlations in downward markets.
A third approach to calculating VaR estimates the volatility of the vector of aggregated returns and assumes normality based on the central limit theorem, which states that an average of a very large number of independent random variables converges toward a normal random variable. However, this approach can only be used in a well-diversified portfolio.
Evaluate implied volatility as a predictor of future volatility and its shortcomings
Estimating future volatility using historical data requires time to adjust to current changes in the market. An alternative method for estimating future volatility is implied volatility. The Black-Scholes-Merton model is used to infer an implied volatility from equity option prices. Using the most liquid at-the-money put and call options, an average implied volatility is extrapolated using the Black-Scholes-Merton model.
A big advantage of implied volatility is the forward-looking predictive nature of the model. Forecast models based on historical data require time to adjust to market events. The implied volatility model reacts immediately to changing market conditions.
The implied volatility model does, however, exhibit some disadvantages. The biggest disadvantage is that implied volatility is model dependent. A major assumption of the model is that asset returns follow a continuous time lognormal diffusion process. The volatility parameter is assumed to be constant from the present time to the contract maturity date. However, implied volatility varies through time; therefore, the Black-Scholes-Merton model is misspecified. Options are traded on the volatility of the underlying asset, quoted in what are known as “vol” terms. In addition, at a given point in time, options with the same underlying assets may be trading at different vols. Empirical results suggest implied volatility is on average greater than realized volatility. In addition to this upward bias in implied volatility, there is the problem that available data is limited to only a few assets and market factors.
Explain long horizon volatility/VaR and the process of mean reversion according to an AR(1) model. Calculate conditional volatility with and without mean reversion.
To demonstrate mean reversion, consider a time series model with one lagged variable:
Xt = a + b x Xt-1 + et
This type of regression, with a lag of its own variable, is known as an autoregressive (AR) model. In this case, since there is only one lag, it is referred to as an AR(1) model. The long-run mean of this model is evaluated as [a / (1 - b)]. The key parameter in this long-run mean equation is b. Notice that if b = 1, the long-run mean is infinite (i.e., the process is a random walk). If b, however, is less than 1, then the process is mean reverting (i.e., the time series will trend toward its long-run mean).
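To illustrate the calculation named in this card: substituting the AR(1) equation into itself gives Xt+2 = a + ab + b²Xt + b x et+1 + et+2, so the two-period-ahead conditional variance (viewed from time t) is (1 + b²)σ², where σ² is the variance of the error term. With b = 1 (no mean reversion), this equals the 2σ² implied by the square root rule; with b < 1, mean reversion lowers the long-horizon variance. A small sketch with hypothetical numbers:

```python
from math import sqrt

sigma = 0.01        # hypothetical one-period conditional volatility (1%)
b = 0.8             # AR(1) coefficient; b < 1 implies mean reversion

# Two-period-ahead conditional variance under the AR(1) model
var_mean_reverting = (1 + b**2) * sigma**2

# Without mean reversion (b = 1, random walk), the square root rule applies
var_random_walk = 2 * sigma**2

print(round(sqrt(var_mean_reverting), 5), round(sqrt(var_random_walk), 5))
```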
Backtesting VaR model
Backtesting is the process of comparing losses predicted by the value at risk (VaR) model to those actually experienced over the sample testing period. If a model were completely accurate, we would expect VaR to be exceeded (this is called an exception) with the same frequency predicted by the confidence level used in the VaR model. In other words, the probability of observing a loss amount greater than VaR is equal to the significance level (x%). This value is also obtained by calculating one minus the confidence level.
For example, if a VaR of $10 million is calculated at a 95% confidence level, we expect to have exceptions (losses exceeding $10 million) 5% of the time. If exceptions are occurring with greater frequency, we may be underestimating the actual risk. If exceptions are occurring less frequently, we may be overestimating risk.
There are three desirable attributes of VaR estimates that can be evaluated when using a backtesting approach. The first desirable attribute is that the VaR estimate should be unbiased. To test this property, we use an indicator variable to record the number of times an exception occurs during a sample period. For each sample return, this indicator variable is recorded as 1 for an exception or 0 for a non-exception. The average of all indicator variables over the sample period should equal x% (i.e., the significance level) for the VaR estimate to be unbiased.
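A small sketch of the unbiasedness check above: record an indicator of 1 whenever the loss exceeds VaR, then compare the average exception rate with the significance level. The returns and VaR figure below are simulated and hypothetical.

```python
import numpy as np

np.random.seed(11)
significance = 0.05
var_estimate = 0.0165                 # hypothetical daily VaR expressed as a return (1.65%)

# Simulated daily portfolio returns over the backtesting window
returns = np.random.normal(0.0, 0.01, 1000)

# Indicator variable: 1 when the loss exceeds VaR (an exception), 0 otherwise
exceptions = (returns < -var_estimate).astype(int)

exception_rate = exceptions.mean()    # should be close to the significance level if unbiased
print(exceptions.sum(), round(exception_rate, 4), significance)
```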
A second desirable attribute is that the VaR estimate is adaptable. For example, if a large return increases the size of the tail of the return distribution, the VaR amount should also be increased. Given a large loss amount, VaR must be adjusted so that the probability of the next large loss amount again equals x%. This suggests that the indicator variables, discussed previously, should be independent of each other. It is necessary that the VaR estimate account for new information in the face of increasing volatility.
A third desirable attribute, which is closely related to the first two attributes, is for the VaR estimate to be robust. A strong VaR estimate produces only a small deviation between the number of expected exceptions during the sample period and the actual number of exceptions. This attribute is measured by examining the statistical significance of the autocorrelation of extreme events over the backtesting period. A statistically significant autocorrelation would indicate a less reliable VaR measure.
By examining historical return data, we can gain some clarity regarding which VaR method actually produces a more reliable estimate in practice. In general, VaR approaches that are nonparametric (e.g., historical simulation and the hybrid approach) do a better job at producing VaR amounts that mimic actual observations when compared to parametric methods such as an exponential smoothing approach (e.g., GARCH). The likely reason for this performance difference is that nonparametric approaches can more easily account for the presence of fat tails in a return distribution. Note that higher levels of λ (the exponential weighting parameter) in the hybrid approach will perform better than lower levels of λ. Finally, when testing the autocorrelation of tail events, we find that the hybrid approach performs better than exponential smoothing approaches. In other words, the hybrid approach tends to reject the null hypothesis that autocorrelation is equal to zero fewer times than exponential smoothing approaches.
Describe and calculate VaR for linear derivatives
In general, the VaR of a long position in a linear derivative is VaRp = Δ x VaRf, where
VaRf is the VaR of the underlying factor and the derivative’s delta, Δ, is the sensitivity of the derivative’s price to changes in the underlying factor.
Taylor Series approximation of the function f(x)
A second-order Taylor series approximates the function around a point x0 as:
f(x) ≈ f(x0) + f'(x0)(x - x0) + ½f''(x0)(x - x0)²
The first-order (delta) term captures the linear exposure, while the second-order (gamma, or convexity) term is the correction used in the delta-gamma method.
Explain the full revaluation method for computing VaR. Compare delta-normal and full revaluation approaches for computing VaR
Explain structured Monte Carlo method for computing VaR and identify strengths and weaknesses of this approach
The structured Monte Carlo (SMC) approach simulates thousands of valuation outcomes for the underlying assets based on the assumption of normality.
An advantage of the SMC approach is that it is able to address multiple risk factors by assuming an underlying distribution and modeling the correlations among the risk factors.
A disadvantage of the SMC approach is that in some cases it may not produce an accurate forecast of future volatility and increasing the number of simulations will not improve the forecast.
Describe the implications of correlation breakdown for scenario analysis
The key point here is that in times of crisis, correlations increase (some substantially) and strategies that rely on low correlations fall apart in those times. Certain economic or crisis events can cause diversification benefits to deteriorate in times when the benefits are most needed.
A simulation using the SMC approach is not capable of predicting scenarios during times of crisis if the covariance matrix was estimated during normal times.
Stress testing
Stressing the correlation is a method used to model the contagion effect that could occur in a crisis event.
One approach for stress testing is to examine historical crisis events, such as the Asian crisis, October 1987 market crash, etc. After the crisis is identified, the impact on the current portfolio is determined. The advantage of this approach is that no assumptions of underlying asset returns or normality are needed. The biggest disadvantage of using historical events for stress testing is that it is limited to only evaluating events that have actually occurred.
The historical simulation approach does not limit the analysis to specific events. Under this approach, the entire data sample is used to identify “extreme stress” situations for different asset classes. For example, certain historical events may impact the stock market more than the bond market. The objective is to identify the five to ten worst weeks for specific asset classes and then evaluate the impact on today’s portfolio. The advantage of this approach is that it may identify a crisis event that was previously overlooked for a specific asset class. The focus is on identifying extreme changes in valuation instead of extreme movements in
underlying risk factors. The disadvantage of the historical simulation approach is that it is still limited to actual historical data.
An alternative approach is to analyze different predetermined stress scenarios.
An advantage to scenario analysis is that it is not limited to the evaluation of risks that have occurred historically. It can be used to address any possible scenarios. A disadvantage of the stress scenario approach is that the risk measure is deceptive for various reasons. For example, a shift in the domestic yield curve could cause estimation errors by overstating the risk for a long and short position and understating the risk for a long-only position.
Describe worst-case scenario (WCS) analysis and compare WCS to VaR
The worst case scenario (WCS) assumes that an unfavorable event will occur with certainty. The focus is on the distribution of worst possible outcomes given an unfavorable event. An expected loss is then determined from this worst case distribution analysis. Thus, the WCS information extends the VaR analysis by estimating the extent of the loss given an unfavorable event occurs.
In other words, the tail of the original return distribution is more thoroughly examined with another distribution that includes only probable extreme events. For example, within the lowest 5% of returns, another distribution can be formed with just those returns and a 1% WCS return can then be determined. Recall that VaR provides a value of the minimum loss for a given percentage, but says nothing about the severity of the losses in the tail. WCS analysis attempts to complement the VaR measure with analysis of returns in the tail.
Describe the mean-variance framework and the efficient frontier
Under the mean-variance framework, it is necessary to assume that return distributions for portfolios are elliptical distributions. The most commonly known elliptical probability distribution function is the normal distribution.