Chapter 3: Volatility and comovement Flashcards
What is volatility?
- The most traditional measure of risk.
- The standard deviation of log returns per unit of time.
- The unit of time in risk management is typically one day.
- For IID normal log returns, volatility scales with the square root of time: the annualized standard deviation is √250 times the daily standard deviation (using 250 trading days per year).
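As a worked equation (using the convention of 250 trading days per year):

```latex
\sigma_{\text{annual}} = \sqrt{250}\,\sigma_{\text{day}}
\qquad\Rightarrow\qquad
\sigma_{\text{day}} = 1\% \;\Rightarrow\; \sigma_{\text{annual}} \approx 15.8\%
```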
How do we obtain an estimate of volatility?
- Different approaches exist to obtain an estimate of volatility:
- Historical-based volatility = gives a backward-looking estimate:
- Most common and most simple approach
- It can easily be modified to allow for time-variation
- Implied volatility:
- Gives a forward-looking estimate, as implicit in observed market prices of options
What is the operational version used for risk management purposes for volatility?
- The mean log return is assumed to be zero
- Simple returns are used as a proxy for log returns
- A maximum-likelihood (ML) estimate is used instead of the unbiased estimate (divide by m rather than m − 1)
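A minimal sketch of this operational estimator (variable and function names are illustrative, not from the source):

```python
import numpy as np

def operational_vol(prices, m=None):
    """Daily volatility estimate for risk management purposes:
    zero mean, simple returns as a proxy for log returns,
    ML estimator (divide by m, not m - 1)."""
    prices = np.asarray(prices, dtype=float)
    u = (prices[1:] - prices[:-1]) / prices[:-1]  # simple returns as proxy
    if m is not None:
        u = u[-m:]                                # keep the last m observations
    return np.sqrt(np.mean(u ** 2))               # zero-mean ML estimate

# Example: estimate today's daily volatility from the last 50 closing prices
# sigma = operational_vol(close_prices, m=50)
```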
What are the main issues when computing volatility based on a historical sample?
- How to define the sample size?
- Large samples yield more accurate estimates
- But observations from long ago might be less relevant
- There might be stationarity issues: this would imply a violation of the ‘identical’ assumption of IID returns.
- Stationarity = statistical properties are constant over time
- Are equal weights reasonable?
- More recent observations might be more predictive for the near future
What are the methods to account for time-variation in volatility?
- Different weighting scheme: weights α_i are introduced, where α_i is the weight given to the observation of i days ago.
- Special case: exponentially weighted moving average model
- Different weighting scheme and reversion to the mean:
- Special case: autoregressive conditional heteroskedasticity model
- V_L is the long-run variance rate, with weight γ (gamma).
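In standard textbook notation, where u_{n−i} is the %-change observed i days ago:

```latex
\text{Weighting scheme:}\quad
\sigma_n^2 = \sum_{i=1}^{m} \alpha_i\, u_{n-i}^2,
\qquad \sum_{i=1}^{m} \alpha_i = 1
```

```latex
\text{With mean reversion:}\quad
\sigma_n^2 = \gamma V_L + \sum_{i=1}^{m} \alpha_i\, u_{n-i}^2,
\qquad \gamma + \sum_{i=1}^{m} \alpha_i = 1
```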
What is the EWMA model?
- Exponentially weighted moving average (EWMA) volatility: introduces exponentially decreasing weights, with a decay parameter λ between 0 and 1. This weighting scheme gives less weight to older observations.
- Responsiveness to recent daily changes is governed by the decay parameter:
- If the decay parameter is small: quick response of volatility to new information.
- E.g. daily data: typical value 0.9
- If the decay parameter is large: slow response of volatility to new information.
- E.g. typical value: 0.97
- Nice characteristic: simple updating rule for volatilities where today’s volatility is a weighted average of previous forecast and latest % change.
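A minimal sketch of the EWMA updating rule (λ = 0.94 here is only an illustrative choice, close to RiskMetrics' classic value):

```python
def ewma_variance_update(prev_var, latest_return, lam=0.94):
    """One-step EWMA update: today's variance is a weighted average of
    yesterday's variance forecast and the latest squared %-change."""
    return lam * prev_var + (1.0 - lam) * latest_return ** 2

# Example: roll the estimate forward through a short return series
var = 0.01 ** 2                       # start from a 1% daily volatility
for u in [0.012, -0.008, 0.020]:      # illustrative daily returns
    var = ewma_variance_update(var, u)
vol = var ** 0.5                      # updated daily volatility estimate
```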
How can we evaluate the EWMA volatility?
- Major advantage is the simple updating rule:
- Quickly obtain a volatility estimate
- Little data to store/input
- Major disadvantage: no built-in mean reversion
- EWMA variance could drift away in principle
- However: it is an empirical fact that the variance rate pulls back to the long run mean
- A realistic volatility model should include a long-run variance component.
What is the GARCH model?
- Generalized autoregressive conditional heteroskedasticity volatility
- Has a long-run average variance rate V_L, with weight γ.
- Has a weight α on the latest %-change and a decay parameter β on the previous variance forecast.
- The GARCH(1,1) variance is a weighted average of the previous forecast, the latest %-change and the long-run variance rate
- The EWMA volatility is a special case of the GARCH(1,1) volatility.
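In the standard notation:

```latex
\text{GARCH}(1,1):\quad
\sigma_n^2 = \gamma V_L + \alpha\, u_{n-1}^2 + \beta\, \sigma_{n-1}^2,
\qquad \gamma + \alpha + \beta = 1
```

The EWMA model is recovered by setting γ = 0, α = 1 − λ and β = λ, i.e. by dropping the mean-reversion term.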
What are implied volatilities?
- Implied volatilities are volatilities as implied in observed market prices of options.
- Since market prices are forward looking: implied volatilities are also forward looking.
- Intuitively, this is more appropriate for estimating future volatility.
- To obtain an estimate of the implied volatility, we need a particular option pricing model.
- Extract the volatility: the volatility that, when plugged into the option pricing model, reproduces the observed market price is the implied volatility.
- Most common option pricing model: Black & Scholes.
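A minimal sketch of extracting implied volatility by root-finding on the Black-Scholes call price (function names are illustrative; scipy is assumed available):

```python
from math import exp, log, sqrt
from scipy.optimize import brentq
from scipy.stats import norm

def bs_call_price(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

def implied_vol(market_price, S, K, T, r):
    """Find the sigma whose Black-Scholes price matches the market price."""
    return brentq(lambda s: bs_call_price(S, K, T, r, s) - market_price,
                  1e-6, 5.0)  # bracket the root between near-zero and 500% vol

# Example: a 1-year at-the-money call quoted at 10.45 with S=K=100, r=5%
# iv = implied_vol(10.45, S=100, K=100, T=1.0, r=0.05)  # roughly 0.20
```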
What is comovement?
- When evaluating risk in a portfolio, a standalone volatility analysis needs to be complemented with an analysis of the comovement between the different portfolio components.
- The risk profile of the portfolio can be very different depending upon the degree of comovement of the portfolio constituents.
- At the portfolio level, comovement reflects the benefits of diversification, which lead to lower portfolio volatility.
What are the different approaches to model comovement?
- Most traditional measure: Pearson correlation
- Time variation can also be introduced, similar to the approach used for volatilities
- Main drawback: makes strong distributional assumptions
- More general measures of comovement can be applied to more general distributions:
- Rank correlations: more general measures of concordance
- Copulas: a tool for defining a correlation structure and joint distribution between variables, regardless of the shape of their marginal probability distributions.
What is the Pearson correlation?
- Correlation = the standardized covariance of the log returns of assets x and y per unit of time
- Possible to monitor time-variation in correlations:
- Exponentially weighted moving average correlations
- Generalized autoregressive conditional heteroskedasticity correlations
- The variance-covariance matrix needs to be estimated and updated consistently. This is simple in the EWMA model, but trickier in the GARCH setting, where it requires a multivariate GARCH model.
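A minimal sketch of a consistent EWMA covariance/correlation update (the same λ must be applied to variances and covariances; names are illustrative):

```python
def ewma_cov_update(prev_cov, x_return, y_return, lam=0.94):
    """One-step EWMA update of the covariance between assets x and y.
    With x_return == y_return this is the variance update."""
    return lam * prev_cov + (1.0 - lam) * x_return * y_return

# Updating variances and the covariance with the same lambda keeps the
# variance-covariance matrix internally consistent:
# var_x  = ewma_cov_update(var_x, u_x, u_x)
# var_y  = ewma_cov_update(var_y, u_y, u_y)
# cov_xy = ewma_cov_update(cov_xy, u_x, u_y)
# corr_xy = cov_xy / (var_x ** 0.5 * var_y ** 0.5)
```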
How can we evaluate the Pearson correlation?
- Most common measure as it is easy to compute.
- Shortcomings:
- Only appropriate for elliptical distributions. In probability and statistics, an elliptical distribution is any member of a broad family of probability distributions that generalize the multivariate normal distribution.
- It only captures linear dependence
How can we capture association more generally?
- To measure the association between non-normally distributed variables, we can use:
- Rank correlation: statistic of association between the two variables
- Spearman correlation
- Kendall's tau correlation
- Copulas: allow us to link the marginal distributions of variables into a joint distribution
What is the Spearman correlation?
- Non-parametric measure of correlation based on data ranks.
- Spearman's correlation is computed from the squared differences between the ranks of paired observations.
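The standard formula, assuming no ties among the n paired observations (d_i is the difference between the ranks of observation i):

```latex
\rho_S = 1 - \frac{6 \sum_{i=1}^{n} d_i^2}{n(n^2 - 1)}
```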
What is Kendall’s Tau?
- Kendall’s tau is a non-parametric measure of correlation based on the number of concordances and discordances in paired observations:
- Any pair of observations is concordant if the rank orderings of both elements agree, and discordant otherwise.
- Kendall's tau measures the proportion of concordant pairs versus discordant pairs.
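A minimal sketch computing both rank correlations on illustrative return data (assumes scipy is available):

```python
import numpy as np
from scipy.stats import kendalltau, spearmanr

# Illustrative daily returns for two assets
x = np.array([0.012, -0.008, 0.020, 0.003, -0.015])
y = np.array([0.010, -0.002, 0.018, -0.001, -0.012])

rho_s, _ = spearmanr(x, y)   # Spearman: correlation based on ranks
tau, _ = kendalltau(x, y)    # Kendall: concordant vs discordant pairs
print(f"Spearman rho: {rho_s:.3f}, Kendall tau: {tau:.3f}")
```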
How do we choose the quantitative factors (the VaR parameters) when we want to use VaR as a potential loss measure?
- The choice of the confidence level c is arbitrary, as the VaR is never the worst loss; it is only the loss corresponding to a particular significance level.
- The choice of horizon T is determined by the period over which the portfolio is assumed static.
- Determined by its liquidity.
How do we choose the quantitative factors (the VaR parameters) when we want to use VaR as a capital cushion?
- c is set high to capture tail risk (99%-99.9%)
- The choice of T is determined by the period over which the portfolio is assumed static:
- Up to 10 days for market risk
- 1 year for credit risk
How do we choose the quantitative factors (the VaR parameters) when we want to use VaR for backtesting?
- Here we want to be able to observe frequent breaches.
- c is set rather low, so as to allow VaR to be breached frequently.
- The horizon T is chosen rather short, so as to observe many independent VaR breaches.
- Using a 2-week VaR horizon implies 26 independent observations per year; a 1-day VaR horizon has 250 observations over the same year.
- Using a daily VaR, you can compare your P&L number 250 times to VaR and determine how many times P&L exceeds VaR.
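A minimal sketch of this daily backtest count (names are illustrative):

```python
import numpy as np

def count_var_breaches(pnl, var):
    """Count the days on which the loss exceeded the VaR estimate.
    pnl: array of daily P&L; var: array of (positive) daily VaR numbers."""
    pnl, var = np.asarray(pnl), np.asarray(var)
    return int(np.sum(pnl < -var))

# With c = 95% and 250 trading days, an accurate VaR model should be
# breached about (1 - 0.95) * 250 = 12.5 times per year.
```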