Chapter 13: Extreme Value Theory (EVT) Flashcards
Extreme Value Theory (EVT)
Focuses on techniques for describing the tails of a dataset using a distribution that differs from the one used for the main body of the data (i.e. the non-tail data).
The assumption is that a different process underlies the extreme value (i.e. tail) data than the normal, attritional data.
2 Families of distributions considered (EVT)
- Generalised Extreme Value (GEV) distributions
- Generalised Pareto Distributions (GPD)
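The GPD is typically fitted to the excesses over a high threshold (the peaks-over-threshold approach). A minimal sketch of this, using simulated claims data (all figures below are illustrative assumptions, not from the notes):

```python
# Sketch: fitting a GPD to threshold exceedances (peaks-over-threshold).
# The claim data, seed and threshold percentile are illustrative assumptions.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(42)
claims = rng.lognormal(mean=10, sigma=1.5, size=5000)  # simulated gross claims

threshold = np.quantile(claims, 0.95)                  # choose a high threshold
exceedances = claims[claims > threshold] - threshold   # excesses over threshold

# Fit the GPD to the exceedances; floc=0 because losses are measured
# as amounts in excess of the threshold.
shape, loc, scale = genpareto.fit(exceedances, floc=0)
print(f"shape (xi) = {shape:.3f}, scale (beta) = {scale:.3f}")
```

A positive fitted shape parameter indicates a heavy tail; the fitted GPD can then be used to price layers above the threshold.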
5 Uses of EVT in general insurance and reinsurance
- catastrophe modelling: EVT gives a natural framework within which to work when we attempt to quantify potential losses from these causes. It is distinct from third party catastrophe models.
- reinsurance/higher layer pricing
- capital modelling
- incurred but not enough reported (IBNER) reserving
- pure incurred but not reported (IBNR) reserving
Pros of EVT
- EVT offers an interesting (and in many ways the mathematically correct) view on the quantitative measure of risk
- The key point is that EVT describes extremes (maxima, minima, longest runs, longest time and so on) of random phenomena
- The estimation method is based on the object of interest, namely the tail of the distribution, rather than the centre.
- It provides a reasonable formula for the tail distribution which can be justified from a priori considerations.
Cons of EVT
- To estimate the extremes of the tails, we have to make mathematical assumptions about the tail model. These assumptions are very difficult to verify in practice. Hence, there is intrinsic risk in using the model.
- Even for (standard) Extreme Value Theory estimations, we have to set the optimal threshold above which the data are to be used for tail estimation. There is no canonical optimal choice.
- The EVT literature routinely assumes independent and identically distributed data, which is often not the case.
- These models can be difficult to explain to a non-statistical audience.
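One common heuristic for the threshold-choice problem above is the mean excess (mean residual life) plot: under a GPD the mean excess is linear in the threshold, so one looks for the lowest threshold above which the plot is roughly linear. A sketch, using simulated heavy-tailed losses (an assumption; this is one heuristic, not a canonical rule, as the card notes):

```python
# Sketch: mean-excess diagnostic for threshold selection.
# The simulated loss data and quantile grid are illustrative assumptions.
import numpy as np

def mean_excess(data, thresholds):
    """Mean of the exceedances above each threshold; linear in u under a GPD."""
    return np.array([(data[data > u] - u).mean() for u in thresholds])

rng = np.random.default_rng(0)
losses = rng.pareto(a=2.5, size=10_000) + 1.0   # simulated heavy-tailed losses
us = np.quantile(losses, np.linspace(0.5, 0.99, 25))
me = mean_excess(losses, us)
# In practice one plots (us, me) and picks the lowest threshold above
# which the plot looks approximately linear.
```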
Limitations of EVT in general insurance (8)
- Frequency distribution not being Poisson (i.e. the variance of the frequency is usually larger than the mean)
- Adding IBNER and IBNR back to the historic data reduces the apparent volatility of the severity and frequency distributions respectively. We can allow for this by fitting a model to both IBNER and IBNR claims.
- There is no cap on large losses when using the GPD. We can, however, truncate the distribution by applying a cap on its range. The probability density must then be scaled up slightly so that the total probability is still 1.
- There can be parameter error due to the finite volume of data used to fit the model. This can be allowed for by simply increasing the probability, bootstrapping historic large losses or by using Bayesian approaches.
- Uncertain future inflation. This can be allowed for using a stochastic claims inflation model.
- There are other applicable distributions that can be used instead of EVT distributions, including the log-normal, normal, Gamma, Burr and Student’s t.
- The GPD can have infinite moments. This can be allowed for by applying a cap (i.e. truncate) on large losses.
- Insurance losses may not be independent and identically distributed, which is a fundamental assumption of the EVT model. There can be correlations between severity, frequency or both.
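The capping point in the list above can be sketched directly: truncate a fitted GPD at a maximum loss and rescale the density so it still integrates to 1. The parameter values below are illustrative assumptions, not fitted values:

```python
# Sketch: capping (truncating) a fitted GPD at a maximum loss and rescaling
# so the total probability remains 1. Shape, scale and cap are assumptions.
from scipy.stats import genpareto

xi, beta = 0.4, 50_000.0       # assumed fitted shape and scale
cap = 5_000_000.0              # chosen cap on excesses above the threshold

gpd = genpareto(c=xi, scale=beta)
norm_const = gpd.cdf(cap)      # probability mass below the cap

def truncated_pdf(x):
    """Density of the GPD truncated to [0, cap], rescaled to integrate to 1."""
    return gpd.pdf(x) / norm_const if 0 <= x <= cap else 0.0

def truncated_cdf(x):
    """CDF of the truncated distribution; reaches 1 exactly at the cap."""
    return min(gpd.cdf(x) / norm_const, 1.0)

print(truncated_cdf(cap))      # 1.0 by construction
```

Because the untruncated GPD with positive shape has infinite upper support (and can have infinite moments), this same device also addresses the infinite-moments limitation in the list above.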
What is an extreme event?
Extreme events can be defined as very unlikely or rare events that have a large financial impact.
3 Types of models that may have been used to manage and understand extreme events
- Statistical/actuarial models, where past experience is used to estimate the consequences of future events.
- Physical models, where, for example, the consequence of a landslide or submarine earthquake is estimated from a scaled down model in a laboratory.
- Simulation or catastrophe models, which depend on computer simulations of events based on predetermined parameters and physical constraints, e.g. models used in forecasting weather or predicting the impact of pandemics.
What are the pros of statistical/actuarial models? (3)
- Parameterised using past data therefore arguably the most relevant
- Past data can be adjusted to take account of current or expected scenarios
- Solutions can sometimes be closed form and can therefore be more manageable and relatable to clients.
What are the cons of statistical/actuarial models? (3)
- Past data is not necessarily a good indicator for the future.
- Current prejudices/biases in long-standing models can be perpetuated without much critical thinking.
- Data may not be available or of sufficient quantity/quality to complete a rigorous analysis.
What are the pros of physical models? (2)
- Can arguably be the most objective and hence accurate model as it deals with actual physical situations in a controlled laboratory environment.
- Would usually be commissioned and carried out by experts in their fields which places more confidence in the model.
What are the cons of physical models? (3)
- Can only be used for natural catastrophes
- The results need to be translated into the effects on a particular insurance contract.
- Difficult and expensive to commission and out of the ambit of a typical actuary’s expertise.
What are the pros of simulation/catastrophe models? (2)
- By combining the statistical and physical models, we should be able to capture the pros of each.
- Since these models produce simulations, we can build up statistical models of extreme events tailored to the specific insurance product.
What are the cons of simulation/catastrophe models? (2)
- Requires large computing power and expensive modelling software.
- Distributions aren’t closed form and can be difficult to represent or explain.
We may consider the structures of the model types in 3 parts
- Diagnostic, (e.g. post-loss assessment)
- Investigative, (formulate an explanation as to why the events occur)
- Predictive (attempt to forecast)