Chapter 19: Methods of calculating the risk premium Flashcards

1
Q

Burning cost

A

The actual cost of claims during a past period of years expressed as an annual rate per unit of exposure.

Uses a simple rating model, based entirely on historical data.

2
Q

Burning cost premium (BCP) calculation

A

BCP = (ΣClaims)/Total Exposed to Risk
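
A minimal numerical sketch of this calculation in Python (the figures are purely illustrative):

# Hypothetical aggregate claims and exposure by policy year
claims_by_year = [1_200_000, 950_000, 1_400_000]     # total claim amounts
exposure_by_year = [10_000, 9_500, 11_000]           # e.g. vehicle-years

bcp = sum(claims_by_year) / sum(exposure_by_year)
print(f"Burning cost premium per unit of exposure: {bcp:.2f}")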

3
Q

Effective burning cost

A

The burning cost calculated using unadjusted data.

In practice, claims are usually adjusted to allow for past inflation and IBNR (giving the indexed burning cost).

4
Q

Indexed burning cost

A

The burning cost calculated using adjusted data.

5
Q

Why has the burning cost premium been criticised when applied to current figures without adjustments?

A
  • we ignore trends such as claims inflation
  • by taking current exposure (often premiums) and comparing it with current undeveloped claims, we will understate the ultimate position, so loss ratios turn out higher than expected
6
Q

Burning cost approach:
Basic elements of the risk premium per unit of exposure

A
  • average claim frequency per policy
  • average exposure per policy
  • average cost per claim
7
Q

Burning cost approach:
Pure risk premium

A

Pure risk premium per unit of exposure = (expected claim frequency per policy / average exposure per policy) × (expected cost per claim)
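
A short worked example in Python, using purely illustrative inputs:

# Hypothetical inputs
expected_claim_freq_per_policy = 0.15   # expected number of claims per policy per year
average_exposure_per_policy = 1.2       # e.g. average vehicle-years per policy
expected_cost_per_claim = 3_000.0

pure_risk_premium = (expected_claim_freq_per_policy / average_exposure_per_policy) * expected_cost_per_claim
print(f"Pure risk premium per unit of exposure: {pure_risk_premium:.2f}")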

8
Q

Burning cost approach:
Information needed for each policy

A
  • dates on cover
  • all rating factors and exposure measure details
  • details of premiums charged, unless they can be calculated by reference to the details on rating factors and exposure.
9
Q

Burning cost approach:
When do we usually use this method?

A
  • where little individual claims data are available
  • where aggregate claims data by policy year are available
10
Q

Burning cost approach:
Advantages

A
  • simplicity
  • needs relatively little data
  • quicker than other methods to perform
  • allows for experience of individual risks or portfolios
11
Q

Burning cost approach:
Disadvantages

A
  • harder to spot trends so it provides less understanding of changes impacting the individual risks
  • adjusting past data is hard
  • adjusting for changes in cover, deductibles and so on may be hard as we often lack individual claims data
  • it can be a very crude approach depending on what adjustments are made
12
Q

Frequency-severity approach

A

We assess the expected loss for a particular insurance structure by estimating the distribution of expected claims frequencies and distribution of severities for that structure and combining the results.

13
Q

Key assumption of the frequency-severity approach

A

The loss frequency and severity distributions are not correlated

14
Q

Causes of frequency trends

A

Changes in:

  • accident frequency
  • the propensity to make a claim and other changes in the social and economic environment
  • legislation
  • the structure of the risk
15
Q

Frequency-severity approach:
For each historical policy year, the frequency of losses is calculated as:

A

frequency =
(ultimate number of losses)/(exposure measure)

16
Q

Frequency-severity approach:
A standard trend applied to the frequency is based on:

A
  • an analysis of all the risks within an insurer’s portfolio
  • external information, such as industry surveys
17
Q

Drivers of severity trends

A
  • economic inflation
  • changes in court awards and legislation
  • economic conditions
  • changes to the structure of the risk
18
Q

“from the ground up”

A

“From the ground up” claims data shows all claims, no matter how small they are, and shows the original claim amount. It is often used in reinsurance to refer to data which shows all claims, even though reinsurance is only required for large claims.

19
Q

Frequency-severity approach:
For each historical policy year, the average severity of losses is calculated as:

A

average severity =
(ultimate cost of losses)/(ultimate number of losses)

20
Q

Possible drivers of frequency trends for employer’s liability insurance

A
  • increasing compensation culture
  • prevalence of no-win-no-fee arrangements
  • growth of claims management companies
  • changes in health and safety regulations
  • court decisions
  • changes in economic conditions
  • emergence of latent claims
  • changes in policy terms, conditions, excesses, limits, etc.
21
Q

Possible drivers of severity trends for employer’s liability insurance

A
  • salary inflation
  • court decisions/inflation
  • medical advances/medical inflation
  • inflation of legal costs
  • legislative changes
  • interest rate changes
  • changes in policy terms, conditions, excesses, limits, etc.
22
Q

Methods used to develop individual losses for IBNER

A
  • apply an incurred development factor to each individual loss (open and closed claims), reflecting its maturity, to estimate its ultimate settlement value
  • a more realistic approach is to develop only open claims, using “case estimate” development factors. These case estimate factors will be higher than the incurred development factors at the same maturity, to offset the effect of not developing closed claims
  • use stochastic development methods to allow for the variation that may occur in individual ultimate loss amounts around each of their expected values
23
Q

Aggregate deductible

A

The maximum amount that the insured can retain within their deductible when losses are aggregated

24
Q

Non-ranking deductible

A

The non-ranking component of the deductible does not contribute towards the aggregate deductible

25
Q

Ranking deductible

A

The ranking component of a deductible does contribute towards an insured’s aggregate deductible

26
Q

Trading deductible

A

The amount that is retained by the insured for each individual loss once the aggregate deductible has been fully eroded

27
Q

Per occurrence limit

A

The maximum amount the insurer can retain for each individual loss

28
Q

Annual aggregation limit

A

The maximum amount the insurer can retain when all losses for an annual policy period are aggregated

29
Q

Frequency-severity approach:
Loss distributions often used

A

Frequency: Poisson, Negative Binomial
Severity: LogNormal, Weibull, Pareto, Gamma
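
A minimal Monte Carlo sketch of combining a frequency and a severity distribution in Python, assuming a Poisson frequency and a lognormal severity with purely illustrative parameters:

import numpy as np

rng = np.random.default_rng(seed=42)

n_sims = 10_000
freq_mean = 5.0                  # assumed Poisson claim frequency per year
sev_mu, sev_sigma = 9.0, 1.5     # assumed lognormal severity parameters

aggregate_losses = np.empty(n_sims)
for i in range(n_sims):
    n_claims = rng.poisson(freq_mean)
    severities = rng.lognormal(sev_mu, sev_sigma, size=n_claims)
    aggregate_losses[i] = severities.sum()

print("Mean aggregate loss:", round(aggregate_losses.mean()))
print("99.5th percentile:", round(np.quantile(aggregate_losses, 0.995)))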

30
Q

Frequency-severity approach:
Common underlying fitting algorithms (methods)

A
  • maximum likelihood estimation
  • method of least squares
  • method of moments
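
As an illustration of the first of these, a short sketch of fitting a severity distribution by maximum likelihood with scipy (the lognormal form and the simulated data are assumptions for illustration only):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
claims = rng.lognormal(mean=9.0, sigma=1.5, size=500)   # simulated claim severities

# Maximum likelihood fit of a lognormal severity distribution (location fixed at zero)
shape, loc, scale = stats.lognorm.fit(claims, floc=0)
print("Fitted sigma:", round(shape, 3), "fitted mu:", round(float(np.log(scale)), 3))
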
31
Q

Frequency-severity approach:
Statistical goodness of fit tests usually used

A
  • Chi-Squared statistic
  • Kolmogorov-Smirnov statistic
  • Anderson-Darling statistic
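
A sketch of the second of these tests using scipy, continuing the illustrative lognormal fit above (note that the p-value is optimistic when the parameters were estimated from the same data):

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
claims = rng.lognormal(mean=9.0, sigma=1.5, size=500)

# Fit a candidate severity distribution, then apply the Kolmogorov-Smirnov test
shape, loc, scale = stats.lognorm.fit(claims, floc=0)
ks_stat, p_value = stats.kstest(claims, stats.lognorm(shape, loc, scale).cdf)
print("KS statistic:", round(ks_stat, 4), "p-value:", round(p_value, 4))
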
32
Q

Practical considerations of simulation modelling

A
  • we require more observations if investigating the tails of the resulting loss distribution than if investigating the mean
  • we will require more simulations if assessing an excess layer than if assessing the underlying primary layer
33
Q

Frequency-severity approach:
Common data issues

A
  • Form of the data - loss information gross of reinsurance and from the ground up for all claims
  • choice of base period to achieve the required quantity of data - typically 5 years, but more is desirable
34
Q

Frequency-severity approach:
Advantages

A
  • mirrors the underlying process - a number of losses are generated, each with its ultimate value - and so is readily understood by underwriters
  • can use the approach for complex insurance structures
  • by separately assessing information on loss frequency and severity, we gain additional insights into aggregate loss amounts
  • helps us identify trends. Trends for frequency and severity can be allowed for separately
35
Q

Frequency-severity approach:
Disadvantages

A
  • assessing the compound frequency-loss distribution has more onerous data requirements than assessing aggregate amounts
  • the approach can be time-consuming for a single risk
  • requires a high level of expertise
36
Q

Generalised linear model (GLM)

A
  • most common multivariate model
  • allows for the effect of a number of predictor variables on a certain response variable to be modelled
  • statistically speaking, the GLM generalises linear regression by allowing the linear model to be related to the response variable via a link function and by allowing the magnitude of the variance of each measure to be a function of its predicted value
37
Q

Common forms of models:
Claim frequency

A

May be modelled by a Poisson process.

A log link function is normally used since this results in a multiplicative structure of factors, which has been found in practice to best reflect the relationship between the variables

Typical form: Poisson error function with a log link
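
A minimal sketch of such a model with statsmodels (the dataset and rating factor are invented for illustration; log(exposure) enters as an offset so the model is per unit of exposure):

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical policy-level data: claim counts, exposure and one rating factor
policies = pd.DataFrame({
    "claim_count": [0, 1, 0, 2, 0, 1, 0, 0, 1, 3],
    "exposure":    [1.0, 0.5, 1.0, 1.0, 0.8, 1.0, 0.3, 1.0, 1.0, 1.0],
    "age_band":    ["17-21", "22-29", "30-39", "17-21", "40+",
                    "22-29", "30-39", "40+", "17-21", "22-29"],
})

# Poisson error structure; the log link (the Poisson default) gives multiplicative factors
freq_model = smf.glm(
    "claim_count ~ C(age_band)",
    data=policies,
    family=sm.families.Poisson(),
    offset=np.log(policies["exposure"]),
)
print(freq_model.fit().summary())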

38
Q

Common forms of models:
Claim severity

A

Typically claim amounts are modelled with a Gamma error term and a log link function, which assumes factors have a multiplicative effect on risk

The Gamma distribution doesn’t allow zero responses, so zero-sized claims are removed from both the frequency and severity data
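
A corresponding severity sketch with statsmodels (again with invented data; the log link is specified explicitly since it is not the statsmodels default for the Gamma family):

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical individual (non-zero) claim amounts with one rating factor
claims = pd.DataFrame({
    "amount":      [1200.0, 850.0, 4300.0, 2100.0, 660.0, 3900.0, 1500.0, 980.0],
    "vehicle_grp": ["A", "A", "C", "B", "A", "C", "B", "B"],
})

# Gamma error structure with a log link (multiplicative factor effects)
sev_model = smf.glm(
    "amount ~ C(vehicle_grp)",
    data=claims,
    family=sm.families.Gamma(link=sm.families.links.Log()),
)
print(sev_model.fit().summary())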

39
Q

Propensity

A

A tendency towards a particular way of behaving

40
Q

Propensity to claim modelling

A

Binary in nature and is modelled using a binomial error distribution.

A multiplicative model is usually used, but we need predictions in the interval [0,1] rather than (0, inf); this is achieved using a logit link function
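
A minimal sketch with statsmodels (invented data; binomial error structure with a logit link so fitted probabilities stay in [0, 1]):

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data: whether each policy gave rise to a claim (1) or not (0)
policies = pd.DataFrame({
    "claimed":  [0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0],
    "age_band": ["17-21", "17-21", "40+", "40+", "22-29", "22-29",
                 "17-21", "30-39", "22-29", "30-39", "40+", "17-21"],
})

propensity_model = smf.glm(
    "claimed ~ C(age_band)",
    data=policies,
    family=sm.families.Binomial(link=sm.families.links.Logit()),
)
print(propensity_model.fit().summary())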

41
Q

Relativities

A

Numbers that quantify the level of risk in one category compared to that in another. They do not describe the absolute level of risk.

42
Q

Interaction term

A

Used when the pattern in the response variable is better modelled by including extra parameters for each combination of two or more factors. An interaction exists when the effect of one factor varies depending on the level of another factor.

43
Q

GLM modelling considerations

A
  • choosing the factors to include in the model
  • analysis of significance of factors
  • approaches to classification
  • measuring uncertainty in the estimates of the model parameters
  • comparisons with time
  • consistency checks with other factors
  • restrictions on the use of factors in the model
  • correlation between predictor variables
  • parameter smoothing
44
Q

GLM:
Choosing the factors to include in the model

A
  • as few parameters as possible should be used to find a satisfactory fit to the data (parsimony)
  • one-way and two-way analyses of variance can identify factors that have an influence on the response variable - ensure all factors have enough exposure
45
Q

GLM:
Techniques used to analyse statistical significance of factors used

A
  • deviance
  • scaled deviance
  • chi-squared statistics
  • F statistics
  • Akaike Information Criterion (AIC)
46
Q

GLM:
Approaches to classification

A
  • spatial smoothing
  • decision trees
  • Chi-Squared Automatic Interaction Detector (CHAID)
47
Q

GLM:
Comparisons with time

A
  • fit model that includes interaction of single factor with a measure of time
  • test whether effect of the factor varies depending on the measure of time
  • determine whether effect of each factor is consistent year to year
48
Q

Other types of multivariate models

A
  • Minimum bias methods
  • Generalised non-linear models - demand modelling
  • Generalised additive models
49
Q

Minimum bias method

A

Involves assessing the effect of one factor on a one-way basis, and then assessing the effect of a second factor having standardised for the effect of the first factor.

This is repeated until the iterations have correctly assessed the true effect of each rating factor, over and above the effects of all the correlated factors.

Drawbacks:

  • lack proper statistical framework
  • don’t provide helpful diagnostics that indicate whether effect of rating factor on experience is systematic and significant or not
  • computationally less efficient than GLM
50
Q

Categories of risk to which exposure curves can be applied

A
  • those risks where the loss is finite
  • those risks where (theoretically) there is no limit
51
Q

Most common forms of original loss curves

A
  • first loss scales/exposure curves
  • XL scales
  • ILFs
52
Q

First loss scales/exposure curves

A
  • usually seen in property business
  • give the proportion of the full premium allocated to primary layers where losses are limited at different values
  • express limits as a fraction of sum insured, maximum probable loss or EML
  • AKA loss elimination functions
53
Q

XL scales

A

Similar to a first loss scale except they give the proportion to be allocated to the excess layer rather than the primary layer

54
Q

ILFs

A

Applies to risks where there is no upper bound to the loss.

Choose a basic limit - usually a relatively low primary limit - and calculate the risk premium as if the insurer were to cap claims at that level.

Then construct a table of ILFs giving the ratio of the premium for higher limits to the basic limit premium.

55
Q

Liability XL rating using ILFs:
Assumptions made within a selected group of risks for casualty business

A
  • the ground-up loss frequency is independent of the limit purchased
  • the ground up severity is independent of the number of losses and of the limit purchased
56
Q

Increased limit factors (ILFs)

A
  • usually functions of monetary amounts (x)
  • represent the ratio of the loss cost for a primary limit x to the loss cost for the basic limit b

ILF at level x relative to the basic limit b is:
ILF(x)=LEV(x)/LEV(b)

LEV - Limited expected value
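
A sketch of this calculation in Python, assuming (purely for illustration) a lognormal ground-up severity distribution with known parameters, for which the limited expected value has a closed form:

import numpy as np
from scipy.stats import norm

mu, sigma = 9.0, 1.8   # assumed lognormal severity parameters (illustrative only)

def lev_lognormal(x, mu, sigma):
    """Limited expected value E[min(X, x)] for a lognormal severity X."""
    return (np.exp(mu + sigma**2 / 2) * norm.cdf((np.log(x) - mu - sigma**2) / sigma)
            + x * (1 - norm.cdf((np.log(x) - mu) / sigma)))

basic_limit = 100_000
for limit in [100_000, 250_000, 500_000, 1_000_000]:
    ilf = lev_lognormal(limit, mu, sigma) / lev_lognormal(basic_limit, mu, sigma)
    print(f"ILF({limit:>9,}) = {ilf:.3f}")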

57
Q

Sources of heterogeneity that are highly likely to alter the distribution of Y (relative loss severity)

A
  • differences in jurisdiction and claims environment
  • different sub-classes
  • different coverages
58
Q

Property XL rating using exposure curves (first loss curves):
Why does the exposure curve work with the relative loss size (Y) and not the original loss size (X) distribution?

A

If we used X directly, it would be more likely that the curve would depend on the size of the risks giving rise to the claims distribution X, and so we would need a different curve for each size of policy.

In some circumstances, Y can be considered independent of the size of the risk. Problems arise when the data is less homogeneous

59
Q

Property XL rating using exposure curves (first loss curves):
Effect of claims inflation

A

If the effect of claims inflation is uniform across all loss sizes and the sums insured are being adjusted for the trend then we require no adjustment to the exposure curve.

Where this isn’t the case, we need to adjust exposure curves by considering the relative effects of trend on different loss sizes by reworking the entire analysis

60
Q

Selecting appropriate tables of ILFs:
Considerations

A
  • select risk groupings such that the assumptions required are valid
  • jurisdiction and nature of coverage offered
  • treatment of ALAE in coverage offered
  • treatment of ULAE and loadings for risk
  • nature of limits offered
  • effects of trend and secular changes in claims environment
61
Q

Original loss curves:
Advantages

A
  • relatively simple to implement
  • relatively easy to explain to non-technical audience
  • loss costs obtained should be internally consistent
  • can be used where little or no credible loss data is available
62
Q

Original loss curves:
Disadvantages

A
  • application in practice is difficult, eg it is difficult to select and/or estimate appropriate curves
  • modelled loss cost to layers (esp high ones) can be extremely sensitive to the selected curve
63
Q

For a reinsurer, we will load for expenses using the same approach and techniques as the direct insurer. This includes allowance for:

A
  • commission (paid to cedant) and brokerage (paid to broker)
  • operational expenses
  • expenses associated with the administrative maintenance of the policy
  • cost of retrocessional protection
65
Q

For reinsurance, the level of brokerage and commission can vary by:

A
  • line of business
  • type of reinsurance
  • broker
  • territory from which the reinsurance placement is being driven
66
Q

Why do quota share reinsurances usually involve higher reinsurance premiums than excess of loss?

A

Because the cedant is passing a proportion of each and every premium to the reinsurer, whereas for XL reinsurance, the cedant only pays a premium to reflect the expected large claims that breach the retention of the cover.

67
Q

For reinsurers, the form of the risk loading used in practice can also vary between reinsurers and might be as follows:

A
  • based on profit target, perhaps expressed as a percentage of gross/net premiums
  • based on target loss or combined ratio
  • proportion of the standard deviation of expected loss cost to the contract
  • based on required return on capital
  • investment-equivalent pricing
  • based on the marginal impact on capital of writing the risk, with a load for the required rate of return on the additional capital needed to write that risk
68
Q

Property catastrophe business:
Derivation of risk premium

A
  • Very volatile
  • Little emphasis on historic claims data
  • Heavily dependent on proprietary catastrophe models
69
Q

Property catastrophe business:
Considerations when using proprietary catastrophe models for risk premium calculation

A
  • which models are viewed as more robust for which perils and in which geographical locations
  • how the assumptions behind the models differ and how often they are updated
  • how the input data requirements differ - this may have an impact on results
  • how the output differs
70
Q

The catastrophe model output would usually be a distribution of event losses, summarised in exceedance probability files. There are two bases for these files:

A
  • occurrence exceedance probability (OEP) files
  • aggregate exceedance probability (AEP) files

71
Q

OEPs

A

An occurrence exceedance probability file, which considers the probability that the largest individual event loss in a year exceeds a particular threshold

72
Q

AEPs

A

An aggregate exceedance probability file, which considers the probability that the aggregate losses from all loss events in a year exceeds a particular threshold
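
A sketch of how both probabilities could be read off a simulated year loss table (all figures invented):

import numpy as np

rng = np.random.default_rng(3)

# Hypothetical year loss table: for each simulated year, the individual event losses
n_years = 10_000
year_event_losses = [rng.lognormal(16.0, 1.0, size=rng.poisson(2)) for _ in range(n_years)]

largest_event = np.array([losses.max() if losses.size else 0.0 for losses in year_event_losses])
annual_total = np.array([losses.sum() for losses in year_event_losses])

threshold = 50e6
oep = (largest_event > threshold).mean()   # P(largest single event loss in a year exceeds the threshold)
aep = (annual_total > threshold).mean()    # P(aggregate annual loss exceeds the threshold)
print(f"OEP at {threshold:,.0f}: {oep:.3%}   AEP at {threshold:,.0f}: {aep:.3%}")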

73
Q

Problems with OEPs and AEPs

A

The OEP file may ignore the possibility of multiple events

For both the AEP and OEP file, it would be difficult to price XL reinsurance as we may not be told how many claims make up one aggregate loss.

74
Q

Property and liability per-risk non-proportional covers:
Experience rating

A

Two main approaches to assessing the cost of non-proportional reinsurance using the cedant’s loss experience:

  • a basic burning cost calculation
  • construct a stochastic frequency-severity model
75
Q

Property and liability per-risk non-proportional covers:
Derivation of the risk premium

A

Consists of blending assessments of the risk premium based on:

  • the cedant’s own historical loss experience
  • benchmarks for the appropriate line of business/territory applied to the cedant’s current risk profile using exposure curves (property) and ILFs (liability)
76
Q

Property and liability per-risk non-proportional covers:
Burning cost steps

A

The steps applied to the trended and developed individual losses are:

  1. apply the reinsurance terms to each of the trended and developed historical losses to calculate the reinsurance recovery on each loss
  2. aggregate the recoveries by underwriting year or accident year depending on whether the basis of reinsurance cover is risks-attaching or losses-occurring, respectively
  3. divide each year’s recoveries by the corresponding exposure measure to get a burning cost for each year, (where the exposure measure is also trended to today’s terms).
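
A sketch of these steps in Python for a hypothetical 1m xs 1m layer (loss amounts and premiums are invented and assumed already trended and developed):

excess, limit = 1_000_000, 1_000_000   # hypothetical layer: 1m xs 1m

losses_by_year = {
    2021: [1_400_000, 650_000, 2_600_000],
    2022: [900_000, 1_800_000],
    2023: [3_200_000, 1_100_000, 750_000],
}
trended_premium_by_year = {2021: 12_000_000, 2022: 12_500_000, 2023: 13_000_000}

for year, losses in losses_by_year.items():
    # step 1: recovery on each loss under the layer terms
    recoveries = [min(max(loss - excess, 0), limit) for loss in losses]
    # step 2: aggregate the recoveries for the year; step 3: divide by the trended exposure
    burning_cost = sum(recoveries) / trended_premium_by_year[year]
    print(year, f"burning cost: {burning_cost:.2%} of subject premium")
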
77
Q

Most common exposure measure in reinsurance pricing

A

Premium (earned or written premium according to the basis of cover), net of acquisition costs.

Adjust the historical premiums to be “as-if” they are based on rates for the contract year being priced

78
Q

Property and liability per-risk non-proportional covers:
Frequency-severity

A
  • apply trends to the individual losses
  • fit statistical distributions to the cedant’s historical loss data (both frequency and severity)
  • combine the frequency and severity distributions to produce a stochastic model for the cedant’s large losses
  • model the corresponding reinsurance recoveries
79
Q

Exposure rating

A

The main principle of exposure rating is to not use historic claims experience at all, but instead to base premium rates on the amount of risk (ie exposure) that policies bring to the portfolio

Use a benchmark to represent a market severity distribution for the line of business and territory being covered

80
Q

Property per-risk non-proportional covers:
Using exposure curves to calculate risk premium

A

If individual risk data is provided (gross premium and risk size in terms of sum insured, EML, etc.):

  1. based on the size of the risk, calculate the risk XL layer’s excess point and upper limit as percentages of the risk size. This allows us to apply the exposure curve.
  2. use the relevant exposure curve to assess the percentage of the gross risk premium attributable to the risk XL layer.
  3. estimate the ultimate loss ratio for each risk.
  4. multiply the estimated loss ratio by the gross premium to obtain the gross risk premium for the risk.
  5. multiply (2) and (4) to get the expected risk premium to the reinsurance layer.

This amount (5), the expected cost to the reinsurance layer of each risk, is then summed over all risks to get the total expected reinsurance risk premium.
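
A sketch of steps (1)-(5) for a single risk, using an invented placeholder in place of a published exposure curve:

# Illustrative placeholder exposure curve G(d): proportion of the ground-up loss cost
# arising below a limit of d x risk size (0 <= d <= 1). Not a published market curve.
def exposure_curve(d):
    return d ** 0.4

# Hypothetical risk and risk XL layer of 1m xs 1m
sum_insured = 5_000_000
gross_premium = 60_000
expected_loss_ratio = 0.65
excess, limit = 1_000_000, 1_000_000

# step 1: express the layer boundaries as fractions of the risk size
d_lower = excess / sum_insured
d_upper = min(excess + limit, sum_insured) / sum_insured

# step 2: proportion of the gross risk premium attributable to the layer
layer_share = exposure_curve(d_upper) - exposure_curve(d_lower)

# steps 3 and 4: gross risk premium for the risk
gross_risk_premium = expected_loss_ratio * gross_premium

# step 5: expected cost to the reinsurance layer from this risk
layer_risk_premium = layer_share * gross_risk_premium
print(f"Expected cost to the layer from this risk: {layer_risk_premium:,.0f}")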

81
Q

Property and liability proportional covers:
Derivation of risk premium

A

The aim is to determine a suitable reinsurance commission rate so that the expected outcome for the reinsurer is acceptable.

82
Q

Property and liability proportional covers:
Quota share

A

The process to determine a suitable commission rate is:

  • adjust claims for inflation and premiums for rate/exposure changes
  • use triangulations to get ultimate historic loss ratios
  • decide on an estimated loss ratio for the period in question
  • calculate a suitable commission, bearing in mind other outgo, e.g. expenses
  • use a stochastic model if there is a profit or sliding scale commission
83
Q

Property and liability proportional covers:
Surplus considerations

A

Similar to quota shares, but more complicated to assess.

The cession rates for risks of different sizes are different. This means the reinsurer’s loss ratio can be materially different to that of the cedant.

Reinsurer’s experience is dependent on the way in which the large losses are distributed.

84
Q

Property and liability proportional covers:
Surplus

A

Use the risk data to assess the likely distribution of cession rates.

Use cedant’s loss data/exposure rating to parameterise the cedant’s gross loss experience.

Each time a loss is generated from this distribution, depending on the size of the loss, we could use the distribution of limits and cession rates to select randomly a cession rate to apply to the loss and calculate the ceded loss

In practice, we assess the future ceded loss ratio using the historical loss ratios of the ceded business, projecting these when assessing a suitable commission

85
Q

Stop loss:
Derivation of risk premium

A

Similar method to XL reinsurance but excess point and limit can be expressed as a loss ratio rather than a monetary amount.

  • the catastrophe losses could come from a proprietary model
  • large losses could come from a frequency-severity model
  • attritional losses could be assessed using past historical attritional experience, suitably adjusted
86
Q

Stop loss:
Important considerations

A
  • meeting risk transfer criteria, i.e. any regulatory minimum transfer of risk
  • the particular terms of the stop loss in question
  • any inuring reinsurance
87
Q

Explain why it is more accurate to use a development pattern based on claims to the excess layer rather than one that applies to ground-up claims.

A

By definition, the excess layer will only consist of claims greater than the excess point. Larger claims may show different development patterns to claims in general.

However, care should be taken when using triangulated data of XL business that the excess limits have remained constant over each origin year

88
Q

Loss sensitive or swing rated premiums and the form that such a premium usually takes

A

Loss sensitive or swing rated premiums are a form of experience rating. These are premiums that depend, at least in part, on the actual claims experience of that risk in the period covered. They will usually be applied in the form of a deposit and adjustment premium.

89
Q

How might you decide which interaction terms to test for inclusion in a GLM?

A
  • fit every possible combination of pairs (or triplets) of factors and test each for statistical significance and reasonableness - very time-consuming and unlikely to be done in practice.
  • look at the structure of your existing rating algorithms and see which interactions can be included without the need for IT support, eg by checking which interaction rate tables already exist. There is little point in coming up with a highly sophisticated rating structure if it is too complicated to actually be implemented.
  • could use your experience of the product and the market in which it operates. For example, in private motor insurance it is commonplace to include an interaction between policyholder age and policyholder gender.