Chapter 19: Methods of calculating the risk premium Flashcards

1
Q

Burning cost

A

The ACTUAL COST OF CLAIMS during a past period of years
expressed as an annual rate per unit of exposure

2
Q

Burning cost premium (BCP) calculation

A

BCP = (ΣClaims) / Total exposed to risk
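A minimal sketch of this calculation in Python, using hypothetical annual claim totals and exposure figures:

```python
# Burning cost premium: total claims over the period divided by total exposure.
# Figures below are purely illustrative.
claims_by_year = [120_000, 95_000, 140_000, 110_000, 130_000]   # total claims per year
exposure_by_year = [1_000, 1_050, 1_100, 1_150, 1_200]          # units exposed per year

bcp = sum(claims_by_year) / sum(exposure_by_year)
print(f"Burning cost premium per unit of exposure: {bcp:.2f}")
```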

3
Q

Effective burning cost

A

The burning cost calculated using adjusted data.

(Claims should usually be increased to allow for past inflation and developed to include IBNR.)

4
Q

2 Criticisms of the burning cost premium applied to figures without adjustments

A
  • we IGNORE TRENDS such as claims inflation
  • by taking current exposure (often premiums) and comparing this with current undeveloped claims, we will UNDERSTATE THE ULTIMATE POSITION.
5
Q

BURNING COST:
3 Basic elements of the risk premium per unit of exposure

A
  • average COST PER CLAIM
  • CLAIM FREQUENCY per policy
  • average UNIT OF EXPOSURE per policy
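A minimal sketch (hypothetical figures) of how these three elements combine into a risk premium per unit of exposure:

```python
# Illustrative only: risk premium per unit of exposure from the three basic elements.
average_cost_per_claim = 2_400.0       # average cost per claim
claim_frequency_per_policy = 0.15      # expected claims per policy per year
average_exposure_per_policy = 1.2      # average units of exposure per policy

risk_premium_per_policy = claim_frequency_per_policy * average_cost_per_claim
risk_premium_per_unit = risk_premium_per_policy / average_exposure_per_policy
print(risk_premium_per_unit)   # 300.0
```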
6
Q

BURNING COST:
Data requirements for calculating risk premium

A

We need POLICY DATA to calculate the overall exposure or the split within each risk group.
We also need CLAIMS DATA.

7
Q

BURNING COST:
3 data items needed for each policy

A
  • DATES on cover
  • all RATING FACTOR and EXPOSURE MEASURE details
  • details of premiums charged
8
Q

4 Advantages of the burning cost approach

A
  • simplicity
  • needs relatively little data
  • quicker than other methods to perform
  • allows for the experience of individual risks or portfolios
9
Q

4 Disadvantages of a burning cost approach

A
  • harder to spot trends so it provides less understanding of changes impacting the individual risks
  • adjusting past data is hard
  • adjusting for changes in cover, deductibles and so on may be hard as we often lack individual claims data
  • it can be a very crude approach
10
Q

Frequency-severity approach

A

We assess the expected loss cost for a particular insurance structure by:
- estimating the distribution of expected CLAIM FREQUENCIES
- and DISTRIBUTION OF SEVERITIES for that structure

and combining the results.

11
Q

Key assumption of the frequency-severity approach

A

That the loss frequency and severity distributions are not correlated.

12
Q

4 causes of frequency trends

A

Changes in:
- accident frequency
- the propensity to make claims and other changes in the social and economic environment
- legislation
- the structure of the risk

13
Q

FREQUENCY-SEVERITY:
For each historical policy year, frequency of losses is calculated as

A

frequency = ultimate number of losses / exposure measure

14
Q

4 drivers of severity trends

A
  • economic inflation
  • changes in court awards and legislation
  • economic conditions
  • changes to the structure of the risk
15
Q

FREQUENCY-SEVERITY:
For each historical policy year, average severity of losses is calculated as,

A

average severity = ultimate cost of losses / ultimate number of losses

16
Q

8 Possible drivers of FREQUENCY trends for employers’ liability insurance

A
  • increasing compensation culture
  • prevalence of no-win-no-fee arrangements
  • growth of claims management companies
  • changes in health and safety regulations
  • court decisions
  • changes in economic conditions
  • emergence of latent claims
  • changes in policy terms, conditions, excesses, limits, etc
17
Q

7 Possible drivers of SEVERITY trends for employers’ liability insurance

A
  • salary inflation
  • court decisions / inflation
  • medical advances / medical inflation
  • inflation of legal costs
  • legislative changes
  • interest rate changes
  • changes in policy terms, conditions, excesses, limits, etc
18
Q

FREQUENCY-SEVERITY:
3 Methods to develop individual loss amounts to ultimate (for IBNER)

A
  • We could apply an incurred development factor to each individual loss
  • (More realistically) we could develop only open claims using “case estimate” development factors.
  • We could use stochastic development methods to allow for the variation that may occur in individual loss amounts around each of their expected values.
19
Q

Aggregate deductible

A

The maximum amount that the insured can retain within their deductible when all losses are aggregated.

20
Q

Non-ranking deductibles

A

The non-ranking component of a deductible does NOT contribute towards the insured’s aggregate deductible.

21
Q

Ranking deductible

A

The ranking component of a deductible does contribute towards an insured’s aggregate deductible.

22
Q

Trailing deductible

A

The amount that is retained by the insured for each individual loss once the aggregate deductible has been fully eroded.

23
Q

Per-occurrence limit

A

The maximum amount that the insurer will pay for each individual loss.

24
Q

Annual aggregate limit

A

The maximum amount that the insurer will pay when all losses for an annual policy period are aggregated.
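A simplified sketch of how the deductible and limit features defined in cards 19 to 24 can interact when applied to a set of ground-up losses. It assumes a fully ranking per-loss deductible, ignores non-ranking and trailing components, and uses hypothetical figures and names:

```python
# Simplified sketch: per-loss (fully ranking) deductible, annual aggregate
# deductible, per-occurrence limit and annual aggregate limit applied to
# ground-up losses. Figures are hypothetical.
def insurer_payments(losses, deductible, agg_deductible, occ_limit, agg_limit):
    agg_ded_remaining = agg_deductible   # insured's aggregate deductible left to erode
    agg_limit_remaining = agg_limit      # insurer's annual aggregate limit left to pay
    payments = []
    for loss in losses:
        # ranking deductible bites only while the aggregate deductible lasts
        retained = min(loss, deductible, agg_ded_remaining)
        agg_ded_remaining -= retained
        payment = min(loss - retained, occ_limit)      # per-occurrence limit
        payment = min(payment, agg_limit_remaining)    # annual aggregate limit
        agg_limit_remaining -= payment
        payments.append(payment)
    return payments

print(insurer_payments([50, 120, 300, 80], deductible=100,
                       agg_deductible=150, occ_limit=200, agg_limit=400))
# -> [0, 20, 200, 80]
```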

25
Q

FREQUENCY-SEVERITY:
2 Practical considerations of simulation modelling

A
  • Require more simulations if investigating the tails of the resulting loss distribution than if investigating the mean
  • Require more simulations if assessing an excess layer than if assessing the underlying primary layer.
26
Q

4 Advantages of the frequency-severity approach

A
  • Mirrors the underlying process - a number of losses are generated, each with its own ultimate value - and so is readily understood by underwriters.
  • We can use the approach for complex insurance structures, eg deductibles or limits.
  • By separately assessing information on loss frequency and severity, we gain additional insight into aggregate loss amounts.
  • It helps us identify trends. Trends in frequency and severity can be allowed for separately.
27
Q

2 Disadvantages of the frequency-severity approach

A
  • Assessing the compound frequency-severity loss distribution has more onerous data requirements than assessing aggregate amounts.
  • This approach can be time-consuming for a single risk and requires a high level of expertise.
28
Q

define “propensity”

A

A tendency towards a particular way of behaving.

29
Q

3 commonly used forms of original loss curves

A
  • First loss scales / exposure curves
  • XL scales
  • ILFs
30
Q

First loss scales / exposure curves

A

These curves give the proportion of the full premium allocated to primary layers where losses are limited at different values.

I.e. if the loss is capped at a certain value which is less than the value of the insured item, the exposure curve tells us the percentage of the full risk premium to charge.
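A small illustrative sketch, using a made-up exposure curve tabulated at a few points, of how such a curve allocates the full risk premium to a primary layer and to an excess layer:

```python
# Illustrative only: a made-up exposure curve tabulated as the proportion of the
# full risk premium for losses capped at each fraction of the sum insured.
exposure_curve = {0.0: 0.00, 0.2: 0.55, 0.4: 0.75, 0.6: 0.88, 0.8: 0.96, 1.0: 1.00}
full_risk_premium = 5_000

# primary layer: losses capped at 40% of the sum insured
primary_premium = full_risk_premium * exposure_curve[0.4]

# excess layer from 40% to 80% of the sum insured
excess_layer_premium = full_risk_premium * (exposure_curve[0.8] - exposure_curve[0.4])
print(primary_premium, excess_layer_premium)   # approximately 3750 and 1050
```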

31
Q

XL scales

A

Similar to a first loss scale except that they give the proportion to be allocated to the excess layer rather than the primary layer.

32
Q

ILFs

A

Applied to risks where there is no upper bound to the loss. In this case, it does not make sense to express the limits as a percentage of the loss.

Rather, we choose a “basic limit” (usually a relatively low primary limit) and calculate the risk premium as if the insurer were to cap claims at that level.
We then construct a table of multiplicative factors (ILFs) giving the ratio of the premium for higher limits to the basic limit premium.
We usually use this terminology and format for liability (casualty) business.
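A brief hypothetical example of applying ILFs: the premium for a higher limit is the basic-limit premium multiplied by the ILF for that limit (the basic limit, ILF values and premium below are invented):

```python
# Hypothetical example: premium for a higher limit = basic-limit premium x ILF.
basic_limit_premium = 10_000                                   # claims capped at the 1m basic limit
ilfs = {1_000_000: 1.00, 2_000_000: 1.35, 5_000_000: 1.80}     # invented ILF table

premium_5m = basic_limit_premium * ilfs[5_000_000]
print(premium_5m)   # 18000.0
```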

33
Q

Reinsurers’ expense analysis will include allowance for: (4)

A
  • commission (paid to the cedant) and brokerage (paid to the reinsurance broker)
  • operational expenses (eg general management expenses or the cost of HR)
  • expenses associated with the administrative maintenance of the policy.
  • the cost of the reinsurer’s own reinsurance (retrocessional) protections.
34
Q

The level of brokerage and commission can vary by: (4)

A
  • line of business
  • type of reinsurance
  • broker
  • territory from which the reinsurance placement is being driven
35
Q

Why do quota share reinsurances usually involve higher reinsurance premiums than excess of loss?

A

Because the cedant is passing a proportion of each and every premium to the reinsurer, whereas for excess of loss reinsurance, the cedant only pays a premium to reflect the expected large claims that breach the retention of the cover.

36
Q

The form of the risk loading for reinsurers might be based on: (6)

A
  • a profit target, perhaps expressed as a percentage of gross or net premiums
  • a target loss or combined ratio
  • a proportion of the standard deviation of the expected loss cost of the contract
  • a required return on capital
  • investment-equivalent pricing (the loading is designed so that the extra premium charged provides the extra return required by shareholders for the extra risk)
  • the marginal impact on capital of writing a risk, with a loading for the required rate of return on the additional capital required to write that risk.
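As a hedged illustration of just one of these forms (the required return on capital), with all figures hypothetical:

```python
# Hypothetical sketch: risk loading set as the target return on allocated capital.
risk_premium = 800_000
expenses = 150_000
allocated_capital = 1_000_000      # capital assumed to support the contract
required_return = 0.12             # target rate of return on that capital

risk_loading = required_return * allocated_capital
office_premium = risk_premium + expenses + risk_loading
print(office_premium)   # 1070000.0
```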
37
Q

When selecting a catastrophe model, we should consider (4)

A
  • which models are viewed as more robust for which perils and in which geographical locations.
  • how the assumptions behind the models differ and how often they are updated
  • how the input data requirements differ - this may have an impact on results
  • how the output differs - the primary outputs are files containing large numbers of simulated events along with the probability of that event happening and the expected loss for the set of risks in the input data.
38
Q

2 Sources of uncertainty

A
  • uncertainty about which events will happen
  • uncertainty about, for a given event, the exact amount of insured loss it will cause.
39
Q

OEPs

A

Occurrence exceedance probability files.
Considers the probability that the largest individual event loss in a year exceeds a particular threshold.

40
Q

AEPs

A

Aggregate exceedance probability files.
Considers the probability that the aggregate losses from all loss events in a year exceed a particular threshold.
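An illustrative sketch (not any vendor's file format) of the difference between the two measures, estimated from simulated years of event losses with hypothetical parameters:

```python
# Illustrative only: estimating occurrence and aggregate exceedance probabilities
# from simulated years of catastrophe event losses.
import numpy as np

rng = np.random.default_rng(0)
n_years = 10_000
# each simulated year: a Poisson number of events, each with a lognormal loss
years = [rng.lognormal(10.0, 1.0, size=rng.poisson(2)) for _ in range(n_years)]

threshold = 100_000
largest_loss = np.array([y.max() if y.size else 0.0 for y in years])
aggregate_loss = np.array([y.sum() for y in years])

oep = (largest_loss > threshold).mean()    # P(largest single event loss in a year > threshold)
aep = (aggregate_loss > threshold).mean()  # P(total of all event losses in a year > threshold)
print(oep, aep)
```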

41
Q

2 problems with OEPs and/or AEPs

A
  • The OEP file may ignore the possibility of multiple events
  • For both OEPs and AEPs, it would be difficult to price risk excess of loss reinsurance, as we may not be told how many claims make up one aggregate loss.
42
Q

Reinsurance risk premiums:
Burning cost method

A

The steps applied to the trended and developed individual losses are:

  1. Apply the reinsurance terms to each of the trended and developed historical losses to calculate the reinsurance recovery on each loss.
  2. Aggregate the recoveries by underwriting year or accident year depending on whether the basis of reinsurance cover is risks-attaching or losses-occurring, respectively.
  3. Divide each year’s recoveries by the corresponding exposure measure to get a burning cost for each year.
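A minimal sketch of these three steps in Python, using hypothetical trended and developed losses, layer terms and premium exposure:

```python
# Illustrative only: burning cost of an excess-of-loss layer from hypothetical
# trended and developed losses and premium exposure (net of acquisition costs).
from collections import defaultdict

excess, limit = 100_000, 400_000   # layer: 400,000 xs 100,000
losses = [                         # (year, trended and developed loss amount)
    (2021, 250_000), (2021, 80_000), (2022, 600_000), (2023, 150_000),
]
exposure = {2021: 2_000_000, 2022: 2_200_000, 2023: 2_400_000}

recoveries = defaultdict(float)
for year, loss in losses:
    recoveries[year] += min(max(loss - excess, 0), limit)   # steps 1 and 2

burning_cost = {year: recoveries[year] / exposure[year] for year in exposure}  # step 3
print(burning_cost)
```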
43
Q

The most common exposure measure in reinsurance

A

Premium, net of acquisition costs.

44
Q

Increased limit factors

A

They give the ratio of the premium for higher limits to the premium for the basic limit.

45
Q

4 steps to using a frequency-severity simulation model to estimate reinsurance risk premium under different reinsurance structures using information from the company’s database.

A

1: Determine the claim frequency and severity distributions
2: Determine exposures for the coming years
3: Simulation for a specific reinsurance structure
4: Repeat step 3 for different reinsurance structures

46
Q

Discuss Step 1: Determining the claim frequency and severity distributions

A
  • Check data for COMPLETENESS and correct any obvious DATA ANOMALIES
  • Pick a base period to use, e.g. the last 5 years
  • If the claims data are affected by the insured’s retention or by policy limits, the number (and amount) of claims below the retention and above the policy limit needs to be estimated.
  • Use standard reserving techniques to calculate the number of IBNR claims and their cost.
  • All claims from past years need to be developed to ultimate and treated “as-if” they occurred in the following period.
  • All decisions regarding large and catastrophe claims need to be made. Large and catastrophe claims are normally modelled separately.
  • Fit frequency and severity distributions to the losses
  • Apply statistical tests to determine goodness of fit.
  • If there is a sufficient volume of losses, consider fitting a number of different severity distributions to different parts of the overall loss range.
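A hedged sketch of the distribution-fitting and goodness-of-fit part of this step, using scipy, with simulated losses standing in for the cleaned and developed claims data:

```python
# Illustrative only: fit a Poisson frequency and a lognormal severity distribution
# and test the severity fit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
claims_per_year = rng.poisson(20, size=5)                 # base period of 5 years
losses = rng.lognormal(mean=9.0, sigma=1.2, size=500)     # developed individual claim amounts

poisson_mean = claims_per_year.mean()                     # frequency parameter
shape, loc, scale = stats.lognorm.fit(losses, floc=0)     # severity parameters

# Kolmogorov-Smirnov goodness-of-fit test for the fitted severity distribution
ks_stat, p_value = stats.kstest(losses, 'lognorm', args=(shape, loc, scale))
print(poisson_mean, ks_stat, p_value)
```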
47
Q

Discuss Step 3: Simulation for a specific reinsurance structure

A
  • Simulate claims experience in each year based on exposure for each year.
  • Re-run simulation a number of times. Each simulation will produce its own estimate of the number of claims and corresponding set of claim amounts.
  • For each simulation, apply excesses, limits and deductibles to determine the total reinsurance recovery.
  • The average reinsurance recovery over all simulations in a particular year plus a loading for catastrophe and large claims would give an estimate for the reinsurance risk premium.
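A hedged sketch of this simulation step with hypothetical Poisson frequency and lognormal severity parameters and an illustrative excess-of-loss layer:

```python
# Illustrative only: simulate one year's claims (Poisson frequency, lognormal
# severity), apply an excess-of-loss layer and average the recoveries over all
# simulations. A separate loading for large and catastrophe claims would still
# be needed.
import numpy as np

rng = np.random.default_rng(1)
n_sims = 10_000
expected_claims = 25                   # frequency for next year's exposure
mu, sigma = 9.0, 1.5                   # lognormal severity parameters
excess, layer_limit = 50_000, 200_000  # reinsurance layer: 200,000 xs 50,000

recoveries = np.empty(n_sims)
for i in range(n_sims):
    losses = rng.lognormal(mu, sigma, size=rng.poisson(expected_claims))
    recoveries[i] = np.clip(losses - excess, 0, layer_limit).sum()

print("Estimated reinsurance risk premium:", recoveries.mean())
```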
48
Q

3 Common statistical tests to determine goodness of fit

A
  • Chi-Squared Test
  • Kolmogorov-Smirnov Statistic
  • Anderson-Darling Statistic
49
Q

2 Common frequency distributions

A
  • Poisson
  • Negative Binomial
50
Q

4 Common severity distributions

A
  • LogNormal
  • Weibull
  • Pareto
  • Gamma
51
Q

Original loss curves

A

Original loss curves are an exposure based rating method.

Original loss curves (exposure curves or ILF curves) are used to estimate the cost to the layer based on the exposure and premium information provided by the cedant rather than the actual cost and past exposure.

In particular, we commonly use original loss curves in excess of loss insurance pricing to infer pricing for layers at which the data are too sparse to derive a credible experience rate.

For example, we might use them in place of a burning cost approach when calculating the risk premium net of a layer of reinsurance with a high excess point, or even when calculating the risk premium for the reinsurance layer itself.

52
Q

Exposure rating

A

The main principle of exposure rating is NOT TO USE HISTORIC CLAIMS EXPERIENCE at all.
Instead, premium rates are based on the amount of risk (ie exposure) that policies bring to the portfolio.

In exposure rating, we use a benchmark to represent a market severity distribution for the line of business and territory being covered.
The benchmark may even be directly derived from the market severity distribution.

53
Q

Define Loss sensitive (or swing-rated) premiums

A

Loss sensitive premiums are a form of experience rating.
These are premiums that depend, at least in part, on the actual claims experience of that risk in the period covered.

They will usually be applied in the form of a deposit and adjustment premium.

54
Q

Techniques used to analyse the significance of the factors used in GLMs

A
  • deviance
  • scaled deviance
  • chi-squared statistics
  • F statistics
  • Akaike Information Criterion (AIC)
    are statistics that can be calculated to assess the improvement of fit when adding additional parameters.
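A hedged sketch, on simulated data with hypothetical rating factors, of comparing nested Poisson GLMs using the change in deviance and the AIC (via statsmodels):

```python
# Illustrative only: comparing nested Poisson GLMs for claim counts using the
# change in deviance and the AIC. Data and rating factors below are simulated.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
n = 1_000
age = rng.integers(18, 80, n)
vehicle_group = rng.integers(1, 5, n)
claim_counts = rng.poisson(np.exp(-2.0 + 0.01 * age))

X1 = sm.add_constant(np.column_stack([age]))                  # age only
X2 = sm.add_constant(np.column_stack([age, vehicle_group]))   # age + vehicle group

fit1 = sm.GLM(claim_counts, X1, family=sm.families.Poisson()).fit()
fit2 = sm.GLM(claim_counts, X2, family=sm.families.Poisson()).fit()

# for nested models, the drop in deviance is roughly chi-squared (1 extra parameter)
p_value = stats.chi2.sf(fit1.deviance - fit2.deviance, 1)
print(p_value, fit1.aic, fit2.aic)   # a small p-value / lower AIC favours the extra factor
```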
55
Q

Characteristics of the claims under Personal Accident Cover?

A
  • Claims are usually reported quickly, as the incidence of an event is usually very clear. However, with accidental death claims the insured’s dependants may not always know that the policy exists and may discover their entitlement after an extended period, resulting in a reporting delay.
  • The claims may be SETTLED QUICKLY, although if a claim is for permanent total disability it may be necessary to wait several months or years for a claimant’s CONDITION TO STABILISE.
  • The claim frequency tends to be reasonably stable.
  • Claims can be large: cover of millions of rands per person is not uncommon.
  • Benefits are fixed in monetary amount, and not exposed to inflation.
  • The currency of the liabilities will be local.

56
Q

Suggest matching assets for the claims under Personal Accident Cover?

A

CASH / MONEY MARKET INSTRUMENTS:
- these are highly liquid and can be used to pay claims and other expenses.
- Capital values are stable, making them suitable for short-tailed claims, so there is no risk that assets will be sold at depressed market values.

SHORT-DATED (< 3-year) BONDS:
- These also tend to be very liquid, and can be easily sold if claims need to be settled.
- They provide a fixed return, providing a good match to fixed benefits.
- They can be used to match claims of a slightly longer tail.
- They should provide a slightly higher expected return compared to cash.

57
Q

3 Ways in which the investments for a well-established large company could differ from those for a small newly-established company.

A

Premium growth will be much more certain for a bigger company. Hence there is a greater ability to rely on cash from future premiums and more scope to mismatch by term, so the resulting duration of assets may be longer for the larger company.
This should lead to a higher proportion of longer-dated bonds.

If shareholder funds have been invested in property or equity, the larger company will have a greater ability to invest directly. A smaller insurer is more likely to use indirect investment vehicles.

A new company is likely to utilise excess free assets to fund business growth rather than be available as a buffer for mismatch risks. A large well-established company may have less need to fund business growth. Even with excess free assets, a new company may decide not to mismatch, while a large company may be more likely to mismatch in order to diversify and maximise investment returns.