Chapter 19: Methods of calculating the risk premium Flashcards
Burning cost
The actual cost of claims during a past period of years expressed as an annual rate per unit of exposure.
Uses a simple model, based entirely on historical data.
Burning cost premium (BCP) calculation
BCP = (ΣClaims)/Total Exposed to Risk
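A minimal sketch of this calculation, using hypothetical aggregate claims and exposure figures:

```python
# Minimal burning cost sketch (illustrative figures only).
# BCP = total claims / total exposed to risk over the experience period.

claims_by_year = {2019: 1_250_000, 2020: 980_000, 2021: 1_410_000}   # aggregate incurred claims
exposure_by_year = {2019: 10_500, 2020: 9_800, 2021: 11_200}         # eg vehicle-years

bcp = sum(claims_by_year.values()) / sum(exposure_by_year.values())
print(f"Burning cost premium per unit of exposure: {bcp:.2f}")
```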
Effective burning cost
The burning cost calculated using unadjusted data.
Indexed burning cost
The burning cost calculated using adjusted data. Claims are usually adjusted to allow for past inflation and IBNR.
Why has the burning cost premium been criticised when applied to current figures without adjustments?
- we ignore trends such as claims inflation
- by taking current exposure (often premiums) and comparing it with current undeveloped claims, we will understate the ultimate position, so loss ratios turn out higher than expected
Burning cost approach:
Basic elements of the risk premium per unit of exposure
- expected claim frequency per policy
- average exposure per policy
- expected cost per claim
Burning cost approach:
Pure risk premium
[(Expected claim frequency per policy) / (Average exposure per policy)] × (Expected cost per claim)
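A minimal sketch combining the three elements above, with hypothetical values:

```python
# Pure risk premium per unit of exposure (illustrative values only).
expected_claim_freq_per_policy = 0.15   # claims per policy per year
average_exposure_per_policy = 1.2       # units of exposure per policy
expected_cost_per_claim = 2_400.0

risk_premium_per_unit = (expected_claim_freq_per_policy / average_exposure_per_policy
                         ) * expected_cost_per_claim
print(f"Pure risk premium per unit of exposure: {risk_premium_per_unit:.2f}")
```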
Burning cost approach:
Information needed for each policy
- dates on cover
- all rating factors and exposure measure details
- details of premiums charged, unless they can be calculated by reference to the details on rating factors and exposure.
Burning cost approach:
When do we usually use this method?
- where little individual claims data are available
- where aggregate claims data by policy year are available
Burning cost approach:
Advantages
- simplicity
- needs relatively little data
- quicker than other methods to perform
- allows for experience of individual risks or portfolios
Burning cost approach:
Disadvantages
- harder to spot trends so it provides less understanding of changes impacting the individual risks
- adjusting past data is hard
- adjusting for changes in cover, deductibles and so on may be hard as we often lack individual claims data
- it can be a very crude approach depending on what adjustments are made
Frequency-severity approach
We assess the expected loss for a particular insurance structure by estimating the distribution of expected claims frequencies and distribution of severities for that structure and combining the results.
Key assumption of the frequency-severity approach
The loss frequency and severity distributions are not correlated
Causes of frequency trends
Changes in:
- accident frequency
- the propensity to make a claim and other changes in the social and economic environment
- legislation
- the structure of the risk
Frequency-severity approach:
For each historical policy year, the frequency of losses is calculated as:
frequency =
(ultimate number of losses)/(exposure measure)
Frequency-severity approach:
A standard trend applied to the frequency is based on:
- an analysis of all the risks within an insurer’s portfolio
- external information, such as industry surveys
Drivers of severity trends
- economic inflation
- changes in court awards and legislation
- economic conditions
- changes to the structure of the risk
“from the ground up”
“From the ground up” claims data shows all claims, no matter how small, at their original claim amounts. It is often used in reinsurance to refer to data that shows all claims, even though reinsurance is only required for large claims.
Frequency-severity approach:
For each historical policy year, the average severity of losses is calculated as:
average severity =
(ultimate cost of losses)/(ultimate number of losses)
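A sketch of both the frequency and severity calculations for a few hypothetical policy years, assuming loss counts and costs have already been developed to ultimate:

```python
# Per-policy-year frequency and average severity (hypothetical ultimates).

history = [
    # (policy_year, ultimate_number_of_losses, ultimate_cost_of_losses, exposure)
    (2019, 312, 742_000.0, 10_500),
    (2020, 287, 701_000.0,  9_800),
    (2021, 335, 829_000.0, 11_200),
]

for year, n_ult, cost_ult, exposure in history:
    frequency = n_ult / exposure    # ultimate losses per unit of exposure
    severity = cost_ult / n_ult     # average ultimate cost per loss
    print(f"{year}: frequency = {frequency:.4f}, average severity = {severity:,.0f}")
```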
Possible drivers of frequency trends for employer’s liability insurance
- increasing compensation culture
- prevalence of no-win-no-fee arrangements
- growth of claims management companies
- changes in health and safety regulations
- court decisions
- changes in economic conditions
- emergence of latent claims
- changes in policy terms, conditions, excesses, limits, etc.
Possible drivers of severity trends for employer’s liability insurance
- salary inflation
- court decisions/inflation
- medical advances/medical inflation
- inflation of legal costs
- legislative changes
- interest rate changes
- changes in policy terms, conditions, excesses, limits, etc.
Methods used to develop individual losses for IBNER
- apply an incurred development factor to each individual loss (open and closed claims), reflecting its maturity, to estimate its ultimate settlement value
- (a more realistic approach) develop only the open claims, using “case estimate” development factors. These case estimate factors will be higher than the incurred development factors at the same maturity, to offset the effect of not developing closed claims
- use stochastic development methods to allow for the variation that may occur in individual ultimate loss amounts around each of their expected values
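A sketch of the second method above (developing only open claims using “case estimate” development factors); the factors and claims below are hypothetical:

```python
# Develop individual losses for IBNER: open claims are developed using
# "case estimate" factors, closed claims are taken at their settled values.

case_estimate_dev_factors = {1: 1.60, 2: 1.35, 3: 1.15, 4: 1.05, 5: 1.00}  # by maturity (years)

claims = [
    # (incurred_amount, maturity_in_years, is_open)
    (50_000.0, 1, True),
    (120_000.0, 3, True),
    (80_000.0, 4, False),   # closed claim: no further development
]

developed = [
    amount * case_estimate_dev_factors[maturity] if is_open else amount
    for amount, maturity, is_open in claims
]
print(developed)
```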
Aggregate deductible
The maximum amount that the insured can retain within their deductible when losses are aggregated
Non-ranking deductible
The non-ranking component of the deductible does not contribute towards the aggregate deductible
Ranking deductible
The ranking component of a deductible does contribute towards an insured’s aggregate deductible
Trading deductible
The amount that is retained by the insured for each individual loss once the aggregate deductible has been fully eroded
Per occurrence limit
The maximum amount the insurer can retain for each individual loss
Annual aggregation limit
The maximum amount the insurer can retain when all losses for an annual policy period are aggregated
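A sketch of one possible interpretation of how these features interact when losses are run through the structure in order; all amounts are hypothetical, and the actual policy wording (eg the order in which ranking and non-ranking amounts apply, or partial erosion of the aggregate within a single loss) should drive the real logic:

```python
# Apply a deductible/limit structure to a list of individual losses
# (one simplified interpretation; not a definitive implementation).

def apply_structure(losses,
                    per_loss_deductible=100_000.0,
                    non_ranking_part=20_000.0,      # does not erode the aggregate deductible
                    aggregate_deductible=250_000.0,
                    trading_deductible=10_000.0,    # retained per loss once the aggregate is eroded
                    per_occurrence_limit=1_000_000.0,
                    annual_aggregate_limit=3_000_000.0):
    ranking_eroded = 0.0        # ranking retentions accumulated against the aggregate deductible
    insurer_paid_total = 0.0
    insurer_payments = []

    for loss in losses:
        if ranking_eroded < aggregate_deductible:
            retained = min(loss, per_loss_deductible)
            # only the ranking component of the retention erodes the aggregate deductible
            ranking_eroded += max(retained - non_ranking_part, 0.0)
        else:
            retained = min(loss, trading_deductible)

        insurer_share = min(max(loss - retained, 0.0), per_occurrence_limit)
        # respect the annual aggregate limit on the insurer's total payments
        insurer_share = min(insurer_share, annual_aggregate_limit - insurer_paid_total)
        insurer_paid_total += insurer_share
        insurer_payments.append(insurer_share)

    return insurer_payments

print(apply_structure([150_000.0, 90_000.0, 400_000.0, 2_500_000.0, 60_000.0]))
```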
Frequency-severity approach:
Loss distributions often used
Frequency: Poisson, Negative Binomial
Severity: LogNormal, Weibull, Pareto, Gamma
Frequency-severity approach:
Common underlying fitting algorithms (methods)
- maximum likelihood estimation
- method of least squares
- method of moments
Frequency-severity approach:
Statistical goodness of fit tests usually used
- Chi-Squared statistic
- Kolmogorov-Smirnov statistic
- Anderson-Darling statistic
Practical considerations of simulation modelling
- we require more observations if investigating the tails of the resulting loss distribution than if investigating the mean
- we will require more simulations if assessing an excess layer than if assessing the underlying primary layer
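A sketch of a compound frequency-severity simulation with assumed Poisson/lognormal parameters, pricing a hypothetical 750k xs 250k layer. Tail statistics (eg the 99.5th percentile) converge far more slowly than the mean, which is why more simulations are needed for excess layers and tail estimates:

```python
# Monte Carlo frequency-severity simulation for an excess layer (assumed parameters).
import numpy as np

rng = np.random.default_rng(seed=1)
n_sims = 100_000

lam = 5.0                               # Poisson expected number of losses per year
mu, sigma = 9.0, 1.5                    # lognormal severity parameters (of log-losses)
excess, limit = 250_000.0, 750_000.0    # layer: 750k xs 250k

layer_costs = np.empty(n_sims)
for i in range(n_sims):
    n = rng.poisson(lam)
    losses = rng.lognormal(mu, sigma, size=n)
    layer_costs[i] = np.minimum(np.maximum(losses - excess, 0.0), limit).sum()

print("expected annual cost to layer:", layer_costs.mean())
print("99.5th percentile of annual layer cost:", np.quantile(layer_costs, 0.995))
```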
Frequency-severity approach:
Common data issues
- form of the data - loss information gross of reinsurance and from the ground up for all claims
- choice of base period to achieve the required quantity of data - typically 5 years, but more is desirable
Frequency-severity approach:
Advantages
- mirrors the underlying process - a number of losses are generated, each with its ultimate value - and so is readily understood by underwriters
- the approach can be used for complex insurance structures
- by separately assessing information on loss frequency and severity, we gain additional insights into aggregate loss amounts
- helps us identify trends. Trends for frequency and severity can be allowed for separately
Frequency-severity approach:
Disadvantages
- assessing the compound frequency-severity distribution has more onerous data requirements than assessing aggregate amounts
- the approach can be time-consuming for a single risk
- requires a high level of expertise
Generalised linear model (GLM)
- most common multivariate model
- allows for the effect of a number of predictor variables on a certain response variable to be modelled
- statistically speaking, the GLM generalises linear regression by allowing the linear model to be related to the response variable via a link function and by allowing the magnitude of the variance of each measure to be a function of its predicted value
Common forms of models:
Claim frequency
May be modelled by a Poisson process.
A log link function is normally used since this results in a multiplicative structure of factors, which has been found in practice to best reflect the relationship between the variables
Typical form: Poisson error function with a log link
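A minimal sketch of such a frequency model using statsmodels; the data frame, factor names and values below are made up for illustration:

```python
# Claim frequency GLM: Poisson error with a (default) log link, exposure used as an offset.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "claim_count": [0, 1, 0, 2, 0, 1, 0, 0, 1, 3],
    "exposure":    [1.0, 0.5, 1.0, 1.0, 0.8, 1.0, 0.3, 1.0, 1.0, 1.0],
    "age_band":    ["17-21", "22-29", "30-39", "17-21", "40+",
                    "22-29", "30-39", "40+", "17-21", "22-29"],
    "vehicle_grp": ["A", "B", "A", "C", "B", "C", "A", "B", "C", "C"],
})

model = smf.glm("claim_count ~ C(age_band) + C(vehicle_grp)",
                data=df,
                family=sm.families.Poisson(),
                exposure=df["exposure"]).fit()
print(model.summary())
# exp(coefficient) gives the multiplicative relativity for each factor level
```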
Common forms of models:
Claim severity
Typically claim amounts are modelled with a Gamma error term and a log link function, which assumes factors have a multiplicative effect on risk.
The Gamma distribution doesn’t allow zero responses, so zero-sized claims are removed from both the frequency and claim amounts models.
Propensity
A tendency towards a particular way of behaving
Propensity to claim modelling
Propensity to claim is binary in nature and is modelled using a binomial error distribution.
A multiplicative model is usually used, but we need to predict in the interval [0,1] rather than (0,∞), which is achieved using a logit link function
Relativities
Numbers that quantify the level of risk in one category compared to that in another. They do not describe the absolute level of risk.
Interaction term
Used when the pattern in the response variable is better modelled by including extra parameters for each combination of two or more factors. An interaction exists when the effect of one factor varies depending on the level of another factor.
GLM modelling considerations
- choosing the factors to include in the model
- analysis of significance of factors
- approaches to classification
- measuring uncertainty in the estimates of the model parameters
- comparisons with time
- consistency checks with other factors
- restrictions on the use of factors in the model
- correlation between predictor variables
- parameter smoothing
GLM:
Choosing the factors to include in the model
- as few parameters as possible should be used to find a satisfactory fit to the data (parsimony)
- one-way and two-way analysis of variance can identify factors that have influence on response variable - ensure all factors have enough exposure
GLM:
Techniques used to analyse statistical significance of factors used
- deviance
- scaled deviance
- chi-squared statistics
- F statistics
- Akaike Information Criterion (AIC)
GLM:
Approaches to classification
- spatial smoothing
- decision trees
- Chi-Squared Automatic Interaction Detector (CHAID)
GLM:
Comparisons with time
- fit model that includes interaction of single factor with a measure of time
- test whether effect of the factor varies depending on the measure of time
- determine whether effect of each factor is consistent year to year
Other types of multivariate models
- Minimum bias methods
- Generalised non-linear models - demand modelling
- Generalised additive models
Minimum bias method
Involves assessing the effect of one factor on a one-way basis and then assessing the effect of a second factor having standardised for the effect of the first factor.
This is iterated until the method has correctly assessed the true effect of each rating factor over and above the effect of all the correlated factors.
Drawbacks:
- lacks a proper statistical framework
- doesn’t provide helpful diagnostics that indicate whether the effect of a rating factor on experience is systematic and significant or not
- computationally less efficient than GLMs
Categories of risk to which exposure curves can be applied
- those risks where the loss is finite
- those risks where (theoretically) there is no limit
Most common forms of original loss curves
- first loss scales/exposure curves
- XL scales
- ILFs
First loss scales/exposure curves
- usually seen in property business
- give the proportion of the full premium allocated to primary layers where losses are limited at different values
- express limits as a fraction of sum insured, maximum probable loss or EML
- AKA loss elimination functions
XL scales
Similar to a first loss scale except they give the proportion to be allocated to the excess layer rather than the primary layer
ILFs
Applies to risks where there is no upper bound to the loss.
Choose a basic limit - usually a relatively low primary limit - and calculate the risk premium if the insurer were to cap claims at that level.
Then construct a table of ILFs giving the ratio of the premium for higher limits to the basic limit premium.
Liability XL rating using ILFs:
Assumptions made within a suitably selected group of risks for casualty business
- the ground-up loss frequency is independent of the limit purchased
- the ground up severity is independent of the number of losses and of the limit purchased
Increased limit factors (ILFs)
- usually functions of monetary amounts (x)
- represent the ratio of the loss cost for a primary limit x to the loss cost for the basic limit b
ILF at level x relative to the basic limit b is:
ILF(x)=LEV(x)/LEV(b)
LEV - Limited expected value
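A sketch of the ILF calculation assuming a lognormal ground-up severity distribution, computing LEV(x) = E[min(X, x)] by numerically integrating the survival function:

```python
# ILF(x) = LEV(x) / LEV(b) for an assumed severity distribution.
import numpy as np
from scipy import stats
from scipy.integrate import quad

severity = stats.lognorm(s=1.8, scale=np.exp(9.0))   # assumed ground-up severity distribution

def lev(x):
    # E[min(X, x)] equals the integral of the survival function from 0 to x
    return quad(lambda t: severity.sf(t), 0.0, x)[0]

basic_limit = 100_000.0
for limit in [100_000.0, 250_000.0, 500_000.0, 1_000_000.0]:
    print(f"ILF({limit:>9,.0f}) = {lev(limit) / lev(basic_limit):.3f}")
```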
Sources of heterogeneity that are highly likely to alter the distribution of Y (relative loss severity)
- differences in jurisdiction and claims environment
- different sub-classes
- different coverages
Property XL rating using exposure curves (first loss curves):
Why does the exposure curve work with the relative loss size (Y) and not the original loss size (X) distribution?
If we used X directly, it would be more likely that the curve would depend on the size of the risks giving rise to the claims distribution X, and so we would need a different curve for each size of policy.
In some circumstances, Y can be considered independent of the size of the risk. Problems arise when the data is less homogeneous.
Property XL rating using exposure curves (first loss curves):
Effect of claims inflation
If the effect of claims inflation is uniform across all loss sizes and the sums insured are being adjusted for the trend then we require no adjustment to the exposure curve.
Where this isn’t the case, we need to adjust exposure curves by considering the relative effects of trend on different loss sizes by reworking the entire analysis
Selecting appropriate tables of ILFs:
Considerations
- select risk groupings such that the assumptions required are valid
- jurisdiction and nature of coverage offered
- treatment of ALAE in coverage offered
- treatment of ULAE and loadings for risk
- nature of limits offered
- effects of trend and secular changes in claims environment
Original loss curves:
Advantages
- relatively simple to implement
- relatively easy to explain to non-technical audience
- loss costs obtained should be internally consistent
- can be used where little or no credible loss data is available
Original loss curves:
Disadvantages
- application in practice is difficult
- it is difficult to select and/or estimate appropriate curves; the modelled loss cost to layers (especially high ones) can be extremely sensitive to the selected curve
For a reinsurer, we will load for expenses using the same approach and techniques as the direct insurer. This includes allowance for:
- commission (paid to cedant) and brokerage (paid to broker)
- operational expenses
- expenses associated with the administrative maintenance of the policy
- cost of retrocessional protection
For reinsurance, the level of brokerage and commission can vary by:
- line of business
- type of reinsurance
- broker
- territory from which the reinsurance placement is being driven
Why do quota share reinsurances usually involve higher reinsurance premiums than excess of loss?
Because the cedant is passing a proportion of each and every premium to the reinsurer, whereas for XL reinsurance, the cedant only pays a premium to reflect the expected large claims that breach the retention of the cover.
For reinsurers, the form of the risk loading use in practice can also vary between reinsurers and might be as follows:
- based on profit target, perhaps expressed as a percentage of gross/net premiums
- based on target loss or combined ratio
- proportion of the standard deviation of expected loss cost to the contract
- based on required return on capital
- investment-equivalent pricing
- marginal impact on capital of writing a risk and load for the required rate of return on the additional capital required to write that risk
Property catastrophe business:
Derivation of risk premium
- Very volatile
- Little emphasis on historic claims data
- Heavily dependent on proprietary catastrophe models
Property catastrophe business:
Considerations when using proprietary catastrophe models for risk premium calculation
- which models are viewed as more robust for which perils and in which geographical locations
- how the assumptions behind the models differ and how often they are updated
- how the input data requirements differ - this may have an impact on results
- how the output differs
The catastrophe model output would usually be the distribution of events. There are two bases for these files:
- OEPs
- AEPs
OEPs
An occurrence exceedance probability file, which considers the probability that the largest individual event loss in a year exceeds a particular threshold
AEPs
An aggregate exceedance probability file, which considers the probability that the aggregate loss from all loss events in a year exceeds a particular threshold
Problems with OEPs and AEPs
The OEP file may ignore the possibility of multiple events
For both the AEP and OEP file, it would be difficult to price XL reinsurance as we may not be told how many claims make up one aggregate loss.
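A sketch of how OEP and AEP probabilities could be read off a simulated year loss table; the event frequency and severity assumptions below are invented for illustration:

```python
# Build OEP/AEP exceedance probabilities from simulated annual event losses.
import numpy as np

rng = np.random.default_rng(seed=7)
n_years = 10_000

max_event_loss = np.zeros(n_years)      # drives the OEP
total_annual_loss = np.zeros(n_years)   # drives the AEP
for year in range(n_years):
    n_events = rng.poisson(0.8)                      # catastrophe events per year
    losses = rng.pareto(2.0, size=n_events) * 5e6    # event losses (assumed heavy tail)
    if n_events:
        max_event_loss[year] = losses.max()
        total_annual_loss[year] = losses.sum()

threshold = 50e6
oep = (max_event_loss > threshold).mean()      # P(largest single event loss > threshold)
aep = (total_annual_loss > threshold).mean()   # P(aggregate annual losses > threshold)
print(f"OEP at {threshold:,.0f}: {oep:.4f},  AEP: {aep:.4f}")
```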
Property and liability per-risk non-proportional covers:
Experience rating
Two main approaches to assessing the cost of non-proportional reinsurance using the cedant’s loss experience:
- a basic burning cost calculation
- construct a stochastic frequency-severity model
Property and liability per-risk non-proportional covers:
Derivation of the risk premium
Consist of blending an assessment of the risk premium based on:
- the cedant’s own historical loss experience
- benchmarks for the appropriate line of business/territory applied to the cedant’s current risk profile using exposure curves (property) and ILFs (liability)
Property and liability per-risk non-proportional covers:
Burning cost steps
The steps applied to the trended and developed individual losses are:
- apply the reinsurance terms to each of the trended and developed historical losses to calculate the reinsurance recovery on each loss
- aggregate the recoveries by underwriting year or accident year depending on whether the basis of reinsurance cover is risks-attaching or losses-occurring, respectively
- divide each year’s recoveries by the corresponding exposure measure to get a burning cost for each year, (where the exposure measure is also trended to today’s terms).
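A sketch of these steps for a hypothetical 2m xs 1m layer, assuming the individual losses and premiums have already been trended and developed:

```python
# Burning cost of an XL layer by year (hypothetical trended/developed figures).

excess, limit = 1_000_000.0, 2_000_000.0   # layer: 2m xs 1m

trended_losses_by_year = {
    2019: [1_400_000.0, 800_000.0, 3_600_000.0],
    2020: [2_100_000.0, 950_000.0],
    2021: [1_250_000.0, 5_000_000.0, 700_000.0],
}
trended_premium_by_year = {2019: 48_000_000.0, 2020: 51_000_000.0, 2021: 55_000_000.0}

for year, losses in trended_losses_by_year.items():
    recoveries = sum(min(max(loss - excess, 0.0), limit) for loss in losses)
    # burning cost expressed as a proportion of the (trended) subject premium
    burning_cost = recoveries / trended_premium_by_year[year]
    print(f"{year}: recoveries = {recoveries:,.0f}, burning cost = {burning_cost:.4%}")
```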
Most common exposure measure in reinsurance pricing
Premium (earned or written premium according to the basis of cover), net of acquisition costs.
Adjust the historical premiums to be “as-if” they are based on rates for the contract year being priced
Property and liability per-risk non-proportional covers:
Frequency-severity
- apply trends to the individual losses
- fit statistical distributions to the cedant’s historical loss data (both frequency and severity)
- combine the frequency and severity distributions to produce a stochastic model for the cedant’s large losses
- model the corresponding reinsurance recoveries
Exposure rating
The main principle of exposure rating is not to use historical claims experience at all, but instead to base premium rates on the amount of risk (ie exposure) that policies bring to the portfolio
Use a benchmark to represent a market severity distribution for the line of business and territory being covered
Property per-risk non-proportional covers:
Using exposure curves to calculate risk premium
If individual risk data is provided (gross premium and risk size in terms of sum insured, EML, etc.):
1. based on the size of the risk, calculate the risk XL deductible and excess point as a percentage of the risk size - this allows us to apply the exposure curve
2. use the relevant exposure curve to assess the percentage of the gross risk premium attributable to the risk XL layer
3. estimate the ultimate loss ratio for each risk
4. multiply the estimated loss ratio by the gross premium to obtain the gross risk premium for the risk
5. multiply (2) and (4) to get the expected risk premium to the reinsurance layer
This amount (5), the expected cost to the reinsurance layer of each risk, is then summed over all risks to get the total expected reinsurance risk premium.
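A sketch of these steps under an assumed, purely illustrative exposure curve G(d); a real analysis would use an appropriate market curve for the class (eg a Swiss Re / MBBEFD-type curve):

```python
# Exposure rating a risk XL layer from a (hypothetical) risk profile.
import math

def exposure_curve(d, c=5.0):
    # illustrative concave first-loss curve on [0, 1]; not a market curve
    d = min(max(d, 0.0), 1.0)
    return math.log(1.0 + c * d) / math.log(1.0 + c)

excess, layer_limit = 2_000_000.0, 3_000_000.0    # risk XL layer: 3m xs 2m

risk_profile = [
    # (sum_insured_or_EML, gross_premium, estimated_ultimate_loss_ratio)
    (5_000_000.0, 60_000.0, 0.65),
    (8_000_000.0, 85_000.0, 0.60),
    (2_500_000.0, 30_000.0, 0.70),
]

layer_risk_premium = 0.0
for size, gross_premium, loss_ratio in risk_profile:
    lower = excess / size                                          # step 1
    upper = (excess + layer_limit) / size
    layer_share = exposure_curve(upper) - exposure_curve(lower)    # step 2
    gross_risk_premium = loss_ratio * gross_premium                # steps 3-4
    layer_risk_premium += layer_share * gross_risk_premium         # step 5
print(f"Total expected reinsurance risk premium: {layer_risk_premium:,.0f}")
```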
Property and liability proportional covers:
Derivation of risk premium
The aim is to determine a suitable reinsurance commission rate so that the expected outcome for the reinsurer is acceptable.
Property and liability proportional covers:
Quota share
The process to determine a suitable commission rate is:
- adjust claims for inflation and premiums for rate/exposure changes
- use triangulations to get ultimate historic loss ratios
- decide on an estimated loss ratio for the period in question
- calculate a suitable commission, bearing in mind other outgo, e.g. expenses
- use a stochastic model if there is a profit or sliding scale commission
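A sketch of backing out the maximum supportable ceding commission for a hypothetical reinsurer target combined ratio, given an estimated ultimate loss ratio:

```python
# Quota share: commission that leaves the reinsurer at its target combined ratio
# (all figures and the target are hypothetical).

expected_ultimate_loss_ratio = 0.68    # from adjusted/triangulated historical experience
reinsurer_expense_ratio = 0.05         # reinsurer's own expenses as % of ceded premium
target_combined_ratio = 0.95           # combined ratio the reinsurer is willing to accept

# ceded loss ratio + commission + reinsurer expenses = target combined ratio
max_commission = target_combined_ratio - expected_ultimate_loss_ratio - reinsurer_expense_ratio
print(f"Maximum supportable ceding commission: {max_commission:.1%}")
```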
Property and liability proportional covers:
Surplus considerations
Similar to quota shares, but more complicated to assess.
Cession rates for risks of different sizes are different. This means the reinsurer’s loss ratio can be materially different to that of the cedant.
Reinsurer’s experience is dependent on the way in which the large losses are distributed.
Property and liability proportional covers:
Surplus
Use the risk data to assess the likely distribution of cession rates.
Use cedant’s loss data/exposure rating to parameterise the cedant’s gross loss experience.
Each time a loss is generated from this distribution, depending on the size of the loss, we could use the distribution of limits and cession rates to select randomly a cession rate to apply to the loss and calculate the ceded loss
In practice, we assess the future ceded loss ratio using the historical loss ratios of the ceded business and project these forward when assessing a suitable commission.
Stop loss:
Derivation of risk premium
Similar method to XL reinsurance but excess point and limit can be expressed as a loss ratio rather than a monetary amount.
- the catastrophe losses could come from a proprietary model
- large losses could come from a frequency-severity model
- attritional losses could be assessed using past historical attritional experience, suitably adjusted
Stop loss:
Important considerations
- meeting risk transfer criteria, ie any regulatory minimum transfer of risk
- the particular terms of the stop loss in question
- any inuring reinsurance
Explain why it is more accurate to use development pattern based on claims to the excess layer rather than one that applies to the ground up claims.
By definition, the excess layer will only consist of claims greater than the excess point. Larger claims may show different development patterns to claims in general.
However, care should be taken when using triangulated data of XL business that the excess limits have remained constant over each origin year
Loss sensitive or swing rated premiums and the form that such a premium usually takes
Loss sensitive or swing rated premiums are a form of experience rating. These are premiums that depend, at least in part, on the actual claims experience of that risk in the period covered. They will usually be applied in the form of a deposit and adjustment premium.
How might you decide which interaction terms to test for inclusion in a GLM?
- test every possible combination of pairs (or triplets) of factors and test each for statistical significance and reasonableness - very time-consuming and unlikely to be done in practice.
- look at the structure of your existing rating algorithms and see which interactions can be included without the need for IT support, eg by checking which interaction rate tables already exist. There is little point in coming up with a highly sophisticated rating structure if it is too complicated to actually be implemented.
- could use your experience of the product and the market in which it operates. For example, in private motor insurance it is commonplace to include an interaction between policyholder age and policyholder gender.