Section A Flashcards
When is the least squares method appropriate?
When random year-to-year fluctuations in loss experience are significant
Give 3 possible ways to manage the reserves if losses come in higher than expected
- Reduce the reserve by the additional amount (budgeted loss method; $Cov(X,Y) < Var(X)$)
- Don't change the reserve (Bornhuetter-Ferguson; $Cov(X,Y) = Var(X)$)
- Increase the reserve (chain ladder; $Cov(X,Y) > Var(X)$)
Explain why it's difficult to compute Q(x) and we use L(x) instead
$Q(x) = E[Y|X = x]$ depends on the underlying loss and reporting distributions, which are rarely known; this is why it is difficult to compute. The best linear approximation L(x) is used instead because it is:
- Easier to compute
- Easier to understand and explain
- Less dependent on the underlying distribution
Give the formulas used in Brosius
LS: $L(x) = a + bx$, where $b = \dfrac{\overline{xy} - \bar{x}\bar{y}}{\overline{x^2} - \bar{x}^2}$ and $a = \bar{y} - b\bar{x}$, OR
LS (credibility form): $L(x) = Z\frac{x}{d} + (1 - Z)\bar{y}$, where $Z = b/c$, $c = \bar{y}/\bar{x}$ and $d = 1/c$ (expected % reported)
CL: $x/d$ (LS with $a = 0$)
BL: $\bar{y}$ (LS with $b = 0$)
BF: $x + qU_0$, OR
BF: $a + x$ (LS with $b = 1$)
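A minimal numeric sketch of the two equivalent Brosius forms on assumed toy data (x = losses reported to date, y = developed ultimates for prior AYs):

```python
# Assumed toy data, for illustration only
x = [25, 32, 28, 40, 35]   # reported to date, prior AYs
y = [50, 60, 52, 75, 63]   # developed ultimates, prior AYs

n = len(x)
x_bar = sum(x) / n
y_bar = sum(y) / n
xy_bar = sum(xi * yi for xi, yi in zip(x, y)) / n
x2_bar = sum(xi * xi for xi in x) / n

b = (xy_bar - x_bar * y_bar) / (x2_bar - x_bar ** 2)  # slope
a = y_bar - b * x_bar                                  # intercept

c = y_bar / x_bar   # implied development factor
d = 1 / c           # expected % reported
Z = b / c           # credibility of the link ratio estimate

x_new = 30          # current AY, reported to date
print(a + b * x_new)                    # least squares form
print(Z * x_new / d + (1 - Z) * y_bar)  # credibility form, same number
```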
Is the Benktander method superior to BF and CL?
- Lower MSE than BF and CL in most cases (whenever $p \le 2c^*$, $c^*$ being the optimal credibility weight)
- Better approximation to the exact Bayesian procedure
- Superior to CL because it gives more weight to the a priori expectation of ultimate losses
- Superior to BF because it gives more weight to actual loss experience
Formula for Benktander
$U_{GB} = X + qU_{BF}$, where $U_{BF} = X + qU_0$
Express Benktander as a credibility weighting system
$U_{GB} = pU_{CL} + qU_{BF}$
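A quick numeric check, on assumed inputs, that the iterated-BF and credibility forms agree:

```python
# Assumed inputs for illustration
U0 = 1000.0   # a priori expected ultimate
X = 550.0     # losses reported to date
p = 0.60      # expected % reported
q = 1 - p

U_BF = X + q * U0     # Bornhuetter-Ferguson
U_GB = X + q * U_BF   # Benktander: BF with U_BF as the new prior
U_CL = X / p          # chain ladder

# Same answer as the credibility weighting p*U_CL + q*U_BF
assert abs(U_GB - (p * U_CL + q * U_BF)) < 1e-9
print(U_BF, U_GB)     # 950.0, 930.0
```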
What is the credible loss ratio claims reserve?
A credibility weighting of the individual (CL-type) and collective (BF-type) reserves, with the % reported based on loss ratios rather than link ratios:
$R = ZR_{ind} + (1 - Z)R_{coll}$
Express the Z for Neuhaus, Benktander and optimal credibility
$Z_{NW} = p \cdot ELR$; $Z_{GB} = p$ (note: this weights ultimate claim amounts rather than reserves); $Z_{opt} = \dfrac{p}{p + \sqrt{p}}$
How to calculate R ind and R coll
$R_{ind} = \dfrac{C}{p} \cdot q$; $R_{coll} = (EP \cdot m) \cdot q$
How to get ELR (m) and respective p’s?
Calculate the expected incremental loss ratio $m_k$ for each development period (sum of the incremental claims in column $k$ / sum of the corresponding EP); the ELR is the sum of all the $m_k$.
To get each AY's $p$, divide the sum of the $m_k$ over its observed development periods by the ELR.
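Putting the last few cards together, a sketch on an assumed 3-AY incremental triangle, using the optimal Z from the card above:

```python
# Assumed incremental paid triangle (rows = AYs, None = unobserved) and premiums
S = [[80, 40, 10],
     [90, 50, None],
     [70, None, None]]
V = [200, 220, 190]
n = len(S)

# m_k: expected incremental loss ratio per development period
m = []
for k in range(n):
    rows = [i for i in range(n) if S[i][k] is not None]
    m.append(sum(S[i][k] for i in rows) / sum(V[i] for i in rows))

ELR = sum(m)
for i in range(n):
    obs = n - i                             # observed periods for AY i
    p = sum(m[:obs]) / ELR                  # loss-ratio payout factor
    q = 1 - p
    C = sum(S[i][k] for k in range(obs))    # paid to date
    R_ind = C * q / p                       # individual (CL-like) reserve
    R_coll = V[i] * ELR * q                 # collective (BF-like) reserve
    Z = p / (p + p ** 0.5)                  # optimal credibility weight
    print(f"AY {i}: R = {Z * R_ind + (1 - Z) * R_coll:.1f}")
```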
What is the Z that minimizes MSE (R)?
$Z^* = \dfrac{p}{q} \cdot \dfrac{Cov(C, R) + pq \, Var(U_{BC})}{Var(C) + p^2 \, Var(U_{BC})}$, where $U_{BC}$ is the burning cost (a priori) estimate
What is the MSE formula
$mse(R_i(Z)) = E[\alpha_i^2(U_i)] \left( \dfrac{Z^2}{p_i} + \dfrac{1}{q_i} + \dfrac{(1-Z)^2}{t_i} \right) q_i^2$
Characteristics of Hürlimann's method
- Based on the full development triangle (rather than only the latest AY)
- Requires a measure of exposure by AY (as in Cape Cod)
- Relies on loss ratios (rather than link ratios)
- Credibility weighting between two extreme positions (the individual and collective reserves)
Clark's assumptions
- Incremental losses are iid (one period does not affect the surrounding periods; the emergence pattern is the same for all AYs)
- The variance of incremental losses is proportional to their expected value, and the variance/mean ratio is fixed and known
- Variance estimates are based on an approximation to the Rao-Cramer lower bound
Formula for residual
$r = \dfrac{c_i - \mu_i}{\sqrt{\sigma^2 \mu_i}}$
Formula for sigma^2
$\sigma^2 = \dfrac{1}{n - p} \sum \dfrac{(c_i - \mu_i)^2}{\mu_i}$
where $n$ = number of points in the triangle and $p$ = number of parameters (LDF method: number of AYs + 2; CC: 3)
How to test for assumptions using normalized residuals
- Against $\mu_i$ (expected incremental losses): tests that the variance/mean ratio is constant
- Against age: tests that the growth curve is appropriate for all AYs
- Against CY: tests that there are no CY effects (one diagonal does not affect the others)
Give two growth functions
Weibull: $G(x) = 1 - e^{-(x/\theta)^{\omega}}$
Loglogistic: $G(x) = \dfrac{x^{\omega}}{x^{\omega} + \theta^{\omega}}$
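Both curves as code ($\theta$ = scale, $\omega$ = shape; the parameter values below are illustrative only):

```python
from math import exp

def weibull_G(x, theta, omega):
    """Cumulative % of ultimate emerged by age x (Weibull)."""
    return 1 - exp(-((x / theta) ** omega))

def loglogistic_G(x, theta, omega):
    """Cumulative % of ultimate emerged by age x (loglogistic)."""
    return x ** omega / (x ** omega + theta ** omega)

# Assumed parameters, for illustration
print(weibull_G(24, theta=20.0, omega=1.3))
print(loglogistic_G(24, theta=22.0, omega=1.5))
```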
Ultimate loss estimate - Clark's method
CC: $EP \times ELR$, where $ELR = \sum \text{losses} / \sum \text{used-up premium}$ and used-up premium $= EP \times G(x)$
LDF: $\text{paid to date} / G(x)$
LDF, truncated: $\text{paid to date} / G'(x)$, where $G'(x) = G(x)/G(TP)$ and $TP$ is the truncation point
Reserve estimate - Clark's method
CC: $EP \times ELR \times (1 - G(x))$
CC, truncated: $EP \times ELR \times (G(TP) - G(x))$
LDF: $\text{paid to date} \times (1/G(x) - 1)$
LDF, truncated: $\text{paid to date} \times (1/G'(x) - 1)$, where $G'(x) = G(x)/G(TP)$
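The four reserve formulas on assumed inputs:

```python
# All inputs assumed, for illustration
G_x  = 0.70     # growth function at the AY's current age
G_TP = 0.95     # growth function at the truncation point
paid = 700.0    # paid to date
EP   = 1500.0   # earned premium
ELR  = 0.65     # expected loss ratio from the used-up premium fit

R_cc        = EP * ELR * (1 - G_x)        # Cape Cod
R_cc_trunc  = EP * ELR * (G_TP - G_x)     # Cape Cod, truncated
R_ldf       = paid * (1 / G_x - 1)        # LDF
G_prime     = G_x / G_TP                  # truncated growth function
R_ldf_trunc = paid * (1 / G_prime - 1)    # LDF, truncated
print(R_cc, R_cc_trunc, R_ldf, R_ldf_trunc)
```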
How to choose the best set of parameters for your data
Maximize the loglikelihood over the triangle: $\ell = \sum_i \left[ c_i \ln(\mu_i) - \mu_i \right]$
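A sketch of that fit, assuming a loglogistic curve and, to keep it short, fixed AY ultimates (the LDF method would estimate the ultimates jointly with $\theta$ and $\omega$):

```python
import numpy as np
from scipy.optimize import minimize

# Assumed toy increments: each entry is one cell of the triangle
ages_from = np.array([0, 12, 24, 0, 12, 0])     # start age of the increment
ages_to   = np.array([12, 24, 36, 12, 24, 12])  # end age of the increment
ay        = np.array([0, 0, 0, 1, 1, 2])        # accident year index
c         = np.array([400., 250., 80., 420., 260., 390.])  # incremental losses
ult       = np.array([900., 950., 1000.])       # assumed ultimates, held fixed

def G(x, theta, omega):
    """Loglogistic growth function (G(0) = 0)."""
    return x ** omega / (x ** omega + theta ** omega)

def negloglik(params):
    theta, omega = np.abs(params)   # keep the search in the positive quadrant
    mu = ult[ay] * (G(ages_to, theta, omega) - G(ages_from, theta, omega))
    return -np.sum(c * np.log(mu) - mu)

fit = minimize(negloglik, x0=[18.0, 1.2], method="Nelder-Mead")
print(np.abs(fit.x))   # fitted (theta, omega)
```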
Data advantages of using Clark's growth function
- Works with data that is not at the same maturity as prior years
- Works with data for only the last few diagonals
- Naturally extrapolates past the end of the triangle
- Naturally interpolates between the ages in the analysis
Advantages of using parameterized curves to describe the loss emergence patterns
- Only 2 parameters to estimate
- Can use data from triangles with different dates
- Final pattern is smooth
Advantages of using the ODP distribution to model the actual loss emergence
- Inclusion of the scaling factor allows matching the 1st and 2nd moments of any distribution, which means high flexibility
- MLE produces the LDF or CC ultimate loss estimate which means the results can be presented in a familiar format
Total variance
Sum of the process variance and the parameter variance:
$(R \times \sigma^2) + \text{parameter variance}$
What do the two types of variance represent?
Process: random variation in the actual loss emergence; this is the piece you calculate, as $\sigma^2 \times R$
Parameter: uncertainty in the estimator; this piece is typically given; it is always lower for CC than for LDF (fewer parameters, and CC incorporates information about the exposure base)
What are Mack’s key assumptions
- Expected incremental losses are proportional to losses reported to date
- Losses in one AY are independent of losses in the other AYs
- Variance of expected losses is proportional to losses reported to date
What are the implications of the three assumptions by Mack
- Adjacent LDFs should be uncorrelated
- There is no CY effect
- The volume-weighted average LDF is the optimal (minimum variance) estimator
Describe the three tests for the three assumptions by Mack
- Test of correlation between adjacent LDFs: compute the Spearman rank correlation $t_k = 1 - \frac{6\sum d^2}{n(n^2-1)}$ for each pair of adjacent factor columns, combine the $t_k$ into a weighted average $T$, and check whether $T$ falls within $0 \pm 0.67\sqrt{Var(T)}$, where $Var(T) = \frac{1}{(I-2)(I-3)/2}$. Can also plot $C_{k+1}$ against $C_k$ to see whether the pattern is linear
- Test for the CY effect: check whether $Z$ falls within $E[Z] \pm 2\sqrt{Var(Z)}$. Can also look at the residuals vs the CY (diagonals)
- Look at residuals vs previous cumulative losses (tests that expected incremental losses are proportional to losses reported to date)
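A sketch of the first test on an assumed 4x4 cumulative triangle (a real triangle needs more usable factor columns for the variance formula to be meaningful):

```python
# Assumed cumulative triangle (rows = AYs, None = unobserved)
C = [[100, 150, 165, 170],
     [110, 168, 185, None],
     [ 95, 140, None, None],
     [105, None, None, None]]
n = len(C)

def ranks(v):
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

T_ks, weights = [], []
for k in range(n - 2):  # adjacent factor columns: k->k+1 vs k+1->k+2
    rows = [i for i in range(n) if C[i][k + 2] is not None]
    if len(rows) < 2:
        continue
    f1 = [C[i][k + 1] / C[i][k] for i in rows]
    f2 = [C[i][k + 2] / C[i][k + 1] for i in rows]
    r, s = ranks(f1), ranks(f2)
    m = len(rows)
    t_k = 1 - 6 * sum((ri - si) ** 2 for ri, si in zip(r, s)) / (m * (m * m - 1))
    T_ks.append(t_k)
    weights.append(m - 1)

T = sum(w * t for w, t in zip(weights, T_ks)) / sum(weights)
var_T = 1 / ((n - 2) * (n - 3) / 2)
print(T, 0.67 * var_T ** 0.5)   # flag correlation if |T| exceeds the bound
```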
Formula for residuals at 36 months - Mack
- If the variance is proportional to 1: the LDF chosen is weighted by previous cumulative losses squared; $r_i = (C_{36} - C_{24}f)/1$
- If the variance is proportional to $C_{24}$ (the CL assumption): the LDF chosen is the volume-weighted average; $r_i = (C_{36} - C_{24}f)/\sqrt{C_{24}}$
- If the variance is proportional to $C_{24}^2$: the LDF chosen is the simple average; $r_i = (C_{36} - C_{24}f)/C_{24}$
How to calculate alpha^2 by age
$\hat{\alpha}_k^2 = \dfrac{1}{(\text{nb of LDFs}) - 1} \sum_i C_{i,k} \left( f_{i,k} - \hat{f}_k \right)^2$, where $f_{i,k}$ are the individual LDFs and $\hat{f}_k$ the chosen LDF
How to calculate MSE by cell
- Project all future cells
- Sum the observed cumulative losses in each column above the diagonal into A, B, C, D, ...
- $mse_k = \hat{U}_i^2 \cdot \dfrac{\hat{\alpha}_k^2}{\hat{f}_k^2} \left( \dfrac{1}{C_{i,k}} + \dfrac{1}{\text{column sum above the diagonal (A, B, C, ...)}} \right)$
- Sum these terms over all remaining ages of an AY to get the MSE of that AY's reserve
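The whole recipe for the latest AY of an assumed 4x4 triangle, under the CL variance assumption (volume-weighted LDFs):

```python
# Assumed cumulative triangle (rows = AYs, None = unobserved)
C = [[100, 150, 165, 170],
     [110, 168, 185, None],
     [ 95, 140, None, None],
     [105, None, None, None]]
n = len(C)

# Volume-weighted LDFs and alpha^2 by age
f, alpha2 = [], []
for k in range(n - 1):
    rows = [i for i in range(n) if C[i][k + 1] is not None]
    fk = sum(C[i][k + 1] for i in rows) / sum(C[i][k] for i in rows)
    f.append(fk)
    if len(rows) > 1:
        a2 = sum(C[i][k] * (C[i][k + 1] / C[i][k] - fk) ** 2
                 for i in rows) / (len(rows) - 1)
    else:  # Mack's approximation for the last age
        a2 = min(alpha2[-1] ** 2 / alpha2[-2], alpha2[-2], alpha2[-1])
    alpha2.append(a2)

# MSE for the latest AY (i = n - 1), term by term
i = n - 1
Ck, mse = C[i][0], 0.0
for k in range(n - 1):
    col_sum = sum(C[j][k] for j in range(n - 1 - k))  # the "A, B, C, ..." sums
    mse += (alpha2[k] / f[k] ** 2) * (1 / Ck + 1 / col_sum)
    Ck *= f[k]                                        # roll the AY forward
mse *= Ck ** 2                                        # times ultimate^2
print(Ck, mse ** 0.5)                                 # ultimate and its s.e.
```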
How to estimate a confidence interval for a reserve and its distribution
Assume a lognormal distribution:
$\sigma^2 = \ln(1 + MSE/R^2)$
$CI = R \exp(-\sigma^2/2 \pm z\sigma)$
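On assumed numbers:

```python
from math import exp, log, sqrt

R, mse = 2000.0, 250.0 ** 2          # reserve estimate and its MSE (assumed)
sigma2 = log(1 + mse / R ** 2)
sigma = sqrt(sigma2)
z = 1.645                             # 90% interval
lo = R * exp(-sigma2 / 2 - z * sigma)
hi = R * exp(-sigma2 / 2 + z * sigma)
print(lo, hi)
```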
Why we don’t estimate reserve distributions with normal
Because the normal distribution is not skewed enough and could produce a negative lower bound for the reserve
Describe 6 testable implications of Mack’s CL assumptions - venter
- Significance of age-to-age factors ($b > 2\sigma_b$, i.e., the factor exceeds twice its standard deviation)
- Superiority to alternative emergence patterns (compare adjusted SSE, AIC and BIC)
- Linearity (residuals vs previous cumulative claims)
- Stability (residuals vs AY, LDFs against year, state-space model)
- Correlation of development factors (check whether $|T|$ is below the t critical value with $n - 2$ degrees of freedom at the chosen % level, where $T = r\sqrt{\dfrac{n-2}{1-r^2}}$ and $r$ = the sample correlation)
- Additive CY effects (use regression to determine whether any diagonal dummy variable is significant)
Adjusted SSE
$SSE/(n - p)^2$ ($n$ excludes the first-column cells)
AIC
$SSE \cdot e^{2p/n}$ ($n$ excludes the first-column cells)
BIC
$SSE \cdot n^{p/n}$ ($n$ excludes the first-column cells)
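The three measures side by side, with $n$ and $p$ as defined above (numbers assumed):

```python
from math import exp

def adjusted_sse(sse, n, p):
    return sse / (n - p) ** 2

def aic_approx(sse, n, p):
    return sse * exp(2 * p / n)

def bic_approx(sse, n, p):
    return sse * n ** (p / n)

# Example: CL (p = m - 1) vs BF (p = 2m - 2) for m = 6 AYs, n = 15 cells
print(adjusted_sse(5200.0, 15, 5), adjusted_sse(4900.0, 15, 10))
print(aic_approx(5200.0, 15, 5), aic_approx(4900.0, 15, 10))
print(bic_approx(5200.0, 15, 5), bic_approx(4900.0, 15, 10))
```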
Number of parameters (CL,BF,CC)
CL: $m - 1$; CC: $m$ (m = number of AYs)
BF: $2m - 2$
Recursion formula if variance is constant over the triangle
$f(d) = \dfrac{\sum_w h(w)^2 \left[ q(w,d)/h(w) \right]}{\sum_w h(w)^2}$; $h(w) = \dfrac{\sum_d f(d)^2 \left[ q(w,d)/f(d) \right]}{\sum_d f(d)^2}$
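A sketch of this alternating recursion on an assumed 3x3 incremental triangle ($f$ and $h$ are only identified up to a common scale):

```python
# Assumed incremental triangle (rows = AY w, cols = lag d, None = unobserved)
q = [[50, 30, 10],
     [55, 35, None],
     [60, None, None]]
n = len(q)

h = [sum(x for x in row if x is not None) for row in q]  # starting values
f = [1.0] * n

for _ in range(20):  # alternate the two updates until they converge
    for d in range(n):
        rows = [w for w in range(n) if q[w][d] is not None]
        f[d] = sum(h[w] * q[w][d] for w in rows) / sum(h[w] ** 2 for w in rows)
    for w in range(n):
        cols = [d for d in range(n) if q[w][d] is not None]
        h[w] = sum(f[d] * q[w][d] for d in cols) / sum(f[d] ** 2 for d in cols)

print([round(x, 3) for x in f], [round(x, 1) for x in h])
```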
Recursion formula if variance is proportional to f(d)h(w)
$f(d)^2 = \dfrac{\sum_w h(w) \left[ q(w,d)/h(w) \right]^2}{\sum_w h(w)}$; $h(w)^2 = \dfrac{\sum_d f(d) \left[ q(w,d)/f(d) \right]^2}{\sum_d f(d)}$
Formula if we add inflation (g(w+d))
$q(w,d) = h \, (1+i)^d (1+j)^{w+d} (1+k)^w$
where $j$ is the CY inflation (the $g(w+d)$ term), $i$ the age parameter and $k$ the AY effect on $h$
Describe the complete procedure - Sahasrabuddhe
- Build the triangle of trend indices
- Fit the claim size distribution (its parameter $\theta$) at the latest AY's cost level
- Trend $\theta$ to all other ages and years
- Compute $LEV(X)$ and $LEV(B)$ at the different AYs and ages
- Adjust the triangle to the basic limit and common cost level: $C'_{ij} = C_{ij} \cdot LEV(B; nj)/LEV(X; ij)$
- Calculate cumulative LDFs at the base level
- Calculate adjusted LDFs at any other layer: $LDF(X; ij) = LDF(B; j) \cdot \dfrac{LEV(X; in)/LEV(B; nn)}{LEV(X; ij)/LEV(B; nj)}$
- Compute the reserve with the adjusted LDFs
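A toy walk-through of the two adjustment formulas, assuming an exponential claim size model ($LEV(d) = \theta(1 - e^{-d/\theta})$) and a hypothetical $\theta$ grid; everything below is illustrative, not the paper's example:

```python
from math import exp

def lev(d, theta):
    """Limited expected value of an exponential severity at limit d."""
    return theta * (1 - exp(-d / theta))

B, X = 100_000, 500_000   # basic limit and layer of interest (assumed)
n = 3                     # index of the latest AY

# Hypothetical theta by (AY i, age j), as produced by the trend indices
theta = {(i, j): 20_000 * 1.05 ** i * (0.6 + 0.1 * j)
         for i in range(n + 1) for j in range(n + 1)}

# Adjust a cell to the basic limit at the latest AY's cost level
i, j = 1, 0
C_ij = 250_000.0          # unadjusted losses in cell (i, j), assumed
C_adj = C_ij * lev(B, theta[(n, j)]) / lev(X, theta[(i, j)])

# Convert a basic-limit cumulative LDF back to layer X for AY i
ldf_B = 2.4               # cumulative basic-limit LDF at age j, assumed
ldf_X = (ldf_B * (lev(X, theta[(i, n)]) / lev(B, theta[(n, n)]))
               / (lev(X, theta[(i, j)]) / lev(B, theta[(n, j)])))
print(C_adj, ldf_X)
```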
3 problems with current application of trend rates
- Don't vary by claims layer
- CY trend is often not considered
- Don't vary between AYs
2 requirements for claim size models
- Have parameters that can be adjusted for inflation
- Have easily computable means and LEVs
Sahasrabuddhe’s key finding
Development factors at different cost levels and different layers are related to each other based on claim size models and trends
Five assumptions that must be met in order to implement the reserving procedure by Sahasrabuddhe
- Choose basic limit
- Use of claim size model
- Triangle of trend indices
- Have a triangle adjusted to basic limit and common cost level
- Claim size models at prior maturities
Explain the other way to get an LDF at any other layer: $LDF(X; ij) = LDF(B; j) \cdot \dfrac{LEV(X; in)/LEV(B; in)}{R_{ij}}$
We don't always have a claim size distribution at ages other than ultimate, so we use the ratio $R_{ij}$ instead. $R_{ij}$ starts close to 1 at early ages, since few losses have been capped yet; at ultimate it reaches $U = LEV(X)/LEV(B)$.
Identify 5 advantages of a high deductible program
- Price flexibility
- Reduced residual market loadings and premium taxes
- Allows for self-insurance without all of the associated requirements
- Incentives for loss control while still protecting against very bad losses
- Gives cashflow advantage to the insured
Explain why we need to index limits for inflation when calculating development factors for various deductibles and two methods to do it
To keep the ratio of deductible (limited) losses to excess losses constant over time. Two methods: fit a line to average severities over a long-term history, or use an index that reflects the movement in annual severity changes
Explain what is a distributional model to estimate reserves for an excess layer
Fits the development process with severity distributions whose parameters vary over time (by age). Once the parameters are determined, calculate the limited severity relativities at each age; comparing those across ages gives the LDFs. The parameters can be fit by the method of moments, MLE, or Siewert's approach (minimize the chi-square distance between actual and expected relativities around a particular deductible size)
Why development for losses in excess of aggregate limits decrease more rapidly over time with smaller deductibles than with larger ones
Aggregate limits only cover losses under the deductible. Since most of the later development occurs in the layers above the deductible, excess of aggregate losses reach their ultimate value sooner with smaller deductibles
Loss ratio method (2+,1-)
+ Works when no data is available
+ The LR can be tied to pricing
- Does not use actual experience
$XS = P \times ELR \times \left[ \text{XS ratio} + (1 - \text{XS ratio}) \times \text{agg charge} \right]$
Implied development (2+,1-)
+ Produces IBNR at early maturities even when no XS reported yet
+ LDFs are more stable than XSLDF
- Does not explicitly recognize excess loss development
$IBNR_{XS} = (\text{unlim reported} \times LDF_{unlim} - \text{lim reported} \times LDF_{lim}) - \text{reported XS}$
Direct development (1+,2-)
+ Explicitly recognize excess loss development
- XSLDF tend to be overly leveraged and extremely volatile
- Cannot be used when no XS losses are reported yet
$IBNR = \text{reported XS} \times (XSLDF - 1)$
Credibility weighting method (2+,1-)
+ Produces stable results over time
+ Gives us the ability to tie into pricing estimates for recent years where excess losses have not emerged
- Does not use actual experience in the IBNR estimation
$ULT_{XS} = \text{reported XS} + (1 - 1/XSLDF) \times ULT_{XS}^{LR}$, where $ULT_{XS}^{LR}$ comes from the loss ratio method; this is a credibility weighting with $Z = 1/XSLDF$
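The four methods side by side on assumed inputs:

```python
# All inputs assumed, for illustration
P, ELR = 10_000.0, 0.70
xs_ratio, agg_charge = 0.15, 0.05
unlim_rep, lim_rep, xs_rep = 4_000.0, 3_600.0, 400.0
unlim_ldf, lim_ldf, xs_ldf = 1.60, 1.45, 3.00

# Loss ratio method: ultimate excess from pricing parameters
ult_xs_lr = P * ELR * (xs_ratio + (1 - xs_ratio) * agg_charge)

# Implied development: excess IBNR as unlimited minus limited development
ibnr_implied = (unlim_rep * unlim_ldf - lim_rep * lim_ldf) - xs_rep

# Direct development of reported excess losses
ibnr_direct = xs_rep * (xs_ldf - 1)

# Credibility weighting with Z = 1/XSLDF
ult_cred = xs_rep + (1 - 1 / xs_ldf) * ult_xs_lr
print(ult_xs_lr, ibnr_implied, ibnr_direct, ult_cred)
```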
Advantages of distributional models
+ Provides consistent LDFs
+ Allows for interpolation among limits and years
$LDF = R(L) \times LDF_{lim} + (1 - R(L)) \times XSLDF$
$XSLDF_{t \to t+y} = \dfrac{1 - R_{t+y}(L)}{1 - R_t(L)} \times LDF_{t \to t+y}$
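A small check, on assumed relativities, that the two formulas are consistent:

```python
# R(t) = ratio of limited to unlimited losses at age t (assumed values)
R_t, R_ty = 0.95, 0.88
ldf = 1.50                 # unlimited LDF from t to t + y (assumed)

xs_ldf = (1 - R_ty) / (1 - R_t) * ldf    # excess-layer LDF
lim_ldf = R_ty / R_t * ldf               # limited-layer LDF

# The unlimited LDF is recovered as the R(t)-weighted average of the two
assert abs(ldf - (R_t * lim_ldf + (1 - R_t) * xs_ldf)) < 1e-9
print(xs_ldf, lim_ldf)
```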