Section A Flashcards

1
Q

When is the least squares method appropriate?

A

When random year-to-year fluctuations in loss experience are significant

2
Q

Give 3 possible ways to adjust the reserve if reported losses come in higher than expected

A
  1. Reduce the reserve by the additional amount reported (budgeted loss method, Cov < Var)
  2. Don’t change the reserve (Bornhuetter-Ferguson, Cov = Var)
  3. Increase the reserve (chain ladder / link ratio method, Cov > Var)
3
Q

Explain why it’s difficult to compute Q(x) and why we use L(x) instead

A

L(x), the best linear approximation to Q(x), is preferred because it is:

  • Easier to compute
  • Easier to understand and explain
  • Less dependent on the underlying distribution (this dependence on the full distribution is what makes Q(x) difficult to compute)
4
Q

Give the formulas used in Brosius

A

LS: a + b*X, where b = (avg(XY) - avg(X)*avg(Y)) / (avg(X^2) - avg(X)^2) and a = avg(Y) - b*avg(X), OR
LS: Z*X/d + (1 - Z)*avg(Y), where Z = b/c and c = avg(Y)/avg(X)
CL: X/d (LS with a = 0)
BL: avg(Y) (LS with b = 0)
BF: X + q*U, OR
BF: a + X (LS with b = 1)
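
A minimal sketch of the fit above, assuming hypothetical prior-AY pairs (x = reported to date, y = ultimate); it also checks that the credibility form reproduces the least squares estimate:

```python
# Hypothetical prior-AY data: x = losses reported to date, y = ultimate losses
x = [40, 55, 35, 60, 50]
y = [100, 120, 90, 130, 110]

n = len(x)
x_bar = sum(x) / n
y_bar = sum(y) / n
xy_bar = sum(xi * yi for xi, yi in zip(x, y)) / n
x2_bar = sum(xi * xi for xi in x) / n

# Least squares slope and intercept: L(x) = a + b*x
b = (xy_bar - x_bar * y_bar) / (x2_bar - x_bar ** 2)
a = y_bar - b * x_bar

# Equivalent credibility form: L(x) = Z*(x/d) + (1 - Z)*y_bar
c = y_bar / x_bar          # chain ladder (link ratio) factor
d = 1 / c                  # expected fraction reported
Z = b / c

x_new = 45                 # current AY reported to date (hypothetical)
print(a + b * x_new)                       # least squares estimate
print(Z * x_new / d + (1 - Z) * y_bar)     # same value via the credibility form
```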

5
Q

Why is the Benktander method superior to BF and CL?

A

Lower MSE (provided p lies within 2c*, where c* is the optimal credibility factor)
Better approximation to the exact Bayesian procedure
Superior to CL because it gives more weight to the a priori expectation of ultimate losses
Superior to BF because it gives more weight to actual loss experience

6
Q

Formula for Benktander

A
U(GB) = X + q*U(BF)
U(BF) = X + q*U(0)
7
Q

Express Benktander as a credibility weighting system

A

U(GB) = p*U(CL) + q*U(BF)
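
A quick numeric check of the last two cards, with hypothetical values for X (reported), p (% reported) and U(0) (a priori ultimate):

```python
# Hypothetical inputs
X = 500.0      # reported losses to date
p = 0.50       # expected % reported
U0 = 1200.0    # a priori expected ultimate
q = 1 - p

U_CL = X / p               # chain ladder ultimate
U_BF = X + q * U0          # Bornhuetter-Ferguson ultimate
U_GB = X + q * U_BF        # Benktander (iterated BF) ultimate

# Credibility form: U(GB) = p*U(CL) + q*U(BF)
print(U_GB, p * U_CL + q * U_BF)   # both print 1050.0
```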

8
Q

What is the credible loss ratio claims reserve?

A

A credibility weighting of a CL-like and a BF-like reserve, with the % reported calculated differently (from loss ratios).
R = Z*R(ind) + (1 - Z)*R(coll)

9
Q

Express the Z for Neuhaus, Benktander and optimal credibility

A
Z(NW) = p*ELR
Z(GB) = p (note: here the credibility weighting is applied to ultimate claim amounts)
Z(OPT) = p/(p + p^(1/2))
10
Q

How to calculate R(ind) and R(coll)

A
R(ind) = (C/p)*q
R(coll) = (EP*m)*q, where m is the expected loss ratio
11
Q

How to get ELR (m) and respective p’s?

A

Calculate the incremental loss ratio m(k) for each development period (column): the sum of incremental claims in that column divided by the corresponding earned premium. The ELR is the sum of all the m(k)'s.
To get each p, divide the sum of the m(k)'s for the development periods observed to date by the ELR.
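
A small sketch tying the last few cards together on a hypothetical 3-AY incremental triangle; the premiums, losses and the Z = p (Benktander) weight are illustrative only:

```python
# Hypothetical incremental loss triangle (rows = AYs, None = unobserved)
inc = [
    [50, 30, 10],
    [60, 35, None],
    [55, None, None],
]
EP = [100, 110, 105]   # earned premium by AY

# Expected incremental loss ratio m(k) by development period (column)
m = []
for k in range(3):
    obs = [(inc[i][k], EP[i]) for i in range(3) if inc[i][k] is not None]
    m.append(sum(c for c, _ in obs) / sum(e for _, e in obs))

ELR = sum(m)                       # expected (total) loss ratio

# p = share of the ELR expected by the latest diagonal for each AY
latest = [3, 2, 1]                 # number of observed periods per AY
p = [sum(m[:latest[i]]) / ELR for i in range(3)]

# Individual and collective loss ratio reserves, weighted with Z = p (Benktander)
for i in range(3):
    C = sum(v for v in inc[i] if v is not None)   # losses reported to date
    q = 1 - p[i]
    R_ind = (C / p[i]) * q
    R_coll = EP[i] * ELR * q
    Z = p[i]
    print(Z * R_ind + (1 - Z) * R_coll)
```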

12
Q

What is the Z that minimizes MSE (R)?

A

Z = (p/q) * [Cov(C, R) + p*q*Var(U(BC))] / [Var(C) + p^2*Var(U(BC))], where U(BC) is the collective (burning cost) ultimate

13
Q

What is the MSE formula

A

mse(R(Z)) = E[alpha^2(U)] * (Z^2/p + 1/q + (1 - Z)^2/t) * q^2

14
Q

Characteristics of Hürlimann's method

A
  1. Based on the full development triangle (rather than just the latest accident year)
  2. Requires a measure of exposure (as in Cape Cod)
  3. Relies on loss ratios (rather than link ratios)
  4. Involves a credibility weighting between two extreme positions (the individual and collective loss ratio reserves)
15
Q

Clark's assumptions

A
  1. Incremental losses are iid (one period does not affect the surrounding periods; the emergence pattern is the same for all AYs)
  2. The variance of incremental losses is proportional to the expected incremental losses, and the variance/mean ratio is fixed and known
  3. Variance estimates are based on an approximation to the Cramér-Rao lower bound
16
Q

Formula for a normalized residual

A

r = (ci - ui) / (sigma^2 * ui)^(1/2)

17
Q

Formula for sigma^2

A

sigma^2 = 1/(n - p) * sum of (ci - ui)^2/ui

where n = number of points in the triangle and p = number of parameters (LDF: 2 + number of AYs; CC: 3)
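
A sketch of sigma^2 and the normalized residuals, assuming hypothetical actual (c) and expected (u) incremental losses and an assumed parameter count:

```python
# Hypothetical actual (c) and fitted expected (u) incremental losses, cell by cell
c = [48.0, 31.0, 9.0, 62.0, 33.0, 56.0]
u = [50.0, 30.0, 10.0, 60.0, 35.0, 55.0]

n = len(c)        # number of points in the triangle
p = 2 + 3         # e.g. LDF method: 2 growth parameters + 3 AY ultimates

sigma2 = sum((ci - ui) ** 2 / ui for ci, ui in zip(c, u)) / (n - p)

# Normalized residual for each cell: r = (c - u) / sqrt(sigma2 * u)
residuals = [(ci - ui) / (sigma2 * ui) ** 0.5 for ci, ui in zip(c, u)]
print(sigma2, residuals)
```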

18
Q

How to test the assumptions using normalized residuals

A
  1. Against ui (expected incremental losses): tests whether the variance/mean ratio is constant
  2. Against age: tests whether the growth curve is appropriate for all AYs
  3. Against CY: tests that there are no calendar-year effects (one period does not affect the others)
19
Q

Give two growth functions

A

Weibull: G(x) = 1 - e^(-(x/theta)^omega)

Loglogistic: G(x) = x^omega / (x^omega + theta^omega)
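
A sketch of the two growth curves with hypothetical theta and omega values:

```python
import math

def weibull_G(x, theta, omega):
    # Weibull growth function: G(x) = 1 - exp(-(x/theta)^omega)
    return 1.0 - math.exp(-((x / theta) ** omega))

def loglogistic_G(x, theta, omega):
    # Loglogistic growth function: G(x) = x^omega / (x^omega + theta^omega)
    return x ** omega / (x ** omega + theta ** omega)

# Hypothetical parameters; x is the average age of an AY in months
print(weibull_G(24, theta=30.0, omega=1.5))
print(loglogistic_G(24, theta=30.0, omega=1.5))
```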

20
Q

Ultimate loss estimate - Clark's method

A

CC: EP*ELR, where ELR = (sum of losses to date) / (sum of used-up premium = EP*G(x))

LDF: Paid to date / G(x)
LDF - truncated: Paid to date / G'(x), where G'(x) = G(x)/G(TP)

21
Q

Reserve estimate - Clark's method

A

CC: EP*ELR*(1 - G(x))
CC - truncated: EP*ELR*(G(TP) - G(x))
ELR = (sum of losses to date) / (sum of used-up premium = EP*G(x))

LDF: Paid to date*(1/G(x) - 1)
LDF - truncated: Paid to date*(1/G'(x) - 1), where G'(x) = G(x)/G(TP)
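
A sketch of the reserve formulas above for a single accident year, with hypothetical paid losses, premium, ELR, G(x) and G(TP):

```python
# Hypothetical inputs for a single accident year
paid = 400.0     # paid to date
EP = 1000.0      # earned premium
ELR = 0.65       # expected loss ratio (Cape Cod)
G_x = 0.70       # growth function at the AY's current average age
G_TP = 0.95      # growth function at the truncation point

# Cape Cod reserves
R_cc = EP * ELR * (1 - G_x)
R_cc_trunc = EP * ELR * (G_TP - G_x)

# LDF reserves
R_ldf = paid * (1 / G_x - 1)
G_prime = G_x / G_TP                 # truncated growth function
R_ldf_trunc = paid * (1 / G_prime - 1)

print(R_cc, R_cc_trunc, R_ldf, R_ldf_trunc)
```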

22
Q

How to choose the best set of parameters for your data

A

Maximize the loglikelihood over the triangle: l = sum of [ci*ln(ui) - ui]
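
A sketch of the objective on hypothetical actual (c) and expected (u) incremental cells:

```python
import math

# Hypothetical actual (c) and expected (u) incremental losses over the triangle
c = [48.0, 31.0, 9.0, 62.0, 33.0, 56.0]
u = [50.0, 30.0, 10.0, 60.0, 35.0, 55.0]

# Loglikelihood (up to a constant); the best parameter set maximizes this
loglik = sum(ci * math.log(ui) - ui for ci, ui in zip(c, u))
print(loglik)
```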

23
Q

Data advantages of using Clark's growth function

A
  1. Works with data that is not at the same maturity as prior years
  2. Works with data for only the last few diagonals
  3. Naturally extrapolates past the end of the triangle
  4. Naturally interpolates between the ages in the analysis
24
Q

Advantages of using parameterized curves to describe the loss emergence patterns

A
  1. Only 2 parameters to estimate
  2. Can use data from triangles that do not have evenly spaced evaluation dates
  3. The final pattern is smooth
25
Q

Advantages of using the ODP distribution to model the actual loss emergence

A
  1. The inclusion of a scaling factor allows matching the first and second moments of any distribution, which means high flexibility
  2. MLE produces the LDF or Cape Cod ultimate loss estimate, which means the results can be presented in a familiar format
26
Q

Total variance

A

Sum of process variance and parameter variance:

(Reserve * sigma^2) + parameter variance

27
Q

What do the two types of variance represent?

A

Process: random variation in the actual loss emergence; equal to the reserve times sigma^2

Parameter: uncertainty in the estimator; the CC parameter variance is always lower than the LDF one (fewer parameters, and it incorporates information about the exposure base)

28
Q

What are Mack’s key assumptions

A
  1. Expected incremental losses are proportional to losses reported to date
  2. An accident year's losses are independent of other accident years' losses
  3. The variance of the next period's losses is proportional to losses reported to date
29
Q

What are the implications of the three assumptions by Mack

A
  1. Adjacent LDFs should be uncorrelated
  2. There is no CY effect
  3. The volume-weighted average LDF is the estimator that fits best
30
Q

Describe the three tests for the three assumptions by Mack

A
  1. Test for correlation between adjacent LDFs: check whether T lies within +/- 0.67*Var(T)^(1/2), where T is the weighted average of the t's, each t = 1 - 6*S/(n*(n^2 - 1)) (Spearman rank correlation), and Var(T) = 1/[(I - 2)(I - 3)/2]. Can also plot next-period C against current-period C to see whether the pattern is linear
  2. Test for the CY effect: check whether Z lies within E(Zn) +/- 2*Var(Zn)^(1/2). Can also look at the residuals vs the AY
  3. Look at the residuals vs previous cumulative losses
31
Q

Formula for residuals at 36 months - Mack

A
  1. If the variance is proportional to 1: the LDF chosen weights individual LDFs by previous cumulative losses squared; ri = (c36 - c24*f)/1
  2. If the variance is proportional to c24 (chain ladder): the LDF chosen is the volume-weighted average; ri = (c36 - c24*f)/c24^(1/2)
  3. If the variance is proportional to c24^2: the LDF chosen is the simple average; ri = (c36 - c24*f)/c24
32
Q

How to calculate alpha^2 by age

A

alpha^2 = 1/(number of LDFs - 1) * sum of [previous cumulative losses * (individual LDF - chosen LDF)^2]

33
Q

How to calculate MSE by cell

A
  1. Project all future cells
  2. Sum the cumulative losses above the diagonal in each column into A, B, C, D, …
  3. For each remaining age: MSE term = ULTi^2 * (alpha^2/f^2) * (1/previous cumulative + 1/column sum above the diagonal (A, B, C, …))
  4. Sum all the terms for an AY to get the MSE of that AY's reserve (see the sketch below)
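
A compact sketch of the alpha^2 and MSE steps from the last few cards, on a hypothetical 4x4 cumulative triangle; the final-column alpha^2 uses Mack's extrapolation since only one LDF is available there:

```python
# Hypothetical 4x4 cumulative loss triangle (rows = AYs, None = unobserved)
C = [
    [100.0, 150.0, 165.0, 170.0],
    [110.0, 168.0, 185.0, None],
    [120.0, 175.0, None,  None],
    [130.0, None,  None,  None],
]
I = 4  # number of AYs = number of development ages

# Volume-weighted LDFs f[k] and alpha2[k] by age, k = 0 .. I-2
f, alpha2 = [], []
for k in range(I - 1):
    rows = [i for i in range(I) if C[i][k + 1] is not None]
    fk = sum(C[i][k + 1] for i in rows) / sum(C[i][k] for i in rows)
    f.append(fk)
    if len(rows) > 1:
        a2 = sum(C[i][k] * (C[i][k + 1] / C[i][k] - fk) ** 2 for i in rows) / (len(rows) - 1)
    else:
        # only one LDF in the last column: Mack's usual extrapolation
        a2 = min(alpha2[-1] ** 2 / alpha2[-2], alpha2[-2], alpha2[-1])
    alpha2.append(a2)

# Square the triangle with the chosen LDFs
proj = [row[:] for row in C]
for i in range(I):
    for k in range(I - 1):
        if proj[i][k + 1] is None:
            proj[i][k + 1] = proj[i][k] * f[k]

# MSE by AY: ULT^2 * sum over remaining ages of (alpha^2/f^2)*(1/C + 1/column sum "A, B, C")
for i in range(1, I):
    ult = proj[i][I - 1]
    k_start = I - 1 - i                     # latest observed age for AY i
    mse = 0.0
    for k in range(k_start, I - 1):
        col_sum = sum(C[j][k] for j in range(I) if C[j][k + 1] is not None)  # the A, B, C sums
        mse += (alpha2[k] / f[k] ** 2) * (1.0 / proj[i][k] + 1.0 / col_sum)
    mse *= ult ** 2
    print(f"AY {i + 1}: ultimate {ult:.1f}, mse {mse:.1f}")
```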
34
Q

How to estimate a confidence interval for a reserve and its distribution

A

Use a lognormal distribution.
sigma^2 = ln(1 + MSE/R^2)
CI = R*exp(-sigma^2/2 +/- Z*sigma)
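
A sketch of the interval, assuming a hypothetical reserve and MSE:

```python
import math

# Hypothetical reserve point estimate and its MSE (e.g. from the Mack calculation)
R = 500.0
MSE = 90.0 ** 2
z = 1.96            # for an approximate 95% interval

sigma2 = math.log(1 + MSE / R ** 2)
sigma = math.sqrt(sigma2)

lower = R * math.exp(-sigma2 / 2 - z * sigma)
upper = R * math.exp(-sigma2 / 2 + z * sigma)
print(lower, upper)
```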

35
Q

Why don't we estimate reserve distributions with the normal distribution?

A

Because the normal distribution is not skewed enough and could produce a negative lower bound

36
Q

Describe 6 testable implications of Mack's CL assumptions - Venter

A
  1. Significance of age-to-age factors (b > 2*sigma)
  2. Superiority to alternative emergence patterns (adjusted SSE, AIC and BIC)
  3. Linearity (residuals vs previous cumulative claims)
  4. Stability (residuals vs AY, LDFs against year, state-space model)
  5. Correlation of development factors (is |T| within the t critical value with n - 2 degrees of freedom at the chosen significance level, where T = r*[(n - 2)/(1 - r^2)]^(1/2) and r is the sample correlation; see the sketch below)
  6. Additive CY effects (use regression to determine whether any diagonal dummy variable is significant)
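
A sketch of implication 5 (correlation of development factors) with hypothetical factor pairs; the 5% two-sided level is an assumption, not prescribed by the card:

```python
import math
from scipy import stats

# Hypothetical pairs of successive development factors for the same AYs
f_early = [1.50, 1.45, 1.60, 1.40, 1.55]   # factors at one age
f_late = [1.10, 1.12, 1.08, 1.15, 1.09]    # factors at the next age, same AYs
n = len(f_early)

# Sample correlation r between the two columns of factors
mx = sum(f_early) / n
my = sum(f_late) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(f_early, f_late))
sxx = sum((x - mx) ** 2 for x in f_early)
syy = sum((y - my) ** 2 for y in f_late)
r = sxy / math.sqrt(sxx * syy)

T = r * math.sqrt((n - 2) / (1 - r ** 2))   # test statistic
t_crit = stats.t.ppf(0.975, df=n - 2)       # critical value, 5% two-sided (assumed level)
print(abs(T) < t_crit)   # True -> factors not significantly correlated
```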
37
Q

Adjusted SSE

A

SSE/(n-p)^2 (n excludes the first age column cells)

38
Q

AIC

A

SSE * e^(2p/n) (n excludes the first age column cells)

39
Q

BIC

A

SSE * n^(p/n) (n excludes the first age column cells)
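
A sketch of the three fit measures from the last three cards, with hypothetical SSE, n and p:

```python
import math

# Hypothetical fit statistics: sum of squared errors, number of fitted points
# (excluding the first age column), and number of parameters
SSE = 1200.0
n = 15
p = 4

adj_sse = SSE / (n - p) ** 2
aic = SSE * math.exp(2 * p / n)
bic = SSE * n ** (p / n)
print(adj_sse, aic, bic)   # smaller is better when comparing emergence models
```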

40
Q

Number of parameters (CL,BF,CC)

A

CL and CC: m - 1 (m = number of AYs)

BF: 2m - 2

41
Q

Recursion formula if variance is constant over the triangle

A
f(d) = sum[h(w)^2 * (q(w,d)/h(w))] / sum[h(w)^2]
h(w) = sum[f(d)^2 * (q(w,d)/f(d))] / sum[f(d)^2]
42
Q

Recursion formula if variance is proportional to f(d)h(w)

A
f(d)^2 = sum[h(w) * (q(w,d)/h(w))^2] / sum[h(w)]
h(w)^2 = sum[f(d) * (q(w,d)/f(d))^2] / sum[f(d)]
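
A sketch of the constant-variance recursion (the first of the two cards above) on a hypothetical incremental triangle; in practice the two updates are repeated until f and h converge:

```python
# Hypothetical incremental triangle q[w][d] (rows = AYs w, None = unobserved)
q = [
    [50.0, 30.0, 10.0],
    [60.0, 35.0, None],
    [55.0, None, None],
]
W, D = 3, 3

# Starting values for the AY levels h(w) and emergence fractions f(d)
h = [sum(v for v in row if v is not None) for row in q]
f = [1.0 / D] * D

def cells(d):
    # AYs with an observed cell at development period d
    return [w for w in range(W) if q[w][d] is not None]

for _ in range(20):   # iterate the two updates until (approximate) convergence
    # f(d) = sum[h(w)^2 * (q(w,d)/h(w))] / sum[h(w)^2]
    f = [sum(h[w] ** 2 * (q[w][d] / h[w]) for w in cells(d)) /
         sum(h[w] ** 2 for w in cells(d)) for d in range(D)]
    # h(w) = sum[f(d)^2 * (q(w,d)/f(d))] / sum[f(d)^2]
    h = [sum(f[d] ** 2 * (q[w][d] / f[d]) for d in range(D) if q[w][d] is not None) /
         sum(f[d] ** 2 for d in range(D) if q[w][d] is not None) for w in range(W)]

print([round(x, 3) for x in f], [round(x, 1) for x in h])
```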
43
Q

Formula if we add inflation (g(w+d))

A

E[q(w,d)] = h * (1+i)^d * (1+j)^(w+d) * (1+k)^w

where i is the age (development) parameter, j the CY inflation, and k the AY effect on h

44
Q

Describe the complete procedure - Sahasrabuddhe

A
  1. Build the trend indices
  2. Fit theta (the claim size distribution parameters) at the latest AY's cost level
  3. Derive theta at all other ages and years
  4. Calculate LEV(X) and LEV(B) at the different AYs and ages
  5. Adjust the triangle: C(ij)' = C(ij)*LEV(B; n,j)/LEV(X; i,j)
  6. Calculate cumulative LDFs at the basic-limit level
  7. Calculate adjusted LDFs for any other layer: LDF(B)*[LEV(X; i,n)/LEV(B; n,n)] / [LEV(X; i,j)/LEV(B; n,j)] (see the sketch below)
  8. Calculate the reserve with the adjusted LDFs
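
A sketch of step 7 with purely hypothetical LEV values; the variable names indicate the layer (X or basic limit B), the AY cost level, and the age being evaluated:

```python
# Hypothetical inputs for adjusting a basic-limit LDF to another layer X
LDF_B = 2.0         # cumulative basic-limit LDF from age j to ultimate
LEV_X_in = 80.0     # LEV of layer X, AY i cost level, at ultimate (age n)
LEV_B_nn = 50.0     # LEV at basic limit, AY n cost level, at ultimate
LEV_X_ij = 30.0     # LEV of layer X, AY i cost level, at age j
LEV_B_nj = 25.0     # LEV at basic limit, AY n cost level, at age j

LDF_X = LDF_B * (LEV_X_in / LEV_B_nn) / (LEV_X_ij / LEV_B_nj)
print(LDF_X)
```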
45
Q

3 problems with the current application of trend rates

A
  1. Trend rates don't vary by claims layer
  2. Trend in the CY direction is often not considered
  3. Trend rates don't vary between AYs
46
Q

2 requirements for claim size models

A
  1. Have parameters that can be adjusted for inflation
  2. Have easily computable means and LEVs

47
Q

Sahasrabuddhe’s key finding

A

Development factors at different cost levels and different layers are related to each other based on claim size models and trends

48
Q

Five assumptions that must be met in order to implement the reserving procedure by Sahasrabuddhe

A
  1. Choose basic limit
  2. Use of claim size model
  3. Triangle of trend indices
  4. Have a triangle adjusted to basic limit and common cost level
  5. Claim size models at prior maturities
49
Q

Explain the other way to get an LDF at any other layer: LDF(B)*[LEV(X; i,n)/LEV(B; i,n)]/R(ij)

A

We don’t always have a claim size distribution at ages other than ultimate, so we use the ratio R(ij) instead. R(ij) starts close to 1 at early ages, since not many losses have been capped yet; at ultimate it reaches U = LEV(X)/LEV(B).

50
Q

Identify 5 advantages of a high deductible program

A
  1. Price flexibility
  2. Reduces residual market charges and premium taxes
  3. Allows for self-insurance without all the requirements
  4. Provides incentives for loss control while still protecting against very large losses
  5. Gives a cash flow advantage to the insured
51
Q

Explain why we need to index limits for inflation when calculating development factors for various deductibles and two methods to do it

A

To keep the ratio of deductible (limited) losses to excess losses constant. We can fit a line to average severities over a long-term history, or use an index that reflects the movement in annual severity changes.

52
Q

Explain what is a distributional model to estimate reserves for an excess layer

A

It fits the development process by determining severity distribution parameters that vary over time. Once the parameters are determined, you calculate severity relativities, and by comparing those across ages you get LDFs. The fitting can be done with the method of moments, MLE, or Siewert's approach (minimize the chi-square between actual and expected relativities around a particular deductible size).

53
Q

Why does development for losses in excess of aggregate limits decrease more rapidly over time with smaller deductibles than with larger ones?

A

Aggregate limits only cover losses under the deductible. Since most of the later development occurs in the layers above the deductible, excess of aggregate losses reach their ultimate value sooner with smaller deductibles

54
Q

Loss ratio method (2+,2-)

A

+ Useful when no data is available
+ The loss ratio can be tied to pricing
- Does not use actual experience

XS losses = P*ELR*(XS ratio) + P*ELR*(1 - XS ratio)*(aggregate charge)

55
Q

Implied development (2+,1-)

A

+ Produces IBNR at early maturities, even when no XS losses have been reported yet
+ Limited LDFs are more stable than XSLDFs
- Does not explicitly recognize excess loss development

XS IBNR = (Unlimited reported * unlimited LDF - Limited reported * limited LDF) - Reported XS
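
A numeric sketch of the implied development formula with hypothetical reported losses and LDFs:

```python
# Hypothetical inputs
unlim_reported = 800.0
lim_reported = 700.0     # reported losses limited to the deductible
rep_xs = 100.0           # reported excess losses
unlim_ldf = 1.50
lim_ldf = 1.35

ibnr_xs = (unlim_reported * unlim_ldf - lim_reported * lim_ldf) - rep_xs
print(ibnr_xs)   # excess IBNR implied by the difference in developed losses
```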

56
Q

Direct development (1+,2-)

A

+ Explicitly recognizes excess loss development
- XSLDFs tend to be overly leveraged and extremely volatile
- Cannot be used when no XS losses have been reported yet

XS IBNR = Reported XS losses * (XSLDF - 1)

57
Q

Credibility weighting method (2+,1-)

A

+ Produces stable results over time
+ Gives the ability to tie into pricing estimates for recent years where excess losses have not yet emerged
- Does not use actual experience in the IBNR estimation

ULT = Reported XS + (1 - 1/XSLDF) * (ultimate excess losses from the loss ratio method)

58
Q

Advantages of distributional models

A

+ Provides consistent LDFs
+ Allows for interpolation among limits and years

LDF = R(L)*(Limited LDF) + (1 - R(L))*XSLDF
XSLDF(t to t+y) = [(1 - R(t+y))/(1 - R(t))] * LDF(t to t+y)
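
A sketch of the two relations above with hypothetical severity relativities R(t); the limited-LDF analogue used for the cross-check is an assumption, not stated on the card:

```python
# Hypothetical severity relativities R(t) = limited/unlimited losses at ages t and t+y
R_t = 0.90
R_ty = 0.80
LDF = 1.40            # unlimited development factor from t to t+y

xs_ldf = (1 - R_ty) / (1 - R_t) * LDF      # excess LDF
lim_ldf = (R_ty / R_t) * LDF               # limited LDF (assumed analogue of the XS relation)
print(R_t * lim_ldf + (1 - R_t) * xs_ldf)  # recovers the unlimited LDF (1.40)
```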