Clark Flashcards

1
Q

Key elements of a statistical loss reserving model (2)

A
  1. expected amount of loss to emerge (point estimate)
  2. distribution of actual emergence around expected value (range of possible outcomes)
2
Q

Loglogistic growth curve

A

G(x) = x^omega / (x^omega + theta^omega)
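
A minimal numeric sketch (not from Clark's paper; x is the average age in months and the omega/theta values are made up for illustration):

    # Loglogistic growth curve: G(x) = x^omega / (x^omega + theta^omega)
    def loglogistic_G(x, omega, theta):
        return x**omega / (x**omega + theta**omega)

    # illustrative values only
    print(loglogistic_G(18.0, omega=1.5, theta=60.0))  # ~0.14 = expected % of ultimate reported at 18 months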

3
Q

Weibull growth curve

A

G(x) = 1 - e^[-(x / theta)^omega]

*shorter tail compared to Loglogistic
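
The same kind of sketch for the Weibull form (again with made-up parameters):

    import math

    # Weibull growth curve: G(x) = 1 - exp(-(x / theta)^omega)
    def weibull_G(x, omega, theta):
        return 1.0 - math.exp(-(x / theta)**omega)

    # With the same parameters the Weibull approaches 100% reported faster than
    # the loglogistic, i.e. it implies a shorter tail.
    print(weibull_G(120.0, omega=1.5, theta=60.0))  # ~0.94 vs. ~0.74 for the loglogistic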

4
Q

Advantages to using parameterized curves to describe loss emergence pattern (3)

A
  1. simple estimation
  2. ability to use data from triangles w/o evenly spaced evaluation dates
  3. creates a smooth curve
5
Q

Number of parameters in the LDF method (*Clark)

A

n + 2

  • n AYs
  • omega
  • theta
6
Q

Number of parameters in the CC method (*Clark)

A

3

  • omega
  • theta
  • ELR
7
Q

Reasons the CC method has lower parameter variance and total variance (2)

A
  1. reduced number of parameters
  2. additional info included in the exposure base
8
Q

Difference b/w LDF and CC methods

Clark

A

LDF - assumes AYs are independent

CC - assumes a known relationship b/w ultimate losses across AYs, described by the ELR

9
Q

Tests for constant ELR assumption (2)

Clark

A
  1. plot ultimate LRs by AY
  2. plot normalized residuals against expected incremental losses and look for a random scatter around the 0 line
10
Q

Variance / mean ratio (sigma^2)

A

= 1 / (n - p) * sum[ (actual - expected)^2 / expected ]

> calculate chi-squared triangle and then sum
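
A minimal sketch of the calculation, assuming the actual and expected incremental losses are flattened into arrays and numpy is available (p = number of parameters: n + 2 for the LDF method, 3 for the CC method):

    import numpy as np

    def variance_to_mean_ratio(actual, expected, p):
        # sigma^2 = 1 / (n - p) * sum of (actual - expected)^2 / expected
        actual = np.asarray(actual, dtype=float)
        expected = np.asarray(expected, dtype=float)
        chi_sq = (actual - expected)**2 / expected  # the "chi-squared" cells
        n = actual.size                             # number of incremental data points
        return chi_sq.sum() / (n - p)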

11
Q

Advantages of using the ODP distribution (2)

Clark

A
  1. high flexibility - scaling factor, sigma^2, allows matching 1st and 2nd moments of any distribution
  2. results presented in a familiar format (produces LDF and CC estimates)
12
Q

Advantage of MLE

A

works with negative or 0 incremental loss amounts

13
Q

Key assumptions from Clark’s model (3)

A
  1. incremental losses are i.i.d.
  2. var/mean scale parameter is fixed and known
  3. variance is based on approx. to Rao-Cramer lower bound (minimized)
14
Q

Residual plots to test model assumptions (what to look for and 4 types of tests)

A

want: random scatter around zero line

can plot against:

  1. increment age (how well the loss emergence curve fits incremental losses at different dev. periods)
  2. expected loss (var/mean ratio is constant)
  3. AY
  4. CY (diagonal effects)
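
A sketch of one such diagnostic (normalized residuals against increment age), assuming matplotlib and precomputed ages/residuals arrays:

    import matplotlib.pyplot as plt

    def residual_plot(ages, residuals):
        # want a random scatter around the zero line; a visible pattern suggests
        # the emergence curve or the constant variance/mean assumption fails
        plt.scatter(ages, residuals)
        plt.axhline(0.0)
        plt.xlabel("increment age")
        plt.ylabel("normalized residual")
        plt.show()
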
15
Q

Handling truncation (3)

A
  1. the ELR is always calculated before truncation
  2. LDF method: truncated LDF = fitted LDF / fitted LDF at the truncation age
  3. CC method: truncated % unreported = G(truncation age) - G(current age)
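
A small sketch of both adjustments, where G is any fitted growth function (e.g. the loglogistic or Weibull sketches above) and the truncation age is an assumption such as 240 months:

    # LDF method: truncated LDF = fitted LDF / fitted LDF at the truncation age
    #           = (1 / G(x)) / (1 / G(x_trunc)) = G(x_trunc) / G(x)
    def truncated_ldf(G, x, x_trunc):
        return G(x_trunc) / G(x)

    # CC method: truncated % unreported = G(x_trunc) - G(x)
    def truncated_pct_unreported(G, x, x_trunc):
        return G(x_trunc) - G(x)
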
16
Q

Which has the smallest variance and why: discounted or un-discounted reserves?

A

discounted reserves - the tail has the largest parameter variance but also receives the deepest discount

17
Q

How to handle partial exposure periods

A

scale the premium (exposure base) by the % earned (e.g., an AY with only 6 months of earned exposure would use 50% of its annual premium)

18
Q

Normalized residuals (Clark)

A

r_i = (c_i - mu_i) / (sigma*sqrt(mu_i))

c = actual
mu = expected
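
A one-function sketch (numpy assumed; sigma2 is the variance/mean ratio from card 10):

    import numpy as np

    def normalized_residuals(actual, expected, sigma2):
        # r_i = (c_i - mu_i) / sqrt(sigma^2 * mu_i)
        c = np.asarray(actual, dtype=float)
        mu = np.asarray(expected, dtype=float)
        return (c - mu) / np.sqrt(sigma2 * mu)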

19
Q

Parameter variance calculation

Clark

A

Parameter variance = var(ELR) * premium^2
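
Worked example with hypothetical figures: if premium = 10,000 and Var(ELR) = 0.0025, the formula gives parameter variance = 0.0025 * 10,000^2 = 250,000, i.e. a parameter standard deviation of 500.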

20
Q

Why is parameter variance generally greater than process variance?
(Clark)

A

Few data points in the triangle means that most of the uncertainty comes from parameter estimation (vs. randomness)

21
Q

MLE term

Clark

A

maximizing the loglikelihood is equivalent to maximizing:

L = sum over i of [ c_i * ln(mu_i) - mu_i ]
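
A sketch of this objective summed over the incremental cells (constant terms dropped, as above; numpy assumed):

    import numpy as np

    def loglikelihood(actual, expected):
        # sum of c_i * ln(mu_i) - mu_i; mu_i must be positive, but c_i may be
        # negative or zero (the MLE advantage noted on card 12)
        c = np.asarray(actual, dtype=float)
        mu = np.asarray(expected, dtype=float)
        return np.sum(c * np.log(mu) - mu)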