Taylor Flashcards

1
Q

Notation

A

Y_k,j = incremental losses for AY k and development period j
X_k,j = SUM(Y_k,i for i = 1 to j) = cumulative losses
f_j = all-year volume-weighted average age-to-age factor from development period j to j + 1
d = CY; cells with the same k + j lie on one diagonal of the array and share a CY

K AYs/CYs
J development periods
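A minimal numeric sketch of this notation, assuming a hypothetical 3x3 triangle (all values illustrative):

```python
import numpy as np

# Hypothetical 3x3 incremental triangle Y[k, j] (0-indexed here): rows are
# AYs k, columns are development periods j, NaN below the latest diagonal.
Y = np.array([
    [100.0, 50.0, 20.0],
    [110.0, 60.0, np.nan],
    [120.0, np.nan, np.nan],
])

# X_k,j = SUM_{i<=j} Y_k,i = cumulative losses
X = np.nancumsum(Y, axis=1)
X[np.isnan(Y)] = np.nan          # keep unobserved cells empty

# Cells with the same k + j share a calendar year d (a diagonal of the array)
latest_diagonal = np.array([X[0, 2], X[1, 1], X[2, 0]])
```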

2
Q

BF Method

A

A priori loss = P_k * ELR_k
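The a priori loss feeds the usual BF projection; a tiny sketch with hypothetical premium, ELR, and emergence values:

```python
# Hypothetical inputs for one accident year k (illustrative values only)
P_k, ELR_k = 1000.0, 0.65       # earned premium and a priori expected loss ratio
pct_emerged = 0.40              # expected % emerged to date (1 / CDF)
reported = 300.0                # losses reported to date

a_priori = P_k * ELR_k                      # a priori loss
bf_reserve = a_priori * (1 - pct_emerged)   # BF estimate of unreported losses
bf_ultimate = reported + bf_reserve         # BF ultimate
```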

3
Q

Cape Cod Method

A

A priori loss = P_k * ELR, where ELR = SUM(Cum Loss_i) / SUM(P_i * % emerged_i)

Uses the same ELR for each AY = wtd average of each AY’s CL LR with weights = Premium * % Emerged
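A numeric sketch (all premiums, emergence percentages, and losses hypothetical) showing that the two forms of the Cape Cod ELR agree:

```python
import numpy as np

# Hypothetical data by AY: premium, % emerged (1/CDF), cumulative loss to date
P = np.array([1000.0, 1100.0, 1200.0])
pct_emerged = np.array([1.0, 0.8, 0.5])
cum_loss = np.array([650.0, 570.0, 400.0])

# Cape Cod ELR = total reported losses / total used-up premium
elr = cum_loss.sum() / (P * pct_emerged).sum()

# Equivalently: weighted average of each AY's chain-ladder loss ratio,
# with weights P_i * % emerged_i
cl_lr = cum_loss / pct_emerged / P
w = P * pct_emerged
elr_wtd = (w * cl_lr).sum() / w.sum()

a_priori = P * elr               # a priori loss by AY
```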

4
Q

Exponential Dispersion Family (EDF)

A

ln(pdf) = (y * theta - b(theta)) / a(phi) + c(y,phi)

theta is a location parameter = canonical parameter
phi is a dispersion parameter = scale parameter
b(theta) = cumulant function, determining the shape
exp(c(y,phi)) = normalizing factor

a(phi) = phi / w -> assume w =1

E(Y) = b’(theta) = mu
Var(Y) = a(phi) * b''(theta) = a(phi) * V(mu)
where V(mu) is the variance function that depends on the mean
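As a concrete check of these identities for the Poisson member (theta = ln(mu), b(theta) = exp(theta), a(phi) = 1), a simulation sketch with an illustrative mean:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 4.0                         # mu = b'(theta) = exp(theta), so theta = ln(mu)
y = rng.poisson(mu, size=200_000)

# E(Y) = b'(theta) = mu and Var(Y) = a(phi) * b''(theta) = 1 * mu for Poisson,
# i.e. the variance function is V(mu) = mu
print(y.mean(), y.var())         # both should be close to 4
```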

5
Q

Tweedie Sub-Family

A

V(mu) = mu^p where p<=0 or p>=1

mu = [(1 - p) * theta]^(1/(1-p)) for p ≠ 1 (when p = 1, mu = exp(theta), the ODP case)

Assuming a(phi) = phi, Var(Y) = phi * mu^p

p = 0 -> Normal
p = 1 -> ODP
p = 2 -> Gamma
p = 3 -> Inverse Gaussian

Tail heaviness of Tweedie distributions increases as p increases

If a model based on a specific p generates more widely dispersed residuals than are consistent with that model, then an increase in p might be warranted
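A simulation sketch of the p = 2 (Gamma) case, where Var(Y) = phi * mu^2 (all parameter values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
mu, phi, p = 10.0, 0.5, 2

# A Gamma with shape 1/phi and scale phi*mu has mean mu and variance phi*mu^2,
# matching the Tweedie variance relationship at p = 2
y = rng.gamma(shape=1 / phi, scale=phi * mu, size=200_000)

print(y.mean())                  # ~ mu = 10
print(y.var())                   # ~ phi * mu**p = 50
```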

6
Q

Over-Dispersed Poisson Sub-Family

A

p = 1
b(theta) = exp(theta) = mu
a(phi) = phi

Useful when little is known of the subject distribution

7
Q

Selection of a GLM

A
  1. Selection of a cumulant function b(theta), controlling the model’s assumed error distribution
  2. Selection of an index p, controlling the relationship between the model’s mean and variance
  3. Selection of the covariates
  4. Selection of a link function, controlling the relationship between the mean mu_i and the associated covariates

Parameters of a GLM are often estimated using maximum likelihood estimation (MLE)

8
Q

Scaled Deviance

A

D = 2 * SUM[loglikelihood(saturated) - loglikelihood(actual)]

Saturated model includes a parameter for every observation so that Estimated = Actual

When this difference is small, the fitted values are close to the actual values - we want to minimize deviance

9
Q

Unscaled Deviance

A

Common to minimize unscaled deviance so we don’t need the parameter phi

D* = phi * D = 2 * phi * SUM[loglikelihood(saturated) - loglikelihood(actual)]

phi can be estimated by D* / (n - p)
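A sketch for the ODP case, where each cell contributes d_i = 2 * [y ln(y/mu) - (y - mu)] to the unscaled deviance (toy observed and fitted values, hypothetical):

```python
import numpy as np

y = np.array([100.0, 55.0, 18.0, 120.0, 58.0, 125.0])    # observed increments
mu = np.array([101.0, 53.0, 19.0, 118.0, 60.0, 124.0])   # GLM fitted values
n_params = 3                                             # parameters in the fit

# Per-observation ODP deviance contributions (y ln(y/mu) taken as 0 when y = 0)
d_i = 2 * (y * np.log(y / mu) - (y - mu))
D_star = d_i.sum()                                       # unscaled deviance

# Dispersion estimate phi_hat = D* / (n - p)
phi_hat = D_star / (len(y) - n_params)
```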

10
Q

Standardized Pearson Residuals

A

R_i = (Actual - Estimated) / SD_i

Should exhibit unbiasedness (revolve around y=0) and homoscedasticity (constant variance) when plotted against covariates

Will reproduce any non-normality that exists in the observations (e.g., skewed loss data)
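Under the ODP assumption Var(Y_i) = phi * mu_i, the standardized Pearson residuals can be sketched as follows (toy values):

```python
import numpy as np

y = np.array([100.0, 55.0, 18.0, 120.0, 58.0, 125.0])    # observed
mu = np.array([101.0, 53.0, 19.0, 118.0, 60.0, 124.0])   # fitted
phi = 1.4                                                # dispersion estimate

# R_i = (actual - estimated) / SD_i with SD_i = sqrt(phi * mu_i) for ODP
r = (y - mu) / np.sqrt(phi * mu)

# For model checking, plot r against covariates (AY, development period) and
# look for a band centered on y = 0 with roughly constant spread
```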

11
Q

Standardized Deviance Residuals

A

Much closer to normal than the Pearson residuals, better for model assessment

R_i = sgn(Actual - Estimated) * SQRT(d_i / phi)

sgn produces -1 if negative, 0 if 0, and 1 if positive

d_i is the contribution of the ith observation to the unscaled deviance
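A sketch of the computation under the ODP assumption, where the per-cell deviance contribution is d_i = 2 * [y ln(y/mu) - (y - mu)] (toy values):

```python
import numpy as np

y = np.array([100.0, 55.0, 18.0])     # observed
mu = np.array([101.0, 53.0, 19.0])    # fitted
phi = 1.4                             # dispersion estimate

# Contribution of each observation to the unscaled deviance (always >= 0)
d_i = 2 * (y * np.log(y / mu) - (y - mu))

# R_i = sgn(actual - estimated) * sqrt(d_i / phi)
r = np.sign(y - mu) * np.sqrt(d_i / phi)
```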

12
Q

Non-Parametric Mack Model

A

M1: AYs are stochastically independent
M2: The cumulative losses form a Markov chain (X_k,j depends only on X_k,j-1)
M3a: E(X_k,j+1 | X_k,j) = f_j * X_k,j
M3b: Var(X_k,j+1|X_k,j) = sigma_j^2 * X_k,j

Result 1: The conventional CL estimators of f_j are unbiased AND minimum variance estimators among estimators that are unbiased linear combinations of the f_k,j
Result 2: The conventional CL reserve estimate is unbiased
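The conventional CL estimators of the f_j referenced here are the all-year volume-weighted factors; a sketch on a hypothetical cumulative triangle:

```python
import numpy as np

# Hypothetical cumulative triangle X[k, j] (NaN in unobserved cells)
X = np.array([
    [100.0, 150.0, 170.0],
    [110.0, 170.0, np.nan],
    [120.0, np.nan, np.nan],
])
K, J = X.shape

# f_j = SUM_k X_k,j+1 / SUM_k X_k,j over the rows observed in both columns
f = np.array([
    np.nansum(X[: K - 1 - j, j + 1]) / np.nansum(X[: K - 1 - j, j])
    for j in range(J - 1)
])
# f[0] = 320/210, f[1] = 170/150
```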

13
Q

Parametric Mack Models

A

Assigns distributions to the incremental losses Y_k,j

14
Q

EDF Mack Model

A

M1: AYs are stochastically independent
M2: The cumulative losses form a Markov chain (X_k,j depends only on X_k,j-1)
M3a: E(X_k,j+1 | X_k,j) = f_j * X_k,j
M3b: Y_k,j+1 | X_k,j ~ EDF

15
Q

Tweedie Mack Model

A

M1: AYs are stochastically independent
M2: The cumulative losses form a Markov chain (X_k,j depends only on X_k,j-1)
M3a: E(X_k,j+1 | X_k,j) = f_j * X_k,j
M3b: Y_k,j+1 | X_k,j ~ Tweedie

16
Q

ODP Mack Model

A

M1: AYs are stochastically independent
M2: The cumulative losses form a Markov chain (X_k,j depends only on X_k,j-1)
M3a: E(X_k,j+1 | X_k,j) = f_j * X_k,j
M3b: Y_k,j+1 | X_k,j ~ ODP

17
Q

Theorem 3.1

A

The EDF Mack model results in the following assuming the data array is a triangle:

  1. If M3b holds, then the maximum likelihood estimators (MLEs) of the f_j are the conventional CL estimators (which are unbiased)
  2. If we are in the special case of the ODP Mack model AND the dispersion parameters are just column dependent, then the conventional CL estimators are minimum variance unbiased estimators (MVUEs). In addition, the cumulative loss estimates and the reserve estimates are also MVUEs

The estimators here are the minimum variance out of ALL unbiased estimators, not just out of the linear combinations of the f_k,j

18
Q

EDF Cross-Classified Models

A

The incremental loss random variables are stochastically independent

Y_k,j ~ EDF

E(Y_k,j) = alpha_k * beta_j

SUM(beta_j) = 1

Includes explicit row and column parameters

19
Q

Theorem 3.2

A

For an ODP cross-classified model with the dispersion parameter identical for all cells, the MLE fitted values and forecasts are the same as those given by the conventional CL method

20
Q

Theorem 3.3

A

If ODP cross-classified model assumptions apply and the fitted values/forecasts are corrected for bias, then they are MVUEs

Forecasts from the ODP Mack and ODP cross-classified models are identical and the same as those from the conventional CL method despite the different formulations

21
Q

ODP Cross-Classified Model Algorithm

A

Iteratively solve for the next beta and alpha parameter, starting with alpha_1 and beta_J

  1. Set alpha_1 = cumulative loss to date for AY 1 (the fully developed year)
  2. Determine the other alphas using alpha_k = Cumulative Loss_k / (1 - SUM(remaining beta_j))
  3. Determine the other betas using beta_j = SUM(column j Inc Loss) / SUM(alpha_k for the AYs observed in column j)

alpha_1 represents the ultimate loss for AY 1
Each beta_j is % incremental emergence
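A sketch of the algorithm on a hypothetical 3x3 incremental triangle. Consistent with Theorem 3.2, the resulting alpha_k reproduce the conventional CL ultimates:

```python
import numpy as np

# Hypothetical incremental triangle Y[k, j] (NaN in unobserved cells)
Y = np.array([
    [100.0, 50.0, 20.0],
    [110.0, 60.0, np.nan],
    [120.0, np.nan, np.nan],
])
K, J = Y.shape
latest = np.nansum(Y, axis=1)        # cumulative loss to date by AY

alpha = np.zeros(K)
beta = np.zeros(J)
alpha[0] = latest[0]                 # step 1: AY 1 is fully developed

# Steps 2-3: work backwards through the columns, alternating beta then alpha
for m in range(1, K):
    j = J - m                        # next undetermined development column
    beta[j] = Y[:m, j].sum() / alpha[:m].sum()
    alpha[m] = latest[m] / (1 - beta[j:].sum())
beta[0] = 1 - beta[1:].sum()         # betas sum to 1

# alpha[k] is the fitted ultimate for AY k, and alpha[k] * beta[j] the fitted
# incremental loss, so future-cell forecasts follow directly
```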

22
Q

GLM representation of ODP Mack Model

A

f_k,j - 1|X_k,j ~ ODP(f_j - 1, phi_j / X_k,j)

Identity link function

Weights underlying the variance are the cumulative losses

23
Q

GLM representation of ODP Cross-Classified Model

A

Y_k,j ~ ODP(alpha_k * beta_j, phi)

Log link function -> mu_k,j = exp(ln(alpha_k) + ln(beta_j)) = alpha_k * beta_j

Weights underlying the variance are 1

Leads to parameter redundancy/aliasing