Taylor Flashcards

1
Q

Calculate the Expected Value and Variance for the Tweedie distribution.

A

E(Y) = u = ((1-p)*theta)^(1/(1-p))
V(Y) = phi * u^p

p = 0 if Normal (u = theta)
p = 1 if ODP (u = exp(theta))
p = 2 if Gamma (u = -1/theta)
p = 3 if Inverse Gaussian (u = (-2*theta)^(-1/2))
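As a quick numeric check, the mean and variance formulas above can be sketched in Python (the function names are mine, not from the source):

```python
import math

def tweedie_mean(theta, p):
    # E(Y) = u = ((1 - p) * theta) ** (1 / (1 - p)) for p != 1;
    # the ODP case p = 1 is the limit u = exp(theta).
    if p == 1:
        return math.exp(theta)
    return ((1 - p) * theta) ** (1 / (1 - p))

def tweedie_variance(u, phi, p):
    # V(Y) = phi * u**p
    return phi * u ** p
```

The special cases follow directly: p = 0 gives u = theta (Normal), and p = 2 with theta = -1/u gives the Gamma mean.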

2
Q

3 selections to specify a GLM

A
  1. Error distribution (must be a member of the EDF), including index p
  2. Explanatory variables (Xi's)
  3. Link function h() (e.g. identity, log)
3
Q

Calculate GLM deviance

A

D* = 2*(ll_saturated - ll_model)

The saturated model has a parameter for each observation, so it fits the observations exactly.
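A minimal sketch of this calculation for an ODP model, using the Poisson-form unit deviances (the function name and the phi argument are my assumptions):

```python
import math

def odp_scaled_deviance(y, y_hat, phi=1.0):
    # Scaled deviance D* = 2 * (ll_saturated - ll_model) / phi.
    # For the (over-dispersed) Poisson, observation i contributes
    # d_i = 2 * (y_i * log(y_i / yhat_i) - (y_i - yhat_i)),
    # with the log term taken as 0 when y_i = 0.
    d = 0.0
    for yi, mi in zip(y, y_hat):
        log_term = yi * math.log(yi / mi) if yi > 0 else 0.0
        d += 2.0 * (log_term - (yi - mi))
    return d / phi
```

A saturated model (fitted values equal to the observations) gives D* = 0.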

4
Q

Calculate Standardized Pearson Residuals.
How do you use them to validate model?

A

RiP = (Yi - Yihat)/sihat

Standardized Pearson residuals should be random around zero (unbiased) and have uniform dispersion (homoscedasticity)
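The formula above translates directly into code (names are mine; sigma_hat holds the estimated standard deviations):

```python
def standardized_pearson_residuals(y, y_hat, sigma_hat):
    # R_i^P = (Y_i - Yhat_i) / sigmahat_i
    return [(yi - mi) / si for yi, mi, si in zip(y, y_hat, sigma_hat)]
```

Plotting these against fitted values should show random scatter around zero with uniform spread.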

5
Q

Calculate Standardized Deviance Residuals

A

RiD = sgn(Yi - Yihat)*sqrt(di/phi)

di is the contribution of observation Yi to the deviance D*

Standardized deviance residuals should be normally distributed.
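A sketch of the same formula in Python (the function name is mine; d holds each observation's deviance contribution d_i):

```python
import math

def standardized_deviance_residuals(y, y_hat, d, phi):
    # R_i^D = sgn(Y_i - Yhat_i) * sqrt(d_i / phi)
    out = []
    for yi, mi, di in zip(y, y_hat, d):
        sign = 1.0 if yi >= mi else -1.0
        out.append(sign * math.sqrt(di / phi))
    return out
```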

6
Q

Provide 1 solution if a model based on specific p generates more widely dispersed residuals than are consistent with that model.

A

Increase p

7
Q

Estimate scale parameter (phi) from Deviance

A

phihat = D*/(n-p)
n = # observations
p = # parameters
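In code (names are mine), the estimator is a one-liner:

```python
def scale_estimate(scaled_deviance, n_obs, n_params):
    # phihat = D* / (n - p), where n = # observations, p = # parameters
    return scaled_deviance / (n_obs - n_params)
```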

8
Q

Provide a solution if residual plot exhibits heteroscedasticity (non-homogeneous variance around 0)

A

Use weights

Observations should be assigned weights that are inversely proportional to the variance of the residuals.

We can remove the influence of outliers by assigning them weights of 0 in the model.
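A small sketch of the weighting rule above (function name and inputs are my assumptions; var_estimates holds the estimated residual variance for each observation):

```python
def residual_weights(var_estimates, outlier_idx=()):
    # Weights inversely proportional to residual variance;
    # outliers are removed by assigning them weight 0.
    return [0.0 if i in outlier_idx else 1.0 / v
            for i, v in enumerate(var_estimates)]
```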

9
Q

Summarize the Non-Parametric Mack Model Assumptions

A
  1. Accident years are stochastically independent.
  2. For each AY k, the cumulative losses X_k,j form a Markov chain.
  3. For each AY k and development period j:
    E(X_k,j+1 given X_k,j) = fj * X_k,j (fj is the LDF)
    V(X_k,j+1 given X_k,j) = sj^2 * X_k,j
10
Q

Summarize the 2 results from the Mack Model

A
  1. The conventional chain ladder LDF estimators, fjhat, are
    a) Unbiased
    b) Minimum variance among estimators that are unbiased linear combinations of the age-to-age factors
  2. The conventional chain ladder reserve estimator, Rkhat, is unbiased
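The conventional (volume-weighted) chain ladder LDF estimators referenced above can be computed from a cumulative triangle; a minimal sketch, where the triangle layout and function name are my assumptions:

```python
def chain_ladder_ldfs(triangle):
    # triangle: rows of cumulative losses by accident year; each later row
    # has one fewer development entry (a standard run-off shape).
    # fjhat = sum_k X_k,j+1 / sum_k X_k,j over AYs where both cells exist.
    n_dev = max(len(row) for row in triangle)
    ldfs = []
    for j in range(n_dev - 1):
        num = sum(row[j + 1] for row in triangle if len(row) > j + 1)
        den = sum(row[j] for row in triangle if len(row) > j + 1)
        ldfs.append(num / den)
    return ldfs
```

Each fjhat is a weighted average of the individual age-to-age factors, with cumulative losses as weights.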
11
Q

Summarize the 2 parametric Mack Model Assumptions

A

Same assumptions as Mack Model, except:
1. Y_k,j+1 given X_k,j follows EDF(theta_k,j, phi_k,j; a, b, c)
2. Variance assumption is removed - Variance is driven by the selected EDF distribution

12
Q

Briefly explain Theorem 3.1 regarding EDF Mack Model

A

a. If the original Mack variance assumption also holds, then the MLEs of the fj parameters are the chain ladder LDF estimators, fjhat, and these are unbiased estimators.

b. If the model is restricted to the ODP Mack model and if the dispersion parameters are just column dependent, phi_k,j = phi_j, then the weighted average chain ladder LDFs, fjhat, are MVUEs.

c. For the same conditions as (b), the cumulative loss estimates, Xhat_k,j, and the reserve estimates, Rhat_k, are also MVUEs.

13
Q

Briefly explain the implication of Theorem 3.1

A

This theorem means that the conventional chain ladder estimates and forecasts are optimal estimators (both MLE and MVUEs).

This is a stronger implication than the original Mack model.

It shows the LDF estimators, fjhat, are minimum variance among all unbiased estimators, not just among unbiased linear combinations of the age-to-age factors.

14
Q

Explain the 2 Cross-Classified Model assumptions

A
  1. Random variables Y_k,j are stochastically independent.
  2. For each accident year k and development period j:
Y_k,j follows EDF(theta_k,j, phi_k,j; a, b, c)
    E(Y_k,j) = ak*bj
    sum(bj) = 1
15
Q

Briefly explain theorem 3.2 regarding Cross-Classified Model

A

For the Cross-Classified Model, if the following assumptions also hold:
1. Y_k,j is restricted to an ODP distribution
2. The dispersion parameters are identical for all cells: phi_k,j = phi

Then, the MLE fitted values Yhat_k,j are the same as those from the conventional chain ladder method.

16
Q

Briefly explain theorem 3.3 regarding Cross-Classified Model

A

If the theorem 3.2 assumptions hold AND the fitted values Yhat_k,j and reserve estimates Rhat_k are corrected for bias, then they are MVUEs of Y_k,j and Rk.

17
Q

Briefly explain the results of Taylor’s theorems

A

Theorems 3.2 and 3.3 parallel Theorem 3.1 for the ODP Mack model, and together they mean:
1. Forecasts from the ODP Mack and ODP cross-classified models are identical, and are the same as those from the chain ladder method, despite the different model formulations.
2. We can obtain forecasts for the ODP cross-classified model without considering that model directly, by working as if it were an ODP Mack model.

18
Q

Describe Y, X and b matrices in ODP Mack Model GLM representation.

19
Q

Describe the Y, X and b matrices in the ODP cross-classified model GLM representation (4x4 triangle)

20
Q

Calculate ak, bj and forecasts using Cross-Classified Model Parameters.

21
Q

Calculate re-normalized ODP Cross-Classified Model parameters

A

ak_norm = ak*sum(bj)
bj_norm = bj/sum(bj)
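In code (the function name is mine), the re-normalization preserves every fitted value ak * bj while forcing the bj to sum to 1:

```python
def renormalize(a, b):
    # ak_norm = ak * sum(bj);  bj_norm = bj / sum(bj)
    # The products ak * bj are unchanged, and the new bj sum to 1.
    s = sum(b)
    return [ak * s for ak in a], [bj / s for bj in b]
```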