Verrall Flashcards

1
Q

Advantages of Bayesian Models

A
  • Can incorporate expert knowledge
  • Can be easily implemented
  • The full predictive distribution can be found using simulation methods
  • The RMSEP can be obtained directly by calculating the standard deviation of the distribution
2
Q

Mack Model

A

Specifies the first two moments of the cumulative losses

Advantage:
- Simple and can be implemented using spreadsheets

Disadvantages:
- No predictive distribution
- Additional parameters must be estimated in order to calculate the variance

3
Q

Over-Dispersed Poisson (ODP) Model

A

Models incremental losses

Advantages:
- It doesn’t necessarily break down if there are some negative incremental values (only requires the column sums to be positive)
- It gives the same reserve estimate as the CL method
- It’s more stable than the log-normal model of Kremer

Disadvantage:
- The connection to the CL method is not immediately apparent
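As a sketch of the CL-equivalence claim above, the chain ladder reserve on a small hypothetical triangle (all values assumed for illustration) can be computed in a few lines; the ODP model's fitted reserve would match this total:

```python
# Chain ladder reserve on a hypothetical 3x3 cumulative triangle
# (assumed values, not from the source)
triangle = [
    [100, 150, 165],   # AY 1: fully developed
    [110, 170],        # AY 2
    [120],             # AY 3
]

n = len(triangle)

# Volume-weighted development factors (column j -> j+1)
ldfs = []
for j in range(n - 1):
    num = sum(row[j + 1] for row in triangle if len(row) > j + 1)
    den = sum(row[j] for row in triangle if len(row) > j + 1)
    ldfs.append(num / den)

# Project each open AY to ultimate and accumulate the reserve
reserve = 0.0
for row in triangle:
    ult = row[-1]
    for j in range(len(row) - 1, n - 1):
        ult *= ldfs[j]
    reserve += ult - row[-1]
```

Fitting the ODP model to the same incremental triangle would reproduce this reserve, which is the sense in which the two methods agree.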

4
Q

Over-Dispersed Negative Binomial (ODNB) Model

A

Can be applied to both incremental and cumulative losses

Produces the same predictive distribution as the ODP model

Incremental:
EV = (LDF - 1) * Prior Cumulative
Variance = Dispersion * LDF * (LDF - 1) * Prior Cumulative

Cumulative:
EV = LDF * Prior Cumulative
Variance = Dispersion * LDF * (LDF - 1) * Prior Cumulative

Advantages:
- It doesn’t necessarily break down if there are some negative incremental values (only requires the column sums to be positive)
- It gives the same reserve estimate and has the same form as the chain ladder method

Disadvantage:
- Cannot handle overall negative development in a column (a negative column sum implies LDF < 1, which makes the variance negative)
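A minimal numeric sketch of the moment formulas above, with assumed values for the dispersion, the development factor, and the prior cumulative losses:

```python
# Hypothetical inputs (assumed for illustration, not from the source)
phi, ldf, prior_cum = 1.2, 1.25, 1000.0

# Incremental form
ev_inc  = (ldf - 1) * prior_cum                # expected emergence: 250
var_inc = phi * ldf * (ldf - 1) * prior_cum    # variance: 375

# Cumulative form: mean shifts by the prior cumulative, variance is unchanged
ev_cum  = ldf * prior_cum                      # expected cumulative: 1250
var_cum = phi * ldf * (ldf - 1) * prior_cum    # variance: 375
```

Note that the variance is positive only when LDF > 1, which is why a negative column sum breaks the model.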

5
Q

Normal Approximation to the Negative Binomial Model

A

Incremental:
EV = (LDF - 1) * Prior Cumulative
Variance = Dispersion_j * Prior Cumulative

Cumulative:
EV = LDF * Prior Cumulative
Variance = Dispersion_j * Prior Cumulative

Advantages:
- Can handle negative incremental values

Disadvantage:
- Additional parameters must be estimated in order to calculate the variance
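The formulas above can be illustrated with assumed values; choosing an LDF below 1 shows why this model accommodates negative development while the ODNB variance does not:

```python
# Hypothetical column with downward development (LDF < 1); values assumed
phi_j, ldf, prior_cum = 80.0, 0.95, 1000.0

# Incremental form: negative expected emergence, but a valid variance
ev_inc  = (ldf - 1) * prior_cum    # about -50
var_inc = phi_j * prior_cum        # 80000; stays positive even when LDF < 1

# Cumulative form
ev_cum  = ldf * prior_cum          # about 950
var_cum = phi_j * prior_cum        # 80000
```

The price of this flexibility is a separate dispersion parameter phi_j for each column, which must be estimated.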

6
Q

Mean Squared Error of Prediction (MSEP)

A

Process Variance + Estimation Variance

Unlike the standard error, accounts for uncertainty in the parameter estimates (estimation error)

Its square root is the RMSEP (root mean squared error of prediction)
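A quick numeric sketch with assumed variance components:

```python
import math

# Hypothetical variance components (assumed values)
process_var    = 900.0   # variability of the losses themselves
estimation_var = 400.0   # uncertainty in the fitted parameters

msep  = process_var + estimation_var   # 1300
rmsep = math.sqrt(msep)                # about 36.06
```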

7
Q

Case 1: Development Factor is Changed in Some Rows Due to External Information

A

Assume ODNB model with LDF for each AY and development period combination

For a 10 x 10 triangle suppose that there is information that implies the second development factor (from column 2 to column 3) should be 1.5 for rows 8, 9, and 10

Specify the LDFs:
- LDF_i,j = LDF_j for i = 1, …, 10 and all j ≠ 3 (i.e., j = 2 and j = 4, …, 10)
- LDF_i,3 = LDF_3 for i = 1, …, 7
- LDF_8,3 = LDF_9,3 = LDF_10,3 (a separate parameter shared by the last three rows)

The means and variances of the prior distributions of the development factors reflect the expert opinion:
- LDF_8,3 has mean 1.5 and variance 𝑊, where 𝑊 depends on the strength of the opinion
- LDF_j have prior distributions with large variances (vague priors)
- If 𝑊 is large, the posterior mean of the development factor will be pulled closer to the CL development factor and the reserve will closely resemble the CL reserve
- If 𝑊 is small, the posterior mean of the development factor will be pulled closer to the prior mean and the reserve will move away from the CL reserve
- The remaining development factors will have posterior means equal to those implied by the CL model due to their vague prior distributions with large variances - the posterior distribution will be completely driven by the data
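Verrall carries this out inside the Bayesian ODNB model; as an illustrative stand-in only, a normal-normal conjugate update shows the same direction of pull between the prior mean and the data-implied factor (all numbers assumed):

```python
def posterior_mean(prior_mean, prior_var, data_mean, data_var):
    # Normal-normal conjugate update: precision-weighted average
    z = prior_var / (prior_var + data_var)   # weight given to the data
    return z * data_mean + (1 - z) * prior_mean

cl_factor = 1.8     # hypothetical development factor implied by the data
data_var  = 0.01    # hypothetical sampling variance of that estimate

# Large W (vague prior): posterior is pulled toward the CL factor
large_w = posterior_mean(1.5, 100.0, cl_factor, data_var)

# Small W (strong opinion): posterior stays near the prior mean of 1.5
small_w = posterior_mean(1.5, 0.0001, cl_factor, data_var)
```

With a vague prior the data dominate and the reserve resembles the CL reserve; with a tight prior the expert opinion dominates.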

8
Q

Case 2: Development Factors are Based on a Five-Year Weighted Average

A

All of the development factors will be given vague prior distributions so that the posterior distributions will be informed by the data

Specify the LDFs:
- LDF_i,j = LDF_j for the most recent n calendar year diagonals (here n = 5)
- LDF_i,j = LDF_j* for all diagonals prior to the latest n calendar year diagonals

Specify prior distributions with large variances for LDF_j and LDF_j* to ensure that they are estimated from the data

9
Q

Bayesian Model for the BF Method

A

Assume the data is described by the ODP model

Since the BF method assumes that there is expert opinion about the level of each row, we must specify a prior distribution for the row parameter x_i

x_i ~ Gamma(alpha_i, beta_i)

E(x_i) = alpha_i / beta_i = a priori estimate of ultimate loss in AY i
Var(x_i) = alpha_i / beta_i^2 = a priori estimate of ultimate loss / beta_i

beta_i increases with our certainty in the a priori ultimate loss estimate

E(Inc Loss) = Z_i,j * CL Estimate + (1 - Z_i,j) * BF Estimate

BF Estimate = (LDF - 1) * A Priori Ult Loss / CDF

Z_i,j = Cumulative % Emerged_j-1 / (beta_i * dispersion + Cumulative % Emerged_j-1)
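A numeric sketch of the credibility blend, with all inputs assumed for illustration:

```python
# All inputs are assumed values, not from the source
phi      = 1.1      # dispersion parameter
beta_i   = 2.0      # prior certainty for AY i (larger = more confidence)
p_prev   = 0.60     # cumulative % of losses emerged through period j-1
ldf      = 1.25     # development factor for period j
cum_loss = 600.0    # cumulative losses to date for AY i
apriori  = 1100.0   # a priori estimate of ultimate losses for AY i

# Credibility given to the chain ladder estimate
z = p_prev / (beta_i * phi + p_prev)

cl_est = (ldf - 1) * cum_loss           # CL expected incremental
cdf    = 1 / p_prev                     # CDF through period j-1
bf_est = (ldf - 1) * apriori / cdf      # BF expected incremental

e_inc = z * cl_est + (1 - z) * bf_est   # credibility-weighted blend
```

As beta_i grows, z shrinks and the expectation moves toward the BF estimate; as beta_i shrinks, z grows and it moves toward the CL estimate.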

10
Q

Impact of Beta on the Variance of the Prior Distribution

A
  • If we choose prior distributions with large variances (small betas), we have low confidence (no prior knowledge) in our prior mean and the result will be close to the CL method
  • If we choose prior distributions with small variances (large betas), we have high confidence (prior knowledge) in our prior mean and the result will be close to the BF method
11
Q

Fully Stochastic Bayesian Model for the BF Model

A

Define improper (vague) prior distributions for the column parameters and estimate them first, before specifying prior distributions for the row parameters and estimating those. This introduces variability in the development factors, as opposed to treating the CL LDFs as fixed.

ODNB model with E(Inc Loss) = (gamma_i - 1) * SUM(Prior Inc Loss in Column)

Gamma_i = 1 + (BF reserve_i / Inc Losses for Prior AYs in Future Dev Periods_i)

Assume gamma_1 = 1

The flexibility of the fully stochastic model means we can replicate the chain ladder method, the BF method or a weighting between the two methods by adjusting the variance of the prior distributions of the row parameters

12
Q

Fully Stochastic Bayesian Model for the BF Model Algorithm

A

Iterative process that uses the estimated future incremental losses in the calculation of the next gammas

  1. Calculate gamma_2 and the corresponding incremental loss in AY 2
    - gamma_2 = 1 + BF Reserve_2 / Inc Losses for Prior AYs in Future Dev Periods_2
    - E(Inc Loss) = (gamma_2 - 1) * SUM(Prior Inc Loss in Column)
  2. Calculate gamma_3 and the corresponding incremental losses in AY 3
  3. Repeat for all incomplete AYs

If we sum up the expected future incremental losses for each AY, we obtain the BF reserves
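The iterative steps above can be sketched as follows (the triangle and BF reserves are assumed values); each AY's gamma uses the earlier AYs' incrementals, with expected values standing in for unobserved cells:

```python
# Hypothetical 3x3 incremental triangle (rows = AYs, columns = dev periods)
inc = [
    [100.0, 60.0, 20.0],   # AY 1: complete
    [110.0, 70.0],         # AY 2: dev period 3 unobserved
    [120.0],               # AY 3: dev periods 2 and 3 unobserved
]
bf_reserve = [0.0, 25.0, 95.0]   # assumed BF reserves per AY

n = len(inc)
est = [row[:] for row in inc]    # triangle filled in as we iterate
for i in range(1, n):
    future = range(len(inc[i]), n)
    # Incremental losses of prior AYs in this AY's future dev periods
    # (actual where observed, expected where previously estimated)
    denom = sum(est[k][j] for j in future for k in range(i))
    gamma = 1 + bf_reserve[i] / denom
    for j in future:
        col_sum = sum(est[k][j] for k in range(i))
        est[i].append((gamma - 1) * col_sum)
```

By construction, summing each open AY's estimated future incrementals reproduces its BF reserve (25 for AY 2 and 95 for AY 3 here).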

13
Q

Examples of when to incorporate expert knowledge or opinion

A
  • Change in payment pattern due to a change in company policy
  • Change in benefits due to new laws, requiring adjustment to LDFs
14
Q

Areas where expert knowledge might be used

A
  • Selecting LDFs for CL method (overriding LDFs in an AY or n-year average)
  • Selecting expected loss for the BF method