Verrall Flashcards

1
Q

stochastic models for CL

A
  1. Mack
  2. over-dispersed Poisson (ODP)
  3. over-dispersed negative binomial (ODNB)
  4. normal approximation to the negative binomial
2
Q

MACK

A

-applies only to cumulative losses

-compared to Mack, a Bayesian model allows the full predictive distribution to be calculated easily, along with the prediction error

3
Q

ODP

A
  • incremental losses
  • since negative incremental values are possible with reported data, it is preferable to use paid losses or claim counts
  • it is not immediately obvious that the model reproduces CL estimates
4
Q

ODNB

A
  • same form for incremental and cumulative losses
  • if a link ratio is less than 1 (i.e., if a column sum of incremental losses is negative), the model produces a negative variance
  • the expected value of each incremental cell is equivalent to the CL estimate, i.e., the form of the mean is the same as CL
5
Q

normal approx to NB

A
  • allows for negative incremental claims
  • requires more parameters to be estimated (for the variances)
6
Q

2 areas where expert knowledge is applied

A
  • BF method (row parameters/AY ultimates)
  • Insertion of prior knowledge about individual DFs in CL (unlike Bootstrapping)
7
Q

Bayesian models have 2 important properties

A
  1. Can incorporate expert knowledge
  2. Can be easily implemented
8
Q

estimate for outstanding losses: CL

A

reserve = cum loss to date * (DF to ultimate - 1)
9
Q

estimate for outstanding losses: BF

A

reserve = M * (1 - %rptd to date), i.e., the a priori ultimate times the expected unreported proportion
10
Q

prediction variance

A

process variance + estimation variance

11
Q

prediction error will be [] if less confident in expert opinion

A

higher

12
Q

When comparing prediction errors

A

it’s best to think of the prediction error as a percentage of the prediction, since the reserve estimate itself may vary greatly from model to model

13
Q

difficulty in calculating the prediction error highlights a few advantages of Bayesian methods

A
  • the full predictive distribution can be found using simulation methods
  • the RMSEP can be obtained directly by calculating the standard deviation of the simulated predictive distribution
14
Q

2 cases of intervention in estimation of DFs for CL

A
  • DF changed in some rows due to external information
  • DFs set equal to a 5-year volume-weighted average rather than using all available data in the triangle
15
Q

Incorporating Expert opinion about DFs

A
  • the means and variances of the prior distributions of the DFs reflect the expert opinion
  • lambda has a prior mean and a variance W
  • the mean is the expert's opinion
  • W depends on the strength of that opinion
16
Q

if W is large

A

DF will be pulled closer to CL DF and reserve will closely resemble CL reserve

17
Q

if W is small

A

DF will be pulled closer to prior mean and reserve will move away from CL reserve

18
Q

using BF

A
  • BF assumes expert opinion about the level of each row (the xi from the ODP); need to specify a prior distribution for each xi
  • uses a gamma prior

E[xi] = alpha/beta = M

Var(xi) = alpha/beta^2 = M/beta

-for given choice of M, variance can be altered by changing beta

19
Q

smaller beta implies

A

we are more unsure about M (since Var(xi) = M/beta increases as beta decreases)

20
Q

Bayesian Model for BF (BAYESIAN MEAN RESERVE) -> E[Cij]

A

E[Cij] = Z(i,j) * [CL incremental estimate] + (1 - Z(i,j)) * [BF incremental estimate]

where the CL estimate is D(i,j-1) * (lambda(j) - 1) and the BF estimate is based on the prior mean M
21
Q

formula for Z

A

Z = %paid / (%paid + psi*beta), where psi is the dispersion (scale) parameter
22
Q

beta can control Z

A

so a large beta (i.e., more confidence in the prior) gives more weight to BF

-the mean of incremental claims is a credibility formula where Z controls the trade-off between the prior mean (BF) and the data (CL)
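As a quick illustration of this trade-off, the sketch below (with purely illustrative values for %reported, the dispersion psi, and beta — none come from the source) shows how a larger beta shrinks Z and shifts weight toward BF:

```python
# Illustrative sketch (assumed numbers): Z = %rptd / (%rptd + psi * beta).
def credibility_z(pct_rptd, psi, beta):
    """Weight on the chain-ladder (data) estimate; 1 - Z goes to the BF prior."""
    return pct_rptd / (pct_rptd + psi * beta)

p, psi = 0.60, 1.0                               # assumed example values
z_small_beta = credibility_z(p, psi, beta=0.1)   # weak prior -> Z near 1 (CL)
z_large_beta = credibility_z(p, psi, beta=10.0)  # strong prior -> Z near 0 (BF)
```

With these numbers Z falls from roughly 0.86 to roughly 0.06 as beta grows, which is the "more confidence in the prior, more weight to BF" behavior described above.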

23
Q

to modify Bayesian framework ->

A

insert row parameter for each AY and specify low variances

24
Q

Estimating column parameters (BF RESERVE)

A
  • To account for all variability, we also need to estimate the column parameters (yj )
  • use estimates from traditional CL
  • or define prior dist for column parameters and estimate column parameters first
  • Once we define improper prior distributions (i.e. large variances) for the column parameters and estimate them, we obtain an over-dispersed negative binomial with mean

E[Cij] = (gamma(i) - 1) * sum of C(m,j) over earlier rows m < i

25
Q

fully specify Bayesian model when using 5yr volume weighted average

A

E[Inc Loss or Cum Loss] = …

Var(Inc Loss or Cum Loss) = …

lambda(i,j) = lambda(j) for the most recent 5 CY diagonals

lambda(i,j) = lambda*(j) for all diagonals prior to the latest 5

26
Q

if estimating E[Cij] for multiple periods ie E[Cij] and E[Cij+1] using bayesian credibility

A

for CL: D(i,j-1) for later periods comes from the latest diagonal plus the incrementals already estimated by CL

for BF: nothing special is needed, since each estimate depends only on the prior mean M

27
Q

estimating incremental losses with bayesian credibility -> another way to look at it/calculate

A

calc E[Cij] for CL and BF separately

CL: E[Cij] = cum loss to date * (DF - 1)

BF: E[Cij] = M * (%rptd@t+1 - %rptd@t)

then calc Z and credibility-weight the two estimates
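The recipe above can be sketched numerically; this is a hypothetical example in which the loss amounts, LDF, reporting percentages, and Z are all assumed for illustration:

```python
def cl_increm(cum_to_date, ldf):
    """Chain-ladder incremental estimate: cum loss to date * (DF - 1)."""
    return cum_to_date * (ldf - 1.0)

def bf_increm(m, pct_rptd_next, pct_rptd_now):
    """BF incremental estimate: a priori ultimate * expected % emerging."""
    return m * (pct_rptd_next - pct_rptd_now)

# Assumed example values: 600 reported to date, LDF 1.10, a priori
# ultimate 1000, expected reporting moving from 60% to 66%, Z = 0.75.
cl = cl_increm(600.0, 1.10)
bf = bf_increm(1000.0, 0.66, 0.60)
z = 0.75
est = z * cl + (1.0 - z) * bf   # credibility-weighted incremental estimate
```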

28
Q

if incremental losses are assumed to follow ODNB (for Bayesian using BF) then E[Cij]

A

E[Cij] = (gamma(i) - 1) * sum of C(m,j) over earlier rows m < i

29
Q

how to calculate gamma(i)

A

use E[Cij] = (gamma(i) - 1) * sum of C(m,j) over earlier rows m < i, together with the given column parameters lambda

ie calculate expected incremental loss using CL (fill out remainder of triangle)

then solve for gamma by using those and the E[Cij] formula above

30
Q

how to calculate expected ultimate loss when given gammas and the incremental losses follow ODNB

A

use E[Cij]

**need to start with the oldest AY first and work down, since the process is iterative: each newer AY's estimate uses the (estimated) values of the rows above it
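A minimal sketch of this iterative fill, using a small incremental triangle and gamma values that are assumed purely for illustration:

```python
# Fill future incremental cells using E[Cij] = (gamma_i - 1) * sum of C_mj
# over earlier rows m < i, working from the oldest AY down so each newer
# row can use the estimates already made for the rows above it.
n = 3
tri = [                      # incremental losses; None marks future cells
    [100.0, 50.0, 25.0],
    [110.0, 55.0, None],
    [120.0, None, None],
]
gamma = [None, 1.5, 1.3]     # assumed row factors for AY 2 and AY 3

for i in range(1, n):        # oldest incomplete AY first
    for j in range(n):
        if tri[i][j] is None:
            col_sum = sum(tri[m][j] for m in range(i))  # rows above are filled
            tri[i][j] = (gamma[i] - 1.0) * col_sum

ultimates = [sum(row) for row in tri]
```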

31
Q

estimating reserve for fully stochastic BF and the benefit compared to Bayesian BF based on ODP

A

this is the same as the incremental losses following an ODNB with gamma values

*fully stochastic model in both the row and column parameters

whereas Bayesian BF based on the ODP uses static column factors (LDFs) calculated from the loss triangle

32
Q

specify the prior distributions for the DFs lambda(i,j) in a Bayesian model that reproduces CL except where prior knowledge applies

A

need to set a prior distribution for each AY and development period

for DFs with no expert knowledge, set the prior distribution to have mean = the volume-weighted LDF and a large variance

for DFs with expert knowledge, set the prior distribution to have mean = the expert's value and a small variance

*when setting variances, anything relatively large/small compared to the mean should be fine

33
Q

specify a prior distribution for row parameter

A

x(i)~gamma(alpha, beta)

know M=alpha/beta from given info

pick small variance -> do this by using small CoV

variance =(CoV*mean)^2

then solve for alpha and beta based on variance and M
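The steps above can be sketched as follows (M and the CoV are assumed example values, not from the source):

```python
def gamma_prior_params(m, cov):
    """Back out alpha and beta for xi ~ gamma(alpha, beta) from the a priori
    mean M = alpha/beta and a chosen small CoV, using
    variance = (CoV * mean)^2 and Var(xi) = M / beta."""
    var = (cov * m) ** 2
    beta = m / var          # Var = M / beta  =>  beta = M / Var
    alpha = m * beta        # M = alpha / beta  =>  alpha = M * beta
    return alpha, beta

alpha, beta = gamma_prior_params(1000.0, 0.05)  # assumed M = 1000, CoV = 5%
```

A small CoV gives a small variance relative to M, i.e., a strong prior (consistent with card 19: a smaller beta would instead mean a larger variance and more uncertainty about M).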

34
Q

believe AY should be modeled between CL and BF, how to do this in stochastic framework

A

use xi ~ gamma(alpha, beta), equivalently specified by its mean and variance

select a larger variance for the a priori estimate

set the variance (which modifies beta) to reflect the credibility weighting between CL and BF

larger variance -> closer to the CL estimate

smaller variance -> closer to the BF estimate

credibility weight on CL: Z = %paid / (%paid + psi*beta)

35
Q

describe how bayesian model will behave and how prior distribution will impact model

A

the lambdas have prior distributions with means equal to the volume-weighted LDFs and large variances -> the large variances mean the model will reproduce the CL results from these development factors

for the AY development periods (), the prior distribution is set to a mean of 1.2 and a small variance -> this pulls the posterior distribution toward the 1.2 factor instead of what the data alone indicate