Clark Flashcards
what are two objectives in creating a formal model of loss reserving?
(Clark)
- describe loss emergence in simple mathematical terms as a guide to selecting amounts for carried reserves
- provide a means of estimating the range of possible outcomes around the expected reserve
what are two key elements of a statistical loss reserving model?
(Clark)
- expected amt of loss to emerge in some time period
- distribution of actual emergence around the expected value
a model will estimate the expected amt of loss to emerge based on what two quantities?
(Clark)
- estimate of ult loss by year
- estimate of pattern of emergence
what is assumed about the loss emergence pattern when using the loglogistic or weibull curves?
(Clark)
assume a strictly increasing emergence pattern (the curves cannot capture expected negative development, e.g. from significant salvage recoveries)
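A minimal sketch of the two curve forms (function names and the sample parameters are illustrative, not from the paper); x is the age in months from the average accident date:

```python
from math import exp

# Clark's two growth curves, G(x) = expected % of ultimate emerged by age x.
# Both are strictly increasing in x for positive omega and theta.
def loglogistic(x, omega, theta):
    return x**omega / (x**omega + theta**omega)

def weibull(x, omega, theta):
    return 1.0 - exp(-(x / theta)**omega)

# e.g. expected % emerged at 24 months from the average accident date
print(loglogistic(24, omega=1.5, theta=48))  # ~0.26
print(weibull(24, omega=1.5, theta=48))      # ~0.30
```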
what are three advantages to using parameterized curves to describe the emergence pattern?
(Clark)
- estimation is simple (only two parameters)
- can use data from an unevenly spaced triangle
- final pattern is smooth and doesn’t follow random movements in historical age-to-age factors
what does the LDF method assume about loss amt in each AY?
(Clark)
assumes loss amt in each AY is independent from all other years
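A hypothetical sketch of the resulting expected emergence under the LDF method (names are mine): each AY gets its own ultimate parameter, and the expected loss emerging between ages x and y is ULT_i * [G(y) - G(x)].

```python
def expected_increment_ldf(ult_i, growth, age_from, age_to, omega, theta):
    # ULT_i is estimated separately for each accident year (no tie between years)
    return ult_i * (growth(age_to, omega, theta) - growth(age_from, omega, theta))

# e.g. with the loglogistic curve sketched above:
# expected_increment_ldf(5000, loglogistic, 12, 24, omega=1.5, theta=48)
```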
what does the CC method assume about the loss amt in each AY?
(Clark)
assumes there is a known relationship between expected losses across AY, where the relationship is identified by an exposure base (prem @ CRL, sales, payroll, etc.)
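For contrast, a sketch of the Cape Cod counterpart (again, my own names): a single ELR applied to each year's exposure replaces the per-year ultimate parameters, so expected emergence is Premium_i * ELR * [G(y) - G(x)].

```python
def expected_increment_cc(premium_i, elr, growth, age_from, age_to, omega, theta):
    # one ELR is shared across accident years; premium_i is the exposure for AY i
    return premium_i * elr * (growth(age_to, omega, theta) - growth(age_from, omega, theta))
```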
which is preferred, the CC or LDF method?
(Clark)
the CC method - data is summarized into a loss triangle with relatively few data points, so the additional information from the exposure base improves the estimate
what is a drawback of the LDF method?
(Clark)
requires estimating a number of parameters - tends to be overparameterized when few data points exist
how do the parameter variances of the CC and LDF methods compare?
(Clark)
CC has smaller param variance (additional info from exposure base + fewer params)
what is process variance? (wrt variance of actual loss emergence)
(Clark)
the “random” component of the variance - the variation of actual loss emergence around its expected value
what is parameter variance? (wrt variance of actual loss emergence)
(Clark)
the uncertainty in the estimator, aka estimation error
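The two pieces add to give the total variance of the reserve; a small illustration with made-up numbers (sigma^2 and the parameter variance would come from the fitted model):

```python
sigma2 = 400.0                    # variance/mean scale parameter (assumed already estimated)
expected_reserve = 10_000.0
process_var = sigma2 * expected_reserve   # "random" piece under the ODP assumption
parameter_var = 2_500_000.0               # estimation-error piece, from the covariance matrix
total_std = (process_var + parameter_var) ** 0.5
print(total_std)                  # ~2,550
```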
what are key advantages of using the over-dispersed Poisson distribution to model process variance?
(Clark)
- inclusion of scaling factors allows us to match first and second moments of any distribution -> high flexibility
- MLE produces the LDF and CC estimates of ult losses, so results can be presented in a familiar format
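A minimal simulation of the ODP idea (not from the paper): take an incremental loss c as sigma^2 times a Poisson count, so E[c] = mu and Var(c) = sigma^2 * mu.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma2 = 1_000.0, 400.0
# scaled Poisson: mean = sigma2 * (mu/sigma2) = mu, variance = sigma2^2 * (mu/sigma2) = sigma2 * mu
c = sigma2 * rng.poisson(mu / sigma2, size=100_000)
print(c.mean(), c.var())   # close to 1,000 and 400,000
```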
what is an advantage of using the MLE function?
(Clark)
works in the presence of negative or zero incremental losses
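A sketch of the loglikelihood being maximized, with constant terms dropped (function name is mine): the actual increments c enter only linearly, so zero or negative values cause no problem as long as each expected increment mu is positive.

```python
from math import log

def loglikelihood(actual, expected):
    # sum over all cells of the triangle: c * ln(mu) - mu
    return sum(c * log(mu) - mu for c, mu in zip(actual, expected))
```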
what are three key assumptions of the Clark model?
(Clark)
1 - incremental losses are IID
2 - variance/mean scale parameter sigma^2 is fixed and known
3 - variance estimates are based on an approximation to the Rao-Cramer lower bound
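Regarding assumption 2, a sketch of the estimator used for the scale parameter (function name mine): sigma^2 is computed once from the fitted residuals, with n data points and p parameters, and then treated as fixed.

```python
def estimate_sigma2(actual, expected, n_params):
    # sigma^2 = 1/(n - p) * sum of (c - mu)^2 / mu over the n fitted cells
    n = len(actual)
    return sum((c - mu) ** 2 / mu for c, mu in zip(actual, expected)) / (n - n_params)
```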