Venter Factors Flashcards
If future loss emergence is a constant plus a percent of emergence to date, what method should be used?
A factor plus constant development method
If future loss emergence is proportional to ultimate losses rather than to losses emerged to date, what method should be used?
The Bornhuetter-Ferguson (BF) method
Testing implications of assumption 1 - significance of factor f(d)
In general, the absolute value of a factor f(d) is required to be at least twice its standard deviation for the factor to be considered significantly different from zero
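A minimal sketch of this check in Python, assuming one development age and a regression of incremental emergence on prior cumulative losses through the origin; factor_significance and the sample data are hypothetical.

```python
import numpy as np

def factor_significance(cum_d, inc_d1):
    """Estimate f(d) by regressing incremental losses emerging from age d to d+1
    on cumulative losses at age d (through the origin), then check whether
    |f(d)| is at least twice its standard deviation."""
    c = np.asarray(cum_d, dtype=float)       # cumulative losses at age d, one per AY
    q = np.asarray(inc_d1, dtype=float)      # incremental emergence from age d to d+1
    f = np.sum(c * q) / np.sum(c ** 2)       # least-squares factor through the origin
    resid = q - f * c
    s2 = np.sum(resid ** 2) / (len(c) - 1)   # residual variance, one fitted parameter
    se_f = np.sqrt(s2 / np.sum(c ** 2))      # standard deviation of the estimate
    return f, se_f, abs(f) >= 2 * se_f

# Hypothetical data: four accident years at one development age
f, se, significant = factor_significance([1000, 1200, 900, 1100], [400, 500, 350, 420])
```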
Testing implications of assumption 2 - Superiority of factor assumption to alternative emergence patterns
To compare development methods, we use the sum of squared errors (SSE) adjusted for the number of parameters used as a goodness-of-fit measure, e.g., SSE/(n - p)^2, where n is the number of observations and p is the number of parameters
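As a sketch of the comparison, assuming the adjustment SSE/(n - p)^2 (AIC/BIC-style penalties are alternatives); adjusted_sse is a hypothetical helper.

```python
def adjusted_sse(residuals, n_params):
    """Goodness-of-fit measure for comparing emergence models with different
    parameter counts: SSE divided by (n - p) squared, where n is the number
    of fitted points and p the number of parameters."""
    sse = sum(r ** 2 for r in residuals)
    n = len(residuals)
    return sse / (n - n_params) ** 2

# The model (chain ladder, BF, Cape Cod, ...) with the lower adjusted SSE fits better.
```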
Describe alternative emergence pattern: Linear with constant
E[q(w,d+1)] = f(d)c(w,d) + g(d), i.e., next-period expected emergence is a factor times the cumulative losses to date plus a constant
The constant g(d) is often significant at ages 0 or 1, especially for highly variable and slowly reporting lines such as excess reinsurance.
If the constant is significant and the factor is not, the additive chain-ladder method may be appropriate (see the sketch below).
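A minimal sketch of fitting the linear-with-constant form f(d)c(w,d) + g(d) at one development age by ordinary least squares, reusing the twice-the-standard-deviation rule; fit_linear_with_constant and its inputs are hypothetical.

```python
import numpy as np

def fit_linear_with_constant(cum_d, inc_d1):
    """Fit incremental emergence as f(d)*c(w,d) + g(d) by least squares and
    report rough significance (estimate at least twice its standard deviation)
    for both the factor and the constant."""
    c = np.asarray(cum_d, dtype=float)
    q = np.asarray(inc_d1, dtype=float)
    X = np.column_stack([c, np.ones_like(c)])      # design matrix: [factor, constant]
    beta, _, _, _ = np.linalg.lstsq(X, q, rcond=None)
    resid = q - X @ beta
    s2 = resid @ resid / (len(q) - 2)              # residual variance, two parameters
    se = np.sqrt(np.diag(s2 * np.linalg.inv(X.T @ X)))
    f_hat, g_hat = beta
    return {"factor": (f_hat, se[0], abs(f_hat) >= 2 * se[0]),
            "constant": (g_hat, se[1], abs(g_hat) >= 2 * se[1])}
```

If only the constant is significant, the additive (development-amount) form is suggested; if only the factor is, the usual multiplicative form.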
Describe alternative emergence pattern: Factor times parameter
f(d)h(w)
States that the next period’s expected emerged loss is a lag factor f(d) times the expected ultimate loss amount h(w) for an AY
This is the parameterized BF method
To outperform the chain-ladder method, the BF method must produce lower fit errors even after adjusting for its larger number of parameters (it fits both f(d) and h(w)).
To test the BF model against the chain-ladder model, we fit both to the incremental losses and compare their adjusted SSEs; the model with the lower adjusted SSE fits better (a fitting sketch follows below).
If BF is better, that would suggest that loss emergence is more accurately represented as a proportion of ultimate losses rather than a percentage of previously emerged losses
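A sketch of fitting the parameterized BF form f(d)h(w) to an incremental triangle by alternating least squares; this is one way to minimize the SSE, not necessarily Venter's exact procedure, and fit_bf_least_squares and the triangle layout are assumptions.

```python
import numpy as np

def fit_bf_least_squares(inc, n_iter=50):
    """Alternating least-squares fit of q(w,d) ~ f(d) * h(w) to an incremental
    triangle with NaN in unobserved cells. Note that f and h are only
    identified up to a common scale."""
    inc = np.asarray(inc, dtype=float)
    mask = ~np.isnan(inc)
    n_ay, n_dev = inc.shape
    f = np.ones(n_dev)
    h = np.nansum(inc, axis=1)                 # start h at each AY's observed total
    for _ in range(n_iter):
        for d in range(n_dev):                 # f(d) minimizing SSE given h
            obs = mask[:, d]
            f[d] = np.sum(inc[obs, d] * h[obs]) / np.sum(h[obs] ** 2)
        for w in range(n_ay):                  # h(w) minimizing SSE given f
            obs = mask[w, :]
            h[w] = np.sum(inc[w, obs] * f[obs]) / np.sum(f[obs] ** 2)
    sse = np.nansum((inc - np.outer(h, f)) ** 2)
    return f, h, sse
```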
Alternative emergence patterns: how can testing implications 1 & 2 be checked graphically?
By graphing the incremental losses emerging at age d+1 against the cumulative losses at age d.
A factor-only model (no constant) would show roughly a straight line through the origin with slope equal to the development factor.
A constant-only model (no factor) would show roughly a horizontal line at the height of the constant
The additive chain-ladder method and the Cape Cod method will always produce the same adjusted SSE, why is that?
For each age d, both models predict the same expected emergence amount for every accident year.
If we fit the Cape Cod model, we can define g(d) = f(d)h, where g(d) is the additive development amount for age d.
If we fit the additive model, we can define f(d)h = g(d).
As long as the parameters are fit using least squares, this equivalence always holds, so the fitted values, and hence the adjusted SSE, are identical (see the sketch below).
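A sketch of why the two fits coincide, assuming least squares on an incremental triangle with NaN in unobserved cells: the additive g(d) is the column average of observed incrementals, and any Cape Cod split with f(d)h = g(d) reproduces the same fitted values, since only the product is identified. The helper and layout are illustrative.

```python
import numpy as np

def additive_equals_cape_cod(inc):
    """Fit the additive chain-ladder model q(w,d) ~ g(d) by least squares and
    express the same fit as a Cape Cod model f(d) * h; the fitted values, and
    hence the SSE, are identical."""
    inc = np.asarray(inc, dtype=float)                  # incremental triangle, NaN if unobserved
    g = np.nanmean(inc, axis=0)                         # additive development amount per age
    h = float(np.nansum(inc)) / np.sum(~np.isnan(inc))  # any positive scale works for h
    f = g / h                                           # then f(d) * h == g(d) cell by cell
    sse = np.nansum((inc - g) ** 2)                     # same SSE under either parameterization
    return g, f, h, sse
```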
Ways to reduce the number of parameters in the BF model
- Grouping AYs based on apparent jumps in loss levels and fitting a single h parameter to each group (see the sketch after this list)
- Assume that subsequent periods all have the same expected percentage development.
- Fit a trend line through the BF ultimate loss parameters h(w)
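A sketch of the grouping idea, assuming the lag factors f(d) are held fixed and a single least-squares h is refit for each group of accident years; grouped_h and the group labels are hypothetical.

```python
import numpy as np

def grouped_h(inc, f, groups):
    """Refit the BF ultimate-loss parameters with one h per group of accident
    years (groups chosen from apparent jumps in loss levels), holding the lag
    factors f(d) fixed."""
    inc = np.asarray(inc, dtype=float)   # incremental triangle, NaN where unobserved
    f = np.asarray(f, dtype=float)       # fitted lag factors by development age
    h = np.empty(inc.shape[0])
    for label in set(groups):
        rows = [w for w, lab in enumerate(groups) if lab == label]
        num = den = 0.0
        for w in rows:
            obs = ~np.isnan(inc[w])
            num += np.sum(inc[w, obs] * f[obs])   # least-squares numerator for the group
            den += np.sum(f[obs] ** 2)            # least-squares denominator for the group
        h[rows] = num / den                       # one h shared by every AY in the group
    return h

# Example: first three AYs in one group, the rest in another
# h = grouped_h(inc, f, ["pre-jump", "pre-jump", "pre-jump", "post-jump", "post-jump"])
```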
Chain-ladder method assumption
Assumes that future emergence for an AY will be proportional to losses emerged to date
BF method assumption
It takes expected emergence in each period to be a percentage of ultimate losses
Cape Cod and additive chain-ladder method assumption
It assumes that accident years showing low losses or high losses to date will have the same expected future dollar development, because both methods assume a constant h across all accident years
Testing implication of assumption 3 - test of linearity - Residuals as a function of previous cumulative losses
A scatter plot of the raw incremental residuals (actual emergence - expected emergence) against the previous cumulative losses
The chain-ladder method assumes that the incremental losses are a linear function of the previous cumulative losses; if the residuals show curvature or strings of consecutive positive and negative values, this linearity assumption is suspect.
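A sketch of the residual computation for one development age, assuming the chain-ladder factor is fit through the origin; the helper and data layout are illustrative (scatter the output with any plotting library).

```python
import numpy as np

def linearity_residuals(cum_d, inc_d1):
    """Raw incremental residuals (actual minus expected emergence) under the
    factor model for one development age, paired with the previous cumulative
    losses for plotting. Curvature or strings of same-sign residuals as the
    cumulative losses grow suggest the linear form is inadequate."""
    c = np.asarray(cum_d, dtype=float)
    q = np.asarray(inc_d1, dtype=float)
    f = np.sum(c * q) / np.sum(c ** 2)   # least-squares factor through the origin
    return c, q - f * c                  # x-values and residuals for the scatter plot
```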
Testing implication 4: Tests of stability - residuals over time (Stability Test 1)
Plot the incremental residuals against time; residuals for a given age that cluster above or below zero in certain periods suggest instability.
Testing implication 4: Tests of stability - moving average of factors (Stability Test 2)
Look at a moving average (e.g., a five-year moving average) of a specific age-to-age factor (see the sketch below).
If the moving average shows clear shifts over time, then instability exists and we may want to use a weighted average of the factors that gives more weight to recent years.
If the moving average merely shows large fluctuations around a fixed level, instability is not indicated; rather than focusing only on recent data, a broader range of data actually gives a more stable estimate.
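A sketch of the moving-average check, assuming a cumulative triangle with NaN in unobserved cells and a five-year window; factor_moving_average and the layout are assumptions.

```python
import numpy as np

def factor_moving_average(cum, d, window=5):
    """Moving average (default five accident years) of the age d-to-d+1
    development factor. Clear level shifts in the averaged series suggest
    instability; fluctuation around a stable level does not."""
    cum = np.asarray(cum, dtype=float)
    obs = ~np.isnan(cum[:, d]) & ~np.isnan(cum[:, d + 1])   # AYs with both ages observed
    factors = cum[obs, d + 1] / cum[obs, d]                  # individual age-to-age factors
    kernel = np.ones(window) / window
    return np.convolve(factors, kernel, mode="valid")        # rolling average over `window` AYs
```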