Couret & Venter Flashcards
Describe Couret & Venter’s main insight
Injury type correlations can be used to better predict the frequency of large claims; fatalities and permanent disabilities are the costliest injury types and drive large claims.
Losses above a high loss limit are difficult to estimate because excess losses are driven by a small # of very large claims, so a small error in the estimated frequency of large claims can have a significant impact on XS loss estimates.
C&V improved on NCCI's 7 hazard group segmentation by observing that, since the physical circumstances producing the serious injury types are similar, the claim frequencies of these injury types are correlated.
They then relied on those correlations to build a model that better estimates the frequency of the more serious injury types (which drive XS losses) using the less severe injury types, for which more data exists.
Describe the data used in analysis
7 policy years of undeveloped and untrended WC claim counts at the countrywide level, by class and injury type.
Discarded the most recent year since it was too immature and split the remaining 6 years into a modeling set and a holdout set.
On what basis did C&V split the data between modeling and holdout datasets?
Chose to split the data into even (modeling) and odd (holdout) years to help neutralize any differences in trend and development between the datasets.
Also tried splitting the data into the 4 oldest years for modeling and the latest 2 for holdout; this approach gave similar results.
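A minimal sketch of the even/odd policy year split, assuming a hypothetical pandas DataFrame of claim counts (column names and years are illustrative, not from the paper):

```python
# Hypothetical even/odd policy year split into modeling and holdout sets.
# Column names and years are illustrative only.
import pandas as pd

claims = pd.DataFrame({
    "policy_year": [1993, 1994, 1995, 1996, 1997, 1998],
    "class_code":  [8810] * 6,
    "tt_count":    [120, 130, 125, 140, 138, 150],
})

modeling = claims[claims["policy_year"] % 2 == 0]  # even years -> modeling set
holdout  = claims[claims["policy_year"] % 2 == 1]  # odd years  -> holdout set
```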
Briefly explain the relationship between injury type, frequency and severity. Any exception?
As the seriousness of the injury type increases, claim frequency decreases and claim severity increases.
One exception: the severity of PT claims is usually higher than that of F claims, even though F is the more serious injury type.
Explain why less-severe injury types are predictive of more-severe injuries
Serious injury types are correlated, so a class with many Major claims is likely to have higher-than-average PT and F frequencies as well.
Describe the multi-dimensional credibility approach
C&V constructed multivariate credibility formulas to estimate the true population mean injury type count ratios for each class, based on the various injury type count ratios for the class and its HG from the modeling data.
vi = Vh + b(Vi-Vh) + c(Wi-Wh) + d(Xi-Xh) + e(Yi-Yh)
The procedure seeks credibility factors b, c, d, e that vary by injury type and class i.
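A minimal numerical sketch of the formula above, with made-up ratios and credibility factors (in the paper the factors come from the multivariate credibility procedure; every value here is purely illustrative):

```python
# Made-up class i and HG injury type count ratios (e.g., relative to TT counts).
V_i, W_i, X_i, Y_i = 0.010, 0.015, 0.120, 0.300   # class i ratios (modeling data)
V_h, W_h, X_h, Y_h = 0.008, 0.012, 0.100, 0.280   # hazard group ratios

# Hypothetical credibility factors for this injury type and class.
b, c, d, e = 0.30, 0.10, 0.05, 0.02

# Multi-dimensional credibility estimate of the class ratio for injury type V.
v_i = V_h + b * (V_i - V_h) + c * (W_i - W_h) + d * (X_i - X_h) + e * (Y_i - Y_h)
print(round(v_i, 5))  # 0.0103 -- a blend of class and HG information
```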
How does the multi-dimensional approach tie in with the one-dimensional approach?
If injury types were uncorrelated, the credibilities given to the other injury types would be 0 and these formulas would become:
vi = bVi + (1-b)Vh
(Similar to Robertson approach)
Describe why C&V decided to use multivariate version of B-S credibility
BS credibility minimizes LSE in estimates
Minimizing squared error might not be appropriate for a heavy-tailed distribution such as WC losses, since squared errors can get quite large in the tail; but since C&V focus on claim frequency (which is not heavy-tailed), the use of squared error produces reasonable results.
Multi-dimensional credibility takes advantage of extra claim frequency info for a class instead of simply relying on HG avg. This results in more accurate predictions of claim frequencies for a class.
Briefly describe how C&V tested the result of their analysis
Assumed the raw injury type count ratios in the holdout sample are the true values and tried to predict them as closely as possible using 3 approaches:
1. Raw class injury type ratios (Vi)
2. HG injury type ratios (Vh)
3. Injury type ratios resulting from credibility procedure (vi,est)
Describe the 2 tests they used
- Sum of Squared Errors (SSE) Test
Sum of squared differences between the injury type ratios from each of the 3 methods and the holdout sample ratios:
SSE(Raw) = Sum of (Vi - Vi,holdout)^2
SSE(HG) = Sum of (Vh - Vi,holdout)^2
SSE(Cred) = Sum of (vi,est - Vi,holdout)^2
Lowest of the 3 is best (see the SSE sketch after the Quintiles Test below)
- Quintiles Test
a. Sort classes in both modeling and holdout datasets in increasing order based on injury type relativities produced by cred procedure
b. Group classes into 5 quintiles based on sorted relativities
c. Calculate Vquin and Vquin,holdout for each quintile (using TT counts as weights)
SSE(Raw) = Sum of (Vquin/Vh - Vquin,holdout/Vh,holdout)^2
SSE(HG) = Sum of (1 - Vquin,holdout/Vh,holdout)^2
SSE(Cred) = Sum of (vquin,est/vh,est - Vquin,holdout/Vh,holdout)^2
Lowest is best
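A minimal sketch of the SSE test, using made-up injury type ratios for a handful of classes (arrays are indexed by class; all values are illustrative only):

```python
import numpy as np

# Made-up injury type ratios for 4 classes.
V_raw     = np.array([0.012, 0.008, 0.020, 0.005])  # raw class ratios, modeling data
V_hg      = np.array([0.010, 0.010, 0.015, 0.006])  # HG ratio assigned to each class
v_cred    = np.array([0.011, 0.009, 0.017, 0.006])  # credibility-procedure estimates
V_holdout = np.array([0.011, 0.010, 0.018, 0.006])  # "true" ratios from holdout data

sse_raw  = np.sum((V_raw  - V_holdout) ** 2)
sse_hg   = np.sum((V_hg   - V_holdout) ** 2)
sse_cred = np.sum((v_cred - V_holdout) ** 2)
# The method with the lowest SSE is preferred.
```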
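A minimal sketch of the quintiles test within a single HG, using simulated data; it assumes the credibility relativities (vi,est/vh,est) have already been computed, and all values are illustrative only:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 50  # classes within one HG

df = pd.DataFrame({
    "cred_rel":      rng.uniform(0.5, 1.5, n),    # vi,est / vh,est from the cred procedure
    "tt_model":      rng.integers(50, 500, n),    # TT counts, modeling data
    "tt_holdout":    rng.integers(50, 500, n),    # TT counts, holdout data
    "ratio_model":   rng.uniform(0.005, 0.02, n), # raw injury type ratio, modeling data
    "ratio_holdout": rng.uniform(0.005, 0.02, n), # raw injury type ratio, holdout data
})

# a/b. Sort classes by the credibility relativity and group into 5 quintiles.
df = df.sort_values("cred_rel")
df["quintile"] = pd.qcut(df["cred_rel"], 5, labels=False)

# c. TT-count-weighted quintile ratios in each dataset.
def wavg(g, x, w):
    return np.average(g[x], weights=g[w])

quin_model   = df.groupby("quintile").apply(wavg, "ratio_model", "tt_model")
quin_holdout = df.groupby("quintile").apply(wavg, "ratio_holdout", "tt_holdout")
quin_cred    = df.groupby("quintile").apply(wavg, "cred_rel", "tt_model")  # ~ vquin,est / vh,est

# HG-level ratios used to normalize each dataset.
hg_model   = np.average(df["ratio_model"], weights=df["tt_model"])
hg_holdout = np.average(df["ratio_holdout"], weights=df["tt_holdout"])

rel_holdout = quin_holdout / hg_holdout
sse_raw  = np.sum((quin_model / hg_model - rel_holdout) ** 2)
sse_hg   = np.sum((1.0 - rel_holdout) ** 2)
sse_cred = np.sum((quin_cred - rel_holdout) ** 2)
# The method with the lowest SSE is preferred.
```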
Which procedure was identified best under SSE Test
Credibility procedure
However, it does not show much of an improvement over HG.
2 explanations:
1. Estimators derived from the even-year data are designed to fit that data, not the odd-year holdout data
2. Class data by year is volatile, so the holdout ratios are themselves noisy
C&V claimed the true improvement was masked and made 2 adjustments to the data:
1. Group classes within each HG into quintiles (eliminates class-level volatility)
2. Each value in the calculation is normalized at the HG level to eliminate differences between the modeling and holdout datasets
Which procedure was preferred under Quintiles Test
Credibility procedure
Showed a substantial reduction in SSE (this does not mean the procedure is better for class-level estimation, only that the quintile injury type ratios predict better than HG-level ratios).
The procedure did not show improvement for HG A for several injury types. C&V claimed this is because the classes in A are very homogeneous, so injury type ratios are not expected to vary much within the HG.
State 2 advantages of quintiles test over SSE
- Grouping classes into quintiles help reduce volatility in data
- Relative incidence ratios are impacted by unknown covariates whose levels vary between odd and even years. Normalizing each dataset by HG makes even and odd years more directly comparable.
Briefly explain conclusion from C&V paper
Individual class experience contains info relevant to the future relative frequency of large claims.
A correlated credibility approach using the relationships among injury type frequencies within each class can utilize that info.
List 3 recent innovations in XS rating
- Use more HG to get more homogeneity in loss potential
- Look at possible differences in claim costs within an injury type across classes/HG
- Look for better ways to combine data from different state systems.