Reserving - other techniques Flashcards

1
Q

Frequency- Severity Techniques

A

-any method that estimates claim counts and severities separately before multiplying them together to estimate ultimate claims

3 approaches

  1. Using CL on claim counts and severities separately
  2. Incorporating exposures and inflation to estimate highly leveraged years
  3. Disposal rate technique
2
Q

F-S Assumptions

A
  • claim counts and severity will continue to develop in future periods as they have in past periods
  • consistent definition of claim counts throughout experience period
  • mix of types of claims is relatively homogeneous
  • for disposal rate: no significant partial payments
3
Q

F-S Advantages

A

Advantages

  • disposal rate tech only uses paid data so not impacted by changes in case reserve adequacy
  • assumptions about inflation and expected claim disposal rates can be explicitly incorporated into methods
  • gain greater insight into claims process by understanding rate of claim reporting and settlement and avg $ value of claims separately
4
Q

F-S Disadvantages

A

Disadvantages

  • estimates are highly sensitive to assumed trend rate
  • changes in definitions of claim counts impact estimates
  • changes in claims reporting/processing impact estimates
  • methods require relatively homogeneous mix of claims
  • data may not be available ie claim counts
5
Q

Freq-Sev Tech 1

A
  • calc LDFs for claim count and severity triangles
  • use these to project counts and severities to ultimate
  • calc ultimates = ult claim counts*ult severity
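A minimal Python sketch of Technique 1, assuming volume-weighted LDFs, a tail factor of 1.0, and a hypothetical 3-AY triangle of cumulative reported counts and average severities (all figures invented for illustration):

```python
# F-S Technique 1 sketch: chain ladder on counts and severities separately,
# then multiply the ultimates AY by AY. All triangle values are hypothetical.

def chain_ladder_ultimate(triangle):
    """Project each row of a cumulative triangle to ultimate with volume-weighted LDFs."""
    n = len(triangle)
    ldfs = []
    for j in range(n - 1):
        # volume-weighted age-to-age factor across AYs with data at both ages
        num = sum(row[j + 1] for row in triangle if len(row) > j + 1)
        den = sum(row[j] for row in triangle if len(row) > j + 1)
        ldfs.append(num / den)
    ults = []
    for row in triangle:
        val = row[-1]
        for f in ldfs[len(row) - 1:]:   # apply remaining development (tail = 1.0 assumed)
            val *= f
        ults.append(val)
    return ults

counts = [[100, 150, 160], [110, 165], [120]]            # cumulative reported counts
severities = [[1000, 1100, 1150], [1050, 1160], [1100]]  # cumulative average severity
ult_counts = chain_ladder_ultimate(counts)
ult_sev = chain_ladder_ultimate(severities)
ult_claims = [c * s for c, s in zip(ult_counts, ult_sev)]
```

The same chain-ladder helper is applied to both triangles; only the final multiplication differs from an ordinary claims CL.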
6
Q

Freq-Sev Tech 2

A
  • used for highly leveraged years where severity and count CDFs are large and volatile
  • incorporate trends on older AYs to estimate highly leveraged year
  • payroll trend and claim count trend applied to frequency
  • severity trend applied to severity
  • calc Ult trended freq and Ult trended severities for each AY
  • ult counts for leveraged yr = selected ult trended freq*payroll
  • ult severity for leveraged yr = selected ult trended severity
  • use these to estimate ultimate claims for highly leveraged yr
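A sketch of the Technique 2 selections for one leveraged AY, assuming frequency is defined per $1M of payroll and that trend rates, mature-year ultimates, and payroll are all hypothetical:

```python
# F-S Technique 2 sketch: restate mature AYs at the leveraged year's cost level,
# select a trended frequency and severity, then apply them to the leveraged AY.

freq_trend = 1.02   # annual claim-counts-per-payroll trend (assumed)
sev_trend = 1.05    # annual severity trend (assumed)

# ultimate freq/severity from mature AYs, with years of trend up to the target AY
mature = [
    {"freq": 0.50, "sev": 8000, "years": 3},
    {"freq": 0.52, "sev": 8500, "years": 2},
    {"freq": 0.51, "sev": 9200, "years": 1},
]
trended_freq = [m["freq"] * freq_trend ** m["years"] for m in mature]
trended_sev = [m["sev"] * sev_trend ** m["years"] for m in mature]

sel_freq = sum(trended_freq) / len(trended_freq)   # simple-average selection
sel_sev = sum(trended_sev) / len(trended_sev)

payroll_millions = 200.0                 # leveraged AY payroll (assumed)
ult_counts = sel_freq * payroll_millions # ult counts = selected trended freq * payroll
ult_claims = ult_counts * sel_sev        # ult claims for the leveraged year
```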
7
Q

Freq-Sev Tech 3 / Disposal Rate

A
  • start with cumulative closed claim count triangle and incremental paid severity triangle
  • CL on reported claim count triangle to estimate ult counts
  • calc disposal rate triangle = cumulative closed count/ultimate claim count
  • select disposal rates (i.e. like selected LDFs)
  • use selection to project incremental closed claim counts
  • need to determine severities to use to get projected unpaid claims
  • incremental paid severity = incremental paid claims/incremental closed claim counts
  • need to calc trended increm paid severity triangle to latest level (or level of AY you’re estimating)
  • select incremental severities to use
  • unpaid claims = sum(incremental count*incremental paid severity)
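The projection step can be sketched for a single AY, assuming ultimate counts, selected disposal rates, and selected (already trended) incremental paid severities are in hand; all numbers hypothetical:

```python
# Disposal rate technique sketch: project future incremental closed counts from
# the selected DRs, multiply by selected incremental severities, and sum.

ult_counts = 200.0
sel_dr = {12: 0.40, 24: 0.70, 36: 0.90, 48: 1.00}   # selected disposal rates by age
sel_sev = {24: 5000.0, 36: 8000.0, 48: 12000.0}     # selected incremental severities
current_age = 12

ages = sorted(sel_dr)
unpaid = 0.0
for a_prev, a_next in zip(ages, ages[1:]):
    if a_prev < current_age:
        continue                         # already closed; only project the future
    # future incremental closed counts use only the selected DRs
    inc_counts = (sel_dr[a_next] - sel_dr[a_prev]) * ult_counts
    unpaid += inc_counts * sel_sev[a_next]
```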
8
Q

Incremental counts closed between y1 and y2

A

-incremental closed counts between ages y1 and y2 = (selected DR @ y2 - selected DR @ y1)*ult counts; uses only selected DRs, not actual closed counts

9
Q

Tail Severities

A

-combining data for multiple maturities can help produce more stable tail severity estimate than estimating severity for each maturity separately based on thin data

10
Q

Tail Severities:

how to calculate

A
  • need incremental closed claim count triangle and incremental paid claims triangle
  • need to trend incremental paid claims to latest level
  • calc volume weighted average trended tail severity = sum(trended incremental paid)/sum(closed claim counts)
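A sketch of the tail severity calculation, assuming a hypothetical 5% severity trend and invented incremental paid / closed count cells at maturities in the tail:

```python
# Volume-weighted trended tail severity: trend each incremental paid cell to the
# latest cost level, then divide total trended paid by total closed counts.

sev_trend = 1.05
# (incremental paid, incremental closed counts, years of trend to latest level)
tail_cells = [
    (100000.0, 10, 2),
    (82000.0, 8, 1),
    (63000.0, 6, 0),
]
trended_paid = sum(paid * sev_trend ** yrs for paid, _, yrs in tail_cells)
closed = sum(cnt for _, cnt, _ in tail_cells)
tail_severity = trended_paid / closed
```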
11
Q

Tail Severities:

Considerations for maturity age at which to begin tail

A
  • combine data at age at which results become erratic since combining data may provide more stability
  • influence on total projections of selecting particular age
  • % of claims expected to be closed beyond age -> enough claims present to provide more stable severity estimate when grouped but not too many since some should remain to provide estimates for earlier maturities when LDFs are more stable
12
Q

Case Outstanding Development Techniques

A

-uses ratios of incremental payments to prior CO and of CO to prior CO to estimate future CO and then future payments

2 approaches to using CO development tech:

  1. Estimate future incremental payments using insurer’s own triangles to obtain paid-on-case and remaining-in-case ratios
  2. Use reported CDF and paid CDF to come up with single factor to estimate total unpaid claims for AY
13
Q

C/O: Assumptions

A

Main assumptions:

  1. Development of future claims will be similar to development in prior periods
  2. CO to date provides useful info about future claims development

All same assumptions from CL also apply:

Consistent claims processing

Consistent mix of claims

Stable policy limits and deductibles

14
Q

C/O: works best

A

-since techniques use CO as starting point for estimates, they are most appropriate when CO provides sufficient info about future payments

Works best:

  • when looking at RY triangles
  • for claims-made policies since no pure IBNR
  • when looking at AY triangles but nearly all claims are reported by first column of triangle
  • 2nd approach when only current CO data is available so cannot use other methods
15
Q

C/O: Disadvantages

A
  • in most LOBs, CO does not provide sufficient information about pure IBNR
  • lack of industry benchmark data for AY applications
  • not intuitive as to what paid-on-case and remaining-in-case ratios are appropriate at each maturity including tail
  • projections can be distorted by case reserves for large losses
  • 2nd approach depends on industry CDFs which may not be appropriate for particular self-insurer
  • CDFs for 2nd approach may be highly leveraged for immature years, making estimates highly volatile
16
Q

C/O Tech 1

A
  • CO and incremental paid claims triangles
  • calc remaining in case ratio triangle by dividing adjacent columns of CO triangle (basically calc LDFs)
  • select ratios and use them to project case reserves (i.e fill CO triangle into square)
  • calc incremental paid on prior case ratios = incremental paid triangle @ j+1/CO @ j

Ex: incremental paid 12-24 / CO @ 12

-select ratios and then use them to project incremental payments

Ex: incremental paid 12-24 = 12-24 ratio*CO@12

Incremental paid 0-12 = actual incremental paid 0-12 (first column has no prior CO to project from)

  • calc unpaid claims by summing projected incremental payments
  • calc IBNR by subtracting latest CO from unpaid
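A sketch of the projection for one immature AY, assuming the remaining-in-case and incremental-paid-on-prior-case ratios have already been selected from the insurer's triangles (hypothetical values):

```python
# C/O Technique 1 sketch: roll the case reserve forward with remaining-in-case
# ratios while projecting incremental payments with paid-on-prior-case ratios.

co_12 = 500000.0                               # case outstanding at age 12
remain = {24: 0.60, 36: 0.40, 48: 0.0}         # CO @ j+1 / CO @ j
paid_on_case = {24: 0.55, 36: 0.65, 48: 1.10}  # incr paid j to j+1 / CO @ j

co = co_12
unpaid = 0.0
for age in sorted(remain):
    unpaid += paid_on_case[age] * co   # project incremental payments
    co = remain[age] * co              # project remaining case reserves
ibnr = unpaid - co_12                  # IBNR = unpaid minus current CO
```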
17
Q

C/O Tech 2

A
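The single factor can be derived from reported = paid + case together with ultimate = paid * paid CDF = reported * reported CDF; a sketch with hypothetical CDF values:

```python
# C/O Technique 2 sketch: one factor applied to current CO gives total unpaid.
# Derivation: paid = ult / paid_cdf and reported = ult / rep_cdf, so
#   unpaid = ult - paid   and   case = reported - paid
#   => unpaid / case = (paid_cdf - 1) / (paid_cdf / rep_cdf - 1)

def co_dev_unpaid(case_outstanding, paid_cdf, rep_cdf):
    factor = (paid_cdf - 1.0) / (paid_cdf / rep_cdf - 1.0)
    return case_outstanding * factor

# consistency check: ultimate 1000, paid CDF 2.0 -> paid 500,
# reported CDF 1.25 -> reported 800, case 300, so true unpaid is 500
unpaid = co_dev_unpaid(300.0, 2.0, 1.25)
```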
18
Q

Berquist-Sherman: dealing with operational changes

A
  1. Using data selection and rearrangement to isolate or neutralize impact of the changes
    - using earned exposures instead of claim counts when definition of claim counts has changed
    - using PY instead of AY data when policy limits have changed between PYs
    - using RY instead of AY data when social or legal climate changes cause severity to correlate closer with RD than AD
    - using shorter time periods when avg acc date has changed over time
  2. Using data adjustment to restate historical data as if changes never occurred -> BS techniques
19
Q

Subdividing Data

A
  • another example of data selection and rearrangement is dividing data into more homogeneous groups
  • groups need to be large enough to be credible
  • one way is to subdivide claims by size of loss aka severity -> useful if shift in emphasis in claims between handling small vs. large claims
  • example: if claims staff shifted focus towards large claims, expect a slowdown in settlement rates for small claims and the overall book, a speedup for large claims, and increased severity on smaller claims as less attention is given to them
  • this would impact both paid and reported claim triangles if small and large claims were combined but could be better isolated with separate triangles for large and small claims
20
Q

B-S Techniques in general

A
  • can be used to adjust triangles for changes in claim settlement rates and/or case reserve adequacy
  • after adjustments, all years will have same settlement rates and/or case reserve adequacy so regular development methods can be used on adjusted triangles to produce estimates
  • techniques adjust triangles to common level -> latest diagonal is almost always used since adjusted triangle would have the latest diagonal remain unchanged from unadjusted triangle
21
Q

Paid B-S: Assumptions

A
  • main assumption: changes in disposal rates are due to speedups or slowdowns in settlement rates
  • not due to things like changes in rate of reporting of claims or changes in prioritization between small and large claims
  • related assumption: higher disposal rates are associated with higher percentage of ultimate paid claims
22
Q

Paid B-S: steps

A
  • need to determine whether there have been changes in settlement rates
  • look @ disposal rates = cumulative closed counts/ultimate counts
  • if disposal rates are changing as you go down columns then adj is appropriate
  • use diagonal disposal rates as selected DRs
  • use linear interpolation to create adjusted paid claims triangle
  • formula depends on whether historical DR is higher/lower than selected DR
  • perform CL on adj triangle to determine ultimate estimates
  • without adjustment, paid CL would have underestimated ultimate claims if slowdown
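The interpolation step can be sketched for one cell, assuming the selected (latest-diagonal) disposal rate implies a closed-count level lying between two historical points of the AY being adjusted (hypothetical figures):

```python
# Paid B-S adjustment sketch: restate a paid cell at the closed-count level
# implied by the selected DR, using linear interpolation on cumulative closed
# counts versus cumulative paid.

def adjusted_paid(sel_dr, ult_counts, closed_pts, paid_pts):
    """Interpolate cumulative paid at the closed-count level implied by sel_dr."""
    target = sel_dr * ult_counts
    pts = list(zip(closed_pts, paid_pts))
    for (c0, p0), (c1, p1) in zip(pts, pts[1:]):
        if c0 <= target <= c1:
            w = (target - c0) / (c1 - c0)
            return p0 + w * (p1 - p0)
    raise ValueError("target closed count outside historical range")

# historical AY: cumulative closed counts / cumulative paid at successive ages
closed = [0.0, 40.0, 70.0]
paid = [0.0, 200000.0, 420000.0]
adj = adjusted_paid(sel_dr=0.55, ult_counts=100.0, closed_pts=closed, paid_pts=paid)
```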
23
Q

Paid B-S: adj paid triangle

A
24
Q

Reported B-S: Assumption

A
  • main assumption: any differences between annual changes in average case reserves at each maturity and severity trend are due to changes in case reserve adequacy
  • not due to things like large unpaid losses
  • need severity trend to adjust average case reserves for inflation
  • can be derived from looking at annual changes in average paid severity triangle -> assumes that average paid severity is only changing because of severity trend and not shifts in prioritization between large and small claims
25
Q

Reported B-S: steps

A
  • need to determine whether there have been changes in case reserve adequacy
  • look @ % changes in avg paid (down column not across row)
  • look @ average case reserve triangle = (c. rptd-c. paid)/open
  • look @ % changes in avg case reserves (down columns, not across rows)
  • if % change in avg CO is different than severity trend, assume change is due to change in reserve adequacy
  • calc adjusted avg CO triangle = keep latest diagonal and trend backwards using severity trend (divide by trend)
  • calc adjusted reported claims triangle = adj CO*open claim counts + cumulative paid
  • perform reported CL on adj reported triangle to estimate ultimates
  • if no adjustment, overestimate if increase in case adequacy
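A sketch of restating one prior-diagonal cell, assuming a hypothetical 5% severity trend:

```python
# Reported B-S adjustment sketch: keep the latest-diagonal avg case, detrend it
# backwards, then rebuild adjusted reported = adj avg CO * open + cumulative paid.

sev_trend = 1.05
latest_avg_case = 10000.0   # avg case at this maturity on the latest diagonal
years_back = 2              # diagonal being restated is 2 years older

adj_avg_case = latest_avg_case / sev_trend ** years_back   # divide by trend
open_counts = 30.0
cum_paid = 150000.0
adj_reported = adj_avg_case * open_counts + cum_paid
```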
26
Q

Paid and Reported Adjustments Combined

A
  • need to calc adjusted open claim counts = reported claim counts – adjusted cumulative closed claim counts (based on selected DRs)
  • adjusted reported triangle = (adj open claims*adj avg case reserves) + adj cumulative paid claims
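One cell of the combined adjustment, assuming the adjusted closed counts, average case reserves, and paid claims have already been produced by the paid and reported procedures (hypothetical values):

```python
# Combined B-S sketch: adjusted open counts come from the selected DRs, then the
# adjusted reported cell is rebuilt from adjusted avg case and adjusted paid.

reported_counts = 120.0
adj_cum_closed = 80.0                        # from selected disposal rates
adj_open = reported_counts - adj_cum_closed  # adjusted open claim counts
adj_avg_case = 9000.0
adj_cum_paid = 400000.0
adj_reported = adj_open * adj_avg_case + adj_cum_paid
```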
27
Q

Evaluation of Techniques: best practices

A
  • considered good practice to use multiple estimation methods when possible rather than relying on single method
  • seeing multiple estimates can help you better understand range and distribution of possible outcomes as well as sensitivity of estimates to varying assumptions
  • common diagnostics to review for reasonability would include implied ult frequencies, severities, claims ratios, PPs, and unpaid severities
28
Q

Retroactive testing

A

Another area of good practice is to review unpaid estimates between annual analyses, 2 reasons:

  1. Unpaid claims estimates should be updated if there has been a change in exposures
  2. To see if claims are developing as expected as diagnostic check as to whether unpaid claims estimates are reasonable -> comparing actual and expected development is retroactive testing and can reveal whether estimates seem to be consistently higher or lower than actual results
29
Q

Hugh White’s options if actual>expected

A
  • reduce IBNR (speedup in reporting)
  • leave IBNR unchanged (large reported claim and think future development will return to expected levels)
  • increase IBNR (deterioration in Claims Ratio)
30
Q

Calculating Expected Emergence

A
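One common form uses % reported = 1/CDF, so expected emergence between two ages = selected ultimate * (% reported at later age - % reported at earlier age); a sketch with hypothetical values:

```python
# Expected emergence sketch: convert reported CDFs to % reported and apply the
# difference to the selected ultimate. Ultimate and CDFs are hypothetical.

ultimate = 1_000_000.0
cdf_12, cdf_24 = 2.0, 1.25
pct_rep_12, pct_rep_24 = 1 / cdf_12, 1 / cdf_24
expected_emergence = ultimate * (pct_rep_24 - pct_rep_12)
```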
31
Q

Interpolating Within Quarters or Years

A
  • can use linear interpolation to obtain expected emergence at smaller level by interpolating between % reported values
  • linear interpolation within quarter is likely to produce more reasonable expected emergence values than interpolation within entire year
  • development tends to be higher in earlier maturities and tends to decrease over time
  • linear assumption is usually not reasonable for prolonged periods of time since most of development will tend to occur earlier in year than later in year
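A sketch of interpolating % reported to a mid-quarter age, assuming hypothetical % reported values at 12 and 15 months:

```python
# Linear interpolation within a quarter: estimate % reported at 13 months from
# the bracketing quarterly values (all figures hypothetical).

age0, age1 = 12, 15
pct0, pct1 = 0.50, 0.56
target_age = 13
pct_at_target = pct0 + (pct1 - pct0) * (target_age - age0) / (age1 - age0)
```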
32
Q

Recoveries: Salvage and Subrogation

A
  • data available for estimating S&S recoveries can vary significantly by insurer
  • some treat S&S as negative payments while others record it separately
  • some capture data for different types of recoveries separately while others combine all together
  • some estimate CO for recoveries
  • recoverable S&S = unpaid S&S
33
Q

Estimating S&S Recoverables: 2 approaches

A
  1. use development tech on S&S directly
    - may work better for salvage since salvage is related to property coverage and develops quickly while subrogation is mostly related to liability coverage and takes longer to develop
  2. use ratio approach that develops ratios of S&S to gross claims and use those ratios along with ult claims estimates to project ult S&S
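A sketch of the ratio approach for one AY, assuming the S&S-to-gross ratio has already been developed to an ultimate ratio (hypothetical figures):

```python
# S&S ratio approach sketch: ultimate S&S = developed ultimate ratio * ultimate
# gross claims; recoverable (unpaid) S&S = ultimate S&S minus S&S received.

ult_gross_claims = 2_000_000.0
ult_ss_ratio = 0.08           # developed ultimate S&S-to-gross ratio (assumed)
ss_received = 100_000.0
ult_ss = ult_ss_ratio * ult_gross_claims
ss_recoverable = ult_ss - ss_received   # recoverable S&S = unpaid S&S
```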
34
Q

Reinsurance: when estimates are needed net of reinsurance

A
  • when unpaid claims estimates are needed on net of reinsurance basis, can either develop net triangles directly or develop gross and ceded separately and then use those estimates to calc estimates on net of reinsurance basis
  • decision varies based on data availability and characteristics of reinsurance program
  • should check implied reinsurance relationships for reasonability based on type of reinsurance program
35
Q

Quota Share

A

-quota share = insurer cedes a constant percent of all claims to the reinsurer, so net claims are a constant percent of gross claims

36
Q

per-risk or per-occurrence XOL

A

insurer cedes all amounts above a certain retention and up to a certain limit on individual claims or individual occurrences; ex: 500k retention and 1M limit

37
Q

Stop Loss

A

stop loss = insurer cedes all amounts above a certain retention and up to a certain limit on aggregate loss amounts for insurer's book

38
Q

Reinsurance: Tail Factors

A
  • when looking at tail of triangles of either ceded or net losses, relationship to gross loss triangle can vary by type of reinsurance
  • QS: tail factors will be same since net and ceded triangles are just constant multiples of gross triangle
  • XOL and Stop Loss: tail factor for ceded will be larger than for gross since once retention is hit, all development occurs in ceded layer; net tail factor will be smaller since net losses may be capped by reinsurance program