Section A - part II Flashcards
Current approach to assess risk margins
- CoVs determined for individual portfolios
- Correlation matrix
- Distribution is selected to combine the CoVs and correlations to determine the aggregate risk margin at a particular probability of adequacy
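A minimal sketch of the aggregation step in Python, assuming illustrative liabilities, CoVs and a judgmental correlation matrix (none of these figures come from the source):
```python
import numpy as np

liabs = np.array([100.0, 60.0])           # central estimates per valuation class
covs  = np.array([0.15, 0.20])            # CoV selected for each class
corr  = np.array([[1.0, 0.3],
                  [0.3, 1.0]])            # judgmentally selected correlations

sigmas = liabs * covs                     # standard deviations in dollars
agg_var = sigmas @ corr @ sigmas          # aggregate variance across classes
agg_cov = np.sqrt(agg_var) / liabs.sum()  # aggregate CoV for the portfolio
print(f"aggregate CoV: {agg_cov:.3f}")
```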
How are CoVs determined?
Combine quantitative methods with qualitative analysis of the sources of uncertainty not captured in historical experience
How is the correlation matrix populated?
Mostly actuarial judgment. Quantitative methods require a significant amount of time, data and cost to produce credible and intuitive results
How is the statistical distribution selected?
Usually the lognormal is used. The normal distribution can also be used at lower probabilities of adequacy, where it produces higher risk margins than the lognormal.
Define claims portfolio
Aggregate portfolio for which the risk margins must be estimated
Define valuation classes
Portfolios that are considered individually as part of the risk margin analysis (e.g., auto and home)
Define claim group
A group of claims with common characteristics (e.g., auto PL, auto OCL)
Define independent risk
Risk due to the randomness inherent in the insurance process
Describe the two sources of independent risk
Process risk: Pure randomness effect
Parameter risk: Represents the extent to which the randomness associated with the insurance process affects the ability to select appropriate parameters in the valuation models
Define systemic risk
Risks that are common across valuation classes
Describe the two types of systemic risk
Internal systemic risk: Risk internal to the insurance liability valuation/modeling process (model specification error)
External systemic risk: Risk external to the insurance liability valuation/modeling process
How to prepare the claims portfolio for risk margin analysis?
Split into valuation classes:
- same classes as used for the central estimate
- may not be possible at the same granularity level, so check for credibility concerns (conduct the quantitative analysis on an aggregate basis and allocate back)
Decide whether specific classes should be divided further into claim groups:
- based on development patterns
How to analyze independent risk sources
Using modeling techniques: when the model fits past data well, it is possible to remove past systemic risk, leaving only the random sources of uncertainty.
How to analyze internal systemic risk?
Using qualitative techniques such as the balanced scorecard. Quantitative methods don't work since this risk relates to possible inadequacies of the model itself
3 sources of internal systemic risk
- Data error: Risk arising from incorrect data or a lack of knowledge of the portfolio being analyzed
- Specification error: Error that arises because the model cannot perfectly model the insurance process
- Parameter selection error: Error arising because the model cannot adequately measure all predictors of future claim costs or trends in these predictors
How to analyze external systemic risk?
Can model past external systemic risk with modeling techniques but must consider that future risk may not be the same as the risk in historical data.
Examples of external systemic risk
Economic and social risks
Legislative, political and claim inflation risks (dominate OCL and PL risk for long-tailed LOBs)
Claim management process change risk
Expense risk
Event risk (dominates volatility of premium liabilities for property)
Latent claim risk
Recovery risk
Why quantitative methods are not used to populate the correlation matrix
- techniques are very complex and require large amounts of data
- techniques yield correlations heavily influenced by correlations experienced in the past
- difficult to separate past correlation effects between independent risk and systemic risk, or to identify the pure effect of each past systemic risk
- internal systemic risk cannot be modeled using standard correlation modeling techniques
Risk margin distributions
- Normal: risk margin = Z × CoV(tot) × (sum of liabilities), where Z is the standard normal quantile at the target probability of adequacy
- Lognormal: risk margin = e^(mu + Z×sigma) − (sum of liabilities), where sigma^2 = ln(1 + CoV(tot)^2) and mu = ln(sum of liabilities) − sigma^2/2
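A hedged sketch of both distribution choices at a 75% probability of adequacy; the total liability and aggregate CoV are assumed for illustration:
```python
import numpy as np
from scipy.stats import norm

total = 160.0                # sum of liabilities (assumed central estimate)
cov = 0.14                   # assumed aggregate CoV
z = norm.ppf(0.75)           # quantile at the target probability of adequacy

margin_normal = z * cov * total                   # normal distribution

sigma = np.sqrt(np.log(1 + cov**2))               # lognormal with mean = total
mu = np.log(total) - sigma**2 / 2
margin_lognormal = np.exp(mu + z * sigma) - total

print(margin_normal, margin_lognormal)
```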
Sensitivity testing usage
Change key assumptions (correlations, CoVs) to see the sensitivity of the risk margins
Scenario testing usage
How much would key assumptions need to change for the central estimate to reach the higher amount (central estimate + risk margin)?
Internal benchmarking usage
- Independent risk: The larger the portfolio and the shorter-tailed the LOB, the smaller the CoV
- Internal systemic risk: Similar groups have similar CoVs and short tailed lines have lower CoVs
- External systemic risk: Short tailed lines should have lower CoVs
Hindsight analysis
Compares past estimates of PLs and OCLs against the latest view of the equivalent liabilities. Any movement/variation can be converted to a CoV reflecting actual past volatility. Careful: models may have improved over time, and future external sources of risk may be significantly different from past episodes. Works better on short-tailed portfolios, where serial correlation between consecutive valuations is less significant.
Mechanical hindsight
Applies a mechanical approach to estimating the OCLs and PLs by systematically removing the most recent claims experience.
Do the usual chain ladder to get your current estimate. Remove diagonals one at a time and apply the chain ladder to derive outstanding claims estimates at past valuation dates. Compare each with the current estimate (see the sketch below).
Analyze:
- Independent risk: focus on periods with stable development
- Internal systemic risk: apply this technique with many methods to observe the differences in volatility
- All past sources of uncertainty: apply this approach to all past periods
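A minimal sketch of mechanical hindsight on a toy cumulative triangle; all figures are invented, and a real analysis would repeat this for each past diagonal:
```python
import numpy as np

def chain_ladder_ultimates(tri):
    """Volume-weighted all-year LDFs applied to each row's latest value.
    tri: cumulative triangle with np.nan below the diagonal."""
    n = tri.shape[1]
    ldfs = []
    for j in range(n - 1):
        mask = ~np.isnan(tri[:, j + 1])
        ldfs.append(tri[mask, j + 1].sum() / tri[mask, j].sum())
    ults = []
    for i in range(tri.shape[0]):
        j = np.where(~np.isnan(tri[i]))[0].max()
        ults.append(tri[i, j] * np.prod(ldfs[j:]))
    return np.array(ults)

tri = np.array([[100., 150., 175., 180.],
                [110., 168., 192., np.nan],
                [120., 180., np.nan, np.nan],
                [130., np.nan, np.nan, np.nan]])

current = chain_ladder_ultimates(tri)

# remove the latest diagonal to re-estimate "as at" the prior valuation
past = tri.copy()
n = past.shape[1]
for i in range(past.shape[0]):
    past[i, n - 1 - i] = np.nan
past = past[:-1, :-1]                 # drop the now-empty last row and column

hindsight = chain_ladder_ultimates(past)
tail = tri[0, 3] / tri[0, 2]          # extend hindsight estimates to the same age
movement = current[:len(hindsight)] - hindsight * tail
print(movement)                       # movements feed the hindsight CoV
```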
Regularity and review
Full application of the framework: once every 3 years
Review of key assumptions whenever central estimates are produced:
-emerging trends
-emerging systemic risks
-changes to valuation methodologies
What should be assessed for each external risk source:
1. Economic/Social :
Determine whether it impacts both OCLs and PLs, or one more than the other (particularly for long-tailed lines)
What should be assessed for each external risk source:
2. Legislative/Political and Claims inflation:
Analysis of past levels of inflation should help separate random volatility from systemic events. Long-tailed: changes in court settings, medical costs, legal costs, systemic shifts in frequency or severity. Short-tailed: claim inflation may increase at a level different from that used in the central estimate analysis
What should be assessed for each external risk source:
3. Claim management process change:
Work closely with management
Consider reporting patterns, payment patterns, reopening rates,…
What should be assessed for each external risk source:
4. Expense
Work with product and claim management to understand the key drivers of policy maintenance and claim handling expenses.
Identify key sources of variation from central estimate assumptions
What should be assessed for each external risk source:
5. Event
OCLs: Focus on material outstanding events. Discuss with claim management to understand expectations for final costs.
PLs: Analyze past experience for event claims, the output of cat models, and the output of models for perils not included in cat models
What should be assessed for each external risk source:
6. Latent claim
Mainly liability and workers compensation lines
Very hard to quantify (extremely low freq/high sev)
Talk with underwriters to better understand potential sources of the risk
What should be assessed for each external risk source:
7. Recovery
Identify systemic events that may lead to different recovery situations than those used for central estimate purposes.
Speak with claim and reinsurance management to understand current/future trends in recoveries, identify possible scenarios, and quantify their impact
How to determine alphas and betas in the GLM model
alpha_i = ln(chain ladder ultimate for AY i × % of ultimate reported in the first age); beta_j = ln(% incremental reported in age j / % incremental reported in the first age), so beta_1 = 0
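A small sketch of backing these parameters out of chain ladder quantities; the ultimates and incremental reporting percentages are illustrative:
```python
import numpy as np

ult = np.array([200., 220., 240.])       # chain ladder ultimates per AY (assumed)
pct_incr = np.array([0.50, 0.30, 0.20])  # % of ultimate reported in each age

alphas = np.log(ult * pct_incr[0])       # alpha_i = ln(ULT_i x % in first age)
betas = np.log(pct_incr / pct_incr[0])   # beta_j = ln(%incr_j / %incr_1) => beta_1 = 0

fitted = np.exp(alphas[:, None] + betas[None, :])  # m_ij = exp(alpha_i + beta_j)
print(np.allclose(fitted, ult[:, None] * pct_incr[None, :]))  # True
```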
Advantages of using simplified GLM?
- Use of LDFs makes it easy to explain
- Replaces GLM fitting with simply calculating the chain ladder
- Still gives a solution when some incremental values are negative (where the log-link GLM would fail)
Adjustments to residuals
England and Verrall: multiply each residual by sqrt(n/(n−p)) to adjust for the degrees of freedom
Pinheiro: criticizes that this does not produce standardized residuals; adjusts each residual by sqrt(1/(1 − H_ii)), using the diagonal of the hat matrix
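A hedged sketch of both adjustments; the residuals, n, p and hat-matrix diagonal below are placeholders, not values from the source:
```python
import numpy as np

r = np.array([0.8, -1.2, 0.5, -0.1])    # unscaled Pearson residuals (assumed)
n, p = 4, 2                             # number of data points and parameters
h = np.array([0.30, 0.25, 0.20, 0.25])  # diagonal of the hat matrix (assumed)

r_dof = np.sqrt(n / (n - p)) * r        # England & Verrall: degrees-of-freedom factor
r_std = r * np.sqrt(1 / (1 - h))        # Pinheiro: standardize via the hat matrix
print(r_dof, r_std)
```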
Variance assumption : proportional to..
Incremental losses follow a distribution whose variance is proportional to phi × m^z. z = 0: normal; z = 1: Poisson; z = 2: gamma; z = 3: inverse Gaussian
Adjustments for negative incremental values
The GLM requires taking the log of each incremental loss. We can't take the log of a negative value, so the GLM cannot be applied without some adjustment:
- fit −log(−q) for negative values (see the sketch below)
- add an amount of −q to each incremental loss and, after fitting, add q back to the fitted losses
- use the chain ladder method to estimate the GLM result
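A minimal sketch of the first adjustment, a sign-preserving log; values are illustrative, and zeros would need separate handling:
```python
import numpy as np

def signed_log(q):
    # ln(q) for positive incrementals, -ln(-q) for negative ones
    return np.sign(q) * np.log(np.abs(q))

q = np.array([120.0, -5.0, 40.0])   # incremental losses, one negative
print(signed_log(q))
```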
Adjustment to the Gamma distribution when negative forecasted mean
The gamma distribution requires positive parameters and is not defined for negative values.
1. −Gamma(−m, −m×phi): keeps the mean at m but the heavy tail flips to the left
2. Gamma(−m, −m×phi) + 2m: keeps the mean at m and the heavy tail stays on the right
Adjustments to residuals with different variability
- Stratified sampling: group residuals with similar variability and sample only from residuals in the same group. BUT the bootstrap is already limited by the number of data points, so using fewer residuals results in a less accurate distribution
- Heteroscedasticity adjustment: group residuals with similar variability and calculate the standard deviation of each group. Scale each group's residuals by the ratio of the standard deviation of the most variable group to the group's own standard deviation. It is then possible to sample from the whole triangle (see the sketch below)
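A sketch of the heteroscedasticity adjustment with assumed residual groups; the group labels and values are hypothetical:
```python
import numpy as np

residuals = {"early": np.array([0.5, -0.7, 0.9, -0.4]),
             "late":  np.array([1.8, -2.2, 2.5])}

stds = {g: r.std(ddof=1) for g, r in residuals.items()}
target = max(stds.values())          # std of the most variable group

# scale every group up to the most variable group's std, then pool
adjusted = {g: r * target / stds[g] for g, r in residuals.items()}
pool = np.concatenate(list(adjusted.values()))
# when applying a sampled residual to a cell in group g, scale back by stds[g]/target
print(pool)
```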
Advantages of resorting vs location mapping
Does not need the same shapes and sizes for the triangles
Can make different correlation assumptions
Can use different correlation algorithms that provide additional benefits, such as copulas
Adjustment for extreme points in the data
Can remove extreme points/outliers or reduce them to a less extreme value.
We risk understating the variability of our losses.
Can look at box-and-whisker plots to identify points outside 3 times the interquartile range and investigate them.
Two important properties of bayesian models (Verral)
- Can incorporate expert knowledge (the bootstrap cannot)
- Can easily be implemented because it is based on MCMC, which uses the conditional distribution of each parameter given all the others
Mean and variance for ODP and ODNB
ODP mean: x_i × y_j
ODP var: phi × x_i × y_j
ODNB mean: (lambda_j − 1) × C(i, j−1), where C(i, j−1) is the previous cumulative
ODNB var: phi × lambda_j × (lambda_j − 1) × C(i, j−1)
When the column parameters are treated as stochastic and integrated out, the ODP model for incremental losses becomes the ODNB; we use the ODNB because it reproduces the chain ladder estimates.
Advantages of the Bayesian method
- Full predictive distribution can be found using simulation methods
- Predictive error can be obtained directly by calculating the standard deviation of the predictive distribution (Mack model cannot)
Steps to implement bayesian model
- Define improper prior distributions for the column parameters (the resulting estimates will be those implied by the chain ladder, since improper means very large variance)
- Define prior distributions for the row parameters x_i, usually gamma (mean = alpha/beta and variance = mean/beta)
- Using x_i, reparameterize the model in terms of gamma_i down the rows (chain ladder in the other direction)
Credibility formula for bayesian estimate
Estimated next increment = Z_ij × (lambda_j − 1) × C(i, j−1)
+ (1 − Z_ij) × (lambda_j − 1) × x_i / (cumulative LDF to age j−1)
where Z_ij = p(j−1) / (p(j−1) + beta × phi)
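A worked sketch of the credibility formula with assumed inputs; Z near 1 reproduces the chain ladder, Z near 0 a BF-style estimate:
```python
lam = 1.25       # column development factor lambda_j (assumed)
C_prev = 800.0   # cumulative losses to age j-1
x_i = 1500.0     # prior (expert) ultimate for the row
cdf_prev = 1.40  # cumulative LDF from age j-1 to ultimate
p_prev = 0.70    # expected % developed at age j-1
beta, phi = 0.002, 100.0

Z = p_prev / (p_prev + beta * phi)
incr = Z * (lam - 1) * C_prev + (1 - Z) * (lam - 1) * x_i / cdf_prev
print(Z, incr)
```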
For incurred data discuss the tails of the proposed models in order
- Mack
- Leveled CL
- Correlated CL
For paid data discuss the biased high effect of the proposed models in order
- Leveled Incremental Trend
- ODP
- Correlated Incremental Trend
- Mack
- Correlated CL
- Changing Settlement Rate
3 ways to look at the normality of the percentiles to verify if a procedure is accurate
- Histogram
- p-p plot
- KS test
KS test
D = max |empirical CDF − expected CDF|; reject uniformity at the 5% level if D exceeds the critical value 1.36/sqrt(n)
Advantages of using the mixed lognormal-normal over the truncated normal
- More skewness
- Can simulate negative values (possible in paid data)
Discuss the standard error by age (column)
For cumulative losses it decreases at older ages because most losses are well developed and more stable; there is more volatility at early maturities, when more claims are open.
For paid losses it increases at older ages because payments at early maturities are the most stable, while older development is more volatile and the data is thin, which increases the variability.
Define the following:
- Premium asset
- Standard premium
- Premium deviation
- Retro reserve
- Insurance charge
- What you expect to collect for losses that will emerge, less premium already booked
- Manual premium adjusted for experience rating
- Amount by which the booked premium differs from standard premium
- Difference between the premium deviation to date and the ultimate premium deviation
- Difference between the expected loss to the insurer caused by the maximum premium and the expected gain to the insurer caused by the minimum premium
Advantages of retro rated policies
- Encourages loss control by returning premium to the insured for good loss experience
- Gives a cash flow advantage to the insured, who pays premium as losses occur
- Shifts a large portion of the risk to the insured because premium varies with the insured's experience
Explain the two elements included in the first premium valuation
Made up of the sum of the basic premium and a component proportional to capped losses.
The first component is independent of the actual losses, so it is better to estimate the premium as a constant plus a component proportional to losses.
Formula at time n for retro rated policy premium
P(n) = (Basic Premium + Cumulative capped losses(n) × Loss Conversion Factor) × Tax Multiplier
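For example, with illustrative values BP = 200, cumulative capped losses CL(n) = 500, LCF = 1.2 and TM = 1.05: P(n) = (200 + 500 × 1.2) × 1.05 = 840.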
Explain how premium responsiveness (PDLD) varies with time and loss ratios.
As losses mature and as we expect more losses (increasing loss ratios), more accounts reach their maximum premium and more losses are capped by the loss limit, so premium responsiveness declines.
Why capping ratios decrease with time
Over time, an increasing portion of the loss development occurs outside the loss limitations, causing fewer losses to be included in the capped losses while total losses increase
PDLD (1)
(BP/SP) × TM / (ELR × %loss emerged at first adjustment) + (CL(1)/L(1)) × LCF × TM
PDLD (n)
(Delta CL(n) / Delta L(n)) × LCF × TM
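A sketch of both ratios with assumed plan parameters; note that PDLD(n) falls below PDLD(1), consistent with declining responsiveness:
```python
BP_SP = 0.20            # basic premium / standard premium (assumed)
TM, LCF, ELR = 1.05, 1.20, 0.70
pct_emerged_1 = 0.75    # % of ultimate losses emerged at first adjustment
cap_ratio_1 = 0.90      # CL(1)/L(1): capped-to-total losses at adjustment 1
incr_cap_ratio_n = 0.60 # delta CL(n)/delta L(n) at a later adjustment

pdld_1 = BP_SP * TM / (ELR * pct_emerged_1) + cap_ratio_1 * LCF * TM
pdld_n = incr_cap_ratio_n * LCF * TM
print(pdld_1, pdld_n)   # e.g. ~1.53 vs ~0.76
```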
Retro formula vs empirical PDLD advantage and disadvantage
+ Formula responds to changes in the retro rating parameters being sold
- Potential bias exists since the formula approach uses average parameters
How is the data segregated for retro rated policies
Into homogeneous groups:
1-size of account
2-type of rating plan sold
How is negative PDLD possible?
Upward development in the layer above the loss limitation and downward development in layers within the loss limitation (premium is returned)
3 reasons why current booked premiums may not equal the booked premium for the prior retro adjustment
- Timing of adjustments
- Minor premium adjustments
- Interim premium booking between regularly scheduled retro adjustments
Advantages of using CPDLD vs chain ladder on booked premium development
- Lag in processing and recording retro premium adjustments (chain ladder data would not be available until about 9 months after expiry)
- Estimate of premium asset can be produced soon after expiration
- Premium asset can be updated each quarter as new loss data is available
Assumptions in PDLD model
1. Premium responsiveness during subsequent adjustments is independent of the premium responsiveness during preceding adjustments:
- even if the slope of a line segment differs from what was expected, we do not change our expectation for the slope of the next segment
2. The slope of each line segment depends on the time period, not the beginning retro premium ratio.
7 problems with reinsurance reserving
- Reporting lags (perceived reportable claims, the reporting process, claims with extreme delays of recognition)
- Upward development (under-reserved ALAE; economic and social inflation have more impact in high layers)
- Heterogeneity (reporting patterns differ greatly)
- Industry statistics don’t help (not homogeneous, different LOB, patterns and attachment points)
- Reports may be lacking important information (only summarized at high level)
- Data coding and IT problems
- Reserve-to-surplus ratio is greater than for primary insurers
6 components of a reinsurance reserve
- Case reserve
- Additional case reserve
- IBNER
- Pure IBNR (rarely separated due to IT complexity)
- Discount for future investment income
- Risk load
General procedure for reinsurance reserving
- Portion the portfolio into reasonably homogeneous exposure groups (LOB, type of contract and type of reinsurance coverage)
- Analyze historical development patterns
- Estimate future development
- Monitor and test results
How to estimate IBNR for different tailed lines of business?
Short : % of written premium
Medium : chain ladder or BF
Long : Cape Cod (Stanard-Bühlmann)
Examples of short-tailed lines of business (5)
treaty property proportional
treaty property catastrophe
treaty property excess (exclude high layers)
facultative property (exclude construction risks)
fidelity proportional
Examples of medium-tailed lines of business (7)
treaty property excess (high layers)
construction risks
surety
ocean marine
inland marine
international property
non-casualty aggregate excess
Examples of long-tailed lines of business (5)
treaty casualty excess
treaty casualty proportional
facultative casualty
casualty aggregate excess
asbestos and mass tort claims
Advantages and disadvantages of Cape Cod
+ Loss ratio uses the experience (vs BF, which judgmentally selects it)
+ IBNR considers the experience (vs BF, which does not)
- Requires adjusted premiums (on-level at current rates, and net of fees and expenses)
IBNR (Stanard-Bühlmann)
IBNR = (on-level EP) × ELR × (1 − % reported)
where ELR = (sum of reported losses) / (sum of used-up premiums), and used-up premium = on-level EP × % reported
Credibility weighted IBNR
IBNR = Z * Reserve (Chain ladder) + (1-Z) * Reserve (CC)
where Z = (% reported) × credibility weighting factor
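A combined sketch of the Stanard-Bühlmann IBNR and the credibility weighting against the chain ladder, on invented data:
```python
import numpy as np

olep = np.array([1000., 1000., 1000.])   # on-level earned premium by year
pct_rep = np.array([0.90, 0.60, 0.30])   # expected % reported to date
reported = np.array([640., 410., 220.])  # reported losses to date

elr = reported.sum() / (olep * pct_rep).sum()   # SB ELR over used-up premium
ibnr_sb = olep * elr * (1 - pct_rep)            # Stanard-Buhlmann IBNR

ibnr_cl = reported * (1 / pct_rep - 1)          # chain ladder IBNR

cred_factor = 1.0                               # judgmental credibility factor
Z = pct_rep * cred_factor
ibnr_cred = Z * ibnr_cl + (1 - Z) * ibnr_sb
print(ibnr_cred, ibnr_cred.sum())
```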