Section A - Class Ratemaking Flashcards
AAA #1 Question: What basic principles should be present in a risk classification?
Answer:
- Reflects expected cost differences – among classes and other factors
- Distinguishes among risks on the basis of relevant cost-related factors – must relate to losses
- Applied objectively – understandable rules
- Practical and cost-effective – cannot be too costly or too difficult to use
- Acceptable to the public – public must feel it is fair
AAA #2 Question: Compare a Government insurance program to a private insurance program.
Answer:
Similarities: Pooling of risks. Pools should be large enough to guarantee reasonable predictability of total losses.
Differences:
- Government is provided by law; private is provided by contract
- Government is usually compulsory; private is voluntary
- Government does not need to be self-supporting; private must support itself
AAA *** List the: Three Primary Purposes of Risk Classification
AAA—Risk Classification Ratemaking
1. PROTECT THE INSURANCE SYSTEM’S FINANCIAL SOUNDNESS: Risk classification is the primary means to control adverse selection
2. BE FAIR: Risk classification should produce prices reflective of expected costs
3. Permit ECONOMIC INCENTIVES to operate and thus ENCOURAGE widespread COVERAGE AVAILABILITY: A proper class system will allow an insurer to write and better serve both higher and lower cost risks
AAA *** List the “Program Design Elements” (3) and how they relate to risk classification
[PEC]
1. DEGREE OF BUYER CHOICE: Compulsory programs ~ broad classification; voluntary programs ~ refined classification
2. EXPERIENCE BASED PRICING: To the extent prices are adjusted based on a risk’s emerging actual experience, less refined initial risk classification is needed
3. PREMIUM PAYER: If premium is paid by someone other than the insured, then broad class systems may be appropriate since adverse selection is less likely
AAA ** Four Differences Between: Public vs. Private Insurance Programs
**PUBLIC**
1. Usually DEFINED BY LAW
2. COMPULSORY
3. Little role for COMPETITION
4. Little RELATION between long-term BENEFITS and COSTS; typically insure ‘uninsurable’ risks (e.g. flood)

**PRIVATE**
1. DEFINED BY CONTRACT
2. Mostly VOLUNTARY
3. Rely on COMPETITION
4. RELATE COSTS to BENEFITS
AAA ** Operational Considerations in Classification Ratemaking (7)
MAMA ACE
- EXPENSE @ Cost of collecting info and pricing classes should not exceed benefits achieved
- CONSTANCY @ risk characteristics should be constant over time
- MEASURABILITY @ susceptible to convenient, reliable measure
- MANIPULATION @ minimize manipulation
- AVOID EXTREME DISCONTINUITIES @ especially at end points
- ABSENCE OF AMBIGUITY @ exhaustive, mutually exclusive classes that are clear and objective
- AVAILABILITY OF COVG @ properly classifying risks will maximize availability
AAA *** Considerations in Designing a Risk Classification System (9)
- UNDERWRITING: Controls practical impact of class plan
- MARKETING: Influences insurer’s mix of business
- PROGRAM DESIGN: Buyer degree of choice; experience based pricing; premium payer
- STATISTICAL: Homogeneity; credibility; predictive stability
- OPERATIONAL: Expense; constancy; measurable; maximize availability; absence of ambiguity; minimize manipulation; avoid extreme discontinuity
- HAZARD REDUCTION: Incentives to reduce hazard
- PUBLIC ACCEPTABILITY: Use relevant data; respect privacy; not unfairly differentiate among risks
- CAUSALITY: Demonstrable relation desirable, not always possible
- CONTROLLABILITY: Helps with hazard avoidance & public acceptance
AAA *** Five Basic Principles of a Sound Risk Classification System
- Reflect EXPECTED COST DIFFERENCES
- DISTINGUISH among risks on the basis of COST RELATED FACTORS
- APPLIED OBJECTIVELY
- Practical and COST EFFECTIVE
- ACCEPTABLE TO THE PUBLIC
AAA * 3 Mechanisms for Coping with Risk
- HAZARD AVOIDANCE and reduction: not all risks can be avoided
- TRANSFER: gov’t assistance, self insurance groups, private insurance
- PUBLIC and PRIVATE Insurance Programs
AAA * 3 Means of Establishing a Fair Price
FAIR PRICING METHODS:
- Reliance on WISDOM, INSIGHT, GOOD JUDGMENT – ignores actual experience
- OBSERVE THE RISK’S ACTUAL LOSSES over extended time – loss event may only occur once (life insurance)
- Observe GROUPS OF RISKS WITH SIMILAR CHARACTERISTICS and what their losses are
Bailey & Simon: *** 3 Major Conclusions on the Actuarial Credibility of a Single Auto
- The Experience for 1 year for 1 CAR HAS SIGNIFICANT AND MEASURABLE CREDIBILITY
- In a HIGHLY REFINED rating class system which reflects inherent hazard, there will NOT BE MUCH ACCURACY in an individual risk merit rating plan, but where a wide range of hazards are encompassed within a class system, credibility is much larger
- CREDIBILITY DID NOT INCREASE LINEARLY with years of experience, due to variation in the groups over time and skewness of the risk distribution. Adding a 2nd year increased credibility by roughly 2/5; adding a 3rd year increased credibility by another 1/6.
Bailey & Simon: ** Four reasons Multi-year Credibility does not Grow Linearly from 1-yr Credibility
Multi-Year Credibilities are Not Linear because:
- Risks ENTERING AND LEAVING the class
- An Insured’s CHANCE FOR AN ACCIDENT changes over time
- RISK DISTRIBUTION of individual insureds may be SKEWED, reflecting various degrees of accident proneness
- Credibility, defined as Z = P/(P+K), does not grow linearly as P (the experience-rating exposure) increases [Hazam review]
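The point that Z = P/(P+K) grows sublinearly can be checked with a quick numeric sketch; K = 4 here is an arbitrary illustrative value, not a figure from the paper:

```python
def buhlmann_z(p, k):
    """Credibility of the Buhlmann form Z = P / (P + K):
    Z grows with exposure P but saturates toward 1."""
    return p / (p + k)

# With a hypothetical K = 4: one unit of exposure gives Z = 0.20,
# but doubling the exposure gives Z = 1/3, not 0.40 -- growth is sublinear.
z_one = buhlmann_z(1, 4)
z_two = buhlmann_z(2, 4)
```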
Bailey & Simon: ** The Credibility of PPA Experience Rating depends on ______ and ______
Experience Rating Credibility Depends on:
- VOLUME of data in experience period
- Amount of VARIATION of individual hazards within the class (vs. class rating, where credibility increases in proportion to √(data volume) only)
Bailey & Simon: ** Why is EPPR used instead of earned exposures as the FREQUENCY BASE for calculating credibility of a single PPA?
According to Hazam, what conditions must be met?
Use EPPR as a base to AVOID THE MALDISTRIBUTION that results when higher claim frequency territories produce more X, Y, and B risks and also produce higher territory premiums. Basically, to avoid overlap between territory rating and experience rating.
Conditions for EPPR as a base to eliminate maldistribution [Hazam]: 1. High frequency territories must also be higher premium territories 2. Territory differentials must be proper. Alternative: apply the Bailey-Simon method to loss costs instead of loss frequencies.
Bailey & Simon: *** Single PPA Credibility:

Credibility:
Z = 1 - Mod, for an n-yr claim-free class (1+, 2+, 3+); R = 0 for claim-free risks
Z = (Mod - 1) / (R - 1), for a group of risks WITH claim experience (0, 1, 2)

Modification:
Mod = Relative Freq = Z*R + (1 - Z)
    = (# claims class / EPPR class) / (# claims total / EPPR total)

R = Actual Freq / E[Freq]; R = 0 for accident-free risks
  = [1 - e^(-m)]^(-1) if Freq is Poisson distributed
  (E[Freq] is usually the class AVG freq)

m = (# claims total) / (EE total)
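A minimal Python sketch of the card’s formulas; the function names and sample inputs are mine, not from Bailey & Simon:

```python
import math

def mod_relative_frequency(class_claims, class_eppr, total_claims, total_eppr):
    """Mod = (# claims class / EPPR class) / (# claims total / EPPR total)."""
    return (class_claims / class_eppr) / (total_claims / total_eppr)

def z_claim_free(mod):
    """Claim-free group: R = 0, so Z = 1 - Mod."""
    return 1.0 - mod

def z_with_claims(mod, m):
    """Group with claims: R = 1 / (1 - e^-m) under a Poisson frequency,
    where m = total claims / total earned exposures."""
    r = 1.0 / (1.0 - math.exp(-m))
    return (mod - 1.0) / (r - 1.0)
```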
Bailey & Simon:
How to determine which class has more stability:
- Across time?
- Between risks within the class?
**Stability Across Time:** Examine (n-yr Cred / 1-yr Cred) for each class. The more linear the multi-year credibility, the MORE STABLE (ratio = n), i.e. the class with the RATIO CLOSEST TO n is the MOST STABLE. Logic: If an insured’s chance for an accident remained constant from year to year and no risks entered/left, then credibility should vary in proportion to the number of years.
**Stability within a Class:** Examine (n-yr Cred / freq per EE<sub>total</sub>) for each class. The LOWEST RATIO indicates the MOST STABLE individual risks, the lowest variation within its HG, or the most narrowly defined/most homogeneous class. Logic: If the variation of individual insureds’ chances for an accident were the same within each class, credibility should vary in proportion to the average claim frequency. *Recall: there are 5 classes shown; each class is divided into experience groups A, X, Y, B.*
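The two stability ratios can be sketched numerically; the class figures below are hypothetical, not Bailey & Simon’s:

```python
def time_stability_ratio(cred_n, cred_1):
    """(n-yr credibility) / (1-yr credibility): the class whose ratio is
    closest to n is the most stable across time."""
    return cred_n / cred_1

def within_class_ratio(cred_n, freq_per_ee):
    """(n-yr credibility) / (overall freq per earned exposure): the class
    with the lowest ratio has the most stable individual risks."""
    return cred_n / freq_per_ee

# Two hypothetical classes with 3-year credibilities:
ratio_a = time_stability_ratio(0.12, 0.05)  # 2.40
ratio_b = time_stability_ratio(0.11, 0.04)  # 2.75, closer to n = 3:
                                            # class B is more stable over time
```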
Mahler 1 *
Credibility & Shifting Risk Parameters
Background for the paper’s analysis
Background:
Past Experience used to predict the future
New Estimate = Data*(z) + (Prior Est)*(1-z)
z = Credibility, Prior Est = Class AVG
Parameters may shift over time posing the question: How should we combine different years of historical data? Suggests:
- Give Old years substantially less weight
- May be minimal gain from using additional years of data
May want to vary the weights to prior estimates
- No weight at all
- Only look at 1 year back
- Use multiple past years (weight equally, or vary the weights?)
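A minimal sketch of the weighting ideas above; the geometric-decay scheme is one illustrative way to give old years less weight, not a method prescribed by Mahler:

```python
def credibility_estimate(data, prior, z):
    """New Estimate = Data * Z + (Prior Est) * (1 - Z)."""
    return z * data + (1.0 - z) * prior

def decayed_weights(n_years, decay):
    """Geometrically decaying weights (most recent year first),
    normalized to sum to 1 -- older years get substantially less weight."""
    raw = [decay ** i for i in range(n_years)]
    total = sum(raw)
    return [w / total for w in raw]
```

For example, `decayed_weights(3, 0.5)` weights three years of history 4:2:1, most recent first.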
Mahler 1 ***
Criteria for Evaluating Credibility Weighting
Schemes (3)
for weighting past experience against expected future
experience
3 Criteria for evaluating Credibility weighting:
- LEAST SQUARED ERROR (LSE): Minimizes the squared error between observed and predicted results. The smaller the MSE, the better the estimate. Buhlmann/Bayesian credibility methods are LSE.
- SMALL CHANCE OF LARGE ERRORS: Minimizes the probability p that observed results will be more than k% different from predicted. The smaller p is, the better the solution. Classical/limited fluctuation credibility: Pr(|A - E| <= k * E) = 1 - p
- CORRELATION: Minimizes patterns of errors (not concerned about large errors); also called the Meyers/Dorweiler method.
  Statistic = Corr[(Actual / Predicted), (Predicted / Grand Mean)]
            = Corr[Modified Loss Ratio, Experience Modifier]
  Want the correlation as close to zero as possible.
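The Meyers/Dorweiler statistic can be sketched as a plain Pearson correlation between the two ratios; the sample data in the test are hypothetical:

```python
def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def dorweiler_statistic(actual, predicted, grand_mean):
    """Corr[actual/predicted, predicted/grand mean]: a value near zero
    means the errors show no pattern across the range of modifiers."""
    loss_ratios = [a / p for a, p in zip(actual, predicted)]
    modifiers = [p / grand_mean for p in predicted]
    return pearson(loss_ratios, modifiers)
```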