Bahnemann Flashcards
Methods for Estimating Distribution Parameters
method of moments
maximum likelihood
minimum chi squared
minimum distance
truncation
discarding; usually in case of claims below a deductible
censoring
capping; usually in the case of a policy limit
shifting
usually with a straight deductible; claims larger than the deductible are reduced by the deductible amount
since limits reduce volatility of severity compared to unlimited data
- may be of interest when computing the variability of losses in a layer
- can use the coefficient of variation to compare variability across different distributions
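A quick numeric sketch of this point (the lognormal parameters and limit below are assumptions for illustration, not values from the text): capping simulated claims at a limit lowers the coefficient of variation.

```python
import random
import statistics

random.seed(42)

# hypothetical severity sample: lognormal claims (parameters chosen
# only for illustration)
claims = [random.lognormvariate(8.0, 1.5) for _ in range(20_000)]

def cv(xs):
    """Coefficient of variation: standard deviation divided by mean."""
    return statistics.pstdev(xs) / statistics.mean(xs)

limit = 100_000
limited = [min(x, limit) for x in claims]

# capping at the limit removes the right tail, so the limited CV is smaller
print(f"CV unlimited: {cv(claims):.3f}")
print(f"CV limited:   {cv(limited):.3f}")
```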
claim contagion parameter
accounts for claim counts not being independent of each other (e.g., one claim encourages others to file claims too)
- if claim counts follow a Poisson distribution, γ = 0
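A minimal sketch of the moment relationship behind this card, Var(N) = μ(1 + γμ): the `gamma_hat` estimator and the simulated sample below are illustrative assumptions. For Poisson counts the variance equals the mean, so the implied γ comes out near 0.

```python
import math
import random
import statistics

random.seed(0)

# under the contagion model Var(N) = mu * (1 + gamma * mu), a
# method-of-moments estimate is gamma_hat = (Var(N)/mu - 1) / mu
def gamma_hat(counts):
    mu = statistics.mean(counts)
    var = statistics.pvariance(counts)
    return (var / mu - 1) / mu

def poisson(lam):
    """Knuth's Poisson sampler (adequate for small lam)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1

# simulated Poisson claim counts: implied contagion is near 0
counts = [poisson(5.0) for _ in range(20_000)]
print(f"implied gamma: {gamma_hat(counts):.4f}")
```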
final rate for a policy needs to incorporate
all other expenses and profit as well as charge for risk
risk charge
premium amount used to cover contingencies such as:
- random deviations of losses from expected values (process risk)
- uncertainty in selection of parameters describing the loss process (parameter risk)
for purpose of pricing, instead of publishing full rates for every limit
insurers usually use relativities called ILFs to rate for a basic limit
ILFs can be determined using
empirical data directly, or from a theoretical curve fit to empirical data; the latter approach is more common for the highest limits, where there is little empirical loss data
in determining ILFs appropriate for each limit, following assumptions are commonly made:
- all UW expenses and profit are variable and don’t vary by limit
  - in practice, profit loads might be higher for higher limits since they are more volatile
- frequency and severity are independent
- frequency is same for all limits
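Under these assumptions, an ILF reduces to a ratio of limited expected severities, I(l) = E[X; l] / E[X; b]. A sketch with a hypothetical (ground-up, uncensored) claim sample:

```python
# hypothetical ground-up, uncensored claim data
claims = [1_000, 5_000, 20_000, 60_000, 150_000, 400_000, 900_000]

def limited_expected_value(losses, limit):
    """E[X; l]: average loss with each claim capped at the limit."""
    return sum(min(x, limit) for x in losses) / len(losses)

basic_limit = 100_000
for limit in (100_000, 250_000, 500_000, 1_000_000):
    ilf = limited_expected_value(claims, limit) / limited_expected_value(claims, basic_limit)
    print(f"ILF({limit:>9,}) = {ilf:.3f}")
```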
ILFs must be (under Bahnemann’s assumption that f_X(l) ≠ 0)
increasing at a decreasing rate
in terms of premium, premium for successive layers of coverage of constant width
will be decreasing
checking that a set of ILFs satisfies above criteria for I’(l) and I’’(l)
performing a consistency test:
table columns: per-occurrence limit l; increased limit factor I(l); marginal rate per $1,000 of coverage I’(l)
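The test above can be sketched mechanically: the marginal rate per unit of coverage between successive limits must be positive and nonincreasing. The ILF table below is hypothetical.

```python
# hypothetical ILF table keyed by per-occurrence limit
ilfs = {100_000: 1.00, 250_000: 1.40, 500_000: 1.80, 1_000_000: 2.25}

def consistency_test(ilfs):
    """Marginal rate per unit of coverage must be positive and nonincreasing."""
    limits = sorted(ilfs)
    marginals = [(ilfs[h] - ilfs[l]) / (h - l) for l, h in zip(limits, limits[1:])]
    increasing = all(m > 0 for m in marginals)
    decreasing_rate = all(a >= b for a, b in zip(marginals, marginals[1:]))
    return increasing and decreasing_rate

print(consistency_test(ilfs))
```

A set whose ILFs increase at an increasing rate (e.g. 1.0, 1.2, 1.8 at 100K/250K/500K) fails the same check.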
an exception to consistency test
could occur if one of the ILF assumptions was violated; e.g., if liability lawsuits were influenced by the size of the limit, then frequency would not be the same for all limits and the formulas would not hold
one reason that a set of increased limits factors may fail this consistency test yet still generate actuarially reasonable prices.
Adverse selection, which could happen if insureds that expect higher loss potential are more inclined to buy higher limits.
Adverse selection, which could happen if liability lawsuits are influenced by the size of the limit.
how the consistency test has both a mathematical interpretation and a practical meaning
The practical interpretation is that as the limit increases, fewer losses are expected in the higher layers, so rates should not increase more for higher limits than for lower limits.
mathematical interpretation is that I’(l) ≥ 0 and I’’(l) ≤ 0
there is more volatility (process risk) for policies with
higher limits or higher attachment points, so insurers will also want to charge a risk load for these policies
-to do this, need to include a risk charge ρ(l)
risk charge ρ(l) options
old Miccolis aka variance method
old ISO aka std dev method
risk load increases as
policy limit increases and is used to take into account the higher process risk for policies with higher limits
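A sketch of the two risk-charge forms named above. The multipliers `k` and the loss sample are hypothetical placeholders; calibrating the loading constant is a separate exercise.

```python
import statistics

def risk_charge_variance(layer_losses, k=1e-6):
    """Miccolis / variance method: rho proportional to the variance."""
    return k * statistics.pvariance(layer_losses)

def risk_charge_std_dev(layer_losses, k=0.10):
    """ISO / standard deviation method: rho proportional to the SD."""
    return k * statistics.pstdev(layer_losses)

# same hypothetical ground-up losses censored at two different limits
ground_up = (20_000, 80_000, 300_000, 900_000)
losses_low_limit = [min(x, 100_000) for x in ground_up]
losses_high_limit = [min(x, 1_000_000) for x in ground_up]

# the higher limit retains more of the tail, so both charges come out larger
print(risk_charge_variance(losses_low_limit), risk_charge_variance(losses_high_limit))
print(risk_charge_std_dev(losses_low_limit), risk_charge_std_dev(losses_high_limit))
```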
deductibles typically reduce
coverage limit so layer of coverage with deductible d and limit l is (d,l] and not (d,d+l]
3 types of deductibles:
straight
franchise
diminishing
straight deductible
loss is truncated and shifted by d such that net losses
Xd = X − d for X > d (and 0 for X ≤ d)
franchise deductible
loss is truncated but not shifted by d such that
Xd = X for X > d (and 0 for X ≤ d)
diminishing deductible
- aka disappearing deductible
- loss below amount d is fully eliminated, the deductible declines linearly to zero as the loss grows from d to a larger amount D, and loss above D is paid in full
Xd,D = 0 for X ≤ d
Xd,D = D(X − d)/(D − d) for d < X ≤ D
Xd,D = X for X > D
- formula for LER, even assuming ALAE is not additive, is long, so calculate the loss eliminated at each size-of-loss level as a % of total losses
when comparing the 3 deductibles
straight eliminates the most loss, then diminishing and then franchise
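The ordering can be checked numerically from the three net-loss definitions above (the d, D, and loss values below are hypothetical):

```python
def net_straight(x, d):
    """Straight deductible: truncated and shifted."""
    return max(x - d, 0.0)

def net_franchise(x, d):
    """Franchise deductible: truncated but not shifted."""
    return x if x > d else 0.0

def net_diminishing(x, d, D):
    """Disappearing deductible: declines linearly between d and D."""
    if x <= d:
        return 0.0
    if x <= D:
        return D * (x - d) / (D - d)
    return float(x)

d, D = 1_000, 5_000  # hypothetical deductible parameters
losses = [500, 1_500, 3_000, 5_000, 10_000]

def eliminated(net):
    return sum(x - net(x) for x in losses)

elim_straight = eliminated(lambda x: net_straight(x, d))
elim_diminishing = eliminated(lambda x: net_diminishing(x, d, D))
elim_franchise = eliminated(lambda x: net_franchise(x, d))

# straight eliminates the most loss, then diminishing, then franchise
print(elim_straight, elim_diminishing, elim_franchise)
```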
when pricing a layer of coverage and applying a risk load, it is not appropriate to
simply subtract risk-loaded ILFs or premiums for the layer, since that would result in an incorrect risk load
while without risk loading we have
Pa,l = Pl − Pa = Pb · [I(l) − I(a)]
-these relationships do not hold when risk loading is applied
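A numeric illustration of why subtraction fails under a variance-based risk charge (the claim sample and layer bounds are hypothetical): the variance of losses in the layer (a, l] is not the variance at l minus the variance at a.

```python
import statistics

# hypothetical claim sample; attachment a and top limit l for layer (a, l]
losses = [5_000, 40_000, 120_000, 350_000, 800_000]
a, l = 100_000, 500_000

def layer_loss(x, a, l):
    return min(x, l) - min(x, a)

var_l = statistics.pvariance([min(x, l) for x in losses])
var_a = statistics.pvariance([min(x, a) for x in losses])
var_layer = statistics.pvariance([layer_loss(x, a, l) for x in losses])

# a variance-based risk load for the layer should use var_layer; subtracting
# risk-loaded premiums would implicitly use var_l - var_a, a different number
print(var_layer, var_l - var_a)
```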
Assuming retention R, describe how inflation will affect the expected losses for the excess cover relative to unlimited ground-up losses.
Inflation affects excess losses more than total losses. Some losses that were below the retention will now reach the excess layer. Also, for losses already in the excess layer, the entire increase due to inflation falls in the excess layer.
reason why it is desirable for a set of increased limits factors to pass this consistency test.
So long as partial losses are possible, expected losses will not increase as much as the increase in limits, so the rate per $1k of coverage should decrease as the limit increases.
diagram for expected losses
y=loss size
x = FX(x) = cumulative loss distribution
why the loss cost for a given straight deductible policy can increase more than the ground-up severity trend.
For losses above the deductible, the trend is entirely in the excess layer.
Also, losses just under the deductible are pushed into the excess layer by the trend, creating new losses for the excess layer.
How do the policy conditions alter the coefficient of variation of the claim-size variable
The policy restrictions limit the variability of claims, so the coefficient of variation under the policy restrictions will be lower than the coefficient of variation of the unlimited claim-size variable
Briefly describe one reason why increased limit factors might fail a consistency test and still produce reasonable rates
If adverse selection is occurring because higher risk insureds are more likely to purchase higher limits, then it would be reasonable to have factors that fail the consistency test (e.g., ILFs that increase at an increasing rate).