Bahnemann Flashcards
Name 2 count distributions
- Poisson
- Negative Binomial
For Poisson distribution, identify f(n), E(N) and V(N)
E(N) = V(N) = u
f(n) = u^n * exp(-u) / n!
For Negative Binomial distribution, identify f(n), E(N) and V(N)
f(n) = (r+n-1 choose n) * p^n * (1-p)^r
E(N) = pr/(1-p)
V(N) = pr/(1-p)^2
For Uniform distribution, identify f(x), F(x), E(X) and V(X)
f(x) = 1/(b-a)
F(x) = (x-a)/(b-a)
E(X) = (b+a)/2
V(X) = (b-a)^2 / 12
For Exponential distribution, identify f(x), F(x), E(X) and V(X)
f(x) = exp(-x/b) / b
F(x) = 1 - exp(-x/b)
E(X) = b
V(X) = b^2
For Gamma distribution, identify E(X) and V(X)
E(X) = ab
V(X) = a*(b^2)
For Lognormal distribution, identify E(X) and V(X)
E(X) = exp(u+s^2/2)
V(X) = (exp(s^2) - 1)*exp(2u+s^2)
u = ln(mean) - s^2/2
s^2 = ln(CV^2+1)
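As a sketch (illustrative numbers only), the two parameter formulas above can be checked by round-tripping an assumed mean and CV through them:

```python
import math

def lognormal_params(mean, cv):
    # From the card: s^2 = ln(CV^2 + 1), u = ln(mean) - s^2/2
    s2 = math.log(cv ** 2 + 1)
    u = math.log(mean) - s2 / 2
    return u, math.sqrt(s2)

# Hypothetical severity: mean 10,000 with a CV of 3
u, s = lognormal_params(10_000, 3.0)

# Round trip: E(X) = exp(u + s^2/2) and CV = sqrt(V(X))/E(X)
mean_back = math.exp(u + s ** 2 / 2)
var_back = (math.exp(s ** 2) - 1) * math.exp(2 * u + s ** 2)
cv_back = math.sqrt(var_back) / mean_back
```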
For Shifted Pareto distribution, identify f(x), F(x), E(X) and V(X)
f(x) = a(b^a)/(x+b)^(a+1)
F(x) = 1 - (b/(x+b))^a
E(X) = b/(a-1)
V(X) = a(b^2)/((a-1)^2*(a-2))
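A quick numerical sanity check of E(X) = b/(a-1), under assumed parameters a = 3, b = 2000: integrating the survival function S(x) = (b/(x+b))^a should reproduce the same mean.

```python
a, b = 3.0, 2000.0          # assumed Pareto parameters, for illustration

def survival(x):
    return (b / (x + b)) ** a

# Trapezoidal integration of S(x) over [0, 2e6]; E(X) = integral of S(x)
n, hi = 200_000, 2_000_000.0
h = hi / n
integral = (survival(0.0) + survival(hi)) / 2 * h
integral += sum(survival(i * h) for i in range(1, n)) * h
closed_form = b / (a - 1)   # = 1000
```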
Discuss 4 methods for estimating distribution parameters
- Method of moments
Compute sample moments for different orders m, set them equal to the theoretical moments, and solve for the parameters using sample data.
- Maximum likelihood
For sample data with n observations, create the log-likelihood function (the sum of ln f(xi)), take derivatives with respect to each parameter, set them equal to 0 and solve for the estimated parameters.
- Minimum chi-squared
Create m ranges of claim size intervals. Tabulate actual claim counts in each range and expected counts using the target distribution with initial seed parameters. Calculate the chi-squared value:
X^2 = sum of (A - E)^2 / E
Then iterate over parameter choices (by computer) until X^2 is minimized.
- Minimum distance
Create m ranges of claim size intervals. Tabulate the cumulative % of actual claim counts from sample data for each range and the expected cumulative % using the target distribution with initial seed parameters. Calculate:
D = (sum of (Fn(Ck) - Fo(Ck))^2)^0.5
Then iterate over parameter choices until D is minimized.
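A minimal method-of-moments sketch (the first bullet), using the Gamma form from an earlier card (E(X) = ab, V(X) = ab^2): setting the sample mean and variance equal to the theoretical moments gives b = V/E and a = E^2/V. Parameters and sample size are made up.

```python
import random, statistics

random.seed(42)
true_a, true_b = 2.0, 500.0     # assumed "unknown" parameters
sample = [random.gammavariate(true_a, true_b) for _ in range(200_000)]

m1 = statistics.fmean(sample)     # sample mean
v = statistics.pvariance(sample)  # sample variance

# Solve E(X) = a*b and V(X) = a*b^2 for the parameters
b_hat = v / m1
a_hat = m1 * m1 / v
```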
Define truncation (aka discarding)
Usually occurs with claims below the deductible.
They never appear in the insurer's data.
Define censoring (aka capping)
Usually occurs with a policy limit: losses above the limit are recorded at the limit amount.
Define Shifting
Usually occurs with a straight deductible.
Claims larger than the deductible are reduced (shifted) by the deductible amount.
Explain Panjer’s recursive algorithm
Recursive algorithms can be used to approximate real aggregate loss distributions.
Panjer's recursive algorithm generates an aggregate loss distribution given an equally-spaced discrete severity distribution with spacing h and a count distribution N that satisfies fN(n) = (na+b)fN(n-1)/n.
Both Poisson (a=0, b=u) and NB (a=p, b=(r-1)p) satisfy the relationship.
Assuming fx(0) = 0, we have
fs(0) = fN(0)
fs(mh) = P(S = mh) = sum over k=1..m of (a + bk/m)*fx(kh)*fs((m-k)h)
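A minimal sketch of the recursion for the Poisson case (a=0, b=u), with a made-up two-point severity. The resulting fs should sum to 1, and its mean should equal u*E(X).

```python
import math

def panjer_poisson(lam, fx, n_steps):
    """fx[k] = P(X = k*h) with fx[0] = 0; returns fs[m] = P(S = m*h)."""
    fs = [0.0] * (n_steps + 1)
    fs[0] = math.exp(-lam)                 # fs(0) = fN(0) when fx(0) = 0
    for m in range(1, n_steps + 1):
        s = 0.0
        for k in range(1, min(m, len(fx) - 1) + 1):
            s += (lam * k / m) * fx[k] * fs[m - k]   # (a + bk/m), a=0, b=lam
        fs[m] = s
    return fs

# Hypothetical severity: P(X=h)=0.6, P(X=2h)=0.4; count N ~ Poisson(3)
fx = [0.0, 0.6, 0.4]
fs = panjer_poisson(3.0, fx, 80)
total_prob = sum(fs)
mean_s = sum(m * p for m, p in enumerate(fs))  # in units of h; ~ 3 * 1.4
```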
Given Y = max(0, X-a), calculate
1. Mean insurance pmt
2. Prob insurer pays 0
3. Limited E(XS) value
4. pdf of Xa
- E(Y) = E(X) - E(X;a)
- Fx(a) = P(X<=a)
- E(Xa;l) = (E(X;a+l) - E(X;a)) / (1-Fx(a))
- fXa(x) = fx(x+a)/(1-Fx(a))
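The four quantities above, worked for an Exponential severity (assumed mean b = 1000, deductible a = 250, layer width l = 500), using the closed form E(X;x) = b(1 - exp(-x/b)):

```python
import math

b, a, l = 1000.0, 250.0, 500.0      # assumed mean, deductible, layer width

def lim_ev(x):
    return b * (1 - math.exp(-x / b))   # E(X; x) for the Exponential

def F(x):
    return 1 - math.exp(-x / b)         # Fx(x)

mean_payment = b - lim_ev(a)                          # E(Y) = E(X) - E(X;a)
prob_zero = F(a)                                      # insurer pays 0 iff X <= a
layer_sev = (lim_ev(a + l) - lim_ev(a)) / (1 - F(a))  # E(Xa;l)
```

By memorylessness, layer_sev collapses to E(X;l) = b(1 - exp(-l/b)) for the Exponential, which makes a handy cross-check.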
Explain how severity curve can be used to fit theoretical curve
We can calculate excess (XS) severity at various limits using empirical data and compare the shape of the plot to theoretical curves:
Straight line going up -> Pareto -> slope is 1/(a-1) and intercept is b/(a-1)
Increasing XS Sev -> Weibull
Hockey stick -> Lognormal
Decreasing XS Sev -> Gamma
Flat -> exponential -> b = XS Sev
Describe the difference between interval of losses and layer of losses
A range of losses can be defined on an interval basis or layer basis.
Interval mean = sev = Tot loss/n
Layer mean = (E(X;a+l) - E(X;a)) / (1-Fx(a))
We need to price for losses in layer
Calculate the variance for a layer of losses
V(Xa;l) = E(Xa^2;l) - E^2(Xa;l)
E(Xa^2;l) = (E(X^2;a+l) - E(X^2;a) - 2a*(E(X;a+l) - E(X;a))) / (1-Fx(a))
Calculate coefficient of variation for layer of losses
CV(Xa;l) = V(Xa;l)^0.5 / E(Xa;l)
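A sketch of the layer moment formulas for an assumed Exponential severity (mean 1000, layer 1500 xs 500), computing the limited moments by crude numerical integration and cross-checking the layer mean by simulation:

```python
import math, random

random.seed(1)
b, a, l = 1000.0, 500.0, 1500.0     # assumed mean, attachment, width

def lim_ev(u, power=1):
    # Numeric E(X^power; u) = integral_0^u x^p f(x) dx + u^p * S(u)
    n = 20_000
    h = u / n
    integral = sum(((i * h) ** power) * math.exp(-(i * h) / b) / b
                   for i in range(1, n + 1)) * h
    return integral + (u ** power) * math.exp(-u / b)

S_a = math.exp(-a / b)                                   # 1 - Fx(a)
e1 = (lim_ev(a + l) - lim_ev(a)) / S_a                   # E(Xa;l)
e2 = (lim_ev(a + l, 2) - lim_ev(a, 2)
      - 2 * a * (lim_ev(a + l) - lim_ev(a))) / S_a       # E(Xa^2;l)
var_layer = e2 - e1 * e1                                 # V(Xa;l)
cv_layer = math.sqrt(var_layer) / e1                     # CV(Xa;l)

# Cross-check the layer mean by simulating min(X - a, l) given X > a
payments = []
while len(payments) < 100_000:
    x = random.expovariate(1 / b)
    if x > a:
        payments.append(min(x - a, l))
sim_mean = sum(payments) / len(payments)
```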
Given distribution, calculate expected number of claims above retention
N = ground-up claim counts
Na = counts for which x>a
p = 1 - Fx(a)
E(Na) = pE(N)
V(Na) = p^2V(N) + p(1-p)E(N)
Binomial:
Na follows Bin(k,p)
p = 1-Fx(a)
fNa(n) = sum over k of P(n excess counts given N=k)*P(N=k)
Poisson:
Na follows Poisson(pu)
E(Na) = pu = V(Na)
NB:
Na follows a Negative Binomial with the same r; thinning rescales the odds of the p parameter: p'/(1-p') = (1-Fx(a)) * p/(1-p)
E(Na) = pE(N)
V(Na) = p^2V(N) + p(1-p)E(N)
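A simulation sketch of the Poisson thinning claim above, with assumed u = 5 and excess probability p = 0.3: the excess counts should have mean and variance both close to p*u, consistent with Na ~ Poisson(p*u).

```python
import math, random

random.seed(7)
u, p = 5.0, 0.3          # assumed E(N) and excess probability 1 - Fx(a)
trials = 200_000

def poisson_draw(lam):
    # Knuth's multiplication method; fine for small lam
    thresh, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= random.random()
        if prod < thresh:
            return k
        k += 1

excess_counts = []
for _ in range(trials):
    n = poisson_draw(u)
    # each of the n claims independently exceeds the retention w.p. p
    excess_counts.append(sum(1 for _ in range(n) if random.random() < p))

mean_na = sum(excess_counts) / trials
var_na = sum((x - mean_na) ** 2 for x in excess_counts) / trials
```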
Calculate impact of severity inflation in XS layer
t = 1 + i (trend factor)
Impact of severity inflation on XS severity:
tau = E(tXa;l)/E(Xa;l) = t*(E(X;(a+l)/t) - E(X;a/t))*Sx(a) / ((E(X;a+l) - E(X;a))*Sx(a/t))
Impact of severity inflation on aggregate losses:
tau(s) = t*(E(X;(a+l)/t) - E(X;a/t)) / (E(X;a+l) - E(X;a)) = tau*tau(n)
Impact of severity inflation on XS counts:
tau(n) = Sx(a/t)/Sx(a)
Calculate Mean and Variance of Aggregate Losses in layer
E(S) = E(N)(E(X;a+l) - E(X;a))
V(S) = E(N)*(E(X^2;a+l) - E(X^2;a)) - 2a*E(S) + v*E^2(S)
v is the claim contagion parameter: accounts for claim counts not being independent of each other
If N follows Poisson, v = 0
Describe Risk Charge
The final rate for a policy needs to incorporate all expenses and profit, as well as a charge for risk.
The risk charge is a premium amount used to cover contingencies such as:
1. Random deviations of losses from expected values (process risk)
2. Uncertainty in selection of parameters describing loss process (param risk)
Calculate the final rate and policy premium
N = claim counts
m = exposures
phi = frequency
E(N) = mphi
E(S) = E(N)E(Y)
Pure premium = p = phi*E(Y)
Final rate = R = (p+f)/(1-v)
f are fixed expenses
v are variable expenses
Policy premium = P = mR
Basic Limit P = m*(phi*E(X;b)*(1+u) + f) / (1-v)
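A worked example of the rate build-up with purely illustrative inputs (all numbers hypothetical):

```python
# All inputs hypothetical, for illustration only
m = 1_000        # exposures
phi = 0.05       # claim frequency per exposure
sev = 8_000.0    # E(Y), mean payment per claim

f = 20.0         # fixed expense per exposure
v = 0.25         # variable expense ratio

pure_premium = phi * sev                  # p = phi * E(Y) = 400
rate = (pure_premium + f) / (1 - v)       # R = (p + f)/(1 - v) = 560
policy_premium = m * rate                 # P = m * R = 560,000
```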
Define Loss Cost Multiplier (LCM)
1 / (1-v)
Loads all other costs on top of p to get final rate
Describe 2 approaches to determine ILFs
- Empirical data directly
- Theoretical curve fit to empirical data (more common for highest limits with little empirical data)
In determining ILFs, 3 assumptions are commonly made, explain.
- All UW expenses and profit are variable and do not vary by limit
In practice, profit loads might be higher for higher limits since they are more volatile.
- Frequency and severity are independent
- Frequency is the same for all limits
This might not be true in practice due to adverse or favourable selection. GLMs or LR methods will reflect those differences but the ILF approach will not.
Calculate ILF
ILF(L) = E(X;L) / E(X;b)
Calculate premium for a layer using ILFs
Pa,l = Pb * (I(a+l) - I(a))
Only accurate if:
1. Limit applied to loss & ALAE
2. Insurer covering XS layer does not cover ALAE
3. ALAE is proportional to loss
If limit applies to loss only and ALAE is not proportional, use this instead:
Pa,l = LCM * E(N) * (E(X;a+l) - E(X;a) + (1-Fx(a))*e)
Error caused by not using this formula is usually small compared to risk load in practice
Describe the consistency test for a set of ILFs
The marginal premium per unit of coverage should decrease as the attachment point increases; equivalently, ILFs increase at a decreasing rate.
I(L) = (E(X;L) + e) / (E(X;b) + e)
I'(L) = (1 - Fx(L)) / (E(X;b) + e) >= 0 (increasing)
I''(L) = -fx(L) / (E(X;b) + e) <= 0 (at a decreasing rate)
An exception to this could occur if one of the ILF assumptions was violated (ex: if liability lawsuits were influenced by size of limit so freq is not the same for all limits)
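A sketch of the consistency test in practice, using a shifted Pareto with assumed parameters (a = 2, b = 5000, basic limit 100,000): the marginal ILF per unit of limit between successive limits should be positive and decreasing.

```python
# Shifted Pareto with assumed a = 2, b = 5000; basic limit 100,000
a, b, basic = 2.0, 5000.0, 100_000.0

def lim_ev(L):
    # E(X;L) for the shifted Pareto: (b/(a-1)) * (1 - (b/(L+b))^(a-1))
    return b / (a - 1) * (1 - (b / (L + b)) ** (a - 1))

limits = [100_000, 250_000, 500_000, 1_000_000]
ilfs = [lim_ev(L) / lim_ev(basic) for L in limits]

# Marginal premium per unit of limit between consecutive limits
marginals = [(ilfs[i + 1] - ilfs[i]) / (limits[i + 1] - limits[i])
             for i in range(len(limits) - 1)]
```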
Describe a situation where it could be acceptable to not meet the consistency test
If adverse selection is considered, the second condition could be violated.
This will occur when risks with large loss potential purchase higher limits, or when court awards and settlements tend to cluster at policy limits.
Explain 2 approaches to calculate risk loads for ILFs
To consider higher risk for higher limits, we include a risk charge rho(L) to obtain risk loaded ILFs.
- Miccolis approach:
rho(L) = k*V(S)/E(N) = k*(E(X^2;l) + d*E^2(X;l))
k is an arbitrary constant
d = V(N)/E(N) - 1
If N follows Poisson, d = 0, so the risk load is independent of N.
- ISO approach:
rho(L) = k*V(S)^0.5 / E(N) = k*(E(X^2;l) + d*E^2(X;l))^0.5 / E^0.5(N)
If k is not given but Pb is, use Pb = E(N)*(E(X;b) + rho(b)) and solve for k.
The risk load is higher with the variance (Miccolis) method for higher limits (so a higher premium) since variance increases faster than standard deviation.
Calculate risk load for layer of coverage
rho(a,l) = k*(E(X^2;a+l) - E(X^2;a) - 2a*(E(X;a+l) - E(X;a)))
P = E(X;a+l) - E(X;a) + rho(a,l)
State 2 advantages of using ISO approach over Miccolis approach
- More fitted for thick-tailed distributions like Pareto
- Express risk load in currency unit rather than currency squared
Calculate ILF in case of both per-claim limit and aggregate limit
E(S_l) = E(N)*E(X;l) if only per-claim
With both limits:
I(l,L) = E(S_l;L) / (E(N)*E(X;b))
If straight deductible, calculate premium
Xd = max(X-d, 0)
Pd,b = Pb(1-C(d))
C(d) is the LER aka deductible credit factor
C(d) = (E(X;d) + Fx(d)*e) / (E(X;b) + e)
Pure premium = p_d,b = phi(E(X;b) - E(X;d) + (1-Fx(d))e)(1+u)
Premium = P_d,l = Pb(I(l) - C(d))
If franchise deductible, calculate pure premium and LER
Xd = if(X>d, X, 0)
Loss is truncated but not shifted by d, so for claims above d the insurer pays the full loss (net loss Xd = X)
Pure Premium = p_d,b = phi(E(X;b) - E(X;d) + (1-Fx(d))(d+e))*(1+u)
C(d) = (E(X;d) - d(1-Fx(d)) + Fx(d)*e) / (E(X;b) + e)
If diminishing deductible, calculate LER
Xd = max(min(D(X-d)/(D-d), X), 0)
Loss below d is fully eliminated, the deductible declines linearly between d and D, and loss above D is paid in full (the deductible disappears at D)
C(d,D) = (D*E(X;d)/(D-d) - d*E(X;D)/(D-d)) / E(X;b)
Describe the LER relation between the 3 deductible types
Straight > diminishing > franchise
Because with a franchise deductible, losses above d are paid in full.
In contrast, a straight deductible reduces every loss by d.
The diminishing deductible is a mix of the two.
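The ordering can be verified numerically for an assumed Exponential severity (mean 1000, basic limit 10,000, deductible d = 500, diminishing pair (500, 2000)), with ALAE e set to 0 so the loss-only LER formulas apply:

```python
import math

# Exponential severity, assumed mean 1000; basic limit 10,000;
# straight/franchise deductible d = 500; diminishing pair (500, 2000).
b_mean, bl, d, D = 1000.0, 10_000.0, 500.0, 2_000.0

def lim_ev(u):
    return b_mean * (1 - math.exp(-u / b_mean))   # E(X;u)

def F(x):
    return 1 - math.exp(-x / b_mean)              # Fx(x)

ler_straight = lim_ev(d) / lim_ev(bl)
ler_franchise = (lim_ev(d) - d * (1 - F(d))) / lim_ev(bl)
ler_diminishing = (D * lim_ev(d) - d * lim_ev(D)) / ((D - d) * lim_ev(bl))
```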
Calculate Pure Premium after inflation
p = phi*t*(E(X;l/t) - E(X;d/t) + (1-Fx(d/t))*e)*(1+u), with t = 1+i the trend factor
Estimate trend impact net of deductible and capped by limit
tau = t*(E(X;l/t) - E(X;d/t) + (1-Fx(d/t))*e) / (E(X;l) - E(X;d) + (1-Fx(d))*e)
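A leveraged-trend sketch under assumed inputs (Exponential severity with mean 1000, d = 500, l = 5000, 10% ground-up trend, ALAE e set to 0): the net factor tau in the layer should exceed the ground-up factor t.

```python
import math

b_mean, d, l, t = 1000.0, 500.0, 5000.0, 1.10   # assumed inputs
e = 0.0                                          # ALAE per claim, omitted here

def lim_ev(u):
    return b_mean * (1 - math.exp(-u / b_mean))  # E(X;u) for the Exponential

def F(x):
    return 1 - math.exp(-x / b_mean)

num = t * (lim_ev(l / t) - lim_ev(d / t) + (1 - F(d / t)) * e)
den = lim_ev(l) - lim_ev(d) + (1 - F(d)) * e
tau = num / den      # net trend factor in the layer; exceeds t = 1.10
```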
State 2 reasons why loss cost per straight deductible can increase more than ground-up severity trend
- For losses above deductible, trend is entirely in excess layer
- Losses just under deductible are pushed into excess layer by trend, creating new losses for excess layer.
Insurer believes average increase in XS losses due to inflation is too high. Describe 2 differences in assumptions the insurer may have with the consulting firm.
- If insurer assumes a heavier tail distribution, more loss would be in XS which reduces the impact of XS inflation.
- Insurer may assume a higher average severity; this essentially acts like an inflation factor, shifting more losses above the layer, so the impact of trend on this particular layer is lessened. The shape of the assumed loss distribution could also affect the trend in this layer.