Quizlet Flashcards
in-force premium
full term premium for policies that are in effect at a given point in time
reported losses
paid losses + case reserves
estimated ultimate losses formula
reported losses + IBNR reserve + IBNER reserve
operating expense ratio formula
uw expense ratio + LAE/earned premium
combined ratio formula
loss ratio + LAE/earned premium + uw expense/written premium
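A minimal numeric sketch of the two ratio formulas above, using made-up figures (all amounts are hypothetical):
```python
# Hypothetical figures illustrating the operating expense and combined ratio cards.
earned_premium = 1_000_000.0
written_premium = 1_100_000.0
losses = 650_000.0        # reported losses
lae = 80_000.0            # loss adjustment expenses
uw_expenses = 275_000.0   # underwriting expenses

loss_ratio = losses / earned_premium                 # 0.650
lae_ratio = lae / earned_premium                     # 0.080
uw_expense_ratio = uw_expenses / written_premium     # 0.250

operating_expense_ratio = uw_expense_ratio + lae_ratio       # 0.330
combined_ratio = loss_ratio + lae_ratio + uw_expense_ratio   # 0.980
print(f"operating = {operating_expense_ratio:.3f}, combined = {combined_ratio:.3f}")
```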
fundamental insurance equation
premium = losses + LAE + uw expenses + uw profit
underwriting guidelines
a set of company-specific criteria that can affect decisions of whether to accept a risk or can alter aspects of the premium calculation
what do underwriting guidelines specify
decision to accept risk, company placement, tier placement, schedule rating credits/debits
criteria for exposure bases
- proportional to expected loss
- practical (can be split into objective and verifiable)
- historical precedent
extension of exposures method
rerating every policy to restate the historical premium to the amount that would be charged under the current rates
case incurred loss
reported loss
classification ratemaking
the process of grouping risks with similar loss potential and charging different manual rates to reflect differences in loss potential among the groups
first stage of classification ratemaking
determining which risk criteria effectively segment risks into groups with similar expected loss experience. usually different levels of a rating variable
second stage of classification ratemaking
calculates the indicated rate differential relative to the base level for each level being priced
adverse selection
when failure to accurately price individual risks results in a company having a disproportionately high number of high-risk policyholders
4 categories of risk classification criteria
1. statistical
2. operational
3. social
4. legal
3 important factors for statistical classification criteria
- statistical significance
- homogeneity
- credibility
4 important factors for operational classification criteria
- objectivity
- cost to administer
- verifiability
- constancy
4 important factors for social classification criteria
- affordability
- causality
- controllability
- privacy concerns
primary shortcoming of univariate rate classification methods
they do not take into account the effect of other rating variables
main benefit of multivariate rate classification methods
they consider all rating variables simultaneously
overall benefits of multivariate rate classification methods
- they consider all rating variables simultaneously
- they remove unsystematic effects in the data (noise)
- they produce model diagnostics (additional information about the certainty of results and the appropriateness of the model fitted)
- they allow consideration of the interaction, or interdependency, between two or more rating variables
According to “The Practitioner’s Guide to Generalized Linear Models” standard errors are:
an indicator of the speed with which the log-likelihood falls from the maximum given a change in parameter
2 common statistical diagnostics
- standard errors
- measures of deviance
deviance
a single-figure measure of how much the fitted values differ from the observations
factor analysis
a data mining technique to reduce the number of parameter estimates in a classification analysis. This can imply a reduction in the number of variables or a reduction in the levels within a variable.
cluster analysis
Data mining technique that seeks to combine small groups of similar risks into larger homogeneous categories or “clusters.”
when is cluster analysis most commonly used
when rating for geography
data mining techniques can enhance a ratemaking exercise by:
- whittling down a long list of potential explanatory variables to a more manageable list
- providing guidance in how to categorize discrete variables
- reducing the dimension of multi-level discrete variables
neural network
a data mining technique that gathers test data and invokes training algorithms designed to automatically learn the structure of the data. This technique has been described as a recursion applied to a GLM.
challenges to territorial ratemaking
geography tends to be highly correlated with other rating factors, and the data in each individual territory may be sparse
high-dimensionality
a challenge for territorial ratemaking: the large number of geographic units means the data in each individual territory may be sparse
2 phases to territorial ratemaking
- establishing territorial boundaries
- determining rate relativities for the territories
first step of establishing territorial boundaries
determining the basic geographic unit
second step of establishing territorial boundaries
estimate the geographic risk of the territory by isolating the geographic signal in the data
spatial smoothing
since geographic risk tends to be similar for units in close proximity, smoothing improves the estimate of any individual unit by using information from nearby units
2 types of spatial smoothing
- distance-based
- adjacency-based
distance-based spatial smoothing
weighting the information from one geographic unit with the information from all nearby geographic units based on the distance from the primary unit and some measure of credibility
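A minimal Python sketch of distance-based smoothing, assuming inverse-distance weights as the credibility measure (the unit IDs, coordinates, and estimates are hypothetical):
```python
import math

# units: {unit_id: (x, y, raw_estimate)} -- all values hypothetical.
units = {
    "A": (0.0, 0.0, 120.0),
    "B": (1.0, 0.0, 100.0),
    "C": (0.0, 2.0, 150.0),
}

def smoothed_estimate(target: str, units: dict) -> float:
    """Weight nearby units' estimates by inverse distance to the target unit."""
    x0, y0, own = units[target]
    weights, estimates = [1.0], [own]  # give the unit itself full weight
    for uid, (x, y, est) in units.items():
        if uid == target:
            continue
        dist = math.hypot(x - x0, y - y0)
        weights.append(1.0 / (1.0 + dist))  # closer units get more weight
        estimates.append(est)
    return sum(w * e for w, e in zip(weights, estimates)) / sum(weights)

print(round(smoothed_estimate("A", units), 1))
```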
advantage of distance-based smoothing
easy to understand and implement
disadvantage of distance-based smoothing
it assumes that a given distance has the same impact on similarity of risk regardless of whether the area is urban or rural. Additionally, the presence of a natural or artificial boundary between two geographic units is not taken into consideration
adjacency-based spatial smoothing
weights the information from one geographic unit with estimators from successive rings of adjacent units
advantage of adjacency-based smoothing
it handles urban/rural differences more appropriately and accounts for natural or artificial boundaries better than distance-based smoothing
adjacency-based smoothing tends to be most appropriate for which perils
perils driven heavily by socio-demographic characteristics
dangers of over-smoothing
the actuary may be masking the real spatial variation among the risks
dangers of under-smoothing
the actuary may be leaving considerable noise in the estimator
what is the goal when combining units into territories
minimize within-territory heterogeneity and maximize between-territory heterogeneity
Ward’s clustering method
creates boundaries that lead to the smallest within-cluster sum of squared differences. This tends to produce clusters with approximately the same number of observations.
formula for limited average severity
LAS(H) = \int_0^H x \, f(x) \, dx + H \int_H^\infty f(x) \, dx
increased limit factor formula
LAS(H)/LAS(B)
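A short sketch of LAS and the increased limit factor, assuming an exponential severity with mean theta, for which LAS(H) has the closed form theta * (1 - exp(-H/theta)); the limits and mean below are hypothetical:
```python
import math

theta = 25_000.0  # hypothetical mean severity

def las(h: float) -> float:
    """Limited average severity at limit h for an exponential distribution."""
    return theta * (1.0 - math.exp(-h / theta))

basic_limit = 100_000.0
higher_limit = 500_000.0
ilf = las(higher_limit) / las(basic_limit)  # ILF(H) = LAS(H) / LAS(B)
print(f"ILF({higher_limit:,.0f}) = {ilf:.4f}")
```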
loss elimination ratio
the ratio by which losses and expenses are reduced as a result of an increase in the deductible
what approach do commercial lines insurers typically use to determine applicable expense provisions
the all-variable approach
coinsurance clause
the insurer may require a minimum insurance to value or else payment on covered losses will be reduced proportionately by the amount of underinsurance
coinsurance apportionment ratio
the relationship between the amount of insurance selected and the coinsurance requirement and is the factor applied to the loss amount to calculate the indemnity payment
formula for coinsurance apportionment ratio
a = face value of policy / (coinsurance percentage * value of property), i.e., a = F / (cV)
coinsurance penalty
the amount by which the indemnity payment is reduced by the application of the coinsurance clause
3 conditions for a coinsurance penalty
- A non-zero loss has occurred (i.e., L > 0).
- The face amount of insurance is less than the coinsurance requirement (i.e., F < cV).
- The loss is less than the coinsurance requirement (i.e., L < cV).
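A minimal sketch tying the last three cards together: it computes the apportionment ratio a = F/(cV), the indemnity payment, and the resulting coinsurance penalty (all inputs hypothetical):
```python
# Symbols follow the cards above: F = face value, c = coinsurance percentage,
# V = property value, L = loss amount.
def indemnity(F: float, c: float, V: float, L: float) -> float:
    a = min(F / (c * V), 1.0)   # apportionment ratio, capped at 1
    return min(a * L, F)        # payment never exceeds the face amount

F, c, V, L = 150_000.0, 0.80, 250_000.0, 100_000.0
payment = indemnity(F, c, V, L)        # all three penalty conditions hold here
penalty = min(L, F) - payment          # coinsurance penalty on this loss
print(f"payment = {payment:,.0f}, penalty = {penalty:,.0f}")  # 75,000 / 25,000
```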
guaranteed replacement cost
a policy feature that allows replacement cost to exceed the policy limit if the property is 100% insured to value and, in some cases, subject to annual indexation.
classical credibility estimate
estimate = Z * (observed experience) + (1 - Z) * (related experience)
Bühlmann credibility estimate
estimate = Z * (observed experience) + (1 - Z) * (prior mean), where Z is the least squares credibility
in least squares credibility what is the formula for Z
Z = N / (N + K), where K = E[Var(X)] / Var(E[X])
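A small sketch of the least squares credibility weight and the Bühlmann estimate; the values of N, E[Var(X)], and Var(E[X]) are hypothetical:
```python
n = 500        # number of observations
epv = 4.0e6    # expected process variance, E[Var(X)]
vhm = 1.0e4    # variance of hypothetical means, Var(E[X])

k = epv / vhm  # K = E[Var(X)] / Var(E[X])
z = n / (n + k)

observed, prior_mean = 1_250.0, 1_000.0
estimate = z * observed + (1 - z) * prior_mean
print(f"Z = {z:.3f}, estimate = {estimate:,.1f}")
```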
hard market
the phase of the underwriting cycle that sees higher price levels and increased profitability
variable premium
the portion of the total premium that varies by risk characteristics
flat or additive premium
the portion of premium that is derived from expense fees or other dollar additives
formula for proposed fixed expense fee
FE / (1 - V - Q), where FE = average fixed expense per exposure, V = variable expense provision, and Q = profit provision
proposed base rate with flat premium components
Bp = Bc x (Pp - Ap) / (Pc - Ac), where B = base rate, P = average premium, A = average additive premium, and subscripts p and c denote proposed and current
Formula : proposed base rate under weighted average rate differential method
Proposed base rate = (proposed average premium - proposed additive fees) / proposed average rating factor
off-balance factor
1/(1+% change in average rate differential)
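A quick sketch of the off-balance factor, assuming the % change in the average rate differential is measured as proposed/current - 1 (both differentials hypothetical):
```python
current_avg_differential = 1.20
proposed_avg_differential = 1.26

pct_change = proposed_avg_differential / current_avg_differential - 1.0  # +5%
off_balance_factor = 1.0 / (1.0 + pct_change)
print(f"off-balance factor = {off_balance_factor:.4f}")  # ~0.9524
```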
what to do when rates increase as a result of a minimum premium
multiply the base rate by the offset factor = (prem without min)/(prem with min)
premium transition rule
dictates the minimum or maximum amount that a premium can change for a single insured in one renewal
ex ante moral hazard
occurs when there is an increase in the underlying risky behavior causing the loss
ex post moral hazard
occurs when an individual asks the insurer to pay for more of the negative consequences than would have otherwise been the case
2 methods of aggregating exposures
- calendar year exposures
- policy year exposures
which exposure aggregation method can assign a single earned exposure to multiple time periods
calendar year exposures
2 types of manual rate modification techniques
- experience rating
- schedule rating
Primary insurance
refers to the first layer of insurance coverage. Primary insurance pays compensation in the event of claims arising out of an insured event ahead of any other insurance coverages that the policyholder may have
Umbrella and excess insurance
typically refers to liability types of coverage available to individuals and companies protecting them against claims above and beyond the amounts covered by primary insurance policies or in some circumstances for claims not covered by the primary policies.
5 elements of unpaid claim estimate
- case outstanding
- provision for future development on known claims
- estimate for reopened claims
- provision for claims incurred but not reported
- provision for claims in transit (incurred and reported but not recorded)
TPA
Third-party claims administrators who handle claims from beginning to end.
IA (Independent Adjuster)
Hired by insurance companies to handle specific claims for which the insurer does not have expertise
First decision of a claims adjuster
whether or not the claim is covered under the terms of a valid policy
Second step for a claims professional
Set up an initial case outstanding
case outstanding
the estimated future payments on a claim at any specific point in time. This includes all expenses
When can the pure premium indication method not be used?
When exposure info is unavailable
When can the Loss Ratio indication method not be used?
When pricing a new line of business, because the method requires an existing (current) rate
First trend factor in two-step premium trending
factor = (latest period AWP @ CRL) / (avg EP @ CRL), where AWP = average written premium, EP = earned premium, and CRL = current rate level
Premium trend length
(average written date during the period the proposed rates are expected to be in effect) - (average written date of the latest period of awp data)
Second trend factor in two-step premium trending
factor = (1 + % trend)^(premium trend length)
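A minimal sketch of two-step premium trending combining both factors above (all inputs hypothetical):
```python
# Step 1 brings historical average earned premium to the latest written level;
# step 2 projects forward over the premium trend length.
latest_awp_at_crl = 1_060.0   # latest period avg written premium @ current rate level
avg_ep_at_crl = 1_000.0       # historical avg earned premium @ current rate level
annual_trend = 0.02           # selected premium trend
trend_length_years = 2.5      # future avg written date - avg written date of latest AWP

step1 = latest_awp_at_crl / avg_ep_at_crl         # current trend factor
step2 = (1 + annual_trend) ** trend_length_years  # projected trend factor
print(f"total premium trend factor = {step1 * step2:.4f}")
```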
When is Harwayne's credibility method used?
when the subject experience and the related experience have significantly different distributions
formula for complement of credibility using ILF’s
C = La * (ILF(a+x) - ILF(a)) / ILF(a), where La = the losses capped at attachment point a (avg pure premium capped at a) and ILF(a+x) = the increased limit factor at the upper boundary of the excess layer
What is the ILF complement of credibility used for
to adjust losses that are capped at an attachment point to produce an estimate of losses in a specific excess layer
formula for complement of credibility using Lower Limits analysis
C = Ld * (ILF(a+x) - ILF(a)) / ILF(d), where Ld = losses capped at some lower limit d
when is the lower limits analysis complement of credibility used?
If the losses capped at the attachment point are too sparse
formula for complement of credibility using Limits analysis. (NOT LOWER LIMITS)
C = LR * Sum over all limits d > a of [Pd * (ILF(min(d, a+x)) - ILF(a)) / ILF(d)],
where Pd = total premium for policies with limit d and LR = loss ratio.
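A sketch of the limits analysis complement using a hypothetical ILF table and premium distribution by limit; the layer runs from attachment a to a + x:
```python
# Hypothetical ILF table and premium by policy limit.
ilf = {100_000: 1.00, 250_000: 1.25, 500_000: 1.45, 1_000_000: 1.60}
premium_by_limit = {250_000: 400_000.0, 500_000: 300_000.0, 1_000_000: 300_000.0}
loss_ratio = 0.65
a, x = 250_000, 250_000  # layer from 250k to 500k

complement = loss_ratio * sum(
    p * (ilf[min(d, a + x)] - ilf[a]) / ilf[d]
    for d, p in premium_by_limit.items()
    if d > a  # only policies with limits above the attachment contribute
)
print(f"complement of credibility = {complement:,.0f}")
```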
complement of credibility for class x using rate change of larger group
C = loss cost for class x * (1 + indicated rate change for the larger group).
Credibility-weight the loss costs, then calculate the indication.
what are Berquist-Sherman techniques used for
adjusting development triangles for changes in claim settlement rates or case reserve adequacy
The main assumption of the paid Berquist-Sherman technique
changes in disposal rates are due to speedups or slowdowns in claim settlement rates and not due to changes in the rate of reporting of claims or changes in prioritization between small and large claims
what is the paid Berquist-Sherman technique used for
adjusting development triangles for changes in claim settlement rates
what is the reported Berquist-Sherman technique used for
adjusting development triangles for changes in case reserve adequacy
The main assumption of the reported Berquist-Sherman technique
Any differences between the annual changes in average case reserves at each maturity and the severity trend are due to changes in case reserve adequacy and NOT due to large unpaid losses
Which Berquist-Sherman technique needs a severity trend to adjust for inflation
Reported Berquist-Sherman technique
disposal rate
closed claims/projected ultimate claims
How do you adjust each cell of the cumulative paid claim triangle when doing a Berquist-Sherman settlement rate adjustment?
Let X = paid losses and D = disposal rate, with subscripts: N = next column, S = selected, T = this column, P = previous column.
Adding: XT + (XN - XT) * [(DS - DT) / (DN - DT)].
Subtracting: XP + (XT - XP) * [(DS - DP) / (DT - DP)]
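A minimal sketch of the interpolation above; the same linear-interpolation helper covers both the adding form (this column vs. next) and the subtracting form (previous column vs. this), with hypothetical inputs:
```python
def bs_interpolate(x_lo: float, x_hi: float,
                   d_lo: float, d_hi: float, d_sel: float) -> float:
    """Linearly interpolate paid losses against disposal rates."""
    return x_lo + (x_hi - x_lo) * (d_sel - d_lo) / (d_hi - d_lo)

# "Adding" form: this column and the next column bracket the selected rate.
adj_paid = bs_interpolate(x_lo=500_000.0, x_hi=800_000.0,
                          d_lo=0.40, d_hi=0.70, d_sel=0.55)
print(f"adjusted paid = {adj_paid:,.0f}")  # 650,000
```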
What determines if the Berquist-Sherman case reserve adequacy adjustment is appropriate?
If the trend in average outstanding case reserve is different from the trend in average paid severity
How do you adjust each cell of the incurred loss triangle in the Berquist-Sherman case reserve adequacy adjustment?
Detrend the average case outstanding triangle for the severity trend and multiply by open claims, then add paid losses.
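A sketch of one cell of that adjustment, assuming the severity trend is used to detrend the latest diagonal's average case outstanding (all inputs hypothetical):
```python
# Restate the average case reserve by detrending the latest diagonal, then
# rebuild incurred losses for the cell.
latest_diag_avg_case = 30_000.0  # avg case o/s on the latest diagonal, same maturity
severity_trend = 0.05
years_before_diagonal = 2        # how far the cell sits behind the latest diagonal

adj_avg_case = latest_diag_avg_case / (1 + severity_trend) ** years_before_diagonal
open_claims = 40
paid_losses = 900_000.0
adj_incurred = adj_avg_case * open_claims + paid_losses
print(f"adjusted incurred = {adj_incurred:,.0f}")
```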
What is the extra necessary step when doing Berquist-Sherman for both claim settlement rates and case reserve adequacy?
you need to calculate the adjusted open claim counts as the reported claim counts minus the adjusted cumulative closed claim counts (calculated using the selected disposal rates). Then the incurred loss triangle will be (adjusted open claim counts * adjusted average case reserve) + adjusted cumulative paid losses
chain ladder method
link ratio method or development method
when is expected claim ratio loss development commonly used?
- when entering a new line of business with insufficient data
- when operational or environmental changes make historical data irrelevant
- when estimating ultimates at early maturities for long-tailed lines of business where the early age-to-ultimate factors are highly leveraged
How do you calculate expected ultimate losses using the expected claim ratio method?
Ultimate for AY = expected claim ratio * earned premium for AY, or
Ultimate for AY = expected pure premium * earned exposures for AY
Bornhuetter-Ferguson development is a credibility-weighted average of what two techniques?
the development method (link ratio) and the expected claim ratio method, with Z = 1/CDF
Benktander method
credibility-weighted average of the development technique and the BF method, with Z = 1/CDF
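A closing sketch linking the last four cards: development, expected claims, Bornhuetter-Ferguson, and Benktander ultimates, each blended with Z = 1/CDF (all inputs hypothetical):
```python
reported = 600_000.0
cdf = 2.0                      # age-to-ultimate development factor
expected_claims = 1_100_000.0  # e.g., expected claim ratio * earned premium
z = 1.0 / cdf

dev_ultimate = reported * cdf                               # development method
bf_ultimate = z * dev_ultimate + (1 - z) * expected_claims  # Bornhuetter-Ferguson
gb_ultimate = z * dev_ultimate + (1 - z) * bf_ultimate      # Benktander
print(f"dev = {dev_ultimate:,.0f}, BF = {bf_ultimate:,.0f}, GB = {gb_ultimate:,.0f}")
```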