ERA 3 TIA Flashcards
When implementing an Internal Risk Model, what do firms often underestimate? (3)
- Resource Commitment (Staff, Systems, Software)
- Timelines
- Organizational Impact
When implementing an Internal Risk Model - Staff Considerations (5)
- Reporting Lines should be clear
- Leader should have a reputation for Fairness
- Functions Represented: U/W, Planning, Finance, Actuarial, Risk
- Full-Time Staff vs. Part-Time Staff (who also have a day-to-day job)
- Permanent Staff vs. Temporary Staff (for implementation)
Implementing IRM - Scope considerations? (4)
- Underwriting Year
- Reserves
- Assets
- Low Detail on the Whole Company OR High Detail on a Pilot Segment
Parameter Estimation is difficult because (4):
- Low Data Quality
- Low Data Volume
- Unique Characteristics of Firm
- Differing Risk Attitudes
Correlation Assessment in an IRM is difficult because (4):
- Lack of Data
- High Political Sensitivity
- Spans Multiple Business Units
- Significant impact on Company Risk Profile and Capital Allocation
The IRM team recommends the correlation assumptions, but they are Owned by the CRO/CEO/CUO
Why is Validation of an Internal Risk Model difficult? How can we Validate?
- Difficult: there is No current model to compare to
- Validate: Review a series of Complementary variables over an extended period
How should a Pilot Test be done for the
implementation of an Internal Risk Model?
- Provide output in parallel to current decision metrics (allows users to get comfortable with the new metrics)
- Run at a High Level of the Company OR in Detail on a Pilot Segment
- Provide Education on the New Metrics
- Each Quarter, increase the Weight given to the new metrics
Recommendations for Integration and Maintenance
of an Internal Risk Model (4)
- Integrate into the Corporate Calendar that already exists
- Major Updates - no more than twice a year
- Minor Updates - via scaling
- Input/Output - Ownership and Control must be very clear
Formula for Coefficient of Variation (CV) of Losses
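The answer side of this card is not reproduced above. As a hedged reference point only, the standard collective risk model (frequency N independent of i.i.d. severities X; the source text's version may add contagion/mixing parameters) gives:

$$\mathrm{CV}(L)=\frac{\sqrt{\operatorname{Var}(L)}}{\mathrm{E}[L]},\qquad \operatorname{Var}(L)=\mathrm{E}[N]\operatorname{Var}(X)+\operatorname{Var}(N)\,\mathrm{E}[X]^{2},$$
$$\text{so}\qquad \mathrm{CV}(L)^{2}=\frac{\mathrm{CV}(X)^{2}}{\mathrm{E}[N]}+\mathrm{CV}(N)^{2}.$$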
What is Superimposed Inflation?
Severity Trend less General Inflation
[Claim Severity Trend] = [General Inflation] + [Superimposed Inflation]
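A quick illustrative plug-in (the percentages are invented for illustration, not from the source): with 2% general inflation and 3% superimposed inflation,

$$[\text{Claim Severity Trend}] = 2\% + 3\% = 5\%.$$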
For Projecting Annual Loss Trend, the author
recommends an AR(1) process with what parameters?
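The recommended parameters are not shown on this card. The sketch below only illustrates what an AR(1) annual loss trend process looks like in Python; the mean level, autocorrelation coefficient, and volatility (`mu`, `phi`, `sigma`) are placeholder values, not the author's recommendations.

```python
import numpy as np

def simulate_ar1_trend(n_years, mu=0.05, phi=0.6, sigma=0.02, t0=0.05, seed=None):
    """Simulate an AR(1) annual loss trend path:
    t_k = mu + phi * (t_{k-1} - mu) + eps_k, with eps_k ~ N(0, sigma^2).
    All parameter values here are illustrative placeholders."""
    rng = np.random.default_rng(seed)
    trend = np.empty(n_years)
    prev = t0
    for k in range(n_years):
        prev = mu + phi * (prev - mu) + rng.normal(0.0, sigma)
        trend[k] = prev
    return trend

# Example: one simulated 10-year trend path
print(simulate_ar1_trend(10, seed=42))
```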
What is the preferred method to estimate parameters
for Frequency and Severity Distributions?
Maximum Likelihood Estimator (MLE)
Among Unbiased estimators, it has the lowest Estimation
Error (for large data sets)
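A minimal sketch of an MLE fit with scipy, assuming a LogNormal severity and a location fixed at zero (both are assumptions for illustration, not prescriptions from the text):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Illustrative claim severities; in practice these come from company data.
claims = rng.lognormal(mean=9.0, sigma=1.2, size=500)

# scipy's .fit() maximizes the likelihood; floc=0 fixes the location at zero
# so only the shape (sigma) and scale (exp(mu)) are estimated.
shape, loc, scale = stats.lognorm.fit(claims, floc=0)
mu_hat, sigma_hat = np.log(scale), shape
print(f"MLE estimates: mu = {mu_hat:.3f}, sigma = {sigma_hat:.3f}")
```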
When Estimating Parameters, how can we estimate correlations?
- From the Information Matrix of the MLE fit (its inverse gives the covariance matrix of the parameter estimates)
How do the authors recommend we model parameter
estimates and their dependencies?
Model the parameter estimates as Joint LogNormal
with correlations from the Information Matrix
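A minimal sketch of the joint-LogNormal idea in Python. The point estimates and their covariance matrix are illustrative placeholders standing in for MLE output and the inverse of the Information Matrix:

```python
import numpy as np

# Placeholder MLE point estimates (e.g., a severity mu and sigma) and their
# covariance matrix (in practice, the inverse of the Information Matrix).
est = np.array([9.0, 1.2])
cov = np.array([[0.010, 0.002],
                [0.002, 0.004]])

# Delta-method approximation: covariance of the log-parameters.
cov_log = cov / np.outer(est, est)

# Joint LogNormal simulation: multivariate normal on the log scale, then
# exponentiate. Medians match the point estimates, simulated values stay
# positive, and the distribution is right-skewed (heavy-tailed).
rng = np.random.default_rng(1)
sims = np.exp(rng.multivariate_normal(np.log(est), cov_log, size=10_000))
print(sims.mean(axis=0))
```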
Why do we prefer to use Joint LogNormal to model
estimates of parameters? (2)
- Removes Negative Values from possible simulated values
- Parameter Estimates have a heavy tail - LogNormal captures this