CIA.Models Flashcards
Define “model”
A practical representation of relationships among entities using financial, economic, mathematical, or statistical concepts
What are the elements of a “model” (3)
SIR:
- Model Specification
- Model Implementation
- Model Run
Define “model specification”
Is the description of the components of a model and the interrelationships of those components with each other, including the types of data, assumptions, methods, entities, and events
Define “model implementation”
Is one or more systems developed to perform the calculation for a model specification
Define “model run”
Is a set of inputs and the corresponding results produced by a model implementation
Define “model risk”
Is the risk that, due to flaws or limitations in the model or in its use, the actuary or a user of the results of the model will draw an inappropriate conclusion from those results
What is the main distinction between a calculation and a model
A model requires more documentation (how it was chosen, how it’s used)
Why is there always risk in using a model
Because a model is a simplification of reality
Name 3 strategies employed by actuaries when mitigating model risk
- Choose the model carefully for the task at hand
- Use the model (one-time or ongoing), or oversee its use, with appropriate care
- Communicate the results of the model appropriately
How can model risk be measured (2)
- Severity of model failure (how bad can it be)
- Frequency of model failure
Describe considerations in assessing the severity of model failure (3)
- Financial significance (Ex: severity is higher if estimating a major balance sheet item)
- Importance of model (Ex: severity is lower if multiple models are being used)
- Frequency of use of model (Ex: severity is higher if model is used frequently)
Describe considerations in assessing the likelihood of model failure (4)
- Complexity of model (ex: higher complexity means higher likelihood of misuse of model)
- Expertise of users (ex: non-expert users may not understand model limitations)
- Documentation of model (ex: bad documentation means high likelihood of model failure)
- Adequacy of testing (ex: inadequate testing means high likelihood of model failure)
Does the actuary have more control over the SEVERITY or LIKELIHOOD of model failure (justify)?
More control over likelihood through:
- Choosing a better model
- Exercising greater care in validation
- Employing tighter controls for model runs
Identify the steps an actuary should take before using a new model (4)
- Review specifications
- Validate implementation
- Deal with limitations
- Keep documentation
Describe what an actuary does when reviewing a model’s SPECIFICATIONS
Verify DAMs:
- your DATA fits the model's requirements
- ASSUMPTIONS are appropriate
- METHODS are sound
If using a third-party model, the actuary would perform appropriate tests to assess any important aspects not covered in the user documentation
Describe what an actuary does when validating a model’s IMPLEMENTATION
- Compare with other tested models
- Maintain a set of test cases
- Backtesting (testing with historical data where you already know the answer)
- Run an entire live file through successive versions of the model (for models with a higher risk-rating)
- Peer review of testing procedure
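Illustrative only — a minimal Python sketch of the test-case and backtesting checks above, using a hypothetical claims model and made-up figures:
```python
# Hypothetical implementation under test: expected claims = exposure * frequency * severity.
def projected_claims(exposure, frequency, severity):
    return exposure * frequency * severity

# Regression test cases with known answers, maintained across model versions.
TEST_CASES = [
    # (exposure, frequency, severity, expected_result)
    (1_000, 0.05, 2_000.0, 100_000.0),
    (500,   0.10, 1_500.0,  75_000.0),
]

def run_regression_tests(tolerance=1e-6):
    for exposure, freq, sev, expected in TEST_CASES:
        actual = projected_claims(exposure, freq, sev)
        assert abs(actual - expected) <= tolerance, f"expected {expected}, got {actual}"

def backtest(history):
    # Backtesting: rerun the model on historical inputs where the outcome is
    # already known and report the error against what actually happened.
    for period in history:
        modelled = projected_claims(period["exposure"], period["frequency"], period["severity"])
        error = modelled - period["actual_claims"]
        print(f"{period['year']}: modelled {modelled:,.0f}, "
              f"actual {period['actual_claims']:,.0f}, error {error:+,.0f}")

if __name__ == "__main__":
    run_regression_tests()
    backtest([
        {"year": 2022, "exposure": 980,   "frequency": 0.052, "severity": 1_950.0, "actual_claims": 101_300.0},
        {"year": 2023, "exposure": 1_020, "frequency": 0.048, "severity": 2_050.0, "actual_claims": 99_100.0},
    ])
```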
Describe what an actuary does when dealing with a model’s LIMITATIONS
- Understand the range of uses for which the model was designed and tested
- Be aware of which events are independent of each other and which are correlated
- Be alert to assumptions that are fixed or embedded in the model
Describe what an actuary should include when DOCUMENTING a model
- How the model was chosen
- How the model was tested
- What are the model’s limitations
What is an important tool for validating models
A model’s risk rating (riskier models need more thorough validation)
How should an actuary evaluate an existing model that’s being used in a NEW WAY
- Check that the initial model was properly validated
- Review limitations in the new application that may not have been relevant in the initial application
How should an actuary evaluate a model approved for use BY OTHERS
Actuary should review & approve the initial validation report (if possible)
In some cases, the actuary may choose to rely on the validation done by others outside their firm.
How should an actuary evaluate a model OUTSIDE THEIR EXPERTISE
Actuary would determine the appropriate level of reliance on experts, considering:
- If the individuals on whom the actuary is relying are considered experts in their field of practice
- The extent to which the model has been reviewed by experts in the applicable field
- The risk rating associated with the model
Actuary would make a reasonable attempt to understand:
- The basic workings of the model including its inputs, outputs, and general approach
- The testing and validation work that was completed
- The model’s complexity and the control framework used
Give an example of a model outside actuary’s expertise
A credit-scoring model
What is the purpose of sensitivity-testing regarding models
- To validate a model
- To understand the relationship between inputs/outputs
- To develop a sense of comfort with the model
How can model assumptions be tested in the context of sensitivity-testing
- Test assumptions OUTSIDE expected range
- Test assumptions singly and then IN COMBINATION
- Test assumptions where the relationship between inputs and outputs is NONLINEAR
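Illustrative only — a minimal Python sketch of sensitivity-testing assumptions singly and then in combination, using a hypothetical reserve formula and made-up shock ranges:
```python
from itertools import product

# Hypothetical model: a reserve that responds non-linearly to an interest-rate
# assumption and roughly linearly to an inflation assumption.
def reserve(base, shocks):
    rate = 0.04 + shocks.get("rate", 0.0)
    inflation = 0.02 + shocks.get("inflation", 0.0)
    return base * (1 + inflation) / rate ** 0.5   # illustrative formula only

BASE = 1_000_000.0
SHOCKS = {"rate": [-0.02, 0.0, 0.02], "inflation": [-0.01, 0.0, 0.01]}

# Test assumptions singly.
for name, values in SHOCKS.items():
    for v in values:
        result = reserve(BASE, {name: v})
        print(f"{name:9s} shock {v:+.2%}: {result:,.0f}")

# Then test them in combination, to surface interactions (and non-linear
# effects) that single-assumption tests would miss.
for rate_shock, infl_shock in product(SHOCKS["rate"], SHOCKS["inflation"]):
    result = reserve(BASE, {"rate": rate_shock, "inflation": infl_shock})
    print(f"rate {rate_shock:+.2%}, inflation {infl_shock:+.2%}: {result:,.0f}")
```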
What types of validation should be done when USING a model (3)
DAR: validation of Data, Assumptions, Results
- Data should be reliable and sufficient
- Validate non-global assumptions that vary by model run
- Results should be “reasonable” relative to inputs
What does the actuary consider regarding data reliability (6)
- Reconciliation to other sources (preferably audited)
- Summarize and compare input data to prior periods, if applicable
- Check and investigate data points that are outliers for possible errors.
- How are missing data handled? Is a data assumption made or is an error generated?
- Data assumptions would be reviewed periodically to assess their appropriateness
- Is the size of the data file consistent with prior periods?
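Illustrative only — a minimal sketch of the data-reliability checks above using pandas, with hypothetical column names, figures, and thresholds:
```python
import pandas as pd

# Hypothetical input file: one outlier and one missing premium for illustration.
current = pd.DataFrame({
    "policy_id": [1, 2, 3, 4],
    "premium":   [1_200.0, 950.0, 15_000.0, None],
})
prior_premium_total = 3_400.0   # made-up audited total from the prior period
prior_record_count = 4

# 1. Reconcile totals to another (preferably audited) source.
print("Premium total:", current["premium"].sum(), "vs prior:", prior_premium_total)

# 2. Flag outliers for investigation (crude illustrative rule: more than 3x the median).
median = current["premium"].median()
print("Possible outliers:\n", current[current["premium"] > 3 * median])

# 3. Count missing values; decide whether a data assumption applies or an error is raised.
print("Missing premiums:", current["premium"].isna().sum())

# 4. Check the size of the file is consistent with prior periods.
print("Record count change vs prior period:", len(current) - prior_record_count)
```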
What does the actuary consider regarding data sufficiency (2)
- Do the data meet the requirements of the model specification?
- If the model will be used repeatedly, are the data in a consistent format every time?
What are the 3 considerations for reviewing model assumptions?
- Regular peer review (internal/external) of the assumptions
- Are the intended assumptions the ones used in the model?
- Are model assumptions unchanged unless they were meant to be changed?
What checks can be done to validate the results of a model?
- Are outputs consistent with inputs? (e.g., does the total number of policies match?)
- How many errors were generated, and what amount was involved? Is it within the established tolerance?
- Are results as expected, both in direction and magnitude?
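Illustrative only — a minimal sketch of reconciling model output to its input and checking errors against a tolerance; all figures are made up and the 0.1% tolerance is an assumption:
```python
# Made-up figures throughout; the 0.1% tolerance is an assumption.
input_policy_count = 12_480
input_total_sum_insured = 2_150_000_000.0

output_policy_count = 12_478                                  # policies the model valued
rejected = [("pol_0173", 45_000.0), ("pol_0991", 12_500.0)]   # policies the model rejected as errors

# Outputs consistent with inputs: every input policy is either valued or rejected.
assert output_policy_count + len(rejected) == input_policy_count

# Errors within the established tolerance (assumed here to be 0.1% of sum insured).
error_amount = sum(amount for _, amount in rejected)
tolerance = 0.001 * input_total_sum_insured
status = "within tolerance" if error_amount <= tolerance else "INVESTIGATE"
print(f"Error amount {error_amount:,.0f} vs tolerance {tolerance:,.0f}: {status}")
```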
What will the actuary review when validating the results of a stochastic model?
- The results from a carefully chosen sample of realized deterministic scenarios, covering an appropriate range of inputs and/or assumptions
- The distribution of output results for reasonability, paying particular attention to items such as trend, mean, median, etc.
- Whether the results of the chosen deterministic scenarios are consistent with the distribution of stochastic results
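Illustrative only — a minimal sketch of reviewing stochastic results, assuming a toy present-value model with a normally distributed discount rate (all parameters hypothetical):
```python
import random
import statistics

random.seed(1)

# Toy stochastic model: present value of 100 payable in 10 years, discounted
# at a normally distributed interest rate (parameters are assumptions).
def stochastic_run(rate_mean=0.04, rate_sd=0.01, n=10_000):
    return [100.0 / (1 + random.gauss(rate_mean, rate_sd)) ** 10 for _ in range(n)]

results = sorted(stochastic_run())

# Review the distribution of results for reasonability (mean, median, spread).
print("mean  :", round(statistics.mean(results), 2))
print("median:", round(statistics.median(results), 2))
print("5th / 95th percentile:", round(results[500], 2), round(results[9500], 2))

# Run a few carefully chosen deterministic scenarios and check they sit where
# expected within the stochastic distribution.
for label, rate in [("low rate", 0.02), ("central", 0.04), ("high rate", 0.06)]:
    deterministic = 100.0 / (1 + rate) ** 10
    pct_below = sum(r <= deterministic for r in results) / len(results)
    print(f"{label:9s}: {deterministic:6.2f} ~ {pct_below:.0%} percentile of stochastic results")
```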
Is severity or likelihood of failure impacted ( & state level of impact): Documentation scribbled on napkin
Impacts likelihood of failure: HIGH RISK
Is severity or likelihood of failure impacted ( & state level of impact): model used ONCE every 5 years
Impacts severity of failure: LOW RISK
Is severity or likelihood of failure impacted ( & state level of impact): model values 90% of business
Impacts severity of failure: HIGH RISK
Is severity or likelihood of failure impacted ( & state level of impact): User is very inexperienced
Impacts likelihood of failure: HIGH RISK
Is severity or likelihood of failure impacted ( & state level of impact): model is STREAMLINED for ease of use
Impacts likelihood of failure: LOW RISK
- Could also argue it presents severity risk for being too simple (may omit important variables/interactions)
Is severity or likelihood of failure impacted ( & state level of impact): model used to price PREMIUMS for 2% of book (by premium)
Impacts severity of failure: LOW RISK
Is severity or likelihood of failure impacted ( & state level of impact): model used to determine SELLING PRICE of company
Impacts severity of failure: HIGH RISK
Compare the uni-dimensional approach to model risk-rating with the two-dimensional approach
Uni-dimensional approach:
- Rating on a single scale from 1 to 20 (20 is highest risk)
- Based on financial significance, complexity, expertise of users, documentation
Two-dimensional approach:
- Assessed separately for severity & likelihood of failure
- Final rating is a balance of these
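Illustrative only — one possible (hypothetical) way to combine separate severity and likelihood scores into a two-dimensional risk rating; the real scales and weighting would be set by the firm:
```python
# Hypothetical 3-point scales for each dimension.
SCALE = {"low": 1, "medium": 2, "high": 3}

def risk_rating(severity, likelihood):
    # One possible way to balance the two dimensions: take their product, so a
    # model only gets the highest rating when both dimensions are elevated.
    return SCALE[severity] * SCALE[likelihood]

# E.g. a model valuing 90% of the business with inexperienced users, versus a
# well-documented model pricing 2% of the book.
print(risk_rating("high", "high"))    # 9 -> most thorough validation required
print(risk_rating("low", "medium"))   # 2 -> lighter-touch validation may suffice
```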