ILA-LPM B Flashcards
List the 6 steps to establish experience assumptions
- Identify assumptions needed
- Determine structure of each assumption
- Analyze experience & trends
- Review assumptions for reasonableness, consistency
- Document assumptions
- Monitor experience & update assumptions
I DARDM
3 types of primary assumptions needed for an experience study
- obligation
- asset
- scenario
Describe primary types of assumptions needed for an experience study:
- Obligation assumptions
- LIABILITY!
- mortality
- lapse
- expense
Describe primary types of assumptions needed for an experience study:
- Asset assumptions
- investment income rate
- capital gains rate
- defaults
Describe primary types of assumptions needed for an experience study:
- Scenario assumptions
- deterministic vs. stochastic interest rates
- sensitivity testing
How do you determine experience classes for an experience study?
- groups of policies w/ same assumption
- similar type, structure, marketing objectives
What are the key principles when deciding complexity?
- reflect differences in actual experience
- use objective definitions
- be practical and cost effective
List 6 considerations when analyzing experience and trends for an experience study
- evaluate credibility
- evaluate quality of data
- actual vs similar
- reflect trends
- reflect company and external factors
- sensitivity test the assumptions
List considerations when analyzing experience and trends for an experience study:
- Evaluate credibility
- quantity of data
- homogeneity
- reasonableness
List considerations when analyzing experience and trends for an experience study:
- Evaluate quality of data
- Alternative sources?
- Appropriate? Comprehensive enough?
List considerations when analyzing experience and trends for an experience study:
- Actual vs. similar experience
- Use actual if available and credible
List considerations when analyzing experience and trends for an experience study:
- Reflect trends
- Example: mortality improvement
List considerations when analyzing experience and trends for an experience study:
- Reflect company and external factors
- underwriting
- investment policy
- other business practices
Validation checks to review assumptions for reasonableness and consistency in an experience study
- static (starting reserves)
- dynamic (projected reserves)
Consistency checks to review assumptions for reasonableness and consistency in an experience study
- inflation consistent w/ investment earnings
- mortality anti-selection and lapses
How should assumptions be documented after an experience study?
- Actual assumptions: value, applicable class
- Data: source, values, any concerns, adjustments
- Methods for development: e.g., credibility
- How to use:
- pricing vs. CFT
- sensitivity testing
- regulatory requirements
3 key steps to determine a mortality assumption from an experience study
- determine structure
- analyze experience
- monitor experience
How to analyze experience for a mortality assumption
- mortality study (e.g. 5-yr CY study)
- develop expected mortality rates
- assess credibility
- adjust mortality rates
What are some of the ways that mortality rates are adjusted?
- trends (improvement, etc.)
- anti-selection = conservation of deaths
- blended w/ similar/industry (low cred)
- adjust for UW, distribution, market, etc
- blend male/female rates into aggregate
What is the structure of a mortality assumption
- select and ultimate common
- mortality improvement
- ALB vs. ANB
Possible variations in the structure of a mortality assumption
- risk class
- selection process (type of UW)
- size of policy (bigger face ⇒ lower mortality)
- market method (direct, agent, etc)
ANB
- age nearest birthday
- qANB(x) = 0.5 x [qALB(x) + qALB(x-1)]
ALB
- age last birthday
- qALB(x) = 0.5 x [qANB(x) + qANB(x+1)]
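A minimal Python sketch of the two conversions above (the ALB rates are made up for illustration):

```python
# Convert between ALB and ANB mortality bases by averaging
# adjacent ages, per the formulas above (rates are hypothetical).
alb = {49: 0.0040, 50: 0.0044, 51: 0.0049}   # ALB rates by age

def to_anb(alb_rates: dict, x: int) -> float:
    """qANB(x) = 0.5 * [qALB(x) + qALB(x - 1)]"""
    return 0.5 * (alb_rates[x] + alb_rates[x - 1])

def to_alb(anb_rates: dict, x: int) -> float:
    """qALB(x) = 0.5 * [qANB(x) + qANB(x + 1)]"""
    return 0.5 * (anb_rates[x] + anb_rates[x + 1])

print(round(to_anb(alb, 50), 6))   # 0.5 * (0.0044 + 0.0040) = 0.0042
```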
List and describe the 2 main types of mortality studies
- CY
- activity for single CY
- accounts for new policies, WDs, and deaths
- Anniversary-to-anniversary
- simpler; coincides w/ policy year
- issue year/duration basis
Which risk classes have separate mortality studies?
- non-routine UW
- conversions ⇒ usually higher mortality
- sub-standards
- non-forfeiture (ETI, RPU)
- multiple-life
Calendar year exposure equality
A + N = W + D + B
# of lives:
A = beginning of year
N = enter during year
W = lapse during year
D = die during year
B = alive at end of year
What are the two ways that the Balducci assumption can be used to calculate a mortality rate for a mortality study
- Policy level - sum up individual exposures
- Aggregate - assume new issues and WDs occur mid-year (r = s = 0.5)
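A sketch of the aggregate approach, assuming mid-year new issues and WDs (r = s = 0.5); the hypothetical counts also satisfy the exposure identity A + N = W + D + B from the previous card:

```python
# Aggregate Balducci exposure: new issues and WDs assumed mid-year
# (r = s = 0.5); deaths are exposed to the end of the policy year.
A, N, W, D = 1000, 100, 80, 10       # hypothetical counts
B = A + N - W - D                    # lives at end of year (A + N = W + D + B)
exposure = A + 0.5 * N - 0.5 * W     # aggregate exposure
qx = D / exposure
print(round(qx, 5))                  # 10 / 1010 ≈ 0.0099
```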
Credibility considerations for a mortality study
- multi-year studies - better credibility
- 5-yrs good; too many years hide trend
- lower for amount-based studies
- A/E ratios - credibility & tracking trends
Describe conservation of deaths principle
Total mortality = weighted average of:
- mortality of “select lives” that lapse in duration r
- mortality of lives that stay w/ original policy
Key pricing concept: If you don’t account for selective lapsation, you will under-price the insurance product
Anti-selection's effect on mortality
- healthy lives more likely to lapse than unhealthy - especially w/ high renewal premiums
- mortality of those left begins to increase by “the conservation of deaths” principle
List at least 4 variations in lapse assumption structure
- product design
- distribution channel
- policy size
- premium mode
- product type
- conservation program effectiveness
Describe variations in lapse assumption structure (i.e. ways that lapses vary):
- Product design
- term: high (shock) lapses when ART period begins
- annuities: high (shock) lapses after SC period
- permanent insurance: high 1st year lapses
Describe variations in lapse assumption structure (i.e. ways that lapses vary):
- Premium mode
- lapses generally occur on premium due dates
- flexible premium: assume uniform monthly
Describe variations in lapse assumption structure (i.e. ways that lapses vary):
- Policy size
- small policies: high early lapses
- large policies: high later lapses
Describe variations in lapse assumption structure (i.e. ways that lapses vary):
- Distribution channel
- brokerage: higher lapses
- agency: higher lapses if agent quality is poor
Describe variations in lapse assumption structure (i.e. ways that lapses vary):
- Product type
- Deferred annuities: more sensitive to lapse rates than life insurance
Exposure for a lapse study
- same as mort study - death and lapses trade places
- lapses get full year of exposure
w(x) = W / (A + (1 - r)N - (1 - s)D)
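The same formula as a Python sketch, again with hypothetical counts and mid-year new issues/deaths (r = s = 0.5):

```python
# Lapse rate: lapses exposed for the full year; deaths exposed only
# to the (assumed mid-year) date of death; new issues mid-year.
A, N, W, D = 1000, 100, 80, 10   # hypothetical counts
r = s = 0.5
wx = W / (A + (1 - r) * N - (1 - s) * D)
print(round(wx, 5))              # 80 / 1045 ≈ 0.07656
```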
List specific types of lapses
- failure to pay premium (term)
- full/ partial cash surrender
- policy loan > CSV
- nonforfeiture transfers - ETI, RPU
- term conversions
- deviations in actual prem as a % of target prem (UL)
Describe the following aspects of interest rate assumption structure.
- Deterministic vs. Stochastic
Investment Assets:
- deterministic
- portfolio average
- investment generation
- stochastic
- useful for more important risk
- rates vary w/ time, asset class, quality, & credit risk
Describe the following aspects of interest rate assumption structure.
- Policy Loans
- modeled as assets or negative liabilities
- net of policy loan expenses
- utilization rate
Formula for determining interest rate on a book value basis
I = Ai + (B - A - I)(i/2)
i = 2I / (A + B - I)
A = BoY
B = EoY
C = B - A - I = net CFs (assumed mid-year)
Formula for determining interest rates on a market value basis
r = (B - A - C) / (A + C/2)
If done daily, set C = 0
A = BoY
B = EoY
C = net CFs (assume mid-year)
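Both rate formulas (book value on the previous card, market value on this one) as a Python sketch with made-up values:

```python
def book_value_rate(A: float, B: float, I: float) -> float:
    """i = 2I / (A + B - I), from I = A*i + (B - A - I)*(i/2)."""
    return 2 * I / (A + B - I)

def market_value_rate(A: float, B: float, C: float) -> float:
    """r = (B - A - C) / (A + C/2), net CFs C assumed mid-year."""
    return (B - A - C) / (A + C / 2)

# hypothetical asset values, income, and cash flows
print(round(book_value_rate(A=1000, B=1100, I=60), 4))    # ≈ 0.0588
print(round(market_value_rate(A=1000, B=1100, C=40), 4))  # ≈ 0.0588
```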
Direct vs. indirect expenses
- Direct (vary w/ sales)
- commissions
- premium taxes
- UW
- Indirect (express as per policy, % of premium, or per unit)
- OH
- Both
- maintenance
- acquisition
- entering new LOB
Describe how exposure is determined in an expense study
- Goal: develop a policy count base for per policy expenses
- How: Count the number of policy years that start in the study’s calendar year
- BOYs “crossed” in CY = A + N -W/2
- Mid-years “crossed” in CY = A + N/2 - W/2
- Exposure count
- (A + B + N)/2 for beginning of year expenses
- (A + B)/2 for mid-year expenses
- assumes WDs & new issues occur mid-year
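A sketch of these exposure counts, with hypothetical policy counts and deaths ignored for simplicity (so B = A + N - W and the two expressions for each count agree exactly):

```python
# Policy-count exposure for per-policy expenses, assuming WDs and
# new issues occur mid-year (deaths ignored for simplicity).
A, N, W = 5000, 600, 400          # hypothetical counts
B = A + N - W                     # in force at end of year

boy_crossed = A + N - W / 2       # policy-year beginnings crossed in CY
mid_crossed = A + N / 2 - W / 2   # policy-year mid-points crossed in CY

print(boy_crossed, (A + B + N) / 2)   # 5400.0 5400.0 (same result)
print(mid_crossed, (A + B) / 2)       # 5100.0 5100.0
```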
4 methods for allocating expenses
- transaction count (# of premium payments)
- transfer costs (employee benefit cost per employee)
- employee time spent
- index-based allocation (policy count or premium)
Define “cell” in terms of an experience study
combinations of data dimensioned by issue age, sex, smoker/nonsmoker, policy year, etc.
Define “rate types” in terms of an experience study:
- decrement rates vs. utilization rates
- decrement = probabilities from 0 to 1
- mortality, morbidity, lapse, etc.
- utilization = NOT probabilities (can exceed 1)
- WD, option election rates, etc.
Describe amount-weighted studies, for grouped amount weights
- lx terms become lx x avg DB in force
- dx terms become dx x avg DB paid on death
- wx terms become wx x avg DB on withdrawn policies
- sum amount exposures and calculate qx’s in the usual way
Describe amount-weighted studies, for individual amount weights
- multiply each life year exposure by the policy’s DB
- sum amount exposures and calculate qx’s in usual way
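A sketch of the individual-amount approach with a toy three-policy block (numbers are illustrative only, so the resulting q is unrealistically large):

```python
# Individual amount weights: multiply each life-year exposure by
# the policy's death benefit, then compute q in the usual way.
policies = [  # (life-year exposure, death benefit, died?)
    (1.0, 100_000, False),
    (1.0, 250_000, True),
    (0.5,  50_000, False),   # e.g. a mid-year new issue
]
amount_exposure = sum(e * db for e, db, _ in policies)
death_amount = sum(db for _, db, died in policies if died)
qx_amt = death_amount / amount_exposure
print(round(qx_amt, 4))      # 250,000 / 375,000 ≈ 0.6667 (toy numbers)
```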
Describe the relationship between amount-weighted q and life-weighted q
Amount-weighted q will be > life-weighted q if average DB paid on death > average DB in force
Uses of an A/E analysis
- compare actual mortality/lapse to expected
- develop best estimate assumptions as a multiple of expected amounts
- Best estimate rate for age x = (A/E) x qxe
- valuation, risk management, financial planning, etc.
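A sketch of applying the A/E multiple to hypothetical expected rates:

```python
# Best-estimate rates as an A/E multiple of an expected basis.
actual_deaths, expected_deaths = 95, 100     # hypothetical study totals
ae_ratio = actual_deaths / expected_deaths   # A/E = 0.95

q_expected = {60: 0.0080, 61: 0.0088}        # hypothetical expected rates
q_best_estimate = {x: ae_ratio * q for x, q in q_expected.items()}
print(q_best_estimate)                       # ≈ {60: 0.0076, 61: 0.00836}
```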
Frequency and severity formulas
- fx = nx/Ex = avg claim frequency
- sx = Cx/nx = avg claim amount
- nx = # of claims incurred at age x
- Cx = total claim amount incurred at age x
- Ex = central exposure (or initial)
Experience study calculations:
- Average withdrawal taken as a % of the max withdrawal formula
- ux = sx/Mx
- Mx = max w/d allowed
- sx = average size (severity) of w/d
Experience study calculations:
- Average size (severity) of withdrawals taken formula
- sx = Wx/nx
- Wx = total actual w/d amount
- nx = # of contracts who took a w/d
Experience study calculations:
- Withdrawal frequency formula
- fx = nx/Ex
- Ex = count-based exposure (excludes deaths and lapses)
- nx = # of contracts who took a w/d
Experience study calculations:
- Count-based exposure (excludes deaths and lapses)
- Ex = lx - dx - wx
- wx = # of lapses
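Tying the last few cards together, a sketch of the withdrawal chain (exposure, frequency, severity, utilization) with hypothetical inputs:

```python
# Withdrawal study chain: exposure, frequency, severity, utilization.
lx, dx, wx = 1000, 10, 40    # lives, deaths, lapses (hypothetical)
Ex = lx - dx - wx            # count exposure, excl. deaths and lapses
nx = 120                     # contracts that took a withdrawal
Wx = 600_000                 # total actual withdrawal amount
Mx = 10_000                  # max withdrawal allowed per contract

fx = nx / Ex                 # frequency ≈ 0.126
sx = Wx / nx                 # severity = 5,000 avg withdrawal
ux = sx / Mx                 # avg w/d as % of max = 0.50
print(round(fx, 3), sx, ux)
```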
Experience study calculations:
- Distortions caused by amount-weighted calculations
- large amounts can distort numerator
- not easy to fix (cap max amount allowed in study)
- risks/behaviors can differ significantly by amount
- small policies have less commitment
- large policies used to defraud
- solution: add band size to cell
Experience study calculations:
- List advantages of multi-year studies
- higher credibility
- less distortion caused by reporting lags
- populate many more cells
- allow study of trends for study-year/CY
Experience study calculations:
- 4 examples of reporting lags
- death claims - not reported for months
- LTCI claims not reported til after EP
- “shoe-boxing” - lag between direct insurers and reinsurers
- life annuities sometimes continue paying after death
Experience study calculations:
- 2 possible solutions to reporting lags
- wait a few months after study period before gathering data
- IBNR claim estimates from previous studies
Experience study calculations:
- 3 examples of non-uniform events
- lapses often vary by month
- prorated mortality rates tend to underestimate deaths early in year at old ages
- LTCI claims immediately following claim
Experience study calculations:
- 3 ways to compensate for non-uniform distributions
- shorter interval
- adjustment factors
- constant force mortality for old ages
Considerations for inclusion of partial policy years in decrement study
- OK when evenly distributed over year
- common for biologically-driven rates (mortality, morbidity, etc.)
- can allocate PYs between CYs
- Otherwise, partial policy years will distort results
- behavioral rates not usually evenly distributed
- lapse, withdrawal, option election, etc.
- do not allocate PYs between CYs
Describe the distributed exposure method
- assumes decrement is uniformly distributed
- allocates exposure/decrements by both PY & CY
- produces same total PY exposure as annual method, but CY allocation differs
- exposure missed by initial partial year (so can extend past study period)
Describe the annual exposure method
- consistent with Balducci
- exposes decrements to end of next anniversary (even beyond study period)
- disadvantage: overstates rates in first partial year; understates rates in final partial year
- exposure extended beyond study period:
- b/c rate is probability
- exposure missed by initial partial year
Algorithm for calculating distributed exposure
- calculate annual exposure
- 1st half exposure = PY t exposure falling in the CY
- deaths/surviving policies from anniversary to end of CY
- other decrements from anniversary to decrement date
- 2nd half exposure = total - 1st half exposure
- allocated to next CY
- In plain English: switch the partial year formulas in the annual exposure method
- Move final partial year deaths to next study period’s first partial year
product-related considerations that may affect experience study calculations
- deaths in premium grace period (30-60 days)
- compromised and denied claims
- study variables changing over time
- reinsured amounts
- substandard and uninsurable lives
Describe product-related considerations that may affect experience study calculations:
- Deaths in premium grace period (30–60 days)
- actual DB paid is net of overdue premiums, but use full DB in mortality study
- expose lapses to beginning of grace period
Describe product-related considerations that may affect experience study calculations:
- Study variables changing over time
- admin system changes
- may change plan codes or billing frequencies (distorts exposure)
Describe product-related considerations that may affect experience study calculations:
- Compromised and denied claims
- don’t count as death claims in mortality study
Describe product-related considerations that may affect experience study calculations:
- Reinsured amounts
- can study amounts net of reinsurance if material enough
Describe product-related considerations that may affect experience study calculations:
- Substandard and uninsurable lives
- typically excluded
List at least 4 claim characteristics that are shared by LTCI and DI products
- multiple payments that can last months or years
- usually paid monthly
- EP before claim payments start (up to 1 year)
- usually limited in some way (e.g. max age for DI, max total benefits, etc.)
- paid when insured meets certain conditions
- claims stop when insured recovers, but can start again
Describe the following types of morbidity studies for LTCI and DI:
- Benefit utilization rate studies
(LTCI only)
Benefit Utilization Rate = Actual Benefits / Max Benefits
Describe the following types of morbidity studies for LTCI and DI:
- Claim termination studies
- separate studies for recovery and death
- DI claims terminate when proof of disability not submitted
- LTC claims terminate when requests for reimbursement stop
- termination rates are highest and most volatile for new claims
Describe the following types of morbidity studies for LTCI and DI:
- Claim severity studies
Average length of claim = total months of claims/# of claims
Average cost per claim = total claims paid/# of claims
Describe the following types of morbidity studies for LTCI and DI:
- Claim incidence studies
- claim incidence rates = annual probabilities
- usually done on “all claims”
- usually based on annual exposure method
List 4 DI-specific considerations for experience studies
- EPs - claims are usually reported after EP
- partial DI benefits - reduces avg cost per claim
- recovery followed by relapse within 6 months
- claim settlements - lump sum paid in lieu of future monthly
List 5 LTCI-specific considerations for experience studies
- EP - 30, 60, 90, 180, and 365
- some benefits indexed to inflation
- mortality rates do not follow any established table
- diagnosis - must capture (drives claim length)
- claim data not always organized - requires work
Experience study considerations for deferred annuities
- deferred annuity utilization rates
- GMxBs: compare benefit paid to the amount payable in absence of GMxB
- contract year data challenges
- data usually monthly
- common approach: convert account balances to contract year basis using a 13-month average
Describe experience study considerations for payout annuities:
- Structured settlements
- from injury lawsuits
- one or more lump sums (not lifetime payouts)
- mortality higher than immediate annuities
- amount exposure - base on reserves or premium paid
Describe experience study considerations for payout annuities:
- immediate annuities
- self-selection - people buy IAs when they think they are healthier than average
- choice of immediate annuity may indicate health
- larger IAs tend to have lower mortality
- amount exposure should be based on monthly payment, not reserves
- mortality improvement is a key consideration
6 major steps in table development
- data development & analysis
- identify table dimensions (exploration techniques)
- populate table (graduation/interpolation/modeling)
- extend and project rates
- review and adjust rates
- finalize
List 4 preliminary activities of data development and analysis
- review previous studies
- clarify purpose of table
- ensure confidentiality of each contributor’s data
- review available data (homogeneity, credibility, etc.)
Define and describe data call process
- data call: formal request for data from contributors
- simpler - more successful
- document what is needed for each data item (columns)
- define data structure (e.g. relational database)
- create detail records (rows)
- document all table relationships
List 6 common data challenges for table development
- incomplete data
- terminology may vary by contributor
- data may not arrive on time
- wrong or improperly formatted data
- time lags in reporting
- lack of resources to transform data
List 6 components in an experience study calculation
- study anniversary date - bday, policy anniversary, etc.
- age basis (ANB, ALB, etc)
- experience study summary records
- count-based and amount-based rates (qx’s, etc.)
- expected results (A/E ratios)
- summary results for each record (event counts and amounts)
List 3 ways to address distortion caused by having mix of age bases in experience data
- assume all ages based on most common method
- recalculate on common basis for each record
- weighted-average age
List 8 steps in data analysis process (table development)
- acquire data
- data validation, preliminary exploration, outlier analysis
- data visualization and preparation
- analytical approach: exploratory vs. advanced analytics
- model creation and assessment
- select model(s) w/ explanatory ability, predictive power, and implementation ease
- select final model
- minimize table dimensions
- replacing grids w/ factors if possible
THINK EXAM PA!
List the 3 steps in the modeling process, within the data analysis process
The modeling process:
- model fitting: select variables w/ lowest p-value
- avoid confounding
- create transformed variables, functions, interaction terms, or stratification
AGAIN - THINK PA!
Describe the purpose of graduation
- Graduation is a mathematical process that smooths an array of rates
- assumes “true” rates follow continuous curve
- required when rates don’t come from model
- can still be used on modeled rates
- goal: balance fit and smoothness
List properties of a preferred graduation method
- produces same total events as ungraduated
- parameters control amount of smoothness
- input fit, table fit, smoothness
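The cards don't prescribe an algorithm; Whittaker-Henderson graduation is one classic method with exactly these ingredients (exposure weights for fit, a parameter h for smoothness, a difference order z). A minimal numpy sketch, offered as an assumption rather than the source's method:

```python
import numpy as np

def whittaker_henderson(q, w, h=10.0, z=3):
    """Minimize sum w*(q - v)^2 + h * sum((z-th diff of v)^2).

    q: crude rates; w: exposure-based weights; h: smoothness
    parameter (larger = smoother); z: difference order.
    """
    n = len(q)
    K = np.diff(np.eye(n), n=z, axis=0)   # z-th difference operator
    Wm = np.diag(w)
    return np.linalg.solve(Wm + h * K.T @ K, Wm @ np.asarray(q))

# hypothetical noisy crude rates; equal exposure weights
crude = [0.010, 0.013, 0.011, 0.016, 0.015, 0.019, 0.022, 0.021]
print(np.round(whittaker_henderson(crude, [1.0] * 8), 4))
```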
Describe credibility and data grouping considerations when graduating rates for an experience table
- non-credible (“incredible”) rates distort graduated rates
- smoothness calc weights credible and non-credible rates equally
- biggest problem: very young and very old ages
- increasing credibility comes at cost of graduation
- grouping rates by age (e.g. quinquennial)
- grouping data by calendar or policy years
- possible solutions: use exposure-weighted averages for graduation
Describe the 4 steps of the graduation process
- Collect and populate graduation input
- Review and adjust input as needed
- Run graduation process for 3 parameters: input fit, table fit, smoothness
- Set parameter value and run the graduation algorithm
- Review/graph/evaluate results
- Repeat until satisfied with parameter
- Run a final graduation using the best fit and smoothness parameters
3 ways to approximate total variance for all lives (from least refined to most)
- overall average size
- average size for lives contributing to qx
- allocate average size between 2 most common sizes in size groups (can be used to develop factors for 1 & 2)
Describe common interpolation methods for table development
- 1-dimensional methods
- linear
- cubic spline
- log-linear or log-cubic (good for mort)
- 2-dimensional methods
- bilinear
- bicubic (smoothest)
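A sketch of 1-dimensional log-linear interpolation (flagged above as good for mortality) between two hypothetical quinquennial pivot rates:

```python
import math

# Log-linear interpolation: linear in ln(q), so rates grow
# geometrically between the known pivot ages.
q_pivots = {60: 0.008, 65: 0.013}   # hypothetical pivot rates

def q_interp(x: int) -> float:
    t = (x - 60) / (65 - 60)
    return math.exp((1 - t) * math.log(q_pivots[60])
                    + t * math.log(q_pivots[65]))

for age in range(60, 66):
    print(age, round(q_interp(age), 5))
```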
Describe 3 methods for extending rates in a table
Often necessary for very young or old ages where credibility is low
- use rates from existing credible table
- use slope as guide
- grade from study’s rates to existing table’s rates
- other data sources
- SSA for old ages
- formulas
- should reproduce rates for nearby credible ages
Provide 5 reasons why initial rates developed for table may be deficient
- nonsensical (mort rate > 1)
- data had poor credibility
- nonsensical pattern
- suspicious differences in arrays of rates
- very wide CIs
List the 5 components of enforcement reviews
- define relationships to be enforced
- define when relationships will be checked
- create spreadsheets to check relationships (automation)
- check S&U mort rates
- adjust rates to enforce relationships
Describe 4 considerations for projecting future rates
- historical data - consistent population
- mortality trends - difficult to estimate
- connect cause and effect
- different types of projections
3 different types of projections of future rates
- mid-point of experience data to final table effective date
- beyond effective date at single rate of improvement
- beyond effective date at a varying rate of improvement
Describe items to consider when assessing financial impact of a table
- reserves and nonforfeiture values
- prescribed industry tables affect prescribed stat reserves and nonforfeiture values
- PBR can be impacted by internally developed rates
- impact can vary a lot from company to company
- premiums and PH dividends
- could be impacted by new industry or internally-developed tables
Describe 5 ways to create additional tables from final table rates
- w/ or w/o projected trend factors
- w/ or w/o valuation loading factors
- multiple age definitions (ANB, ALB, etc.)
- unisex
- relative risk versions for UW purposes
Describe the 4 major steps in valuation loading process for the commissioners’ valuation table (CVT)
- develop experience table - based solely on experience data
- develop valuation basic table (VBT)
- apply loading factors to experience table to create a loaded experience table
- create CVT (used for stat reserves)
Describe how to calculate a credibility-weighted A/E ratio
- for both LFM & Buhlmann, credibility estimate:
Estimated Company A/E = Z x (Company A/E) + (1 - Z) x (Overall A/E)
- Z = credibility factor ranging from 0 to 1
- Z = 1 means company experience fully credible
- Calc for Z different under LFM and Buhlmann
- overall A/E ratio based on average of all companies in study
- the lower Z is, the more weight is placed on overall experience
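A sketch of the blend. The LFM Z here uses the common square-root rule with a 3,007-claim full-credibility standard (95% confidence, ±5%); that standard is an assumption, and a Bühlmann Z would be computed differently:

```python
import math

def lfm_z(claims: int, full_cred_claims: int = 3007) -> float:
    """Limited-fluctuation Z: sqrt(n / n_full), capped at 1."""
    return min(1.0, math.sqrt(claims / full_cred_claims))

def credibility_weighted_ae(company_ae, overall_ae, z):
    """Estimated Company A/E = Z*(Company A/E) + (1-Z)*(Overall A/E)."""
    return z * company_ae + (1 - z) * overall_ae

z = lfm_z(claims=750)                                    # hypothetical
print(round(z, 3))                                       # ≈ 0.499
print(round(credibility_weighted_ae(1.10, 0.95, z), 4))  # ≈ 1.0249
```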
Describe limited fluctuation credibility method
- based on CI and only uses data from particular company being studied to determine the credibility factor
Describe properties of Bühlmann Empirical Bayesian Method
- based on empirical application of Bayesian stats
- starts w/ initial or prior distribution based on past data, professional experience, or opinion
- observed results used to formulate predictive or posterior distribution
- considers variance between insurers, not just insurer specific like LFM
- Z calc more complex
Compare LFM and Buhlmann credibility methods
- Buhlmann considers 2 sources of variance, LFM only the 1st:
- company-specific
- among companies
- LFM only needs data from company studied (simpler)
- Both:
- calculate estimated company A/E
- assume constant overall A/E ratio
- additional adjustments may be needed based on actuarial judgment
- Comparison for 10 insurers surveyed:
- count-based A/E estimates similar
- amount-based A/E estimates differed more
What are advantages of traditional approach to experience studies
- Commonly accepted/well established
- easy to produce results
- management familiar w/ results ⇒ can apply to management decisions
List 7 key steps in predictive analytics project
- project scope
- data collection and validation (MOST IMPORTANT)
- initial factor analysis
- model building
- model validation
- final calibration
- implementation
List the key steps in a predictive analytics project:
- Data Collection and Validation (most critical step)
- clean data and consider variables that may not be obvious
List the key steps in a predictive analytics project:
- Model Building
- multiplicative or additive
- statistical tests, business knowledge, and interactions to include
List the key steps in a predictive analytics project:
- Model Validation
- compare A/E ratios
- use hold-out samples
List the key steps in a predictive analytics project:
- Initial Factor Analysis
- do univariate analysis
- then remove highly correlated variables
List the key steps in a predictive analytics project:
- Final Calibration
- refit model using all data and make final adjustments
List the key steps in a predictive analytics project:
- Implementation
- use model results to set assumptions, do underwriting, etc.
List the key steps in a predictive analytics project:
- Project Scope
- (target variable, LOB, timeline, resources)
- Assemble team of technical and business experts
How can UW type and geo-demographic variables aid UW?
Traditional techniques don’t use these but predictive analytics could. . .
- UW type
- could use process results (BP, LDL, BMI)
- limitation: data availability
- geo-demographic variables
- categorize socio-economic composition of a particular zip code
- apply multipliers that vary by face amount, duration, or distribution channel
List general advantages of predictive analytics for experience studies
- Interactions of factors better understood
- better credibility for specific factor combinations
- standardizes all factors ⇒ isolates “true effect” of each
- backed up by statistical tests
List new business valuation advantages of predictive analytics for experience studies
- assess NB value at more granular level
- allows management to make more specific changes if needed
- revise benefits, increase fees, change producer compensation
Disadvantages of predictive analytics for experience studies
- significant expertise required
- risks: misinterpret data and flawed models
Limitations of predictive analytics (and traditional) in experience studies
- lack of data at old ages (85+)
- possible solutions: create trend lines or adjust standard models
- not doing experience studies at all
- some companies lack resources
- pricing models may not support the more refined assumptions
List 4 main steps for building a predictive model
- collect and organize data
- prepare data
- build model
- final adjustments
List criteria for data that will be used in a predictive model
- relationship with target variable
- statistically significant
- correlation vs. cost to obtain
- regulatory/legal concerns
- 12-18 months of data
List and describe 2 classes of data that can be used in a predictive model for UW
- Traditional
- app data
- medical information bureau (MIB)
- motor vehicle record (MVR)
- electronic Rx profile
- traditional medical UW results
- Non-traditional 3rd party data
- credit scores (FCRA)
- marketing data
List 4 main steps to prepare data for a predictive model
- generate variables
- exploratory data analysis
- variable transformation
- partition into 3 sets
List and describe 4 main steps to prepare data for a predictive model:
- Generate variables
- synthetic vs. disease-state
List and describe 4 main steps to prepare data for a predictive model:
- Exploratory Data Analysis
- distributional vs. univariate analysis
List and describe 4 main steps to prepare data for a predictive model:
- Variable Transformation
- Group values into buckets
- Replace missing values
- Reduce effect of extreme values/outliers
- Convert variables into numerical values to capture trends
List and describe 4 main steps to prepare data for a predictive model:
- Partition Data Into 3 Sets
- Train
- Validation
- Test
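A sketch of the three-way split using scikit-learn's train_test_split applied twice; the 60/20/20 fractions and the synthetic data are assumptions:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical feature matrix and binary target for illustration.
X = np.random.rand(10_000, 5)
y = np.random.randint(0, 2, size=10_000)

# 60% train / 20% validation / 20% test (assumed fractions).
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.40, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.50, random_state=42)

print(len(X_train), len(X_val), len(X_test))  # 6000 2000 2000
```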
Describe additional adjustments that might be appropriate for a predictive underwriting model
- layering on existing UW guidelines
- example: exceptions for rare conditions
- decision trees
- used to narrow applicant pool into good/bad risks
Describe the following considerations with respect to predictive modeling for underwriting:
- Anti-selection concerns
- PH can’t manipulate 3rd party data
- model relies on data in total, not individual
- allow reduced UW requirements for some
- randomly apply traditional UW to some applicants (as a check)
Describe the following considerations with respect to predictive modeling for underwriting:
- Legal and ethical concerns
- collection of consumer data
- marketing data not subject to FCRA
- social views on privacy
- compliance and legal staff should review variables
Explain why credit data is useful for mortality prediction
- main idea: qx = f(credit behavior)
- Credit behavior indicates an individual’s conscientiousness
- conscientious people tend to have better credit history ⇒ lower mortality
Give reasons why conscientious people tend to have better credit history, and therefore, lower mortality
- Engage in healthier behavior/environments
- Healthier friendships
- Better education, career, and higher incomes
- Have less unhealthy stress
2013 study showed what about why credit data is useful for mortality prediction?
- least conscientious US adults had mortality 3.2x the most conscientious
- falls to 1.6x after adjusting for smoking, alcohol, and waist size
List criteria for using credit data for UW purposes
- adjacent markets utilize similar data/approaches
- heavily regulated - allows for consumer dispute resolution
- updated in near real time
- used to develop model that is fit for purpose: mortality
- credible population - allows for a separate holdout sample
- available for vast majority of adult population
List the credit attributes that influence the TrueRisk Life score
The score is a function of 25 credit attributes across 4 categories:
- credit-seeking activity - recency/frequency of credit inquiries/trades
- credit tenure - credit history length, active trade count, months since oldest trade
- severity/frequency of derogatory credit info - bankruptcies, collections
- credit usage - utilization percentages, usage patterns, recency
Describe the data used to develop the TrueRisk Life model
- Credit archive data were divided into 3 groups
- 44M: training data used to create model w/ multivariate logistic regression
- 30M: test/validation data to prevent overfitting
- 18M: holdout data for mortality validation
Describe the data used to develop the TrueRisk Life model
- Data sources
- 1998 TransUnion credit archive: 92M usable records
- originally 175M (roughly 90% US pop)
- usable set only includes adults w/ a SSN, credit history, and age 21-70
- 2011 SS Death Master File (death data)
Characteristics needed for final variables in predictive analysis
- low correlation
- high predictive power
- stable over time
- non-gameable
Describe how to interpret the TrueRisk Life score with respect to mortality prediction
- TrueRisk Life score for each individual = 1 to 100
- 1 = best credit-based score possible (100 is worst)
How does age affect a TrueRisk Life score?
- credit history generally improves w/ age
- A/Es still increase slightly w/ age within each TrueRisk Life score
- ages 60+: health factors play a bigger role
How are A/E ratios directionally related to a TrueRisk Life score?
- A/E ratios increase w/ TrueRisk score (worsening credit)
- worst 5% of scores have mortality more than 6x the best 5%
- A/E ratio curve smooth and monotonically increasing
- Credit segmentation does not wear off over time
Describe how the TrueRisk Life score relates to traditional forms of UW
- Compared to full & non-medical UW
- FU business skews toward lower TrueRisk Life score (better credit)
- non-medical skews toward higher
- A/E ratios increase with score for both UW types
- early duration lapse rates increase w/ score
Review the FCRA Regulations on the back of this card
TrueRisk Life is subject to the FCRA
- Governs collection, assembly, and use of consumer report information
- Model does not use:
- Credit card transactions
- Social media
- Income
- Race
- Criminal records or court filings
- Property records
- Religion or national origin
- IRS data
- Education
- Checking/savings account information
Describe the LexisNexis Risk Classifier methodology
- score ranges from 200 to 997 (worst to best)
- Population studied: 8 million insurance shoppers ages 18–90
- Study period: 2006–2010
- based on public records, credit information, motor vehicle history
- Death sources: SS Death Master File, state records, other
- Expected mortality basis: 2008 VBT
Describe how the LexisNexis Risk Classifier score relates to each of the following:
- Mortality risk within age, gender, and duration
- Risk Classifier stratifies mortality risk across age, gender, and duration
- Mortality risk decreases with score within age bands and gender
- Older ages (70+) have less differentiation by score (health plays a bigger role)
- Credit tends to improve with age
- Differentiation does not wear off with time
Describe how the LexisNexis Risk Classifier score relates to each of the following:
- Public records
- Better public records and credit indicate better mortality
- Mortality risk increases as past due balances increase
- Other attributes that result in higher mortality:
- Derogatory public records (bankruptcies, liens, judgments)
- Felony and criminal convictions
- People with professional licenses have lower mortality
Describe how the LexisNexis Risk Classifier score relates to each of the following:
- Wealth
- Risk Classifier scores generally increase with wealth
Describe how the LexisNexis Risk Classifier score relates to each of the following:
- General population mortality risk
- Mortality risk decreases as the Risk Classifier scores increase
- People with better credit/behavior have lower mortality
- Exposure is concentrated in the middle range of the scores
- Exposure is very low at the best scores (900+)
List 4 possible methods for ending a mortality table
- Forced Method
- Blended Method
- Pattern Method
- Less-Than-One Method
List and describe 4 possible methods for ending a mortality table:
- Less-Than-One Method
- Choose an ultimate age
- No changes to other qx's
- At ultimate age, qx may be < 1.000
List and describe 4 possible methods for ending a mortality table:
- Pattern Method
- qx's continue normally until they reach 1.000
- Age where qx = 1.000 is ultimate age