Part 3 Flashcards
Scenario assessment
=A method used to evaluate the impact of potential risk events on an organisation.
It creates detailed scenarios to understand how different high-severity, low-frequency risk events would affect the business.
Severity assessments - part of the scenario assessment
- severity assessments evaluate total financial and non-financial impacts
- They convert non-financial impacts (eg service interruption) into financial terms for a complete assessment.
- Assess impact after mitigation efforts (not including insurance)
- Link impacts to the potential financial loss to the business (ie lost clients worth $50 mil)
- Use peers' loss data to benchmark severity, particularly useful for unfamiliar one-off scenarios
- smaller firms can leverage larger firms' loss data to adjust their risk framework as they grow.
Importance of scenario assessment
- Enhances preparedness for unexpected events
- provides insights into potential impacts/responses
- Supports strategic planning and decision making
- Helps calculate regulatory capital requirements (under the Advanced Measurement Approach)
Steps in conducting a scenario assessment
- Prep/governance - establish a structured approach with clear governance
- scenario generation - generate a range of scenarios and select the most relevant ones
- assess the impact/likelihood of each scenario
- validate scenarios with relevant stakeholders/experts
- Use scenarios to inform risk mgmt and strategic planning
- aggregate the results and report to senior mgmt/regulators
Frequency assessment
= Evaluates the probability of each scenario occurring in the coming year - aligning with the one-year capital measurement horizon
Attributing probability to rare, low-frequency events is difficult and can lead to inaccurate estimates.
Internal risk frequencies are aligned with results from the RCSA exercise.
Scenario generation and selection techniques
- Brainstorming - generates a wide range of scenarios and encourages creativity/divergent thinking
- Clearly outline selection criteria to prioritise scenarios by their relevance and impact on the firm.
- Consolidate similar scenarios and exclude negligible ones.
- Focus on around 15 relevant scenarios for detailed assessment.
Scenario assessment techniques
- Structured expert analysis - use structured questions/benchmarks to reduce estimation bias based on past events
- Availability and recency bias - recent events seem more likely to occur than older ones. Therefore data from further back is used for stable risks (pandemics, fraud) and more recent data for rapidly evolving risks (cybercrime)
- Anchoring, confirmation and group polarisation bias - 'herd mentality' causing bias. Mitigated by holding private votes before results are discussed.
- Group size/dynamics - smaller groups of SMEs are more effective than large groups, which can be inefficient/biased
- Bias awareness and training - estimation biases are deeply ingrained; understanding them can help individuals avoid being biased.
Delphi method of scenario assessment/generation
Focuses on pooling expert judgements
1. Silent collection - individuals write down their assessment without influence from others
2. Disclose estimates - all responses are collated and shared with the group for comparison.
3. Optional reassessment - participants may modify answers after seeing the estimates of others. Significant changes can trigger more rounds, but forcing consensus at this stage is discouraged.
4. Final estimate calculation - calculated using the lowest, highest and average responses (weighted for accuracy)
Final estimate = [lowest response + (n - 2) × average response + highest response] / number of participants (n)
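The pooling formula above can be sketched in a few lines of Python (the function name is illustrative, not from the source):

```python
def delphi_final_estimate(responses):
    """Delphi-style pooling: the extremes count once each, the average fills the rest."""
    n = len(responses)
    low, high = min(responses), max(responses)
    avg = sum(responses) / n
    return (low + (n - 2) * avg + high) / n

# Five experts estimate an annual loss (in $m); pooled estimate:
print(delphi_final_estimate([2, 4, 5, 5, 9]))  # -> 5.2
```

Note how the weighting dampens outliers: the lowest and highest answers enter once, while the group average is counted (n - 2) times.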
Fault Tree Analysis (FTA)
= breaks down scenarios into the conditions that must all occur to cause a disaster.
This helps banks (classed as high-reliability organisations) layer independent controls to lower risks, e.g. layering 3 independent controls, each with a 10% failure rate, lowers the chance of all of them failing to 0.1%.
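The layered-controls arithmetic is just a product of independent failure rates; a minimal sketch (function name illustrative):

```python
def joint_failure_prob(failure_rates):
    """Probability that every independent control fails: product of the individual rates."""
    p = 1.0
    for rate in failure_rates:
        p *= rate
    return p

# Three independent controls, each with a 10% failure rate
print(f"{joint_failure_prob([0.10, 0.10, 0.10]):.1%}")  # -> 0.1%
```

This only holds when the controls are genuinely independent; partially dependent failures need conditional probabilities, as the next section notes.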
Fault trees and Bayesian models - conditional probability
- Conditional probability in controls - more realistic scenarios often have partially dependent control failures that require conditional probabilities.
- Bayesian models - update likelihood assessments based on new information or expert opinions, using conditional probabilities to refine estimates
- Recommendations for using FTA as a scenario assessment:
- Breaking down scenarios into likelihood and impact components enhances the accuracy and transparency of the assessment process.
- Use experts/stakeholders to review scenarios and offer feedback; scenarios must also be regularly updated
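The Bayesian updating mentioned above is plain Bayes' rule; a sketch with illustrative numbers and names (not calibrated figures):

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a hypothesis after observing evidence (Bayes' rule)."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Prior belief: 5% chance a key control is weak. A near-miss is observed,
# which is 4x as likely if the control really is weak (40% vs 10%).
posterior = bayes_update(0.05, 0.40, 0.10)
print(round(posterior, 3))  # -> 0.174
```

The likelihood estimate roughly triples on one piece of evidence; this is how expert opinion or new incident data refines a fault-tree probability.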
Scenario documentation and validation
- The entire scenario analysis process must be documented in detail with each scenario summarised in a sheet with title, description, rationale, assessment range and relevant incidents.
- use standardised templates to ensure consistency
- Use independent third parties to review the consistency of the process
- Validation relies on documentation from scenario workshops etc.
- Similar scenarios can be combined to assess collective risk.
- The scenario list is then presented to the board for approval
Large firms will typically have around 50 scenarios, mid-sized firms around 15 and small firms 6-10
benefits of scenario assessment
- Enhanced preparedness - improves firm’s readiness for unexpected events
- Improves strategic decision making/planning
- supports regulatory compliance for risk assessments
- Improves risk mgmt and mitigation plans
- Facilitates continuous improvement in risk mgmt practice
Systematic estimation and mitigation of bias
- SMEs create a scenario assessment based on the likelihood of worst-case events occurring over a variety of time frames
This is, however, less popular with regulators due to its lack of structure/reliability
+ quick and inexpensive
- relies on selecting experts, who may be biased
Mgmt lessons from scenario analysis
- puts focus on response and risk mitigation instead of exact probabilities of the risk happening
- scenarios should be grouped by their impact on the firm - facilitates focused assessments and mitigation efforts
- If risks breach the firm's appetite, further mitigation/escalation should be required and in place
- Risks within the appetite must still be continuously monitored
- firms must have responses in place for events even if they are unlikely
Regulatory capital
= minimum amount of capital that a fin. institution must hold as required by regulators
It ensures that institutions can absorb a reasonable loss and it protects depositors/clients and the financial system
It helps prevent runs on banks thus enhancing the strength of the fin. system.
3 Basel pillars
- min. regulatory capital to cover mkt, credit and operational risks
- supervisory review process - allows for adjustments to the capital required under pillar 1 based on an institution's risk exposure.
- market discipline - requires firms to publicly disclose their risk and capital information
Regulatory capital for operational risk - Basel pillar 1
- Basic Indicator Approach (BIA): regulatory capital is 15% (the alpha factor) of average annual gross income over the past 3 years; used by local banks. Under The Standardised Approach (TSA), regulatory capital is based on the bank's risk profile per business line (beta factors), which can be 12%, 15% or 18%.
- Beta values - calculated in the late 1990s from a sample of 29 firms, NOT REPRESENTATIVE OF TODAY'S LANDSCAPE; therefore the focus for banks/regulators has shifted to pillar 2.
- Sound management should be a key focus, as well as holding regulatory capital, to prevent operational failures.
- Principles for sound operational risk management introduced in 2003 and revised in 2011 and 2014
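The Basic Indicator Approach calculation can be sketched as follows (function name illustrative; Basel excludes years with negative or zero gross income from the average):

```python
def bia_capital(gross_income_years, alpha=0.15):
    """Basic Indicator Approach: alpha x average annual gross income over 3 years.
    Years with negative or zero income are excluded from the average (Basel rule)."""
    positive = [g for g in gross_income_years if g > 0]
    return alpha * sum(positive) / len(positive)

# Gross income of 100, 120 and 110 ($m) over the last three years
print(bia_capital([100, 120, 110]))  # -> 16.5, i.e. 15% of the 110 average
```

TSA works the same way per business line, swapping the single alpha for a 12/15/18% beta per line of business.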
Capital modelling approaches
- Standardised approach: uses predefined risk weightings for certain asset classes. It is very simple but does not effectively reflect the institution's risk profile
- Internal ratings based approach (IRB): allows banks to estimate risk using their own risk models. It models the firm's risk more effectively but requires regulatory approval
Advanced modelling techniques
- Value at risk (VaR) - measures the potential loss in a portfolio over a defined period of time at a set confidence level (e.g. the maximum loss over a 10-day period at 95% confidence)
- Stress testing - tests extreme scenarios to assess the impact on capital
- Scenario analysis - evaluates the impact of various scenarios on a firm's capital
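A historical-simulation sketch of the VaR idea (one of several VaR methods; names and the toy loss series are illustrative):

```python
def historical_var(period_losses, confidence=0.95):
    """Historical-simulation VaR: the loss level exceeded in only (1 - confidence)
    of observed periods, read straight off the sorted loss history."""
    ordered = sorted(period_losses)
    index = min(int(confidence * len(ordered)), len(ordered) - 1)
    return ordered[index]

# 100 simulated 10-day losses of 1..100 ($k); 95% VaR:
print(historical_var(list(range(1, 101))))  # -> 96
```

Parametric and Monte Carlo VaR replace the sorted history with an assumed or simulated loss distribution, but the percentile read-off is the same.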
11 principles of sound op risk mgmt
- Op risk culture
- Op risk mgmt framework
- board of directors
- Op risk appetite and tolerance
- senior mgmt
- risk identification and assessment
- change mgmt
- monitoring and reporting
- control and mitigation
- business resilience and continuity
- role of disclosure
Advanced measurement approach (AMA) criteria
- Incident reporting history of 5 years (now 10)
- mapping of risks and losses to regulatory categories
- operational risk mgmt function
- involvement of senior mgmt in risk management
- written policies and procedures
- active day to day op risk mgmt
regulatory capital for op risk
- internal loss data - info on previous losses/trends
- rules on how to map incidents and their data
- external data - data sourced from public/private databases to compare against internal data
- Mixing internal and external data is important, both to adjust the data to the firm's size and to fall back on one data set when the other lacks sufficient information.
4 types of models
- Stochastic - part of the loss distribution approach (LDA); purely quantitative and focused on past losses. Extrapolates future loss distributions up to the 99.9th percentile. Quite common.
- Scenario-based - qualitative model for when internal loss data is insufficient. Common in the EU and insurance firms
- Hybrid - most common and aligned with AMA regulatory expectations. Creates a loss distribution at 99.9% confidence based on past incident data and scenario-based losses.
- Factor models - explain the behaviour of variables based on influencing factors (economy, controls etc.) and are common in equity pricing. Overtaken by stochastic models because factor models are hard to calibrate
Challenges of capital modelling
- Data quality - ensuring data is accurate and comprehensive
- Model risk - risk of model inaccuracies leading to false outputs
- regulatory changes - adapting to evolving reg requirements
- complexity - ensuring the models are understood by those using them
Model validation and back testing
- Model validation - ensures models are accurate and reliable. It checks assumptions, methodologies and data used.
- Back testing - compares model predictions to actual outcomes to assess accuracy; helps refine the model's accuracy
Loss Distribution approach (LDA) overview
- Models represent reality through repeated observations that create patterns.
- Helped combat data scarcity when regulators needed banks to calculate risk capital
- Frequency - modelled using discrete distributions, typically the Poisson distribution, which is used by 90% of AMA firms. The other 10% use negative binomial distributions.
- Severity - modelled using continuous, asymmetric distributions to account for both small and major losses. The lognormal distribution is the most common, but Weibull and GPD are used for heavy-tailed data
- Aggregated loss distribution - frequency and severity are combined into an aggregated distribution. Monte Carlo simulation is the most common method. Fast Fourier transform and Panjer recursion are alternatives that are more mathematically involved but less demanding of computing power.
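The frequency/severity aggregation can be sketched with a stdlib-only Monte Carlo simulation (distribution parameters are illustrative, not calibrated to any firm):

```python
import math
import random

def poisson_sample(lam, rng):
    """Draw a Poisson count via Knuth's multiplication algorithm."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def lda_simulate(lam, mu, sigma, n_years, seed=1):
    """One aggregate loss per simulated year: Poisson frequency x lognormal severity."""
    rng = random.Random(seed)
    years = []
    for _ in range(n_years):
        n_events = poisson_sample(lam, rng)
        years.append(sum(rng.lognormvariate(mu, sigma) for _ in range(n_events)))
    return years

def percentile(values, q):
    """Simple empirical percentile read off the sorted sample."""
    ordered = sorted(values)
    return ordered[min(int(q * len(ordered)), len(ordered) - 1)]

# ~5 losses/year with lognormal severities; capital read at the 99.9th percentile
annual = lda_simulate(lam=5, mu=1.0, sigma=1.2, n_years=20_000)
print(round(percentile(annual, 0.999), 1))
```

In practice far more simulated years are needed for a stable 99.9th percentile, and the severity tail would be fitted to internal plus external loss data.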
Units of measure for capital modelling
- External fraud = depends on the LOB
- Internal fraud = no. events per business entity (under the same senior mgmt)
- Damage to physical assets = grouped for all LOBs
- Processing error events = depends on LOB
basel pillar 2 - supervisory review process (SREP), ICAAP and CCAR
- SREP - allows regulators to assess a firm's impact on the financial system if it fails, considering the nature, scale and complexity of the firm's activities. Stress testing is typically used here.
- Capital sufficiency evaluation - EU = ICAAP, insurance firms = Own Risk and Solvency Assessment (ORSA), US = Comprehensive Capital Analysis and Review (CCAR)
- Solvency assessments - identifies threats and large-loss scenarios and assesses a firm's resilience to internal and external shocks. Stress testing is used
- Governance and culture - regulators form an opinion on a firm's governance, culture, values and mgmt effectiveness through risk reporting.
Stress testing
- Sensitivity stress testing - tests the robustness of models by changing key parameters (these can affect business plans, revenue and solvency)
- Scenario stress testing - focuses on low-frequency, high-impact events and their impact. The US places more focus on legal/compliance events than the EU
- Reverse stress testing - identifies events that could cause the firm to fail and aims to ensure such events are unlikely beyond a 99.9% confidence level. If an event is unmanageable, wind-down planning is implemented to reduce disruption.
- Macroeconomic stress testing - regulators provide macro shock scenarios to large financial firms to assess solvency; smaller firms are expected to test against global macro shocks.
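A toy sketch of the sensitivity idea: shock one model parameter and watch the capital figure move (the expected-loss formula and all names/numbers are illustrative, not a regulatory model):

```python
def expected_loss_capital(default_rate, exposure, loss_given_default=0.45):
    """Toy capital proxy: exposure x default rate x loss given default."""
    return exposure * default_rate * loss_given_default

base = expected_loss_capital(0.02, 1_000)
for shocked_rate in (0.04, 0.08):  # double and quadruple the default-rate parameter
    stressed = expected_loss_capital(shocked_rate, 1_000)
    print(f"rate {shocked_rate:.0%}: capital {stressed:.1f} (base {base:.1f})")
```

Real sensitivity tests shock many parameters (rates, volumes, loss rates) one at a time to see which assumptions the business plan and solvency position are most exposed to.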
Wind-down planning
- identifies situations where the firm is no longer viable (large fin losses, loss of key clients/infrastructure etc) by reviewing the business model, vulnerabilities and key revenue drivers.
- the firm must identify indicators of winding down and assess their internal/external impact.
- mitigation measures must be put in place as well as transition planning.
Orderly closure requires assessment of liquidity, solvency, personnel and infrastructure.
Operational risk governance
= policies, processes and structures used to manage operational risks.
It ensures risks are identified, assessed, managed and monitored effectively.
It provides a structured approach to managing risks, which helps firms comply with regulations and improve stability/resilience. It promotes a risk-aware culture and supports informed decision making/strategic planning.
3 lines of defence
- 1st line - operational management - identifies and manages risks within their areas.
- 2nd line - Risk mgmt and compliance functions - provides support/oversight to the first line.
- 3rd line - internal audit - provides independent assurance on the effectiveness of risk mgmt.
1st line of defence - operational mgmt
- Identifies, owns and manages the risks arising in its day-to-day activities
- Designs and operates the controls that mitigate those risks
- Reports incidents and losses as they occur
- Implements corrective actions and escalates issues beyond its remit
Line of defence ‘1.5’ - risk champions
- main correspondents for risk issues
- collect and record risk events/losses
- map risks and controls in line with the group definitions
- maintain the control rules
- champion the redesign of operating procedures if needed
- take part in the follow-up process after audits
2nd line of defence - risk and compliance function
3 key roles
* Define risk appetite for the business and the board
* Monitor risk exposure within risk appetite and own the risk mgmt framework
* challenge and advise on strategic decisions relating to risk taking
Helps guide decision making by assessing risks in new ventures, products and other strategic actions.
Has the authority to halt decisions that may breach risk appetite.
3rd line of defence - internal audit
- Independent party to the risk management process.
- It assesses risks and compliance across the firm using some similar and some different risk assessment tools to the first and second LODs.
- Some firms allow for audit and risk to overlap their findings
Institute of internal auditors guidance:
* Internal audit must be independent of risk mgmt and compliance
* Scope: must assess the effectiveness of the above functions and conduct its own review of activities
* Judgement: can leverage work done by other depts. but must use informed judgement and do its own testing.
Risk committee and organisation
- The board is the ultimate authority - it sits above the 3 LODs and is responsible for the overall governance of the firm
- It delegates powers to committees such as the executive, risk and audit committees
- Executive and non-executive directors manage the sub-departments.