Topics 40-42 Flashcards

1
Q

Poor Data Quality

A

A list of negative impacts on a business from poor data quality.

Financial impacts:

  • Businesses may experience lower revenues (e.g., lost sales), higher expenses (e.g., penalties, re-work costs), and lower cash flows as a result of inaccurate or incomplete data.

Confidence-based impacts:

  • Managers may make incorrect business decisions based on faulty data.
  • Poor forecasting may result due to input errors.
  • Inaccurate internal reporting may occur with unreliable information.

Satisfaction impacts:

  • Customers may become dissatisfied when the business processes faulty data (e.g., billing errors).
  • Employees may become dissatisfied when they are unable to properly perform their job due to flawed data.

Productivity impacts:

  • Additional (corrective) work may be required, thereby reducing production output.
  • Delays or increases in processing time.

Risk impacts:

  • Underestimating credit risks due to inaccurate documentation, thereby exposing a lender to potential losses (e.g., Basel II Accords for quantifying credit risk).
  • Underestimating investment risk, thereby exposing an investor to potential losses.

Compliance impacts:

  • A business may no longer be in compliance with regulations (e.g., Sarbanes-Oxley) if financial reports are inaccurate.
2
Q

Identify the most common issues that result in data errors

A

The most common data issues that increase risk for an organization are as follows:

  • Data entry errors.
  • Missing data.
  • Duplicate records.
  • Inconsistent data.
  • Nonstandard formats.
  • Complex data transformations.
  • Failed identity management processes.
  • Undocumented, incorrect, or misleading metadata (description of content and context of data files).

From a financial perspective, such data errors (accidental or not) may lead to inconsistent reporting, incorrect product pricing, and failures in trade settlement.

Examples of risks arising out of data errors include:

  • Fraudulent payroll overpayments to fictitious employees or those who are no longer employed by the firm.
  • Underbilling for services rendered.
  • Underestimating insurance risk due to missing and inaccurate values (e.g., insured value).
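
As a rough illustration (not from the reading), the sketch below shows how a few of the issues listed above (missing data, duplicate records, and nonstandard formats) might be flagged programmatically. The table, column names, and format rule are hypothetical.

```python
# Minimal sketch: flagging some common data issues (missing values, duplicates,
# nonstandard formats). The table and the expected date format are hypothetical.
import pandas as pd

records = pd.DataFrame({
    "employee_id": ["E001", "E002", "E002", None],
    "salary":      [52000, 61000, 61000, 48000],
    "hire_date":   ["2019-03-01", "2020/07/15", "2020/07/15", "2021-01-10"],
})

# Missing data: rows lacking a key identifier.
missing = records[records["employee_id"].isna()]

# Duplicate records: identical rows that could, e.g., trigger payroll overpayment.
duplicates = records[records.duplicated(keep="first")]

# Nonstandard formats: hire dates that do not match the expected ISO format.
iso_ok = pd.to_datetime(records["hire_date"], format="%Y-%m-%d", errors="coerce")
nonstandard = records[iso_ok.isna() & records["hire_date"].notna()]

print(f"missing: {len(missing)}, duplicates: {len(duplicates)}, "
      f"nonstandard formats: {len(nonstandard)}")
```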
3
Q

Explain how a firm can set expectations for its data quality and describe some key dimensions of data quality used in this process

A

The important (but not complete) set of dimensions that characterize acceptable data include accuracy, completeness, consistency, reasonableness, currency, and uniqueness.

Accuracy

The concept of accuracy can be described as the degree to which data correctly reflects the real-world object it describes.

Completeness

Completeness refers to the extent to which the expected attributes of data are provided. There may be mandatory and optional aspects of completeness. Note that although data may be complete, it may not necessarily be accurate.

Consistency

Consistency refers to reasonable agreement of values when compared across multiple data sets.

Note that consistency does not necessarily imply accuracy.

There are three types of consistency:

  1. Record level: consistency between one set of data values and another set within the same record.
  2. Cross-record level: consistency between one set of data values and another set in different records.
  3. Temporal level: consistency between one set of data values and another set within the same record at different points in time.

Reasonableness

Reasonableness refers to conformity with consistency expectations. For example, the income statement value for interest expense should be consistent or within an acceptable range when compared to the corresponding balance sheet value for long-term debt.

Currency

Currency of data refers to the lifespan of data. In other words, is the data still considered relevant and useful, given that the passage of time will gradually render it less current and less correct? Measurement of currency would consist of determining the frequency with which the data needs to be updated and determining whether the existing data is still up to date.

Uniqueness

Uniqueness of data is tied to the data error involving duplicate records. Uniqueness suggests that each entity should appear as only one data item within the data set (i.e., no duplicates).
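
To make these dimensions concrete, the following sketch (not part of the reading) expresses several of them as simple checks on a single record. The record layout, the 2%-12% reasonableness range, and the 90-day currency window are all assumptions for illustration.

```python
# Illustrative checks for several data quality dimensions discussed above.
# The record layout, reference values, and tolerances are hypothetical.
from datetime import date, timedelta

record = {
    "customer_id": "C-1001",
    "interest_expense": 5_200,        # income statement value
    "long_term_debt": 100_000,        # balance sheet value
    "last_updated": date.today() - timedelta(days=30),
}

# Completeness: all mandatory attributes are present and non-empty.
mandatory = ["customer_id", "interest_expense", "long_term_debt"]
complete = all(record.get(f) not in (None, "") for f in mandatory)

# Reasonableness: interest expense should fall within an acceptable range
# relative to long-term debt (here, an assumed 2%-12% implied rate).
implied_rate = record["interest_expense"] / record["long_term_debt"]
reasonable = 0.02 <= implied_rate <= 0.12

# Currency: data should have been refreshed within an assumed 90-day window.
current = (date.today() - record["last_updated"]).days <= 90

# Uniqueness: each customer_id should appear only once in the data set.
dataset_ids = ["C-1001", "C-1002", "C-1003"]
unique = dataset_ids.count(record["customer_id"]) == 1

print(dict(complete=complete, reasonable=reasonable, current=current, unique=unique))
```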

4
Q

Describe the operational data governance process, including the use of scorecards in managing information risk

A

Operational data governance refers to the collective set of rules and processes regarding data that allow an organization to have sufficient confidence in the quality of its data.

Specifically, a data governance program should exist that clarifies the roles and responsibilities in managing data quality. A data quality scorecard could be used to monitor the success of such a program.

In short, operational data governance aims to detect data errors early on and then set into motion the steps needed to sufficiently deal with the errors on a timely basis. As a result, there should be minimal or no subsequent impact on the organization.

5
Q

Data Quality Inspection vs. Data Validation

A

Data validation is a one-time step that reviews and assesses whether data conforms to defined business specifications. In contrast, data quality inspection is an ongoing set of steps designed to:

  1. reduce the number of errors to a tolerable level,
  2. spot data flaws and make appropriate adjustments to allow data processing to be completed, and
  3. solve the cause of the errors and flaws in a timely manner.

The goal of data quality inspection is to catch issues early on before they have a substantial negative impact on business operations.
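
The sketch below is one illustrative way (not from the reading) to contrast the two ideas in code: a one-time validate() check against a business specification versus an ongoing inspect_batch() routine that pursues the three goals listed above. The function names, specification, and tolerance are hypothetical.

```python
# Sketch of the distinction above: validate() is a one-time conformance check,
# while inspect_batch() runs on each batch to keep errors below a tolerance,
# adjust flawed records so processing can complete, and log root-cause candidates.
# All function names, rules, and the tolerance level are hypothetical.

SPEC = {"amount": lambda v: isinstance(v, (int, float)) and v >= 0}
ERROR_TOLERANCE = 0.05  # assumed tolerable error rate

def validate(records):
    """One-time review: does the data conform to the business specification?"""
    return all(all(rule(r.get(field)) for field, rule in SPEC.items()) for r in records)

def inspect_batch(records):
    """Ongoing inspection: flag flaws, apply adjustments, report the error rate."""
    errors, causes = 0, []
    for r in records:
        for field, rule in SPEC.items():
            if not rule(r.get(field)):
                errors += 1
                causes.append((field, r.get(field)))   # record root-cause candidates
                r[field] = 0                           # adjustment so processing completes
    error_rate = errors / max(len(records), 1)
    return error_rate <= ERROR_TOLERANCE, causes

batch = [{"amount": 120.0}, {"amount": -5}, {"amount": None}]
print(validate(batch))        # one-time check: False (data does not conform)
print(inspect_batch(batch))   # ongoing check: (False, [...]) since 2/3 > 5% tolerance
```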

6
Q

Data Quality Scorecard

A
  • A base-level metric is straightforward in that it is measured against clear data quality criteria. It is relatively easy to quantify whether the criteria are met and to arrive at a data quality score.
  • In contrast, a complex metric is a combined score that could be a weighted average of several different metrics (customized to the specific user(s)). Such a combined metric allows for a qualitative reporting of the impact of data quality on the organization. A data quality scorecard could report the metric in one of three ways: by issue, by business process, or by business impact.

Complex Metric Scorecard Viewpoints

Data quality issues view:

  • Considers the impact of a specific data quality problem over multiple business processes.
  • The scorecard shows a combined and summarized view of the impacts for each data problem.

Business process view:

  • For each business process, the scorecard has complex metrics that quantify the impact of each data quality problem.
  • It allows for the ability to determine exactly where in the business process the data problem is originating.

Business impact view:

  • The scorecard provides a high-level understanding of the risks embedded in data quality problems (i.e., a combined and summarized view).
  • By going into more detail, one can identify the business processes where the problems occur.
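
As a simple illustration (not from the reading), a complex metric can be computed as a weighted average of base-level scores; re-weighting the same base scores per issue, per business process, or per business impact would produce the three views described above. The metric names, scores, and weights below are hypothetical.

```python
# Minimal sketch of a complex scorecard metric: a weighted average of base-level
# data quality scores. The metric names, scores, and weights are hypothetical.

base_metrics = {           # base-level scores on a 0-100 scale
    "completeness": 92.0,
    "accuracy": 85.0,
    "timeliness": 70.0,
}

weights = {                # user-specific weights reflecting business impact
    "completeness": 0.5,
    "accuracy": 0.3,
    "timeliness": 0.2,
}

complex_score = sum(base_metrics[m] * weights[m] for m in base_metrics)
print(f"Complex data quality score: {complex_score:.1f}")   # 85.5

# The same base scores could be re-weighted per issue, per business process,
# or per business impact to produce the three scorecard views described above.
```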
7
Q

Describe the seven Basel II event risk categories

A
  • It is important to recognize that the severity and frequency of losses can vary dramatically among the categories. For example, loss events are small but occur very frequently in the Execution, Delivery, and Process Management category.
  • In contrast, losses in the Clients, Products, and Business Practices category are much less frequent but typically involve large dollar amounts, as these loss events commonly arise from substantial lawsuits.
  • The modeling of loss event data differs for each category. Thus, it is important to make sure every event is placed in the appropriate group. When assigning loss events, consistency is more important than accuracy.
  • The process of identifying and classifying risks is commonly referred to as OpRisk taxonomy.
  • There are roughly three ways firms drive the risk taxonomy exercise: cause-driven, impact-driven, and event-driven. The event-driven approach is considered superior, and the cause-driven approach is considered the weakest; a mixture of the three methods should never be applied.
8
Q

Six level 2 categories for the event type identified as Execution, Delivery, and Process Management (EDPM)

A

Figure 2 identifies the six level 2 categories for the event type identified in level 1 as Execution, Delivery, and Process Management (EDPM).

For financial firms, the EDPM category typically has the highest frequency of occurrence compared to the other categories.

9
Q

Subcategories with examples for the Clients, Products, and Business Practices (CPBP) category

A

The second Basel II category listed in Figure 1 is Clients, Products, and Business Practices (CPBP). The most common types of loss events in this category arise from disagreements between clients and counterparties, as well as regulatory fines for negligent business practices and breaches of advisory and fiduciary duties.

10
Q

Examples of operational risk events for all categories except EDPM and CPBP

A
  • The Business Disruption and System Failures (BDSF) category is far less common than the first two Basel II categories. A system crash will result in substantial losses for a firm, but most of these losses would be categorized under the EDPM category. Basel II gives hardware failures, software failures, telecommunications failures, and utility outages as examples of failed activities leading to loss events in the BDSF category.
  • The Basel II level 1 External Fraud category has only two subcategories: (1) theft and fraud and (2) systems security. Examples of activities that are classified under the systems security subcategory are hacking damage and theft of information with monetary losses.
  • The Basel II level 1 Internal Fraud category also has only two subcategories: (1) unauthorized activity and (2) theft and fraud. Examples of activities that are classified under unauthorized activity are intentionally not reporting transactions, unauthorized transaction type, and the intentional mismarking of positions.
  • The Basel II level 1 Employment Practices and Workplace Safety (EPWS) category has three subcategories: (1) employee relations, (2) safe environment, and (3) diversity and discrimination.
  • The last Basel II level 1 category for Op Risk loss events is Damage to Physical Assets (DPA). The only subcategory is disasters and other events.
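
For reference, the taxonomy described in these cards can be encoded as a simple level 1 to level 2 mapping, as in the sketch below. Only the subcategories named in the reading are included; this is not an exhaustive Basel II listing.

```python
# Sketch of the Basel II event-type taxonomy described in these cards, encoded
# as a level 1 -> level 2 mapping. Only subcategories named in the reading are
# included; this is not an exhaustive Basel II listing.

BASEL_II_EVENT_TYPES = {
    "Internal Fraud": ["Unauthorized Activity", "Theft and Fraud"],
    "External Fraud": ["Theft and Fraud", "Systems Security"],
    "Employment Practices and Workplace Safety": [
        "Employee Relations", "Safe Environment", "Diversity and Discrimination",
    ],
    "Damage to Physical Assets": ["Disasters and Other Events"],
    "Business Disruption and System Failures": [
        "Systems (hardware, software, telecommunications, utility outage)",
    ],
}

def classify(level1, level2):
    """Check that a loss event is assigned to a valid level 1 / level 2 pair."""
    return level2 in BASEL_II_EVENT_TYPES.get(level1, [])

print(classify("Internal Fraud", "Unauthorized Activity"))   # True
print(classify("External Fraud", "Unauthorized Activity"))   # False
```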
11
Q

Summarize the process of collecting and reporting internal operational loss data, including the selection of thresholds

A
  • The foundation of an OpRisk framework is the internally created loss database. Any event that meets a firm’s definition of an operational risk event should be recorded in the loss event database and classified based on guidelines in the operational risk event policy. A minimum of five years of historical data is required to satisfy Basel II regulatory guidelines.
  • Basel II requirements allow financial institutions to select a loss threshold for loss data collection. OpRisk managers should not set the threshold for collecting loss data too low (e.g., $0) if there are business units that have a very large number of smaller losses, because doing so would create a very heavy reporting burden. OpRisk managers should also not think only in terms of large threshold amounts.
  • When quantifying capital requirements, Basel II does not allow recoveries of losses to be included in the calculation. Regulators require gross losses for capital calculations to provide a more realistic view of the potential for large losses that occur once every 1,000 years.
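
The sketch below (not from the reading) illustrates the two points above: events are recorded only at or above a chosen collection threshold, and aggregation for capital purposes uses gross losses rather than losses net of recoveries. The threshold and the loss events are hypothetical.

```python
# Sketch of loss data collection with a reporting threshold, and aggregation on
# a gross (pre-recovery) basis as required for capital calculations. The
# threshold and the loss events are hypothetical.

COLLECTION_THRESHOLD = 10_000  # assumed reporting threshold in dollars

loss_events = [
    {"gross_loss": 250_000,   "recovery": 40_000,  "category": "EDPM"},
    {"gross_loss": 8_000,     "recovery": 0,       "category": "External Fraud"},
    {"gross_loss": 1_200_000, "recovery": 300_000, "category": "CPBP"},
]

# Only events at or above the threshold enter the loss database.
recorded = [e for e in loss_events if e["gross_loss"] >= COLLECTION_THRESHOLD]

# Capital quantification uses gross losses; recoveries are not netted out.
gross_total = sum(e["gross_loss"] for e in recorded)
net_total = sum(e["gross_loss"] - e["recovery"] for e in recorded)  # for comparison only

print(f"events recorded: {len(recorded)}")
print(f"gross losses (used for capital): {gross_total:,}")
print(f"net losses (not used for capital): {net_total:,}")
```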
12
Q

Issue of timeframe for recoveries in collecting loss data and reporting expected operational losses

A

Another important issue to consider in the process of collecting loss data is the timeframe for recoveries. The financial crisis of 2007–2009 illustrated that the complexity of some loss events can lead to very long time horizons from the start of the loss event to its final closure. It is important for firms to have a policy in place for processing large, long-timeframe losses.

To help firms know what to report, the International Accounting Standards Board (IASB) prepared IAS37, which establishes guidelines on loss provisions (the reporting of expected operational losses) following the financial crisis of 2007–2009. Three important requirements for the reporting of expected operational losses are as follows:

  1. Loss provisions are not recognized for future operating losses.
  2. Loss provisions are recognized for onerous contracts where the costs of fulfilling obligations exceed expected economic benefits.
  3. Loss provisions are only recognized for restructuring costs when a firm has a detailed restructuring plan in place.

The IAS37 report states that loss provisions of restructuring costs should not include provisions related to relocation of staff, marketing, equipment investments, or distribution investments. Loss provisions must be recognized on the balance sheet when the firm has a current obligation regarding a past loss event. Balance sheet reporting of loss events is required when the firm is likely to be obligated for a loss and it is possible to establish a reliable estimate of the amount of loss. Gains from the disposal of assets or expected reimbursements linked to the loss should not be used to reduce the total expected loss amount. Reimbursements can only be recognized as a separate asset.
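
One way to see how these requirements fit together is to encode them as a simple decision rule, as in the hypothetical sketch below (the field names and example event are assumptions for illustration, not part of IAS37).

```python
# Sketch of the recognition logic summarized above, expressed as a simple
# decision function. The field names and the example event are hypothetical.

def recognize_provision(event):
    """Return True if a loss provision should be recognized on the balance sheet."""
    # A current obligation must exist in respect of a past loss event,
    # the obligation must be likely, and the amount must be reliably estimable.
    if not (event["past_loss_event"] and event["current_obligation"]):
        return False
    if not (event["obligation_likely"] and event["reliable_estimate"]):
        return False
    # Provisions are not recognized for future operating losses.
    if event.get("future_operating_loss", False):
        return False
    return True

event = {
    "past_loss_event": True,
    "current_obligation": True,
    "obligation_likely": True,
    "reliable_estimate": True,
    "future_operating_loss": False,
    "expected_reimbursement": 50_000,   # recognized as a separate asset, not netted
}
print(recognize_provision(event))   # True
```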

13
Q

Explain the use of a Risk Control Self-Assessment (RCSA) and key risk indicators (KRIs) in identifying, controlling, and assessing operational risk exposures

A

A risk control self-assessment (RCSA) requires the documentation of risks and provides a rating system and control identification process that is used as a foundation in the OpRisk framework. Once the RCSA is created, it is commonly performed every 12 to 18 months to assess the business unit's operational risks.

The following four steps are commonly used in designing an RCSA program:

  1. Identify and assess risks associated with each business unit’s activities.
  2. Controls are then added to the RCSA program to mitigate risks identified for the firm. The manager also assesses any residual risk which often remains even after controls are in place.
  3. Risk metrics, such as key risk indicators or internal loss events, are used to measure the success of OpRisk initiatives and are linked to the RCSA program for review. These risk metrics would also include all available external data and risk benchmarks for operational risks.
  4. Control tests are performed to assess how effectively the controls in place mitigate potential operational risks.

Key risk indicators (KRIs) are identified and used to quantify the quality of the control environment with respect to specific business unit processes. KRIs are used as indicators for the OpRisk framework in the same way that other quantitative measures are used in market and credit risk models. Even though KRIs may be costly to measure, they provide the best means for measuring and controlling OpRisk for the firm.
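
As an illustration of how KRIs might be used to grade the control environment, the sketch below compares hypothetical indicator values against amber/red trigger levels. The KRI names, observed values, and thresholds are all assumptions.

```python
# Sketch of key risk indicators (KRIs) evaluated against thresholds to grade the
# control environment of a business unit. The KRI names, values, and threshold
# levels are hypothetical.

KRI_THRESHOLDS = {                      # (amber, red) trigger levels
    "failed_trades_per_day": (20, 50),
    "system_downtime_minutes": (30, 120),
    "staff_turnover_rate": (0.10, 0.25),
}

observations = {
    "failed_trades_per_day": 35,
    "system_downtime_minutes": 15,
    "staff_turnover_rate": 0.30,
}

def rate(kri, value):
    amber, red = KRI_THRESHOLDS[kri]
    if value >= red:
        return "red"
    if value >= amber:
        return "amber"
    return "green"

for kri, value in observations.items():
    print(f"{kri}: {value} -> {rate(kri, value)}")
# failed_trades_per_day -> amber, system_downtime_minutes -> green,
# staff_turnover_rate -> red
```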

External data such as stock market indices and market interest rate levels are also used in RCSA frameworks.

Three common methods of gathering external data are internal development, consortia, and vendors.

  • Under the internal development method, the firm gathers and collates information from media such as news or magazines. This may be the least expensive method, but it may not be as accurate and has the potential to overlook large amounts of relevant data.
  • The most popular consortium for banks is the Operational Riskdata eXchange Association (ORX), which contains large banks in the financial industry. While this consortium has a relatively low loss reporting threshold, there are often no details on the losses and therefore this data can only be used for measurement.
  • There are a number of vendors who provide detailed analysis on losses that can be used for scenario analysis. However, the loss threshold for vendor data is often much higher and the information may not always be accurate.

14
Q

Describe and assess the use of scenario analysis in managing operational risk

A
  • Scenario analysis models are especially useful tools for estimating losses when loss experiences related to emerging risks are not available to the financial institution. Inputs to scenario analysis models are collected from external data, expert opinions, internal loss trends, or key risk indicators (KRIs).
  • Studies suggest that most financial firms analyze between 50 and 100 scenarios on an annual basis.
  • One of the challenges in scenario analysis is taking expert advice and quantifying this advice to reflect possible internal losses for the firm.
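
One common way to quantify such expert advice (an assumption here, not prescribed by the reading) is a frequency/severity simulation: an expert's estimated event frequency and typical loss size are mapped to Poisson and lognormal distributions and simulated to obtain an annual loss distribution. The parameter values below are hypothetical.

```python
# Sketch of quantifying expert scenario inputs with a simple frequency/severity
# simulation: Poisson frequency, lognormal severity. The parameter values and
# distributional choices are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(42)

expected_events_per_year = 2.0     # expert estimate of loss frequency
severity_median = 1_000_000        # expert estimate of a typical loss
severity_sigma = 1.2               # assumed dispersion of loss sizes
n_years = 100_000                  # simulated years

annual_losses = np.zeros(n_years)
counts = rng.poisson(expected_events_per_year, size=n_years)
for i, n in enumerate(counts):
    if n:
        annual_losses[i] = rng.lognormal(np.log(severity_median), severity_sigma, n).sum()

print(f"mean annual loss: {annual_losses.mean():,.0f}")
print(f"99.9th percentile (1-in-1,000-year) loss: {np.percentile(annual_losses, 99.9):,.0f}")
```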
15
Q

Biases and Challenges of Scenario Analysis

A
  • One of the biggest challenges of scenario analysis is the fact that expert opinions are always subject to numerous possible biases. There is often disparity of opinions and knowledge regarding the amount and frequency of losses. Expert biases are difficult to avoid when conducting scenario analysis.
  • Examples of possible biases are related to presentation, context, availability, anchoring, confidence, huddle, gaming, and inexpert opinion.
  • Presentation bias occurs when the order in which information is presented impacts the expert's opinion or advice. Another similar type of bias is context bias. Context bias occurs when questions are framed in a way that influences the responses of those being questioned.
  • Another set of biases is related to the lack of available information regarding loss data for a particular expert or for all experts. Availability bias is related to the expert's experience in dealing with a specific event or loss risk. The availability bias can result in over- or underestimating the frequency and amount of loss events. A similar bias is referred to as anchoring bias. Anchoring bias can occur if an expert limits the range of a loss estimate based on personal experiences or knowledge of prior loss events. The information available to an expert can also result in a confidence bias: the expert may over- or underestimate the amount of risk for a particular loss event if there is limited information or knowledge available about the risk or its probability of occurrence.
  • Expert opinions are often obtained in structured workshops that have a group setting. This group setting environment can lead to a number of biases. Huddle bias (also known as anxiety bias) refers to a situation described by behavioral scientists where individuals in a group setting tend to avoid conflicts and not express information that is unique because it results from different viewpoints or opinions. An example of a huddle bias would be a situation where junior experts do not voice their opinions in a structured workshop because they do not want to disagree in public with senior experts. Another concern for group environments is the possibility of gaming. Some experts may have ulterior motives for not participating or providing useful information in workshops. Another problem with workshop settings is the fact that top experts in the field may not be willing to join the workshop and prefer to work independently. The lack of top experts then attracts less experienced or junior experts who may have an inexpert opinion. These inexpert opinions can then lead to inaccurate estimates and poor scenario analysis models.
16
Q

Delphi technique

A

One technique that can help in scenario analysis is the Delphi technique. This technique originated from the U.S. Air Force in the 1950s and was designed to obtain the most reliable consensus of opinions from a group of experts.

More specifically, the Delphi technique is often applied in situations that exhibit some of the following issues:

  • Precise mathematical models are not available but subjective opinions can be gathered from experts.
  • Experts have a diverse background of experience and expertise, but little experience in communicating within expert groups.
  • Group meetings are too costly due to time and travel expenses.
  • A large number of opinions is required and a single face-to-face meeting is not feasible.

Under the Delphi technique, information is gathered from a large number of participants across various business units, areas of expertise, or geographical regions. The information is then presented in a workshop with representatives from each area. Recommendations are determined by this workshop group and quantified based on a pre-determined confidence level. A basic Delphi technique commonly goes through the following four steps:

  1. Discussion and feedback is gathered from a large number of participants who may have diverse exposure and experience with particular risks.
  2. Information gathered in step 1 is summarized and presented to a workshop group with representatives from various locations or business units surveyed.
  3. Differences in feedback are evaluated from step 2.
  4. Final evaluation and recommendations are made based on analysis of data and feedback from participants and/or respondents.
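
The sketch below (hypothetical numbers, not from the reading) illustrates the kind of convergence the Delphi rounds aim for: estimates are collected, the group median and spread are fed back, and a revised round typically shows a narrower spread.

```python
# Sketch of Delphi-style aggregation of expert loss estimates: collect a round
# of estimates, feed back the group median and spread, then collect a revised
# round. The estimates below are hypothetical.
from statistics import quantiles

round_1 = [500_000, 2_000_000, 750_000, 5_000_000, 1_200_000]    # initial estimates
round_2 = [900_000, 1_500_000, 1_000_000, 2_000_000, 1_200_000]  # revised after feedback

def summarize(estimates):
    q1, q2, q3 = quantiles(estimates, n=4)
    return {"median": q2, "iqr": q3 - q1}

print("Round 1:", summarize(round_1))   # wide disagreement among experts
print("Round 2:", summarize(round_2))   # narrower spread after feedback
```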
17
Q

Compare the typical operational risk profiles of firms in different financial sectors

A

Basel II defines the following level 1 business units:

  • Trading and Sales,
  • Corporate Finance,
  • Retail Banking,
  • Commercial Banking,
  • Payment and Settlement,
  • Agency Services,
  • Asset Management, and
  • Retail Brokerage.
18
Q

Explain the role of operational risk governance and explain how a firm's organizational structure can impact risk governance

A

There are four main organizational designs for integrating the OpRisk framework within the organization. Most large firms start at design 1 and progress to design 4 over time.

Design 1: Central Risk Function Coordinator

  • The risk manager gathers all risk data and then reports directly to the Chief Executive Officer (CEO) or Board of Directors. Regulators believe there exists a conflict of interest for reporting risk data directly to management or stakeholders that are primarily concerned with maximizing profits. Thus, this design can only be successful if business units are responsive to the Central Risk function without being influenced by upper management who controls their compensation and evaluates their performance.

Design 2: Dotted Line or Matrix Reporting

This type of framework is only successful if there is a strong risk culture for each business unit that encourages collaboration with the Central Risk function. Furthermore, this dotted line structure is preferred when there is a culture of distrust of the Central Risk function based on some historical events.

Design 3: Solid Line Reporting

For larger firms that have centralized management, the solid line reporting is more popular. The solid line indicates that each business unit has a risk manager that reports directly to the Central Risk function. This design enables the Central Risk function to more effectively prioritize risk management objectives and goals for the entire firm. The solid line reporting also creates a more homogeneous risk culture for the entire organization.

Design 4: Strong Central Risk Management

Many large firms have evolved into a strong central risk management design either voluntarily or from regulatory pressure. Under this design, there is a Corporate Chief Risk Officer who is responsible for OpRisk management throughout the entire firm. The Central Risk Manager monitors OpRisk in all business units and reports directly to the CEO or Board of Directors. Regulators prefer this structure because it centralizes risk data, making supervision easier: there is one direct line of risk management rather than numerous risk managers dispersed throughout the firm's various business units.

19
Q

Explain the motivations for using external operational loss data

A

Senior management should take an interest in external events because news headlines can provide useful information on operational risk. Examining events among industry peers and competitors helps management understand the importance of effective operational risk management and mitigation procedures. This is why external data is the key to developing a strong culture of operational risk awareness.

20
Q

Subscription Databases and Consortium Data

A

Subscription databases (for example, IBM Algo FIRST database) include descriptions and analyses of operational risk events derived from legal and regulatory sources and news articles. This information allows firms to map events to the appropriate business lines, risk categories, and causes. The primary goal of external databases is to collect information on tail losses and examples of large risk events.

Besides the FIRST approach to collecting data, there are also consortium-based risk event services that provide a central data repository. Operational Riskdata eXchange Association (ORX) is a provider of this data, which is gathered from members to provide benchmarking.

Unlike subscription services, ORX data does not suffer from the availability bias that skews the FIRST data (which relies on public sources of information). With ORX, all events are entered anonymously into the database; however, the data relates only to a small subset of firms that are members of the consortium.

21
Q

Explain ways in which data from different external sources may differ

A

Differences in the collection methods between the ORX and the FIRST databases have an interesting impact on the relative distribution of the loss data.

Size and Frequency of Losses by Risk Category

  • When comparing the size of losses by risk category in the ORX and FIRST databases, we see that the FIRST database has a significantly higher percentage of losses for Internal Fraud than ORX does. In contrast, ORX has a significantly higher percent of Execution, Delivery, and Process Management losses than does FIRST. This could be because not all Execution, Delivery, and Process Management losses are reported by the press, implying that the FIRST database is missing many events and has an unavoidable collection bias.
  • The primary difference between these two databases with respect to Execution, Delivery, and Process Management events is that ORX data is supplied directly from member banks, which does not include all banks, implying that ORX also suffers from collection bias. This is in contrast to the FIRST database that collects data on all firms, including a significant number of firms outside of Basel II compliance.
  • Regarding the frequency of losses by risk category, Execution, Delivery, and Process Management events are missing from FIRST data, presumably because they rarely get press coverage. ORX has a larger frequency of External Fraud than FIRST, which suggests that such events are often kept from the press. ORX data also shows a large amount of External Fraud due to the participation of retail banks in the consortium.

Size and Frequency of Losses by Business Line

  • When comparing the size of losses by business lines in the ORX and FIRST databases, ORX losses are heavily weighted toward Retail Banking. Also, Commercial Banking accounts for a smaller percentage of losses for ORX than for FIRST, which may be due to recent commercial banking events making it into the press and, therefore, into the FIRST database (but not the ORX database).
  • Regarding the frequency of losses by business line, ORX data is driven by Retail Banking events, whereas FIRST events are more evenly distributed among the various business lines. Also, the majority of events for ORX and FIRST occur in Retail Banking but by a slimmer margin for the FIRST database.
22
Q

Describe the challenges that can arise through the use of external data

A
  • There are several challenges with using external data.
  • For example, external data derived from the media is subject to reporting bias. This is because it is up to the press to decide which events to cover, and the preference is for illegal and dramatic acts.
  • We should also consider that a major gain is less likely to be reported by the media than a major loss.
  • Another barrier to determining whether an event is relevant is that some external events may be ignored because they are perceived as types of events that “could not happen here.”
  • Finally, the use of benchmark data may be a concern because there is a chance that comparisons may not be accurate due to different interpretations of the underlying database definitions.
  • One of the best ways to use external data is not to spot exact events to be avoided but rather to determine the types of errors and control failings that could cause similar losses.
  • External data can serve a valuable role in operational risk management if its limitations are acknowledged.
23
Q

Describe the Societe Generale operational loss event and explain the lessons learned from the event

A

Between July 2005 and January 2008, Jerome Kerviel, a trader at Societe Generale, established large, unauthorized positions in futures contracts and equity securities. To hide the size and riskiness of these unauthorized positions, he created fake transactions that offset the price movements of the actual positions. Kerviel created fake transactions with forward start dates and then used his knowledge of control personnel confirmation timing to cancel these trades right before any confirmations took place. Given the need to continuously replace fake trades with new ones, Kerviel created close to 1,000 fictitious trades before the fraud was finally discovered.

Lessons to be learned specific to this operational loss event include the following:

  • Traders who perform a large amount of trade cancellations should be flagged and, as a result, have a sample of their cancellations reviewed by validating details with trading counterparties to ensure cancellations are associated with real trades.
  • Tighter controls should be applied to situations that involve a new or temporary manager.
  • Banks must check for abnormally high gross-to-net-position ratios. High ratios suggest a greater probability of unauthorized trading activities and/or basis risk measurement issues.
  • Control personnel should not assume the independence of a trading assistant’s actions.
  • Trading assistants often work under extreme pressure and, thus, are susceptible to bullying tactics given that job performance depends on them following direction from traders.
  • Mandatory vacation rules should be enforced.
  • Requirements for collateral and cash reports must be monitored for individual traders.
  • Profit and loss activity that is outside reasonable expectations must be investigated by control personnel and management. Reported losses or gains can be compared to previous periods, forecasted values, or peer performance.
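
As one illustration of the gross-to-net check above (an assumption, not a prescribed control), the sketch below flags traders whose gross exposure is abnormally large relative to their net position.

```python
# Sketch of the gross-to-net position check described above: flag traders whose
# gross exposure is abnormally large relative to their net position. The trader
# data and the flag threshold are hypothetical.

FLAG_RATIO = 20.0   # assumed threshold for "abnormally high"

positions = {
    # trader: (sum of absolute position sizes, net position)
    "trader_A": (2_000_000_000, 150_000_000),
    "trader_B": (5_000_000_000, 40_000_000),
}

for trader, (gross, net) in positions.items():
    ratio = gross / abs(net) if net else float("inf")
    if ratio > FLAG_RATIO:
        print(f"{trader}: gross/net = {ratio:.0f} -> review for unauthorized trading")
    else:
        print(f"{trader}: gross/net = {ratio:.0f} -> within normal range")
```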