Topics 3-5 Flashcards

Backtesting VaR; VaR Mapping; Messages from the Academic Literature on Risk Measurement for the Trading Book

1
Q

Explain the significant difficulties in backtesting a VaR model

A
  • VaR models are based on static portfolios, while actual portfolio compositions are dynamic and incorporate fees, commissions, and other profit and loss factors. This effect is minimized by backtesting over a relatively short horizon, such as a daily holding period.
  • Another difficulty is that the sample backtested may not be representative of the true underlying risk. The backtesting period constitutes a limited sample, so we do not expect to find the predicted number of exceptions in every sample; the challenge for risk managers is to find an acceptable range of exceptions.
  • Risk managers should track both actual and hypothetical returns against VaR forecasts. Generally, we compare the VaR model to cleaned returns (i.e., actual returns adjusted for items that are not marked to market, such as funding costs and fee income). Both actual and hypothetical returns should be backtested to verify the validity of the VaR model, and the VaR modeling methodology should be revisited if the hypothetical returns fail the backtest.
  • Backtesting can never tell us ex ante with 100.0% confidence whether our model is good or bad. Our decision to deem the model good or bad is itself a probabilistic (less than certain) evaluation.
  • An actual portfolio is “contaminated” by (dynamic) changes in portfolio composition (i.e., trades and fees), but the VaR assumes a static portfolio.
    • Contamination will be minimized only in short horizons
    • Risk manager should track both the actual portfolio return and the hypothetical return (representing a static portfolio)
      • If the model passes backtesting with hypothetical but not actual returns, then the problem lies with intraday trading.
      • In contrast, if the model does not pass backtesting with hypothetical returns, then the modeling methodology should be reexamined.
2
Q

Verify a model based on exceptions or failure rates

A
  • The backtesting period constitutes a limited sample at a specific confidence level. We would not expect to find the predicted number of exceptions in every sample.
  • The failure rate is the percentage of times the loss exceeds VaR in a given sample.
  • The failure rate is the number of exceptions expressed as a proportion of the sample size. The probability of an exception, p, equals one minus the confidence level (p = 1 − c). If N is the number of exceptions and T is the sample size, the failure rate is N / T. This measure is unbiased in the sense that N / T converges to p (not to the confidence level) as the sample size increases. Nonparametric tests can then be used to determine whether the number of times the VaR model fails is acceptable.
  • The confidence level at which we choose to reject or fail to reject a model is not related to the confidence level at which VaR was calculated
  • We verify a model by recording the failure rate, the proportion of times VaR is exceeded in a given sample. Under the null hypothesis of a correctly calibrated model (H0: the model is correct), the number of exceptions x follows a binomial distribution with expected value E(x) = p × T and variance σ²(x) = p × (1 − p) × T, where p is the significance level of the VaR. By characterizing failures with a binomial distribution, we are assuming that exceptions (failures) are independent and identically distributed (i.i.d.) random variables.
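
A minimal numerical sketch of these quantities (the exception count, sample size, and significance level below are made-up illustration values):

```python
# Sketch: failure rate and the binomial moments of the exception count
# under H0 (correct model). Numbers are illustrative only.

p = 0.01    # VaR significance level (99% confidence VaR)
T = 250     # number of backtesting days
N = 4       # observed exceptions (hypothetical)

failure_rate = N / T               # should be close to p for a well-calibrated model
expected_exceptions = p * T        # E(x) = pT = 2.5
variance = p * (1 - p) * T         # sigma^2(x) = p(1 - p)T

print(failure_rate, expected_exceptions, variance)
```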
3
Q

Define and identify Type I and Type II errors

A
  • When determining a range for the number of exceptions that we would accept, we must strike a balance between the chances of rejecting an accurate model (Type I error) and the chances of failing to reject an inaccurate model (Type II error).
  • The goal in backtesting is to use a VaR model with a low Type I error rate and to pair it with a test that has a very low Type II error rate (i.e., high power).
  • The binomial test is used to determine if the number of exceptions is acceptable at various confidence levels.
  • When too many exceptions are observed, the model is “bad” and underestimates risk. This is a major problem because too little capital may be allocated to risk-taking units; penalties also may be imposed by the regulator.
  • When too few exceptions are observed, this is also problematic because it leads to an inefficient allocation of capital across units.
  • A good (aka, accurate) model will produce approximately the number of expected exceptions.
  • We cannot eliminate the possibility of a decision error:
    • If we analyze the backtest results and decide to accept the null (i.e., accept that the VaR model is correct), we necessarily risk a Type II error, because it remains statistically possible for a bad VaR model to produce an unusually low number of exceptions (in a sense, our actual results were lucky). Notice that if we decide the model is good, this decision cannot produce a Type I error.
    • If we analyze the backtest results and decide to reject the null (i.e., reject the VaR model as bad), we necessarily risk a Type I error, because it remains statistically possible for a good VaR model to produce an unusually high number of exceptions (in a sense, our actual results were unlucky). Notice that if we decide the model is bad, this decision cannot produce a Type II error.
  • Under the dubious assumption of independence (recall the binomial assumes i.i.d.), the binomial model can be used to test whether the number of exceptions is acceptably small. If the number of observations is large, we can approximate this binomial with the normal distribution using the central limit theorem. Jorion provides the shortcut based on the normal approximation:

z = (x − pT) / [p(1 − p)T]^0.5 ≈ N(0,1)

where:

  • x - the number of exceptions
  • p - the significance level of VaR
  • T - time horizon of the backtest
  • z - the test statistic for the backtest confidence level (if the test is two-tailed, use the corresponding two-tailed critical value)
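
A minimal sketch of this normal-approximation test (the exception count and the 95% two-tailed cutoff are illustrative assumptions):

```python
from math import sqrt

# Sketch: Jorion-style normal-approximation backtest. Illustrative inputs.
p = 0.01      # VaR significance level
T = 250       # backtesting days
x = 8         # observed exceptions (hypothetical)

z = (x - p * T) / sqrt(p * (1 - p) * T)

# Two-tailed test at 95% backtest confidence: reject the model if |z| > 1.96
reject_model = abs(z) > 1.96
print(round(z, 2), reject_model)
```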
4
Q

Unconditional Coverage

A
  • The term unconditional coverage refers to the fact that we are not concerned with the independence or timing of exception observations; we are concerned only with the total number of exceptions. We would reject the hypothesis that the model is correct if LRuc > 3.84 (the 95% chi-squared critical value with one degree of freedom).
  • Increasing the sample size allows us to reject the model more easily.
  • It is difficult to backtest VaR models constructed with higher confidence levels, because the number of exceptions is often not high enough to provide meaningful information.
  • The tail points of the unconditional log-likelihood ratio use a chi-squared distribution with one degree of freedom when T is large and the null hypothesis is that p is the true probability or true failure rate.

* The chi-squared test statistic is the square of the normal distribution test statistic.
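
For reference, the Kupiec unconditional coverage statistic takes the standard log-likelihood ratio form below (not printed on the card; N is the number of exceptions, T the sample size, and p the VaR significance level):

LRuc = −2 ln[(1 − p)^(T−N) × p^N] + 2 ln[(1 − N/T)^(T−N) × (N/T)^N]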

5
Q

Using VaR to Measure Potential Losses

A

Oftentimes, the purpose of using VaR is to measure some level of potential losses. There are two theories about choosing a holding period for this application.

The first theory is that the holding period should correspond to the amount of time required to either liquidate or hedge the portfolio. Thus, VaR would calculate possible losses before corrective action could take effect.

The second theory is that the holding period should be chosen to match the period over which the portfolio is not expected to change due to non-risk-related activity (e.g., trading).

The two theories are not that different. For example, many banks use a daily VaR to correspond with the daily profit and loss measures. In this application, the holding period is more significant than the confidence level.

6
Q

Conditional Coverage

A
  • Conditioning considers the time variation of the data. In addition to having a predictable number of exceptions, we also anticipate the exceptions to be fairly equally distributed across time. A bunching of exceptions may indicate that market correlations have changed or that our trading positions have been altered.
  • By including a measure of the independence of exceptions, we can measure the conditional coverage of the model. Christoffersen proposed extending the unconditional coverage test statistic (LRuc) to allow for potential time variation of the data. He developed a statistic to determine the serial independence of deviations using a log-likelihood ratio test (LRind). The overall log-likelihood test statistic for conditional coverage (LRcc) is then computed as: LRcc = LRuc + LRind
  • At the 95% confidence level, we would reject the model if LRcc > 5.99, and we would reject the independence term alone if LRind > 3.84. If exceptions are determined to be serially dependent, then the VaR model needs to be revised to incorporate the correlations that are evident in the current conditions.
  • The test for conditional coverage should be performed when exceptions are clustered together.
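
For reference, one common form of Christoffersen's independence statistic (not printed on the card) uses the transition counts Tij — the number of days on which state j follows state i, where state 1 is an exception and state 0 is no exception — with π0 = T01/(T00 + T01), π1 = T11/(T10 + T11), and π = (T01 + T11)/T:

LRind = −2 ln[(1 − π)^(T00 + T10) × π^(T01 + T11)] + 2 ln[(1 − π0)^T00 × π0^T01 × (1 − π1)^T10 × π1^T11]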
7
Q

Describe the Basel rules for backtesting

A
  • The Basel Committee is primarily concerned with identifying whether exceptions are the result of bad luck (Type I error) or a faulty model (Type II error). The Basel Committee requires that market VaR be calculated at the 99% confidence level and backtested over the past year. At the 99% confidence level, we would expect to have 2.5 exceptions (250 × 0.01) each year, given approximately 250 trading days.
  • To mitigate the risk that banks willingly commit a Type II error and use a faulty model, the Basel Committee designed the Basel penalty zones. Banks are penalized for exceeding four exceptions per year.
  • Regulators are more concerned about Type II errors.
  • Exceptions may be excluded if they are the result of bad luck following from an unexpected change in interest rates or exchange rates, a political event, or a natural disaster. Bank regulators keep the description of such exceptions intentionally vague to allow adjustments during major market disruptions.
  • Industry analysts have suggested lowering the required VaR confidence level to 95% and compensating by using a greater multiplier. Exceptions would then be more frequent, making it easier to detect and reject inaccurate models.
  • Another way to make variations in the number of exceptions more significant would be to use a longer backtesting period. This approach may not be as practical because the nature of markets, portfolios, and risk changes over time.
  • The exceptions in the yellow zone are especially inconclusive (i.e., both accepting and rejecting the model leave a non-trivial possibility of error), so the yellow zone delegates the penalty decision to the supervisor.
8
Q

Four categories of causes for exceptions by Basel Committee

A

The penalty (raising the multiplier from three to four) is automatically required for banks with 10 or more exceptions. However, the penalty for banks with five to nine exceptions is subject to supervisory discretion, based on what type of model error caused the exceptions. The Committee established four categories of causes for exceptions and guidance for supervisors for each category:

  • The basic integrity of the model is lacking. Exceptions occurred because of incorrect data or errors in the model programming. The penalty should apply.
    • The bank’s systems simply are not capturing the risk of the positions themselves; e.g., the positions of an overseas office are being reported incorrectly.
    • Model volatilities and/or correlations were calculated incorrectly; e.g., the computer is dividing by 250 when it should be dividing by 225.
  • Model accuracy needs improvement. The exceptions occurred because the model does not accurately describe risks. The penalty should apply.
  • Intraday trading activity. The exceptions occurred due to trading activity (VaR is based on static end-of-day portfolios). The penalty should be considered.
    • There was a large (and money-losing) change in the bank’s positions or some other income event between the end of the first day (when the risk estimate was calculated) and the end of the second day (when trading results were tabulated).
  • Bad luck. The exceptions occurred because market conditions (volatility and correlations among financial instruments) significantly varied from an accepted norm. These exceptions should be expected to occur at least some of the time. No penalty guidance is provided.
9
Q

Explain the principles underlying VaR mapping, and describe the mapping process

A

Value at risk (VaR) mapping involves replacing the current values of a portfolio’s positions with exposures to a set of common risk factors; each position is mapped to those risk factors by means of its factor exposures. Mapping involves finding common risk factors among positions in a given portfolio. If we have a portfolio consisting of a large number of positions, it may be difficult and time consuming to manage the risk of each individual position. Instead, we can evaluate these positions by mapping them onto common risk factors (e.g., changes in interest rates or equity prices). By reducing the number of variables under consideration, we greatly simplify the risk management process.

Mapping can assist a risk manager in evaluating positions whose characteristics may change over time, such as fixed-income securities. Mapping can also provide an effective way to manage risk when there is not sufficient historical data for an investment, such as an initial public offering (IPO).

The principles for VaR risk mapping are summarized as follows:

  • VaR mapping aggregates risk exposure when it is impractical to consider each position separately.
  • VaR mapping simplifies risk exposures into primitive risk factors.
  • VaR risk measurements can differ from pricing methods where prices cannot be aggregated.
  • VaR mapping is useful for measuring changes over time, as with bonds or options.
  • VaR mapping is useful when historical data is not available.
10
Q

Explain how the mapping process captures general and specific risks

A

The types and number of risk factors we choose will have an effect on the size of residual or specific risks.

Specific risks arise from the unsystematic, asset-specific risks of the various positions in the portfolio. The more precisely we define the common risk factors, the smaller the residual specific risk.

Basel II requires a charge for specific risk in the following instances:

  • Fixed-income positions under the standardized method
  • Equity positions under the standardized method
  • Internal models approach (IMA): the supervisor may allow the bank to capture specific risk within its internal VaR model, subject to conditions; otherwise, the bank must still buffer against idiosyncratic risk by applying the specific risk charge under the standardized approach.
11
Q

Differentiate among the three methods of mapping portfolios of fixed income securities

A

The three methods of mapping for fixed-income securities are:

  • Principal mapping. This method includes only the risk of repayment of principal amounts. For principal mapping, we consider the average maturity of the portfolio. VaR is calculated using the risk level from the zero-coupon bond that equals the average maturity of the portfolio. This method is the simplest of the three approaches.

Principal mapping effectively “strips out” the coupons and considers only the principal repayments at maturity. With principal mapping, only one risk factor is chosen, corresponding to the average portfolio maturity.

  • Duration mapping. With this method, the risk of the bond is mapped to a zero-coupon bond of the same duration. For duration mapping, we calculate VaR by using the risk level of the zero-coupon bond that equals the duration of the portfolio. Note that it may be difficult to calculate the risk level that exactly matches the duration of the portfolio.

Duration mapping also maps to only one risk factor but it is the portfolio’s duration, which improves on principal mapping because coupons are incorporated.

Under duration mapping, the portfolio is mapped to a single risk factor: the portfolio’s average (modified) duration.

* Modified duration = Macaulay duration / (1 + YTM/n), where n is the number of coupon periods per year

  • *Macaulay duration is the present-value-weighted average time to the portfolio’s cash flows (often loosely described as the portfolio’s weighted average maturity)*
  • Cash flow mapping. With this method, the risk of the bond is decomposed into the risk of each of the bond’s cash flows. Cash flow mapping is the most precise method because we map the present value of the cash flows (i.e., face amount discounted at the spot rate for a given maturity) onto the risk factors for zeros of the same maturities and include the intermaturity correlations.

Cash-flow mapping consists of grouping all cash flows onto term-structure vertices that correspond to maturities for which volatilities are provided (e.g., 1-, 2-, 3-, 4-, and 5-year maturities). The portfolio maps to multiple “primitive” risk factors.

The undiversified portfolio VaR implicitly assumes perfect correlation among the risk factors and is therefore simply the sum of the individual VaRs.

The diversified portfolio VaR utilizes the correlation matrix.
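
A minimal sketch of the undiversified versus diversified calculation (the vertex VaRs and correlation matrix are made-up illustration values):

```python
import numpy as np

# Sketch: diversified vs. undiversified VaR for cash flows mapped to
# term-structure vertices. All inputs are illustrative assumptions.

# Individual VaRs of the present values mapped to each vertex (in dollars)
vertex_var = np.array([10.0, 15.0, 20.0])

# Correlation matrix across the vertices (assumed)
corr = np.array([
    [1.00, 0.90, 0.80],
    [0.90, 1.00, 0.95],
    [0.80, 0.95, 1.00],
])

undiversified_var = vertex_var.sum()                       # assumes perfect correlation
diversified_var = np.sqrt(vertex_var @ corr @ vertex_var)  # uses the correlation matrix

print(undiversified_var, diversified_var)
```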

12
Q

Explain how VaR can be used as a performance benchmark

A

It is often convenient to measure VaR relative to a benchmark portfolio. This is what is referred to as benchmarking a portfolio. Portfolios can be constructed that match the risk factors of a benchmark portfolio but have either a higher or a lower VaR.

The VaR of the deviation between the two portfolios is referred to as tracking error VaR. In other words, tracking error VaR is the VaR of the position differences between the target portfolio and the benchmark portfolio, not simply the difference between their standalone VaRs.
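
As a rough normal-approximation formulation (not printed on the card), with e = RP − RB denoting the return deviation from the benchmark and W the portfolio value:

Tracking error VaR ≈ α × σ(e) × W, where α is the standard normal deviate for the chosen confidence level.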

13
Q

Mapping forwards

A
14
Q

Mapping of commodity forwards

A
15
Q

Mapping FRAs

A

The forward rate agreement can be decomposed into two zero coupon building blocks. If the position is long a 6 x 12 FRA, the building blocks are: Long 6-month bill plus short 12-month bill.

The key to mapping a complex or esoteric instrument is to first decompose the instrument into two or more constituent instruments. In other words, handle complex instruments by breaking them into a combination of simple instruments.

  • Consider a “fixed-for-floating” swap where we pay a fixed rate (as a percentage of a notional amount) in exchange for receiving an indexed rate such as LIBOR.
    • The swap is broken into two building blocks: a floating-rate note (the indexed leg we receive) and the “left over” fixed cash flows (the leg we pay), which resemble a fixed-coupon bond.

The change in an option’s price/value can be approximated by taking partial derivatives. A long position in a call option consists of two building blocks:

  • a long position in the underlying stock equal to delta (Δ) shares, plus
  • a short position in a bill (i.e., a borrowing) equal to [(Δ × share price) − value of the call]
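
A minimal sketch of that delta decomposition (the price, delta, and call value are made-up illustration inputs):

```python
# Sketch: decompose a long call into its delta-mapping building blocks.
# Inputs are illustrative assumptions, not market data.

S = 100.0      # underlying share price
delta = 0.54   # option delta
c = 6.0        # call value

stock_leg = delta * S        # long delta shares of the underlying
bill_leg = delta * S - c     # short position in a bill (a borrowing)

# At first order, the long stock leg minus the bill leg replicates the call value:
print(stock_leg, bill_leg, stock_leg - bill_leg)   # 54.0 48.0 6.0
```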
16
Q

Explain the following lessons on VaR implementation: time horizon over which VaR is estimated, the recognition of time varying volatility in VaR risk factors, and VaR backtesting

A
  • There is no consensus regarding the proper time horizon for risk measurement. Thus, there is not a universally accepted approach for aggregating various VaR measures based on different time horizons.
  • The commonly used square-root-of-time (“square root rule”) scaling rule has been found to be an inaccurate approximation in many studies. This rule ignores future changes in portfolio composition (see the brief sketch after this list).
  • Time-varying volatility results from volatility fluctuations over time. The effect of time-varying volatility on the accuracy of VaR measures decreases as time horizon increases. However, volatility generated by stochastic (i.e., random) jumps will reduce the accuracy of long-term VaR measures unless there is an adjustment made for stochastic jumps. It is important to recognize time-varying volatility in VaR measures since ignoring it will likely lead to an underestimation of risk. In addition to volatility fluctuations, risk managers should also account for time-varying correlations when making VaR calculations.
  • To simplify VaR estimation, the financial industry tends to use short time horizons. It is preferable to instead allow the risk horizon to vary based on specific investment characteristics. When computing VaR over longer time horizons, a risk manager needs to account for the variation in a portfolio’s composition over time.
  • Backtesting is not effective when the number of VaR exceptions is small. In addition, backtesting is less effective over longer time horizons due to portfolio instability. VaR models tend to be more realistic if time-varying volatility is incorporated; however, this approach tends to generate a procyclical VaR measure and produces unstable risk models due to estimation issues.
  • The power of backtests can be improved modestly through the use of conditional backtests or other techniques that consider multiple dimensions of the data, such as the timing of violations or the magnitude of the VaR exceptions; however, no consensus has emerged on a single best approach.
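
A minimal sketch of the square-root-of-time scaling referred to in the first bullet (illustrative inputs; the rule is only exact for i.i.d. returns and a static portfolio, which is why the studies above find it inaccurate):

```python
from math import sqrt

# Sketch: scaling a 1-day VaR to a 10-day horizon with the square-root rule.
# Inputs are illustrative.
one_day_var = 1_000_000.0   # 1-day VaR (dollars)
horizon_days = 10

ten_day_var = one_day_var * sqrt(horizon_days)   # ~3.16 million
print(round(ten_day_var))
```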
17
Q

Describe exogenous and endogenous liquidity risk and explain how they might be integrated into VaR models

A

Two types of liquidity risk are exogenous liquidity and endogenous liquidity. Both types of liquidity are important to measure; however, academic studies suggest that risk valuation models should first account for the impact of endogenous liquidity.

Exogenous liquidity is handled through the calculation of a liquidity-adjusted VaR (LVaR) measure, and represents market-specific, average transaction costs. The LVaR measure incorporates a bid/ask spread by adding liquidity costs to the initial estimate of VaR. A common approach treats the exogenous cost of liquidity (COL) as one-half the shocked spread.

In the COL formula, Pt is the asset mid-price, μ is the mean bid/ask spread (as a percentage of the mid-price), and σ is the standard deviation of the spread. A common specification of the formula referred to here is COL = 0.5 × Pt × (μ + k × σ), where k is a scaling factor chosen to capture extreme spread widening (often on the order of 2 to 3).
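
A minimal sketch of that liquidity add-on (all inputs, including the spread scaling factor k, are illustrative assumptions):

```python
# Sketch: liquidity-adjusted VaR (LVaR) with an exogenous cost-of-liquidity add-on.
# All numbers are illustrative assumptions.

var = 200_000.0                # initial VaR estimate (dollars)
position_value = 1_000_000.0   # Pt times position size, in dollars

mu_spread = 0.0040             # mean proportional bid/ask spread (0.40%)
sigma_spread = 0.0010          # standard deviation of the spread
k = 3.0                        # spread shock multiplier (assumed)

col = 0.5 * position_value * (mu_spread + k * sigma_spread)   # cost of liquidity
lvar = var + col
print(col, lvar)               # 3500.0 203500.0
```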

Endogenous liquidity is an adjustment for the price effect of liquidating positions. It depends on trade size and is relevant when market orders are large enough to move prices; in effect, it is the elasticity of prices to trading volumes. It is most easily observed in situations of extreme liquidity risk, characterized by the collective liquidation of positions or when all market participants react in the same way.

Endogenous liquidity risk is most applicable to exotic/complex trading positions and is especially relevant in high-stress market conditions; however, endogenous liquidity costs are present in all market conditions.

Endogenous liquidity effects are particularly important when:

  • the underlying asset is not very liquid;
  • the size of the position is important with respect to the market;
  • large numbers of small investors follow the same hedging strategy;
  • the market for the underlying of the derivative is subject to asymmetric information, which magnifies the sensitivity of prices to clusters of similar trades.
18
Q

Expected shortfall vs. VaR

A

Expected shortfall is more complex and computationally intensive than VaR; however, it corrects for some of VaR’s drawbacks. Namely, it accounts for the magnitude of losses beyond the VaR threshold, and it is always subadditive.

In addition, the application of expected shortfall will mitigate the impact that a specific confidence level choice will have on risk management decisions.

19
Q

Stress testing

A

It is important to incorporate stress testing into risk models by selecting various stress scenarios. Three primary applications of stress testing exercises are as follows:

  1. Historical scenarios, which examine previous market data.
  2. Predefined scenarios, which attempt to assess the impact on profit/loss of adverse changes in a predetermined set of risk factors.
  3. Mechanical-search stress tests, which use automated routines to cover possible changes in risk factors.

When VaR is computed and analyzed, it is generally under relatively normal market conditions, so it may not be accurate in a more stressful environment. A stressed VaR approach, which attempts to account for a period of significant financial stress, has not been thoroughly tested or analyzed. Thus, VaR could lead to inaccurate risk assessments under market stress.

20
Q

Compare unified and compartmentalized risk measurement

A

Unified and compartmentalized risk measurement methods aggregate risks for banks.

A compartmentalized approach sums risks separately, whereas a unified, or integrated, approach considers the interaction among risks.

A unified approach considers all risk categories simultaneously. This approach can capture possible compounding effects that are not considered when looking at individual risk measures in isolation. For example, unified approaches may consider market, credit, and operational risks all together.

When calculating capital requirements, banks use a compartmentalized approach, whereby capital requirements are calculated for individual risk types, such as market risk and credit risk. These stand-alone capital requirements are then summed to obtain the bank’s overall level of capital.

Thus, the overall Basel approach to calculating capital requirements is a non-integrated approach to risk measurement.

Empirical studies suggest that the magnitude of diversification benefits – that is, the amount by which aggregate risk is below the sum of individual risks – depends upon the level at which risks are measured. At higher levels of aggregation (e.g., at the holding company level), the benefits are more often detected; however, at a lower (e.g., the portfolio) level, risk compounding can become predominant.

21
Q

Compare the results of research on “top-down” and “bottom-up” risk aggregation methods

A

The top-down approach to risk aggregation assumes that a bank’s portfolio can be cleanly subdivided according to market, credit, and operational risk measures. In contrast, a bottom-up approach attempts to account for interactions among the various risk factors. Top-down approaches always reference the institution as a whole, whereas bottom-up approaches can range from the portfolio level up to the institution level.

In order to assess which approach is more appropriate, academic studies calculate the ratio of unified capital to compartmentalized capital (i.e., the ratio of integrated risks to separate risks).

  • Top-down studies calculate this ratio to be less than one, which suggests that risk diversification is present and is ignored by the separate approach.
  • Bottom-up studies also often calculate this ratio to be less than one, however, this research has not been conclusive, and has recently found evidence of risk compounding, which produces a ratio greater than one. Thus, bottom-up studies suggest that risk diversification should be questioned.

It is often assumed to be conservative to evaluate market risk and credit risk independently and then add the results. However, most academic studies conclude that market risk and credit risk interact and should be assessed jointly.

If a bank is unable to completely separate risks, the compartmentalized approach will not be conservative enough; the lack of complete separation could lead to an underestimation of risk. In this case, bank managers and regulators should conclude that the bank’s overall capital level should be higher than the sum of the capital calculations derived from the individual risks.

The bank’s capital is mis-measured if risk interdependencies are ignored. The addition of economic capital for interest rate and credit risk derived separately provides an upper bound relative to the integrated capital level.

Two key factors determine this outcome:

  1. First, the credit risk in this bank is largely idiosyncratic and thus less dependent on the macroeconomic environment.
  2. Second, bank assets that are frequently repriced lead to a reduction in bank risk.
22
Q

Describe the relationship between leverage, market value of asset, and VaR within an active balance sheet management framework

A
  • When a balance sheet is actively managed, the amount of leverage on the balance sheet becomes procyclical. Academic studies have shown that balance sheet adjustments made through active risk management affect risk premiums and total financial market volatility.
  • Leverage (measured as total assets to equity) is inversely related to the market value of total assets: when net worth rises because asset values are rising, leverage decreases; when net worth declines because asset values are falling, leverage increases. With active balance sheet management, this produces a cyclical feedback loop in which asset purchases increase when asset prices are rising and assets are sold when asset prices are declining.
  • Consider an intermediary who actively manages its balance sheet to maintain constant leverage. If asset prices rise, the bank can take on additional debt. If asset values shrink, leverage rises, but the bank can return to its targeted leverage ratio by selling securities.
  • But “this kind of behavior leads to a destabilizing feedback loop, because it induces an increase in asset purchases as asset prices are rising and a sale of assets when prices are falling. Whereas the textbook market mechanism is self-stabilizing because the reaction to a price increase is a reduction in quantity demanded and an expansion in quantity supplied, and to a price decrease an expansion in quantity demanded and a contraction in quantity supplied, active balance sheet management reverses this self-stabilizing mechanism into a destabilizing positive feedback loop.”

Since VaR per dollar of assets is countercyclical, it directly follows that leverage is procyclical.
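
A minimal numerical sketch of the constant-leverage feedback described above (all balance sheet figures are made-up illustration values):

```python
# Sketch: a bank targeting constant leverage responds to rising asset prices
# by buying more assets funded with new debt. Figures are illustrative.

assets, equity = 100.0, 10.0
debt = assets - equity                 # 90.0
target_leverage = assets / equity      # 10x

assets *= 1.01                         # asset prices rise 1% -> assets = 101.0
equity = assets - debt                 # the gain accrues to equity -> 11.0
print(round(assets / equity, 2))       # leverage has fallen to ~9.18

# To restore the 10x target, the bank buys assets funded with new debt:
new_assets = target_leverage * equity  # 110.0
new_debt = new_assets - equity         # 99.0
print(new_assets - assets)             # ~9.0 of additional purchases while prices are rising
```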

23
Q

What is the key change for backtesting requirements under the fundamental review of the trading book (FRTB; aka, Basel IV) in comparison to Basel III (which itself essentially incorporated the previous quantitative requirements for backtesting under the internal models approach (IMA) to market risk)?

A

One-day value at risk (VaR) is backtested at both 97.5% and 99.0% confidence levels

24
Q

The four primary issues addressed by VaR mapping

A
  1. Lack of price history (including IPOs)
  2. Dynamic (as opposed to static) exposures
  3. Size, complexity and/or granularity of portfolio
  4. Stale prices
25
Q

Generic VaR mapping steps?

A

Generic VaR mapping steps are as follows:

  1. Mark positions to market
  2. For each position, allocate market value to one or more primitive risk factor(s)
    • 2a. If current market value is not fully allocated, it must mean the remainder is allocated to cash
  3. For each risk factor, sum the allocated positions
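
A minimal sketch of these steps for a toy two-position portfolio (position values and factor allocations are made-up illustration inputs):

```python
# Sketch: generic VaR mapping -- mark positions to market, allocate each
# position's market value to primitive risk factors, then sum per factor.
# All values and weights are illustrative assumptions.

positions = {
    "corp_bond": {"value": 1_000_000.0, "weights": {"5y_rate": 0.9, "credit_spread": 0.1}},
    "equity_fund": {"value": 500_000.0, "weights": {"equity_index": 0.8}},  # 20% unallocated
}

factor_exposures = {}
for name, pos in positions.items():
    allocated = 0.0
    for factor, w in pos["weights"].items():
        amount = w * pos["value"]
        factor_exposures[factor] = factor_exposures.get(factor, 0.0) + amount
        allocated += amount
    # Step 2a: any unallocated remainder is treated as a cash exposure
    factor_exposures["cash"] = factor_exposures.get("cash", 0.0) + (pos["value"] - allocated)

print(factor_exposures)
```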