
Using Scenario Analysis to estimate Operational Risk Capital

James Elder, Director, Credit Suisse First Boston (OpRisk Europe, March 2004)
The author makes no representation as to the accuracy or completeness of the information provided. The views expressed here are those of the author, and do not necessarily represent those of Credit Suisse First Boston or Credit Suisse Group.

Agenda:
- Background: the fundamental challenges in measuring OpRisk
- The 4 elements of the AMA: practical implementation issues
- The irrelevance of small losses on the capital charge
- Correlation assumptions
- Implementing a scenario-based capital estimation approach
- Conclusion

Operational Risk



A1. Types of OpRisk

Goal of OpRisk management is to reduce the frequency & severity of large, rare events.

Frequency/severity quadrants:
- Large losses, low frequency: MAJOR events (primary challenge). Can put banks (eg. Barings) out of business or severely harm reputation. Difficult to understand and prioritize in advance. Similar to issues faced in several other industries: aviation, healthcare, railways, chemical processing.
- Small losses, high frequency: MINOR events (secondary challenge). Generally not firm-threatening. Experience makes it easier to understand problems, to measure issues & to take relevant action. Can often be incorporated into pricing as a cost of doing business (eg. credit card fraud losses). Addressing them generally generates efficiency savings rather than reducing material risks.
- Small losses, low frequency: Doesn't matter much.
- Large losses, high frequency: Not relevant (otherwise the bank would already be out of business!)


A2. Swiss cheese model - Major OpRisk events

Swiss cheese analogy: holes exist in all systems. The risk of accidents can be mitigated by developing effective defenses-in-depth

Ideal Control Environment

Real Control Environment


Successive layers of protection each designed to protect against the possible breakdown of the one in front

Defensive control layers try to minimize occurrence of large organizational accidents

Potential losses

Major OpRisk events are less likely as they require alignment of holes in successive control layers
e.g. bad person; flawed systems; poor management; weak controls, on a bad day . . .

Some holes from active failures



Some holes due to latent conditions


Source: Swiss cheese model (Adapted from Reason, 1997)

The fundamental challenges in measuring OpRisk


B1. Position equivalence - Lack of a quantifiable size

Both market and credit risk exposures are typically explicit & normally accepted as a result of a discrete trading decision

Risk-taking decisions depend on the ability to measure the risk of a transaction relative to its expected profitability. Credit risk: exposures can be measured as money lent, mark-to-market exposure or potential exposure; the risk can be estimated using credit ratings, market-based models and other tools. Market risk: positions can be decomposed into risk sensitivities and exposures; the risk can be quantified with scenarios, value-at-risk models, etc. Generally, market prices are observable/quoted & frequent

In both market and credit risk there is a direct linkage to the driver of risk, the size of the position, and the level of risk exposure

These risk models allow the user to predict the potential impact on the firm for different risk positions in various market movements

OpRisk is normally an implicit event: it is accepted as part of being in business, rather than part of any particular transaction

No market prices observed/quoted; Relevant data infrequent

There is no inherent size for the OpRisk in any transaction, system, or process

Eg. how much rogue trader risk does a bank have? How much fraud risk? How much could a bank lose from implementing a new IT system incorrectly? Has the risk grown since yesterday? The equivalent position for OpRisk is difficult to identify


B2. Completeness What is the portfolio?

"There are known knowns; there are things that we know that we know. There are known unknowns; that is to say, there are things that we now know we don't know. But there are also unknown unknowns; there are things we do not know we don't know."
Donald Rumsfeld

For both market and credit risk, modelling starts with a known portfolio of risks

It is usually a key test of a bank's risk management systems and processes to ensure that there is complete risk capture

In OpRisk modelling, the portfolio of risks is not available with any reasonable degree of certainty

Even if a bank knows its processes and could ascertain the size of the risk in those processes, it is difficult to identify unknown risks or non-process type risks

Many Major OpRisk events are from unknown or non-process type risks; they are simply outside the bank's normal set of understood risks

Many of the biggest operational risk losses arise from fundamentally new issues, which even the most far-sighted and active risk management might not be able to foresee

Many OpRisk approaches effectively imply the portfolio from historic loss events

Imagine taking this approach to credit risk modelling, ie. deducing the loan portfolio from historic defaults instead of obtaining it from books and records


B3. Context dependency & relevance of loss data

Context dependency describes whether the size or likelihood of an incident varies in different situations. It is important in modelling because it determines how relevant your data are to the current problem

Eg. an analysis of transportation accidents over the last century would clearly contain data that had lost relevance due to different modes of transport, changing infrastructure, better communications, etc.

Consider the following questions: (1) Are your businesses, people or processing systems similar to those of 10 years ago? (eg. many banks have merged and/or materially changed their systems and processes) (2) Are the threats to those systems similar to those of 10 years ago? (eg. did firms worry about internet virus attacks in 1993?) Answering "No" illustrates the high context dependency of OpRisk. Context dependency is driven by how quickly the underlying system or process changes

Market risk: appears to have a moderate level of context dependency, as stock market prices tend to exhibit statistical properties that appear somewhat stable across time. Credit risk: credit ratings and loss statistics have been measured for many decades and show some reliable properties

The level of context dependency has a fundamental impact on the ability to model and validate a system the higher the context dependency the less the past will be a good predictor for the future

High context dependency degrades the relevance of information over time


The 4 elements of the AMA - Practical implementation challenges



C1. The 4 elements of the AMA

A bank's AMA OpRisk model must include the following 4 elements: (1) Internal loss data (2) External loss data (3) Scenario analysis (4) Business environment & internal control factors. There are a number of practical implementation issues with each of these 4 elements: completeness; accuracy; auditability; relevance

[Table: each of the 4 elements rated against completeness, accuracy, auditability and relevance]




Conclusion: Some elements are auditable but not relevant & others are relevant but not auditable

Notes: (1) More difficult to ensure completeness for high-frequency, small-loss Minor events; easier for Major events (2) Low rating as most firms are unlikely to have suffered numerous Major events to provide a sufficient data sample (3) Low/medium rating due to reporting bias and collection bias (4) Medium accuracy and auditability for factors that are countable, but Low otherwise


C2. Relevance of internal/external loss data

Internal loss data: internal loss data is not a good predictor of future events

After a major event, management actions lead to improved controls & a reduced chance of recurrence

Given lack of historic data and scarcity of meaningful data points, data can only be used as input into defining potential types of scenarios

Data generally more relevant at department level for efficiency/process improvements

External loss data: the main purpose is to provide guidance on types of scenarios and parameters, and for lessons learned to identify potential actions to reduce the likelihood of the event occurring within your own organization. There are numerous reporting/data capture issues:

- Reporting bias: relies on companies disclosing significant OpRisk loss events (some events have to be disclosed; others may not) and on events being reported correctly in publicly available documents (figures are often inflated)
- Capture bias: relies on firms accurately capturing OpRisk loss events and amounts from publicly available documents
- After reviewing for relevance (ie. similarity to your firm; whether your firm has a similar business; whether the circumstances surrounding the loss could be repeated at your firm), the number of meaningful data points can be reduced significantly
- The OpRisk event probability can be estimated as the number of events divided by the number of institution-years (typically 1 in 2, 3, 5, 10, 20, 50 years); the number of institution-years is estimated from the number of peer institutions and the number of years over which the external data was likely to be reasonably reliable

External data needs to be cleansed to ensure it is relevant

External data can be used to assist in determining an estimate of the event severity and probability
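The institution-year estimate described above can be sketched in a few lines. The event counts and peer numbers below are illustrative assumptions, not real loss data:

```python
# Sketch of the probability estimate described above: the annual probability of
# a given scenario is approximated by the number of relevant peer events
# divided by the number of institution-years observed.

def annual_event_probability(relevant_events: int,
                             peer_institutions: int,
                             reliable_years: int) -> float:
    """P(event in a year) ~ events / (institutions * years of reliable data)."""
    institution_years = peer_institutions * reliable_years
    return relevant_events / institution_years

# eg. 5 relevant events observed across 10 peers over 10 reliably-reported years
p = annual_event_probability(relevant_events=5, peer_institutions=10, reliable_years=10)
print(p)             # 0.05, ie. roughly a 1-in-20-year event
print(round(1 / p))  # 20
```

The same function applies to any of the typical frequency buckets (1 in 2, 3, 5, 10, 20, 50 years) by varying the event count and data window.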



C3. Scenario analysis

Given the limitations of internal and external loss data, scenario analysis is the most appropriate approach to determining an OpRisk capital charge

Can better take account of context dependency and the evolution of the organization, the business environment and internal control factors

To address the issue of completeness of the portfolio of OpRisk exposure, one needs to list out a set of exposures (and their associated probabilities of occurring)

Primary focus is on the major events, eg. rogue trader, building unavailability, etc.

[Diagram: OpRisk event space containing known risks (Rogue Trading, Misselling, BCP, Risk A, Risk B, ... Risk N) plus an Unexpected Event region covering unknown risks]

Important to note that many of the biggest OpRisk losses arise from fundamentally new issues & are hence difficult to foresee


There will be some element of the event space not covered by a known risk (unknown risks), but with a top-down approach one can include an Unexpected Event scenario




C4. Business environment & internal control factors

There are a number of potential dimensions of business environment and internal control factors, Eg:

Complexity (business/product, technology, business processes, organization, legal entity) Rate of change of markets/products/volume (developing vs matured) Management (centralised vs remote; own managed vs outsourced) Processing maturity (automatic straight-through-processing vs manual) Personnel (level of turnover; level of resourcing; competency of resourcing)

Although it is possible to justify each business environment and internal control factor as a driver of risk, it is generally only possible on a directional basis rather than an absolute basis

Generally more effective to develop action plans and monitor risk reduction of each risk factor to an acceptable risk level

Some elements are auditable at the specific factor level, but it is difficult to translate the factor into an economic amount; it is even harder to aggregate across factors

Eg. what is the economic value of one outstanding confirmation acceptance vs one depot break? Translating factors into economic amounts is necessarily judgmental and qualitative; it is easier for factors that are countable, eg. process type risks, rather than non-process type risks

Can be taken account of, to some extent, in determining scenarios and associated severity and probability parameters



The irrelevance of small losses to capital



D1. Experience/observations of loss data collation

It is important to understand why internal loss data is being collected. How is the data going to be used? Must not confuse data with information
Minor events: the majority of collected events contribute little to the total loss
Major events: relatively few events contribute the majority of the total loss

[Charts: (1) loss frequency by USD loss band, showing lots of data in the small-loss bands and limited data in the large-loss bands; (2) value of losses by USD loss band, showing the small-loss bands carry no information while the large-loss bands carry the more relevant information]

Conclusion: All relevant information is obtained from Major OpRisk loss events
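The banded frequency/value picture above can be re-derived from any raw loss list. A minimal sketch, with invented loss figures:

```python
# Minimal sketch of the loss-band analysis above: count of events and share of
# total loss value per USD loss band. All loss figures are invented.
losses = [500, 800, 1_200, 3_000, 4_500, 9_000, 25_000, 60_000, 250_000, 5_000_000]
bands = [(0, 1_000), (1_000, 10_000), (10_000, 100_000), (100_000, 10_000_000)]

total = sum(losses)
summary = {}
for lo, hi in bands:
    in_band = [x for x in losses if lo <= x < hi]
    summary[(lo, hi)] = (len(in_band), sum(in_band) / total)

for (lo, hi), (n, share) in summary.items():
    print(f"{lo:>9,}-{hi:<10,}  events={n}  value share={share:.1%}")
# The small-loss bands hold most of the events but a tiny share of the total
# value; the largest band holds few events but the bulk of the value.
```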













D2. Experience/observations of loss data collation (2)

The majority of cumulative loss derives from a small number of large events; minor event losses provide limited relevant information

Eg. 90% of cumulative loss comes from approx. the largest 10% of loss events (approx. 20 loss events)

Conclusion: Major event losses drive the capital charge & impact the economic condition of a bank. It is difficult to model OpRisk with only a few relevant data points

[Chart: approx. 40 loss events ranked by size (% of total loss) with the cumulative loss curve overlaid, rising from 0% to 100%; the largest few events account for most of the cumulative total]
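The cumulative ranking exercise above can be sketched on synthetic heavy-tailed losses. The Pareto parameters below are illustrative assumptions, not fitted to any real data:

```python
# Sketch of the cumulative loss ranking: sort losses largest first and compute
# the cumulative share of total loss. Losses are synthetic and heavy-tailed.
import numpy as np

rng = np.random.default_rng(0)
losses = rng.pareto(a=1.2, size=200) + 1  # synthetic heavy-tailed losses

ranked = np.sort(losses)[::-1]            # largest first
cum_share = np.cumsum(ranked) / ranked.sum()

top_10pct = int(len(ranked) * 0.10)
print(f"Share of total loss from largest 10% of events: {cum_share[top_10pct - 1]:.1%}")
```

With a heavy-tailed sample the largest 10% of events typically account for well over half of the total, mirroring the concentration seen in the deck's internal data.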




D3. Small OpRisk losses vs Large OpRisk losses

Shouldn't we collect small OpRisk losses to learn more about potential large OpRisk losses and for better risk management purposes? No. Small losses and large losses are different in character and in how they arise.

Large OpRisk losses
- Events can threaten the capital or solvency of the firm
- Typically result from a combination of control failures across a number of departments
- Examples: rogue trader; fraud; class actions
- Benefit from central collation and analysis to identify cross-department functional issues and lessons learned
- Key difference: you do not get small rogue traders, class actions, etc.

Small OpRisk losses
- Generally result from basic human errors (eg. lack of attention, forgetfulness, poor communication, etc.)
- Incidents typically correspond to single breaches of the control layers; this shows that the successive defense layers provide adequate control to capture upstream control breaches
- Examples: settlement errors; credit card fraud
- Best handled by department experts, with little benefit from central collation/analysis; data collation and actions should be taken at the appropriate level of the organization
- Key difference: you do not get large losses from this type of error



D4. Small OpRisk losses vs Large OpRisk losses (cont.)


Small Losses
- How caused? Small number of control layers breached; generally the control failure is specific to a particular department
- Typical examples: settlement errors
- Appropriate actions: lessons learned only relevant to processes within the particular dept concerned; often the actions taken only require reinforcement of, or minor changes to, controls already in place
- Appropriate loss reporting: escalation only relevant to dept management; department escalation and reporting processes sufficient

Large Losses
- How caused? Typically many control layers breached; control failures cross a number of departments
- Typical examples: fraud; rogue trader; business interruption
- Appropriate actions: lessons learned often read across multiple departments; often require new controls or significant redesign of existing controls
- Appropriate loss reporting: escalation required across departments; central aggregation and reporting required



Correlation assumptions



E1. Correlation assumptions

Correlation should be considered at 2 levels: (1) within a scenario, (2) across scenarios

Correlation within scenarios: 100% correlation between individual control failures. A major event is the combination of individual control failures that alone would not give rise to the incident. Eg. a major rogue trader event is the combination of:
- Lack of supervision
- Poor challenge, issue escalation
- Failure to obtain transaction confirmations
- Failure to obtain independent FX prices
- Failure to analyse P&L

Correlation across scenarios: 0% correlation between major event scenarios. There is no evidence to suggest that OpRisk events are correlated, eg. what is the likelihood of a documentation failure impacting building unavailability?


E2. OpRisk aggregation: Across scenarios

A scatter graph of severity of loss event vs date of arrival shows no pattern, indicating no relationship between one event and another

There is no obvious relationship between the number of loss events and the aggregate value of loss events, suggesting that the level of OpRisk is not related to the number of events suffered

[Charts: (1) scatter of loss amount (log scale) vs date of loss; (2) number of loss events and sum of loss events per quarter, 2001 Q1 to 2003 Q4]

E3. OpRisk aggregation: Across scenarios

There is no evidence to suggest that major OpRisk events are correlated

Correlations are unlikely to be estimable empirically due to the lack of meaningful relevant data. From OpRisk loss data it is possible to estimate the distribution of interarrival times, ie. the days elapsing between each loss event and the next event in sequence. For independent events, interarrival times should be approximately exponentially distributed; fitting an exponential distribution allows the average interarrival time to be estimated.
[Charts: OpRisk loss data for the 3-year period to 31/12/2003: (1) histogram of loss interarrival times in days with a fitted exponential distribution; (2) number of events per month, actual vs Poisson fit]
Conclusion: The evidence confirms the common-sense intuition that OpRisk events are not correlated; OpRisk scenarios should be aggregated with 0% correlation
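The interarrival-time check described above can be sketched as follows. The loss dates here are simulated, purely to illustrate the mechanics of the test:

```python
# Sketch of the interarrival-time check: under independence, event arrivals
# follow a Poisson process, so the gaps between successive events are
# exponentially distributed. Loss dates below are simulated for illustration.
import numpy as np

rng = np.random.default_rng(42)
# Simulate ~3 years of independent event arrivals with a mean gap of 9 days
gaps = rng.exponential(scale=9.0, size=120)
event_days = np.cumsum(gaps)

interarrival = np.diff(event_days)  # recover the gaps from the event dates
mean_gap = interarrival.mean()      # the sample mean is the exponential MLE
print(f"Estimated mean interarrival time: {mean_gap:.1f} days")

# For a Poisson process, the variance of monthly event counts should be close
# to the mean monthly count (index of dispersion near 1)
months = (event_days // 30).astype(int)
counts = np.bincount(months)
print(f"Dispersion (var/mean of monthly counts): {counts.var() / counts.mean():.2f}")
```

On real data, a good exponential fit to the gaps and a dispersion near 1 support the independence assumption; strong clustering of events would show up as excess dispersion.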


Implementing a scenario based estimation approach



F1. Putting it all together & determining a capital charge 4 elements of the AMA


[Diagram: internal loss data, external loss data, and business environment & internal control factors feed the scenario definitions & parameters; these yield scenario exposures & probabilities (scenario exposure amount, probability of 1 in x years); scenarios are aggregated using OPRISK+ into an annual loss distribution, from which the capital charge is derived]



F2. Components of a scenario

Each scenario should use internal loss data, external loss data, and business environment & internal control factors to determine the scenario severity and probability parameters:

- Business environment & internal control factors: What are the business environment and internal control factors that could affect the size and likelihood of loss? Complexity of product/business; pace of change of market or regulation; volumetrics; key risk indicators
- Internal loss data: What losses has the firm experienced for this given scenario? What were the sizes of the losses and the frequency of major events? What management actions have been taken to prevent future occurrence or reduce the potential size of loss?
- External loss data: What major events of this particular scenario have occurred at firms similar to your own? What is the potential range of losses? How frequently have the events occurred? What is the potential loss and likelihood of occurrence for the firm?

[Diagram: the OpRisk event space, with each known risk treated as a scenario (eg. Rogue Trading) carrying a scenario loss ($?m) and a probability (x)]


F3. Calculating the OpRisk capital charge

An actuarial approach is taken: the OpRisk economic capital is calculated using OPRISK+, an event risk model.

Using a credit portfolio analogy, the inputs into the OPRISK+ model are: OpRisk scenario exposures (cf. credit exposures) and OpRisk event probabilities (cf. default probabilities)

A correlation assumption is made that major OpRisk scenario events are independent. A top-down scenario approach considering major events allows this independence assumption to be made; a bottom-up scenario approach looking at the combination of minor OpRisk events would need to consider how these minor events could correlate to generate a major event
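Under the independence assumption above, the aggregation of scenario exposures and probabilities into a loss distribution can be sketched as a simple Monte Carlo. Note the actual OPRISK+ model is analytic, in the CreditRisk+ style; this simulation only mirrors its inputs, and all scenario figures below are invented for illustration:

```python
# Illustrative Monte Carlo aggregation of independent scenario events into an
# annual loss distribution and a high-percentile capital figure.
# (All scenario names, exposures and probabilities are invented.)
import numpy as np

scenarios = [
    # (name, exposure in $m, probability = 1 / x years)
    ("Rogue trader",         500.0, 1 / 50),
    ("Major fraud",          200.0, 1 / 20),
    ("Building unavailable",  80.0, 1 / 10),
    ("Mis-selling",          150.0, 1 / 25),
]

rng = np.random.default_rng(7)
n_sims = 200_000

annual_loss = np.zeros(n_sims)
for _, exposure, p in scenarios:
    # Independence across scenarios: each occurs (or not) on its own in a year
    occurs = rng.random(n_sims) < p
    annual_loss += occurs * exposure

capital_999 = np.quantile(annual_loss, 0.999)
print(f"Mean annual loss: ${annual_loss.mean():.0f}m")
print(f"99.9% capital:    ${capital_999:.0f}m")
```

The 0% cross-scenario correlation assumption enters through the independent draws; a correlated model would instead draw the scenario indicators jointly.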

[Diagram: scenario exposures & probabilities (exposure amount, probability of 1 in x years) feed the annual loss distribution, from which the capital charge is derived]


F4. Building scenarios: an approach

The approach proceeds in 4 steps:

(1) Define OpRisk scenario definitions: understanding of the risks facing the bank; Basel event categories; internal/external loss data to indicate where banks can lose money

(2) Draft scenario templates using a standardised format: description of the scenario risk; description of the primary controls mitigating the risk; summary of internal loss experience related to the scenario risk; summary of relevant external loss experience; description of any relevant business environment and internal control factors affecting the scenario risk or control environment; assumptions used to determine the parameters; summary of scenario parameters (frequency and severity)

(3) Review each scenario template with Business Unit experts: discuss the scenario risk, controls and scenario parameters with relevant line experts utilizing their expert judgment (eg. discuss the Fraud scenario with experts from Legal, Corporate Security and Operations)

(4) Review capital assessments with Senior Management: provides an additional sense check over capital numbers and a comparison check between business units

F5. Benchmarking OpRisk capital results

Validation of OpRisk models is a major challenge: pure statistical validation of OpRisk models may not be possible for many years, probably never. However, there are benchmark tests that can be performed for scenario approaches:
Against internal OpRisk loss data: graphs show the loss distribution from actual internal OpRisk loss data over a 3-year period; no assumptions are made regarding the underlying distribution of events

Against external OpRisk loss data: graph shows loss events for 10 key peers over 10 years (ie. 100 institution-years of relevant data); the 99%-ile capital figure is equivalent to a 1-in-100-year event, allowing a "backtest" of peer loss data (present value) vs the 99% OpRisk capital

[Charts: (1) loss distribution from actual OpRisk losses, marking the internal loss data capital (99.9%) and the ERC scenario capital of $1,434m (99.9%); (2) inflated peer loss amounts by year, 1994-2003, vs the 99% capital]
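The logic of the peer backtest can be sketched as follows. Both the peer losses and the capital figure below are invented placeholders, used only to show the counting argument:

```python
# Sketch of the peer "backtest" above: across ~100 institution-years of peer
# loss data, the number of losses exceeding the 99% capital figure should be
# about one. All figures are invented (present-value $m).
peer_losses = [35, 60, 120, 180, 240, 310, 420, 650, 900, 1_600]
capital_99 = 1_000  # illustrative 99% capital figure

institution_years = 10 * 10  # 10 key peers over 10 years
exceedances = sum(1 for loss in peer_losses if loss > capital_99)

print(f"Exceedances of 99% capital in {institution_years} institution-years: {exceedances}")
print(f"Expected at the 99th percentile: {institution_years * 0.01:.0f}")
```

Materially more than one exceedance would suggest the capital figure is too low; none at all may suggest it is conservative (or that the peer data is incomplete).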



G1. Conclusion
Characteristics of Scenario Approach
A scenario-based capital estimation approach is pragmatic, implementable and cost effective. Sensible capital numbers can be derived in a systematic and transparent manner. The approach is forward-looking, utilizing all types of available data. Expert judgment is used to blend the available data with an understanding of the control environment to produce forward-looking assessments of risk.

Have a go yourself
Re-perform the analysis in this presentation on your own data. What does your loss data tell you?

Frequency plot; Value of losses vs number of losses plot; Cumulative loss ranking; Scatter plot vs time; Interarrival times
