
A Perfect Storm –

Why are some Operational Losses larger than others?


Patrick McConnell,
July 2006

Abstract
In mid 2004, after a lengthy period of industry consultation, the Basel Committee finally released
its definitive rules on capital charges for Operational Risk under Basel II. In its proposals for
allowing banks to calculate regulatory capital using their own internal models, the Basel
Committee specified very stringent ‘quantitative standards’ for modelling operational risk. This
has led to much discussion in the industry as to the types of statistical tools that could be used to
calculate operational risk capital, under the so-called Loss Distribution Approach (LDA).
Advanced techniques, such as Extreme Value Theory (EVT), have been proposed to satisfy the
new regulatory requirements.

However, LDA approaches make the assumptions that all loss events are drawn from the same
underlying ‘model’, and can be grouped using the Basel II classification of ‘Loss Event Types’.
The paper challenges these assumptions, specifically for the cases of the very largest losses,
which have been shown to have a disproportionately high impact on quantitative measures
resulting in inflated capital calculations. Using published examples, the paper argues that many
of the very largest operational risk losses do not fit easily into the very broad ‘one size fits all’
Basel II event type classification, and, in fact, cut across many of the mandated categories. In
statistical terms, these events should properly be considered as ‘outliers’ that should be removed
from statistical analysis of the underlying distribution and addressed using other techniques.

Furthermore the paper argues that, for some of these large events, a measurement approach that
attempts to measure risk to an unrealistic level of precision (e.g. to the 99.9th percentile
demanded by Basel II) introduces ‘moral hazards’, encouraging managers to claim that risks
have been fully mitigated to the (somewhat arbitrary) regulatory standard, rather than address the
serious issues underlying these events. The illusory search for precision is no better illustrated
than in the case of the real, but highly uncertain, potential for an Avian Flu Pandemic, where
epidemiologists and medical experts are working within a six point scale for primary risks (death
etc.) whereas banks are required to estimate capital to cover secondary impacts (i.e. monetary
losses) to a precision of 1 in 1,000 – clearly an unattainable task!

This paper argues that much more research is needed into the quantitative and
qualitative standards proposed by Basel II, and is offered as a contribution to these debates. In particular,
the paper argues for much more detailed and specific research into how best to manage risks that
are real but difficult to measure under Basel II, in effect, arguing for expanding and
strengthening Pillars 2 and 3 of Basel II.

Keywords
Basel II, Operational Risk, Advanced Measurement Approach (AMA), Extreme Value Theory
(EVT), Moral Hazard, Business Continuity

Introduction
In June 2004, the Basel Committee released the ‘Revised Framework for the International
Convergence of Capital Measurement and Capital Standards’, which specified the definitive
rules on capital charges for Operational Risk under Basel II (Basel 2004). Under proposals for
allowing “internationally active” banks to calculate regulatory capital using their own internal
models – so-called AMA (Advanced Measurement Approaches) – the Basel Committee specified
that such AMA models must be based on a 99.9th percentile confidence interval of a distribution
of Operational Losses constructed from internal and external loss data.

While the Basel Committee stresses the importance of ‘qualitative standards’ for banks that wish
to use an AMA1 for management of their operational risks, in particular the use of “scenario
analysis”, it also imposes stringent “quantitative standards” as regards the construction of
internal models to estimate capital to the 99.9th percentile.

During the development of the Basel II proposals, much work was done on identifying
potentially useful statistical methods for estimating this 99.9% value, leading to an industry
consensus that EVT (Extreme Value Theory) could be applied to satisfy the Basel quantitative
standards. EVT, which is used extensively in the insurance industry for modelling potential
losses arising from “extreme events”, is a particularly appealing theory because it is possible,
given certain assumptions about the underlying data, to derive a closed form equation for the
99.9% ‘Value at Risk’ or VAR (see Appendix A).
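For illustration only, the sketch below (in Python) shows one way such a closed-form POT/EVT quantile estimate might be computed; the function name pot_var, the use of scipy's genpareto maximum-likelihood fit, and the assumption of a non-zero shape parameter are choices made for the example rather than anything prescribed by Basel II or by Appendix A.

```python
import numpy as np
from scipy.stats import genpareto

def pot_var(losses, threshold, q=0.999):
    """Closed-form Peaks-Over-Threshold (POT) estimate of the q-quantile of a loss distribution.

    Sketch only: assumes exceedances over `threshold` are approximately
    Generalised Pareto distributed with a non-zero shape parameter (xi != 0).
    """
    losses = np.asarray(losses, dtype=float)
    excess = losses[losses > threshold] - threshold
    n, n_u = losses.size, excess.size
    # Fit the GPD to the exceedances, with the location parameter fixed at zero
    xi, _, sigma = genpareto.fit(excess, floc=0)
    # Standard POT quantile estimator: u + (sigma/xi) * [((n/n_u) * (1 - q))**(-xi) - 1]
    return threshold + (sigma / xi) * (((n / n_u) * (1.0 - q)) ** (-xi) - 1.0)
```

Even in this toy form, the estimate depends heavily on the choice of threshold and on the handful of observations above it, which is precisely the robustness problem discussed below.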

However, while it has been shown that operational losses, across the industry, can, for certain
types of losses, be modelled using EVT, it is also apparent that for individual institutions there is
insufficient data to use these techniques in a robust manner2.

As part of the on-going research called for by the Basel Committee, this paper considers some
important questions raised in key parts of the Basel II proposals, in particular the implications of
extremely large losses, such as Barings, for modelling operational risk.

After summarising the Basel II proposals on Operational Risk, the paper discusses some of the
characteristics of operational risk ‘loss events’ that must be addressed when developing
quantitative models, such as EVT. The paper then addresses the question of whether some of the
largest loss events are ‘extremes’ of an underlying distribution that can be modelled using
statistical techniques, such as EVT, or whether they are ‘outliers’ that are substantially different
to most other loss events. The Basel II categorization of “Loss Event Types”, upon which
distributions must be modelled, is then described and the paper argues that this ‘one size fits all’
classification is insufficient for discriminating between regular ‘run of the mill’ and very large
loss events.

1. Note that many of the same qualifying criteria also apply to the use of the Standardised Approach (SA) in calculating operational risk capital for Basel II.
2. EVT theory and modelling techniques are well developed in the insurance sector because, while extreme events, such as Hurricane Katrina, may be very rare, there are a very large number of insurance losses that satisfy the assumptions of the underlying models.

To illustrate the problem, the paper then considers the ‘characteristics’ of some of the very
largest operational risk losses that have been reported in the past and argues that these events are
much more complex than the simple classification proposed by Basel II. Moreover, they violate
the underlying assumption that they are part of the same statistical distribution, as required by
modelling techniques such as EVT. Furthermore, the paper argues that in some circumstances,
using capital alone to manage some of these events creates ‘moral hazards’, encouraging
managers to apply inappropriate techniques to incomplete models to give the illusion of covering
losses, to an unattainable precision.

The paper also argues that some categories of loss events that will incur capital under the Basel
II ‘one size fits all’ classification, such as catastrophic damage to premises and external fraud,
are better handled using conventional insurance techniques, given that the global insurance
industry will have more information on, and better capacity for hedging, such risks. Furthermore
for some risks, such as Infrastructure Disasters, the best-managed banks are being unnecessarily
penalised. Having prudently purchased insurance and invested heavily in Business Continuity
Planning to the levels now mandated by regulators, they must also incur an additional somewhat
arbitrary capital charge, i.e. a form of ‘self insurance’ that they cannot hedge.

Recognising that extreme loss events that are difficult to anticipate do occur, unfortunately with
some regularity in the industry, the paper argues that the Pillars 2 and 3 requirements of Basel II
be strengthened and that a much clearer distinction be made between ‘measurable’ and ‘non-
measurable’ operational risks, which may need to be managed in distinctly different ways.

Operational Risk Management under Basel II


The final Basel II proposals stipulated that an Operational Risk Management ‘system’ must be
implemented by an independent operational risk management function responsible for
developing and implementing “strategies, methodologies and risk reporting systems … to
identify, measure, monitor and control/mitigate operational risk” (Basel 2004).

To qualify to use an AMA approach to calculate operational risk capital under Basel II, a bank
must meet stringent “qualitative standards”, in summary (Basel 2004, section 666):

• An independent operational risk management function.
• An operational risk measurement system that is closely integrated into the day-to-day risk management processes of the bank.
• Regular reporting of operational risk exposures to business units, senior management, and the Board, with procedures for appropriate action.
• The operational risk management system must be “well documented”.
• Regular reviews of the operational risk management processes/systems by internal and/or external auditors.
• Validation of the operational risk measurement system by external auditors and/or supervisory authorities, in particular, making sure that data flows and processes are transparent and accessible.

The Basel Accord also details a series of “quantitative standards” that apply to operational risk
capital calculations, which local banking regulators, such as the Australian Prudential Regulation
Authority (APRA), have expanded for the banks in their jurisdictions (APRA 2005, sections 6, 15
& 16):
“- A bank3 must be able to demonstrate to APRA that its operational risk regulatory
capital requirement as determined by the bank’s operational risk measurement model
meets a soundness standard comparable to a one-year holding period and a 99.9 per cent
confidence level [authors’ emphasis]. In other words, the bank’s operational risk
measurement model must capture an appropriately robust set of operational risk-related
events that can lead to severe and rare operational risk losses. To do this, the bank’s
operational risk measurement model must be sufficiently granular to capture the major
drivers of operational risk affecting the shape of the tail of the bank’s operational loss
distribution. The bank’s operational risk measurement system must also be sufficiently
comprehensive to capture all material sources of operational risk across the bank.
- Irrespective of the bank’s risk measurement approach, the bank will be expected to
establish a distribution of aggregated potential operational risk losses across the bank or
a set of operational risk loss distributions for sub-parts of the bank’s operations.
- Where a single distribution is assumed for the purpose of determining the bank’s
operational risk regulatory capital requirement, a bank will be required to demonstrate to
APRA, on the basis of quantitative and qualitative considerations, that the distribution is
appropriate for all of the bank’s material operational risk exposures.”

The questions raised by these quite stringent requirements include:
1. What would operational risk ‘loss distributions’ look like? And
2. What quantitative tools and techniques are available to identify such distributions and to
calculate the necessary percentiles?

In developing the Accord, the Basel Committee identified a number of mechanisms for
calculating the 99.9th percentile confidence interval, in particular the so-called “Loss Distribution
Approaches” or LDA. An LDA, which was the main quantitative approach identified by Basel,
was described as (Basel 2001):
“Under loss distribution approaches, banks estimate, for each business line/risk type cell,
or group thereof, the likely distribution of operational risk losses over some future
horizon (for instance, one year). The capital charge resulting from these calculations is
based on a high percentile of the loss distribution … this overall loss distribution is
typically generated based on assumptions about the likely frequency and severity of
operational risk loss events. In particular, LDAs usually involve estimating the shape of
the distributions of both the number of loss events and the severity of individual
events. These estimates may involve imposing specific distributional assumptions
(for instance, a Poisson distribution for the number of loss events and lognormal
distribution for the severity of individual events) or deriving the distributions empirically
through techniques such as boot-strapping and Monte Carlo simulation.”
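As a purely illustrative sketch of the simulation described in this quotation, the fragment below aggregates a Poisson-distributed number of lognormally distributed losses by Monte Carlo and reads off a high percentile of the resulting annual loss distribution; the parameter values, the function name lda_capital and the 99.9% level are assumptions made for the example, not calibrated figures.

```python
import numpy as np

rng = np.random.default_rng(42)

def lda_capital(lam, mu, sigma, n_sims=100_000, q=0.999):
    """Monte Carlo Loss Distribution Approach: Poisson frequency, lognormal severity (sketch)."""
    counts = rng.poisson(lam, size=n_sims)                        # number of losses in each simulated year
    annual = np.array([rng.lognormal(mu, sigma, n).sum() for n in counts])
    return np.quantile(annual, q)                                 # high percentile of aggregate annual loss

# Hypothetical parameters: roughly 50 losses a year with heavy-tailed severities
print(f"Simulated capital at the 99.9th percentile: ${lda_capital(50, 12.0, 2.0):,.0f}")
```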

The Basel committee noted in 2001 that “at present, several kinds of loss distribution approach
methods are being developed and no industry standard has yet emerged”. Moreover, it should be
noted that no specific references were made to LDAs in the final Basel 2004 document, implying
that an industry-wide consensus on quantitative standards had not emerged in the intervening
years.

3. Note that APRA uses the terminology Authorised Deposit-taking Institution (ADI) to refer to banks under its jurisdiction.

The most popular approach to developing an LDA in the industry has been ‘Extreme Value
Theory’, or EVT, using theories and actuarial techniques widely employed in the insurance
industry. EVT is supported by well-developed statistical theories that, as the name suggests, can
be applied to modeling certain types of “extremal events” (Embrechts et al 2003).

This paper concentrates on the quantitative standards required by Basel II, while acknowledging
that qualitative approaches are equally, if not more, important, but must be the subject of further
research. And, while also acknowledging that EVT may, in the future, prove useful in measuring
Operational Risk, the paper argues that some of the key underlying assumptions necessary to use
EVT for modeling severe and rare operational losses remain to be tested. After giving a brief
description of EVT (which is expanded in more quantitative language in Appendix A), the paper
looks at some of the characteristics of very large operational losses and questions the
applicability of current approaches to Operational Risk measurement in these cases.

Operational Losses in Financial Institutions


It is generally believed (and has been observed in practice) that operational losses in financial
institutions follow a pattern of a large number of relatively small losses, and a very small number
of losses that are very large, triggering senior management attention and often appearing in
negative press comments.

Figure 1 below shows three charts using actual losses recorded by a medium-sized international
bank over a period of 5 years. Note that the values and dates of these losses have been adjusted
by constant factors to ‘disguise’ the identity of the bank concerned:
• 1.A is a plot of all recorded operational losses over time, showing: many small losses
(close to the zero value line); a number of losses between 1 and 10 million dollars; and
here one very large (extreme?) loss. An arbitrary ‘threshold’ line is shown at around $5
million - see Appendix A for discussion.
• 1.B is a histogram of the frequency of these losses, showing that most losses have a
small value; here, over 95% of losses are less than $5 million.
• 1.C is a histogram of the severity of the losses illustrating that a small number of very
large losses accounts for a significant percentage of total losses – i.e. less than 5% of
losses account for over 35% of total value. One loss in particular is much larger, here
over $50 million, accounting for over 11% of the total.

Table 1 below summarises the percentage frequency and value of these losses.

                                      % Frequency    % Value
Small (< $1 million)                       83%          34%
Medium ($1 million to $5 million)          13%          30%
Large (> $5 million)                        4%          36%
Largest single loss                     < 0.25%         11%

Table 1 – Summary of Operational Loss Data
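A profile of this kind can be computed directly from a bank's loss register; the sketch below mirrors the structure of Table 1 for an arbitrary array of loss amounts (the $1 million and $5 million band boundaries follow the table; everything else, including the function name loss_profile, is illustrative).

```python
import numpy as np

def loss_profile(losses):
    """Frequency and value shares by loss-size band, mirroring Table 1 (sketch only)."""
    losses = np.asarray(losses, dtype=float)
    bands = {
        "Small (< $1m)":     losses[losses < 1e6],
        "Medium ($1m-$5m)":  losses[(losses >= 1e6) & (losses <= 5e6)],
        "Large (> $5m)":     losses[losses > 5e6],
    }
    total_n, total_v = losses.size, losses.sum()
    for name, group in bands.items():
        print(f"{name:20s} {group.size / total_n:6.1%} of events  {group.sum() / total_v:6.1%} of value")
    print(f"{'Largest single loss':20s} {1 / total_n:6.1%} of events  {losses.max() / total_v:6.1%} of value")
```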

Such a pattern of operational losses is not unique to the bank in this case and has been identified
elsewhere by several researchers, including Moscadelli (2004), Medova and Kyriacou (2001),
Embrechts et al (2004), Coleman (2002), Ebnother et al (2001) and DeFontnouvelle et al (2003).

Figure 1 – Example Operational Loss Data - Disguised

[Figure 1 comprises three panels: Fig. 1.A, a time-series plot of all operational losses from June 2000 to May 2005 (values up to just over $50 million), with the arbitrary threshold marked at around $5 million and the single extreme loss labelled; Fig. 1.B, a histogram of the number of losses by loss size ($ millions); and Fig. 1.C, a histogram of the percentage of total loss value by loss size ($ millions), which is dominated by the extreme loss.]

In statistical terms, such a pattern would suggest that operational losses are drawn from a ‘heavy
tailed’ distribution. If so, then it should be possible to identify, at the very least, the class of
distributions that would describe these operational losses and, using that knowledge, estimate the
‘distribution parameters’ that, along with the appropriate formula for the Cumulative Distribution
Function (CDF), would allow the 99.9th percentile to be calculated to a known confidence
interval. Examples of heavy (or ‘fat’) tailed distributions include, most importantly, the so-called
Extreme Value Distributions (the Gumbel, Weibull and Fréchet distributions), as well as the
Lognormal and Pareto distributions (Vose 1996).

Appendix A summarizes current thinking in modelling operational risk losses assuming heavy-
tailed distributions, in particular assuming the Generalised Pareto Distribution (GPD) and using
the so-called Peaks Over Threshold (POT) method. While acknowledging that EVT is a sound
theory, Appendix A notes that use of the POT method is extremely sensitive to the overall
number of data points in the distribution (i.e. the total number of losses) and the choice of
‘threshold’ cut-off (i.e. the number of losses in the ‘tail’).
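That sensitivity can be illustrated by re-estimating the quantile for several candidate thresholds, as in the sketch below (the POT estimator from the earlier sketch is restated here so that the fragment is self-contained); all of the data is synthetic and the thresholds arbitrary, so the output demonstrates only how much the figure, and the number of tail points supporting it, can move.

```python
import numpy as np
from scipy.stats import genpareto

def pot_quantile(losses, u, q=0.999):
    """POT estimate of the q-quantile for threshold u (sketch; assumes a GPD tail, xi != 0)."""
    excess = losses[losses > u] - u
    xi, _, sigma = genpareto.fit(excess, floc=0)
    return u + (sigma / xi) * (((losses.size / excess.size) * (1 - q)) ** (-xi) - 1)

losses = np.random.default_rng(0).lognormal(mean=12.0, sigma=2.0, size=2_000)   # synthetic loss history
for u in np.quantile(losses, [0.90, 0.95, 0.99]):                               # candidate thresholds
    n_tail = int((losses > u).sum())
    print(f"threshold ${u:>12,.0f}   tail points {n_tail:4d}   99.9% quantile ${pot_quantile(losses, u):>15,.0f}")
```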

This paper does not address the highly technical question of ‘how many data points are enough
for EVT analysis?’ but questions whether the most obviously extreme events should be included
in any analysis.

Extremes or Outliers?
Classical statistical treatment of operational risk losses assumes that losses are drawn from the
same ‘idealised model’ of random events and that all events are equally important. The
histograms in Figures 1.B and 1.C show that the data is very heavily concentrated on the left
(mostly below $1 million) with only a small number on the right, in particular one very large
loss, of over $50 million. Can we assume that this event is from the same ‘model’ as other losses
or is it an aberration - in statistical terms, is it an outlier? Outliers are observations that are very
different from the rest of the data and hence worthy of special consideration, and can occur as a
result of “data collection/recording errors, problems of group or correlation or because they
violate underlying model assumptions” (Chernobai and Rachev 2006).

In conventional statistical analysis, outliers would be removed from the data set and distribution
statistics (mean, standard deviation etc.) would be estimated without taking these observations
into account. Removing outliers is however drastic and not to be taken lightly. The theories of
‘Robust Statistics’ attempt to bridge the gap between the ‘classical’ approach of including all
data (whether suspect or not) and the draconian approach of removing all observations that are
considered outliers. Chernobai and Rachev (2006) discuss this dilemma and describe the
methods that may be used to first identify outliers, using objective ‘diagnostic techniques’, and
then to determine the sensitivity of the underlying model to removing one or more outliers.
Applying robust methods to a public ‘external’ loss database, Chernobai and Rachev show that
about 5% of the data appears to be ‘contaminated’ in the right tail of the distribution and that if
that data were to be ‘trimmed’, i.e. removed, the resulting capital charges would be significantly
reduced. Of course, without detailed analysis of the ‘trimmed’ data it would be difficult to argue
whether it should be excluded or not.
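The effect Chernobai and Rachev describe can be demonstrated crudely with synthetic data: the sketch below compares a simulated aggregate 99.9% quantile using a full, heavy-tailed severity sample against the same calculation with the top 5% of severities ‘trimmed’; every number in it is made up for illustration and the result says nothing about any particular loss database.

```python
import numpy as np

rng = np.random.default_rng(1)
severities = rng.lognormal(12.0, 2.3, 3_000)                    # synthetic, heavy-tailed severity sample
trimmed = np.sort(severities)[:-int(0.05 * severities.size)]    # drop the largest 5% of observations

def annual_quantile(sev, lam=60, q=0.999, n_sims=20_000):
    """99.9% quantile of simulated aggregate annual loss, resampling severities empirically (sketch)."""
    counts = rng.poisson(lam, n_sims)
    totals = np.array([rng.choice(sev, n).sum() for n in counts])
    return np.quantile(totals, q)

print(f"99.9% quantile, full sample   : ${annual_quantile(severities):,.0f}")
print(f"99.9% quantile, top 5% trimmed: ${annual_quantile(trimmed):,.0f}")
```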

In some respects, the question of whether or not to include obviously extreme events/outliers in a
statistical analysis is of little consequence in itself, because any attempt to answer it can only
result in a better understanding of the events that give rise to extreme losses. If it can be shown
that some, or all of, these extreme losses are, in fact, radically different to other losses in the
distribution then both statistical theory and common sense would dictate that these outliers be
removed and analysed using different methods (statistical or qualitative). On the other hand, if it
can be shown that the outliers follow the same ‘model’ as other losses then any statistical
analysis and resulting capital calculations will be more robust. This paper discusses some
characteristics of extreme losses using examples from recent history to illustrate relevant points,
in an attempt to set a framework for removing or including them in capital charge models.

Operational Loss Classification


Table 2 below shows a subset of the Basel II detailed ‘Loss Event Type Classification’ (Basel
2004 – Annex 7). The Basel committee and local regulators, e.g. APRA (2005), require that
banks, wishing to apply for AMA accreditation for regulatory capital calculations, must be “able
to map its historical internal loss data into the relevant level 1 [author’s emphasis] supervisory
categories” (Basel 2004, 673).

The ‘level 1’ categories mandated are very broad indeed covering, as can be seen, very different
types of activities that could lead to losses of very varying severity. It is also obvious that even
at the ‘activity level 3’ an activity could result in very different economic consequences for a
bank.

Table 2 – Operational Loss Event Type Classification – Subset - Annex 7 Basel (2004)


A real example illustrates the wide ranges of losses that can occur within just one ‘level 3
activity’ in this classification:
• In April 2003, the US securities regulator, the Securities and Exchange Commission
(SEC) reported that fines, ‘disgorgements’ and other funds of over $1.4 billion were
levied against ten large securities firms for breaches related to “undue influence of
investment banking interests on securities research at brokerage firms”. Of these
penalties, for example, a total of $400 million was levied against the brokerage arm of
Citigroup (SEC 2003).
• In October 2003, Citigroup also paid a fine of $1 million levied by the NYSE “for failing
to properly supervise activities in an Atlanta branch”, a relatively small amount for a
comparatively minor infraction.

These two losses would probably4 be classified under the Basel II categorization, for capital
analysis purposes, as:
• Level 1 - Clients, Products and Business Practices;
• Level 2 - Improper Business or Market Practices; and
• Level 3 - Improper Trade/Market Practices.

At level 1, the ‘one size fits all’ Basel II loss event type classification fails to discriminate
between two very different losses, in size and cause, and without much better information, it
would be a brave risk analyst who contended that these two losses followed the same ‘model’
and that the $400 million loss was not an outlier in the distribution of losses due to regulatory
fines.

It is also obvious that, even at levels of classification lower than that required by Basel II, it is
extremely difficult to satisfy another Basel II requirement:
“A bank’s risk measurement system must be sufficiently ‘granular’ [author’s emphasis]
to capture the major drivers of operational risk affecting the shape of the tail of the loss
estimates.”

As argued later, to properly understand very large operational losses they must be analysed in
detail and in isolation; because of their complexity, they would most likely fall into categories
containing too few observations to form a meaningful distribution for statistical analysis, or to
differentiate between the ‘major drivers’ of operational risk.

Characteristics of Extreme Loss Events


As illustrated above, very large losses are difficult to classify into Basel II groupings sufficient
for meaningful statistical analysis. Nonetheless, it is possible to identify ‘characteristics’ of
widely reported large loss events which ultimately may lead to a better understanding of the
‘major drivers’ of operational risk as required by Basel II.

This paper suggests that ‘characteristics’ of many widely reported extreme operational loss
events might be categorised as:

4. Much more information about these losses would be required to place them accurately even within the broad categories of Basel II.

• Perfect Storm: a ‘wicked’ permutation of factors that come together and, in a largely
unforeseen scenario, precipitate a very large loss, examples include Barings;
• Ethical Meltdown: a widespread failure of ethical values across the industry, examples
include the misuse of research to promote inappropriate investments;
• Infrastructure Disaster: a widespread disruption to financial infrastructure across large
sections of the industry, examples include terrorist attacks and natural disasters;
• Learning Curve: losses that occur as a consequence of innovation; examples include
development of new products or implementation of new technologies.

These characteristics are expanded below, however it should be noted that, by its nature, this list
is not comprehensive but subject to enhancement and modification as extreme loss events are
analysed in more detail.

A Perfect Storm
A ‘Perfect Storm’ is a term that has come to mean the confluence of a number of different
seemingly innocuous factors to create a ‘once in a lifetime’ catastrophic event. Derived from a
best-selling book and film (Junger 1998), the original ‘Perfect Storm’ occurred in 1991 when
several low-intensity weather events in the North Atlantic happened to collide to create a massive
and unfortunately deadly super storm. This, of course, is precisely the type of event that lends
itself to EVT analysis, as such analysis can be based on years of data on weather systems in the
particular region and the ‘random’ probability that multiple systems would combine into an
‘extreme’ storm.

The extremely large losses experienced by Barings ($1.2 billion), AIB ($750 million) and NAB
(A$360 million) are better understood than most other large operational risk loss events in
banking because they were all the subject of independent well-documented inquiries.
McConnell (1998, 2003 and 2005) summarises these inquiries and draws parallels between the
cases. The losses were caused and exacerbated by a combination of factors or ‘risk drivers’:
• Fraudulent/improper activity on the part of one person or group – primarily to protect
bonuses;
• Trading in derivative securities – in particular ‘selling’ options in volatile markets;
• A major market movement precipitating massive losses in options trading;
• Non adherence to critical policies and procedures, in particular trade confirmation; and
most importantly
• An aberrant ‘corporate culture’ that did not encourage open questioning about the risks
being taken and encouraged imprudent risk taking in search of higher profits.

In addition, there was evidence, in all cases, of collusion (or at least turning a blind eye) by
external parties and downright ignorance by senior management of the nature of the risks being
taken.

While primary responsibility was pinned on the now-infamous ‘rogue traders’, these major
events do not fit easily into the Basel II Loss Event Classification scheme, comprising several
elements of many of the Level 1 categories, including: ‘Internal Fraud’, ‘External Fraud’,
‘Employment Practices’ and several different sub-categories of ‘Execution, Delivery and Process
Management’. Nor is it easy to allocate losses between these Basel II categories, since if any
one causal agent had not been present the losses would almost certainly have been much smaller or
even non-existent. More importantly, one of the major risk drivers in all of these cases, that of an
aberrant corporate culture, is not even recognised explicitly in Basel II.

In summary, the losses from these events result from a complex interaction between different
organizational, personal and market factors that cannot easily be fitted into any simple
classification scheme.

Nor are these the only cases that exhibit such complexity. While few large-scale operational loss
events have the degree of scrutiny applied to these three cases, there are other well-publicised
cases where losses cannot easily be pigeonholed into a single Basel II category:
• Metallgesellschaft – a subsidiary of Deutsche Bank (1993): losses of over $1.4 billion
due to ‘model error’ resulting from incorrect assumptions about futures prices in energy
markets; there was no evidence of internal fraud but as with Barings it was obvious that
senior management did not understand, or chose to ignore, the risks that traders were
taking.
• Bankers Trust (1993): losses of over $400 million due to selling ‘inappropriate’
derivative products to clients; while there was no internal fraud involved, this case
appears to be a combination of aggressive selling to clients who did not understand new
types of complex derivatives and the impact of large scale market movements.
• Kidder Peabody (1994): losses of over $350 million due to alleged concealment of
trading losses to protect bonuses; as with Barings it appears that the pursuit of profits
blinded senior management to the need to closely scrutinise complex bond trading
strategies.
• Daiwa Bank (1995): losses of over $1.1 billion due primarily to fraudulent trading by an
employee to cover trading losses; since the unauthorised trading activity had been going
on for 11 years, one must conclude that lax managerial oversight and failure to follow
and monitor policies were major contributors to these losses.
• Republic Securities (1999): losses of over $600 million and loss of trading licence due to
its support of fraudulent trading by a broker (Cresvale, Tokyo) for which it provided false
documentation to the broker’s clients and regulators. The broker’s fraudulent activities
were known to the management of the firm’s futures division and persisted for several
years and hence should have been picked up by management and auditors.
• China Aviation Oil (2004): losses of over $500 million due to fraudulent trading by an
employee to cover energy trading losses; as with other cases it appears that there was lax
managerial oversight of the trader’s activities.

While naively most of these cases could be attributed to a Basel II Level 1 classification of
‘Internal Fraud’, they obviously are much more complex and do not fit such a simple model but
are the result of a complex combination of factors, personalities, and market conditions.
Although broadly similar, they are sufficiently different to defy simplistic classification; deep
analysis is needed to learn lessons from such failures and more importantly to identify lessons
for risk management.

Ethical Meltdown
In the early 2000s, the investment banking industry was hit by a series of scandals concerning
improper activities in the US securities market, including:
• Inappropriate use of Investment Research;
• Preferential allocations of shares in new Initial Public Offerings (IPO), so-called ‘spinning’;
• Inappropriate pricing of Mutual Funds; and
• Inappropriate behaviour in interest rate auctions.

While such unethical, if not downright illegal, practices had been going on for several years
during the unprecedented boom in US stock prices in the late 1990s, they were finally exposed
by the failures of several large companies, including WorldCom and Enron, and publicised by
the crusading efforts of the New York State Attorney General, Eliot Spitzer.

After prolonged negotiations, a formal legal settlement was agreed in early 2003 between a
number of banking and securities regulators and ten of the largest bank-owned securities firms,
following investigations into “allegations of undue influence of investment banking interests on
securities research at brokerage firms … and ‘spinning’ of ‘hot’ IPOs” (SEC 2003). Although
none of the firms ‘admitted or denied’ any wrongdoing, the total amount of fines and
‘disgorgements’ in the settlement amounted to a staggering $1.4 billion, with individual
amounts ranging from $80 million to $400 million. In addition, the firms agreed to make structural
changes to the role of their research departments and agreed to desist from spinning IPOs. The
firms censured in this settlement include some of the largest banks in the world, including CSFB,
Citigroup, UBS, J.P. Morgan and Goldman Sachs.

On a technical issue, it is difficult to argue that the severity of these losses reflects purely random
instances drawn from a homogeneous distribution of typically insignificant losses related to
regulatory fines5. If this were so, it would be highly improbable that identical fines would be
levied on very different firms for different activities, such as the identical $80 million fines on
four firms in the SEC research settlement (SEC 2003). At the very least, these identical losses
would violate the ‘independent’ condition of the ‘independent and identically distributed’ (iid)
assumption underlying EVT analysis.

In July 2003, two large banks paid fines totalling $301 million, without admitting or denying
misconduct, for activities related to Enron and in May 2004, Citigroup agreed to pay $2.65
billion in respect of activities related to WorldCom (Forbes 2004). In 2005, regulators also
censured a number of firms and fined them a total of $81 million for “improper sales” of shares
in mutual funds to investors.

These massive losses to the banks concerned bear little resemblance to the bulk of regulatory
fines levied for minor infractions of regulatory rules. They represent nothing less than a
meltdown of core ethical values across the investment banking industry. Such a catastrophic
breakdown in the assumptions underlying fair-trading in some of the largest markets in the world
raises very serious questions not only about the activities of many highly-paid individuals but
also banking governance across the industry6. These are not issues that can be solved by
ensuring that banks hold sufficient operational risk capital, however large.

5. Such as, for example, an SEC fine of $2 million in July 2004 against Goldman Sachs for violations of “the waiting period for marketing an IPO before a registration became effective”.
6. Nor is such a situation unique; a similar series of scandals relating to the widespread mis-selling of pension funds was revealed in the UK financial markets and documented in parliamentary reports such as the House of Commons Treasury Select Committee - Ninth Report, November 1998, available at the UK Treasury web site.

The problem with using operational risk capital to act as a deterrent in such cases is that it creates
a form of moral hazard, by limiting managements’ responsibilities for curbing such egregious
excesses at an arbitrary 99.9th percentile.

Sheedy (1999) describes the dangers in attempting to measure risks that are difficult to measure:
“Quantifying risks that are not suitable for precise measurement can create further moral
hazard; the process of quantification can create a false sense of precision and a false
impression that measurement has by necessity resulted in management. Managers,
wrongly thinking that operational risk has been addressed, may reduce their vigilance in
this area, creating an environment where losses are more likely to occur.”

The ethical lapses described above should not be acceptable at the 99.9999 recurring percentile
and regulators must ensure that banks eliminate them altogether through vigorous enforcement of
sound ethical standards. Capital may have a role to play but intense regulatory and market
scrutiny, as in Pillars 2 and 3 of Basel II, are more likely to ameliorate the impact of similar
situations in future.

Infrastructure Disasters
It is rightly the concern of regulators to protect the vital technology infrastructure that underpins
all modern banking systems. The terrorist attacks of 9/11 in New York and July 2005 in London
struck at major financial institutions with massive human and economic costs. And while natural
disasters, such as Hurricane Katrina, had little impact on the global financial system, regional
banks suffered significant losses due to damage to premises and technology. Banks have also
suffered losses due to disruption of vital services, such as the failure of electricity supplies across
the North Eastern United States in August 2003.

Under Basel II, banks are required to record losses resulting from such events, for the purposes
of capital calculation, under the categories of “Damage to Physical Assets” and “Business
Disruption and System Failures”. Arguments similar to those above can be made as to whether
such wide-spread disruptions can be analysed using statistical tools or whether they are, in fact,
very different to run-of-the-mill loss events, such as a fire in bank premises. Whether appropriate or
not, this paper makes a different argument.

Business Continuity Planning (BCP) is a recognised discipline in banks (and other industries)
focussed on minimising disruption to a firm’s business resulting from man-made and natural
disasters. Banking, securities and insurance regulators take BCP very seriously and lay down
strict guidelines as to what levels of BCP are expected of regulated entities (Joint Forum 2005b).
National regulators, such as APRA, have expanded the principles into mandatory ‘prudential
standards’ with follow up assessments of capabilities against requirements (APRA 2005b).

BCP is not only a regulatory issue but one that is also of interest to shareholders, and bank boards
have authorised major investment in, for example, building Disaster Recovery (DR) data centres
to house fully operational copies of their key technologies. In fact, it was the lessons of 9/11 that
enabled banks to recover easily from disruptions caused by the London terrorists’ attacks of July
2005 (Allen 2005).


The potential impact of an Avian Flu Pandemic raises serious issues for regulators and banks -
though obviously dwarfed by the human costs that would occur if a pandemic were to break out.
A pandemic, if it were to occur, would have a major impact on the operational capabilities of
individual banks and the entire banking infrastructure and, since it would fall under the Basel II
categorisation of Damage to Physical Assets (in this case, staff), it must be part of a bank’s capital
charge calculation to a 99.9% confidence interval (McConnell 2005a). There is, of course, no
comparable history of losses in the recent past, and hence little assistance that quantitative
analysis can give. The international risk classification of pandemics, not just Avian Flu, is
calibrated on a 6-point scale issued by the World Health Organization (WHO), i.e. to a precision
of +/- 16%. If epidemiological experts work to such a gross scale, it is unlikely that operational
risk analysts would, with much less information, be able to calculate risks to a precision of
99.9%. Clearly the potential losses from an Avian Flu Pandemic are not capable of being
measured to the precision required by Basel II, drawing the inference that the capital charge
regime has failed in its first real test.

Like any prudent business, banks also insure their operations against losses resulting from
disasters, laying off these risks to others better able to measure and manage them – the insurance
industry. Insurance premia are expensive and increasing; for example, insurance accounted for
1.6% of total general expenses for National Australia Bank in 2004 (NAB 2004).

In addition to the significant costs of BCP initiatives and insurance premia, banks are, under Basel
II, further required to estimate the capital required to cover 99.9% of losses due to damage to
physical assets and business disruption.

It is not obvious what this additional capital is meant to cover, even if it were possible to derive a
meaningful estimate of the amount needed in the first place. If a firm does not have a fully
compliant BCP plan in place and tested according to regulators’ standards, then the obvious
action is for regulators to force the necessary improvements to the plan. Likewise, if firms are
under-insured then the firm’s auditors should specifically address this issue. Purchasing
additional insurance is more efficient than the ‘self insurance’ implied by capital charges,
because it draws on the vast amount of additional information available to insurers and, more
importantly, on insurers’ superior capacity to hedge such risks through global diversification.
Requiring all banks to partially self-insure disaster risks will, in the absence of full information,
result in ‘excess capital’ being held in the system, which creates inefficiencies in the market.

Learning Curve
It is well known that most new ventures will experience ‘teething problems’ when, due to
unfamiliarity with new processes, materials and tools, unexpected difficulties occur. After some
time, the number of problems tends to decrease and the new venture settles down. This ‘learning
curve’ phenomenon is well known in engineering and is modelled by Reliability Theory, which
attempts to measure the impact of failures of components in complex mechanical/ electronic
systems.

Research and experience has found that the likelihood of a component failure at a particular time
(the instantaneous failure or "hazard" rate) follows what is called a "bath tub" curve, as
illustrated in Figure 2. The bathtub curve has three distinct periods:
1. Learning: (also called burn-in/infant mortality) during which failures occur relatively
frequently due to inexperience or quality problems;

2. Maturity: (also called useful life) during which failures occur infrequently and
randomly; and
3. Wear Out: during which failures occur because components have reached the end of
their useful lives.

[Schematic plot of failure rate against time, showing a high initial ‘Learning’ phase, a flat ‘Maturity’ phase and a rising ‘Wear Out’ phase.]
Figure 2 – Bath Tub Curve

The curve in Figure 2 can be described by the Reliability Function (Dhillon 1981):

h(t) = k\lambda c\,t^{c-1} + (1-k)\,b\,t^{b-1}\,\beta e^{\beta t^{b}} \qquad (1)

where b, c, β, λ > 0, 0 ≤ k ≤ 1, and h(t) is the hazard rate, i.e. the expected number of failures in the interval (t, t+∆t).

The overall shape of the curve is determined by the parameters:
• c and λ, which determine the shape and scale of the "learning" period;
• b and β, which determine the shape and scale of the "wear out" period; and
• k, which determines the length of the "maturity" phase.
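The following is a minimal sketch of equation (1), coded directly from the formula above; the parameter values are chosen purely to reproduce the three bath-tub phases and have no empirical basis.

```python
import numpy as np

def bathtub_hazard(t, k=0.6, lam=0.02, c=0.5, beta=0.001, b=3.0):
    """Hazard rate h(t) from equation (1) (Dhillon 1981); parameter values are illustrative only."""
    return (k * lam * c * t ** (c - 1)
            + (1 - k) * b * t ** (b - 1) * beta * np.exp(beta * t ** b))

t = np.linspace(0.1, 10.0, 6)
print(np.round(bathtub_hazard(t), 4))   # high at the start (learning), lowest in the middle, rising at the end (wear out)
```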

While reliability theory is normally used to measure failures in mechanical components there are
parallels with the occurrence of failures/errors in operational processes (McConnell 2003a). In
new banking ventures, errors and losses occur frequently during initial start-up, due to factors
such as incomplete knowledge, inexperienced staff and unforeseen troubles. As staff climb
the "learning curve", processes settle down and errors drop off, but nevertheless may still occur
at a low level. After a time, processes gradually diverge from industry best practice and, unless
constantly renewed, become out of date and the frequency of loss events again begins to creep
up.

The possibility of a ‘learning curve’ effect raises serious difficulties for modelling Operational
Risk capital under Basel II because, for example, it clearly violates the ‘iid’ assumption of
‘stationarity’, i.e. that the frequency of events is “invariant under shifts of time” (Embrechts et al
2004). Most modelling of operational risk events assumes a purely random frequency of loss
events over time, for example assuming a Poisson distribution. The diagram above shows that this
holds only in the ‘mature’ period of the curve. Simplifying assumptions could be made if the
learning period was sufficiently short or the frequencies of events during learning and wear-out
were not much larger than during the mature period. But are such assumptions reasonable?

The introduction of new products and technologies into the financial system has resulted in some
spectacular losses, for example:
• Derivatives: one of the most spectacular cases of losses in new markets is that of the
Borough of Hammersmith and Fulham in London in the early 1990s. In this case, the
council was supported by the UK high court in refusing to pay over $600 million to
several banks on new ‘interest rate swaps’ derivatives contracts. These contracts were
declared null and void because they were deemed as being for the purpose of trading
[which the council was not permitted to do] as opposed to interest rate management
[which it could undertake]. This case was, however, a major driver in creating standards
for derivatives contracts, in particular those developed under the auspices of ISDA
(International Swap Dealers Association), and a real example of the industry learning
from its mistakes.
• Information Technology: Failures of IT projects are unfortunately frequent in banking,
but few are as spectacular as the case of the Westpac bank in Australia and its infamous
CS90 project. This ambitious project was designed to re-engineer all of the bank’s
internal processes to be able to react quickly to changing customer demand, but, like
many such projects, was abandoned due to late delivery and massive cost overruns.
Though not the only cause, this massive IT failure led to replacement of senior
management and a ‘near death’ experience for what was at the time Australia’s leading
bank (Carew 1997).

Remembering that Basel II is meant to be ‘forward looking’7, the possibility of a learning curve
effect poses a real dilemma for risk managers and regulators. If a firm is about to embark upon a
new venture, then capital estimates should take into account the increased likelihood of losses;
conversely, if an innovation has recently been put in place, then there is a likelihood, but not a
certainty, of a reduced loss frequency. But, of course, one can only detect the end of the learning
phase with hindsight, some time (in some situations, years) into the mature period.

Under Basel II, estimation of losses resulting from learning factors would be covered by an
analysis of so-called “Business environment and internal control factors” (Basel 2004, 676):
“In addition to using loss data, whether actual or scenario-based, a bank’s firm-wide risk
assessment methodology must capture key business environment and internal control
factors that can change its operational risk profile. These factors will make a bank’s risk
assessments more forward-looking, more directly reflect the quality of the bank’s control
and operating environments, help align capital assessments with risk management
objectives, and recognize both improvements and deterioration in operational risk
profiles in a more immediate fashion.”

In practice, for new innovations, such as the development of new derivative products, the
estimation of potential losses to a 99.9% confidence interval is pure sophistry. Research into
‘risk perception’ shows8, for example, that people will invariably overestimate the likelihood of
an event with which they have some familiarity rather than a completely alien one, and will
extrapolate from known situations to estimate an unknown one, invariably not making a large
enough adjustment, i.e. they will underestimate risks (Kahneman and Tversky 1979). Furthermore,
‘experts’ are over-confident in their ability to estimate accurately from small data samples. Nor
does using a number of experts, rather than one, to estimate risks necessarily lead to a better
estimation, as the well-known phenomenon of ‘groupthink’ can lead groups to reach completely
wrong, but agreed, conclusions.

7. Specifically, capital must be calculated for a "1 year holding period", i.e. next year.
8. For work that won them the Nobel Prize, Kahneman and Tversky showed that, rather than use the ‘expected utility’ rules of classical decision theory, people estimate risk using subjective ‘heuristics’ (or convenient ‘rules of thumb’).

Again the Basel II approach introduces a ‘moral hazard’, as business sponsors will, given the
potential for large profits, be loath to admit to senior management that there is potential
(however small) for large losses from any speculative venture.

Insurance and Basel II


The Basel II Accord recognises the mitigating impact of insurance for operational risks but limits
the recognition of insurance to 20% of the total operational risk charge calculated by an AMA
(Basel 2004, 677). In addition, to ensure prompt payment against claims when needed, Basel II
places not unreasonable restrictions on the types of policies and issuers that may be used for
capital-mitigating purposes. Interestingly, under the so-called Pillar 3 market disclosure standards
for AMA, Basel II requires that banks publish details of their purpose and use of insurance for
operational risk (Basel 2004, 678).

Unfortunately this ‘one size fits all’ approach to the use of insurance may have the perverse
effect of increasing risk rather than lowering it, since, at a certain point, it will discourage
managers from prudently purchasing insurance, preferring instead to self-insure with incomplete
knowledge.

There are many operational risks, such as those that relate to natural disasters, which appear to be
adequately covered by proven insurance techniques. The industry appears to have coped well
with catastrophic events such as 9/11 and Hurricane Katrina and is increasing its premia to
reflect its understanding of these risks. The question that must be asked is whether, for a well-
defined class of risk, a bank’s shareholders will be better served by maintaining capital,
estimated with limited knowledge, or by hedging those risks to insurers with better information.
The answer in most cases would be No!

This paper argues that those risks that can be covered adequately using conventional insurance
contracts, such as Damage to Physical Assets, should be taken out of the capital charge regime
and that the discipline of Pillar 3 market disclosure, augmented by audit scrutiny, be used to
ensure that insurable risks are in fact properly covered. Furthermore, recognising the vital nature
of robust Business Continuity Planning, regulators should strengthen their requirements for BCP
and if it can be shown that a firm has inadequate BCP plans regulators should take ‘prompt
corrective action’ which may include additional capital charges, among other measures.

Excess Capital
A thread running through the discussions above on the characteristics of large loss events is that
it is very difficult to estimate capital to a 99.9% confidence interval in anticipation of such large
scale loss events. In such circumstances and subject to regulatory scrutiny, risk managers will
be conservative and overestimate, rather than underestimate, the capital required. Given the lack
of data for robust statistical analysis, it is likely that banks will retain more capital than prudently
required (Currie 2005).

During the initial phase of Basel II, this appears to be borne out in practice. In a study of capital
calculated by large US banks, DeFontnouvelle et al (2004) found that, based on banks’ own
internal loss databases and the 99.9% confidence interval required under Basel II, operational
risk capital for the banks studied should range from 5% to 9% of Minimum Regulatory Capital
(MRC), i.e. a considerable reduction on the ratio of 12% initially proposed by the Basel
committee and a strong argument for applying for an AMA approach under Basel II. However,
the researchers also observed that the banks concerned were actually holding economic capital
for operational risk between 12% and 15% of MRC. DeFontnouvelle et al (2004) postulated that
the differences were due mainly to the qualitative adjustments to capital that must be considered
for Basel II, in particular “scenario analysis”. In short, they believed that up to half of the
operational risk capital estimated by some banks might result from subjective assessments of the
impact of factors that are not directly observable. Such a discrepancy raises serious questions
about an overly obsessive search for precision in quantitative modelling of operational risks.

Holding excess capital will create inefficiencies and unintended consequences in the financial
system. Currie (2005) points out “unless a substantive test of the strategic effects of operational
risk requirements on bank behaviour and attitudes is undertaken, adverse side effects on financial
efficiency and stability may ensue. For instance the effect on lending from over or under
providing capital for financial institutions may lead to a credit crunch”. Unfortunately there is no
way of knowing how much, if any, excess capital will be drained from the system nor how the
capital regime will impact banks’ behaviour. It is only as the Basel II rules are applied in
practice in years to come will their true impact become apparent – if the requirements prove
impractical, a valuable opportunity to improve risk management will have been lost.

Pillar 1 or Pillars 2 and 3?


Implementation of Basel II is organised under three so-called ‘Pillars’:
1. Minimum Capital Requirements (MRC): calculation of capital required to cover the full
gamut of risks run by the firm. It should be noted that Pillar 1 covers not only operational
risk but also much more substantial Credit and Market risks and in most respects operational
risk, being a new concept under Basel II, is much less well understood that these other
categories of risk;
2. Supervisory Review: the active involvement of regulators in ensuring not only that banks
have adequate capital but also “encourage banks to develop and use better risk management
techniques in monitoring and managing their risks” (Basel 2004, 720); and
3. Market Discipline: disclosure to the market of sufficient and detailed information to “allow
market participants to assess key pieces of information on the scope of application, capital,
risk exposures, risk assessment processes, and hence the capital adequacy of the institution”
(Basel 2004, 809).

While Pillar 1 (Minimum Capital Requirements) is critical to “the soundness and stability of the
international banking system” (especially as regards Credit risk), the Basel Committee notes:
“It is critical that the minimum capital requirements of the first pillar be accompanied by
a robust implementation of the second, including efforts by banks to assess their capital
adequacy and by supervisors to review such assessments. In addition, the disclosures
provided under the third pillar … will be essential in ensuring that market discipline is an
effective complement to the other two pillars” (Basel 2004, 11).

The Accord recognises that not all risks can be addressed fully under Pillar 1 and specifically
identifies three areas that are “particularly suited to treatment under Pillar 2” (Basel 2004, 724):
1. “Risks considered under Pillar 1 that are not fully captured by the Pillar 1 process
(e.g. credit concentration risk);
2. Those factors not taken into account by the Pillar 1 process (e.g. interest rate risk in
the banking book, business and strategic risk); and
3. Factors external to the bank (e.g. business cycle effects).”
It should be noted that the examples given in Basel II relate specifically to Credit risk, but this
paper argues that they could be equally applicable to the very large operational risk losses
described above.

As regards Pillar 3, local banking regulators have considerable leeway to encourage compliance
(Basel 2004, 811):
“Under safety and soundness grounds, supervisors could require banks to disclose
information. Alternatively, supervisors have the authority to require banks to provide
information in regulatory reports. Some supervisors could make some or all of the
information in these reports publicly available. Further, there are a number of existing
mechanisms by which supervisors may enforce requirements. These vary from country to
country and range from “moral suasion” through dialogue with the bank’s management
(in order to change the latter’s behaviour), to reprimands or financial penalties. The
nature of the exact measures used will depend on the legal powers of the supervisor and
the seriousness of the disclosure deficiency.”

The question then becomes one of recognising which risks are best handled using Pillar 1,
predominantly quantitative, techniques and admitting that some risks are better handled using
supervisory review of a bank’s behaviour and market discipline. In effect, regulators have
recognised this dilemma by issuing additional guidelines in a number of areas:
• Business Continuity (Joint Forum 2005b and APRA 2005b);
• Outsourcing (Joint Forum 2005a); and
• Corporate Governance (Basel 2006).

These regulations go some way towards addressing the issues raised in this paper but do not address the
difficult problem of calculating capital to cover potential losses in these areas. Using BCP as an
example, two conversations have to take place between banks and their regulators: first, how
good/compliant is a bank’s BCP regime; and then, how much capital must be set aside to cover
potential failures in BCP? The second question is little more than a ‘finger in the air’ exercise,
looking at the potential for natural disasters or terrorist attacks over and above those already
catered for - a task difficult even for climate and security experts.

This paper proposes the following approach for calculating capital charges for such risks:
1. Regulators publish detailed requirements for mitigating such risks, e.g. for BCP planning
or outsourcing agreements;
2. Regulators, or independent auditors, evaluate each bank on its compliance with the
published requirements;
3. Compliance with the requirements is turned into a quantitative rating scale, e.g. 0-10, and
detailed criteria for evaluation on the scale are published (after industry and regulatory
discussions);
4. Capital calculated by firms under Pillar 1 rules is multiplied by a factor that is related to
the rating achieved; and finally
5. The rating and resulting capital charges are published to the market.

It should be noted that the most onerous activities in this approach (i.e. steps 1 & 2) are normally
undertaken by regulators anyway as part of their supervisory reviews, albeit using internal
documentation. The remaining steps are simple and, once agreed, ratings can be calculated
consistently and compared between firms.

For example, Basel (2006) identifies 8 ‘key principles’ for sound corporate governance in banks,
including:
“Principle 6 - The board should ensure that compensation policies and practices are
consistent with the bank’s corporate culture, long-term objectives and strategy, and
control environment.”
It should be possible for HR and corporate governance experts to develop a set of evaluation
criteria that would rate an institution’s compliance with this principle on a scale of 0-10.
When all eight principles were evaluated and the ratings added together, this would result in an
overall rating in the range 0-80, which could then be translated into a capital multiplication
factor using a published table or algorithm9. Obviously any clear deficiency, such as a rating of
zero for any principle, would be the subject of in-depth regulatory discussions rather than merely
a capital charge.
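
As a purely illustrative sketch of steps 3 and 4 of the approach above, the fragment below sums a
set of hypothetical principle ratings (each on a 0-10 scale) and looks up a capital multiplication
factor from an invented banded table; in practice the ratings, any weightings and the mapping itself
would be set by regulators after industry consultation, as noted in footnote 9.

```python
# Illustrative only: hypothetical ratings for the 8 governance principles and an
# invented mapping from the overall score (0-80) to a Pillar 1 capital multiplier.

def capital_multiplier(ratings, bands=((60, 1.00), (40, 1.15), (20, 1.35), (0, 1.60))):
    """Sum the per-principle ratings and look up a multiplier from a 'published' table."""
    if len(ratings) != 8 or not all(0 <= r <= 10 for r in ratings):
        raise ValueError("Expecting eight ratings, each on a 0-10 scale")
    total = sum(ratings)                      # overall rating in the range 0-80
    for floor, factor in bands:               # first (highest) band whose floor is met
        if total >= floor:
            return total, factor
    return total, bands[-1][1]

ratings = [8, 7, 9, 6, 8, 7, 8, 9]            # hypothetical assessment of the 8 principles
total, factor = capital_multiplier(ratings)
pillar1_capital = 250.0                       # hypothetical Pillar 1 charge, in $ millions
print(total, factor, round(pillar1_capital * factor, 1))
```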

The advantage of such an approach is that firms can be incentivised to do the right thing –
i.e. to improve their risk management by achieving better ratings in key risk management areas.
Improvements in risk management over time should be reflected in improved ratings, lower
capital and increased market confidence.

Additional market discipline could be provided through Insurance. In theory, insurance premia
should reflect the ratings achieved by a bank in a particular area – better rating equals lower risk
and a lower premium. In a particular case, if an insurance assessor does not agree with the
published rating then that should raise alarms with management and regulators. If a premium is
agreed based on the published rating there is then little argument for not allowing 100% of the
mitigating effect to be taken into account in calculating a Basel II capital charge.

While templates exist for evaluating aspects of risk management on an industry basis, such as
the Enterprise Risk Management Framework proposed by COSO (2004), further research is needed
to identify risks that are difficult to evaluate but can be managed by Pillar 2 and 3 mechanisms.

9
It would not be difficult, if agreed, to rank principles in importance and apply different summation factors nor
would it be difficult to calculate a multiplication factor using a statistical distribution (such as Normal) to reflect the
difficulty and importance of improving ratings to higher levels.

In conclusion, it is worth noting the caution of Embrechts et al (2003a) in applying overly
quantitative approaches:
“For these reasons, we argue that Pillar 1 in the operational risk management framework
should not be overemphasized [author’s emphasis]. For repetitive and stationary losses
the standard actuarial methods and their refinements can be employed to derive capital
charges. The crux, however, pertains the non-repetitive and non-stationary case. And it is
exactly the losses of the latter category, which jeopardize the existence of financial
institutions. VaR estimates, even though complemented by stress testing and scenario
analysis, can never be viewed as a “stand-alone” risk management tool. Keeping in mind
that most serious operational risk losses cannot be judged as mere accidents, it
becomes obvious that the only way to gain control over operational risk is to improve
the quality of control over the possible sources of huge operational losses. It is
exactly here that Pillar 2, and to a less extent Pillar 3, becomes extremely important.”

Further Research
As noted by the Basel Committee, there are extensive opportunities for research into the topic of
Operational Risk Management, covering both quantitative and qualitative methodologies (Basel
2004). For modelling large losses, there are several potential areas of further research, in
particular:
• Empirical research into, and case studies on, instances of large operational losses, the
conditions that created them and the lessons that can be learned from them.
• Research into the theories underlying Loss Distribution Approaches (LDA) under Basel
II and how banks might use those theories in complying with Basel II.
• Research into Enterprise Risk Management models and their use in Basel II.
• Research into the potential strategic effects of Basel II, the possibility of ‘unintended
consequences’ and actions that might reduce the adverse impact of such consequences.

Summary
There is a growing body of research into quantitative methods for calculating capital charges to
cover Operational Risk under Basel II. Unfortunately, there is little consensus on the best
methods to employ. Potentially attractive techniques, such as EVT, do not appear to be directly
applicable to satisfying the strict quantitative standards set by Basel. On one level this is because
there is just not enough good data, but there may be more fundamental problems.

After describing Basel II requirements and its classification of ‘loss events’, the paper gave an
overview of EVT and its potential use in calculating Operational Risk capital charges. Using
disguised data from a large bank the paper illustrated a typical distribution of losses due to
operational risk loss events and discussed the issue of whether unusually large losses may be
‘extremes’ (that can be modelled statistically) or ‘outliers’ (that should be removed from the
analysis).

Taking a step back, the paper asked the question whether extreme losses satisfy or violate the
model assumptions underlying quantitative approaches to Basel II, including EVT.

To test these assumptions, the paper then considered a number of the largest recorded
Operational Risk losses, such as Barings, and concluded that such events are too complex to be
modelled by the ‘one size fits all’ classification of loss event types proposed in Basel II and,
moreover, some of these large events appear to violate the necessary ‘iid’ conditions for
statistical analysis using EVT techniques. Furthermore, the paper argues that attempting to
calculate capital to an unrealistic precision (i.e. a 99.9% confidence level) is not only
technically suspect, given the paucity of data, but also creates a ‘moral hazard’, encouraging
managers to claim compliance while ignoring serious underlying ethical issues.

The paper concludes by arguing that, where accurate modelling is not possible, certain risks
should be managed using qualitative approaches, specifically through the Pillar 2 and 3
mechanisms of Basel II. An approach to calculating capital in these circumstances is outlined.

Despite several years of industry consultation, robust methods for quantitative analysis of
Operational Risk are still a long way from being widely adopted. The paper suggests that the
way forward is to face this dilemma and break the problem down – measure what we can
measure and manage what we can’t measure, as best we can, until we understand the problem
better.


Appendix A – Extreme Value Theory

This appendix does not attempt to provide a detailed description of the statistical theories that
underlie Extreme Value Theory, which are comprehensively covered in Embrechts et al (2003).
It does however provide a brief introduction to the elements of EVT that are commonly applied
to the measurement of Operational Risk capital for Basel II purposes. It should be noted that
EVT is used to model ‘loss severity’ and that other techniques are used to model ‘loss
frequency’, in particular the use of the Poisson distribution.
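
To make the severity/frequency split concrete, the sketch below simulates an aggregate annual loss
distribution in the usual LDA style: a Poisson number of loss events per year, each with a
heavy-tailed severity. All parameter values are invented for illustration and are not taken from the
paper or from any real bank data.

```python
# A minimal sketch of an LDA-style compound model: annual loss = sum of a Poisson
# number of individual losses, each drawn from a heavy-tailed severity distribution.
# All parameter values are purely illustrative.
import numpy as np

rng = np.random.default_rng(42)

def simulate_annual_losses(n_years=100_000, freq_lambda=25, sev_shape=2.0, sev_scale=0.5):
    """Monte Carlo of aggregate annual losses (in $ millions) under a compound Poisson model."""
    counts = rng.poisson(freq_lambda, size=n_years)                    # loss frequency per year
    totals = np.empty(n_years)
    for year, k in enumerate(counts):
        severities = (rng.pareto(sev_shape, size=k) + 1) * sev_scale   # heavy-tailed severities
        totals[year] = severities.sum()
    return totals

annual = simulate_annual_losses()
print("99.9th percentile of aggregate annual loss:", round(np.quantile(annual, 0.999), 1))
```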

Extreme Value Theory (EVT) has its roots in the physical sciences, such as in the study of
“extreme” floods, and its statistical methods have been applied successfully to analysis of
insurance losses, in particular the work of Embrechts et al (2003). EVT theories have also been
applied to other areas of finance, such as the investigation of large falls/rises in stock prices.

The use of EVT to measure Operational Risk is based on a theorem in statistics10, similar to the
‘Central Limit Theorem’, which states that, for a certain class of distributions, the limiting
distribution of ‘excess’ losses, i.e. those values above a selected threshold, is the Generalised
Pareto Distribution or GPD. Figure 1.A above shows an example of a ‘threshold’ of $5 million,
with all points above that line being considered ‘peaks’ or one set of ‘excess losses’.

It should be noted that this so-called ‘Peaks Over Threshold’, or POT, method is only one way
of considering operational losses; Cruz (2002) describes another method that considers
extremes over time periods (the so-called Block Maxima method), for which the limiting distribution
is the GEV (Generalised Extreme Value) distribution. The POT method, however, is most often
referenced with regard to operational losses (Embrechts et al 2004).

The theorems behind EVT state that if F(X) is an unknown distribution of observations that are
‘independent and identically distributed’, or ‘iid’, and u is a ‘high threshold’, then the limiting
distribution function of the excess losses (X − u) belongs to the class of GPD, usually expressed as
a two-parameter distribution (Medova and Kyriacou 2001):

Gξ,β(y) = 1 − (1 + ξy/β)^(−1/ξ)   if ξ ≠ 0, or          (Equation A.1)
Gξ,β(y) = 1 − exp(−y/β)           if ξ = 0

The beauty of the POT method is that, given accurate estimates of the parameters, it is possible
to use a simple formula to estimate the VAR (Value At Risk), or the severity of a ‘loss event’,
for a particular ‘quantile’ of this distribution, such as 99.9% (McNeil and Saladin 1997):

VARp = u + (β/ξ) × [((n/Nu)(1 − p))^(−ξ) − 1]          (Equation A.2)

where
p is the desired quantile, e.g. 99.9%;
n is the total number of observed operational losses; and
Nu is the number of observations exceeding the threshold u.
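
As a minimal sketch of how Equation A.2 can be applied in practice (assuming the ‘iid’ conditions
discussed below actually hold), the fragment below fits the two GPD parameters to the excesses over
a chosen threshold and then evaluates the VAR formula; the simulated losses, the $5 million
threshold and the quantile are purely illustrative.

```python
# Peaks-Over-Threshold sketch: fit a GPD to the excesses over threshold u, then apply
# Equation A.2. Data, threshold and quantile are illustrative; assumes xi != 0.
import numpy as np
from scipy import stats

def pot_var(losses, u, p=0.999):
    """Estimate VaR at quantile p by fitting a GPD to the excesses over threshold u."""
    losses = np.asarray(losses, dtype=float)
    excesses = losses[losses > u] - u                    # the 'peaks' above the threshold
    n, n_u = len(losses), len(excesses)
    xi, _, beta = stats.genpareto.fit(excesses, floc=0)  # shape (xi) and scale (beta)
    # Equation A.2: VAR_p = u + (beta/xi) * [((n/N_u)(1 - p))**(-xi) - 1]
    return u + (beta / xi) * (((n / n_u) * (1 - p)) ** (-xi) - 1.0)

rng = np.random.default_rng(0)
simulated_losses = (rng.pareto(2.5, size=5_000) + 1) * 1.0   # heavy-tailed toy losses, $ millions
print(round(pot_var(simulated_losses, u=5.0, p=0.999), 1))
```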

10
Sometimes known as the Pickands, Balkema & de Haan (PBdH) limit theorem

The values of the two parameters of the GPD distribution, i.e. ξ (Xi), the ‘shape’ parameter, and
β (Beta), the ‘scale’ parameter, depend on the underlying distribution F(X) and especially on the
choice of the threshold value u.

Herein lies a problem! Several questions are raised immediately:
• What should the value of the threshold be?
• How can the parameters ξ and β be estimated accurately?
• More importantly, does the underlying distribution of operational losses satisfy all of the
conditions for IID?

Making the (quite considerable) assumption that iid conditions hold, accurate estimates of the
shape and scale parameters depend on having sufficient data points in the ‘tail’, i.e. above the
threshold. One can see from Equation A.2 above that the VAR result is particularly sensitive to
the ‘shape’ parameter, because it enters as the power of −ξ, and estimation errors can produce
capital figures that are over- or underestimated. Likewise, any errors will be magnified as the
desired quantile (p) is increased – what may be an acceptable estimation error at 95% may be
unacceptable at 99.9%.
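
The sensitivity described above can be seen directly by plugging a few values into Equation A.2;
the threshold, scale parameter and exceedance counts below are invented solely to illustrate how
the estimate moves with ξ and with the quantile p (the formula only applies for quantiles beyond
the threshold, i.e. p > 1 − Nu/n).

```python
# Illustrative sensitivity of the Equation A.2 VaR estimate to the shape parameter xi
# and the quantile p; u, beta, n and N_u are invented for illustration.
u, beta, n, n_u = 5.0, 2.0, 10_000, 200       # $5m threshold, 10,000 losses, 200 exceedances

def pot_var(xi, p):
    """Equation A.2 for a given shape parameter xi and quantile p (xi != 0)."""
    return u + (beta / xi) * (((n / n_u) * (1 - p)) ** (-xi) - 1.0)

for p in (0.99, 0.995, 0.999):
    print(p, [round(pot_var(xi, p), 1) for xi in (0.7, 0.8, 0.9)])
```

Even for these made-up inputs, a modest change in ξ moves the 99.9% estimate far more than it
moves the 99% estimate, which is the point made in the paragraph above.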

Tools, techniques and software exist to estimate these parameters to a reasonable confidence
given sufficient data. When estimating the VAR, the choice of an appropriate threshold value is
not simple, often relying on visual interpretation of graphs of the data rather than strictly
quantitative criteria (Cruz 2002). However, researchers agree that very large data sets are
required to provide accurate estimates of extreme quantiles, such as the 99.9% required by Basel
II. For example, Heyde and Kou (2003) considered the general problem of discriminating between
‘heavy’ and ‘light’ tailed distributions11 and concluded that theoretically “the order of 50,000
observations would be necessary even at 99.9% confidence level”.
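
One graph that is commonly used for this visual choice of threshold is the sample mean excess plot
(an assumption here, not a tool prescribed by the paper): an approximately linear region of the
plot suggests thresholds above which a GPD fit is plausible. A minimal sketch on simulated data:

```python
# Sample mean excess function, one of the graphical tools often used to pick the
# POT threshold u; the simulated losses are illustrative only.
import numpy as np
import matplotlib.pyplot as plt

def mean_excess(losses, thresholds):
    """Average exceedance over each candidate threshold (NaN where nothing exceeds it)."""
    losses = np.asarray(losses, dtype=float)
    return np.array([
        (losses[losses > u] - u).mean() if (losses > u).any() else np.nan
        for u in thresholds
    ])

rng = np.random.default_rng(1)
losses = (rng.pareto(2.0, size=3_000) + 1) * 2.0          # heavy-tailed toy losses, $ millions
thresholds = np.linspace(losses.min(), np.quantile(losses, 0.98), 100)

plt.plot(thresholds, mean_excess(losses, thresholds), marker=".")
plt.xlabel("Threshold u ($m)")
plt.ylabel("Mean excess over u ($m)")
plt.title("Sample mean excess plot (illustrative data)")
plt.show()
```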

While there is an enormous body of empirical research into the use of EVT in insurance12
(Embrechts et al 2003), there is much less evidence of its applicability to Operational Risk under
Basel II conditions. This is hardly surprising, given that banks have only been collecting
operational risk data in a consistent fashion for a few years, from when the Basel II
proposals were being finalised in the early 2000s.

Empirical research into the potential use of EVT in measuring Operational Risk has been
concentrated on two major data sets: (a) ‘pooled’ data from multiple banks usually collected as a
part of regulatory surveys13; and (b) ‘public’ data collected by commercial companies from
public sources such as the media.

11
A heavy tailed (or leptokurtic) distribution is one where the “weight” of data in the tails (e.g. above 95%) is
greater than would be expected in a Normal distribution.
12
Actuarial risk analysis is based on theories developed in the early 20th century by, among others, Lundberg and
Cramer.
13
As part of the ‘calibration’ of Basel II, the Basel Committee undertook a number of so-called Quantitative Impact
Studies (QIS) which included a Loss Data Collection Exercise (LDCE), collecting operational risk data from banks
through their local supervisors.

It should be noted that a number of general concerns have been raised about the quality of these
databases, in particular:
• As Operational Risk Management is an evolving discipline, it is unlikely that data
collection methods for ‘pooled’ data would be consistent across multiple business lines in
multiple banks of varying sizes and degrees of sophistication (Moscadelli 2004). In
addition, because there is a ‘de minimis’ reporting threshold, pooled databases are
truncated at the lower end of the distribution, i.e. the full distribution is not available for
analysis.
• On the other hand, while a ‘public’ database is usually constructed on a consistent
basis (by a commercial firm), it suffers from the fact that only reported events
are recorded and hence has a ‘size bias’, i.e. it tends to contain the large losses that gain
public attention (DeFontnouvelle et al 2003).

Table A.1 below summarises the results of some of the published empirical research into
quantitative models for measuring Operational Risk. In addition, there are many theoretical
papers that assume heavy-tailed distributions for operational risk data but use techniques that
either expand on EVT, such as DiClemente and Romano (2003) or describe statistical techniques
for minimizing biases in loss event data, such as Chernobai et al (2004).

As Table A.1 shows, the results are far from conclusive. At best the studies conclude that, only
after significant data manipulation to reduce biases, EVT techniques may be useful in calculating
capital charges for certain Loss Event Types. At worst, serious questions are raised about the
cost-effectiveness of the quantitative approaches required to gain accreditation for an AMA
under Basel II.

It is not the purpose of this paper to fully review the research on the application of EVT to
operational risk measurement, but it is important to note that the theories of EVT are themselves
sound and, as a recent industry paper noted, “challenges in achieving a 99.9% should not
necessarily be viewed as the result of a shortcoming in any one particular model, but are partly
due to the nature of operational risk” (FSA 2005).

A final comment on the state of the art is best left to the experts (Embrechts et al 2004):
“The theory … is based on specific conditions and can be applied in cases where testing
has shown [author’s emphasis] that these underlying assumptions are indeed fulfilled.
The ongoing discussions around Basel II will show at which level the tools presented will
become useful. However, we strongly doubt that a full operational risk capital charge can
be based solely on statistical modelling.”

Table A.1 – Summary of Empirical Research into EVT and Operational Risk

Readers should note that the ‘Summary of Major Findings’ column in Table A.1 is the author’s summation of researchers’ conclusions, which are
usually hedged with many qualifications and caveats about the quality of the data. The author apologises for any misplaced emphasis and urges
readers to reference the original research to interpret the conclusions in their full context.

Moscadelli (2004) – Regulatory Calibration
Data Source: Pooled data from responses to LDCE 2002 from 89 banks; assumed data analogous to 1 medium-sized internationally active bank over an 89-year period.
Summary of Major Findings:
1. Operational Risk is significant in banks
2. POT is a ‘suitable and consistent’ statistical tool for analysis of ‘heavy tails’
3. LDCE dataset satisfies ‘iid’ conditions for GPD
4. GPD provides a “good estimate” of aggregate loss severity but the distribution varies by business line
5. Capital results are highly sensitive to “largest observed losses” and “very extreme quantile estimates”, e.g. 99.9%
6. Observed differences in ‘riskiness’ of different Business Lines
7. No analyses by Basel II Loss Event Type

DeFontnouvelle et al (2003) – Regulatory Calibration
Data Source: Data from two large ‘public’ databases, selecting events related to banking.
Summary of Major Findings:
1. Operational Risk is significant in banks
2. There is a large size ‘reporting bias’ in public datasets, requiring special treatment before analysis to reduce overestimation of capital
3. For these databases, GPD is an “appropriate model” to represent tail severity for all business lines
4. Wide differences in ‘riskiness’ of different Business Lines
5. Differences in ‘event type’ classification across databases
6. Insufficient data for conclusive analysis by Basel II Loss Event Type
7. Supplementing ‘internal’ data with ‘external’ data can significantly improve operational risk models

Embrechts et al (2004) – Theory Development (EVT)
Data Source: 417 losses from undisclosed real sources.
Summary of Major Findings:
1. Operational Risk data exhibits extremes
2. EVT, and other insurance, techniques may be applied successfully to events that lend themselves to quantification
3. Evidence of ‘non-stationarity’, i.e. losses spaced irregularly over time

Chernobai & Rachev (2006) – Theory Development (Robust Statistics)
Data Source: European ‘public’ database; selected 233 published events related to ‘External’ loss type.
Summary of Major Findings:
1. Operational Risk data exhibits extreme values
2. 5% of data is ‘contaminated’, i.e. there are ‘outliers’ that do not fit the model of the bulk of data
3. Outliers account for over 70% of VAR at the 99% level

Baud et al (2002) & Baud et al (2002a) – Practical Calculation of Basel II Capital Charges
Data Source: Data generated by Monte Carlo simulation based on anonymous internal bank and external public datasets.
Summary of Major Findings:
1. There is a large size ‘reporting bias’ in public datasets
2. Capital will be significantly overestimated unless “threshold and truncation biases” are addressed

DeFontnouvelle et al (2004) – Regulatory Calibration
Data Source: Loss data for six ‘internationally active’ banks covering 1 year, extracted from LDCE 2002.
Summary of Major Findings:
1. Loss Data for most business lines and event types may be well modelled by a Pareto type distribution
2. Ranking of severity is consistent across institutions
3. Losses for certain business lines and event types are “very heavy-tailed” [14]
4. Could not “formally reject the hypotheses that [data] are drawn from a light-tailed distribution such as the lognormal”

Chernobai et al (2005) – Practical Calculation of Basel II Capital Charges
Data Source: European ‘public’ database; selected 233 published events related to ‘External’ loss type.
Summary of Major Findings:
1. Ignoring the ‘minimum threshold’ in collected data leads to “severe biases” in estimations, in particular underestimating VAR
2. Loss event types are best fit by Weibull and Log Weibull distributions
3. Tails are fit by heavy tailed distributions, e.g. Pareto, Burr and Stable Pareto

Rachev et al (2005) – Theory Development
Data Source: Pooled data from responses to LDCE 2002 from 89 banks, plus data from a European ‘public’ database.
Summary of Major Findings:
1. Loss distributions are “highly right skewed and have a very heavy-tail”
2. Distribution of loss events across business lines and loss event types is “non uniform”
3. Severity of public data best modelled by ‘Stable Pareto’ distributions
4. Assumption that frequency follows a simple Poisson is “unrealistic” and a “time varying, non-homogeneous Poisson process” is a superior fit

Chapelle et al (2004) – Regulatory Calibration
Data Source: Data from a large European bank, with added external data from a ‘public’ database; selected 3,000 data points covering 2 Business Lines and 2 Event Types [15].
Summary of Major Findings:
1. Internal data is best fit by a combination of Lognormal below a threshold and GPD above it
2. Relationship between loss severity and firm size is non-linear [16]
3. Public data, scaled for size, is best fit by a Lognormal distribution
4. Confidence interval for the 99.9% estimate can be as high as +/- 20%
5. Capital charge can be considerably reduced by taking ‘dependence’ (or correlation) into account
6. Without considering dependence, a conservative AMA approach will yield results greater than a Standardised Approach, raising questions about the cost-effectiveness of AMA accreditation

[14] The researchers warn that while this finding is “intuitively appealing, overly simplistic approaches may yield implausible estimates of economic capital”.
[15] The researchers noted that the study was restricted to a 2x2 matrix because there was too little data to analyse for many combinations of Business Line and Event Type.
[16] In this study, loss severity is scaled by the relative size of the external entity as against the comparable internal entity, raised to the power of a regression factor.

References
Allen P. (2005) “BCP: Expect the Unexpected” Wall Street and Technology. September 23
APRA (2005) “Guidance Note AGN 115.2 (draft) - Advanced Measurement Approaches to
Operational Risk: Quantitative Standards”, Australian Prudential Regulation Authority
APRA (2005b) “Prudential Standard APS 232 - Business Continuity Management”, Australian
Prudential Regulation Authority
Basel (2001) “Working Paper on the Regulatory Treatment of Operational Risk”, Basel
Committee on Banking Supervision. September
Basel (2004) “International Convergence of Capital Measurement and Capital Standards - A
Revised Framework”, Basel Committee on Banking Supervision. June
Basel (2006) “Enhancing Corporate Governance for Banking Organisations”, Basel Committee
on Banking Supervision. February
Baud N., Frachot A. and Roncalli T. (2002) “How to Avoid Over-estimating Capital Charge for
Operational Risk?” Credit Lyonnais, France
Baud N., Frachot A. and Roncalli T. (2002a) “Internal data, External data and Consortium Data
for Operational Risk Measurement – How to pool data properly?” Credit Lyonnais, France
Carew E. (1997) Westpac - The Bank that broke the Bank, Doubleday, Sydney
Chapelle A., Crama Y., Hübner G. & Peters J.P. (2004) “Basel II and Operational Risk:
Implications for risk measurement and management in the financial sector” National Bank of
Belgium
Chernobai, A, Menn C., Truck S. & Rachev S.T. (2004) “A Note On The Estimation Of The
Frequency And Severity Distribution Of Operational Losses” Technical Report, University
of California, Santa Barbara
Chernobai, A, Menn C., Rachev S.T. & Truck S. (2005) “Estimation of Operational Value-at-
Risk in the Presence of Minimum Collection Thresholds” Technical Report, University of
California, Santa Barbara
Chernobai, A., & Rachev, S.T. (2006) “Applying Robust Methods to Operational Risk
Modeling” Tech. rep., University of California Santa Barbara.
Coleman R. (2002) “Modelling Extremes” International Statistical Workshop, Seoul University
June.
COSO (2004) “Enterprise Risk Management – Integrated Framework” The Committee of
Sponsoring Organizations of the Treadway Commission (COSO) http://www.coso.org
Cruz M. G. (2002) Modelling, Measuring and Hedging Operational Risk, Chichester: Wiley
Currie, C.V. (2005) “A test of the Strategic Effect of Basel II risk requirements on Banks”
University of Technology Sydney: Business - Working Paper 143.
DeFontnouvelle P., DeJesus-Rueff V., Jordan J. & Rosengren E. (2003) “Using Loss data to
quantify Operational Risk”, Federal Reserve, Boston
DeFontnouvelle P., Rosengren E. & Jordan J. (2004) “Implications of Alternative Operational
Risk Modeling Techniques”, Federal Reserve, Boston
Dhillon B. S. (1981) Engineering Reliability, Wiley
DiClemente A. and Romano C. (2003) “A Copula-Extreme Value Theory Approach for
Modelling Operational Risk” University of Rome
Diebold F.X. et al. (1998) “Pitfalls and Opportunities in the Use of Extreme Value Theory in
Risk Management”, Wharton
Ebnother S., Vanini P., McNeil A., and Antolinez-Fehr P. (2001) “Modelling Operational Risk”
ETH Zurich
Embrechts P., Kaufmann R. & Samorodnitsky G. (2004) “Ruin Theory Revisited: Stochastic
Models for Operational Risk”, ETH Zurich
Embrechts P., Kluppelberg C. & Mikosch T. (2003) Modelling Extremal Events for Insurance
and Finance, Berlin: Springer Fourth Printing
Embrechts P., Furrer H. & Kaufmann R. (2003a) “Quantifying Regulatory Capital for
Operational Risk”, ETH Zurich
Forbes (2004) “Wall Street Fine Tracker” Forbes Magazine 15th July 2004
FSA (2005) “AMA Soundness Standards”, AMA Quantitative Experts Group, Financial
Services Authority, London
Heyde C.C. and Kou S.G. (2003) “On the controversy over tailweight of distributions”,
Operations Research Letters, Elsevier
Joint Forum (2005a) “Outsourcing in Financial Services”, Joint Forum published by Basel
Committee on Banking Supervision. February
Joint Forum (2005b) “High Level Principles of Business Continuity Planning”, Joint Forum
published by Basel Committee on Banking Supervision. December
Junger S. (1998) The Perfect Storm: A True Story of Men Against the Sea Norton, New York
Kahneman D. and Tversky A. (1979) "Prospect Theory: An analysis of decisions under risk"
Econometrica Vol. 47 pp 262-291
Mc Connell P. J. (1998) “Barings: Development of a Disaster”, International Journal of Project
and Business Risk, Vol. 2 Issue 1 pp. 59-74.
Mc Connell P. J. (2003) “AIB/Allfirst – Development of another Disaster”, Henley Working
Paper Series, Henley Management College
Mc Connell P. J. (2003a) “The Use of Reliability Theory in measuring Operational Risk”, in
Advances in Operational Risk – Revised Edition (Ed. S. Jenkins) Risk Books, London
Mc Connell P. J. (2005) “NAB – Learning from Disaster”, Henley Working Paper Series,
Henley Management College
Mc Connell P. J. (2005a) “Banks and Avian Flu – Planning for a Possible Pandemic”, e-Journal
of Operational Risk October 2005 http://www.continuitycentral.com
McNeil, A. J. and Saladin, T. (1997). “The Peaks over Thresholds Method for Estimating High
Quantiles of Loss Distributions” Proceedings of XXVIIth International Astin Colloquium,
Cairns, Australia, pp. 23–43.
Medova, E. A. & Kyriacou M. N. (2001) “Extremes in Operational Risk Management” Centre
for Financial Research, Cambridge
Moscadelli M (2004) “The modeling of operational risk: experience with the analysis of the data
collected by the Basel Committee”, Bank of Italy WP 517
NAB (2004) Annual Financial Report 2004, Melbourne: National Australia Bank
Rachev S.T. Chernobai, A, & Menn C., (2005) “A Note On The Estimation Of The Frequency
And Severity Distribution Of Operational Losses” Technical Report, University of
California, Santa Barbara

SEC (2003) “Ten of Nation's Top Investment Firms Settle Enforcement Actions Involving
Conflicts of Interest Between Research and Investment Banking”, Press Release April 28th,
Securities and Exchange Commission, Washington D.C.
Sheedy, E. (1999) “Applying an Agency Framework to Operational Risk Management” Applied
Finance Centre, Macquarie University. Working Paper 22
Vose D. (1996) Quantitative Risk Analysis – A guide to Monte Carlo Simulation Modelling,
Chichester: Wiley
