
Explaining the Rationale Behind the Assumptions

Used in the Measurement of VaR


Michael K. Ong

This appeared as Chapter 1 of the book,


Risk Management for Financial Institutions,
Published by RISK Books (London) 1997


Prologue
It is coming.
In the U.S., January 1, 1998 is the date by which internationally active banks must comply with
the risk-based capital standards issued by the Basle Committee on Banking Supervision. The
supervisory framework for market risks, defined as the risk of losses in on- and off-balance sheet
positions arising from movements in market prices, requires that banks engaging in these
activities use internal models to measure these potential losses.
The objective in introducing this significant amendment to the Capital Accord of July 1988 is "to
provide an explicit capital cushion for the price risks to which banks are exposed, particularly
those arising from their trading activities. The discipline the capital requirement imposes is
seen as an important further step in strengthening the soundness and stability of the international
banking system and of financial markets generally."
What have banks really been preparing for all this time?
In my role as a risk manager, I have been asked countless times by regulators, fellow
academics, senior management, and colleagues on the Street to explain what it is that the
internal models -- collectively known by the generic term VaR -- are supposed to measure and
whether or not it is feasible. Can VaR really deliver what it is touted to be? Having delivered
several conference talks on the subject, I have finally decided to put some of my thoughts down
in writing. Here is the story of VaR -- its assumptions, rationale, foibles, and my own personal
reflections in search of truth.


Implementing Variance-Covariance Models


The simplest Value-at-Risk framework, also known as VaR, is based on a so-called analytic or
portfolio variance approach. This approach closely resembles the Markowitz framework on the
portfolio theory of risk and return. In the original Markowitz formulation, the concept of
portfolio risk is associated with the observed dispersion of the portfolio's return around its mean
or average value. Risk, therefore, is a quantifiable entity assumed to be completely
encapsulated by the calculated portfolio variance, a measure of the portfolio return's dispersion
or deviation around the mean. Consequently, the square root of the portfolio variance is the
portfolio standard deviation -- the dispersion number itself.
Consider a portfolio containing only two assets with prices labelled by $S_A$ and $S_B$. Let the
portfolio value be denoted by $U(S_A, S_B)$. Suppose, due to some market movement, the values
of these two assets change by the amounts $\Delta S_A$ and $\Delta S_B$, respectively. Then, if the portfolio
value $U$ depends only on the asset prices in a linear fashion, the change in portfolio value due to
changes in asset prices is

$$\Delta U = \frac{\partial U}{\partial S_A}\,\Delta S_A + \frac{\partial U}{\partial S_B}\,\Delta S_B\,.$$

The stipulation of linear dependence on asset prices necessarily dictates that higher ordered
derivatives are all identically zero so that higher order changes in asset prices do not make a
contribution to change in portfolio value.
For simplicity, consider only overnight market movements. What is the dispersion or deviation
of the change in portfolio value away from the previous day's value, given these overnight
changes in asset prices? The answer lies in the standard deviation, i.e., the square root of the
variance of the change in portfolio value, viz., $[\mathrm{var}(\Delta U)]^{1/2}$. With this, one can now begin to ask
questions related to the potential portfolio losses due to market movements. Because of the overnight nature of the
assumed market movements, we can define a concept called Daily-Earnings-at-Risk, denoted
by DEaR, defined by


$$\mathrm{DEaR}_{A+B} = \sqrt{\mathrm{var}(\Delta U)}
= \sqrt{\mathrm{var}\!\left(\frac{\partial U}{\partial S_A}\,\Delta S_A + \frac{\partial U}{\partial S_B}\,\Delta S_B\right)}
= \sqrt{W\,\Sigma\,W^{T}}\,,$$

where the weight vector $W$ and the covariance matrix $\Sigma$ are, respectively, given by

$$W = \left[\frac{\partial U}{\partial S_A},\ \frac{\partial U}{\partial S_B}\right], \qquad
\Sigma = \begin{bmatrix} \mathrm{var}(\Delta S_A) & \mathrm{cov}(\Delta S_A, \Delta S_B) \\ \mathrm{cov}(\Delta S_B, \Delta S_A) & \mathrm{var}(\Delta S_B) \end{bmatrix}.$$
How should this overnight concept be extended to a holding period of several days?
This question resulted in the invention of Value-at-Risk, or simply VaR. The VaR for a
horizon (or holding period) of T days is then defined as
$$\mathrm{VaR} = \mathrm{DEaR}_{A+B} \times \sqrt{T}\,,$$

which is nothing but a simple scale-up of the daily risk number by a multiplicative factor given
by the square root of the holding period.
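As a minimal numerical sketch of the two-asset case, the DEaR and the scaled-up VaR can be computed directly from the weight vector and covariance matrix. The sensitivities, volatilities, correlation, and holding period below are invented purely for illustration:

```python
import numpy as np

# Hypothetical one-day inputs for the two-asset example (illustrative only):
# portfolio sensitivities dU/dS_A, dU/dS_B and the covariance of overnight price changes.
W = np.array([1_000_000.0, -500_000.0])        # weight vector [dU/dS_A, dU/dS_B]
vol = np.array([0.012, 0.009])                  # assumed daily std dev of dS_A, dS_B
rho = 0.3                                       # assumed correlation between dS_A and dS_B

Sigma = np.array([[vol[0]**2,          rho*vol[0]*vol[1]],
                  [rho*vol[0]*vol[1],  vol[1]**2        ]])

DEaR = np.sqrt(W @ Sigma @ W)                   # sqrt(W Sigma W^T), the daily risk number
T = 10                                          # holding period in days
VaR = DEaR * np.sqrt(T)                         # simple sqrt(T) scale-up

print(f"DEaR = {DEaR:,.0f}   VaR(10-day) = {VaR:,.0f}")
```

Whether this scale-up is legitimate is precisely one of the questions examined later.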
Because VaR looks like, but is not necessarily, a standard deviation number, one is forced, in order to
facilitate a statistical interpretation, to make assumptions regarding the
statistical distribution of the changes in the underlying asset prices in the portfolio. In essence,

VaR is a statistical measure of potential portfolio losses, summarized in a single number, given
an assumed distribution. Common wisdom in the market then decided that a multivariate normal
distribution is easiest to deal with. Consequently, the derived VaR number can also be used to
attach a confidence level to any estimate of maximum possible loss.
But does all of this make good sense, and what does it all mean?
In our quest for "truth", we need to ask some tough questions along the way. They are:
(1) Correlation
What really is the correlation $\rho_{AB}$ in the expressions for the covariance terms
above, given that $\mathrm{cov}(A, B) = \rho_{AB}\,\sigma_A\,\sigma_B$?
(2) Term Structure
There is no consideration of the assets' term structure or time to maturity in the
formulation above. What if A is a 5-year instrument and B is a 10-year
instrument?
(3) Non-linearity
What if assets A and/or B are not linear instruments, e.g., stocks and futures,
but are instead nonlinear instruments, such as options or callable bonds?
(4) Discontinuity
What if assets A and/or B have discontinuous payoffs at some specific market
levels, and both derivatives, given by $\partial U/\partial S_A$ and $\partial U/\partial S_B$, can potentially go to
infinity?
(5) Square Root of Time
What exactly does $\sqrt{T}$ mean and where does it come from?


Questions (2) and (3) can be answered easily by incorporating the following remedies to our VaR
formulation earlier:
a) Break positions down into "primitive" components, the so-called risk factors.
b) Incorporate term structure, i.e., maturity buckets, into the underlying assets.
c) Retain higher-ordered derivatives in the change in portfolio value.
With these simple remedies, we are still left with three unresolved dilemmas, namely, correlation,
discontinuities, and $\sqrt{T}$.
Before continuing on with the three still-unresolved issues, let's talk about each of the proposed
remedies.

Identifying the Correct Risk Factors


Risk factors that influence the behavior of asset prices can be thought of as primordial "atoms"
which make up a material substance. What are these primitive and atomistic components that
contribute to the observed movements in asset prices? There aren't that many. For example, a
bond, at first glance, depends on the term structure of interest rates -- zero rates, to be more
specific. The price of a bond is a linear combination of the present value of some periodic
stream of cash flows. The zero rates at those specific cash flow dates determine the discount
factors needed to calculate the present value of these cash flows.
What else affects the price of a bond? The volatilities of each of these zero rates are another
important determinant of bond price. Since interest rates are not deterministic but are
fundamentally stochastic in nature, they incorporate a random component or "noise". Volatility
is a manifestation of that noise, uncertainty or randomness. In essence, the most primitive
factors one can think of are: zero rates and volatilities, incorporating term structure or time
buckets in both.
Suppose there are $n$ risk factors, denoted by $RF_i$, $i = 1, 2, 3, \ldots, n$. We assume that it is possible
to decompose the price of a trading instrument as a linear combination of these primitive risk
factors, viz.,

$$\text{Trading Instrument} = \sum_{i=1}^{n} c_i\, RF_i\,,$$

for some constants $c_i$.


With the decomposition as illustrated above, it is mandatory that the following assumptions be
made:
a) the linear combination is possible.
b) the decomposition makes sense mathematically.
c) the expression has a meaningful market interpretation.
We give two examples below to illustrate the concept of decomposing an instrument into its
associated risk factors and then demonstrate, using another example, the calculation of the
portfolio variance. The second example, in particular, forces us to ask even more questions.
Example: Decomposition into Primitive Risk Factors
For the bond example earlier, the precise decomposition into its primitive risk factors is

$$\text{Bond Price} = \sum_{i=1}^{n} CF_i\, e^{-z_i t_i}\,,$$

where $CF_i$ and $z_i$ are the periodic cash flows and zero rates, respectively, at times $t_i$.
Although the volatilities of each of these zero rates and their adjoining intertemporal
correlations are not explicitly shown in the expansion above, these relationships are
implicitly embedded in the formulation above. Consequently, the price of this bond is
ultimately determined by the movements of the zero rates and how each rate interacts
with all others across time.
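A minimal sketch of this decomposition, pricing a bond by discounting each cash flow at its own zero rate; the cash-flow schedule and zero curve below are invented for illustration:

```python
import math

# Illustrative 3-year annual-pay bond: cash flows CF_i at times t_i (years)
# and continuously compounded zero rates z_i (all numbers are made up).
cash_flows = [(1.0, 6.0), (2.0, 6.0), (3.0, 106.0)]   # (t_i, CF_i) per 100 face
zero_rates = {1.0: 0.052, 2.0: 0.055, 3.0: 0.057}     # z_i keyed by t_i

# Bond Price = sum_i CF_i * exp(-z_i * t_i)
price = sum(cf * math.exp(-zero_rates[t] * t) for t, cf in cash_flows)
print(f"Bond price per 100 face: {price:.3f}")
```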
The implication of the decomposition above is that there is a need to perform some type of

Taylor series expansion, for which we immediately need to make a fourth assumption:


d) the series expansion converges about some given initial market condition.
Let's illustrate this formulation using a simple portfolio containing only two assets and only two
time buckets.
Example: Two-Asset Portfolio with Two Time Buckets
Let the discrete time periods be denoted by $t_1$ and $t_2$. Suppose the portfolio
value, $U(L_1, L_2, S_1, S_2, \sigma_1, \sigma_2, \omega_1, \omega_2)$, depends only on a small set of market variables
given below:
US LIBOR rates: $L_1$ and $L_2$ at times $t_1$ and $t_2$, resp.
Spot FX rates: $S_1$ and $S_2$ at times $t_1$ and $t_2$, resp.
$LIBOR volatilities: $\sigma_1$ and $\sigma_2$ at times $t_1$ and $t_2$, resp.
Spot FX volatilities: $\omega_1$ and $\omega_2$ at times $t_1$ and $t_2$, resp.
Suppose also that the portfolio value's dependence on these market variables need not
necessarily be linear in nature; then the change in portfolio value $\Delta U$ is

$$\Delta U = \frac{\partial U}{\partial L_1}\,\Delta L_1 + \frac{1}{2}\frac{\partial^2 U}{\partial L_1^2}(\Delta L_1)^2 + \frac{\partial U}{\partial \sigma_1}\,\Delta\sigma_1
+ \frac{\partial U}{\partial S_1}\,\Delta S_1 + \frac{1}{2}\frac{\partial^2 U}{\partial S_1^2}(\Delta S_1)^2 + \frac{\partial U}{\partial \omega_1}\,\Delta\omega_1
+ (\text{same for index } t_2) + \text{higher terms}\,.$$

Notice that we have explicitly kept terms related only to delta and gamma (first and
second derivatives, respectively, with respect to either $L_i$ or $S_i$, for $i = 1, 2$), and vega
(first derivatives with respect to either $\sigma_i$ or $\omega_i$, for $i = 1, 2$), and lumped the rest of the
derivatives as "higher terms". These are the only observable sensitivities in the market.
The variance of the portfolio change can be quite complicated. Some of the expressions
in the variance are

$$\mathrm{var}(\Delta U) = \left(\frac{\partial U}{\partial L_1}\right)^{2}\mathrm{var}[\Delta L_1]
+ \left(\frac{\partial U}{\partial \sigma_1}\right)^{2}\mathrm{var}[\Delta \sigma_1]
+ 2\,\frac{\partial U}{\partial L_1}\frac{\partial U}{\partial \sigma_1}\,\mathrm{cov}[\Delta L_1, \Delta \sigma_1]
+ \cdots + \frac{\partial U}{\partial S_1}\frac{\partial U}{\partial \omega_2}\,\mathrm{cov}[\Delta S_1, \Delta \omega_2] + \cdots$$

which contain an assortment of variances and covariances of the primitive risk factors.
It can be quite exasperating to compute more terms in the Taylor series expansion and then
calculate their contributions to the variance. But let's not go on. Instead, let's ponder on the
following immediate questions which are begging to be asked:
1) What the heck are these higher-ordered derivatives, e.g., $\partial^{3}U/\partial L_1^{3}$, $\partial^{2}U/\partial L_1\,\partial\sigma_2$, etc.?
Which of these sensitivities have market interpretations and should therefore be
incorporated in the calculation of VaR?
2) Does the Taylor series expansion converge about the current market conditions?
What is the range of validity of the expansion when the market experiences huge
swings about the current conditions?


3) The variance $\mathrm{var}(\Delta U)$ contains covariance terms like

$$\mathrm{cov}[\Delta L_1, (\Delta L_2)^2]\,, \quad \mathrm{cov}[\Delta S_1, \Delta\sigma_2]\,, \quad \text{etc.}$$

What are these things?


From practical experience, it is very clear that the only observable terms in the market are first
derivatives (corresponding to delta and vega) and second derivatives (corresponding to gamma).
The cross derivatives and orders higher than second derivatives have no practical interpretation
as far as daily risk management of the trading book is concerned. The only possible exception is
cross-currency contingent claims. It is quite difficult to assign any meaningful interpretation to
most of the other covariance terms, particularly those involving higher orders as in question (3).
Equivalence
The formulation above shows that the following equivalence holds:
VaR $\Longleftrightarrow$ Variance-Covariance
In a general sense, the statement above embodies two distinct conditions: necessity and
sufficiency.

Necessity: Under the framework in which it is meaningful to decompose a security into a linear
combination of its constituent risk factors, the quantification of value-at-risk by the equation
defined earlier as
$$\mathrm{VaR} = \mathrm{DEaR} \times \sqrt{T} = \sqrt{\mathrm{var}(\Delta U)} \times \sqrt{T}$$
is tantamount to expressing the risk associated with the change in portfolio value, given changes
in market conditions, in terms of the associated variances and covariances of the different risk
factors appearing in the variance of $\Delta U$. This is a necessary condition of the equivalence statement above
since it is necessary that a meaningful decomposition of the trading instruments in the portfolio
into their constituent primitive risk factors be possible.

Sufficiency: Conversely, if enough of the meaningful and interpretable variances and


covariances can be retained in the calculation of $\mathrm{var}(\Delta U)$, then they are sufficient to encapsulate

the risk due to changes in portfolio value brought about by changes in market conditions.
Finally, for the equivalence statement to be truly useful, the following assumptions need to be
imposed on the VaR framework formulated as variance-covariance:
1) linear decomposition of trading instruments into risk factors is possible and the
linear combination makes sense.
2) all partial derivatives used in the series expansion exist, are bounded, and have
market relevance.
3) correlations in the covariance expressions are estimable, stable through time, and
therefore make sense.
4) for longer-horizon risk analysis, the $\sqrt{T}$-Rule as a scaling factor for the instantaneous
change in portfolio value is applicable.

Assumptions on the Potential Loss Distribution


Because VaR is cast in a variance-covariance framework, interpreting it as a measure of risk due
to adverse movements in market conditions requires estimating adverse future asset price
moves from historical information on previous price moves. We immediately have to ask:
1) Can historical data really predict the future ?
2) What statistical assumptions are necessary to ensure that VaR has a probabilistic
interpretation ?
Question (1) will be answered in a later section. We'll address question (2) in this section.
The probability distribution of the portfolio's future instantaneous value depends primarily on:
the linear representation of the primitive risk factors, and
the joint distribution of instantaneous changes in these risk factors.


It is instantaneous because all the derivatives involved in the calculation of $\mathrm{var}(\Delta U)$ are
mathematically meaningful only for an infinitesimally small length of time. Although we have
stretched this time frame to account for the change in portfolio value due to an overnight change
in market conditions, effectively a one-day move, the instantaneous nature of the partial
derivatives in $\Delta U$ still remains irrefutable. The dependence on primitive risk factors is a given
because of the equivalence statement between VaR and the variance-covariance formulation. In
addition, the dependence on some kind of joint distribution is a requirement since these risk
factors, e.g., zero rates and volatilities, do not evolve in time independently of each other. To a
large extent and with very few exceptions, they are inter-related.
What important assumption is required concerning the joint distribution of primitive risk factors?
Practitioners in the market often make one big leap of faith and assume a normal joint
distribution with determinable (?!) return variances and covariances. The argument for a normal
distribution is rather ad hoc although it easily facilitates a statistical interpretation of VaR via
confidence levels. However, many of these variances and covariances are problematic and are
difficult to infer from the market. In most cases, they cannot even be determined purely from
statistical analyses of historical data without any kind of subjectivity.
Without an assumption regarding the distribution, the calculated VaR number is but just a
number -- that's all. With the imposition of a normal distribution on the number given by VaR,
however, it is possible to interpret this VaR number as a standard deviation from the mean over
a small interval of time, whether or not the act of interpretation actually makes sense. Also,
with the assumption of normality, the task of estimating the percentile of a probability
distribution becomes easy. If a distribution is normal, then all percentiles are simply multiples
of the distribution's standard deviation. In this case, one standard deviation implies a
confidence level of 84.1%, while 1.65 and 2.57 standard deviations can be translated to 95% and
99.5% confidence levels, respectively.
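These multiples can be checked directly against the standard normal distribution; a small sketch using scipy:

```python
from scipy.stats import norm

# One-sided confidence levels implied by standard-deviation multiples
for k in (1.0, 1.65, 2.57):
    print(f"{k:4.2f} std devs -> {norm.cdf(k):.1%} confidence")

# And the other way around: multiples implied by confidence levels
for p in (0.841, 0.95, 0.995):
    print(f"{p:.1%} confidence -> {norm.ppf(p):.2f} std devs")
```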
Example:
At a confidence level of, say, 95%, there is a 95% chance that the change in portfolio
value, on an overnight basis, will not exceed 1.65 times the calculated value of $\sqrt{\mathrm{var}(\Delta U)}$.
Conversely, one can also say that there is a 5% chance that the change in portfolio value
will exceed this number. Because a change in portfolio value can either be positive or
negative, this amount is usually interpreted as a potential gain or loss in portfolio value.

In the next example below, we illustrate further using a little bit of probability.
Example:
The impetus in developing value-at-risk methods is so that trading institutions could ask
the following question:

How much money might we lose over the next period of time?
VaR answers this question by rephrasing it mathematically: If X T is a random amount
of gain or loss at some future time T, and z is the percentage probability, what quantity v
results in
$$\mathrm{Prob}\{X_T < -v\} = z\ ?$$
The quantity v is the VaR number we seek to find.
Since v(T,z) is both a function of T and z, clearly, in order for the VaR number to have
some meaningful interpretation, we need to attach both a time horizon T and a
probability distribution.
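Under a normality assumption, the quantity v can be read straight off a normal quantile. A sketch with an invented DEaR, a zero mean, and the usual sqrt(T) scale-up:

```python
import math
from scipy.stats import norm

DEaR = 1_500_000.0     # hypothetical daily std dev of the change in portfolio value
T = 10                 # holding period in days
z = 0.05               # tail probability, i.e., 95% confidence

# If X_T ~ N(0, (DEaR*sqrt(T))^2), then Prob{X_T < -v} = z gives
# v = (DEaR*sqrt(T)) * Phi^{-1}(1 - z).
sigma_T = DEaR * math.sqrt(T)
v = sigma_T * norm.ppf(1.0 - z)
print(f"VaR v at {1-z:.0%} confidence over {T} days: {v:,.0f}")
```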
Alas, with confidence levels circumscribed on it, VaR even has probabilistic meanings attached
to it! With all the statistical accoutrements, it is now a very credible number. Believe it or not.
Figure 1 is a graphical interpretation of VaR using an assumed normal distribution.


Figure 1: Probability of Loss Distribution. [The figure shows the probability of gain or loss under the assumed normal distribution, with the left-tail region $\mathrm{Prob}\{X < -v\} = z$ marked at the VaR level between "-Loss" and "+Gain", centered about the mean.]

Just exactly what is being assumed to have a normal distribution? We had better have an answer
to this question if the calculated VaR value were to have a meaning.
To boldly assume that VaR, as a random number, is drawn from a normal distribution can be
quite unpalatable. In fact, nowhere in the procedure discussed earlier, leading to the
determination of the final VaR result, were any statistical assumptions made regarding the
primitive risk factors, their changes, or the VaR number itself. So then, how can this VaR

value suddenly become a random number imbued with statistics? In the ensuing discussions,
we present some arguments one way or the other.

Are the Statistical Assumptions about VaR Correct?


Instead of insisting that VaR is drawn from a normal sample, we need to look at the constituent
risk factors which made up the VaR number. In particular, since value-at-risk is defined by

$$\mathrm{VaR} \equiv \sqrt{\mathrm{var}(\Delta U)} \times \sqrt{T}\,,$$

we need to look at $\Delta U$, the change in portfolio value, given changes in market conditions.
Referring back to our earlier example on a portfolio containing only two assets with two time
buckets, the instantaneous change in portfolio value is given by the linear combination,

$$\Delta U = \frac{\partial U}{\partial L_1}\,\Delta L_1 + \frac{1}{2}\frac{\partial^2 U}{\partial L_1^2}(\Delta L_1)^2 + \frac{\partial U}{\partial \sigma_1}\,\Delta\sigma_1
+ \frac{\partial U}{\partial S_1}\,\Delta S_1 + \frac{1}{2}\frac{\partial^2 U}{\partial S_1^2}(\Delta S_1)^2 + \frac{\partial U}{\partial \omega_1}\,\Delta\omega_1
+ (\text{same for index } t_2) + \text{higher terms}\,.$$

The items $\Delta L_1$, $(\Delta L_2)^2$, $\Delta S_1$, \ldots, $(\Delta\sigma_1)^2$, $\Delta\omega_2$ are all quantities related to either first-order or
second-order changes in market conditions, e.g., changes in LIBOR rates, changes in spot rates,
changes in volatility, etc. If we first assume that all these market rates are normally distributed,
then their associated linear changes are also normally distributed. Unfortunately, the second
order changes obey a Chi-Square distribution, instead of a normal distribution. Hence, we
have the following observations:
a) if $\Delta U$ consists strictly of linear changes alone, then $\Delta U$ is also normally distributed.

b) if $\Delta U$ also contains second-order changes, then $\Delta U$ is no longer normally distributed
but is a combination of both chi-square and normal distributions.
It is, therefore, evident that a VaR framework which attempts to incorporate gamma and other
higher-order sensitivities cannot establish itself on the familiar foundation of normal distribution.
Instead, one has to go out on a limb and make the bold assumption that VaR itself, as a
number, is drawn from a normal distribution without any sound theoretical justification. This
can't be right. More importantly, in most cases, the market variables underlying the portfolio
value are often not normally distributed. Hence, observation (a) is also not quite true unless the
portfolio is sufficiently large so that one could, in principle, invoke the central limit theorem.
The central limit theorem asserts that "as the sample size n increases, the distribution of the
mean of a random sample taken from practically any population approaches a normal
distribution". Interestingly enough, the only qualification for the theorem to hold is that the
population have finite variance. Therefore, one can indeed facetiously argue that if $\mathrm{var}(\Delta U)$
were finite (translated loosely as incurring neither infinite gains nor infinite losses), then the
central limit theorem holds.
How much can we lose over the next period of time, we ask? Less than infinity -- would be the
appropriately terse, but silly, response.
So be it then -- deus ex machina -- large portfolio, ergo, normal distribution.

Non-Normality of Returns
Although the assumption of normality (on either the changes in primitive risk factors or on the
distribution from which the calculated VaR number is drawn) was actually never made during the
calculation of risk exposures, there are distinct consequences for not having a normal or even
"near" normal distribution. Most important among these are:
Predictability of tail probabilities and values.
Dynamic stability of normal parameters, e.g., mean and standard deviation.
Persistence of autocorrelation.


VaR is intended to be used as a predictor of low probability events (technically known as "tail"
probabilities) such as a 5% or less chance of potential change in portfolio value. Because
market variables are inherently non-normal, it would be virtually impossible to verify, with
certainty, the accuracy of the probabilities of extremely rare events. The calculated VaR
number, having not been truly drawn from a normal distribution, is in itself a dubious number,
let alone one which entails predictive capability about potential overnight change in portfolio
value.
In addition, the large literature on the "fat tails" of financial asset returns adds to the problem of
interpretation. Fat tails exist because there are many more occurrences far away from the mean
than predicted by a normal distribution. This phenomenon of leptokurtosis (having larger
mass in the tails and a sharper hump than is consistent with the normality assumption) means that the
normality assumption tends to underestimate low-probability losses. Furthermore, the inherent bias introduced by skewness
(more observations on the left tail or the right tail) can be exacerbated by whether the true
distribution is left or right-skewed and whether the positions in the portfolio are predominantly
long or short.
Secondly, given that returns on assets (as quantified by the changes in the primitive risk factors)
are in reality not normally distributed, the dynamic stability of the associated normal parameters,
namely, mean and standard deviation, can be called into question. Statistically speaking, if
normal parameters are stable over time, then past movements can be used to describe future
movements. In other words, since standard deviation is a measure of dispersion (or uncertainty)
of the distribution about the mean, the more stable the standard deviation is over time, the more
reliable the prediction of the extent of future uncertainty. This is the fundamental principle
underlying the VaR vis-a-vis the variance-covariance framework. Because the predictive
capability of the calculated VaR number depends on the calculation of the portfolio covariance
matrix using historical data, the predictive power of VaR is doomed from the very start if the
normality assumption is not made. The argument has now become rather nonsensical and
circuitous. There is no way out.
Paradoxically, the important thing to bear in mind is that in our previous description of the VaR
framework using historical data to calculate the associated covariance matrix, an assumption of
normality was never made as a basis for estimation -- it just wasn't necessary. So, why is an
assumption of normality now so urgent and important?
Thirdly, are today's changes in the risk factors related to yesterday's changes? For most financial
products, the answer is more likely to be in the affirmative. This implies that asset returns are
not serially independent. Consequently, the time series of price changes are correlated from one

day to the next. This persistence of autocorrelation comes about because the parameters of the
distribution are not constant but are time-dependent or time-varying. The next section addresses
this issue.

Time-Varying Parameters
Asset returns are temporally dependent. More specifically, volatilities and correlations of asset
prices are not constant over time and they tend to bifurcate or make rapid jumps to different
plateaus as dictated by the different regimes in market conditions. They can vary through time
in a way that is not easily captured through simple statistical calculations. This means that
systemic time-dependent biases will vary with the holding period and will be conditioned on the
time of market risk prediction of losses as calculated by VaR. This insight forced us earlier to
incorporate term structure or time buckets into our variance-covariance formulation, with the
hope that by incorporating time dependencies of both volatilities and correlations, shifts in
market regimes would also be captured as a consequence.
Unfortunately, the degree of time-dependent effects varies across instruments and there exists no
single panacea. Timely and accurately determined estimates of both volatilities and
correlations are especially important after a change in market regime, but this requires vigilance
and some level of subjectivity on exactly how they are to be calculated. This brings us full circle
to the unanswered question we asked earlier: can historical data really predict the future?

Historical Data: Does History Determine the Future?


If it is true, as expressed by George Santayana, that "those who cannot remember the past are
condemned to repeat it", can we twist this question around and ask, "does knowing the past
influence the future?". The crux of the philosophy lies in the $\sqrt{T}$-Rule.
The $\sqrt{T}$-Rule, given by

$$\mathrm{VaR} = \mathrm{DEaR} \times \sqrt{T}\,,$$
implies that the risk exposure in the portfolio over a period of T successive days into the future
can be determined historically from today's risk position given by the daily-earnings-at-risk
value, DEaR. As usual, the following tough questions need to be asked:

1) Under what conditions can this claim be true?


2) Where did the $\sqrt{T}$-Rule come from?
3) What is the range of validity for the claim?
4) What are the implications on how historical data should be handled?
We will answer Questions (1) and (2) first.

Serial Independence and Origins of the $\sqrt{T}$-Rule


Two sections ago, we talked about serial dependence -- that the time series of price changes are
correlated from one day to the next. In a Black-Scholes environment for pricing contingent
claims, the assumption of a true random walk (i.e., Brownian motion) is always invoked. While
the market, to a large extent, is not truly random, the assumption of Brownian motion is very
convenient. It certainly facilitates nice-looking and easy formulas for use in valuation.
For risk management purposes, a fundamental component of the random walk assumption relies
on treating each incremental change S in asset price over a small time interval t to be
independent and identically distributed in a normal fashion. In simple words, each incremental
change in asset price is assumed to be normally distributed and unrelated to preceding changes at
earlier times. A time series of these changes is, therefore, serially uncorrelated. Let's
demonstrate this point.
Suppose the change in asset price at time $T$ is denoted by $\Delta S_T$. Suppose also that the change
follows a random walk so that the change at time $T$ is a result of the change at the previous time $T-1$,
triggered by some "noise" $\epsilon_T$, viz.,

$$\Delta S_T = \Delta S_{T-1} + \epsilon_T\,.$$

Each of the noise terms, being Brownian motions, is uncorrelated with zero mean and has the
same constant variance, say $\sigma^2$.
We can now iterate successively on the equation above to yield

$$\Delta S_T = \Delta S_0 + \epsilon_T + \epsilon_{T-1} + \epsilon_{T-2} + \cdots + \epsilon_1 = \Delta S_0 + \sum_{i=1}^{T}\epsilon_i\,.$$

This equation is quite insightful -- the change at time T is due to the change at an initial time 0
plus the incremental sum of past "noises" generated between time 0 and time T.
Taking the expectation of both sides of the equation results in

$$E[\Delta S_T] = E[\Delta S_0] + \sum_{i=1}^{T}E[\epsilon_i] = \Delta S_0\,,$$

and hence, the dispersion of the change from the mean is

$$E\!\left[(\Delta S_T - \Delta S_0)^2\right] = E\!\left[\left(\sum_{i=1}^{T}\epsilon_i\right)^{2}\right] = T\,\sigma^2\,.$$

This is again very insightful -- the dispersion or uncertainty of the change in asset price at time T
is nothing but the uncertainty of the individual "noises", $\sigma^2$, multiplied by the length of the
observation time T. In other words, the uncertainty is an accumulation of each successive noise
contribution leading from the initial time to the final observation time T. The past does, indeed,
foretell the future, albeit relying only on the ramblings of the past. Also, the standard deviation,
being the square root of the variance, is simply $\sigma\sqrt{T}$. Voila! Herein lies the square root of
T!
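A quick simulation, with an invented noise volatility and horizon, illustrates the accumulation of noise and the resulting sqrt(T) scaling of the dispersion:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.01            # assumed std dev of each daily "noise" term
T = 10                  # number of successive days
n_paths = 100_000       # number of simulated paths

# Delta S_T = Delta S_0 + sum of T iid noises; take Delta S_0 = 0 for simplicity.
noises = rng.normal(0.0, sigma, size=(n_paths, T))
dS_T = noises.sum(axis=1)

print(f"simulated std of Delta S_T : {dS_T.std():.5f}")
print(f"sigma * sqrt(T)            : {sigma * np.sqrt(T):.5f}")
```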
What have we learned?
Only through the assumption of a true random walk (via Brownian motion) is the time series of
changes in asset prices serially uncorrelated, resulting in the $\sqrt{T}$-Rule. Serial independence, or
the absence of autocorrelation, is therefore a required assumption for using the $\sqrt{T}$-Rule.

Later on, we will examine the $\sqrt{T}$-Rule in greater mathematical detail and reveal one more
intriguing requirement for it to be applicable.

Historical Data Usage


Given the discussion above, does it matter how historical data are used in the calculation of
VaR? It does indeed. The length of the historical time series used is also relevant.
Traditional methods of estimating volatility and correlation using time series analysis have relied
on the concept of moving average. Moving averages come in two favorite flavors: fixed
weights and exponential weights.
Consider a time series of observations, labelled by $\{x_t\}$. Define the exponential moving
average, as observed at time $t$, by a weighted average of past observations, viz.,

$$\bar{x}_t = \sum_{i=1}^{n} \alpha_i\, x_{t-i}\,,$$

where the weights are $\alpha_i = \lambda^{i-1}(1-\lambda)$, $0 < \lambda < 1$, and must sum to 1. The parameter $\lambda$ is
chosen subjectively to assign greater or less emphasis on more recent or past data points. In
practice, the choice of $\lambda$ is dictated by how fast the mean level of the time series $\{x_t\}$ changes
over time.
A simple arithmetic average results if all the weights are set equal to 1/n, where n is the number
of observations. The choice of the number of data points n can also be subjective. The decision
depends on both the choice of the parameter $\lambda$ and the tolerance level on how quickly the
exponential weights should decline.
Common practice, being more art than scientific statistics, is to choose n = 20 and $\lambda$ between
[0.8, 0.99], regardless of what kind of assets are being analyzed. Since this paper is not about
the statistics of VaR, we'll have to content ourselves with keeping these comments on a cursory
level without asking any questions.
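A small sketch of the exponentially weighted scheme applied to a made-up return series; the choice of lambda = 0.94 is arbitrary within the quoted range, and the weights are renormalized because the window is truncated at n observations:

```python
import numpy as np

rng = np.random.default_rng(1)
returns = rng.normal(0.0, 0.01, size=250)   # made-up daily return series

lam, n = 0.94, 20                            # decay parameter and window length
weights = (1 - lam) * lam ** np.arange(n)    # most recent observation gets weight (1 - lam)
weights /= weights.sum()                     # renormalize so the truncated weights sum to 1

recent = returns[-n:][::-1]                  # most recent observation first
ewma_var = np.sum(weights * recent**2)       # exponentially weighted estimate of variance
flat_var = np.mean(recent**2)                # simple equal-weight (1/n) estimate

print(f"EWMA vol : {np.sqrt(ewma_var):.4%}")
print(f"Flat vol : {np.sqrt(flat_var):.4%}")
```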
One comment we must make, however, is that since VaR is scaled up from an overnight risk
exposure number DEaR, it is preferable to place relatively heavier emphasis on more recent data
points. After all, using an excessively large number of past observations tends to smooth out

more recent events and is, therefore, seemingly contradictory to the overnight intent of DEaR.
The exponential moving average places relatively more weight on recent observations. Not
surprisingly, the goal of an exponentially moving average scheme is to capture more recent short
term movements in volatility along the lines of a popular and more sophisticated estimation
model called GARCH (1,1). The moral lesson seems to be pointing to practicalities in risk
management using simple but sensible tools, instead of reliance on more sophisticated but rather
interpretative estimation models.
To round out our discussion on the alleged predictive power of historical data and serial
dependency, we need to examine the $\sqrt{T}$-Rule more closely.

$\sqrt{T}$-Rule Further Examined


Where does the $\sqrt{T}$-Rule come from? We have partially addressed this question in an earlier
section on serial independence, albeit heuristically. We need to bring in more mathematics this
time.
Consider the price of an underlying, denoted by F, which diffuses continuously as

$$dF = \mu\,dt + \sigma\,dz \qquad \text{(continuous version)},$$

where the Brownian motion of F is governed by the so-called Wiener measure dz. The Wiener
process is a limiting process of an infinitely divisible and compact normal stochastic process with
increments modeled as

$$\Delta F = \mu\,\Delta t + \sigma\,\tilde{\epsilon}\,\sqrt{\Delta t} + O(\Delta t) \qquad \text{(discrete version)}.$$

By "infinitely divisible" we mean each small increment of time $\Delta t$ can be chopped into even
smaller pieces, ad infinitum. The symbol $O(\Delta t)$, read as "order of $\Delta t$", is a short-hand notation
for ignoring higher orders beginning with $\Delta t$. This is, in fact, a unique feature of Brownian
motions in which disturbances or noises larger than a small increment proportional to $\Delta t$ do
not contribute to the path taken by the price F of the asset. Mathematically, the discretized
version of the continuous Wiener measure dz is

$$\Delta z = \tilde{\epsilon}\,\sqrt{\Delta t}\,, \qquad \text{where } \tilde{\epsilon} \sim N(0,1)\,.$$

Thus, the variance of $\Delta F$ is

$$\mathrm{var}[\Delta F] = \sigma^{2}\,\Delta t + O(\Delta t)\,.$$

The volatility or standard deviation is the square root of the variance, viz.,

$$\mathrm{vol}(\Delta F) \approx \sigma\,\sqrt{\Delta t}\,, \qquad \text{for small } \Delta t \rightarrow 0\,.$$

Observe that we have recovered the $\sqrt{T}$-Rule, but this time with a very important caveat -- the
interval of time $\Delta t$ considered is required to be small. The scale factor of $\sqrt{T}$ in VaR, however,
is not intended to be small. Regulatory pressures insist that the holding period T should be 10
days!
There are some serious implications regarding the applicability of the $\sqrt{T}$-Rule. We can now
summarize them:
a) Serial independence of the time series of changes in primitive risk factors needs to be
assumed although, for the most part, changes are primarily autocorrelated.
b) Each change in the primitive risk factors must be assumed to have a normal
distribution so that the principle of random Brownian motion is applicable.
c) The rule can only be used if the time period of observation is sufficiently small
although, in contradiction, the practical intent is to use a 10-day holding period.


Incorporating Options Positions in VaR


In contrast to "linear" instruments such as stocks, futures, bonds, and forward contracts, options
are derived instruments with payoffs contingent on the future paths taken by the linear
instruments mentioned. For instance, a call option, on a stock S and struck at price K, has a
terminal payoff at maturity time T of $\max[0, S_T - K]$. This contingency of non-negative payoff
at maturity forces the value of the call option to be nonlinear for all times prior to maturity -- the
sharpest nonlinearity or convexity occurring around the strike price. This is the reason options
are considered "nonlinear" instruments. In the context of the value-at-risk framework, the
immediate questions to ask are:
1) Can an option be decomposed into a linear combination of primitive risk factors?
2) Does the decomposition make sense?
3) Do the higher-order terms in the decomposition have any market interpretation?
Although these questions were asked earlier in regard to linear instruments, they need to be
asked again even more so in the context of nonlinear instruments. To answer these questions,
one needs to be aware of the following points:
Naively incorporating higher-ordered partial derivatives can have misleading and
disastrous effects.
Since not all contingent claims are created equal, the degree of nonlinearity has to be
taken into account properly.
Interpretation of non-market observable partial derivatives needs to be carefully
thought out.
Discontinuities in option payoffs need to be considered in light of unbounded partial
derivatives.
Incorporating different kinds of "greeks" is not a trivial matter.
The simple example below illustrates the significance of these questions and remarks.

Example:
Suppose the only parameter of importance is the underlying price. Then, the change in
the value V of an option, with respect to a change in the underlying price u, is given by
$$\Delta V = V(u + \Delta u) - V(u) = \frac{\partial V}{\partial u}\,\Delta u + \frac{1}{2}\frac{\partial^2 V}{\partial u^2}(\Delta u)^2 + \cdots$$

The first and second partial derivatives have market interpretations of "delta" and
"gamma", respectively, so we rewrite it as
$$\Delta V \;\rightarrow\; \text{delta}\cdot(\Delta u) + \tfrac{1}{2}\cdot\text{gamma}\cdot(\Delta u)^2 + \cdots$$

Other higher derivatives with respect to the underlying price have no market
interpretations. For a hedged portfolio, if the magnitude of the changes in the underlying
price and price volatility of the underlying asset is sufficiently small, the approximation
for V up till the second order term is sufficient to capture the change in value of the
portfolio.

Incorporating Other Risks


To incorporate volatility risk, one normally includes a correction of the following form,

$$\Delta V = V(\sigma + \Delta\sigma) - V(\sigma) = \frac{\partial V}{\partial \sigma}\,\Delta\sigma \;\rightarrow\; (\text{vega})\,\Delta\sigma\,.$$


In practice, a second order correction to vega is not necessary since, for most options, the
relationship tends to be almost linear in nature and the order of magnitude is insignificant.
Option payoffs are curvilinear in nature and are therefore not symmetric about the current
underlying price u. To take into account the difference in either a price increase or a price
decrease, the derivatives -- as approximated by finite difference -- need to be taken along both
sides of the current underlying price.
In general, to make the approximation of the change in portfolio value more robust, one needs
to consider a multivariate Taylor series expansion of the option value $V(u, r, \sigma, t)$ as a function of
the underlying price $u$, interest rate $r$, volatility $\sigma$, and time $t$, among other things, viz.,

$$\begin{aligned}
\Delta V(u, r, \sigma, t) &= V(u + \Delta u,\ r + \Delta r,\ \sigma + \Delta\sigma,\ t + \Delta t) - V(u, r, \sigma, t) \\
&= \frac{\partial V}{\partial u}\,\Delta u + \frac{\partial V}{\partial r}\,\Delta r + \frac{\partial V}{\partial \sigma}\,\Delta\sigma + \frac{\partial V}{\partial t}\,\Delta t + \text{higher order terms} \\
&= (\text{delta})\,\Delta u + (\text{rho})\,\Delta r + (\text{vega})\,\Delta\sigma + (\text{theta})\,\Delta t + \cdots
\end{aligned}$$

Term Structure Effects: Greek Ladders


Term structure effects are very important and must not be neglected. An expansion similar to
our earlier example on two-asset portfolio with two time buckets needs to be performed when the
portfolio contains options. In effect, each "greek" expression above needs to be replaced by its
corresponding "ladder" -- the rungs of the ladders increasing with time to maturity.
For instance, consider an option position which matures at time $t_5$ and uses only 5 distinct time
buckets, namely, $t_1, t_2, t_3, t_4, t_5$. Since these 5 time buckets are used as distinct key points for
risk managing the position, the delta associated with the option position requires 5 rungs in its
delta ladder, viz.,

$$[\,\text{delta}_{t_1},\ \text{delta}_{t_2},\ \text{delta}_{t_3},\ \text{delta}_{t_4},\ \text{delta}_{t_5}\,]\,.$$


The same term structure effects should be applied to the rest of the "greeks".
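A minimal sketch of how such a delta ladder might be represented and aggregated bucket by bucket across positions; the bucket labels and numbers are invented:

```python
# Delta ladder: one delta per time bucket t1..t5 used to risk-manage the position.
# Same-letter greeks are aggregated bucket by bucket across positions.
buckets = ["t1", "t2", "t3", "t4", "t5"]

position_a = {"t1": 120.0, "t2": 80.0, "t3": -40.0, "t4": 15.0, "t5": 5.0}
position_b = {"t1": -30.0, "t2": 60.0, "t3": 25.0, "t4": 0.0, "t5": 10.0}

portfolio_delta_ladder = {
    b: position_a.get(b, 0.0) + position_b.get(b, 0.0) for b in buckets
}
print(portfolio_delta_ladder)
```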

Assumptions Required for Incorporating Options Positions


The discussion above forces one to reckon with the assumptions required for the VaR framework
before it can properly capture risk exposures introduced by the presence of options in the
portfolio. The assumptions (in principle, more like common sense rules than assumptions) are
now self-evident:
1) Incorporate only those "greeks" which are observable in the market and which are
actually used for risk managing the option positions. The common "greeks" are
delta ladders, gamma ladders, vega ladders, rho ladders, and to a lesser extent,
theta.
2) Assume that the observable greeks are sufficient to capture the various
degrees of curvilinearity in the option positions. This, in turn, requires the
assumption that the Taylor series expansion of the option value in terms of the
observed greeks converges for small perturbations around the set of current
market parameters.
3) Assume that discontinuous payoffs in options do not lead to unbounded partial
derivatives so that the series expansion is meaningful.
Assumption (3) is very interesting and deserves additional analysis. Plain vanilla options
without discontinuous payoffs have very smooth and bounded partial derivatives. That is not the
case for many exotic options and their various embeddings in complex structures. We need to
address this next.

Incorporating Non-Standard Structures


Overall, the VaR vis-a-vis variance-covariance framework is generally not suitable for non-standard
structures with discontinuous payoffs. For instance, consider a so-called digital call
option, defined by the payoff function at maturity time T,

$$\text{payoff}(S_T) = \begin{cases} 0\,, & \text{if } S_T < K \\ 1\,, & \text{if } S_T \geq K\,. \end{cases}$$

There is a sudden jump in payoff from zero to one if, at expiry, the stock price S T is greater than
or equal to the strike price K. Figure 2 illustrates the terminal payoff function for the digital call
option.
Figure 2: Digital Call Option. [The figure shows the terminal payoff jumping from 0 to 1 at the strike K, together with the smooth pre-expiry value whose derivatives blow up near the strike as expiry approaches.]

As illustrated in Figure 2, the value of the option at any time prior to expiration is smooth and
continuous. Prior to expiry, all partial derivatives with respect to the underlying price S are
bounded and well-defined. As the time to expiry diminishes, these derivatives increase

dramatically without bound until they finally "blow up" at expiration, in accordance with the
payoff function given above. Therefore, for a digital call, we ask what is the meaning of

$$\left.\frac{\partial C}{\partial S}\right|_{S=K} \;\rightarrow\; +\infty\,, \qquad \text{as } t \rightarrow T\,.$$
Does it make sense to incorporate this quantity with other well-behaved and bounded greeks?
Carelessly mixing and matching greeks from non-standard structures (which have a natural
propensity to "blow up") with well-behaved bounded ones from vanilla options has some serious
consequences. Because same-letter greeks are normally treated as additive when risk managing
a portfolio, increasingly larger and larger greeks due only to a single option position cloud the
true risk profile of the portfolio. Close to the discontinuity, the contribution of, say, the delta,
to the portfolio variance can either dominate or completely overwhelm the total risk profile.
Secondly, because the Taylor series associated with such non-standard structures are potentially
meaningless, the calculated VaR number (being the portfolio variance) is also likely to make no
sense.
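A sketch of the blow-up: under a Black-Scholes-type valuation of a cash-or-nothing digital call (one common, but not the only, way such a structure might be valued; all inputs invented), the delta at the strike grows without bound as expiry approaches:

```python
import math
from scipy.stats import norm

def digital_call_delta(S, K, r, sigma, tau):
    """Delta of a cash-or-nothing digital call (pays 1 if S_T >= K) under Black-Scholes."""
    d2 = (math.log(S / K) + (r - 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    return math.exp(-r * tau) * norm.pdf(d2) / (S * sigma * math.sqrt(tau))

S = K = 100.0
r, sigma = 0.05, 0.20   # illustrative inputs

# At the strike, the delta grows without bound as time to expiry shrinks.
for tau in (1.0, 0.25, 0.01, 0.0001):
    print(f"tau = {tau:8.4f} years  ->  delta = {digital_call_delta(S, K, r, sigma, tau):12.4f}")
```

Feeding such an exploding delta into an additive greek ladder is exactly the problem described above.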

Incompatibility of "Greeks"
There are fundamental differences between a Black-Scholes delta and a "delta" calculated from a
one basis point parallel shift (or some other kinds of non-parallel shifts) in the yield curve. Vega
sensitivities implied from the Black-Scholes world and those resulting from a calibrated lattice of
interest rate model for valuing interest rate contingent claims are also fundamentally
incompatible. The same fundamental differences also hold for other risk sensitivities. If these
differences are not fully recognized and then resolved, the variance-covariance framework will
not be able to successfully incorporate these disparate kinds of risk sensitivities into a meaningful
measurement of risk.
The variance-covariance framework, by its very construction, forces one to aggregate the
various risk sensitivities of all instruments within the portfolio -- regardless of the inherent
nature of these instruments. The VaR formulation does not even distinguish among the distinct
shades of quirkiness of non-standard structures since the basic tenet of the formulation has

always been that "it is possible to decompose any instrument in the portfolio into a linear
combination of its underlying primitive risk factors". Of course, the primitive risk factors are
the atoms that the structures -- vanilla, non-standard or otherwise -- are made up of. However,
from a risk management perspective, risk sensitivities (e.g., delta, vega or gamma) are neither
calculated nor used by the various trading desks (even within the same institution) in a consistent
and uniform manner.
As an example, consider the case of a swaption, requiring an in-house calibrated stochastic
interest rate model to determine its value and its day-to-day risk sensitivities. The model is
normally calibrated to observed market traded caps/floors and European swaptions, after which
the swaption in question is then valued using the model. Also, market prices are quoted in terms
of one single volatility as implied from Black's 76 formula.
The implied Black volatilities of the calibrating caps and floors constitute a linear array,
arranged accordingly by tenor. For swaptions, it is not as simple. A swaption is an option on
some underlying swap. Thus, swaption volatilities do not form linear arrays but rectangular
matrices instead. The two-dimensionality is necessary to take into account both the option tenor
and the term to maturity of the underlying swap.
From a variance-covariance perspective, there are immediately two issues to confront:
a) the meaning and construction of the vega ladder.
b) compression of two-dimensional swaption volatility into a one dimensional risk
exposure ladder.
On one hand, vega ladders for swaptions are clearly not acceptable, as swaption volatilities
geometrically constitute a two dimensional surface and not a linear array. On the other, the
variance-covariance framework, as constructed in earlier sections, is a quadratic multiplication
of the weight vector $W$ and the covariance matrix $\Sigma$, viz.,

$$W\,\Sigma\,W^{T}\,,$$
where the weight vector W is composed of the various greek ladder sensitivities, arranged in a
linear array. Clearly, the linear framework of VaR does not allow for a matrix structure.


A compromise, albeit not an ideal one, is to "collapse" or compress the matrix array into a linear
array along the tenors of the swaptions, ignoring the terms to maturity of the underlying swaps.
Doing the compression, however, introduces mixing and matching of vegas into the variance-covariance framework, thereby rendering the calculated VaR difficult to interpret. There is no
simple way out and vega, unfortunately, is not the only greek sensitivity afflicted with this
problem.
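A sketch of the compromise: a two-dimensional swaption vega matrix (option tenor by underlying swap term; tenors and numbers are invented) collapsed into a one-dimensional ladder along option tenor only:

```python
import numpy as np

# Hypothetical swaption vega matrix: rows = option tenor, columns = underlying swap term.
option_tenors = ["1y", "2y", "5y"]
swap_terms    = ["2y", "5y", "10y"]
vega_matrix = np.array([
    [4.0, 9.0, 6.0],     # vegas of 1y options on 2y/5y/10y swaps
    [3.0, 7.0, 5.0],     # 2y options
    [1.0, 4.0, 8.0],     # 5y options
])

# Compression: sum across the swap-term dimension, keeping only option tenor.
vega_ladder = vega_matrix.sum(axis=1)
for tenor, v in zip(option_tenors, vega_ladder):
    print(f"{tenor} vega bucket: {v:5.1f}")
```

The information about which underlying swap the vega belongs to is lost, which is exactly the mixing and matching described above.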

Using Proxies and Short Cuts


Earlier we discussed the handling of historical data and their predictivity in loss estimates using
the VaR framework. In this section, we discuss issues concerning data quality and what to do in
the absence of sufficient amounts of historical data. The proper usage of statistics aside, the
construction of the variance-covariance framework, as articulated so many times before,
requires careful usage of historical data and myriads of assumptions about the data's behavior.
What happens then when either market and/or position data are hard to come by?
For highly illiquid instruments, such as many emerging-market bonds and currencies,
"clean" historical data are simply unavailable. In general, data streams from different sources
could have gaps or missing data which then require some subjective patch work. Data which are
particularly "noisy" also require some kind of subjective "scrubbing" and cleaning. Arguments
always ensue when deciding how much "bad" data to remove or how much patch work needs to
be quilted onto the database.
Many of the data streams warehoused in a database may be asynchronous -- i.e., not collected at
the same time during the course of a trading day. They are, therefore, contaminated by intra-day
trading activities. In addition, an institution's daily collection and aggregation of trading position
summaries from the various in-house trading systems may not be performed at the same time
across all trading desks and installations. This means that both market data and an institution's
own trading position data could potentially suffer from asynchronicity. The impact of timing
misalignment certainly could be significant in the estimate of the portfolio covariance matrix.
Some proxies and short cuts commonly used in practice are:
a) use in-house default factors for minor foreign exchange exposures instead of using
undesirable and incomplete market data.


b) when long term rates, say beyond 10 years or so, are not available, perform linear
extrapolation to obtain longer periods on a flat basis.
c) ignore basis risk by using LIBOR-based curve as the underlying curve and adding
a constant spread to it in lieu of appropriate curves for other fixed income securities.
d) since short rates tend to be noisy, retain short rate information without change for
several days worth of VaR calculation.
e) since some implied volatilities cannot be inferred from the market, use calculated
historical volatilities as proxies.
f) aggregate certain gamma or vega buckets into one bigger bucket when the entire
ladders are not available from trading systems.
For a myriad of technical reasons, proxies and short cuts are often used to represent actual
market data, whenever possible. It is, therefore, imperative to ask: is it prudent to use proxies
when market data are sparse or unclean? There is, unfortunately, no good answer. Because
proxies and short cuts are a necessary evil and may have to be used despite the lack of full
justification, one is usually left with no satisfactory or defensible explanation.

Some Evident Proxy Dangers


In the context of the variance-covariance framework, we point out one particular undesirable
effect of using proxies for interest rates. Similar arguments hold for other proxies. Let the real
or true (but unobtainable) rates, labelled $r_i$ at times $t_i$, be proxied by some rates $l_i$, viz.,

$$r_i \approx l_i + s_i\,,$$
where $s_i$ represents the spreads (or "errors") over the unobtainable true rates. Using this proxy,
the change in portfolio value $\Delta U$ is

$$\Delta U_{r_i} \approx \sum_i \frac{\partial U}{\partial l_i}\,\Delta l_i + \sum_i \frac{\partial U}{\partial s_i}\,\Delta s_i\,.$$

The portfolio variance, $\mathrm{var}[\Delta U_{r_i}]$, naturally contains terms like

$$\mathrm{var}(\Delta l_i)\,, \quad \mathrm{cov}(\Delta l_i, \Delta l_j)\,, \quad \mathrm{var}(\Delta s_i)\,, \quad \mathrm{cov}(\Delta s_i, \Delta s_j)\,, \quad \mathrm{cov}(\Delta s_i, \Delta l_j)\,.$$
The presence of these covariance terms points to a potential problem with
a) the persistence of autocorrelation, and
b) correlation between the time series of proxies and the error terms.
Depending on the instruments being proxied, the error terms may not be small. One is
necessarily forced to make very strong assumptions regarding the behavior in items (a) and (b)
before a variance-covariance framework can be used to determine VaR. There is no further need
to elaborate on these points since we have already made several arguments earlier along this line
when a discussion of serial independence was made.

Validating Risk Measurement Model Results


Model validation, from a scientific perspective, is difficult and problematic. Given all the
reservations, ranging from data usage to model assumptions to the probabilistic interpretation of
risk measurement model outputs, it is extremely difficult to keep a scientific straight face and
say "all is good". In fact, all is not good if the intent of using the risk measurement model is not
clear from the outset.
The recent regulatory impetus to use the VaR number for market risk capital purposes is one
problematic and unjustifiable usage of risk measurement models. For a variety of very good
reasons, trading desks in reality do not use this highly condensed VaR number for risk managing
their trading positions. Why, then, are we so enamored of this number?
We definitely need to ask the BIG question: What is this calculated VaR number used for?
Following up on this query, any sane modeler or user of the model needs to ask: Is this
supposed to be rocket science or is it an exercise in prudent risk management?
It is reckless to rely too heavily upon risk measurement models as crystal balls to predict

potential losses or to assess capital charges for trading activities. It is equally reckless not to
become fully aware of the myriad limitations imposed on such a risk measurement model. The
human elements of prudence, sound judgment, and trading wisdom -- all non-quantifiable
subjectivity -- cannot be more emphatically stressed than what we already have throughout this
article.

Back-Testing
In practice, many institutions with internal risk measurement models routinely compare their
daily profits and losses with model-generated risk measures to gauge the quality and accuracy of
their risk measurement systems. Rhetoric aside, given the known limitations of risk
measurement models, historical back-testing may be the most straightforward and viable means
of model validation.
Back-testing has different meanings to different people. Regulatory guidelines suggest back-testing as "an ex post comparison of the risk measure generated by the model against actual daily
changes in portfolio value over longer periods of time, as well as hypothetical changes based on
static positions". The essence of all back-testing efforts is, therefore, to ensure a sound
comparison between actual trading results and model-generated risk measures. The logic is that
a sound VaR model, based on its past performance and under duress of the back-testing
procedure, will accurately portray an institution's estimate of the amount that could be lost on its
trading positions due to general market movements over a given holding period, measured using
a pre-determined confidence level. Events outside of the confidence intervals are deemed
catastrophic market phenomena unforeseen by the model. But how many institutions really have
the ability to combine their entire trading portfolios into one single risk measurement system?
Figure 3 illustrates the result of one instance of back-testing for a sample portfolio. The graphic
presentation is intuitively appealing. The "envelope" is an indication of the theoretical bound of
potential change in portfolio value vis-a-vis actual observed P/L. There is really no good reason
to develop further statistical tests -- disguised in the name of science -- as currently being
advocated in the market by some people to validate a relatively unscientific risk measurement
model. Of course, one could facetiously push the borders of the envelope further away from
actual P/L observations by "tweaking" some key data inputs into the calculation of the VaR
number or by simply multiplying some unjustifiable scaling factors to the VaR number.
Common sense appears to be the key judgment factor that determines what is proper and what is
not.
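A sketch of the envelope comparison: count how many daily P/L observations fall outside a +/-2 standard deviation band implied by the model. The P/L series and model standard deviation below are entirely invented:

```python
import numpy as np

rng = np.random.default_rng(2)

days = 250
model_sd = 1_000.0                                  # model-implied daily std dev ($000)
daily_pnl = rng.standard_t(df=4, size=days) * 600   # fat-tailed made-up P/L series ($000)

upper, lower = 2 * model_sd, -2 * model_sd          # the +/- 2 s.d. envelope
exceptions = np.sum((daily_pnl > upper) | (daily_pnl < lower))

print(f"observations outside the envelope: {exceptions} of {days} "
      f"({exceptions / days:.1%}; a normal model would expect about 4.6%)")
```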


Figure 3: Back-Test Results. [The figure plots daily P/L for a sample portfolio, in US$ thousands, from July 1995 to June 1996 against a +/-2 standard deviation VaR envelope; the marked points indicate values outside the envelope.]


Some "Final" Thoughts


It is true that senior management tends to prefer a single condensed measure of the institution's
trading risk. Since VaR has all this intuitive appeal as a condensed big picture of risk exposure,
VaR has gained significant support primarily from the regulatory sectors and has been touted to
senior management as a risk control device. But what is missing is that trading risk cannot be
quantified strictly in terms of a single measure. Trading decisions involve a dynamic multifaceted interaction of several external factors. The shift of a specific point on the yield curve,
the jump in the payoff characteristic of a structured note, or the specific tenor of an embedded
option and its path-dependencies, to name just a few, are the contributory factors to the risk
profile of a trading portfolio. VaR is none of these.
This value-at-risk number attempts to express the magnitude of multi-dimensional market risk as
a scalar. Reliance on a one-dimensional VaR number -- by sheer faith -- is both perplexing
and mind-numbing, even to me, who was instrumental in constructing such an internal
theoretical framework for risk measurement. As defined through its equivalence with the
variance-covariance framework, value-at-risk, regardless of confidence levels, changes
significantly depending on
the time horizon considered
correlation assumptions
integrity of database and statistical methods employed,
the choice of mathematical models.
Consequently, VaR does not and cannot provide certainty or confidence outcomes, but rather,
an expectation of outcomes based on a specific set of assumptions, some of which are very
difficult to justify. Other equally important risks, e.g., liquidity risk, model risk, operational
risk, etc., are very difficult to quantify and are, therefore, not properly taken into account by a
condensed VaR number.
So, have we learned something about VaR or from VaR?
Indeed we have -- very important ones. The lessons are not in the mathematics nor probability
nor statistics nor in the soothsaying power of VaR nor the numerous highfalutin ways of
extracting, smoothing, and interpolating data. All of these tools are there, but they are not
important.


Our quest for truth has ended right here.


The recent attention focused on VaR, together with its implication on trading losses and
regulatory capital requirement, has triggered some serious queries and attracted keen attention
from senior management. As a scientist, an academic, a risk manager, and a seeker of truth,
I am utterly delighted when ranking executives of the Bank become more
involved and begin to ask serious questions about mathematical models, multi-factor mean-reverting diffusion processes, implied volatilities, and the like -- none of which, of course, are
within their technical comprehension.
How refreshing it is for a member of the Board to query, with genuine awe, about the impact of
CMT caps on the trading portfolio and in so doing, ask "What is a CMT cap?". How
enlightening it is for the CEO to inquire whether the current condition of systems infrastructure is
sufficient to support more American style swaption trades. The heightened awareness of senior
management brought about by VaR, and their participation and concerns in the day-to-day risk
management process, stands to benefit all of us who are in the financial industry. For it is in the
awareness and support of senior management -- and through the prudent day-to-day risk
management functionalities -- that an institution engaged in market risk activities can truly
protect itself from losses and from its own trading follies. It is not through a single value-at-risk
number generated by some internal model, however rational its assumptions may be.
Indeed, a dialog has begun -- thanks to VaR.


References

Basle Committee on Banking Supervision, Amendment to the Capital Accord to Incorporate Market Risks, January 1996.
International Institute of Finance, Specific Risk Capital Adequacy, Sept. 20, 1996.

RiskMetrics -- Technical Document, 2nd Edition, J. P. Morgan, November 1994.


Kupiec, P. H. and J. M. O'Brien, The Use of Bank Trading Risk Models for Regulatory Capital
Purposes, Board of Governors of the Federal Reserve System, Finance and Economics
Discussion Series, March 1995.

Acknowledgement: The author wishes to thank Art Porton for the outstanding graphic work and his critique of the manuscript.

