
New York Morgan Guaranty Trust Company

February 1995 Market Risk Research


Jacques Longerstaey (1-212) 648-4936
Peter Zangari (1-212) 648-8641

Five questions about RiskMetrics

• Analysis of RiskMetrics documentation prompted numerous end-user questions
• The five most popular technical questions are addressed herein
• Clarifications will be embedded in the next edition of the RiskMetrics – Technical Document

For more information or comments contact:

Jacques Longerstaey
New York
(1-212) 648-4936
longerstaey_j@jpmorgan.com

Benny Cheung
London
(44-171) 325-4210
cheung_b@jpmorgan.com

Mike Wilson
Singapore
(65) 326-9901
wilson_mike@jpmorgan.com

The wide distribution of RiskMetrics material since its launch on October 11, 1994, whether through seminars, individual client presentations, or mass mailings, has prompted many questions about the inner workings of the methodology and the assumptions behind it. While the scope of questions has been extremely diverse, J.P. Morgan's Market Risk Research group in New York and the RiskMetrics coordinators around the world, who have to date diligently responded to questions from potential users, have found that a number of technical questions come up regularly in risk management discussions. Therefore, in an effort to share these clarifications with all potential users of RiskMetrics, this paper addresses the most popular technical questions asked to date.

In summary, these questions are:

1. Normality: Changes in financial asset prices are known not to be normally distributed.
How appropriate then is the assumption of normality used by the RiskMetrics approach?

2. Stability: How stable are the RiskMetrics volatility and correlation estimates?

3. Mean: Given the relatively short history used by RiskMetrics volatility estimates and the
possible impact of short-term trends on these estimates, shouldn’t volatility be measured as
the difference from zero versus the current deviation from the sample mean?

4. Log or percent changes: RiskMetrics estimates use the distribution of percentage changes in market values to estimate volatility. Shouldn't you use the distribution of log changes instead?

5. Cash flow allocation to vertices: The RiskMetrics mapping methodology advocates the use of a historical variance method to allocate cash flows to standard maturity vertices. Since the algebra of the method boils down to solving a second-degree equation, aren't there some instances where either multiple solutions or no solution can be derived?

We look forward to future discussions with interested parties regarding other issues of importance, as this will enable us to further refine our approach and clarify standard RiskMetrics publications. The clarifications contained in the following pages will be fully integrated into subsequent editions of the RiskMetrics – Technical Document.

1. Normality: Changes in financial asset prices are known not to be normally distributed. How appropriate
then is the assumption of normality used by the RiskMetrics approach?

An important advantage of assuming that changes in asset prices are distributed normally is that
we can make predictions about what we expect to occur. In turn, we can assess how well our
model performs. There are two issues concerning how well the assumption of normality
characterizes the data:
• Measuring the difference between observed and predicted frequencies of observations in the
tails of the normal distribution (the “how often” question in the chart below)
• Measuring the difference between observed and predicted values of these tail observations
(the “how large” question in the chart below)

Chart 1
Distribution of daily DEM/$ percent changes
January 1988 to January 1995
[Histogram of daily percent changes with a fitted normal distribution overlaid, illustrating the "how often" (tail frequency) and "how large" (tail magnitude) questions.]

Since RiskMetrics produces both volatilities and correlations, we need to address these issues
for both univariate and bivariate return distributions. Our objective is to derive predictions based
on the normal distribution and then compare what is observed to these predictions. We organize
this discussion as follows.
• First, we compare observed and predicted univariate tail probabilities.
• Second, we compare observed and predicted values of points that fall into the tail areas (tail
points).
• Third, based on the bivariate normal distribution, we compare observed and predicted
bivariate tail probabilities.
• Fourth, we compare observed and predicted values of tail points for some correlated time
series.

Univariate tail probabilities


An often-used method to assess the validity of our assumptions is to compute the observed frequency of returns that exceed their "adverse rate move." Let X_t denote the percent return at day t and let σ_t denote the 1-day forecast standard deviation of X_t. Then we can define theoretical tail probabilities as

(1.1)  Prob(X_t < -1.65σ_t) = 5%  and  Prob(X_t > 1.65σ_t) = 5%

corresponding to the lower and upper tail areas, respectively.



Now, if T is the total number of returns observed over a given sample period, the observed tail probabilities are computed simply as

(1.2)  (# of X_t < -1.65σ̂_t) / T  and  (# of X_t > 1.65σ̂_t) / T

The standard deviations, σ̂_t, are estimated using the RiskMetrics methodology of exponentially weighted moving averages with a decay factor of 0.94. The following charts show a typical historical time series of X_t, -1.65σ̂_t, and 1.65σ̂_t for the DEM/$ exchange rate, as well as the standardized distribution of DEM/$, which is X_t/(1.65σ̂_t).
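As an illustration, the exceedance count in (1.2) can be sketched in a few lines. The returns below are synthetic standard normal draws rather than an actual price series, and the recursion is the standard zero-mean form of the exponential estimator with decay 0.94:

```python
import numpy as np

# Synthetic N(0,1) "returns" stand in for a real price series.
rng = np.random.default_rng(0)
returns = rng.standard_normal(2000)

lam = 0.94  # RiskMetrics decay factor
sigma2 = np.empty_like(returns)
sigma2[0] = 1.0  # seed the recursion at the unconditional variance
for t in range(1, len(returns)):
    # zero-mean exponentially weighted moving average of squared returns
    sigma2[t] = lam * sigma2[t - 1] + (1 - lam) * returns[t - 1] ** 2
sigma = np.sqrt(sigma2)

# observed tail frequencies, as in (1.2)
lower = np.mean(returns < -1.65 * sigma)
upper = np.mean(returns > 1.65 * sigma)
print(lower, upper)
```

Because the simulated returns here really are normal, the printed frequencies should land near the predicted 5% in each tail, up to estimation noise in σ̂_t.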

Chart 2
DEM/$ returns versus estimated volatility
January 1988 - January 1995
[Daily DEM/$ returns plotted with the ±1.65σ̂_t volatility bands.]

Chart 3
Distribution of standardized DEM/$
[Histogram of the standardized series X_t/(1.65σ̂_t); under the model, 5% of observations are expected in each tail beyond ±1.]

We would expect that points outside the black bands would occur about 10% of the time (i.e.
5% above 1.65 σ̂ t and 5% below -1.65 σ̂ t ). The table below presents observed tail probabilities
for some selected price series.

Percent in Percent in
Price/rate series lower tail upper tail Sample period

Foreign exchange
DEM/$ 4.20 5.33 Jan 88 - Jan 95
JPY/$ 3.51 5.14 Jan 87 - Jan 95
Equity
DEM 4.17 4.23 Jan 87 - Jan 95
JPY 5.14 3.70 Jan 87 - Jan 95
US 4.10 4.74 Jan 87 - Jan 95
3-month LIBOR
DEM 5.27 3.51 Jan 93 - Jan 95
JPY 7.17 4.03 Jan 93 - Jan 95
US 3.56 6.90 Jan 93 - Jan 95
10-year zero
DEM 4.68 3.22 Jan 90 - Jan 95
JPY 5.57 3.55 Jan 90 - Jan 95
US 4.84 5.24 Jan 90 - Jan 95

As we can see, with the exception of the Japanese 3-month rate lower tail probability (7.17%),
and the USD 3-month rate upper tail probability (6.90%), all computed probabilities perform
well relative to their predictions. As documented in the RiskMetrics – Technical Document,
the fact that money market rates do not perform as well as the others should not be surprising.
Their discretionary nature leads to departures from normality.

While estimating tail probabilities is useful, we should also be interested in the value of the
observations which fall in the tail area. We refer to these observations as tail points. Since the
fraction of observations which fall in the tail area does not give any information about the value
of those observations, we can check how well the model (Normal) predicts by comparing the tail
points’ observed and predicted values.

Univariate tail points


We define the observed values as the average value of the observations which fall into a tail area. For example, if we are interested in the lower tail, we first record the value of the observations X_t < -1.65σ̂_t and then find the average value of these returns. Calculations for values in the upper tail are analogous.

Now, based on the assumptions about the distribution of returns, we can derive forecasts of these tail points. These forecasts are known as the predicted values. They are derived as follows. Since we assume that returns are normally distributed, our best guess of any return is simply its expected value. Therefore, the predicted value for an observation falling in the lower tail at time t is

(1.3)  E[X_t | X_t < -1.65σ_t] = -σ_t · λ(-1.65),  where λ(α) = φ(α)/Φ(α)

φ(α) is the standard normal density function evaluated at α.
Φ(α) is the standard normal cumulative distribution function evaluated at α.
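The prediction in (1.3) is easy to verify numerically. The sketch below computes λ(-1.65) from the error function and checks the resulting predicted tail mean against a Monte Carlo average of simulated tail points; σ = 1.2 is an arbitrary illustrative volatility:

```python
import math
import random

def lam(alpha):
    """lambda(alpha) = phi(alpha) / Phi(alpha): normal density over normal CDF."""
    phi = math.exp(-0.5 * alpha * alpha) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(alpha / math.sqrt(2.0)))
    return phi / Phi

sigma = 1.2  # hypothetical 1-day volatility forecast, in percent
predicted = -sigma * lam(-1.65)  # E[X | X < -1.65*sigma] under normality

# Monte Carlo check of the formula
random.seed(1)
draws = [random.gauss(0.0, sigma) for _ in range(200_000)]
tail = [v for v in draws if v < -1.65 * sigma]
observed = sum(tail) / len(tail)
print(predicted, observed)  # both close to -2.48
```

The closeness of the two numbers mirrors the observed-versus-predicted comparison carried out in the table below.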

The table below shows observed and predicted mean values of tail observations. In addition, it presents the average difference between the observed and predicted tail points as well as their standard deviation. For example, consider the Japanese equity series. The observed average value of the lower tail returns is -2.67% while normality predicts -2.63%. Further, the mean difference between the observed and predicted values is small.

Mean Standard
difference deviations
Observed, Predicted, (obs - pred), (obs - pred),
Price/rate series percent percent percent percent Sample period

Foreign exchange
DEM Jan 88 - Jan 95
Upper tail 1.458 1.467 -0.008 0.266
Lower tail -1.548 -1.520 -0.027 0.275
JPY Jan 87 - Jan 95
Upper tail 1.489 1.416 0.072 0.346
Lower tail -1.424 -1.369 -0.056 0.293
Equity
DEM Jan 87 - Jan 95
Upper tail 2.522 2.551 -0.028 0.571
Lower tail -2.784 -2.653 -0.130 1.073
JPY Jan 87 - Jan 95
Upper tail 2.944 2.745 0.199 0.825
Lower tail -2.668 -2.630 -0.037 0.831
USD Jan 87 - Jan 95
Upper tail 1.660 1.672 -0.012 0.289
Lower tail -2.158 -1.953 -0.205 1.145
3-month LIBOR
DEM Jan 93 - Jan 95
Upper tail 1.448 1.360 0.083 0.309
Lower tail -1.727 -1.553 -0.173 0.464
JPY Jan 93 - Jan 95
Upper tail 2.267 2.183 0.083 0.428
Lower tail -2.647 -2.397 -0.250 0.634
USD Jan 93 - Jan 95
Upper tail 2.488 2.220 0.268 0.720
Lower tail -1.872 -1.839 -0.033 0.238
10-year zero
DEM Jan 90 - Jan 95
Upper tail 1.655 1.590 0.064 0.488
Lower tail -1.232 -1.198 -0.034 0.240
JPY Jan 90 - Jan 95
Upper tail 2.121 2.032 0.088 0.470
Lower tail -1.594 -1.610 0.015 0.293
USD Jan 90 - Jan 95
Upper tail 1.697 1.756 -0.058 0.278
Lower tail -1.582 -1.562 -0.020 0.282

Overall, the observed and predicted values across all 11 time series are very close. When a return
is a tail observation, the normal distribution offers a good prediction of its value. We conclude
that based on the volatility estimates, the normal distribution predicts the average value of the
tail observations quite well.

Bivariate tail probabilities


In addition to providing volatility estimates, RiskMetrics also provides daily updates of
correlation forecasts. These correlations provide a measure of linear association between any
pair of return series. They are also required inputs for the calculation of DEaR of a portfolio that
consists of two or more assets. For example, for a portfolio consisting of two assets X and Y, its
DEaR is computed by
New York Morgan Guaranty Trust Company page 6
February 1995 Market Risk Research
Jacques Longerstaey (1-212) 648-4936
Peter Zangari (1-212) 648-8641

(1.4)  DEaR_xy = sqrt( V [C] V^T )

where

V = [ DEaR_x   DEaR_y ]        (DEaR vector)

[C] = | 1      ρ_xy |
      | ρ_xy   1    |           (correlation matrix)

V^T = | DEaR_x |
      | DEaR_y |                (transpose of V)
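As a sketch, the two-asset DEaR calculation reduces to a small quadratic form; the DEaR inputs and correlation below are hypothetical:

```python
import numpy as np

# Hypothetical inputs: 1-day DEaR of each position and their correlation.
dear_x, dear_y = 1.0e6, 0.5e6
rho = 0.3

V = np.array([dear_x, dear_y])          # DEaR vector
C = np.array([[1.0, rho], [rho, 1.0]])  # correlation matrix
dear_xy = float(np.sqrt(V @ C @ V))     # combined DEaR = sqrt(V C V^T)
print(dear_xy)
```

With ρ = 1 the expression collapses to DEaR_x + DEaR_y; lower correlations produce the diversification benefit.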

Not only is it important to obtain accurate estimates of the correlations, but we can derive additional predictions by using the correlation estimates in conjunction with the assumption of normality. We begin, as in the univariate case, by comparing observed and predicted probabilities.

In the bivariate case we are interested in analyzing the probabilities associated with the joint distribution of two return series. Studying these probabilities not only tells us how well the theoretical model performs but also whether the correlation estimates are good predictions. For any two return series, X_t and Y_t, we focus on the event Prob(X_t/σ_x,t < 0 & Y_t/σ_y,t < -1.65), i.e., the probability of the return series X_t being less than zero and Y_t being in its lower tail. Note that the choice of X_t being less than zero is strictly arbitrary. To obtain the observed values we simply compute

(1.5)  [# of (X_t/σ_x,t < 0 & Y_t/σ_y,t < -1.65)] / T × 100

We calculate the predicted probability by integrating over the bivariate density function, i.e.,

(1.6)  B(0, -1.65, ρ) = ∫_{-∞}^{0} ∫_{-∞}^{-1.65} φ(x, y, ρ) dy dx

where φ(x, y, ρ) is the standard normal bivariate density function and ρ is the correlation coefficient between x and y.
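Lacking a convenient closed form, B(0, -1.65, ρ) can be approximated by Monte Carlo. The sketch below simulates correlated standard normals (ρ = 0.5 is an arbitrary choice) and counts joint tail events as in (1.5):

```python
import numpy as np

rho = 0.5  # hypothetical correlation
rng = np.random.default_rng(42)
n = 500_000

# correlated standard normals built from independent draws
z = rng.standard_normal((n, 2))
x = z[:, 0]
y = rho * z[:, 0] + np.sqrt(1.0 - rho**2) * z[:, 1]  # corr(x, y) = rho

# Monte Carlo estimate of B(0, -1.65, rho)
b = np.mean((x < 0.0) & (y < -1.65))
print(b)
```

The estimate must lie between the independence value Φ(0)·Φ(-1.65) ≈ 0.0247 and the marginal bound Φ(-1.65) ≈ 0.0495 for any positive ρ.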

The table below presents the predicted and observed probabilities (in percent) for the bivariate
case. The predicted probabilities are presented in parentheses below the observed probabilities. All
calculations are based on sample periods previously used.

Prob(X_t/σ_x,t < 0 & Y_t/σ_y,t < -1.65)


Yt DEM JPY DEM JPY USD DEM JPY USD DEM JPY USD
Xt fx fx equity equity equity 3-mo. 3-mo. 3-mo. 10 z 10 z 10 z
DEM fx 4.5
(5.0)
JPY fx 3.8 4.9
(4.1) (5.0)
DEM equity 1.6 2.5 4.7
(1.9) (2.1) (5.0)
JPY equity 2 2.2 3.4 5.8
(2.0) (2.0) (3.3) (5.0)
USD equity 1.8 2 2.2 2.7 4.5
(2.5) (2.3) (3.0) (2.7) (5.0)
DEM 3-mo. 1.6 1.3 0.7 0.9 0.9 6.3
(2.6) (2.8) (1.7) (2.2) (2.4) (5.0)
JPY 3-mo. 0.4 0.7 0.2 1.3 1.1 0.7 7.4
(2.2) (2.3) (2.5) (2.8) (2.5) (2.8) (5.0)
USD 3-mo. 0.4 0.9 0.7 0.4 0.2 1.1 1.6 3.8
(2.3) (2.5) (2.4) (2.5) (1.7) (2.7) (2.4) (5.0)
DEM 10z 2.5 2.2 0.9 3.1 2 3.8 3.8 2.2 4.5
(3.0) (3.0) (0.8) (2.0) (2.4) (3.3) (2.3) (2.9) (5.0)
JPY 10z 1.3 1.1 1.3 4.5 2.9 3.1 5.4 1.3 2.5 5.4
(2.2) (2.0) (2.2) (3.3) (2.6) (2.8) (3.5) (2.5) (3.1) (5.0)
USD 10z 1.6 2.9 2 2.9 0.4 3.8 2.9 3.4 1.8 2.7 5.2
(2.3) (2.7) (2.3) (2.5) (0.6) (2.5) (2.4) (3.9) (3.0) (2.6) (5.0)

To take an example, the observed probability of the Japanese 10-year zero return (Xt) being less
than zero and the German 3-month LIBOR (Yt) return being less than its adverse rate move is
3.1%. The predicted probability for this same event is 2.8%. Similarly, the observed probability
of JPY fx being less than zero and the DEM fx being less than its adverse rate move is 3.8%.
The predicted probability is 4.1%. Overall, bivariate normality predicts reasonably well with the
exception of money market rates.

Bivariate tail points


For any pair of returns, we are now interested in the value of one return when the other is a tail point. A comparison of the average value of the tail points is similar to what was previously done for the univariate case. We define the observed values of, say, return X_t as the average of the X_t's when return Y_t < -1.65σ_y,t, i.e., when Y_t is less than its adverse rate move. So we first record the value of the observations X_t corresponding to Y_t < -1.65σ_y,t and then find the average value of these returns.

Based on the assumption about the return distribution, we can derive forecasts of these tail points. These forecasts are known as predicted values. Again, it follows from the normality assumption that our best guess of any return is simply its expected value; in particular, the expected value of X_t | Y_t < -1.65σ_y,t. Under the assumption that returns are normally distributed, the mathematical expression for the predicted value of X_t is

(1.7)  E[X_t | Y_t < -1.65σ_y,t] = -σ_x,t · ρ_xy,t · λ(-1.65),  where λ(α) = φ(α)/Φ(α)

φ(α) is the standard normal density function evaluated at α.
Φ(α) is the standard normal cumulative distribution function evaluated at α.
ρ_xy,t is the correlation coefficient between X_t and Y_t.
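The conditional expectation in (1.7) can likewise be checked by simulation; ρ = 0.4 and σ_x = 1 below are arbitrary illustrative values:

```python
import math
import numpy as np

def lam(alpha):
    """lambda(alpha) = phi(alpha) / Phi(alpha)."""
    phi = math.exp(-0.5 * alpha * alpha) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(alpha / math.sqrt(2.0)))
    return phi / Phi

rho, sigma_x = 0.4, 1.0  # hypothetical correlation and volatility of X
predicted = -sigma_x * rho * lam(-1.65)  # formula (1.7)

# Monte Carlo: average X over the days when Y is in its lower tail
rng = np.random.default_rng(9)
z = rng.standard_normal((500_000, 2))
x = sigma_x * z[:, 0]
y = rho * z[:, 0] + math.sqrt(1.0 - rho**2) * z[:, 1]  # unit-variance Y, corr = rho
observed = x[y < -1.65].mean()
print(predicted, observed)
```

The observed conditional average converges to the formula's value as the sample grows, which is the comparison made in the tables that follow.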

Predicted values (in parentheses) along with the observed values are presented in the table below
for some selected time series. All values reported are in percent. Also, the data are taken from
sample periods reported above.
Y_t < -1.65σ_y,t
DEM JPY DEM JPY USD DEM JPY USD DEM JPY USD
Xt fx fx equity equity equity 3-mo. 3-mo. 3-mo. 10 z 10 z 10 z

DEM fx -1.18
(-1.13)
JPY fx -0.66 -1.33
(-0.51) (-1.21)
DEM equity 0.34 0.1 -2.09
(0.22) (0.16) (-1.95)
JPY equity 0.57 0.13 -0.7 -2.25
(0.10) (0.24) (-0.41) (-2.11)
USD equity 0.08 0.12 -0.01 0 -1.34
(-0.01) (0.10) (-0.17) (-0.12) (-1.13)
DEM 3-mo. -0.31 -0.18 0.2 0.18 0.26 -1.58
(-0.18) (-0.05) (0.29) (0.16) (-0.01) (-1.28)
JPY 3-mo. 0.12 0.22 0.01 -0.37 -0.28 0.01 -2.65
(0.01) (0.02) (0.08) (-0.07) (-0.16) (-0.16) (-2.06)
USD 3-mo. 0.35 -0.12 0.21 0.03 0.95 0.2 0.07 -1.86
(0.12) (-0.07) (0.08) (0.18) (0.30) (-0.19) (0.07) (-1.65)
DEM 10z -0.22 -0.11 0.97 0.08 0.24 0.01 -0.04 0.05 -1.51
(-0.25) (-0.16) (0.81) (0.28) (0.15) (-0.5) (0.04) (-0.04) (-1.51)
JPY 10z 0.22 0.33 0.5 -0.51 -0.39 -0.15 -0.62 0.17 -0.08 -1.53
(0.08) (0.19) (0.16) (0.25) (-0.07) (-0.19) (-0.62) (0.06) (-0.54) (-1.56)
USD 10z 0.19 -0.19 -0.01 -0.06 1.05 0.02 0.12 -0.86 0.07 0.08 -1.83
(0.20) (-0.15) (0.04) (0.20) (1.04) (0.03) (0.02) (-0.56) (-0.15) (0.02) (-1.76)

Let’s consider two examples. In the first case, the predicted average value of the 1-day return for
the DEM 3-month money market rate when the 1-day JPY equity return is in its lower tail is
0.16%. In reality, we observe an average value of 0.18%. Also, while the predicted average
value of the 1-day return for the DEM 10-year zero is -0.25% when the DEM fx return is in its
lower tail, we observe an average value of -0.22%.

In order to gauge the accuracy of these results, we also compute the average value and standard
deviation of the differences between the observed and predicted values. The table below presents
the average difference between the observed and predicted values along with their standard
deviation (in parentheses). All reported values are in percent.

Y_t < -1.65σ_y,t
DEM JPY DEM JPY USD DEM JPY USD DEM JPY USD
Xt fx fx equity equity equity 3-mo. 3-mo. 3-mo. 10 z 10 z 10 z
DEM fx -0.05
(0.18)
JPY fx -0.16 -0.12
(0.61) (0.50)
DEM equity 0.12 -0.07 -0.14
(1.33) (0.83) (0.55)
JPY equity 0.47 -0.11 -0.3 -0.15
(2.22) (0.83) (1.21) (0.47)
USD equity 0.09 0.03 0.16 0.13 -0.21
(0.52) (0.61) (0.43) (0.51) (0.36)
DEM 3-mo. -0.13 -0.13 -0.09 0.02 0.26 -0.3
(1.06) (0.61) (1.04) (0.75) (0.68) (0.71)
JPY 3-mo. 0.11 0.2 -0.07 -0.3 -0.13 0.17 -0.6
(0.82) (1.56) (0.87) (0.95) (1.13) (1.03) (0.94)
USD 3-mo. 0.23 -0.05 0.13 -0.15 0.65 0.39 0 -0.22
(1.03) (0.96) (0.98) (0.62) (1.14) (0.90) (1.10) (0.33)
DEM 10z 0.03 0.05 0.17 -0.19 0.08 0.51 -0.08 0.1 0
(0.65) (0.67) (1.42) (0.93) (0.95) (1.05) (1.38) (0.64) (0.36)
JPY 10z 0.14 0.15 0.34 -0.26 -0.32 0.05 0 0.11 0.46 0.02
(0.96) (1.02) (0.99) (1.19) (0.93) (0.94) (1.29) (1.00) (1.26) (0.21)
USD 10z -0.01 -0.04 -0.06 -0.26 0.01 -0.01 0.1 -0.3 0.22 0.06 -0.07
(0.93) (1.11) (0.88) (0.79) (0.75) (0.90) (0.89) (0.88) (0.89) (0.76) (0.44)

The table shows that the average differences between the observed and predicted values, along with their standard deviations, are quite small.

Conclusion
Since the launch of RiskMetrics there has been much discussion about the usefulness of the
underlying assumption of normality for return series. This response has attempted to quantify
this discussion so that we can compare directly the predictions made by the Normal model to
what we observe. We conclude:
• When the focus is on individual returns, the observed tail frequencies and points match up
quite well to their predictions from the Normal model.
• In the bivariate case, with the exception of money market rates, the Normal model’s predic-
tions of frequencies and tail points are similar to what is observed. The fact that the Normal
model does not predict money market rates well is not surprising since these rates are often
subject to discretionary changes.

2. Stability: How stable are the RiskMetrics volatility and correlation estimates?

As inputs to risk management models, it is important that volatilities and correlations are pre-
dictable and that forecasts of them incorporate the most useful information available. Moreover,
since these forecasts are based on historical data, it is important that the estimators are flexible
enough to account for changing market conditions. In fact, if market conditions are such that
volatilities and correlations move constantly in an unpredictable and erratic manner, then it will
be very difficult to provide good forecasts.

In RiskMetrics, forecasts of volatility and correlation are based on an exponential weighting


scheme. The underlying intuition behind the exponential estimator is that when forecasting over
short periods of time, more recent data hold potentially more useful information than more
distant data. One accounts for this by placing relatively greater weight on the more recent data.
The exact weight is given by the decay factor, or λ. Simply put, λ is chosen to minimize the
error between “observed” volatility and its forecast over some sample period.

To address the issue of stability, we organize this discussion into three sections:
• In the first section, we study the exponential estimator under different regimes or market
conditions. We are interested in learning how well the exponentially weighted estimator
performs when different volatility models are used to generate return distributions. We
establish that the exponential estimator, at the very least, captures the general movements of
volatility for different scenarios.
• The second section analyzes the overall stability of volatility by testing for change points in the variance of return series. That is, we compute the number of times the variance changes in a return series. The results show that the number of variance changes is small relative to the sample period. Therefore, we conclude that it is not necessary to continuously reestimate and update the decay factor.
• Finally, we examine how correlations are time-varying.

As shown in the RiskMetrics – Technical Document, the forecast from the exponential estimator for the variance of returns over the next day is given by

(2.1)  σ_t² = 0.94 · σ²_{t-1} + 0.94 · (1 - 0.94) · (X_t - x̄_{t-1})²

where X_t is the day-t return and x̄_{t-1} is the estimated mean return. We are interested in studying the robustness of this estimator.
That is, we want to know how this estimator performs under a variety of market conditions. To
simulate different market conditions we use different models of volatility to generate return
series. Specifically we carry out the following steps:
• First, daily returns are generated according to a specific volatility model (e.g. GARCH(1,1)).
• Then, based on these daily returns, equation (2.1) is used to make 500 volatility forecasts.
• Volatility forecasts from the model which generated the return series are then compared with
volatility predicted by equation (2.1).1

To conclude that the exponential estimator performs well, we should expect the volatility
forecast generated from the exponential estimator to track the volatility from the underlying
1 Note that the exponential estimator is not a model of returns. In other words, we cannot use it to produce a return
series. Rather, it takes any return series as given, and provides forecasts independent of assumptions of the return
generating process. In this sense, it is more flexible than the volatility models used to generate returns.

model. We choose three popular volatility models to generate returns. These models generate distributions with fat tails, a feature often associated with asset return series. They are: GARCH(1,1) with normal disturbances, GARCH(1,1) with t-distributed disturbances, and a first-order autoregressive stochastic volatility model.

We parameterize each model according to previous results found using daily return data for the British pound. Parameter values for the GARCH(1,1) with normal errors and the stochastic volatility model are taken from Ruiz (1993).2 Parameter estimates for the GARCH(1,1) with t-distributed disturbances are taken from Bollerslev (1987).3

GARCH(1,1) v. the exponential estimator

When discussing forecasts of volatility, a very popular volatility model is the GARCH(1,1) with normal disturbances. If X_t represents the day-t return, then the return generating process for this volatility model is given by

(2.2)  X_t = ε_t σ_t,  ε_t is NID(0,1)
       σ_t² = 0.0147 + 0.828 X²_{t-1} + 0.8811 σ²_{t-1}

NID refers to normally and independently distributed. Chart 4 shows an example of variance forecasts produced by this model and the exponential estimator (2.1).

Chart 4
GARCH(1,1) – normal error versus exponential
GBP parameters, variance
[Simulated variance paths: the exponential estimator closely tracks the GARCH(1,1) variance.]
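To see the tracking behavior directly, one can generate returns from a GARCH(1,1) process and run the exponential estimator over them. The parameters below are illustrative (chosen to be stationary), not the GBP estimates quoted above:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000
omega, alpha, beta = 0.05, 0.08, 0.90  # illustrative stationary GARCH(1,1) parameters

# simulate GARCH(1,1) returns with normal disturbances
sig2_garch = np.empty(n)
x = np.empty(n)
sig2_garch[0] = omega / (1.0 - alpha - beta)  # unconditional variance
for t in range(n):
    if t > 0:
        sig2_garch[t] = omega + alpha * x[t - 1] ** 2 + beta * sig2_garch[t - 1]
    x[t] = rng.standard_normal() * np.sqrt(sig2_garch[t])

# exponential (EWMA) forecasts of the same series, decay 0.94
lam = 0.94
sig2_ewma = np.empty(n)
sig2_ewma[0] = sig2_garch[0]
for t in range(1, n):
    sig2_ewma[t] = lam * sig2_ewma[t - 1] + (1 - lam) * x[t - 1] ** 2

corr = np.corrcoef(sig2_garch[100:], sig2_ewma[100:])[0, 1]
print(corr)
```

A high correlation between the two variance paths is the numerical counterpart of the visual tracking shown in Chart 4.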

Unsurprisingly, the exponential model closely mimics the dynamics produced by this GARCH model. Next, we extend this model to allow for a different, potentially more appealing disturbance process. Instead of simulating ε_t from a normal distribution, it is generated from a t-distribution. In essence, the t-distribution has fatter tails than the corresponding normal distribution. This model is represented as

(2.3)  X_t = ε_t σ_t,  ε_t is t-distributed with ν = 1/0.123 ≈ 8.1 degrees of freedom
       σ_t² = 0.96 × 10⁻⁶ + 0.057 X²_{t-1} + 0.921 σ²_{t-1}

An example of the simulated variance forecasts for both series is shown below.

2 Ruiz, Esther (1993). "Stochastic volatility versus autoregressive conditional heteroscedasticity." Working Paper 93-44, Universidad Carlos III de Madrid.
3 Bollerslev, Tim (1987). "A conditional heteroskedastic time series model for speculative prices and rates of return." Review of Economics and Statistics, 69, 542-547.

Chart 5
GARCH(1,1) – Student t versus exponential
GBP parameters, variance
[Simulated variance paths from the t-distributed GARCH(1,1) model and the exponential estimator.]

For this series as well, the exponential model given by (2.1) captures the movements produced
by the underlying model.

Stochastic Volatility (ARV(1)) v. the exponential estimator

Next, we consider a different, somewhat more recent volatility model known as a first-order autoregressive stochastic volatility, or ARV(1), model. Under this specification, returns are generated according to

(2.4)  X_t = ε_t σ_t,  ε_t is NID(0,1)
       log(σ_t²) = -0.0442 + 0.9515 log(σ²_{t-1}) + η_t,  η_t is NID(0, 0.00254)

An example of a resulting time series of variance produced by the ARV(1) model and the exponential estimator is presented next.

Chart 6
Stochastic volatility versus exponential
ARV(1), variance
[Simulated variance paths from the ARV(1) model and the exponential estimator.]

Note that under an ARV(1) regime, the exponential model produces volatility forecasts that are
dampened and somewhat lagged versions of the ARV(1) model.
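The ARV(1) generator in (2.4) can be simulated directly; the resulting returns can then be fed to the exponential estimator exactly as in the GARCH case:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
a, b, q = -0.0442, 0.9515, 0.00254  # ARV(1) parameters quoted in (2.4)

# AR(1) process for the log-variance
logv = np.empty(n)
logv[0] = a / (1.0 - b)  # stationary mean of the log-variance
for t in range(1, n):
    logv[t] = a + b * logv[t - 1] + rng.normal(0.0, np.sqrt(q))

# returns driven by the stochastic volatility
x = rng.standard_normal(n) * np.exp(0.5 * logv)
print(np.mean(logv))
```

Note that, unlike GARCH, the variance here is driven by its own shock η_t rather than by past squared returns, which is why an estimator filtering only squared returns tends to lag it.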

We conclude this section with the following observations:

• Overall, the examples illustrate that the exponential estimator works reasonably well. Under three different specifications, the exponential model with a decay factor of 0.94 produces variance estimates which capture most features of the volatility models.
• In practice, the exponential model is much simpler to implement than any of the three volatility models used to generate the returns.

The preceding models of volatility all account for serial dependence in volatility patterns. That is, periods of high and low volatility appear in clusters. For the exponentially weighted estimator, the responsiveness of volatility estimates is directly linked to the value of the decay factor λ. Consequently, by changing the decay factor, the dynamic structure of the exponential forecasts would also change. The effect of different values of the decay factor is shown in the graph below. Over the one-year period January 1994 - January 1995, two different decay factors are used to compute daily volatilities for the Japanese foreign exchange series.

Chart 7
JPY/$ fx volatility
January 1994 - January 1995
[Daily volatility estimates computed with decay factors of 0.92 and 0.96; the two series move closely together.]

We see that the volatility dynamics produced by decay factors of 0.92 and 0.96 are not markedly different. Unless there are very large changes in the level and dynamics of volatility, we should not expect to get very different results. This finding motivates a test to determine how often a time series undergoes significant changes in variance. Knowing the number of change points of variance in the past can offer some information concerning the updating process for the decay factor. In particular, if we find evidence that a time series is very unstable, i.e., its variance changes often, then we would want to reestimate the decay factor frequently to account for these changes. On the other hand, if there are relatively few variance changes, then it would not be necessary to update the decay factor as often.

For various return series, we employ a recently developed iterated cumulative sum of squares
(ICSS) algorithm4 to study the detection of multiple changes of variance. Our main interest is to
study the variance of a given sequence of observations retrospectively, so that all information on
the series can be used to indicate breaks in variance. The algorithm computes not only the
number of variance changes but also the time that they occur.

4 Inclan, Carla and George C. Tiao (September 1994). "Use of Cumulative Sums of Squares for Retrospective Detection of Changes of Variance." Journal of the American Statistical Association, pp. 913-923. Briefly, unlike previous methods such as a Bayesian approach or likelihood ratio tests, the ICSS algorithm does not carry a heavy computational burden.
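As a sketch of the first pass of the ICSS procedure, the centered cumulative sum of squares D_k = C_k/C_T - k/T flags a variance break where |D_k| peaks; the full algorithm applies this test iteratively to sub-segments. The 1.358 critical value used below is the approximate 95% point of the asymptotic distribution of sqrt(T/2)·max|D_k| reported by Inclan and Tiao:

```python
import numpy as np

def variance_change_point(x, crit=1.358):
    """Single change-point check via the centered cumulative sum of squares.
    Returns (index of the most likely variance break or None, test statistic)."""
    x = np.asarray(x, dtype=float)
    T = len(x)
    C = np.cumsum(x**2)                # cumulative sum of squares C_k
    k = np.arange(1, T + 1)
    D = C / C[-1] - k / T              # centered statistic D_k
    j = int(np.argmax(np.abs(D)))
    stat = np.sqrt(T / 2.0) * abs(D[j])
    return (j if stat > crit else None), stat

rng = np.random.default_rng(11)
# synthetic series whose standard deviation doubles halfway through
x = np.concatenate([rng.standard_normal(500), 2.0 * rng.standard_normal(500)])
idx, stat = variance_change_point(x)
print(idx, stat)
```

On this synthetic series the statistic is far above the critical value and the flagged index falls near the true break at observation 500.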

We applied this test to the 11 price series listed in the table below, which reports the results.

                    Number of          Sample period
Price/rate series   variance changes   (daily returns)

Foreign exchange
DEM/$               2                  Jan 88 - Jan 95
JPY/$               7                  Jan 87 - Jan 95
Equity
DEM                 10                 Jun 87 - Jan 95
JPY                 16                 Jan 87 - Jan 95
USD                 5                  Jan 87 - Jan 95
3-month LIBOR
DEM                 1                  Jan 93 - Jan 95
JPY                 6                  Jan 93 - Jan 95
USD                 3                  Jan 93 - Jan 95
10-year zero
DEM                 8                  Jan 90 - Jan 95
JPY                 6                  Jan 90 - Jan 95
USD                 3                  Jan 90 - Jan 95

Focusing on the equity series, we see that for a sample period of roughly eight years, there were
16 change points for the Japanese equity, 10 for the German series, and 5 for the U.S.

Since the results show that breaks are relatively infrequent over a sample period, it is not clear that reestimating the decay factor on a continuous (daily) basis will necessarily improve the exponential estimator's forecasting ability. Rather, it is probably more worthwhile to reestimate the decay factor, say, every six months or after an event (e.g., the EMS crisis) which affects a return series' statistical properties.

Finally, we turn our attention to correlation estimates. To this point the focus has been on volatilities. However, stability is also an issue for correlations. It is a widely accepted fact that correlation is time-varying. Recall that the correlation between random variables X and Y, ρ_xy, is a function of three components: the covariance σ²_xy and the two standard deviations σ_x and σ_y. Having estimated each of the three components separately, the correlation is calculated as

ρ_xy = σ²_xy / (σ_x · σ_y)

Consequently, we can analyze the stability of correlations by focusing on each of its components separately. For example, if we were to estimate the correlation between the DEM and USD 10-year zero returns, then we could decompose the historical correlation into σ²_xy, σ_x, and σ_y as shown in Chart 8.
page 15 Five questions about RiskMetrics

Chart 8
Daily correlation and daily covariance, DEM versus USD 10-year zero;
daily standard deviations, DEM and USD 10-year zeros
[Four panels of daily estimates, January 1990 - January 1995.]

As these graphs show, the covariance as well as the standard deviations are time-varying.
Therefore, each can be studied separately to assess the correlation’s stability.
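The decomposition above can be sketched in a few lines. The recursive, exponentially weighted updates and the RiskMetrics daily decay factor λ = 0.94 come from the text; the two return series below are made-up illustrative numbers, and the estimators are taken in their zero-mean form.

```python
import math

LAM = 0.94  # RiskMetrics decay factor for daily estimates

def ewma_corr(x_returns, y_returns, lam=LAM):
    """Recursively update the covariance and both variances with zero-mean
    exponentially weighted estimators, then combine them into a correlation."""
    cov = var_x = var_y = 0.0
    for x, y in zip(x_returns, y_returns):
        cov = lam * cov + (1 - lam) * x * y
        var_x = lam * var_x + (1 - lam) * x * x
        var_y = lam * var_y + (1 - lam) * y * y
    return cov / math.sqrt(var_x * var_y)

# made-up, positively related daily percent returns
xs = [0.5, -0.2, 0.3, 0.1, -0.4]
ys = [0.4, -0.1, 0.2, 0.2, -0.3]
rho = ewma_corr(xs, ys)
```

Because the same weights enter the covariance and both variances, the resulting ρ always lies between −1 and 1, so its stability can be traced back to each component.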

3. Mean: Given the relatively short history used by RiskMetrics volatility estimates and the possible impact
of short-term trends on these estimates, shouldn’t volatility be measured as the difference from 0 versus the
current deviation from the sample mean?

In the RiskMetrics – Technical Document, we describe how a standard deviation measures the
dispersion of observations around its mean. However, in practice, it may be difficult to obtain a
good estimate of the mean. This leads some to argue that volatility should be measured around
zero rather than the sample mean.5 In fact, assuming a conditional zero mean of returns is
consistent with financial theory, in particular, the efficient market hypothesis.

Since different measures of volatility may give us different estimates, we study the difference
between results given by the sample mean and zero-mean centered estimators. Specifically, we
compare 1-day and 1-month (25 business days) volatility and correlation estimates. We do this
as follows.
• First, we present two competing approaches to forecast volatility known as the estimated mean
and zero-mean volatility estimators and show their differences algebraically.
• Subsequently, we present the results from a Monte Carlo experiment which estimates the
relative difference between the two volatility forecasts. Graphs of daily correlations using both
conditions then follow.
• Finally, similar analysis is performed for volatility and correlations with a one-month horizon.

Consider the case of 1-day volatility forecasts. The first measure of volatility is what we refer to
as the estimated mean estimator. Its mathematical expression is given by

(3.1)  σ̂_t² = λσ̂_{t−1}² + λ(1 − λ)(X_t − x̄_{t−1})²

where X_t is the percent change return and x̄_{t−1} is an exponentially weighted mean. A complete
derivation of (3.1) is given on page 59 of the RiskMetrics – Technical Document. Next, we
define the zero-mean estimator as

(3.2)  σ̃_t² = λσ̃_{t−1}² + (1 − λ)X_t²

Note that (3.2) is not simply (3.1) with the mean set to zero. Instead, (3.2) is derived from

(3.3)  σ̃_t² = (1 − λ) Σ_{i=0}^∞ λ^i X_{t−i}²
            = (1 − λ)[X_t² + λX_{t−1}² + λ²X_{t−2}² + …]
            = (1 − λ)X_t² + λ{(1 − λ)[X_{t−1}² + λX_{t−2}² + λ²X_{t−3}² + …]}
            = (1 − λ)X_t² + λσ̃_{t−1}²
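The identity in (3.3) is easy to verify numerically: starting the recursion (3.2) at zero reproduces the finite exponentially weighted sum exactly. The return series below is made up for illustration.

```python
def sigma2_recursive(returns, lam=0.94):
    """Zero-mean estimator, eq (3.2), with the recursion started at zero."""
    s2 = 0.0
    for x in returns:
        s2 = lam * s2 + (1 - lam) * x * x
    return s2

def sigma2_weighted_sum(returns, lam=0.94):
    """Truncated version of the weighted sum in eq (3.3):
    (1 - lam) * sum_i lam**i * X_{t-i}**2, most recent return weighted most."""
    return (1 - lam) * sum(lam ** i * x * x
                           for i, x in enumerate(reversed(returns)))

xs = [0.3, -0.5, 0.2, 0.4, -0.1]  # made-up daily percent returns
diff = abs(sigma2_recursive(xs) - sigma2_weighted_sum(xs))
```

The two routines agree to machine precision, which is exactly what the chain of equalities in (3.3) asserts.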

We want to compare the estimates produced from (3.1) and (3.2). One way to do so is to see how
much their forecasts differ at any time t. Algebraically, we can investigate the difference

5 See, for example, Stephen Figlewski. (1994). “Forecasting Volatility Using Historical Data.” New York University
Working paper, S-94-13.

between σ̃_t² and σ̂_t². Let δ_t be the arithmetic difference between the zero-mean and estimated
mean estimators when the sample means are set to zero. That is,

(3.4)  δ_t = σ̃_t² − σ̂_t²   with x̄_i = 0 for i = t, t−1, t−2, …

It follows that

δ_t = (1 − λ)² X_t²

For 1-day volatility forecasts λ = 0.94. Therefore,

δ_t = 0.0036 · X_t²

Hence, if the sample means are zero and the percent return X_t is sufficiently small, δ_t is
negligible and we should not expect significant deviations between estimates produced by the two
estimators. Furthermore, when deviations do occur, the zero-mean estimates will be larger than
the estimated mean estimates, i.e., δ_t ≥ 0.
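A quick numerical check of this one-step difference: setting both sample means to zero and starting the two recursions from the same prior variance yields δ_t = (1 − λ)²X_t² exactly. The prior variance and return below are arbitrary illustrative inputs.

```python
LAM = 0.94  # daily decay factor

def one_step(prev_var, x, mean=0.0, lam=LAM):
    """One update of each estimator from a common prior variance."""
    est_mean = lam * prev_var + lam * (1 - lam) * (x - mean) ** 2  # eq (3.1)
    zero_mean = lam * prev_var + (1 - lam) * x ** 2                # eq (3.2)
    return est_mean, zero_mean

prev_var, x = 1.0, 0.8          # illustrative inputs
hat, tilde = one_step(prev_var, x)
delta = tilde - hat             # eq (3.4) with the mean set to zero
```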

The two charts below offer a graphical depiction of the difference between the two 1-day
volatility estimates. In the first chart we see that for the DEM 10-year zero rates, volatility
estimates differ insignificantly over the period January 1990 - January 1995.

Chart 9
Daily volatility
DEM 10-year zero, volatility (%)
[Estimated mean and zero mean estimates, January 1988 - January 1995.]

Next, we use historical data to determine the difference between equations (3.1) and (3.2). The
data used in the following experiment are daily percent returns taken from the RiskMetrics
database. It consists of 11 time series including German, Japanese, and U.S. FX, equity, 3-month
LIBOR, and 10-year zero rates.

For each of the 11 time series we perform the following steps.6 For each series i, let Ti denote
the total number of observations for that series.

1 - Randomly generate an integer t where t ∈ [74, Ti]. From the sample period
    [t − 73, t], compute σ̃_t² and σ̂_t².
2 - Calculate the relative difference θ_t = (σ̃_t − σ̂_t)/σ̂_t between the two forecasts and
    record this value.
6 Recall from the RiskMetrics – Technical Document that the 1-day volatility forecasts use 74 days of historical data.

Repeat steps 1 and 2, 200 times for each time series. For each series i, let θ_t^i denote the time t
relative forecast difference. Also, let Ni represent the total number of θ_t^i (200 in this case).
Based on these definitions we can form the following statistics:

Mean relative error = (1/Ni) Σ_{t=1}^{Ni} θ_t^i

Standard deviation of θ_t^i = sqrt[ (1/Ni) Σ_{t=1}^{Ni} (θ_t^i − θ̄^i)² ]

where θ̄^i denotes the mean relative error for series i.
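The experiment can be sketched as follows. This is a simplified illustration run on synthetic Gaussian returns, and it uses a plain sample mean rather than the exponentially weighted mean of equation (3.1), so it is a sketch of the procedure rather than a reproduction of the published figures.

```python
import math
import random

def relative_diff_stats(returns, n_draws=200, window=74, lam=0.94, seed=7):
    """Steps 1 and 2, repeated n_draws times: draw a random 74-day window,
    run both volatility recursions over it, and record the relative
    difference between the resulting volatility forecasts."""
    rng = random.Random(seed)
    thetas = []
    for _ in range(n_draws):
        t = rng.randint(window, len(returns))
        xs = returns[t - window:t]
        mean = sum(xs) / len(xs)  # simple mean, not the exponentially weighted one
        var_hat = var_tilde = 0.0
        for x in xs:
            var_hat = lam * var_hat + lam * (1 - lam) * (x - mean) ** 2
            var_tilde = lam * var_tilde + (1 - lam) * x * x
        thetas.append((math.sqrt(var_tilde) - math.sqrt(var_hat))
                      / math.sqrt(var_hat))
    m = sum(thetas) / len(thetas)
    sd = math.sqrt(sum((th - m) ** 2 for th in thetas) / len(thetas))
    return m, sd

# synthetic stand-in for one RiskMetrics return series
gen = random.Random(0)
series = [gen.gauss(0.0, 1.0) for _ in range(1000)]
mean_err, sd_err = relative_diff_stats(series)
```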

The table below presents mean relative error and standard deviation based on this experiment.

Price/rate series     Mean relative error, %    Standard deviation of θ_t^i, %    Ni

Foreign exchange
  DEM                 3.58                      2.35                              200
  JPY                 2.98                      2.08                              200
Equity
  DEM                 3.13                      2.22                              200
  JPY                 3.54                      3.28                              200
  USD                 2.91                      2.14                              200
3-month LIBOR
  DEM                 4.35                      3.99                              200
  JPY                 3.13                      3.11                              200
  USD                 3.87                      3.04                              200
10-year zero
  DEM                 3.57                      2.33                              200
  JPY                 3.72                      3.10                              200
  USD                 3.07                      2.62                              200

Note from the table that while the mean relative error across the 11 time series ranges from a
low of 2.91% to a high of 4.35%, the range for the standard deviation of θ_t^i is 2.08% to 3.99%.
Even though taking the absolute value of the error accentuates any differences between the two
estimates, we conclude that these relative differences are quite small.

For completeness, it is also possible to investigate the differences in the 1-day forecasted
correlations. Since we expect that most of the variation in the correlation is due to the underlying
standard deviations, we can infer that most of the differences in the correlation estimates will
reflect the volatility differences.

The two graphs below show the historical daily correlation between the DEM 10-year zero and
equity. As expected, for both the January 1990 - January 1995 and the one-year period January
1994 - January 1995, deviations are quite small.

Chart 10
Daily correlation
DEM 10-year zero versus equity, correlation
[Estimated mean and zero mean estimates, January 1990 - January 1995.]

Next, we extend the analysis of the difference between the estimated mean and zero mean
estimators to 1-month horizons. Unlike the 1-day volatility estimator, in this case the zero mean
volatility estimator is simply the estimated mean estimator with the respective mean set to zero.
To see this, we begin with the definition of a 1-month return X_M, which is a function of 25 daily
returns X_i:

(3.5)  1 + X_M = Π_{i=1}^{25} (1 + X_i)

As shown in the RiskMetrics – Technical Document, the variance of the monthly return is

(3.6)  σ²_{X_M} = [(1 + µ)² + σ²]^25 − (1 + µ)^50

where µ and σ² are the mean and variance of the daily return. Now if we set the mean equal to
zero (µ = 0) we get the following result:

(3.7)  σ²_{X_M} = (1 + σ²)^25 − 1
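Equations (3.6) and (3.7) are straightforward to code; the snippet below checks that setting µ = 0 in (3.6) collapses it to (3.7). The 1% daily standard deviation is an arbitrary illustrative input.

```python
def monthly_var(daily_var, daily_mean=0.0, days=25):
    """Eq (3.6): variance of the 25-day compounded return; with
    daily_mean = 0 it collapses to eq (3.7)."""
    m, s2 = daily_mean, daily_var
    return ((1 + m) ** 2 + s2) ** days - (1 + m) ** (2 * days)

s2 = 0.0001  # a 1% daily standard deviation, squared (illustrative)
zero_mean_var = monthly_var(s2)
```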

We then exponentially weight these variances to get the 1-month volatility forecasts. As the
following graph shows, for the USD 3-month LIBOR, the historical volatility estimates from the
two estimators are quite similar. Note that since the original point estimates are exponentially
smoothed, any difference is simpler to visualize. The same can be said about the 1-month
correlation between the DEM 10-year zero and equity.

Chart 11
Historical monthly volatility
USD 3-month LIBOR, volatility (%)
[Estimated mean and zero mean estimates, January 1993 - January 1995.]

Chart 12
Historical monthly correlations
DEM 10-year zero versus DEM equity, correlation
[Zero mean and estimated mean estimates, January 1990 - January 1995.]

Also, to quantify these differences we carry out an experiment similar to the one performed for
the 1-day volatilities. However, unlike the previous example where we use random samples, here,
for simplicity, the sample statistics are computed directly from the time series of volatility
estimates.7 The table below presents the results from this experiment.

7 We note that volatility estimates which are serially correlated produce lower standard deviations.

Price/rate series     Mean relative error, %    Standard deviation of error, %    Ni

Foreign exchange
  DEM                 1.36                      0.93                              1658
  JPY                 1.15                      0.73                              1919
Equity
  DEM                 2.45                      2.20                              1811
  JPY                 2.57                      2.06                              1919
  USD                 1.89                      1.85                              1919
3-month LIBOR
  DEM                 5.04                      2.22                              352
  JPY                 5.11                      3.99                              343
  USD                 3.09                      1.32                              346
10-year zero
  DEM                 1.79                      1.17                              1136
  JPY                 2.04                      1.58                              1134
  USD                 1.48                      1.05                              1136

These results demonstrate the relatively large error for money market rates compared to other
series. This is most likely due to the large changes in the short-term rates. Overall, the mean
relative errors still appear to be small.

We conclude this discussion with the following observations.


• At the 1-day forecast horizon, estimates from the zero-mean and estimated mean estimators do
not differ significantly.
• At the 1-month forecast horizon, deviations between the two estimates appear mostly in the
money market rates. Nonetheless, the differences between the two estimates still seem relatively
small.
• The zero mean estimator is a viable alternative to the estimated mean estimator. It is simpler to
compute, and is not sensitive to short-term trends which could bias forecasts.

4. Log or percent changes: RiskMetrics estimates use the distribution of percentage changes in market
values to estimate volatility. Shouldn’t you use the distribution of log changes instead?

All statistics produced by RiskMetrics are based on change returns. The change return X_t is
defined as (P_t − P_{t−1})/P_{t−1}, where P_t represents the time t price of a series. An alternative way of
computing returns is based on logarithms of prices. Returns based on logs are also known as
compound returns and are defined as Y_t = log(P_t/P_{t−1}). Our decision to use change returns rather
than compound returns rests on two considerations:
• In the financial industry, most measures of return and indices use change returns.
• RiskMetrics initially computes yield volatility from which it produces price volatility. A
more natural interpretation of a standard formula8 based on Macaulay duration which relates
yield volatility to price volatility involves change returns.

On the other hand, some researchers prefer to use compound returns. There are two principal
reasons for preferring compound returns. First, continuous time generalizations of discrete time
results are then easier and second, returns over more than one day are simple functions of a
single-day return.9,10

From a practical perspective, we demonstrate below that the RiskMetrics methodology which
uses change returns could easily be adapted to incorporate logarithmic returns without producing
large differences in its results.
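The two return definitions differ only in the final transformation, and for small daily moves log(1 + x) ≈ x keeps them close; the prices below are illustrative.

```python
import math

def change_return(p_prev, p_now):
    """X_t = (P_t - P_{t-1}) / P_{t-1}"""
    return (p_now - p_prev) / p_prev

def compound_return(p_prev, p_now):
    """Y_t = log(P_t / P_{t-1})"""
    return math.log(p_now / p_prev)

# illustrative prices: a 1% move, where log(1 + x) is close to x
x = change_return(100.0, 101.0)
y = compound_return(100.0, 101.0)
```

For a 1% move the two returns differ by roughly five basis points of one percent, which is why swapping one for the other barely moves the volatility estimates below.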

A first step in comparing change and compound returns is to plot their distributions. This could
be done by comparing histograms generated from change and log returns. However, when
distributions are similar, it is difficult to visually discern any differences between them using
histograms. An alternative approach to get around this problem would be to compare kernel
estimates11 of the probability density function for both change and compound returns. This is
shown in the graph below for the USD 3-month LIBOR for the period January 1990 - January 1995.

Chart 13
USD 3-month LIBOR, in percent
[Kernel density estimates of the log and change return distributions, horizontal axis from -0.06 to 0.06.]

8 See, for example, (1991). The Handbook of Fixed Income Securities, ed. Frank Fabozzi. Irwin Inc. NY, NY, p. 127.
9 See, for example, Taylor, S.J. (1986). Modelling Financial Time Series, John Wiley, Chichester, U.K. p. 13.
10 A third reason has to do with the computation of cross currencies. As shown in the RiskMetrics – Technical
Document p. 81, cross rate returns, volatility, and correlation are derived from log returns.
11 In contrast to the histogram of the data, this approach spreads the frequency represented by each observation along
the horizontal axis according to a chosen distribution function, or "kernel", which here is chosen to be the normal
distribution.

The similarity of returns is shown by the way one estimated density overlays the other. Note that
in computing the above density, we chose a U.S. money market rate because its return
distribution is more likely to deviate from normality as well as contain some outliers. Therefore,
it would accentuate any differences between the two returns.
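A minimal Gaussian-kernel density estimator in the spirit of footnote 11 might look as follows; the data, grid, and bandwidth are made-up illustrative choices, not those used for Chart 13.

```python
import math

def kernel_density(data, grid, bandwidth):
    """Spread each observation's frequency along the horizontal axis with a
    normal kernel, as footnote 11 describes, and sum the contributions."""
    norm = 1.0 / (len(data) * bandwidth * math.sqrt(2.0 * math.pi))
    return [norm * sum(math.exp(-0.5 * ((g - x) / bandwidth) ** 2)
                       for x in data)
            for g in grid]

data = [-0.02, -0.01, 0.0, 0.0, 0.01, 0.03]    # made-up returns
grid = [-0.06 + 0.002 * i for i in range(61)]  # axis from -0.06 to 0.06
dens = kernel_density(data, grid, bandwidth=0.01)
```

Unlike a histogram, the resulting curve is smooth, so two nearly identical distributions can be overlaid and compared point by point.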

Having established a means of comparing the return distributions, next we compare daily
volatility estimates for both types of returns. Recall that RiskMetrics computes 1-day
volatility estimates based on an exponential weighting scheme. To compare the volatility
estimates we simply use the current methodology to compute the volatility for change returns
and then replace the change returns with log returns to get another estimate of volatility. As the
following graph shows, the 1-day volatility estimates are very similar. Also, in a manner
analogous to the calculation of volatilities, we compute two types of correlation estimates – one
for change returns and one based on log returns.

Chart 14
Daily volatility
JPY 10-year zero, volatility (%)
[Change return and log return estimates, January 1994 - January 1995.]

Chart 15
Daily correlations
DEM 10-year zero versus U.S. 10-year zero
[Change return and log return estimates, January 1994 - January 1995.]

To check whether this analysis holds across most series, we perform the following experiment,
which is identical to the one carried out in a previous question. For each of the 11 time series we
perform the following steps.

1 - Randomly generate an integer t where t ∈ [74, Ti], where Ti denotes the total number of
    observations for the ith series. From the sample period [t − 73, t], compute σ̃_t² and σ̂_t²,
    where σ̃_t² is calculated from log returns and σ̂_t² is based on change returns.
2 - Calculate the relative difference θ_t = (σ̃_t − σ̂_t)/σ̂_t between the two forecasts and
    record this value.

Repeat steps 1 and 2, 200 times for each time series. For series i, let θ_t^i denote the time t relative
forecast difference. Also, let Ni represent the total number of θ_t^i (200 in this case). Based on
these definitions we can form the following statistics:

Mean relative error = (1/Ni) Σ_{t=1}^{Ni} θ_t^i

Standard deviation of θ_t^i = sqrt[ (1/Ni) Σ_{t=1}^{Ni} (θ_t^i − θ̄^i)² ]

where θ̄^i denotes the mean relative error for series i.

The results from the Monte Carlo experiment are presented in the table below.

Price/rate series     Mean relative error, %    Standard deviation of error, %    Ni

Foreign exchange
  DEM                 0.14                      0.11                              200
  JPY                 0.15                      0.12                              200
Equity
  DEM                 0.36                      0.63                              200
  JPY                 0.39                      0.55                              200
  USD                 0.35                      0.90                              200
3-month LIBOR
  DEM                 0.30                      0.20                              200
  JPY                 0.40                      0.37                              200
  USD                 0.53                      0.45                              200
10-year zero
  DEM                 0.15                      0.16                              200
  JPY                 0.20                      0.19                              200
  USD                 0.15                      0.11                              200

As can be seen from the table, the volatility forecasts are very similar.

In the case of daily volatility and correlation forecasts, all that was required was to change the
inputs to the volatility and correlation estimators. In other words, to estimate volatility for log
returns one simply uses log returns in place of change returns. For monthly estimates, the
assumption of log returns alters the estimator ultimately used. This is because we construct
monthly returns from daily returns. We define the monthly change return as

(4.1)  X_M = Π_{i=1}^{25} (1 + X_i) − 1

Based on this expression, and imposing the zero-mean condition, we arrive at the estimator

(4.2)  σ̂²_{X_M} = (1 + σ_d²)^25 − 1

where σ_d² is the variance based on 25 daily returns. Now, if we use log returns then we can
express the monthly log return as

        1 + X_M = Π_{i=1}^{25} (1 + X_i)
(4.3)   log(1 + X_M) = Σ_{i=1}^{25} log(1 + X_i)
        Y_M = Σ_{i=1}^{25} Y_i

It then follows directly from the assumption of independent and identically distributed returns
that

(4.4)  σ̃²_{X_M} = 25 · σ_d²

which yields the square root of time rule for the standard deviation.12 To compare the volatility
forecasts given by (4.2) and (4.4) we use the historical monthly volatility estimates to compute
the mean relative error, i.e., (σ̃²_{X_M} − σ̂²_{X_M})/σ̂²_{X_M}, and the standard deviation of error.
Results are presented in the following table.

Price/rate series     Mean relative error, %    Standard deviation of error, %    Ni

Foreign exchange
  DEM                 0.11                      0.08                              1658
  JPY                 0.16                      0.11                              1919
Equity
  DEM                 0.38                      0.37                              1811
  JPY                 0.42                      0.37                              1919
  USD                 0.33                      0.66                              1919
3-month LIBOR
  DEM                 0.35                      0.19                              352
  JPY                 0.48                      0.30                              343
  USD                 0.72                      0.20                              346
10-year zero
  DEM                 0.20                      0.14                              1136
  JPY                 0.21                      0.19                              1134
  USD                 0.11                      0.10                              1136

Again, since we know that the square root of time rule is a first order approximation to the vola-
tility estimator based on change returns, we should not expect large discrepancies. The effect of
using daily log returns versus changes can also be studied in the context of monthly correlations.
Note that when using log returns, the simple daily correlation is the same as the monthly correla-
tion. The following graph shows the exponentially weighted monthly standard deviations as well
as monthly correlations. Recall from the RiskMetrics – Technical Document that we define the
smoothed estimates of volatility and correlation as the 1-month forecast.

12 Note that (4.4) is just the first order term of an expansion of (4.2).

Chart 16
Monthly volatility
DEM 3-month LIBOR, volatility (%)
[Log return and change return estimates, January 1993 - January 1995.]

Chart 17
Historical monthly correlation
DEM versus USD 10-year rate, correlation
[Log return and change return estimates, January 1990 - January 1995.]

These graphs coincide with the results from the Monte Carlo experiment. Unsurprisingly, we
find little difference between the two volatility and correlation series.

The evidence presented points to the following conclusions:


• The form of the 1-day volatility estimator using relative changes is the same as that of the 1-
day volatility estimator using compound returns.
• Point estimates produced by the 1-day volatility estimators are very similar.
• The 1-month volatility and correlation estimators based on change and log returns are not
necessarily the same. However, the difference between their point estimates is negligible.

5. Cash flow allocation to vertices: The RiskMetrics mapping methodology for fixed income instruments
advocates the use of a historical variance method to allocate cash flows to standard maturity vertices. Since
the algebra of the method boils down to solving a 2nd degree equation, aren’t there some instances where
either multiple or nonexistent solutions can be derived?

Australian Bond Index cash flow map, January 1995

RiskMetrics vertices   1d 1w 1m 3m 6m 1yr 2yr 3yr 4yr 5yr 7yr 9yr 10yr 15yr
Australia - (1.287)
Zero yield             7.620 7.620 7.620 8.280 9.240 10.241 9.978 10.022 10.037 10.034 9.978 9.953 9.953 9.961
Yield volatility       14.598 15.067 13.996 10.006 10.178 9.443 9.368 8.804 8.384 8.218 8.415 8.753 8.739 9.219
Price volatility       0.003 0.022 0.088 0.203 0.449 0.877 1.700 2.406 3.059 3.747 5.344 7.131 7.910 12.526
Correlation to prior vertex   0.957 0.905 0.790 0.825 0.812 0.766 0.968 0.990 0.989 0.927 0.993 0.998 0.991
Market value           0.46 0.71 0.91 1.52 3.88 6.97 7.00 6.45 6.73 5.99 3.45 4.02 0.93
% of total             0.9% 1.4% 1.9% 3.1% 7.9% 14.2% 14.3% 13.2% 13.7% 12.2% 7.0% 8.2% 1.9%

Practically implementing risk estimation requires a synthetic description of positions,
something we refer to in the RiskMetrics – Technical Document as mapping. Maps provide a
simple table summarizing a set of fixed income positions broken down into their component
cash flows against a set of standard maturity vertices. RiskMetrics maps use the vertices for
which volatility and correlation estimates are available, i.e., 2, 3, 4, 5, 7, 9, 10, 15, 20, and 30
years for government bond zeros.

Common instruments are unlikely to have cash flows which fall exactly on the maturity vertices
used in the standard map. That is why a method is required to distribute cash flows falling
between vertices. Suppose you have an instrument which pays out a flow in six years; the
question is how to distribute the flow between the standard 5- and 7-year vertices used by the
RiskMetrics map.

A common method used to date throughout the financial industry has been to follow two
standard rules:
• The first is to maintain present value (i.e., the sum of the cash flows maturing in 5 and 7
years has to be the same as the original).
• The second is to maintain duration (the duration of the "portfolio" of 5- and 7-year cash
flows must be identical to the duration of the 6-year flow).
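For zero-coupon flows, whose Macaulay duration equals their maturity, these two rules pin down the split uniquely; the sketch below shows the resulting present value weights for the 6-year example. The function name is our own.

```python
def duration_allocation(n, n_lo, n_hi):
    """Split a single flow at maturity n between vertices n_lo and n_hi so
    that present value and duration are preserved (for zero-coupon flows,
    Macaulay duration equals maturity). Returns present value weights."""
    w_lo = (n_hi - n) / (n_hi - n_lo)
    return w_lo, 1.0 - w_lo

w5, w7 = duration_allocation(6.0, 5.0, 7.0)  # the 6-year example
```

The 6-year flow lands halfway between the vertices, so half the present value goes to each; as the text argues next, this split matches duration but not necessarily risk.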

Chart 18
Cash flow allocations and yield curve shifts
[Yield curve across 1- to 10-year maturities with RiskMetrics vertices marked, a cash flow falling between two vertices, and a yield curve shift between those vertices.]
A detailed outline of the methodology for mapping fixed income instruments can be found in the RiskMetrics –
Technical Document (Second Edition pp. 34-37).

Cash flow maps like these are similar to a barbell type trade, where an existing position is
replaced by a combination of two instruments distributed along the yield curve under the condition
that the trade remains duration neutral. Barbell trades are entered into by investors who are
duration constrained but are taking a view on a shift in the yield curve. What can be a perfectly
defensible investment strategy, however, cannot simply be applied to risk estimation. Allocating
cash flows on the basis of duration will lead to maps that do not correctly reflect the risk of a yield
curve shift between two vertices, as shown in Chart 18. Under this yield curve scenario, the
actual risk of the 6-year flow will not be identical to the risk incurred by a 5-7 year barbell.

To address this shortcoming, the RiskMetrics methodology advocates allocating cash flows to
standard maturity vertices using historical variance instead of duration. As shown in the
RiskMetrics – Technical Document, cash flows falling between standard vertices must be split
in the proportion defined by the solution to the following equations:

(5.1)  PV = PV_i + PV_{i+x}

where PV = present value = C · 1/(1 + z)^n, z = zero coupon yield, and n = maturity; thus

C_actual/(1 + z_a)^{n_a} = C_i/(1 + z_i)^i + C_{i+x}/(1 + z_{i+x})^{i+x}

(5.2)  DEaR(C_actual) = DEaR(C_i + C_{i+x})

or

σ_a² = σ_i²(PV_i/PV)² + σ_{i+x}²(PV_{i+x}/PV)² + 2ρ_{i,i+x}σ_iσ_{i+x}(PV_i/PV)(PV_{i+x}/PV)

where σ² = variance (i.e., market risk) of a cash flow and ρ_{i,i+x} = the correlation between the
daily changes of the zero rates at the two vertices.

Because equation (5.2) usually has two solutions, we impose a third condition to ensure that
each allocation to RiskMetrics vertices is positive:

(5.3)  sign(PV) = sign(PV_i) = sign(PV_{i+x})

In general, the solution to equation (5.2) reduces to the quadratic

(5.4)  aX² + bX + c = 0, solved as X = (−b − √(b² − 4ac)) / (2a)

where X is the proportion of present value allocated to vertex i (and 1 − X to vertex i+x), and

a = σ_i² + σ_{i+x}² − 2ρ_{i,i+x}σ_iσ_{i+x}
b = 2ρ_{i,i+x}σ_iσ_{i+x} − 2σ_{i+x}²
c = σ_{i+x}² − σ_a²
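A sketch of the allocation follows, using the "standard" case of the table below (cash flow volatility 0.5, vertex volatilities 0.25 and 0.75, correlation 0.85); the function name is our own.

```python
import math

def variance_allocation(sig_a, sig_i, sig_j, rho):
    """Solve eq (5.4) for X, the fraction of present value placed on
    vertex i, so the two-vertex portfolio matches the flow's volatility."""
    a = sig_i ** 2 + sig_j ** 2 - 2.0 * rho * sig_i * sig_j
    b = 2.0 * rho * sig_i * sig_j - 2.0 * sig_j ** 2
    c = sig_j ** 2 - sig_a ** 2
    x = (-b - math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return x, 1.0 - x

x, y = variance_allocation(sig_a=0.5, sig_i=0.25, sig_j=0.75, rho=0.85)
# the allocated pair reproduces the target variance 0.5**2
port_var = ((x * 0.25) ** 2 + (y * 0.75) ** 2
            + 2.0 * 0.85 * x * y * 0.25 * 0.75)
```

Both weights come out positive (roughly 47% and 53%), satisfying condition (5.3), and the two-vertex portfolio reproduces the original cash flow's variance by construction.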

Chart 19
USD zero government bond volatilities
volatility, data for January 12, 1994
[Yield volatility and price volatility across 1- to 10-year maturities, RiskMetrics vertices marked.]

Equation (5.4) yields a unique, sensible allocation under the conditions that:

• The price volatility curve is upward sloping (i.e., the price volatility of a 7-year zero is higher
than the price volatility of a 5-year zero), and
• Price volatilities are not equal to 0.

The table below shows that, theoretically, if these two conditions are not met, slight adjustments
must be made:

                          Standard       (1)        (2)        (3)        (4)
                          upward vol.    Zero       Down       Flat       Zero
                          curve, vols>0  corr.      vol. curve vol. curve vols.

Cash flow volatility      0.5000         0.5000     0.5000     0.5000     0.0000
Vertex 1 volatility       0.2500         0.2500     0.7500     0.5000     0.0000
Vertex 2 volatility       0.7500         0.7500     0.2500     0.5000     0.0000
Vertex 1/2 correlation    0.8500         0.0000     0.8500     0.8500     0.8500
Product                   0.3188         0.0000     0.3188     0.4250     0.0000

a                         0.3063         0.6250     0.3063     0.0750     0.0000
b                         -0.8063        -1.1250    0.1938     -0.0750    0.0000
c                         0.3125         0.3125     -0.1875    0.0000     0.0000

Solution 1  x'            47%            34%        -116%      0%         N/A
            y'            53%            66%        216%       100%       N/A
Solution 2  x''           216%           146%       53%        100%       N/A
            y''           -116%          -46%       47%        0%         N/A

(2) The price volatility curve is downward sloping.

In this case, the other root of equation (5.4) must be used, i.e.:

(5.5)  X = (−b + √(b² − 4ac)) / (2a)
2a

Chart 20
RiskMetrics estimates of price volatility for Italian government bond zeros
in percent, daily data, June-August 1993
[Price volatility curves for June 30, 1993 and August 3, 1993 across 3- to 15-year maturities.]

This occurrence is infrequent, as price volatilities are usually a positive function of maturity.
There are instances, however, where certain sections of the price volatility term structure are
downward sloping, as evidenced by the chart on Italian bond market volatilities above.

(3) The price volatility curve is flat.

In this case, cash flows can be allocated to either vertex. In the historical data run to date,
there has not been an occurrence of a perfectly flat volatility term structure.

(4) Volatility estimates are equal to 0 and the equation has no solution.
While this is mathematically possible, no occurrence of this phenomenon has been
recorded to date and it is highly unlikely that any instrument would ever display a zero
volatility estimate.

Key RiskMetrics documents Worldwide RiskMetrics contacts


RiskMetrics Directory: A short brochure describing where For more information about RiskMetrics, please contact
and how the datasets can be accessed and where to go for the author or any person listed below:
risk management systems and other products related to
North America
RiskMetrics, i.e., a directory of RiskMetrics products
from J.P. Morgan and third-party vendors. New York Jacques Longerstaey (1-212) 648-4936
longerstaey_j@jpmorgan.com
RiskMetrics Monitor: A monthly publication which re- Chicago Michael Moore (1-312) 541-3511
views the changes in market volatilities and correlations moore_mike@jpmorgan.com
based on the RiskMetrics dataset.
San Francisco Paul Schoffelen (1-415) 954-3240
schoffelen_paul@jpmorgan.com
RiskMetrics Technical Document: A 100-page manual
which describes in detail the RiskMetrics methodology for Toronto Dawn Desjardins (1-416) 981-9264
measuring market risks. It specifies how transactions in any desjardins_dawn@jpmorgan.com
asset class and currency must be mapped into a common po- Europe
sition sheet and describes how volatilities and correlations
London Benny Cheung (44-171) 325-4210
are estimated in order to compute market risks for trading
cheung_benny@jpmorgan.com
and investment activities. The manual also describes the for-
mat of the volatility and correlation data (RiskMetrics Brussels Geert Ceuppens (32-2) 508-8522
dataset) and the sources from which daily updates can be ceuppens_g@jpmorgan.com
downloaded electronically. Paris Ciaran O’Hagan (33-1) 4015-4058
o’hagan_c@jpmorgan.com
RiskMetrics Databook: A roughly 400-page document
Frankfurt Guido Barthels (49-69) 712-4238
which provides statistics and graphs of historical data on all barthels_g@jpmorgan.com
approximately 325 rate and price series. It provides statistics
and graphs of historical estimation errors on all volatilities Milan Roberto Fumagalli (39-2) 774-4230
series and on a key set of correlation series. Underlying data fumagalli_r@jpmorgan.com
goes back to January 1988 for most time series (subsets Madrid Jose Luis Albert (34-1) 435-6041
available to large institutional clients only). albert_j-l@jpmorgan.com
Zurich Victor Tschirky (41-1) 206-8315
Bond Index Cash Flow Maps: A monthly insert into the tschirky_v@jpmorgan.com
Government Bond Index Monitor outlining synthetic cash
flow maps of J.P. Morgan’s family of bond indices. It is Asia
aimed at investors who view market risks relative to a bench- Singapore Michael Wilson (65) 326-9901
mark. wilson_mike@jpmorgan.com
Tokyo Yuri Nagai (81-3) 5573-1185
Key RiskMetrics data sources nagai_y@jpmorgan.com
Internet Telerate Hong Kong Martin Matsui (85-2) 841-1373
matsui_martin@jpmorgan.com
CompuServe Bloomberg
Australia Debra Robertson (61-2) 551-6137
Reuters robertson_d@jpmorgan.com

RiskMetrics is based on, but differs significantly from, the market risk management systems developed by J.P. Morgan for its own use. J.P. Morgan does not
warrant any results obtained from use of the RiskMetrics data, methodology, documentation or any information derived from the data (collectively the “Data”)
and does not guarantee its sequence, timeliness, accuracy, completeness or continued availability. The Data is calculated on the basis of historical observations
and should not be relied upon to predict future market movements. Examples are for illustrative purposes only; actual risks will vary depending on specific
circumstances. The Data is meant to be used with systems developed by third parties. J.P. Morgan does not guarantee the accuracy or quality of such systems.

Additional information is available upon request. Information herein is believed to be reliable but J.P. Morgan does not warrant its completeness or accuracy. Opinions and estimates constitute our judgment and are
subject to change without notice. Past performance is not indicative of future results. This material is not intended as an offer or solicitation for the purchase or sale of any financial instrument. J.P. Morgan may hold
a position or act as market maker in the financial instruments of any issuer discussed herein or act as advisor or lender to such issuer. Morgan Guaranty Trust Company is a member of FDIC and SFA. Copyright 1995
J.P. Morgan & Co. Incorporated. Clients should contact analysts at and execute transactions through a J.P. Morgan entity in their home jurisdiction unless governing law permits otherwise.
