1 February 1999
E-Mail: ludwig.kanzler@economics.oxford.ac.uk
WWW: http://users.ox.ac.uk/~econlrk
Abstract
This paper is concerned with the study of some fundamental aspects of the BDS
test. Brock, Dechert, Scheinkman & LeBaron (Econometric Reviews, 1996) propose
this non-parametric tool as a test of the null hypothesis of an independently and identically distributed (i.i.d.) time series, with power against virtually all linear and nonlinear, stochastic and deterministic (chaotic) alternatives. Unfortunately, it is extremely processing-intensive and requires an efficient computer algorithm to be viably run
even on relatively small data sets. An algorithm is presented which is very fast, rather
simple and easily implemented in common programming environments; a version for
MATLAB is part of the paper. The algorithm overcomes a number of deficiencies of
the two most widely used BDS packages by Dechert and LeBaron.
Extensive Monte-Carlo simulations are conducted which show that the properties
of the BDS statistic are sensitive to the choice of embedding dimension and dimensional distance, and to sample size. Unless the choice parameters are set in accordance
with the recommendations emerging from the simulations, a statistical test for iidness
is bound to be badly sized on small samples and thus yield misleading conclusions.
Tables of the small-sample distribution are offered for correctly sized testing. The
recommendations, tabulated quantile values and computer algorithms put forward will
hopefully help render the BDS test part of the standard econometric toolbox.
JEL subjects: C12, C13, C14, C15, C52, C63, C87
Keywords:
BDS statistic, computer algorithm, correlation integral, hypothesis
testing, i.i.d., model misspecification, non-linearity, random walk,
small-sample distribution, ultra-high-frequency data
Acknowledgements
Part of this paper was completed while I was staying at the Institute for
Monetary and Economic Studies at the Bank of Japan. I am very much indebted to the
Bank, in particular Jun Muranaga, Sachiko Kuroda Nakada, Makoto Ohsawa and
Tetsuya Yonetani, for their hospitality and for granting me access to their research
facilities. An earlier version of the paper was presented at workshops of the Bank's
Institute. I was happy to receive helpful comments during and after my presentations
from seminar participants, in particular Fumio Hayashi of Tokyo University.
My thanks also extend to Blake LeBaron (now at Brandeis University) for
giving me the exclusive opportunity to beta-test his BDS programme compiled for
MATLAB, which spurred my interest in the BDS test to the extent that I decided to
devote this long paper to it. In the course of my research, Blake and I exchanged over
160 e-mail messages. The simulations would have taken me much more time if
Jörg Filthaut, my father and my wife had not kindly allowed me access to their
computing resources.
My particular gratitude goes to Peter Oppenheimer of Christ Church, Oxford,
for supervising this research as part of my D.Phil. thesis, A Study of the Efficiency of
the Foreign Exchange Market through Analysis of Ultra-High Frequency Data. Peter
afforded me valuable support through his strong sense of style and presentation of
analytical material. All remaining errors are, of course, my sole responsibility.
Financial support from the Economic and Social Research Council (ESRC), and
Christ Church is gratefully acknowledged.
Contents

1. Introduction
2. The BDS Test
3. A Fast Algorithm for the BDS Statistic (Sub-Sections 3.1-3.4)
4. The Small-Sample Distribution of the BDS Statistic (Sub-Sections 4.1-4.4)
5. Comparison of BDS Programmes (Sub-Sections 5.1-5.4)
6. Review of BDS Applications (Sub-Sections 6.1-6.3, including 6.2 Compass-rose patterns)
7. Summary of Recommendations
References
Appendix A: Figures
Appendix B: Tables
Appendix C: Programmes
Contents of Appendices

Appendix A: Figures
Fig. 1 (Panels 1-6)
Fig. 2 (Panels 1-30)
Fig. 3 (Panels 1-24)
Fig. 4 (Panels 1-36)
Fig. 5 (Panels 1-4)

Appendix B: Tables
Table 1
Table 2
Table 3

Appendix C: Programmes
Prog. 1 BDS.M
Prog. 2 BDSSIG.M
***
All econometric MATLAB functions developed for this thesis and copies of the
appendices in portable-document format (pdf) can be downloaded from the author's
homepage:
http://users.ox.ac.uk/~econlrk
1. Introduction
This paper is concerned with the study of some fundamental aspects of the BDS
test, so called after its original authors William Brock, Davis Dechert and José
Scheinkman, who developed it in 1986. After their first working paper had appeared
in 1987, Blake LeBaron joined the team to develop viable Fortran and later C software
to compute the BDS statistic (LeBaron, 1997a), to examine some finite-sample properties (see Brock, Hsieh & LeBaron, 1991) and to apply the test to financial time series
(Scheinkman & LeBaron, 1989). The revised working paper was eventually published
by Brock, Dechert, Scheinkman & LeBaron (henceforward BDSL) in 1996. Section 2
briefly reviews intuition and the main equations of the test.
This test for independence, based on estimation of correlation integrals at various
dimensions (as explained in Section 2), has power against virtually all types of linear
and non-linear departure.[1] While estimation of the BDS statistic is non-parametric, the
test statistic asymptotically follows a normal distribution with zero mean and unit
variance and therefore lends itself to easy hypothesis testing.
Moreover, in principle no distributional assumptions need to be made about the
data under the null hypothesis other than that they are i.i.d. For example, unlike the
bi-spectrum test or the bootstrap-linearity test,[2] the BDS test does not depend on the
existence of higher moments. Given that excess/lepto-kurtosis, also called fat tails,
has been documented to be almost a standard feature of financial time series,[3] the
importance of this property is not to be underestimated (see also Hinich & Patterson, 1993).

[1] Dechert (1988b) considers two types of theoretical exceptions which play no role in practice.

[2] The references are Hinich (1982), Hinich & Patterson (1985, 1989, 1990), Ashley et al. (1986),
Brockett et al. (1988), and Barnett & Hinich (1993) for the bi-spectrum test, and Ashley & Patterson (1986)
for the bootstrap-linearity test. See also footnote 48 for four other non-linearity tests.

[3] With respect to the ultra-high-frequency exchange-rate data distributed by Olsen & Associates, see
the papers by Dacorogna et al. (1995), Müller et al. (1996), and Guillaume et al. (1997), as well as the
paper by Danielsson & de Vries (1998). With respect to other financial data, see DuMouchel (1983), Akgiray & Booth (1988), Hols & de Vries (1991), Jansen & de Vries (1991), Loretan (1991), Phillips & Loretan
(1992), Koedijk et al. (1992), Koedijk & Kool (1994), Loretan & Phillips (1994), and Kearns & Pagan
(1997), among others.
Most of the above studies probably over-estimate the degree of what is called fat-tailedness, since
the commonly used tail-index estimators all appear to be severely biased in small samples, as documented
by McCulloch (1997), Pictet et al. (1996), and Huisman et al. (1997). Nonetheless, even the latter authors'
improved estimator finds evidence of distributional instability in exchange rates; see Huisman et al. (1997,
1998).
The BDS test can be run over the residuals of a regression and can thus be
viewed as a test for model misspecification. But it can also be interpreted as a test for
non-linearity, if appropriately used in conjunction with ARIMA modelling. In a first
step, the best-fitting ARIMA(p,d,q) model is determined and fitted to the data, thus eliminating all linearity from the data. Only in a second step is the test applied, by running
it on the residuals of that ARIMA model, which by construction must be linearly independent, so that any dependence found in the residuals must be non-linear in nature.[4]
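To illustrate the two-step procedure, here is a minimal MATLAB sketch. It assumes, for simplicity, that an AR(p) regression is an adequate linear filter and that a function bds(x, m, eps) in the spirit of Programme 1 (BDS.M, Appendix C) returns the BDS statistic; the lag order, the placeholder data and the bds signature are illustrative assumptions, not prescriptions.

    % Step 1: strip all linear dependence by fitting an AR(p) model.
    p = 2;                            % AR order chosen by some criterion
    x = randn(500, 1);                % placeholder for the actual data
    n = length(x);
    X = ones(n-p, 1);                 % regressors: a constant and p lags
    for j = 1:p
        X = [X, x(p+1-j : n-j)];
    end
    b = X \ x(p+1:n);                 % OLS estimate of the AR(p) model
    e = x(p+1:n) - X*b;               % linearly independent residuals
    % Step 2: any dependence the BDS test finds here must be non-linear.
    w = bds(e, 2, 1.5*std(e));        % hypothetical call, m = 2, eps = 1.5 s.d.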
It is important to realise that the BDS test, like most other tests with power
against non-linear stochastic dependence and/or non-linear deterministic dependence,
is not per se a non-linearity and/or chaos test.[5] However, when applied to linearly
whitened data, it becomes a test with power against virtually any type of stochastic and
deterministic non-linearity.
BDS testing may thus help identify the existence of non-linear dependence, but
not its type. To obtain more information about the type of non-linearity prevalent in
the data, one would need to model likely non-linear processes directly. For example,
given a significant ARCH statistic (Engle, 1982), it would be a good idea to model
ARCH, using the appropriate variants. The BDS test could play an important role in
this, as it can be used as a powerful test for misspecification of ARCH models. Moreover, it could also indicate the existence of non-ARCH non-linearity (in the residuals
of an appropriate ARCH model).
Unfortunately, despite the fact that the BDS test is theoretically robust against
the inclusion of nuisance parameters in the original regression, additive GARCH residuals appear, for an as yet unknown reason, to bias the BDS statistic (Brock et al., 1991,
Hsieh, 1991, and private correspondence with Blake LeBaron, October 1998), therefore
[6] For the original contributions on bootstrapping, see Efron (1979, 1982). But see also the book by Efron & Tibshirani (1993), and the surveys by Hinkley (1988), Diciccio & Romano (1988) and Li & Maddala
(1996), with the ensuing discussion of their articles in the same journal numbers. For recent research, see,
for example, Berkowitz & Kilian (1996), and Andrews & Buchinsky (1997).
[7] BDSL (1996) list a number of studies using the BDS test, and some additional ones can be found in
Table 1 of Abhyankar et al. (1997). Apart from Campbell et al. (1997), who devote a single page to the test,
no textbook has come to my attention which includes a description of the test. See also Section 6 below for
some ill-understood implementations of the test.
The same cannot be said for small samples, as the size of the error of falsely rejecting the null hypothesis
is excessively large for some m and ε. I run extensive Monte-Carlo simulations with the
objective of examining the small-sample properties of the BDS distribution with varying choice parameters ε and m and sample size n. A selection of the results is graphed
and tabulated in Appendices A and B.
Some recommendations on the optimal choice of ε emerge. Unless ε is chosen
sufficiently large, evaluation of the significance level of the BDS statistic will give
misleading results. For low dimensions, the size of the BDS test is most accurate when ε
is chosen such that the correlation integral of dimension 1, c1,n, lies around 0.7 (which
corresponds to 150% of the standard deviation in a normally distributed sample).
Higher dimensions require the value of ε to be raised further. These recommendations
are independent of the actual distribution of the sample. Moreover, the significance of
the BDS statistic calculated on samples with 500 or fewer observations should be
evaluated using the quantile values of the small-sample BDS distribution tabulated in
Appendix B, Table 2 (these are also incorporated in Programme 2 of Appendix C).
In Section 5, I conduct comparisons between Dechert's, LeBaron's and my own
programme. Significant differences in BDS statistics are revealed for some choices of
n, m and ε, which are the result of differing methods for estimating correlation integrals. I find that Dechert and LeBaron employ estimators for some of the correlation
integrals which are inferior in statistical efficiency to the standard estimators, and that
Dechert uses an estimator for which statistical consistency has not been proven.
LeBaron's and Dechert's methods yield BDS distributions distinctly different from
mine when the sample size is small, when the embedding dimension considered is high
and when the distance parameter chosen is small relative to the recommended choice
for ε.
Both BDS programmes are better avoided on samples of size 500 or smaller,
unless bootstrapping is employed to evaluate the significance of the BDS statistic.
However, on large samples and for reasonable values of m and ε, all three methods
appear to give statistically indistinguishable results.
In most applications of the BDS test, parameter ε has actually been chosen outside the range of values which yield the best small-sample properties. In Section 6, I
briefly review a number of such studies and check whether either reference to the tabulated small-sample quantile values or setting ε within the recommended range would
change their respective findings qualitatively.
Section 7 summarises my recommendations for very fast and correctly sized
estimation of the BDS statistic.
2. The BDS Test

Consider a sample of n observations drawn from some distribution F:

x_i \sim F, \quad i = 1, 2, \ldots, n    (1)
Choose an arbitrary size of dimensional distance ε, the only condition being that it
must not exceed the spread of the time series (if it exists):

0 < \varepsilon < \max(x) - \min(x)    (2)

Now consider the probability of any pair of observations Xi, Xj lying within ε of each
other:

P_1 \equiv P\left( |X_i - X_j| \le \varepsilon \right)    (3)
Next consider the probability of any pair of observations lying within ε of each other
as well as their two predecessors being close to each other, i.e. the probability of a history
of two observations being within ε of each other:

P_2 \equiv P\left( |X_i - X_j| \le \varepsilon,\; |X_{i-1} - X_{j-1}| \le \varepsilon \right)    (4)
It is clear that the probabilities for the two dimensions differ. However, if (and almost
only if) the time series is i.i.d., there is a well-defined relationship between the two:
under independence, the closeness of Xi and Xj carries no information about the closeness
of Xi−1 and Xj−1, so the probability of a two-observation history being close is equal to
the square of the probability of any two observations being close:

P_2 = P_1^{\,2} \quad \text{if } x \sim F \text{ (i.i.d.)}    (5)
The power relationship generalises to any dimension, and the BDS test for embedding
dimension m is a test of the null hypothesis that the probability for dimension m and
the m-th power of the probability for dimension 1 are equal:

H_0\!: P_m = P_1^{\,m} \qquad H_1\!: P_m \ne P_1^{\,m}    (6)

Testing the above null hypothesis is almost equivalent to testing for iidness against
all other alternatives (see also footnote [1]):

H_0\!: x \sim F \text{ (i.i.d.)}    (7)
To obtain the test statistic, probability Pm is estimated by the correlation integral cm,n(ε)
in finite space.[8] Let I be the Heaviside function, so that the variable I(Xi, Xj) assumes the
value 1 if observations Xi and Xj are within distance ε of each other and 0 otherwise:

I(X_i, X_j) = \begin{cases} 1 & \text{if } |X_i - X_j| \le \varepsilon \\ 0 & \text{otherwise} \end{cases}    (8)
[8] The correlation integral was brought to prominence by Grassberger & Procaccia (1983a, 1983b).
The correlation integral for embedding dimension m is then computed over all unique
pairs of m-histories:

c_{m,n}(\varepsilon) = \frac{2}{(n-m+1)(n-m)} \sum_{s=m}^{n} \sum_{t=s+1}^{n} \prod_{j=0}^{m-1} I(X_{s-j}, X_{t-j})    (9)
As shown in BDSL (1996), the BDS statistic for embedding dimension m and
dimensional distance ε is estimated consistently on a sample of n observations by:[10]

w_{m,n}(\varepsilon) = \sqrt{n-m+1}\; \frac{c_{m,n}(\varepsilon) - c_{1,n-m+1}(\varepsilon)^m}{\sigma_{m,n}(\varepsilon)}    (10)
where the variance of the numerator is estimated consistently by:

\sigma_{m,n}^2(\varepsilon) = 4\left[ k^m + 2\sum_{j=1}^{m-1} k^{\,m-j} c^{\,2j} + (m-1)^2 c^{\,2m} - m^2 k\, c^{\,2m-2} \right]    (11)

with c denoting c_{1,n}(\varepsilon) and k the estimator k_n(\varepsilon) defined below.
(12)
cm, n()
n m
n m 1
m 1
s 1
t s 1
j 0
2
(n m 1) (n m)
I ( Xs j , Xt j )
[9] LeBaron (1997b) and all other BDS-related papers I have seen specify a third equation instead, having the
summations running from s = 1 to n and t = s+1 to n respectively, and consequently averaging by n(n−1).
However, for any j ≥ s, I(X_{s−j}, X_{t−j}) is not defined, as the start of the time series has already been reached for
j = s−1. So only the two equations cited here can be implemented in practice.

[10] LeBaron (1997b), Barnett et al. (1997) and many others state this equation as

w_{m,n}(\varepsilon) = \sqrt{n}\; \frac{c_{m,n}(\varepsilon) - c_{1,n}(\varepsilon)^m}{\sigma_{m,n}(\varepsilon)}

giving the misleading impression that the correlation integral of dimension 1 is to be estimated over the full
sample and that the ratio term is to be multiplied by the square root of the full sample size. See also Section 5.
The remaining quantity k is estimated consistently by:

k_n(\varepsilon) = \frac{2}{n(n-1)(n-2)} \sum_{t=1}^{n} \sum_{s=t+1}^{n} \sum_{r=s+1}^{n} \left[ I(X_t, X_s) I(X_s, X_r) + I(X_t, X_r) I(X_r, X_s) + I(X_s, X_t) I(X_t, X_r) \right]    (13)

There are actually many ways of estimating k and c1 consistently; however, only the
above equations represent the most efficient estimators (see also Sub-Section 5.2
below).
Finally, BDSL (1996) show that the BDS statistic follows the standard-normal
distribution asymptotically:[11]

\lim_{n \to \infty} w_{m,n}(\varepsilon) \sim N(0, 1) \quad \text{for any } m, \varepsilon    (14)
In principle, running the test is then straightforward: fix distance parameter ε and
embedding dimension m, compute c1,n and kn according to equations (9) and (13), use
these estimates to compute σm,n as defined by (11), similarly use (9) to compute cm,n
and c1,n−m+1^m, and plug all these estimates into equation (10) to obtain the BDS statistic.[12] The significance of the null hypothesis is then evaluated against the standard-normal distribution.
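By way of illustration, this recipe can be transcribed almost literally into MATLAB. The function below is a deliberately naive sketch of my own making, not the fast algorithm of Section 3: it evaluates equation (8) for every pair of observations, estimates the correlation integrals by brute force, and assembles equations (10) and (11); for kn it uses the row-sum identity derived in Section 3, which is exactly equivalent to equation (13).

    function w = bdsnaive(x, m, eps)
    % Naive BDS statistic following equations (8)-(13); for exposition only.
    % x: data vector, m: embedding dimension, eps: dimensional distance.
    x = x(:);  n = length(x);
    I = abs(repmat(x, 1, n) - repmat(x', n, 1)) <= eps;   % equation (8)
    c1  = corrint(I, 1);                    % c_1,n
    c1m = corrint(I(1:n-m+1, 1:n-m+1), 1);  % c_1,n-m+1
    cm  = corrint(I, m);                    % c_m,n, equation (9)
    r = sum(I, 2);                          % row sums of the indicator matrix
    k = (sum(r.^2) - 3*sum(r) + 2*n) / (n*(n-1)*(n-2));   % k_n, cf. (13)/(18)
    j = (1:m-1)';                           % variance, equation (11)
    s2 = 4 * (k^m + 2*sum(k.^(m-j) .* c1.^(2*j)) ...
              + (m-1)^2 * c1^(2*m) - m^2 * k * c1^(2*m-2));
    w = sqrt(n-m+1) * (cm - c1m^m) / sqrt(s2);            % equation (10)

    function c = corrint(I, m)
    % Correlation integral of dimension m, averaging the products of
    % equation (9) over all unique pairs of m-histories.
    n = length(I);  total = 0;
    for s = m:n
        for t = s+1:n
            p = 1;
            for j = 0:m-1
                p = p * I(s-j, t-j);
            end
            total = total + p;
        end
    end
    c = 2 * total / ((n-m+1) * (n-m));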
In practice, it is far from easy to implement the above equations in such a way
as to make obtaining the BDS statistic viable in terms of computational speed. Section
3 deals with this issue by offering a fast algorithm which is relatively easy to implement. Moreover, the question arises which settings one should consider for parameters
ε and m. Section 4 aims to provide some guidance. And finally, Section 5 uncovers
some differences in statistical efficiency of competing BDS software and examines
whether their deficiencies should be of concern.
[11] Asymptotic normality was only proven formally by de Lima (1992) and Lai et al. (1994).

[12] In the remainder of this chapter, dependence on parameter ε is suppressed for the benefit of notational clarity.
[13] For example, Guillaume et al. (1995) rely on LeBaron's programme, and Peters (1994) mentions
Dechert's software. Unfortunately a number of authors fail to acknowledge the source of the algorithm employed to run their BDS tests. Given the difficulty involved in writing a fast algorithm for the test, it is hard
to believe that they used their own proprietary algorithm without saying so and without offering their code
to the academic community.

[14] The source code was written in Turbo-Pascal, but is unfortunately not available publicly.
3. A Fast Algorithm for the BDS Statistic
Since the programme is menu- rather than command-driven, it is not possible to integrate the test into other software packages. This limitation is of little importance when
evaluating only a few series, but when the test is to be run repeatedly, for example on
bootstrapped data, Dechert's package appears unsuitable.
LeBaron wrote his programme in the C language, and it is the C source code
which is in the public domain. His programme can thus be integrated directly with
other C programmes, and it can also be compiled for other environments, but of course
this requires access to a C-language interpreter or an appropriate compiler. Most of the
statistical packages and programming interpreters used by economists do not have the
application-programme interface (API) needed to integrate external routines such as
LeBaron's programme. Even though the potential for integration is greater, many
interested researchers must therefore find it even more difficult to make use of
LeBaron's code than of Dechert's software.
It would therefore be useful to have BDS algorithms available which could be
easily implemented in any of the commonly used programming environments. With
this objective in mind, I have developed a fast BDS algorithm in MATLAB (MathWorks, 1997b).[15] My algorithm is sufficiently general to be quite easily translated into
other high-level programming environments (such as GAUSS, Maple, Mathematica,
Ox, or S-Plus) or indeed into low-level languages (e.g. C). My algorithm shares a number
of features with LeBaron's C code, but its core is far simpler in design and thus easier
to understand and implement.[16]

[15] MATLAB is a high-performance language for technical computing, which has become popular
among applied economists. MATLAB integrates computation, visualisation and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation, as opposed
to the idiosyncratic computer code of many competing products. It is also an interactive system whose basic
data element is an array that does not require dimensioning. This enables developing solutions to technical
computing problems characterised by matrix and vector formulations in a fraction of the time it would take
to write a programme in a scalar non-interactive language such as C or Fortran or in the scalar interactive
environments which economists typically use to analyse small data sets.
For comparisons of MATLAB (albeit in the outdated versions 4.0 and 4.2c1) with GAUSS, until
recently the programming environment used most widely among economists, see Rust (1993) and Küsters
& Steffen (1996). Many of the benefits of using MATLAB can also be deduced from a paper by Belsley
(1998) which, although not the author's intention, presents a number of hair-raising disadvantages of
the competing Mathematica environment, none of which incidentally apply to MATLAB.
One drawback of MATLAB is that it was not originally intended to be used for econometric analysis and therefore does not include any econometric functions whatsoever. There are not (yet) any add-on
functions available, either commercially or in the public domain, to enhance the MATLAB functionality,
except for the pricey Statistics Toolbox (MathWorks, 1997a), which offers hardly more than the most basic
statistical functions. However, I have programmed a number of econometric functions in MATLAB, and
these can be downloaded from my homepage at http://users.ox.ac.uk/~econlrk.
My algorithm would also be faster than his if translated into C. In practice, however, a compiled version of his code computes the BDS statistic in a fraction of the
time taken by my uncompiled programme in MATLAB (see Table 1). While in this
respect LeBaron's compiled software appears preferable, it is only my programme
which is based on the most efficient estimators of the relevant correlation integrals.
The significance of this fact is explored in Section 5.
The source code for my programme can be found in Appendix C and is also
available publicly at http://users.ox.ac.uk/~econlrk. The remainder of this
section explains the main ingredients of the algorithm. Some aspects of the full algorithm are too platform-specific to be of general interest, so they do not receive further
attention here. For example, I have actually developed a combination of six different
algorithms for computing cm,n which differ in speed and memory requirements. The
programme chooses the algorithm which maximises speed given available memory.
The syntax for calling the programme from the MATLAB command prompt is
explained in the header of the source code, and further explanations not covered by this
section can be found as comments interspersed with the code.
All four correlation-integral estimators entering the BDS statistic build on the same
elementary evaluation of closeness between a pair of observations:

I(X_i, X_j) = \begin{cases} 1 & \text{if } |X_i - X_j| \le \varepsilon \\ 0 & \text{otherwise} \end{cases}    (15)
The computation of the BDS statistic can thus be speeded up considerably by performing each evaluation only once and using the result for all four integrals. Neither
Dechert's nor LeBaron's algorithm exploits this potential to the full, as they evaluate the
above once for c1,n^V and kn^V and then again for c1,n−m+1 and cm,n. This is the first reason
why a compiled version of my programme would be faster than their software.

[16] LeBaron (1997b) describes some of the features of his code, but is too language-specific and too
short on detail to be of much use to users who are not C-language specialists.
Next, it is important to realise that all correlation integrals are defined only for
unique combinations of different integers i and j, so equation (15) needs to be evaluated only for all integers i < j ≤ n (i < j ≤ n−m+1 in the case of c1,n−m+1). The correlation integrals are defined as U statistics, as opposed to V statistics: U statistics include only
unique combinations; V statistics include all combinations, including own-points.[17] So
evaluation of the double sum of a U statistic requires only up to ½n(n−1) calculations, while arriving at the corresponding V statistic requires n², i.e. more than twice
as many evaluations. It is important to note that the required ½n(n−1) evaluations still
place an enormous burden on the computer.[18]

Dechert and LeBaron use the computationally and statistically inefficient V statistics to compute c1,n and kn. This is the second reason why my programme would be
faster after compiling it appropriately. In fact, V statistics are not only computationally
inefficient, they are also statistically inefficient in the context of the BDS test; the latter
aspect is considered in some detail in Section 5.

[17] On U and V statistics, see Denker & Keller (1983) and the relevant sources cited therein. LeBaron
(1997b) claims that "[t]he only difference between U and V statistics is that the own points are counted in
V-statistics". While it is true that all points for which j < i are only replications of all points for which i < j,
and thus only points for which i = j are truly new, inclusion of replications in V statistics still means that
unique combinations receive twice the weight vis-à-vis own-points, and it also means that the computational
burden is almost twice as large. See also below.

[18] High-level command interpreters are best instructed to perform the required operations vector-wise
or even matrix-wise, so it may appear as if only n runs are, or even only one run is, performed. But in reality
(i.e. at low level), this nonetheless requires ½n(n−1) computations.
It is helpful to picture the combinations of observations as a two-dimensional
matrix of elements I(Xi, Xj), with i running from 1 to n in the vertical direction and j
running from 1 to n in the horizontal direction. Elements I(Xi, Xj) assume either value
1 or value 0, depending on whether or not observations Xi and Xj are close. Evaluating
the full matrix would correspond to computing a V statistic. Evaluating only the upper
triangle, or alternatively only the lower triangle, of the matrix would correspond to calculating a U statistic. Here is an example:
i\j   1  2  3  4  5  6  7
 1    1  1  0  0  1  0  1
 2    1  1  1  1  1  0  0
 3    0  1  1  0  1  0  1
 4    0  1  0  1  1  1  0
 5    1  1  1  1  1  0  1
 6    0  0  0  1  0  1  1
 7    1  0  1  0  1  1  1
Consider first the estimation of kn. In terms of the elements of matrix I, the U statistic
of equation (13) averages the unique products of pairs of close-indicators sharing one
index over the upper triangle:

k_n(\varepsilon) = \frac{2}{n(n-1)(n-2)} \sum_{t<s<r} \left[ I(X_t, X_s) I(X_s, X_r) + I(X_t, X_r) I(X_r, X_s) + I(X_s, X_t) I(X_t, X_r) \right]    (16)

whereas the corresponding V statistic can be computed from the squared row sums of
the full matrix:[19]

k_n^V(\varepsilon) = \frac{1}{n^3} \sum_{t=1}^{n} \left[ \sum_{s=1}^{n} I(X_t, X_s) \right]^2    (17)

The difference between the two statistics is that kn^V includes all the products which can
be formed from all the elements in the entire matrix, whereas kn averages only all combinations of products in the upper triangle. So, while each product entering the summation for kn is unique, kn^V contains a lot of duplications among the many more products
which are summed.

[19] Incidentally, this is the equation by which Dechert and LeBaron compute kn^V. Equation (17) appears
in Dechert (1994), but without any derivation whatsoever.
There are three types of duplications. The first is given by taking products with
any elements in a north-westerly direction, but within the upper triangle. The second
is given by all the products involving the lower triangle. And the third is given by all
products among pairs stretching across the diagonal. It is thus clear that all products
are evenly duplicated, so averaging over all elements is equivalent to averaging over
the unique products in the upper triangle alone.
However, kn^V also encompasses own-products and products formed with one
element on the diagonal. Both of the latter render the statistic statistically inefficient
and should be adjusted for. Own-products occur twice, once with the element itself and
once with the image of the element across the diagonal. Products involving one diagonal element occur only once. In both cases, the product value is equal to the value of
the (non-diagonal) element. However, all diagonal elements themselves mirror onto
themselves and should not be triple-counted. Their product value is nonetheless equal
to their own-value.
The total adjustment required is thus three times the value of all elements in the
table less twice the values of the diagonal. This adjustment reduces the number of
products by which the adjusted sum of bits is to be averaged from n³ to
n³ − (3n² − 2n) = n(n−1)(n−2):
k_n(\varepsilon) = \frac{1}{n(n-1)(n-2)} \left\{ \sum_{t=1}^{n} \left[ \sum_{s=1}^{n} I(X_t, X_s) \right]^2 - 3 \sum_{t=1}^{n} \sum_{s=1}^{n} I(X_t, X_s) + 2n \right\}    (18)
kn may be relatively easily computed by summing the squares of the sums over each
row of full length n in the matrix, adjusting the sum and averaging the result as shown
above.
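The equivalence of equation (18) with the definition (13) is easily verified numerically. The following sketch (sample and names purely illustrative) computes kn both ways; for simplicity it builds the full matrix, whereas the text below explains how the row sums are obtained from the upper triangle alone.

    n = 40;  x = randn(n, 1);  e = 1.5*std(x);
    I = abs(repmat(x, 1, n) - repmat(x', n, 1)) <= e;
    % Equation (13): all unique products of pairs sharing one index.
    k13 = 0;
    for t = 1:n
        for s = t+1:n
            for r = s+1:n
                k13 = k13 + I(t,s)*I(s,r) + I(t,r)*I(r,s) + I(s,t)*I(t,r);
            end
        end
    end
    k13 = 2*k13 / (n*(n-1)*(n-2));
    % Equation (18): squared row sums, adjusted by 3*(all elements) - 2n.
    rs  = sum(I, 2);
    k18 = (sum(rs.^2) - 3*sum(I(:)) + 2*n) / (n*(n-1)*(n-2));
    disp(abs(k13 - k18))              % zero up to rounding error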
The remaining problem is that summation is over the full length of each row in
the matrix, i.e. it requires the left-hand part of each row, which falls into the lower
triangle and onto the diagonal, in addition to the right-hand part of each row in the
upper triangle. It may appear that the matrix of I's should after all be evaluated as a
V statistic, even though this would more than double the computational burden. Fortunately, this is not needed.[20]
The missing left-hand part of each row is given by its mirror-image across the
diagonal, i.e. by the column based on the diagonal value on which the row is anchored.
For example, the entire row 3 in the above table is simply given by the first two
elements in column 3, i.e. (1,3) and (2,3), an element of value 1 on the diagonal (3,3),
and the remaining elements of row 3, i.e. (3,4), (3,5), (3,6) and (3,7). In practice, c1,n and kn are
best computed by summing over rows and columns while running the evaluation for
the upper triangle of matrix I.
[20] Because Dechert and LeBaron are obviously not aware of the following trick, they instead attempt
to cut the processing time by implementing a method due to Theiler (1990) to obtain c1 and k. If the time
series is sorted first, then all close observations are found in a cohesive bulk around a given observation.
So one only needs to search for the first sorted observation lying outside distance ε of the given observation
to evaluate equation (15) for all ½n(n−1) unique combinations of i and j. The effective number of evaluations therefore depends on the proportion of observations actually being close, and this proportion is
given by c1,n. If c1,n is small, pre-sorting has the potential of reducing the computational requirements significantly. If it is close to 1, the potential gains are negligible.
There is, however, a cost exceeding any potential savings: to compute cm,n for m > 1, one needs
to revert to the unsorted series, and, as is argued in the main text, to speed up the computation of cm,n significantly,
one needs to be able actually to build on the above matrix, which can only be derived from the unsorted
series. So counting on the sorted series provides no relief from evaluating the unsorted series as well. On
the contrary, it places an additional burden on the computer, and this burden is even greater when the cost
of sorting is taken into account.
The Theiler sort is thus not worth performing as part of an otherwise optimised algorithm. The reason why Dechert and LeBaron still benefit from it is that, unaware of the trick described in
the main text, they insist on computing c1,n and kn separately as V statistics, in addition to computing cm,n
and c1,n−m+1 as U statistics.
Memory requirements can be cut drastically by storing the elements of matrix I
not as full numbers but as individual bits packed into words; a 7-bit word such as
1001101, for example, represents the integer

1×2⁶ + 0×2⁵ + 0×2⁴ + 1×2³ + 1×2² + 0×2¹ + 1×2⁰ = 77

Whereas packing as many bits as possible into one word (16, 32 or 64 bits on a low
level, 52 bits in MATLAB) yields the largest savings in memory, this makes counting
the number of bits set (i.e. the number of 1s) a formidable task. Unfortunately, neither
on a low level nor on a high level is it possible simply to sum the 1s. The fastest way
of counting bits in, say, a 17-bit representation is to create a table relating all integers
up to 2¹⁷−1 = 131,071 to the number of bits set for each of these integers and then to
look up the number of bits set for a given integer. The larger the number of bits used
per word, the larger the memory requirements of this look-up table (doubling with
every additional bit). The MATLAB algorithm chooses the number of bits per word
so as to minimise the total memory consumed by the table, the matrix and the subsequent processing for given sample size n.[21]
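A sketch of such a look-up table, using a fixed 17-bit word for illustration (the programme itself chooses the word size adaptively; the doubling construction shown is merely one simple way to build the table):

    nbits = 17;
    table = 0;                        % number of bits set in the integer 0
    for b = 1:nbits
        table = [table; table + 1];   % doubling step: new leading bit is 0 or 1
    end
    % table(i+1) now holds the number of 1s in the binary form of i,
    % so counting the bits set in a vector of packed words is a look-up:
    words = [77; 5; 131071];          % e.g. 1001101 = 77, as shown above
    nset  = sum(table(words + 1));    % 4 + 2 + 17 = 23 bits set in total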
The correlation integral for higher dimensions cm,n is defined by the average of
all unique products between chains of elements I(Xi, Xj), I(Xi−1, Xj−1), ...,
I(Xi−m+1, Xj−m+1). In the above-mentioned matrix I, these chains run in a north-westerly
direction. After left-aligning all rows in (the upper triangle of) the matrix, e.g.
1 0 0 1 0 1
1 1 1 0 0
0 1 0 1
1 1 0
0 1
1
the relevant products are now defined in a vertical direction. To obtain c2,n, one needs
only to multiply all elements which have a neighbour above them with that respective
neighbour and average the results. Given that the matrix of elements is stored in bit
representation, a logical and-operation on bit-words is computationally more efficient
than multiplying individual bits. To obtain c3,n without extending multiplication to two
neighbours in the upper direction, the results of the bit-and operations for c2,n are
stored, effectively replacing matrix I(Xi, Xj) by matrix I(Xi, Xj)·I(Xi−1, Xj−1); similar bit-and operations one step in the upper direction are performed on the new matrix, and the
resulting bits are averaged to yield c3,n. And so on for higher dimensions, if desired.[22]
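The following sketch mimics these chained and-operations on a plain logical matrix rather than on bit-packed words, since the principle is identical (all names are illustrative; BDS.M performs the same steps word-wise):

    n = 250;  x = randn(n, 1);  e = 1.5*std(x);
    I = abs(repmat(x, 1, n) - repmat(x', n, 1)) <= e;       % equation (15)
    c1 = 2 * sum(sum(triu(+I, 1))) / (n*(n-1));             % c_1,n
    J = I;                                  % chains of dimension 1
    for m = 2:5
        % an element survives only if its north-westerly neighbour was
        % itself the end of a chain of dimension m-1
        J(2:n, 2:n) = I(2:n, 2:n) & J(1:n-1, 1:n-1);
        J(1, :) = false;  J(:, 1) = false;
        cm = 2 * sum(sum(triu(+J, 1))) / ((n-m+1)*(n-m));   % equation (9)
        fprintf('c%d,n = %6.4f   (c1,n^%d = %6.4f)\n', m, cm, m, c1^m);
    end

On i.i.d. data, each cm,n printed should lie close to c1,n raised to the power m, in line with equations (5) and (6).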
The great speed of computing cm,n as described above relies to some extent on
the ability to perform bit-wise and-operations efficiently. In MATLAB, this is achieved
by delegating these operations to an external C routine. Other high-level languages may not have similar capabilities.
I have included a second version of my programme towards the end of the code
which does not use bit-wise operations and could arguably be translated into any language. Instead of a bit-wise matrix array, it relies on a single vector to store all ½n(n−1)
bits. This requires a complex indexing scheme, which is explained in the comments
interspersed with the code. While it returns results identical to those of the main programme, it is much slower and requires more memory than the main programme (see
Table 1). Nonetheless, it is still sufficiently fast for occasional use.
The statistical size of the BDS test is examined in the following Sections 4 and
5, and the test is extensively used on real data in the empirical studies of the thesis.
4. The Small-Sample Distribution of the BDS Statistic
[23] BHL (Brock, Hsieh & LeBaron, 1991) tabulate the size of the distribution at standard-normal quantiles as well as median, mean, standard deviation, skewness and kurtosis for n = 100 with m = 2, n = 500 with m = 2 and 5, and n = 1000 with
m = 2, 5 and 10, the range of ε/σ in each case being 0.25, 0.5, 1.0, 1.5 and 2.0. Quantiles are tabulated
only for ε/σ = 0.5 and 1.0, and n = 100, 250 and 500. There is no graphical illustration.

[24] The uniform and normal RNGs described here are built into MATLAB version 5. I obtained the full
C source code from Cleve Moler, chairman and co-founder of The MathWorks, which produces MATLAB.
The code is confidential, but a good description of the main algorithms can be found in Moler (1995).
The classical RNG reference is Marsaglia & Bray (1968), and a good review of the relevant literature is contained in Park & Miller (1988).

[25] Traditionally, Gaussian random numbers are obtained by simple scaling of uniform random numbers.
The samples drawn for the simulations are therefore not all conditioned on the same dependence, if there is any.[26] I am thus confident
that the Monte-Carlo simulations are de facto conducted under the true null hypothesis.
In principle, there is no need to run the simulations on normally distributed samples; the BDS test makes no such distributional assumption. Yet, as will be shown, the
finite-sample distribution of the BDS statistic is influenced by the proportion of observations being close in each dimension up to embedding dimension m. For given
dimensional distance ε, the estimates of these correlation integrals depend on the actual
distribution of the underlying sample. So to obtain consistent results, it is important to
choose a specific distribution a priori and to run the simulations on samples drawn
from this distribution alone. The actual choice of distribution is not particularly
important to the study of the small-sample properties per se.
However, for reasons of computational convenience, ε is in practice often
specified in units of the standard deviation σ of the sample under examination (see also
Sub-Section 4.4). The size of the standard deviation varies, of course, strongly across
different distributions, so the choice of distribution is, in fact, important when compiling quantile tables for practical applications. Since real-world data examined with
the BDS test tend to be close to Gaussian in distribution, I have chosen to simulate
the BDS statistic on normally distributed pseudo-random data.[27] Some examples of
pseudo-random samples graphed in Figure 1 show that Gaussianity is well approximated
by my RNG, even in very small samples. It is still possible to relate the results
obtained here to other distributions, and I will return to this point in Sub-Section 4.4.
The Monte-Carlo simulations are performed by running my BDS programme (as
described in Section 3 above) on samples with n = 50, 100, 250, 500, 750, 1,000 and
2,500 observations. In most cases, all embedding dimensions from m = 2 to 19 are considered.[28] ε is defined in fractions of the standard deviation σ of each random series,
and most experiments are conducted for ε/σ = 0.50, 1.00, 1.50 and 2.00. For each
combination of m and ε/σ, the BDS statistic is simulated on 25,000 random samples
of sizes 50, 100, 250, 500, 750 and 1,000, and on 16,350 random samples of size
2,500.[29]

[26] The period of the above-mentioned uniform RNG is approximately 2^1492, i.e. even if one drew random
numbers at a rate of one million per second (possible on a 90 MHz Pentium PC), they would repeat themselves only after 10^435 years! Thus continuous sampling alone would avoid any replications in practice. For
reasons of computational convenience, I do not always sample continuously, but instead shuffle the ziggurat pointers (called states in MATLAB) and later check that in fact not a single BDS statistic is replicated
in the simulation results. When comparing different methods of computing the BDS statistic for Section 5,
the simulations are, however, run in parallel.

[27] Note that the size of the mean of the distribution does not impact on the BDS test, and that the size
of the standard deviation is also irrelevant as long as ε is specified in units of σ.
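One cell of this simulation design can be sketched as follows, again assuming a function bds(x, m, eps) in the spirit of Appendix C (signature and loop are illustrative):

    nrep = 25000;  n = 100;  m = 2;  ratio = 1.5;   % one (n, m, eps/sigma) cell
    w = zeros(nrep, 1);
    for i = 1:nrep
        x = randn(n, 1);                   % i.i.d. N(0,1) pseudo-random sample
        w(i) = bds(x, m, ratio*std(x));    % eps stated in units of the sample s.d.
    end
    w = sort(w);                           % empirical small-sample distribution
    q = w(round([0.005 0.025 0.05 0.95 0.975 0.995] * nrep));   % tail quantiles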
The computational cost of running these simulations is immense, in particular
for larger sample sizes. Running the BDS programme for dimensions 2 to 19 on
25,000 samples of 50 observations takes approximately 5 hours on a personal computer with a 90 MHz Pentium chip (P90) and a 512 KB L2 cache running MATLAB
5.1 under Windows NT 4.0.[30] One set of experiments has to be conducted for the
entire range of ε/σ, so all Monte-Carlo experiments for sample size 50 can be
concluded just within one full day. As the sample size increases, the run-times virtually
explode. Even though the number of replications is reduced for the largest sample size,
the simulations had to be spread over six personal computers: one P90, three P133s,
one P200 and one Pentium-II 233.[31] I was thus able to complete the simulations within a little under three months,
during which time at least the Pentium-II machine was computing virtually continuously.

[28] I restricted the maximum dimension to the limit imposed by LeBaron's software, which crashes for
dimensions 20 and higher. For larger sample sizes and reasonable values of ε, higher dimensions could, in
practice, be considered. But the additional computational cost of running the required simulations would be
enormous, and the results are unlikely to offer insights beyond the findings discussed here.

[29] BHL's tabulations are based on only 5,000 runs for samples of 100 and 500 observations and 2,000
runs for 1,000 observations (which was understandable given the speed of the computing equipment available to them ten years ago). This makes their estimates quite unreliable, considering that a tail estimate of
1% is based on only 50 or 20 BDS statistics. It is thus only natural that they do not even tabulate tail probabilities for 0.5%, which is, however, an important cut-off point because it is needed to evaluate the null
hypothesis of independence at the 1% significance level (since the BDS test is two-sided; see also footnote
[33]). Nevertheless, the few results tabulated in BHL are broadly consistent with mine.

[30] Level-1 and level-2 caching greatly influences computational speed, in particular in the case of
LeBaron's software. For example, disabling the internal L2 cache on the Pentium-II (see also footnote [31]
below) reduces the speed of my programme by 40% and that of LeBaron's programme by 60%.
For each combination of the values chosen for n, m and ε, I obtain 25,000 BDS
statistics. In total, I thus have around 500 small-sample BDS distributions for evaluation. The distribution mapped out by a sample of BDS statistics can be visualised by
plotting the cumulative distribution of all 25,000 BDS statistics. The small-sample
properties are then easily understood by comparing the lower and upper tails of the
resulting curve with those of the theoretical Gaussian with zero mean and unit variance
plotted in the same graph, and also by comparing the BDS distributions with one
another.
Figure 2 (Panels 1 to 30) shows cumulative distributions for all available sample
sizes (n), for all dimensional-distance parameters in units of σ (eps = ε/σ) and for
some or all embedding dimensions (m) between 2 and 19. One panel shows the distributions of several embedding dimensions for given n and ε. Figure 3 (Panels 1 to 24)
and Figure 4 (Panels 1 to 36) plot BDS distributions from different perspectives. In Figure
3, one panel plots the results of increasing the sample size n as parameters m and ε are
held constant. Figure 4 allows BDS distributions to be compared for different ε, ceteris
paribus.
Each distribution curve links up all observations between −3 and 3, so these
plots are very precise. For example, approximately 430,000 data points are embedded
in Panel 8 of Figure 2. However, in some graphs I omit curves which interfere too
much with their neighbours and which are very similar in shape to their neighbours.
[31] Algorithms in raw MATLAB run approximately 4.5 times faster on the PII-233 than on the P90,
while LeBaron's compiled software even runs six times faster. The speed of the P133s is proportionate to
the frequency of these processors vis-à-vis the P90. The P200 is slightly faster than its frequency would
suggest, because the size of the L1 cache is 32 KB instead of 16 KB. The PII has an even larger L1 cache
of 64 KB and a high-frequency internal L2 cache of 512 KB, and it uses a more advanced CPU technology,
all of which explains why it is almost twice as fast as the P200.
As is apparent from Figure 4, varying the dimensional distance changes the shape of the distribution, in some cases drastically, but it is less clear at first sight which value of ε/σ
gives the best distributional properties in general.
Another distinct feature of the BDS distribution in small samples is the fact that
the median (0.5 quantile) is always negative, indicating that c1,n−m+1^m tends to be larger than
cm,n in value (Figure 2 and Table 2). With very few exceptions, the mean is larger than
the median, but it is usually still negative (Table 2). Even in very large samples, the
BDS distribution never appears to coincide with the standard normal around the
median.
In practice, however, only in the tails is the shape of the distribution of concern,
as hypothesis testing is usually based only on extreme probabilities represented by tail
areas. Asymptotically, the BDS statistic follows the standard-normal distribution, allowing significance levels to be evaluated with reference to normal-quantile values. Ideally,
one would want the BDS distribution to be approximated by the standard normal even
in small samples. For the larger the deviation of the BDS distribution from the standard
normal, the larger the size of the error of falsely interpreting the BDS statistic as
evidence for or against independence.
As the plots show, the cumulative BDS distribution tends to lie above the standard normal in the lower tail and below it in the upper tail. In other words, the finite-sample distribution is fat-tailed, always displaying excess kurtosis (i.e. kurtosis exceeding
3, the standard-normal value), as can be seen from Table 2. The greater the
weight in the tails, the larger the error with which one would reject the null hypothesis
of independence when the data was, in fact, independent.
As can be seen from the plots and tables, the size of this error varies considerably with the sample size, the embedding dimension and the dimensional distance. For
practical applications, the dependence of the size of this error on n, m and ε/σ must
be properly understood. The significance of the BDS statistic should be evaluated on
standard-normal quantiles only when the deviation of the size of the BDS error from
the standard-normal error is negligible. In this case, the BDS statistic should be computed only for such combinations of m and ε/σ on a given sample of size n for which
the BDS distribution and the standard normal virtually coincide in the tails. One may
indeed wish to choose m and ε/σ so as to minimise the deviation from the normal.[32]
However, it may not always be possible to approximate the standard normal by
educated choice of m and ε/σ, either because no suitable combination exists, or
because one wishes to evaluate the BDS statistic for many different combinations of
m and ε/σ, only some of which fulfil the above criterion. In these cases, reference to
the tabulated quantile values of the small-sample BDS distribution is required.

[32] Note that reducing the error of false rejection beyond that of the standard normal is not desirable
either. In some extreme cases, the BDS distribution has tails which are flatter than the normal.
For samples of size 100, the picture hardly changes for ε/σ = 0.5 and ε/σ = 1.0.
For ε/σ = 1.5 and ε/σ = 2.0, however, the tail probabilities are significantly smaller
than for half the sample size. Consider again the case of ε/σ = 1.5 and m = 2: rejecting
independence for BDS statistics exceeding 1.645 in absolute value, one would falsely
reject the null 19.0% of the time instead of 10% for the standard normal; similarly,
3.9% rejections instead of 1% at 2.576. Since these errors are still unacceptably large,
hypothesis testing should be based on the tabulated quantiles for ε/σ = 1.5 and/or
ε/σ = 2.0.
Increasing the sample size to 250, the tails of the distributions keep on moving
closer to the standard normal. Now even for ε/σ = 1.0, there are a few dimensions for
which the BDS distribution is quite well approximated by the normal in the outer part
of the tails. The size of the BDS test still appears to be too large for correctly sized
hypothesis testing to be performed: in the benchmark case of ε/σ = 1.5 and m = 2, the
respective errors of false rejection are 13.6% at the 10% level and 2.1% at the 1%
level. The BDS tables are still indispensable to testing in samples of size 250.
Doubling the sample size to 500 improves the properties of the BDS distribution
further. For ε/σ = 1.5 and small m, the lower tails virtually coincide with the standard
normal. Moving to larger m, the BDS distribution starts undercutting the normal, and
ε/σ = 2.0 yields better results in the lower tail than ε/σ = 1.5. The upper tails are,
however, too far off the normal to give the desired asymptotic property in the
aggregate, and the recommendation to use the BDS tables instead of normal quantiles
remains unchanged.
Disappointingly, as the sample size is increased further, the tails of the BDS
distributions do not keep on converging to the normal at the previous pace. In fact, virtual
normality in both tails for a large range of embedding dimensions is not achieved until
the size of the samples is increased to 15,000 (Table 2). From n = 750, the lower tail
tends to undercut that of the standard normal, while the upper tail stays stubbornly
below the normal.
Note, however, that the BDS test is usually conducted against a two-sided alternative hypothesis.[33] Hence all one should be concerned about is whether lower and
upper tail probabilities in the aggregate approximate their standard-normal equivalents
well enough to render reference to the tabulated BDS distribution unnecessary. For samples
of size 750 and larger and ε/σ = 1.5 or 2.0, the null will be rejected at most half
a percentage point too often compared with what normal probabilities would suggest. An error
of this size is probably acceptable for most applications. Note, however, that even in
relatively large samples, a setting of ε/σ = 0.5 yields unsatisfactory results for all but
the lowest dimensions. Similarly, higher dimensions are best avoided when choosing
to run the BDS test for ε/σ = 1.0.
In Figure 3, I explicitly explore the impact of increasing the sample size for
constant m and ε/σ, for various combinations of m and ε/σ. Except for the most
extreme cases (large m and ε/σ = 0.5), increasing the sample size leads to clear
improvements in convergence to the normal, and it is also apparent that the largest improvements are made in the range up to n = 500.
Choosing ε/σ
Figure 4 compares cumulative BDS distributions across a range of ε/σ. In very
small samples, setting ε/σ = 0.5 always produces a distribution which is not only way
off the normal mark, but also far away from the other three functions for ε/σ = 1.0,
1.5 and 2.0. In larger samples, ε/σ = 0.5 has the potential of yielding better results, but
only when the embedding dimension chosen is very low. And even then, such distributions never come as close to the standard normal as those for larger ε/σ. My recommendation is therefore not to consider ε/σ = 0.5 a possible choice for the distance
parameter.
[33] One would need to hold a firm view about the size of the correlation dimension at embedding dimension m to formulate a one-sided alternative hypothesis. Such beliefs could be based on findings from chaos-related tests using correlation-dimension estimates. It should be pointed out, however, that these techniques
may easily give misleading results (see Ramsey et al., 1990, and Eckmann & Ruelle, 1992) and thus, in my
opinion, rarely provide sufficient support to narrow down the alternative hypothesis of the BDS test.
Two questions remain: (i) why the BDS distribution varies with the embedding dimension when asymptotically there is no difference, and (ii) what role the size
of the sample plays in this relationship.
Consider as an example the BDS statistic being calculated for ε/σ = 0.5 on a
sample of 100 (near-)normally distributed observations. As Table 3 and Figure 3, Panel
2 show, typically around 27% of observations are close (value 1). Under the null
hypothesis, one expects (27%)² ≈ 7% of all 2-histories to be close, similarly
(27%)³ ≈ 2% of all 3-histories, and so on for higher dimensions. Also, the higher the
dimension, the smaller the absolute number of (unique) m-histories which can be
formed: (n−m+1)(n−m)/2.[34] As the number of close histories decreases, so presumably does the reliability with which the correlation integrals of higher dimensions are
being estimated. In the given example, estimation of the correlation integral of dimension 5 would be based on a mere 7 histories which are expected to be close. These
are just too few histories to make reliable inference about the underlying data-generating process.
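The arithmetic behind these counts is easily reproduced (the text rounds the m = 2 figure to 353 rather than 354):

    n = 100;  c1 = 0.27;  m = 1:6;
    pairs    = (n-m+1) .* (n-m) / 2;    % unique m-histories: 4950, 4851, ...
    expected = pairs .* c1.^m;          % expected close m-histories under the null
    disp(round(expected))               % 1337  354  94  25  7  2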
Decreasing ε, decreasing n and increasing m all cause the expected number of
close histories to shrink, thus rendering estimation of cm,n, and hence the BDS statistic,
less reliable. So it may not seem surprising to find the BDS statistic badly behaved for
small ε/σ, small n and large m, and especially for combinations of these.
Presumably it is also not a good idea to choose too large a dimensional distance,
because this would mean that c1,n−m+1 would be estimated on too small a number of
histories being not close. However, the case of too few 0s is not symmetric to the
case of too few 1s, because the correlation integral worst affected by a lack of
reliability would be c1,n, not cm,n. For c1,n, the expected number of close histories
is just n·c1,n, while for cm,n it is n·c1,n^m, which is always smaller than n·c1,n, and much
smaller when c1,n itself is small. So as c1,n is decreased, the proportion of close m-histories
approaches zero much more quickly than it approaches one when c1,n is increased.
Also, the number of close m-histories will always be much smaller than the number
of close observations (1-histories). One would have to make ε/σ very large indeed
before a loss of reliability would become significant.

[34] So for a sample of 100, there are 4,950 1-histories, 4,851 2-histories, 4,753 3-histories, etc.
This reasoning also points to the fact that the choice of ε has a far greater
impact on the reliability of cm,n than on that of c1,n−m+1, c1,n and also of kn (which is of
the order c1,n²). The reliability with which cm,n on the one hand and c1,n−m+1^m on the
other can be estimated differs markedly, even though their expected values are the same
under the null hypothesis. The odd behaviour of the small-sample BDS distribution
must be due to this relative lack of reliability in estimation. For instance, the smaller
the number of expected close histories, the larger the probability of encountering
not a single close history in estimation. A total lack of close histories means
that the estimate of cm,n becomes zero, and so the BDS statistic will be negative. This
explains why the BDS statistic tends to be more often negative than positive, and why
this tendency increases with increasing m, decreasing n and decreasing ε. But even
when the estimate of cm,n is positive, the number of values it can possibly assume when
ε is small, n is small and/or m is large is very limited. This shows up in the crooked
shape of some small-sample distribution functions.[35]
In short, the foregoing features explain why larger ε/σ yield so much better
behaved BDS distributions, why the difference in error between ε/σ = 1.5 and
ε/σ = 2.0 is so much smaller than the difference between ε/σ = 0.5 and ε/σ = 1.0, and
why larger ε/σ suit larger m particularly well.
It is also apparent that increasing the sample size should improve reliability,
because a larger expected number of histories is available for estimation. However, if
this were the only way through which the sample size influenced the distribution, then
the choice of embedding dimension should have a much larger impact on reliability
than it actually appears to have. Moving just one dimension higher reduces the typical
number of close histories drastically, in the above example from 1,337 to 353, from
353 to 94, from 94 to 25, from 25 to 7, and from 7 to 2. Yet, as the distribution
plots show, the impact on the shape of the BDS distribution is smaller than when a
similar reduction in the number of expected close histories is achieved through
decreasing the size of the sample.
[35] On a sample of 50, c1,n can assume a maximum of 1,225 different evenly spaced values between 0
and 1, which is why the plots for n = 50 display jumps.
One should therefore choose ε so as to ensure that there is a sufficiently large number of close m-histories available to make estimation of cm,n relatively reliable.
In principle, there is nothing wrong with this approach of fixing cm,n at a pre-determined value, determining ε in response and computing the other correlation integrals, and thus the BDS statistic, accordingly. Unfortunately, this procedure is highly impracticable. The computationally efficient method for computing the BDS statistic outlined in Section 3 relies on the ability to work with a given ε from the outset. A
method of fixing cm,n at the start and determining ε in response could only be implemented either through iteration or else through computing and storing the distance
between each and every pair of m-histories. Either solution would be extremely processing-intensive and could not be applied on a routine basis.
Since under the null hypothesis cm,n and c1,n−m+1^m are equal, a second-best
solution would be to fix c1,n and determine ε accordingly. Now the actual number of
close m-histories would vary, but at least its expected number could be fixed through
the choice of c1,n. In terms of computing power, this method is less demanding than the
above, but it is still too processing-intensive to be useful in practice. My MATLAB
programme gives the user the option to pursue this solution if a sufficient amount of
memory is available.
However, as it is not viable to fix the actual number of close m-histories, one
might as well fix their expected value by choice of ε rather than of c1,n. This is the third-best solution, which differs little in terms of statistical reliability from the second, but
which is computationally efficient because the fast algorithm of Section 3 can be used
as it is. The idea is thus to choose ε such that the expected number of close m-histories is
large enough, and varies little enough, to achieve (relatively) reliable estimation. Fortunately,
there is no need to make any explicit calculations, because the results of the simulations have already brought to light which values of ε tend to produce the best-sized
BDS statistics.
It should be remembered, however, that this approach is only second-best (or
rather third-best), since it does not allow full control over the size of cm,n. The
proportion of histories which one can make close by choice of ε depends on how
these observations are distributed. The larger the distance between any one pair of
observations, the smaller the absolute number of pairs which a given ε will capture as
close. To mitigate this problem, I have specified ε in units of the standard deviation σ
of the sample. While this solution makes the number of close observations less
dependent on their variance, it still does not give full control over cm,n in small
samples. Figure 5 and Table 3 show that the estimate of c1,n may vary considerably
even when the size of the dimensional distance is stated in units of the standard deviation
of the sample. Under the null hypothesis, a similar result holds true for cm,n.
Also, a given ε/σ tends to capture a different proportion of close observations
in a normally distributed sample than in a sample drawn from an altogether different
distribution. Stating the size of the dimensional distance in units of σ makes the correlation integrals depend on the kind of distribution from which the sample is drawn.
And consequently, a setting of, say, ε/σ = 1.5 will yield different BDS distributions
for, say, uniformly distributed samples and normally distributed samples respectively.
BHL simulate the BDS statistic for a given range of ε/σ on a number of different i.i.d. distributions, and the simulation results clearly vary with the type of distribution. The
reason must be entirely due to the fact that, while the ratio of ε to σ is in each case
the same, the expected proportion of close m-histories differs, and so does the variance with which the BDS statistic is estimated. (BHL themselves offer no explanation.)
To make the distribution of the BDS statistic comparable across a range of (independent) sample distributions, ε must thus be stated with reference to c1,n, not σ.36
So a BDS test conducted for ε determined with respect to a given c1,n is truly distribution-free in small samples. The quantile values of the BDS statistic tabulated in Appendix B are equally valid for normally and for non-normally distributed samples if
ε is first determined with respect to the sample estimate of c1,n. The values of c1,n corresponding to ε/σ for normally distributed samples can be found in Tables 2 and 3.
36
Note that even though the estimate of cm,n drives reliability, reference to c1,n suffices as long as the
sample is indeed i.i.d., since in this case E(cm,n) = [E(c1,n−m+1)]^m.
In practice, pursuing BDS testing along the lines of this strategy does not necessarily require fixing c1,n at either of the values corresponding to the recommended
values of ε/σ = 1.5 or ε/σ = 2.0 and determining the size of ε each time before running the test. In most cases, it should be sufficient to determine the approximate size
of ε/σ for given c1,n on a typical sample and to use the value obtained for all further
testing with similar samples. Note that the BDS distribution usually varies very little
between ε/σ = 1.5 and ε/σ = 2.0 on normal samples, so any ε which puts c1,n in the
range of 0.71 to 0.84 should produce reliable BDS estimates.
I recommend running the BDS test for a range of dimensions and at least two
choices of ε in order to obtain an altogether reliable indication of whether or not the
sample is i.i.d. ε/σ could be chosen as 1.5 and 2.0 for samples which appear to be
(near-)normally distributed, or otherwise such that c1,n ≈ 0.71 and 0.84, and m could
cover the full range of dimensions 2 to 15, for which I have tabulated the BDS distribution. For samples up to and including 500 observations, the tabulated quantile values
should normally be used for hypothesis testing; otherwise reference to a standard-normal table will generally do. My MATLAB algorithm BDSSIG.M (Appendix C, Programme 2) is based on the tabulated values and can be used to test the significance of
the BDS statistic both on small and on large samples (as long as ε is set such that c1,n
corresponds to ε/σ = 0.5, 1.0, 1.5 or 2.0).
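A minimal MATLAB sketch of this protocol follows, assuming hypothetical wrappers bdsstat (returning the BDS statistic for a sample, a dimension and a distance) and bdssig (returning a significance level against the tabulated small-sample distribution); the actual calling conventions of BDS.M and BDSSIG.M are given in Appendix C and may differ:

    % Hypothetical driver; bdsstat and bdssig stand in for BDS.M and BDSSIG.M.
    x = randn(500, 1);                        % sample under examination (illustrative)
    for ratio = [1.5 2.0]                     % recommended eps/sigma choices
      epsilon = ratio * std(x);               % dimensional distance in units of sigma
      for m = 2:15                            % full tabulated range of dimensions
        w = bdsstat(x, m, epsilon);           % BDS statistic
        p = bdssig(w, m, length(x), ratio);   % significance from tabulated quantiles
        fprintf('m = %2d, eps/sigma = %.1f: w = %7.2f, p = %.3f\n', m, ratio, w, p);
      end
    end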
Two other approaches for setting ε appear to have received virtually no attention
in practice. Dechert (1994) shows how to derive equations for maximising the theoretical size of the test for given n. Unfortunately, he does not offer any simulations
which would allow the usefulness of his approach to be evaluated. Given the above
findings, one wonders how the size can be maximised when m is left out of the
equation. Worse, his approach requires specification of the exact distributional characteristics of the data-generating process under the null hypothesis. As the null distribution is often unknown, additional assumptions have to be made which restrict the generality of the BDS test. Also, derivation of the required equations cannot be left to a
computer programme.
Wolff (1995) treats the correlation integral as a sum of dependent Bernoulli variables and derives a new test statistic which depends only on m and n, but not on
ε. The test statistic is computed differently from that of the BDS test, the only similarities being that both statistics are based on correlation integrals and that both follow
the standard-normal distribution asymptotically. The size of his new test statistic
appears to be slightly superior to that of the BDS statistic on small samples, but it still
departs to a considerable extent from the standard normal. This applies even to larger
sample sizes. Moreover, implementation of the test-statistic equations appears to be far
from straightforward.
run parallel with mine.37 By contrast, integrating LeBaron's algorithm would slow
down my own simulations so much that I refrain from doing so; instead I call his
actual programme directly from MATLAB.
Dechert:

$$
w_{m,n}^{\mathrm{Dechert}} \;=\; \sqrt{\,n-m+1\,}\;\frac{c_{m,n}-\bigl(c_{1,n}^{V}\bigr)^{m}}{\sigma_{m,n}\!\bigl(c_{1,n}^{V},\,k_{n}^{V}\bigr)} \tag{19}
$$

where the superscript V denotes a V-statistic (as opposed to U-statistic) estimator.
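For comparison, the standard statistic simulated throughout this chapter, reconstructed here from the estimators named above (c1,n−m+1 in the numerator, the U statistics c1,n and kn in the variance), is

$$
w_{m,n} \;=\; \sqrt{\,n-m+1\,}\;\frac{c_{m,n}-\bigl(c_{1,n-m+1}\bigr)^{m}}{\sigma_{m,n}\!\bigl(c_{1,n},\,k_{n}\bigr)} .
$$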
The first problem with Dechert's method arises from the fact that he bases estimation
of the correlation integral of dimension 1 in the numerator of the BDS ratio on the full
sample, i.e. employing c1,n instead of c1,n−m+1. While c1,n is, in fact, the most efficient
estimator of the first-dimensional correlation integral, it is not appropriate to employ
this estimator in the numerator. The BDSL (1996) paper, which is incidentally co-authored by Dechert, proves consistency of the BDS statistic only for c1,n−m+1. The
proof that the difference between cm,n and (c1,n−m+1)^m approaches zero as the sample size
increases to infinity relies crucially on the assumption that both correlation integrals
are estimated over the same number of histories.38
Although cm,n − (c1,n)^m is in my opinion likely to be consistent, it appears impossible to verify this formally. I have thus run some Monte-Carlo simulations to investigate whether using Dechert's full-sample estimator makes any difference to the BDS
distribution in finite samples. Figure 6 compares standard BDS distributions to the corresponding distributions of full-sample BDS statistics.39 The only difference between
the BDS statistics based on full-sample U-statistic estimators and the standard BDS
37
I have confirmed on a few samples that the results thus obtained are indeed identical in each and
every digit to those obtained through running Dechert's actual DOS programme.
38
I owe this insight to Blake LeBaron, another co-author of the BDSL paper.
39
To conserve space, I have not tabulated these distribution functions in Appendix B, but tables are
available from me upon request.
statistics lies in the use of c1,n versus c1,n−m+1. The former are not Dechert's BDS statistics, which differ from the standard also in other aspects discussed further below.
As Figure 6 shows, the full-sample BDS distribution tends to lie below the
standard BDS distribution in both the lower and upper tails, making it difficult to decide
which of the two distributions approximates normal tail probabilities better in the
aggregate. In any case, the functions differ significantly only when the sample is rather
small and the embedding dimension rather high (Panels 1 to 12). In samples of size
500 (Panels 13 to 18) and greater (not shown), there is virtually no difference between
the two.
This supports my conjecture that, even though consistency cannot be formally
proven for the full-sample BDS statistic, it appears to hold in practice. However, even
if the problem of consistency could be neglected, it is hard to advance an argument for
using the (fully efficient) estimator c1,n in lieu of estimator c1,n−m+1. In either case, the
small-sample distribution does not approximate the normal very well. And to make use
of the tables in Appendix B, the BDS statistic should be calculated according to the
standard method with which I have derived these distributions.
By using the V statistics for c1,n and kn instead of their U-statistic counterparts, Dechert deprives his BDS estimators
of maximum statistical efficiency. I will consider the distribution of his BDS statistics
further below. Let me first turn to the second author's method.
LeBaron:

$$
w_{m,n}^{\mathrm{LeBaron}} \;=\; \sqrt{\,n-m+1\,}\;\frac{c_{m,\,n-\mathit{maxdim}+1}-\bigl(c_{1,\,n-\mathit{maxdim}+1}^{V}\bigr)^{m}}{\sigma_{m,n}\!\bigl(c_{1,\,n-\mathit{maxdim}+1}^{V},\,k_{n-\mathit{maxdim}+1}^{V}\bigr)} \tag{20}
$$
Similarly to Dechert, LeBaron bases estimation of σm,n on statistically inefficient V statistics, but in contrast to Dechert, LeBaron does not even make use of the full
sample.40
In addition, he makes estimation of the BDS statistic dependent on the prior
choice of the highest embedding dimension (maxdim) for which the BDS programme
is called. LeBaron's BDS package routinely returns BDS statistics for dimensions 2 to
maxdim. First of all, this means that the BDS statistic for given n, ε and m assumes
a different value for each and every maxdim between m and infinity. In effect, LeBaron
introduces a fourth choice parameter into estimation, without knowledge of which it
is impossible to replicate results obtained with his programme. Second and worse, the
larger maxdim for a given m, the smaller the number of observations of the test sample
which enter the computation of all four correlation-integral estimates. So the size of
LeBaron's BDS statistic depends on the size of maxdim − m, the most reliable estimation being represented by the case m = maxdim.
This finding calls for an empirical investigation into the question of what difference
the choice of maxdim makes for the distribution of LeBaron's BDS statistic. Figure 7
40
Until May 1997, LeBaron's C programme actually used U statistics, but was otherwise identical to
his current version. The method of this earlier version appears to be identical to that of the Fortran programme which LeBaron and Hsieh employed in their simulations and their initial applied work (BDS, 1987,
BDSL, 1996, BHL, 1991, Hsieh, 1989, Scheinkman & LeBaron, 1989). Unfortunately, the Fortran code is
not available for verification.
41
Comprehensive tables for the cases depicted by Figure 7 and others are available from me upon
request.
42
Unfortunately, it is not known which value of maxdim was chosen for the simulations reported in
BHL, and so the reliability of their tabulated quantile values cannot be judged.
43
The simulations for Panel 21 use a revised version of LeBaron's programme which is still maxdim-dependent, but which the author corrected (following my suggestion) to employ U statistics for all four correlation-integral estimators.
44
Tables for these and other distributions are available from the author upon request.
Note, however, that Table 2 should be used only in conjunction with the standard
method of computing the BDS statistic. As the sample size is increased to 500 (not
shown), virtually any statistical difference between the three functions vanishes.
Even in those cases where the Monte-Carlo simulations did not reveal noticeable
differences between methods in the aggregate, the results obtained on individual samples
tend to differ to a surprisingly large extent.45 This shows how sensitive the BDS statistic is to the inclusion of individual observations, and it strengthens the case for evaluating
the null hypothesis for a range of m and ε.
In conclusion, usage of statistically inferior V statistics instead of the statistically
most efficient U statistics in estimators of some correlation integrals changes the distribution of the BDS statistic away from the norm only when these statistics are calculated on very small samples. Moreover, the small-sample properties of the BDS statistic are in some cases actually improved. By contrast, usage of maxdim-dependent
estimators has a potentially significant effect even on very large samples, and it always
worsens the finite-sample properties of the BDS statistic.
Neither Dechert's nor LeBaron's programme should be used on samples smaller
than 500 observations, because there are no tabulated quantile values against which the
results could be compared. This restriction can, of course, be circumvented by bootstrapping the BDS distribution of the sample under examination. Unfortunately,
Dechert's DOS programme does not lend itself well to such an elaborate procedure.
LeBaron's programme must be used with some care even on larger samples. Only
results obtained for the highest 10 dimensions can be taken to be fairly reliable, and
in cases of doubt the programme should be called separately for every embedding
dimension desired.
My own programme suffers from none of these deficiencies and can be used
under any circumstances. It will yield the most satisfactory results when the recommendations of Section 4 are followed.
45
Some of the individual differences between LeBaron's programme and mine are due to the fact that
he uses the first n−maxdim+1 observations of the sample while I use the last n−m+1 observations. Reversing
the series for one of the two programmes somewhat narrows down the observed differences.
46
Incidentally, both books are reviewed by one of the co-authors of the BDS test (LeBaron, 1995a,
1995b), and while Peters' book is for other good reasons severely criticised, neither review makes mention
of the above slips.
simulations, many of the BDS statistics quoted in published work appear unreasonably
large due to careless choice of ε/σ. Nevertheless, almost all of the BDS statistics
quoted in these studies remain significant when evaluated against the tabulated distribution in Table 2, so that their findings of dependence appear by and large qualitatively
incontestable. There exist, however, (at least) three simulation studies with results
which are highly questionable given my findings. I will discuss them briefly in the subsequent three sub-sections.
47
The authors also assume that the BDS test is one-sided, which is normally inappropriate (see footnote 33 on page 31).
48
The four other tests are Hinich's bi-spectrum linearity test (see page 5, footnote 2), White's (1989a,
1989b) neural-network test (see also Lee et al., 1993, and Jungeilges, 1996), the NEGM Lyapunov-exponent
neural-network test (Nychka et al., 1992), and Kaplan's (1994) non-parametric Delta-Epsilon test. Unfortunately, the paper does not consider Tsay's (1986) non-linearity test.
(continued...)
on my simulated quantile values of Table 2, however, changes their results dramatically: now, the BDS test yields the correct inferences in each of the five cases. The BDS
test is thus clearly superior to the two competing established tests for non-linearity and
as good as the newest two tests (see footnote 48).
48
(...continued)
The properties of the former two tests are already relatively well established. While FORTRAN
source code for the bi-spectrum test and the computationally extremely intensive NEGM test is freely available on the internet, the aforementioned authors of White's test have kept their modules a secret to date,
just as is the case with White's (1997) reality check. MATLAB software for Kaplan's test can be downloaded from Kaplan's homepage at http://www.math.macalester.edu/~kaplan.
If the null hypothesis is not the problem, then it must be the size of the test.
Krämer & Runde simulate the BDS statistic on 1,000 samples of n = 2,000 observations
for embedding dimensions m = 2, 3, 4, 5 and dimensional distance ε/σ = 1.0. As argued
in Sub-Section 4.2 above, a larger ε would be desirable, but in light of my own simulations for n = 1,000 and n = 2,500, the unfortunate choice of ε alone cannot explain
their findings diverging so hugely from the standard-normal distribution. The fact that
some price changes are far more likely than others must come into play.
Following the detailed explanations of Section 4.3 above, it is not difficult to
see how this will occur. When the data is rounded, the number of observations being
close is reduced. The reason for this is that two observations cannot be a negative
distance away from each other, so fewer observations are made "close" than "not
close" by rounding. Yet the absolute number of observations being close is very
important to the size of the BDS test in small samples. This is particularly true when
ε is chosen rather small. Panel 26 of Figure 2 shows, for somewhat larger samples
of 2,500 observations, that while a setting of ε/σ = 1.0 yields satisfactory distributions
for small embedding dimensions, this is no longer the case when one moves through
double-digit dimensions and the associated number of expected close observations
becomes very small. Analogously, when the data is made discrete through rounding, the shape of the distribution of the data is changed such that the number of
close observations is reduced, the reduction in numbers depending on the degree of
rounding. As explained in Section 4.3, the lack of close observations increases the
probability of the error of rejecting the null hypothesis when it is actually true.
It is thus clear that rounding distorts the size of the BDS test. The question
remains whether this alone can explain Krämer & Runde's findings. There are many
ways of verifying the conjectured impact of rounding on the size of the BDS
statistic. The simplest would be to compare the size of the correlation integrals
before and after rounding. My argument implies that rounding reduces the size of this
statistic. Secondly, my conjecture could be confirmed by re-running their simulations
for the (higher) recommended values of ε/σ = 1.5 and 2.0, which should mitigate or
remove any size distortions.
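The first of these checks is simple enough to sketch in MATLAB (a hypothetical illustration, not a re-run of Krämer & Runde's simulations; the tick size of 1/8 is an arbitrary stand-in for price rounding):

    % Compare the estimate of c(1,n) on raw versus rounded i.i.d. data.
    % If the conjecture holds, c1 should fall after rounding.
    n = 2000;
    x = randn(n, 1);                          % raw i.i.d. sample
    xr = round(x * 8) / 8;                    % rounded to a tick of 1/8 (illustrative)
    epsilon = 1.0 * std(x);                   % eps/sigma = 1.0, as in their study
    c1  = (sum(sum(abs(repmat(x, 1, n)  - repmat(x', n, 1))  <= epsilon)) - n) / (n*(n-1));
    c1r = (sum(sum(abs(repmat(xr, 1, n) - repmat(xr', n, 1)) <= epsilon)) - n) / (n*(n-1));
    fprintf('c1 raw = %.4f, c1 rounded = %.4f\n', c1, c1r);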
Thirdly, increasing the sample size sufficiently should remove any size distortion
altogether. In their paper, Krämer & Runde claim that they also ran the simulations for
samples of n = 5,000 observations, but do not report any corresponding results. In
private correspondence (October 1998), however, Ralf Runde has admitted that he had
never been in a position to perform these simulations, as his self-written programme
was too slow to be of use for simulations on samples of that size!49
Fourthly, the distribution of the BDS statistic on rounded data could be bootstrapped, thus removing size distortions in evaluating the null hypothesis.
The compass-rose pattern is of no relevance to this thesis, as the tick size of
exchange rates is much finer than that of stock prices. Improving on the simulations
of Krämer & Runde and falsifying their claims is thus beyond the scope of this
chapter. (I believe the above arguments are being used by Dee Dechert and Blake
LeBaron in preparation of a reply to Krämer & Runde.) Nonetheless, their paper and
the problems associated with it serve to highlight the fact that some knowledge of the
size of the BDS test is indispensable to practical application.50 I hope the results of
this chapter will provide enough information to fill the gap in general knowledge.
49
Interestingly, their plots of the compass-rose pattern are all for 5,000 observations rather than for
2,000 observations, in which case any such pattern would, of course, be less strongly visible.
50
Krämer & Runde (1997) also fail to realise that the BDS test is generally a two-sided test (see page
31, footnote 33) and that it is not a chaos test as such, although it is true that, in conjunction with the
correct modelling approach, it can be used to test for the existence of non-linear deterministic dependence
(commonly known as chaos).
What exactly makes them believe that they can simulate the test under the null
hypothesis by using data derived from an unknown data-generating process remains a
mystery. No attempt is made to claim that a GARCH(1,1) process captures
any dependence in the data particularly well. But even if that were the case, it would
not guarantee that the GARCH residuals were totally free of any dependence. One also
wonders why Chappell et al. (1996) restrict their attention to samples of 749 observations.
It is certainly true that their tabulated quantile values differ significantly from
those of the standard-normal distribution, even after making allowance for the fact that
1,000 repetitions can hardly be sufficient to estimate 0.5% and 1.0% quantiles reliably.
But should this come as any surprise? As shown in this chapter, even if simulated on
perfect i.i.d. data, the BDS distribution is bound to deviate significantly from the
standard normal for almost all the cases they consider. In fact, their values are
generally in line with those tabulated in Table 2 of Appendix B. Chappell et al.'s paper
appears to serve no purpose other than sowing confusion about the distribution of the
BDS statistic.
7. Summary of Recommendations
My recommendations for both very fast and correctly sized estimation of the
BDS statistic may be summarised as follows.
The BDS statistic can be obtained in the speediest fashion by implementing an
algorithm comprising the following steps (as detailed in Section 3 above; a minimal
sketch follows the list):
First, the absolute difference between all unique combinations of two observations
in the sample under investigation is calculated, evaluated against the dimensional distance ε, and the result stored in an array of bits.
Second, correlation integrals of dimension 1, c1,n and c1,n−m+1 (possibly for multiple
embedding dimensions m), as well as correlation integral kn, are estimated directly
from this array. All estimates are obtained as U statistics.
Third, correlation integrals of higher dimensions cm,n are estimated by performing
bitand operations on the array.
Fourth, the BDS statistic for embedding dimension m is calculated from the resulting correlation-integral estimates.
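The sketch below is an illustration in MATLAB, not the author's BDS.M: logical matrices stand in for the packed bit array of Section 3, and Step 4 uses the asymptotic variance expression of BDSL (1996), with c1,n and kn entering the variance as in Step 2 (an assumption consistent with the standard method described above).

    function w = bdssketch(x, m, epsilon)
    % Minimal four-step BDS computation (illustrative sketch, not BDS.M).
    x = x(:);  n = length(x);  nm = n - m + 1;
    % Step 1: indicators of close pairs, |x(s) - x(t)| <= epsilon.
    B = abs(repmat(x, 1, n) - repmat(x', n, 1)) <= epsilon;
    % Step 2: dimension-1 correlation integrals and k(n) as U statistics.
    c1n  = (sum(B(:))  - n)  / (n  * (n  - 1));          % c(1,n)
    Bnm  = B(1:nm, 1:nm);
    c1nm = (sum(Bnm(:)) - nm) / (nm * (nm - 1));         % c(1,n-m+1)
    h    = sum(B, 2) - 1;                                % close neighbours per point
    kn   = sum(h .* (h - 1)) / (n * (n - 1) * (n - 2));  % k(n)
    % Step 3: c(m,n) by AND-ing lag-shifted copies of the indicator array.
    J = B;
    for j = 1:m-1
      J = J(1:end-1, 1:end-1) & B(1+j:n, 1+j:n);         % histories close up to lag j
    end
    cmn = (sum(J(:)) - nm) / (nm * (nm - 1));            % c(m,n)
    % Step 4: BDS statistic with the BDSL (1996) asymptotic variance.
    c = c1n;  k = kn;
    V = k^m + (m - 1)^2 * c^(2*m) - m^2 * k * c^(2*m - 2);
    for j = 1:m-1
      V = V + 2 * k^(m - j) * c^(2*j);
    end
    w = sqrt(nm) * (cmn - c1nm^m) / sqrt(4 * V);

On large samples the n-by-n logical matrix is the memory bottleneck; packing the indicators into an array of bits, as Section 3 describes, reduces this by roughly a factor of eight.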
References
Abhyankar, Abhay, Laurence Copeland & Woon Wong (1997), Uncovering Nonlinear Structure in Real-Time Stock-Market Indexes: The S&P 500, the DAX, the Nikkei 225, and the FTSE-100, Journal of
Business and Economic Statistics, vol. 15, no. 1 (January), pp. 1-14
Akgiray, Vedat & Geoffrey Booth (1988), The Stable-Law Model of Stock Returns, Journal of Business
and Economic Statistics, vol. 6, no. 1, pp. 51-57
Andrews, Donald & Moshe Buchinsky (1997), On the Number of Bootstrap Repetitions for Bootstrap
Standard Errors, Confidence Intervals, and Tests, Yale University, Cowles Foundation Discussion
Paper, no. 1141R (August)
Ashley, Richard & Douglas Patterson (1986), A Nonparametric, Distribution-Free Test for Serial Independence in Stock Returns, Journal of Financial and Quantitative Analysis, vol. 21, no. 2 (June), pp. 221-227
Ashley, Richard, Douglas Patterson & Melvin Hinich (1986), A Diagnostic Test for Nonlinear Serial
Dependence in Time Series Fitting Errors, Journal of Time Series Analysis, vol. 7, pp. 165-178
Barnett, William & Melvin Hinich (1993), Has Chaos Been Discovered with Economic Data?, Chapter
16 in Day & Chen (1993), pp. 254-265
Barnett, William, Alfredo Medio & Apostolos Serletis (1997), Nonlinear and Complex Dynamics in Economics, Washington University in St. Louis, Economics Working Paper Archive, no. WAB-97-13 (24
September)
Barnett, William, Ronald Gallant, Melvin Hinich, Jochen Jungeilges, Daniel Kaplan & Mark Jensen (1998),
A Single-Blind Controlled Competition among Tests for Nonlinearity and Chaos, Journal of Econometrics, vol. 82, no. 1 (1 January), pp. 157-192; revised version of Washington University in St. Louis,
Economics Working Paper Archive, no. WAB-96-8 (29 January 1997)
Belsley, David (1998), Mathematica as an Environment for Doing Economics and Econometrics, Computational Economics, forthcoming; revised version of Boston College Working Papers in Economics, no.
364 (20 March 1997)
Berkowitz, Jeremy & Lutz Kilian (1996), Recent Developments in Bootstrapping Time Series, Board of
Governors of the Federal Reserve System, Finance and Economics Discussion Series, no. 96-45 (8
November)
Brock, William, Davis Dechert & José Scheinkman (1987), A Test for Independence Based on the Correlation Dimension, University of Wisconsin-Madison, Social Systems Research Institute Working Paper,
no. 8702; reprinted in William Barnett, E. Berndt & Halbert White, eds. (1988), Dynamic Econometric
Modelling: Proceedings of the Third International Symposium on Economic Theory and Econometrics,
Cambridge University Press, Cambridge
Brock, William, David Hsieh & Blake LeBaron (1991), Nonlinear Dynamics, Chaos, and Instability: Statistical Theory and Economic Evidence, MIT Press, Cambridge, Massachusetts
Brock, William, Davis Dechert, José Scheinkman & Blake LeBaron (1996), A Test for Independence Based
on the Correlation Dimension, Econometric Reviews, vol. 15, no. 3 (August), pp. 197-235; reprinted
as University of Wisconsin-Madison, Social Systems Research Institute Reprint, no. 444; revised version
of University of Wisconsin-Madison, Social Systems Research Institute Working Paper, no. 9520; in
turn revised version of Brock et al. (1987)
Brockett, Patrick, Melvin Hinich & Douglas Patterson (1988), Bispectral-Based Tests for the Detection of
Gaussianity and Linearity in Time Series, Journal of the American Statistical Association, vol. 83, no.
403 (Applications & Case Studies), pp. 657-664
Campbell, John, Andrew Lo & Craig MacKinlay (1997), The Econometrics of Financial Markets, Princeton
University Press, Princeton, New Jersey
Cecen, Aydin & Cahit Erkal (1996), Distinguishing between Stochastic and Deterministic Behavior in Foreign Exchange Rate Returns: Further Evidence, Economics Letters, vol. 51, no. 3 (June), pp. 323-329
Chappell, David, Joanne Padmore & Catherine Ellis (1996), A Note on the Distribution of BDS Statistics
for a Real Exchange Rate Series, Oxford Bulletin of Economics and Statistics, vol. 58, no. 3 (August),
pp. 561-565
Crack, Timothy Falcon & Olivier Ledoit (1996), Robust Structure without Predictability: The Compass
Rose Pattern of the Stock Market, Journal of Finance, vol. 51, no. 2 (June), pp. 751-762
Dacorogna, Michel, Ulrich Müller, Olivier Pictet & Casper de Vries (1995), The Distribution of Extremal
Foreign Exchange Rate Returns in Extremely Large Data Sets, Olsen & Associates, Zürich, working
paper, 17 March
Danielsson, Jon & Casper de Vries (1997), Tail Index and Quantile Estimation with Very High Frequency
Data, Journal of Empirical Finance, vol. 4, no.s 2-3 (June), pp. 241-257; revised version of Robust
Tail Index and Quantile Estimation, Proceedings of the First International Conference on High Frequency Data in Finance, 29-31 March 1995, Olsen & Associates, Zürich, vol. 2
Day, Richard & Ping Chen, eds. (1993), Nonlinear Dynamics and Evolutionary Economics, Oxford University Press, New York
Dechert, Davis (1988a), BDS STATS: A Program to Calculate the Statistics of the Grassberger-Procaccia
Correlation Dimension Based on the Paper A Test for Independence by W. A. Brock, W. D. Dechert
and J. A. Scheinkman, release 8.21, MS-DOS software available on gopher://gopher.ssc.wisc.edu:70/11/econgopher/software/bds/dos
Dechert, Davis (1988b), A Characterization of Independence for a Gaussian Process in Terms of the Correlation Integral, University of Wisconsin-Madison, Social Systems Research Institute Workshop Series,
no. 8812 (July)
Dechert, Davis (1994), The Correlation Integral and the Independence of Gaussian and Related Processes,
University of Wisconsin-Madison, Social Systems Research Institute Working Paper, no. 9412 (March)
De Grauwe, Paul, Hans Dewachter & Mark Embrechts (1993), Exchange Rate Theory: Chaotic Models of
Foreign Exchange Markets, Blackwell, Oxford
De Lima, Pedro (1992), A Test for IID Based upon the BDS Statistic, Johns Hopkins University, Baltimore, Department of Economics, working paper
De Lima, Pedro (1996), Nuisance Parameter Free Properties of Correlation Integral Based Statistics, Econometric Reviews, vol. 15, no. 3 (August), pp. 237-259
Denker, Manfred & Gerhard Keller (1983), On U-Statistics and v. Mises Statistics for Weakly Dependent
Processes, Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete (Probability Theory and
Stochastics), vol. 64, no. 4 (October), pp. 505-522
Diciccio, Thomas & Joseph Romano (1988), A Review of Bootstrap Confidence Intervals, Journal of the
Royal Statistical Society, Series B (Statistical Methodology), vol. 50, no. 3, pp. 338-354
DuMouchel, W.H. (1983), Estimating the Stable Index in Order to Measure Tail Thickness: A Critique,
Annals of Statistics, vol. 11, no. 4, pp. 1019-1031
Eckmann, J.-P. & David Ruelle (1992), Fundamental Limitations for Estimating Dimensions and Lyapunov
Exponents in Dynamical Systems, Physica D (Nonlinear Phenomena), vol. 56, no.s 2-3 (May), pp.
185-187
Efron, Bradley (1979), Bootstrap Methods: Another Look at the Jackknife, Annals of Statistics, vol. 7, no.
1, pp. 1-26
Efron, Bradley (1982), The Jackknife, The Bootstrap and Other Resampling Plans, CBMS-NSF Regional
Conference Series in Applied Mathematics, Society for Industrial and Applied Mathematics, Philadelphia, vol. 38
Efron, Bradley & Robert Tibshirani (1993), An Introduction to the Bootstrap, Monographs in Statistics and
Applied Probability, vol. 57, Chapman & Hall, New York
Engle, Robert (1982), Autoregressive Conditional Heteroskedasticity with Estimates of the Variance of
United Kingdom Inflation, Econometrica, vol. 50, no. 4, pp. 987-1007
Grassberger, Peter & Itamar Procaccia (1983a), Characterization of Strange Attractors, Physical Review
Letters, vol. 50, no. 5 (January), pp. 346-349
Grassberger, Peter & Itamar Procaccia (1983b), Measuring the Strangeness of Strange Attractors, Physica
D (Nonlinear Phenomena), vol. 9, no.s 1-2, pp. 189-208
Guillaume, Dominique, Olivier Pictet, Ulrich Müller & Michel Dacorogna (1995), Unveiling Non-Linearities Through Time Scale Transformations, Olsen & Associates, Zürich, working paper, 20 September
Guillaume, Dominique, Michel Dacorogna, Rakhal Davé, Ulrich Müller, Richard Olsen & Olivier Pictet
(1997), From the Bird's Eye to the Microscope: A Survey of New Stylized Facts of the Intra-Daily
Foreign Exchange Markets, Finance and Stochastics, vol. 1, no. 2, pp. 95-129
Hinich, Melvin (1982), Testing for Gaussianity and Linearity of a Stationary Time Series, Journal of Time
Series Analysis, vol. 3, no. 3, pp. 169-176
Hinich, Melvin and Douglas Patterson (1985), Evidence of Nonlinearity in Daily Stock Returns, Journal
of Business and Economic Statistics, vol. 3, no. 1, pp. 69-77
Hinich, Melvin & Douglas Patterson (1989), Evidence of Nonlinearity in the Trade-by-Trade Stock Market
Return Generating Process, in William Barnett, John Geweke & Karl Shell, eds., Economic Complexity:
Chaos, Sunspots, Bubbles, and Nonlinearity, Proceedings of the Fourth International Symposium in
Economic Theory and Econometrics, Cambridge University Press, Cambridge, Massachusetts, pp. 383-409
Hinich, Melvin & Douglas Patterson (1990), Relating Sample Bicovariances of a Process to the Parameters
of a Quadratic Nonlinear Model, Applied Research Laboratories, University of Texas at Austin,
technical report
Hinich, Melvin & Douglas Patterson (1993), Intraday Nonlinear Behavior of Stock Prices, Chapter 14 in
Day & Chen (1993), pp. 201-214
Hinkley, David (1988), Bootstrap Methods, Journal of the Royal Statistical Society, Series B (Statistical
Methodology), vol. 50, no. 3, pp. 321-337
Hols, M.C.A.B. & Casper de Vries (1991), The Limiting Distribution of Extremal Exchange Rate Returns,
Journal of Applied Econometrics, vol. 6, no. 3, pp. 287-302
Hsieh, David (1989), Testing for Nonlinear Dependence in Daily Foreign Exchange Rate Changes, Journal of Business, vol. 62, no. 3 (July), pp. 339-368
Hsieh, David (1991), Chaos and Nonlinear Dynamics: Application to Financial Markets, Journal of Finance, vol. 46, no. 5, pp. 1839-1877
Huang, Roger & Hans Stoll (1994), Market Microstructure and Stock Return Predictions, Review of Financial Studies, vol. 7, no. 1 (Spring), pp. 179-213
Huisman, Ronald, Kees Koedijk, Clemens Kool & Franz Palm (1997), Fat Tails in Small Samples, Maastricht University, Limburg Institute of Financial Economics, working paper, September
Huisman, Ronald, Kees Koedijk, Clemens Kool & François Nissen (1998), Extreme Support of Uncovered
Interest Parity, Journal of International Money and Finance, vol. 17, no. 1 (February), pp. 211-228;
apparently revised version of The Unbiasedness Hypothesis from a Panel Perspective, Maastricht
University, Limburg Institute of Financial Economics, working paper, November 1996
Jansen, D.W. & Casper de Vries (1991), On the Frequency of Large Stock Returns: Putting Booms and
Busts into Perspective, Review of Economics and Statistics, vol. 73, no. 1, pp. 18-32
Jungeilges, Jochen (1996), Operational Characteristics of White's Test for Neglected Nonlinearities, in
William Barnett, Alan Kirman & Mark Salmon, eds., Nonlinear Dynamics in Economics, Proceedings
of the Tenth International Symposium in Economic Theory and Econometrics, Cambridge University
Press, Cambridge
Kaplan, Daniel (1994), Exceptional Events as Evidence for Determinism, Physica D (Nonlinear Phenomena), vol. 73, no.s 1-2 (May), pp. 38-48
Kearns, P. & Adrian Pagan (1997), Estimating the Density Tail Index for Financial Time Series, Review
of Economics and Statistics, vol. 79, pp. 171-175
Koedijk, Kees & Clemens Kool (1994), Tail Estimates and the EMS Target Zone, Review of International
Economics, vol. 2, no. 2 (June), pp. 153-165
Koedijk, Kees, M.M.A. Schafgans & Casper de Vries (1990), The Tail Index of Exchange Rate Returns,
Journal of International Economics, vol. 29, no.s 1-2, pp. 93-108
Kohers, Theodor, Vivek Pandey & Gerald Kohers (1997), Using Nonlinear Dynamics to Test for Market
Efficiency Among the Major U.S. Stock Exchanges, Quarterly Review of Economics and Finance, vol.
37, no. 2 (Summer), pp. 523-545
Koppl, Roger & Carlo Nardone (1997), The Angular Distribution of Asset Returns in Delay Space, Fairleigh Dickinson University, Madison, Department of Economics and Finance, working paper, February
Krämer, Walter & Ralf Runde (1997), Chaos and the Compass Rose, Economics Letters, vol. 54, no. 2
(February), pp. 113-118
Küsters, Ulrich & Jens Peter Steffen (1996), Matrix Programming Languages for Statistical Computing:
A Detailed Comparison of GAUSS, MATLAB, and Ox, Diskussionsbeiträge der Katholischen Universität Eichstätt, Wirtschaftswissenschaftliche Fakultät Ingolstadt, Germany, no. 75 (25 October)
Lai, D., X. Wang & J. Wiorkowski (1994), Local Asymptotic Normality for Multivariate Nonlinear AR
Processes, Universities of Texas at Houston and at Dallas, working paper
LeBaron, Blake (1995a), Review of De Grauwe et al. (1993), Journal of International Economics, vol. 39,
no.s 1-2 (August), pp. 185-200
LeBaron, Blake (1995b), Confusion and Misinformation on Financial Chaos [Review of Peters (1994)],
Complexity, vol. 1, pp. 35-37
LeBaron, Blake (1997a), BDSTEST.C, June release accompanying LeBaron (1997b), C source code available
on http://www.econ.wisc.edu/~blebaron/software; revised version of July 1988 and March
1990 releases
LeBaron, Blake (1997b), A Fast Algorithm for the BDS Statistic, Studies in Nonlinear Dynamics and
Econometrics, vol. 2, no. 2 (July), pp. 53-59; reprinted as University of Wisconsin-Madison, Social
Systems Research Institute Reprint, no. 458
LeBaron, Blake, Brian Arthur & Richard Palmer (1998), Time Series Properties of an Artificial Stock
Market, Journal of Economic Dynamics and Control, forthcoming; revised version of University of
Wisconsin-Madison, Social Systems Research Institute Working Paper, no. 9725 (November 1997)
Lee, Tae-Hwy, Halbert White & Clive Granger (1993), Testing for Neglected Nonlinearity in Time Series
Models: A Comparison of Neural Network Methods and Alternative Tests, Journal of Econometrics,
vol. 56, no. 3 (April), pp. 269-290
Li, Hongyi & G.S. Maddala (1996), Bootstrapping Time Series Models, Econometric Reviews, vol. 15,
no. 2 (May), pp. 115-158
Loretan, Mico (1991), Testing Covariance Stationarity of Heavy-Tailed Economic Time Series, Yale University, Ph.D. Dissertation in Economics
Loretan, Mico & Peter Phillips (1994), Testing the Covariance Stationarity of Heavy-Tailed Time Series:
An Overview of the Theory with Applications to Several Financial Datasets, Journal of Empirical Finance, vol. 1, pp. 211-248; reprinted as Yale University, Cowles Foundation Paper, no. 866
Marsaglia, George & T.A. Bray (1968), One-Line Random Number Generators and Their Use in Combinations, Communications of the Association for Computing Machinery, vol. 11, no. 11 (November), pp.
757-759
Marsaglia, George & Wai Wan Tsang (1984), A Fast, Easily Implemented Method for Sampling from Decreasing or Symmetric Unimodal Density Functions, SIAM Journal on Scientific and Statistical Computing, vol. 5, no. 2 (June), pp. 349-359
MathWorks (1997a), Statistics Toolbox for Use with MATLAB, version 2.1.0 (April), The MathWorks, Inc.,
Natick, Massachusetts
MathWorks (1997b), MATLAB: The Language of Technical Computing, version 5.1.0.421 on PCWin (June),
The MathWorks, Inc., Natick, Massachusetts
McCulloch, Huston (1997), Measuring Tail Thickness to Estimate the Stable Index α: A Critique, Journal
of Business and Economic Statistics, vol. 15, no. 1 (January), pp. 74-81
Moler, Cleve (1995), Random Thoughts: 10^435 Years is a Very Long Time, MATLAB News & Notes, Fall,
pp. 12-13
Müller, Ulrich, Michel Dacorogna & Olivier Pictet (1996), Heavy Tails in High-Frequency Financial Data,
Olsen & Associates, Zürich, working paper, 11 December
Nychka, Douglas, Stephen Ellner, Ronald Gallant & Daniel McCaffrey (1992), Finding Chaos in Noisy
Systems, Journal of the Royal Statistical Society, Series B (Statistical Methodology), vol. 54, no. 2, pp.
399-426
Park, Stephen & Keith Miller (1988), Random Number Generators: Good Ones Are Hard to Find, Communications of the Association for Computing Machinery, vol. 31, no. 10 (October), pp. 1192-1201
Peters, Edgar (1994), Fractal Market Analysis: Applying Chaos Theory to Investment and Economics, John
Wiley & Sons, New York
Phillips, Peter & Mico Loretan (1992), Testing Covariance Stationarity under Moment Condition Failure
with an Application to Common Stock Returns, Yale University, Cowles Foundation Discussion Paper,
no. 947 (June)
Pictet, Olivier, Michel Dacorogna & Ulrich Müller (1996), Hill, Bootstrap and Jackknife Estimators for
Heavy Tails, Olsen & Associates, Zürich, working paper, 11 December
Ramsey, James, Chera Sayers and Philip Rothman (1990), The Statistical Properties of Dimension Calculations Using Small Data Sets: Some Economic Applications, International Economic Review, vol. 31,
no. 4 (November), pp. 991-1020
Rust, John (1993), GAUSS and MATLAB: A Comparison, Journal of Applied Econometrics, vol. 8, pp.
307-324
Scheinkman, José & Blake LeBaron (1989), Nonlinear Dynamics and Stock Returns, Journal of Business,
vol. 62, no. 3 (July), pp. 311-337
Theiler, James (1990), Estimating Fractal Dimension, Journal of the Optical Society of America A (Optics
and Image Science), vol. 7, no. 6 (June), pp. 1055-1073
Tsay, Ruey (1986), Nonlinearity Tests for Time Series, Biometrika, vol. 73, no. 2, pp. 461-466
White, Halbert (1989a), Some Asymptotic Results for Learning in Single Hidden-Layer Feedforward Network Models, Journal of the American Statistical Association, vol. 84, no. 408, pp. 1003-1013
White, Halbert (1989b), An Additional Hidden Unit Test for Neglected Nonlinearity in Multilayer Feedforward Networks, Proceedings of the International Joint Conference on Neural Networks, IEEE Press,
New York, vol. 2, pp. 451-455
White, Halbert (1997), A Reality Check for Data Snooping, Econometrica, forthcoming; revised version
of San Diego, California, NRDA, technical report
Wolff, Rodney (1995), A Poisson Distribution for the BDS Test Statistic for Independence in a Time
Series, Chapter 5 in Howell Tong, ed., Chaos and Forecasting: Proceedings of the Royal Society Discussion Meeting, World Scientific, Singapore, pp. 109-127
Appendix B:
Tables
1 February 1999
[Page B2: table of CPU-times and RAM requirements for computing the BDS statistic, parts a) to c), measured on a Pentium-II 233 MHz personal computer running MATLAB 5.1 under Windows NT 4. The columns cover sample sizes from 500 to 98,000 observations, RAM requirements in MB, and CPU-time in minutes for embedding dimensions m up to 19/20; the individual entries, and the runs marked n/a, are not reliably recoverable from the extraction residue.]
1)
The speed of LeBaron's programme depends to a significant extent on the proportion of observations being close. It
is much faster for small ε than for large ε. Here, ε was set to σ; for ε > 2σ, run-time may almost double. (The speed of
the author's algorithms is independent of ε.)
2)
The RAM requirements refer to the amount of memory needed on top of what the operating system and any running
applications, including MATLAB itself, occupy. It was measured using the Windows NT Task Manager. The range of
memory for the BDS.M programme indicates how much RAM the slowest and how much RAM the fastest algorithm
would require. The amount of available RAM was set to 150 MB (appropriate for a machine with 192 MB of physical
RAM), so the author's programme switched to slower algorithms for samples of size 10,000 and larger.
ε/σ = 0.5 (c1 ≈ 0.27)                                                Sample size n = 50

Dimension m =    2       3       4       5       6       7       8       9      10      11      12      13      14      15   N(0,1)

Quantile
 0.5%  -22.66  -27.66  -31.41  -40.37  -54.26  -48.20  -40.93  -34.41  -28.99  -26.20  -22.25  -22.19  -22.79  -24.73   -2.58
 1.0%  -13.87  -16.90  -19.92  -24.97  -28.85  -25.56  -21.97  -18.59  -16.45  -15.29  -14.53  -14.65  -14.99  -16.43   -2.33
 2.5%   -8.01   -9.87  -12.09  -14.93  -15.79  -13.73  -11.88  -10.34   -9.30   -8.68   -8.71   -8.58   -8.98   -9.75   -1.96
 5.0%   -5.75   -6.94   -8.74  -10.85  -10.85   -9.39   -7.98   -6.89   -6.33   -5.95   -5.81   -5.80   -5.89   -6.29   -1.65
95.0%    5.66    6.88    9.14   13.14   18.81   15.41   -0.68   -0.53   -0.40   -0.29   -0.21   -0.15   -0.11   -0.07    1.65
97.5%    8.27   10.05   13.92   20.89   31.30   36.99   -0.44   -0.41   -0.30   -0.22   -0.15   -0.10   -0.07   -0.05    1.96
99.0%   13.99   16.09   23.76   34.03   56.16   74.44   73.59   -0.29   -0.22   -0.15   -0.10   -0.07   -0.04   -0.03    2.33
99.5%   21.47   27.66   37.27   57.80   89.30  118.37  147.67   45.58   -0.18   -0.12   -0.08   -0.05   -0.03   -0.02    2.58

Size
% < -2.576  19.8  23.9  29.7  37.6  57.7  53.1  41.4  32.3  26.3  22.0  19.5  16.1  17.8  16.7    0.5
% < -2.326  22.3  26.5  32.2  39.6  61.8  59.2  47.3  37.4  30.4  25.5  22.2  18.0  20.2  18.8    1.0
% < -1.960  26.4  30.5  35.8  43.0  67.5  68.7  57.5  46.2  37.9  32.2  27.5  21.6  25.0  22.9    2.5
% < -1.645  30.3  34.1  39.2  46.1  71.4  76.9  67.5  55.9  46.1  39.3  33.8  25.6  30.3  27.5    5.0
% > 1.645   23.5  25.5  27.4  27.5  21.9   7.3   2.0   0.6   0.1   0.0   0.0   0.0   0.0   0.0    5.0
% > 1.960   20.7  22.8  25.2  26.1  21.3   7.3   2.0   0.6   0.1   0.0   0.0   0.0   0.0   0.0    2.5
% > 2.326   17.8  20.0  23.1  24.3  20.7   7.3   2.0   0.6   0.1   0.0   0.0   0.0   0.0   0.0    1.0
% > 2.576   16.0  18.4  21.5  23.3  20.2   7.2   2.0   0.6   0.1   0.0   0.0   0.0   0.0   0.0    0.5

[Median, mean, standard deviation, skewness and kurtosis survive in the extraction residue only for m = 2-4: medians -0.36, -0.44, -0.72; means -0.67, -0.48, -1.10; standard deviations 41.97, 38.87, 109.5; skewness -91.35, -2.32, -96.49; kurtosis 10939, 3839, 13409 (N(0,1) reference: 0.00, 0.00, 1.00, 0.00, 3.00).]
ε/σ = 1.0 (c1 ≈ 0.51)                                                Sample size n = 50

Dimension m =    2      3      4      5      6      7      8      9     10     11     12     13     14     15   N(0,1)

Quantile
 0.5%   -4.66  -5.12  -5.55  -6.05  -6.45  -6.99  -7.46  -7.65  -7.89  -7.72  -8.09  -8.60  -9.37 -10.87   -2.58
 1.0%   -4.10  -4.42  -4.67  -5.18  -5.49  -5.89  -6.27  -6.45  -6.57  -6.65  -6.69  -6.95  -7.70  -8.25   -2.33
 2.5%   -3.35  -3.55  -3.78  -4.10  -4.37  -4.58  -4.92  -5.06  -5.13  -5.13  -5.06  -5.25  -5.45  -5.83   -1.96
 5.0%   -2.84  -2.99  -3.12  -3.32  -3.54  -3.75  -4.01  -4.12  -4.20  -4.16  -4.03  -4.06  -4.17  -4.28   -1.65
95.0%    2.76   2.89   3.03   3.27   3.64   4.11   4.81   5.66   6.60   7.15   6.60   2.59  -0.18  -0.15    1.65
97.5%    3.50   3.70   3.94   4.35   4.98   5.74   6.81   8.33  10.21  11.97  13.38  12.07   4.40  -0.10    1.96
99.0%    4.52   4.90   5.25   5.97   6.78   8.21   9.89  12.32  15.44  20.26  24.66  27.84  26.77  16.38    2.33
99.5%    5.43   5.82   6.46   7.24   8.60  10.65  12.56  16.11  21.04  26.66  34.22  40.93  46.54  46.35    2.58

Size
% < -2.576   7.0   7.8   8.8  10.2  11.7  13.6  15.8  18.1  18.3  17.1  15.5  14.2  13.2  12.9    0.5
% < -2.326   9.3  10.3  11.3  13.0  14.5  16.7  19.4  21.9  22.7  21.0  18.9  17.3  15.7  15.2    1.0
% < -1.960  13.8  14.9  16.4  17.9  20.0  22.2  25.5  28.6  30.3  28.8  25.9  23.3  21.1  19.7    2.5
% < -1.645  18.6  20.2  21.7  23.4  25.5  27.9  31.4  35.6  38.5  37.5  33.9  30.2  27.1  24.7    5.0
% > 1.645   13.5  13.6  14.2  14.8  15.5  16.5  17.3  17.5  16.4  13.6   9.3   5.4   2.7   1.4    5.0
% > 1.960   10.3  10.5  11.2  11.8  12.9  14.0  15.1  15.5  15.0  12.6   9.0   5.3   2.7   1.4    2.5
% > 2.326    7.4   8.0   8.5   9.3  10.4  11.5  12.9  13.8  13.5  11.8   8.6   5.1   2.7   1.4    1.0
% > 2.576    5.9   6.5   7.0   7.9   8.9  10.3  11.6  12.7  12.6  11.2   8.4   5.0   2.6   1.4    0.5

Median  -0.27 -0.34 -0.40 -0.44 -0.52 -0.62 -0.77 -1.04 -1.27 -1.28 -1.19 -1.07 -0.95 -0.84   0.00
Mean    -0.19 -0.23 -0.26 -0.29 -0.30 -0.31 -0.33 -0.35 -0.37 -0.40 -0.42 -0.49 -0.58 -0.65   0.00
Std. dev.  1.79  1.88  2.02  2.18  2.41  2.67  3.04  3.54  4.15  4.90  5.81  6.82  8.12  9.72   1.00
Skewness   0.43  0.84  1.88  1.74  2.42  1.63  1.83  2.73  4.00  5.84  8.43 12.01 17.82 26.08   0.00
Kurtosis   8.94 16.10 51.51 40.27 59.03 20.24 20.89 31.04 44.78 71.68 134.78 250.53 543.09 1134.7   3.00
ε/σ = 1.5 (c1 ≈ 0.71)                                                Sample size n = 50

Dimension m =    2      3      4      5      6      7      8      9     10     11     12     13     14     15   N(0,1)

Quantile
 0.5%   -4.09  -4.01  -4.14  -4.08  -4.15  -4.30  -4.38  -4.53  -4.98  -5.32  -5.64  -6.19  -7.05  -7.52   -2.58
 1.0%   -3.69  -3.65  -3.75  -3.69  -3.75  -3.92  -3.98  -4.12  -4.37  -4.64  -4.85  -5.19  -5.76  -6.35   -2.33
 2.5%   -3.08  -3.12  -3.19  -3.19  -3.25  -3.32  -3.36  -3.49  -3.63  -3.74  -3.94  -4.21  -4.46  -4.74   -1.96
 5.0%   -2.63  -2.67  -2.73  -2.75  -2.80  -2.84  -2.88  -2.97  -3.05  -3.14  -3.24  -3.40  -3.55  -3.74   -1.65
95.0%    2.46   2.46   2.47   2.46   2.44   2.47   2.53   2.60   2.67   2.76   2.85   2.95   3.07   3.20    1.65
97.5%    3.01   3.04   3.07   3.07   3.14   3.27   3.34   3.48   3.64   3.78   3.96   4.25   4.54   4.87    1.96
99.0%    3.64   3.69   3.76   3.91   4.05   4.26   4.43   4.62   4.92   5.22   5.65   6.13   6.54   7.09    2.33
99.5%    4.15   4.11   4.26   4.57   4.70   4.96   5.22   5.56   6.05   6.40   6.90   7.41   8.29   9.19    2.58

Size
% < -2.576   5.5   5.7   6.2   6.4   6.8   7.1   7.6   8.1   8.4   8.9   9.8  10.6  11.2  11.9    0.5
% < -2.326   7.4   8.0   8.5   9.0   9.3   9.6  10.3  11.1  11.4  11.6  12.6  13.3  13.8  14.5    1.0
% < -1.960  11.5  12.2  13.2  14.0  14.7  14.9  15.6  16.4  16.9  17.2  17.8  18.6  19.1  19.5    2.5
% < -1.645  16.4  17.5  18.5  19.5  20.2  20.7  21.6  22.4  22.9  23.5  23.9  24.7  25.0  25.4    5.0
% > 1.645   11.9  11.5  11.4  11.0  10.7  10.6  10.5  10.6  10.4  10.7  10.6  10.8  10.8  10.7    5.0
% > 1.960    8.7   8.4   8.3   8.2   8.0   8.1   8.0   8.3   8.3   8.5   8.7   8.9   9.1   9.1    2.5
% > 2.326    5.9   5.8   5.8   5.7   5.6   5.7   6.1   6.1   6.5   6.6   6.8   7.1   7.5   7.7    1.0
% > 2.576    4.3   4.3   4.4   4.4   4.4   4.6   4.8   5.1   5.3   5.7   5.9   6.1   6.6   6.8    0.5

Median  -0.23 -0.29 -0.34 -0.38 -0.43 -0.47 -0.51 -0.56 -0.61 -0.64 -0.68 -0.73 -0.79 -0.82   0.00
Mean    -0.18 -0.22 -0.26 -0.30 -0.34 -0.36 -0.39 -0.42 -0.45 -0.47 -0.50 -0.54 -0.58 -0.63   0.00
Std. dev.  1.54  1.55  1.58  1.59  1.61  1.64  1.67  1.72  1.78  1.85  1.94  2.04  2.17  2.30   1.00
Skewness   0.18  0.24  0.28  0.37  0.43  0.50  0.58  0.67  0.72  0.81  0.91  1.06  1.13  1.39   0.00
Kurtosis   3.25  3.27  3.38  3.57  3.81  4.10  4.42  4.89  5.58  6.49  7.68  9.10 10.65 13.19   3.00
ε/σ = 2.0 (c1 ≈ 0.84)                                                Sample size n = 50

Dimension m =    2      3      4      5      6      7      8      9     10     11     12     13     14     15   N(0,1)

Quantile
 0.5%   -4.86  -4.77  -4.74  -4.67  -4.83  -4.89  -4.86  -5.14  -5.33  -5.48  -5.73  -6.05  -6.45  -6.89   -2.58
 1.0%   -4.42  -4.34  -4.35  -4.28  -4.40  -4.48  -4.52  -4.62  -4.78  -4.88  -5.11  -5.32  -5.65  -5.94   -2.33
 2.5%   -3.69  -3.68  -3.67  -3.69  -3.78  -3.79  -3.86  -3.96  -4.02  -4.09  -4.26  -4.41  -4.63  -4.87   -1.96
 5.0%   -3.02  -3.09  -3.11  -3.15  -3.23  -3.25  -3.33  -3.38  -3.44  -3.53  -3.60  -3.69  -3.82  -3.98   -1.65
95.0%    2.83   2.80   2.79   2.76   2.72   2.72   2.71   2.71   2.67   2.68   2.65   2.62   2.61   2.62    1.65
97.5%    3.50   3.42   3.46   3.44   3.44   3.42   3.40   3.40   3.41   3.39   3.44   3.46   3.48   3.52    1.96
99.0%    4.23   4.26   4.19   4.22   4.27   4.22   4.25   4.24   4.30   4.40   4.48   4.53   4.55   4.74    2.33
99.5%    4.70   4.77   4.71   4.72   4.81   4.82   4.91   4.93   5.08   5.16   5.21   5.30   5.44   5.53    2.58

Size
% < -2.576   7.6   8.4   9.2   9.5  10.2  10.7  11.0  11.5  12.3  12.7  13.2  13.8  14.5  14.8    0.5
% < -2.326   9.5  11.0  11.8  12.3  13.1  13.6  14.2  14.5  15.3  15.8  16.2  16.6  17.6  17.8    1.0
% < -1.960  13.4  15.4  16.5  17.2  18.0  18.7  19.5  20.0  20.7  21.1  21.5  22.1  22.7  23.0    2.5
% < -1.645  18.0  20.2  21.6  22.5  23.3  24.3  24.9  25.7  26.2  26.8  27.2  27.7  28.3  28.5    5.0
% > 1.645   14.0  13.7  13.5  13.3  13.1  12.7  12.6  12.3  12.2  11.9  11.6  11.3  10.9  10.4    5.0
% > 1.960   10.8  10.4  10.4  10.1   9.9   9.8   9.7   9.5   9.4   9.3   9.0   8.7   8.5   8.2    2.5
% > 2.326    7.7   7.6   7.5   7.3   7.2   7.1   7.1   7.0   6.8   6.8   6.7   6.4   6.4   6.2    1.0
% > 2.576    6.2   6.1   6.0   5.8   5.7   5.7   5.7   5.6   5.5   5.5   5.3   5.2   5.2   5.2    0.5

Median  -0.23 -0.29 -0.34 -0.38 -0.42 -0.46 -0.49 -0.51 -0.55 -0.59 -0.62 -0.65 -0.67 -0.69   0.00
Mean    -0.17 -0.24 -0.28 -0.31 -0.35 -0.39 -0.42 -0.46 -0.49 -0.52 -0.56 -0.60 -0.64 -0.68   0.00
Std. dev.  1.74  1.77  1.78  1.79  1.81  1.82  1.83  1.85  1.87  1.89  1.92  1.94  1.99  2.03   1.00
Skewness   0.13  0.18  0.20  0.22  0.21  0.23  0.24  0.23  0.22  0.22  0.20  0.17  0.11  0.04   0.00
Kurtosis   3.51  3.39  3.29  3.27  3.31  3.31  3.35  3.49  3.62  3.81  3.97  4.14  4.39  4.78   3.00
B6
/ = 0.5 (c 1 0.27)
0.5%
1.0%
2.5%
5.0%
95.0%
97.5%
99.0%
99.5%
-5.33
-4.66
-3.78
-3.18
3.24
4.14
5.49
6.49
-6.39
-5.51
-4.50
-3.71
3.83
4.90
6.35
7.74
-7.91
-6.74
-5.56
-4.57
4.77
6.21
8.22
9.72
% < -2.576
% < -2.326
% < -1.960
% < -1.645
% > 1.645
% > 1.960
% > 2.326
% > 2.576
9.2
11.7
16.3
21.5
16.5
13.2
10.2
8.4
12.8
15.7
20.4
25.1
19.0
15.8
12.8
10.8
17.9
20.8
25.5
30.2
22.3
19.1
15.9
14.3
Dimension m=
10
11
12
13
14
-9.88 -11.20
-8.49 -9.70
-6.90 -8.11
-5.67 -6.93
6.61 9.80
8.75 13.58
11.79 19.23
14.34 23.76
-9.92
-8.72
-7.44
-6.44
14.90
21.82
33.76
43.12
-8.48
-7.42
-6.27
-5.45
20.07
34.20
58.23
79.06
-7.07
-6.29
-5.35
-4.62
-1.13
19.50
87.29
-6.10
-5.49
-4.59
-3.98
-0.96
-0.82
-0.61
61.0
64.9
68.6
69.8
25.9
24.8
23.8
23.0
57.9
67.2
78.8
86.0
9.5
9.5
9.5
9.5
40.4
50.5
67.1
80.9
2.6
2.6
2.6
2.6
25.5
33.9
49.9
66.0
0.8
0.8
0.8
0.8
-2.79
-0.10
13.21
9.20
174.6
-2.34
-0.08
21.36
17.45
499.1
-1.96
-0.10
34.93
32.14
1439
15 N(0,1)
-5.59
-4.88
-4.04
-3.50
-0.78
-0.67
-0.56
130.73 137.15 -0.47
-5.07
-4.48
-3.66
-3.14
-0.63
-0.53
-0.45
-0.39
-4.83
-4.14
-3.39
-2.86
-0.50
-0.43
-0.35
-0.30
-4.58
-3.90
-3.20
-2.64
-0.41
-0.34
-0.28
-0.24
-4.41
-3.83
-3.03
-2.50
-0.33
-0.27
-0.21
-0.18
-2.58
-2.33
-1.96
-1.65
1.65
1.96
2.33
2.58
16.0
22.2
35.1
50.2
0.2
0.2
0.2
0.2
10.5
14.9
24.6
37.6
0.1
0.1
0.1
0.1
7.5
10.7
17.9
27.8
0.0
0.0
0.0
0.0
5.6
8.0
13.5
21.2
0.0
0.0
0.0
0.0
4.5
6.2
10.6
17.2
0.0
0.0
0.0
0.0
0.5
1.0
2.5
5.0
5.0
2.5
1.0
0.5
-1.65
0.03
60.32
54.97
3983
-1.40
0.02
99.20
95.03
10762
0.00
0.00
1.00
0.00
3.00
Quantile
Size
Median
Mean
Std. dev.
Skewness
Kurtosis
25.2
28.0
32.4
36.4
25.6
23.0
20.4
18.7
34.2
37.3
42.8
47.6
27.4
25.6
23.4
22.1
/ = 1.0 (c 1 0.52)
10
11
12
13
14
0.5%
1.0%
2.5%
5.0%
95.0%
97.5%
99.0%
99.5%
-3.16
-2.88
-2.47
-2.12
2.16
2.64
3.25
3.72
-3.23
-2.93
-2.52
-2.16
2.20
2.73
3.37
3.81
-3.32
-3.00
-2.58
-2.21
2.28
2.85
3.56
4.06
-3.42
-3.12
-2.66
-2.29
2.41
3.05
3.87
4.39
-3.66
-3.27
-2.80
-2.40
2.60
3.28
4.18
4.94
-3.90
-3.50
-2.93
-2.53
2.80
3.67
4.77
5.58
-4.20
-3.72
-3.14
-2.69
3.13
4.16
5.42
6.37
-4.41
-3.97
-3.34
-2.87
3.61
4.84
6.40
7.58
-4.57
-4.10
-3.49
-3.03
4.18
5.68
7.74
9.39
-4.57
-4.12
-3.52
-3.07
5.01
6.96
9.92
12.14
-4.40
-4.01
-3.42
-3.03
6.01
8.84
12.57
15.70
-4.27
-3.85
-3.30
-2.90
7.12
10.88
16.04
20.87
-4.14
-3.73
-3.18
-2.76
7.81
12.77
21.06
27.11
-4.03
-3.63
-3.06
-2.64
7.05
14.23
26.29
35.25
-2.58
-2.33
-1.96
-1.65
1.65
1.96
2.33
2.58
% < -2.576
% < -2.326
% < -1.960
% < -1.645
% > 1.645
% > 1.960
% > 2.326
% > 2.576
2.0
3.4
6.7
11.0
9.6
6.5
3.9
2.7
2.2
3.7
7.0
11.7
9.6
6.7
4.2
3.1
2.5
4.0
7.8
12.6
10.1
7.2
4.7
3.5
2.9
4.6
8.5
13.6
10.9
8.0
5.4
4.2
3.7
5.6
9.8
15.1
11.7
8.9
6.4
5.1
4.6
6.7
11.3
17.0
12.7
10.0
7.5
6.1
5.9
8.2
13.2
18.8
13.9
11.2
8.8
7.4
7.5
10.3
15.7
22.0
15.3
12.6
10.2
8.8
9.2
12.7
19.1
26.1
16.6
14.1
11.9
10.7
10.7
15.0
23.2
31.4
17.6
15.5
13.5
12.2
10.3
15.1
25.5
36.6
17.8
16.1
14.4
13.4
8.6
13.0
22.9
35.8
15.8
14.6
13.4
12.8
7.0
10.4
19.0
30.5
12.6
12.1
11.4
11.0
5.5
8.3
15.2
25.2
8.1
8.0
7.8
7.7
0.5
1.0
2.5
5.0
5.0
2.5
1.0
0.5
-0.18 -0.22 -0.26 -0.29 -0.32 -0.36 -0.41 -0.50 -0.62 -0.86 -1.26 -1.36 -1.29 -1.19
-0.10 -0.13 -0.15 -0.16 -0.17 -0.17 -0.17 -0.18 -0.18 -0.17 -0.17 -0.17 -0.16 -0.18
1.31 1.33 1.38 1.45 1.54 1.67 1.84 2.06 2.36 2.77 3.29 3.93 4.72 5.64
0.31 0.37 0.48 0.58 0.69 0.83 1.04 1.33 1.77 2.45 3.33 4.41 5.83 7.86
3.23 3.42 3.77 4.02 4.50 5.10 5.99 7.49 10.19 15.70 24.26 36.42 56.28 94.98
0.00
0.00
1.00
0.00
3.00
Dimension m=
15 N(0,1)
Quantile
Size
Median
Mean
Std. dev.
Skewness
Kurtosis
B7
/ = 1.5 (c 1 0.71)
10
11
12
13
14
0.5%
1.0%
2.5%
5.0%
95.0%
97.5%
99.0%
99.5%
-3.15
-2.88
-2.45
-2.09
2.02
2.47
2.98
3.27
-3.12
-2.88
-2.48
-2.11
2.02
2.48
3.01
3.41
-3.15
-2.87
-2.44
-2.13
2.00
2.45
3.03
3.45
-3.14
-2.87
-2.47
-2.12
2.03
2.49
3.07
3.50
-3.10
-2.86
-2.49
-2.15
2.02
2.54
3.16
3.61
-3.14
-2.86
-2.49
-2.16
2.05
2.59
3.23
3.72
-3.08
-2.86
-2.48
-2.17
2.06
2.65
3.31
3.86
-3.13
-2.85
-2.48
-2.16
2.13
2.72
3.48
4.05
-3.15
-2.89
-2.50
-2.16
2.18
2.82
3.64
4.26
-3.23
-2.95
-2.50
-2.17
2.24
2.94
3.87
4.46
-3.21
-2.95
-2.53
-2.19
2.31
3.09
4.10
4.84
-3.27
-2.99
-2.56
-2.22
2.39
3.25
4.35
5.16
-3.38
-3.03
-2.60
-2.25
2.50
3.41
4.64
5.56
-3.49
-3.15
-2.63
-2.28
2.62
3.60
5.02
6.10
-2.58
-2.33
-1.96
-1.65
1.65
1.96
2.33
2.58
% < -2.576
% < -2.326
% < -1.960
% < -1.645
% > 1.645
% > 1.960
% > 2.326
% > 2.576
1.9
3.2
6.3
10.6
8.4
5.4
3.1
2.0
2.1
3.3
6.5
11.0
8.3
5.4
3.2
2.1
1.9
3.2
6.9
11.4
8.2
5.3
3.1
2.0
2.0
3.4
6.9
11.8
8.2
5.5
3.2
2.2
2.1
3.6
7.3
11.9
8.2
5.4
3.4
2.4
2.0
3.5
7.3
12.2
8.2
5.5
3.6
2.6
2.0
3.5
7.4
12.4
8.2
5.8
3.7
2.8
2.0
3.5
7.3
12.5
8.5
6.0
3.9
2.9
2.1
3.6
7.4
12.9
8.6
6.2
4.3
3.3
2.1
3.7
7.3
12.9
8.8
6.5
4.6
3.6
2.3
3.8
7.9
13.5
9.1
6.8
4.9
4.0
2.4
4.0
8.2
14.0
9.4
7.2
5.3
4.3
2.6
4.4
8.5
14.2
9.7
7.6
5.7
4.7
2.9
4.5
8.9
14.8
10.1
8.0
6.2
5.2
0.5
1.0
2.5
5.0
5.0
2.5
1.0
0.5
-0.16 -0.21 -0.24 -0.27 -0.29 -0.31 -0.33 -0.36 -0.38 -0.41 -0.43 -0.46 -0.48 -0.51
-0.12 -0.15 -0.17 -0.19 -0.20 -0.21 -0.22 -0.23 -0.24 -0.25 -0.25 -0.26 -0.26 -0.26
1.25 1.26 1.26 1.27 1.28 1.29 1.30 1.32 1.35 1.37 1.41 1.46 1.51 1.58
0.19 0.25 0.29 0.34 0.39 0.45 0.53 0.62 0.72 0.84 0.97 1.11 1.28 1.48
3.05 3.13 3.20 3.26 3.39 3.52 3.71 3.96 4.32 4.80 5.37 6.09 7.14 8.43
0.00
0.00
1.00
0.00
3.00
eps = 2.0 (c1 ~ 0.84)

Quantiles of the BDS statistic (rows: embedding dimension m; last row: N(0,1)):
   m     0.5%    1.0%    2.5%    5.0%   95.0%   97.5%   99.0%   99.5%
   2    -3.66   -3.28   -2.73   -2.28    2.26    2.77    3.34    3.73
   3    -3.60   -3.24   -2.75   -2.33    2.22    2.71    3.35    3.74
   4    -3.56   -3.25   -2.77   -2.37    2.17    2.69    3.32    3.76
   5    -3.52   -3.25   -2.81   -2.38    2.17    2.67    3.32    3.78
   6    -3.43   -3.17   -2.76   -2.37    2.17    2.66    3.32    3.74
   7    -3.44   -3.14   -2.73   -2.36    2.15    2.67    3.34    3.78
   8    -3.43   -3.15   -2.74   -2.37    2.14    2.67    3.37    3.84
   9    -3.47   -3.16   -2.72   -2.37    2.14    2.72    3.39    3.92
  10    -3.46   -3.18   -2.74   -2.38    2.14    2.71    3.40    3.93
  11    -3.48   -3.20   -2.76   -2.37    2.13    2.72    3.44    4.00
  12    -3.48   -3.16   -2.76   -2.39    2.13    2.75    3.48    4.04
  13    -3.56   -3.24   -2.80   -2.41    2.13    2.77    3.56    4.07
  14    -3.56   -3.26   -2.81   -2.43    2.14    2.77    3.59    4.10
  15    -3.62   -3.31   -2.82   -2.44    2.14    2.78    3.60    4.23
 N(0,1) -2.58   -2.33   -1.96   -1.65    1.65    1.96    2.33    2.58

Size in % (share of statistics beyond the N(0,1) critical values; last row: nominal):
   m   <-2.576 <-2.326 <-1.960 <-1.645  >1.645  >1.960  >2.326  >2.576
   2      3.1     4.7     7.9    12.0    10.5     7.4     4.6     3.3
   3      3.3     5.0     8.5    13.0    10.0     7.1     4.4     3.1
   4      3.5     5.3     9.3    14.1     9.8     6.8     4.2     2.9
   5      3.6     5.5     9.5    14.4     9.5     6.5     4.0     2.9
   6      3.4     5.4     9.6    14.6     9.3     6.4     4.0     2.8
   7      3.4     5.3     9.6    14.7     9.0     6.3     3.9     2.9
   8      3.4     5.4     9.8    15.1     8.8     6.1     3.9     2.9
   9      3.4     5.4     9.6    15.0     8.9     6.1     4.0     3.0
  10      3.5     5.4    10.0    15.4     8.8     6.1     4.0     2.9
  11      3.5     5.5    10.0    15.5     8.7     6.1     4.0     3.0
  12      3.6     5.5    10.0    15.6     8.5     6.1     4.0     3.0
  13      3.8     5.8    10.3    15.9     8.3     6.0     4.0     3.1
  14      3.8     6.0    10.6    16.2     8.3     6.0     4.1     3.1
  15      3.9     6.1    10.6    16.6     8.2     6.1     4.1     3.1
 nominal  0.5     1.0     2.5     5.0     5.0     2.5     1.0     0.5

Moments (columns: m = 2, ..., 15; last column: N(0,1)):
 Median   -0.17 -0.20 -0.22 -0.26 -0.27 -0.29 -0.32 -0.35 -0.37 -0.39 -0.41 -0.42 -0.44 -0.45 |  0.00
 Mean     -0.10 -0.14 -0.17 -0.20 -0.22 -0.23 -0.25 -0.26 -0.28 -0.29 -0.30 -0.32 -0.33 -0.34 |  0.00
 Std.dev.  1.37  1.38  1.39  1.38  1.38  1.37  1.38  1.38  1.38  1.39  1.39  1.40  1.40  1.41 |  1.00
 Skewness  0.18  0.20  0.22  0.25  0.29  0.33  0.35  0.38  0.41  0.45  0.49  0.51  0.54  0.56 |  0.00
 Kurtosis  3.28  3.23  3.21  3.23  3.23  3.29  3.35  3.43  3.50  3.61  3.72  3.84  3.94  4.10 |  3.00
B8
eps = 0.5 (c1 ~ 0.27)

Quantiles of the BDS statistic (rows: embedding dimension m; last row: N(0,1); a dot
marks a cell whose value could not be recovered from the source):
   m     0.5%    1.0%    2.5%    5.0%   95.0%   97.5%   99.0%   99.5%
   2    -3.25   -2.94   -2.54   -2.18    2.27    2.81    3.49    3.96
   3    -3.64   -3.32   -2.86   -2.40    2.50    3.13    3.90    4.49
   4    -4.32   -3.91   -3.29   -2.79    2.98    3.75    4.70    5.42
   5    -5.44   -4.85   -4.11   -3.47    3.79    4.78    6.00    7.01
   6    -6.76   -6.19   -5.19   -4.45    5.19    6.63    8.37    9.81
   7    -8.03   -7.40   -6.55   -5.78    7.68   10.09   13.11   15.49
   8    -7.35   -6.95   -6.30   -5.83   12.18   16.49   22.93   28.61
   9    -6.27   -5.91   -5.40   -5.00   18.51   28.55   42.19   54.34
  10    -5.29   -5.02   -4.59   -4.25   13.10   43.94   73.31     .
  11    -4.57   -4.33   -3.95   -3.65   -1.69   -1.52     .       .
  12    -4.01   -3.77   -3.46   -3.17   -1.44   -1.34   -1.20   -1.08
  13    -3.58   -3.35   -3.05   -2.80   -1.23   -1.13   -1.03   -0.94
  14    -3.22   -3.02   -2.71   -2.48   -1.05   -0.96   -0.87   -0.82
  15    -2.90   -2.72   -2.45   -2.23   -0.90   -0.82   -0.74   -0.69
 N(0,1) -2.58   -2.33   -1.96   -1.65    1.65    1.96    2.33    2.58

(Three further quantile values appear on this page -- 98.79, 102.44 and 165.50 --
which presumably fill the dotted cells, but their exact placement is garbled in the
source.)

Size in % (share of statistics beyond the N(0,1) critical values; last row: nominal):
   m   <-2.576 <-2.326 <-1.960 <-1.645  >1.645  >1.960  >2.326  >2.576
   2      2.3     3.9     7.1    11.8    10.3     7.3     4.7     3.4
   3      3.9     5.6     9.5    14.3    12.5     9.1     6.2     4.6
   4      6.7     9.0    13.5    18.4    15.6    12.1     9.0     7.2
   5     11.9    14.4    19.2    23.8    19.7    16.4    13.2    11.3
   6     19.4    22.4    27.3    31.5    24.5    21.6    18.5    16.7
   7     29.0    33.0    38.7    41.5    27.8    25.6    23.2    21.6
   8     50.3    50.4    50.4    50.5    25.3    24.4    23.3    22.7
   9     79.6    82.2    83.2    83.3    16.7    16.7    16.7    16.7
  10     72.2    83.9    92.9    94.8     5.1     5.1     5.1     5.1
  11     44.9    63.1    85.9    95.9     1.4     1.4     1.4     1.4
  12     21.8    36.4    65.1    87.1     0.4     0.4     0.4     0.4
  13      9.1    17.7    40.9    68.9     0.1     0.1     0.1     0.1
  14      3.8     7.9    23.0    47.1     0.0     0.0     0.0     0.0
  15      1.6     3.7    11.9    29.8     0.0     0.0     0.0     0.0
 nominal  0.5     1.0     2.5     5.0     5.0     2.5     1.0     0.5

Moments (recovered for m = 10, ..., 15 only; last column: N(0,1)):
            m=10    m=11    m=12    m=13    m=14     m=15
 Median    -2.94   -2.50   -2.14   -1.85   -1.61    -1.42 |  0.00
 Mean      -0.10   -0.12   -0.02    0.36    0.44    -0.19 |  0.00
 Std.dev.  15.74   25.73   43.11   73.17  115.74   143.83 |  1.00
 Skewness   8.04   15.21   26.04   42.37   67.82   111.95 |  0.00
 Kurtosis  95.01   305.0   844.1    2176    5170    12546 |  3.00
eps = 1.0 (c1 ~ 0.52)

Quantiles of the BDS statistic (rows: embedding dimension m; last row: N(0,1)):
   m     0.5%    1.0%    2.5%    5.0%   95.0%   97.5%   99.0%   99.5%
   2    -2.63   -2.42   -2.09   -1.81    1.83    2.26    2.76    3.08
   3    -2.57   -2.38   -2.11   -1.83    1.88    2.30    2.82    3.15
   4    -2.58   -2.39   -2.09   -1.83    1.88    2.35    2.93    3.31
   5    -2.61   -2.40   -2.10   -1.84    1.95    2.44    3.01    3.47
   6    -2.67   -2.44   -2.13   -1.86    2.03    2.54    3.18    3.70
   7    -2.70   -2.48   -2.18   -1.90    2.15    2.69    3.40    3.97
   8    -2.81   -2.59   -2.24   -1.97    2.25    2.88    3.78    4.39
   9    -2.96   -2.72   -2.37   -2.06    2.45    3.12    4.11    4.91
  10    -3.12   -2.87   -2.51   -2.20    2.71    3.49    4.58    5.38
  11    -3.32   -3.06   -2.71   -2.37    3.05    3.93    5.21    6.49
  12    -3.51   -3.28   -2.91   -2.57    3.56    4.68    6.22    7.70
  13    -3.53   -3.33   -3.01   -2.73    4.21    5.60    7.76    9.43
  14    -3.44   -3.24   -2.99   -2.76    5.05    6.99    9.79   12.14
  15    -3.24   -3.08   -2.83   -2.63    6.33    8.82   12.75   15.87
 N(0,1) -2.58   -2.33   -1.96   -1.65    1.65    1.96    2.33    2.58

Size in % (share of statistics beyond the N(0,1) critical values; last row: nominal):
   m   <-2.576 <-2.326 <-1.960 <-1.645  >1.645  >1.960  >2.326  >2.576
   2      0.6     1.3     3.5     7.1     6.7     4.1     2.2     1.4
   3      0.5     1.2     3.6     7.4     7.0     4.4     2.4     1.6
   4      0.5     1.2     3.7     7.5     7.0     4.4     2.6     1.8
   5      0.6     1.3     3.8     7.5     7.4     4.9     2.9     2.0
   6      0.7     1.4     3.9     8.0     8.0     5.5     3.3     2.4
   7      0.7     1.7     4.4     8.7     8.8     6.2     4.0     2.9
   8      1.0     2.0     5.0     9.6     9.9     7.0     4.6     3.6
   9      1.5     2.7     6.2    11.2    11.0     8.1     5.6     4.4
  10      2.1     3.8     7.9    13.2    12.4     9.7     7.0     5.6
  11      3.4     5.4    10.3    16.2    14.3    11.3     8.7     7.3
  12      5.0     7.7    13.7    20.2    16.4    13.5    10.6     9.0
  13      7.0    11.0    18.3    25.3    18.0    15.4    12.7    11.4
  14      8.1    14.0    24.8    32.8    19.3    17.1    14.8    13.4
  15      6.0    12.6    29.8    44.8    19.7    17.8    16.0    14.9
 nominal  0.5     1.0     2.5     5.0     5.0     2.5     1.0     0.5

Moments (columns: m = 2, ..., 15; last column: N(0,1)):
 Median   -0.12 -0.14 -0.16 -0.18 -0.21 -0.22 -0.24 -0.26 -0.29 -0.34 -0.42 -0.58 -0.84 -1.46 |  0.00
 Mean     -0.07 -0.08 -0.09 -0.10 -0.10 -0.10 -0.09 -0.09 -0.08 -0.08 -0.07 -0.07 -0.07 -0.06 |  0.00
 Std.dev.  1.11  1.13  1.14  1.16  1.19  1.24  1.31  1.41  1.55  1.73  1.98  2.31  2.74  3.30 |  1.00
 Skewness  0.27  0.33  0.41  0.50  0.58  0.67  0.77  0.87  1.01  1.19  1.46  1.83  2.40  3.24 |  0.00
 Kurtosis  3.08  3.13  3.32  3.56  3.75  4.00  4.35  4.82  5.48  6.39  8.08 10.89 16.70 28.34 |  3.00
B9
eps = 1.5 (c1 ~ 0.71)

Quantiles of the BDS statistic (rows: embedding dimension m; last row: N(0,1)):
   m     0.5%    1.0%    2.5%    5.0%   95.0%   97.5%   99.0%   99.5%
   2    -2.72   -2.48   -2.13   -1.81    1.82    2.22    2.68    2.97
   3    -2.70   -2.49   -2.12   -1.81    1.82    2.22    2.67    3.01
   4    -2.67   -2.46   -2.14   -1.85    1.82    2.25    2.74    3.09
   5    -2.63   -2.43   -2.13   -1.84    1.84    2.25    2.77    3.12
   6    -2.60   -2.40   -2.10   -1.82    1.85    2.29    2.84    3.19
   7    -2.55   -2.37   -2.08   -1.81    1.87    2.33    2.90    3.27
   8    -2.51   -2.32   -2.04   -1.79    1.89    2.35    2.94    3.28
   9    -2.49   -2.31   -2.02   -1.77    1.92    2.40    3.02    3.41
  10    -2.49   -2.29   -2.03   -1.77    1.94    2.44    3.09    3.52
  11    -2.45   -2.29   -2.01   -1.75    1.98    2.52    3.15    3.66
  12    -2.46   -2.27   -2.00   -1.76    2.02    2.55    3.28    3.85
  13    -2.44   -2.26   -1.98   -1.75    2.06    2.63    3.46    3.97
  14    -2.43   -2.25   -1.99   -1.75    2.11    2.73    3.59    4.20
  15    -2.43   -2.25   -1.99   -1.74    2.16    2.82    3.73    4.42
 N(0,1) -2.58   -2.33   -1.96   -1.65    1.65    1.96    2.33    2.58

Size in % (share of statistics beyond the N(0,1) critical values; last row: nominal):
   m   <-2.576 <-2.326 <-1.960 <-1.645  >1.645  >1.960  >2.326  >2.576
   2      0.8     1.5     3.7     7.0     6.6     4.0     2.0     1.3
   3      0.8     1.5     3.7     7.0     6.6     4.0     2.0     1.3
   4      0.7     1.5     3.9     7.3     6.5     4.0     2.1     1.4
   5      0.6     1.3     3.8     7.5     6.6     4.1     2.2     1.5
   6      0.5     1.2     3.6     7.4     6.8     4.2     2.4     1.6
   7      0.5     1.2     3.4     7.5     7.0     4.3     2.5     1.7
   8      0.4     1.0     3.2     7.1     7.0     4.5     2.6     1.8
   9      0.4     0.9     3.0     6.8     7.2     4.7     2.7     2.0
  10      0.4     0.9     2.9     6.7     7.4     4.9     2.9     2.1
  11      0.3     0.9     3.0     6.7     7.4     5.2     3.2     2.3
  12      0.3     0.8     2.8     6.6     7.7     5.4     3.4     2.4
  13      0.3     0.8     2.7     6.5     8.0     5.5     3.7     2.7
  14      0.2     0.7     2.7     6.6     8.1     5.9     3.9     3.0
  15      0.3     0.8     2.7     6.6     8.4     6.2     4.2     3.3
 nominal  0.5     1.0     2.5     5.0     5.0     2.5     1.0     0.5

Moments (columns: m = 2, ..., 15; last column: N(0,1)):
 Median   -0.08 -0.11 -0.14 -0.16 -0.18 -0.19 -0.21 -0.22 -0.23 -0.25 -0.26 -0.27 -0.29 -0.31 |  0.00
 Mean     -0.05 -0.07 -0.09 -0.10 -0.10 -0.11 -0.11 -0.12 -0.12 -0.12 -0.11 -0.11 -0.11 -0.11 |  0.00
 Std.dev.  1.11  1.11  1.11  1.12  1.12  1.12  1.13  1.13  1.14  1.16  1.17  1.19  1.21  1.24 |  1.00
 Skewness  0.18  0.23  0.27  0.32  0.39  0.45  0.51  0.59  0.66  0.73  0.82  0.93  1.04  1.16 |  0.00
 Kurtosis  3.04  3.08  3.15  3.17  3.26  3.35  3.47  3.67  3.88  4.10  4.41  4.84  5.34  5.95 |  3.00
eps = 2.0 (c1 ~ 0.84)

Quantiles of the BDS statistic (rows: embedding dimension m; last row: N(0,1)):
   m     0.5%    1.0%    2.5%    5.0%   95.0%   97.5%   99.0%   99.5%
   2    -2.90   -2.62   -2.25   -1.91    1.88    2.30    2.79    3.16
   3    -2.88   -2.64   -2.26   -1.92    1.86    2.29    2.78    3.15
   4    -2.87   -2.63   -2.24   -1.93    1.86    2.29    2.75    3.09
   5    -2.86   -2.61   -2.24   -1.93    1.86    2.28    2.78    3.07
   6    -2.89   -2.59   -2.24   -1.93    1.85    2.27    2.80    3.13
   7    -2.83   -2.59   -2.23   -1.94    1.83    2.29    2.79    3.17
   8    -2.77   -2.56   -2.22   -1.93    1.83    2.28    2.81    3.14
   9    -2.74   -2.51   -2.22   -1.93    1.84    2.28    2.83    3.18
  10    -2.76   -2.52   -2.19   -1.91    1.85    2.29    2.83    3.20
  11    -2.73   -2.50   -2.18   -1.91    1.85    2.29    2.81    3.24
  12    -2.68   -2.49   -2.18   -1.89    1.85    2.31    2.84    3.22
  13    -2.68   -2.48   -2.17   -1.89    1.87    2.33    2.88    3.24
  14    -2.66   -2.45   -2.15   -1.89    1.88    2.35    2.92    3.32
  15    -2.62   -2.43   -2.13   -1.88    1.88    2.36    2.93    3.36
 N(0,1) -2.58   -2.33   -1.96   -1.65    1.65    1.96    2.33    2.58

Size in % (share of statistics beyond the N(0,1) critical values; last row: nominal):
   m   <-2.576 <-2.326 <-1.960 <-1.645  >1.645  >1.960  >2.326  >2.576
   2      1.2     2.1     4.5     8.0     7.3     4.4     2.4     1.5
   3      1.1     2.1     4.6     8.4     7.2     4.3     2.3     1.4
   4      1.1     2.1     4.7     8.5     6.9     4.3     2.4     1.5
   5      1.1     2.1     4.7     8.7     6.8     4.3     2.4     1.5
   6      1.0     2.1     4.7     9.0     6.8     4.2     2.3     1.5
   7      1.1     2.0     4.8     8.9     6.7     4.1     2.3     1.5
   8      0.9     1.9     4.6     8.9     6.6     4.1     2.3     1.5
   9      0.9     1.8     4.6     8.8     6.5     4.2     2.3     1.5
  10      0.8     1.7     4.4     8.9     6.7     4.2     2.4     1.5
  11      0.8     1.7     4.5     8.8     6.6     4.2     2.4     1.6
  12      0.7     1.7     4.3     8.7     6.7     4.3     2.4     1.6
  13      0.7     1.5     4.3     8.6     6.8     4.4     2.6     1.6
  14      0.7     1.4     4.2     8.7     6.7     4.4     2.6     1.8
  15      0.6     1.4     4.0     8.6     6.7     4.5     2.7     1.8
 nominal  0.5     1.0     2.5     5.0     5.0     2.5     1.0     0.5

Moments (columns: m = 2, ..., 15; last column: N(0,1)):
 Median   -0.10 -0.12 -0.14 -0.15 -0.17 -0.19 -0.20 -0.21 -0.23 -0.24 -0.24 -0.26 -0.27 -0.28 |  0.00
 Mean     -0.07 -0.09 -0.10 -0.11 -0.13 -0.14 -0.14 -0.15 -0.16 -0.16 -0.17 -0.17 -0.18 -0.18 |  0.00
 Std.dev.  1.16  1.16  1.15  1.15  1.15  1.15  1.15  1.15  1.15  1.14  1.14  1.14  1.14  1.15 |  1.00
 Skewness  0.16  0.19  0.20  0.22  0.24  0.27  0.30  0.32  0.36  0.39  0.42  0.46  0.51  0.55 |  0.00
 Kurtosis  3.10  3.08  3.08  3.10  3.09  3.10  3.12  3.15  3.20  3.27  3.31  3.39  3.47  3.56 |  3.00
B10
eps = 0.5 (c1 ~ 0.28)

Quantiles of the BDS statistic (rows: embedding dimension m; last row: N(0,1); a dot
marks a cell whose value could not be recovered from the source):
   m     0.5%    1.0%    2.5%    5.0%   95.0%   97.5%   99.0%   99.5%
   2    -2.88   -2.63   -2.22   -1.89    2.01    2.46    2.95    3.32
   3    -3.01   -2.78   -2.38   -2.01    2.13    2.64    3.27    3.61
   4    -3.35   -3.04   -2.62   -2.24    2.37    2.92    3.72    4.14
   5    -4.01   -3.67   -3.09   -2.64    2.87    3.55    4.41    5.03
   6    -5.05   -4.52   -3.87   -3.32    3.68    4.55    5.74    6.58
   7    -6.40   -5.92   -5.15   -4.38    5.25    6.61    8.50    9.75
   8    -7.60   -7.22   -6.55   -5.78    8.05   10.41   13.55   15.80
   9    -6.95   -6.69   -6.33   -6.00   13.57   17.88   23.17   27.76
  10    -5.99   -5.74   -5.44   -5.20   22.54   31.94   44.46   55.17
  11    -5.12   -4.93   -4.66   -4.46   31.55   49.71   84.82     .
  12    -4.43   -4.28   -4.03   -3.85   -2.28   -2.08     .       .
  13    -3.88   -3.74   -3.54   -3.35   -2.00   -1.89   -1.73     .
  14    -3.43   -3.31   -3.11   -2.95   -1.73   -1.64   -1.53   -1.45
  15    -3.07   -2.95   -2.77   -2.63   -1.51   -1.43   -1.33   -1.27
 N(0,1) -2.58   -2.33   -1.96   -1.65    1.65    1.96    2.33    2.58

(Four further quantile values appear on this page -- 133.43, 109.55, 198.94 and
264.86 -- which presumably fill the dotted cells, but their exact placement is garbled
in the source.)

Size in % (share of statistics beyond the N(0,1) critical values; last row: nominal):
   m   <-2.576 <-2.326 <-1.960 <-1.645  >1.645  >1.960  >2.326  >2.576
   2      1.1     2.0     4.3     8.1     8.5     5.4     3.1     2.1
   3      1.6     2.7     5.5     9.6     9.3     6.2     3.8     2.7
   4      2.7     4.3     7.7    12.4    11.4     8.1     5.3     3.9
   5      5.4     7.6    11.7    16.2    14.9    11.4     8.2     6.5
   6     10.6    13.4    17.9    23.0    20.3    16.7    13.1    11.0
   7     19.3    22.4    26.8    31.4    25.7    22.7    19.5    17.6
   8     32.6    33.3    35.5    39.4    29.9    27.7    25.3    23.7
   9     45.1    45.1    45.1    45.3    29.0    28.5    27.8    27.0
  10     79.7    79.7    79.7    79.7    20.3    20.3    20.3    20.3
  11     92.5    93.6    93.8    93.8     6.2     6.2     6.2     6.2
  12     83.4    94.0    98.0    98.2     1.8     1.8     1.8     1.8
  13     51.5    75.4    96.1    99.3     0.6     0.6     0.6     0.6
  14     21.2    43.1    81.0    97.5     0.2     0.2     0.2     0.2
  15      6.3    17.8    53.2    86.8     0.1     0.1     0.1     0.1
 nominal  0.5     1.0     2.5     5.0     5.0     2.5     1.0     0.5

Moments (columns: m = 2, ..., 15; last column: N(0,1)):
 Median   -0.09 -0.11 -0.14 -0.16 -0.22 -0.34 -0.69 -0.96 -3.99 -3.48 -2.99 -2.59 -2.26  -1.99 |  0.00
 Mean     -0.04 -0.05 -0.06 -0.05 -0.05 -0.04  0.00  0.06  0.14  0.21  0.33  0.60  0.86   1.47 |  0.00
 Std.dev.  1.20  1.27  1.41  1.68  2.15  2.99  4.39  6.77 10.75 17.62 29.82 52.03 90.29 162.65 |  1.00
 Skewness  0.27  0.30  0.35  0.40  0.47  0.69  1.11  2.00  3.69  7.14 13.38 23.34 37.41  58.45 |  0.00
 Kurtosis  3.17  3.28  3.42  3.60  3.75  4.37  5.74 10.09 24.37 81.10 260.3 724.8  1696   3923 |  3.00
eps = 1.0 (c1 ~ 0.52)

Quantiles of the BDS statistic (rows: embedding dimension m; last row: N(0,1)):
   m     0.5%    1.0%    2.5%    5.0%   95.0%   97.5%   99.0%   99.5%
   2    -2.55   -2.34   -2.01   -1.72    1.77    2.14    2.58    2.89
   3    -2.52   -2.33   -2.00   -1.72    1.77    2.15    2.62    2.95
   4    -2.53   -2.30   -1.99   -1.71    1.77    2.20    2.64    3.00
   5    -2.51   -2.30   -1.96   -1.70    1.81    2.26    2.76    3.10
   6    -2.46   -2.27   -1.99   -1.73    1.87    2.34    2.89    3.28
   7    -2.46   -2.27   -2.00   -1.75    1.93    2.43    3.04    3.50
   8    -2.48   -2.30   -2.03   -1.76    2.01    2.55    3.23    3.68
   9    -2.54   -2.36   -2.07   -1.81    2.12    2.71    3.47    4.02
  10    -2.65   -2.47   -2.16   -1.88    2.25    2.95    3.79    4.41
  11    -2.80   -2.61   -2.28   -1.98    2.47    3.21    4.15    4.84
  12    -3.01   -2.78   -2.46   -2.14    2.76    3.57    4.58    5.51
  13    -3.24   -3.01   -2.66   -2.35    3.16    4.06    5.23    6.24
  14    -3.43   -3.22   -2.91   -2.59    3.64    4.74    6.31    7.34
  15    -3.47   -3.31   -3.04   -2.80    4.34    5.71    7.75    9.28
 N(0,1) -2.58   -2.33   -1.96   -1.65    1.65    1.96    2.33    2.58

Size in % (share of statistics beyond the N(0,1) critical values; last row: nominal):
   m   <-2.576 <-2.326 <-1.960 <-1.645  >1.645  >1.960  >2.326  >2.576
   2      0.5     1.0     2.8     5.9     6.1     3.5     1.7     1.0
   3      0.4     1.0     2.7     5.9     6.0     3.6     1.8     1.1
   4      0.4     0.9     2.8     5.9     6.1     3.7     2.0     1.2
   5      0.4     0.9     2.5     5.8     6.4     4.0     2.2     1.4
   6      0.4     0.9     2.7     6.1     6.8     4.3     2.6     1.7
   7      0.3     0.8     2.8     6.2     7.3     4.9     2.9     2.0
   8      0.3     0.9     3.0     6.5     7.8     5.4     3.4     2.4
   9      0.4     1.1     3.4     7.2     8.6     6.0     4.0     2.9
  10      0.7     1.5     4.2     8.2     9.7     6.9     4.6     3.6
  11      1.1     2.3     5.3     9.8    11.1     8.1     5.7     4.5
  12      1.8     3.3     7.2    12.3    12.7     9.7     7.1     5.8
  13      3.1     5.2     9.9    15.6    14.7    11.7     9.0     7.5
  14      5.1     8.1    13.9    20.0    16.7    13.9    11.2     9.7
  15      8.2    12.5    19.1    25.9    19.0    16.4    13.6    12.2
 nominal  0.5     1.0     2.5     5.0     5.0     2.5     1.0     0.5

Moments (columns: m = 2, ..., 15; last column: N(0,1)):
 Median   -0.10 -0.12 -0.14 -0.15 -0.17 -0.17 -0.18 -0.19 -0.20 -0.22 -0.26 -0.29 -0.37 -0.50 |  0.00
 Mean     -0.06 -0.06 -0.07 -0.08 -0.08 -0.07 -0.07 -0.06 -0.05 -0.05 -0.04 -0.04 -0.03 -0.03 |  0.00
 Std.dev.  1.06  1.06  1.07  1.08  1.10  1.12  1.16  1.22  1.29  1.39  1.54  1.73  1.99  2.35 |  1.00
 Skewness  0.23  0.27  0.33  0.40  0.47  0.55  0.63  0.71  0.80  0.90  1.00  1.14  1.36  1.70 |  0.00
 Kurtosis  3.05  3.07  3.16  3.28  3.40  3.57  3.77  4.02  4.37  4.76  5.20  5.95  7.29  9.78 |  3.00
B11
eps = 1.5 (c1 ~ 0.71)

Quantiles of the BDS statistic (rows: embedding dimension m; last row: N(0,1)):
   m     0.5%    1.0%    2.5%    5.0%   95.0%   97.5%   99.0%   99.5%
   2    -2.61   -2.37   -2.01   -1.71    1.74    2.10    2.50    2.80
   3    -2.54   -2.35   -2.02   -1.71    1.72    2.09    2.54    2.86
   4    -2.54   -2.31   -2.00   -1.71    1.73    2.12    2.52    2.86
   5    -2.54   -2.30   -1.98   -1.71    1.74    2.11    2.57    2.94
   6    -2.48   -2.26   -1.98   -1.69    1.74    2.13    2.60    2.93
   7    -2.45   -2.25   -1.96   -1.69    1.74    2.15    2.63    2.99
   8    -2.44   -2.23   -1.94   -1.69    1.76    2.17    2.65    3.04
   9    -2.41   -2.22   -1.93   -1.67    1.76    2.20    2.71    3.05
  10    -2.37   -2.19   -1.91   -1.67    1.79    2.23    2.75    3.14
  11    -2.33   -2.17   -1.89   -1.66    1.82    2.28    2.85    3.25
  12    -2.29   -2.14   -1.89   -1.65    1.85    2.33    2.88    3.33
  13    -2.29   -2.12   -1.88   -1.65    1.89    2.40    2.98    3.46
  14    -2.28   -2.10   -1.87   -1.64    1.94    2.45    3.09    3.53
  15    -2.25   -2.09   -1.85   -1.63    1.98    2.51    3.21    3.68
 N(0,1) -2.58   -2.33   -1.96   -1.65    1.65    1.96    2.33    2.58

Size in % (share of statistics beyond the N(0,1) critical values; last row: nominal):
   m   <-2.576 <-2.326 <-1.960 <-1.645  >1.645  >1.960  >2.326  >2.576
   2      0.5     1.1     2.8     5.8     5.9     3.3     1.5     0.8
   3      0.4     1.1     2.9     5.7     5.7     3.3     1.6     0.9
   4      0.4     1.0     2.7     5.8     5.7     3.4     1.6     0.9
   5      0.4     0.9     2.6     5.8     5.9     3.3     1.6     1.0
   6      0.4     0.8     2.6     5.6     5.8     3.4     1.7     1.1
   7      0.3     0.8     2.5     5.6     5.9     3.5     1.8     1.1
   8      0.3     0.8     2.3     5.5     6.0     3.5     1.9     1.2
   9      0.3     0.7     2.3     5.4     6.1     3.6     2.0     1.3
  10      0.2     0.6     2.2     5.3     6.2     3.8     2.1     1.4
  11      0.2     0.5     2.0     5.2     6.4     4.1     2.3     1.6
  12      0.1     0.4     2.0     5.1     6.7     4.3     2.5     1.7
  13      0.1     0.4     1.9     5.0     6.8     4.6     2.8     1.9
  14      0.1     0.4     1.8     4.9     7.0     4.8     3.0     2.1
  15      0.1     0.3     1.7     4.8     7.3     5.1     3.2     2.3
 nominal  0.5     1.0     2.5     5.0     5.0     2.5     1.0     0.5

Moments (columns: m = 2, ..., 15; last column: N(0,1)):
 Median   -0.07 -0.08 -0.09 -0.10 -0.11 -0.12 -0.13 -0.15 -0.16 -0.17 -0.18 -0.19 -0.20 -0.21 |  0.00
 Mean     -0.04 -0.05 -0.06 -0.06 -0.07 -0.07 -0.07 -0.08 -0.08 -0.08 -0.08 -0.08 -0.08 -0.07 |  0.00
 Std.dev.  1.05  1.05  1.05  1.05  1.05  1.05  1.05  1.06  1.06  1.07  1.08  1.09  1.10  1.12 |  1.00
 Skewness  0.15  0.18  0.21  0.25  0.29  0.33  0.37  0.42  0.47  0.54  0.61  0.68  0.75  0.83 |  0.00
 Kurtosis  3.01  3.01  3.05  3.09  3.13  3.17  3.22  3.29  3.41  3.54  3.69  3.86  4.09  4.34 |  3.00
eps = 2.0 (c1 ~ 0.84)

Quantiles of the BDS statistic (rows: embedding dimension m; last row: N(0,1)):
   m     0.5%    1.0%    2.5%    5.0%   95.0%   97.5%   99.0%   99.5%
   2    -2.65   -2.43   -2.09   -1.76    1.78    2.13    2.58    2.91
   3    -2.65   -2.43   -2.08   -1.78    1.76    2.14    2.57    2.87
   4    -2.70   -2.45   -2.08   -1.79    1.74    2.14    2.58    2.88
   5    -2.63   -2.43   -2.09   -1.79    1.73    2.13    2.58    2.87
   6    -2.62   -2.39   -2.08   -1.79    1.74    2.12    2.55    2.88
   7    -2.63   -2.40   -2.07   -1.78    1.75    2.11    2.57    2.87
   8    -2.63   -2.40   -2.07   -1.78    1.75    2.10    2.57    2.88
   9    -2.64   -2.39   -2.06   -1.78    1.74    2.10    2.56    2.92
  10    -2.60   -2.40   -2.07   -1.78    1.74    2.12    2.60    2.92
  11    -2.55   -2.37   -2.05   -1.78    1.74    2.15    2.63    2.95
  12    -2.52   -2.34   -2.04   -1.78    1.75    2.16    2.66    2.99
  13    -2.49   -2.31   -2.02   -1.76    1.77    2.18    2.69    2.99
  14    -2.46   -2.30   -2.01   -1.74    1.77    2.19    2.71    3.06
  15    -2.46   -2.28   -1.98   -1.73    1.78    2.20    2.75    3.10
 N(0,1) -2.58   -2.33   -1.96   -1.65    1.65    1.96    2.33    2.58

Size in % (share of statistics beyond the N(0,1) critical values; last row: nominal):
   m   <-2.576 <-2.326 <-1.960 <-1.645  >1.645  >1.960  >2.326  >2.576
   2      0.7     1.3     3.3     6.4     6.4     3.5     1.7     1.0
   3      0.7     1.3     3.3     6.6     6.2     3.5     1.7     1.0
   4      0.8     1.4     3.3     6.6     5.9     3.4     1.6     1.0
   5      0.6     1.4     3.5     6.7     5.8     3.3     1.7     1.0
   6      0.6     1.3     3.3     6.9     5.9     3.4     1.7     1.0
   7      0.6     1.3     3.2     6.8     6.0     3.4     1.6     1.0
   8      0.6     1.3     3.3     6.8     5.9     3.3     1.6     1.0
   9      0.6     1.2     3.2     6.7     6.0     3.3     1.6     1.0
  10      0.6     1.2     3.2     6.8     5.9     3.4     1.7     1.1
  11      0.4     1.1     3.2     6.7     5.9     3.5     1.8     1.1
  12      0.4     1.0     3.1     6.6     6.0     3.4     1.9     1.2
  13      0.3     1.0     3.0     6.5     6.1     3.7     1.9     1.2
  14      0.3     0.9     2.9     6.4     6.0     3.7     1.9     1.3
  15      0.3     0.9     2.7     6.3     6.1     3.8     2.0     1.3
 nominal  0.5     1.0     2.5     5.0     5.0     2.5     1.0     0.5

Moments (columns: m = 2, ..., 15; last column: N(0,1)):
 Median   -0.09 -0.09 -0.10 -0.11 -0.12 -0.13 -0.14 -0.15 -0.15 -0.16 -0.17 -0.18 -0.18 -0.20 |  0.00
 Mean     -0.05 -0.06 -0.07 -0.08 -0.09 -0.09 -0.10 -0.10 -0.10 -0.10 -0.11 -0.11 -0.11 -0.12 |  0.00
 Std.dev.  1.08  1.08  1.08  1.08  1.08  1.08  1.08  1.08  1.08  1.08  1.07  1.07  1.07  1.07 |  1.00
 Skewness  0.17  0.16  0.16  0.18  0.21  0.22  0.24  0.25  0.27  0.30  0.33  0.37  0.41  0.45 |  0.00
 Kurtosis  3.03  3.01  3.00  3.00  2.99  3.01  3.03  3.04  3.05  3.09  3.15  3.21  3.28  3.35 |  3.00
B12
eps = 0.75 (c1 ~ 0.40)

Quantiles of the BDS statistic (rows: embedding dimension m; last row: N(0,1)):
   m     0.5%    1.0%    2.5%    5.0%   95.0%   97.5%   99.0%   99.5%
   2    -3.71   -3.28   -2.77   -2.39    2.43    2.99    3.72    4.25
   3    -3.93   -3.47   -2.94   -2.51    2.58    3.25    4.13    4.79
   4    -4.28   -3.84   -3.19   -2.72    2.83    3.60    4.60    5.33
   5    -4.90   -4.34   -3.57   -2.99    3.21    4.12    5.40    6.36
   6    -5.55   -4.89   -4.04   -3.39    3.85    4.97    6.65    7.83
   7    -6.08   -5.49   -4.61   -3.87    4.78    6.35    8.76   10.45
   8    -6.39   -5.75   -4.96   -4.25    6.07    8.45   11.72   14.51
   9    -6.04   -5.48   -4.77   -4.21    8.04   11.60   16.64   20.80
  10    -5.61   -4.98   -4.31   -3.84   10.32   15.95   24.81   31.45
  11    -5.08   -4.56   -3.91   -3.44   11.19   20.16   34.87   50.07
  12    -4.71   -4.18   -3.58   -3.12   -0.62   20.08   44.46   70.14
  13    -4.33   -3.93   -3.30   -2.84   -0.60   -0.48   43.12   87.61
  14    -4.15   -3.70   -3.08   -2.63   -0.52   -0.43   -0.33   62.64
  15    -4.02   -3.54   -2.94   -2.48   -0.43   -0.36   -0.29   -0.25
 N(0,1) -2.58   -2.33   -1.96   -1.65    1.65    1.96    2.33    2.58

Size in % (share of statistics beyond the N(0,1) critical values; last row: nominal):
   m   <-2.576 <-2.326 <-1.960 <-1.645  >1.645  >1.960  >2.326  >2.576
   2      3.6     5.5     9.5    14.7    11.3     8.3     5.6     4.2
   3      4.5     6.7    11.0    16.0    12.2     8.9     6.4     5.1
   4      6.1     8.3    12.9    18.0    13.3    10.3     7.5     6.1
   5      7.9    10.4    15.1    20.6    15.3    12.2     9.4     7.8
   6     10.9    13.8    18.8    24.3    17.5    14.8    12.0    10.4
   7     15.4    18.6    24.0    29.6    20.0    17.3    14.8    13.1
   8     21.8    25.6    32.0    37.5    22.0    19.7    17.5    16.1
   9     29.6    35.9    45.0    50.5    21.7    20.1    18.5    17.5
  10     26.0    34.3    49.6    62.8    18.7    17.8    16.8    16.1
  11     18.2    25.5    40.0    56.6     9.6     9.5     9.4     9.3
  12     11.7    17.2    29.6    44.7     4.0     4.0     4.0     4.0
  13      7.7    11.7    20.8    33.8     1.6     1.6     1.6     1.6
  14      5.5     8.4    15.4    25.3     0.6     0.6     0.6     0.6
  15      4.3     6.3    11.7    19.7     0.3     0.3     0.3     0.3
 nominal  0.5     1.0     2.5     5.0     5.0     2.5     1.0     0.5

Moments (columns: m = 2, ..., 15; last column: N(0,1)):
 Median   -0.22 -0.26 -0.30 -0.34 -0.42 -0.55 -0.82 -1.68 -1.95 -1.77 -1.55 -1.36 -1.19 -1.04 |  0.00
 Mean     -0.14 -0.16 -0.17 -0.17 -0.17 -0.17 -0.18 -0.17 -0.18 -0.18 -0.19 -0.21 -0.28 -0.33 |  0.00
 Std.dev.  1.48  1.57  1.71  1.92  2.25  2.73  3.43  4.39  5.78  7.75 10.33 13.73 18.03 23.35 |  1.00
 Skewness  0.38  0.43  0.50  0.61  0.78  1.12  1.74  2.79  4.57  7.26 11.15 17.60 28.03 42.69 |  0.00
 Kurtosis  3.66  3.85  4.32  4.77  5.19  6.30  9.19 17.08 37.80 84.71 184.7 449.7  1085  2378 |  3.00
eps = 1.0 (c1 ~ 0.52)

Quantiles of the BDS statistic (rows: embedding dimension m; last row: N(0,1)):
   m     0.5%    1.0%    2.5%    5.0%   95.0%   97.5%   99.0%   99.5%
   2    -2.52   -2.32   -1.98   -1.68    1.72    2.08    2.48    2.78
   3    -2.53   -2.31   -1.97   -1.69    1.74    2.13    2.59    2.91
   4    -2.49   -2.30   -1.98   -1.69    1.76    2.15    2.65    2.99
   5    -2.48   -2.28   -1.97   -1.69    1.79    2.21    2.69    3.09
   6    -2.49   -2.29   -1.96   -1.69    1.80    2.27    2.78    3.16
   7    -2.45   -2.25   -1.94   -1.69    1.87    2.35    2.88    3.27
   8    -2.44   -2.23   -1.95   -1.70    1.91    2.43    3.00    3.50
   9    -2.47   -2.28   -1.97   -1.73    2.01    2.52    3.21    3.73
  10    -2.51   -2.32   -2.04   -1.77    2.13    2.68    3.46    3.94
  11    -2.61   -2.40   -2.11   -1.83    2.29    2.90    3.64    4.22
  12    -2.78   -2.55   -2.23   -1.96    2.49    3.17    4.02    4.70
  13    -2.96   -2.76   -2.42   -2.13    2.74    3.55    4.53    5.22
  14    -3.24   -3.03   -2.66   -2.34    3.13    4.03    5.16    6.06
  15    -3.46   -3.24   -2.91   -2.59    3.70    4.71    6.21    7.30
 N(0,1) -2.58   -2.33   -1.96   -1.65    1.65    1.96    2.33    2.58

Size in % (share of statistics beyond the N(0,1) critical values; last row: nominal):
   m   <-2.576 <-2.326 <-1.960 <-1.645  >1.645  >1.960  >2.326  >2.576
   2      0.4     1.0     2.6     5.4     5.8     3.2     1.5     0.8
   3      0.4     1.0     2.6     5.6     5.9     3.4     1.8     1.0
   4      0.3     0.9     2.6     5.5     6.1     3.6     1.9     1.1
   5      0.3     0.9     2.6     5.5     6.2     3.8     2.0     1.3
   6      0.4     0.8     2.5     5.5     6.4     4.0     2.3     1.4
   7      0.3     0.8     2.4     5.6     6.8     4.4     2.6     1.7
   8      0.3     0.7     2.4     5.7     7.4     4.7     2.9     2.0
   9      0.3     0.8     2.6     6.2     8.3     5.4     3.2     2.3
  10      0.4     1.0     3.1     6.8     9.0     6.2     3.9     2.9
  11      0.6     1.3     3.7     7.6    10.1     7.1     4.9     3.7
  12      0.9     2.0     5.0     9.4    11.3     8.3     5.8     4.6
  13      1.7     3.1     7.0    11.9    13.2    10.2     7.3     5.8
  14      3.0     5.1     9.8    15.2    15.5    12.4     9.4     7.7
  15      5.1     8.2    13.8    19.7    17.8    14.8    11.8    10.1
 nominal  0.5     1.0     2.5     5.0     5.0     2.5     1.0     0.5

Moments (columns: m = 2, ..., 15; last column: N(0,1)):
 Median   -0.07 -0.09 -0.10 -0.11 -0.12 -0.13 -0.15 -0.15 -0.16 -0.17 -0.19 -0.21 -0.25 -0.31 |  0.00
 Mean     -0.04 -0.05 -0.05 -0.05 -0.05 -0.05 -0.05 -0.04 -0.03 -0.02 -0.02 -0.01  0.00  0.01 |  0.00
 Std.dev.  1.04  1.04  1.05  1.06  1.07  1.09  1.12  1.15  1.21  1.28  1.38  1.52  1.72  1.99 |  1.00
 Skewness  0.18  0.25  0.31  0.36  0.41  0.48  0.56  0.63  0.70  0.75  0.81  0.90  1.01  1.17 |  0.00
 Kurtosis  3.02  3.14  3.23  3.30  3.40  3.52  3.65  3.82  4.08  4.15  4.39  4.75  5.17  5.75 |  3.00
B13
eps = 1.5 (c1 ~ 0.71)

Quantiles of the BDS statistic (rows: embedding dimension m; last row: N(0,1)):
   m     0.5%    1.0%    2.5%    5.0%   95.0%   97.5%   99.0%   99.5%
   2    -2.57   -2.34   -2.01   -1.68    1.69    2.02    2.48    2.79
   3    -2.56   -2.32   -1.99   -1.69    1.68    2.03    2.50    2.78
   4    -2.54   -2.33   -1.98   -1.69    1.68    2.03    2.48    2.77
   5    -2.53   -2.30   -1.98   -1.69    1.69    2.06    2.50    2.80
   6    -2.50   -2.29   -1.97   -1.71    1.71    2.07    2.54    2.88
   7    -2.45   -2.27   -1.96   -1.69    1.72    2.11    2.58    2.91
   8    -2.40   -2.24   -1.95   -1.68    1.73    2.12    2.61    2.97
   9    -2.39   -2.21   -1.93   -1.67    1.74    2.17    2.63    3.02
  10    -2.36   -2.17   -1.91   -1.65    1.75    2.19    2.69    3.08
  11    -2.34   -2.16   -1.90   -1.65    1.78    2.23    2.75    3.16
  12    -2.31   -2.14   -1.87   -1.63    1.79    2.25    2.84    3.29
  13    -2.28   -2.10   -1.85   -1.61    1.80    2.28    2.93    3.33
  14    -2.26   -2.09   -1.84   -1.60    1.84    2.32    2.95    3.43
  15    -2.23   -2.07   -1.82   -1.60    1.87    2.39    3.04    3.56
 N(0,1) -2.58   -2.33   -1.96   -1.65    1.65    1.96    2.33    2.58

Size in % (share of statistics beyond the N(0,1) critical values; last row: nominal):
   m   <-2.576 <-2.326 <-1.960 <-1.645  >1.645  >1.960  >2.326  >2.576
   2      0.5     1.0     2.8     5.5     5.5     2.9     1.4     0.8
   3      0.5     1.0     2.7     5.6     5.4     2.9     1.4     0.8
   4      0.4     1.0     2.6     5.5     5.3     2.9     1.5     0.8
   5      0.4     0.9     2.7     5.4     5.5     3.1     1.4     0.9
   6      0.4     0.9     2.6     5.7     5.6     3.2     1.5     0.9
   7      0.3     0.8     2.5     5.6     5.6     3.3     1.7     1.0
   8      0.3     0.7     2.4     5.4     5.8     3.3     1.8     1.1
   9      0.2     0.7     2.3     5.3     5.8     3.5     1.9     1.1
  10      0.2     0.6     2.1     5.1     5.9     3.6     2.1     1.2
  11      0.2     0.5     2.0     5.0     6.1     3.8     2.1     1.4
  12      0.1     0.5     1.8     4.8     6.2     3.9     2.2     1.5
  13      0.1     0.4     1.8     4.6     6.4     4.1     2.4     1.6
  14      0.1     0.4     1.6     4.5     6.5     4.3     2.5     1.8
  15      0.1     0.3     1.5     4.4     6.7     4.4     2.7     1.9
 nominal  0.5     1.0     2.5     5.0     5.0     2.5     1.0     0.5

Moments (columns: m = 2, ..., 15; last column: N(0,1)):
 Median   -0.06 -0.07 -0.08 -0.09 -0.09 -0.11 -0.12 -0.13 -0.14 -0.14 -0.15 -0.16 -0.17 -0.18 |  0.00
 Mean     -0.04 -0.04 -0.05 -0.05 -0.06 -0.06 -0.06 -0.06 -0.07 -0.07 -0.07 -0.07 -0.07 -0.06 |  0.00
 Std.dev.  1.03  1.03  1.03  1.03  1.04  1.04  1.04  1.04  1.04  1.05  1.05  1.06  1.06  1.07 |  1.00
 Skewness  0.14  0.17  0.18  0.20  0.24  0.28  0.33  0.37  0.43  0.48  0.54  0.60  0.67  0.73 |  0.00
 Kurtosis  3.10  3.07  3.07  3.08  3.09  3.13  3.18  3.24  3.32  3.43  3.56  3.71  3.87  4.04 |  3.00
eps = 2.0 (c1 ~ 0.84)

Quantiles of the BDS statistic (rows: embedding dimension m; last row: N(0,1)):
   m     0.5%    1.0%    2.5%    5.0%   95.0%   97.5%   99.0%   99.5%
   2    -2.66   -2.40   -2.03   -1.72    1.75    2.09    2.53    2.86
   3    -2.64   -2.40   -2.03   -1.72    1.71    2.09    2.54    2.81
   4    -2.61   -2.40   -2.05   -1.74    1.71    2.09    2.49    2.77
   5    -2.65   -2.40   -2.04   -1.74    1.69    2.05    2.47    2.78
   6    -2.58   -2.37   -2.03   -1.73    1.70    2.05    2.47    2.75
   7    -2.57   -2.36   -2.03   -1.73    1.69    2.04    2.48    2.73
   8    -2.55   -2.36   -2.03   -1.73    1.69    2.04    2.48    2.77
   9    -2.55   -2.36   -2.04   -1.73    1.69    2.06    2.52    2.80
  10    -2.53   -2.35   -2.04   -1.74    1.70    2.08    2.55    2.88
  11    -2.51   -2.34   -2.02   -1.72    1.71    2.08    2.57    2.88
  12    -2.51   -2.31   -2.00   -1.72    1.71    2.09    2.59    2.91
  13    -2.50   -2.30   -1.99   -1.72    1.72    2.12    2.60    2.95
  14    -2.49   -2.29   -1.97   -1.71    1.73    2.13    2.62    2.97
  15    -2.46   -2.25   -1.97   -1.70    1.75    2.14    2.64    2.99
 N(0,1) -2.58   -2.33   -1.96   -1.65    1.65    1.96    2.33    2.58

Size in % (share of statistics beyond the N(0,1) critical values; last row: nominal):
   m   <-2.576 <-2.326 <-1.960 <-1.645  >1.645  >1.960  >2.326  >2.576
   2      0.6     1.2     3.0     5.9     6.0     3.3     1.6     0.9
   3      0.6     1.2     2.9     6.1     5.7     3.3     1.5     0.9
   4      0.6     1.2     3.1     5.9     5.7     3.2     1.5     0.8
   5      0.6     1.2     3.0     6.2     5.5     3.1     1.3     0.8
   6      0.5     1.1     3.0     6.0     5.5     3.1     1.3     0.8
   7      0.5     1.1     2.9     6.0     5.5     3.0     1.4     0.8
   8      0.5     1.1     3.1     6.0     5.5     3.0     1.5     0.8
   9      0.4     1.1     3.1     6.0     5.5     3.1     1.5     0.9
  10      0.4     1.1     3.0     6.2     5.4     3.1     1.6     0.9
  11      0.4     1.1     2.9     6.0     5.7     3.1     1.6     1.0
  12      0.4     1.0     2.8     5.9     5.7     3.2     1.6     1.0
  13      0.4     0.9     2.7     5.9     5.7     3.3     1.7     1.1
  14      0.3     0.9     2.6     5.8     5.8     3.3     1.8     1.1
  15      0.3     0.8     2.5     5.8     5.8     3.4     1.9     1.2
 nominal  0.5     1.0     2.5     5.0     5.0     2.5     1.0     0.5

Moments (columns: m = 2, ..., 15; last column: N(0,1)):
 Median   -0.07 -0.08 -0.09 -0.11 -0.11 -0.11 -0.12 -0.13 -0.14 -0.14 -0.14 -0.15 -0.16 -0.17 |  0.00
 Mean     -0.05 -0.06 -0.07 -0.07 -0.08 -0.08 -0.08 -0.09 -0.09 -0.09 -0.09 -0.09 -0.10 -0.10 |  0.00
 Std.dev.  1.06  1.05  1.05  1.05  1.04  1.04  1.04  1.04  1.04  1.05  1.05  1.05  1.05  1.05 |  1.00
 Skewness  0.16  0.16  0.16  0.16  0.17  0.18  0.20  0.22  0.24  0.27  0.29  0.32  0.34  0.38 |  0.00
 Kurtosis  3.09  3.06  3.04  3.04  3.01  3.02  3.03  3.06  3.09  3.10  3.12  3.15  3.17  3.20 |  3.00
B14
eps = 0.5 (c1 ~ 0.28)

Quantiles of the BDS statistic (rows: embedding dimension m; last row: N(0,1); a dot
marks a cell whose value could not be recovered from the source):
   m     0.5%    1.0%    2.5%    5.0%   95.0%   97.5%   99.0%   99.5%
   2    -2.66   -2.44   -2.08   -1.77    1.87    2.26    2.72    3.04
   3    -2.72   -2.48   -2.11   -1.79    1.95    2.37    2.86    3.25
   4    -2.83   -2.59   -2.24   -1.92    2.10    2.54    3.12    3.52
   5    -3.17   -2.91   -2.50   -2.16    2.36    2.88    3.54    4.03
   6    -3.82   -3.51   -3.01   -2.58    2.83    3.46    4.31    4.82
   7    -4.92   -4.49   -3.90   -3.31    3.81    4.69    5.75    6.53
   8    -6.63   -6.10   -5.30   -4.59    5.53    6.81    8.59    9.84
   9    -8.03   -7.69   -7.04   -6.15    8.69   10.86   13.50   16.19
  10    -7.33   -7.15   -6.85   -6.60   14.19   18.53   24.27   28.67
  11    -6.36   -6.19   -5.97   -5.76   24.40   32.76   46.74   56.34
  12    -5.50   -5.35   -5.16   -4.99   36.34   49.56   89.25     .
  13    -4.79   -4.66   -4.49   -4.34   -3.02   -2.85     .       .
  14    -4.21   -4.10   -3.94   -3.81   -2.66   -2.56   -2.44   -2.25
  15    -3.73   -3.62   -3.49   -3.37   -2.33   -2.25   -2.16   -2.10
 N(0,1) -2.58   -2.33   -1.96   -1.65    1.65    1.96    2.33    2.58

(Three further quantile values appear on this page -- 137.59, 115.13 and 190.18 --
which presumably fill the dotted cells, but their exact placement is garbled in the
source.)

Size in % (share of statistics beyond the N(0,1) critical values; last row: nominal):
   m   <-2.576 <-2.326 <-1.960 <-1.645  >1.645  >1.960  >2.326  >2.576
   2      0.7     1.4     3.2     6.4     7.0     4.3     2.2     1.4
   3      0.8     1.4     3.5     6.9     7.9     4.9     2.7     1.8
   4      1.1     2.1     4.6     8.2     9.2     6.0     3.5     2.3
   5      2.2     3.6     6.9    11.2    11.7     8.0     5.2     3.8
   6      5.0     7.2    11.3    16.1    15.6    11.9     8.4     6.6
   7     10.7    13.6    18.4    23.1    21.7    18.0    14.3    12.2
   8     20.0    22.7    27.3    31.3    27.6    24.5    21.6    19.5
   9     30.0    34.6    39.5    41.0    31.5    29.3    26.8    25.3
  10     40.3    40.3    40.8    43.4    32.9    32.2    31.1    30.1
  11     77.3    77.3    77.3    77.3    22.7    22.7    22.7    22.7
  12     93.2    93.2    93.2    93.2     6.9     6.9     6.9     6.9
  13     98.2    98.2    98.2    98.2     1.8     1.8     1.8     1.8
  14     97.2    99.4    99.5    99.5     0.5     0.5     0.5     0.5
  15     77.7    95.3    99.8    99.9     0.2     0.2     0.2     0.2
 nominal  0.5     1.0     2.5     5.0     5.0     2.5     1.0     0.5

Moments (columns: m = 2, ..., 15; last column: N(0,1)):
 Median   -0.06 -0.08 -0.10 -0.10 -0.11 -0.15 -0.28 -0.65 -1.33 -4.78 -4.19 -3.65 -3.18 -2.80 |  0.00
 Mean     -0.03 -0.03 -0.03 -0.02 -0.02  0.00  0.03  0.04  0.04  0.05 -0.04 -0.28 -0.22  0.19 |  0.00
 Std.dev.  1.10  1.14  1.22  1.38  1.66  2.19  3.11  4.64  7.18 11.41 18.32 29.56 50.23 86.95 |  1.00
 Skewness  0.22  0.28  0.32  0.34  0.34  0.43  0.57  0.94  1.71  3.18  6.08 11.87 21.60 35.92 |  0.00
 Kurtosis  3.11  3.11  3.16  3.26  3.31  3.48  3.76  4.52  7.32 17.02 52.10 181.9 563.8  1565 |  3.00
eps = 1.0 (c1 ~ 0.52)

Quantiles of the BDS statistic (rows: embedding dimension m; last row: N(0,1)):
   m     0.5%    1.0%    2.5%    5.0%   95.0%   97.5%   99.0%   99.5%
   2    -2.52   -2.28   -1.96   -1.68    1.68    2.03    2.43    2.74
   3    -2.53   -2.29   -1.96   -1.67    1.70    2.07    2.50    2.78
   4    -2.47   -2.26   -1.93   -1.66    1.72    2.11    2.56    2.86
   5    -2.45   -2.22   -1.94   -1.65    1.75    2.15    2.57    2.89
   6    -2.43   -2.20   -1.92   -1.65    1.78    2.19    2.66    2.99
   7    -2.42   -2.22   -1.90   -1.65    1.82    2.25    2.73    3.13
   8    -2.40   -2.21   -1.92   -1.65    1.88    2.31    2.82    3.25
   9    -2.40   -2.21   -1.93   -1.68    1.95    2.40    3.00    3.39
  10    -2.44   -2.25   -1.97   -1.71    2.04    2.54    3.15    3.64
  11    -2.52   -2.34   -2.06   -1.80    2.12    2.66    3.35    3.84
  12    -2.65   -2.48   -2.15   -1.88    2.28    2.87    3.62    4.25
  13    -2.87   -2.63   -2.31   -2.00    2.50    3.13    3.94    4.63
  14    -3.07   -2.86   -2.51   -2.20    2.79    3.52    4.44    5.21
  15    -3.33   -3.11   -2.77   -2.44    3.19    4.07    5.20    5.94
 N(0,1) -2.58   -2.33   -1.96   -1.65    1.65    1.96    2.33    2.58

Size in % (share of statistics beyond the N(0,1) critical values; last row: nominal):
   m   <-2.576 <-2.326 <-1.960 <-1.645  >1.645  >1.960  >2.326  >2.576
   2      0.4     0.9     2.5     5.4     5.3     3.0     1.3     0.7
   3      0.4     0.9     2.5     5.3     5.5     3.1     1.4     0.8
   4      0.3     0.8     2.4     5.2     5.8     3.2     1.7     1.0
   5      0.4     0.7     2.4     5.1     6.0     3.6     1.8     1.0
   6      0.3     0.7     2.2     5.0     6.1     3.8     1.9     1.2
   7      0.3     0.7     2.1     5.0     6.6     4.1     2.2     1.4
   8      0.3     0.7     2.2     5.2     7.0     4.5     2.4     1.6
   9      0.3     0.7     2.3     5.5     7.6     4.9     2.8     1.9
  10      0.3     0.8     2.5     5.8     8.4     5.7     3.4     2.4
  11      0.4     1.1     3.4     7.0     8.8     6.1     3.9     2.8
  12      0.7     1.5     4.2     8.2    10.0     7.1     4.7     3.6
  13      1.2     2.4     5.5    10.2    11.7     8.6     5.9     4.6
  14      2.2     3.8     7.8    13.0    13.7    10.4     7.4     6.0
  15      3.8     6.1    11.1    16.6    16.0    12.8     9.8     8.1
 nominal  0.5     1.0     2.5     5.0     5.0     2.5     1.0     0.5

Moments (columns: m = 2, ..., 15; last column: N(0,1)):
 Median   -0.05 -0.08 -0.09 -0.10 -0.11 -0.12 -0.13 -0.14 -0.14 -0.16 -0.17 -0.18 -0.20 -0.25 |  0.00
 Mean     -0.03 -0.04 -0.04 -0.05 -0.05 -0.04 -0.04 -0.04 -0.03 -0.03 -0.03 -0.02 -0.02 -0.01 |  0.00
 Std.dev.  1.02  1.03  1.03  1.04  1.05  1.06  1.08  1.11  1.15  1.21  1.29  1.40  1.55  1.76 |  1.00
 Skewness  0.13  0.17  0.24  0.30  0.36  0.42  0.48  0.54  0.61  0.65  0.70  0.75  0.80  0.90 |  0.00
 Kurtosis  3.02  3.03  3.10  3.15  3.22  3.31  3.40  3.51  3.68  3.93  4.07  4.20  4.35  4.61 |  3.00
B15
eps = 1.5 (c1 ~ 0.71)

Quantiles of the BDS statistic (rows: embedding dimension m; last row: N(0,1)):
   m     0.5%    1.0%    2.5%    5.0%   95.0%   97.5%   99.0%   99.5%
   2    -2.53   -2.33   -2.00   -1.69    1.68    2.01    2.43    2.70
   3    -2.54   -2.34   -1.99   -1.71    1.69    2.02    2.46    2.74
   4    -2.50   -2.30   -2.01   -1.71    1.68    2.03    2.45    2.78
   5    -2.50   -2.28   -1.99   -1.71    1.68    2.04    2.48    2.81
   6    -2.47   -2.27   -1.97   -1.71    1.70    2.06    2.49    2.82
   7    -2.46   -2.24   -1.95   -1.67    1.71    2.08    2.53    2.87
   8    -2.44   -2.22   -1.94   -1.67    1.73    2.12    2.55    2.89
   9    -2.41   -2.21   -1.92   -1.66    1.73    2.16    2.58    2.89
  10    -2.38   -2.19   -1.92   -1.66    1.74    2.16    2.65    2.93
  11    -2.34   -2.17   -1.91   -1.65    1.75    2.19    2.69    3.01
  12    -2.31   -2.15   -1.89   -1.63    1.77    2.22    2.73    3.07
  13    -2.27   -2.12   -1.87   -1.63    1.79    2.25    2.78    3.16
  14    -2.26   -2.09   -1.86   -1.62    1.81    2.27    2.84    3.23
  15    -2.23   -2.07   -1.83   -1.61    1.83    2.31    2.90    3.30
 N(0,1) -2.58   -2.33   -1.96   -1.65    1.65    1.96    2.33    2.58

Size in % (share of statistics beyond the N(0,1) critical values; last row: nominal):
   m   <-2.576 <-2.326 <-1.960 <-1.645  >1.645  >1.960  >2.326  >2.576
   2      0.4     1.0     2.7     5.5     5.3     2.9     1.3     0.7
   3      0.5     1.0     2.7     5.7     5.5     2.8     1.3     0.7
   4      0.4     0.9     2.8     5.7     5.3     2.9     1.3     0.8
   5      0.4     0.9     2.7     5.8     5.3     3.0     1.4     0.8
   6      0.4     0.8     2.6     5.7     5.5     3.0     1.5     0.8
   7      0.3     0.8     2.5     5.4     5.6     3.2     1.6     0.9
   8      0.3     0.7     2.3     5.3     5.7     3.3     1.6     1.0
   9      0.2     0.7     2.2     5.1     5.8     3.4     1.8     1.0
  10      0.2     0.6     2.2     5.2     5.8     3.5     1.9     1.2
  11      0.2     0.5     2.1     5.1     5.8     3.6     2.0     1.3
  12      0.2     0.5     2.0     4.9     6.0     3.7     2.1     1.4
  13      0.1     0.4     1.8     4.9     6.2     3.8     2.2     1.5
  14      0.1     0.4     1.8     4.6     6.4     4.0     2.2     1.6
  15      0.1     0.3     1.6     4.5     6.6     4.1     2.4     1.7
 nominal  0.5     1.0     2.5     5.0     5.0     2.5     1.0     0.5

Moments (columns: m = 2, ..., 15; last column: N(0,1)):
 Median   -0.05 -0.07 -0.08 -0.08 -0.09 -0.10 -0.11 -0.12 -0.12 -0.13 -0.14 -0.15 -0.16 -0.16 |  0.00
 Mean     -0.03 -0.04 -0.05 -0.06 -0.06 -0.06 -0.06 -0.06 -0.06 -0.06 -0.06 -0.06 -0.06 -0.06 |  0.00
 Std.dev.  1.02  1.03  1.03  1.03  1.03  1.03  1.03  1.04  1.04  1.04  1.04  1.05  1.05  1.06 |  1.00
 Skewness  0.11  0.14  0.16  0.18  0.22  0.26  0.29  0.33  0.37  0.41  0.45  0.50  0.56  0.62 |  0.00
 Kurtosis  2.99  2.99  3.02  3.05  3.08  3.09  3.12  3.16  3.21  3.29  3.35  3.45  3.56  3.70 |  3.00
eps = 2.0 (c1 ~ 0.84)

Quantiles of the BDS statistic (rows: embedding dimension m; last row: N(0,1)):
   m     0.5%    1.0%    2.5%    5.0%   95.0%   97.5%   99.0%   99.5%
   2    -2.70   -2.40   -2.05   -1.73    1.70    2.04    2.45    2.77
   3    -2.65   -2.41   -2.04   -1.72    1.68    2.04    2.47    2.76
   4    -2.62   -2.39   -2.04   -1.74    1.69    2.04    2.45    2.80
   5    -2.59   -2.37   -2.05   -1.74    1.69    2.05    2.47    2.79
   6    -2.58   -2.37   -2.04   -1.73    1.69    2.04    2.46    2.77
   7    -2.57   -2.37   -2.04   -1.74    1.69    2.04    2.47    2.76
   8    -2.58   -2.35   -2.05   -1.74    1.69    2.06    2.51    2.80
   9    -2.60   -2.36   -2.04   -1.74    1.69    2.07    2.51    2.82
  10    -2.56   -2.35   -2.03   -1.73    1.69    2.07    2.50    2.83
  11    -2.53   -2.33   -2.02   -1.72    1.70    2.07    2.52    2.85
  12    -2.51   -2.30   -2.00   -1.73    1.71    2.08    2.52    2.88
  13    -2.49   -2.28   -2.00   -1.73    1.72    2.08    2.55    2.87
  14    -2.47   -2.26   -1.98   -1.71    1.73    2.09    2.57    2.90
  15    -2.46   -2.25   -1.96   -1.71    1.73    2.11    2.58    2.93
 N(0,1) -2.58   -2.33   -1.96   -1.65    1.65    1.96    2.33    2.58

Size in % (share of statistics beyond the N(0,1) critical values; last row: nominal):
   m   <-2.576 <-2.326 <-1.960 <-1.645  >1.645  >1.960  >2.326  >2.576
   2      0.7     1.3     3.1     6.0     5.5     2.9     1.4     0.8
   3      0.6     1.3     3.0     5.9     5.4     2.9     1.4     0.8
   4      0.6     1.2     3.0     6.1     5.5     3.0     1.4     0.8
   5      0.5     1.1     3.1     6.1     5.5     2.9     1.4     0.8
   6      0.5     1.1     3.1     6.0     5.4     2.9     1.4     0.8
   7      0.5     1.2     3.0     6.1     5.5     3.0     1.4     0.8
   8      0.5     1.0     3.1     6.1     5.4     3.0     1.4     0.9
   9      0.5     1.1     3.0     6.1     5.4     3.1     1.4     0.9
  10      0.5     1.0     2.9     6.0     5.5     3.1     1.5     0.9
  11      0.4     1.0     2.9     6.0     5.5     3.1     1.5     0.9
  12      0.4     0.9     2.8     6.0     5.6     3.2     1.5     0.9
  13      0.4     0.8     2.8     5.9     5.7     3.2     1.6     1.0
  14      0.4     0.8     2.6     5.9     5.6     3.2     1.7     1.0
  15      0.3     0.8     2.5     5.8     5.7     3.3     1.7     1.0
 nominal  0.5     1.0     2.5     5.0     5.0     2.5     1.0     0.5

Moments (columns: m = 2, ..., 15; last column: N(0,1)):
 Median   -0.07 -0.08 -0.08 -0.09 -0.10 -0.10 -0.10 -0.11 -0.11 -0.12 -0.13 -0.14 -0.15 -0.15 |  0.00
 Mean     -0.05 -0.06 -0.06 -0.06 -0.07 -0.07 -0.07 -0.08 -0.08 -0.09 -0.09 -0.09 -0.09 -0.09 |  0.00
 Std.dev.  1.04  1.04  1.04  1.05  1.04  1.04  1.05  1.04  1.04  1.04  1.04  1.04  1.04  1.04 |  1.00
 Skewness  0.11  0.12  0.14  0.15  0.16  0.17  0.18  0.19  0.21  0.23  0.26  0.28  0.31  0.33 |  0.00
 Kurtosis  3.14  3.13  3.08  3.05  3.04  3.05  3.04  3.05  3.06  3.08  3.11  3.13  3.16  3.18 |  3.00
B16
eps = 0.5 (c1 ~ 0.28)

Quantiles of the BDS statistic (rows: embedding dimension m; last row: N(0,1); a dot
marks a cell whose value could not be recovered from the source):
   m     0.5%    1.0%    2.5%    5.0%   95.0%   97.5%   99.0%   99.5%
   2    -2.51   -2.31   -1.97   -1.69    1.73    2.07    2.45    2.78
   3    -2.53   -2.33   -1.99   -1.70    1.75    2.11    2.54    2.85
   4    -2.59   -2.37   -2.06   -1.74    1.79    2.21    2.69    3.02
   5    -2.74   -2.55   -2.16   -1.85    1.94    2.36    2.93    3.26
   6    -3.00   -2.77   -2.39   -2.04    2.18    2.68    3.22    3.60
   7    -3.67   -3.35   -2.89   -2.46    2.66    3.19    3.93    4.36
   8    -4.78   -4.41   -3.82   -3.21    3.53    4.27    5.28    5.93
   9    -6.51   -6.00   -5.23   -4.52    5.18    6.32    7.70    8.69
  10    -8.74   -8.28   -7.22   -6.35    8.20   10.24   12.65   14.35
  11    -8.85   -8.68   -8.41   -8.19   13.75   17.79   22.49   26.23
  12    -7.73   -7.63   -7.45   -7.29   24.50   31.87   43.30   52.23
  13    -6.77   -6.66   -6.51   -6.38   32.59   60.12   83.54     .
  14    -5.93   -5.84   -5.71   -5.60   -4.41   95.19     .       .
  15    -5.24   -5.16   -5.05   -4.93   -3.95   -3.85     .     -3.66
 N(0,1) -2.58   -2.33   -1.96   -1.65    1.65    1.96    2.33    2.58

(One further quantile value, 131.83, appears on this page; its cell assignment is
garbled in the source.)

Size in % (share of statistics beyond the N(0,1) critical values; last row: nominal):
   m   <-2.576 <-2.326 <-1.960 <-1.645  >1.645  >1.960  >2.326  >2.576
   2      0.4     0.9     2.5     5.6     5.8     3.1     1.4     0.8
   3      0.4     1.0     2.7     5.7     6.0     3.4     1.6     0.9
   4      0.5     1.1     3.1     6.3     6.5     3.8     2.0     1.2
   5      0.9     1.7     3.9     7.3     7.7     4.9     2.7     1.8
   6      1.7     2.9     5.8     9.8    10.0     6.7     4.0     2.9
   7      4.2     6.1    10.0    14.6    14.2    10.5     7.2     5.5
   8      9.9    12.3    16.6    21.3    21.0    17.1    13.2    10.9
   9     19.4    22.0    26.6    30.2    27.3    24.3    21.1    18.9
  10     29.9    32.8    37.0    39.6    32.1    30.1    27.4    25.9
  11     44.0    44.0    44.0    44.2    35.1    33.9    31.9    30.3
  12     64.5    64.5    64.5    64.5    35.5    35.4    35.1    34.6
  13     88.6    88.6    88.6    88.6    11.4    11.4    11.4    11.4
  14     96.7    96.7    96.7    96.7     3.3     3.3     3.3     3.3
  15     99.1    99.1    99.1    99.1     0.9     0.9     0.9     0.9
 nominal  0.5     1.0     2.5     5.0     5.0     2.5     1.0     0.5

Moments (columns: m = 2, ..., 15; last column: N(0,1)):
 Median   -0.04 -0.05 -0.05 -0.06 -0.07 -0.07 -0.10 -0.22 -0.52 -0.96 -6.38 -5.71 -5.02 -4.42 |  0.00
 Mean     -0.02 -0.02 -0.02 -0.02 -0.02 -0.01  0.01  0.01  0.01 -0.01 -0.03 -0.07 -0.12 -0.22 |  0.00
 Std.dev.  1.03  1.05  1.09  1.15  1.29  1.56  2.06  2.96  4.52  7.13 11.54 18.80 30.91 51.28 |  1.00
 Skewness  0.14  0.17  0.21  0.25  0.26  0.24  0.27  0.41  0.71  1.31  2.46  4.69  8.49 15.41 |  0.00
 Kurtosis  3.02  3.03  3.10  3.15  3.16  3.15  3.18  3.33  4.03  6.05 12.21 34.43 95.10 284.44 |  3.00
eps = 1.0 (c1 ~ 0.52)

Quantiles of the BDS statistic (rows: embedding dimension m; last row: N(0,1)):
   m     0.5%    1.0%    2.5%    5.0%   95.0%   97.5%   99.0%   99.5%
   2    -2.58   -2.35   -1.97   -1.66    1.67    2.02    2.47    2.78
   3    -2.52   -2.27   -1.94   -1.66    1.70    2.06    2.50    2.77
   4    -2.47   -2.26   -1.91   -1.64    1.69    2.08    2.53    2.81
   5    -2.46   -2.24   -1.91   -1.63    1.71    2.10    2.55    2.88
   6    -2.42   -2.22   -1.90   -1.63    1.75    2.12    2.54    2.89
   7    -2.40   -2.20   -1.91   -1.64    1.76    2.16    2.57    2.91
   8    -2.40   -2.20   -1.90   -1.64    1.78    2.18    2.66    3.04
   9    -2.38   -2.22   -1.89   -1.63    1.79    2.20    2.74    3.13
  10    -2.42   -2.22   -1.90   -1.62    1.83    2.24    2.83    3.23
  11    -2.44   -2.22   -1.92   -1.65    1.88    2.31    2.88    3.25
  12    -2.47   -2.25   -1.97   -1.68    1.95    2.39    3.02    3.43
  13    -2.50   -2.31   -2.02   -1.74    2.07    2.53    3.17    3.68
  14    -2.61   -2.41   -2.11   -1.83    2.21    2.72    3.35    3.99
  15    -2.81   -2.63   -2.29   -1.99    2.39    2.97    3.77    4.31
 N(0,1) -2.58   -2.33   -1.96   -1.65    1.65    1.96    2.33    2.58

Size in % (share of statistics beyond the N(0,1) critical values; last row: nominal):
   m   <-2.576 <-2.326 <-1.960 <-1.645  >1.645  >1.960  >2.326  >2.576
   2      0.5     1.1     2.6     5.2     5.2     2.8     1.4     0.8
   3      0.4     0.8     2.4     5.2     5.5     3.0     1.5     0.9
   4      0.3     0.8     2.2     5.0     5.5     3.2     1.6     0.9
   5      0.3     0.8     2.2     4.9     5.6     3.3     1.6     1.0
   6      0.3     0.7     2.2     4.9     5.9     3.5     1.7     1.0
   7      0.3     0.7     2.1     4.9     6.1     3.6     1.8     1.0
   8      0.2     0.7     2.1     4.9     6.2     3.7     1.8     1.2
   9      0.2     0.7     2.1     4.8     6.4     3.8     2.0     1.3
  10      0.2     0.7     2.1     4.8     6.7     4.0     2.2     1.5
  11      0.3     0.8     2.3     5.1     7.2     4.4     2.4     1.6
  12      0.3     0.8     2.5     5.4     7.7     5.0     2.8     1.9
  13      0.4     1.0     3.0     6.1     8.5     5.7     3.4     2.3
  14      0.6     1.4     3.7     7.5     9.7     6.7     4.3     3.0
  15      1.2     2.3     5.3     9.5    11.3     8.2     5.4     4.0
 nominal  0.5     1.0     2.5     5.0     5.0     2.5     1.0     0.5

Moments (columns: m = 2, ..., 15; last column: N(0,1)):
 Median   -0.04 -0.05 -0.05 -0.07 -0.07 -0.08 -0.10 -0.09 -0.09 -0.11 -0.13 -0.13 -0.14 -0.13 |  0.00
 Mean     -0.03 -0.02 -0.03 -0.03 -0.03 -0.03 -0.03 -0.02 -0.02 -0.03 -0.02 -0.02 -0.01 -0.01 |  0.00
 Std.dev.  1.02  1.02  1.02  1.02  1.03  1.03  1.04  1.05  1.07  1.09  1.12  1.17  1.24  1.35 |  1.00
 Skewness  0.10  0.15  0.19  0.24  0.28  0.32  0.36  0.40  0.44  0.47  0.53  0.58  0.62  0.65 |  0.00
 Kurtosis  3.08  3.04  3.06  3.12  3.19  3.24  3.30  3.37  3.47  3.48  3.61  3.75  3.87  4.03 |  3.00
B17
eps = 1.5 (c1 ~ 0.71)

Quantiles of the BDS statistic (rows: embedding dimension m; last row: N(0,1)):
   m     0.5%    1.0%    2.5%    5.0%   95.0%   97.5%   99.0%   99.5%
   2    -2.55   -2.34   -1.98   -1.65    1.60    1.95    2.36    2.59
   3    -2.54   -2.32   -1.96   -1.66    1.60    1.91    2.34    2.60
   4    -2.53   -2.31   -1.95   -1.66    1.61    1.92    2.34    2.56
   5    -2.49   -2.29   -1.94   -1.64    1.63    1.93    2.34    2.61
   6    -2.50   -2.27   -1.94   -1.64    1.64    1.95    2.32    2.58
   7    -2.45   -2.23   -1.92   -1.63    1.64    1.98    2.35    2.59
   8    -2.43   -2.21   -1.92   -1.64    1.65    2.00    2.36    2.62
   9    -2.42   -2.22   -1.92   -1.64    1.67    2.01    2.38    2.67
  10    -2.40   -2.20   -1.90   -1.63    1.67    2.02    2.38    2.73
  11    -2.40   -2.18   -1.89   -1.63    1.68    2.04    2.42    2.74
  12    -2.39   -2.16   -1.88   -1.63    1.69    2.07    2.47    2.77
  13    -2.36   -2.16   -1.88   -1.62    1.71    2.11    2.54    2.80
  14    -2.36   -2.15   -1.86   -1.61    1.72    2.14    2.58    2.84
  15    -2.32   -2.13   -1.84   -1.59    1.75    2.17    2.63    2.89
 N(0,1) -2.58   -2.33   -1.96   -1.65    1.65    1.96    2.33    2.58

Size in % (share of statistics beyond the N(0,1) critical values; last row: nominal):
   m   <-2.576 <-2.326 <-1.960 <-1.645  >1.645  >1.960  >2.326  >2.576
   2      0.5     1.1     2.6     5.1     4.5     2.4     1.1     0.5
   3      0.5     1.0     2.5     5.2     4.5     2.3     1.0     0.5
   4      0.4     1.0     2.5     5.1     4.6     2.3     1.0     0.5
   5      0.4     0.9     2.4     5.0     4.8     2.4     1.0     0.6
   6      0.3     0.8     2.3     4.9     4.9     2.5     1.0     0.5
   7      0.3     0.8     2.2     4.9     5.0     2.6     1.1     0.5
   8      0.3     0.7     2.2     4.9     5.0     2.8     1.1     0.6
   9      0.3     0.7     2.3     4.9     5.3     2.8     1.2     0.6
  10      0.3     0.6     2.1     4.8     5.2     2.9     1.2     0.7
  11      0.3     0.6     2.0     4.9     5.4     3.0     1.3     0.7
  12      0.2     0.6     2.0     4.9     5.5     3.1     1.5     0.7
  13      0.2     0.6     1.9     4.6     5.6     3.2     1.6     0.9
  14      0.2     0.5     1.8     4.5     5.6     3.4     1.8     1.0
  15      0.2     0.5     1.8     4.3     5.9     3.6     1.9     1.1
 nominal  0.5     1.0     2.5     5.0     5.0     2.5     1.0     0.5

Moments (columns: m = 2, ..., 15; last column: N(0,1)):
 Median   -0.06 -0.07 -0.07 -0.08 -0.08 -0.08 -0.08 -0.08 -0.09 -0.09 -0.10 -0.10 -0.11 -0.11 |  0.00
 Mean     -0.05 -0.05 -0.05 -0.05 -0.05 -0.05 -0.05 -0.05 -0.05 -0.05 -0.05 -0.05 -0.05 -0.05 |  0.00
 Std.dev.  1.00  1.00  1.00  1.00  1.00  1.00  1.01  1.01  1.01  1.01  1.01  1.01  1.01  1.02 |  1.00
 Skewness  0.07  0.09  0.09  0.11  0.13  0.15  0.17  0.19  0.21  0.24  0.27  0.31  0.35  0.39 |  0.00
 Kurtosis  3.06  3.02  2.95  2.95  2.94  2.92  2.93  2.97  2.99  3.02  3.06  3.11  3.17  3.22 |  3.00
eps = 2.0 (c1 ~ 0.84)

Quantiles of the BDS statistic (rows: embedding dimension m; last row: N(0,1)):
   m     0.5%    1.0%    2.5%    5.0%   95.0%   97.5%   99.0%   99.5%
   2    -2.56   -2.33   -1.96   -1.67    1.69    2.06    2.47    2.72
   3    -2.57   -2.33   -2.00   -1.70    1.70    2.08    2.44    2.71
   4    -2.55   -2.32   -2.00   -1.70    1.69    2.03    2.45    2.76
   5    -2.54   -2.30   -1.99   -1.70    1.68    2.01    2.45    2.74
   6    -2.56   -2.32   -1.98   -1.70    1.68    2.03    2.44    2.74
   7    -2.56   -2.33   -1.98   -1.70    1.69    2.03    2.42    2.69
   8    -2.54   -2.31   -1.98   -1.71    1.69    2.04    2.45    2.70
   9    -2.53   -2.29   -1.98   -1.71    1.70    2.07    2.42    2.71
  10    -2.51   -2.28   -1.96   -1.71    1.71    2.07    2.44    2.70
  11    -2.51   -2.25   -1.96   -1.69    1.71    2.07    2.45    2.72
  12    -2.50   -2.26   -1.95   -1.68    1.71    2.06    2.45    2.76
  13    -2.47   -2.26   -1.94   -1.67    1.70    2.07    2.47    2.81
  14    -2.46   -2.24   -1.92   -1.66    1.71    2.07    2.51    2.78
  15    -2.44   -2.24   -1.91   -1.65    1.72    2.07    2.49    2.79
 N(0,1) -2.58   -2.33   -1.96   -1.65    1.65    1.96    2.33    2.58

Size in % (share of statistics beyond the N(0,1) critical values; last row: nominal):
   m   <-2.576 <-2.326 <-1.960 <-1.645  >1.645  >1.960  >2.326  >2.576
   2      0.5     1.0     2.5     5.2     5.4     2.9     1.4     0.7
   3      0.5     1.0     2.8     5.6     5.5     3.1     1.4     0.7
   4      0.5     1.0     2.7     5.5     5.5     2.9     1.4     0.8
   5      0.5     1.0     2.8     5.7     5.4     2.8     1.4     0.7
   6      0.5     1.0     2.7     5.7     5.3     2.9     1.3     0.7
   7      0.5     1.0     2.6     5.6     5.4     2.9     1.3     0.7
   8      0.5     0.9     2.6     5.7     5.4     3.0     1.3     0.7
   9      0.4     0.9     2.6     5.7     5.6     3.0     1.3     0.7
  10      0.4     0.9     2.5     5.7     5.6     3.1     1.4     0.7
  11      0.4     0.8     2.5     5.6     5.7     3.0     1.4     0.7
  12      0.4     0.8     2.5     5.5     5.6     3.1     1.4     0.8
  13      0.3     0.8     2.4     5.3     5.6     3.2     1.4     0.8
  14      0.3     0.8     2.3     5.1     5.6     3.2     1.4     0.8
  15      0.3     0.8     2.2     5.1     5.7     3.2     1.5     0.8
 nominal  0.5     1.0     2.5     5.0     5.0     2.5     1.0     0.5

Moments (columns: m = 2, ..., 15; last column: N(0,1)):
 Median   -0.02 -0.04 -0.04 -0.05 -0.06 -0.06 -0.07 -0.07 -0.07 -0.07 -0.07 -0.07 -0.07 -0.07 |  0.00
 Mean     -0.01 -0.02 -0.03 -0.03 -0.04 -0.04 -0.04 -0.04 -0.04 -0.04 -0.04 -0.04 -0.04 -0.04 |  0.00
 Std.dev.  1.02  1.03  1.03  1.03  1.03  1.03  1.03  1.03  1.03  1.03  1.03  1.03  1.03  1.03 |  1.00
 Skewness  0.11  0.11  0.12  0.12  0.12  0.13  0.14  0.15  0.16  0.17  0.18  0.20  0.21  0.22 |  0.00
 Kurtosis  3.09  3.06  3.02  3.02  3.01  3.01  3.02  3.01  3.00  3.00  3.01  3.02  3.02  3.03 |  3.00
B18
eps = 1.0 (c1 ~ 0.52)

Quantiles of the BDS statistic (rows: embedding dimension m; last row: N(0,1); a dot
marks a cell whose value could not be recovered from the source):
   m     0.5%    1.0%    2.5%    5.0%   95.0%   97.5%   99.0%   99.5%
   2    -2.52   -2.27   -1.97   -1.65    1.69    2.03    2.40    2.65
   3    -2.51   -2.30   -1.94   -1.64    1.68    2.02    2.38    2.69
   4    -2.47   -2.24   -1.92   -1.64    1.66    2.01    2.46    2.70
   5    -2.51   -2.22   -1.88   -1.63    1.66    2.03    2.51    2.71
   6    -2.48   -2.15   -1.88   -1.61    1.70    2.04    2.50    2.78
   7    -2.43   -2.16   -1.87   -1.62    1.70    2.07    2.49    2.81
   8    -2.47   -2.17   -1.88   -1.61    1.72    2.07    2.50    2.76
   9    -2.42   -2.17   -1.88   -1.59    1.73    2.05    2.50    2.80
  10    -2.44   -2.19   -1.86   -1.61    1.72    2.09    2.52    2.77
  11    -2.40   -2.18   -1.87   -1.60    1.74    2.11    2.56    2.80
  12    -2.44   -2.20   -1.88   -1.61    1.76    2.15    2.64    2.86
  13    -2.41   -2.21   -1.87   -1.63    1.78    2.21    2.68     .
  14    -2.49   -2.27   -1.92   -1.63    1.80    2.24    2.97     .
  15    -2.55   -2.30   -1.95   -1.66    1.88    2.33    2.75    2.80
 N(0,1) -2.58   -2.33   -1.96   -1.65    1.65    1.96    2.33    2.58

(Two further quantile values, 3.07 and 3.15, appear here in the source, presumably the
99.5% entries for m = 13 and m = 14.)

Size in % (share of statistics beyond the N(0,1) critical values; last row: nominal):
   m   <-2.576 <-2.326 <-1.960 <-1.645  >1.645  >1.960  >2.326  >2.576
   2      0.4     0.9     2.5     5.0     5.4     2.9     1.2     0.7
   3      0.4     0.9     2.4     4.8     5.4     3.0     1.2     0.6
   4      0.4     0.7     2.3     4.9     5.1     2.8     1.4     0.7
   5      0.4     0.7     2.1     4.8     5.2     3.0     1.4     0.9
   6      0.4     0.6     2.1     4.7     5.4     3.0     1.5     0.8
   7      0.4     0.6     2.1     4.7     5.6     3.2     1.5     0.8
   8      0.4     0.6     1.9     4.7     5.7     3.1     1.5     0.9
   9      0.4     0.6     2.0     4.4     5.8     3.2     1.5     0.8
  10      0.4     0.7     1.8     4.6     5.8     3.3     1.6     0.9
  11      0.3     0.7     1.9     4.4     6.2     3.4     1.7     1.0
  12      0.3     0.6     2.0     4.5     6.3     3.5     1.8     1.1
  13      0.3     0.7     2.0     4.7     6.2     3.7     2.0     1.2
  14      0.4     0.8     2.4     4.8     6.4     4.1     2.1     1.3
  15      0.4     0.9     2.4     5.2     7.0     4.5     2.6     1.6
 nominal  0.5     1.0     2.5     5.0     5.0     2.5     1.0     0.5

Moments (columns: m = 2, ..., 15; last column: N(0,1)):
 Median   -0.02 -0.04 -0.04 -0.05 -0.06 -0.07 -0.07 -0.08 -0.08 -0.07 -0.08 -0.08 -0.08 -0.09 |  0.00
 Mean      0.00 -0.01 -0.02 -0.02 -0.02 -0.03 -0.03 -0.03 -0.03 -0.03 -0.03 -0.02 -0.02 -0.02 |  0.00
 Std.dev.  1.01  1.01  1.00  1.00  1.01  1.01  1.01  1.01  1.01  1.02  1.02  1.04  1.05  1.08 |  1.00
 Skewness  0.08  0.11  0.14  0.18  0.21  0.24  0.25  0.27  0.29  0.31  0.33  0.34  0.35  0.37 |  0.00
 Kurtosis  3.13  3.11  3.11  3.16  3.17  3.18  3.18  3.18  3.19  3.23  3.28  3.30  3.29  3.27 |  3.00
eps = 1.0 (c1 ~ 0.52)

Quantiles of the BDS statistic (rows: embedding dimension m; last row: N(0,1)):
   m     0.5%    1.0%    2.5%    5.0%   95.0%   97.5%   99.0%   99.5%
   2    -2.57   -2.36   -2.02   -1.72    1.69    2.04    2.35    2.56
   3    -2.61   -2.38   -2.01   -1.69    1.67    2.00    2.44    2.63
   4    -2.59   -2.34   -1.99   -1.70    1.71    2.00    2.38    2.71
   5    -2.60   -2.27   -1.96   -1.67    1.69    1.99    2.41    2.71
   6    -2.61   -2.29   -1.96   -1.66    1.70    2.00    2.40    2.63
   7    -2.60   -2.31   -1.95   -1.66    1.71    2.02    2.43    2.67
   8    -2.55   -2.34   -1.97   -1.64    1.70    2.06    2.49    2.67
   9    -2.52   -2.32   -1.96   -1.61    1.70    2.08    2.48    2.74
  10    -2.49   -2.28   -1.94   -1.62    1.71    2.10    2.48    2.87
  11    -2.45   -2.24   -1.91   -1.64    1.74    2.07    2.50    2.79
  12    -2.42   -2.23   -1.88   -1.63    1.75    2.10    2.53    2.81
  13    -2.42   -2.23   -1.92   -1.62    1.76    2.13    2.54    2.82
  14    -2.46   -2.26   -1.92   -1.64    1.79    2.17    2.62    2.92
  15    -2.48   -2.23   -1.95   -1.68    1.85    2.22    2.70    3.01
 N(0,1) -2.58   -2.33   -1.96   -1.65    1.65    1.96    2.33    2.58

Size in % (share of statistics beyond the N(0,1) critical values; last row: nominal):
   m   <-2.576 <-2.326 <-1.960 <-1.645  >1.645  >1.960  >2.326  >2.576
   2      0.5     1.0     2.8     5.7     5.5     2.9     1.1     0.5
   3      0.5     1.2     2.9     5.4     5.3     2.7     1.3     0.6
   4      0.5     1.1     2.7     5.5     5.7     2.8     1.2     0.7
   5      0.5     0.8     2.5     5.4     5.6     2.6     1.2     0.7
   6      0.5     0.9     2.5     5.2     5.8     2.8     1.3     0.6
   7      0.5     0.9     2.5     5.2     5.7     3.0     1.3     0.6
   8      0.4     1.0     2.5     4.8     5.4     3.1     1.4     0.8
   9      0.4     1.0     2.5     4.6     5.5     3.1     1.4     0.8
  10      0.3     0.9     2.4     4.8     5.7     3.0     1.4     0.8
  11      0.3     0.9     2.2     5.0     5.7     3.3     1.5     0.9
  12      0.3     0.8     2.1     4.8     5.8     3.4     1.6     1.0
  13      0.3     0.8     2.2     4.7     6.2     3.4     1.7     0.9
  14      0.3     0.7     2.2     5.0     6.5     3.9     1.8     1.1
  15      0.4     0.8     2.4     5.4     6.7     4.2     2.2     1.2
 nominal  0.5     1.0     2.5     5.0     5.0     2.5     1.0     0.5

Moments (columns: m = 2, ..., 15; last column: N(0,1)):
 Median   -0.02 -0.02 -0.02 -0.01  0.00 -0.02 -0.03 -0.04 -0.05 -0.06 -0.05 -0.05 -0.05 -0.06 |  0.00
 Mean     -0.02 -0.02 -0.01 -0.01 -0.01 -0.01 -0.01 -0.01 -0.01 -0.01 -0.01 -0.01 -0.01  0.00 |  0.00
 Std.dev.  1.01  1.02  1.03  1.03  1.03  1.03  1.02  1.02  1.02  1.03  1.03  1.04  1.05  1.07 |  1.00
 Skewness  0.02  0.04  0.04  0.05  0.08  0.11  0.13  0.15  0.17  0.20  0.22  0.23  0.25  0.27 |  0.00
 Kurtosis  3.00  2.98  2.99  2.99  3.00  2.99  3.02  3.04  3.06  3.06  3.06  3.06  3.07  3.09 |  3.00
B19

Convergence of the first-order correlation integral estimate c1 with sample size n,
based on BDS test results of 7,500 and 5,000 normal random samples.  The n = 50,000
columns are based on 55 samples each (for eps = 0.75, on a single sample), so their
skewness and kurtosis are not reported.  (The source mislabels the first panel as
eps = 0.75; its values of around 0.27-0.28 identify it as the eps = 0.5 panel.)

eps = 0.5
  n           50      100      250      500    1,000    2,500   50,000
 Median   0.2694   0.2731   0.2749   0.2757   0.2760   0.2762   0.2764
 Mean     0.2722   0.2744   0.2755   0.2760   0.2761   0.2762   0.2763
 Std.dev. 0.0157   0.0100   0.0058   0.0040   0.0028   0.0017   0.0004
 Skewness 1.0683   0.9013   0.6129   0.4656   0.3258   0.2542
 Kurtosis 5.0378   4.3824   3.4846   3.2963   3.1784   3.0917

eps = 0.75
  n          100   50,000
 Median   0.3998   0.4040
 Mean     0.4016   0.4040
 Std.dev. 0.0121   0.0000
 Skewness 0.8768
 Kurtosis 4.2276

eps = 1.0
  n           50      100      250      500      750    1,000    2,500   50,000
 Median   0.5118   0.5160   0.5187   0.5197   0.5199   0.5201   0.5203   0.5206
 Mean     0.5149   0.5177   0.5194   0.5200   0.5201   0.5202   0.5204   0.5205
 Std.dev. 0.0188   0.0128   0.0078   0.0054   0.0044   0.0038   0.0024   0.0005
 Skewness 0.9439   0.7605   0.5081   0.3648   0.3141   0.2501   0.1879
 Kurtosis 4.3014   3.9202   3.2932   3.1600   3.1587   3.1187   3.1015

eps = 1.5
  n           50      100      250      500      750    1,000    2,500   50,000
 Median   0.7053   0.7081   0.7098   0.7105   0.7107   0.7108   0.7110   0.7112
 Mean     0.7064   0.7088   0.7103   0.7107   0.7108   0.7109   0.7111   0.7112
 Std.dev. 0.0148   0.0100   0.0061   0.0043   0.0035   0.0030   0.0019   0.0004
 Skewness 0.5794   0.5293   0.3943   0.2941   0.2433   0.1903   0.1459
 Kurtosis 4.1767   3.7649   3.3233   3.1699   3.1535   3.0978   3.0728

eps = 2.0
  n           50      100      250      500      750    1,000    2,500   50,000
 Median   0.8400   0.8412   0.8420   0.8423   0.8424   0.8425   0.8426   0.8427
 Mean     0.8407   0.8416   0.8423   0.8425   0.8426   0.8426   0.8427   0.8427
 Std.dev. 0.0078   0.0050   0.0029   0.0020   0.0016   0.0014   0.0009   0.0002
 Skewness 0.3895   0.5977   0.6233   0.4967   0.4131   0.3461   0.2566
 Kurtosis 4.9165   4.7093   4.0685   3.6818   3.3459   3.2653   3.1776
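The c1 values tabulated above are sample estimates of the first-order correlation
integral.  A minimal MATLAB sketch of a single replication (my own illustration, not
part of the paper; eps = 1.0, n = 1,000), whose result should fall near the medians of
the eps = 1.0 panel:

   n = 1000;  series = randn(1, n);
   epsilon = 1.0 * sqrt(sum((series-mean(series)).^2)/(n-1));  % eps in std.-dev. units
   count = 0;
   for i = 1 : n-1
      count = count + sum(abs(series(i+1:n) - series(i)) <= epsilon);
   end
   c1 = count / sum(1:n-1)                 % approximately 0.52 for eps = 1.0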
Appendix C:  Software (MATLAB Code)
% % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % %
% % % % % % % % % % Executable part of main function BDS.M starts here % % % % % % % % %
% % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % %
[ram1, bits1] = min(fastbuild + holdinfo + ...
[ram2, bits2] = min(fastbuild + holdinfo + ...
[ram3, bits3] = min(slowbuild + holdinfo + ...
[ram4, bits4] = min(slowbuild + holdinfo + ...
[ram5, bits5] = min( ...
[ram6, bits6] = min( ...
% (the remainders of these six expressions were lost in extraction)
% Vector BITINFO lists the number of bits set for each integer between 0 and 2^BITS-1
% (corresponding to the indices of the vector shifted by 1). See Kanzler (1998) for an
% explanation.
bitinfo = uint8(sum(rem(floor((0:pow2(bits)-1)'*pow2(1-bits:0)),2),2));
elseif ram3 < maxram | ram4 < maxram
if ram3 < maxram
method = 3;
bits = bits3; ram = ram3;
else
method = 4;
bits = bits4; ram = ram4;
stepping = floor((maxram - ram) * bits / n / 0.000024);
end
bitinfo(1:pow2(bits), :) = uint8(0);        % the same as above, but created through
for bit = 1 : bits                          % a loop, which consumes less memory
   bitinfo(1:pow2(bits)) = sum([bitinfo, ...
      kron(ones(pow2(bits-bit),1), [zeros(pow2(bit-1),1); ones(pow2(bit-1),1)])],2);
end
elseif ram5 < maxram | ram6 < maxram
if ram5 < maxram
method = 5;
bits = bits5; ram = ram5;
else
method = 6;
bits = bits6; ram = ram6;
stepping = floor((maxram - ram) * bits / n / 0.000024);
end
else
disp('Insufficient amount of memory. Allocate more memory to the system')
disp('or reduce the number of observations, then try again.')
error(' ')
end
%%%%%%%%%%%%%%%%%%%%% Determination of dimensional distance EPSILON %%%%%%%%%%%%%%%%%%%%%%
%   The empirical investigation by Kanzler (1998) shows that choosing EPSILON such that
%   the first-order correlation integral is around 0.7 yields the most efficient
%   estimation of low-dimensional BDS statistics. Hence the objective here is to choose
%   EPSILON such that, say, 70% of all observations lie within distance EPSILON of each
%   other. If desired, the programme first determines EPSILON so as to fulfil this or a
%   similar requirement.
%
%   The conceptually simplest way of setting up the calculation of distance among all
%   observations is to define a two-dimensional table D (for "distance") of length and
%   width N and assign to each co-ordinate (x,y) the result of the problem ABS(x-y).
%   In principle, the entire table could thus be created with the following one-line
%   statement:
%
%      D = ABS( SERIES(ONES(1000,1),:)' - SERIES(ONES(1000,1),:) )
%
%   Since the lower triangle of the table only replicates the upper triangle and since
%   the diagonal values represent own values (ones) which are not desired to be included
%   in the calculation, only the upper triangle receives further attention.
%
%   Unfortunately, sewing all the row vectors of the upper triangle together to form one
%   single (row) vector makes indexing very messy. To aid understanding of the vector-
%   space indexing used here (as well as in the optional sub-function further below),
%   one may wish to refer to the following exemplary matrix table (N=7):
%
%                          * * * *  c o l u m n  * * * *
%                      1     2     3     4     5     6     7
%                 1    *     .     .     .     .     .     .
%        *        2    1     *     .     .     .     .     .
%        r        3    2     7     *     .     .     .     .
%        o        4    3     8    12     *     .     .     .
%        w        5    4     9    13    16     *     .     .
%        *        6    5    10    14    17    19     *     .
%                 7    6    11    15    18    20    21     *
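%
%   (Illustration added for clarity, not part of the original file, and kept as a
%   comment so as not to interfere with the function. The mapping can be verified for
%   N = 7 as follows:)
%
%      n = 7;  idx = zeros(n);
%      for i = 1 : n-1
%         first = 1 + (i-1)*(n-1) - sum(0:i-2);    % first vector index of row I, as in
%         idx(i+1:n, i) = (first : first+n-i-1)';  % the code below; displayed in the
%      end                                         % lower triangle as in the diagram
%      disp(idx)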
% (A formal derivation of the above formulae is beyond the scope of this script.)
%
% To calculate a percentile of the distribution of distance values, the row vector is
% sorted (unfortunately, this requires a lot of time and RAM in MATLAB).
if ~flag
demeaned = series-sum(series)/n;
epsilon = distance * sqrt(demeaned*demeaned'/(n-1));
clear demeaned % to save memory
elseif 0.000008 * 3 * sum(1:n-1) < maxram % check memory requirements for DIST and sorting
dist(1:sum(1:n-1)) = 0;
for i = 1 : n-1
dist(1+(i-1)*(n-1)-sum(0:i-2):i*(n-1)-sum(1:i-1)) = abs(series(i+1:n)-series(i));
end
sorted = sort(dist);
epsilon = sorted(round(distance*sum(1:n-1))); % DISTANCEth percentile of SORTED series
clear dist sorted
else
error('Insufficient RAM to compute EPSILON; allocate more memory or use METHOD = 1.')
end
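%   (A small worked example of this percentile rule -- my own illustration, not part of
%   the original file, kept as a comment:)
%
%      series = [0 1 3 6 10];  n = 5;  distance = 0.7;
%      dist(1:sum(1:n-1)) = 0;
%      for i = 1 : n-1
%         dist(1+(i-1)*(n-1)-sum(0:i-2):i*(n-1)-sum(1:i-1)) = abs(series(i+1:n)-series(i));
%      end
%      sorted  = sort(dist)                          % = [1 2 3 3 4 5 6 7 9 10]
%      epsilon = sorted(round(distance*sum(1:n-1)))  % = sorted(7) = 6, so that about
%                                                    %   70% of all pairs are "close"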
%%%%%%%%%%%% Computation and storage of one-dimensional distance information %%%%%%%%%%%%
%   Similarly to the above, a two-dimensional table C (for "close") of length and width
%   N can be defined by assigning to each co-ordinate (x,y) the result of the problem
%   ABS(x-y) <= EPSILON; (x,y) assumes the value 1 if the statement is true and 0
%   otherwise. Formally, for given EPSILON:
%
%      C(x,y) = 1   if ABS(x-y) <= EPSILON
%             = 0   otherwise
%
%   Once again, the resulting information needs to be stored in the most efficient way.
%   In this implementation, this is done by chopping each row of the table into "words"
%   of several bits, the precise number of bits per word being determined by the above
%   algorithms. One "word" is thus represented by one integer. This slashes the size of
%   the table by the number of bits. See Kanzler (1998) for more details.
%
%   The below routine stores all rows of the upper triangle of the conceptual table
%   (described in Kanzler, 1998) left-aligned and assigns zeros to all other elements.
%   As will also be explained further below, the computation of parameter K requires the
%   sum of each FULL row, i.e. each row including the elements in the lower triangle and
%   on the diagonal. The "missing" bits correspond to the sums over each column in the
%   upper triangle, and these sums are also computed and stored in the below loop. And
%   to make matters simple, diagonal values are allocated to the column sums by
%   initialising them with value 1. See also Kanzler (1998).
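%
%   (A minimal illustration of the word-packing scheme -- my own sketch, not part of the
%   original file, kept as a comment. A row of zeros and ones is chopped into BITS-bit
%   words, one integer per word, and BITINFO then delivers the bit count per word:)
%
%      bits   = 4;  row = [1 0 1 1 0 0 1 0 1];     % hypothetical row of table C
%      padded = [row, zeros(1, ceil(length(row)/bits)*bits - length(row))];
%      words  = reshape(padded, bits, [])' * pow2(bits-1:-1:0)';       % = [11; 2; 8]
%      bitinfo = uint8(sum(rem(floor((0:pow2(bits)-1)'*pow2(1-bits:0)),2),2));
%      sum(double(bitinfo(words+1)))               % = 5 = sum(row), the bit count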
colsum(1:n)             = 1;
rowsum(1:n)             = 0;
nwords                  = ceil((n-1)/bits);
wrdmtrx(1:n-1,1:nwords) = 0;
%  (the loop which fills WRDMTRX and accumulates ROWSUM and COLSUM was lost in
%   extraction)
end
clear series bitvec
%%%%%%%%%%%%%%%%%% Computation of one-dimensional correlation estimates %%%%%%%%%%%%%%%%%%
%   C1(1), the fraction (or estimated probability) of pairs in SERIES being "close" in
%   the first dimension, is just the average over ALL unique elements. C1(1) is hence
%   the most efficient estimator of C(1), and the resulting estimate is used in the
%   computation of SIGMA(M) further below.
%
%   However, for the difference term C(M) - C(1)^M of the BDS statistic (see further
%   below) to follow SIGMA asymptotically, both C(M) and C(1) need to be estimated over
%   the same length vector, and so MAXDIM different C1's need to be estimated here:
%                                          N      N
%      C1(M)  =  2/(N-M+1)/(N-M)  *       SUM    SUM    B(S,T)
%                                         S=M   T=S+1
%
% Each C1(M) is easily computed from the sum of all bits set in rows M to N-1 divided by
% the appropriate total number of bits.
bitsum(maxdim:-1:1) = cumsum([sum(rowsum(maxdim:n-1)), rowsum(maxdim-1:-1:1)]);
c1    (maxdim:-1:1) = bitsum(maxdim:-1:1) ./ cumsum([sum(1:n-maxdim), n-maxdim+1 : n-1]);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Computation of parameter K %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%   Parameter K is defined (cf. Brock et al., 1996) as
%
%                                    N-1     N       N
%      K  =  6/N/(N-1)/(N-2)  *     SUM    SUM     SUM   {C(T,S)*C(S,R) + C(T,R)*C(R,S)
%                                    T=1   S=T+1   R=S+1              + C(S,T)*C(T,R)} / 3
%
%   To compute K from this, the sum of the squares of the sum over each row and over
%   each column (including the diagonal) of table C is required, adjusted as reasoned
%   above, whereby the sum of all row and column sums equals twice the sum of all vector
%   elements plus the diagonal elements.
%
%   The M-dimensional correlation integral, in turn, is estimated from products of
%   indicators along the diagonals of the table:
%
%                                          N       N      M-1
%      C(M)  =  2/(N-M+1)/(N-M)  *        SUM     SUM     PROD   B(S-J, T-J)
%                                         S=M    T=S+1    J=0
for m = 2 : maxdim
   bitcount = 0;
   if sum(method == [1 3])
      % BITAND and bit count all at once:
      wrdmtrx(m:n-1,:) = bitand(wrdmtrx(m:n-1,:), wrdmtrx(m-1:n-2,:));
      bitcount = sum(sum(bitinfo(wrdmtrx(m:n-1,:)+1)));
   elseif method == 5
      % BITAND at once, bit count by brute force in loops:
      wrdmtrx(m:n-1,:) = bitand(wrdmtrx(m:n-1,:), wrdmtrx(m-1:n-2,:));
      for col = 1 : ceil((n-1)/bits)
         bitcount = bitcount + sum(sum(rem(floor(wrdmtrx(m:...
            n-1-(col-1)*bits, col) * pow2(1-bits:0)), 2)));
      end
   else
      % BITAND operations and brute-force bit counting in backward loops through
      % the table:
      for row = n-stepping : -stepping : m+1
         wrdmtrx(row:row+stepping-1,:) = bitand(wrdmtrx(row:...
            row+stepping-1,:), wrdmtrx(row-1:row+stepping-2,:));
      end
      wrdmtrx(m:row-1,:) = bitand(wrdmtrx(m:row-1,:), wrdmtrx(m-1:row-2,:));
      for col = 1 : ceil((n-1)/bits)
         bitcount = bitcount + sum(sum(rem(floor(wrdmtrx(m:...
            n-1-(col-1)*bits, col) * pow2(1-bits:0)), 2)));
      end
   end
   % indexing of C and SIGMA runs from 1 to MAXDIM-1:
   c(m-1)     = bitcount / sum(1:n-m);
   sigma(m-1) = 2*sqrt(prod(ones(1,m)*k) + 2*ivp(k,m-(1:m-1),m-1)...
      *(ivp(c1(1),2*(1:m-1),m-1))' + (m-1)*(m-1)...
      *prod(ones(1,2*m)*c1(1)) - m*m*k*prod(ones(1,2*m-2)*c1(1)));
end
clear wrdmtrx
%   Under the null hypothesis of independence, it is obvious that the time-series
%   process has the property C(1)^M = C(M). In finite samples, C(1) and C(M) are
%   consistently estimated by C1(M) and C(M) as above. Also, Brock et al. (1996) show
%   that the standard deviation of the difference C(M) - C1(M)^M can be consistently
%   estimated by SIGMA(M) divided by SQRT(N-M+1), where:
%
%                                   M-1
%      SIGMA(M)^2  =  4*[ K^M + 2* SUM {K^(M-J)* C^(2*J)} + (M-1)^2* C^(2*M)
%                                  J=1                          - M^2* K* C^(2*M-2) ]
%
%   and C = C1(1) and K as above. For given N and EPSILON, the BDS statistic is defined
%   as the ratio of the two terms:
%
%                                     C(M) - C1(M)^M
%      W(M)  =  SQRT(N-M+1)  *  ------------------------
%                                        SIGMA(M)
%
%   Since it follows asymptotically the normal distribution with mean 0 and variance 1,
%   hypothesis testing is straightforward. If available, this is done here using
%   function NORMCDF of the MATLAB Statistics Toolbox.
%
%   Integer powers are again calculated by a sub-routine which is more accurate than the
%   MATLAB built-in power function; without using the sub-routine, the line for
%   calculating W would be:
%
%      w = sqrt(n-(2:maxdim)+1) .* (c - c1(2:maxdim).^(2:maxdim)) ./ sigma;
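%
%   (A worked numerical assembly of the two formulae above -- my own sketch with
%   hypothetical estimates, not part of the original file, kept as a comment:)
%
%      m  = 2;  n = 1000;
%      c1 = 0.70;  cm = 0.492;  k = 0.52;      % assumed estimates of C, C(M) and K
%      j  = 1 : m-1;
%      sigma2 = 4*(k^m + 2*sum(k.^(m-j).*c1.^(2*j)) + (m-1)^2*c1^(2*m) ...
%               - m^2*k*c1^(2*m-2));
%      w = sqrt(n-m+1) * (cm - c1^m) / sqrt(sigma2)   % compare with N(0,1) quantiles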
if maxdim > 1
w = sqrt(n-(2:maxdim)+1) .* (c - idvp(c1(2:maxdim), 2:maxdim, maxdim-1)) ./ sigma;
if exist('normcdf.m','file') & nargout > 1
sig = min(normcdf(w,0,1), 1-normcdf(w,0,1)) * 2;
elseif nargout > 1
sig(1:maxdim-1) = NaN;
end
else
   w   = [];
   sig = [];
   c   = [];
end
%%%%%%%%%%%%%%%%%%%%%%% Sub-functions for computing integer powers %%%%%%%%%%%%%%%%%%%%%%%
function ipow = ivp (base, intpowvec, veclen)
ipow(1 : veclen) = 0;
for j = 1 : veclen
ipow(j) = prod(ones(1, intpowvec(j)) * base);
end
function ipow = idvp (basevec, intpowvec, veclen)
ipow(1 : veclen) = 0;
for j = 1 : veclen
ipow(j) = prod(ones(1, intpowvec(j)) * basevec(j));
end
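% For instance (illustration only, not part of the original file): IVP(2, [2 3], 2)
% returns [4 8], and IDVP([2 3], [2 3], 2) returns [4 27]; repeated multiplication
% avoids the small inaccuracies of the built-in power operator noted above.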
% % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % %
% % % % % % % % % % Executable part of main function BDS.M ends here % % % % % % % % % %
% % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % %
% % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % %
%
%   The following sub-function is not actually used by the main function and is only
%   included for the benefit of those who would like to implement the BDS test in a
%   language which is either incapable of or inefficient in handling bit-wise AND
%   operations, or those who would like to cross-check the above computation. Deleting
%   the sub-function from the script will NOT result in any increase in performance.
%
%   To use the function, save the remainder of this code in a file named BDSNOBIT.M.
%
% % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % %
%   Recall that in the implementation of the main function above, table C is stored in
%   bit-representation. When this is not possible or desirable, the second best method
%   is to use one continuous vector of unsigned 8-bit integers (called UINT8). This,
%   however, requires MATLAB version 5.1 or higher, and a similar option may not be
%   available in other high-level languages. Implementation does not depend on the
%   ability to use unsigned low-bit integers and would work equally with
%   double-precision integers, but the memory requirements would, of course, be higher.
%   Using UINT8's is still a rather inefficient way of storing zeros and ones, which in
%   principle require only a single bit each. On the PC, MATLAB actually requires "only"
%   around 5 bytes for each UINT8.
%        = std(series)*eps;   % (left-hand side of this assignment lost in extraction)
series   = series(:)';
n        = length(series);   % PAIRS is the total number of unique pairs which can be
pairs    = sum(1:n-1);       % formed from all observations (note that while this is
                             % just (N-1)*N/2, MATLAB computes SUM(1:N-1) twice as fast!)
b(1:pairs) = uint8(0);
for i = 1 : n-1
b(1+(i-1)*(n-1)-sum(0:i-2):i*(n-1)-sum(1:i-1)) = abs(series(i+1:n)-series(i))<=epsilon;
end
clear series
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Computation of parameter K %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
sums(1 : n) = 0;
for i = 1 : n
   sums(i) = sum(b(i+(0 : i-2)*n - cumsum(1 : i-1))) ...              % sum over column I
           + 1 ...                                                    % diagonal element
           + sum(b(1+(i-1)*(n-1)-sum(1:i-2) : i*(n-1)-sum(1:i-1)));   % sum over row I
end
k = (sum(sums.^2) + 2*n - 3*(2*sum(b)+n)) / n/(n-1)/(n-2);
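%   (A brute-force cross-check of K -- my own illustration, not part of the original
%   file, kept as a comment. It implements the O(N^3) triple sum of the definition
%   given earlier directly, so it is only practical for small N, and it needs SERIES
%   before the variable is cleared above:)
%
%      nn = length(series);
%      C  = abs(series(ones(nn,1),:)' - series(ones(nn,1),:)) <= epsilon;
%      ksum = 0;
%      for t = 1 : nn-2
%         for s = t+1 : nn-1
%            for r = s+1 : nn
%               ksum = ksum + (C(t,s)*C(s,r) + C(t,r)*C(r,s) + C(s,t)*C(t,r))/3;
%            end
%         end
%      end
%      kcheck = 6*ksum/nn/(nn-1)/(nn-2)   % should equal K above up to rounding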
%%%%%%%%%%%%%%%%%% Computation of one-dimensional correlation estimates %%%%%%%%%%%%%%%%%%
bitsum(1:maxdim) = sum(b(1+(maxdim-1)*(n-1)-sum(0:maxdim-2) : pairs));
for m = maxdim-1 : -1 : 1
bitsum(m) = bitsum(m+1) + sum(b(1+(m-1)*(n-1)-sum(0:m-2):m*(n-1)-sum(1:m-1)));
end
c1(maxdim:-1:1) = bitsum(maxdim:-1:1) ./ cumsum([sum(1:n-maxdim), n-maxdim+1 : n-1]);
%%%%%%%%%% Computation of correlation estimates and SIGMA for higher dimensions %%%%%%%%%%
for m = 2 : maxdim
% Indexing in vector space once again follows the rules set out above. Multiplication
% is done by moving up column by column into north-west direction, so counter I runs
% backwards in the below WHILE loop until the Mth column (from the left) is reached:
i = n;
while i - m
% Multiplication is not defined on UINT8 variables and translating the columns
% twice, once from UINT8 to DOUBLE integer and then back to UINT8, would be
% inefficient, so it is better to sum entries (this operation - undocumented by
% MATLAB - is defined, and even faster than the documented FIND function!) and
% compare them against the value 2:
b(i + (m-1 : i-2)*n - sum(1:m-1) - cumsum(m : i-1)) = ...
   sum([ b(i   + (m-1 : i-2)*n - sum(1:m-1) - cumsum(m   : i-1)); ...
         b(i-1 + (m-2 : i-3)*n - sum(1:m-2) - cumsum(m-1 : i-2)) ]) == 2;
% The sum over each column is computed immediately after that column has been
% updated. To store the column sums, the vector SUMS already used above for the row
% sums is recycled (this is more memory-efficient than clearing the above SUMS
% vector and defining a new vector of the column sums, because in the latter case,
% MATLAB's memory space will end up being fragmented by variables K and C added to
% the memory in the meantime!):
sums(i) = sum(b(i + (m-1 : i-2)*n - sum(1:m-1) - cumsum(m : i-1)));
i = i - 1;
end
c(m-1)     = sum(sums(m+1:n)) / sum(1:n-m);
sigma(m-1) = 2*sqrt(k^m + 2*k.^(m-(1:m-1))*(c1(1).^(2*(1:m-1)))'...  % could use the above
             + (m-1)^2*c1(1)^(2*m) - m^2*k*c1(1)^(2*m-2));           % integer-power sub-
end                                                                  % functions instead
%%%%%%%%%%%%%%% Computation of the BDS statistic and level of significance %%%%%%%%%%%%%%%
w = sqrt(n-(2:maxdim)+1) .* (c-c1(2:maxdim).^(2:maxdim)) ./ sigma; % or use sub-functions
% % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % %
% % % % % % % % % % % % % % Sub-function BDSNOBIT.M ends here % % % % % % % % % % % % % %
% % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % %
%
%   REFERENCES:
%
%   Brock, William, Davis Dechert & José Scheinkman (1987), "A Test for Independence
%      Based on the Correlation Dimension", University of Wisconsin-Madison, Social
%      Science Research Working Paper, no. 8762
%   Brock, William, Davis Dechert, José Scheinkman & Blake LeBaron (1996), "A Test for
%      Independence Based on the Correlation Dimension", Econometric Reviews, vol. 15,
%      no. 3 (August), pp. 197-235, revised version of Brock et al. (1987)
%   Dechert, Davis (1988), "BDS STATS: A Program to Calculate the Statistics of the
%      Grassberger-Procaccia Correlation Dimension Based on the Paper 'A Test for
%      Independence' by W. A. Brock, W. D. Dechert and J. A. Scheinkman", version 8.21
%      (latest), MS-DOS software available on gopher.econ.wisc.edu
%   Kanzler, Ludwig (1998), "Very Fast and Correctly Sized Estimation of the BDS
%      Statistic", Oxford University, Department of Economics, working paper, available
%      on http://users.ox.ac.uk/~econlrk
%   LeBaron, Blake (1988, 1990, 1997a), "BDSTEST.C", version June 1997 (latest), C
%      source code available on gopher.econ.wisc.edu
%   LeBaron, Blake (1997b), "A Fast Algorithm for the BDS Statistic", Studies in
%      Nonlinear Dynamics and Econometrics, vol. 2, pp. 53-59
%
%   ACKNOWLEDGEMENT:
%
%   I am grateful to Blake LeBaron for giving me the exclusive opportunity to beta-test
%   his C programme in its compiled version for MATLAB 5 and thus enabling me to compare
%   the two programmes directly. I have benefited from the many associated discussions.
%
% End of file.
% Tabulated cases and significance levels:
mcases    = 2 : 15;
epscases  = [0.5 1.0 1.5 2.0];
ncases    = [50 100 250 500 750 1000 2500];
siglevels = [0.005 0.010 0.025 0.050 1 0.050 0.025 0.010 0.005];
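% (Annotation: SIGLEVELS lists the four lower-tail and the four upper-tail test levels
% around a central value of 1; the central 1 ensures that the product look-up further
% below yields the one-sided significance level directly.)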
if nargin < 1
   error('This function needs some argument input!')
elseif length(w) ~= length(w(:))
   error('Cannot evaluate a matrix of BDS statistics; input must be a scalar or vector.')
elseif nargin < 2
   n = inf;
end
if nargin < 3
   m = 2 : length(w);
elseif length(unique([mcases, m(:)'])) ~= 14
   error('Cannot handle embedding dimensions other than integers between 2 and 15.')
elseif length(m) ~= length(w) & length(m) ~= 1
% The quantile values are taken from Kanzler (1998) and are found along dimension 1
% (with the corresponding values for N(0,1) in parentheses):
%
%    <  0.5%   (-2.58)
%    <  1.0%   (-2.33)
%    <  2.5%   (-1.96)
%    <  5.0%   (-1.65)
%    > 95.0%   ( 1.65)
%    > 97.5%   ( 1.96)
%    > 99.0%   ( 2.33)
%    > 99.5%   ( 2.58)
%
% Embedding dimensions m = [2 3 4 5 6 7 8 9 10 11 12 13 14 15] are along dimension 2.
% Sample sizes n = [50 100 250 500 750 1000 2500] are along dimension 3.
% Dimensional distances, in units of the standard deviation of a normally distributed
% sample, eps = [0.5 1.0 1.5 2.0] are along dimension 4.
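% (Indexing example: quants(1:8, 4, 2, 2) thus holds the eight quantiles tabulated for
% embedding dimension m = 5, sample size n = 100 and dimensional distance eps = 1.0.)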
% [Tabulated quantile values omitted here. The four-dimensional array QUANTS is filled
%  block by block, one 8-by-14 matrix per combination of sample size and dimensional
%  distance (the eight quantiles along the rows, embedding dimensions m = 2, ..., 15
%  along the columns), each assignment headed by a comment of the form:
%
%     % n = 50, eps = 2.0 (c1 ~ 0.84)
%     quants(1:8, 1:14, 1, 4) = [ ... ];
%
%  The complete tables are given in Kanzler (1998).]
if n <= 2500
   upperqus = reshape(quants(1:8, m(i)-1, upper, eps*2), 8, 1);
else              % i.e. approaching standard normality:
   upperqus = [norminv(siglevels(1:4)) norminv(1 - siglevels(6:9))]';
   ncases   = [ncases 5000];
end
% Interpolate the quantile values for the actual sample size from the quantile
% values of the surrounding sample sizes; note that this method may slightly
% increase the size of a type I error for sample sizes which are not close to one
% of the tabulated cases; this problem could be mitigated by a response surface
% yet to be developed.
if lower ~= upper
   qus = lowerqus + (upperqus - lowerqus) * (n - ncases(lower)) / ...
         (ncases(upper) - ncases(lower));
else
   qus = lowerqus;
end
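% (Worked example with hypothetical numbers: for n = 400, which falls between the
% tabulated cases ncases(3) = 250 and ncases(4) = 500, the interpolation weight is
% (400 - 250)/(500 - 250) = 0.6, so QUS = LOWERQUS + 0.6*(UPPERQUS - LOWERQUS).)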
% Find the matching significance levels; at least one of the two factors equals 1
% (possibly both), so their product yields the overall one-sided significance level:
sig(i) = siglevels(5 - sum(w(i)<=qus(1:4))) * siglevels(5 + sum(w(i)>=qus(5:8)));
end
%%%%%%%%%%%%%%%%%%%%%%%%% Otherwise use standard-normal look-up %%%%%%%%%%%%%%%%%%%%%%%%%
else
qus = [norminv(siglevels(1:4)) norminv(1 - siglevels(6:9))]';
for i = 1 : length(w)
sig(i) = siglevels(5 - sum(w(i)<=qus(1:4))) * siglevels(5 + sum(w(i)>=qus(5:8)));
end
end
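% (Worked example with hypothetical numbers: suppose w(i) = 2.5 and the eight quantiles
% are qus = [-2.58 -2.33 -1.96 -1.65 1.65 1.96 2.33 2.58]'. Then sum(w(i) <= qus(1:4))
% is 0, so the first factor is siglevels(5) = 1, while sum(w(i) >= qus(5:8)) is 3, so
% the second factor is siglevels(8) = 0.010; the product gives a one-sided significance
% level of 1%.)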
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% REFERENCES:
%
% Kanzler, Ludwig (1998), "Very Fast and Correctly Sized Estimation of the BDS
%    Statistic", Oxford University, Department of Economics, working paper, available
%    on http://users.ox.ac.uk/~econlrk
%
% End of file.