
VERY FAST AND CORRECTLY SIZED

ESTIMATION OF THE BDS STATISTIC


Ludwig Kanzler
Christ Church and
Department of Economics
University of Oxford

1 February 1999

Postal: Ludwig Kanzler, Christ Church, Oxford OX1 1DP, England
E-Mail: ludwig.kanzler@economics.oxford.ac.uk
WWW: http://users.ox.ac.uk/~econlrk

Abstract
This paper is concerned with the study of some fundamental aspects of the BDS
test. Brock, Dechert, Scheinkman & LeBaron (Econometric Reviews, 1996) propose
this non-parametric tool as a test of the null hypothesis of an independently and identically distributed (i.i.d.) time series, with power against virtually all linear and nonlinear, stochastic and deterministic (chaotic) alternatives. Unfortunately, it is extremely processing-intensive and requires an efficient computer algorithm to be viably run
even on relatively small data sets. An algorithm is presented which is very fast, rather
simple and easily implemented in common programming environments; a version for
MATLAB is part of the paper. The algorithm overcomes a number of deficiencies of
the two most widely used BDS packages by Dechert and LeBaron.
Extensive Monte-Carlo simulations are conducted which show that the properties
of the BDS statistic are sensitive to the choice of embedding dimension and dimensional distance, and to sample size. Unless the choice parameters are set in accordance
with the recommendations emerging from the simulations, a statistical test for "iidness"
is bound to be badly sized in small samples and thus to yield misleading conclusions.
Tables of the small-sample distribution are offered for correctly sized testing. The
recommendations, tabulated quantile values and computer algorithms put forward will
hopefully help render the BDS test part of the standard econometric toolbox.
JEL subjects: C12, C13, C14, C15, C52, C63, C87
Keywords:
BDS statistic, computer algorithm, correlation integral, hypothesis
testing, i.i.d., model misspecification, non-linearity, random walk,
small-sample distribution, ultra-high-frequency data

Acknowledgements
Part of this paper was completed while I was staying at the Institute for
Monetary and Economic Studies at the Bank of Japan. I am very much indebted to the
Bank, and in particular to Jun Muranaga, Sachiko Kuroda Nakada, Makoto Ohsawa and
Tetsuya Yonetani, for their hospitality and for granting me access to their research
facilities. An earlier version of the paper was presented at a workshop of the Bank's
Institute. I was happy to receive helpful comments during and after my presentations
from seminar participants, in particular Fumio Hayashi of Tokyo University.
My thanks also extend to Blake LeBaron (now at Brandeis University) for
giving me the exclusive opportunity to beta-test his BDS programme compiled for
MATLAB, which spurred my interest in the BDS test to the extent that I decided to
devote this long paper to it. In the course of my research, Blake and I exchanged over
160 e-mail messages. The simulations would have taken me much more time if
Jörg Filthaut, my father and my wife had not kindly allowed me access to their
computing resources.
My particular gratitude goes to Peter Oppenheimer of Christ Church, Oxford,
for supervising this research as part of my D.Phil. thesis, A Study of the Efficiency of
the Foreign Exchange Market through Analysis of Ultra-High Frequency Data. Peter
afforded me valuable support through his strong sense of style and presentation of
analytical material. All remaining errors are, of course, my sole responsibility.
Financial support from the Economic and Social Research Council (ESRC), and
Christ Church is gratefully acknowledged.

Contents

1. Introduction
2. A Short Description of the BDS Test
3. A Fast, Simple and Portable Algorithm for the BDS Statistic          13
   3.1 Implementing the least number of evaluations                     15
   3.2 Estimation of correlation integrals of dimension 1               17
   3.3 Estimation of correlation parameter kn                           17
   3.4 Estimation of correlation integrals of dimension m               20
4. The Small-Sample Properties of the BDS Distribution and
   Recommendations on the Choice of Parameters ε and m                  22
   4.1 Setup of Monte-Carlo simulations                                 23
   4.2 Results of Monte-Carlo simulations                               28
   4.3 Explaining the finite-sample properties                          33
   4.4 Distribution-free choice of parameter ε                          36
5. Some Pitfalls in Statistically Efficient and Consistent Estimation
   of the BDS Estimators                                                40
   5.1 Estimation without proof of consistency                          40
   5.2 Inefficient estimation due to V statistics                       42
   5.3 Inefficient estimation due to maxdim-dependence                  42
   5.4 Empirical differences and recommendations on usage               44
6. A Critical Evaluation of Some Applications                           46
   6.1 Inference in small samples                                       48
   6.2 Compass-rose patterns                                            49
   6.3 Tabulating scrambled real-world data                             51
7. Summary of Recommendations                                           52
References                                                           54-59
Appendix A: Figures                                                 A1-A41
Appendix B: Tables                                                  B1-B19
Appendix C: Software (MATLAB Code)                                  C1-C17

Contents of Appendices

Appendix A: Figures
Fig. 1 (Panels 1-6)    Cumulative Distributions of Some Standardised Normal
                       Random Number Samples                            A2
Fig. 2 (Panels 1-30)   Comparison of Cumulative BDS Distributions
                       across m                                         A3
Fig. 3 (Panels 1-24)   Comparison of Cumulative BDS Distributions
                       across n                                         A18
Fig. 4 (Panels 1-36)   Comparison of Cumulative BDS Distributions
                       across ε/σ                                       A22
Fig. 5 (Panels 1-4)    Cumulative Distribution of Correlation Integral c1
                       in Normal Samples                                A28
Fig. 6 (Panels 1-18)   Cumulative Distribution of BDS Statistics Based on
                       Full-Sample U Statistics                         A30
Fig. 7 (Panels 1-21)   Comparison of LeBaron's Cumulative BDS Distributions
                       across maxdim                                    A33
Fig. 8 (Panels 1-30)   Comparison of the Standard Cumulative BDS
                       Distribution with those of Dechert and LeBaron   A37

Appendix B: Tables
Table 1                Run-Time and Memory Requirements of the BDS Test B2
Table 2 (Panels 1-30)  Tables of the BDS Distribution                   B3
Table 3                Distribution of Correlation Integral of
                       Dimension 1 in Normal Samples                    B19

Appendix C: Software (MATLAB Code)
Prog. 1 BDS.M:         Brock-Dechert-Scheinkman Test for Independence   C2
Prog. 2 BDSSIG.M:      Significance of the BDS Statistic in Small
                       Samples                                          C11

***

All econometric MATLAB functions developed for this thesis and copies of the
appendices in portable-document format (PDF) can be downloaded from the author's
homepage:
http://users.ox.ac.uk/~econlrk

1. Introduction
This paper is concerned with the study of some fundamental aspects of the BDS
test, so called after its original authors William Brock, Davis Dechert and José
Scheinkman, who developed it in 1986. After their first working paper had appeared
in 1987, Blake LeBaron joined the team to develop viable Fortran and later C software
to compute the BDS statistic (LeBaron, 1997a), to examine some finite-sample properties (see Brock, Hsieh & LeBaron, 1991) and to apply the test to financial time series
(Scheinkman & LeBaron, 1989). The revised working paper was eventually published
by Brock, Dechert, Scheinkman & LeBaron (henceforward BDSL) in 1996. Section 2
briefly reviews intuition and the main equations of the test.
This test for independence based on estimation of correlation integrals at various
dimensions (as explained in Section 2) has power against virtually all types of linear
and non-linear departure.1 While estimation of the BDS statistic is non-parametric, the
test statistic asymptotically follows a normal distribution with zero mean and unit
variance and therefore lends itself to easy hypothesis testing.
Moreover, in principle no distributional assumptions need to be made about the
data under the null hypothesis other than that it is i.i.d. For example, unlike the bispectrum test or the bootstrap-linearity test,2 the BDS test does not depend on the
existence of higher moments. Given that excess/lepto-kurtosis, also called "fat
tails", has been documented to be almost a standard feature of financial time series,3

1. Dechert (1988b) considers two types of theoretical exceptions which play no role in practice.

2. The references are Hinich (1982), Hinich & Patterson (1985, 1989, 1990), Ashley et al. (1986),
Brockett et al. (1988) and Barnett & Hinich (1993) for the bi-spectrum test, and Ashley & Patterson (1986)
for the bootstrap-linearity test. See also footnote 48 (page 48) for four other non-linearity tests.

3. With respect to the ultra-high-frequency exchange-rate data distributed by Olsen & Associates, see
the papers by Dacorogna et al. (1995), Müller et al. (1996) and Guillaume et al. (1997), as well as the
paper by Danielsson & de Vries (1998). With respect to other financial data, see DuMouchel (1983), Akgiray & Booth (1988), Hols & de Vries (1991), Jansen & de Vries (1991), Loretan (1991), Phillips & Loretan
(1992), Koedijk et al. (1992), Koedijk & Kool (1994), Loretan & Phillips (1994), and Kearns & Pagan
(1997), among others.
Most of the above studies probably over-estimate the degree of what is called "fat-tailedness", since
the commonly used tail-index estimators all appear to be severely biased in small samples, as documented
by McCulloch (1997), Pictet et al. (1996), and Huisman et al. (1997). Nonetheless, even the latter authors'
improved estimator finds evidence of distributional instability in exchange rates; see Huisman et al. (1997,
1998).

L. Kanzler: BDS Estimation

the importance of this property is not to be underestimated (see also Hinich & Patterson, 1993).
The BDS test can be run over the residuals of a regression and can thus be
viewed as a test for model misspecification. But it can also be interpreted as a test for
non-linearity, if appropriately used in conjunction with ARIMA modelling. In a first
step, the best-fitting ARIMA(p,d,q) model is determined and fitted to the data, thus eliminating all linearity from the data. Only in a second step is the test applied, by running
it on the residuals of that ARIMA model, which by construction must be linearly independent, so that any dependence found in the residuals must be non-linear in nature.4
It is important to realise that the BDS test, like most other tests with power
against non-linear stochastic dependence and/or non-linear deterministic dependence,
is not per se a non-linearity and/or chaos test.5 However, when applied to linearly
whitened data, it becomes a test with power against virtually any type of stochastic and
deterministic non-linearity.
BDS testing may thus help identify the existence of non-linear dependence, but
not its type. To obtain more information about the type of non-linearity prevalent in
the data, one would need to model likely non-linear processes directly. For example,
given a significant ARCH statistic (Engle, 1982), it would be a good idea to model
ARCH, using the appropriate variants. The BDS test could play an important role in
this, as it can be used as a powerful test for misspecification of ARCH models. Moreover, it could also indicate the existence of non-ARCH non-linearity (in the residuals
of an appropriate ARCH model).
Unfortunately, despite the fact that the BDS test is theoretically robust against
the inclusion of nuisance parameters in the original regression, additive GARCH residuals appear for an as yet unknown reason to bias the BDS statistic (Brock et al., 1991,
Hsieh, 1991, and private correspondence with Blake LeBaron, October 1998), therefore

4. When I talk of "non-linearity", "non-linear correlation" or "non-linear dependence" without reference
to particular moments, I include linear dependence in higher conditional moments. This may sound slightly
confusing, but is common practice and often useful as short-hand.

5. Unfortunately, this is frequently misunderstood; see page 51, footnote 50, for an example.


requiring the significance of the BDS statistic to be evaluated through bootstrapping
rather than by looking up asymptotic or small-sample tail values.6 Since computation of
the BDS statistic itself is already very processing-intensive, bootstrapping the BDS
distribution through repeated simulation of the BDS statistic on series re-sampled
randomly (with replacement) from the original series puts an enormous burden on the
computer. (Note also that, quite apart from BDS testing, fitting a single ARCH
model through a maximum-likelihood method or a generalised method of moments
can be computationally quite burdensome, in contrast to running an OLS regression.)
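The bootstrap procedure alluded to here can be sketched as follows. This is an illustration in Python rather than the paper's MATLAB, and `acf1` is a cheap stand-in statistic of my own choosing; in the application above one would plug in the BDS statistic itself, which is precisely what makes the exercise so expensive:

```python
import numpy as np

def bootstrap_pvalue(series, statistic, n_boot=200, seed=0):
    """Two-sided bootstrap p-value: re-sample the series randomly with
    replacement and recompute the statistic on each re-sampled series."""
    rng = np.random.default_rng(seed)
    observed = statistic(series)
    draws = [statistic(rng.choice(series, size=len(series), replace=True))
             for _ in range(n_boot)]
    return np.mean(np.abs(draws) >= abs(observed))

# Stand-in scalar statistic (lag-1 autocorrelation), used here only so the
# sketch is self-contained; it is NOT the BDS statistic.
def acf1(x):
    return np.corrcoef(x[:-1], x[1:])[0, 1]
```

Each bootstrap replication costs one full evaluation of the statistic, which is why repeating an order-n² BDS computation a few hundred times is so burdensome.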


Running the BDS test is far from straightforward, and this may explain why it
has found relatively little application to date.7 The test is extremely computationally
intensive, and special algorithms are needed to make implementation viable. Two software packages by Dechert (1988a) and LeBaron (1997a) are available which are sufficiently fast for computation. Unfortunately, both are generally difficult to integrate into
existing programming environments and suffer from a number of deficiencies in terms
of statistical consistency and efficiency (described in Section 5). An algorithm which
employs statistically consistent and efficient estimators and which is of sufficient generality, simplicity and speed to be implementable in common programming languages
is desirable. An algorithm which meets these requirements is offered in Appendix C
(Programme 1) and described in some detail in Section 3.
Another difficulty in conducting the BDS test arises from the question of how two
choice parameters, namely the embedding dimension m and the size of dimensional
distance ε (see Section 2), should be set. If the data is truly i.i.d., the BDS test asymptotically fails to reject independence for any combination of m and ε, in line with the
standard-normal probabilities. In Section 4, I show that the same cannot be generally
6. For the original contributions on bootstrapping, see Efron (1979, 1982). But see also the book by Efron & Tibshirani (1993), and the surveys by Hinkley (1988), Diciccio & Romano (1988) and Li & Maddala
(1996) with the ensuing discussion of their articles in the same journal numbers. For recent research, see,
for example, Berkowitz & Kilian (1996), and Andrews & Buchinsky (1997).

7. BDSL (1996) list a number of studies using the BDS test, and some additional ones can be found in
Table 1 of Abhyankar et al. (1997). Apart from Campbell et al. (1997), who devote a single page to the test,
no textbook has come to my attention which includes a description of the test. See also Section 6 below for
some ill-understood implementations of the test.


said for small samples, as the size of the error of falsely rejecting the null hypothesis
is excessively large for some m, ε. I run extensive Monte-Carlo simulations with the
objective of examining the small-sample properties of the BDS distribution with varying choice parameters ε and m and sample size n. A selection of the results is graphed
and tabulated in Appendices A and B.
Some recommendations on the optimal choice of ε emerge. Unless ε is chosen
sufficiently large, evaluation of the significance level of the BDS statistic will give
misleading results. For low dimensions, the size of the BDS test is maximised when ε
is chosen such that the correlation integral of dimension 1, c1,n, lies around 0.7 (which
corresponds to ε around 150% of the standard deviation in a normally distributed sample).
Higher dimensions require the value of ε to be raised further. These recommendations
are independent of the actual distribution of the sample. Moreover, the significance of
the BDS statistic calculated on samples with 500 or fewer observations should be
evaluated using the quantile values of the small-sample BDS distribution tabulated in
Appendix B, Table 2 (and these are also incorporated in Programme 2 of Appendix C).
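The correspondence between c1 around 0.7 and ε around 150% of the standard deviation can be checked analytically: for i.i.d. normal data, the difference Xi − Xj is itself normal with variance 2σ², so c1 has a closed form. A quick check in Python (the function name is mine):

```python
from math import erf

def c1_normal(eps_over_sigma):
    """c1 = P(|Xi - Xj| <= eps) for i.i.d. N(mu, sigma^2) data.

    Xi - Xj ~ N(0, 2 sigma^2), so c1 = 2*Phi(eps/(sigma*sqrt(2))) - 1,
    which simplifies to erf(eps / (2*sigma)).
    """
    return erf(eps_over_sigma / 2.0)

print(c1_normal(1.5))  # eps = 1.5 sigma gives c1 of roughly 0.71
```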
In Section 5, I conduct comparisons between Dechert's, LeBaron's and my own
programme. Significant differences in BDS statistics are revealed for some choices of
n, m and ε, which are the result of differing methods for estimating correlation integrals. I find that Dechert and LeBaron employ estimators for some of the correlation
integrals which are inferior in statistical efficiency to the standard estimators, and that
Dechert uses an estimator for which statistical consistency has not been proven.
LeBaron's and Dechert's methods yield BDS distributions distinctly different from
mine when the sample size is small, when the embedding dimension considered is high
and when the distance parameter chosen is small relative to the recommended choice
for ε.
Both BDS programmes are better avoided on samples of size 500 or smaller,
unless bootstrapping is employed to evaluate the significance of the BDS statistic.
However, on large samples and for reasonable values for m and , all three methods
appear to give statistically indistinguishable results.


In most applications of the BDS test, parameter ε has actually been chosen outside the range of values which yield the best small-sample properties. In Section 6, I
briefly review a number of such studies and check whether either reference to the tabulated small-sample quantile values or setting ε within the recommended range would
change their respective findings qualitatively.
Section 7 summarises my recommendations for very fast and correctly sized
estimation of the BDS statistic.

2. A Short Description of the BDS Test


The BDS statistic is derived, and its asymptotic distribution and power against
non-i.i.d. alternatives are proven, in great formal detail in BDSL (1996), but an intuitive
explanation is all that is needed to understand the test sufficiently well to be able to
use it. The following explanation generalises LeBaron's (1997b) illustration of the test.
Consider a time series x of an infinite number of observations following some
distribution F:

    x ~ F                                                                (1)

Choose an arbitrary size of dimensional distance ε, the only condition being that it
must not exceed the spread of the time series (if it exists):

    0 < ε < max(x) − min(x)                                              (2)

Now consider the probability of any pair of observations Xi, Xj lying within ε of each
other:

    P_1 ≡ P(|X_i − X_j| ≤ ε)        for any integers i ≠ j               (3)

A similar relationship in dimension 2 is defined by any two observations and their
respective neighbours directly preceding them. Here the probability of closeness is
defined by the probability of any two observations being close to each other as well
as their two predecessors being close to each other, i.e. the probability of a "history"
of two observations being within ε of each other:

    P_2 ≡ P(|X_i − X_j| ≤ ε, |X_{i−1} − X_{j−1}| ≤ ε)    for any integers i ≠ j    (4)

It is clear that the probabilities for the two dimensions differ. However, if (and almost
only if) the time series is i.i.d., there is a well-defined relationship between the two:
the probability of a two-observation history being close is equal to the square of the
probability of any two observations being close:

    P_2 = P_1²        if x ~ F (i.i.d.)                                  (5)

The power relationship generalises to any dimension, and the BDS test for embedding
dimension m is a test of the null hypothesis that the probability for dimension m equals
the probability for dimension 1 raised to the power m:

    H_0: P_m = P_1^m        H_1: P_m ≠ P_1^m                             (6)

Testing the above null hypothesis is almost equivalent to testing for "iidness" against
all other alternatives (see also page 5, footnote 1):

    H_0: x ~ F (i.i.d.)                                                  (7)

To obtain the test statistic, probability Pm is estimated by the correlation integral cm,n(ε)
in finite space.8 Let I be the Heaviside function, so that variable I(Xi, Xj) assumes the
value 1 if observations Xi and Xj are within distance ε of each other and 0 otherwise:

    I(X_i, X_j) = 1    if |X_i − X_j| ≤ ε
                  0    otherwise                                         (8)

8. The correlation integral was brought to prominence by Grassberger & Procaccia (1983a, 1983b).


On a sample of n observations, the correlation integral for dimension m is calculated
as the average of all available products of m-histories:9

    c_{m,n}(ε) = [2 / ((n−m+1)(n−m))] Σ_{s=m}^{n} Σ_{t=s+1}^{n} Π_{j=0}^{m−1} I(X_{s−j}, X_{t−j})    (9)

As shown in BDSL (1996), the BDS statistic for embedding dimension m and
dimensional distance ε is estimated consistently on a sample of n observations by:10

    w_{m,n}(ε) = √(n−m+1) [c_{m,n}(ε) − c_{1,n−m+1}(ε)^m] / σ_{m,n}(ε)   (10)

where the estimated variance of c_{m,n}(ε) − c_{1,n−m+1}(ε)^m is given by:

    σ²_{m,n}(ε) = 4 [ k^m + 2 Σ_{j=1}^{m−1} k^{m−j} c^{2j} + (m−1)² c^{2m} − m² k c^{2m−2} ]    (11)

Parameter c is the first-dimensional correlation integral:

    c ≡ c_{1,n}(ε)                                                       (12)

9. Equation (9) is equivalent to

    c_{m,n}(ε) = [2 / ((n−m+1)(n−m))] Σ_{s=1}^{n−m} Σ_{t=s+1}^{n−m+1} Π_{j=0}^{m−1} I(X_{s+j}, X_{t+j})

LeBaron (1997b) and all other BDS-related papers I have seen specify a third equation instead, having the
summations running from s = 1 to n and t = s + 1 to n respectively and consequently averaging by n(n−1).
However, for any j ≥ s, I(X_{s−j}, X_{t−j}) is not defined, as the start of the time series has been reached for
j = s − 1. So only the two equations cited here can be implemented in practice.

10. LeBaron (1997b), Barnett et al. (1997) and many others state this equation as

    w_{m,n}(ε) = √n [c_{m,n}(ε) − c_{1,n}(ε)^m] / σ_{m,n}(ε)

giving the misleading impression that the correlation integral of dimension 1 is to be estimated over the full
sample and that the ratio term is to be multiplied by the square root of the full sample size. See also Section
5.


Parameter k is the probability of any triplet of observations lying within distance ε of
each other and is thus to be computed as:

    k_n(ε) = [2 / (n(n−1)(n−2))] Σ_{t=1}^{n} Σ_{s=t+1}^{n} Σ_{r=s+1}^{n}
             [ I(X_t, X_s) I(X_s, X_r) + I(X_t, X_r) I(X_r, X_s) + I(X_s, X_t) I(X_t, X_r) ]    (13)
There are actually many ways of estimating k and c1 consistently; however, only the
above equations represent the most efficient estimators (see also Sub-Section 5.2
below).
Finally, BDSL (1996) show that the BDS statistic follows the standard-normal
distribution asymptotically:11

    lim_{n→∞} w_{m,n}(ε) ~ N(0, 1)        for any m, ε                   (14)

In principle, running the test is then straightforward: fix distance parameter ε and
embedding dimension m, compute c1,n and kn according to equations (9) and (13), use
these estimates to compute σm,n as defined by (11), similarly use (9) to compute cm,n
and c1,n−m+1, and plug all these estimates into equation (10) to obtain the BDS statistic.12 The significance of the null hypothesis is then evaluated against the standard-normal distribution.
In practice, it is far from easy to implement the above equations in such a way
as to make obtaining the BDS statistic viable in terms of computational speed. Section
3 deals with this issue by offering a fast algorithm which is relatively easy to implement. Moreover, the question arises which settings one should consider for parameters
ε and m. Section 4 aims to provide some guidance. And finally, Section 5 uncovers
some differences in statistical efficiency of competing BDS software and examines
whether their deficiencies should be of concern.

11. Asymptotic normality was only proven formally by de Lima (1992) and Lai et al. (1994).

12. In the remainder of this chapter, dependence on parameter ε is suppressed for the benefit of notational clarity.


3. A Fast, Simple and Portable Algorithm for the BDS Statistic


Computing the BDS statistic through literal implementation of the above equations would be extremely processing-intensive. For example, to evaluate the triple sum
for k, three I's would have to be evaluated, multiplied with each other and added together a total of n(n−1)(n−2) times (the results of which would then be averaged to
yield an estimate of k). For cm,n, m(n−m+1)(n−m) evaluations of I would be needed,
similarly (n−m+1)(n−m) runs for c1,n−m+1 plus another n(n−1) evaluations for c1,n.
As a matter of fact, the computational cost of obtaining k, which is of order n³, would
completely outweigh the computation of all other correlation integrals. Nonetheless, it
should be recognised that even computations of order n² are burdensome. By comparison, a Box-Pierce Q test involves only order-n computations. A literal implementation
of the BDS test on a sample of reasonable size would simply not be practical, not even
on the most powerful computers available to average users today.
So algorithms are needed which yield the same BDS statistic but use far fewer
computations than the above and implement the remaining burden in the most efficient
way possible. In fact, two very fast software programmes written by two of the authors
of the BDS test have been publicly available for ten years. While Dechert's (1988a)
BDS package is already impressively fast, LeBaron's (1997a) programme requires only
a fraction of the run-time of the former. Many studies using the BDS test appear to
have used one of these programmes.13
Yet while their respective speed is most impressive, both programmes carry the
drawback that they are platform-dependent. Dechert compiled his programme for the
DOS operating system,14 and since it also runs in a DOS box under Windows 95/98
and Windows NT, the BDS test can be used on most personal computers. But there is
a drawback: the input data has to be provided in correctly formatted ASCII files, and
13. For example, Guillaume et al. (1995) rely on LeBaron's programme, and Peters (1994) mentions
Dechert's software. Unfortunately a number of authors fail to acknowledge the source of the algorithm employed to run their BDS tests. Given the difficulty involved in writing a fast algorithm for the test, it is hard
to believe that they used their own proprietary algorithm without saying so and without offering their code
to the academic community.

14. The source code was written in Turbo-Pascal, but is unfortunately not available publicly.


since the programme is menu- rather than command-driven, it is not possible to integrate the test into other software packages. This limitation is of little importance when
evaluating only a few series, but when the test is to be run repeatedly, for example on
bootstrapped data, Dechert's package appears unsuitable.
LeBaron wrote his programme in the C language, and it is the C source code
which is in the public domain. His programme can thus be integrated directly with
other C programmes, and it can also be compiled for other environments, but of course
this requires access to a C-language interpreter or an appropriate compiler. Most of the
statistical packages and programming interpreters used by economists do not have the
application-programme interface (API) needed to integrate external routines such as
LeBaron's programme. Even though the potential for integration is greater, many
interested researchers must therefore find it even more difficult to make use of
LeBaron's code than Dechert's software.
It would therefore be useful to have BDS algorithms available which could be
easily implemented in any of the commonly used programming environments. With
this objective in mind, I have developed a fast BDS algorithm in MATLAB (MathWorks, 1997b).15 My algorithm is sufficiently general to be quite easily translated into
other high-level programming environments (such as GAUSS, Maple, Mathematica,

15. MATLAB is a high-performance language for technical computing, which has become popular
among applied economists. MATLAB integrates computation, visualisation and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation, as opposed
to the idiosyncratic computer code of many competing products. It is also an interactive system whose basic
data element is an array that does not require dimensioning. This enables developing solutions to technical
computing problems characterised by matrix and vector formulations in a fraction of the time it would take
to write a programme in a scalar non-interactive language such as C or Fortran or the scalar interactive
environments which economists typically use to analyse small data sets.
For comparisons of MATLAB (albeit in the outdated versions 4.0 and 4.2c1) with GAUSS, until
recently the programming environment used most widely among economists, see Rust (1993) and Küsters
& Steffen (1996). Many of the benefits of using MATLAB can also be deduced from a paper by Belsley
(1998), which, although not the author's intention, presents a number of hair-raising disadvantages of
the competing Mathematica environment, none of which incidentally apply to MATLAB.
One drawback of MATLAB is that it was not originally intended to be used for econometric analysis and therefore does not include any econometric functions whatsoever. There are not (yet) any add-on
functions available either commercially or in the public domain to enhance the MATLAB functionality,
except for the pricey Statistics Toolbox (MathWorks, 1997a), which offers hardly more than the most basic
statistical functions. However, I have programmed a number of econometric functions in MATLAB, and
these can be downloaded from my homepage at http://users.ox.ac.uk/~econlrk.


Ox, or S-Plus) or indeed low-level languages (e.g. C). My algorithm shares a number
of features with LeBaron's C code, but its core is far simpler in design and thus easier
to understand and implement.16
My algorithm would also be faster than his if translated into C. In practice, however, a compiled version of his code computes the BDS statistic in a fraction of the
time taken by my uncompiled programme in MATLAB (see Table 1). While in this
respect LeBaron's compiled software appears preferable, it is only my programme
which is based on the most efficient estimators of the relevant correlation integrals.
The significance of this fact is explored in Section 5.
The source code for my programme can be found in Appendix C and is also
available publicly at http://users.ox.ac.uk/~econlrk. The remainder of this
section explains the main ingredients of the algorithm. Some aspects of the full algorithm are too platform-specific to be of general interest, so they do not receive further
attention here. For example, I have actually developed a combination of six different
algorithms for computing cm,n which differ in speed and memory requirements. The
programme chooses the algorithm which maximises speed given available memory.
The syntax for calling the programme from the MATLAB command prompt is
explained in the header of the source code, and further explanations not covered by this
section can be found as comments interspersed with the code.

3.1 Implementing the least number of evaluations


The four different correlation integrals entering the BDS ratio (cm,n, c1,n−m+1, c1,n
and kn) all require the following relationship to be evaluated:

    I(X_i, X_j) = 1    if |X_i − X_j| ≤ ε
                  0    otherwise                                         (15)

The computation of the BDS statistic can thus be speeded up considerably by performing each evaluation only once and using the result for all four integrals. Both Dechert's
16. LeBaron (1997b) describes some of the features of his code, but is too language-specific and too
short on detail to be of much use to users who are not C-language specialists.


and LeBaron's algorithms do not exploit this potential to the full, as they evaluate the
above once for the V-statistic counterparts of c1,n and kn and then again for c1,n−m+1
and cm,n. This is the first reason why a compiled version of my programme would be
faster than their software.
Next, it is important to realise that all correlation integrals are defined only for
unique combinations of different integers i and j, so equation (15) needs to be evaluated only for all integers i < j ≤ n (n−m+1 in the case of c1,n−m+1). The correlation integrals are defined as U statistics, as opposed to V statistics. U statistics include only
unique combinations; V statistics include all combinations, including "own-points".17 So
evaluation of the double sum of a U statistic requires only up to ½n(n−1) calculations, while arriving at the corresponding V statistic requires n², i.e. more than twice
as many evaluations. It is important to note that the required ½n(n−1) evaluations still
place an enormous burden on the computer.18
Dechert and LeBaron use the computationally and statistically inefficient V statistics to compute c1,n and kn. This is the second reason why my programme would be
faster after compiling it appropriately. In fact, V statistics are not only computationally
inefficient, they are also statistically inefficient in the context of the BDS test; the latter
aspect is considered in some detail in Section 5.
It is helpful to picture the combinations of observations as a two-dimensional
matrix of elements I(Xi, Xj), with i running from 1 to n in vertical direction and j running from 1 to n in horizontal direction. Elements I(Xi, Xj) assume either value 1 or
value 0, depending on whether or not observations Xi and Xj are close. Evaluating
the full matrix would correspond to computing a V statistic. Evaluating only the upper

17 On U and V statistics, see Denker & Keller (1983) and the relevant sources cited therein. LeBaron (1997b) claims that "[t]he only difference between U and V statistics is that the own points are counted in V-statistics". While it is true that all points for which j < i are only replications of all points for which i < j, and thus only points for which i = j are truly new, the inclusion of replications in V statistics still means that unique combinations receive twice the weight vis-à-vis own-points, and it also means that the computational burden is almost twice as large. See also below.

18 High-level command interpreters are best instructed to perform the required operations vector-wise or even matrix-wise, so it may appear as if only n runs are, or even only one run is, performed. But in reality (i.e. at low level), this nonetheless requires ½n(n−1) computations.


triangle, or alternatively only the lower triangle, of the matrix would correspond to calculating a U statistic. Here is an example:
i\j | 1  2  3  4  5  6  7
----+---------------------
 1  | 1  1  0  0  1  0  1
 2  | 1  1  1  1  1  0  0
 3  | 0  1  1  0  1  0  1
 4  | 0  1  0  1  1  1  0
 5  | 1  1  1  1  1  0  1
 6  | 0  0  0  1  0  1  1
 7  | 1  0  1  0  1  1  1

3.2 Estimation of correlation integrals of dimension 1


Correlation integrals c1,n and c1,n−m+1 are obtained by averaging all 1s and 0s in rows 1 to n and rows m to n respectively (alternatively, this could be done column-wise).
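A sketch of this step (in Python rather than the paper's MATLAB, with boolean arrays standing in for the packed bit rows of the actual programme; the function name and the `first_row` argument are mine):

```python
import numpy as np

def c1(x, eps, first_row=0):
    # Correlation integral of dimension 1 as a U statistic: the average of
    # I(X_i, X_j) over the upper triangle. Restricting the averaging to
    # rows first_row, ..., n-1 (first_row = m-1 in 0-based indexing, i.e.
    # rows m to n of the text) yields c_{1,n-m+1}.
    x = np.asarray(x, dtype=float)
    n = len(x)
    hits = 0
    pairs = 0
    for i in range(first_row, n - 1):
        row = np.abs(x[i] - x[i+1:]) <= eps   # I(X_i, X_j) for all j > i
        hits += int(row.sum())
        pairs += n - 1 - i
    return hits / pairs
```

Each row of the imaginary matrix is evaluated exactly once, in line with the requirement of Sub-Section 3.1.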

3.3 Estimation of correlation parameter kn


Unfortunately, kn cannot be obtained as easily. kn is given by a triple sum over
three products of I-values, and it is at first sight not apparent how k can be read off the
matrix other than by implementing these sums and products literally. This, however,
would be computationally extremely laborious even on relatively small series. To
understand what k actually estimates, and how this estimation can be made computationally more efficient, refer to the indices of the aforementioned imaginary matrix
I(Xi, Xj):
(1,1) (1,2) (1,3) (1,4) (1,5) (1,6) (1,7)
(2,1) (2,2) (2,3) (2,4) (2,5) (2,6) (2,7)
(3,1) (3,2) (3,3) (3,4) (3,5) (3,6) (3,7)
(4,1) (4,2) (4,3) (4,4) (4,5) (4,6) (4,7)
(5,1) (5,2) (5,3) (5,4) (5,5) (5,6) (5,7)
(6,1) (6,2) (6,3) (6,4) (6,5) (6,6) (6,7)
(7,1) (7,2) (7,3) (7,4) (7,5) (7,6) (7,7)


Consider equation (13) on page 12 for t = 1, s = 2, r = 3. By mapping all points which fall below the first diagonal, i.e. in the lower triangle, symmetrically along the first diagonal into the upper triangle, it becomes apparent that the term

    ⅓ { I(1,2)·I(2,3) + I(1,3)·I(3,2) + I(2,1)·I(1,3) }

corresponds to the average product of elements which can be formed from the pairs in the triangle (1,2), (1,3) and (2,3). As r increases, co-ordinate 3 is replaced by 4, 5, 6, etc., so the triangle is stretched towards the right of the table. As s increases, the starting point moves to the right and the triangle stretches further across the rows towards the bottom. And as t increases, the starting point moves towards the bottom. All such triangles lie within the upper triangle of the table. The average of all these triangle averages is kn.
There is, however, no need to take averages of averages: one can easily compute
all individual products first before averaging them all in one go. Thus the problem
boils down to multiplying each element in the upper triangle with all other elements
to the right and towards the bottom and averaging the results. This amounts to multiplying each element only once by the sum of all qualifying elements:
    kn  =  [2 / (n(n−1)(n−2))] Σ_{t=1}^{n−1} Σ_{s=t+1}^{n} I(Xt, Xs) · { Σ_{q=s+1}^{n} [ I(Xt, Xq) + I(Xs, Xq) ]  +  Σ_{r=t+1}^{s−1} I(Xr, Xs) }        (16)

where the braced factor collects all elements qualifying for anchor (t, s): the remainder of row t, the remainder of row s, and the part of column s lying below row t.

Corresponding to the above U statistic, there is a V statistic which simplifies to:19


    knV  =  (1/n³) Σ_{t=1}^{n} [ Σ_{s=1}^{n} I(Xt, Xs) ]²        (17)

The difference between the two statistics is that knV includes all the products which can be formed from all the elements in the entire matrix, whereas kn averages only the unique combinations of products in the upper triangle. So, while each product entering the summation for kn is unique, knV contains many duplications among the far more numerous products which are summed.

19 Incidentally, this is the equation by which Dechert and LeBaron compute knV. Equation (17) appears in Dechert (1994), but without any derivation whatsoever.
There are three types of duplication. The first arises from taking products with any elements in a north-westerly direction, but within the upper triangle. The second comprises all the products involving the lower triangle. And the third comprises all products among pairs stretching across the diagonal. It is thus clear that all products are evenly duplicated, so averaging over all elements is equivalent to averaging over the unique products in the upper triangle alone.

However, knV also encompasses own-products and products formed with one element on the diagonal. Both render the statistic statistically inefficient and should be adjusted for. Own-products occur twice, once with the element itself and once with the image of the element across the diagonal. Products involving one diagonal element occur only once. In both cases, the product value is equal to the value of the (non-diagonal) element. However, all diagonal elements themselves mirror onto themselves and should not be triple-counted. Their product value is nonetheless equal to their own value.

The total adjustment required is thus three times the value of all elements in the table less twice the value of the diagonal. This adjustment reduces by 3n² − 2n the number of products by which the adjusted sum of bits is to be averaged:
    kn  =  [1 / (n(n−1)(n−2))] { Σ_{t=1}^{n} [ Σ_{s=1}^{n} I(Xt, Xs) ]²  −  3 Σ_{s=1}^{n} Σ_{t=1}^{n} I(Xs, Xt)  +  2n }        (18)

kn may be relatively easily computed by summing the squares of the sums over each
row of full length n in the matrix, adjusting the sum and averaging the result as shown
above.
The remaining problem is that summation is over the full length of each row in
the matrix, i.e. requires the left-hand part of each row which falls into the lower
triangle and onto the diagonal in addition to the right-hand part of each row in the
upper triangle. It may appear that the matrix of I's should after all be evaluated as a V statistic, even though this would more than double the computational burden. Fortunately, this is not needed.20
The missing left-hand part of each row is given by its mirror-image across the diagonal, i.e. by the column based on the diagonal value on which the row is anchored. For example, the entire row 3 in the above table is given by the first two elements in column 3, (1,3) and (2,3); an element of value 1 on the diagonal, (3,3); and the remaining elements of row 3, (3,4), (3,5), (3,6) and (3,7). In practice, c1,n and kn are therefore best computed by summing over rows and columns while running the evaluation for the upper triangle of matrix I.
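This computation can be sketched as follows (a Python stand-in for the paper's MATLAB routine; boolean arrays replace the packed bit-words, and the function name is mine). Each upper-triangle row is evaluated once; its entries feed the row sum directly and, via the mirror-image, the column sums, after which kn follows from equation (18):

```python
import numpy as np

def k_stat(x, eps):
    # U-statistic estimate of k via equation (18): evaluate I(X_i, X_j)
    # only for i < j, but accumulate the full row sums of the implied
    # symmetric matrix (diagonal entries are always 1).
    x = np.asarray(x, dtype=float)
    n = len(x)
    row = np.ones(n)                           # start with the diagonal 1s
    for i in range(n - 1):
        close = np.abs(x[i] - x[i+1:]) <= eps  # I(X_i, X_j) for j > i
        row[i] += close.sum()                  # right-hand part of row i
        row[i+1:] += close                     # mirrored left-hand parts
    total = row.sum()                          # sum over the full matrix
    return (np.dot(row, row) - 3.0 * total + 2.0 * n) / (n * (n-1) * (n-2))
```

The squares of the full-length row sums are formed only after the upper-triangle evaluation is complete, so no element of the matrix is ever computed twice.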

3.4 Estimation of correlation integrals of dimension m


The amount of memory required to store a matrix of order n² poses another problem. It is important to store these ½n(n−1) bits efficiently. This is best done by packing several bits into one word, thus reducing the amount of storage required by a factor equal to the number of bits per word. Such bit-packing can be done both in low-level and in high-level languages. A low-level implementation would make direct use of the computer's hardware. A high-level implementation such as mine translates an array of bits into integer representation. For example, the bit array 1 0 0 1 1 0 1 corresponds to
20 Because Dechert and LeBaron are obviously not aware of the following trick, they instead attempt to cut the processing time by implementing a method due to Theiler (1990) to obtain c1 and k. If the time series is sorted first, then all "close" observations are found in a cohesive block around a given observation. So one only needs to search for the first sorted observation lying outside distance ε of the given observation to evaluate equation (15) for all ½n(n−1) unique combinations of i and j. The effective number of evaluations therefore depends on the proportion of observations actually being "close", and this proportion is given by c1,n. If c1,n is small, pre-sorting has the potential of reducing the computational requirements significantly. If it is close to 1, the potential gains are negligible.

There is, however, a cost exceeding any potential savings: to compute cm,n for m > 1, one needs to revert to the unsorted series, and, as is argued in the main text, to speed up the computation of cm,n significantly, one needs to be able to build on the above matrix, which can only be derived from the unsorted series. So counting on the sorted series provides no relief from evaluating the unsorted series as well. On the contrary, it places an additional burden on the computer, and this burden is even greater when the cost of sorting is taken into account.

The Theiler sort is thus not worth performing as part of an otherwise optimised algorithm. The reason why Dechert and LeBaron still benefit from it is that, unaware of the trick described in the main text, they insist on computing c1,n and kn separately as V statistics, in addition to computing cm,n and c1,n−m+1 as U statistics.


    1×2^6 + 0×2^5 + 0×2^4 + 1×2^3 + 1×2^2 + 0×2^1 + 1×2^0 = 77
Whereas packing as many bits as possible into one word (16, 32 or 64 bits at low level, 52 bits in MATLAB) yields the largest savings in memory, it makes counting the number of bits set (i.e. the number of 1s) a formidable task. Unfortunately, neither at low level nor at high level is it possible simply to sum the 1s. The fastest way of counting bits in, say, a 17-bit representation is to create a table relating every integer up to 2^17 − 1 = 131,071 to the number of bits set for each of these integers, and then to look up the number of bits set for a given integer. The larger the number of bits used per word, the larger the memory requirements of this look-up table (doubling with every additional bit). The MATLAB algorithm chooses the number of bits per word so as to minimise the total memory consumed by the table, the matrix and the subsequent processing for given sample size n.21
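The look-up idea can be illustrated as follows (a Python sketch; the 8-bit table width and the names are mine for brevity, whereas the actual programme selects the width to fit available memory):

```python
BITS = 8                                # bits per table word (illustrative)
# TABLE[i] = number of bits set in i, built from the recurrence
# popcount(i) = popcount(i >> 1) + (lowest bit of i)
TABLE = bytearray(1 << BITS)
for i in range(1, 1 << BITS):
    TABLE[i] = TABLE[i >> 1] + (i & 1)

def count_bits(word):
    # Count the 1s in an arbitrarily wide packed word by repeated look-up
    total = 0
    while word:
        total += TABLE[word & ((1 << BITS) - 1)]
        word >>= BITS
    return total
```

The table costs 2^BITS bytes, which is exactly the doubling-per-bit trade-off described above.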
The correlation integral for higher dimensions cm,n is defined as the average of all unique products between chains of elements I(Xi, Xj), I(Xi−1, Xj−1), ..., I(Xi−m+1, Xj−m+1). In the above-mentioned matrix I, these chains run in a north-westerly direction. After left-aligning all rows in (the upper triangle of) the matrix, e.g.
    1 0 0 1 0 1
    1 1 1 0 0
    0 1 0 1
    1 1 0
    0 1
    1

the relevant products are now defined in a vertical direction. To obtain c2,n, one needs only to multiply all elements which have a neighbour above them with that respective neighbour and average the results. Given that the matrix of elements is stored in bit representation, a logical and-operation on bit-words is computationally more efficient than multiplying individual bits. To obtain c3,n without extending multiplication to two neighbours in the upward direction, the results of the bit-and operations for c2,n are stored, effectively replacing matrix I(Xi, Xj) by matrix I(Xi, Xj)·I(Xi−1, Xj−1); similar bit-and operations one step up are performed on the new matrix, and the resulting bits are averaged to yield c3,n. And so on for higher dimensions, if desired.22

21 LeBaron's algorithm is fixed at 15 bits per word.
The great speed of computing cm,n as described above relies to some extent on the ability to perform bit-wise and-operations efficiently. In MATLAB, this is achieved by delegating these operations to an external C routine. Other high-level languages may not have similar capabilities.
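The chain-and scheme can be sketched as follows (Python, with boolean arrays in place of the packed bit-words; the names and exact slicing are mine, not the programme's):

```python
import numpy as np

def corr_integrals(x, eps, m):
    # c_{1,n}, ..., c_{m,n} from left-aligned indicator rows: row i holds
    # I(X_i, X_j) for j = i+1, ..., n, so the required chain products line
    # up vertically and each step is an "and" with the row above.
    x = np.asarray(x, dtype=float)
    n = len(x)
    chain = [np.abs(x[i] - x[i+1:]) <= eps for i in range(n - 1)]
    c = [sum(int(r.sum()) for r in chain) / (n * (n - 1) / 2)]
    for dim in range(2, m + 1):
        # and-combine each row with its upper neighbour at the same
        # aligned position; the result replaces the stored matrix
        chain = [chain[t - 1][:len(chain[t])] & chain[t]
                 for t in range(1, len(chain))]
        c.append(sum(int(r.sum()) for r in chain)
                 / ((n - dim) * (n - dim + 1) / 2))
    return c
```

Because the and-results overwrite the stored rows, stepping from dimension m to m+1 costs no more than the step from 1 to 2.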
I have included a second version of my programme towards the end of the code which does not use bit-wise operations and could arguably be translated into any language. Instead of a bit-wise matrix array, it relies on a single vector to store all ½n(n−1) bits. This requires a complex indexing scheme, which is explained in the comments interspersed with the code. While it returns results identical to those of the main programme, it is much slower and requires more memory (see Table 1). Nonetheless, it is still sufficiently fast for occasional use.
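The scheme itself is documented only in the code's comments; one common way of mapping a pair (i, j), i < j, onto a single vector position (a hypothetical stand-in, not necessarily the indexing used in the programme) is:

```python
def pair_index(i, j, n):
    # Position of upper-triangle element (i, j), 0 <= i < j < n, in a
    # vector of length n*(n-1)//2 that stores the triangle row by row:
    # rows 0 .. i-1 contribute (n-1) + (n-2) + ... + (n-i) slots.
    return i * (2 * n - i - 1) // 2 + (j - i - 1)

# Every pair receives a distinct slot in 0 .. n*(n-1)//2 - 1:
n = 7
slots = [pair_index(i, j, n) for i in range(n) for j in range(i + 1, n)]
```

A single vector then holds the whole triangle without wasting the lower half of a full matrix.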
The statistical size of the BDS test is examined in Sections 4 and 5 below, and the test is extensively applied to real data in the empirical studies of the thesis.

4. The Small-Sample Properties of the BDS Distribution and

Recommendations on the Choice of Parameters ε and m

Apart from the first difficulty, considered above, of writing a viable test programme, a second difficulty in applying the BDS test arises from the question of how the parameters ε and m should be chosen. As briefly explained in Section 2, the choice cannot matter for infinite series (as long as ε is bracketed by the spread of the distribution). Asymptotically, the BDS statistic follows the normal distribution with zero mean and unit variance. This implies that the probability of rejecting the null hypothesis of independence when the data is actually i.i.d. is independent of the choice of ε and m and equal to the tail weight of the standard-normal distribution. But, as with most test statistics, one would not expect the same to hold in small samples.

22

LeBaron (1997b) describes a similar method.


The objective of this section is to investigate the distributional properties of the BDS statistic computed on samples with a finite number of observations n, and in particular to examine the dependence of the BDS distribution on the choice of the embedding dimension m and the dimensional distance ε. This is done by simulating the BDS statistic on pseudo-i.i.d. samples for a selected number of combinations of the choice parameters m and ε and the sample size n.

Brock, Hsieh & LeBaron (1991, henceforward BHL) also simulate the BDS statistic on finite samples, but consider only a limited number of cases.23 The main objective of their study is to demonstrate that the BDS test in fact has power to distinguish statistically between i.i.d. and non-i.i.d. data. While I do not doubt the power of the BDS test as such, I wish to examine how the choice of the parameters m and ε influences the size of the test on i.i.d. data, and consequently how m and ε should be chosen to minimise the error of falsely rejecting the null hypothesis.
It can be seen from the few cases tabulated by BHL that the deviation from the normal (which is what they actually tabulate) differs considerably across combinations of n, m and ε, but it is hard to discern a coherent picture from so few cases. BHL themselves recommend that higher dimensions should not be considered for small samples and that ε/σ should be chosen in the range between 0.5 and 1.5. As I shall demonstrate, the latter recommendation in particular is at variance with the empirical evidence, and it is possible to draw far clearer conclusions from Monte-Carlo simulations.

4.1 Setup of Monte-Carlo simulations


Given that the BDS test is potentially very powerful in detecting any kind of
dependence (as proven asymptotically by BDSL, 1996), some care needs to be taken

23

BHL tabulate the size of the distribution at standard-normal quantiles, as well as the median, mean, standard deviation, skewness and kurtosis, for n = 100 with m = 2, n = 500 with m = 2 and 5, and n = 1000 with m = 2, 5 and 10, the range of ε/σ in each case being 0.25, 0.5, 1.0, 1.5 and 2.0. Quantiles are tabulated only for ε/σ = 0.5 and 1.0, and n = 100, 250 and 500. There is no graphical illustration.


in choosing a suitable random-number generator (RNG). Any kind of dependence


which the BDS statistic might be able to pick up would have an undesirable impact on
the estimated finite-sample distribution.
I use a sophisticated RNG which combines a multiplicative integer congruential generator and an integer shift-register generator to create uniformly distributed random numbers.24 Marsaglia's "ziggurat" method (Marsaglia & Tsang, 1984) is then applied to obtain normally distributed random numbers from the uniform random numbers. This is done by using a simple uniform RNG to sample a table of pre-computed values which partition the normal distribution into regions of 1/32nd of its area. By comparison with the conventional method of obtaining pseudo-random numbers by multiplicative congruential generation alone,25 the combination of three different RNG methods employed here should make the resulting samples look almost perfectly independent.
The BDS test itself provides a suitable means of evaluating pseudo-independence. At first sight it may sound paradoxical to use the BDS statistic to test for independence in pseudo-random samples when it is the BDS statistic whose distribution I wish to simulate using such data. However, while the finite-sample properties of the BDS distribution are to be examined only in fairly small samples of up to 2,500 observations, the independence of pseudo-random data can be evaluated at far larger sample sizes. In fact, the larger the sample, the greater is the probability of any consistent dependence being detected. As will be discussed further below, the tail distribution of the BDS statistic for samples of 10,000 and 15,000 pseudo-random numbers is almost indistinguishable from that of the standard normal, and there is hence no statistical evidence of dependence in the pseudo-random numbers. I also avoid running any one simulation on the same pseudo-random sample, thus ensuring that the simulated distributions

24

The uniform and normal RNGs described here are built into MATLAB version 5. I obtained the full
C source code from Cleve Moler, chairman and co-founder of The MathWorks, which produces MATLAB.
The code is confidential, but a good description of the main algorithms can be found in Moler (1995).
The classical RNG reference is Marsaglia & Bray (1968), and a good review of the relevant literature is contained in Park & Miller (1988).
25

Traditionally, Gaussian random numbers are obtained by simple scaling of uniform random numbers.


are not all conditioned on the same dependence if there is any.26 I am thus confident
that the Monte-Carlo simulations are de facto conducted under the true null hypothesis.
In principle, there is no need to run the simulations on normally distributed samples; the BDS test makes no such distributional assumption. Yet, as will be shown, the finite-sample distribution of the BDS statistic is influenced by the proportion of observations being "close" in each dimension up to embedding dimension m. For given dimensional distance ε, the estimates of these correlation integrals depend on the actual distribution of the underlying sample. So, to obtain consistent results, it is important to choose a specific distribution a priori and to run the simulations on samples drawn from this distribution alone. The actual choice of distribution is not particularly important for studying the small-sample properties per se.

However, for reasons of computational convenience, ε is in practice often specified in units of the standard deviation of the sample under examination (see also Sub-Section 4.4). The size of the standard deviation varies, of course, strongly across different distributions, so the choice of distribution is, in fact, important when compiling quantile tables for practical applications. Since real-world data examined with the BDS test tends to be close to Gaussian in distribution, I have chosen to simulate the BDS statistic on normally distributed pseudo-random data.27 Some examples of pseudo-random samples graphed in Figure 1 show that Gaussianity is well approximated by my RNG, even in very small samples. It is still possible to relate the results obtained here to other distributions, and I will return to this point in Sub-Section 4.4.
The Monte-Carlo simulations are performed by running my BDS programme (as
described in Section 3 above) on samples with n = 50, 100, 250, 500, 750, 1000 and

26 The period of the above-mentioned uniform RNG is approximately 2^1492, i.e. even if one drew random numbers at a rate of one million per second (possible on a 90 MHz Pentium PC), they would repeat themselves only after 10^435 years! Thus continuous sampling alone would avoid any replications in practice. For reasons of computational convenience, I do not always sample continuously, but instead shuffle the ziggurat pointers (called "states" in MATLAB) and later check that in fact not a single BDS statistic is replicated in the simulation results. When comparing different methods of computing the BDS statistic for Section 5, the simulations are, however, run in parallel.

27 Note that the size of the mean of the distribution has no impact on the BDS test, and that the size of the standard deviation is also irrelevant as long as ε is specified in units of σ.


2500 observations. In most cases, all embedding dimensions from m = 2 to 19 are considered.28 ε is defined in fractions of the standard deviation σ of each random series, and most experiments are conducted for ε/σ = 0.50, 1.00, 1.50 and 2.00. For each combination of m and ε/σ, the BDS statistic is simulated on 25,000 random samples of sizes 50, 100, 250, 500, 750 and 1,000, and on 16,350 random samples of size 2,500.29
The computational cost of running these simulations is immense, in particular for larger sample sizes. Running the BDS programme for dimensions 2 to 19 on 25,000 samples of 50 observations takes approximately 5 hours on a personal computer with a 90 MHz Pentium chip (P90) and a 512 KB L2 cache running MATLAB 5.1 under Windows NT 4.0.30 One set of experiments has to be conducted for the entire range of ε/σ, so all Monte-Carlo experiments for sample size 50 can be concluded just within one full day. As the sample size increases, the run-times virtually explode. Even though the number of replications is reduced by about a third for n = 2500, a staggering 195 days would be needed to complete the Monte-Carlo simulations!

Excluding initial testing and a few special cases, it would take roughly 9 months of continuous computing to evaluate all of the above-mentioned cases. For comparative purposes (see Section 5), I also run LeBaron's software in parallel, and this would add another 5 months to the total run-time if all simulations were run on a P90. In fact, many of the simulations are run concurrently on up to six different personal computers: one P90, three P133s, one P200 and one Pentium-II 233.31 I was thus able to complete the simulations within a little under three months, during which time at least the Pentium-II machine was computing virtually continuously.

28 I restricted the maximum dimension to the limit imposed by LeBaron's software, which crashes for dimensions 20 and higher. For larger sample sizes and reasonable values of ε, higher dimensions could, in practice, be considered. But the additional computational cost of running the required simulations would be enormous, and the results are unlikely to offer insights beyond the findings discussed here.

29 BHL's tabulations are based on only 5,000 runs for samples of 100 and 500 observations and 2,000 runs for 1,000 observations (which was understandable given the speed of the computing equipment available to them ten years ago). This makes their estimates quite unreliable, considering that a tail estimate of 1% is based on only 50 or 20 BDS statistics. It is thus only natural that they do not even tabulate tail probabilities for 0.5%, which is, however, an important cut-off point because it is needed to evaluate the null hypothesis of independence at the 1% significance level (since the BDS test is two-sided; see also page 31, footnote 33). Nevertheless, the few results tabulated in BHL are broadly consistent with mine.

30 Level-1 and level-2 caching greatly influences computational speed, in particular in the case of LeBaron's software. For example, disabling the internal L2 cache on the Pentium-II (see also footnote 31 below) reduces the speed of my programme by 40% and that of LeBaron's programme by 60%.
For each combination of the values chosen for n, m and ε, I obtain 25,000 BDS statistics. In total, I thus have around 500 small-sample BDS distributions for evaluation. The distribution mapped out by a sample of BDS statistics can be visualised by plotting the cumulative distribution of all 25,000 BDS statistics. The small-sample properties are then easily understood by comparing the lower and upper tails of the resulting curve with those of the theoretical Gaussian with zero mean and unit variance plotted in the same graph, and also by comparing the BDS distributions among each other.

Figure 2 (Panels 1 to 30) shows cumulative distributions for all available sample sizes (n), for all dimensional-distance parameters in units of σ (eps = ε/σ), and for some or all embedding dimensions (m) between 2 and 19. Each panel shows the distributions for several embedding dimensions at given n and ε. Figure 3 (Panels 1 to 24) and Figure 4 (Panels 1 to 36) plot BDS distributions from different perspectives. In Figure 3, each panel plots the results of increasing the sample size n while the parameters m and ε are held constant. Figure 4 allows BDS distributions to be compared for different ε, ceteris paribus.

Each distribution curve links up all observations between −3 and +3, so these plots are very precise. For example, approximately 430,000 data points are embedded in Panel 8 of Figure 2. However, in some graphs I omit curves which interfere too much with their neighbours and which are very similar in shape to them.

31 Algorithms in raw MATLAB run approximately 4.5 times faster on the PII-233 than on the P90, while LeBaron's compiled software even runs six times faster. The speed of the P133s is proportionate to the frequency of these processors vis-à-vis the P90. The P200 is slightly faster than its frequency would suggest, because the size of its L1 cache is 32 KB instead of 16 KB. The PII has an even larger L1 cache of 64 KB and a high-frequency internal L2 cache of 512 KB, and it uses a more advanced CPU technology, all of which explains why it is almost twice as fast as the P200.


BDS distributions are tabulated in Appendix B, Table 2, in the same order as Panels 1 to 30 of Figure 2. To conserve space, I have included only dimensions 2 to 15, but the values for higher dimensions are available from me upon request. Printed are selected quantiles of the BDS sample distributions, their size, their medians and their first four moments. The quantiles are those for the 0.5%, 1.0%, 2.5% and 5.0% areas in both the lower and the upper tail. In the cumulative-distribution plots, the quantile ordinates are found on the vertical axis. The size at a given quantile of the standard-normal distribution is the proportion (in percent) of BDS statistics exceeding the respective standard-normal quantile in absolute value. The size ordinates are found on the horizontal axis of the plots. The size provides a good indication of the deviation of the BDS distribution from the standard normal. For all tabulated values, the asymptotic (i.e. standard-normal) equivalents are displayed in the right-most column.
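The size entries of such a table can be reproduced mechanically from a sample of simulated statistics; a minimal sketch (Python; the function name is mine) computes the share of statistics exceeding the two-sided standard-normal quantile:

```python
import numpy as np
from statistics import NormalDist

def empirical_size(bds_stats, nominal_level):
    # Share of simulated BDS statistics whose absolute value exceeds the
    # standard-normal quantile of a two-sided test at nominal_level; for
    # a correctly sized test this share should be close to nominal_level.
    z = NormalDist().inv_cdf(1.0 - nominal_level / 2.0)
    stats = np.abs(np.asarray(bds_stats, dtype=float))
    return float(np.mean(stats > z))
```

Comparing this empirical share with the nominal level is exactly the deviation-from-normal reading discussed in the text.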

4.2 Results of Monte-Carlo simulations


The cumulative distribution functions plotted in Appendix A reveal a number of broad features of the BDS distribution. Figure 2 shows that as the embedding dimension m increases, the BDS distribution moves away from its asymptotic distribution, the standard normal. The lower the dimension, the better the small-sample properties, whatever the sample size n and the size of the dimensional-distance parameter ε/σ. But it is also clear that increasing the sample size always improves the small-sample properties, whatever the dimension and the dimensional distance (see Figure 3). Finally, as is apparent from Figure 4, varying the dimensional distance changes the shape of the distribution, in some cases drastically, but it is less clear at first sight which value of ε/σ gives the best distributional properties in general.

Another distinct feature of the BDS distribution in small samples is the fact that the median (0.5 quantile) is always negative, indicating that the m-th power of c1,n−m+1 tends to be larger than cm,n in value (Figure 2 and Table 2). With very few exceptions, the mean is larger than the median, but it is usually still negative (Table 2). Even in very large samples, the BDS distribution never appears to coincide with the standard normal around the median.


In practice, however, only in the tails is the shape of the distribution of concern, as hypothesis testing is usually based only on extreme probabilities represented by tail areas. Asymptotically, the BDS statistic follows the standard-normal distribution, allowing significance levels to be evaluated with reference to normal-quantile values. Ideally, one would want the BDS distribution to be approximated by the standard normal even in small samples. For the larger the deviation of the BDS distribution from the standard normal, the larger the size of the error of falsely interpreting the BDS statistic as evidence for or against independence.

As the plots show, the cumulative BDS distribution tends to lie above the standard normal in the lower tail and below it in the upper tail. In other words, the finite-sample distribution is fat-tailed, always displaying excess kurtosis (i.e. kurtosis exceeding 3, the standard-normal value), as can be seen from Table 2. The greater the weight in the tails, the larger the error with which one would reject the null hypothesis of independence when the data was, in fact, independent.

As can be seen from the plots and tables, the size of this error varies considerably with the sample size, the embedding dimension and the dimensional distance. For practical applications, the dependence of the size of this error on n, m and ε/σ must be properly understood. The significance of the BDS statistic should be evaluated on standard-normal quantiles only when the deviation of the size of the BDS error from the standard-normal error is negligible. That is, the BDS statistic should be computed only for such combinations of m and ε/σ on a given sample of size n for which the BDS distribution and the standard normal virtually coincide in the tails. One may indeed wish to choose m and ε/σ so as to minimise the deviation from the normal.32

However, it may not always be possible to approximate the standard normal by educated choice of m and ε/σ, either because no suitable combination exists, or because one wishes to evaluate the BDS statistic for many different combinations of m and ε/σ, only some of which fulfil the above criterion. In these cases, reference to

32

Note that reducing the error of false rejection beyond that of the standard normal is not desirable
either. In some extreme cases, the BDS distribution has tails which are flatter than the normal.

L. Kanzler: BDS Estimation

30

the quantile values tabulated in Appendix B (or bootstrapping) is indispensable. Yet


even then one is well advised not to choose distributions which are too far off the
standard normal, because it is quite likely that the tabulated tail values of wildly
swinging BDS distributions are far more sensitive than the better behaved distributions to the Gaussianity of the data on which they are derived.
In summary, the BDS test should be conducted only for a combination of parameters which either imply that the BDS statistic closely follows the normal (in which
case hypothesis testing can be done with reference to standard-normal tables) or imply
that the BDS distribution is at least well behaved (in which case the tabulated values
in Appendix B should be used). It is with these two objectives in mind that I proceed
to a more detailed discussion of the actual distributions plotted and tabulated in Appendices A and B.

Varying the sample size


Starting with a sample of 50 observations, it is immediately clear from Figure
2, Panels 1 to 4 and the corresponding parts of Table 2 that the standard normal is not
approximated well for any combination of m and ε/σ. A choice of ε/σ = 1.5 and m = 2
appears to give the best approximation, but even then independence would be rejected
far too often. For example, a (two-sided) 10% normal test would reject independence
28.3% of the time; and perhaps even worse, there is a 9.8% chance of rejecting
independence at the 1% level. For ε/σ = 0.5 and m = 2, these errors are considerably
larger still: 53.8% and 35.8% respectively! In fact, for ε/σ = 0.5, the BDS distribution
degenerates very quickly as one moves to higher embedding dimensions, and from
dimension 11 onwards, the BDS statistic is always negative. For larger values of ε/σ,
the BDS distribution displays the same tendency, but at a lesser rate. The distribution
is well behaved up to dimension 15 for ε/σ = 1.5 and beyond dimension 19 for
ε/σ = 2.0. So the BDS test on samples of size 50 can be conducted at any of the tabulated embedding dimensions by choosing ε/σ = 1.5 and/or 2.0 and reading the significance levels off Table 2.


For samples of size 100, the picture hardly changes for ε/σ = 0.5 and ε/σ = 1.0.
For ε/σ = 1.5 and ε/σ = 2.0, however, the tail probabilities are significantly smaller
than for half the sample size. Consider again the case ε/σ = 1.5 and m = 2: rejecting
independence for BDS statistics exceeding 1.645 in absolute value, one would falsely
reject the null 19.0% of the time instead of 10% for the standard normal; similarly
3.9% rejections instead of 1% at 2.576. Since these errors are still unacceptably large,
hypothesis testing should be based on the tabulated quantiles for ε/σ = 1.5 and/or
ε/σ = 2.0.
Increasing the sample size to 250, the tails of the distributions keep on moving
closer to the standard normal. Now even for ε/σ = 1.0, there are a few dimensions for
which the BDS distribution is quite well approximated by the normal in the outer part
of the tails. The size of the BDS test still appears to be too large for correctly sized
hypothesis testing to be performed: in the benchmark case of ε/σ = 1.5 and m = 2, the
respective errors of false rejection are 13.6% at the 10% level and 2.1% at the 1%
level. The BDS tables are still indispensable to testing in samples of size 250.
Doubling the sample size to 500 improves the properties of the BDS distribution
further. For ε/σ = 1.5 and small m, the lower tails virtually coincide with the standard
normal. Moving to larger m, the BDS distribution starts undercutting the normal, and
ε/σ = 2.0 yields better results in the lower tail than ε/σ = 1.5. The upper tails are,
however, too far off the normal to give the desired asymptotic property in the
aggregate, and the recommendation to use the BDS tables instead of normal quantiles
remains unchanged.
Disappointingly, as the sample size is increased further, the tails of the BDS
distributions do not keep on converging to the normal at the previous pace. In fact, virtual
normality in both tails for a large range of embedding dimensions is not achieved until
the sample size is increased to 15,000 (Table 2). From n = 750, the lower tail
tends to undercut that of the standard normal, while the upper tail stays stubbornly
below the normal.


Note, however, that the BDS test is usually conducted against a two-sided alternative hypothesis.33 Hence all one should be concerned about is whether lower and
upper tail probabilities in the aggregate approximate their standard-normal equivalents well
enough to render reference to the tabulated BDS distribution unnecessary. For samples
of size 750 and larger and ε/σ = 1.5 or 2.0, the null will be rejected at most half
a percentage point too often compared with what normal probabilities would suggest. An error
of this size is probably acceptable for most applications. Note, however, that even in
relatively large samples, a setting of ε/σ = 0.5 yields unsatisfactory results for all but
the lowest dimensions. Similarly, higher dimensions are best avoided when choosing
to run the BDS test for ε/σ = 1.0.
In Figure 3, I explicitly explore the impact of increasing the sample size for
constant m and ε/σ, for various combinations of m and ε/σ. Except in the most
extreme cases (large m and ε/σ = 0.5), increasing the sample size leads to clear
improvements in convergence to the normal, and it is also apparent that the largest improvements are made in the range up to n = 500.
Choosing ε/σ
Figure 4 compares cumulative BDS distributions across a range of ε/σ. In very
small samples, setting ε/σ = 0.5 always produces a distribution which is not only way
off the normal mark, but also far away from the other three functions for ε/σ = 1.0,
1.5 and 2.0. In larger samples, ε/σ = 0.5 has the potential of yielding better results, but
only when the embedding dimension chosen is very low. And even then, such distributions never come as close to the standard normal as those for larger ε/σ. My recommendation is therefore not to consider ε/σ = 0.5 a possible choice for the distance
parameter.

33 One would need to hold a firm view about the size of the correlation dimension at embedding dimension m to formulate a one-sided alternative hypothesis. Such beliefs could be based on findings from chaos-related tests using correlation-dimension estimates. It should be pointed out, however, that these techniques may easily give misleading results (see Ramsey et al., 1990, and Eckmann & Ruelle, 1992) and thus in my opinion rarely provide sufficient support to narrow down the alternative hypothesis of the BDS test.

A setting of ε/σ = 1.0 produces superior results to ε/σ = 0.5, but it still gives
rise to a similar tendency: as m is increased and/or n is decreased, the distribution
quickly walks away from the standard normal. So both ε/σ = 0.5 and ε/σ = 1.0 suffer
from the problem that these settings cannot be recommended on a universal basis.
ε/σ = 1.0 can be used in a number of cases, but these must be identified from the distribution plots or tables first. Only for samples of size 15,000 can a choice of ε/σ = 1.0
be universally approved.
By contrast, it is possible to give generalised advice on the usage of ε/σ = 1.5
and ε/σ = 2.0. Among the ranges of dimensions and sample sizes considered, ε/σ = 2.0
always produces satisfactorily shaped distribution functions. ε/σ = 1.5 does so too for
all but the most extreme case of n = 50 and m = 19. Among the four choices of ε/σ
considered here, the distributions for ε/σ = 1.5 and ε/σ = 2.0 are the closest to each
other, and in some cases they are indistinguishably close.
Figure 4 also reveals that a setting of ε/σ = 1.5 approximates the normal tails
better for lower embedding dimensions than for higher dimensions, and vice versa for
ε/σ = 2.0. As the embedding dimension is increased for a given sample size of 500
and larger, the function for ε/σ = 1.5 starts undercutting the lower tail of the normal
for higher dimensions while moving away from the upper tail of the normal. At the
same time, the distribution for ε/σ = 2.0 starts moving closer to the normal in both
lower and upper tails. For low embedding dimensions, ε/σ = 1.5 yields the better
approximation. It appears difficult to identify a specific embedding dimension at which
one should switch from ε/σ = 1.5 to ε/σ = 2.0 for higher dimensions. In larger
samples, there is a large range of dimensions for which the respective sizes of the error
are not significantly different between ε/σ = 1.5 and ε/σ = 2.0. In practical applications, either or both of ε/σ = 1.5 and ε/σ = 2.0 will yield satisfactory results.
4.3 Explaining the finite-sample properties
The questions remain (i) why the small-sample properties of the BDS distribution vary so significantly with the size of the dimensional distance and with the embedding dimension when asymptotically there is no difference, and (ii) what role the size
of the sample plays in this relationship.
Consider as an example the BDS statistic being calculated for ε/σ = 0.5 on a
sample of 100 (near-)normally distributed observations. As Table 3 and Figure 3, Panel
2 show, typically around 27% of observations are close (indicator value 1). Under the null
hypothesis, one expects (27%)² ≈ 7% of all 2-histories to be close, similarly
(27%)³ ≈ 2% of all 3-histories, and so on for higher dimensions. Also, the higher the
dimension, the smaller the absolute number of unique pairs of m-histories which can be
formed: (n − m + 1)(n − m)/2.34 As the number of close histories decreases, so presumably does the reliability with which the correlation integrals of higher dimensions are
being estimated. In the given example, estimation of the correlation integral of dimension 5 would be based on a mere 7 histories which are expected to be close. These
are just too few histories to make reliable inference about the underlying data-generating process.
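The arithmetic behind these counts is easy to reproduce. The following minimal Python sketch (my own illustration, not the paper's MATLAB code) takes the closeness probability c1 = 0.27 quoted above for ε/σ = 0.5 on a near-normal sample of n = 100 and tabulates, for each embedding dimension, the number of unique pairs of m-histories and the number of pairs expected to be close under the null:

```python
# Expected number of close pairs of m-histories under the i.i.d. null.
# n and c1 are taken from the example in the text: a sample of 100
# (near-)normal observations with closeness probability c1 = 0.27.
n, c1 = 100, 0.27

def expected_close(m, n=n, c1=c1):
    pairs = (n - m + 1) * (n - m) // 2  # unique pairs of m-histories
    return pairs, pairs * c1 ** m       # ... and those expected to be close

for m in range(1, 6):
    pairs, close = expected_close(m)
    print(m, pairs, close)
```

Up to rounding, this reproduces the counts 1,337, 353, 94, 25 and 7 cited further below, and makes plain how quickly the expected number of close pairs collapses as m rises.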
Decreasing ε, decreasing n and increasing m all cause the expected number of
close histories to shrink, thus rendering estimation of cm,n and hence the BDS statistic
less reliable. So it may not seem surprising to find the BDS statistic badly behaved for
small ε, small n and large m, and especially for combinations of these.
Presumably it is also not a good idea to choose too large a dimensional distance,
because this would mean that c1,n−m+1 would be estimated on too small a number of
histories which are not close. However, the case of too few 0s is not symmetric to the
case of too few 1s, because the correlation integral worst affected by a lack of
reliability would be c1,n, not cm,n. For c1,n the expected proportion of close histories
is just c1,n, while for cm,n it is (c1,n)^m, which is always smaller than c1,n, and much
smaller when c1,n itself is small. So as c1,n is decreased, the proportion of close m-histories
approaches zero much more quickly than it approaches
one when c1,n is increased. Also, the number of close m-histories will always be much smaller

34 So for a sample of 100, there are 4,950 pairs of 1-histories, 4,851 pairs of 2-histories, 4,753 pairs of 3-histories, etc.

than the number of close observations (1-histories). One would have to make ε/σ
very large indeed before a loss of reliability would become significant.
This reasoning also points to the fact that the choice of ε has a far greater
impact on the reliability of cm,n than on that of c1,n−m+1, c1,n and also of kn (which is of
the order (c1,n)^2). The reliability with which cm,n on the one hand and (c1,n−m+1)^m on the
other can be estimated differs markedly, even though their expected values are the same
under the null hypothesis. The odd behaviour of the small-sample BDS distribution
must be due to this relative lack of reliability in estimation. For instance, the smaller
the expected number of close histories, the larger the probability of encountering not a single close history in estimation. A total lack of close histories means
that the estimate of cm,n becomes zero, and so the BDS statistic will be negative. This
explains why the BDS statistic tends to be negative more often than positive, and why
this tendency increases with increasing m, decreasing n and decreasing ε. But even
when the estimate of cm,n is positive, the number of values it can possibly assume when
ε is small, n is small and/or m is large is very limited. This shows up in the crooked
shape of some small-sample distribution functions.
In short, the foregoing features explain why larger ε/σ yield so much better
behaved BDS distributions, why the difference in error between ε/σ = 1.5 and
ε/σ = 2.0 is so much smaller than the difference between ε/σ = 0.5 and ε/σ = 1.0, and
why larger ε/σ suit larger m particularly well.
It is also apparent that increasing the sample size should improve reliability,
because a larger expected number of histories is available for estimation. However, if
this were the only channel through which the sample size influenced the distribution, then
the choice of embedding dimension should have a much larger impact on reliability
than it actually appears to have. Moving just one dimension higher reduces the typical
number of close histories drastically, in the above example from 1,337 to 353, from
353 to 94, from 94 to 25, from 25 to 7, from 7 to 2, and so on. Yet as the distribution
plots show, the impact on the shape of the BDS distribution is smaller than when a
similar reduction in the number of expected close histories is achieved through
decreasing the size of the sample.


To examine the impact of the sample size on correlation-integral estimates, I
have derived empirically the distribution of c1,n on normally distributed samples, for
the same combinations of n and ε/σ for which the BDS statistic was simulated. Figure
5 and Table 3 exhibit the distribution functions and summarise their characteristics. As
is clearly visible and also apparent from the first four moments, the estimates of c1,n
show a far larger deviation around their asymptotic values when the samples are small
than when they are large.35 Even though I have not simulated the correlation integral
for higher dimensions, it is clear that under the null hypothesis of independence, cm,n
will display a similar behaviour with respect to n (although the respective means and
standard deviations will be much smaller). So the fact that the variance of the BDS
statistic increases with decreasing sample size is presumably mainly the result of an
increasing variance of cm,n − (c1,n−m+1)^m. Since the BDS statistic was derived asymptotically, there is no factor in the BDS statistic which sufficiently adjusts for the impact
of n on the variance. It would be desirable to derive a small-sample correction for the
BDS statistic, but given that ε and m also contribute to disturbing the distribution, this
may be quite a difficult task.
This concludes my explanations of the features of the small-sample distributions
plotted in Appendix A and tabulated in Appendix B.
4.4 Distribution-free choice of parameter
The above explanation is based on the insight that it is the absolute number of
close m-histories which is responsible for the size of the BDS statistic in small
samples. This number is determined by the total number of unique m-histories, which
in turn depends on n and m, and by the proportion of close m-histories, which in turn
depends on the estimate of cm,n. ε enters the equation only indirectly, as a determinant
of cm,n; it is the estimate of cm,n itself that drives the number of close m-histories. So
to raise the reliability with which the BDS statistic is computed for given m and n, one

35 On a sample of 50, c1,n can assume a maximum of 1,225 different evenly spaced values between 0 and 1, which is why the plots for n = 50 display jumps.

should choose ε such as to ensure that there is a sufficiently large number of close m-histories available to make estimation of cm,n relatively reliable.

In principle, there is nothing wrong with this approach of fixing cm,n at a predetermined value, determining ε in response and computing the other correlation integrals and thus the BDS statistic accordingly. Unfortunately, this procedure is highly impracticable. The computationally efficient method for computing the BDS statistic outlined in Section 3 relies on the ability to work with a given ε from the outset. A
method of fixing cm,n at the start and determining ε in response could only be implemented either through iteration or else through computing and storing the difference
between each and every pair of m-histories. Either solution would be extremely processing-intensive and could not be done on a routine basis.
Since under the null hypothesis cm,n and (c1,n−m+1)^m are equal, a second-best
solution would be to fix c1,n and determine ε accordingly. Now the actual number of
close m-histories would vary, but at least its expected number could be fixed through
the choice of c1,n. In terms of computing power, this method is less demanding than the
above, but it is still too processing-intensive to be useful in practice. My MATLAB
programme gives the user the option to pursue this solution if a sufficient amount of
memory is available.
However, as it is not viable to fix the actual number of close m-histories, one
might as well fix their expected value by choice of ε rather than c1,n. This is the third-best solution, which differs little in terms of statistical reliability from the second, but
which is computationally efficient because the fast algorithm of Section 3 can be used
as it is. The idea is thus to choose ε such that the expected number of close m-histories is
large enough, and varies little enough, to achieve (relatively) reliable estimation. Fortunately,
there is no need to make any explicit calculations, because the results of the simulations have already brought to light which values of ε tend to produce the best-sized
BDS statistics.
It should be remembered, however, that this approach is only second-best (or
rather third-best), since it does not allow full control over the size of cm,n. The
proportion of histories which one can make close by choice of ε depends on how


these observations are distributed. The larger the distance between any one pair of
observations, the smaller the absolute number of pairs which a given ε will capture as
close. To mitigate this problem, I have specified ε in units of the standard deviation σ
of the sample. While this solution makes the number of close observations less
dependent on their variance, it still does not give full control over cm,n in small
samples. Figure 5 and Table 3 show that the estimate of c1,n may vary considerably
even when the size of the dimensional distance is stated in units of the standard deviation
of the sample. Under the null hypothesis, a similar result holds true for cm,n.
Also, a given ε/σ tends to capture a different proportion of close observations
in a normally distributed sample than in a sample drawn from an altogether different
distribution. Stating the size of the dimensional distance in units of σ makes the correlation integrals depend on the kind of distribution from which the sample is drawn.
And consequently, a setting of, say, ε/σ = 1.5 will yield different BDS distributions
for, say, uniformly distributed samples and normally distributed samples respectively.
BHL simulate the BDS statistic for a given range of ε/σ on a number of different i.i.d. distributions, and the simulation results clearly vary with the type of distribution. The
reason must be entirely due to the fact that, while the ratio of ε to σ is in each case
the same, the expected proportion of close m-histories differs, and so does the variance with which the BDS statistic is estimated. (BHL themselves offer no explanation.)
To make the distribution of the BDS statistic comparable across a range of (independent) sample distributions, ε must thus be stated with reference to c1,n, not σ.36
So a BDS test conducted for ε determined with respect to a given c1,n is truly distribution-free in small samples. The quantile values of the BDS statistic tabulated in Appendix B are equally valid for normally and for non-normally distributed samples if ε
is first determined with respect to the sample estimate of c1,n. The values of c1,n corresponding to ε/σ for normally distributed samples can be found in Tables 2 and 3.
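For Gaussian data the correspondence between ε/σ and the asymptotic c1,n can be checked in closed form: for independent X, Y ~ N(0, σ²), the difference X − Y is N(0, 2σ²), so P(|X − Y| ≤ ε) = 2Φ(ε/(σ√2)) − 1 = erf(ε/(2σ)). A one-line Python check (my own illustration, not part of the paper's toolbox):

```python
from math import erf

def c1_normal(eps_over_sigma):
    # P(|X - Y| <= eps) for independent X, Y ~ N(0, sigma^2):
    # X - Y ~ N(0, 2 sigma^2), so the probability is erf(eps / (2 sigma)).
    return erf(eps_over_sigma / 2)

print(c1_normal(1.5), c1_normal(2.0))
```

This gives approximately 0.711 and 0.843, matching the c1,n values of roughly 0.71 and 0.84 which the text associates with ε/σ = 1.5 and 2.0 on normal samples.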

36 Note that even though the estimate of cm,n drives reliability, reference to c1,n suffices as long as the sample is indeed i.i.d., since in this case E(cm,n) = [E(c1,n−m+1)]^m.

In practice, pursuing BDS testing along the lines of this strategy does not necessarily require fixing c1,n at either of the values corresponding to the recommended
values of ε/σ = 1.5 or ε/σ = 2.0 and determining the size of ε each time before running the test. In most cases, it should be sufficient to determine the approximate size
of ε/σ for given c1,n on a typical sample and to use the value obtained for all further
testing with similar samples. Note that the BDS distribution usually varies very little
between ε/σ = 1.5 and ε/σ = 2.0 on normal samples, so any ε which puts c1,n in the
range of 0.71 to 0.84 should produce reliable BDS estimates.
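Determining ε from a target c1,n on a typical sample, as suggested above, amounts to a one-dimensional root search. The sketch below is an illustrative brute-force Python version (the paper's own implementation is the MATLAB programme; the O(n²) distance matrix, the bisection tolerance and the default target are my own choices):

```python
import numpy as np

def c1_hat(x, eps):
    """U-statistic estimate of c1,n: the fraction of the n(n-1)/2
    unique pairs of observations lying within eps of each other."""
    d = np.abs(x[:, None] - x[None, :])
    iu = np.triu_indices(len(x), k=1)
    return np.mean(d[iu] <= eps)

def eps_for_c1(x, target=0.71, tol=1e-4):
    """Bisect for the eps at which c1_hat(x, eps) reaches the target."""
    lo, hi = 0.0, float(x.max() - x.min())  # c1_hat is 0 at lo, 1 at hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if c1_hat(x, mid) < target else (lo, mid)
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
eps = eps_for_c1(x, target=0.71)
# On near-normal data, eps lands close to 1.5 sample standard deviations,
# in line with the c1,n range of 0.71 to 0.84 quoted above.
```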
I recommend running the BDS test for a range of dimensions and at least two
choices of ε in order to obtain an altogether reliable indication of whether or not the
sample is i.i.d. ε/σ could be chosen as 1.5 and 2.0 for samples which appear to be
(near-)normally distributed, or otherwise such that c1,n ≈ 0.71 and 0.84, and m could
cover the full range of dimensions 2 to 15, for which I have tabulated the BDS distribution. For samples up to and including 500 observations, the tabulated quantile values
should normally be used for hypothesis testing; otherwise reference to a standard-normal table will generally do. My MATLAB algorithm BDSSIG.M (Appendix C, Programme 2) is based on the tabulated values and can be used to test the significance of
the BDS statistic both on small and on large samples (as long as ε is chosen such that
c1,n corresponds to ε/σ = 0.5, 1.0, 1.5 or 2.0).
Two other approaches to setting ε appear to have received virtually no attention
in practice. Dechert (1994) shows how to derive equations for maximising the theoretical size of the test for given n. Unfortunately, he does not offer any simulations
which would allow the usefulness of his approach to be evaluated. Given the above
findings, one wonders how the size can be maximised when m is left out of the
equation. Worse, his approach requires specification of the exact distributional characteristics of the data-generating process under the null hypothesis. As the null distribution is often unknown, additional assumptions have to be made which restrict the generality of the BDS test. Also, derivation of the required equations cannot be left to a
computer programme.


Wolff (1995) treats the correlation integral as a sum of dependent Bernoulli variables and derives a new test statistic which depends only on m and n, but not on
ε. The test statistic is computed differently from that of the BDS test, the only similarities being that both statistics are based on correlation integrals and that both follow
the standard-normal distribution asymptotically. The size of his new test statistic
appears to be slightly superior to that of the BDS statistic on small samples, but still
departs to a considerable extent from the standard normal. This applies even to larger
sample sizes. Moreover, implementation of the test-statistic equations appears to be far
from straightforward.

5. Some Pitfalls in Statistically Efficient and Consistent Estimation of the BDS Estimators
In Section 3, I described how the BDS statistic can be computed by what in my
view is the computationally most efficient algorithm, and in a few footnotes I contrasted my algorithm with those used by Dechert (1988a) and LeBaron (1997a). This
section is devoted to comparing the three programmes in terms of their statistical
efficiency and also their statistical consistency. The source code for Dechert's package
is unfortunately not available publicly, but I was nevertheless able to deduce details of
his method from the screen output which the programme provides. LeBaron's C code
is available publicly, but as I am not a C expert, I have gained most insights from
comparing the output of his programme with that of mine and from discussing many
aspects of the algorithm with Blake LeBaron himself.

The differences between the three methods boil down to both Dechert and
LeBaron employing estimators of some correlation integrals which are not fully (statistically) efficient, and to Dechert using an estimator for which consistency has not been
proven. The details are explained below. To examine the statistical impact of these
differences, I compute Dechert's and LeBaron's BDS statistics in parallel to many of
the simulations run for Section 4. Dechert's algorithm is relatively easily replicated and


run in parallel with mine.37 By contrast, integrating LeBaron's algorithm would slow
down my own simulations so much that I refrain from doing so; instead I call his
actual programme directly from MATLAB.

5.1 Estimation without proof of consistency


Dechert computes the BDS statistic as

    wm,n(Dechert) = √(n − m + 1) · [cm,n − (c1,n)^m] / σm,n(c1,n^V, kn^V)    (19)

The first problem with Dechert's method arises from the fact that he bases estimation
of the correlation integral of dimension 1 in the numerator of the BDS ratio on the full
sample, i.e. employing c1,n instead of c1,n−m+1. While c1,n is, in fact, the most efficient
estimator of the first-dimensional correlation integral, it is not appropriate to employ
this estimator in the numerator. The BDSL (1996) paper, which is incidentally co-authored by Dechert, proves consistency of the BDS statistic only for c1,n−m+1. The
proof that the difference between cm,n and (c1,n−m+1)^m approaches zero as the sample size
increases to infinity relies crucially on the assumption that both correlation integrals
are estimated over the same number of histories.38

Although cm,n − (c1,n)^m is in my opinion likely to be consistent, it appears impossible to verify this formally. I have thus run some Monte-Carlo simulations to investigate whether using Dechert's full-sample estimator makes any difference to the BDS
distribution in finite samples. Figure 6 compares standard BDS distributions to the corresponding distributions of full-sample BDS statistics.39 The only difference between
the BDS statistics based on full-sample U-statistic estimators and the standard BDS

37 I have confirmed on a few samples that the results thus obtained are indeed identical in each and every digit to those obtained through running Dechert's actual DOS programme.

38 I owe this insight to Blake LeBaron, another co-author of the BDSL paper.

39 To conserve space, I have not tabulated these distribution functions in Appendix B, but tables are available from me upon request.

statistics lies in the use of c1,n versus c1,n−m+1. The former are not Dechert's BDS statistics, which differ from the standard also in other aspects discussed further below.
As Figure 6 shows, the full-sample BDS distribution tends to lie below the
standard BDS distribution in both lower and upper tails, making it difficult to decide
which of the two distributions approximates normal tail probabilities better in the
aggregate. In any case, the functions differ significantly only when the sample is rather
small and the embedding dimension rather high (Panels 1 to 12). In samples of size
500 (Panels 13 to 18) and greater (not shown), there is virtually no difference between
the two.
This supports my conjecture that, even though consistency cannot be formally
proven for the full-sample BDS statistic, it appears to hold in practice. However, even
if the problem of consistency could be neglected, it is hard to advance an argument for
using the (fully efficient) estimator c1,n in lieu of the estimator c1,n−m+1. In either case, the
small-sample distribution does not approximate the normal very well. And to make use
of the tables in Appendix B, the BDS statistic should be calculated according to the
standard method with which I have derived these distributions.
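The difference between the two numerators can be made concrete with a brute-force sketch of the U-statistic correlation integral under the sup-norm closeness criterion. This is my own O(n²) Python illustration, not any of the three packages discussed here, and taking c1,n−m+1 over the last n − m + 1 observations is an assumption made purely for the example:

```python
import numpy as np

def corr_integral(x, m, eps):
    """U-statistic correlation integral of dimension m: the fraction of
    unique pairs of m-histories all of whose coordinates are within eps."""
    h = np.column_stack([x[i:len(x) - m + 1 + i] for i in range(m)])
    close = (np.abs(h[:, None, :] - h[None, :, :]) <= eps).all(axis=2)
    iu = np.triu_indices(len(h), k=1)
    return close[iu].mean()

rng = np.random.default_rng(3)
x = rng.standard_normal(100)
m, eps = 2, 1.5 * x.std()

standard = corr_integral(x, m, eps) - corr_integral(x[m - 1:], 1, eps) ** m  # c1,n-m+1
dechert = corr_integral(x, m, eps) - corr_integral(x, 1, eps) ** m           # c1,n
# Both numerators estimate the same quantity under the null, but only the
# first is covered by the consistency proof in BDSL (1996).
```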

5.2 Inefficient estimation due to V statistics


The second problem with Dechert's method is that he uses V statistics instead
of U statistics as estimators of c1,n and kn in the denominator of the BDS ratio. Recall
from Section 2 that a correlation integral is defined as the probability of two histories
being close. The most efficient estimator of this probability is one which makes use
of all unique combinations of histories available in a given sample, i.e. one which
calculates a correlation-integral estimate as a U statistic. In fact, this is how I defined
all correlation-integral estimators in Section 2. By contrast, V statistics are based on
all possible combinations of histories, which include histories paired with themselves.
These own pairs contribute nuisance values which spoil fully efficient estimation of
correlation integrals. V statistics are nevertheless consistent estimators, because the
fraction of nuisance values (quickly) vanishes asymptotically; yet they are not as
efficient as U statistics in finite samples.
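The distinction boils down to whether the n "own pairs" on the diagonal of the closeness matrix are counted. A minimal Python illustration of both estimators of the first-dimensional correlation integral (again my own sketch, not code from any of the packages):

```python
import numpy as np

def c1_u_and_v(x, eps):
    """First-order correlation integral estimated both ways: as a
    U statistic over the n(n-1)/2 unique pairs, and as a V statistic
    over all n^2 ordered pairs, own pairs included."""
    n = len(x)
    close = np.abs(x[:, None] - x[None, :]) <= eps  # diagonal is all True
    u = (close.sum() - n) / (n * (n - 1))           # drop the n own pairs
    v = close.sum() / n ** 2                        # keep them
    return u, v

rng = np.random.default_rng(4)
x = rng.standard_normal(50)
u, v = c1_u_and_v(x, 1.5 * x.std())
# The own pairs bias the V statistic upwards: v = u + (1 - u)/n exactly,
# a distortion which vanishes only at rate 1/n.
```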


By using c1,n^V and kn^V instead of c1,n and kn, Dechert deprives his BDS estimators
of maximum statistical efficiency. I will consider the distribution of his BDS statistics
further below. Let me first turn to the second author's method.

5.3 Inefficient estimation due to maxdim-dependence


LeBaron computes the BDS statistic as

    wm,n(LeBaron) = √(n − m + 1) · [cm,n−maxdim+1 − (c1,n−maxdim+1)^m] / σm,n(c1,n−maxdim+1^V, kn−maxdim+1^V)    (20)

Similarly to Dechert, LeBaron bases estimation of the denominator on statistically inefficient V statistics, but in contrast to Dechert, LeBaron does not even make use of the full
sample.40
In addition, he makes estimation of the BDS statistic dependent on the prior
choice of the highest embedding dimension (maxdim) for which the BDS programme
is called. LeBaron's BDS package routinely returns BDS statistics for dimensions 2 to
maxdim. First of all, this means that the BDS statistic for given n, ε and m assumes
a different value for each and every maxdim between m and infinity. In effect, LeBaron
introduces a fourth choice parameter into estimation, without knowledge of which it
is impossible to replicate results obtained with his programme. Second and worse, the
larger maxdim for a given m, the smaller the number of observations of the test sample
which enter the computation of all four correlation-integral estimates. So the size of
LeBaron's BDS statistic depends on the size of maxdim − m, the most reliable estimation being represented by the case m = maxdim.
This finding calls for an empirical investigation into the question of what difference
the choice of maxdim makes to the distribution of LeBaron's BDS statistic. Figure 7

40 Until May 1997, LeBaron's C programme actually used U statistics, but was otherwise identical to his current version. The method of this earlier version appears to be identical to that of the Fortran programme which LeBaron and Hsieh employed in their simulations and their initial applied work (BDS, 1987, BDSL, 1996, BHL, 1991, Hsieh, 1989, Scheinkman & LeBaron, 1989). Unfortunately, the Fortran code is not available for verification.

displays a selection of Monte-Carlo results.41 In each Panel, the base case is
m = maxdim, and Figure 7 compares one base case each to the results of estimating
the BDS statistic for given embedding dimension m by calling LeBaron's programme for
various maxdims greater than m. As one would expect, the larger the difference
between maxdim and m, the further the function moves away from the base case.
Panels 1 and 2 show that as maxdim is increased step by step, the distribution first
swivels above the lower tail and below the upper tail, worsening the size of BDS testing in either case, and then shifts in its entirety to the right.
In practice, this phenomenon will only be noticed either in very small samples (n = 50) or when the difference between m and maxdim is at least 10. For example, Panels 7 to 12 show that even on a sample as small as 100, it makes virtually no difference whether LeBaron's BDS distribution for m = 2 and m = 5 is estimated with a setting of maxdim = m, maxdim = 10 or anything in between.42 Yet even in
larger samples, maxdim does significantly influence the BDS statistic when maxdim is
chosen relatively large in relation to m (e.g. 19 versus 2); see Panels 13 to 18 for
samples with 750 observations. Similarly, in samples as large as 2,500, an impact of
maxdim cannot be ruled out a priori; see Panels 19 and 20. Finally, as is apparent from
Panel 21, maxdim-dependence can be observed even on very large samples, if only the
difference between maxdim and m is chosen to be large enough.43
It is, of course, possible to circumvent size problems arising from maxdim-dependence by calling LeBaron's programme for each desired embedding dimension separately. However, superior test size is thus bought at the cost of wrecking computational efficiency.

41. Comprehensive tables for the cases depicted by Figure 7 and others are available from me upon request.
42. Unfortunately, it is not known which value of maxdim was chosen for the simulations reported in BHL, and so the reliability of their tabulated quantile values cannot be judged.
43. The simulations for Panel 21 use a revised version of LeBaron's programme which is still maxdim-dependent, but which the author corrected (following my suggestion) to employ U statistics for all four correlation-integral estimators.


5.4 Empirical differences and recommendations on usage


To keep a comparison between the three BDS programmes tractable, one must decide on a specific setting for maxdim. In the knowledge of the above results, a careful user would play safe and, despite the computational cost, set m = maxdim. I thus call LeBaron's programme once for every embedding dimension considered. Hence, the remaining differences between LeBaron's and my method are that LeBaron uses V statistics over n−m+1 observations in the denominator of the BDS ratio while I employ U statistics over all n observations. The remaining differences between LeBaron's and Dechert's method are that Dechert uses full-sample estimators for all correlation integrals while LeBaron calculates all estimators over only n−m+1 observations, which is consistent for the numerator, but inefficient for the denominator.
Figure 8 compares the BDS distributions of the three methods obtained on samples with 50, 100 and 250 observations, for a range of ε/σ and m.44 There is little empirical difference between Dechert's and LeBaron's methods with the exception of some extreme cases (very small samples and high dimensions). The difference between Dechert's and LeBaron's BDS distributions on the one hand and the standard BDS distribution on the other is far more significant. Both findings indicate that the usage of V statistics is responsible for most of the differences. As long as the embedding dimension is not raised above 10, their distributions actually give a better approximation to the standard-normal distribution than the usual BDS distribution. However, when the BDS statistic is computed for higher dimensions, the standard BDS distribution gives the better fit. As can be seen from those graphs which show all 18 dimensions at a time, Dechert's and LeBaron's distributions quickly undercut the standard normal as one moves to higher dimensions, while the standard BDS distribution remains much better behaved.
In practice, these differences hardly matter, because even Dechert's and LeBaron's distributions do not approximate the standard normal well enough on small samples to relieve the researcher of the need to turn to tabulated quantile values.

44. Tables for these and other distributions are available from the author upon request.


Note, however, that Table 2 should be used only in conjunction with the standard method of computing the BDS statistic. As the sample size is increased to 500 (not shown), virtually any statistical difference between the three functions vanishes.
Even in those cases where the Monte-Carlo simulations did not reveal noticeable differences between methods in the aggregate, the results obtained on individual samples tend to differ to a surprisingly large extent.45 This shows how sensitive the BDS statistic is to the inclusion of individual observations, and it strengthens the case for evaluating the null hypothesis for a range of m and ε.
In conclusion, usage of statistically inferior V statistics instead of the statistically
most efficient U statistics in estimators of some correlation integrals changes the distribution of the BDS statistic away from the norm only when these statistics are calculated on very small samples. Moreover, the small-sample properties of the BDS statistic are in some cases actually improved. By contrast, usage of maxdim-dependent
estimators has a potentially significant effect even on very large samples, and it always
worsens the finite-sample properties of the BDS statistic.
Neither Dechert's nor LeBaron's programme should be used on samples smaller than 500 observations, because there are no tabulated quantile values against which the results could be compared. This restriction can, of course, be circumvented by bootstrapping the BDS distribution of the sample under examination. Unfortunately, Dechert's DOS programme does not lend itself well to such an elaborate procedure. LeBaron's programme must be used with some care even on larger samples. Only results obtained for the highest 10 dimensions can be taken to be fairly reliable, and in cases of doubt the programme should be called separately for every embedding dimension desired.
My own programme suffers from none of these deficiencies and can be used
under any circumstances. It will yield the most satisfactory results when the recommendations of Section 4 are followed.

45. Some of the individual differences between LeBaron's programme and mine are due to the fact that he uses the first n−maxdim+1 observations of the sample while I use the last n−m+1 observations. Reversing the series for one of the two programmes somewhat narrows down the observed differences.


6. A Critical Evaluation of Some Applications


While the BDS test has already been used in a number of studies, none of these appear to be consistent with the foregoing recommendations. In fact, most papers explicitly defend their choice of parameters ε and m with reference to BHL, who endorse ε/σ in the range between 0.5 and 1.5 (irrespective of the distributional shape of the sample), and m no greater and preferably smaller than 10.
Let me provide some prominent examples. Scheinkman & LeBaron (1989) set ε/σ = 0.5 and consider m between 2 and 5, while Cecen & Erkal (1996) also set ε/σ = 0.5, but let m run up to 10. Guillaume et al. (1995) check the BDS statistic for ε/σ = 1.0 and m = 7 only, and perform a peculiar form of bootstrapping whereby the results are compared merely to one single reshuffled sample. De Lima (1996) simulates the size of the BDS statistic on EGARCH(1,0) residuals through 500 and 1,000 replications for each of the cases ε/σ = 1.0 and 1.25, m = 2. Kohers et al. (1997) choose parameter settings of ε/σ = 0.5, 0.75, 1.0 and m = 2 to 10. Hsieh (1989) employs the BDS test in the most comprehensive way by considering ε/σ = 0.25, 0.5, 1.0, 1.5 and 2.0, and m = 2, 5 and 10, (properly) bootstrapping the distribution in each case. LeBaron et al. (1998) leave the readers completely in the dark as to their choice of embedding dimension and dimensional distance.
The most serious misunderstanding of the properties of the BDS test is revealed in the books by de Grauwe et al. (1993) and Peters (1994). De Grauwe et al. claim that "the choice of a very large [ε] will reduce the number of observations very drastically" (sic!) and proceed not to reveal their actual choice of ε/σ at all! Peters lets ε/σ = 0.5 and m = 6, reasoning that Hsieh (1989) tested many embedding dimensions on currencies, and "m = 6 gave results comparable to the other higher (and lower) embedding dimensions"!46
It is interesting to note that not one author indicates whether a different choice of ε and m would have resulted in qualitatively different findings. In light of my
46. Incidentally, both books are reviewed by one of the co-authors of the BDS test (LeBaron, 1995a, 1995b), and while Peters' book is for other good reasons severely criticised, neither review makes mention of the above slips.


simulations, many of the BDS statistics quoted in published work appear unreasonably large due to careless choice of ε/σ. Nevertheless, almost all of the BDS statistics quoted in these studies remain significant when evaluated against the tabulated distribution in Table 2, so that their findings of dependence appear by and large qualitatively incontestable. There exist, however, (at least) three simulation studies with results which are highly questionable given my findings. I will discuss them briefly in the subsequent three sub-sections.

6.1 Inference in small samples


A first study of high academic quality, which would, however, have benefitted from knowledge of the small-sample distribution of the BDS statistic, is that of Barnett et al. (1998). They conduct a single-blind controlled competition among several tests for non-linearity and chaos, on samples of 380 and 2,000 observations simulated by five different processes. One of these tests is the BDS test, used in conjunction with ARIMA pre-whitening to eliminate linearity from the data and run with Dechert's (1988a) DOS programme, for embedding dimensions m = 2, 3, 4, 5, 6, 7, 8 and dimensional distance ε/σ = 1.0.
On the five large samples the BDS test always yields the correct inference. On the small samples, however, the results are at best ambiguous and at worst inconsistent with the true processes in three out of the five cases. At least, this is the conclusion which the authors reach by assuming that the null hypothesis is to be rejected if the BDS statistic is large ("perhaps exceeding 2"), recognising the deteriorated small-sample properties.47 As a consequence, the BDS test fares rather badly compared to some of the other tests.48 Evaluating the significance of their reported BDS statistics

47. The authors also assume that the BDS test is one-sided, which is normally inappropriate; see footnote 33 on page 31.

48. The four other tests are Hinich's bi-spectrum linearity test (see page 5, footnote 2), White's (1989a, 1989b) neural-network test (see also Lee et al., 1993, and Jungeilges, 1996), the NEGM Lyapunov-exponent neural-network test (Nychka et al., 1992), and Kaplan's (1994) non-parametric Delta-Epsilon test. Unfortunately, the paper does not consider Tsay's (1986) non-linearity test.
(continued...)


on my simulated quantile values of Table 2, however, changes their results dramatically: now, the BDS test yields the correct inferences in each of the five cases. The BDS
test is thus clearly superior to the two competing established tests for non-linearity and
as good as the newest two tests (see footnote 48).

6.2 Compass-rose patterns


The second debatable study is by Krämer & Runde (1997) who simulate the BDS statistic on artificial stock-price data exhibiting what has become known as the "compass rose". As a result of stock exchanges commonly limiting possible prices to even tick values, e.g. an eighth of a dollar or a tenth of a Mark, some relative price changes are more likely than others (Huang & Stoll, 1994), and this shows up as a compass-rose pattern (Crack & Ledoit, 1996) in delay space (Koppl & Nardone, 1997), i.e. a scatter-plot of current versus lagged price changes. Krämer & Runde show that the BDS test rejects independence far more often on such data than it should, given that the underlying data-generating process is actually fully consistent with the null hypothesis. However, they have no explanation as to why simple rounding should have such an important impact on the BDS statistic, which they show increases with the coarseness of the tick size.
It is not at all apparent why rounding should yield data which is any less identically and independently distributed than before transformation. Of course, the newly created discrete data follows a different distribution from the original continuously distributed data, but in both cases the data is i.i.d. Hence, if BDSL are really correct in claiming that the null hypothesis of the BDS test is "iidness" rather than "iidness without any structure", then asymptotically, the BDS test must be able to distinguish between iidness and non-iidness in rounded data without fail.

48. (...continued) The properties of the former two tests are already relatively well established. While FORTRAN source code for the bi-spectrum test and the computationally extremely intensive NEGM test is freely available on the internet, the aforementioned authors of White's test have kept their modules a secret to date, just as is the case with White's (1997) "reality check". MATLAB software for Kaplan's test can be downloaded from Kaplan's homepage at http://www.math.macalester.edu/~kaplan.


If the null hypothesis is not the problem, then it must be the size of the test. Krämer & Runde simulate the BDS statistic on 1,000 samples of n = 2000 observations for embedding dimensions m = 2, 3, 4, 5 and dimensional distance ε/σ = 1.0. As argued in Sub-Section 4.2 above, a larger ε would be desirable, but in light of my own simulations for n = 1000 and n = 2500, the unfortunate choice of ε alone cannot explain their findings diverging so hugely from the standard-normal distribution. The fact that some price changes are far more likely than others must come into play.
Following the detailed explanations of Section 4.3 above, it is not difficult to see how this will occur. When the data is rounded, the number of observations being "close" is reduced. The reason for this is that two observations cannot be a negative distance away from each other, so fewer observations are made "close" than "not close" by rounding. Yet the absolute number of observations being "close" is very important to the size of the BDS test in small samples. This is particularly true when ε is chosen rather small. Panel 26 of Figure 2 shows, for somewhat larger samples of 2,500 observations, that while a setting of ε/σ = 1.0 yields satisfactory distributions for small embedding dimensions, this is no longer the case when one moves through double-digit dimensions and the associated number of expected "close" observations becomes very small. Analogously, when the data is made discrete through rounding, the shape of the distribution of the data is changed such that the number of "close" observations is reduced, the reduction in numbers depending on the degree of rounding. As explained in Section 4.3, the lack of "close" observations increases the probability of the error of rejecting the null hypothesis when it is actually true.
It is thus clear that rounding distorts the size of the BDS test. The question remains whether this alone can explain Krämer & Runde's findings. There are many possibilities of verifying the conjectured impact of rounding on the size of the BDS statistic. The simplest would be to compare the size of the correlation coefficients before and after rounding. My argument implies that rounding reduces the size of this statistic. Secondly, my conjecture could be confirmed by re-running their simulations for the (higher) recommended values of ε/σ = 1.5 and 2.0, which should mitigate or remove any size distortions.
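The first of these checks is straightforward to sketch. The Python fragment below is illustrative only: the sample, the tick size of 0.5 and the setting ε = 0.5σ are my own arbitrary choices. It computes the dimension-1 correlation integral on a continuous sample and on the same sample discretised to a coarse tick, which is exactly the before-and-after comparison suggested above:

```python
import numpy as np

def c1(x, eps):
    """Dimension-1 correlation integral: share of unique pairs (i, j),
    i < j, whose absolute distance lies within eps."""
    dist = np.abs(x[:, None] - x[None, :])[np.triu_indices(len(x), k=1)]
    return float(np.mean(dist <= eps))

rng = np.random.default_rng(1)
x = rng.standard_normal(1000)
eps = 0.5 * x.std()               # deliberately small, as in Kraemer & Runde
x_tick = np.round(x / 0.5) * 0.5  # discretise to a tick size of 0.5

print(c1(x, eps), c1(x_tick, eps))
```

Repeating the comparison for varying tick sizes would trace out how coarse rounding alters the number of "close" observations.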

L. Kanzler: BDS Estimation

51

Thirdly, increasing the sample size sufficiently should remove any size distortion altogether. In their paper, Krämer & Runde claim that they also ran the simulations for samples of n = 5000 observations, but do not report any corresponding results. In
private correspondence (October 1998), however, Ralf Runde has admitted that he had
never been in a position to perform these simulations as his self-written programme
was too slow to be of use for simulations on samples of that size!49
Fourthly, the distribution of the BDS statistic on rounded data could be bootstrapped, thus removing size distortions in evaluating the null hypothesis.
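Such a bootstrap follows a generic recipe: recompute the statistic on many reshuffled copies of the sample (reshuffling destroys any serial dependence while preserving the rounded empirical distribution) and read off the quantiles of the resulting draws. A hedged Python sketch with a stand-in statistic (a real application would plug in a BDS implementation; the function names and defaults here are mine):

```python
import numpy as np

def shuffled_null_quantiles(x, statistic, reps=500, probs=(0.025, 0.975), seed=0):
    """Approximate null quantiles of `statistic` under i.i.d.-ness by
    recomputing it on random permutations of the sample."""
    rng = np.random.default_rng(seed)
    draws = [statistic(rng.permutation(x)) for _ in range(reps)]
    return np.quantile(draws, probs)

def acf1(x):
    """Stand-in statistic: first-order sample autocorrelation."""
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

rng = np.random.default_rng(42)
ticked = np.round(rng.standard_normal(300) / 0.5) * 0.5  # coarsely rounded data
lo, hi = shuffled_null_quantiles(ticked, acf1)
print(lo, hi)
```

Because the permutations are drawn from the rounded sample itself, the resulting critical values are free of the size distortion discussed above.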
The compass-rose pattern is of no relevance to this thesis, as the tick size of exchange rates is much finer than that of stock prices. Improving on the simulations of Krämer & Runde and falsifying their claims is thus beyond the scope of this chapter. (I believe the above arguments are being used by Dee Dechert and Blake LeBaron in preparation of a reply to Krämer & Runde.) Nonetheless, their paper and the problems associated with it serve to highlight the fact that some knowledge of the size of the BDS test is indispensable to practical application.50 I hope the results of this chapter will provide enough information to fill the gap in general knowledge.

6.3 Tabulating scrambled real-world data


The third paper is by Chappell et al. (1996) who purport to show that the bootstrapped distribution of the BDS statistic on GARCH(1,1) residuals does not approximate the standard-normal distribution very well. To do so, they first fit a GARCH(1,1) model on real-world data and then simulate the BDS statistic for embedding dimensions m = 2, 3, 4, 5, 6, 7, 8 and dimensional distance ε/σ = 0.4, 0.625, 1.0, 1.6 on 1,000 samples of size n = 749 drawn randomly from the residuals.

49. Interestingly, their plots of the compass-rose pattern are all for 5,000 observations rather than for 2,000 observations, in which case any such pattern would, of course, be less strongly visible.
50. Krämer & Runde (1997) also fail to realise that the BDS test is generally a two-sided test (see page 31, footnote 33) and that it is not a chaos test as such, although it is true that in conjunction with the correct modelling approach, it can be used to test for the existence of non-linear deterministic dependence (commonly known as chaos).


What exactly makes them believe that they can simulate the test under the null
hypothesis by using data derived from an unknown data-generating process remains a
mystery. No attempt is made to claim that a GARCH(1,1) process appears to capture
any dependence in the data particularly well. But even if that were the case, it would not guarantee that the GARCH residuals were totally free of any dependence. One also
wonders why Chappell et al. (1996) restrict their attention to samples of 749 observations.
It is certainly true that their tabulated quantile values differ significantly from
those of the standard-normal distribution, even after making allowance for the fact that
1,000 repetitions can hardly be sufficient to estimate 0.5% and 1.0% quantiles reliably.
But should this come as any surprise? As shown in this chapter, even if simulated on
perfect i.i.d. data, the BDS distribution is bound to deviate significantly from the
standard normal for almost all the cases they consider. In fact, their values are
generally in line with those tabulated in Table 2 of Appendix B. Chappell et al.'s paper
appears to serve no purpose other than sowing confusion about the distribution of the
BDS statistic.

7. Summary of Recommendations
My recommendations for both very fast and correctly sized estimation of the
BDS statistic may be summarised as follows.
The BDS statistic can be obtained in the speediest fashion by implementing an
algorithm comprising the following steps (as detailed in Section 3 above):
First, the absolute difference between all unique combinations of two observations in the sample under investigation is calculated, evaluated against the size of dimensional distance ε, and the result stored in an array of bits.
Second, correlation integrals of dimension 1, c1,n and c1,n−m+1 (possibly for multiple embedding dimensions m), as well as correlation integral kn are estimated directly from this array. All estimates are obtained as U statistics.
Third, correlation integrals of higher dimensions cm,n are estimated by performing bitand operations on the array.
Fourth, the BDS statistic for embedding dimension m is calculated from the resulting correlation-integral estimates.
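For illustration, the four steps can be condensed into a few lines of Python. This is a sketch of the logic only, with a plain boolean matrix standing in for the packed bit array; the reference implementations are the MATLAB programmes in Appendix C, and the variance formula is the asymptotic one of BDSL (1996). The function name and defaults are mine.

```python
import numpy as np

def bds_stat(x, m=2, eps=None):
    """Sketch of the four-step BDS algorithm: (1) pairwise closeness
    indicators, (2) dimension-1 correlation integrals and k as U statistics,
    (3) the dimension-m correlation integral via logical ANDs of the shifted
    indicator matrix, (4) the BDS ratio."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if eps is None:
        eps = 1.5 * x.std(ddof=1)      # within the recommended 1.5-2.0 s.d.

    # Step 1: closeness indicators |x_i - x_j| <= eps (diagonal excluded).
    close = np.abs(x[:, None] - x[None, :]) <= eps
    np.fill_diagonal(close, False)

    # Step 2: c_{1,n}, c_{1,n-m+1} and k_n, all as U statistics.
    c1n = close.sum() / (n * (n - 1))
    nm = n - m + 1
    c1nm = close[:nm, :nm].sum() / (nm * (nm - 1))
    r = close.sum(axis=1)
    k = float(r @ r - r.sum()) / (n * (n - 1) * (n - 2))

    # Step 3: dimension-m correlation integral by ANDing shifted copies.
    cm = close[:nm, :nm].copy()
    for s in range(1, m):
        cm &= close[s:s + nm, s:s + nm]
    cmn = cm.sum() / (nm * (nm - 1))

    # Step 4: asymptotic variance (BDSL, 1996) and the BDS ratio.
    var = 4.0 * (k**m
                 + 2.0 * sum(k**(m - j) * c1n**(2 * j) for j in range(1, m))
                 + (m - 1)**2 * c1n**(2 * m)
                 - m**2 * k * c1n**(2 * m - 2))
    return float(np.sqrt(nm) * (cmn - c1nm**m) / np.sqrt(var))

rng = np.random.default_rng(0)
print(round(bds_stat(rng.standard_normal(500), m=2), 2))
```

Note that, in line with the comparison above, the numerator uses c1,n−m+1 while c1,n and kn in the denominator are full-sample U statistics.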


The algorithm can, in principle, be implemented in any programming environment. The code of two versions of this algorithm for MATLAB is printed in Appendix C.
In practice, reliable evaluation of the null hypothesis of independence requires some knowledge of the finite-sample properties of the BDS statistic. The results of extensive Monte-Carlo simulations summarised in Figure 2 and Table 2 show that the size of falsely evaluating the significance of the BDS statistic is generally minimised when ε is set in the range of 1.5 to 2.0 units of the standard deviation of a (near-) normally distributed sample. When the distribution of the sample does not appear to approximate Gaussianity very well, ε should be chosen such as to obtain estimates of the correlation integral of dimension 1 which lie in the range between 0.71 and 0.84.
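The latter rule can be automated by searching for the ε at which the empirical correlation integral of dimension 1 hits the recommended band. A small Python sketch (bisection over the pairwise distances; the function name and the default target of 0.775, the midpoint of 0.71 to 0.84, are my own choices):

```python
import numpy as np

def eps_for_c1(x, target=0.775, tol=1e-4):
    """Bisect for an eps whose dimension-1 correlation integral is close
    to `target`; the share of close pairs is nondecreasing in eps."""
    x = np.asarray(x, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])[np.triu_indices(len(x), k=1)]
    lo, hi = 0.0, float(dist.max())
    while hi - lo > tol * (1.0 + hi):
        mid = 0.5 * (lo + hi)
        if np.mean(dist <= mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

On a near-normal sample this search returns a value close to the 1.5 to 2.0 standard-deviation rule; on heavily skewed or fat-tailed data it adapts ε automatically.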
On samples of up to 500 observations, the significance of the BDS statistic
should be evaluated with reference to the quantile values tabulated in Appendix B.
Otherwise, the BDS distribution appears to be well approximated by the standard
normal, and hence standard-normal tail probabilities can be used for hypothesis testing.
(Alternatively, my MATLAB Programme 2 can be used independently of the actual
sample size.)
If the sample under examination is really i.i.d., it should pass the BDS test at all embedding dimensions for which the BDS statistic can be reliably estimated. The recommendation is therefore to calculate the BDS statistic for a range of different m. Table 2 covers all dimensions between 2 and 15. Raising the size of ε within the recommended range while moving to higher dimensions will generally yield the most satisfactory results in terms of statistical size.
The two BDS programmes by Dechert (1988a) and LeBaron (1997a) employ
statistically less efficient correlation-integral estimators, which cause the BDS statistic
to be distributed differently from the tabulated values in samples of size 500 and
smaller. Usage of these two programmes can be recommended only if the size of the sample is large and the choice parameters ε and m are set as recommended above.
I hope that my BDS programme and the explanations provided in this chapter
will enhance understanding of the BDS test, will aid in overcoming some of the difficulties of using the test and will help spread application of the test beyond the rather
small number of researchers who have employed it to date. Given the significant econometric power of the test, it deserves to become part of a standard toolbox for econometric analysis.

References
Abhyankar, Abhay, Laurence Copeland & Woon Wong (1997), Uncovering Nonlinear Structure in Real-Time Stock-Market Indexes: The S&P 500, the DAX, the Nikkei 225, and the FTSE-100, Journal of
Business and Economic Statistics, vol. 15, no. 1 (January), pp. 1-14
Akgiray, Vedat & Geoffrey Booth (1988), The Stable-Law Model of Stock Returns, Journal of Business
and Economic Statistics, vol. 6, no. 1, pp. 51-57
Andrews, Donald & Moshe Buchinsky (1997), On the Number of Bootstrap Repetitions for Bootstrap
Standard Errors, Confidence Intervals, and Tests, Yale University, Cowles Foundation Discussion
Paper, no. 1141R (August)
Ashley, Richard & Douglas Patterson (1986), A Nonparametric, Distribution-Free Test for Serial Independence in Stock Returns, Journal of Financial and Quantitative Analysis, vol. 21, no. 2 (June), pp. 221-227
Ashley, Richard, Douglas Patterson & Melvin Hinich (1986), A Diagnostic Test for Nonlinear Serial
Dependence in Time Series Fitting Errors, Journal of Time Series Analysis, vol. 7, pp. 165-178
Barnett, William & Melvin Hinich (1993), Has Chaos Been Discovered with Economic Data?, Chapter
16 in Day & Chen (1993), pp. 254-265
Barnett, William, Alfredo Medio & Apostolos Serletis (1997), Nonlinear and Complex Dynamics in Economics, Washington University in St. Louis, Economics Working Paper Archive, no. WAB-97-13 (24
September)
Barnett, William, Ronald Gallant, Melvin Hinich, Jochen Jungeilges, Daniel Kaplan & Mark Jensen (1998),
A Single-Blind Controlled Competition among Tests for Nonlinearity and Chaos, Journal of Econometrics, vol. 82, no. 1 (1 January), pp. 157-192; revised version of Washington University in St. Louis,
Economics Working Paper Archive, no. WAB-96-8 (29 January 1997)
Belsley, David (1998), Mathematica as an Environment for Doing Economics and Econometrics, Computational Economics, forthcoming; revised version of Boston College Working Papers in Economics, no.
364 (20 March 1997)
Berkowitz, Jeremy & Lutz Kilian (1996), Recent Developments in Bootstrapping Time Series, Board of
Governors of the Federal Reserve System, Finance and Economics Discussion Series, no. 96-45 (8
November)
Brock, William, Davis Dechert & José Scheinkman (1987), A Test for Independence Based on the Correlation Dimension, University of Wisconsin-Madison, Social Systems Research Institute Working Paper,
no. 8702; reprinted in William Barnett, E. Berndt & Halbert White, eds. (1988), Dynamic Econometric
Modelling: Proceedings of the Third International Symposium on Economic Theory and Econometrics,
Cambridge University Press, Cambridge
Brock, William, David Hsieh & Blake LeBaron (1991), Nonlinear Dynamics, Chaos, and Instability: Statistical Theory and Economic Evidence, MIT Press, Cambridge, Massachusetts
Brock, William, Davis Dechert, José Scheinkman & Blake LeBaron (1996), A Test for Independence Based
on the Correlation Dimension, Econometric Reviews, vol. 15, no. 3 (August), pp. 197-235; reprinted
as University of Wisconsin-Madison, Social Systems Research Institute Reprint, no. 444; revised version
of University of Wisconsin-Madison, Social Systems Research Institute Working Paper, no. 9520; in
turn revised version of Brock et al. (1987)


Brockett, Patrick, Melvin Hinich & Douglas Patterson (1988), Bispectral-Based Tests for the Detection of
Gaussianity and Linearity in Time Series, Journal of the American Statistical Association, vol. 83, no.
403 (Applications & Case Studies), pp. 657-664
Campbell, John, Andrew Lo & Craig MacKinlay (1997), The Econometrics of Financial Markets, Princeton
University Press, Princeton, New Jersey
Cecen, Aydin & Cahit Erkal (1996), Distinguishing between Stochastic and Deterministic Behavior in Foreign Exchange Rate Returns: Further Evidence, Economics Letters, vol. 51, no. 3 (June), pp. 323-329
Chappell, David, Joanne Padmore & Catherine Ellis (1996), A Note on the Distribution of BDS Statistics
for a Real Exchange Rate Series, Oxford Bulletin of Economics and Statistics, vol. 58, no. 3 (August),
pp. 561-565
Crack, Timothy Falcon & Olivier Ledoit (1996), Robust Structure without Predictability: The Compass
Rose Pattern of the Stock Market, Journal of Finance, vol. 51, no. 2 (June), pp. 751-762
Dacorogna, Michel, Ulrich Müller, Olivier Pictet & Casper de Vries (1995), The Distribution of Extremal Foreign Exchange Rate Returns in Extremely Large Data Sets, Olsen & Associates, Zürich, working
paper, 17 March
Danielsson, Jon & Casper de Vries (1997), Tail Index and Quantile Estimation with Very High Frequency
Data, Journal of Empirical Finance, vol. 4, no.s 2-3 (June), pp. 241-257; revised version of Robust
Tail Index and Quantile Estimation, Proceedings of the First International Conference on High Frequency Data in Finance, 29-31 March 1995, Olsen & Associates, Zürich, vol. 2
Day, Richard & Ping Chen, eds. (1993), Nonlinear Dynamics and Evolutionary Economics, Oxford University Press, New York
Dechert, Davis (1988a), BDS STATS: A Program to Calculate the Statistics of the Grassberger-Procaccia
Correlation Dimension Based on the Paper A Test for Independence by W. A. Brock, W. D. Dechert
and J. A. Scheinkman, release 8.21, MS-DOS software available on gopher://gopher.ssc.wisc.edu:70/11/econgopher/software/bds/dos

Dechert, Davis (1988b), A Characterization of Independence for a Gaussian Process in Terms of the Correlation Integral, University of Wisconsin-Madison, Social Systems Research Institute Workshop Series,
no. 8812 (July)
Dechert, Davis (1994), The Correlation Integral and the Independence of Gaussian and Related Processes,
University of Wisconsin-Madison, Social Systems Research Institute Working Paper, no. 9412 (March)
De Grauwe, Paul, Hans Dewachter & Mark Embrechts (1993), Exchange Rate Theory: Chaotic Models of
Foreign Exchange Markets, Blackwell, Oxford
De Lima, Pedro (1992), A Test for IID Based upon the BDS Statistic, Johns Hopkins University, Baltimore, Department of Economics, working paper
De Lima, Pedro (1996), Nuisance Parameter Free Properties of Correlation Integral Based Statistics, Econometric Reviews, vol. 15, no. 3 (August), pp. 237-259
Denker, Manfred & Gerhard Keller (1983), On U-Statistics and v. Mises Statistics for Weakly Dependent
Processes, Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete (Probability Theory and
Stochastics), vol. 64, no. 4 (October), pp. 505-522


Diciccio, Thomas & Joseph Romano (1988), A Review of Bootstrap Confidence Intervals, Journal of the
Royal Statistical Society, Series B (Statistical Methodology), vol. 50, no. 3, pp. 338-354
DuMouchel, W.H. (1983), Estimating the Stable Index in Order to Measure Tail Thickness: A Critique,
Annals of Statistics, vol. 11, no. 4, pp. 1019-1031
Eckmann, J.-P. & David Ruelle (1992), Fundamental Limitations for Estimating Dimensions and Lyapunov
Exponents in Dynamical Systems, Physica D (Nonlinear Phenomena), vol. 56, no.s 2-3 (May), pp.
185-187
Efron, Bradley (1979), Bootstrap Methods: Another Look at the Jackknife, Annals of Statistics, vol. 7, no.
1, pp. 1-26
Efron, Bradley (1982), The Jackknife, The Bootstrap and Other Resampling Plans, CBMS-NSF Regional
Conference Series in Applied Mathematics, Society for Industrial and Applied Mathematics, Philadelphia, vol. 38
Efron, Bradley & Robert Tibshirani (1993), An Introduction to the Bootstrap, Monographs in Statistics and
Applied Probability, vol. 57, Chapman & Hall, New York
Engle, Robert (1982), Autoregressive Conditional Heteroskedasticity with Estimates of the Variance of
United Kingdom Inflation, Econometrica, vol. 50, no. 4, pp. 987-1007
Grassberger, Peter & Itamar Procaccia (1983a), Characterization of Strange Attractors, Physical Review
Letters, vol. 50, no. 5 (January), pp. 346-349
Grassberger, Peter & Itamar Procaccia (1983b), Measuring the Strangeness of Strange Attractors, Physica
D (Nonlinear Phenomena), vol. 9, no.s 1-2, pp. 189-208
Guillaume, Dominique, Olivier Pictet, Ulrich Müller & Michel Dacorogna (1995), Unveiling Non-Linearities Through Time Scale Transformations, Olsen & Associates, Zürich, working paper, 20 September
Guillaume, Dominique, Michel Dacorogna, Rakhal Davé, Ulrich Müller, Richard Olsen & Olivier Pictet (1997), From the Bird's Eye to the Microscope: A Survey of New Stylized Facts of the Intra-Daily
Foreign Exchange Markets, Finance and Stochastics, vol. 1, no. 2, pp. 95-129
Hinich, Melvin (1982), Testing for Gaussianity and Linearity of a Stationary Time Series, Journal of Time
Series Analysis, vol. 3, no. 3, pp. 169-176
Hinich, Melvin & Douglas Patterson (1985), Evidence of Nonlinearity in Daily Stock Returns, Journal
of Business and Economic Statistics, vol. 3, no. 1, pp. 69-77
Hinich, Melvin & Douglas Patterson (1989), Evidence of Nonlinearity in the Trade-by-Trade Stock Market
Return Generating Process, in William Barnett, John Geweke & Karl Shell, eds., Economic Complexity:
Chaos, Sunspots, Bubbles, and Nonlinearity, Proceedings of the Fourth International Symposium in
Economic Theory and Econometrics, Cambridge University Press, Cambridge, Massachusetts, pp. 383-409
Hinich, Melvin & Douglas Patterson (1990), Relating Sample Bicovariances of a Process to the Parameters
of a Quadratic Nonlinear Model, Applied Research Laboratories, University of Texas at Austin,
technical report
Hinich, Melvin & Douglas Patterson (1993), Intraday Nonlinear Behavior of Stock Prices, Chapter 14 in
Day & Chen (1993), pp. 201-214

L. Kanzler: BDS Estimation

57

Hinkley, David (1988), Bootstrap Methods, Journal of the Royal Statistical Society, Series B (Statistical
Methodology), vol. 50, no. 3, pp. 321-337
Hols, M.C.A.B. & Casper de Vries (1991), The Limiting Distribution of Extremal Exchange Rate Returns,
Journal of Applied Econometrics, vol. 6, no. 3, pp. 287-302
Hsieh, David (1989), Testing for Nonlinear Dependence in Daily Foreign Exchange Rate Changes, Journal of Business, vol. 62, no. 3 (July), pp. 339-368
Hsieh, David (1991), Chaos and Nonlinear Dynamics: Application to Financial Markets, Journal of Finance, vol. 46, no. 5, pp. 1839-1877
Huang, Roger & Hans Stoll (1994), Market Microstructure and Stock Return Predictions, Review of Financial Studies, vol. 7, no. 1 (Spring), pp. 179-213
Huisman, Ronald, Kees Koedijk, Clemens Kool & Franz Palm (1997), Fat Tails in Small Samples, Maastricht University, Limburg Institute of Financial Economics, working paper, September
Huisman, Ronald, Kees Koedijk, Clemens Kool & François Nissen (1998), Extreme Support of Uncovered
Interest Parity, Journal of International Money and Finance, vol. 17, no. 1 (February), pp. 211-228;
apparently revised version of The Unbiasedness Hypothesis from a Panel Perspective, Maastricht
University, Limburg Institute of Financial Economics, working paper, November 1996
Jansen, D.W. & Casper de Vries (1991), On the Frequency of Large Stock Returns: Putting Booms and
Busts into Perspective, Review of Economics and Statistics, vol. 73, no. 1, pp. 18-32
Jungeilges, Jochen (1996), Operational Characteristics of White's Test for Neglected Non-Linearities, in
William Barnett, Alan Kirman & Mark Salmon, eds., Nonlinear Dynamics in Economics, Proceedings
of the Tenth International Symposium in Economic Theory and Econometrics, Cambridge University
Press, Cambridge
Kaplan, Daniel (1994), Exceptional Events as Evidence for Determinism, Physica D (Nonlinear Phenomena), vol. 73, no.s 1-2 (May), pp. 38-48
Kearns, P. & Adrian Pagan (1997), Estimating the Density Tail Index for Financial Time Series, Review
of Economics and Statistics, vol. 79, pp. 171-175
Koedijk, Kees & Clemens Kool (1994), Tail Estimates and the EMS Target Zone, Review of International
Economics, vol. 2, no. 2 (June), pp. 153-165
Koedijk, Kees, M.M.A. Schafgans & Casper de Vries (1990), The Tail Index of Exchange Rate Returns,
Journal of International Economics, vol. 29, no.s 1-2, pp. 93-108
Kohers, Theodor, Vivek Pandey & Gerald Kohers (1997), Using Nonlinear Dynamics to Test for Market
Efficiency Among the Major U.S. Stock Exchanges, Quarterly Review of Economics and Finance, vol.
37, no. 2 (Summer), pp. 523-545
Koppl, Roger & Carlo Nardone (1997), The Angular Distribution of Asset Returns in Delay Space, Fairleigh Dickinson University, Madison, Department of Economics and Finance, working paper, February
Krämer, Walter & Ralf Runde (1997), Chaos and the Compass Rose, Economics Letters, vol. 54, no. 2
(February), pp. 113-118


Küsters, Ulrich & Jens Peter Steffen (1996), Matrix Programming Languages for Statistical Computing:
A Detailed Comparison of GAUSS, MATLAB, and Ox, Diskussionsbeiträge der Katholischen Universität Eichstätt, Wirtschaftswissenschaftliche Fakultät Ingolstadt, Germany, no. 75 (25 October)
Lai, D., X. Wang & J. Wiorkowski (1994), Local Asymptotic Normality for Multivariate Nonlinear AR
Processes, Universities of Texas at Houston and at Dallas, working paper
LeBaron, Blake (1995a), Review of De Grauwe et al. (1993), Journal of International Economics, vol. 39,
no.s 1-2 (August), pp. 185-200
LeBaron, Blake (1995b), Confusion and Misinformation on Financial Chaos [Review of Peters (1994)],
Complexity, vol. 1, pp. 35-37
LeBaron, Blake (1997a), BDSTEST.C, June release accompanying LeBaron (1997b), C source code available
on http://www.econ.wisc.edu/~blebaron/software; revised version of July 1988 and March
1990 releases
LeBaron, Blake (1997b), A Fast Algorithm for the BDS Statistic, Studies in Nonlinear Dynamics and
Econometrics, vol. 2, no. 2 (July), pp. 53-59; reprinted as University of Wisconsin-Madison, Social
Systems Research Institute Reprint, no. 458
LeBaron, Blake, Brian Arthur & Richard Palmer (1998), Time Series Properties of an Artificial Stock
Market, Journal of Economic Dynamics and Control, forthcoming; revised version of University of
Wisconsin-Madison, Social Systems Research Institute Working Paper, no. 9725 (November 1997)
Lee, Tae-Hwy, Halbert White & Clive Granger (1993), Testing for Neglected Nonlinearity in Time Series
Models: A Comparison of Neural Network Methods and Alternative Tests, Journal of Econometrics,
vol. 56, no. 3 (April), pp. 269-290
Li, Hongyi & G.S. Maddala (1996), Bootstrapping Time Series Models, Econometric Reviews, vol. 15,
no. 2 (May), pp. 115-158
Loretan, Mico (1991), Testing Covariance Stationarity of Heavy-Tailed Economic Time Series, Yale University, Ph.D. Dissertation in Economics
Loretan, Mico & Peter Phillips (1994), Testing the Covariance Stationarity of Heavy-Tailed Time Series:
An Overview of the Theory with Applications to Several Financial Datasets, Journal of Empirical Finance, vol. 1, pp. 211-248; reprinted as Yale University, Cowles Foundation Paper, no. 866
Marsaglia, George & T.A. Bray (1968), One-Line Random Number Generators and Their Use in Combinations, Communications of the Association for Computing Machinery, vol. 11, no. 11 (November), pp.
757-759
Marsaglia, George & Wai Wan Tsang (1984), A Fast, Easily Implemented Method for Sampling from Decreasing or Symmetric Unimodal Density Functions, SIAM Journal on Scientific and Statistical Computing, vol. 5, no. 2 (June), pp. 349-359
MathWorks (1997a), Statistics Toolbox for Use with MATLAB, version 2.1.0 (April), The MathWorks, Inc.,
Natick, Massachusetts
MathWorks (1997b), MATLAB: The Language of Technical Computing, version 5.1.0.421 on PCWin (June),
The MathWorks, Inc., Natick, Massachusetts


McCulloch, Huston (1997), Measuring Tail Thickness to Estimate the Stable Index α: A Critique, Journal
of Business and Economic Statistics, vol. 15, no. 1 (January), pp. 74-81
Moler, Cleve (1995), Random Thoughts: 10^435 Years is a Very Long Time, MATLAB News & Notes, Fall,
pp. 12-13
Müller, Ulrich, Michel Dacorogna & Olivier Pictet (1996), Heavy Tails in High-Frequency Financial Data,
Olsen & Associates, Zürich, working paper, 11 December
Nychka, Douglas, Stephen Ellner, Ronald Gallant & Daniel McCaffrey (1992), Finding Chaos in Noisy
Systems, Journal of the Royal Statistical Society, Series B (Statistical Methodology), vol. 54, no. 2, pp.
399-426
Park, Stephen & Keith Miller (1988), Random Number Generators: Good Ones Are Hard to Find, Communications of the Association for Computing Machinery, vol. 31, no. 10 (October), pp. 1192-1201
Peters, Edgar (1994), Fractal Market Analysis: Applying Chaos Theory to Investment and Economics, John
Wiley & Sons, New York
Phillips, Peter & Mico Loretan (1992), Testing Covariance Stationarity under Moment Condition Failure
with an Application to Common Stock Returns, Yale University, Cowles Foundation Discussion Paper,
no. 947 (June)
Pictet, Olivier, Michel Dacorogna & Ulrich Müller (1996), Hill, Bootstrap and Jackknife Estimators for
Heavy Tails, Olsen & Associates, Zürich, working paper, 11 December
Ramsey, James, Chera Sayers & Philip Rothman (1990), The Statistical Properties of Dimension Calculations Using Small Data Sets: Some Economic Applications, International Economic Review, vol. 31,
no. 4 (November), pp. 991-1020
Rust, John (1993), GAUSS and MATLAB: A Comparison, Journal of Applied Econometrics, vol. 8, pp.
307-324
Scheinkman, José & Blake LeBaron (1989), Nonlinear Dynamics and Stock Returns, Journal of Business,
vol. 62, no. 3 (July), pp. 311-337
Theiler, James (1990), Estimating Fractal Dimension, Journal of the Optical Society of America A (Optics
and Image Science), vol. 7, no. 6 (June), pp. 1055-1073
Tsay, Ruey (1986), Nonlinearity Tests for Time Series, Biometrika, vol. 73, no. 2, pp. 461-466
White, Halbert (1989a), Some Asymptotic Results for Learning in Single Hidden-Layer Feedforward Network Models, Journal of the American Statistical Association, vol. 84, no. 408, pp. 1003-1013
White, Halbert (1989b), An Additional Hidden Unit Test for Neglected Nonlinearity in Multilayer Feedforward Networks, Proceedings of the International Joint Conference on Neural Networks, IEEE Press,
New York, vol. 2, pp. 451-455
White, Halbert (1997), A Reality Check for Data Snooping, Econometrica, forthcoming; revised version
of San Diego, California, NRDA, technical report
Wolff, Rodney (1995), A Poisson Distribution for the BDS Test Statistic for Independence in a Time
Series, Chapter 5 in Howell Tong, ed., Chaos and Forecasting: Proceedings of the Royal Society Discussion Meeting, World Scientific, Singapore, pp. 109-127

VERY FAST AND CORRECTLY SIZED
ESTIMATION OF THE BDS STATISTIC
Ludwig Kanzler
Christ Church
University of Oxford

Appendix B:
Tables

See page 4 for a table of contents.

1 February 1999

BDS Estimation: Appendix B (Tables)

B2

Table 1: Run-Time and Memory Requirements of the BDS Test

using:
a) LeBaron's C programme compiled for MATLAB (BDSC.DLL) 1)
b) the author's uncompiled MATLAB programme (BDS.M)
c) the author's slow version without bitwise operations (BDSNOBIT.M)

on a Pentium-II 233 MHz personal computer running MATLAB 5.1 under Windows NT 4

Each row gives, for one sample size, the RAM requirement in MB 2) of programmes a) / b) / c), followed by the CPU-time in minutes of each programme at embedding dimensions m = 2, 3, 4, 5, 10, 15 and 19/20 3).

Sample size    RAM in MB 2)          m=2      m=3      m=4      m=5      m=10     m=15     m=19/20
500            0 / 0-1 / 1
  a)                                 0:00     0:00     0:00     0:00     0:00     0:00     0:00
  b)                                 0:01     0:01     0:01     0:01     0:02     0:03     0:03
  c)                                 0:02     0:04     0:05     0:07     0:14     0:21     0:28
1,000          0 / 0-3 / 2
  a)                                 0:00     0:00     0:00     0:00     0:00     0:00     0:00
  b)                                 0:02     0:03     0:03     0:04     0:06     0:09     0:11
  c)                                 0:07     0:12     0:16     0:20     0:42     1:04     1:26
2,500          1 / 1-16 / 15
  a)                                 0:01     0:01     0:01     0:01     0:01     0:01     0:01
  b)                                 0:11     0:14     0:17     0:19     0:33     0:47     1:01
  c)                                 0:37     1:00     1:24     1:47     3:44     5:41     7:37
5,000          2 / 4-58 / 56
  a)                                 0:03     0:03     0:03     0:04     0:04     0:04     0:05
  b)                                 0:40     0:51     1:01     1:12     2:07     3:02     3:57
  c)                                 2:17     3:46     5:14     6:42     14:02    21:21    28:43
7,500          4 / 8-122 / 131
  a)                                 0:08     0:08     0:08     0:08     0:09     0:10     0:11
  b)                                 1:25     1:49     2:11     2:34     4:29     6:24     8:23
  c)                                 5:24     8:27     11:48    15:06    31:41    48:15    1:04:48
10,000         7 / 15-207 / 233
  a)                                 0:15     0:15     0:15     0:16     0:17     0:18     0:19
  b)                                 2:31     3:15     3:59     3:44     8:25     12:06    15:47
  c)                                 9:36     n/a      n/a      n/a      n/a      n/a      2:00:02
15,000         15 / 33-439 / 525
  a)                                 0:35     0:35     0:36     0:37     0:40     0:43     0:45
  b)                                 5:36     7:06     8:36     5:05     17:33    25:00    32:28
  c)                                 22:58    n/a      n/a      n/a      n/a      n/a      n/a
20,000         26 / 59-751 / n/a
  a)                                 1:05     1:06     1:07     1:10     1:17     1:19     1:23
  b)                                 22:47    36:33    51:06    1:05:32  2:17:26  3:29:21  4:40:35
  c)                                 n/a      n/a      n/a      n/a      n/a      n/a      n/a
49,000         151 / 352-3988 / n/a
  a)                                 8:35     8:44     8:57     9:04     9:36     10:09    10:38
  b)                                 n/a      n/a      n/a      n/a      n/a      n/a      n/a
  c)                                 n/a      n/a      n/a      n/a      n/a      n/a      n/a
98,000         600 / 1409-14639 / n/a
  a)                                 n/a      n/a      n/a      n/a      n/a      n/a      n/a
  b)                                 n/a      n/a      n/a      n/a      n/a      n/a      n/a
  c)                                 n/a      n/a      n/a      n/a      n/a      n/a      n/a

1) The speed of LeBaron's programme depends to a significant extent on the proportion of observations being close. It is much faster for small ε than for large ε. Here, ε was set to σ; for ε > 2σ, run-time may almost double. (The speed of the author's algorithms is independent of ε.)

2) The RAM requirements refer to the amount of memory needed on top of what the operating system and any running applications, including MATLAB itself, occupy. It was measured using the Windows NT Task Manager. The range of memory quoted for the BDS.M programme indicates how much RAM the slowest and how much RAM the fastest algorithm would require. The amount of available RAM was set to 150 MB (appropriate for a machine with 192 MB of physical RAM), so the author's programme switched to slower algorithms for samples of size 10,000 and larger.

3) LeBaron's programme is limited to a maximum embedding dimension of 19.
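The roughly quadratic growth of run-time with sample size in Table 1 follows from the fact that a direct evaluation of the correlation integral must examine every distinct pair of observations. A minimal sketch (illustrative Python, not part of the author's MATLAB code; the helper name `pair_count` is hypothetical):

```python
def pair_count(n):
    """Number of distinct observation pairs a direct evaluation of the
    correlation integral must examine for a sample of size n."""
    return n * (n - 1) // 2

# Doubling the sample size roughly quadruples the work, consistent with
# the run-times in Table 1 (e.g. programme c at m = 2: 0:37 for n = 2,500
# versus 2:17 for n = 5,000).
ratio = pair_count(5000) / pair_count(2500)
print(round(ratio, 3))  # → 4.001
```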

Appendix B (Tables) of D.Phil. Thesis L. Kanzler

B3

Table 2, Panels 1 to 30: Tables of the BDS Distribution

The small-sample distribution of the BDS statistic is tabulated here for some combinations of parameter values:
- n, the sample size: 50, 100, 250, 500, 750, 1,000, 2,500, 10,000, 15,000,
- m, the correlation dimension: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
- ε, the dimensional distance: 0.5σ, 0.75σ, 1.0σ, 1.5σ, 2.0σ,
where σ denotes the standard deviation of a normally distributed sample.

All of the distributions tabulated here are also plotted analogously in Appendix A, Figure 2.

Reference to ε/σ requires the test sample to be distributed approximately Gaussian. By contrast, a table-lookup with respect to the first-dimensional correlation integral c1 is (theoretically) valid for any distribution under the null hypothesis of independence. The recommended test procedure is thus to find a value of ε (by trial) such that c1 is close to one of the tabulated values. In general, the BDS distribution comes closest to N(0,1) for ε/σ = 1.5 for low dimensions and for ε/σ = 2.0 for high dimensions.

The location of the BDS tables in this appendix is indicated by the following index; for each sample size, the first page listed holds the panels for ε/σ = 0.5 and 1.0, the second those for ε/σ = 1.5 and 2.0:

n = 50: B4, B5
n = 100: B6, B7
n = 250: B8, B9
n = 500: B10, B11
n = 750: B12, B13
n = 1,000: B14, B15
n = 2,500: B16, B17
n = 10,000 and n = 15,000: B18
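The recommended table-lookup procedure — choosing ε by trial so that the first-dimensional correlation integral c1 comes close to a tabulated value — can be sketched as follows. This is an illustrative Python sketch, not the author's MATLAB implementation (BDS.M); the function names are chosen here for exposition. Since c1 is non-decreasing in ε, a simple bisection suffices:

```python
import numpy as np

def c1(x, eps):
    """First-dimensional correlation integral: fraction of distinct pairs
    (i, j), i != j, whose distance |x_i - x_j| is smaller than eps."""
    x = np.asarray(x, dtype=float)
    n = x.size
    dist = np.abs(x[:, None] - x[None, :])
    close = np.count_nonzero(dist < eps) - n   # subtract the n self-pairs
    return close / (n * (n - 1))

def eps_for_c1(x, target, tol=1e-3, max_iter=100):
    """Find eps by bisection such that c1(x, eps) is close to target,
    as required for a table-lookup on c1 rather than on eps/sigma."""
    lo, hi = 0.0, float(np.ptp(x))             # c1 is 0 at lo, near 1 at hi
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        val = c1(x, mid)
        if abs(val - target) < tol:
            break
        if val < target:
            lo = mid
        else:
            hi = mid
    return mid
```

For an approximately Gaussian sample of unit variance, eps_for_c1(x, 0.27) should return a value near 0.5, in line with the c1 ≈ 0.27 quoted in the panel headers for ε/σ = 0.5.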

B4

Distribution of the BDS Statistic

ε/σ = 0.5 (c1 ≈ 0.27)

Sample size n = 50
Dimension m=

10

11

12

13

14

15 N(0,1)

Quantile
0.5% -22.66 -27.66 -31.41 -40.37 -54.26 -48.20 -40.93 -34.41 -28.99 -26.20 -22.25 -22.19 -22.79 -24.73
1.0% -13.87 -16.90 -19.92 -24.97 -28.85 -25.56 -21.97 -18.59 -16.45 -15.29 -14.53 -14.65 -14.99 -16.43
2.5% -8.01 -9.87 -12.09 -14.93 -15.79 -13.73 -11.88 -10.34 -9.30 -8.68 -8.71 -8.58 -8.98 -9.75
5.0% -5.75 -6.94 -8.74 -10.85 -10.85 -9.39 -7.98 -6.89 -6.33 -5.95 -5.81 -5.80 -5.89 -6.29
95.0%
5.66 6.88 9.14 13.14 18.81 15.41 -0.68 -0.53 -0.40 -0.29 -0.21 -0.15 -0.11 -0.07
97.5%
8.27 10.05 13.92 20.89 31.30 36.99 -0.44 -0.41 -0.30 -0.22 -0.15 -0.10 -0.07 -0.05
99.0% 13.99 16.09 23.76 34.03 56.16 74.44 73.59 -0.29 -0.22 -0.15 -0.10 -0.07 -0.04 -0.03
99.5% 21.47 27.66 37.27 57.80 89.30 118.37 147.67 45.58 -0.18 -0.12 -0.08 -0.05 -0.03 -0.02

-2.58
-2.33
-1.96
-1.65
1.65
1.96
2.33
2.58

Size
% < -2.576
% < -2.326
% < -1.960
% < -1.645
% > 1.645
% > 1.960
% > 2.326
% > 2.576
Median
Mean
Std. dev.
Skewness
Kurtosis

19.8
22.3
26.4
30.3
23.5
20.7
17.8
16.0
-0.36
-0.67
41.97
-91.35
10939

23.9
26.5
30.5
34.1
25.5
22.8
20.0
18.4

29.7
32.2
35.8
39.2
27.4
25.2
23.1
21.5

-0.44 -0.72
-0.48 -1.10
38.87 109.5
-2.32 -96.49
3839 13409

37.6
39.6
43.0
46.1
27.5
26.1
24.3
23.3

57.7
61.8
67.5
71.4
21.9
21.3
20.7
20.2

53.1
59.2
68.7
76.9
7.3
7.3
7.3
7.2

41.4
47.3
57.5
67.5
2.0
2.0
2.0
2.0

-1.30 -3.03 -2.71 -2.22


-0.97 -0.75 -0.16 -1.06
124.8 134.8 168.9 106.7
-113.25 -10.45 82.51 30.92
16581 8209 12394 6732

32.3
37.4
46.2
55.9
0.6
0.6
0.6
0.6

26.3
30.4
37.9
46.1
0.1
0.1
0.1
0.1

22.0
25.5
32.2
39.3
0.0
0.0
0.0
0.0

19.5
22.2
27.5
33.8
0.0
0.0
0.0
0.0

16.1
18.0
21.6
25.6
0.0
0.0
0.0
0.0

0.5
1.0
2.5
5.0
5.0
2.5
1.0
0.5

-1.83 -1.51 -1.27 -1.09 -0.92 -0.79 -0.69


-0.68 0.01 -1.62 -2.37 -2.30 -2.19 -2.11
160.9 255.2 101.3 40.01 28.36 28.82 26.15
118.63 134.82 93.32 -17.74 -88.96 -89.50 -92.49
17468 19944 11566 7082 9162 8848 9718

0.00
0.00
1.00
0.00
3.00

Distribution of the BDS Statistic

17.8
20.2
25.0
30.3
0.0
0.0
0.0
0.0

ε/σ = 1.0 (c1 ≈ 0.51)

Sample size n = 50
2

10

11

12

13

0.5%
1.0%
2.5%
5.0%
95.0%
97.5%
99.0%
99.5%

-4.66
-4.10
-3.35
-2.84
2.76
3.50
4.52
5.43

-5.12
-4.42
-3.55
-2.99
2.89
3.70
4.90
5.82

-5.55
-4.67
-3.78
-3.12
3.03
3.94
5.25
6.46

-6.05
-5.18
-4.10
-3.32
3.27
4.35
5.97
7.24

-6.45
-5.49
-4.37
-3.54
3.64
4.98
6.78
8.60

-6.99
-5.89
-4.58
-3.75
4.11
5.74
8.21
10.65

-7.46
-6.27
-4.92
-4.01
4.81
6.81
9.89
12.56

-7.65
-6.45
-5.06
-4.12
5.66
8.33
12.32
16.11

-7.89
-6.57
-5.13
-4.20
6.60
10.21
15.44
21.04

-7.72
-6.65
-5.13
-4.16
7.15
11.97
20.26
26.66

-8.09
-6.69
-5.06
-4.03
6.60
13.38
24.66
34.22

-8.60
-6.95
-5.25
-4.06
2.59
12.07
27.84
40.93

% < -2.576
% < -2.326
% < -1.960
% < -1.645
% > 1.645
% > 1.960
% > 2.326
% > 2.576

7.0
9.3
13.8
18.6
13.5
10.3
7.4
5.9

7.8
10.3
14.9
20.2
13.6
10.5
8.0
6.5

8.8
11.3
16.4
21.7
14.2
11.2
8.5
7.0

10.2
13.0
17.9
23.4
14.8
11.8
9.3
7.9

11.7
14.5
20.0
25.5
15.5
12.9
10.4
8.9

13.6
16.7
22.2
27.9
16.5
14.0
11.5
10.3

15.8
19.4
25.5
31.4
17.3
15.1
12.9
11.6

18.1
21.9
28.6
35.6
17.5
15.5
13.8
12.7

18.3
22.7
30.3
38.5
16.4
15.0
13.5
12.6

17.1
21.0
28.8
37.5
13.6
12.6
11.8
11.2

15.5
18.9
25.9
33.9
9.3
9.0
8.6
8.4

14.2
17.3
23.3
30.2
5.4
5.3
5.1
5.0

Dimension m=

16.7
18.8
22.9
27.5
0.0
0.0
0.0
0.0

14

15 N(0,1)

Quantile
-9.37 -10.87
-7.70 -8.25
-5.45 -5.83
-4.17 -4.28
-0.18 -0.15
4.40 -0.10
26.77 16.38
46.54 46.35

-2.58
-2.33
-1.96
-1.65
1.65
1.96
2.33
2.58

Size

Median
Mean
Std. dev.
Skewness
Kurtosis

12.9
15.2
19.7
24.7
1.4
1.4
1.4
1.4

0.5
1.0
2.5
5.0
5.0
2.5
1.0
0.5

-0.27 -0.34 -0.40 -0.44 -0.52 -0.62 -0.77 -1.04 -1.27 -1.28 -1.19 -1.07 -0.95 -0.84
-0.19 -0.23 -0.26 -0.29 -0.30 -0.31 -0.33 -0.35 -0.37 -0.40 -0.42 -0.49 -0.58 -0.65
1.79 1.88 2.02 2.18 2.41 2.67 3.04 3.54 4.15 4.90 5.81 6.82 8.12 9.72
0.43 0.84 1.88 1.74 2.42 1.63 1.83 2.73 4.00 5.84 8.43 12.01 17.82 26.08
8.94 16.10 51.51 40.27 59.03 20.24 20.89 31.04 44.78 71.68 134.78 250.53 543.09 1134.7

0.00
0.00
1.00
0.00
3.00

Based on BDS test results of 25,000 normal random samples

13.2
15.7
21.1
27.1
2.7
2.7
2.7
2.6

B5

Distribution of the BDS Statistic

ε/σ = 1.5 (c1 ≈ 0.71)

Sample size n = 50
2

10

11

12

13

14

0.5%
1.0%
2.5%
5.0%
95.0%
97.5%
99.0%
99.5%

-4.09
-3.69
-3.08
-2.63
2.46
3.01
3.64
4.15

-4.01
-3.65
-3.12
-2.67
2.46
3.04
3.69
4.11

-4.14
-3.75
-3.19
-2.73
2.47
3.07
3.76
4.26

-4.08
-3.69
-3.19
-2.75
2.46
3.07
3.91
4.57

-4.15
-3.75
-3.25
-2.80
2.44
3.14
4.05
4.70

-4.30
-3.92
-3.32
-2.84
2.47
3.27
4.26
4.96

-4.38
-3.98
-3.36
-2.88
2.53
3.34
4.43
5.22

-4.53
-4.12
-3.49
-2.97
2.60
3.48
4.62
5.56

-4.98
-4.37
-3.63
-3.05
2.67
3.64
4.92
6.05

-5.32
-4.64
-3.74
-3.14
2.76
3.78
5.22
6.40

-5.64
-4.85
-3.94
-3.24
2.85
3.96
5.65
6.90

-6.19
-5.19
-4.21
-3.40
2.95
4.25
6.13
7.41

-7.05
-5.76
-4.46
-3.55
3.07
4.54
6.54
8.29

-7.52
-6.35
-4.74
-3.74
3.20
4.87
7.09
9.19

-2.58
-2.33
-1.96
-1.65
1.65
1.96
2.33
2.58

% < -2.576
% < -2.326
% < -1.960
% < -1.645
% > 1.645
% > 1.960
% > 2.326
% > 2.576

5.5
7.4
11.5
16.4
11.9
8.7
5.9
4.3

5.7
8.0
12.2
17.5
11.5
8.4
5.8
4.3

6.2
8.5
13.2
18.5
11.4
8.3
5.8
4.4

6.4
9.0
14.0
19.5
11.0
8.2
5.7
4.4

6.8
9.3
14.7
20.2
10.7
8.0
5.6
4.4

7.1
9.6
14.9
20.7
10.6
8.1
5.7
4.6

7.6
10.3
15.6
21.6
10.5
8.0
6.1
4.8

8.1
11.1
16.4
22.4
10.6
8.3
6.1
5.1

8.4
11.4
16.9
22.9
10.4
8.3
6.5
5.3

8.9
11.6
17.2
23.5
10.7
8.5
6.6
5.7

9.8
12.6
17.8
23.9
10.6
8.7
6.8
5.9

10.6
13.3
18.6
24.7
10.8
8.9
7.1
6.1

11.2
13.8
19.1
25.0
10.8
9.1
7.5
6.6

11.9
14.5
19.5
25.4
10.7
9.1
7.7
6.8

0.5
1.0
2.5
5.0
5.0
2.5
1.0
0.5

-0.23 -0.29 -0.34 -0.38 -0.43 -0.47 -0.51 -0.56 -0.61 -0.64 -0.68 -0.73 -0.79 -0.82
-0.18 -0.22 -0.26 -0.30 -0.34 -0.36 -0.39 -0.42 -0.45 -0.47 -0.50 -0.54 -0.58 -0.63
1.54 1.55 1.58 1.59 1.61 1.64 1.67 1.72 1.78 1.85 1.94 2.04 2.17 2.30
0.18 0.24 0.28 0.37 0.43 0.50 0.58 0.67 0.72 0.81 0.91 1.06 1.13 1.39
3.25 3.27 3.38 3.57 3.81 4.10 4.42 4.89 5.58 6.49 7.68 9.10 10.65 13.19

0.00
0.00
1.00
0.00
3.00

Dimension m=

15 N(0,1)

Quantile

Size

Median
Mean
Std. dev.
Skewness
Kurtosis

Distribution of the BDS Statistic

ε/σ = 2.0 (c1 ≈ 0.84)

Sample size n = 50
2

10

11

12

13

14

0.5%
1.0%
2.5%
5.0%
95.0%
97.5%
99.0%
99.5%

-4.86
-4.42
-3.69
-3.02
2.83
3.50
4.23
4.70

-4.77
-4.34
-3.68
-3.09
2.80
3.42
4.26
4.77

-4.74
-4.35
-3.67
-3.11
2.79
3.46
4.19
4.71

-4.67
-4.28
-3.69
-3.15
2.76
3.44
4.22
4.72

-4.83
-4.40
-3.78
-3.23
2.72
3.44
4.27
4.81

-4.89
-4.48
-3.79
-3.25
2.72
3.42
4.22
4.82

-4.86
-4.52
-3.86
-3.33
2.71
3.40
4.25
4.91

-5.14
-4.62
-3.96
-3.38
2.71
3.40
4.24
4.93

-5.33
-4.78
-4.02
-3.44
2.67
3.41
4.30
5.08

-5.48
-4.88
-4.09
-3.53
2.68
3.39
4.40
5.16

-5.73
-5.11
-4.26
-3.60
2.65
3.44
4.48
5.21

-6.05
-5.32
-4.41
-3.69
2.62
3.46
4.53
5.30

-6.45
-5.65
-4.63
-3.82
2.61
3.48
4.55
5.44

-6.89
-5.94
-4.87
-3.98
2.62
3.52
4.74
5.53

-2.58
-2.33
-1.96
-1.65
1.65
1.96
2.33
2.58

% < -2.576
% < -2.326
% < -1.960
% < -1.645
% > 1.645
% > 1.960
% > 2.326
% > 2.576

7.6
9.5
13.4
18.0
14.0
10.8
7.7
6.2

8.4
11.0
15.4
20.2
13.7
10.4
7.6
6.1

9.2
11.8
16.5
21.6
13.5
10.4
7.5
6.0

9.5
12.3
17.2
22.5
13.3
10.1
7.3
5.8

10.2
13.1
18.0
23.3
13.1
9.9
7.2
5.7

10.7
13.6
18.7
24.3
12.7
9.8
7.1
5.7

11.0
14.2
19.5
24.9
12.6
9.7
7.1
5.7

11.5
14.5
20.0
25.7
12.3
9.5
7.0
5.6

12.3
15.3
20.7
26.2
12.2
9.4
6.8
5.5

12.7
15.8
21.1
26.8
11.9
9.3
6.8
5.5

13.2
16.2
21.5
27.2
11.6
9.0
6.7
5.3

13.8
16.6
22.1
27.7
11.3
8.7
6.4
5.2

14.5
17.6
22.7
28.3
10.9
8.5
6.4
5.2

14.8
17.8
23.0
28.5
10.4
8.2
6.2
5.2

0.5
1.0
2.5
5.0
5.0
2.5
1.0
0.5

-0.23 -0.29 -0.34 -0.38 -0.42 -0.46 -0.49 -0.51 -0.55 -0.59 -0.62 -0.65 -0.67 -0.69
-0.17 -0.24 -0.28 -0.31 -0.35 -0.39 -0.42 -0.46 -0.49 -0.52 -0.56 -0.60 -0.64 -0.68
1.74 1.77 1.78 1.79 1.81 1.82 1.83 1.85 1.87 1.89 1.92 1.94 1.99 2.03
0.13 0.18 0.20 0.22 0.21 0.23 0.24 0.23 0.22 0.22 0.20 0.17 0.11 0.04
3.51 3.39 3.29 3.27 3.31 3.31 3.35 3.49 3.62 3.81 3.97 4.14 4.39 4.78

0.00
0.00
1.00
0.00
3.00

Dimension m=

15 N(0,1)

Quantile

Size

Median
Mean
Std. dev.
Skewness
Kurtosis

Based on BDS test results of 25,000 normal random samples

B6

Distribution of the BDS Statistic

ε/σ = 0.5 (c1 ≈ 0.27)

Sample size n = 100


2

0.5%
1.0%
2.5%
5.0%
95.0%
97.5%
99.0%
99.5%

-5.33
-4.66
-3.78
-3.18
3.24
4.14
5.49
6.49

-6.39
-5.51
-4.50
-3.71
3.83
4.90
6.35
7.74

-7.91
-6.74
-5.56
-4.57
4.77
6.21
8.22
9.72

% < -2.576
% < -2.326
% < -1.960
% < -1.645
% > 1.645
% > 1.960
% > 2.326
% > 2.576

9.2
11.7
16.3
21.5
16.5
13.2
10.2
8.4

12.8
15.7
20.4
25.1
19.0
15.8
12.8
10.8

17.9
20.8
25.5
30.2
22.3
19.1
15.9
14.3

Dimension m=

10

11

12

13

14

-9.88 -11.20
-8.49 -9.70
-6.90 -8.11
-5.67 -6.93
6.61 9.80
8.75 13.58
11.79 19.23
14.34 23.76

-9.92
-8.72
-7.44
-6.44
14.90
21.82
33.76
43.12

-8.48
-7.42
-6.27
-5.45
20.07
34.20
58.23
79.06

-7.07
-6.29
-5.35
-4.62
-1.13
19.50
87.29

-6.10
-5.49
-4.59
-3.98
-0.96
-0.82
-0.61

61.0
64.9
68.6
69.8
25.9
24.8
23.8
23.0

57.9
67.2
78.8
86.0
9.5
9.5
9.5
9.5

40.4
50.5
67.1
80.9
2.6
2.6
2.6
2.6

25.5
33.9
49.9
66.0
0.8
0.8
0.8
0.8

-0.28 -0.30 -0.40 -0.63 -1.46 -3.11


-0.15 -0.17 -0.21 -0.22 -0.21 -0.19
2.02 2.38 2.96 3.95 5.64 8.42
0.53 0.53 0.67 1.18 2.17 4.52
5.63 7.78 8.33 9.92 17.58 54.21

-2.79
-0.10
13.21
9.20
174.6

-2.34
-0.08
21.36
17.45
499.1

-1.96
-0.10
34.93
32.14
1439

15 N(0,1)

-5.59
-4.88
-4.04
-3.50
-0.78
-0.67
-0.56
130.73 137.15 -0.47

-5.07
-4.48
-3.66
-3.14
-0.63
-0.53
-0.45
-0.39

-4.83
-4.14
-3.39
-2.86
-0.50
-0.43
-0.35
-0.30

-4.58
-3.90
-3.20
-2.64
-0.41
-0.34
-0.28
-0.24

-4.41
-3.83
-3.03
-2.50
-0.33
-0.27
-0.21
-0.18

-2.58
-2.33
-1.96
-1.65
1.65
1.96
2.33
2.58

16.0
22.2
35.1
50.2
0.2
0.2
0.2
0.2

10.5
14.9
24.6
37.6
0.1
0.1
0.1
0.1

7.5
10.7
17.9
27.8
0.0
0.0
0.0
0.0

5.6
8.0
13.5
21.2
0.0
0.0
0.0
0.0

4.5
6.2
10.6
17.2
0.0
0.0
0.0
0.0

0.5
1.0
2.5
5.0
5.0
2.5
1.0
0.5

-1.65
0.03
60.32
54.97
3983

-1.40
0.02
99.20
95.03
10762

-1.21 -1.05 -0.93


0.06 -0.60 -1.11
143.16 75.54 0.74
131.74 131.77 -2.24
18763 18262 12.41

0.00
0.00
1.00
0.00
3.00

Quantile

Size

Median
Mean
Std. dev.
Skewness
Kurtosis

25.2
28.0
32.4
36.4
25.6
23.0
20.4
18.7

34.2
37.3
42.8
47.6
27.4
25.6
23.4
22.1

Distribution of the BDS Statistic

ε/σ = 1.0 (c1 ≈ 0.52)

Sample size n = 100


2

10

11

12

13

14

0.5%
1.0%
2.5%
5.0%
95.0%
97.5%
99.0%
99.5%

-3.16
-2.88
-2.47
-2.12
2.16
2.64
3.25
3.72

-3.23
-2.93
-2.52
-2.16
2.20
2.73
3.37
3.81

-3.32
-3.00
-2.58
-2.21
2.28
2.85
3.56
4.06

-3.42
-3.12
-2.66
-2.29
2.41
3.05
3.87
4.39

-3.66
-3.27
-2.80
-2.40
2.60
3.28
4.18
4.94

-3.90
-3.50
-2.93
-2.53
2.80
3.67
4.77
5.58

-4.20
-3.72
-3.14
-2.69
3.13
4.16
5.42
6.37

-4.41
-3.97
-3.34
-2.87
3.61
4.84
6.40
7.58

-4.57
-4.10
-3.49
-3.03
4.18
5.68
7.74
9.39

-4.57
-4.12
-3.52
-3.07
5.01
6.96
9.92
12.14

-4.40
-4.01
-3.42
-3.03
6.01
8.84
12.57
15.70

-4.27
-3.85
-3.30
-2.90
7.12
10.88
16.04
20.87

-4.14
-3.73
-3.18
-2.76
7.81
12.77
21.06
27.11

-4.03
-3.63
-3.06
-2.64
7.05
14.23
26.29
35.25

-2.58
-2.33
-1.96
-1.65
1.65
1.96
2.33
2.58

% < -2.576
% < -2.326
% < -1.960
% < -1.645
% > 1.645
% > 1.960
% > 2.326
% > 2.576

2.0
3.4
6.7
11.0
9.6
6.5
3.9
2.7

2.2
3.7
7.0
11.7
9.6
6.7
4.2
3.1

2.5
4.0
7.8
12.6
10.1
7.2
4.7
3.5

2.9
4.6
8.5
13.6
10.9
8.0
5.4
4.2

3.7
5.6
9.8
15.1
11.7
8.9
6.4
5.1

4.6
6.7
11.3
17.0
12.7
10.0
7.5
6.1

5.9
8.2
13.2
18.8
13.9
11.2
8.8
7.4

7.5
10.3
15.7
22.0
15.3
12.6
10.2
8.8

9.2
12.7
19.1
26.1
16.6
14.1
11.9
10.7

10.7
15.0
23.2
31.4
17.6
15.5
13.5
12.2

10.3
15.1
25.5
36.6
17.8
16.1
14.4
13.4

8.6
13.0
22.9
35.8
15.8
14.6
13.4
12.8

7.0
10.4
19.0
30.5
12.6
12.1
11.4
11.0

5.5
8.3
15.2
25.2
8.1
8.0
7.8
7.7

0.5
1.0
2.5
5.0
5.0
2.5
1.0
0.5

-0.18 -0.22 -0.26 -0.29 -0.32 -0.36 -0.41 -0.50 -0.62 -0.86 -1.26 -1.36 -1.29 -1.19
-0.10 -0.13 -0.15 -0.16 -0.17 -0.17 -0.17 -0.18 -0.18 -0.17 -0.17 -0.17 -0.16 -0.18
1.31 1.33 1.38 1.45 1.54 1.67 1.84 2.06 2.36 2.77 3.29 3.93 4.72 5.64
0.31 0.37 0.48 0.58 0.69 0.83 1.04 1.33 1.77 2.45 3.33 4.41 5.83 7.86
3.23 3.42 3.77 4.02 4.50 5.10 5.99 7.49 10.19 15.70 24.26 36.42 56.28 94.98

0.00
0.00
1.00
0.00
3.00

Dimension m=

15 N(0,1)

Quantile

Size

Median
Mean
Std. dev.
Skewness
Kurtosis

Based on BDS test results of 25,000 normal random samples

B7

Distribution of the BDS Statistic

ε/σ = 1.5 (c1 ≈ 0.71)

Sample size n = 100


2

10

11

12

13

14

0.5%
1.0%
2.5%
5.0%
95.0%
97.5%
99.0%
99.5%

-3.15
-2.88
-2.45
-2.09
2.02
2.47
2.98
3.27

-3.12
-2.88
-2.48
-2.11
2.02
2.48
3.01
3.41

-3.15
-2.87
-2.44
-2.13
2.00
2.45
3.03
3.45

-3.14
-2.87
-2.47
-2.12
2.03
2.49
3.07
3.50

-3.10
-2.86
-2.49
-2.15
2.02
2.54
3.16
3.61

-3.14
-2.86
-2.49
-2.16
2.05
2.59
3.23
3.72

-3.08
-2.86
-2.48
-2.17
2.06
2.65
3.31
3.86

-3.13
-2.85
-2.48
-2.16
2.13
2.72
3.48
4.05

-3.15
-2.89
-2.50
-2.16
2.18
2.82
3.64
4.26

-3.23
-2.95
-2.50
-2.17
2.24
2.94
3.87
4.46

-3.21
-2.95
-2.53
-2.19
2.31
3.09
4.10
4.84

-3.27
-2.99
-2.56
-2.22
2.39
3.25
4.35
5.16

-3.38
-3.03
-2.60
-2.25
2.50
3.41
4.64
5.56

-3.49
-3.15
-2.63
-2.28
2.62
3.60
5.02
6.10

-2.58
-2.33
-1.96
-1.65
1.65
1.96
2.33
2.58

% < -2.576
% < -2.326
% < -1.960
% < -1.645
% > 1.645
% > 1.960
% > 2.326
% > 2.576

1.9
3.2
6.3
10.6
8.4
5.4
3.1
2.0

2.1
3.3
6.5
11.0
8.3
5.4
3.2
2.1

1.9
3.2
6.9
11.4
8.2
5.3
3.1
2.0

2.0
3.4
6.9
11.8
8.2
5.5
3.2
2.2

2.1
3.6
7.3
11.9
8.2
5.4
3.4
2.4

2.0
3.5
7.3
12.2
8.2
5.5
3.6
2.6

2.0
3.5
7.4
12.4
8.2
5.8
3.7
2.8

2.0
3.5
7.3
12.5
8.5
6.0
3.9
2.9

2.1
3.6
7.4
12.9
8.6
6.2
4.3
3.3

2.1
3.7
7.3
12.9
8.8
6.5
4.6
3.6

2.3
3.8
7.9
13.5
9.1
6.8
4.9
4.0

2.4
4.0
8.2
14.0
9.4
7.2
5.3
4.3

2.6
4.4
8.5
14.2
9.7
7.6
5.7
4.7

2.9
4.5
8.9
14.8
10.1
8.0
6.2
5.2

0.5
1.0
2.5
5.0
5.0
2.5
1.0
0.5

-0.16 -0.21 -0.24 -0.27 -0.29 -0.31 -0.33 -0.36 -0.38 -0.41 -0.43 -0.46 -0.48 -0.51
-0.12 -0.15 -0.17 -0.19 -0.20 -0.21 -0.22 -0.23 -0.24 -0.25 -0.25 -0.26 -0.26 -0.26
1.25 1.26 1.26 1.27 1.28 1.29 1.30 1.32 1.35 1.37 1.41 1.46 1.51 1.58
0.19 0.25 0.29 0.34 0.39 0.45 0.53 0.62 0.72 0.84 0.97 1.11 1.28 1.48
3.05 3.13 3.20 3.26 3.39 3.52 3.71 3.96 4.32 4.80 5.37 6.09 7.14 8.43

0.00
0.00
1.00
0.00
3.00

Dimension m=

15 N(0,1)

Quantile

Size

Median
Mean
Std. dev.
Skewness
Kurtosis

Distribution of the BDS Statistic

ε/σ = 2.0 (c1 ≈ 0.84)

Sample size n = 100


2

10

11

12

13

14

0.5%
1.0%
2.5%
5.0%
95.0%
97.5%
99.0%
99.5%

-3.66
-3.28
-2.73
-2.28
2.26
2.77
3.34
3.73

-3.60
-3.24
-2.75
-2.33
2.22
2.71
3.35
3.74

-3.56
-3.25
-2.77
-2.37
2.17
2.69
3.32
3.76

-3.52
-3.25
-2.81
-2.38
2.17
2.67
3.32
3.78

-3.43
-3.17
-2.76
-2.37
2.17
2.66
3.32
3.74

-3.44
-3.14
-2.73
-2.36
2.15
2.67
3.34
3.78

-3.43
-3.15
-2.74
-2.37
2.14
2.67
3.37
3.84

-3.47
-3.16
-2.72
-2.37
2.14
2.72
3.39
3.92

-3.46
-3.18
-2.74
-2.38
2.14
2.71
3.40
3.93

-3.48
-3.20
-2.76
-2.37
2.13
2.72
3.44
4.00

-3.48
-3.16
-2.76
-2.39
2.13
2.75
3.48
4.04

-3.56
-3.24
-2.80
-2.41
2.13
2.77
3.56
4.07

-3.56
-3.26
-2.81
-2.43
2.14
2.77
3.59
4.10

-3.62
-3.31
-2.82
-2.44
2.14
2.78
3.60
4.23

-2.58
-2.33
-1.96
-1.65
1.65
1.96
2.33
2.58

% < -2.576
% < -2.326
% < -1.960
% < -1.645
% > 1.645
% > 1.960
% > 2.326
% > 2.576

3.1
4.7
7.9
12.0
10.5
7.4
4.6
3.3

3.3
5.0
8.5
13.0
10.0
7.1
4.4
3.1

3.5
5.3
9.3
14.1
9.8
6.8
4.2
2.9

3.6
5.5
9.5
14.4
9.5
6.5
4.0
2.9

3.4
5.4
9.6
14.6
9.3
6.4
4.0
2.8

3.4
5.3
9.6
14.7
9.0
6.3
3.9
2.9

3.4
5.4
9.8
15.1
8.8
6.1
3.9
2.9

3.4
5.4
9.6
15.0
8.9
6.1
4.0
3.0

3.5
5.4
10.0
15.4
8.8
6.1
4.0
2.9

3.5
5.5
10.0
15.5
8.7
6.1
4.0
3.0

3.6
5.5
10.0
15.6
8.5
6.1
4.0
3.0

3.8
5.8
10.3
15.9
8.3
6.0
4.0
3.1

3.8
6.0
10.6
16.2
8.3
6.0
4.1
3.1

3.9
6.1
10.6
16.6
8.2
6.1
4.1
3.1

0.5
1.0
2.5
5.0
5.0
2.5
1.0
0.5

-0.17 -0.20 -0.22 -0.26 -0.27 -0.29 -0.32 -0.35 -0.37 -0.39 -0.41 -0.42 -0.44 -0.45
-0.10 -0.14 -0.17 -0.20 -0.22 -0.23 -0.25 -0.26 -0.28 -0.29 -0.30 -0.32 -0.33 -0.34
1.37 1.38 1.39 1.38 1.38 1.37 1.38 1.38 1.38 1.39 1.39 1.40 1.40 1.41
0.18 0.20 0.22 0.25 0.29 0.33 0.35 0.38 0.41 0.45 0.49 0.51 0.54 0.56
3.28 3.23 3.21 3.23 3.23 3.29 3.35 3.43 3.50 3.61 3.72 3.84 3.94 4.10

0.00
0.00
1.00
0.00
3.00

Dimension m=

15 N(0,1)

Quantile

Size

Median
Mean
Std. dev.
Skewness
Kurtosis

Based on BDS test results of 25,000 normal random samples

B8

Distribution of the BDS Statistic

ε/σ = 0.5 (c1 ≈ 0.27)

Sample size n = 250


2

10

11

0.5%
1.0%
2.5%
5.0%
95.0%
97.5%
99.0%
99.5%

-3.25
-2.94
-2.54
-2.18
2.27
2.81
3.49
3.96

-3.64
-3.32
-2.86
-2.40
2.50
3.13
3.90
4.49

-4.32
-3.91
-3.29
-2.79
2.98
3.75
4.70
5.42

-5.44
-4.85
-4.11
-3.47
3.79
4.78
6.00
7.01

-6.76
-6.19
-5.19
-4.45
5.19
6.63
8.37
9.81

-8.03
-7.40
-6.55
-5.78
7.68
10.09
13.11
15.49

-7.35
-6.95
-6.30
-5.83
12.18
16.49
22.93
28.61

-6.27
-5.91
-5.40
-5.00
18.51
28.55
42.19
54.34

-5.29
-5.02
-4.59
-4.25
13.10
43.94
73.31

-4.57
-4.33
-3.95
-3.65
-1.69
-1.52
98.79

% < -2.576
% < -2.326
% < -1.960
% < -1.645
% > 1.645
% > 1.960
% > 2.326
% > 2.576

2.3
3.9
7.1
11.8
10.3
7.3
4.7
3.4

3.9
5.6
9.5
14.3
12.5
9.1
6.2
4.6

6.7
9.0
13.5
18.4
15.6
12.1
9.0
7.2

11.9
14.4
19.2
23.8
19.7
16.4
13.2
11.3

19.4
22.4
27.3
31.5
24.5
21.6
18.5
16.7

29.0
33.0
38.7
41.5
27.8
25.6
23.2
21.6

50.3
50.4
50.4
50.5
25.3
24.4
23.3
22.7

79.6
82.2
83.2
83.3
16.7
16.7
16.7
16.7

72.2
83.9
92.9
94.8
5.1
5.1
5.1
5.1

44.9
63.1
85.9
95.9
1.4
1.4
1.4
1.4

-0.16 -0.18 -0.21 -0.24 -0.41 -0.74 -2.86 -3.43


-0.08 -0.09 -0.09 -0.09 -0.11 -0.14 -0.16 -0.20
1.36 1.51 1.78 2.24 2.99 4.26 6.37 9.81
0.36 0.37 0.41 0.48 0.72 1.20 2.26 4.22
3.38 3.56 3.73 4.02 4.63 6.35 11.65 29.41

-2.94
-0.10
15.74
8.04
95.01

-2.50
-0.12
25.73
15.21
305.0

Dimension m=

12

13

14

15 N(0,1)

-4.01
-3.77
-3.46
-3.17
-1.44
-1.34
-1.20
102.44 165.50 -1.08

-3.58
-3.35
-3.05
-2.80
-1.23
-1.13
-1.03
-0.94

-3.22
-3.02
-2.71
-2.48
-1.05
-0.96
-0.87
-0.82

-2.90
-2.72
-2.45
-2.23
-0.90
-0.82
-0.74
-0.69

-2.58
-2.33
-1.96
-1.65
1.65
1.96
2.33
2.58

21.8
36.4
65.1
87.1
0.4
0.4
0.4
0.4

9.1
17.7
40.9
68.9
0.1
0.1
0.1
0.1

3.8
7.9
23.0
47.1
0.0
0.0
0.0
0.0

1.6
3.7
11.9
29.8
0.0
0.0
0.0
0.0

0.5
1.0
2.5
5.0
5.0
2.5
1.0
0.5

-2.14
-0.02
43.11
26.04
844.1

-1.85
0.36
73.17
42.37
2176

-1.61 -1.42
0.44 -0.19

0.00
0.00
1.00
0.00
3.00

Quantile

Size

Median
Mean
Std. dev.
Skewness
Kurtosis

Distribution of the BDS Statistic

115.74 143.83
67.82 111.95

5170 12546

ε/σ = 1.0 (c₁ ≈ 0.52), sample size n = 250

Quantiles, by embedding dimension m
(columns: 0.5%, 1.0%, 2.5%, 5.0%, 95.0%, 97.5%, 99.0%, 99.5%)
m=2      -2.63  -2.42  -2.09  -1.81   1.83   2.26   2.76   3.08
m=3      -2.57  -2.38  -2.11  -1.83   1.88   2.30   2.82   3.15
m=4      -2.58  -2.39  -2.09  -1.83   1.88   2.35   2.93   3.31
m=5      -2.61  -2.40  -2.10  -1.84   1.95   2.44   3.01   3.47
m=6      -2.67  -2.44  -2.13  -1.86   2.03   2.54   3.18   3.70
m=7      -2.70  -2.48  -2.18  -1.90   2.15   2.69   3.40   3.97
m=8      -2.81  -2.59  -2.24  -1.97   2.25   2.88   3.78   4.39
m=9      -2.96  -2.72  -2.37  -2.06   2.45   3.12   4.11   4.91
m=10     -3.12  -2.87  -2.51  -2.20   2.71   3.49   4.58   5.38
m=11     -3.32  -3.06  -2.71  -2.37   3.05   3.93   5.21   6.49
m=12     -3.51  -3.28  -2.91  -2.57   3.56   4.68   6.22   7.70
m=13     -3.53  -3.33  -3.01  -2.73   4.21   5.60   7.76   9.43
m=14     -3.44  -3.24  -2.99  -2.76   5.05   6.99   9.79  12.14
m=15     -3.24  -3.08  -2.83  -2.63   6.33   8.82  12.75  15.87
N(0,1)   -2.58  -2.33  -1.96  -1.65   1.65   1.96   2.33   2.58

Size (% of simulated statistics beyond each conventional critical value)
(columns: % < -2.576, % < -2.326, % < -1.960, % < -1.645, % > 1.645, % > 1.960, % > 2.326, % > 2.576)
m=2       0.6   1.3   3.5   7.1   6.7   4.1   2.2   1.4
m=3       0.5   1.2   3.6   7.4   7.0   4.4   2.4   1.6
m=4       0.5   1.2   3.7   7.5   7.0   4.4   2.6   1.8
m=5       0.6   1.3   3.8   7.5   7.4   4.9   2.9   2.0
m=6       0.7   1.4   3.9   8.0   8.0   5.5   3.3   2.4
m=7       0.7   1.7   4.4   8.7   8.8   6.2   4.0   2.9
m=8       1.0   2.0   5.0   9.6   9.9   7.0   4.6   3.6
m=9       1.5   2.7   6.2  11.2  11.0   8.1   5.6   4.4
m=10      2.1   3.8   7.9  13.2  12.4   9.7   7.0   5.6
m=11      3.4   5.4  10.3  16.2  14.3  11.3   8.7   7.3
m=12      5.0   7.7  13.7  20.2  16.4  13.5  10.6   9.0
m=13      7.0  11.0  18.3  25.3  18.0  15.4  12.7  11.4
m=14      8.1  14.0  24.8  32.8  19.3  17.1  14.8  13.4
m=15      6.0  12.6  29.8  44.8  19.7  17.8  16.0  14.9
N(0,1)    0.5   1.0   2.5   5.0   5.0   2.5   1.0   0.5

Moments, m = 2 to 15 (left to right)
Median     -0.12  -0.14  -0.16  -0.18  -0.21  -0.22  -0.24  -0.26  -0.29  -0.34  -0.42  -0.58  -0.84  -1.46
Mean       -0.07  -0.08  -0.09  -0.10  -0.10  -0.10  -0.09  -0.09  -0.08  -0.08  -0.07  -0.07  -0.07  -0.06
Std. dev.   1.11   1.13   1.14   1.16   1.19   1.24   1.31   1.41   1.55   1.73   1.98   2.31   2.74   3.30
Skewness    0.27   0.33   0.41   0.50   0.58   0.67   0.77   0.87   1.01   1.19   1.46   1.83   2.40   3.24
Kurtosis    3.08   3.13   3.32   3.56   3.75   4.00   4.35   4.82   5.48   6.39   8.08  10.89  16.70  28.34
N(0,1)     median 0.00, mean 0.00, std. dev. 1.00, skewness 0.00, kurtosis 3.00

Based on BDS test results of 25,000 normal random samples
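On small samples such as n = 250, the tabulated quantiles, rather than the asymptotic N(0,1) critical values, should be used to decide rejection. A minimal sketch, assuming a two-sided 5% test at n = 250, ε/σ = 1.0, m = 2 (the 2.5% and 97.5% quantiles -2.09 and 2.26 are transcribed from the table above; the statistic itself would come from a separate BDS implementation):

```python
# Hedged sketch: two-sided 5% i.i.d. test using small-sample critical
# values (n = 250, eps/sigma = 1.0, m = 2) instead of the asymptotic
# N(0,1) bounds -1.96 / 1.96, which would over-reject on this sample size.
def bds_reject_iid(w, lo=-2.09, hi=2.26):
    """Reject H0 of i.i.d. if the BDS statistic w falls outside [lo, hi]."""
    return w < lo or w > hi

# A statistic of 2.10 exceeds the asymptotic 1.96 bound but lies inside
# the correctly sized small-sample bounds:
assert bds_reject_iid(2.10) is False
assert bds_reject_iid(2.30) is True
```

The same function applied with the asymptotic bounds `lo=-1.96, hi=1.96` illustrates how badly sized the naive test is for the more extreme (m, ε) combinations tabulated above.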


Distribution of the BDS Statistic
ε/σ = 1.5 (c₁ ≈ 0.71), sample size n = 250

Quantiles, by embedding dimension m
(columns: 0.5%, 1.0%, 2.5%, 5.0%, 95.0%, 97.5%, 99.0%, 99.5%)
m=2      -2.72  -2.48  -2.13  -1.81   1.82   2.22   2.68   2.97
m=3      -2.70  -2.49  -2.12  -1.81   1.82   2.22   2.67   3.01
m=4      -2.67  -2.46  -2.14  -1.85   1.82   2.25   2.74   3.09
m=5      -2.63  -2.43  -2.13  -1.84   1.84   2.25   2.77   3.12
m=6      -2.60  -2.40  -2.10  -1.82   1.85   2.29   2.84   3.19
m=7      -2.55  -2.37  -2.08  -1.81   1.87   2.33   2.90   3.27
m=8      -2.51  -2.32  -2.04  -1.79   1.89   2.35   2.94   3.28
m=9      -2.49  -2.31  -2.02  -1.77   1.92   2.40   3.02   3.41
m=10     -2.49  -2.29  -2.03  -1.77   1.94   2.44   3.09   3.52
m=11     -2.45  -2.29  -2.01  -1.75   1.98   2.52   3.15   3.66
m=12     -2.46  -2.27  -2.00  -1.76   2.02   2.55   3.28   3.85
m=13     -2.44  -2.26  -1.98  -1.75   2.06   2.63   3.46   3.97
m=14     -2.43  -2.25  -1.99  -1.75   2.11   2.73   3.59   4.20
m=15     -2.43  -2.25  -1.99  -1.74   2.16   2.82   3.73   4.42
N(0,1)   -2.58  -2.33  -1.96  -1.65   1.65   1.96   2.33   2.58

Size (% of simulated statistics beyond each conventional critical value)
(columns: % < -2.576, % < -2.326, % < -1.960, % < -1.645, % > 1.645, % > 1.960, % > 2.326, % > 2.576)
m=2       0.8   1.5   3.7   7.0   6.6   4.0   2.0   1.3
m=3       0.8   1.5   3.7   7.0   6.6   4.0   2.0   1.3
m=4       0.7   1.5   3.9   7.3   6.5   4.0   2.1   1.4
m=5       0.6   1.3   3.8   7.5   6.6   4.1   2.2   1.5
m=6       0.5   1.2   3.6   7.4   6.8   4.2   2.4   1.6
m=7       0.5   1.2   3.4   7.5   7.0   4.3   2.5   1.7
m=8       0.4   1.0   3.2   7.1   7.0   4.5   2.6   1.8
m=9       0.4   0.9   3.0   6.8   7.2   4.7   2.7   2.0
m=10      0.4   0.9   2.9   6.7   7.4   4.9   2.9   2.1
m=11      0.3   0.9   3.0   6.7   7.4   5.2   3.2   2.3
m=12      0.3   0.8   2.8   6.6   7.7   5.4   3.4   2.4
m=13      0.3   0.8   2.7   6.5   8.0   5.5   3.7   2.7
m=14      0.2   0.7   2.7   6.6   8.1   5.9   3.9   3.0
m=15      0.3   0.8   2.7   6.6   8.4   6.2   4.2   3.3
N(0,1)    0.5   1.0   2.5   5.0   5.0   2.5   1.0   0.5

Moments, m = 2 to 15 (left to right)
Median     -0.08  -0.11  -0.14  -0.16  -0.18  -0.19  -0.21  -0.22  -0.23  -0.25  -0.26  -0.27  -0.29  -0.31
Mean       -0.05  -0.07  -0.09  -0.10  -0.10  -0.11  -0.11  -0.12  -0.12  -0.12  -0.11  -0.11  -0.11  -0.11
Std. dev.   1.11   1.11   1.11   1.12   1.12   1.12   1.13   1.13   1.14   1.16   1.17   1.19   1.21   1.24
Skewness    0.18   0.23   0.27   0.32   0.39   0.45   0.51   0.59   0.66   0.73   0.82   0.93   1.04   1.16
Kurtosis    3.04   3.08   3.15   3.17   3.26   3.35   3.47   3.67   3.88   4.10   4.41   4.84   5.34   5.95
N(0,1)     median 0.00, mean 0.00, std. dev. 1.00, skewness 0.00, kurtosis 3.00

Distribution of the BDS Statistic
ε/σ = 2.0 (c₁ ≈ 0.84), sample size n = 250

Quantiles, by embedding dimension m
(columns: 0.5%, 1.0%, 2.5%, 5.0%, 95.0%, 97.5%, 99.0%, 99.5%)
m=2      -2.90  -2.62  -2.25  -1.91   1.88   2.30   2.79   3.16
m=3      -2.88  -2.64  -2.26  -1.92   1.86   2.29   2.78   3.15
m=4      -2.87  -2.63  -2.24  -1.93   1.86   2.29   2.75   3.09
m=5      -2.86  -2.61  -2.24  -1.93   1.86   2.28   2.78   3.07
m=6      -2.89  -2.59  -2.24  -1.93   1.85   2.27   2.80   3.13
m=7      -2.83  -2.59  -2.23  -1.94   1.83   2.29   2.79   3.17
m=8      -2.77  -2.56  -2.22  -1.93   1.83   2.28   2.81   3.14
m=9      -2.74  -2.51  -2.22  -1.93   1.84   2.28   2.83   3.18
m=10     -2.76  -2.52  -2.19  -1.91   1.85   2.29   2.83   3.20
m=11     -2.73  -2.50  -2.18  -1.91   1.85   2.29   2.81   3.24
m=12     -2.68  -2.49  -2.18  -1.89   1.85   2.31   2.84   3.22
m=13     -2.68  -2.48  -2.17  -1.89   1.87   2.33   2.88   3.24
m=14     -2.66  -2.45  -2.15  -1.89   1.88   2.35   2.92   3.32
m=15     -2.62  -2.43  -2.13  -1.88   1.88   2.36   2.93   3.36
N(0,1)   -2.58  -2.33  -1.96  -1.65   1.65   1.96   2.33   2.58

Size (% of simulated statistics beyond each conventional critical value)
(columns: % < -2.576, % < -2.326, % < -1.960, % < -1.645, % > 1.645, % > 1.960, % > 2.326, % > 2.576)
m=2       1.2   2.1   4.5   8.0   7.3   4.4   2.4   1.5
m=3       1.1   2.1   4.6   8.4   7.2   4.3   2.3   1.4
m=4       1.1   2.1   4.7   8.5   6.9   4.3   2.4   1.5
m=5       1.1   2.1   4.7   8.7   6.8   4.3   2.4   1.5
m=6       1.0   2.1   4.7   9.0   6.8   4.2   2.3   1.5
m=7       1.1   2.0   4.8   8.9   6.7   4.1   2.3   1.5
m=8       0.9   1.9   4.6   8.9   6.6   4.1   2.3   1.5
m=9       0.9   1.8   4.6   8.8   6.5   4.2   2.3   1.5
m=10      0.8   1.7   4.4   8.9   6.7   4.2   2.4   1.5
m=11      0.8   1.7   4.5   8.8   6.6   4.2   2.4   1.6
m=12      0.7   1.7   4.3   8.7   6.7   4.3   2.4   1.6
m=13      0.7   1.5   4.3   8.6   6.8   4.4   2.6   1.6
m=14      0.7   1.4   4.2   8.7   6.7   4.4   2.6   1.8
m=15      0.6   1.4   4.0   8.6   6.7   4.5   2.7   1.8
N(0,1)    0.5   1.0   2.5   5.0   5.0   2.5   1.0   0.5

Moments, m = 2 to 15 (left to right)
Median     -0.10  -0.12  -0.14  -0.15  -0.17  -0.19  -0.20  -0.21  -0.23  -0.24  -0.24  -0.26  -0.27  -0.28
Mean       -0.07  -0.09  -0.10  -0.11  -0.13  -0.14  -0.14  -0.15  -0.16  -0.16  -0.17  -0.17  -0.18  -0.18
Std. dev.   1.16   1.16   1.15   1.15   1.15   1.15   1.15   1.15   1.15   1.14   1.14   1.14   1.14   1.15
Skewness    0.16   0.19   0.20   0.22   0.24   0.27   0.30   0.32   0.36   0.39   0.42   0.46   0.51   0.55
Kurtosis    3.10   3.08   3.08   3.10   3.09   3.10   3.12   3.15   3.20   3.27   3.31   3.39   3.47   3.56
N(0,1)     median 0.00, mean 0.00, std. dev. 1.00, skewness 0.00, kurtosis 3.00

Based on BDS test results of 25,000 normal random samples


Distribution of the BDS Statistic
ε/σ = 0.5 (c₁ ≈ 0.28), sample size n = 500

Quantiles, by embedding dimension m
(columns: 0.5%, 1.0%, 2.5%, 5.0%, 95.0%, 97.5%, 99.0%, 99.5%)
m=2      -2.88  -2.63  -2.22  -1.89    2.01    2.46    2.95    3.32
m=3      -3.01  -2.78  -2.38  -2.01    2.13    2.64    3.27    3.61
m=4      -3.35  -3.04  -2.62  -2.24    2.37    2.92    3.72    4.14
m=5      -4.01  -3.67  -3.09  -2.64    2.87    3.55    4.41    5.03
m=6      -5.05  -4.52  -3.87  -3.32    3.68    4.55    5.74    6.58
m=7      -6.40  -5.92  -5.15  -4.38    5.25    6.61    8.50    9.75
m=8      -7.60  -7.22  -6.55  -5.78    8.05   10.41   13.55   15.80
m=9      -6.95  -6.69  -6.33  -6.00   13.57   17.88   23.17   27.76
m=10     -5.99  -5.74  -5.44  -5.20   22.54   31.94   44.46   55.17
m=11     -5.12  -4.93  -4.66  -4.46   31.55   49.71   84.82  109.55
m=12     -4.43  -4.28  -4.03  -3.85   -2.28   -2.08  133.43  198.94
m=13     -3.88  -3.74  -3.54  -3.35   -2.00   -1.89   -1.73  264.86
m=14     -3.43  -3.31  -3.11  -2.95   -1.73   -1.64   -1.53   -1.45
m=15     -3.07  -2.95  -2.77  -2.63   -1.51   -1.43   -1.33   -1.27
N(0,1)   -2.58  -2.33  -1.96  -1.65    1.65    1.96    2.33    2.58

Size (% of simulated statistics beyond each conventional critical value)
(columns: % < -2.576, % < -2.326, % < -1.960, % < -1.645, % > 1.645, % > 1.960, % > 2.326, % > 2.576)
m=2       1.1   2.0   4.3   8.1   8.5   5.4   3.1   2.1
m=3       1.6   2.7   5.5   9.6   9.3   6.2   3.8   2.7
m=4       2.7   4.3   7.7  12.4  11.4   8.1   5.3   3.9
m=5       5.4   7.6  11.7  16.2  14.9  11.4   8.2   6.5
m=6      10.6  13.4  17.9  23.0  20.3  16.7  13.1  11.0
m=7      19.3  22.4  26.8  31.4  25.7  22.7  19.5  17.6
m=8      32.6  33.3  35.5  39.4  29.9  27.7  25.3  23.7
m=9      45.1  45.1  45.1  45.3  29.0  28.5  27.8  27.0
m=10     79.7  79.7  79.7  79.7  20.3  20.3  20.3  20.3
m=11     92.5  93.6  93.8  93.8   6.2   6.2   6.2   6.2
m=12     83.4  94.0  98.0  98.2   1.8   1.8   1.8   1.8
m=13     51.5  75.4  96.1  99.3   0.6   0.6   0.6   0.6
m=14     21.2  43.1  81.0  97.5   0.2   0.2   0.2   0.2
m=15      6.3  17.8  53.2  86.8   0.1   0.1   0.1   0.1
N(0,1)    0.5   1.0   2.5   5.0   5.0   2.5   1.0   0.5

Moments, m = 2 to 15 (left to right)
Median     -0.09  -0.11  -0.14  -0.16  -0.22  -0.34  -0.69  -0.96  -3.99  -3.48  -2.99  -2.59  -2.26  -1.99
Mean       -0.04  -0.05  -0.06  -0.05  -0.05  -0.04   0.00   0.06   0.14   0.21   0.33   0.60   0.86   1.47
Std. dev.   1.20   1.27   1.41   1.68   2.15   2.99   4.39   6.77  10.75  17.62  29.82  52.03  90.29  162.65
Skewness    0.27   0.30   0.35   0.40   0.47   0.69   1.11   2.00   3.69   7.14  13.38  23.34  37.41   58.45
Kurtosis    3.17   3.28   3.42   3.60   3.75   4.37   5.74  10.09  24.37  81.10  260.3  724.8   1696    3923
N(0,1)     median 0.00, mean 0.00, std. dev. 1.00, skewness 0.00, kurtosis 3.00

Distribution of the BDS Statistic

ε/σ = 1.0 (c₁ ≈ 0.52), sample size n = 500

Quantiles, by embedding dimension m
(columns: 0.5%, 1.0%, 2.5%, 5.0%, 95.0%, 97.5%, 99.0%, 99.5%)
m=2      -2.55  -2.34  -2.01  -1.72   1.77   2.14   2.58   2.89
m=3      -2.52  -2.33  -2.00  -1.72   1.77   2.15   2.62   2.95
m=4      -2.53  -2.30  -1.99  -1.71   1.77   2.20   2.64   3.00
m=5      -2.51  -2.30  -1.96  -1.70   1.81   2.26   2.76   3.10
m=6      -2.46  -2.27  -1.99  -1.73   1.87   2.34   2.89   3.28
m=7      -2.46  -2.27  -2.00  -1.75   1.93   2.43   3.04   3.50
m=8      -2.48  -2.30  -2.03  -1.76   2.01   2.55   3.23   3.68
m=9      -2.54  -2.36  -2.07  -1.81   2.12   2.71   3.47   4.02
m=10     -2.65  -2.47  -2.16  -1.88   2.25   2.95   3.79   4.41
m=11     -2.80  -2.61  -2.28  -1.98   2.47   3.21   4.15   4.84
m=12     -3.01  -2.78  -2.46  -2.14   2.76   3.57   4.58   5.51
m=13     -3.24  -3.01  -2.66  -2.35   3.16   4.06   5.23   6.24
m=14     -3.43  -3.22  -2.91  -2.59   3.64   4.74   6.31   7.34
m=15     -3.47  -3.31  -3.04  -2.80   4.34   5.71   7.75   9.28
N(0,1)   -2.58  -2.33  -1.96  -1.65   1.65   1.96   2.33   2.58

Size (% of simulated statistics beyond each conventional critical value)
(columns: % < -2.576, % < -2.326, % < -1.960, % < -1.645, % > 1.645, % > 1.960, % > 2.326, % > 2.576)
m=2       0.5   1.0   2.8   5.9   6.1   3.5   1.7   1.0
m=3       0.4   1.0   2.7   5.9   6.0   3.6   1.8   1.1
m=4       0.4   0.9   2.8   5.9   6.1   3.7   2.0   1.2
m=5       0.4   0.9   2.5   5.8   6.4   4.0   2.2   1.4
m=6       0.4   0.9   2.7   6.1   6.8   4.3   2.6   1.7
m=7       0.3   0.8   2.8   6.2   7.3   4.9   2.9   2.0
m=8       0.3   0.9   3.0   6.5   7.8   5.4   3.4   2.4
m=9       0.4   1.1   3.4   7.2   8.6   6.0   4.0   2.9
m=10      0.7   1.5   4.2   8.2   9.7   6.9   4.6   3.6
m=11      1.1   2.3   5.3   9.8  11.1   8.1   5.7   4.5
m=12      1.8   3.3   7.2  12.3  12.7   9.7   7.1   5.8
m=13      3.1   5.2   9.9  15.6  14.7  11.7   9.0   7.5
m=14      5.1   8.1  13.9  20.0  16.7  13.9  11.2   9.7
m=15      8.2  12.5  19.1  25.9  19.0  16.4  13.6  12.2
N(0,1)    0.5   1.0   2.5   5.0   5.0   2.5   1.0   0.5

Moments, m = 2 to 15 (left to right)
Median     -0.10  -0.12  -0.14  -0.15  -0.17  -0.17  -0.18  -0.19  -0.20  -0.22  -0.26  -0.29  -0.37  -0.50
Mean       -0.06  -0.06  -0.07  -0.08  -0.08  -0.07  -0.07  -0.06  -0.05  -0.05  -0.04  -0.04  -0.03  -0.03
Std. dev.   1.06   1.06   1.07   1.08   1.10   1.12   1.16   1.22   1.29   1.39   1.54   1.73   1.99   2.35
Skewness    0.23   0.27   0.33   0.40   0.47   0.55   0.63   0.71   0.80   0.90   1.00   1.14   1.36   1.70
Kurtosis    3.05   3.07   3.16   3.28   3.40   3.57   3.77   4.02   4.37   4.76   5.20   5.95   7.29   9.78
N(0,1)     median 0.00, mean 0.00, std. dev. 1.00, skewness 0.00, kurtosis 3.00

Based on BDS test results of 25,000 normal random samples
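The quantile rows in these tables are order statistics of the simulated BDS statistics. A minimal sketch of the convention, assuming a simple empirical-quantile rule (the function name and the stand-in data are illustrative, not from the paper):

```python
# Hedged sketch: empirical quantile as an order statistic of simulated
# BDS statistics (crude nearest-rank convention for illustration only).
def quantile(xs, q):
    """Return the q-quantile of xs by picking the order statistic at rank
    round(q * n), clipped to the valid index range."""
    ys = sorted(xs)
    idx = min(round(q * len(ys)), len(ys) - 1)
    return ys[idx]

sims = list(range(1, 101))          # stand-in for 25,000 simulated statistics
assert quantile(sims, 0.05) == 6    # lower 5% order statistic
assert quantile(sims, 0.95) == 96   # upper 95% order statistic
```

With 25,000 replications, the choice among the standard empirical-quantile conventions makes little difference to the tabulated values.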


Distribution of the BDS Statistic
ε/σ = 1.5 (c₁ ≈ 0.71), sample size n = 500

Quantiles, by embedding dimension m
(columns: 0.5%, 1.0%, 2.5%, 5.0%, 95.0%, 97.5%, 99.0%, 99.5%)
m=2      -2.61  -2.37  -2.01  -1.71   1.74   2.10   2.50   2.80
m=3      -2.54  -2.35  -2.02  -1.71   1.72   2.09   2.54   2.86
m=4      -2.54  -2.31  -2.00  -1.71   1.73   2.12   2.52   2.86
m=5      -2.54  -2.30  -1.98  -1.71   1.74   2.11   2.57   2.94
m=6      -2.48  -2.26  -1.98  -1.69   1.74   2.13   2.60   2.93
m=7      -2.45  -2.25  -1.96  -1.69   1.74   2.15   2.63   2.99
m=8      -2.44  -2.23  -1.94  -1.69   1.76   2.17   2.65   3.04
m=9      -2.41  -2.22  -1.93  -1.67   1.76   2.20   2.71   3.05
m=10     -2.37  -2.19  -1.91  -1.67   1.79   2.23   2.75   3.14
m=11     -2.33  -2.17  -1.89  -1.66   1.82   2.28   2.85   3.25
m=12     -2.29  -2.14  -1.89  -1.65   1.85   2.33   2.88   3.33
m=13     -2.29  -2.12  -1.88  -1.65   1.89   2.40   2.98   3.46
m=14     -2.28  -2.10  -1.87  -1.64   1.94   2.45   3.09   3.53
m=15     -2.25  -2.09  -1.85  -1.63   1.98   2.51   3.21   3.68
N(0,1)   -2.58  -2.33  -1.96  -1.65   1.65   1.96   2.33   2.58

Size (% of simulated statistics beyond each conventional critical value)
(columns: % < -2.576, % < -2.326, % < -1.960, % < -1.645, % > 1.645, % > 1.960, % > 2.326, % > 2.576)
m=2       0.5   1.1   2.8   5.8   5.9   3.3   1.5   0.8
m=3       0.4   1.1   2.9   5.7   5.7   3.3   1.6   0.9
m=4       0.4   1.0   2.7   5.8   5.7   3.4   1.6   0.9
m=5       0.4   0.9   2.6   5.8   5.9   3.3   1.6   1.0
m=6       0.4   0.8   2.6   5.6   5.8   3.4   1.7   1.1
m=7       0.3   0.8   2.5   5.6   5.9   3.5   1.8   1.1
m=8       0.3   0.8   2.3   5.5   6.0   3.5   1.9   1.2
m=9       0.3   0.7   2.3   5.4   6.1   3.6   2.0   1.3
m=10      0.2   0.6   2.2   5.3   6.2   3.8   2.1   1.4
m=11      0.2   0.5   2.0   5.2   6.4   4.1   2.3   1.6
m=12      0.1   0.4   2.0   5.1   6.7   4.3   2.5   1.7
m=13      0.1   0.4   1.9   5.0   6.8   4.6   2.8   1.9
m=14      0.1   0.4   1.8   4.9   7.0   4.8   3.0   2.1
m=15      0.1   0.3   1.7   4.8   7.3   5.1   3.2   2.3
N(0,1)    0.5   1.0   2.5   5.0   5.0   2.5   1.0   0.5

Moments, m = 2 to 15 (left to right)
Median     -0.07  -0.08  -0.09  -0.10  -0.11  -0.12  -0.13  -0.15  -0.16  -0.17  -0.18  -0.19  -0.20  -0.21
Mean       -0.04  -0.05  -0.06  -0.06  -0.07  -0.07  -0.07  -0.08  -0.08  -0.08  -0.08  -0.08  -0.08  -0.07
Std. dev.   1.05   1.05   1.05   1.05   1.05   1.05   1.05   1.06   1.06   1.07   1.08   1.09   1.10   1.12
Skewness    0.15   0.18   0.21   0.25   0.29   0.33   0.37   0.42   0.47   0.54   0.61   0.68   0.75   0.83
Kurtosis    3.01   3.01   3.05   3.09   3.13   3.17   3.22   3.29   3.41   3.54   3.69   3.86   4.09   4.34
N(0,1)     median 0.00, mean 0.00, std. dev. 1.00, skewness 0.00, kurtosis 3.00

Distribution of the BDS Statistic
ε/σ = 2.0 (c₁ ≈ 0.84), sample size n = 500

Quantiles, by embedding dimension m
(columns: 0.5%, 1.0%, 2.5%, 5.0%, 95.0%, 97.5%, 99.0%, 99.5%)
m=2      -2.65  -2.43  -2.09  -1.76   1.78   2.13   2.58   2.91
m=3      -2.65  -2.43  -2.08  -1.78   1.76   2.14   2.57   2.87
m=4      -2.70  -2.45  -2.08  -1.79   1.74   2.14   2.58   2.88
m=5      -2.63  -2.43  -2.09  -1.79   1.73   2.13   2.58   2.87
m=6      -2.62  -2.39  -2.08  -1.79   1.74   2.12   2.55   2.88
m=7      -2.63  -2.40  -2.07  -1.78   1.75   2.11   2.57   2.87
m=8      -2.63  -2.40  -2.07  -1.78   1.75   2.10   2.57   2.88
m=9      -2.64  -2.39  -2.06  -1.78   1.74   2.10   2.56   2.92
m=10     -2.60  -2.40  -2.07  -1.78   1.74   2.12   2.60   2.92
m=11     -2.55  -2.37  -2.05  -1.78   1.74   2.15   2.63   2.95
m=12     -2.52  -2.34  -2.04  -1.78   1.75   2.16   2.66   2.99
m=13     -2.49  -2.31  -2.02  -1.76   1.77   2.18   2.69   2.99
m=14     -2.46  -2.30  -2.01  -1.74   1.77   2.19   2.71   3.06
m=15     -2.46  -2.28  -1.98  -1.73   1.78   2.20   2.75   3.10
N(0,1)   -2.58  -2.33  -1.96  -1.65   1.65   1.96   2.33   2.58

Size (% of simulated statistics beyond each conventional critical value)
(columns: % < -2.576, % < -2.326, % < -1.960, % < -1.645, % > 1.645, % > 1.960, % > 2.326, % > 2.576)
m=2       0.7   1.3   3.3   6.4   6.4   3.5   1.7   1.0
m=3       0.7   1.3   3.3   6.6   6.2   3.5   1.7   1.0
m=4       0.8   1.4   3.3   6.6   5.9   3.4   1.6   1.0
m=5       0.6   1.4   3.5   6.7   5.8   3.3   1.7   1.0
m=6       0.6   1.3   3.3   6.9   5.9   3.4   1.7   1.0
m=7       0.6   1.3   3.2   6.8   6.0   3.4   1.6   1.0
m=8       0.6   1.3   3.3   6.8   5.9   3.3   1.6   1.0
m=9       0.6   1.2   3.2   6.7   6.0   3.3   1.6   1.0
m=10      0.6   1.2   3.2   6.8   5.9   3.4   1.7   1.1
m=11      0.4   1.1   3.2   6.7   5.9   3.5   1.8   1.1
m=12      0.4   1.0   3.1   6.6   6.0   3.4   1.9   1.2
m=13      0.3   1.0   3.0   6.5   6.1   3.7   1.9   1.2
m=14      0.3   0.9   2.9   6.4   6.0   3.7   1.9   1.3
m=15      0.3   0.9   2.7   6.3   6.1   3.8   2.0   1.3
N(0,1)    0.5   1.0   2.5   5.0   5.0   2.5   1.0   0.5

Moments, m = 2 to 15 (left to right)
Median     -0.09  -0.09  -0.10  -0.11  -0.12  -0.13  -0.14  -0.15  -0.15  -0.16  -0.17  -0.18  -0.18  -0.20
Mean       -0.05  -0.06  -0.07  -0.08  -0.09  -0.09  -0.10  -0.10  -0.10  -0.10  -0.11  -0.11  -0.11  -0.12
Std. dev.   1.08   1.08   1.08   1.08   1.08   1.08   1.08   1.08   1.08   1.08   1.07   1.07   1.07   1.07
Skewness    0.17   0.16   0.16   0.18   0.21   0.22   0.24   0.25   0.27   0.30   0.33   0.37   0.41   0.45
Kurtosis    3.03   3.01   3.00   3.00   2.99   3.01   3.03   3.04   3.05   3.09   3.15   3.21   3.28   3.35
N(0,1)     median 0.00, mean 0.00, std. dev. 1.00, skewness 0.00, kurtosis 3.00

Based on BDS test results of 25,000 normal random samples


Distribution of the BDS Statistic
ε/σ = 0.75 (c₁ ≈ 0.40), sample size n = 100

Quantiles, by embedding dimension m
(columns: 0.5%, 1.0%, 2.5%, 5.0%, 95.0%, 97.5%, 99.0%, 99.5%)
m=2      -3.71  -3.28  -2.77  -2.39   2.43   2.99   3.72   4.25
m=3      -3.93  -3.47  -2.94  -2.51   2.58   3.25   4.13   4.79
m=4      -4.28  -3.84  -3.19  -2.72   2.83   3.60   4.60   5.33
m=5      -4.90  -4.34  -3.57  -2.99   3.21   4.12   5.40   6.36
m=6      -5.55  -4.89  -4.04  -3.39   3.85   4.97   6.65   7.83
m=7      -6.08  -5.49  -4.61  -3.87   4.78   6.35   8.76  10.45
m=8      -6.39  -5.75  -4.96  -4.25   6.07   8.45  11.72  14.51
m=9      -6.04  -5.48  -4.77  -4.21   8.04  11.60  16.64  20.80
m=10     -5.61  -4.98  -4.31  -3.84  10.32  15.95  24.81  31.45
m=11     -5.08  -4.56  -3.91  -3.44  11.19  20.16  34.87  50.07
m=12     -4.71  -4.18  -3.58  -3.12  -0.62  20.08  44.46  70.14
m=13     -4.33  -3.93  -3.30  -2.84  -0.60  -0.48  43.12  87.61
m=14     -4.15  -3.70  -3.08  -2.63  -0.52  -0.43  -0.33  62.64
m=15     -4.02  -3.54  -2.94  -2.48  -0.43  -0.36  -0.29  -0.25
N(0,1)   -2.58  -2.33  -1.96  -1.65   1.65   1.96   2.33   2.58

Size (% of simulated statistics beyond each conventional critical value)
(columns: % < -2.576, % < -2.326, % < -1.960, % < -1.645, % > 1.645, % > 1.960, % > 2.326, % > 2.576)
m=2       3.6   5.5   9.5  14.7  11.3   8.3   5.6   4.2
m=3       4.5   6.7  11.0  16.0  12.2   8.9   6.4   5.1
m=4       6.1   8.3  12.9  18.0  13.3  10.3   7.5   6.1
m=5       7.9  10.4  15.1  20.6  15.3  12.2   9.4   7.8
m=6      10.9  13.8  18.8  24.3  17.5  14.8  12.0  10.4
m=7      15.4  18.6  24.0  29.6  20.0  17.3  14.8  13.1
m=8      21.8  25.6  32.0  37.5  22.0  19.7  17.5  16.1
m=9      29.6  35.9  45.0  50.5  21.7  20.1  18.5  17.5
m=10     26.0  34.3  49.6  62.8  18.7  17.8  16.8  16.1
m=11     18.2  25.5  40.0  56.6   9.6   9.5   9.4   9.3
m=12     11.7  17.2  29.6  44.7   4.0   4.0   4.0   4.0
m=13      7.7  11.7  20.8  33.8   1.6   1.6   1.6   1.6
m=14      5.5   8.4  15.4  25.3   0.6   0.6   0.6   0.6
m=15      4.3   6.3  11.7  19.7   0.3   0.3   0.3   0.3
N(0,1)    0.5   1.0   2.5   5.0   5.0   2.5   1.0   0.5

Moments, m = 2 to 15 (left to right)
Median     -0.22  -0.26  -0.30  -0.34  -0.42  -0.55  -0.82  -1.68  -1.95  -1.77  -1.55  -1.36  -1.19  -1.04
Mean       -0.14  -0.16  -0.17  -0.17  -0.17  -0.17  -0.18  -0.17  -0.18  -0.18  -0.19  -0.21  -0.28  -0.33
Std. dev.   1.48   1.57   1.71   1.92   2.25   2.73   3.43   4.39   5.78   7.75  10.33  13.73  18.03  23.35
Skewness    0.38   0.43   0.50   0.61   0.78   1.12   1.74   2.79   4.57   7.26  11.15  17.60  28.03  42.69
Kurtosis    3.66   3.85   4.32   4.77   5.19   6.30   9.19  17.08  37.80  84.71  184.7  449.7   1085   2378
N(0,1)     median 0.00, mean 0.00, std. dev. 1.00, skewness 0.00, kurtosis 3.00

Distribution of the BDS Statistic
ε/σ = 1.0 (c₁ ≈ 0.52), sample size n = 750

Quantiles, by embedding dimension m
(columns: 0.5%, 1.0%, 2.5%, 5.0%, 95.0%, 97.5%, 99.0%, 99.5%)
m=2      -2.52  -2.32  -1.98  -1.68   1.72   2.08   2.48   2.78
m=3      -2.53  -2.31  -1.97  -1.69   1.74   2.13   2.59   2.91
m=4      -2.49  -2.30  -1.98  -1.69   1.76   2.15   2.65   2.99
m=5      -2.48  -2.28  -1.97  -1.69   1.79   2.21   2.69   3.09
m=6      -2.49  -2.29  -1.96  -1.69   1.80   2.27   2.78   3.16
m=7      -2.45  -2.25  -1.94  -1.69   1.87   2.35   2.88   3.27
m=8      -2.44  -2.23  -1.95  -1.70   1.91   2.43   3.00   3.50
m=9      -2.47  -2.28  -1.97  -1.73   2.01   2.52   3.21   3.73
m=10     -2.51  -2.32  -2.04  -1.77   2.13   2.68   3.46   3.94
m=11     -2.61  -2.40  -2.11  -1.83   2.29   2.90   3.64   4.22
m=12     -2.78  -2.55  -2.23  -1.96   2.49   3.17   4.02   4.70
m=13     -2.96  -2.76  -2.42  -2.13   2.74   3.55   4.53   5.22
m=14     -3.24  -3.03  -2.66  -2.34   3.13   4.03   5.16   6.06
m=15     -3.46  -3.24  -2.91  -2.59   3.70   4.71   6.21   7.30
N(0,1)   -2.58  -2.33  -1.96  -1.65   1.65   1.96   2.33   2.58

Size (% of simulated statistics beyond each conventional critical value)
(columns: % < -2.576, % < -2.326, % < -1.960, % < -1.645, % > 1.645, % > 1.960, % > 2.326, % > 2.576)
m=2       0.4   1.0   2.6   5.4   5.8   3.2   1.5   0.8
m=3       0.4   1.0   2.6   5.6   5.9   3.4   1.8   1.0
m=4       0.3   0.9   2.6   5.5   6.1   3.6   1.9   1.1
m=5       0.3   0.9   2.6   5.5   6.2   3.8   2.0   1.3
m=6       0.4   0.8   2.5   5.5   6.4   4.0   2.3   1.4
m=7       0.3   0.8   2.4   5.6   6.8   4.4   2.6   1.7
m=8       0.3   0.7   2.4   5.7   7.4   4.7   2.9   2.0
m=9       0.3   0.8   2.6   6.2   8.3   5.4   3.2   2.3
m=10      0.4   1.0   3.1   6.8   9.0   6.2   3.9   2.9
m=11      0.6   1.3   3.7   7.6  10.1   7.1   4.9   3.7
m=12      0.9   2.0   5.0   9.4  11.3   8.3   5.8   4.6
m=13      1.7   3.1   7.0  11.9  13.2  10.2   7.3   5.8
m=14      3.0   5.1   9.8  15.2  15.5  12.4   9.4   7.7
m=15      5.1   8.2  13.8  19.7  17.8  14.8  11.8  10.1
N(0,1)    0.5   1.0   2.5   5.0   5.0   2.5   1.0   0.5

Moments, m = 2 to 15 (left to right)
Median     -0.07  -0.09  -0.10  -0.11  -0.12  -0.13  -0.15  -0.15  -0.16  -0.17  -0.19  -0.21  -0.25  -0.31
Mean       -0.04  -0.05  -0.05  -0.05  -0.05  -0.05  -0.05  -0.04  -0.03  -0.02  -0.02  -0.01   0.00   0.01
Std. dev.   1.04   1.04   1.05   1.06   1.07   1.09   1.12   1.15   1.21   1.28   1.38   1.52   1.72   1.99
Skewness    0.18   0.25   0.31   0.36   0.41   0.48   0.56   0.63   0.70   0.75   0.81   0.90   1.01   1.17
Kurtosis    3.02   3.14   3.23   3.30   3.40   3.52   3.65   3.82   4.08   4.15   4.39   4.75   5.17   5.75
N(0,1)     median 0.00, mean 0.00, std. dev. 1.00, skewness 0.00, kurtosis 3.00

Based on BDS test results of 25,000 normal random samples
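The "Size" rows in these tables are the share of the simulated BDS statistics that fall beyond each conventional critical value. A minimal sketch of that computation (the function name and the small stand-in sample are illustrative, not from the paper):

```python
# Hedged sketch: empirical size of a one-tailed test at critical value crit,
# i.e. the fraction of simulated statistics in the rejection region.
def empirical_size(stats, crit):
    """Fraction below crit when crit < 0, fraction above crit otherwise."""
    if crit < 0:
        return sum(s < crit for s in stats) / len(stats)
    return sum(s > crit for s in stats) / len(stats)

sims = [-2.7, -1.0, 0.3, 1.2, 2.1, 2.5, -2.2, 0.0, 1.8, -0.4]
assert empirical_size(sims, 1.96) == 0.2     # would be 0.025 if well sized
assert empirical_size(sims, -2.576) == 0.1   # would be 0.005 if well sized
```

A correctly sized test reproduces the nominal N(0,1) row (0.5, 1.0, 2.5, 5.0, ...); the deviations tabulated above measure how far the asymptotic approximation is from holding at each (m, ε, n).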


Distribution of the BDS Statistic
ε/σ = 1.5 (c₁ ≈ 0.71), sample size n = 750

Quantiles, by embedding dimension m
(columns: 0.5%, 1.0%, 2.5%, 5.0%, 95.0%, 97.5%, 99.0%, 99.5%)
m=2      -2.57  -2.34  -2.01  -1.68   1.69   2.02   2.48   2.79
m=3      -2.56  -2.32  -1.99  -1.69   1.68   2.03   2.50   2.78
m=4      -2.54  -2.33  -1.98  -1.69   1.68   2.03   2.48   2.77
m=5      -2.53  -2.30  -1.98  -1.69   1.69   2.06   2.50   2.80
m=6      -2.50  -2.29  -1.97  -1.71   1.71   2.07   2.54   2.88
m=7      -2.45  -2.27  -1.96  -1.69   1.72   2.11   2.58   2.91
m=8      -2.40  -2.24  -1.95  -1.68   1.73   2.12   2.61   2.97
m=9      -2.39  -2.21  -1.93  -1.67   1.74   2.17   2.63   3.02
m=10     -2.36  -2.17  -1.91  -1.65   1.75   2.19   2.69   3.08
m=11     -2.34  -2.16  -1.90  -1.65   1.78   2.23   2.75   3.16
m=12     -2.31  -2.14  -1.87  -1.63   1.79   2.25   2.84   3.29
m=13     -2.28  -2.10  -1.85  -1.61   1.80   2.28   2.93   3.33
m=14     -2.26  -2.09  -1.84  -1.60   1.84   2.32   2.95   3.43
m=15     -2.23  -2.07  -1.82  -1.60   1.87   2.39   3.04   3.56
N(0,1)   -2.58  -2.33  -1.96  -1.65   1.65   1.96   2.33   2.58

Size (% of simulated statistics beyond each conventional critical value)
(columns: % < -2.576, % < -2.326, % < -1.960, % < -1.645, % > 1.645, % > 1.960, % > 2.326, % > 2.576)
m=2       0.5   1.0   2.8   5.5   5.5   2.9   1.4   0.8
m=3       0.5   1.0   2.7   5.6   5.4   2.9   1.4   0.8
m=4       0.4   1.0   2.6   5.5   5.3   2.9   1.5   0.8
m=5       0.4   0.9   2.7   5.4   5.5   3.1   1.4   0.9
m=6       0.4   0.9   2.6   5.7   5.6   3.2   1.5   0.9
m=7       0.3   0.8   2.5   5.6   5.6   3.3   1.7   1.0
m=8       0.3   0.7   2.4   5.4   5.8   3.3   1.8   1.1
m=9       0.2   0.7   2.3   5.3   5.8   3.5   1.9   1.1
m=10      0.2   0.6   2.1   5.1   5.9   3.6   2.1   1.2
m=11      0.2   0.5   2.0   5.0   6.1   3.8   2.1   1.4
m=12      0.1   0.5   1.8   4.8   6.2   3.9   2.2   1.5
m=13      0.1   0.4   1.8   4.6   6.4   4.1   2.4   1.6
m=14      0.1   0.4   1.6   4.5   6.5   4.3   2.5   1.8
m=15      0.1   0.3   1.5   4.4   6.7   4.4   2.7   1.9
N(0,1)    0.5   1.0   2.5   5.0   5.0   2.5   1.0   0.5

Moments, m = 2 to 15 (left to right)
Median     -0.06  -0.07  -0.08  -0.09  -0.09  -0.11  -0.12  -0.13  -0.14  -0.14  -0.15  -0.16  -0.17  -0.18
Mean       -0.04  -0.04  -0.05  -0.05  -0.06  -0.06  -0.06  -0.06  -0.07  -0.07  -0.07  -0.07  -0.07  -0.06
Std. dev.   1.03   1.03   1.03   1.03   1.04   1.04   1.04   1.04   1.04   1.05   1.05   1.06   1.06   1.07
Skewness    0.14   0.17   0.18   0.20   0.24   0.28   0.33   0.37   0.43   0.48   0.54   0.60   0.67   0.73
Kurtosis    3.10   3.07   3.07   3.08   3.09   3.13   3.18   3.24   3.32   3.43   3.56   3.71   3.87   4.04
N(0,1)     median 0.00, mean 0.00, std. dev. 1.00, skewness 0.00, kurtosis 3.00

Distribution of the BDS Statistic
ε/σ = 2.0 (c₁ ≈ 0.84), sample size n = 750

Quantiles, by embedding dimension m
(columns: 0.5%, 1.0%, 2.5%, 5.0%, 95.0%, 97.5%, 99.0%, 99.5%)
m=2      -2.66  -2.40  -2.03  -1.72   1.75   2.09   2.53   2.86
m=3      -2.64  -2.40  -2.03  -1.72   1.71   2.09   2.54   2.81
m=4      -2.61  -2.40  -2.05  -1.74   1.71   2.09   2.49   2.77
m=5      -2.65  -2.40  -2.04  -1.74   1.69   2.05   2.47   2.78
m=6      -2.58  -2.37  -2.03  -1.73   1.70   2.05   2.47   2.75
m=7      -2.57  -2.36  -2.03  -1.73   1.69   2.04   2.48   2.73
m=8      -2.55  -2.36  -2.03  -1.73   1.69   2.04   2.48   2.77
m=9      -2.55  -2.36  -2.04  -1.73   1.69   2.06   2.52   2.80
m=10     -2.53  -2.35  -2.04  -1.74   1.70   2.08   2.55   2.88
m=11     -2.51  -2.34  -2.02  -1.72   1.71   2.08   2.57   2.88
m=12     -2.51  -2.31  -2.00  -1.72   1.71   2.09   2.59   2.91
m=13     -2.50  -2.30  -1.99  -1.72   1.72   2.12   2.60   2.95
m=14     -2.49  -2.29  -1.97  -1.71   1.73   2.13   2.62   2.97
m=15     -2.46  -2.25  -1.97  -1.70   1.75   2.14   2.64   2.99
N(0,1)   -2.58  -2.33  -1.96  -1.65   1.65   1.96   2.33   2.58

Size (% of simulated statistics beyond each conventional critical value)
(columns: % < -2.576, % < -2.326, % < -1.960, % < -1.645, % > 1.645, % > 1.960, % > 2.326, % > 2.576)
m=2       0.6   1.2   3.0   5.9   6.0   3.3   1.6   0.9
m=3       0.6   1.2   2.9   6.1   5.7   3.3   1.5   0.9
m=4       0.6   1.2   3.1   5.9   5.7   3.2   1.5   0.8
m=5       0.6   1.2   3.0   6.2   5.5   3.1   1.3   0.8
m=6       0.5   1.1   3.0   6.0   5.5   3.1   1.3   0.8
m=7       0.5   1.1   2.9   6.0   5.5   3.0   1.4   0.8
m=8       0.5   1.1   3.1   6.0   5.5   3.0   1.5   0.8
m=9       0.4   1.1   3.1   6.0   5.5   3.1   1.5   0.9
m=10      0.4   1.1   3.0   6.2   5.4   3.1   1.6   0.9
m=11      0.4   1.1   2.9   6.0   5.7   3.1   1.6   1.0
m=12      0.4   1.0   2.8   5.9   5.7   3.2   1.6   1.0
m=13      0.4   0.9   2.7   5.9   5.7   3.3   1.7   1.1
m=14      0.3   0.9   2.6   5.8   5.8   3.3   1.8   1.1
m=15      0.3   0.8   2.5   5.8   5.8   3.4   1.9   1.2
N(0,1)    0.5   1.0   2.5   5.0   5.0   2.5   1.0   0.5

Moments, m = 2 to 15 (left to right)
Median     -0.07  -0.08  -0.09  -0.11  -0.11  -0.11  -0.12  -0.13  -0.14  -0.14  -0.14  -0.15  -0.16  -0.17
Mean       -0.05  -0.06  -0.07  -0.07  -0.08  -0.08  -0.08  -0.09  -0.09  -0.09  -0.09  -0.09  -0.10  -0.10
Std. dev.   1.06   1.05   1.05   1.05   1.04   1.04   1.04   1.04   1.04   1.05   1.05   1.05   1.05   1.05
Skewness    0.16   0.16   0.16   0.16   0.17   0.18   0.20   0.22   0.24   0.27   0.29   0.32   0.34   0.38
Kurtosis    3.09   3.06   3.04   3.04   3.01   3.02   3.03   3.06   3.09   3.10   3.12   3.15   3.17   3.20
N(0,1)     median 0.00, mean 0.00, std. dev. 1.00, skewness 0.00, kurtosis 3.00

Based on BDS test results of 25,000 normal random samples


Distribution of the BDS Statistic
ε/σ = 0.5 (c₁ ≈ 0.28), sample size n = 1,000

Quantiles, by embedding dimension m
(columns: 0.5%, 1.0%, 2.5%, 5.0%, 95.0%, 97.5%, 99.0%, 99.5%)
m=2      -2.66  -2.44  -2.08  -1.77    1.87    2.26    2.72    3.04
m=3      -2.72  -2.48  -2.11  -1.79    1.95    2.37    2.86    3.25
m=4      -2.83  -2.59  -2.24  -1.92    2.10    2.54    3.12    3.52
m=5      -3.17  -2.91  -2.50  -2.16    2.36    2.88    3.54    4.03
m=6      -3.82  -3.51  -3.01  -2.58    2.83    3.46    4.31    4.82
m=7      -4.92  -4.49  -3.90  -3.31    3.81    4.69    5.75    6.53
m=8      -6.63  -6.10  -5.30  -4.59    5.53    6.81    8.59    9.84
m=9      -8.03  -7.69  -7.04  -6.15    8.69   10.86   13.50   16.19
m=10     -7.33  -7.15  -6.85  -6.60   14.19   18.53   24.27   28.67
m=11     -6.36  -6.19  -5.97  -5.76   24.40   32.76   46.74   56.34
m=12     -5.50  -5.35  -5.16  -4.99   36.34   49.56   89.25  115.13
m=13     -4.79  -4.66  -4.49  -4.34   -3.02   -2.85  137.59  190.18
m=14     -4.21  -4.10  -3.94  -3.81   -2.66   -2.56   -2.44   -2.25
m=15     -3.73  -3.62  -3.49  -3.37   -2.33   -2.25   -2.16   -2.10
N(0,1)   -2.58  -2.33  -1.96  -1.65    1.65    1.96    2.33    2.58

Size (% of simulated statistics beyond each conventional critical value)
(columns: % < -2.576, % < -2.326, % < -1.960, % < -1.645, % > 1.645, % > 1.960, % > 2.326, % > 2.576)
m=2       0.7   1.4   3.2   6.4   7.0   4.3   2.2   1.4
m=3       0.8   1.4   3.5   6.9   7.9   4.9   2.7   1.8
m=4       1.1   2.1   4.6   8.2   9.2   6.0   3.5   2.3
m=5       2.2   3.6   6.9  11.2  11.7   8.0   5.2   3.8
m=6       5.0   7.2  11.3  16.1  15.6  11.9   8.4   6.6
m=7      10.7  13.6  18.4  23.1  21.7  18.0  14.3  12.2
m=8      20.0  22.7  27.3  31.3  27.6  24.5  21.6  19.5
m=9      30.0  34.6  39.5  41.0  31.5  29.3  26.8  25.3
m=10     40.3  40.3  40.8  43.4  32.9  32.2  31.1  30.1
m=11     77.3  77.3  77.3  77.3  22.7  22.7  22.7  22.7
m=12     93.2  93.2  93.2  93.2   6.9   6.9   6.9   6.9
m=13     98.2  98.2  98.2  98.2   1.8   1.8   1.8   1.8
m=14     97.2  99.4  99.5  99.5   0.5   0.5   0.5   0.5
m=15     77.7  95.3  99.8  99.9   0.2   0.2   0.2   0.2
N(0,1)    0.5   1.0   2.5   5.0   5.0   2.5   1.0   0.5

Moments, m = 2 to 15 (left to right)
Median     -0.06  -0.08  -0.10  -0.10  -0.11  -0.15  -0.28  -0.65  -1.33  -4.78  -4.19  -3.65  -3.18  -2.80
Mean       -0.03  -0.03  -0.03  -0.02  -0.02   0.00   0.03   0.04   0.04   0.05  -0.04  -0.28  -0.22   0.19
Std. dev.   1.10   1.14   1.22   1.38   1.66   2.19   3.11   4.64   7.18  11.41  18.32  29.56  50.23  86.95
Skewness    0.22   0.28   0.32   0.34   0.34   0.43   0.57   0.94   1.71   3.18   6.08  11.87  21.60  35.92
Kurtosis    3.11   3.11   3.16   3.26   3.31   3.48   3.76   4.52   7.32  17.02  52.10  181.9  563.8   1565
N(0,1)     median 0.00, mean 0.00, std. dev. 1.00, skewness 0.00, kurtosis 3.00

Distribution of the BDS Statistic
ε/σ = 1.0 (c₁ ≈ 0.52), sample size n = 1,000

Quantiles, by embedding dimension m
(columns: 0.5%, 1.0%, 2.5%, 5.0%, 95.0%, 97.5%, 99.0%, 99.5%)
m=2      -2.52  -2.28  -1.96  -1.68   1.68   2.03   2.43   2.74
m=3      -2.53  -2.29  -1.96  -1.67   1.70   2.07   2.50   2.78
m=4      -2.47  -2.26  -1.93  -1.66   1.72   2.11   2.56   2.86
m=5      -2.45  -2.22  -1.94  -1.65   1.75   2.15   2.57   2.89
m=6      -2.43  -2.20  -1.92  -1.65   1.78   2.19   2.66   2.99
m=7      -2.42  -2.22  -1.90  -1.65   1.82   2.25   2.73   3.13
m=8      -2.40  -2.21  -1.92  -1.65   1.88   2.31   2.82   3.25
m=9      -2.40  -2.21  -1.93  -1.68   1.95   2.40   3.00   3.39
m=10     -2.44  -2.25  -1.97  -1.71   2.04   2.54   3.15   3.64
m=11     -2.52  -2.34  -2.06  -1.80   2.12   2.66   3.35   3.84
m=12     -2.65  -2.48  -2.15  -1.88   2.28   2.87   3.62   4.25
m=13     -2.87  -2.63  -2.31  -2.00   2.50   3.13   3.94   4.63
m=14     -3.07  -2.86  -2.51  -2.20   2.79   3.52   4.44   5.21
m=15     -3.33  -3.11  -2.77  -2.44   3.19   4.07   5.20   5.94
N(0,1)   -2.58  -2.33  -1.96  -1.65   1.65   1.96   2.33   2.58

Size (% of simulated statistics beyond each conventional critical value)
(columns: % < -2.576, % < -2.326, % < -1.960, % < -1.645, % > 1.645, % > 1.960, % > 2.326, % > 2.576)
m=2       0.4   0.9   2.5   5.4   5.3   3.0   1.3   0.7
m=3       0.4   0.9   2.5   5.3   5.5   3.1   1.4   0.8
m=4       0.3   0.8   2.4   5.2   5.8   3.2   1.7   1.0
m=5       0.4   0.7   2.4   5.1   6.0   3.6   1.8   1.0
m=6       0.3   0.7   2.2   5.0   6.1   3.8   1.9   1.2
m=7       0.3   0.7   2.1   5.0   6.6   4.1   2.2   1.4
m=8       0.3   0.7   2.2   5.2   7.0   4.5   2.4   1.6
m=9       0.3   0.7   2.3   5.5   7.6   4.9   2.8   1.9
m=10      0.3   0.8   2.5   5.8   8.4   5.7   3.4   2.4
m=11      0.4   1.1   3.4   7.0   8.8   6.1   3.9   2.8
m=12      0.7   1.5   4.2   8.2  10.0   7.1   4.7   3.6
m=13      1.2   2.4   5.5  10.2  11.7   8.6   5.9   4.6
m=14      2.2   3.8   7.8  13.0  13.7  10.4   7.4   6.0
m=15      3.8   6.1  11.1  16.6  16.0  12.8   9.8   8.1
N(0,1)    0.5   1.0   2.5   5.0   5.0   2.5   1.0   0.5

Moments, m = 2 to 15 (left to right)
Median     -0.05  -0.08  -0.09  -0.10  -0.11  -0.12  -0.13  -0.14  -0.14  -0.16  -0.17  -0.18  -0.20  -0.25
Mean       -0.03  -0.04  -0.04  -0.05  -0.05  -0.04  -0.04  -0.04  -0.03  -0.03  -0.03  -0.02  -0.02  -0.01
Std. dev.   1.02   1.03   1.03   1.04   1.05   1.06   1.08   1.11   1.15   1.21   1.29   1.40   1.55   1.76
Skewness    0.13   0.17   0.24   0.30   0.36   0.42   0.48   0.54   0.61   0.65   0.70   0.75   0.80   0.90
Kurtosis    3.02   3.03   3.10   3.15   3.22   3.31   3.40   3.51   3.68   3.93   4.07   4.20   4.35   4.61
N(0,1)     median 0.00, mean 0.00, std. dev. 1.00, skewness 0.00, kurtosis 3.00

Based on BDS test results of 25,000 normal random samples


Distribution of the BDS Statistic
ε/σ = 1.5 (c₁ ≈ 0.71), sample size n = 1,000

Quantiles, by embedding dimension m
(columns: 0.5%, 1.0%, 2.5%, 5.0%, 95.0%, 97.5%, 99.0%, 99.5%)
m=2      -2.53  -2.33  -2.00  -1.69   1.68   2.01   2.43   2.70
m=3      -2.54  -2.34  -1.99  -1.71   1.69   2.02   2.46   2.74
m=4      -2.50  -2.30  -2.01  -1.71   1.68   2.03   2.45   2.78
m=5      -2.50  -2.28  -1.99  -1.71   1.68   2.04   2.48   2.81
m=6      -2.47  -2.27  -1.97  -1.71   1.70   2.06   2.49   2.82
m=7      -2.46  -2.24  -1.95  -1.67   1.71   2.08   2.53   2.87
m=8      -2.44  -2.22  -1.94  -1.67   1.73   2.12   2.55   2.89
m=9      -2.41  -2.21  -1.92  -1.66   1.73   2.16   2.58   2.89
m=10     -2.38  -2.19  -1.92  -1.66   1.74   2.16   2.65   2.93
m=11     -2.34  -2.17  -1.91  -1.65   1.75   2.19   2.69   3.01
m=12     -2.31  -2.15  -1.89  -1.63   1.77   2.22   2.73   3.07
m=13     -2.27  -2.12  -1.87  -1.63   1.79   2.25   2.78   3.16
m=14     -2.26  -2.09  -1.86  -1.62   1.81   2.27   2.84   3.23
m=15     -2.23  -2.07  -1.83  -1.61   1.83   2.31   2.90   3.30
N(0,1)   -2.58  -2.33  -1.96  -1.65   1.65   1.96   2.33   2.58

Size (% of simulated statistics beyond each conventional critical value)
(columns: % < -2.576, % < -2.326, % < -1.960, % < -1.645, % > 1.645, % > 1.960, % > 2.326, % > 2.576)
m=2       0.4   1.0   2.7   5.5   5.3   2.9   1.3   0.7
m=3       0.5   1.0   2.7   5.7   5.5   2.8   1.3   0.7
m=4       0.4   0.9   2.8   5.7   5.3   2.9   1.3   0.8
m=5       0.4   0.9   2.7   5.8   5.3   3.0   1.4   0.8
m=6       0.4   0.8   2.6   5.7   5.5   3.0   1.5   0.8
m=7       0.3   0.8   2.5   5.4   5.6   3.2   1.6   0.9
m=8       0.3   0.7   2.3   5.3   5.7   3.3   1.6   1.0
m=9       0.2   0.7   2.2   5.1   5.8   3.4   1.8   1.0
m=10      0.2   0.6   2.2   5.2   5.8   3.5   1.9   1.2
m=11      0.2   0.5   2.1   5.1   5.8   3.6   2.0   1.3
m=12      0.2   0.5   2.0   4.9   6.0   3.7   2.1   1.4
m=13      0.1   0.4   1.8   4.9   6.2   3.8   2.2   1.5
m=14      0.1   0.4   1.8   4.6   6.4   4.0   2.2   1.6
m=15      0.1   0.3   1.6   4.5   6.6   4.1   2.4   1.7
N(0,1)    0.5   1.0   2.5   5.0   5.0   2.5   1.0   0.5

Moments, m = 2 to 15 (left to right)
Median     -0.05  -0.07  -0.08  -0.08  -0.09  -0.10  -0.11  -0.12  -0.12  -0.13  -0.14  -0.15  -0.16  -0.16
Mean       -0.03  -0.04  -0.05  -0.06  -0.06  -0.06  -0.06  -0.06  -0.06  -0.06  -0.06  -0.06  -0.06  -0.06
Std. dev.   1.02   1.03   1.03   1.03   1.03   1.03   1.03   1.04   1.04   1.04   1.04   1.05   1.05   1.06
Skewness    0.11   0.14   0.16   0.18   0.22   0.26   0.29   0.33   0.37   0.41   0.45   0.50   0.56   0.62
Kurtosis    2.99   2.99   3.02   3.05   3.08   3.09   3.12   3.16   3.21   3.29   3.35   3.45   3.56   3.70
N(0,1)     median 0.00, mean 0.00, std. dev. 1.00, skewness 0.00, kurtosis 3.00

Distribution of the BDS Statistic
ε/σ = 2.0 (c₁ ≈ 0.84), sample size n = 1,000

Quantiles, by embedding dimension m
(columns: 0.5%, 1.0%, 2.5%, 5.0%, 95.0%, 97.5%, 99.0%, 99.5%)
m=2      -2.70  -2.40  -2.05  -1.73   1.70   2.04   2.45   2.77
m=3      -2.65  -2.41  -2.04  -1.72   1.68   2.04   2.47   2.76
m=4      -2.62  -2.39  -2.04  -1.74   1.69   2.04   2.45   2.80
m=5      -2.59  -2.37  -2.05  -1.74   1.69   2.05   2.47   2.79
m=6      -2.58  -2.37  -2.04  -1.73   1.69   2.04   2.46   2.77
m=7      -2.57  -2.37  -2.04  -1.74   1.69   2.04   2.47   2.76
m=8      -2.58  -2.35  -2.05  -1.74   1.69   2.06   2.51   2.80
m=9      -2.60  -2.36  -2.04  -1.74   1.69   2.07   2.51   2.82
m=10     -2.56  -2.35  -2.03  -1.73   1.69   2.07   2.50   2.83
m=11     -2.53  -2.33  -2.02  -1.72   1.70   2.07   2.52   2.85
m=12     -2.51  -2.30  -2.00  -1.73   1.71   2.08   2.52   2.88
m=13     -2.49  -2.28  -2.00  -1.73   1.72   2.08   2.55   2.87
m=14     -2.47  -2.26  -1.98  -1.71   1.73   2.09   2.57   2.90
m=15     -2.46  -2.25  -1.96  -1.71   1.73   2.11   2.58   2.93
N(0,1)   -2.58  -2.33  -1.96  -1.65   1.65   1.96   2.33   2.58

Size (% of simulated statistics beyond each conventional critical value)
(columns: % < -2.576, % < -2.326, % < -1.960, % < -1.645, % > 1.645, % > 1.960, % > 2.326, % > 2.576)
m=2       0.7   1.3   3.1   6.0   5.5   2.9   1.4   0.8
m=3       0.6   1.3   3.0   5.9   5.4   2.9   1.4   0.8
m=4       0.6   1.2   3.0   6.1   5.5   3.0   1.4   0.8
m=5       0.5   1.1   3.1   6.1   5.5   2.9   1.4   0.8
m=6       0.5   1.1   3.1   6.0   5.4   2.9   1.4   0.8
m=7       0.5   1.2   3.0   6.1   5.5   3.0   1.4   0.8
m=8       0.5   1.0   3.1   6.1   5.4   3.0   1.4   0.9
m=9       0.5   1.1   3.0   6.1   5.4   3.1   1.4   0.9
m=10      0.5   1.0   2.9   6.0   5.5   3.1   1.5   0.9
m=11      0.4   1.0   2.9   6.0   5.5   3.1   1.5   0.9
m=12      0.4   0.9   2.8   6.0   5.6   3.2   1.5   0.9
m=13      0.4   0.8   2.8   5.9   5.7   3.2   1.6   1.0
m=14      0.4   0.8   2.6   5.9   5.6   3.2   1.7   1.0
m=15      0.3   0.8   2.5   5.8   5.7   3.3   1.7   1.0
N(0,1)    0.5   1.0   2.5   5.0   5.0   2.5   1.0   0.5

Moments, m = 2 to 15 (left to right)
Median     -0.07  -0.08  -0.08  -0.09  -0.10  -0.10  -0.10  -0.11  -0.11  -0.12  -0.13  -0.14  -0.15  -0.15
Mean       -0.05  -0.06  -0.06  -0.06  -0.07  -0.07  -0.07  -0.08  -0.08  -0.09  -0.09  -0.09  -0.09  -0.09
Std. dev.   1.04   1.04   1.04   1.05   1.04   1.04   1.05   1.04   1.04   1.04   1.04   1.04   1.04   1.04
Skewness    0.11   0.12   0.14   0.15   0.16   0.17   0.18   0.19   0.21   0.23   0.26   0.28   0.31   0.33
Kurtosis    3.14   3.13   3.08   3.05   3.04   3.05   3.04   3.05   3.06   3.08   3.11   3.13   3.16   3.18
N(0,1)     median 0.00, mean 0.00, std. dev. 1.00, skewness 0.00, kurtosis 3.00

Based on BDS test results of 25,000 normal random samples


Distribution of the BDS Statistic
ε/σ = 0.5 (c₁ ≈ 0.28), sample size n = 2,500

Quantiles, by embedding dimension m
(columns: 0.5%, 1.0%, 2.5%, 5.0%, 95.0%, 97.5%, 99.0%, 99.5%)
m=2      -2.51  -2.31  -1.97  -1.69    1.73    2.07    2.45    2.78
m=3      -2.53  -2.33  -1.99  -1.70    1.75    2.11    2.54    2.85
m=4      -2.59  -2.37  -2.06  -1.74    1.79    2.21    2.69    3.02
m=5      -2.74  -2.55  -2.16  -1.85    1.94    2.36    2.93    3.26
m=6      -3.00  -2.77  -2.39  -2.04    2.18    2.68    3.22    3.60
m=7      -3.67  -3.35  -2.89  -2.46    2.66    3.19    3.93    4.36
m=8      -4.78  -4.41  -3.82  -3.21    3.53    4.27    5.28    5.93
m=9      -6.51  -6.00  -5.23  -4.52    5.18    6.32    7.70    8.69
m=10     -8.74  -8.28  -7.22  -6.35    8.20   10.24   12.65   14.35
m=11     -8.85  -8.68  -8.41  -8.19   13.75   17.79   22.49   26.23
m=12     -7.73  -7.63  -7.45  -7.29   24.50   31.87   43.30   52.23
m=13     -6.77  -6.66  -6.51  -6.38   32.59   60.12   83.54  105.70
m=14     -5.93  -5.84  -5.71  -5.60   -4.41   95.19  131.83  216.13
m=15     -5.24  -5.16  -5.05  -4.93   -3.95   -3.85   -3.66  339.65
N(0,1)   -2.58  -2.33  -1.96  -1.65    1.65    1.96    2.33    2.58

Size (% of simulated statistics beyond each conventional critical value)
(columns: % < -2.576, % < -2.326, % < -1.960, % < -1.645, % > 1.645, % > 1.960, % > 2.326, % > 2.576)
m=2       0.4   0.9   2.5   5.6   5.8   3.1   1.4   0.8
m=3       0.4   1.0   2.7   5.7   6.0   3.4   1.6   0.9
m=4       0.5   1.1   3.1   6.3   6.5   3.8   2.0   1.2
m=5       0.9   1.7   3.9   7.3   7.7   4.9   2.7   1.8
m=6       1.7   2.9   5.8   9.8  10.0   6.7   4.0   2.9
m=7       4.2   6.1  10.0  14.6  14.2  10.5   7.2   5.5
m=8       9.9  12.3  16.6  21.3  21.0  17.1  13.2  10.9
m=9      19.4  22.0  26.6  30.2  27.3  24.3  21.1  18.9
m=10     29.9  32.8  37.0  39.6  32.1  30.1  27.4  25.9
m=11     44.0  44.0  44.0  44.2  35.1  33.9  31.9  30.3
m=12     64.5  64.5  64.5  64.5  35.5  35.4  35.1  34.6
m=13     88.6  88.6  88.6  88.6  11.4  11.4  11.4  11.4
m=14     96.7  96.7  96.7  96.7   3.3   3.3   3.3   3.3
m=15     99.1  99.1  99.1  99.1   0.9   0.9   0.9   0.9
N(0,1)    0.5   1.0   2.5   5.0   5.0   2.5   1.0   0.5

Moments, m = 2 to 15 (left to right)
Median     -0.04  -0.05  -0.05  -0.06  -0.07  -0.07  -0.10  -0.22  -0.52  -0.96  -6.38  -5.71  -5.02  -4.42
Mean       -0.02  -0.02  -0.02  -0.02  -0.02  -0.01   0.01   0.01   0.01  -0.01  -0.03  -0.07  -0.12  -0.22
Std. dev.   1.03   1.05   1.09   1.15   1.29   1.56   2.06   2.96   4.52   7.13  11.54  18.80  30.91  51.28
Skewness    0.14   0.17   0.21   0.25   0.26   0.24   0.27   0.41   0.71   1.31   2.46   4.69   8.49  15.41
Kurtosis    3.02   3.03   3.10   3.15   3.16   3.15   3.18   3.33   4.03   6.05  12.21  34.43  95.10  284.44
N(0,1)     median 0.00, mean 0.00, std. dev. 1.00, skewness 0.00, kurtosis 3.00

Distribution of the BDS Statistic

ε/σ = 1.0 (c₁ ≈ 0.52), sample size n = 2,500

Quantiles, by embedding dimension m
(columns: 0.5%, 1.0%, 2.5%, 5.0%, 95.0%, 97.5%, 99.0%, 99.5%)
m=2      -2.58  -2.35  -1.97  -1.66   1.67   2.02   2.47   2.78
m=3      -2.52  -2.27  -1.94  -1.66   1.70   2.06   2.50   2.77
m=4      -2.47  -2.26  -1.91  -1.64   1.69   2.08   2.53   2.81
m=5      -2.46  -2.24  -1.91  -1.63   1.71   2.10   2.55   2.88
m=6      -2.42  -2.22  -1.90  -1.63   1.75   2.12   2.54   2.89
m=7      -2.40  -2.20  -1.91  -1.64   1.76   2.16   2.57   2.91
m=8      -2.40  -2.20  -1.90  -1.64   1.78   2.18   2.66   3.04
m=9      -2.38  -2.22  -1.89  -1.63   1.79   2.20   2.74   3.13
m=10     -2.42  -2.22  -1.90  -1.62   1.83   2.24   2.83   3.23
m=11     -2.44  -2.22  -1.92  -1.65   1.88   2.31   2.88   3.25
m=12     -2.47  -2.25  -1.97  -1.68   1.95   2.39   3.02   3.43
m=13     -2.50  -2.31  -2.02  -1.74   2.07   2.53   3.17   3.68
m=14     -2.61  -2.41  -2.11  -1.83   2.21   2.72   3.35   3.99
m=15     -2.81  -2.63  -2.29  -1.99   2.39   2.97   3.77   4.31
N(0,1)   -2.58  -2.33  -1.96  -1.65   1.65   1.96   2.33   2.58

Size (% of simulated statistics beyond each conventional critical value)
(columns: % < -2.576, % < -2.326, % < -1.960, % < -1.645, % > 1.645, % > 1.960, % > 2.326, % > 2.576)
m=2       0.5   1.1   2.6   5.2   5.2   2.8   1.4   0.8
m=3       0.4   0.8   2.4   5.2   5.5   3.0   1.5   0.9
m=4       0.3   0.8   2.2   5.0   5.5   3.2   1.6   0.9
m=5       0.3   0.8   2.2   4.9   5.6   3.3   1.6   1.0
m=6       0.3   0.7   2.2   4.9   5.9   3.5   1.7   1.0
m=7       0.3   0.7   2.1   4.9   6.1   3.6   1.8   1.0
m=8       0.2   0.7   2.1   4.9   6.2   3.7   1.8   1.2
m=9       0.2   0.7   2.1   4.8   6.4   3.8   2.0   1.3
m=10      0.2   0.7   2.1   4.8   6.7   4.0   2.2   1.5
m=11      0.3   0.8   2.3   5.1   7.2   4.4   2.4   1.6
m=12      0.3   0.8   2.5   5.4   7.7   5.0   2.8   1.9
m=13      0.4   1.0   3.0   6.1   8.5   5.7   3.4   2.3
m=14      0.6   1.4   3.7   7.5   9.7   6.7   4.3   3.0
m=15      1.2   2.3   5.3   9.5  11.3   8.2   5.4   4.0
N(0,1)    0.5   1.0   2.5   5.0   5.0   2.5   1.0   0.5

Moments, m = 2 to 15 (left to right)
Median     -0.04  -0.05  -0.05  -0.07  -0.07  -0.08  -0.10  -0.09  -0.09  -0.11  -0.13  -0.13  -0.14  -0.13
Mean       -0.03  -0.02  -0.03  -0.03  -0.03  -0.03  -0.03  -0.02  -0.02  -0.03  -0.02  -0.02  -0.01  -0.01
Std. dev.   1.02   1.02   1.02   1.02   1.03   1.03   1.04   1.05   1.07   1.09   1.12   1.17   1.24   1.35
Skewness    0.10   0.15   0.19   0.24   0.28   0.32   0.36   0.40   0.44   0.47   0.53   0.58   0.62   0.65
Kurtosis    3.08   3.04   3.06   3.12   3.19   3.24   3.30   3.37   3.47   3.48   3.61   3.75   3.87   4.03
N(0,1)     median 0.00, mean 0.00, std. dev. 1.00, skewness 0.00, kurtosis 3.00

Based on BDS test results of 16,350 normal random samples

BDS Estimation: Appendix B (Tables)

B17

Distribution of the BDS Statistic
ε/σ = 1.5 (c₁ ≈ 0.71), sample size n = 2,500

Quantile     0.5%    1.0%    2.5%    5.0%   95.0%   97.5%   99.0%   99.5%
m =  2      -2.55   -2.34   -1.98   -1.65    1.60    1.95    2.36    2.59
m =  3      -2.54   -2.32   -1.96   -1.66    1.60    1.91    2.34    2.60
m =  4      -2.53   -2.31   -1.95   -1.66    1.61    1.92    2.34    2.56
m =  5      -2.49   -2.29   -1.94   -1.64    1.63    1.93    2.34    2.61
m =  6      -2.50   -2.27   -1.94   -1.64    1.64    1.95    2.32    2.58
m =  7      -2.45   -2.23   -1.92   -1.63    1.64    1.98    2.35    2.59
m =  8      -2.43   -2.21   -1.92   -1.64    1.65    2.00    2.36    2.62
m =  9      -2.42   -2.22   -1.92   -1.64    1.67    2.01    2.38    2.67
m = 10      -2.40   -2.20   -1.90   -1.63    1.67    2.02    2.38    2.73
m = 11      -2.40   -2.18   -1.89   -1.63    1.68    2.04    2.42    2.74
m = 12      -2.39   -2.16   -1.88   -1.63    1.69    2.07    2.47    2.77
m = 13      -2.36   -2.16   -1.88   -1.62    1.71    2.11    2.54    2.80
m = 14      -2.36   -2.15   -1.86   -1.61    1.72    2.14    2.58    2.84
m = 15      -2.32   -2.13   -1.84   -1.59    1.75    2.17    2.63    2.89
N(0,1)      -2.58   -2.33   -1.96   -1.65    1.65    1.96    2.33    2.58

Size     %<-2.576 %<-2.326 %<-1.960 %<-1.645  %>1.645  %>1.960  %>2.326  %>2.576
m =  2      0.5      1.1      2.6      5.1      4.5      2.4      1.1      0.5
m =  3      0.5      1.0      2.5      5.2      4.5      2.3      1.0      0.5
m =  4      0.4      1.0      2.5      5.1      4.6      2.3      1.0      0.5
m =  5      0.4      0.9      2.4      5.0      4.8      2.4      1.0      0.6
m =  6      0.3      0.8      2.3      4.9      4.9      2.5      1.0      0.5
m =  7      0.3      0.8      2.2      4.9      5.0      2.6      1.1      0.5
m =  8      0.3      0.7      2.2      4.9      5.0      2.8      1.1      0.6
m =  9      0.3      0.7      2.3      4.9      5.3      2.8      1.2      0.6
m = 10      0.3      0.6      2.1      4.8      5.2      2.9      1.2      0.7
m = 11      0.3      0.6      2.0      4.9      5.4      3.0      1.3      0.7
m = 12      0.2      0.6      2.0      4.9      5.5      3.1      1.5      0.7
m = 13      0.2      0.6      1.9      4.6      5.6      3.2      1.6      0.9
m = 14      0.2      0.5      1.8      4.5      5.6      3.4      1.8      1.0
m = 15      0.2      0.5      1.8      4.3      5.9      3.6      1.9      1.1
Nominal     0.5      1.0      2.5      5.0      5.0      2.5      1.0      0.5

             m=2    m=3    m=4    m=5    m=6    m=7    m=8    m=9   m=10   m=11   m=12   m=13   m=14   m=15   N(0,1)
Median     -0.06  -0.07  -0.07  -0.08  -0.08  -0.08  -0.08  -0.08  -0.09  -0.09  -0.10  -0.10  -0.11  -0.11    0.00
Mean       -0.05  -0.05  -0.05  -0.05  -0.05  -0.05  -0.05  -0.05  -0.05  -0.05  -0.05  -0.05  -0.05  -0.05    0.00
Std. dev.   1.00   1.00   1.00   1.00   1.00   1.00   1.01   1.01   1.01   1.01   1.01   1.01   1.01   1.02    1.00
Skewness    0.07   0.09   0.09   0.11   0.13   0.15   0.17   0.19   0.21   0.24   0.27   0.31   0.35   0.39    0.00
Kurtosis    3.06   3.02   2.95   2.95   2.94   2.92   2.93   2.97   2.99   3.02   3.06   3.11   3.17   3.22    3.00

Distribution of the BDS Statistic
ε/σ = 2.0 (c₁ ≈ 0.84), sample size n = 2,500

Quantile     0.5%    1.0%    2.5%    5.0%   95.0%   97.5%   99.0%   99.5%
m =  2      -2.56   -2.33   -1.96   -1.67    1.69    2.06    2.47    2.72
m =  3      -2.57   -2.33   -2.00   -1.70    1.70    2.08    2.44    2.71
m =  4      -2.55   -2.32   -2.00   -1.70    1.69    2.03    2.45    2.76
m =  5      -2.54   -2.30   -1.99   -1.70    1.68    2.01    2.45    2.74
m =  6      -2.56   -2.32   -1.98   -1.70    1.68    2.03    2.44    2.74
m =  7      -2.56   -2.33   -1.98   -1.70    1.69    2.03    2.42    2.69
m =  8      -2.54   -2.31   -1.98   -1.71    1.69    2.04    2.45    2.70
m =  9      -2.53   -2.29   -1.98   -1.71    1.70    2.07    2.42    2.71
m = 10      -2.51   -2.28   -1.96   -1.71    1.71    2.07    2.44    2.70
m = 11      -2.51   -2.25   -1.96   -1.69    1.71    2.07    2.45    2.72
m = 12      -2.50   -2.26   -1.95   -1.68    1.71    2.06    2.45    2.76
m = 13      -2.47   -2.26   -1.94   -1.67    1.70    2.07    2.47    2.81
m = 14      -2.46   -2.24   -1.92   -1.66    1.71    2.07    2.51    2.78
m = 15      -2.44   -2.24   -1.91   -1.65    1.72    2.07    2.49    2.79
N(0,1)      -2.58   -2.33   -1.96   -1.65    1.65    1.96    2.33    2.58

Size     %<-2.576 %<-2.326 %<-1.960 %<-1.645  %>1.645  %>1.960  %>2.326  %>2.576
m =  2      0.5      1.0      2.5      5.2      5.4      2.9      1.4      0.7
m =  3      0.5      1.0      2.8      5.6      5.5      3.1      1.4      0.7
m =  4      0.5      1.0      2.7      5.5      5.5      2.9      1.4      0.8
m =  5      0.5      1.0      2.8      5.7      5.4      2.8      1.4      0.7
m =  6      0.5      1.0      2.7      5.7      5.3      2.9      1.3      0.7
m =  7      0.5      1.0      2.6      5.6      5.4      2.9      1.3      0.7
m =  8      0.5      0.9      2.6      5.7      5.4      3.0      1.3      0.7
m =  9      0.4      0.9      2.6      5.7      5.6      3.0      1.3      0.7
m = 10      0.4      0.9      2.5      5.7      5.6      3.1      1.4      0.7
m = 11      0.4      0.8      2.5      5.6      5.7      3.0      1.4      0.7
m = 12      0.4      0.8      2.5      5.5      5.6      3.1      1.4      0.8
m = 13      0.3      0.8      2.4      5.3      5.6      3.2      1.4      0.8
m = 14      0.3      0.8      2.3      5.1      5.6      3.2      1.4      0.8
m = 15      0.3      0.8      2.2      5.1      5.7      3.2      1.5      0.8
Nominal     0.5      1.0      2.5      5.0      5.0      2.5      1.0      0.5

             m=2    m=3    m=4    m=5    m=6    m=7    m=8    m=9   m=10   m=11   m=12   m=13   m=14   m=15   N(0,1)
Median     -0.02  -0.04  -0.04  -0.05  -0.06  -0.06  -0.07  -0.07  -0.07  -0.07  -0.07  -0.07  -0.07  -0.07    0.00
Mean       -0.01  -0.02  -0.03  -0.03  -0.04  -0.04  -0.04  -0.04  -0.04  -0.04  -0.04  -0.04  -0.04  -0.04    0.00
Std. dev.   1.02   1.03   1.03   1.03   1.03   1.03   1.03   1.03   1.03   1.03   1.03   1.03   1.03   1.03    1.00
Skewness    0.11   0.11   0.12   0.12   0.12   0.13   0.14   0.15   0.16   0.17   0.18   0.20   0.21   0.22    0.00
Kurtosis    3.09   3.06   3.02   3.02   3.01   3.01   3.02   3.01   3.00   3.00   3.01   3.02   3.02   3.03    3.00

Based on BDS test results of 16,350 normal random samples

Distribution of the BDS Statistic
ε/σ = 1.0 (c₁ ≈ 0.52), sample size n = 10,000

Quantile     0.5%    1.0%    2.5%    5.0%   95.0%   97.5%   99.0%   99.5%
m =  2      -2.52   -2.27   -1.97   -1.65    1.69    2.03    2.40    2.65
m =  3      -2.51   -2.30   -1.94   -1.64    1.68    2.02    2.38    2.69
m =  4      -2.47   -2.24   -1.92   -1.64    1.66    2.01    2.46    2.70
m =  5      -2.51   -2.22   -1.88   -1.63    1.66    2.03    2.51    2.71
m =  6      -2.48   -2.15   -1.88   -1.61    1.70    2.04    2.50    2.78
m =  7      -2.43   -2.16   -1.87   -1.62    1.70    2.07    2.49    2.81
m =  8      -2.47   -2.17   -1.88   -1.61    1.72    2.07    2.50    2.76
m =  9      -2.42   -2.17   -1.88   -1.59    1.73    2.05    2.50    2.80
m = 10      -2.44   -2.19   -1.86   -1.61    1.72    2.09    2.52    2.77
m = 11      -2.40   -2.18   -1.87   -1.60    1.74    2.11    2.56    2.80
m = 12      -2.44   -2.20   -1.88   -1.61    1.76    2.15    2.64    2.86
m = 13      -2.41   -2.21   -1.87   -1.63    1.78    2.21    2.68    2.97
m = 14      -2.49   -2.27   -1.92   -1.63    1.80    2.24    2.80    3.07
m = 15      -2.55   -2.30   -1.95   -1.66    1.88    2.33    2.75    3.15
N(0,1)      -2.58   -2.33   -1.96   -1.65    1.65    1.96    2.33    2.58

Size     %<-2.576 %<-2.326 %<-1.960 %<-1.645  %>1.645  %>1.960  %>2.326  %>2.576
m =  2      0.4      0.9      2.5      5.0      5.4      2.9      1.2      0.7
m =  3      0.4      0.9      2.4      4.8      5.4      3.0      1.2      0.6
m =  4      0.4      0.7      2.3      4.9      5.1      2.8      1.4      0.7
m =  5      0.4      0.7      2.1      4.8      5.2      3.0      1.4      0.9
m =  6      0.4      0.6      2.1      4.7      5.4      3.0      1.5      0.8
m =  7      0.4      0.6      2.1      4.7      5.6      3.2      1.5      0.8
m =  8      0.4      0.6      1.9      4.7      5.7      3.1      1.5      0.9
m =  9      0.4      0.6      2.0      4.4      5.8      3.2      1.5      0.8
m = 10      0.4      0.7      1.8      4.6      5.8      3.3      1.6      0.9
m = 11      0.3      0.7      1.9      4.4      6.2      3.4      1.7      1.0
m = 12      0.3      0.6      2.0      4.5      6.3      3.5      1.8      1.1
m = 13      0.3      0.7      2.0      4.7      6.2      3.7      2.0      1.2
m = 14      0.4      0.8      2.4      4.8      6.4      4.1      2.1      1.3
m = 15      0.4      0.9      2.4      5.2      7.0      4.5      2.6      1.6
Nominal     0.5      1.0      2.5      5.0      5.0      2.5      1.0      0.5

             m=2    m=3    m=4    m=5    m=6    m=7    m=8    m=9   m=10   m=11   m=12   m=13   m=14   m=15   N(0,1)
Median     -0.02  -0.04  -0.04  -0.05  -0.06  -0.07  -0.07  -0.08  -0.08  -0.07  -0.08  -0.08  -0.08  -0.09    0.00
Mean        0.00  -0.01  -0.02  -0.02  -0.02  -0.03  -0.03  -0.03  -0.03  -0.03  -0.03  -0.02  -0.02  -0.02    0.00
Std. dev.   1.01   1.01   1.00   1.00   1.01   1.01   1.01   1.01   1.01   1.02   1.02   1.04   1.05   1.08    1.00
Skewness    0.08   0.11   0.14   0.18   0.21   0.24   0.25   0.27   0.29   0.31   0.33   0.34   0.35   0.37    0.00
Kurtosis    3.13   3.11   3.11   3.16   3.17   3.18   3.18   3.18   3.19   3.23   3.28   3.30   3.29   3.27    3.00

Distribution of the BDS Statistic
ε/σ = 1.0 (c₁ ≈ 0.52), sample size n = 15,000

Quantile     0.5%    1.0%    2.5%    5.0%   95.0%   97.5%   99.0%   99.5%
m =  2      -2.57   -2.36   -2.02   -1.72    1.69    2.04    2.35    2.56
m =  3      -2.61   -2.38   -2.01   -1.69    1.67    2.00    2.44    2.63
m =  4      -2.59   -2.34   -1.99   -1.70    1.71    2.00    2.38    2.71
m =  5      -2.60   -2.27   -1.96   -1.67    1.69    1.99    2.41    2.71
m =  6      -2.61   -2.29   -1.96   -1.66    1.70    2.00    2.40    2.63
m =  7      -2.60   -2.31   -1.95   -1.66    1.71    2.02    2.43    2.67
m =  8      -2.55   -2.34   -1.97   -1.64    1.70    2.06    2.49    2.67
m =  9      -2.52   -2.32   -1.96   -1.61    1.70    2.08    2.48    2.74
m = 10      -2.49   -2.28   -1.94   -1.62    1.71    2.10    2.48    2.87
m = 11      -2.45   -2.24   -1.91   -1.64    1.74    2.07    2.50    2.79
m = 12      -2.42   -2.23   -1.88   -1.63    1.75    2.10    2.53    2.81
m = 13      -2.42   -2.23   -1.92   -1.62    1.76    2.13    2.54    2.82
m = 14      -2.46   -2.26   -1.92   -1.64    1.79    2.17    2.62    2.92
m = 15      -2.48   -2.23   -1.95   -1.68    1.85    2.22    2.70    3.01
N(0,1)      -2.58   -2.33   -1.96   -1.65    1.65    1.96    2.33    2.58

Size     %<-2.576 %<-2.326 %<-1.960 %<-1.645  %>1.645  %>1.960  %>2.326  %>2.576
m =  2      0.5      1.0      2.8      5.7      5.5      2.9      1.1      0.5
m =  3      0.5      1.2      2.9      5.4      5.3      2.7      1.3      0.6
m =  4      0.5      1.1      2.7      5.5      5.7      2.8      1.2      0.7
m =  5      0.5      0.8      2.5      5.4      5.6      2.6      1.2      0.7
m =  6      0.5      0.9      2.5      5.2      5.8      2.8      1.3      0.6
m =  7      0.5      0.9      2.5      5.2      5.7      3.0      1.3      0.6
m =  8      0.4      1.0      2.5      4.8      5.4      3.1      1.4      0.8
m =  9      0.4      1.0      2.5      4.6      5.5      3.1      1.4      0.8
m = 10      0.3      0.9      2.4      4.8      5.7      3.0      1.4      0.8
m = 11      0.3      0.9      2.2      5.0      5.7      3.3      1.5      0.9
m = 12      0.3      0.8      2.1      4.8      5.8      3.4      1.6      1.0
m = 13      0.3      0.8      2.2      4.7      6.2      3.4      1.7      0.9
m = 14      0.3      0.7      2.2      5.0      6.5      3.9      1.8      1.1
m = 15      0.4      0.8      2.4      5.4      6.7      4.2      2.2      1.2
Nominal     0.5      1.0      2.5      5.0      5.0      2.5      1.0      0.5

             m=2    m=3    m=4    m=5    m=6    m=7    m=8    m=9   m=10   m=11   m=12   m=13   m=14   m=15   N(0,1)
Median     -0.02  -0.02  -0.02  -0.01   0.00  -0.02  -0.03  -0.04  -0.05  -0.06  -0.05  -0.05  -0.05  -0.06    0.00
Mean       -0.02  -0.02  -0.01  -0.01  -0.01  -0.01  -0.01  -0.01  -0.01  -0.01  -0.01  -0.01  -0.01   0.00    0.00
Std. dev.   1.01   1.02   1.03   1.03   1.03   1.03   1.02   1.02   1.02   1.03   1.03   1.04   1.05   1.07    1.00
Skewness    0.02   0.04   0.04   0.05   0.08   0.11   0.13   0.15   0.17   0.20   0.22   0.23   0.25   0.27    0.00
Kurtosis    3.00   2.98   2.99   2.99   3.00   2.99   3.02   3.04   3.06   3.06   3.06   3.06   3.07   3.09    3.00

Based on BDS test results of 7,500 and 5,000 normal random samples

Table 3: Distribution of Correlation Integral of Dimension 1 in Normal Samples

ε/σ = 0.5
n =            50      100      250      500    1,000    2,500   50,000
Median     0.2694   0.2731   0.2749   0.2757   0.2760   0.2762   0.2764
Mean       0.2722   0.2744   0.2755   0.2760   0.2761   0.2762   0.2763
Std. dev.  0.0157   0.0100   0.0058   0.0040   0.0028   0.0017   0.0004
Skewness   1.0683   0.9013   0.6129   0.4656   0.3258   0.2542
Kurtosis   5.0378   4.3824   3.4846   3.2963   3.1784   3.0917
(n = 50,000 column based on 55 samples)

ε/σ = 0.75
n =           100   50,000
Median     0.3998   0.4040
Mean       0.4016   0.4040
Std. dev.  0.0121   0.0000
Skewness   0.8768
Kurtosis   4.2276
(n = 50,000 column based on 1 sample)

ε/σ = 1.0
n =            50      100      250      500      750    1,000    2,500   50,000
Median     0.5118   0.5160   0.5187   0.5197   0.5199   0.5201   0.5203   0.5206
Mean       0.5149   0.5177   0.5194   0.5200   0.5201   0.5202   0.5204   0.5205
Std. dev.  0.0188   0.0128   0.0078   0.0054   0.0044   0.0038   0.0024   0.0005
Skewness   0.9439   0.7605   0.5081   0.3648   0.3141   0.2501   0.1879
Kurtosis   4.3014   3.9202   3.2932   3.1600   3.1587   3.1187   3.1015
(n = 50,000 column based on 55 samples)

ε/σ = 1.5
n =            50      100      250      500      750    1,000    2,500   50,000
Median     0.7053   0.7081   0.7098   0.7105   0.7107   0.7108   0.7110   0.7112
Mean       0.7064   0.7088   0.7103   0.7107   0.7108   0.7109   0.7111   0.7112
Std. dev.  0.0148   0.0100   0.0061   0.0043   0.0035   0.0030   0.0019   0.0004
Skewness   0.5794   0.5293   0.3943   0.2941   0.2433   0.1903   0.1459
Kurtosis   4.1767   3.7649   3.3233   3.1699   3.1535   3.0978   3.0728
(n = 50,000 column based on 55 samples)

ε/σ = 2.0
n =            50      100      250      500      750    1,000    2,500   50,000
Median     0.8400   0.8412   0.8420   0.8423   0.8424   0.8425   0.8426   0.8427
Mean       0.8407   0.8416   0.8423   0.8425   0.8426   0.8426   0.8427   0.8427
Std. dev.  0.0078   0.0050   0.0029   0.0020   0.0016   0.0014   0.0009   0.0002
Skewness   0.3895   0.5977   0.6233   0.4967   0.4131   0.3461   0.2566
Kurtosis   4.9165   4.7093   4.0685   3.6818   3.3459   3.2653   3.1776
(n = 50,000 column based on 55 samples)

Based on 25,000 normal random samples
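As n grows, the tabulated c₁ values converge to the population probability that two independent N(0,1) observations lie within ε of each other. Since the difference of two independent standard normals is N(0,2), that probability is erf(ε/2). The following Python check (not part of the original MATLAB package; the function name is mine) reproduces the n = 50,000 column of Table 3 to the reported precision:

```python
from math import erf

def c1_normal_limit(epsilon):
    """P(|X - Y| <= epsilon) for independent X, Y ~ N(0,1); X - Y ~ N(0, 2)."""
    return erf(epsilon / 2)

# Compare with the n = 50,000 means reported in Table 3
for eps, tabulated in [(0.5, 0.2763), (1.0, 0.5205), (1.5, 0.7112), (2.0, 0.8427)]:
    assert abs(c1_normal_limit(eps) - tabulated) < 5e-4
```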

VERY FAST AND CORRECTLY SIZED
ESTIMATION OF THE BDS STATISTIC
Ludwig Kanzler
Christ Church
University of Oxford

1 February 1999

Appendix C:
Software (MATLAB Code)

Also downloadable in ASCII format from:


http://users.ox.ac.uk/~econlrk

See page 4 for a table of contents.

BDS Estimation: Appendix C (Software)

C2

Programme 1: Brock-Dechert-Scheinkman Test for Independence


function [w, sig, c, c1, k] = bds (series, maxdim, distance, flag, maxram)
%BDS Brock, Dechert & Scheinkman test for independence based on the correlation dimension
%
%   [W, SIG, C, C1, K] = BDS (SERIES, MAXDIM, DISTANCE, FLAG, MAXRAM)
%
%   uses       - time-series vector SERIES (1),
%              - dimensional distance
%                * either defined as fraction DISTANCE of the standard deviation of
%                  SERIES if FLAG = 0,
%                * or defined such that the one-dimensional correlation integral of
%                  SERIES is equal to DISTANCE if FLAG = 1 (2),
%              - not more than MAXRAM megabytes of memory for the computation (3),
%
%   to compute - BDS statistics W for each dimension between 2 and MAXDIM (4),
%              - significance levels SIG at which the null hypotheses of no dependence
%                are rejected ASYMPTOTICALLY (use companion function BDSSIG.M for finite
%                samples) against (almost) any type of linear and non-linear
%                dependence (5),
%              - correlation-integral estimates C for each dimension M between 2 and
%                MAXDIM,
%              - first-order correlation-integral estimates C1 computed over the last
%                N-M+1 observations, and
%              - parameter estimate K (6).
%
%   (1) SERIES is normally a vector of residuals obtained from a regression, but it can
%       also be any other stationary time series.
%   (2) The default settings are DISTANCE = 1.5 and FLAG = 0. The BDS statistic appears
%       to be most efficiently estimated if the measure of dimensional distance EPSILON
%       is chosen such that the first-order correlation-integral estimate (C1) lies
%       around 0.7 (see Kanzler, 1998, forthcoming). For settings DISTANCE = 0.7 and
%       FLAG = 1, the programme will choose EPSILON accordingly. Unfortunately, the cost
%       of finding optimal EPSILON is quite high in terms of CPU time and required
%       memory. For a near-normal distribution, the default settings achieve the same
%       without any extra computational burden.
%   (3) The default setting is MAXRAM = 150, which is recommended for a system with
%       192MB physical RAM installed. The programme is highly optimised so as to
%       maximise speed given available memory, so it is very important to specify MAXRAM
%       correctly as the amount of physical memory available AFTER starting MATLAB,
%       loading any data and running other applications concurrently. The smaller the
%       amount of RAM available to the programme (in relation to the length of SERIES),
%       the slower the algorithm chosen from six alternatives. However, if MAXRAM is
%       chosen too large, MATLAB will make use of virtual (hard-disk) memory, and this
%       will slow down computation considerably.
%   (4) The default setting for MAXDIM is 2. For MAXDIM = 1, W and SIG are empty.
%   (5) A vector of NaN is returned if the MATLAB Statistics Toolbox is not installed.
%   (6) The BDS statistic W(M) is a function of C(M), C1(1), C1(M) and K, and these
%       estimates are normally of no further interest.
%
%   See Kanzler (1998) for some explanation of the main parts of the algorithm (other
%   explanations are commented into the below code), for a detailed investigation of the
%   finite-sample properties of the BDS statistic, for tables of small-sample quantiles
%   and for a comparison with software by Dechert (1988) and LeBaron (1988, 1990, 1997a,
%   1997b). These and other important references can be found at the end of the script.
%
%   * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
%   * All rights reserved. This script may be redistributed if it is left unaltered in *
%   * its entirety (619 lines, 31422 bytes) and if nothing is charged for              *
%   * redistribution. Usage of the programme in applications and alterations of the    *
%   * code should be referenced properly. See http://users.ox.ac.uk/~econlrk for       *
%   * updated versions. The author appreciates suggestions for improvement or other    *
%   * feedback.                                                                        *
%   * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
%
%   Copyright (c) 14 April 1998 by Ludwig Kanzler
%   Department of Economics, University of Oxford
%   Postal: Christ Church, Oxford OX1 1DP, England
%   E-mail: ludwig.kanzler@economics.oxford.ac.uk
%
%   $ Revision: 2.41 $   $ Date: 15 September 1998 $

% % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % %
% % % % % % % % % % Executable part of main function BDS.M starts here % % % % % % % % %
% % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % %


%%%%%%%%%%%%%%%%%%%%%% Check and transformation of input arguments %%%%%%%%%%%%%%%%%%%%%%

if nargin < 5
   maxram = 150;
elseif maxram > 500
   disp('Are you sure you have so much memory available?')
   error('If so, you need to edit the code, otherwise try again with a lower value.')
end
if nargin < 4
   flag = 0;
elseif ~any(flag == [0 1])
   error('Unknown method for determining dimensional distance; try again with 0 or 1.')
end
if nargin < 3
   distance = 1.5;
elseif distance < 0
   error('The dimensional distance parameter must be positive.')
elseif flag == 1 & distance > 1
   error('The correlation integral cannot exceed 1.')
end
if nargin < 2
   maxdim = 2;
elseif maxdim < 1
   error('The dimension needs to be at least 1.');
end
if nargin < 1
   error('Cannot compute the BDS statistic on nothing.')
end

[rows,cols] = size(series);
if rows > 1 & cols == 1
   n = rows;
   series = series';
elseif cols > 1 & rows == 1
   n = cols;
elseif cols > 1 & rows > 1
   n = cols*rows;
   series = series(:)';   % transformation into a single row vector
   disp(sprintf('\aTransformed matrix input into a single row vector.'))
else
   error('Cannot compute the BDS statistic on a scalar!')
end
%%%%%%%%%%%% Determination of and preparations for fastest method given MAXRAM %%%%%%%%%%%

fastbuild = 0.000016 * (1:52) .* pow2(1:52); % memory requirements
slowbuild = 0.000045 *           pow2(1:52); % for the various
holdinfo  = 0.000005 *           pow2(1:52); % algorithms in
wordtable = 0.000008 * n^2 ./ (1:52);        % megabytes for
bitandop  = 0.000024 * n^2 ./ (1:52);        % given N

[ram1, bits1] = min(fastbuild + holdinfo + wordtable + bitandop); % number of bits for
[ram2, bits2] = min(fastbuild + holdinfo + wordtable);            % which each of six
[ram3, bits3] = min(slowbuild + holdinfo + wordtable + bitandop); % methods uses minimum
[ram4, bits4] = min(slowbuild + holdinfo + wordtable);            % memory; this memory
[ram5, bits5] = min(            wordtable + bitandop);            % is given by
[ram6, bits6] = min(            wordtable);                       % ram1, ram2, ..., ram6

if ram1 < maxram | ram2 < maxram
   if ram1 < maxram
      method = 1;
      bits   = bits1; ram = ram1;
   else
      method = 2;
      bits   = bits2; ram = ram2;
      stepping = floor((maxram-ram)*bits/n/0.000024); % maximum number of rows to put
                                                      % through BITAND and bit-counting
                                                      % algorithm without exceeding MAXRAM
   end


   % Vector BITINFO lists the number of bits set for each integer between 0 and 2^bits
   % (corresponding to the indices of the vector shifted by 1). See Kanzler (1998) for
   % an explanation.
   bitinfo = uint8(sum(rem(floor((0:pow2(bits)-1)'*pow2(1-bits:0)),2),2));

elseif ram3 < maxram | ram4 < maxram
   if ram3 < maxram
      method = 3;
      bits   = bits3; ram = ram3;
   else
      method = 4;
      bits   = bits4; ram = ram4;
      stepping = floor((maxram - ram) * bits / n / 0.000024);
   end
   bitinfo(1:pow2(bits), :) = uint8(0);  % the same as above, but created through
   for bit = 1 : bits                    % a loop, which consumes less memory
      bitinfo(1:pow2(bits)) = sum([bitinfo, ...
         kron(ones(pow2(bits-bit),1), [zeros(pow2(bit-1),1); ones(pow2(bit-1),1)])],2);
   end

elseif ram5 < maxram | ram6 < maxram
   if ram5 < maxram
      method = 5;
      bits   = bits5; ram = ram5;
   else
      method = 6;
      bits   = bits6; ram = ram6;
      stepping = floor((maxram - ram) * bits / n / 0.000024);
   end

else
   disp('Insufficient amount of memory. Allocate more memory to the system')
   disp('or reduce the number of observations, then try again.')
   error(' ')
end
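Methods 1 to 4 count bits by table lookup: BITINFO maps every possible bit-word to its number of set bits, so counting the bits of a word is a single array access. The same idea in Python (an illustrative sketch written for this appendix; both function names are mine, not part of the MATLAB package):

```python
def build_bitinfo(bits):
    """Lookup table: bitinfo[i] = number of set bits in i, for 0 <= i < 2**bits
    (the role played by the MATLAB BITINFO vector)."""
    bitinfo = [0] * (1 << bits)
    for i in range(1, 1 << bits):
        # i >> 1 has already been filled in; add back the lowest bit of i
        bitinfo[i] = bitinfo[i >> 1] + (i & 1)
    return bitinfo

def popcount_words(words, bitinfo):
    """Total number of set bits across a list of bit-words, via table lookup."""
    return sum(bitinfo[w] for w in words)
```

The incremental construction mirrors the spirit of the slower MATLAB loop variant: each table entry is derived from an entry already computed, so the table costs O(2^bits) operations rather than O(bits * 2^bits).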
%%%%%%%%%%%%%%%%%%%%% Determination of dimensional distance EPSILON %%%%%%%%%%%%%%%%%%%%%%
%
% The empirical investigation by Kanzler (1998) shows that choosing EPSILON such that
% the first-order correlation integral is around 0.7 yields the most efficient
% estimation of low-dimensional BDS statistics. Hence the objective here is to choose
% EPSILON such that, say, 70% of all observations lie within distance EPSILON to each
% other. If desired, the programme first determines EPSILON so as to fulfil this or a
% similar requirement.
%
% The conceptually simplest way of setting up the calculation of distance among all
% observations is to define a two-dimensional table D (for "distance") of length and
% width N and assign to each co-ordinate (x,y) the result of the problem ABS(x-y).
% In principle, the entire table could thus be created with the following one-line
% statement:
%
%    D = ABS( SERIES(ONES(1000,1),:)' - SERIES(ONES(1000,1),:) )
%
% Since the lower triangle of the table only replicates the upper triangle and since the
% diagonal values represent own values (ones) which are not desired to be included in
% the calculation, only the upper triangle receives further attention.
%
% Unfortunately, sewing all the row vectors of the upper triangle together to form one
% single (row) vector makes indexing very messy. To aid understanding of the
% vector-space indexing used here (as well as in the optional sub-function further
% below), one may wish to refer to the following exemplary matrix table (N=7):
%
%              * * * * c o l u m n * * * *
%              1    2    3    4    5    6    7
%
%         1    *    1    2    3    4    5    6
%    r    2    .    *    7    8    9   10   11
%    o    3    .    .    *   12   13   14   15
%    w    4    .    .    .    *   16   17   18
%         5    .    .    .    .    *   19   20
%         6    .    .    .    .    .    *   21
%         7    .    .    .    .    .    .    *
%
% Using this example, it is easy to verify that column vector I is defined by the
% following indices in vector space:
%
%    I+(0 : I-2)*N - CUMSUM(1 : I-1)
%
% More generally, column vector I starting only in row J is:
%
%    I+(J-1 : I-2)*N - SUM(1:J-1)-CUMSUM(J : I-1)
%
% Row vector I is given by indices:
%
%    1+(I-1)*(N-1)-SUM(1:I-2) : I*(N-1)-SUM(1:I-1)
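The indexing formulae can be checked mechanically. The sketch below (Python rather than MATLAB, written purely for this illustration; all names are mine) numbers the upper triangle of the N = 7 example row by row, exactly as the row vectors are sewn together, and verifies the column-vector and row-vector index formulae against it:

```python
N = 7
table = [[0] * N for _ in range(N)]   # 0 marks the diagonal and the lower triangle
count = 0
for r in range(N):                    # number the upper triangle row by row
    for c in range(r + 1, N):
        count += 1
        table[r][c] = count

def column_indices(i, n):
    # MATLAB:  I + (0 : I-2)*N - CUMSUM(1 : I-1)   (1-based indices)
    return [i + j * n - sum(range(1, j + 2)) for j in range(i - 1)]

def row_indices(i, n):
    # MATLAB:  1+(I-1)*(N-1)-SUM(1:I-2) : I*(N-1)-SUM(1:I-1)
    start = 1 + (i - 1) * (n - 1) - sum(range(1, i - 1))
    stop  = i * (n - 1) - sum(range(1, i))
    return list(range(start, stop + 1))

# column 4 of the example holds vector-space indices 3, 8, 12
assert column_indices(4, N) == [table[r][3] for r in range(3)]
# row 2 of the example holds vector-space indices 7 to 11
assert row_indices(2, N) == [table[1][c] for c in range(2, N)]
```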


%
% (A formal derivation of the above formulae is beyond the scope of this script.)
%
% To calculate a percentile of the distribution of distance values, the row vector is
% sorted (unfortunately, this requires a lot of time and RAM in MATLAB).

if ~flag
   demeaned = series - sum(series)/n;                    % fastest algorithm for
   epsilon  = distance * sqrt(demeaned*demeaned'/(n-1)); % computing the standard
   clear demeaned                                        % deviation of SERIES

elseif 0.000008 * 3 * sum(1:n-1) < maxram % check memory requirements for DIST and sorting
   dist(1:sum(1:n-1)) = 0;
   for i = 1 : n-1
      dist(1+(i-1)*(n-1)-sum(0:i-2) : i*(n-1)-sum(1:i-1)) = abs(series(i+1:n)-series(i));
   end
   sorted  = sort(dist);
   epsilon = sorted(round(distance*sum(1:n-1))); % DISTANCEth percentile of SORTED series
   clear dist sorted
else
   error('Insufficient RAM to compute EPSILON; allocate more memory or use FLAG = 0.')
end
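The two ways of fixing EPSILON can be sketched in a few lines of Python (an illustrative translation, not the production code; the function name and the brute-force pairwise loop are mine, whereas the MATLAB version vectorises the same computation):

```python
import math

def choose_epsilon(series, distance=1.5, flag=0):
    """Mirror of the EPSILON selection logic (illustrative sketch).

    flag = 0: epsilon = distance * sample standard deviation of the series.
    flag = 1: epsilon = the distance-quantile of all n*(n-1)/2 pairwise
              distances, so that the first-order correlation integral is
              roughly equal to `distance`.
    """
    n = len(series)
    if flag == 0:
        mean = sum(series) / n
        std = math.sqrt(sum((x - mean) ** 2 for x in series) / (n - 1))
        return distance * std
    dists = sorted(abs(series[i] - series[j])
                   for i in range(n) for j in range(i + 1, n))
    # MATLAB's sorted(round(distance*npairs)) is 1-based, hence the -1 here
    return dists[round(distance * len(dists)) - 1]
```

As the surrounding comments warn, the flag = 1 route materialises and sorts every pairwise distance, which is why the MATLAB code guards it with a memory check.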
%%%%%%%%%%%% Computation and storage of one-dimensional distance information %%%%%%%%%%%%
%
% Similarly to the above, a two-dimensional table C (for "close") of length and width N
% can be defined by assigning to each co-ordinate (x,y) the result of the problem
% ABS(x-y) <= EPSILON; (x,y) assumes the value 1 if the statement is true and 0
% otherwise. Formally, for given EPSILON:
%
%    C(x,y) = 1 if ABS(x-y) <= EPSILON
%           = 0 otherwise
%
% Once again, the resulting information needs to be stored in the most efficient way.
% In this implementation, this is done by chopping each row of the table into "words" of
% several bits, the precise number of bits per word being determined by the above
% algorithms. One "word" is thus represented by one integer. This slashes the size of
% the table by the number of bits. See Kanzler (1998) for more details.
%
% The below routine stores all rows of the upper triangle of the conceptual table
% (described in Kanzler, 1998) left-aligned and assigns zeros to all other elements.
%
% As will also be explained further below, the computation of parameter K requires the
% sum of each FULL row, i.e. each row including the elements in the lower triangle and
% on the diagonal. The "missing" bits correspond to the sums over each column in the
% upper triangle, and these sums are also computed and stored in the below loop. And to
% make matters simple, diagonal values are allocated to the column sums by initialising
% them with value 1. See also Kanzler (1998).

colsum(1:n)             = 1;  % initialisation of the bit-word table; diagonal
rowsum(1:n)             = 0;  % values are folded into the column sums
nwords                  = ceil((n-1)/bits);
wrdmtrx(1:n-1,1:nwords) = 0;

for row = 1 : n-1
   bitvec                = abs(series(1+row:n) - series(row)) <= epsilon;
   rowsum(row)           = sum(bitvec);
   colsum(1+row:n)       = colsum(1+row:n) + bitvec;
   nwords                = ceil((n-row)/bits);
   wrdmtrx(row,1:nwords) = (reshape([bitvec, zeros(1,nwords*bits-n+row)], ... % transformation
                            bits, nwords)' * pow2(0:bits-1)')';               % into bit-words
end
clear series bitvec
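In a language with arbitrary-precision integers the same storage scheme is almost free: a single integer can hold an entire row of closeness bits. A hypothetical Python rendering of the loop above (the function name is mine; it is a sketch of the idea, not the optimised MATLAB code):

```python
def build_close_info(series, epsilon):
    """Upper-triangle 'close pair' information, illustrative Python version.

    Row r (0-based) stores one bit per column s > r, set iff
    |x_s - x_r| <= epsilon. rowsum/colsum mirror the MATLAB vectors:
    colsum is initialised to 1 so that the diagonal is folded in.
    """
    n = len(series)
    rows, rowsum, colsum = [], [0] * n, [1] * n
    for r in range(n - 1):
        word = 0
        for k, s in enumerate(range(r + 1, n)):
            if abs(series[s] - series[r]) <= epsilon:
                word |= 1 << k          # one Python int holds the whole row
                rowsum[r] += 1
                colsum[s] += 1
        rows.append(word)
    return rows, rowsum, colsum
```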
%%%%%%%%%%%%%%%%%% Computation of one-dimensional correlation estimates %%%%%%%%%%%%%%%%%%
%
% C1(1), the fraction (or estimated probability) of pairs in SERIES being "close" in the
% first dimension, is just the average over ALL unique elements. C1(1) is hence the most
% efficient estimator of C(1), and the resulting estimate is used in the computation of
% SIGMA(M) further below.
%
% However, for the difference term C(M) - C(1)^M of the BDS statistic (see further
% below) to follow SIGMA asymptotically, both C(M) and C(1) need to be estimated over
% the same length vector, and so MAXDIM different C1's need to be estimated here:
%
%                               N      N
%    C1(M) = 2/(N-M+1)/(N-M) * SUM    SUM  B(S,T)
%                              S=M   T=S+1
%
% Each C1(M) is easily computed from the sum of all bits set in rows M to N-1 divided by
% the appropriate total number of bits.

bitsum(maxdim:-1:1) = cumsum([sum(rowsum(maxdim:n-1)), rowsum(maxdim-1:-1:1)]);
c1    (maxdim:-1:1) = bitsum(maxdim:-1:1) ./ cumsum([sum(1:n-maxdim), n-maxdim+1 : n-1]);
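The quantity being computed can be stated directly: for each m, C1(m) is the fraction of close pairs among observations m to n. A brute-force Python cross-check of the vectorised MATLAB lines (illustrative only, O(n²) per dimension; the function name is mine):

```python
def c1_estimates(series, epsilon, maxdim):
    """First-order correlation integrals C1(m), m = 1..maxdim, each computed
    over the last n-m+1 observations only."""
    n = len(series)
    out = []
    for m in range(1, maxdim + 1):
        close = sum(1 for s in range(m - 1, n) for t in range(s + 1, n)
                    if abs(series[t] - series[s]) <= epsilon)
        pairs = (n - m + 1) * (n - m) // 2    # number of unique pairs
        out.append(close / pairs)
    return out
```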
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Computation of parameter K %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% A parameter needed to estimate SIGMA(M) is K, which is defined as:
%
%                           N      N      N
%    K = 6/N/(N-1)/(N-2) * SUM    SUM    SUM  {C(T,S)*C(S,R) + C(T,R)*C(R,S)
%                          T=1   S=T+1  R=S+1                + C(S,T)*C(T,R)} / 3
%
% As is readily apparent, a literal computation of the above would be very
% processing-intensive, e.g.:
%
%    HT(1 : N) = 0;
%    FOR T = 1 : N
%       HS(1 : N-T) = 0;
%       FOR S = T+1 : N
%          HR(1 : N-S) = 0;
%          FOR R = S + 1 : N
%             HR(R-S) = (C(T,S)*C(S,R) + C(T,R)*C(R,S) + C(S,T)*C(T,R)) / 3;
%          END
%          HS(S-T) = SUM(HR(1 : N-S));
%       END
%       HT(T) = SUM(HS(1 : N-T));
%    END
%    K = SUM(HT) * 6 / N/(N-1)/(N-2);
%
% To understand what K actually estimates, and how this estimation can be made
% computationally more efficient, see Kanzler (1998).
%
% The above FOR loop computes the sum over each row and over each column including the
% diagonal in the upper triangle. To compute K from this, the sum of the squares of the
% row and column sums needs to be adjusted as reasoned above, whereby the sum of all
% elements in table C is given by twice the sum of all vector elements plus the diagonal
% values.

fullsum = rowsum + colsum;
k       = (fullsum*fullsum' + 2*n - 3*(2*bitsum(1)+n)) / n/(n-1)/(n-2);
clear rowsum colsum fullsum bitsum
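The equivalence of the triple loop and the row/column-sum shortcut is easy to verify numerically. Both versions below are Python sketches written for this illustration (names mine); C is the symmetric 0/1 closeness matrix with ones on the diagonal:

```python
def k_naive(C):
    """Triple-loop definition of K, as in the comment above."""
    n = len(C)
    total = 0.0
    for t in range(n):
        for s in range(t + 1, n):
            for r in range(s + 1, n):
                total += (C[t][s]*C[s][r] + C[t][r]*C[r][s] + C[s][t]*C[t][r]) / 3
    return total * 6 / (n * (n - 1) * (n - 2))

def k_fast(C):
    """Row/column-sum shortcut used by the MATLAB code:
    k = (fullsum*fullsum' + 2n - 3*(2*bitsum + n)) / (n(n-1)(n-2))."""
    n = len(C)
    fullsum = [sum(C[t]) for t in range(n)]   # full row sums, diagonal included
    bitsum = sum(C[t][s] for t in range(n) for s in range(t + 1, n))
    return (sum(h * h for h in fullsum) + 2*n - 3*(2*bitsum + n)) / (n*(n-1)*(n-2))
```

The shortcut works because the squared row sums count all ordered triples sharing a common index, from which the triples with repeated indices (captured by bitsum and n) are removed.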
%%%%%%%%%% Computation of correlation estimates and SIGMA for higher dimensions %%%%%%%%%%
%
% C(M), the M-dimension correlation estimate, is defined as:
%
%                              N      N     M-1
%    C(M) = 2/(N-M+1)/(N-M) * SUM    SUM   PROD B(S-J, T-J)
%                             S=M   T=S+1  J=0
%
% To see how C can be computed for M > 1, see Kanzler (1998).
%
% In practice, the required BITAND-operation can be performed on the entire table at
% once by replacing the entire table between rows M and N-1 with the result of the
% BITAND-operation between the table formed by rows M to N-1 and M-1 to N-2. But this
% works only if sufficient memory is available (methods 1, 3 and 5). Otherwise, the
% BITAND-operation has to be performed by looping BACKWARDS through the table, taking as
% many rows as possible at once (methods 2, 4 and 6).
%
% The number of bits set in rows M to N-1 (inclusive) is counted either in one go or
% through the above loop by looking up the number of bits set for each integer (LeBaron,
% 1997, uses a similar method), or, if memory was insufficient to create the required
% BITINFO array, by column-wise brute-force counting.
%
% MATLAB uses logarithms to compute powers, and this can result in minute deficiencies
% in accuracy. To avoid this, integer powers are computed by separate functions in this
% script (see further below). Otherwise, SIGMA would be calculated as follows:
%
%    sigma(m-1) = 2*sqrt(k^m + 2*k.^(m-(1:m-1))*(c1(1).^(2*(1:m-1)))'...
%                 + (m-1)^2*c1(1)^(2*m) - m^2*k*c1(1)^(2*m-2));


for m = 2 : maxdim
   bitcount = 0;
   if sum(method == [1 3])
      wrdmtrx(m:n-1,:) = bitand(wrdmtrx(m:n-1,:), wrdmtrx(m-1:n-2,:)); % BITAND and bit
      bitcount = sum(sum(bitinfo(wrdmtrx(m:n-1,:)+1)));                % count all at once

   elseif sum(method == [2 4])
      for row = n-stepping : -stepping : m+1                        % BITAND and bit
         wrdmtrx(row:row+stepping-1,:) = bitand(wrdmtrx(row:...     % count in backward
            row+stepping-1,:), wrdmtrx(row-1:row+stepping-2,:));    % loops through
         bitcount = bitcount + ...                                  % the table
            sum(sum(bitinfo(wrdmtrx(row:row+stepping-1,:)+1)));
      end
      wrdmtrx(m:row-1,:) = bitand(wrdmtrx(m:row-1,:), wrdmtrx(m-1:row-2,:));
      bitcount = bitcount + sum(sum(bitinfo(wrdmtrx(m:row-1,:)+1)));

   elseif method == 5
      wrdmtrx(m:n-1,:) = bitand(wrdmtrx(m:n-1,:), wrdmtrx(m-1:n-2,:)); % BITAND at once...
      for col = 1 : ceil((n-1)/bits)                                   % bit count by
         bitcount = bitcount + sum(sum(rem(floor(wrdmtrx(m:...         % brute force
            n-1-(col-1)*bits, col) * pow2(1-bits:0)), 2)));            % in loops
      end

   else
      for row = n-stepping : -stepping : m+1                        % BITAND operations
         wrdmtrx(row:row+stepping-1,:) = bitand(wrdmtrx(row:...     % and brute-force
            row+stepping-1,:), wrdmtrx(row-1:row+stepping-2,:));    % bit counting
      end                                                           % in loops
      wrdmtrx(m:row-1,:) = bitand(wrdmtrx(m:row-1,:), wrdmtrx(m-1:row-2,:));
      for col = 1 : ceil((n-1)/bits)
         bitcount = bitcount + sum(sum(rem(floor(wrdmtrx(m:...
            n-1-(col-1)*bits, col) * pow2(1-bits:0)),2)));
      end
   end

   c(m-1)     = bitcount / sum(1:n-m);                                % indexing of C
   sigma(m-1) = 2*sqrt(prod(ones(1,m)*k) + 2*ivp(k,m-(1:m-1),m-1)...  % and SIGMA runs
      *(ivp(c1(1),2*(1:m-1),m-1))' + (m-1)*(m-1)...                   % from 1 to
      *prod(ones(1,2*m)*c1(1)) - m*m*k*prod(ones(1,2*m-2)*c1(1)));    % MAXDIM-1
end
clear wrdmtrx
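The cumulative AND at the heart of the loop can be cross-checked against the product definition of C(M). A Python sketch written for this illustration (boolean lists stand in for the bit-words; both function names are mine):

```python
def cm_direct(series, epsilon, m):
    """C(m) from its definition: the fraction of pairs (s,t), s >= m, t > s,
    whose m embedded coordinates are all pairwise close."""
    n = len(series)
    close = lambda a, b: abs(series[a] - series[b]) <= epsilon
    count = sum(all(close(s - j, t - j) for j in range(m))
                for s in range(m - 1, n) for t in range(s + 1, n))
    return count / ((n - m + 1) * (n - m) // 2)

def cm_by_and(series, epsilon, maxdim):
    """Cumulative AND, as in the MATLAB loop: after step m, row s of B holds
    the joint closeness of the last m embedded coordinates."""
    n = len(series)
    B = [[abs(series[t] - series[s]) <= epsilon for t in range(s + 1, n)]
         for s in range(n)]
    out = []
    for m in range(2, maxdim + 1):
        for s in range(n - 1, m - 2, -1):   # backwards, as in the MATLAB loop
            B[s] = [x and y for x, y in zip(B[s], B[s - 1])]
        count = sum(sum(B[s]) for s in range(m - 1, n))
        out.append(count / ((n - m + 1) * (n - m) // 2))
    return out
```

Iterating backwards matters: row s must be combined with the not-yet-updated row s-1, exactly as the memory-constrained MATLAB methods do.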

%%%%%%%%%%%%%%% Computation of the BDS statistic and level of significance %%%%%%%%%%%%%%%
%
% Under the null hypothesis of independence, it is obvious that the time-series process
% has the property C(1)^M = C(M). In finite samples, C(1) and C(M) are consistently
% estimated by C1(M) and C(M) as above. Also, Brock et al. (1996) show that the standard
% deviation of the difference C(M) - C1(M)^M can be consistently estimated by SIGMA(M)
% divided by SQRT(N-M+1), where:
%
%                              M-1
%    SIGMA(M)^2 = 4* [K^M + 2* SUM {K^(M-J)* C^(2*J)} + (M-1)^2* C^(2*M)
%                              J=1                    - M^2* K* C^(2*M-2)]
%
% and C = C1(1) and K as above.
%
% For given N and EPSILON, the BDS statistic is defined as the ratio of the two terms:
%
%                            C(M) - C1(M)^M
%    W(M) = SQRT(N-M+1) * --------------------
%                               SIGMA(M)
%
% Since it follows asymptotically the normal distribution with mean 0 and variance 1,
% hypothesis testing is straightforward. If available, this is done here using function
% NORMCDF of the MATLAB Statistics Toolbox.
%
% Integer powers are again calculated by a sub-routine which is more accurate than the
% MATLAB built-in power function; without using the sub-routine, the line for
% calculating W would be:
%    w = sqrt(n-(2:maxdim)+1) .* (c - c1(2:maxdim).^(2:maxdim)) ./ sigma;


if maxdim > 1
   w = sqrt(n-(2:maxdim)+1) .* (c - idvp(c1(2:maxdim), 2:maxdim, maxdim-1)) ./ sigma;
   if exist('normcdf.m','file') & nargout > 1
      sig = min(normcdf(w,0,1), 1-normcdf(w,0,1)) * 2;
   elseif nargout > 1
      sig(1:maxdim-1) = NaN;
   end
else
   w   = [];
   sig = [];
   c   = [];
end
%%%%%%%%%%%%%%%%%%%%%%% Sub-functions for computing integer powers %%%%%%%%%%%%%%%%%%%%%%%
function ipow = ivp (base, intpowvec, veclen)
% Integer powers of a scalar base, computed by explicit repeated multiplication
ipow(1 : veclen) = 0;
for j = 1 : veclen
   ipow(j) = prod(ones(1, intpowvec(j)) * base);
end

function ipow = idvp (basevec, intpowvec, veclen)
% Element-wise integer powers of a vector of bases, by repeated multiplication
ipow(1 : veclen) = 0;
for j = 1 : veclen
   ipow(j) = prod(ones(1, intpowvec(j)) * basevec(j));
end
% % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % %
% % % % % % % % % % Executable part of main function BDS.M ends here % % % % % % % % % %
% % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % %

% % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % %
%
%   The following sub-function is not actually used by the main function and is only
%   included for the benefit of those who would like to implement the BDS test in a
%   language which is either incapable of or inefficient in handling bit-wise AND
%   operations, or those who would like to cross-check the above computation. Deleting
%   the sub-function from the script will NOT result in any increase in performance.
%
%   To use the function, save the remainder of this code in a file named BDSNOBIT.M.
%
% % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % %

function w = bdsnobit (series, maxdim, eps)
%BDSNOBIT BDS test for independence IMPLEMENTED WITHOUT USING BIT-WISE FUNCTIONS
%
% Only such comments which relate exclusively to this implementation of the test and
% which cannot be found in the main function are included below.
%
% Copyright (c) 14 April 1998 by Ludwig Kanzler
% Department of Economics, University of Oxford
% Postal: Christ Church, Oxford OX1 1DP, England
% E-mail: ludwig.kanzler@economics.oxford.ac.uk
% $ Revision: 1.3 $   $ Date: 30 April 1998 $
%%%%%%%%%%%%%%%%%%%%%% Check and transformation of input arguments %%%%%%%%%%%%%%%%%%%%%%

if nargin < 3
   eps = 1;
   if nargin == 1
      maxdim = 2;
   elseif maxdim < 2
      error('MAXDIM needs to be at least 2!');
   end
end

epsilon = std(series)*eps;
series  = series(:)';
n       = length(series); % PAIRS is the total number of unique pairs which can be
pairs   = sum(1:n-1);     % formed from all observations (note that while this is
                          % just (N-1)*N/2, MATLAB computes SUM(1:N-1) twice as fast!)


%%%%%%%%%%%% Computation and storage of one-dimensional distance information %%%%%%%%%%%%

% Recall that in the implementation of the main function above, table C is stored in
% bit-representation. When this is not possible or desirable, the second best method is
% to use one continuous vector of unsigned 8-bit integers (called UINT8). This, however,
% requires version 5.1 or higher, and a similar option may not be available in other
% high-level languages. Implementation does not depend on the ability to use unsigned
% low-bit integers and would work equally with double-precision integers, but the memory
% requirements would, of course, be higher. Using UINT8's is still a rather inefficient
% way of storing zeros and ones, which in principle require only a single bit each. On
% the PC, MATLAB actually requires "only" around 5 bytes for each UINT8.

b(1:pairs) = uint8(0);
for i = 1 : n-1
   b(1+(i-1)*(n-1)-sum(0:i-2):i*(n-1)-sum(1:i-1)) = abs(series(i+1:n)-series(i))<=epsilon;
end
clear series
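The loop above lays the upper triangle of the pairwise-distance indicator matrix out row by row in one flat vector. A 0-based Python sketch of the same layout (my own illustration; the MATLAB code uses 1-based offsets):

```python
# Condensed upper-triangle storage: row i holds the comparisons of
# observation i with all later observations j > i, exactly as in the
# MATLAB loop above but with 0-based indices.
def condensed_index(i, j, n):
    """Flat 0-based position of pair (i, j), i < j, in a vector of
    length n*(n-1)//2."""
    assert 0 <= i < j < n
    row_start = i * n - i * (i + 1) // 2   # number of pairs stored before row i
    return row_start + (j - i - 1)

def build_condensed(series, eps):
    n = len(series)
    b = [0] * (n * (n - 1) // 2)
    for i in range(n - 1):
        for j in range(i + 1, n):
            b[condensed_index(i, j, n)] = int(abs(series[j] - series[i]) <= eps)
    return b
```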
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Computation of parameter K %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

sums(1 : n) = 0;
for i = 1 : n
   sums(i) = sum(b(i+(0 : i-2)*n - cumsum(1 : i-1)))...               % sum over column I
             + 1 ...                                                  % diagonal element
             + sum(b(1+(i-1)*(n-1)-sum(1:i-2) : i*(n-1)-sum(1:i-1))); % sum over row I
end
k = (sum(sums.^2) + 2*n - 3*(2*sum(b)+n)) / n/(n-1)/(n-2);
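The closed-form line for K above can be cross-checked against its definition: K is the fraction of ordered triples (i, j, k), all indices distinct, for which observation i lies within epsilon of both j and k. The following brute-force check is mine, not part of the paper:

```python
# Brute-force K versus the closed form used in the MATLAB line above.
def k_bruteforce(series, eps):
    n = len(series)
    chi = [[1 if abs(a - b) <= eps else 0 for b in series] for a in series]
    count = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if i != j and j != k and i != k:
                    count += chi[i][j] * chi[i][k]
    return count / (n * (n - 1) * (n - 2))

def k_closed_form(series, eps):
    n = len(series)
    chi = [[1 if abs(a - b) <= eps else 0 for b in series] for a in series]
    row = [sum(r) for r in chi]          # full row sums, including the diagonal 1s
    s = sum(chi[i][j] for i in range(n) for j in range(i + 1, n))
    return (sum(h * h for h in row) + 2 * n - 3 * (2 * s + n)) / (n * (n - 1) * (n - 2))
```

The correction terms `2*n - 3*(2*s + n)` remove the degenerate triples in which two or three of the indices coincide.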
%%%%%%%%%%%%%%%%%% Computation of one-dimensional correlation estimates %%%%%%%%%%%%%%%%%%
bitsum(1:maxdim) = sum(b(1+(maxdim-1)*(n-1)-sum(0:maxdim-2) : pairs));
for m = maxdim-1 : -1 : 1
bitsum(m) = bitsum(m+1) + sum(b(1+(m-1)*(n-1)-sum(0:m-2):m*(n-1)-sum(1:m-1)));
end
c1(maxdim:-1:1) = bitsum(maxdim:-1:1) ./ cumsum([sum(1:n-maxdim), n-maxdim+1 : n-1]);
%%%%%%%%%% Computation of correlation estimates and SIGMA for higher dimensions %%%%%%%%%%

for m = 2 : maxdim

   % Indexing in vector space once again follows the rules set out above. Multiplication
   % is done by moving up column by column into north-west direction, so counter I runs
   % backwards in the below WHILE loop until the Mth column (from the left) is reached:
   i = n;
   while i - m

      % Multiplication is not defined on UINT8 variables and translating the columns
      % twice, once from UINT8 to DOUBLE integer and then back to UINT8, would be
      % inefficient, so it is better to sum entries (this operation - undocumented by
      % MATLAB - is defined, and even faster than the documented FIND function!) and
      % compare them against the value 2:
      b(i + (m-1 : i-2)*n - sum(1:m-1) - cumsum(m : i-1)) = ...
         sum([ b(i   + (m-1 : i-2)*n - sum(1:m-1) - cumsum(m   : i-1)); ...
               b(i-1 + (m-2 : i-3)*n - sum(1:m-2) - cumsum(m-1 : i-2)) ]) == 2;

      % The sum over each column is computed immediately after that column has been
      % updated. To store the column sums, the vector SUMS already used above for the row
      % sums is recycled (this is more memory-efficient than clearing the above SUMS
      % vector and defining a new vector of the column sums, because in the latter case,
      % MATLAB's memory space will end up being fragmented by variables K and C added to
      % the memory in the meantime!):
      sums(i) = sum(b(i + (m-1 : i-2)*n - sum(1:m-1) - cumsum(m : i-1)));
      i = i - 1;
   end

   c(m-1)     = sum(sums(m+1:n)) / sum(1:n-m);
   sigma(m-1) = 2*sqrt(k^m + 2*k.^(m-(1:m-1))*(c1(1).^(2*(1:m-1)))'...  % could use above
                + (m-1)^2*c1(1)^(2*m) - m^2*k*c1(1)^(2*m-2));           % integer-power
end                                                                     % sub-functions

%%%%%%%%%%%%%%% Computation of the BDS statistic and level of significance %%%%%%%%%%%%%%%

w = sqrt(n-(2:maxdim)+1) .* (c-c1(2:maxdim).^(2:maxdim)) ./ sigma; % or use sub-functions
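What the WHILE loop computes in vectorised form is simply this: a pair of m-histories (i, j) is "close" in dimension m when all m consecutive coordinate pairs lie within epsilon, and C(m) is the fraction of such pairs. A direct, unvectorised Python sketch of that definition (mine, for cross-checking only, far slower than the bit-wise algorithm):

```python
# Correlation sums C(2), ..., C(maxdim) by direct pair enumeration.
def correlation_sums(series, eps, maxdim):
    n = len(series)
    close = lambda i, j: abs(series[i] - series[j]) <= eps
    c = []
    for m in range(2, maxdim + 1):
        count, total = 0, 0
        # m-histories start at i = 0 .. n-m, matching SUM(1:N-M) pairs in MATLAB
        for i in range(n - m + 1):
            for j in range(i + 1, n - m + 1):
                total += 1
                if all(close(i + d, j + d) for d in range(m)):
                    count += 1
        c.append(count / total)
    return c
```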


% % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % %
% % % % % % % % % % % % % % Sub-function BDSNOBIT.M ends here % % % % % % % % % % % % % %
% % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % %

% REFERENCES:
%
% Brock, William, Davis Dechert & José Scheinkman (1987), "A Test for Independence
%    Based on the Correlation Dimension", University of Wisconsin-Madison, Social
%    Science Research Working Paper, no. 8762
%
% Brock, William, Davis Dechert, José Scheinkman & Blake LeBaron (1996), "A test for
%    independence based on the correlation dimension", Econometric Reviews, vol. 15,
%    no. 3 (August), pp. 197-235, revised version of Brock et al. (1987)
%
% Dechert, Davis (1988), "BDS STATS: A Program to Calculate the Statistics of the
%    Grassberger-Procaccia Correlation Dimension Based on the Paper 'A Test for
%    Independence' by W. A. Brock, W. D. Dechert and J. A. Scheinkman", version 8.21
%    (latest), MS-DOS software available on gopher.econ.wisc.edu
%
% Kanzler, Ludwig (1998), "Very Fast and Correctly Sized Estimation of the BDS
%    Statistic", Oxford University, Department of Economics, working paper, available
%    on http://users.ox.ac.uk/~econlrk
%
% LeBaron, Blake (1988, 1990, 1997a), "BDSTEST.C", version June 1997 (latest), C source
%    code available on gopher.econ.wisc.edu
%
% LeBaron, Blake (1997b), "A Fast Algorithm for the BDS Statistic", Studies in
%    Nonlinear Dynamics and Econometrics, vol. 2, pp. 53-59
%
% ACKNOWLEDGEMENT:
%
% I am grateful to Blake LeBaron for giving me the exclusive opportunity to beta-test
% his C programme in its compiled version for MATLAB 5 and thus enabling me to compare
% the two programmes directly. I have benefited from the many associated discussions.

% End of file.


Programme 2: Significance of the BDS Statistic in Small Samples

function sig = bdssig(w, n, m, eps)
%BDSSIG Significance level of the BDS statistic in small samples
%
% SIG = BDSSIG (W, N, M, EPS) evaluates the significance of BDS statistics W under the
% null hypothesis of iidness, using Kanzler's (1998) finite-sample quantile values.
%
% The significance levels can only assume 0.005, 0.010, 0.025, 0.050 or 1 (indicates
% failure to reject the null hypothesis) in value. These levels must be doubled to
% conduct what is normally a two-sided BDS test.
%
% N is the size of the sample on which the BDS statistics were computed.
%
% M can be either a scalar or a vector (corresponding to W) and represents
% the embedding dimension(s) for which the BDS statistics were computed.
% Only integers between 2 and 15 inclusive are permitted.
%
% EPS is the dimensional distance for which the BDS statistics were calculated.
% This is given in units of the standard deviation of approximately normally
% distributed samples, and only values 0.5, 1.0, 1.5 and 2.0 are allowed.
%
% See Kanzler (1998) on how estimation of the BDS statistic is to be correctly
% sized in normally as well as non-normally distributed samples. As the paper
% shows, EPS = 0.5 or 1.0 yields in many cases BDS distributions which are badly
% shaped, and while it is in principle possible to use this function to evaluate
% the significance of BDS statistics computed for either of these values, the
% results may not be very reliable, in particular if the number of observations
% is not close to one of the sample sizes for which the BDS distribution is
% tabulated in Kanzler (1998).
%
% Note also that estimation of any BDS statistics evaluated through this function
% must be based on an algorithm which makes use of the most efficient estimators
% of the various correlation integrals on which the BDS statistic is based. The
% author's own function BDS.M may be the only function for which this is true.
% See Kanzler (1998) on why this issue can be crucial to correctly sized estimation
% of the BDS statistic.
%
% Requires the MATLAB Statistics Toolbox.
%
% The author assumes no responsibility for errors or damage resulting from usage. All
% rights reserved. Usage of the programme in applications and alterations of the code
% should be referenced. This script may be redistributed if nothing has been added or
% removed and nothing is charged. Positive or negative feedback would be appreciated.
%
% Copyright (c) 15 Sept. 1998 by Ludwig Kanzler
% Department of Economics, University of Oxford
% Postal: Christ Church, Oxford OX1 1DP, U.K.
% E-mail: ludwig.kanzler@economics.oxford.ac.uk
% Homepage: http://users.ox.ac.uk/~econlrk
% $ Revision: 1.0 $   $ Date: 16 September 1998 $

%%%%%%%%%%%%%%%%%%%%%%%%%%% Check validity of input arguments %%%%%%%%%%%%%%%%%%%%%%%%%%%

mcases    = 2 : 15;
epscases  = [0.5 1.0 1.5 2.0];
ncases    = [50 100 250 500 750 1000 2500];
siglevels = [0.005 0.010 0.025 0.050 1 0.050 0.025 0.010 0.005];

if nargin < 1
   error('This function needs some argument input!')
elseif length(w) ~= length(w(:))
   error('Cannot evaluate a matrix of BDS statistics; input must be a scalar or vector.')
elseif nargin < 2
   n = inf;
end
if nargin < 3
   m = 2 : length(w);
elseif length(unique([mcases, m(:)'])) ~= 14
   error('Cannot handle embedding dimension other than integers between 2 and 15.')
elseif length(m) ~= length(w) & length(m) ~= 1
   error('Cannot handle a vector of embedding dimensions which is not as long as W.')
elseif length(m) == 1
   m = m * ones(1, length(w));
end
if nargin < 4
   eps = 1.5;
elseif ~sum(eps == epscases)
   error('Cannot handle a dimensional distance other than 0.5, 1.0, 1.5 or 2.0.')
end
if n <= 5000
%%%%%%%%%%%%%%%%%%%%%%%%%% Setup of the quantile look-up table %%%%%%%%%%%%%%%%%%%%%%%%%%

% The quantile values are taken from Kanzler (1998) and are found along dimension 1
% (with the corresponding values for N(0,1) in parentheses):
%    <  0.5% (-2.58)
%    <  1.0% (-2.33)
%    <  2.5% (-1.96)
%    <  5.0% (-1.65)
%    > 95.0% ( 1.65)
%    > 97.5% ( 1.96)
%    > 99.0% ( 2.33)
%    > 99.5% ( 2.58)
% Embedding dimensions m = [2 3 4 5 6 7 8 9 10 11 12 13 14 15] are along dimension 2.
% Sample sizes n = [50 100 250 500 750 1000 2500] are along dimension 3.
% Dimensional distances in units of the standard deviation of a normally distributed
% sample eps = [0.5 1.0 1.5 2.0] are along dimension 4.

quants = NaN * ones(8, 14, 7, 4);


% n = 50, eps = 0.5 (c1 ~ 0.27)
quants(1:8, 1:12, 1, 1) = [...
-22.66 -27.66 -31.41 -40.37 -54.26
-13.87 -16.90 -19.92 -24.97 -28.85
-8.01 -9.87 -12.09 -14.93 -15.79
-5.75 -6.94 -8.74 -10.85 -10.85
5.66
6.88
9.14 13.14 18.81
8.27 10.05 13.92 20.89 31.30
13.99 16.09 23.76 34.03 56.16
21.47 27.66 37.27 57.80 89.30

-48.20
-25.56
-13.73
-9.39
15.41
36.99
74.44
118.37

-40.93 -34.41 -28.99 -26.20 -22.25 -22.19


-21.97 -18.59 -16.45 -15.29 -14.53 -14.65
-11.88 -10.34 -9.30 -8.68 -8.71 -8.58
-7.98 -6.89 -6.33 -5.95 -5.81 -5.80
-0.68 -0.53 -0.40 -0.29 -0.21 -0.15
-0.44 -0.41 -0.30 -0.22 -0.15 -0.10
73.59 -0.29 -0.22 -0.15 -0.10 -0.07
147.67 45.58 -0.18 -0.12 -0.08 -0.05];

quants(1:8, 13:14, 1, 1) = [...
 -22.79 -24.73
 -14.99 -16.43
  -8.98  -9.75
  -5.89  -6.29
  -0.11  -0.07
  -0.07  -0.05
  -0.04  -0.03
  -0.03  -0.02];
% n = 50, eps = 1.0 (c1 ~ 0.51)
quants(1:8, 1:14, 1, 2) = [...
 -4.66 -5.12 -5.55 -6.05 -6.45 -6.99 -7.46 -7.65 -7.89 -7.72 -8.09 -8.60 -9.37 -10.87
 -4.10 -4.42 -4.67 -5.18 -5.49 -5.89 -6.27 -6.45 -6.57 -6.65 -6.69 -6.95 -7.70  -8.25
 -3.35 -3.55 -3.78 -4.10 -4.37 -4.58 -4.92 -5.06 -5.13 -5.13 -5.06 -5.25 -5.45  -5.83
 -2.84 -2.99 -3.12 -3.32 -3.54 -3.75 -4.01 -4.12 -4.20 -4.16 -4.03 -4.06 -4.17  -4.28
  2.76  2.89  3.03  3.27  3.64  4.11  4.81  5.66  6.60  7.15  6.60  2.59 -0.18  -0.15
  3.50  3.70  3.94  4.35  4.98  5.74  6.81  8.33 10.21 11.97 13.38 12.07  4.40  -0.10
  4.52  4.90  5.25  5.97  6.78  8.21  9.89 12.32 15.44 20.26 24.66 27.84 26.77  16.38
  5.43  5.82  6.46  7.24  8.60 10.65 12.56 16.11 21.04 26.66 34.22 40.93 46.54  46.35];

% n = 50, eps = 1.5 (c1 ~ 0.71)
quants(1:8, 1:14, 1, 3) = [...
 -4.09 -4.01 -4.14 -4.08 -4.15 -4.30 -4.38 -4.53 -4.98 -5.32 -5.64 -6.19 -7.05 -7.52
 -3.69 -3.65 -3.75 -3.69 -3.75 -3.92 -3.98 -4.12 -4.37 -4.64 -4.85 -5.19 -5.76 -6.35
 -3.08 -3.12 -3.19 -3.19 -3.25 -3.32 -3.36 -3.49 -3.63 -3.74 -3.94 -4.21 -4.46 -4.74
 -2.63 -2.67 -2.73 -2.75 -2.80 -2.84 -2.88 -2.97 -3.05 -3.14 -3.24 -3.40 -3.55 -3.74
  2.46  2.46  2.47  2.46  2.44  2.47  2.53  2.60  2.67  2.76  2.85  2.95  3.07  3.20
  3.01  3.04  3.07  3.07  3.14  3.27  3.34  3.48  3.64  3.78  3.96  4.25  4.54  4.87
  3.64  3.69  3.76  3.91  4.05  4.26  4.43  4.62  4.92  5.22  5.65  6.13  6.54  7.09
  4.15  4.11  4.26  4.57  4.70  4.96  5.22  5.56  6.05  6.40  6.90  7.41  8.29  9.19];
% n = 50, eps = 2.0 (c1 ~ 0.84)
quants(1:8, 1:14, 1, 4) = [...
 -4.86 -4.77 -4.74 -4.67 -4.83 -4.89 -4.86 -5.14 -5.33 -5.48 -5.73 -6.05 -6.45 -6.89
 -4.42 -4.34 -4.35 -4.28 -4.40 -4.48 -4.52 -4.62 -4.78 -4.88 -5.11 -5.32 -5.65 -5.94
 -3.69 -3.68 -3.67 -3.69 -3.78 -3.79 -3.86 -3.96 -4.02 -4.09 -4.26 -4.41 -4.63 -4.87
 -3.02 -3.09 -3.11 -3.15 -3.23 -3.25 -3.33 -3.38 -3.44 -3.53 -3.60 -3.69 -3.82 -3.98
  2.83  2.80  2.79  2.76  2.72  2.72  2.71  2.71  2.67  2.68  2.65  2.62  2.61  2.62
  3.50  3.42  3.46  3.44  3.44  3.42  3.40  3.40  3.41  3.39  3.44  3.46  3.48  3.52
  4.23  4.26  4.19  4.22  4.27  4.22  4.25  4.24  4.30  4.40  4.48  4.53  4.55  4.74
  4.70  4.77  4.71  4.72  4.81  4.82  4.91  4.93  5.08  5.16  5.21  5.30  5.44  5.53];

% n = 100, eps = 0.5 (c1 ~ 0.27)


quants(1:8, 1:14, 2, 1) = [...
-5.33 -6.39 -7.91 -9.88 -11.20
-4.66 -5.51 -6.74 -8.49 -9.70
-3.78 -4.50 -5.56 -6.90 -8.11
-3.18 -3.71 -4.57 -5.67 -6.93
3.24 3.83 4.77 6.61
9.80
4.14 4.90 6.21 8.75 13.58
5.49 6.35 8.22 11.79 19.23
6.49 7.74 9.72 14.34 23.76

-9.92
-8.72
-7.44
-6.44
14.90
21.82
33.76
43.12

-8.48 -7.07 -6.10


-7.42 -6.29 -5.49
-6.27 -5.35 -4.59
-5.45 -4.62 -3.98
20.07 -1.13 -0.96
34.20 19.50 -0.82
58.23 87.29 -0.61
79.06 130.73 137.15

-5.59
-4.88
-4.04
-3.50
-0.78
-0.67
-0.56
-0.47

-5.07
-4.48
-3.66
-3.14
-0.63
-0.53
-0.45
-0.39

-4.83
-4.14
-3.39
-2.86
-0.50
-0.43
-0.35
-0.30

-4.58
-3.90
-3.20
-2.64
-0.41
-0.34
-0.28
-0.24

-4.41
-3.83
-3.03
-2.50
-0.33
-0.27
-0.21
-0.18];

% n = 100, eps = 1.0 (c1 ~ 0.52)


quants(1:8, 1:14, 2, 2) =
-3.16 -3.23 -3.32 -3.42
-2.88 -2.93 -3.00 -3.12
-2.47 -2.52 -2.58 -2.66
-2.12 -2.16 -2.21 -2.29
2.16 2.20 2.28 2.41
2.64 2.73 2.85 3.05
3.25 3.37 3.56 3.87
3.72 3.81 4.06 4.39

[...
-3.66
-3.27
-2.80
-2.40
2.60
3.28
4.18
4.94

-3.90
-3.50
-2.93
-2.53
2.80
3.67
4.77
5.58

-4.20
-3.72
-3.14
-2.69
3.13
4.16
5.42
6.37

-4.41
-3.97
-3.34
-2.87
3.61
4.84
6.40
7.58

-4.57
-4.10
-3.49
-3.03
4.18
5.68
7.74
9.39

-4.57
-4.12
-3.52
-3.07
5.01
6.96
9.92
12.14

-4.40
-4.01
-3.42
-3.03
6.01
8.84
12.57
15.70

-4.27
-3.85
-3.30
-2.90
7.12
10.88
16.04
20.87

-4.14
-3.73
-3.18
-2.76
7.81
12.77
21.06
27.11

-4.03
-3.63
-3.06
-2.64
7.05
14.23
26.29
35.25];

-3.14
-2.86
-2.49
-2.16
2.05
2.59
3.23
3.72

-3.08
-2.86
-2.48
-2.17
2.06
2.65
3.31
3.86

-3.13
-2.85
-2.48
-2.16
2.13
2.72
3.48
4.05

-3.15
-2.89
-2.50
-2.16
2.18
2.82
3.64
4.26

-3.23
-2.95
-2.50
-2.17
2.24
2.94
3.87
4.46

-3.21
-2.95
-2.53
-2.19
2.31
3.09
4.10
4.84

-3.27
-2.99
-2.56
-2.22
2.39
3.25
4.35
5.16

-3.38
-3.03
-2.60
-2.25
2.50
3.41
4.64
5.56

-3.49
-3.15
-2.63
-2.28
2.62
3.60
5.02
6.10];

-3.44
-3.14
-2.73
-2.36
2.15
2.67
3.34
3.78

-3.43
-3.15
-2.74
-2.37
2.14
2.67
3.37
3.84

-3.47
-3.16
-2.72
-2.37
2.14
2.72
3.39
3.92

-3.46
-3.18
-2.74
-2.38
2.14
2.71
3.40
3.93

-3.48
-3.20
-2.76
-2.37
2.13
2.72
3.44
4.00

-3.48
-3.16
-2.76
-2.39
2.13
2.75
3.48
4.04

-3.56
-3.24
-2.80
-2.41
2.13
2.77
3.56
4.07

-3.56
-3.26
-2.81
-2.43
2.14
2.77
3.59
4.10

-3.62
-3.31
-2.82
-2.44
2.14
2.78
3.60
4.23];

-8.03
-7.40
-6.55
-5.78

-7.35
-6.95
-6.30
-5.83

-6.27
-5.91
-5.40
-5.00

% n = 100, eps = 1.5 (c1 ~ 0.71)


quants(1:8, 1:14, 2, 3) =
-3.15 -3.12 -3.15 -3.14
-2.88 -2.88 -2.87 -2.87
-2.45 -2.48 -2.44 -2.47
-2.09 -2.11 -2.13 -2.12
2.02 2.02 2.00 2.03
2.47 2.48 2.45 2.49
2.98 3.01 3.03 3.07
3.27 3.41 3.45 3.50

[...
-3.10
-2.86
-2.49
-2.15
2.02
2.54
3.16
3.61

% n = 100, eps = 2.0 (c1 ~ 0.84)


quants(1:8, 1:14, 2, 4) =
-3.66 -3.60 -3.56 -3.52
-3.28 -3.24 -3.25 -3.25
-2.73 -2.75 -2.77 -2.81
-2.28 -2.33 -2.37 -2.38
2.26 2.22 2.17 2.17
2.77 2.71 2.69 2.67
3.34 3.35 3.32 3.32
3.73 3.74 3.76 3.78

[...
-3.43
-3.17
-2.76
-2.37
2.17
2.66
3.32
3.74

% n = 250, eps = 0.5 (c1 ~ 0.27)


quants(1:8, 1:14, 3, 1) =
-3.25 -3.64 -4.32 -5.44
-2.94 -3.32 -3.91 -4.85
-2.54 -2.86 -3.29 -4.11
-2.18 -2.40 -2.79 -3.47

[...
-6.76
-6.19
-5.19
-4.45

-5.29
-5.02
-4.59
-4.25

-4.57
-4.33
-3.95
-3.65

-4.01
-3.77
-3.46
-3.17

-3.58
-3.35
-3.05
-2.80

-3.22
-3.02
-2.71
-2.48

-2.90
-2.72
-2.45
-2.23



2.27
2.81
3.49
3.96

2.50
3.13
3.90
4.49

2.98
3.75
4.70
5.42

3.79
4.78
6.00
7.01


5.19 7.68 12.18 18.51 13.10 -1.69


6.63 10.09 16.49 28.55 43.94 -1.52
8.37 13.11 22.93 42.19 73.31 98.79
9.81 15.49 28.61 54.34 102.44 165.50

-1.44
-1.34
-1.20
-1.08

-1.23
-1.13
-1.03
-0.94

-1.05
-0.96
-0.87
-0.82

-0.90
-0.82
-0.74
-0.69];

% n = 250, eps = 1.0 (c1 ~ 0.52)


quants(1:8, 1:14, 3, 2) =
-2.63 -2.57 -2.58 -2.61
-2.42 -2.38 -2.39 -2.40
-2.09 -2.11 -2.09 -2.10
-1.81 -1.83 -1.83 -1.84
1.83 1.88 1.88 1.95
2.26 2.30 2.35 2.44
2.76 2.82 2.93 3.01
3.08 3.15 3.31 3.47

[...
-2.67
-2.44
-2.13
-1.86
2.03
2.54
3.18
3.70

-2.70
-2.48
-2.18
-1.90
2.15
2.69
3.40
3.97

-2.81
-2.59
-2.24
-1.97
2.25
2.88
3.78
4.39

-2.96
-2.72
-2.37
-2.06
2.45
3.12
4.11
4.91

-3.12
-2.87
-2.51
-2.20
2.71
3.49
4.58
5.38

-3.32
-3.06
-2.71
-2.37
3.05
3.93
5.21
6.49

-3.51
-3.28
-2.91
-2.57
3.56
4.68
6.22
7.70

-3.53
-3.33
-3.01
-2.73
4.21
5.60
7.76
9.43

-3.44
-3.24
-2.99
-2.76
5.05
6.99
9.79
12.14

-3.24
-3.08
-2.83
-2.63
6.33
8.82
12.75
15.87];

-2.55
-2.37
-2.08
-1.81
1.87
2.33
2.90
3.27

-2.51
-2.32
-2.04
-1.79
1.89
2.35
2.94
3.28

-2.49
-2.31
-2.02
-1.77
1.92
2.40
3.02
3.41

-2.49
-2.29
-2.03
-1.77
1.94
2.44
3.09
3.52

-2.45
-2.29
-2.01
-1.75
1.98
2.52
3.15
3.66

-2.46
-2.27
-2.00
-1.76
2.02
2.55
3.28
3.85

-2.44
-2.26
-1.98
-1.75
2.06
2.63
3.46
3.97

-2.43
-2.25
-1.99
-1.75
2.11
2.73
3.59
4.20

-2.43
-2.25
-1.99
-1.74
2.16
2.82
3.73
4.42];

-2.83
-2.59
-2.23
-1.94
1.83
2.29
2.79
3.17

-2.77
-2.56
-2.22
-1.93
1.83
2.28
2.81
3.14

-2.74
-2.51
-2.22
-1.93
1.84
2.28
2.83
3.18

-2.76
-2.52
-2.19
-1.91
1.85
2.29
2.83
3.20

-2.73
-2.50
-2.18
-1.91
1.85
2.29
2.81
3.24

-2.68
-2.49
-2.18
-1.89
1.85
2.31
2.84
3.22

-2.68
-2.48
-2.17
-1.89
1.87
2.33
2.88
3.24

-2.66
-2.45
-2.15
-1.89
1.88
2.35
2.92
3.32

-2.62
-2.43
-2.13
-1.88
1.88
2.36
2.93
3.36];

-6.40
-5.92
-5.15
-4.38
5.25
6.61
8.50
9.75

-7.60
-7.22
-6.55
-5.78
8.05
10.41
13.55
15.80

-6.95
-6.69
-6.33
-6.00
13.57
17.88
23.17
27.76

-5.99 -5.12 -4.43 -3.88


-5.74 -4.93 -4.28 -3.74
-5.44 -4.66 -4.03 -3.54
-5.20 -4.46 -3.85 -3.35
22.54 31.55 -2.28 -2.00
31.94 49.71 -2.08 -1.89
44.46 84.82 133.43 -1.73
55.17 109.55 198.94 264.86

-2.46
-2.27
-2.00
-1.75
1.93
2.43
3.04
3.50

-2.48
-2.30
-2.03
-1.76
2.01
2.55
3.23
3.68

-2.54
-2.36
-2.07
-1.81
2.12
2.71
3.47
4.02

-2.65
-2.47
-2.16
-1.88
2.25
2.95
3.79
4.41

-2.80
-2.61
-2.28
-1.98
2.47
3.21
4.15
4.84

-3.01
-2.78
-2.46
-2.14
2.76
3.57
4.58
5.51

-3.24
-3.01
-2.66
-2.35
3.16
4.06
5.23
6.24

-3.43
-3.22
-2.91
-2.59
3.64
4.74
6.31
7.34

-3.47
-3.31
-3.04
-2.80
4.34
5.71
7.75
9.28];

-2.45
-2.25
-1.96
-1.69
1.74
2.15

-2.44
-2.23
-1.94
-1.69
1.76
2.17

-2.41
-2.22
-1.93
-1.67
1.76
2.20

-2.37
-2.19
-1.91
-1.67
1.79
2.23

-2.33
-2.17
-1.89
-1.66
1.82
2.28

-2.29
-2.14
-1.89
-1.65
1.85
2.33

-2.29
-2.12
-1.88
-1.65
1.89
2.40

-2.28
-2.10
-1.87
-1.64
1.94
2.45

-2.25
-2.09
-1.85
-1.63
1.98
2.51

% n = 250, eps = 1.5 (c1 ~ 0.71)


quants(1:8, 1:14, 3, 3) =
-2.72 -2.70 -2.67 -2.63
-2.48 -2.49 -2.46 -2.43
-2.13 -2.12 -2.14 -2.13
-1.81 -1.81 -1.85 -1.84
1.82 1.82 1.82 1.84
2.22 2.22 2.25 2.25
2.68 2.67 2.74 2.77
2.97 3.01 3.09 3.12

[...
-2.60
-2.40
-2.10
-1.82
1.85
2.29
2.84
3.19

% n = 250, eps = 2.0 (c1 ~ 0.84)


quants(1:8, 1:14, 3, 4) =
-2.90 -2.88 -2.87 -2.86
-2.62 -2.64 -2.63 -2.61
-2.25 -2.26 -2.24 -2.24
-1.91 -1.92 -1.93 -1.93
1.88 1.86 1.86 1.86
2.30 2.29 2.29 2.28
2.79 2.78 2.75 2.78
3.16 3.15 3.09 3.07

[...
-2.89
-2.59
-2.24
-1.93
1.85
2.27
2.80
3.13

% n = 500, eps = 0.5 (c1 ~ 0.28)


quants(1:8, 1:14, 4, 1) =
-2.88 -3.01 -3.35 -4.01
-2.63 -2.78 -3.04 -3.67
-2.22 -2.38 -2.62 -3.09
-1.89 -2.01 -2.24 -2.64
2.01 2.13 2.37 2.87
2.46 2.64 2.92 3.55
2.95 3.27 3.72 4.41
3.32 3.61 4.14 5.03

[...
-5.05
-4.52
-3.87
-3.32
3.68
4.55
5.74
6.58

-3.43
-3.31
-3.11
-2.95
-1.73
-1.64
-1.53
-1.45

-3.07
-2.95
-2.77
-2.63
-1.51
-1.43
-1.33
-1.27];

% n = 500, eps = 1.0 (c1 ~ 0.52)


quants(1:8, 1:14, 4, 2) =
-2.55 -2.52 -2.53 -2.51
-2.34 -2.33 -2.30 -2.30
-2.01 -2.00 -1.99 -1.96
-1.72 -1.72 -1.71 -1.70
1.77 1.77 1.77 1.81
2.14 2.15 2.20 2.26
2.58 2.62 2.64 2.76
2.89 2.95 3.00 3.10

[...
-2.46
-2.27
-1.99
-1.73
1.87
2.34
2.89
3.28

% n = 500, eps = 1.5 (c1 ~ 0.71)


quants(1:8, 1:14, 4, 3) =
-2.61 -2.54 -2.54 -2.54
-2.37 -2.35 -2.31 -2.30
-2.01 -2.02 -2.00 -1.98
-1.71 -1.71 -1.71 -1.71
1.74 1.72 1.73 1.74
2.10 2.09 2.12 2.11

[...
-2.48
-2.26
-1.98
-1.69
1.74
2.13



2.50
2.80

2.54
2.86

2.52
2.86

2.57
2.94

2.60
2.93


2.63
2.99

2.65
3.04

2.71
3.05

2.75
3.14

2.85
3.25

2.88
3.33

2.98
3.46

3.09
3.53

3.21
3.68];

-2.63
-2.40
-2.07
-1.78
1.75
2.11
2.57
2.87

-2.63
-2.40
-2.07
-1.78
1.75
2.10
2.57
2.88

-2.64
-2.39
-2.06
-1.78
1.74
2.10
2.56
2.92

-2.60
-2.40
-2.07
-1.78
1.74
2.12
2.60
2.92

-2.55
-2.37
-2.05
-1.78
1.74
2.15
2.63
2.95

-2.52
-2.34
-2.04
-1.78
1.75
2.16
2.66
2.99

-2.49
-2.31
-2.02
-1.76
1.77
2.18
2.69
2.99

-2.46
-2.30
-2.01
-1.74
1.77
2.19
2.71
3.06

-2.46
-2.28
-1.98
-1.73
1.78
2.20
2.75
3.10];

-2.45
-2.25
-1.94
-1.69
1.87
2.35
2.88
3.27

-2.44
-2.23
-1.95
-1.70
1.91
2.43
3.00
3.50

-2.47
-2.28
-1.97
-1.73
2.01
2.52
3.21
3.73

-2.51
-2.32
-2.04
-1.77
2.13
2.68
3.46
3.94

-2.61
-2.40
-2.11
-1.83
2.29
2.90
3.64
4.22

-2.78
-2.55
-2.23
-1.96
2.49
3.17
4.02
4.70

-2.96
-2.76
-2.42
-2.13
2.74
3.55
4.53
5.22

-3.24
-3.03
-2.66
-2.34
3.13
4.03
5.16
6.06

-3.46
-3.24
-2.91
-2.59
3.70
4.71
6.21
7.30];

-2.45
-2.27
-1.96
-1.69
1.72
2.11
2.58
2.91

-2.40
-2.24
-1.95
-1.68
1.73
2.12
2.61
2.97

-2.39
-2.21
-1.93
-1.67
1.74
2.17
2.63
3.02

-2.36
-2.17
-1.91
-1.65
1.75
2.19
2.69
3.08

-2.34
-2.16
-1.90
-1.65
1.78
2.23
2.75
3.16

-2.31
-2.14
-1.87
-1.63
1.79
2.25
2.84
3.29

-2.28
-2.10
-1.85
-1.61
1.80
2.28
2.93
3.33

-2.26
-2.09
-1.84
-1.60
1.84
2.32
2.95
3.43

-2.23
-2.07
-1.82
-1.60
1.87
2.39
3.04
3.56];

-2.57
-2.36
-2.03
-1.73
1.69
2.04
2.48
2.73

-2.55
-2.36
-2.03
-1.73
1.69
2.04
2.48
2.77

-2.55
-2.36
-2.04
-1.73
1.69
2.06
2.52
2.80

-2.53
-2.35
-2.04
-1.74
1.70
2.08
2.55
2.88

-2.51
-2.34
-2.02
-1.72
1.71
2.08
2.57
2.88

-2.51
-2.31
-2.00
-1.72
1.71
2.09
2.59
2.91

-2.50
-2.30
-1.99
-1.72
1.72
2.12
2.60
2.95

-2.49
-2.29
-1.97
-1.71
1.73
2.13
2.62
2.97

-2.46
-2.25
-1.97
-1.70
1.75
2.14
2.64
2.99];

-6.63
-6.10
-5.30
-4.59
5.53
6.81
8.59
9.84

-8.03
-7.69
-7.04
-6.15
8.69
10.86
13.50
16.19

-7.33
-7.15
-6.85
-6.60
14.19
18.53
24.27
28.67

-6.36 -5.50 -4.79 -4.21


-6.19 -5.35 -4.66 -4.10
-5.97 -5.16 -4.49 -3.94
-5.76 -4.99 -4.34 -3.81
24.40 36.34 -3.02 -2.66
32.76 49.56 -2.85 -2.56
46.74 89.25 137.59 -2.44
56.34 115.13 190.18 -2.25

-2.40
-2.21
-1.92
-1.65
1.88
2.31
2.82
3.25

-2.40
-2.21
-1.93
-1.68
1.95
2.40
3.00
3.39

-2.44
-2.25
-1.97
-1.71
2.04
2.54
3.15
3.64

-2.52
-2.34
-2.06
-1.80
2.12
2.66
3.35
3.84

% n = 500, eps = 2.0 (c1 ~ 0.84)


quants(1:8, 1:14, 4, 4) =
-2.65 -2.65 -2.70 -2.63
-2.43 -2.43 -2.45 -2.43
-2.09 -2.08 -2.08 -2.09
-1.76 -1.78 -1.79 -1.79
1.78 1.76 1.74 1.73
2.13 2.14 2.14 2.13
2.58 2.57 2.58 2.58
2.91 2.87 2.88 2.87

[...
-2.62
-2.39
-2.08
-1.79
1.74
2.12
2.55
2.88

% n = 750, eps = 1.0 (c1 ~ 0.52)


quants(1:8, 1:14, 5, 2) =
-2.52 -2.53 -2.49 -2.48
-2.32 -2.31 -2.30 -2.28
-1.98 -1.97 -1.98 -1.97
-1.68 -1.69 -1.69 -1.69
1.72 1.74 1.76 1.79
2.08 2.13 2.15 2.21
2.48 2.59 2.65 2.69
2.78 2.91 2.99 3.09

[...
-2.49
-2.29
-1.96
-1.69
1.80
2.27
2.78
3.16

% n = 750, eps = 1.5 (c1 ~ 0.71)


quants(1:8, 1:14, 5, 3) =
-2.57 -2.56 -2.54 -2.53
-2.34 -2.32 -2.33 -2.30
-2.01 -1.99 -1.98 -1.98
-1.68 -1.69 -1.69 -1.69
1.69 1.68 1.68 1.69
2.02 2.03 2.03 2.06
2.48 2.50 2.48 2.50
2.79 2.78 2.77 2.80

[...
-2.50
-2.29
-1.97
-1.71
1.71
2.07
2.54
2.88

% n = 750, eps = 2.0 (c1 ~ 0.84)


quants(1:8, 1:14, 5, 4) =
-2.66 -2.64 -2.61 -2.65
-2.40 -2.40 -2.40 -2.40
-2.03 -2.03 -2.05 -2.04
-1.72 -1.72 -1.74 -1.74
1.75 1.71 1.71 1.69
2.09 2.09 2.09 2.05
2.53 2.54 2.49 2.47
2.86 2.81 2.77 2.78

[...
-2.58
-2.37
-2.03
-1.73
1.70
2.05
2.47
2.75

% n = 1000, eps = 0.5 (c1 ~ 0.28)


quants(1:8, 1:14, 6, 1) =
-2.66 -2.72 -2.83 -3.17
-2.44 -2.48 -2.59 -2.91
-2.08 -2.11 -2.24 -2.50
-1.77 -1.79 -1.92 -2.16
1.87 1.95 2.10 2.36
2.26 2.37 2.54 2.88
2.72 2.86 3.12 3.54
3.04 3.25 3.52 4.03

[...
-3.82
-3.51
-3.01
-2.58
2.83
3.46
4.31
4.82

-4.92
-4.49
-3.90
-3.31
3.81
4.69
5.75
6.53

-3.73
-3.62
-3.49
-3.37
-2.33
-2.25
-2.16
-2.10];

% n = 1000, eps = 1.0 (c1 ~ 0.52)


quants(1:8, 1:14, 6, 2) =
-2.52 -2.53 -2.47 -2.45
-2.28 -2.29 -2.26 -2.22
-1.96 -1.96 -1.93 -1.94
-1.68 -1.67 -1.66 -1.65
1.68 1.70 1.72 1.75
2.03 2.07 2.11 2.15
2.43 2.50 2.56 2.57
2.74 2.78 2.86 2.89

[...
-2.43
-2.20
-1.92
-1.65
1.78
2.19
2.66
2.99

-2.42
-2.22
-1.90
-1.65
1.82
2.25
2.73
3.13

-2.65
-2.48
-2.15
-1.88
2.28
2.87
3.62
4.25

-2.87
-2.63
-2.31
-2.00
2.50
3.13
3.94
4.63

-3.07
-2.86
-2.51
-2.20
2.79
3.52
4.44
5.21

-3.33
-3.11
-2.77
-2.44
3.19
4.07
5.20
5.94];


% n = 1000, eps = 1.5 (c1 ~ 0.71)


quants(1:8, 1:14, 6, 3) =
-2.53 -2.54 -2.50 -2.50
-2.33 -2.34 -2.30 -2.28
-2.00 -1.99 -2.01 -1.99
-1.69 -1.71 -1.71 -1.71
1.68 1.69 1.68 1.68
2.01 2.02 2.03 2.04
2.43 2.46 2.45 2.48
2.70 2.74 2.78 2.81

[...
-2.47
-2.27
-1.97
-1.71
1.70
2.06
2.49
2.82

-2.46
-2.24
-1.95
-1.67
1.71
2.08
2.53
2.87

-2.44
-2.22
-1.94
-1.67
1.73
2.12
2.55
2.89

-2.41
-2.21
-1.92
-1.66
1.73
2.16
2.58
2.89

-2.38
-2.19
-1.92
-1.66
1.74
2.16
2.65
2.93

-2.34
-2.17
-1.91
-1.65
1.75
2.19
2.69
3.01

-2.31
-2.15
-1.89
-1.63
1.77
2.22
2.73
3.07

-2.27
-2.12
-1.87
-1.63
1.79
2.25
2.78
3.16

-2.26
-2.09
-1.86
-1.62
1.81
2.27
2.84
3.23

-2.23
-2.07
-1.83
-1.61
1.83
2.31
2.90
3.30];

-2.58
-2.35
-2.05
-1.74
1.69
2.06
2.51
2.80

-2.60
-2.36
-2.04
-1.74
1.69
2.07
2.51
2.82

-2.56
-2.35
-2.03
-1.73
1.69
2.07
2.50
2.83

-2.53
-2.33
-2.02
-1.72
1.70
2.07
2.52
2.85

-2.51
-2.30
-2.00
-1.73
1.71
2.08
2.52
2.88

-2.49
-2.28
-2.00
-1.73
1.72
2.08
2.55
2.87

-2.47
-2.26
-1.98
-1.71
1.73
2.09
2.57
2.90

-2.46
-2.25
-1.96
-1.71
1.73
2.11
2.58
2.93];

-4.78
-4.41
-3.82
-3.21
3.53
4.27
5.28
5.93

-6.51
-6.00
-5.23
-4.52
5.18
6.32
7.70
8.69

-8.74
-8.28
-7.22
-6.35
8.20
10.24
12.65
14.35

-8.85
-8.68
-8.41
-8.19
13.75
17.79
22.49
26.23

-7.73 -6.77 -5.93 -5.24


-7.63 -6.66 -5.84 -5.16
-7.45 -6.51 -5.71 -5.05
-7.29 -6.38 -5.60 -4.93
24.50 32.59 -4.41 -3.95
31.87 60.12 95.19 -3.85
43.30 83.54 131.83 -3.66
52.23 105.70 216.13 339.65];

-2.40
-2.20
-1.90
-1.64
1.78
2.18
2.66
3.04

-2.38
-2.22
-1.89
-1.63
1.79
2.20
2.74
3.13

-2.42
-2.22
-1.90
-1.62
1.83
2.24
2.83
3.23

-2.44
-2.22
-1.92
-1.65
1.88
2.31
2.88
3.25

-2.47
-2.25
-1.97
-1.68
1.95
2.39
3.02
3.43

-2.50
-2.31
-2.02
-1.74
2.07
2.53
3.17
3.68

-2.61
-2.41
-2.11
-1.83
2.21
2.72
3.35
3.99

-2.81
-2.63
-2.29
-1.99
2.39
2.97
3.77
4.31];

-2.43
-2.21
-1.92
-1.64
1.65
2.00
2.36
2.62

-2.42
-2.22
-1.92
-1.64
1.67
2.01
2.38
2.67

-2.40
-2.20
-1.90
-1.63
1.67
2.02
2.38
2.73

-2.40
-2.18
-1.89
-1.63
1.68
2.04
2.42
2.74

-2.39
-2.16
-1.88
-1.63
1.69
2.07
2.47
2.77

-2.36
-2.16
-1.88
-1.62
1.71
2.11
2.54
2.80

-2.36
-2.15
-1.86
-1.61
1.72
2.14
2.58
2.84

-2.32
-2.13
-1.84
-1.59
1.75
2.17
2.63
2.89];

-2.54
-2.31
-1.98
-1.71
1.69
2.04
2.45
2.70

-2.53
-2.29
-1.98
-1.71
1.70
2.07
2.42
2.71

-2.51
-2.28
-1.96
-1.71
1.71
2.07
2.44
2.70

-2.51
-2.25
-1.96
-1.69
1.71
2.07
2.45
2.72

-2.50
-2.26
-1.95
-1.68
1.71
2.06
2.45
2.76

-2.47
-2.26
-1.94
-1.67
1.70
2.07
2.47
2.81

-2.46
-2.24
-1.92
-1.66
1.71
2.07
2.51
2.78

-2.44
-2.24
-1.91
-1.65
1.72
2.07
2.49
2.79];

% n = 1000, eps = 2.0 (c1 ~ 0.84)


quants(1:8, 1:14, 6, 4) =
-2.70 -2.65 -2.62 -2.59
-2.40 -2.41 -2.39 -2.37
-2.05 -2.04 -2.04 -2.05
-1.73 -1.72 -1.74 -1.74
1.70 1.68 1.69 1.69
2.04 2.04 2.04 2.05
2.45 2.47 2.45 2.47
2.77 2.76 2.80 2.79

[...
-2.58
-2.37
-2.04
-1.73
1.69
2.04
2.46
2.77

-2.57
-2.37
-2.04
-1.74
1.69
2.04
2.47
2.76

% n = 2500, eps = 0.5 (c1 ~ 0.28)


quants(1:8, 1:14, 7, 1) =
-2.51 -2.53 -2.59 -2.74
-2.31 -2.33 -2.37 -2.55
-1.97 -1.99 -2.06 -2.16
-1.69 -1.70 -1.74 -1.85
1.73 1.75 1.79 1.94
2.07 2.11 2.21 2.36
2.45 2.54 2.69 2.93
2.78 2.85 3.02 3.26

[...
-3.00
-2.77
-2.39
-2.04
2.18
2.68
3.22
3.60

-3.67
-3.35
-2.89
-2.46
2.66
3.19
3.93
4.36

% n = 2500, eps = 1.0 (c1 ~ 0.52)


quants(1:8, 1:14, 7, 2) =
-2.58 -2.52 -2.47 -2.46
-2.35 -2.27 -2.26 -2.24
-1.97 -1.94 -1.91 -1.91
-1.66 -1.66 -1.64 -1.63
1.67 1.70 1.69 1.71
2.02 2.06 2.08 2.10
2.47 2.50 2.53 2.55
2.78 2.77 2.81 2.88

[...
-2.42
-2.22
-1.90
-1.63
1.75
2.12
2.54
2.89

-2.40
-2.20
-1.91
-1.64
1.76
2.16
2.57
2.91

% n = 2500, eps = 1.5 (c1 ~ 0.71)


quants(1:8, 1:14, 7, 3) =
-2.55 -2.54 -2.53 -2.49
-2.34 -2.32 -2.31 -2.29
-1.98 -1.96 -1.95 -1.94
-1.65 -1.66 -1.66 -1.64
1.60 1.60 1.61 1.63
1.95 1.91 1.92 1.93
2.36 2.34 2.34 2.34
2.59 2.60 2.56 2.61

[...
-2.50
-2.27
-1.94
-1.64
1.64
1.95
2.32
2.58

-2.45
-2.23
-1.92
-1.63
1.64
1.98
2.35
2.59

% n = 2500, eps = 2.0 (c1 ~ 0.84)


quants(1:8, 1:14, 7, 4) =
-2.56 -2.57 -2.55 -2.54
-2.33 -2.33 -2.32 -2.30
-1.96 -2.00 -2.00 -1.99
-1.67 -1.70 -1.70 -1.70
1.69 1.70 1.69 1.68
2.06 2.08 2.03 2.01
2.47 2.44 2.45 2.45
2.72 2.71 2.76 2.74

[...
-2.56
-2.32
-1.98
-1.70
1.68
2.03
2.44
2.74

-2.56
-2.33
-1.98
-1.70
1.69
2.03
2.42
2.69


%%%%%%%%%%%%%%%%%%%%%%%%%% Look-up of the appropriate quantiles %%%%%%%%%%%%%%%%%%%%%%%%%%

% Determine in between which two tabulated sample sizes the current n lies:
lower = sum(n >= ncases);
upper = 8 - sum(n <= ncases);

% Fix some special cases; for samples of less than 50, use the values for 50:
if lower == 0
   lower = 1;
% and since there are no tabulated values for n = 750, eps = 0.5, reference to the
% corresponding part of the quantile table must be avoided:
elseif eps == 0.5
   if lower == 5
      lower = 4;
   end
   if upper == 5
      upper = 6;
   end
end

% Determine the significance level in turn for each BDS statistic contained in W:
for i = 1 : length(w)

   % Find the eight quantile values each for the lower and upper sample sizes:
   lowerqus = reshape(quants(1:8, m(i)-1, lower, eps*2), 8, 1);
   if n <= 2500
      upperqus = reshape(quants(1:8, m(i)-1, upper, eps*2), 8, 1);
   else % i.e. approaching standard normality:
      upperqus = [norminv(siglevels(1:4)), norminv(1 - siglevels(6:9))]';
      ncases   = [ncases 5000];
   end

   % Interpolate the quantile values for the actual sample size from the quantile
   % values of the surrounding sample sizes; note that this method may slightly
   % increase the size of a type I error for sample sizes which are not close to one
   % of the tabulated cases; this problem could be mitigated by a response surface
   % yet to be developed.
   if lower ~= upper
      qus = lowerqus + (upperqus - lowerqus) * (n - ncases(lower)) /...
            (ncases(upper) - ncases(lower));
   else
      qus = lowerqus;
   end

   % Find the matching significance levels; at least one of the terms must be 1, or
   % both, so their product yields the overall one-sided significance level:
   sig(i) = siglevels(5 - sum(w(i)<=qus(1:4))) * siglevels(5 + sum(w(i)>=qus(5:8)));
end

%%%%%%%%%%%%%%%%%%%%%%%%% Otherwise use standard-normal look-up %%%%%%%%%%%%%%%%%%%%%%%%%
else
   qus = [norminv(siglevels(1:4)) norminv(1 - siglevels(6:9))]';
   for i = 1 : length(w)
      sig(i) = siglevels(5 - sum(w(i)<=qus(1:4))) * siglevels(5 + sum(w(i)>=qus(5:8)));
   end
end
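The interpolation and look-up logic can be sketched compactly. This Python version is my own illustration, not the paper's code; it takes the two surrounding columns of tabulated quantiles as inputs rather than the full four-dimensional table:

```python
# Linear interpolation between tabulated quantile columns in the sample size,
# then mapping the statistic w to the tabulated significance levels, mirroring
# the SIGLEVELS indexing of BDSSIG (0-based here instead of MATLAB's 1-based).
def small_sample_sig(w, qlower, qupper, n, n_lower, n_upper):
    siglevels = [0.005, 0.010, 0.025, 0.050, 1, 0.050, 0.025, 0.010, 0.005]
    if n_lower != n_upper:
        t = (n - n_lower) / (n_upper - n_lower)
        qus = [a + (b - a) * t for a, b in zip(qlower, qupper)]
    else:
        qus = list(qlower)
    left = sum(w <= q for q in qus[:4])    # lower-tail quantiles w falls below
    right = sum(w >= q for q in qus[4:])   # upper-tail quantiles w exceeds
    # At least one factor is 1, so the product is the one-sided level:
    return siglevels[4 - left] * siglevels[4 + right]
```

With the asymptotic N(0,1) quantiles in both columns, a statistic of 2.0 clears the 95.0% and 97.5% marks but not 99.0%, giving a one-sided level of 0.025.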
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% REFERENCES:
%
% Kanzler, Ludwig (1998), "Very Fast and Correctly Sized Estimation of the BDS
%    Statistic", Oxford University, Department of Economics, working paper, available
%    on http://users.ox.ac.uk/~econlrk

% End of file.
