Time-Series Analysis in
the Frequency Domain
A stationary stochastic process y(t) = {y_t ; t = 0, ±1, ±2, . . .} is one whose mean is constant and whose autocovariances depend only on the temporal separation of the elements:

E(y_t) = μ,   for all t,   and
                                                                     (4.2)
C(y_t, y_s) = γ_{|t−s|}.
On their own, the conditions of (2) constitute the conditions of weak stationarity.
A normal process is completely characterised by its mean and its autocovariances. Therefore, a normal process y(t) which satisfies the conditions for
weak stationarity is also stationary in the strict sense.
Setting s = t shows that

C(y_t, y_t) = V(y_t) = γ_0,                                          (4.3)

where γ_0 is the variance of the process y(t).
The stationarity conditions imply that the autocovariances of y(t) satisfy
the equality
γ_τ = γ_{−τ}.                                                        (4.4)
It follows that the dispersion matrix of n consecutive elements (y_1, . . . , y_n) of the process is

Γ = [ γ_0      γ_1      γ_2      . . .  γ_{n−1} ]
    [ γ_1      γ_0      γ_1      . . .  γ_{n−2} ]
    [ γ_2      γ_1      γ_0      . . .  γ_{n−3} ]                    (4.5)
    [  ...      ...      ...     . . .   ...    ]
    [ γ_{n−1}  γ_{n−2}  γ_{n−3}  . . .  γ_0     ].
The sequences {γ_τ} and {ρ_τ = γ_τ/γ_0} are described as the autocovariance and autocorrelation functions, respectively.
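For a finite sample, these functions are estimated by their sample counterparts. A minimal sketch in Python (numpy is assumed to be available; the function names are our own):

```python
import numpy as np

def sample_autocovariances(y, max_lag):
    """Estimate gamma_tau = (1/n) * sum_t (y_t - ybar)(y_{t-tau} - ybar)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    d = y - y.mean()
    return np.array([np.dot(d[tau:], d[:n - tau]) / n
                     for tau in range(max_lag + 1)])

def sample_autocorrelations(y, max_lag):
    """rho_tau = gamma_tau / gamma_0."""
    g = sample_autocovariances(y, max_lag)
    return g / g[0]
```

The divisor n, rather than n − τ, is the usual convention, since it keeps the matrix of estimated autocovariances positive semi-definite.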
The Filtering of White Noise
A white-noise process is a sequence ε(t) of uncorrelated random variables with mean zero and common variance σ_ε². Thus

E(ε_t) = 0,   for all t,   and
                                                                     (4.6)
E(ε_t ε_s) = { σ_ε²,  if t = s;
             { 0,     if t ≠ s.
Consider the process y(t) that is formed by applying a linear filter ψ(L) to the white-noise sequence:

y(t) = ψ(L)ε(t)

     = ψ_0 ε(t) + ψ_1 ε(t − 1) + ψ_2 ε(t − 2) + · · · + ψ_q ε(t − q)          (4.7)

     = Σ_{i=0}^{q} ψ_i ε(t − i).
Given the value of σ_ε² = V{ε(t)}, the autocovariances of the filtered sequence y(t) = ψ(L)ε(t) may be determined by evaluating the expression
γ_τ = E(y_t y_{t−τ})

    = E[ { Σ_i ψ_i ε_{t−i} } { Σ_j ψ_j ε_{t−τ−j} } ]
                                                                     (4.9)
    = Σ_i Σ_j ψ_i ψ_j E(ε_{t−i} ε_{t−τ−j})

    = σ_ε² Σ_j ψ_j ψ_{j+τ} ;

the final equality follows because E(ε_{t−i} ε_{t−τ−j}) = σ_ε² when i = τ + j and zero otherwise.
The condition under equation (8) guarantees that these quantities are finite, as
is required by the condition of stationarity.
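The formula γ_τ = σ_ε² Σ_j ψ_j ψ_{j+τ} can be checked by filtering a long simulated white-noise sequence (a sketch, assuming numpy is available; the coefficients and the sample size are arbitrary):

```python
import numpy as np

psi = np.array([1.0, 0.5, 0.25])      # filter coefficients psi_0, ..., psi_q
sigma2 = 2.0                          # variance of the white-noise process
q = len(psi) - 1

# Theoretical autocovariances: gamma_tau = sigma2 * sum_j psi_j * psi_{j+tau}
gamma = np.array([sigma2 * np.dot(psi[:q + 1 - tau], psi[tau:])
                  for tau in range(q + 1)])

# Empirical check: y(t) = psi(L) eps(t) for a long white-noise sample
rng = np.random.default_rng(42)
eps = rng.normal(scale=np.sqrt(sigma2), size=400_000)
y = np.convolve(eps, psi, mode="valid")   # y_t = sum_j psi_j * eps_{t-j}
d = y - y.mean()
n = len(y)
gamma_hat = np.array([np.dot(d[tau:], d[:n - tau]) / n
                      for tau in range(q + 1)])
```

With these coefficients the theoretical values are γ_0 = 2.625, γ_1 = 1.25 and γ_2 = 0.5, which the sample estimates approximate closely.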
The z-transform
In the subsequent analysis, it will prove helpful to present the results in the
notation of the z-transform. The z-transform of the infinite sequence y(t) =
{y_t ; t = 0, ±1, ±2, . . .} is defined by

y(z) = Σ_t y_t z^t.                                                  (4.12)
Here, z is a complex number which may be placed on the perimeter of the unit circle, provided that the series converges. Thus z = e^{−iω}, with ω ∈ [0, 2π].
In this notation, the autocovariance generating function of the filtered sequence is

γ(z) = σ_ε² Σ_i Σ_j ψ_i ψ_j z^{i−j}
                                                                     (4.13)
     = σ_ε² Σ_τ { Σ_j ψ_j ψ_{j+τ} } z^τ,   where τ = i − j.
A sample of T = 2n observations can be expressed, without any loss of information, as a weighted sum of sine and cosine functions whose frequencies ω_j = πj/n are spaced equally in the interval [0, π]:

y_t = Σ_{j=0}^{n} { α_j cos(ω_j t) + β_j sin(ω_j t) }.               (4.14)

At the limiting frequencies ω_0 = 0 and ω_n = π, there are

sin(ω_0 t) = sin(0) = 0,        cos(ω_0 t) = cos(0) = 1,
                                                                     (4.15)
sin(ω_n t) = sin(πt) = 0,       cos(ω_n t) = cos(πt) = (−1)^t.
Therefore, equation (14) can be written as

y_t = α_0 + Σ_{j=1}^{n−1} { α_j cos(ω_j t) + β_j sin(ω_j t) } + α_n (−1)^t,      (4.16)

or, more compactly, as

y_t = α_0 + Σ_{j=1}^{n} { α_j cos(ω_j t) + β_j sin(ω_j t) }.                     (4.17)
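This finite decomposition can be demonstrated directly: for a sample of T = 2n points, estimating the coefficients at the harmonic frequencies ω_j = πj/n and summing the sinusoids reproduces the data exactly. A sketch in Python (numpy assumed; the sample is arbitrary):

```python
import numpy as np

T = 12                               # sample size; n = T // 2
n = T // 2
t = np.arange(T)
rng = np.random.default_rng(1)
y = rng.standard_normal(T)

# Coefficients at the harmonic frequencies omega_j = pi*j/n
alpha = np.zeros(n + 1)
beta = np.zeros(n + 1)
alpha[0] = y.mean()                          # omega_0 = 0
alpha[n] = np.mean(y * (-1.0) ** t)          # omega_n = pi, cos(pi*t) = (-1)^t
for j in range(1, n):
    w = np.pi * j / n
    alpha[j] = (2.0 / T) * np.sum(y * np.cos(w * t))
    beta[j] = (2.0 / T) * np.sum(y * np.sin(w * t))

# Reconstruction according to the decomposition above
y_rec = sum(alpha[j] * np.cos(np.pi * j / n * t) +
            beta[j] * np.sin(np.pi * j / n * t) for j in range(n + 1))
```

The 2n coefficients (α_0, α_n and the n − 1 pairs α_j, β_j) carry exactly as much information as the T = 2n data points.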
Frequencies in excess of π cannot be distinguished, in the sampled data, from frequencies within the interval [0, π]. Consider ω* = 2π − ω, where ω < π. Then, for integer values of t,

cos(ω*t) = cos{(2π − ω)t}

         = cos(2πt) cos(ωt) + sin(2πt) sin(ωt)                       (4.18)

         = cos(ωt);

which indicates that ω* and ω are observationally indistinguishable. Here, ω < π is described as the alias of ω* > π.
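The aliasing identity is easily demonstrated numerically: sampled at integer points t, a frequency above π is indistinguishable from its reflection below π (a sketch, assuming numpy; the frequency is arbitrary):

```python
import numpy as np

t = np.arange(64)                    # integer sampling times
omega = 3.0 * np.pi / 4.0            # a frequency below pi
omega_star = 2.0 * np.pi - omega     # its counterpart above pi

low = np.cos(omega * t)
high = np.cos(omega_star * t)        # identical at every integer t
```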
The Spectral Representation of a Stationary Process
By allowing the value of n in the expression (14) to tend to infinity, it is possible to express a sequence of indefinite length in terms of a sum of sine and cosine functions. However, in the limit as n → ∞, the coefficients α_j, β_j tend to vanish; and therefore an alternative representation in terms of differentials is called for.
By writing α_j = dA(ω_j), β_j = dB(ω_j), where A(ω), B(ω) are step functions with discontinuities at the points {ω_j ; j = 0, . . . , n}, the expression (14) can be rendered as

y_t = Σ_j { cos(ω_j t) dA(ω_j) + sin(ω_j t) dB(ω_j) }.               (4.19)
[Figure 1. The graph of 134 observations on the monthly purchase of clothing, after a logarithmic transformation and the removal of a linear trend, together with the corresponding periodogram.]
In the limit, as the number of frequencies increases indefinitely, this becomes an integral over the interval [0, π]:

y_t = ∫_0^π { cos(ωt) dA(ω) + sin(ωt) dB(ω) }.                       (4.20)

Here, cos(ωt) and sin(ωt), and therefore y(t), may be regarded as infinite sequences defined over the entire set of positive and negative integers.
Since A(ω) and B(ω) are discontinuous functions for which no derivatives exist, one must avoid using α(ω)dω and β(ω)dω in place of dA(ω) and dB(ω). Moreover, the integral in equation (20) is a Fourier-Stieltjes integral.
In order to derive a statistical theory for the process that generates y(t), one must make some assumptions concerning the functions A(ω) and B(ω). So far, the sequence y(t) has been interpreted as a realisation of a stochastic process. If y(t) is regarded as the stochastic process itself, then the functions A(ω), B(ω) must, likewise, be regarded as stochastic processes defined over the interval [0, π]. A single realisation of these processes now corresponds to a single realisation of the process y(t).
The first assumption to be made is that the functions A(ω) and B(ω) represent a pair of stochastic processes of zero mean which are indexed on the continuous parameter ω. Thus

E{dA(ω)} = E{dB(ω)} = 0.                                             (4.21)
The second and third assumptions are that the two processes are mutually uncorrelated and that non-overlapping increments within each process are uncorrelated. Thus

E{dA(ω)dB(λ)} = 0,   for all ω, λ,

E{dA(ω)dA(λ)} = 0,   if ω ≠ λ,                                       (4.22)

E{dB(ω)dB(λ)} = 0,   if ω ≠ λ.
The final assumption is that the variance of the increments is given by

V{dA(ω)} = V{dB(ω)} = 2f(ω)dω.                                       (4.23)

It is convenient to assemble the two processes into a complex-valued process by defining

dZ(ω) = {dA(ω) − i dB(ω)}/2,
                                                                     (4.24)
dZ*(ω) = {dA(ω) + i dB(ω)}/2.
These conditions imply that

E{dZ(ω)dZ*(λ)} = 0,   if ω ≠ λ,                                      (4.25)

E{dZ(ω)dZ*(ω)} = f(ω)dω.                                             (4.26)
Using the identities cos(ωt) = (e^{iωt} + e^{−iωt})/2 and sin(ωt) = (e^{iωt} − e^{−iωt})/(2i), equation (20) can be expanded to give

y(t) = ∫_0^π [ {(e^{iωt} + e^{−iωt})/2} dA(ω) − i {(e^{iωt} − e^{−iωt})/2} dB(ω) ]

     = ∫_0^π [ e^{iωt} {dA(ω) − i dB(ω)}/2 + e^{−iωt} {dA(ω) + i dB(ω)}/2 ]       (4.27)

     = ∫_0^π { e^{iωt} dZ(ω) + e^{−iωt} dZ*(ω) }.

On extending Z(ω) to the interval [−π, π] by setting dZ(−ω) = dZ*(ω), this can be written concisely as

y(t) = ∫_{−π}^{π} e^{iωt} dZ(ω).                                                  (4.28)
The autocovariances of y(t) can now be expressed in terms of the spectral density function f(ω). Setting τ = t − k gives

γ_τ = C(y_t, y_k) = E[ ∫ e^{iωt} dZ(ω) ∫ e^{−iλk} dZ*(λ) ]

    = ∫∫ e^{iωt} e^{−iλk} E{dZ(ω)dZ*(λ)}
                                                                     (4.29)
    = ∫ e^{iω(t−k)} E{dZ(ω)dZ*(ω)}

    = ∫_{−π}^{π} e^{iωτ} f(ω)dω,

where the third equality follows because the cross-products with ω ≠ λ have zero expectations.
The inverse of this relationship expresses the spectral density function as the Fourier transform of the autocovariances:

f(ω) = (1/2π) Σ_{τ=−∞}^{∞} γ_τ e^{−iωτ}
                                                                     (4.30)
     = (1/2π) { γ_0 + 2 Σ_{τ=1}^{∞} γ_τ cos(ωτ) },

where the second expression follows from the condition that γ_τ = γ_{−τ}.
To confirm this, one may substitute (30) into (29) to give

γ_τ = ∫_{−π}^{π} e^{iωτ} { (1/2π) Σ_{κ=−∞}^{∞} γ_κ e^{−iωκ} } dω
                                                                     (4.31)
    = Σ_{κ=−∞}^{∞} γ_κ (1/2π) ∫_{−π}^{π} e^{iω(τ−κ)} dω.

From the result that

∫_{−π}^{π} e^{iω(τ−κ)} dω = { 2π,  if κ = τ;
                            { 0,   if κ ≠ τ,                         (4.32)
it can be seen that the RHS of the equation reduces to γ_τ. This serves to show
that equations (29) and (30) do indeed represent a Fourier transform and its
inverse.
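The transform pair can be verified numerically for a simple case. For an MA(1) process y(t) = ε(t) + 0.5ε(t − 1) with σ_ε² = 1, the autocovariances are γ_0 = 1.25, γ_1 = 0.5 and γ_τ = 0 for τ > 1; integrating e^{iωτ} f(ω) over [−π, π] recovers them (a sketch, assuming numpy; the grid size is arbitrary):

```python
import numpy as np

gamma0, gamma1 = 1.25, 0.5           # autocovariances of the MA(1) process

def f(w):
    """Spectral density f(w) = (1/(2*pi)) * (gamma_0 + 2*gamma_1*cos(w))."""
    return (gamma0 + 2.0 * gamma1 * np.cos(w)) / (2.0 * np.pi)

# gamma_tau = integral over [-pi, pi) of exp(i*w*tau) * f(w) dw,
# approximated by the rectangle rule on a uniform grid, which is
# essentially exact for this periodic integrand
w = np.linspace(-np.pi, np.pi, 4096, endpoint=False)

def gamma(tau):
    return float((np.exp(1j * w * tau) * f(w)).real.mean() * 2.0 * np.pi)
```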
The essential interpretation of the spectral density function is indicated by the equation

γ_0 = ∫_{−π}^{π} f(ω)dω,                                             (4.33)

which comes from setting τ = 0 in equation (29). This equation shows how the variance or power of y(t), which is γ_0, is attributed to the cyclical components of which the process is composed.
If the spectral density function of a process is constant, with f(ω) = f_ε for all ω ∈ [−π, π], then, from equation (33), the variance is

γ_0 = ∫_{−π}^{π} f_ε dω = 2π f_ε,                                    (4.34)

and, from equation (29), it follows that, for all τ ≠ 0,

γ_τ = ∫_{−π}^{π} f_ε e^{iωτ} dω = f_ε ∫_{−π}^{π} e^{iωτ} dω = 0.     (4.35)
These are the same as the conditions under (6) which have served to define a
white-noise process. When the variance is denoted by σ_ε², the expression for the spectrum of the white-noise process becomes

f_ε(ω) = σ_ε²/2π.                                                    (4.36)
Consider a stationary process y(t) = ∫ e^{iωt} dZ_y(ω) whose spectral density function can be factorised as

f_y(ω) = (1/2π) ψ(ω)ψ*(ω),                                           (4.37)

where

ψ(ω) = Σ_{j=0} ψ_j e^{−iωj}                                          (4.38)

and ψ*(ω) is its complex conjugate. On defining

dZ_ε(ω) = dZ_y(ω)/ψ(ω),                                              (4.39)
the process can be expressed as y(t) = ∫ e^{iωt} ψ(ω) dZ_ε(ω). Expanding the expression of ψ(ω) and interchanging the order of integration and summation gives

y(t) = ∫ e^{iωt} { Σ_j ψ_j e^{−iωj} } dZ_ε(ω)

     = Σ_j ψ_j ∫ e^{iω(t−j)} dZ_ε(ω)                                 (4.41)

     = Σ_j ψ_j ε(t − j),

where

ε(t) = ∫ e^{iωt} dZ_ε(ω).
The sequence ε(t) defined in this way constitutes a white-noise process, since

E{dZ_ε(ω)dZ_ε*(ω)} = E{dZ_y(ω)dZ_y*(ω)} / {ψ(ω)ψ*(ω)}

                   = f_y(ω)dω / {ψ(ω)ψ*(ω)}

                   = (1/2π) dω,

which corresponds to a constant spectral density function.
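In the time domain, the definition dZ_ε(ω) = dZ_y(ω)/ψ(ω) corresponds to applying the inverse filter to y(t). For an invertible filter this can be sketched recursively (numpy assumed; the coefficients are illustrative):

```python
import numpy as np

psi = np.array([1.0, 0.6])                 # an invertible filter psi_0 + psi_1*L
rng = np.random.default_rng(7)
eps = rng.standard_normal(500)
y = np.convolve(eps, psi)[:len(eps)]       # y(t) = psi(L) eps(t), zero pre-sample

# Recover eps(t) recursively: eps_t = (y_t - sum_{j>=1} psi_j eps_{t-j}) / psi_0
eps_rec = np.zeros_like(y)
for s in range(len(y)):
    acc = y[s]
    for j in range(1, min(s, len(psi) - 1) + 1):
        acc -= psi[j] * eps_rec[s - j]
    eps_rec[s] = acc / psi[0]
```

The recursion is stable here because the root of ψ(z) lies outside the unit circle (|0.6| < 1), so the white-noise sequence is recovered exactly.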
Consider now the general case in which the filter ψ(L) is applied to a stationary process x(t) to produce

y(t) = Σ_j ψ_j x(t − j).                                             (4.45)

On writing x(t) = ∫ e^{iωt} dZ_x(ω), this becomes

y(t) = Σ_j ψ_j ∫ e^{iω(t−j)} dZ_x(ω)

     = ∫ { Σ_j ψ_j e^{−iωj} } e^{iωt} dZ_x(ω)                        (4.46)

     = ∫ e^{iωt} ψ(ω) dZ_x(ω)

     = ∫ e^{iωt} dZ_y(ω),

where dZ_y(ω) = ψ(ω)dZ_x(ω).
It follows that the spectral density function f_y(ω) of the filtered process y(t) is given by

f_y(ω)dω = E{dZ_y(ω)dZ_y*(ω)}

         = ψ(ω)ψ*(ω) E{dZ_x(ω)dZ_x*(ω)}                              (4.47)

         = |ψ(ω)|² f_x(ω)dω.
In the case of the process defined in equation (7), where y(t) is obtained by filtering a white-noise sequence, the result is specialised to give

f_y(ω) = |ψ(ω)|² f_ε(ω)
                                                                     (4.48)
       = (σ_ε²/2π) |ψ(ω)|².

Let

γ(z) = σ_ε² ψ(z)ψ(z⁻¹) = σ_ε² Σ_τ { Σ_j ψ_j ψ_{j+τ} } z^τ            (4.49)

be the autocovariance generating function of equation (13). Then, setting z = e^{−iω} shows that

f_y(ω) = (1/2π) γ(e^{−iω}),                                          (4.50)

whence

f_y(ω) = (1/2π) Σ_{τ=−∞}^{∞} γ_τ e^{−iωτ}
                                                                     (4.51)
       = (1/2π) { γ_0 + 2 Σ_{τ=1}^{∞} γ_τ cos(ωτ) },
which indicates that the spectral density function is the Fourier transform of the
autocovariance function of the filtered sequence. This is known as the Wiener-Khintchine theorem. The importance of this theorem is that it provides a link
between the time domain and the frequency domain.
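The theorem can be illustrated for a finite moving-average filter applied to white noise: the spectrum computed from the squared gain, (σ_ε²/2π)|ψ(ω)|², coincides with the cosine Fourier transform of the autocovariances (a sketch, assuming numpy; the coefficients are arbitrary):

```python
import numpy as np

psi = np.array([1.0, -0.8, 0.15])    # illustrative filter coefficients
sigma2 = 1.0
q = len(psi) - 1
omega = np.linspace(0.0, np.pi, 512)

# Route 1: f_y(w) = (sigma2/(2*pi)) * |psi(w)|^2, psi(w) = sum_j psi_j e^{-iwj}
psi_w = sum(p * np.exp(-1j * omega * j) for j, p in enumerate(psi))
f_gain = sigma2 / (2.0 * np.pi) * np.abs(psi_w) ** 2

# Route 2: f_y(w) = (1/(2*pi)) * (gamma_0 + 2 * sum_tau gamma_tau * cos(w*tau))
gamma = [sigma2 * np.dot(psi[:q + 1 - tau], psi[tau:]) for tau in range(q + 1)]
f_acov = (gamma[0] + 2.0 * sum(gamma[tau] * np.cos(omega * tau)
                               for tau in range(1, q + 1))) / (2.0 * np.pi)
```

The two routes agree identically, since γ_τ = σ_ε² Σ_j ψ_j ψ_{j+τ} is precisely the coefficient of e^{−iωτ} in the expansion of σ_ε² |ψ(ω)|².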
The Gain and Phase
The complex-valued function ψ(ω), which is entailed in the process of linear filtering, can be written as

ψ(ω) = |ψ(ω)| e^{−iθ(ω)},                                            (4.52)

where

|ψ(ω)| = √[ { Σ_{j=0} ψ_j cos(ωj) }² + { Σ_{j=0} ψ_j sin(ωj) }² ]
                                                                     (4.53)
and

θ(ω) = arctan{ Σ_j ψ_j sin(ωj) / Σ_j ψ_j cos(ωj) }.
The function |ψ(ω)|, which is described as the gain of the filter, indicates the extent to which the amplitudes of the cyclical components of which x(t) is composed are altered in the process of filtering.
The function θ(ω), which is described as the phase displacement and which gives a measure in radians, indicates the extent to which the cyclical components are displaced along the time axis.
The substitution of expression (52) in equation (46) gives

y(t) = ∫_{−π}^{π} |ψ(ω)| e^{i{ωt − θ(ω)}} dZ_x(ω).                   (4.54)
The importance of this equation is that it summarises the two effects of the
filter.
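These two effects can be seen in a small computation: a cosine of frequency ω passed through the filter emerges with its amplitude multiplied by |ψ(ω)| and its phase displaced by θ(ω) (a sketch, assuming numpy; the filter is an arbitrary three-point moving average):

```python
import numpy as np

psi = np.array([0.25, 0.5, 0.25])            # a symmetric moving-average filter
j = np.arange(len(psi))

def gain_and_phase(w):
    """Gain |psi(w)| and phase theta(w), as in (53)."""
    c = np.dot(psi, np.cos(w * j))
    s = np.dot(psi, np.sin(w * j))
    return np.hypot(c, s), np.arctan2(s, c)

w = np.pi / 8
t = np.arange(200)
x = np.cos(w * t)
y = np.convolve(x, psi)[:len(t)]             # y_t = sum_j psi_j x_{t-j}

g, theta = gain_and_phase(w)
# Once the filter is fully engaged (t >= 2), y_t = g * cos(w*t - theta)
```

For this symmetric filter θ(ω) = ω, so the filtered cosine is simply the attenuated input delayed by one sampling interval.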