
Time Series Exam, 2010: Solutions

1. The autocorrelation function (ACF) of a process $Y_t$ is defined as
$$\rho(s,t) = \frac{\gamma(s,t)}{\sqrt{\gamma(s,s)\,\gamma(t,t)}},$$
where
$$\gamma(s,t) = E[(Y_s - E(Y_s))(Y_t - E(Y_t))]$$
is the autocovariance function, provided the expectations exist.
A strictly stationary time series is one for which the probabilistic behaviour of every collection of values $\{Y_{t_1}, Y_{t_2}, \ldots, Y_{t_k}\}$ is identical to that of the time-shifted set $\{Y_{t_1+h}, Y_{t_2+h}, \ldots, Y_{t_k+h}\}$. That is,
$$P\{Y_{t_1} \le c_1, \ldots, Y_{t_k} \le c_k\} = P\{Y_{t_1+h} \le c_1, \ldots, Y_{t_k+h} \le c_k\}$$
for all $k = 1, 2, \ldots$, all time points $t_1, t_2, \ldots, t_k$, all numbers $c_1, c_2, \ldots, c_k$, and all time shifts $h = 0, \pm 1, \pm 2, \ldots$.
A second-order stationary time series $Y_t$ is a finite-variance process such that
(a) the mean value function $\mu_t = E(Y_t)$ is constant and does not depend on time $t$, and
(b) the covariance function $\gamma(s,t) = E[(Y_s - \mu_s)(Y_t - \mu_t)]$ depends on $s$ and $t$ only through their difference $|s - t|$.
A common tool for removing trends is differencing. It removes linear or polynomial trends, but it usually complicates the dependence structure of the process.
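
As a brief numerical illustration (not part of the original solution), the sketch below differences a hypothetical linear-trend-plus-noise series; the slope, noise level and seed are arbitrary. It shows both effects mentioned above: the trend disappears, but differencing the white noise around the trend induces MA(1)-type dependence.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
t = np.arange(n)
y = 0.5 * t + rng.normal(scale=1.0, size=n)   # hypothetical trend + white noise

dy = np.diff(y)                               # (1 - B) y_t = y_t - y_{t-1}

# the trend is gone: the differenced series has roughly constant mean 0.5 (the slope)
print("mean of differenced series:", round(dy.mean(), 3))

# but the dependence structure is more complicated: dy_t = 0.5 + e_t - e_{t-1}
# is an MA(1) with lag-1 autocorrelation -0.5
d = dy - dy.mean()
lag1 = np.dot(d[1:], d[:-1]) / np.dot(d, d)
print("lag-1 sample autocorrelation:", round(lag1, 3))   # close to -0.5
```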

(a) We have
$$Y_t = \delta + Y_{t-1} + \varepsilon_t = 2\delta + Y_{t-2} + \varepsilon_{t-1} + \varepsilon_t = \cdots = t\delta + Y_0 + \sum_{k=1}^{t} \varepsilon_k.$$

(b) We have
$$\mu_t = E(Y_t) = E\!\left(t\delta + Y_0 + \sum_{k=1}^{t} \varepsilon_k\right) = t\delta$$
and
$$\gamma(s,t) = \mathrm{cov}(Y_s, Y_t) = \mathrm{cov}\!\left(s\delta + Y_0 + \sum_{k=1}^{s} \varepsilon_k,\; t\delta + Y_0 + \sum_{k=1}^{t} \varepsilon_k\right) = \mathrm{cov}\!\left(\sum_{k=1}^{\min(s,t)} \varepsilon_k,\; \sum_{k=1}^{\min(s,t)} \varepsilon_k\right) = \min(s,t)\,\sigma^2.$$
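
A quick Monte Carlo sanity check of the covariance formula above (an illustration only, with arbitrary hypothetical values of the drift $\delta$, the noise standard deviation $\sigma$ and the time points $s$, $t$):

```python
import numpy as np

rng = np.random.default_rng(1)
delta, sigma = 0.2, 1.5          # hypothetical drift and noise standard deviation
s, t, nrep = 30, 80, 50_000      # hypothetical time points and number of replications

eps = rng.normal(scale=sigma, size=(nrep, t))
# Y_t = t*delta + Y_0 + sum_{k<=t} eps_k, with Y_0 = 0
Y = delta * np.arange(1, t + 1) + eps.cumsum(axis=1)

emp_cov = np.cov(Y[:, s - 1], Y[:, t - 1])[0, 1]
print("empirical covariance:", round(emp_cov, 2))
print("min(s,t) * sigma^2  :", min(s, t) * sigma**2)   # 30 * 2.25 = 67.5
```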

(c) We have
$$\rho(t-1, t) = \frac{\gamma(t-1, t)}{\sqrt{\gamma(t-1, t-1)\,\gamma(t, t)}} = \frac{(t-1)\sigma^2}{\sqrt{(t-1)\sigma^2 \cdot t\sigma^2}} = \sqrt{\frac{t-1}{t}} \to 1, \quad \text{as } t \to \infty.$$
This means that, far along the series, successive observations are almost perfectly correlated.
(d) We showed in (b) that $\mu_t = t\delta$ depends on $t$ and that $\gamma(s,t)$ depends on $\min(s,t)$ rather than on $|s-t|$ alone, so the process is not second-order stationary.
(e) Let us consider the differenced series $\nabla Y_t = Y_t - Y_{t-1} = \delta + \varepsilon_t$. We then have $\mu_t = \delta$, which does not depend on $t$, and $\mathrm{cov}(\nabla Y_s, \nabla Y_t) = \mathrm{cov}(\delta + \varepsilon_s, \delta + \varepsilon_t) = \sigma^2\, 1\{s = t\}$, which depends only on $s - t$, so the differenced series is second-order stationary.
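
Another small sketch (again illustrative, with arbitrary parameter values): it checks that $\rho(t-1,t) = \sqrt{(t-1)/t}$ from (c) is already close to 1 for moderate $t$, and that the differenced series $\delta + \varepsilon_t$ of (e) behaves like white noise around the drift.

```python
import numpy as np

# the exact ACF derived in (c): already close to 1 for moderate t
for t in (2, 10, 100, 1000):
    print(t, round(np.sqrt((t - 1) / t), 4))

# the differenced random walk (hypothetical delta, sigma): constant mean, no serial correlation
rng = np.random.default_rng(2)
delta, sigma, n = 0.2, 1.5, 100_000
dY = delta + rng.normal(scale=sigma, size=n)    # nabla Y_t = delta + eps_t
d = dY - dY.mean()
print("mean ~ delta            :", round(dY.mean(), 3))
print("lag-1 autocorrelation ~ 0:", round(np.dot(d[1:], d[:-1]) / np.dot(d, d), 4))
```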

2. (a) See Lemma 18.


(b) Conditional on $A = \{Y_{t-r}, \ldots, Y_{t-1}\}$, we find that $E(Y_t \mid A) = \phi y_{t-1}$ and $E(Y_{t-r-1} \mid A) = \phi^{-1} y_{t-r}$, the latter because we can write $Y_{t-r-1} = \phi^{-1}(Y_{t-r} - \varepsilon_{t-r})$. Moreover,
$$E(Y_t Y_{t-r-1} \mid A) = E\{(\phi Y_{t-1} + \varepsilon_t)\,\phi^{-1}(Y_{t-r} - \varepsilon_{t-r}) \mid A\} = y_{t-1} y_{t-r}, \qquad r \ge 1,$$
so the conditional covariance is
$$\mathrm{cov}(Y_t, Y_{t-r-1} \mid A) = E(Y_t Y_{t-r-1} \mid A) - E(Y_t \mid A)\,E(Y_{t-r-1} \mid A) = 0.$$

If we see a series for which the partial autocorrelation function is zero beyond a certain lag, we use this property to diagnose an AR model, whereas the same property for the ACF suggests an MA model. See slide 154 and the preceding material.
(c) The plots show the correlogram (empirical ACF) and partial correlogram (empirical PACF) for the data. The estimates on the left are moment-based, and those on the right are obtained from them using the Yule–Walker equations. The horizontal dashed lines show significance limits for the correlogram and partial correlogram elements, based on the assumption of white noise with finite fourth moments; the limits are at $\pm 2/\sqrt{n}$.
The correlogram shows geometric decay with alternating sign, which suggests an AR(1) model with $\phi \approx -0.9$. This is confirmed by the PACF, which has just one significant value, at $h = 1$, of around $-0.9$.
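
To illustrate this reasoning, the sketch below simulates an AR(1) series with $\phi = -0.9$ (a hypothetical stand-in for the data shown in the exam plots) and computes the sample ACF and PACF with statsmodels; the alternating-sign geometric ACF, the single PACF spike at lag 1, and the $\pm 2/\sqrt{n}$ limits are the features described above.

```python
import numpy as np
from statsmodels.tsa.stattools import acf, pacf

rng = np.random.default_rng(3)
n, phi = 1000, -0.9
eps = rng.normal(size=n)

# simulate Y_t = phi * Y_{t-1} + eps_t
y = np.empty(n)
y[0] = eps[0]
for t in range(1, n):
    y[t] = phi * y[t - 1] + eps[t]

print("sample ACF :", np.round(acf(y, nlags=5), 2))    # ~ (-0.9)^h: alternating, geometric decay
print("sample PACF:", np.round(pacf(y, nlags=5), 2))   # single spike ~ -0.9 at lag 1
print("significance limits: +/-", round(2 / np.sqrt(n), 3))
```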
3. If $\{\rho_h\}_{h \in \mathbb{Z}}$ is the ACF of a stationary random sequence, then there exists a unique function $F$ defined on $[-1/2, 1/2]$ such that $F(-1/2) = 0$, $F$ is right-continuous and non-decreasing, with symmetric increments about zero, and
$$\rho_h = \int_{(-1/2,\,1/2]} e^{2\pi i h \nu}\, dF(\nu), \qquad h \in \mathbb{Z}.$$
The function $F$ is called the spectral distribution function of $\{\rho_h\}$, and its derivative $f$, if it exists, is called the spectral density function. If $\sum_h |\rho_h| < \infty$, then $f$ exists. A function $f(\omega)$ defined on $[-1/2, 1/2]$ is the spectrum of a stationary process if and only if $f(\omega) = f(-\omega)$, $f(\omega) \ge 0$, and $\int_{-1/2}^{1/2} f(\omega)\, d\omega < \infty$.
We have
$$\gamma(h) = E[(Y_{t+h} - E(Y_{t+h}))(Y_t - E(Y_t))] = E\!\left[\sum_j \psi_j \varepsilon_{t+h-j} \sum_l \psi_l \varepsilon_{t-l}\right] = \sigma^2 \sum_j \psi_j \psi_{j-h},$$
hence
$$f(\omega) = \sum_h \gamma(h)\, e^{-2\pi i \omega h} = \sigma^2 \sum_h \sum_j \psi_j \psi_{j-h}\, e^{-2\pi i \omega j}\, e^{2\pi i \omega (j-h)} = \sigma^2 \sum_j \psi_j e^{-2\pi i \omega j} \sum_k \psi_k e^{2\pi i \omega k} = \sigma^2 |\psi(\omega)|^2,$$
where $\psi(\omega) = \sum_j \psi_j e^{-2\pi i \omega j}$.

In the case of an ARMA(1,1) process $(1 - \phi_1 B)Y_t = (1 + \theta_1 B)\varepsilon_t$, we have
$$|\phi(\omega)|^2 f_Y(\omega) = |\theta(\omega)|^2 f_\varepsilon(\omega), \qquad \text{where } f_\varepsilon(\omega) = \sigma^2.$$
We also have
$$|\phi(\omega)|^2 = (1 - \phi_1 e^{2\pi i \omega})(1 - \phi_1 e^{-2\pi i \omega}), \qquad |\theta(\omega)|^2 = (1 + \theta_1 e^{2\pi i \omega})(1 + \theta_1 e^{-2\pi i \omega}),$$
hence
$$f_Y(\omega) = \sigma^2 \frac{|\theta(\omega)|^2}{|\phi(\omega)|^2} = \sigma^2 \frac{(1 + \theta_1 e^{2\pi i \omega})(1 + \theta_1 e^{-2\pi i \omega})}{(1 - \phi_1 e^{2\pi i \omega})(1 - \phi_1 e^{-2\pi i \omega})} = \sigma^2\, \frac{1 + 2\theta_1 \cos(2\pi\omega) + \theta_1^2}{1 - 2\phi_1 \cos(2\pi\omega) + \phi_1^2}.$$
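
As a numerical check of the closed form above (not part of the original solution), the sketch below evaluates $f_Y(\omega)$ for hypothetical parameter values $\phi_1 = 0.5$, $\theta_1 = 0.4$, $\sigma^2 = 1$ and compares it with the truncated sum $\sum_h \gamma(h)e^{-2\pi i \omega h}$, where the autocovariances are computed from the MA($\infty$) weights $\psi_0 = 1$, $\psi_j = (\phi_1 + \theta_1)\phi_1^{j-1}$ of this parameterisation.

```python
import numpy as np

phi1, theta1, sigma2 = 0.5, 0.4, 1.0      # hypothetical ARMA(1,1) parameters

# psi-weights of the causal MA(infinity) representation
J = 200
psi = np.r_[1.0, (phi1 + theta1) * phi1 ** np.arange(J)]

# autocovariances gamma(h) = sigma^2 * sum_j psi_j psi_{j+h}
H = 50
gamma = np.array([sigma2 * np.sum(psi[: len(psi) - h] * psi[h:]) for h in range(H)])

omega = 0.15                               # an arbitrary frequency in (-1/2, 1/2]
# truncated spectral sum: gamma(0) + 2 * sum_{h>=1} gamma(h) cos(2 pi omega h)
f_sum = gamma[0] + 2 * np.sum(gamma[1:] * np.cos(2 * np.pi * omega * np.arange(1, H)))

# closed form derived above
f_closed = sigma2 * (1 + 2 * theta1 * np.cos(2 * np.pi * omega) + theta1**2) \
                 / (1 - 2 * phi1 * np.cos(2 * np.pi * omega) + phi1**2)

print(round(f_sum, 6), round(f_closed, 6))   # the two values agree closely
```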

4. A time series $\{Y_t\}$ is an autoregressive moving average process of order $(p, q)$, written ARMA$(p, q)$, if it is stationary and of the form
$$Y_t = \phi_1 Y_{t-1} + \phi_2 Y_{t-2} + \cdots + \phi_p Y_{t-p} + \varepsilon_t + \theta_1 \varepsilon_{t-1} + \cdots + \theta_q \varepsilon_{t-q},$$
where $\phi_1, \ldots, \phi_p, \theta_1, \ldots, \theta_q$ are constants with $\phi_p, \theta_q \neq 0$, and $\varepsilon_t$ is white noise.


An ARMA$(p, q)$ process $\phi(B)Y_t = \theta(B)\varepsilon_t$ is causal if it can be written as a linear process
$$Y_t = \sum_{j=0}^{\infty} \psi_j \varepsilon_{t-j} = \psi(B)\varepsilon_t,$$
where $\sum_{j=0}^{\infty} |\psi_j| < \infty$, and we set $\psi_0 = 1$. It is invertible if it can be written as
$$\varepsilon_t = \sum_{j=0}^{\infty} \pi_j Y_{t-j} = \pi(B)Y_t,$$
where $\sum_{j=0}^{\infty} |\pi_j| < \infty$, and we set $\pi_0 = 1$.
An ARMA$(p, q)$ process $\phi(B)Y_t = \theta(B)\varepsilon_t$ is causal if and only if $\phi(z) \neq 0$ within the closed unit disk $D = \{z : |z| \le 1\}$. If so, the coefficients of $\psi(z)$ satisfy $\psi(z) = \theta(z)/\phi(z)$ for $z \in D$. The process is invertible if and only if $\theta(z) \neq 0$ for $z \in D$; if so, the coefficients of $\pi(z)$ satisfy $\pi(z) = \phi(z)/\theta(z)$ for $z \in D$.
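
A short sketch (not from the exam) of how these root conditions can be checked numerically: it finds the roots of hypothetical AR and MA polynomials (the coefficients below are arbitrary illustrations) and tests whether all of them lie outside the closed unit disk.

```python
import numpy as np

def outside_unit_disk(coefs):
    """coefs = [1, c1, ..., ck] for the polynomial 1 + c1*z + ... + ck*z**k.
    Returns True if every root has modulus > 1, i.e. no zero in |z| <= 1."""
    roots = np.roots(coefs[::-1])        # np.roots expects highest degree first
    return bool(np.all(np.abs(roots) > 1))

# hypothetical ARMA(2,1): phi(z) = 1 - 0.5z + 0.25z^2, theta(z) = 1 + 0.4z
phi = [1.0, -0.5, 0.25]
theta = [1.0, 0.4]

print("causal    :", outside_unit_disk(phi))     # True: roots 1 +/- i*sqrt(3), modulus 2
print("invertible:", outside_unit_disk(theta))   # True: root -2.5
```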

(a) Since $1 - 0.8x + 0.15x^2 = (1 - 0.3x)(1 - 0.5x)$, the common factor $1 - 0.3x$ cancels and we get
$$Y_t = \varepsilon_t - 0.5\varepsilon_{t-1},$$
and so this is a causal and invertible MA(1) process (after cancellation $\phi(z) = 1$; the cancelled factor $1 - 0.3z$ has its root $1/0.3$ outside the unit disk, and $\theta(z) = 1 - 0.5z$ has its root $2$ outside it). We have
$$\gamma(h) = 1.25\,\sigma^2\,\delta(h) - 0.5\,\sigma^2\,\{\delta(h+1) + \delta(h-1)\},$$
where $\delta(h) = 1$ if $h = 0$ and $0$ otherwise, so apart from $\rho_0 = 1$ the only non-zero autocorrelation is $\rho_1 = -0.5/1.25 = -0.4$.

(b) The roots of $1 - x + 0.5x^2$ are $1 \pm i$, which are not roots of $\theta(x)$, so there is no parameter redundancy and this is an ARMA(2,1) process. It is causal but not invertible: the two roots of $\phi(z)$ have modulus $\sqrt{2}$ and so lie outside the unit disk, whereas the root of $\theta(z)$ is $1$, which lies on the unit circle.
(c) Here, we have
$$\gamma(h) = 5\sigma^2\,\delta(h) - 2\sigma^2\,\{\delta(h+1) + \delta(h-1)\},$$
and so we have the same ACF as in (a), since again $\rho_1 = -2/5 = -0.4$. The two processes therefore cannot be distinguished from their second-order properties (provided the variance of the white noise is chosen so that the two autocovariance functions are identical), but the model in (a) is invertible, whereas the model in (c) is not.
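
The sketch below (illustrative only) checks this numerically with statsmodels' ArmaProcess: the MA(1) from (a), $Y_t = \varepsilon_t - 0.5\varepsilon_{t-1}$, and an MA(1) with coefficient $-2$, which reproduces the autocovariances written for (c), have identical theoretical ACFs, but only the first is invertible.

```python
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess

# MA(1) from (a): Y_t = e_t - 0.5 e_{t-1}; MA(1) matching (c): Y_t = e_t - 2 e_{t-1}
ma_a = ArmaProcess(ar=[1], ma=[1, -0.5])
ma_c = ArmaProcess(ar=[1], ma=[1, -2.0])

print(np.round(ma_a.acf(lags=4), 3))   # [1, -0.4, 0, 0]
print(np.round(ma_c.acf(lags=4), 3))   # identical: theta and 1/theta give the same ACF

# but only the model in (a) is invertible
print(ma_a.isinvertible, ma_c.isinvertible)   # True False
```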

5. See the notes.


6. All linear state space models involve two equations: the state equation, which determines the evolution of an underlying unobserved state, and the observation equation, which determines how the observed data are related to the state. The local trend model (a simple special case) has
$$\text{State equation:} \quad \mu_{t+1} = \mu_t + \eta_t, \qquad \eta_t \stackrel{\mathrm{iid}}{\sim} N(0, \sigma_\eta^2),$$
$$\text{Observation equation:} \quad y_t = \mu_t + \varepsilon_t, \qquad \varepsilon_t \stackrel{\mathrm{iid}}{\sim} N(0, \sigma_\varepsilon^2),$$
where the $\eta_t$ and $\varepsilon_t$ are mutually independent. We suppose that data $y_1, \ldots, y_n$ are available.
Let $H_t$ denote the information available at time $t$. Filtering is the estimation of $\mu_t$ using $H_t$, smoothing is the estimation of $\mu_t$ using $H_n$, and prediction is the forecasting of $\mu_{t+h}$ for $h > 0$ using $H_t$.
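
As an illustration of filtering in this model (a minimal sketch, not part of the original solution), the code below applies the standard Kalman filter recursions to a simulated local trend series with hypothetical variances $\sigma_\eta^2 = 0.1$, $\sigma_\varepsilon^2 = 1$; here $m_t$ and $P_t$ denote the filtered mean and variance of $\mu_t$ given $H_t$.

```python
import numpy as np

rng = np.random.default_rng(4)
n, sig2_eta, sig2_eps = 200, 0.1, 1.0          # hypothetical variances

# simulate the local trend model
mu = np.cumsum(rng.normal(scale=np.sqrt(sig2_eta), size=n))
y = mu + rng.normal(scale=np.sqrt(sig2_eps), size=n)

# Kalman filter: m[t], P[t] are the mean and variance of mu_t given y_1, ..., y_t
m, P = np.empty(n), np.empty(n)
m_pred, P_pred = 0.0, 10.0**6                  # near-diffuse prior for mu_1
for t in range(n):
    # update with the new observation y_t
    K = P_pred / (P_pred + sig2_eps)           # Kalman gain
    m[t] = m_pred + K * (y[t] - m_pred)
    P[t] = (1 - K) * P_pred
    # predict mu_{t+1} given H_t
    m_pred, P_pred = m[t], P[t] + sig2_eta

print("last filtered mean:", round(m[-1], 3), " true state:", round(mu[-1], 3))
print("steady-state filtering variance:", round(P[-1], 3))
```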
(a) We have
$$\text{State equation:} \quad X_t = 0.9X_{t-2} + \eta_t, \qquad \eta_t \stackrel{\mathrm{iid}}{\sim} N(0, \sigma_\eta^2),$$
$$\text{Observation equation:} \quad Y_t = X_t + \varepsilon_t, \qquad \varepsilon_t \stackrel{\mathrm{iid}}{\sim} N(0, \sigma_\varepsilon^2).$$

(b) Since $\varepsilon_t$ is an independent white noise, $Y_t$ is stationary if and only if $X_t$ is stationary. Moreover, $X_t$ is an AR(2) model, so provided the variances $\sigma_0^2 = \mathrm{var}(X_0)$ and $\sigma_1^2 = \mathrm{var}(X_{-1})$ are such that $\mathrm{var}(X_t)$ does not depend on $t$, $Y_t$ is stationary. Since
$$X_t = \begin{cases} (0.9)^{t/2} X_0 + \sum_{k=1}^{t/2} \eta_{2k}\,(0.9)^{t/2 - k}, & t \text{ even}, \\[4pt] (0.9)^{(t+1)/2} X_{-1} + \sum_{k=0}^{(t-1)/2} \eta_{2k+1}\,(0.9)^{(t-1)/2 - k}, & t \text{ odd}, \end{cases}$$
the variance of $X_t$ is given by
$$\mathrm{var}(X_t) = \begin{cases} \sigma_0^2 (0.9)^{t} + \sigma_\eta^2\,\dfrac{1 - (0.81)^{t/2}}{1 - 0.81}, & t \text{ even}, \\[6pt] \sigma_1^2 (0.9)^{t+1} + \sigma_\eta^2\,\dfrac{1 - (0.81)^{(t+1)/2}}{1 - 0.81}, & t \text{ odd}, \end{cases}$$
and so $X_t$ and $Y_t$ are stationary if and only if
$$\sigma_0^2 = \sigma_1^2 = \frac{\sigma_\eta^2}{1 - 0.81}.$$
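
A simulation sketch of this condition (illustrative, with a hypothetical $\sigma_\eta^2 = 1$): drawing $X_{-1}$ and $X_0$ from $N(0, \sigma_\eta^2/(1 - 0.81))$ and iterating $X_t = 0.9X_{t-2} + \eta_t$ keeps the sample variance flat across $t$, whereas starting the recursion at $X_{-1} = X_0 = 0$ leaves a visible transient.

```python
import numpy as np

rng = np.random.default_rng(5)
nrep, T, sig2_eta = 50_000, 40, 1.0
stat_var = sig2_eta / (1 - 0.81)                       # stationary variance, about 5.26

def simulate(x_init_sd):
    X = np.empty((nrep, T + 2))
    X[:, 0] = rng.normal(scale=x_init_sd, size=nrep)   # X_{-1}
    X[:, 1] = rng.normal(scale=x_init_sd, size=nrep)   # X_0
    for t in range(2, T + 2):
        X[:, t] = 0.9 * X[:, t - 2] + rng.normal(scale=np.sqrt(sig2_eta), size=nrep)
    return X[:, 2:].var(axis=0)                        # var(X_t) for t = 1, ..., T

print("stationary start:", np.round(simulate(np.sqrt(stat_var))[[0, 9, 39]], 2))
# all close to 5.26
print("zero start      :", np.round(simulate(0.0)[[0, 9, 39]], 2))
# starts near 1 and climbs towards 5.26
```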

(c) The left time plot ($X_t$) clearly shows the AR(2) structure, whereas in the right time plot it is harder to see because of the added noise $\varepsilon_t$. The range is also larger (from $-10$ to $5$ instead of $-8$ to $2$). The left ACF is typical of an AR(2) model with these parameters; in the right one, the added noise reduces the proportion of the variation in the observations that is due to the signal, so the correlations are smaller. Finally, the left PACF clearly shows the model structure, whereas the right one is also affected by the added noise.
