
Time Series and Prediction: Notes

Christian Brownlees

January 22, 2019

Abstract
These notes contain the derivations of some of the results presented in the course.

1 Intro to Time Series

1.1 Time Series as Stochastic Processes

Definition 1.1. (Univariate) Stochastic Process. A stochastic process

{y_t : t ∈ T}

is a collection of univariate random variables defined on a probability space (Ω, F, P) indexed by an index set T ⊂ R.

Unless stated otherwise, T can be thought of as the set of natural numbers N = {0, 1, 2, . . .} or the set of integers Z = {. . . , −2, −1, 0, 1, 2, . . .}.

Definition 1.2. Realization of a Stochastic Process. For a given (fixed) ω ∈ Ω,

{y_t(ω) : t ∈ T}

is called a realization or sample path of the process {y_t : t ∈ T}.

Definition 1.3. Finite-Dimensional Distributions of a Stochastic Process. Consider the vector t = (t_1, t_2, ..., t_k)' with t_i ∈ T and k ∈ N. Note that the k-tuple y_{t_1}, y_{t_2}, ..., y_{t_k} is a multivariate random variable in R^k. The distribution of the random vector,

F_{y_{t_1}, y_{t_2}, ..., y_{t_k}}(y_1, . . . , y_k) = P(y_{t_1} ≤ y_1, y_{t_2} ≤ y_2, . . . , y_{t_k} ≤ y_k),

is called the finite-dimensional distribution of the time series {y_t} for t = (t_1, t_2, . . . , t_k)'. The set of all finite-dimensional distributions is denoted by F.

Theorem 1.1. Kolmogorov Extension Theorem. A family F of finite-dimensional distributions is the set of finite-dimensional distributions of some stochastic process if and only if, for any k ∈ N and t = (t_1, t_2, ..., t_k)' with t_i ∈ T and 1 ≤ i ≤ k,

lim_{y_{t_i} → ∞} F_{y_{t_1}, ..., y_{t_i}, ..., y_{t_k}}(y_1, . . . , y_i, . . . , y_k) = F_{y_{t_1}, ..., y_{t_{i−1}}, y_{t_{i+1}}, ..., y_{t_k}}(y_1, . . . , y_{i−1}, y_{i+1}, . . . , y_k) .

1.2 Stationarity

Definition 1.4. Strict Stationarity. Consider the vector t = (t_1, t_2, ..., t_k)' with t_i ∈ T and k ∈ N. The time series {y_t} is strictly stationary if for each t the finite-dimensional distributions of

y_{t_1}, y_{t_2}, ..., y_{t_k}

and

y_{t_1+h}, y_{t_2+h}, ..., y_{t_k+h}   for any h ∈ N

are the same.

Definition 1.5. Autocovariance Function. Let {y_t} be a time series such that Var(y_t) < ∞ for each t. The autocovariance function γ(s, t) of {y_t} is defined as

γ(s, t) = Cov(y_t, y_s) = E((y_t − µ_t)(y_s − µ_s)) ,   s, t ∈ T ,

where µ_t = E(y_t).

Definition 1.6. Covariance Stationarity. A stochastic process {y_t} is covariance stationary if

1. E(y_t^2) is finite for any t;

2. E(y_t) = µ for any t;

3. for any t, s,

γ(s, t) = Cov(y_t, y_s) = γ_{t−s} ,

that is, the autocovariance of the process only depends on the lag k = t − s.

Definition 1.7. White Noise. A covariance stationary time series {y_t} is white noise iff E(y_t) = 0 and γ_k = 0 for all k ≠ 0. We denote a white noise process as

y_t ∼ WN(σ^2)

where σ^2 is the variance of the white noise.
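As a quick illustration (a sketch added to these notes; it assumes Python with numpy and uses illustrative parameter values), the following draws Gaussian white noise and checks that the sample autocovariance is close to σ^2 at lag 0 and close to zero at nonzero lags.

import numpy as np

rng = np.random.default_rng(0)
T, sigma2 = 10_000, 2.0
y = rng.normal(scale=np.sqrt(sigma2), size=T)   # Gaussian white noise with variance 2

def sample_autocov(x, k):
    # gamma_hat_k: average of (x_t - xbar)(x_{t-k} - xbar)
    xc = x - x.mean()
    return xc.var() if k == 0 else (xc[k:] * xc[:-k]).mean()

print([round(sample_autocov(y, k), 3) for k in range(4)])
# lag 0 is close to sigma^2 = 2, lags 1-3 are close to 0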

Exercises

Exercise 1.1. Consider the simple Gaussian random walk process for t = 1, 2, ...

x_t = Σ_{i=1}^{t} z_i

where the z_i are iid Gaussian with zero mean and unit variance. Is the process strictly stationary? Explain. Is the process covariance stationary? Explain. Is the process Gaussian? Explain.

Exercise 1.2. Consider an iid sample of observations z_t, t = 1, ..., T, from a continuous distribution F. Can the iid sample be considered a stochastic process? Explain. If yes, is the process strictly stationary? Explain. If yes, is the process covariance stationary? Explain. If yes, is the process Gaussian? Explain.

Exercise 1.3. Show that if {y_t} is covariance stationary, then the process {x_t} defined as x_t = a + b y_t, where a, b ∈ R, is also covariance stationary.

Exercise 1.4. Let {y_t} be covariance stationary. Consider the process {x_t} defined as x_t = at + b y_t, where a, b ∈ R, and the process {z_t} defined as z_t = x_t − x_{t−1}. Note that the {x_t} process can be thought of as a process made up of a deterministic linear trend (at) and a stochastic component (b y_t). Is {x_t} covariance stationary? Explain. Is {z_t} covariance stationary? Explain.

Exercise 1.5. Let {y_t} and {x_t} be covariance stationary processes that are uncorrelated with each other, that is, Cor(y_t, x_s) = 0 for any t, s. Is the process {z_t} defined as z_t = y_t + x_t covariance stationary? Explain.

1.3 Limit Theorems for Serially Dependent Data

Theorem 1.2. Law of Large Numbers. Let y_t be a covariance stationary process with mean µ and absolutely summable autocovariances γ_k. Then the sample mean ȳ_T satisfies:

1. Expectation of the Sample Mean

E(ȳ_T) = µ

2. Variance of the Sample Mean

lim_{T→∞} T Var(ȳ_T) = σ^2_LR

where

σ^2_LR = Σ_{k=−∞}^{∞} γ_k .

3. Law of Large Numbers

ȳ_T →^p µ

Proof of Theorem 1.2.

Part 1. It follows from the linearity of expectations that

E( (1/T) Σ_{t=1}^{T} y_t ) = (1/T) Σ_{t=1}^{T} E(y_t) = µ .

Note that this property holds irrespective of the dependence properties of the process {y_t}.

Part 2. We begin by noting that

E(ȳ_T − µ)^2 = (1/T^2) E[ ( Σ_{t=1}^{T} (y_t − µ) )^2 ]
             = (1/T^2) E[ (y_1 − µ) Σ_{t=1}^{T} (y_t − µ) + (y_2 − µ) Σ_{t=1}^{T} (y_t − µ) + . . . + (y_T − µ) Σ_{t=1}^{T} (y_t − µ) ]
             = (1/T^2) [ (γ_0 + γ_1 + . . . + γ_{T−1}) + (γ_1 + γ_0 + . . . + γ_{T−2}) + . . . + (γ_{T−1} + γ_{T−2} + . . . + γ_0) ]
             = (1/T^2) ( T γ_0 + 2(T−1) γ_1 + 2(T−2) γ_2 + . . . + 2 γ_{T−1} )
             = (1/T) ( γ_0 + 2 ((T−1)/T) γ_1 + 2 ((T−2)/T) γ_2 + . . . + 2 (1/T) γ_{T−1} ) .        (1)

First we show that the variance of the sample mean goes to zero as T increases. We have

T E(ȳ_T − µ)^2 = γ_0 + 2 ((T−1)/T) γ_1 + 2 ((T−2)/T) γ_2 + . . . + 2 (1/T) γ_{T−1}
               ≤ |γ_0| + 2 ((T−1)/T) |γ_1| + 2 ((T−2)/T) |γ_2| + . . . + 2 (1/T) |γ_{T−1}|
               ≤ |γ_0| + 2 |γ_1| + 2 |γ_2| + . . . + 2 |γ_{T−1}| .

The rhs of the last expression is finite since we are assuming that the autocovariances are absolutely summable; dividing by T, it follows that E(ȳ_T − µ)^2 → 0. Next we have to show that

lim_{T→∞} T E(ȳ_T − µ)^2 = lim_{T→∞} [ γ_0 + 2 ((T−1)/T) γ_1 + 2 ((T−2)/T) γ_2 + . . . + 2 (1/T) γ_{T−1} ] = Σ_{j=−∞}^{+∞} γ_j .

This result is established in Hamilton (1994), Section 7.2. The intuition is the following. Consider the sum of the autocovariances in (1) and let j denote the index of the autocovariances. For large j the autocovariances are small and do not affect the sum. For small j the autocovariances have a weight that approaches 1 as T grows. Thus, intuitively, lim_{T→∞} T E(ȳ_T − µ)^2 = Σ_{j=−∞}^{+∞} γ_j .

Part 3. The last claim of the theorem is a straightforward implication of Parts 1 and 2 and Chebyshev's inequality. We have that

P(|ȳ_T − µ| ≥ ε) ≤ Var(ȳ_T) / ε^2 .

Since T Var(ȳ_T) → Σ_{j=−∞}^{+∞} γ_j, the rhs is of order 1/T, so

lim_{T→∞} P(|ȳ_T − µ| ≥ ε) = 0 ,

which implies the last claim of the theorem.
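The variance result can be checked numerically. The sketch below (an addition to these notes; it assumes Python with numpy and uses illustrative parameter values) simulates many sample means of a zero-mean AR(1) process, for which the long-run variance Σ_k γ_k = σ^2_ε/(1 − φ)^2 is available in closed form, and compares T Var(ȳ_T) with this limit.

import numpy as np

rng = np.random.default_rng(1)
phi, sigma2, T, n_rep = 0.5, 1.0, 1_000, 2_000

# long-run variance of a zero-mean AR(1): sum_k gamma_k = sigma2 / (1 - phi)^2 = 4
sigma2_lr = sigma2 / (1.0 - phi) ** 2

means = np.empty(n_rep)
for r in range(n_rep):
    eps = rng.normal(scale=np.sqrt(sigma2), size=T)
    y = np.empty(T)
    y[0] = eps[0]
    for t in range(1, T):
        y[t] = phi * y[t - 1] + eps[t]   # y_t = phi*y_{t-1} + eps_t
    means[r] = y.mean()

print(round(T * means.var(), 2), sigma2_lr)   # both approximately 4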

Theorem 1.3. Central Limit Theorem. Let y_t be

y_t = µ + Σ_{i=0}^{∞} φ_i ε_{t−i}

where ε_t is a sequence of iid random variables with finite variance and Σ_{i=0}^{∞} |φ_i| < ∞. Then,

ȳ_T ∼^a N( µ , (1/T) σ^2_LR )

where

σ^2_LR = Σ_{k=−∞}^{∞} γ_k .

Proof. Anderson (1971), p. 429.
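A small simulation sketch of the CLT (an addition to these notes; it assumes Python with numpy, uses an MA(1) process, for which σ^2_LR = σ^2_ε (1 + θ)^2, and illustrative parameter values): the standardized sample mean should be approximately standard normal.

import numpy as np

rng = np.random.default_rng(2)
mu, theta, sigma2, T, n_rep = 0.0, 0.7, 1.0, 500, 5_000

# long-run variance of an MA(1): gamma_0 + 2*gamma_1 = sigma2 * (1 + theta)^2
sigma2_lr = sigma2 * (1.0 + theta) ** 2

z = np.empty(n_rep)
for r in range(n_rep):
    eps = rng.normal(scale=np.sqrt(sigma2), size=T + 1)
    y = mu + eps[1:] + theta * eps[:-1]          # y_t = mu + eps_t + theta*eps_{t-1}
    z[r] = np.sqrt(T) * (y.mean() - mu) / np.sqrt(sigma2_lr)

# if the CLT holds, z is approximately N(0, 1)
print(round(z.std(), 2), round(np.mean(np.abs(z) < 1.96), 3))   # close to 1.0 and 0.95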

2 Linear Time Series Models

2.1 Linear Time Series Processes

Definition 2.1. Linear Time Series Process. The time series y_t is said to be linear if it can be written as

y_t = µ + Σ_{i=0}^{∞} ψ_i ε_{t−i}

where ε_t is a white noise process with variance σ^2_ε and ψ_0 = 1.

Proposition 2.1. Covariance Stationarity. Let {y_t} be a linear time series process

y_t = µ + Σ_{i=0}^{∞} ψ_i ε_{t−i}

where ε_t is a white noise process with variance σ^2_ε and ψ_0 = 1. If Σ_{i=0}^{∞} ψ_i^2 < ∞, then y_t is weakly stationary and it has the following properties:

E(y_t) = µ

Var(y_t) = σ^2_ε Σ_{i=0}^{∞} ψ_i^2

Cov(y_t, y_{t−k}) = σ^2_ε Σ_{i=k}^{∞} ψ_i ψ_{i−k} .

2.2 ARMA Processes

2.2.1 The First-Order Moving Average Process (MA)

Definition 2.2. MA(1) Process. The MA(1) model is defined as

y_t = c + ε_t + θ_1 ε_{t−1} ,    ε_t ∼ WN(σ^2_ε) .

Proposition 2.2. Unconditional Moments of a MA(1). Let {y_t} be a stationary MA(1) process. Then the unconditional mean and unconditional variance are respectively

E(y_t) = c   and   Var(y_t) = σ^2_ε (1 + θ_1^2) .

Proof of proposition 2.2. The claim follows from observing that

E(c + ε_t + θ_1 ε_{t−1}) = c + E(ε_t) + θ_1 E(ε_{t−1}) = c ;

and, since ε_t and ε_{t−1} are uncorrelated,

Var(c + ε_t + θ_1 ε_{t−1}) = Var(ε_t + θ_1 ε_{t−1}) = Var(ε_t) + θ_1^2 Var(ε_{t−1}) = σ^2_ε + θ_1^2 σ^2_ε .

Proposition 2.3. Conditional Moments of a MA(1). Let {y_t} be a stationary MA(1) process. Then the conditional mean and conditional variance given the information set available at t − 1 are respectively

E_{t−1}(y_t) = c + θ_1 ε_{t−1}   and   Var_{t−1}(y_t) = σ^2_ε .

Proof of proposition 2.3. The claim follows from observing that

E_{t−1}(y_t) = E_{t−1}(c + ε_t + θ_1 ε_{t−1}) = c + E_{t−1}(ε_t) + θ_1 ε_{t−1} = c + θ_1 ε_{t−1} ;

and

Var_{t−1}(c + ε_t + θ_1 ε_{t−1}) = Var_{t−1}(ε_t) = σ^2_ε .

Proposition 2.4. Autocovariance of a MA(1). Let {y_t} be a MA(1) process. Then the autocovariance and autocorrelation functions of the process for k ≥ 1 are

γ_k = θ_1 σ^2_ε for k = 1 and γ_k = 0 otherwise;   ρ_k = θ_1 / (1 + θ_1^2) for k = 1 and ρ_k = 0 otherwise.

Proof of Proposition 2.4. Without loss of generality set c = 0. Note that

Cov(y_t, y_{t−1}) = E((ε_t + θ_1 ε_{t−1})(ε_{t−1} + θ_1 ε_{t−2}))
                  = E(ε_t ε_{t−1} + θ_1 ε_t ε_{t−2} + θ_1 ε_{t−1}^2 + θ_1^2 ε_{t−1} ε_{t−2})
                  = θ_1 E(ε_{t−1}^2) = θ_1 σ^2_ε ,

and that for any k > 1

Cov(y_t, y_{t−k}) = E((ε_t + θ_1 ε_{t−1})(ε_{t−k} + θ_1 ε_{t−k−1}))
                  = E(ε_t ε_{t−k} + θ_1 ε_t ε_{t−k−1} + θ_1 ε_{t−1} ε_{t−k} + θ_1^2 ε_{t−1} ε_{t−k−1})
                  = 0 .

Dividing γ_k by γ_0 = σ^2_ε (1 + θ_1^2) gives the autocorrelation function.
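The MA(1) autocorrelation function can be verified by simulation. A minimal sketch (added to these notes; it assumes Python with numpy and uses illustrative parameter values):

import numpy as np

rng = np.random.default_rng(3)
c, theta1, sigma2, T = 0.5, 0.6, 1.0, 100_000

eps = rng.normal(scale=np.sqrt(sigma2), size=T + 1)
y = c + eps[1:] + theta1 * eps[:-1]              # MA(1): y_t = c + eps_t + theta1*eps_{t-1}

def sample_autocorr(x, k):
    xc = x - x.mean()
    return (xc[k:] * xc[:-k]).mean() / xc.var()

# theoretical ACF: rho_1 = theta1/(1 + theta1^2), rho_k = 0 for k > 1
print(round(sample_autocorr(y, 1), 3), round(theta1 / (1 + theta1 ** 2), 3))
print(round(sample_autocorr(y, 2), 3), 0.0)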

Proposition 2.5. Invertibility of the MA(1) process. Consider an autoregressive model with infinite lags

y_t = φ_0 + φ_1 y_{t−1} + φ_2 y_{t−2} + φ_3 y_{t−3} + φ_4 y_{t−4} + ... + ε_t ,

where φ_i = −θ_1^i with |θ_1| < 1. Then, y_t can equivalently be represented as the MA(1) process

y_t = φ_0 (1 − θ_1) + ε_t − θ_1 ε_{t−1} .

Proof of Proposition 2.5. Note that

y_t = φ_0 + φ_1 y_{t−1} + φ_2 y_{t−2} + φ_3 y_{t−3} + φ_4 y_{t−4} + ... + ε_t
    = φ_0 − θ_1 y_{t−1} − θ_1^2 y_{t−2} − θ_1^3 y_{t−3} − θ_1^4 y_{t−4} − ... + ε_t .

We can rewrite the model as

y_t + θ_1 y_{t−1} + θ_1^2 y_{t−2} + θ_1^3 y_{t−3} + θ_1^4 y_{t−4} + ... = φ_0 + ε_t

and it is also true that

y_{t−1} + θ_1 y_{t−2} + θ_1^2 y_{t−3} + θ_1^3 y_{t−4} + θ_1^4 y_{t−5} + ... = φ_0 + ε_{t−1} .

If you multiply the second equation by θ_1 and subtract it from the first one you get

y_t = φ_0 + ε_t − θ_1 φ_0 − θ_1 ε_{t−1} = φ_0 (1 − θ_1) + ε_t − θ_1 ε_{t−1} .
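A numerical check of the invertibility argument (a sketch added to these notes; it assumes Python with numpy and uses illustrative parameter values): simulate the MA(1) form and verify that the AR(∞) recursion with φ_i = −θ_1^i, truncated at p lags, recovers ε_t up to an error of order θ_1^(p+1).

import numpy as np

rng = np.random.default_rng(4)
phi0, theta1, T, p = 1.0, 0.5, 5_000, 30        # p = truncation of the AR(inf) lags

eps = rng.normal(size=T + 1)
y = phi0 * (1 - theta1) + eps[1:] - theta1 * eps[:-1]   # invertible MA(1) form
eps = eps[1:]                                           # align eps_t with y_t

# AR(inf) side: phi_i = -theta1**i; check y_t - phi0 - sum_i phi_i*y_{t-i} ~= eps_t
ar_part = np.zeros(T)
for i in range(1, p + 1):
    ar_part[p:] += (-theta1 ** i) * y[p - i:T - i]
resid = y[p:] - phi0 - ar_part[p:]
print(np.max(np.abs(resid - eps[p:])))                  # tiny, of order theta1**(p+1)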

2.2.2 The First-Order Autoregressive Process (AR)

Definition 2.3. AR(1) Process. The AR(1) model is defined as

y_t = c + φ_1 y_{t−1} + ε_t ,    ε_t ∼ WN(σ^2_ε) .

Proposition 2.6. Stationarity of an AR(1). Let {y_t} be an AR(1) process as in definition 2.3 such that |φ_1| < 1. Then {y_t} is a stationary linear time series process with representation

y_t = c / (1 − φ_1) + Σ_{j=0}^{∞} φ_1^j ε_{t−j} .

Remark 2.1. An AR(1) process with |φ_1| < 1 is called stationary.

Proof of proposition 2.6. We begin by substituting recursively:

y_t = c + φ_1 y_{t−1} + ε_t
    = c + φ_1 (c + φ_1 y_{t−2} + ε_{t−1}) + ε_t
    = c (1 + φ_1) + ε_t + φ_1 ε_{t−1} + φ_1^2 y_{t−2}
    = c (1 + φ_1) + ε_t + φ_1 ε_{t−1} + φ_1^2 (c + φ_1 y_{t−3} + ε_{t−2})
    = c (1 + φ_1 + φ_1^2) + ε_t + φ_1 ε_{t−1} + φ_1^2 ε_{t−2} + φ_1^3 y_{t−3}
    ...
    = c Σ_{j=0}^{k} φ_1^j + Σ_{j=0}^{k} φ_1^j ε_{t−j} + φ_1^{k+1} y_{t−k−1} .

As the stochastic process stretches back in time, k → ∞, one gets (since |φ_1| < 1)

y_t = c / (1 − φ_1) + Σ_{j=0}^{∞} φ_1^j ε_{t−j} .

Proposition 2.7. Unconditional Moments of an AR(1). Let {y_t} be a stationary AR(1) process. Then the unconditional mean and unconditional variance are respectively

E(y_t) = c / (1 − φ_1)   and   Var(y_t) = σ^2_ε / (1 − φ_1^2) .

Proof of proposition 2.7. Note that

E(y_t) = E( c / (1 − φ_1) + Σ_{j=0}^{∞} φ_1^j ε_{t−j} )
       = c / (1 − φ_1) + E( Σ_{j=0}^{∞} φ_1^j ε_{t−j} )
       = c / (1 − φ_1) + Σ_{j=0}^{∞} φ_1^j E(ε_{t−j})
       = c / (1 − φ_1) ,

and

Var(y_t) = Var( c / (1 − φ_1) + Σ_{j=0}^{∞} φ_1^j ε_{t−j} )
         = Var( Σ_{j=0}^{∞} φ_1^j ε_{t−j} )
         = Σ_{j=0}^{∞} φ_1^{2j} σ^2_ε
         = σ^2_ε / (1 − φ_1^2) .

Proposition 2.8. Autocovariance of an AR(1). Let {y_t} be a stationary AR(1) process. Then the autocovariance and autocorrelation functions of the process for k > 0 are

γ_k = σ^2_ε φ_1^k / (1 − φ_1^2)   and   ρ_k = φ_1^k .

Proof of proposition 2.8. We first find the autocovariance of order 1:

γ_1 = Cov(y_t, y_{t−1})
    = Cov(c + φ_1 y_{t−1} + ε_t, y_{t−1})
    = Cov(c, y_{t−1}) + φ_1 Cov(y_{t−1}, y_{t−1}) + Cov(ε_t, y_{t−1})
    = φ_1 Var(y_t) = σ^2_ε φ_1 / (1 − φ_1^2) .

Next we find an expression for the autocovariance of order k > 1 as a function of the autocovariance of order k − 1:

γ_k = Cov(y_t, y_{t−k})
    = Cov(c + φ_1 y_{t−1} + ε_t, y_{t−k})
    = Cov(c, y_{t−k}) + φ_1 Cov(y_{t−1}, y_{t−k}) + Cov(ε_t, y_{t−k})
    = φ_1 Cov(y_{t−1}, y_{t−k}) = φ_1 Cov(y_t, y_{t−k+1}) = φ_1 γ_{k−1} .

By applying the last expression recursively we get the autocovariance function formula stated in the claim of the proposition; dividing by γ_0 = Var(y_t) gives ρ_k = φ_1^k.
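The AR(1) autocorrelation function ρ_k = φ_1^k can also be checked by simulation. A minimal sketch (added to these notes; it assumes Python with numpy and uses illustrative parameter values):

import numpy as np

rng = np.random.default_rng(5)
c, phi1, sigma2, T = 0.2, 0.8, 1.0, 100_000

eps = rng.normal(scale=np.sqrt(sigma2), size=T)
y = np.empty(T)
y[0] = c / (1 - phi1)                       # start at the unconditional mean
for t in range(1, T):
    y[t] = c + phi1 * y[t - 1] + eps[t]     # AR(1) recursion

def sample_autocorr(x, k):
    xc = x - x.mean()
    return (xc[k:] * xc[:-k]).mean() / xc.var()

# theoretical ACF of a stationary AR(1): rho_k = phi1**k
for k in (1, 2, 5):
    print(k, round(sample_autocorr(y, k), 3), round(phi1 ** k, 3))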

2.2.3 The First-Order Autoregressive Moving Average Process (ARMA)

Definition 2.4. ARMA(1,1) Process. The ARMA(1,1) model is defined as

y_t = c + φ_1 y_{t−1} + θ_1 ε_{t−1} + ε_t ,    ε_t ∼ WN(σ^2_ε) .

Proposition 2.9. Stationarity of an ARMA(1,1). Let {y_t} be an ARMA(1,1) process as in definition 2.4 such that |φ_1| < 1. Then {y_t} is a stationary linear time series process with representation

y_t = c / (1 − φ_1) + ε_t + (φ_1 + θ_1) Σ_{j=1}^{∞} φ_1^{j−1} ε_{t−j} .

Remark 2.2. An ARMA(1,1) process with |φ_1| < 1 is called stationary.

Proof of proposition 2.9. We begin by substituting recursively:

y_t = c + φ_1 y_{t−1} + θ_1 ε_{t−1} + ε_t
    = c + φ_1 (c + φ_1 y_{t−2} + θ_1 ε_{t−2} + ε_{t−1}) + θ_1 ε_{t−1} + ε_t
    = c (1 + φ_1) + ε_t + (φ_1 + θ_1) ε_{t−1} + φ_1 θ_1 ε_{t−2} + φ_1^2 y_{t−2}
    = c (1 + φ_1) + ε_t + (φ_1 + θ_1) ε_{t−1} + φ_1 θ_1 ε_{t−2} + φ_1^2 (c + φ_1 y_{t−3} + θ_1 ε_{t−3} + ε_{t−2})
    = c (1 + φ_1 + φ_1^2) + ε_t + (φ_1 + θ_1) ε_{t−1} + φ_1 (φ_1 + θ_1) ε_{t−2} + φ_1^2 θ_1 ε_{t−3} + φ_1^3 y_{t−3}
    ...
    = c Σ_{j=0}^{k} φ_1^j + ε_t + (φ_1 + θ_1) Σ_{j=1}^{k} φ_1^{j−1} ε_{t−j} + φ_1^k θ_1 ε_{t−k−1} + φ_1^{k+1} y_{t−k−1} .

As the stochastic process stretches back in time, k → ∞, the last two terms vanish (since |φ_1| < 1) and one gets

y_t = c / (1 − φ_1) + ε_t + (φ_1 + θ_1) Σ_{j=1}^{∞} φ_1^{j−1} ε_{t−j} .

Proposition 2.10. Unconditional Moments of an ARMA(1,1). Let {y_t} be a stationary ARMA(1,1) process. Then the unconditional mean and unconditional variance are respectively

E(y_t) = c / (1 − φ_1)   and   Var(y_t) = σ^2_ε (θ_1^2 + 2 θ_1 φ_1 + 1) / (1 − φ_1^2) .

Proof of proposition 2.10. Note that

E(y_t) = E(c + φ_1 y_{t−1} + θ_1 ε_{t−1} + ε_t)
       = c + φ_1 E(y_{t−1})
       = c + φ_1 E(y_t) .

Solving the last expression with respect to E(y_t) one gets

E(y_t) = c / (1 − φ_1) .

Next, note that

Var(y_t) = Var(c + φ_1 y_{t−1} + θ_1 ε_{t−1} + ε_t)
         = φ_1^2 Var(y_{t−1}) + θ_1^2 Var(ε_{t−1}) + 2 θ_1 φ_1 Cov(y_{t−1}, ε_{t−1}) + Var(ε_t) .

Note that Cov(y_{t−1}, ε_{t−1}) = σ^2_ε, thus we get the equation

Var(y_t) = φ_1^2 Var(y_t) + θ_1^2 σ^2_ε + 2 θ_1 φ_1 σ^2_ε + σ^2_ε .

Solving with respect to Var(y_t) gives

Var(y_t) = σ^2_ε (θ_1^2 + 2 θ_1 φ_1 + 1) / (1 − φ_1^2) .

Proposition 2.11. Autocovariance of an ARMA(1,1). Let {y_t} be a stationary ARMA(1,1) process. Then the autocovariance function of the process for k > 0 is

γ_k = σ^2_ε (φ_1 + θ_1)(1 + φ_1 θ_1) / (1 − φ_1^2) for k = 1 ,   and   γ_k = φ_1^{k−1} γ_1 otherwise.

Proof of proposition 2.11. We first find the autocovariance of order 1:

γ_1 = Cov(y_t, y_{t−1})
    = Cov(c + φ_1 y_{t−1} + θ_1 ε_{t−1} + ε_t, y_{t−1})
    = Cov(c, y_{t−1}) + φ_1 Cov(y_{t−1}, y_{t−1}) + θ_1 Cov(ε_{t−1}, y_{t−1}) + Cov(ε_t, y_{t−1})
    = φ_1 Var(y_t) + θ_1 σ^2_ε
    = φ_1 σ^2_ε (θ_1^2 + 2 θ_1 φ_1 + 1) / (1 − φ_1^2) + θ_1 σ^2_ε
    = σ^2_ε (φ_1 + θ_1)(1 + φ_1 θ_1) / (1 − φ_1^2) .

Next we find an expression for the autocovariance of order k > 1 as a function of the autocovariance of order k − 1, using that Cov(ε_{t−1}, y_{t−k}) = Cov(ε_t, y_{t−k}) = 0 for k > 1:

γ_k = Cov(y_t, y_{t−k})
    = Cov(c + φ_1 y_{t−1} + θ_1 ε_{t−1} + ε_t, y_{t−k})
    = Cov(c, y_{t−k}) + φ_1 Cov(y_{t−1}, y_{t−k}) + θ_1 Cov(ε_{t−1}, y_{t−k}) + Cov(ε_t, y_{t−k})
    = φ_1 Cov(y_{t−1}, y_{t−k}) = φ_1 Cov(y_t, y_{t−k+1}) = φ_1 γ_{k−1} .

By applying the last expression recursively we get the autocovariance function formula stated in the claim of the proposition.
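The ARMA(1,1) moment formulas of propositions 2.10 and 2.11 can be checked by simulation. A minimal sketch (added to these notes; it assumes Python with numpy and uses illustrative parameter values):

import numpy as np

rng = np.random.default_rng(6)
c, phi1, theta1, sigma2, T = 0.0, 0.7, 0.3, 1.0, 200_000

eps = rng.normal(scale=np.sqrt(sigma2), size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = c + phi1 * y[t - 1] + theta1 * eps[t - 1] + eps[t]   # ARMA(1,1) recursion

# theoretical moments from propositions 2.10 and 2.11
var_y = sigma2 * (theta1 ** 2 + 2 * theta1 * phi1 + 1) / (1 - phi1 ** 2)
gamma1 = sigma2 * (phi1 + theta1) * (1 + phi1 * theta1) / (1 - phi1 ** 2)

yc = y - y.mean()
print(round(yc.var(), 2), round(var_y, 2))                           # variances match
print(round((yc[1:] * yc[:-1]).mean(), 2), round(gamma1, 2))         # gamma_1 matches
print(round((yc[2:] * yc[:-2]).mean(), 2), round(phi1 * gamma1, 2))  # gamma_2 = phi1*gamma_1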

2.3 Forecasting Under Square Loss

Proposition 2.12. Let y_t be a stationary time series. Let ŷ_{T+h|T} denote the forecast of the series at period T + h conditional on the information available at time T, and let µ_{T+h|T} = E_T(y_{T+h}) denote the conditional mean. Then, under square loss, the optimal forecast of y_{T+h} is its conditional mean given the information available at time T.

Proof. To see this note that

E_T(y_{T+h} − ŷ_{T+h|T})^2 = E_T(y_{T+h} + µ_{T+h|T} − µ_{T+h|T} − ŷ_{T+h|T})^2
                           = E_T(y_{T+h} − µ_{T+h|T})^2 + E_T(µ_{T+h|T} − ŷ_{T+h|T})^2 .

The last equation follows from the fact that E_T[(y_{T+h} − µ_{T+h|T})(µ_{T+h|T} − ŷ_{T+h|T})] = 0. Notice that ŷ_{T+h|T} enters only the second term of the last expression. Thus,

arg min_{ŷ_{T+h|T}} E_T(y_{T+h} − ŷ_{T+h|T})^2 = arg min_{ŷ_{T+h|T}} E_T(µ_{T+h|T} − ŷ_{T+h|T})^2 = µ_{T+h|T} .

Proposition 2.13. Let y_t be a MA(1) process. Then, the optimal h-step ahead forecast and h-step ahead forecast error variance conditional on the information available at time T are

ŷ_{T+h|T} = c + θ_1 ε_T for h = 1 and ŷ_{T+h|T} = c otherwise;   Var(e_{T+h|T}) = σ^2_ε for h = 1 and Var(e_{T+h|T}) = σ^2_ε (1 + θ_1^2) otherwise.

Proof. Note that for h equal 1 we have

ŷ_{T+1|T} = E_T(y_{T+1}) = E_T(c + θ_1 ε_T + ε_{T+1}) = c + θ_1 ε_T

and

Var_T(e_{T+1|T}) = Var_T(y_{T+1} − ŷ_{T+1|T}) = Var_T(ε_{T+1}) = σ^2_ε .

For h > 1 we have

ŷ_{T+h|T} = E_T(y_{T+h}) = E_T(c + θ_1 ε_{T+h−1} + ε_{T+h}) = c

and

Var_T(e_{T+h|T}) = Var_T(y_{T+h} − ŷ_{T+h|T}) = Var_T(θ_1 ε_{T+h−1} + ε_{T+h}) = (1 + θ_1^2) σ^2_ε .

Proposition 2.14. Let y_t be a stationary AR(1) process. Then, the optimal h-step ahead forecast and h-step ahead forecast error variance conditional on the information available at time T are

ŷ_{T+h|T} = µ + φ_1^h (y_T − µ)   and   Var(e_{T+h|T}) = σ^2_ε Σ_{i=0}^{h−1} φ_1^{2i} .

Proof. Note that a stationary AR(1) can equivalently be represented as

y_t = c + φ_1 y_{t−1} + ε_t = µ (1 − φ_1) + φ_1 y_{t−1} + ε_t = µ + φ_1 (y_{t−1} − µ) + ε_t ,

since µ = c / (1 − φ_1). Note that for h equal 1 we have

ŷ_{T+1|T} = E_T(y_{T+1}) = E_T(µ + φ_1 (y_T − µ) + ε_{T+1}) = µ + φ_1 (y_T − µ)

and

Var_T(e_{T+1|T}) = Var_T(y_{T+1} − ŷ_{T+1|T}) = Var_T(ε_{T+1}) = σ^2_ε .

For h = 2 we have

ŷ_{T+2|T} = E_T(y_{T+2}) = E_T(µ + φ_1 (y_{T+1} − µ) + ε_{T+2})
          = µ + φ_1 E_T(y_{T+1} − µ)
          = µ + φ_1^2 (y_T − µ)

and

Var_T(e_{T+2|T}) = Var_T(y_{T+2} − ŷ_{T+2|T}) = Var_T(φ_1 (y_{T+1} − µ) + ε_{T+2} − φ_1^2 (y_T − µ))
                 = Var_T(φ_1^2 (y_T − µ) + φ_1 ε_{T+1} + ε_{T+2} − φ_1^2 (y_T − µ))
                 = Var_T(φ_1 ε_{T+1} + ε_{T+2})
                 = (1 + φ_1^2) σ^2_ε .

Iterating the same argument for general h gives the claim.
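The AR(1) forecast formulas of proposition 2.14 are easy to turn into a small function. A sketch added to these notes (plain Python, with illustrative parameter values):

def ar1_forecast(y_T, c, phi1, sigma2_eps, h):
    """h-step ahead forecast and forecast-error variance of a stationary AR(1) (proposition 2.14)."""
    mu = c / (1 - phi1)
    y_hat = mu + phi1 ** h * (y_T - mu)                           # conditional mean forecast
    var_e = sigma2_eps * sum(phi1 ** (2 * i) for i in range(h))   # sum_{i=0}^{h-1} phi1^(2i)
    return y_hat, var_e

# illustrative values: c = 0.5, phi1 = 0.8, sigma2_eps = 1.0, last observation y_T = 4.0
for h in (1, 2, 5, 20):
    y_hat, var_e = ar1_forecast(4.0, 0.5, 0.8, 1.0, h)
    print(h, round(y_hat, 3), round(var_e, 3))
# as h grows the forecast reverts to mu = 2.5 and the variance approaches sigma2_eps/(1 - phi1^2) = 2.78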

3 Modelling the Conditional Variance

3.1 ARCH Model

Definition 3.1. ARCH(1) Process. The ARCH(1) model is defined as

y_t = √(σ_t^2) z_t ,    z_t ∼ iid D(0, 1)
σ_t^2 = ω + α y_{t−1}^2

where ω > 0 and 0 ≤ α < 1.

Proposition 3.1. Unconditional Moments of an ARCH(1). Let {y_t} be an ARCH(1) process. Then the unconditional mean, unconditional variance and kurtosis are respectively

E(y_t) = 0 ,   Var(y_t) = ω / (1 − α)   and   kurt(y_t) = µ_4 (1 − α^2) / (1 − µ_4 α^2) ,

where µ_4 = E(z_t^4).

Proof of proposition 3.1. Note that

E(y_t) = E(√(σ_t^2) z_t) = E(E_{t−1}(√(σ_t^2) z_t)) = E(√(σ_t^2) E_{t−1}(z_t)) = 0

and

Var(y_t) = E(y_t^2) = E(σ_t^2 z_t^2)
         = E(E_{t−1}(σ_t^2 z_t^2)) = E(σ_t^2 E_{t−1}(z_t^2)) = E(σ_t^2)
         = E(ω + α y_{t−1}^2)
         = ω + α Var(y_{t−1}) .

Denoting σ^2 = Var(y_t), stationarity gives σ^2 = ω + α σ^2. Solving the last equation with respect to σ^2 gives the claim of the proposition.

Last, note that

kurt(y_t) = E(y_t^4) / [E(y_t^2)]^2 = E((σ_t^2)^2 E_{t−1}(z_t^4)) / [E(σ_t^2)]^2 = µ_4 E((σ_t^2)^2) / [E(σ_t^2)]^2 .

Note that

[E(σ_t^2)]^2 = (σ^2)^2

and, using σ_t^2 = ω + α y_{t−1}^2 = σ^2 + α (y_{t−1}^2 − σ^2),

E((σ_t^2)^2) = E( (σ^2 + α (y_{t−1}^2 − σ^2))^2 )
             = (σ^2)^2 + α^2 E( (y_{t−1}^2 − σ^2)^2 )
             = (σ^2)^2 + α^2 E(y_{t−1}^4) + α^2 (σ^2)^2 − 2 α^2 E(y_{t−1}^2) σ^2
             = (σ^2)^2 + α^2 E(y_{t−1}^4) − α^2 (σ^2)^2
             = (σ^2)^2 + α^2 µ_4 E((σ_{t−1}^2)^2) − α^2 (σ^2)^2 .

By stationarity E((σ_t^2)^2) = E((σ_{t−1}^2)^2), so

E((σ_t^2)^2) = (σ^2)^2 (1 − α^2) / (1 − µ_4 α^2) ,

and substituting into the kurtosis expression gives the claim.

Proposition 3.2. ACF of y_t of an ARCH(1). Let {y_t} be an ARCH(1) process. Then the autocovariance function of the observations y_t is γ_k(y_t) = 0 for any k ≠ 0.

Proof of proposition 3.2. Note that, for k ≥ 1,

Cov(y_t, y_{t−k}) = E(y_t y_{t−k}) = E(E_{t−1}(√(σ_t^2) z_t y_{t−k})) = E(√(σ_t^2) y_{t−k} E_{t−1}(z_t)) = 0 .

Proposition 3.3. ACF of y_t^2 of an ARCH(1). Let {y_t} be an ARCH(1) process. Then the autocorrelation function of the squared observations y_t^2 is ρ_k(y_t^2) = α^k for any k ≠ 0.

Proof of proposition 3.3. Write σ^2 = Var(y_t) and note that σ_t^2 = σ^2 + α (y_{t−1}^2 − σ^2). For k = 1,

Cov(y_t^2, y_{t−1}^2) = E((y_t^2 − σ^2)(y_{t−1}^2 − σ^2))
                      = E( ((σ^2 + α (y_{t−1}^2 − σ^2)) z_t^2 − σ^2)(y_{t−1}^2 − σ^2) )
                      = E( ((σ^2 z_t^2 − σ^2) + α (y_{t−1}^2 − σ^2) z_t^2)(y_{t−1}^2 − σ^2) )
                      = E(σ^2 (y_{t−1}^2 − σ^2)(z_t^2 − 1)) + α E((y_{t−1}^2 − σ^2)^2 z_t^2)
                      = α E((y_{t−1}^2 − σ^2)^2) ,

since z_t is independent of y_{t−1} and E(z_t^2 − 1) = 0. Next, for k > 1,

Cov(y_t^2, y_{t−k}^2) = E((y_t^2 − σ^2)(y_{t−k}^2 − σ^2))
                      = E( ((σ^2 + α (y_{t−1}^2 − σ^2)) z_t^2 − σ^2)(y_{t−k}^2 − σ^2) )
                      = E( ((σ^2 z_t^2 − σ^2) + α (y_{t−1}^2 − σ^2) z_t^2)(y_{t−k}^2 − σ^2) )
                      = E(σ^2 (y_{t−k}^2 − σ^2)(z_t^2 − 1)) + α E((y_{t−1}^2 − σ^2)(y_{t−k}^2 − σ^2) z_t^2)
                      = α E((y_{t−1}^2 − σ^2)(y_{t−k}^2 − σ^2))
                      = α Cov(y_{t−1}^2, y_{t−k}^2)
                      = α Cov(y_t^2, y_{t−k+1}^2)
                      = α γ_{k−1}(y_t^2) .

Iterating, γ_k(y_t^2) = α^k Var(y_t^2), and dividing by γ_0(y_t^2) = Var(y_t^2) gives ρ_k(y_t^2) = α^k.

Proposition 3.4. h-step ahead variance forecast for an ARCH(1). Let {y_t} be an ARCH(1) process. Then the h-step ahead forecast of the conditional variance is

σ_{T+h|T}^2 = ω Σ_{i=0}^{h−1} α^i + α^h y_T^2 .

Proof of proposition 3.4. For h = 1,

σ_{T+1|T}^2 = E_T(σ_{T+1}^2) = E_T(ω + α y_T^2) = ω + α y_T^2 ,

and for h = 2,

σ_{T+2|T}^2 = E_T(σ_{T+2}^2) = E_T(ω + α y_{T+1}^2) = ω + α E_T(y_{T+1}^2) = ω + α E_T(σ_{T+1}^2)
            = ω + α (ω + α y_T^2) = ω (1 + α) + α^2 y_T^2 .

Iterating the recursion σ_{T+h|T}^2 = ω + α σ_{T+h−1|T}^2 gives the claim.
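The ARCH(1) variance forecast can be computed either with the closed form of proposition 3.4 or with the recursion used in its proof; both should agree. A sketch added to these notes (plain Python, with illustrative parameter values):

def arch1_variance_forecast(y_T, omega, alpha, h):
    """Closed-form h-step ahead conditional variance forecast of an ARCH(1) (proposition 3.4)."""
    return omega * sum(alpha ** i for i in range(h)) + alpha ** h * y_T ** 2

def arch1_variance_forecast_rec(y_T, omega, alpha, h):
    """Same forecast via the recursion sigma2_{T+j|T} = omega + alpha * sigma2_{T+j-1|T}."""
    sigma2 = omega + alpha * y_T ** 2            # one-step ahead forecast
    for _ in range(h - 1):
        sigma2 = omega + alpha * sigma2
    return sigma2

omega, alpha, y_T = 0.1, 0.4, 1.5
for h in (1, 2, 5, 20):
    print(h, round(arch1_variance_forecast(y_T, omega, alpha, h), 4),
          round(arch1_variance_forecast_rec(y_T, omega, alpha, h), 4))
# both agree and converge to the unconditional variance omega/(1 - alpha) = 0.1667 as h grows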

3.2 GARCH Model

Definition 3.2. GARCH(1,1) Process. The GARCH(1,1) model is defined as

y_t = √(σ_t^2) z_t ,    z_t ∼ iid D(0, 1)
σ_t^2 = ω + α y_{t−1}^2 + β σ_{t−1}^2

where ω > 0, 0 < α < 1, 0 ≤ β < 1 and α + β < 1.

Proposition 3.5. Unconditional Variance of a GARCH(1,1). Let {y_t} be a GARCH(1,1) process. Then the unconditional variance of the process is

Var(y_t) = ω / (1 − α − β) .

Proof of proposition 3.5. Note that

Var(y_t) = E(E_{t−1}(σ_t^2 z_t^2)) = E(σ_t^2 E_{t−1}(z_t^2)) = E(σ_t^2)
         = E(ω + α y_{t−1}^2 + β σ_{t−1}^2)
         = ω + α Var(y_{t−1}) + β E(σ_{t−1}^2) .

Since E(σ_{t−1}^2) = Var(y_{t−1}) = Var(y_t) = σ^2 by stationarity, we get σ^2 = ω + α σ^2 + β σ^2. Solving the last equation with respect to σ^2 gives the claim of the proposition.

Proposition 3.6. ARCH(∞) representation of a GARCH(1,1). Let {y_t} be a GARCH(1,1) process as in definition 3.2. Then

σ_t^2 = ω / (1 − β) + α Σ_{j=1}^{∞} β^{j−1} y_{t−j}^2 .

Proof of proposition 3.6. We begin by substituting recursively:

σ_t^2 = ω + α y_{t−1}^2 + β σ_{t−1}^2
      = ω + α y_{t−1}^2 + β (ω + α y_{t−2}^2 + β σ_{t−2}^2)
      = ω (1 + β) + α y_{t−1}^2 + α β y_{t−2}^2 + β^2 σ_{t−2}^2
      ...
      = ω Σ_{j=0}^{k−1} β^j + α Σ_{j=1}^{k} β^{j−1} y_{t−j}^2 + β^k σ_{t−k}^2 .

Letting k → ∞ (since 0 ≤ β < 1, the last term vanishes) gives the claim.

Proposition 3.7. ARMA(1,1) representation of a GARCH(1,1). Let {y_t} be a GARCH(1,1) process as in definition 3.2. Then

y_t^2 = ω + (α + β) y_{t−1}^2 − β η_{t−1} + η_t

where η_t = y_t^2 − σ_t^2 = (z_t^2 − 1) σ_t^2 .

Proof of proposition 3.7. Starting from the variance equation and adding and subtracting the appropriate terms,

σ_t^2 = ω + α y_{t−1}^2 + β σ_{t−1}^2
y_t^2 − (y_t^2 − σ_t^2) = ω + α y_{t−1}^2 + β σ_{t−1}^2 − β y_{t−1}^2 + β y_{t−1}^2
y_t^2 = ω + (α + β) y_{t−1}^2 − β (y_{t−1}^2 − σ_{t−1}^2) + (y_t^2 − σ_t^2) .

Proposition 3.8. h-step ahead variance forecast for a GARCH(1,1). Let {y_t} be a GARCH(1,1) process. Then the h-step ahead forecast of the conditional variance is

σ_{T+h|T}^2 = ω + α y_T^2 + β σ_T^2 for h = 1 ,   and   σ_{T+h|T}^2 = σ^2 + (α + β)^{h−1} (σ_{T+1|T}^2 − σ^2) otherwise,

where σ^2 = ω / (1 − α − β) is the unconditional variance.
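A small implementation sketch of this forecast (added to these notes; plain Python, with illustrative parameter values): the forecast decays geometrically at rate α + β from the one-step ahead forecast toward the unconditional variance.

def garch11_variance_forecast(y_T, sigma2_T, omega, alpha, beta, h):
    """h-step ahead conditional variance forecast of a GARCH(1,1) (proposition 3.8)."""
    sigma2_1 = omega + alpha * y_T ** 2 + beta * sigma2_T   # one-step ahead forecast
    if h == 1:
        return sigma2_1
    sigma2_bar = omega / (1 - alpha - beta)                 # unconditional variance
    return sigma2_bar + (alpha + beta) ** (h - 1) * (sigma2_1 - sigma2_bar)

omega, alpha, beta = 0.05, 0.08, 0.90
for h in (1, 5, 22, 252):
    print(h, round(garch11_variance_forecast(2.0, 3.0, omega, alpha, beta, h), 3))
# the forecasts move from 3.07 toward omega/(1 - alpha - beta) = 2.5 as h grows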

4 Factor ARCH
Definition 4.1. Factor ARCH Process. Let (r_{1t}, ..., r_{n−1,t}, r_{mt})' be generated as

r_{it} = λ_i r_{mt} + √(σ_{it}^2) z_{it} ,    z_{it} ∼ iid D(0, 1)
r_{mt} = √(σ_{mt}^2) z_{mt} ,    z_{mt} ∼ iid D(0, 1)

where Cov(z_{it}, z_{mt}) = 0 for any i, Cov(z_{it}, z_{jt}) = 0 for any i ≠ j, and σ_{it}^2, σ_{mt}^2 are known given the information available at t − 1.

Proposition 4.1. Covariance Matrix of the Factor ARCH Process. Let (r_{1t}, ..., r_{n−1,t}, r_{mt})' be generated by a Factor ARCH process. Then,

Var_{t−1}(r_{it}) = λ_i^2 σ_{mt}^2 + σ_{it}^2   and   Cov_{t−1}(r_{it}, r_{jt}) = λ_i λ_j σ_{mt}^2 .

Proof of proposition 4.1. Note that

Var_{t−1}(r_{it}) = E_{t−1}(r_{it}^2) = E_{t−1}( (λ_i r_{mt} + √(σ_{it}^2) z_{it})^2 )
                  = E_{t−1}( λ_i^2 r_{mt}^2 + σ_{it}^2 z_{it}^2 + 2 λ_i r_{mt} √(σ_{it}^2) z_{it} )
                  = E_{t−1}( λ_i^2 σ_{mt}^2 z_{mt}^2 + σ_{it}^2 z_{it}^2 + 2 λ_i √(σ_{it}^2) √(σ_{mt}^2) z_{mt} z_{it} )
                  = λ_i^2 σ_{mt}^2 + σ_{it}^2 ,

and

Cov_{t−1}(r_{it}, r_{jt}) = E_{t−1}(r_{it} r_{jt}) = E_{t−1}( (λ_i r_{mt} + √(σ_{it}^2) z_{it})(λ_j r_{mt} + √(σ_{jt}^2) z_{jt}) )
                          = E_{t−1}( λ_i λ_j r_{mt}^2 + λ_j r_{mt} √(σ_{it}^2) z_{it} + λ_i r_{mt} √(σ_{jt}^2) z_{jt} + √(σ_{it}^2) √(σ_{jt}^2) z_{it} z_{jt} )
                          = λ_i λ_j σ_{mt}^2 .
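Proposition 4.1 implies that the full conditional covariance matrix has the one-factor form σ_{mt}^2 λ λ' + diag(σ_{it}^2). The sketch below (added to these notes; it assumes Python with numpy, and all input values are hypothetical) builds this matrix, appending the market return as the last element, whose variance is σ_{mt}^2 and whose covariance with asset i is λ_i σ_{mt}^2 by the same calculation as in the proof.

import numpy as np

def factor_arch_cov(lam, sigma2_i, sigma2_m):
    """Conditional covariance matrix of (r_1t, ..., r_{n-1,t}, r_mt) implied by proposition 4.1."""
    lam = np.asarray(lam, dtype=float)              # factor loadings lambda_i, i = 1, ..., n-1
    sigma2_i = np.asarray(sigma2_i, dtype=float)    # idiosyncratic variances sigma_{it}^2
    lam_full = np.append(lam, 1.0)                  # the market return loads on itself with weight 1
    sig_full = np.append(sigma2_i, 0.0)             # and has no idiosyncratic term
    return sigma2_m * np.outer(lam_full, lam_full) + np.diag(sig_full)

# three assets plus the market factor, with hypothetical input values
cov = factor_arch_cov(lam=[0.8, 1.1, 0.5], sigma2_i=[0.4, 0.6, 0.3], sigma2_m=2.0)
print(np.round(cov, 3))   # off-diagonals are lambda_i*lambda_j*sigma_m^2, diagonals add sigma_i^2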
