
Table of Contents

1 A AR Processes
  1.1 Notation
    1.1.1 AR(1)
    1.1.2 AR(2)
    1.1.3 AR(p)
  1.2 Finding Roots
  1.3 Autocovariance/Autocorrelation
    1.3.1 AR(1)
    1.3.2 AR(2)
    1.3.3 AR(p)
  1.4 Partial Autocorrelation
  1.5 Parameter Estimation
    1.5.1 Least Squares
    1.5.2 Alternative: ML Estimation
    1.5.3 Confidence Intervals
2 B MA Processes
  2.1 Wold Decomposition
    2.1.1 Wold Decomposition and Optimal Linear Estimation (not relevant until Satz 6.2, p. 145)
  2.2 AR(1) as MA(∞)
  2.3 MA(1) as AR(∞)
  2.4 MA(1) Stationarity
  2.5 Parameter Estimation
  2.6 AR/MA AC/PAC Structure
  2.7 Invertibility
  2.8 Parameter Estimation
3 C ARMA Processes
  3.1 ARMA(1,1)
  3.2 ARMA(1,1) as MA(∞)
  3.3 Autocovariance ARMA(1,1)
  3.4 Parameter Estimation
  3.5 ARIMA(p,d,q)
  3.6 Regression with Autocorrelated Errors
    3.6.1 GLS Estimation
  3.7 ARMAX Models
    3.7.1 Parameter Estimation ARMAX(1,0,1)
    3.7.2 ARMAX(p,s,q)
    3.7.3 ARMAX and Regression
    3.7.4 ARMAX Estimation
4 D Information Criteria and Model Selection
  4.1 Kullback-Leibler Entropy
  4.2 Akaike
  4.3 Bayes
  4.4 Hannan-Quinn Criterion
  4.5 Model Selection à la Box-Jenkins
  4.6 Model Selection with Information Criteria
  4.7 Test Diagnostics
    4.7.1 Normality of Errors (Residuals)
    4.7.2 Portmanteau Test, Ljung-Box Statistic
    4.7.3 Durbin-Watson Test
    4.7.4 Breusch-Godfrey Lagrange Multiplier (Score) Test
    4.7.5 ARCH-LM Test
    4.7.6 Jarque-Bera: Testing for Normality
    4.7.7 Likelihood Quotient (LQ), Lagrange Multiplier (LM) and Wald Tests
    4.7.8 ADF Test
5 E Basic Vocab
  5.1 Convergence in Probability
  5.2 Equicorrelation
  5.3 Past (Vergangenheit)
  5.4 Panel
  5.5 Moments and Time-Mean Values
  5.6 Strong Stationarity
  5.7 Weak Stationarity
  5.8 Ergodicity
  5.9 Gaussian White Noise
  5.10 Random Walk
  5.11 Operators
  5.12 Maximum Likelihood, Fisher Information
  5.13 Linear Predictor
6 Conditionally Heteroscedastic Processes
  6.1 Definitions
  6.2 ARCH(1)
  6.3 Kurtosis of ARCH(1)
  6.4 Parameter Estimation
    6.4.1 General Parameter Estimation for GARCH Models
    6.4.2 Asymptotics
  6.5 ARCH(p)
  6.6 GARCH
    6.6.1 GARCH(1,1)
    6.6.2 GARCH(p,q)
  6.7 Stochastic Volatility: ARV Models
    6.7.1 ARV Model in State Space Form
  6.8 Asymmetric Expansions
    6.8.1 ARMA-GARCH
    6.8.2 Threshold ARCH
    6.8.3 Quadratic GARCH
    6.8.4 Exponential GARCH
    6.8.5 ARCH in Mean
  6.9 Model Selection
7 State Space Models and Kalman Filters
  7.1 The Linear Filtering Problem
  7.2 State Space Models
    7.2.1 Architecture
    7.2.2 Architecture, General Case
    7.2.3 Moments and Stationarity
    7.2.4 AR(p) Models and VAR Form
    7.2.5 ARMA Models in State Space Form
  7.3 Kalman Filters
    7.3.1 Preliminaries
    7.3.2 General Functioning of the Kalman Filter
    7.3.3 Measurement Update
    7.3.4 Innovation and Efficient Kalman Update
    7.3.5 Time Update and Kalman Filter Algorithm
    7.3.6 State Estimation and Forecasting
    7.3.7 Parameter Estimation

1 A AR Processes

1.1 Notation

1.1.1 AR(1)

From the Wold decomposition, substituting $\phi^j$ for $\psi_j$:

$$y_t = \varepsilon_t + \phi\varepsilon_{t-1} + \phi^2\varepsilon_{t-2} + \phi^3\varepsilon_{t-3} + \dots = \varepsilon_t + \phi(\varepsilon_{t-1} + \phi\varepsilon_{t-2} + \dots) = \varepsilon_t + \phi y_{t-1}$$

where $\varepsilon_t$ is an error that takes the stochastic nature of the process into account. If the time series oscillates around a mean value $\mu$, one can write the model in terms of deviations from the mean:

$$y_t - \mu = \phi(y_{t-1} - \mu) + \varepsilon_t \quad\Leftrightarrow\quad y_t = (1-\phi)\mu + \phi y_{t-1} + \varepsilon_t$$

This shows that in general one must take account of a constant term $\theta_0 = (1-\phi)\mu$.
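A minimal numerical sketch (not part of the notes; numpy, the value of $\phi$ and the sample length are illustrative assumptions) checking that the recursive AR(1) form and the Wold sum generate the same path:

```python
import numpy as np

rng = np.random.default_rng(0)
phi, sigma, T = 0.7, 1.0, 200        # illustrative choices
eps = rng.normal(0.0, sigma, T)

# recursive form: y_t = phi*y_{t-1} + eps_t, started at y_0 = eps_0
y_rec = np.empty(T)
y_rec[0] = eps[0]
for t in range(1, T):
    y_rec[t] = phi * y_rec[t - 1] + eps[t]

# Wold form: y_t = sum_j phi^j * eps_{t-j}
y_wold = np.array([sum(phi**j * eps[t - j] for j in range(t + 1)) for t in range(T)])
assert np.allclose(y_rec, y_wold)
```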

Expected value:

$$\mu_t = E(y_t) = E(\phi y_{t-1} + \varepsilon_t) = \phi\mu_{t-1} = \phi^t\mu_0$$

Variance:

$$\sigma_{y_t}^2 = \mathrm{Var}(y_t) = \mathrm{Var}(\phi y_{t-1} + \varepsilon_t) = \phi^2\sigma_{y_{t-1}}^2 + \sigma^2 = \phi^2(\phi^2\sigma_{y_{t-2}}^2 + \sigma^2) + \sigma^2 = \dots = \phi^{2t}\sigma_{y_0}^2 + \sum_{j=1}^{t}\phi^{2(t-j)}\sigma^2$$

The process is stationary for $|\phi| < 1$, with $\mu_t \to \mu = 0$.

Letting $t \to \infty$, for $|\phi| < 1$, gives the stationary variance

$$\sigma_y^2 = \lim_{t\to\infty}\Big[\phi^{2t}\sigma_{y_0}^2 + \sum_{j=1}^{t}\phi^{2(t-j)}\sigma^2\Big] = \lim_{t\to\infty}\sum_{k=0}^{t-1}\phi^{2k}\sigma^2 = \frac{\sigma^2}{1-\phi^2} = \gamma_0$$

Or similarly by solving

$$\sigma_y^2 = \phi^2\sigma_y^2 + \sigma^2$$
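A quick simulation check of the stationary variance (illustrative sketch, assuming numpy is available; $\phi$ and $\sigma$ are arbitrary choices, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(1)
phi, sigma, T = 0.7, 1.0, 200_000
eps = rng.normal(0.0, sigma, T)

y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + eps[t]

print(y[1000:].var())               # empirical variance (burn-in dropped)
print(sigma**2 / (1 - phi**2))      # theoretical gamma_0 = sigma^2/(1-phi^2) ~ 1.96
```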

Random Walk: special case of the AR(1) model with $\phi = 1$:

$$y_t = y_{t-1} + \varepsilon_t$$

With initial values $y_0 = 0$, $\mu_0 = 0$ and $\sigma_{y_0}^2 = 0$:

$$y_t = \sum_{j=1}^{t}\varepsilon_j, \qquad \sigma_{y_t}^2 = \sum_{j=1}^{t}\sigma^2 = \sigma^2 t, \qquad \sigma_{y_t} = \sigma\sqrt{t}$$

It is easy to see that the standard deviation increases with time. Thus the model is not covariance stationary in the presence of a unit root.
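A short sketch (numpy assumed, parameters illustrative) showing that the cross-sectional standard deviation of simulated random walks grows like $\sigma\sqrt{t}$:

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, T, n_paths = 1.0, 1000, 5000
eps = rng.normal(0.0, sigma, (n_paths, T))
y = eps.cumsum(axis=1)                 # y_t = sum_{j<=t} eps_j  (random walk, y_0 = 0)

emp_std = y.std(axis=0)                # cross-sectional std at each t
theo_std = sigma * np.sqrt(np.arange(1, T + 1))
print(np.max(np.abs(emp_std - theo_std) / theo_std))   # small relative deviation
```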

1.1.2 AR(2)

ϕ(B)y t =y t - ϕ 1 y t-1 - ϕ 2 y t-2 =ε t

1.1.3 AR(p)

$\phi(B)y_t = y_t - \phi_1 y_{t-1} - \dots - \phi_p y_{t-p} = \varepsilon_t$, i.e. the past values of the process, weighted by the coefficients $\phi_j$, explain the current value up to an error $\varepsilon_t \sim (0, \sigma^2)$. The current state of the system is described by a weighted sum of $p$ past states plus a random error. This is a local description.

1.2 Finding Roots

AR(2)

AR(1) operator: $\phi(B)y_t = (1-\phi_1 B)y_t = \varepsilon_t$

AR(2): $\phi(B)y_t = y_t - \phi_1 y_{t-1} - \phi_2 y_{t-2} = \varepsilon_t$

$$(1 - \phi_1 B - \phi_2 B^2) = (1-\lambda_1 B)(1-\lambda_2 B) = 1 - (\lambda_1+\lambda_2)B + \lambda_1\lambda_2 B^2$$

- The AR(2) operator is the product of two AR(1) operators, each of which must satisfy stationarity. Solving for the parameters:

$$\phi_1 = \lambda_1 + \lambda_2, \qquad \phi_2 = -\lambda_1\lambda_2$$

Substitute $z$ for $B$:

$$\phi(z) = (1 - \phi_1 z - \phi_2 z^2) = (1-\lambda_1 z)(1-\lambda_2 z)$$

RHS: the roots are $z = 1/\lambda_1,\ 1/\lambda_2$. Multiplying through by $z^{-2} = \lambda^2$:

$$(\lambda^2 - \phi_1\lambda - \phi_2) = (\lambda - \lambda_1)(\lambda - \lambda_2)$$

Solve the LHS for $\lambda$.
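A small sketch of the root check (assuming numpy; the coefficients are illustrative, not from the notes):

```python
import numpy as np

phi1, phi2 = 0.5, 0.3                      # illustrative AR(2) coefficients
# characteristic polynomial phi(z) = 1 - phi1*z - phi2*z^2;
# np.roots expects coefficients from the highest power downwards
z_roots = np.roots([-phi2, -phi1, 1.0])
lam = 1.0 / z_roots                        # inverted roots lambda = 1/z
stationary = np.all(np.abs(z_roots) > 1)   # equivalently np.all(np.abs(lam) < 1)
print(z_roots, lam, stationary)
```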

Alternatively: VAR. Write the AR(2) process as a first-order difference equation $\mathbf{y}_t = \Phi\,\mathbf{y}_{t-1} + \boldsymbol{\varepsilon}_t$ with

$$\mathbf{y}_t = \begin{pmatrix} y_t \\ y_{t-1} \end{pmatrix},\qquad \Phi = \begin{pmatrix} \phi_1 & \phi_2 \\ 1 & 0 \end{pmatrix},\qquad \boldsymbol{\varepsilon}_t = \begin{pmatrix} \varepsilon_t \\ 0 \end{pmatrix}$$

$$\begin{pmatrix} y_t \\ y_{t-1} \end{pmatrix} = \begin{pmatrix} \phi_1 & \phi_2 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} y_{t-1} \\ y_{t-2} \end{pmatrix} + \begin{pmatrix} \varepsilon_t \\ 0 \end{pmatrix}$$

To solve, find the eigenvalues:

$$\det(\Phi - \lambda I) = 0 \quad\Leftrightarrow\quad \lambda^2 - \phi_1\lambda - \phi_2 = 0$$

AR(p)

- The operator contains all relevant information.
- So substitute $z$ for $B$ to obtain the characteristic equation: $\phi(z) = 1 - \phi_1 z - \phi_2 z^2 - \dots - \phi_p z^p$
- Multiply through by the $p$-th power of the inverted roots $\lambda$: $\lambda^p = z^{-p}$
- Solve for $\lambda$.
- Stationarity: the AR(p) process is stationary if all roots $z$ lie outside the unit circle, i.e. if all eigenvalues $\lambda$ lie inside.

AR(p) process as difference equation: $\mathbf{y}_t = \Phi\,\mathbf{y}_{t-1} + \boldsymbol{\varepsilon}_t$. To solve, find the eigenvalues of the $p\times p$ matrix $\Phi$ from $\det(\Phi - \lambda I) = 0$:

$$\begin{pmatrix} y_t \\ y_{t-1} \\ y_{t-2} \\ \vdots \\ y_{t-p+1} \end{pmatrix} = \begin{pmatrix} \phi_1 & \phi_2 & \phi_3 & \cdots & \phi_p \\ 1 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{pmatrix}\begin{pmatrix} y_{t-1} \\ y_{t-2} \\ y_{t-3} \\ \vdots \\ y_{t-p} \end{pmatrix} + \begin{pmatrix} \varepsilon_t \\ 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}$$
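The same check in companion (VAR) form, as a sketch; `companion()` is a hypothetical helper name, not a library function, and the coefficients are illustrative:

```python
import numpy as np

def companion(phis):
    """Companion matrix Phi of an AR(p) process (hypothetical helper)."""
    p = len(phis)
    Phi = np.zeros((p, p))
    Phi[0, :] = phis                  # first row: phi_1, ..., phi_p
    Phi[1:, :-1] = np.eye(p - 1)      # shift identity below
    return Phi

phis = [0.5, 0.3]                     # same AR(2) as above
lam = np.linalg.eigvals(companion(phis))
print(lam)                            # eigenvalues = inverted roots lambda_i
print(np.all(np.abs(lam) < 1))        # stationarity check
```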

1.3 Autocovariance/Autocorrelation

1.3.1 AR(1)

$$E(y_t y_{t-k}) = E(\phi y_{t-1}y_{t-k}) + E(\varepsilon_t y_{t-k})$$

and since the errors are uncorrelated with past values:

$$\gamma_k = \phi\gamma_{k-1}$$

Using the starting value $\gamma_0 = \dfrac{\sigma^2}{1-\phi^2}$ we obtain the Yule-Walker equation

$$\gamma_k = \phi^k\gamma_0, \qquad \rho_k = \phi^k$$

This result can also be obtained by treating $y_t = \phi y_{t-1} + \varepsilon_t$ as an initial value problem with random starting value $y_0$, $E(y_0) = \mu_0$, $\mathrm{Var}(y_0) = \sigma_{y_0}^2$; then iterate from $y_1$ onwards to $y_t$ and simplify. See also 2.5.
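A brief simulation sketch (numpy assumed, parameters illustrative) comparing the sample autocorrelation with the theoretical $\phi^k$:

```python
import numpy as np

rng = np.random.default_rng(3)
phi, T = 0.8, 100_000
eps = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + eps[t]

yc = y - y.mean()
gamma0 = (yc * yc).mean()
for k in range(1, 6):
    rho_k = (yc[k:] * yc[:-k]).mean() / gamma0   # sample autocorrelation at lag k
    print(k, rho_k, phi**k)                      # close to the theoretical phi^k
```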

1.3.2 AR(2)

The roots govern the decay of the autocorrelation function ρ k

$$\begin{pmatrix} 1 & 1 \\ \lambda_1 & \lambda_2 \end{pmatrix}\begin{pmatrix} A_1 \\ A_2 \end{pmatrix} = \begin{pmatrix} \rho_0 \\ \rho_1 \end{pmatrix}$$

From Yule-Walker we know that $\rho_1 = \phi_1 + \phi_2\rho_1 \;\Rightarrow\; \rho_1 = \dfrac{\phi_1}{1-\phi_2}$.

So solve

$$\begin{pmatrix} 1 & 1 \\ \lambda_1 & \lambda_2 \end{pmatrix}\begin{pmatrix} A_1 \\ A_2 \end{pmatrix} = \begin{pmatrix} 1 \\ \dfrac{\phi_1}{1-\phi_2} \end{pmatrix}
\quad\Rightarrow\quad
\begin{pmatrix} A_1 \\ A_2 \end{pmatrix} = \frac{1}{\lambda_2-\lambda_1}\begin{pmatrix} \lambda_2 & -1 \\ -\lambda_1 & 1 \end{pmatrix}\begin{pmatrix} 1 \\ \dfrac{\phi_1}{1-\phi_2} \end{pmatrix},
\qquad
A_{1,2} = \pm\frac{\lambda_{2,1} - \dfrac{\phi_1}{1-\phi_2}}{\lambda_2-\lambda_1}$$

In polar coordinates ($\lambda_{1,2} = r(\cos\omega \pm i\sin\omega)$):

$$\rho_k = A_1 r^k(\cos\omega + i\sin\omega)^k + A_2 r^k(\cos\omega - i\sin\omega)^k = A_1 r^k e^{i\omega k} + A_2 r^k e^{-i\omega k} = r^k(A_1+A_2)\cos\omega k + i\,r^k(A_1-A_2)\sin\omega k$$

Since the autocorrelation is real, the expression simplifies to

$$\rho_k = 2A r^k\cos\omega k$$

We know that the autocorrelation at lag 0 is 1:

$$\rho_0 = 2A r^0\cos 0 = 2A = 1,\qquad\text{since } i\,r^k(A_1-A_2)\sin\omega k = 0 \;\Rightarrow\; A_1 = A_2 = A = \tfrac{1}{2}$$

$$\rho_k = \tfrac{1}{2}\big(\lambda_1^k + \lambda_2^k\big) = r^k\cos\omega k$$

The Period T=2 π /ω is visible in the graph.
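A sketch (numpy assumed; the coefficients are chosen only so that the roots are complex) illustrating the damped oscillation of $\rho_k$ and the period $2\pi/\omega$:

```python
import numpy as np

phi1, phi2 = 1.0, -0.5               # illustrative AR(2) with complex roots
lam = np.roots([1.0, -phi1, -phi2])  # lambda^2 - phi1*lambda - phi2 = 0
r, omega = np.abs(lam[0]), np.abs(np.angle(lam[0]))
print("period T = 2*pi/omega =", 2 * np.pi / omega)

# rho_k from the Yule-Walker recursion rho_k = phi1*rho_{k-1} + phi2*rho_{k-2}
rho = [1.0, phi1 / (1 - phi2)]
for k in range(2, 20):
    rho.append(phi1 * rho[-1] + phi2 * rho[-2])
print(np.round(rho, 3))              # damped oscillation, amplitude ~ r**k
```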

1.3.3 AR(p)

$$\phi(B)y_t = \varepsilon_t$$

To find the ACF/PACF at lag $k$, multiply through by $y_{t-k}$:

$$y_t y_{t-k} = \phi_1 y_{t-1}y_{t-k} + \phi_2 y_{t-2}y_{t-k} + \dots + \varepsilon_t y_{t-k}$$

Take the expected value:

$$E[y_t y_{t-k}] = \phi_1 E[y_{t-1}y_{t-k}] + \phi_2 E[y_{t-2}y_{t-k}] + \dots + E[\varepsilon_t y_{t-k}] = \phi_1\mathrm{Cov}(y_{t-1},y_{t-k}) + \phi_2\mathrm{Cov}(y_{t-2},y_{t-k}) + \dots + \mathrm{Cov}(\varepsilon_t,y_{t-k})$$

$$\gamma_k = \phi_1\gamma_{k-1} + \phi_2\gamma_{k-2} + \dots$$

The above is the Yule-Walker equation. It constitutes the AR representation of the autocovariance at lag $k$. We can also write this as:

$$\phi(B)\gamma_k = 0$$

To obtain the autocorrelation, divide through by $\gamma_0$:

$$\phi(B)\rho_k = 0$$

To solve for the autocorrelation, use the ansatz

$$\rho_k = \sum_{j=1}^{p} A_j\lambda_j^k$$

i.e. the autocorrelation is a linear combination of the inverted roots. The coefficients $A_j$ follow from the first $p$ autocorrelations:

$$\begin{pmatrix} \rho_0 \\ \rho_1 \\ \rho_2 \\ \vdots \\ \rho_{p-1} \end{pmatrix} = \begin{pmatrix} 1 & 1 & 1 & \cdots & 1 \\ \lambda_1 & \lambda_2 & \lambda_3 & \cdots & \lambda_p \\ \lambda_1^2 & \lambda_2^2 & \lambda_3^2 & \cdots & \lambda_p^2 \\ \vdots & & & & \vdots \\ \lambda_1^{p-1} & \lambda_2^{p-1} & \lambda_3^{p-1} & \cdots & \lambda_p^{p-1} \end{pmatrix}\begin{pmatrix} A_1 \\ A_2 \\ A_3 \\ \vdots \\ A_p \end{pmatrix}$$

1.4 Partial Autocorrelation

AR(1)

For an AR(1) process we have $\rho_k = \phi^k$, and the order-2 Yule-Walker system

$$\begin{pmatrix} 1 & \rho_1 \\ \rho_1 & 1 \end{pmatrix}\begin{pmatrix} \phi_{21} \\ \phi_{22} \end{pmatrix} = \begin{pmatrix} \rho_1 \\ \rho_2 \end{pmatrix}$$

$$\begin{pmatrix} \phi_{21} \\ \phi_{22} \end{pmatrix} = \frac{1}{1-\rho_1^2}\begin{pmatrix} 1 & -\rho_1 \\ -\rho_1 & 1 \end{pmatrix}\begin{pmatrix} \rho_1 \\ \rho_2 \end{pmatrix} = \frac{1}{1-\rho_1^2}\begin{pmatrix} \rho_1 - \rho_1\rho_2 \\ \rho_2 - \rho_1^2 \end{pmatrix} = \frac{1}{1-\phi^2}\begin{pmatrix} \phi(1-\phi^2) \\ 0 \end{pmatrix} = \begin{pmatrix} \phi \\ 0 \end{pmatrix} = \begin{pmatrix} \rho_1 \\ 0 \end{pmatrix}$$

We are in particular interested in $\phi_{11}, \phi_{22}, \phi_{33}, \dots, \phi_{qq}$, as these contain the PACs; also remember $\phi_{11} = \rho_1$.

AR(p)

The autocorrelation does not vanish instantly; there is an echo.

If $p > 1$ there is additional interference from the interaction of the coefficients.

These effects are resolved by the PACF.

Solve

$$\begin{pmatrix} \rho_1 \\ \rho_2 \\ \rho_3 \\ \vdots \\ \rho_k \end{pmatrix} = \begin{pmatrix} \rho_0 & \rho_1 & \rho_2 & \cdots & \rho_{k-1} \\ \rho_1 & \rho_0 & \rho_1 & \cdots & \rho_{k-2} \\ \rho_2 & \rho_1 & \rho_0 & \cdots & \rho_{k-3} \\ \vdots & & & \ddots & \vdots \\ \rho_{k-1} & \rho_{k-2} & \rho_{k-3} & \cdots & \rho_0 \end{pmatrix}\begin{pmatrix} \phi_{k1} \\ \phi_{k2} \\ \phi_{k3} \\ \vdots \\ \phi_{kk} \end{pmatrix}$$

where $\rho_0 = 1$ and $\phi_{k1},\dots,\phi_{kk}$ are the coefficients of the order-$k$ system; $\phi_{kk}$ is the partial autocorrelation at lag $k$. The first index $k$ gives the total number of intermediate correlations taken into account, the second index the PACF lag order. Generally, for $k > p$ the PAC goes to 0.

Equivalently the k th partial autocorrelation is given by the coefficient ϕ kk in the equation:

y t =ϕ k1 y t-1 +ϕ k2 y t-2 +…+ϕ kk y t-k +ε t

This corresponds to the correlation between y t -E(y t |y t-1 ,…,y t-k+1 ) and

y t-k -E(y t-k |y t-1 ,…,y t-k+1 )

To obtain the PACs one estimates the coefficients in the equations:

$$y_t = \phi_{11}y_{t-1} + \varepsilon_t$$
$$y_t = \phi_{21}y_{t-1} + \phi_{22}y_{t-2} + \varepsilon_t$$
$$\vdots$$
$$y_t = \phi_{k1}y_{t-1} + \phi_{k2}y_{t-2} + \dots + \phi_{kk}y_{t-k} + \varepsilon_t$$

By subtracting the appropriate equations one obtains, e.g. in the case of AR(2):

$$y_t - E(y_t\mid y_{t-1},\dots,y_{t-k+1}) = y_t - (\phi_{21}y_{t-1} + \phi_{22}y_{t-2})$$

Multiplying the residual by $y_{t-1}$ and taking expectations (the residual must be orthogonal to $y_{t-1}$):

$$E\big[(y_t - \phi_{21}y_{t-1} - \phi_{22}y_{t-2} - \varepsilon_t)\,y_{t-1}\big] = E\big[y_t y_{t-1} - \phi_{21}y_{t-1}^2 - \phi_{22}y_{t-2}y_{t-1} - \varepsilon_t y_{t-1}\big] = \gamma_1 - \phi_{21}\gamma_0 - \phi_{22}\gamma_1 = 0$$

$$\Rightarrow\quad \gamma_1 = \phi_{21}\gamma_0/(1-\phi_{22}), \qquad \rho_1 = \phi_{21}/(1-\phi_{22}), \qquad \phi_{21} = \rho_1(1-\phi_{22})$$

Analogously for $t$ and $t-2$:

$$E\big[(y_t - \phi_{21}y_{t-1} - \phi_{22}y_{t-2} - \varepsilon_t)\,y_{t-2}\big] = \gamma_2 - \phi_{21}\gamma_1 - \phi_{22}\gamma_0 = 0$$

$$\Rightarrow\quad \phi_{22} = \rho_2 - \rho_1^2(1-\phi_{22}) \qquad\Rightarrow\qquad \phi_{22} = \frac{\rho_2 - \rho_1^2}{1-\rho_1^2}$$
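A sketch of PAC estimation by successive regressions (numpy assumed; `pacf_by_regression` is a hypothetical helper and the AR(2) parameters are illustrative), together with the closed-form $\phi_{22}$:

```python
import numpy as np

rng = np.random.default_rng(4)
phi1, phi2, T = 0.5, 0.3, 50_000
eps = rng.normal(size=T)
y = np.zeros(T)
for t in range(2, T):
    y[t] = phi1 * y[t - 1] + phi2 * y[t - 2] + eps[t]

def pacf_by_regression(y, k):
    """phi_kk = last coefficient of an OLS regression of y_t on y_{t-1},...,y_{t-k}."""
    X = np.column_stack([y[k - j - 1: len(y) - j - 1] for j in range(k)])
    beta, *_ = np.linalg.lstsq(X, y[k:], rcond=None)
    return beta[-1]

print([round(pacf_by_regression(y, k), 3) for k in (1, 2, 3)])   # ~ [rho_1, phi_2, 0]

# closed-form check: phi_22 = (rho_2 - rho_1**2) / (1 - rho_1**2)
rho1 = phi1 / (1 - phi2)
rho2 = phi1 * rho1 + phi2
print((rho2 - rho1**2) / (1 - rho1**2))                          # = phi2 = 0.3
```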

1.5 Parameter Estimation

1.5.1 Least squares

By least squares solve

$$X\hat\theta = y$$

where

$$X = \begin{pmatrix} 1 & y_{t-1} & y_{t-2} & \cdots & y_{t-p} \\ 1 & y_{t-2} & y_{t-3} & \cdots & y_{t-p-1} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & y_{p} & y_{p-1} & \cdots & y_{1} \end{pmatrix},
\qquad
\hat\theta = \begin{pmatrix} \theta_0 \\ \phi_1 \\ \vdots \\ \phi_p \end{pmatrix},
\qquad
y = \begin{pmatrix} y_t \\ y_{t-1} \\ \vdots \\ y_{p+1} \end{pmatrix}$$

i.e. each row of $X$ contains a constant and the $p$ lagged values belonging to the corresponding entry of $y$. The LS estimator is

$$\hat\theta = (X'X)^{-1}X'y$$
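A minimal sketch of the LS estimation via the design matrix (numpy assumed; an AR(2) with illustrative parameters, not taken from the notes):

```python
import numpy as np

rng = np.random.default_rng(5)
theta0, phi1, phi2, T = 1.0, 0.5, 0.3, 20_000
eps = rng.normal(size=T)
y = np.zeros(T)
for t in range(2, T):
    y[t] = theta0 + phi1 * y[t - 1] + phi2 * y[t - 2] + eps[t]

p = 2
X = np.column_stack([np.ones(T - p)] + [y[p - j - 1: T - j - 1] for j in range(p)])
# rows: (1, y_{t-1}, ..., y_{t-p});  target: y_t for t = p, ..., T-1
theta_hat = np.linalg.solve(X.T @ X, X.T @ y[p:])   # (X'X)^{-1} X'y
print(theta_hat)    # approx. (theta0, phi1, phi2) = (1.0, 0.5, 0.3)
```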

1.5.2 Alternative: ML Estimation

Because AR processes are Markov (i.e. the conditional distribution depends only on the last value), $p(y_t\mid y_{t-1},\dots,y_0) = p(y_t\mid y_{t-1})$, and one can write

$$p(y_T,\dots,y_0) = \prod_{t=1}^{T} p(y_t\mid y_{t-1})\; p(y_0)$$

Letting $\varepsilon_t \sim N(0,\sigma^2)$ one obtains $E[y_t\mid y_{t-1}] = \theta_0 + \phi y_{t-1} =: \hat y_t$ and $\mathrm{Var}[y_t\mid y_{t-1}] = \sigma^2$, so the likelihood is the product of $T$ conditionally normal densities (for $t = 1,\dots,T$) with this mean and variance:

$$p(y_T,\dots,y_0) = (2\pi\sigma^2)^{-T/2}\exp\Big(-\tfrac{1}{2\sigma^2}\sum_{t=1}^{T}(y_t - \theta_0 - \phi y_{t-1})^2\Big)\, p(y_0)$$

Taking the log:

$$l(\theta;y) = -\tfrac{T}{2}\log(2\pi\sigma^2) - \tfrac{1}{2\sigma^2}\sum_{t=1}^{T}(y_t - \theta_0 - \phi y_{t-1})^2 + \log p(y_0)$$

where the parameter vector is $\theta = (\theta_0, \phi, \sigma^2)$. The assumption of stationarity yields $E(y_0) = \theta_0/(1-\phi)$ and $\mathrm{Var}(y_0) = \sigma^2/(1-\phi^2)$, so that

$$p(y_0) = \varphi\big(y_0;\,\theta_0/(1-\phi),\,\sigma^2/(1-\phi^2)\big) = \big(2\pi\sigma^2/(1-\phi^2)\big)^{-1/2}\exp\Big\{-\frac{[y_0 - \theta_0/(1-\phi)]^2}{2\sigma^2/(1-\phi^2)}\Big\}$$

If the variance $\sigma^2$ were known, one would obtain the LS estimators and the log-likelihood would be proportional to $Q(\theta_0,\phi) = \sum_{t=1}^{T}(y_t - \theta_0 - \phi y_{t-1})^2$. For unknown variance, taking the derivative w.r.t. $\theta$ gives the three ML equations:

$$\partial l(\theta;y)/\partial\sigma^2 = 0, \qquad \partial l(\theta;y)/\partial\theta_0 = 0, \qquad \partial l(\theta;y)/\partial\phi = 0 \ \text{(gives the score function } s(\phi)\text{)}$$

The first of these gives

$$\partial l(\theta;y)/\partial\sigma^2 = -\frac{T}{2\sigma^2} + \frac{Q}{2\sigma^4} = 0 \quad\Rightarrow\quad \hat\sigma^2 = Q(\theta_0,\phi)/T$$

i.e. the estimator of the variance is the mean of the squared residuals. Substituting back into the log-likelihood gives the concentrated likelihood

$$l(\theta_0,\phi;y) = -\tfrac{T}{2}\log(2\pi) - \tfrac{T}{2} - \tfrac{T}{2}\log\big[Q(\theta_0,\phi)/T\big] \;\propto\; -\log Q(\theta_0,\phi)$$

So the ML estimator is equal to the LS estimator:

$$\hat\theta_0 = \bar y - \hat\phi\,\bar x, \qquad \hat\phi = \frac{\tfrac{1}{T}\sum_{t=1}^{T}(y_t - \bar y)(y_{t-1} - \bar x)}{\tfrac{1}{T}\sum_{t=1}^{T}(y_{t-1} - \bar x)^2}, \qquad \bar y = \tfrac{1}{T}\sum_{t=1}^{T} y_t, \qquad \bar x = \tfrac{1}{T}\sum_{t=1}^{T} y_{t-1}$$

This generalizes to AR(p); in particular

$$\hat\theta_0 = \frac{1}{T}\sum_{t=1}^{T}\big(y_t - \hat\phi_1 y_{t-1} - \dots - \hat\phi_p y_{t-p}\big)$$
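A sketch of the exact Gaussian ML estimation for an AR(1), including the $p(y_0)$ term (assuming numpy and scipy are available; the parameter values, the starting point and the choice of Nelder-Mead are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
theta0, phi, sigma2, T = 1.0, 0.6, 1.0, 5_000
y = np.zeros(T + 1)
y[0] = rng.normal(theta0 / (1 - phi), np.sqrt(sigma2 / (1 - phi**2)))  # stationary start
for t in range(1, T + 1):
    y[t] = theta0 + phi * y[t - 1] + rng.normal(0.0, np.sqrt(sigma2))

def neg_loglik(params):
    th0, ph, s2 = params
    if s2 <= 0 or abs(ph) >= 1:
        return np.inf
    resid = y[1:] - th0 - ph * y[:-1]
    ll = -T / 2 * np.log(2 * np.pi * s2) - resid @ resid / (2 * s2)
    # exact likelihood: add the stationary density of y_0
    m0, v0 = th0 / (1 - ph), s2 / (1 - ph**2)
    ll += -0.5 * np.log(2 * np.pi * v0) - (y[0] - m0) ** 2 / (2 * v0)
    return -ll

res = minimize(neg_loglik, x0=[0.0, 0.1, 1.0], method="Nelder-Mead")
print(res.x)   # approx. (theta0, phi, sigma2) = (1.0, 0.6, 1.0)
```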