
Stationary Time Series Models

(We'll see non-stationary models later in the course)



Univariate Time Series Analysis
ARIMA Models
Stationary Series
Review from earlier lectures:

A series is covariance stationary when
Mean: E(Y_t) = u
Variance: Var(Y_t) = E(Y_t - u)^2 = σ^2
Covariance: Cov(Y_t, Y_{t-k}) = constant for all t and k ≠ 0.

Shocks to a stationary series dissipate over time and the long
term forecast of a series will converge to the unconditional
mean of the series. The series is said to exhibit mean
reversion.
Has a finite variance that is time invariant
Has a theoretical covariance between values of y_t that depends only on how far apart in time they are

A white noise process is one with (virtually) no
discernible structure. A definition of a white
noise process is
Mean: E(Y_t) = u
Variance: Var(Y_t) = E(Y_t - u)^2 = σ^2
Covariance: Cov(Y_t, Y_{t-k}) = 0 for all t and k ≠ 0.







A White Noise Process
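A quick simulated illustration of such a process. This is a minimal sketch (assuming NumPy is available; all parameter values are illustrative) that generates Gaussian white noise and checks that its sample mean, variance, and autocovariances behave as the definition requires.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0
e = rng.normal(loc=0.0, scale=sigma, size=10_000)  # white noise: mean 0, variance sigma^2

print("sample mean     :", e.mean())   # close to 0
print("sample variance :", e.var())    # close to sigma^2 = 1

# The sample autocovariance at lag k should be close to 0 for every k != 0
for k in (1, 2, 5):
    cov_k = np.mean((e[k:] - e.mean()) * (e[:-k] - e.mean()))
    print(f"autocovariance at lag {k}:", cov_k)
```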

ARIMA Models
ARMA models: introduced by Box and
Jenkins (1976)
An approach for analysing stationary time
series data; if the series is I(1), the
approach can be applied to the first
difference of the data.
AR = Autoregressive
I = Integrated
MA = Moving Average

Autoregressive Processes
A first order autoregressive process AR(1) can be
expressed as
y_t = α_0 + α_1 y_{t-1} + e_t
where e_t is a white noise error term.
Similarly, an AR(2) process can be expressed as
y_t = α_0 + α_1 y_{t-1} + α_2 y_{t-2} + e_t
In general, an AR(p) process is of the form
y_t = α_0 + α_1 y_{t-1} + α_2 y_{t-2} + ... + α_p y_{t-p} + e_t
and y_t is said to follow an AR(p) process.
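A minimal sketch (assuming NumPy; the coefficient values α_0 = 1 and α_1 = 0.7 are just illustrative) that generates an AR(1) series by iterating the recursion above:

```python
import numpy as np

rng = np.random.default_rng(1)
n, alpha0, alpha1 = 500, 1.0, 0.7          # illustrative values, |alpha1| < 1
e = rng.normal(size=n)

y = np.zeros(n)
for t in range(1, n):
    y[t] = alpha0 + alpha1 * y[t - 1] + e[t]   # AR(1) recursion

# The sample mean should settle near alpha0 / (1 - alpha1) (shown formally later)
print(y[100:].mean(), alpha0 / (1 - alpha1))
```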
Moving Average Process
MA(1): y_t = u + θ_0 e_t + θ_1 e_{t-1}
where e_t is a white noise (stationary) process and y_t is a moving average
of this random process.
Maybe something like the change in a stock price in hourly data:
the change should be mostly random, but the previous shock may be
taking time to dissipate fully (not in a fully efficient market).
MA(2): y_t = u + θ_0 e_t + θ_1 e_{t-1} + θ_2 e_{t-2}

MA(q): y_t = u + θ_0 e_t + θ_1 e_{t-1} + ... + θ_q e_{t-q}
An MA(q) process is stationary as it is a linear combination of
stationary variables; however, an MA(q) process is not
necessarily a white noise process.
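A matching sketch for the MA(1) case (again with made-up coefficient values, assuming NumPy); note that y_t only ever depends on the current and one previous shock:

```python
import numpy as np

rng = np.random.default_rng(2)
n, mu, theta0, theta1 = 500, 0.0, 1.0, 0.5   # illustrative values
e = rng.normal(size=n)

y = np.empty(n)
y[0] = mu + theta0 * e[0]
for t in range(1, n):
    y[t] = mu + theta0 * e[t] + theta1 * e[t - 1]   # MA(1): current plus one lagged shock

print(y[:5])
```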
Generally our series may be
comprised of AR and MA
components

[when choosing our models we will
include both and then try to reduce
toward a succinct model]


Autoregressive Moving Average Process (ARMA)

For a series that exhibits both AR and MA characteristics we can
combine the AR(p) and MA(q) models, to obtain an ARMA(1,1) model
for example

y_t = w + α_1 y_{t-1} + θ_0 e_t + θ_1 e_{t-1}

or more generally an ARMA(p,q) model:

y_t = w + α_1 y_{t-1} + α_2 y_{t-2} + ... + α_p y_{t-p} + θ_0 e_t + θ_1 e_{t-1} + ... + θ_q e_{t-q}
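In practice an ARMA(p,q) model can be estimated with standard software. The sketch below (assuming the statsmodels package is available, and using a simulated series purely for illustration) fits an ARMA(1,1) as an ARIMA(1,0,1):

```python
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.tsa.arima.model import ARIMA

# Simulate an ARMA(1,1) series: polynomials are given as (1 - alpha1*L) and (1 + theta1*L)
ar = np.array([1, -0.6])   # alpha1 = 0.6
ma = np.array([1, 0.4])    # theta1 = 0.4
y = ArmaProcess(ar, ma).generate_sample(nsample=1000)

# Fit ARMA(1,1) = ARIMA(1,0,1); the estimates should be near 0.6 and 0.4
result = ARIMA(y, order=(1, 0, 1)).fit()
print(result.summary())
```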
We can use backwards substitution to
transform an AR(1) process into an
MA(∞) process [or MA(∞) to AR(1)]

An AR process as an MA(∞) process
For an AR(1) process with no constant let
y_t = α_1 y_{t-1} + e_t
then,
y_1 = α_1 y_0 + e_1
and,
y_2 = α_1 y_1 + e_2 = α_1 (α_1 y_0 + e_1) + e_2
    = α_1^2 y_0 + α_1 e_1 + e_2
By continued substitution
y_t = α_1^t y_0 + Σ_{i=0}^{t-1} α_1^i e_{t-i}
As t → ∞ and |α_1| < 1, the first term vanishes, so
y_t = Σ_{i=0}^{∞} α_1^i e_{t-i}, which is an MA(∞) process

Note: |α_1| < 1 since we said at the start the series is stationary
[This isn't true for non-stationary series!!!]
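A small numerical check of this equivalence (a sketch with arbitrary values, assuming NumPy): build y_t by the AR(1) recursion and compare it with the MA representation Σ α_1^i e_{t-i}.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha1, n = 0.8, 300
e = rng.normal(size=n)

# AR(1) recursion starting from y_0 = 0
y = np.zeros(n)
for t in range(1, n):
    y[t] = alpha1 * y[t - 1] + e[t]

# MA representation of the final observation: sum of alpha1^i * e_{t-i}
t = n - 1
ma_rep = sum(alpha1**i * e[t - i] for i in range(t))
print(y[t], ma_rep)   # identical up to rounding, because y_0 = 0 here
```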
Proof that an AR(1) process,
y_t = α_0 + α_1 y_{t-1} + e_t,
is stationary if |α_1| < 1

{Lots of maths; try to follow the logic
though! (On Monday we showed that a
series with a unit root was non-
stationary)}
Requirement 1: Mean must be
constant
Stationarity Conditions for an AR(1) process
Let y_t = α_0 + α_1 y_{t-1} + e_t
Given the initial value of y_0 it follows that y_1 will be given by
y_1 = α_0 + α_1 y_0 + e_1
Then:
y_2 = α_0 + α_1 y_1 + e_2
    = α_0 + α_1 (α_0 + α_1 y_0 + e_1) + e_2
    = α_0 + α_1 α_0 + α_1^2 y_0 + α_1 e_1 + e_2
y_3 = α_0 + α_1 y_2 + e_3
    = α_0 + α_1 (α_0 + α_1 α_0 + α_1^2 y_0 + α_1 e_1 + e_2) + e_3
    = α_0 + α_1 α_0 + α_1^2 α_0 + α_1^3 y_0 + α_1^2 e_1 + α_1 e_2 + e_3
    = [α_0 + α_1 α_0 + α_1^2 α_0] + [α_1^3 y_0] + [α_1^2 e_1 + α_1 e_2 + e_3]

Continuing the substitution we can write

y_t = α_0 Σ_{i=0}^{t-1} α_1^i + α_1^t y_0 + Σ_{i=0}^{t-1} α_1^i e_{t-i}

i.e. a solution to the AR(1) process in terms of its initial condition, y_0.

(Could use beta instead of alpha.)
Stationarity Conditions for an AR(1) process
y_t = α_0 Σ_{i=0}^{t-1} α_1^i + α_1^t y_0 + Σ_{i=0}^{t-1} α_1^i e_{t-i}

Consider the mean of the AR(1) process.
If |α_1| < 1 and allowing t → ∞, the term α_1^t y_0 goes to zero.
Note: Σ_{i=0}^{∞} α_1^i = 1/(1 - α_1)
Then
y_t = α_0/(1 - α_1) + Σ_{i=0}^{∞} α_1^i e_{t-i}
Taking expectations, the second term has expectation zero (the e's are
white noise with mean zero), so
E(y_t) = α_0/(1 - α_1)
The mean of y_t is finite and time independent.
Recall: this was the first requirement for stationarity.
Requirement 2: Variance must be
constant / non-time dependent
Stationarity Conditions for an AR(1) process
Now we want to check that the variance is not time dependent.
Variance
Allow u = α_0/(1 - α_1) [i.e. u is the average for y_t as shown earlier]
Var(y_t) = E(y_t - u)^2
Set α_0 = 0 (for simplicity) => u = 0!
Then:
Var(y_t) = E(y_t - 0)^2 = E[(y_t)(y_t)]
Recall: AR(1) = MA(∞), so y_t = e_t + α_1 e_{t-1} + α_1^2 e_{t-2} + ...
Var(y_t) = E[(e_t + α_1 e_{t-1} + α_1^2 e_{t-2} + ...)(e_t + α_1 e_{t-1} + α_1^2 e_{t-2} + ...)]

Continued:
Multiplying this out:
Var(y_t) = E(e_t^2 + α_1^2 e_{t-1}^2 + α_1^4 e_{t-2}^2 + ... + cross products)

E(cross products) = 0, because the e's are uncorrelated white noise.

Left with:
Var(y_t) = E(e_t^2 + α_1^2 e_{t-1}^2 + α_1^4 e_{t-2}^2 + ...)
         = E(e_t^2) + E(α_1^2 e_{t-1}^2) + E(α_1^4 e_{t-2}^2) + ...
Note: E(e_i^2) = σ^2

Var(y_t) = σ^2 + α_1^2 σ^2 + α_1^4 σ^2 + ...
         = σ^2 (1 + α_1^2 + α_1^4 + ...)

Since |α_1| < 1, Var(y_t) [or γ_0 for short!] can be written as
Var(y_t) = σ^2/(1 - α_1^2)
Hence Var(y_t) is independent of time.
Recall this was our second requirement for stationarity.

[γ_0 should use the letter gamma, but PowerPoint's gamma symbol looks like a y!]
Requirement 3: Covariance between
observations k periods apart must be
the same

Stationarity Conditions for an AR(1) process
Now to show the covariance is independent of the time period.
Covariance
γ_s = Cov(y_t, y_{t-s}) = E[(y_t - u)(y_{t-s} - u)]

Again set α_0 = 0 (for simplicity), so u = 0.
Cov(y_t, y_{t-s}) = E[(e_t + α_1 e_{t-1} + α_1^2 e_{t-2} + ...)(e_{t-s} + α_1 e_{t-s-1} + α_1^2 e_{t-s-2} + ...)]
= E(α_1^s e_{t-s}^2 + α_1^(s+2) e_{t-s-1}^2 + ...)
= α_1^s σ^2 + α_1^(s+2) σ^2 + ...
= σ^2 α_1^s (1 + α_1^2 + α_1^4 + ...)
= σ^2 α_1^s /(1 - α_1^2)

Hence the covariance γ_s = Cov(y_t, y_{t-s}) is time independent.
Recall this was our last requirement for stationarity.

Overall we have proved that for an AR(1) process y_t = α_0 + α_1 y_{t-1} + e_t,
as t → ∞ and |α_1| < 1, y_t is stationary.
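These three results can be sanity-checked by simulation. The sketch below (illustrative parameter values, assuming NumPy) compares the sample mean, variance, and lag-s autocovariance of a long AR(1) series with the formulas just derived.

```python
import numpy as np

rng = np.random.default_rng(4)
alpha0, alpha1, sigma, n = 2.0, 0.5, 1.0, 100_000
e = rng.normal(scale=sigma, size=n)

y = np.zeros(n)
for t in range(1, n):
    y[t] = alpha0 + alpha1 * y[t - 1] + e[t]
y = y[1000:]   # drop a burn-in so the arbitrary start value does not matter

print("mean    :", y.mean(), "theory:", alpha0 / (1 - alpha1))
print("variance:", y.var(),  "theory:", sigma**2 / (1 - alpha1**2))

s = 3
cov_s = np.mean((y[s:] - y.mean()) * (y[:-s] - y.mean()))
print("gamma_3 :", cov_s, "theory:", sigma**2 * alpha1**s / (1 - alpha1**2))
```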
ARIMA Models



(Can think of these as ARMA models
for non-stationary data)
An Autoregressive Integrated Moving Average
(ARIMA) Process
Using an ARMA(p,q) model requires that the series is
stationary.
If the series is not stationary researchers typically
difference the variable as necessary and then build an
ARMA model on those differenced variables.
An ARMA(p,q) model in the variable differenced d
times is equivalent to an ARIMA(p,d,q) model on the
original data.
[In other words if the model is ARIMA(2,1,2) it means
you difference the series once and then use an
ARMA(2,2) Model!]
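This equivalence can be seen directly in software. A hedged sketch (assuming statsmodels, with a simulated I(1) series purely for illustration): fitting an ARIMA(1,1,1) to the level gives essentially the same AR and MA coefficients as fitting an ARMA(1,1) to the first difference (the two fits may differ slightly in how the constant is handled).

```python
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.tsa.arima.model import ARIMA

# Simulate a stationary ARMA(1,1) series, then cumulate it so the level is I(1)
dy = ArmaProcess(np.array([1, -0.5]), np.array([1, 0.3])).generate_sample(nsample=2000)
y = np.cumsum(dy)

fit_arima = ARIMA(y, order=(1, 1, 1)).fit()           # ARIMA(1,1,1) on the level
fit_arma = ARIMA(np.diff(y), order=(1, 0, 1)).fit()   # ARMA(1,1) on the first difference

print(fit_arima.params)   # AR and MA estimates should be close to those below
print(fit_arma.params)
```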
The Box-Jenkins Approach to
building an ARIMA Model
(same for ARMA)
Box and Jenkins (1976) were the first to approach the task
of estimating an ARMA model in a systematic manner.
There are 3 steps to their approach:
1. Identification
2. Estimation
3. Model diagnostic checking

Building ARMA Models
- The Box Jenkins Approach

Step 1: Identification

Involves determining the order of the model.
The Autocorrelation function (ACF) and Partial
Autocorrelation Function (PACF) can be used to identify
the most appropriate ARIMA specification
The Autocovariance
Cov(y_t, y_{t-k}) is known as γ_k

Cov(y_t, y_{t-k}) = E[(y_t - u)(y_{t-k} - u)]
= E(y_t y_{t-k}) when u = 0

The Autocorrelation = Cor(y_t, y_{t-k}) is known as ρ_k

ρ_k = γ_k / γ_0 = Covariance / Variance [remember γ_0 = Var(y_t)]

Step 1: Identification
The Partial Correlation Coefficient
Measures the correlation between an observation k periods ago
and the current observation, after controlling for observations at
intermediate lags (i.e. all lags < k).

So φ_kk measures the correlation between y_t and y_{t-k} after removing
the effects of y_{t-k+1}, y_{t-k+2}, ..., y_{t-1}.

At lag 1, the ACF = PACF.
Plotting the ACF (ρ_k) against k and the PACF (φ_kk) against k can
help to reveal the appropriate ARIMA specification for the data
series.
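In practice the sample ACF and PACF are usually inspected graphically. A sketch assuming statsmodels and matplotlib are available (the AR(1) data used here are simulated purely for illustration; in an application y would be your own series):

```python
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
from statsmodels.tsa.arima_process import ArmaProcess

# Illustrative data: an AR(1) series with coefficient 0.7
y = ArmaProcess(np.array([1, -0.7]), np.array([1])).generate_sample(nsample=500)

fig, axes = plt.subplots(2, 1, figsize=(8, 6))
plot_acf(y, lags=20, ax=axes[0])    # sample autocorrelations rho_k with confidence bands
plot_pacf(y, lags=20, ax=axes[1])   # sample partial autocorrelations phi_kk
plt.show()
```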

The ACF of an AR(1) process
We know from earlier
γ_0 = Var(y_t) = σ^2/(1 - α_1^2)
γ_s = Cov(y_t, y_{t-s}) = σ^2 α_1^s /(1 - α_1^2)
Hence
ρ_0 = γ_0 / γ_0 = 1
ρ_1 = γ_1 / γ_0 = α_1
ρ_2 = γ_2 / γ_0 = α_1^2

So in general ρ_s = γ_s / γ_0 = α_1^s

Recall that for y_t to be stationary requires that |α_1| < 1
If 0 < α_1 < 1, the ACF decays exponentially
If -1 < α_1 < 0, the ACF shows oscillating decay
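A short check of the result ρ_s = α_1^s (illustrative values, assuming NumPy and statsmodels): compare the sample ACF of a long simulated AR(1) series with the theoretical powers of α_1.

```python
import numpy as np
from statsmodels.tsa.stattools import acf
from statsmodels.tsa.arima_process import ArmaProcess

alpha1 = 0.7
y = ArmaProcess(np.array([1, -alpha1]), np.array([1])).generate_sample(nsample=20_000)

sample_acf = acf(y, nlags=5)
theory_acf = alpha1 ** np.arange(6)   # rho_s = alpha1^s
print(np.round(sample_acf, 3))
print(np.round(theory_acf, 3))
```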


The PACF of an AR(1) and MA(q) process
The PACF is useful for telling the difference between an AR process and an
ARMA process.

In the case of an AR(p), there are direct connections between y_t and y_{t-k} only
for k ≤ p.
For AR(1), the PACF φ_kk = 0 for k > 1
For AR(p), the PACF φ_kk = 0 for k > p
So for an AR(p), the theoretical PACF will be zero after lag p.

In the case of an MA(q), this can be written as an AR(∞), so there are direct
connections between y_t and all its previous values.
For an MA(q), the theoretical PACF will be geometrically declining.


The ACF of an MA(1) process
It can be easily shown that the ACF of an
MA(q) process is zero for k > q
In summary: The ACF and PACF for an
AR, MA and ARMA process
An autoregressive process has
a geometrically decaying acf
number of spikes of pacf = AR order

A moving average process has
Number of spikes of acf = MA order
a geometrically decaying pacf

An ARMA (combination) process has
a geometrically decaying acf
a geometrically decaying pacf
See A&H page 242, Table 13.1

Examples:
ACF and PACF plots for
AR processes

ACF and PACF for a Non-stationary Model
(i.e. a unit coefficient): y_t = y_{t-1} + u_t

ACF:
Autocorrelations decline
towards 0 as the number
of lags increases
PACF:
The first partial
correlation is high but all
the rest are not
significantly different
from 0.
ACF and PACF for a slowly decaying AR(1) Model:
y_t = 0.9 y_{t-1} + u_t

Because the coefficient (0.9)
is close to 1, this is difficult to
distinguish from the unit root
process
ACF:
Autocorrelations decline
towards 0 more quickly than
in case of unit root
PACF:
The first partial correlation
is high but all the rest are
not significantly different
from 0.


ACF and PACF for a more rapidly decaying AR(1) Model:
y_t = 0.6 y_{t-1} + u_t

ACF:
Autocorrelations
decline towards 0 very
quickly (within 4
periods!)
The lower the
coefficient the quicker
the ACF reaches 0
PACF:
The first partial
correlation is high but
all the rest are not
significantly different
from 0.

ACF and PACF with a negative coefficient AR(1) Model:
y_t = -0.9 y_{t-1} + u_t

ACF:
Autocorrelations
decline towards 0 but
alternate between
positive and negative
values.
PACF:
The first partial
correlation is high but
all the rest are not
significantly different
from 0.

ACF and PACF for an AR(2) Model:
y_t = 0.5 y_{t-1} + 0.25 y_{t-2} + u_t

ACF:
Autocorrelations
decline towards 0.
PACF:
The first and
second partial
correlations are
different from 0
All the rest are not
significantly
different from 0.

Examples:
ACF and PACF plots for
MA processes

ACF and PACF for an MA(1) Model:
y_t = 0.9 u_{t-1} + u_t

ACF is significant for one
lag only
PACF alternates between
positive and negatives but
falls to 0 as lag increases.
(Recall: for the AR process it was the ACF
that persisted and the PACF that was 0 after 1 lag!)
ACF and PACF for an MA(1) Model:
y_t = -0.9 u_{t-1} + u_t

ACF is significant for one
lag only
PACF alternates between
positive and negatives but
falls to 0 as lag increases.
(Recall: for the AR process it was the ACF
that persisted and the PACF that was 0 after 1 lag!)
ACF and PACF for an MA(2) Model:
y_t = 0.5 u_{t-1} - 0.25 u_{t-2} + u_t
ACF is now significant for two periods.
The first is positive because the coefficient on u_{t-1} is positive.
The second is negative because the sign on u_{t-2} is negative.
PACF alternates between
positive and negatives but
falls to 0 as lag increases.
Examples:
ACF and PACF plots for
ARMA process
[i.e. has AR terms and MA terms!]

ACF and PACF for an ARMA(1,1):
y_t = 0.5 y_{t-1} + 0.5 u_{t-1} + u_t

Distinguishing the
process from the
correlogram is not as
straightforward here!
The fact the ACF is
different from 0 for a
few periods suggests
an AR element
The fact the PACF is
different from 0 for a
few periods suggests
an MA element
Recall from Earlier: Box-Pierce Test
[we also saw the Ljung-Box Q test]
The Box-Pierce statistic tests the joint hypothesis
that all ρ_k are simultaneously equal to zero. The test
statistic is approximately distributed as a χ^2 distribution
with m df.

B.P. = n Σ_{k=1}^{m} ρ_k^2   (using the sample autocorrelations)

n = sample size
m = lag length
If B.P. > χ^2_m(α) then reject H_0: all ρ_k = 0
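Both the Box-Pierce and Ljung-Box versions of this test are available in statsmodels. A sketch (on an illustrative white noise series, where we should fail to reject H_0):

```python
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(5)
e = rng.normal(size=500)   # white noise, so all rho_k should be close to 0

# boxpierce=True reports the Box-Pierce statistic alongside the Ljung-Box Q statistic
print(acorr_ljungbox(e, lags=[10], boxpierce=True))
```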
Step 2:
Estimation of the parameters
Since the model is stationary we should be able to use OLS
(provided there are no other issues)
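For a pure AR model this amounts to regressing y_t on its own lags. A sketch (illustrative simulated data, assuming statsmodels) using both a plain OLS regression and the built-in AutoReg estimator:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.ar_model import AutoReg
from statsmodels.tsa.arima_process import ArmaProcess

y = ArmaProcess(np.array([1, -0.6]), np.array([1])).generate_sample(nsample=1000)

# OLS of y_t on a constant and y_{t-1}
X = sm.add_constant(y[:-1])
ols_fit = sm.OLS(y[1:], X).fit()
print(ols_fit.params)   # [alpha0_hat, alpha1_hat], alpha1_hat near 0.6

# The same model via the dedicated AR estimator
print(AutoReg(y, lags=1).fit().params)
```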

Step 3:
Model checking
Box and Jenkins suggest 2 methods:
Deliberate over-fitting:
if we think it's ARMA(1,2), try some others like
ARMA(2,3), ARMA(2,2), etc.
Residual diagnostics


Building ARMA Models
- The Box Jenkins Approach (contd)

Identification would typically not be done using ACF and PACF. Rather
they are used to identify starting points
We want to form a parsimonious model.
This gives motivation for using information criteria, which embody 2
factors
a term which is a function of the RSS
some penalty for adding extra parameters
The object is to choose the number of parameters which minimises the
information criterion. Use the AIC and SBC.
To properly compare AIC and SBC across models the AIC and SBC
should be based on model estimations using the same number of
observations.
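A sketch of how such a comparison might look (assuming statsmodels; the candidate orders and simulated data are illustrative). Every candidate is estimated on the same observations, so the AIC/SBC values are comparable, and the preferred model is the one with the smallest criterion.

```python
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.tsa.arima.model import ARIMA

y = ArmaProcess(np.array([1, -0.5]), np.array([1, 0.3])).generate_sample(nsample=1000)

for p in range(3):
    for q in range(3):
        fit = ARIMA(y, order=(p, 0, q)).fit()
        # statsmodels labels the SBC as the BIC
        print(f"ARMA({p},{q}): AIC={fit.aic:8.1f}  SBC={fit.bic:8.1f}")
```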
Some More Recent Developments in
ARMA Modelling

Diagnostic Checking
Check the residuals for serial correlation
using the LM test described earlier in the
course
Test the residuals for normality.
More on this later!
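A sketch of the residual checks (assuming statsmodels, with a simulated series and an ARMA(1,1) fit purely for illustration). Note the LM test referred to above is the Breusch-Godfrey test; here a Ljung-Box test on the residuals is used as a stand-in for checking serial correlation, together with a Jarque-Bera test for normality.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.stats.diagnostic import acorr_ljungbox
from statsmodels.stats.stattools import jarque_bera

y = ArmaProcess(np.array([1, -0.5]), np.array([1, 0.3])).generate_sample(nsample=1000)
resid = ARIMA(y, order=(1, 0, 1)).fit().resid

# Remaining serial correlation in the residuals (there should be none for an adequate model)
print(acorr_ljungbox(resid, lags=[10]))

# Normality of the residuals
jb_stat, jb_pvalue, skew, kurtosis = jarque_bera(resid)
print("Jarque-Bera p-value:", jb_pvalue)
```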
That's all for today!
Some more advanced info on ARIMA is available at:
http://www.stats.uwo.ca/faculty/aim/vita/pdf/Advances1.pdf
But not necessary for this course.


As usual there are some questions on Blackboard for next week.
We're halfway through the course now and things will be speeding
up, so if you haven't kept up to date with the questions it will
become tougher.

So far only the class have sent me any.
