Definition:
A univariate time series is a sequence of measurements of the same variable collected over time. Most often, the measurements are made
at regular time intervals.
One difference from standard linear regression is that the data are not necessarily independent and not necessarily identically distributed. One
defining characteristic of time series is that this is a list of observations where the ordering matters. Ordering is very important because there is
dependency and changing the order could change the meaning of the data.
The basic objective usually is to determine a model that describes the pattern of the time series. Uses for such a model include describing the important features of the series and forecasting future values.
Types of Models
1. Models that relate the present value of a series to past values and past prediction errors - these are called ARIMA models (for Autoregressive Integrated Moving Average). We'll spend substantial time on these.
2. Ordinary regression models that use time indices as x-variables. These can be helpful for an initial description of the data and form the
basis of several simple forecasting methods.
Some important questions to consider when first looking at a time series are:
Is there a trend, meaning that, on average, the measurements tend to increase (or decrease) over time?
Is there seasonality, meaning that there is a regularly repeating pattern of highs and lows related to calendar time such as seasons,
quarters, months, days of the week, and so on?
Are there outliers? In regression, outliers are far away from your line. With time series data, your outliers are far away from your other
data.
Is there a long-run cycle or period unrelated to seasonality factors?
Is there constant variance over time, or is the variance non-constant?
Are there any abrupt changes to either the level of the series or the variance?
Example 1
The following plot is a time series plot of the annual number of earthquakes in the world with seismic magnitude over 7.0, for 99 consecutive
years. By a time series plot, we simply mean that the variable is plotted against time.
There is no consistent trend (upward or downward) over the entire time span. The series appears to slowly wander up and down. The
horizontal line drawn at quakes = 20.2 indicates the mean of the series. Notice that the series tends to stay on the same side of the
mean (above or below) for a while and then wanders to the other side.
Almost by definition, there is no seasonality as the data are annual data.
There are no obvious outliers.
It's difficult to judge whether the variance is constant or not.
One of the simplest ARIMA type models is a model in which we use a linear model to predict the value at the present time using the value at
the previous time. This is called an AR(1) model, standing for autoregressive model of order 1. The order of the model indicates how many
previous times we use to predict the present time.
A start in evaluating whether an AR(1) might work is to plot values of the series against lag 1 values of the series. Let xt denote the value of
the series at any particular time t, so xt-1 denotes the value of the series one time before time t. That is, xt-1 is the lag 1 value of xt. As a short
example, here are the first five values in the earthquake series along with their lag 1 values:
For the complete earthquake data set, here's a plot of xt versus xt-1:
Although it's only a moderately strong relationship, there is a positive linear association, so an AR(1) model might be a useful model.
$$x_t = \delta + \phi_1 x_{t-1} + w_t$$

Assumptions: $w_t \overset{iid}{\sim} N(0, \sigma^2_w)$, meaning that the errors are independently distributed with a normal distribution that has mean 0 and constant variance.
This is essentially the ordinary simple linear regression equation, but there is one difference. Although it's not usually true, in ordinary least squares regression we assume that the x-variable is not random but instead is something we can control. That's not the case here, but in our first encounter with time series we'll overlook that and use ordinary regression methods. We'll do things the right way later in the course.
We see that the slope coefficient is significantly different from 0, so the lag 1 variable is a helpful predictor. The R² value is relatively weak at 29.7%, though, so the model won't give us great predictions.
Residual Analysis
In traditional regression, a plot of residuals versus fits is a useful diagnostic tool. The ideal for this plot is a horizontal band of points. Following
is a plot of residuals versus predicted values for our estimated model. It doesn't show any serious problems. There might be one possible
outlier at a fitted value of about 28.
Example 2
The plot at the top of the next page shows a time series of quarterly production of beer in Australia for 18 years.
There is seasonality: a regularly repeating pattern of highs and lows related to quarters of the year.
There are no obvious outliers.
There might be increasing variation as we move across time, although that's uncertain.
There are ARIMA methods for dealing with series that exhibit both trend and seasonality, but for this example we'll use ordinary regression
methods.
To use traditional regression methods, we might model the pattern in the beer production data as a combination of trend over time and
quarterly effect variables.
For a linear trend, use t (the time index) as a predictor variable in a regression.
For a quadratic trend, we might consider using both t and t².
For quarterly data, with possible seasonal (quarterly) effects, we can define indicator variables such as Sj = 1 if the observation is in quarter j of a year and 0 otherwise. There are 4 such indicators.
Let $\epsilon_t \overset{iid}{\sim} N(0, \sigma^2)$. A model with additive components for linear trend and seasonal (quarterly) effects might be written

$$x_t = \beta_1 t + \alpha_1 S_1 + \alpha_2 S_2 + \alpha_3 S_3 + \alpha_4 S_4 + \epsilon_t$$
To add a quadratic trend, which may be the case in our example, the model is

$$x_t = \beta_1 t + \beta_2 t^2 + \alpha_1 S_1 + \alpha_2 S_2 + \alpha_3 S_3 + \alpha_4 S_4 + \epsilon_t$$
Note that we've deleted the intercept from the model. This isn't necessary, but if we include it we'll have to drop one of the seasonal effect variables from the model to avoid collinearity issues.
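As a sketch of how such a model can be fit in R (the file name beer.dat and the variable names are hypothetical; the course example used Minitab):

beer = ts(scan("beer.dat"), frequency = 4) # quarterly beer production series
t = 1:length(beer) # time index for the linear trend
tsq = t^2 # squared time index for the quadratic trend
quarter = factor(cycle(beer)) # quarter indicator: 1, 2, 3, 4
fit = lm(beer ~ 0 + t + tsq + quarter) # the 0 drops the intercept so all 4 quarter effects stay
summary(fit)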
Back to Example 2: Following is the Minitab output for a model with a quadratic trend and seasonal effects, fit using Minitab's Noconstant option (no intercept). All factors are statistically significant.
Residual Analysis
For this example, the plot of residuals versus fits doesn't look too bad, although we might be concerned by the string of positive residuals at the
far right.
When data are gathered over time, we typically are concerned with whether a value at the present time can be predicted from values at past
times. We saw this in the earthquake data of example 1 when we used an AR(1) structure to model the data. For residuals, however, the
desirable result is that the correlation is 0 between residuals separated by any given time span. In other words, residuals should be unrelated
to each other.
The sample autocorrelation function (ACF) for a series gives correlations between the series xt and lagged values of the series for lags of 1, 2,
3, and so on. The lagged values can be written as xt-1, xt-2, xt-3, and so on. The ACF gives correlations between xt and xt-1, xt and xt-2, and so
on.
The ACF can be used to identify the possible structure of time series data. That can be tricky going, as there often isn't a single clear-cut interpretation of a sample autocorrelation function. We'll get started on that in Lesson 1.2 this week. The ACF of the residuals for a model is also useful. The ideal for a sample ACF of residuals is that there aren't any significant correlations for any lag.
Following is the ACF of the residuals for Example 1, the earthquake example, where we used an AR(1) model. The lag (time span between observations) is shown along the horizontal axis, and the autocorrelation is on the vertical. The red lines indicate bounds for statistical significance. This is a good ACF for residuals. Nothing is significant; that's what we want for residuals.
The ACF of the residuals for the quadratic trend plus seasonality model we used for Example 2 looks good too. Again, there appears to be no
significant autocorrelation in the residuals. The ACF of the residuals follows:
Lesson 1.2 will give more details about the ACF. Lesson 1.3 will give some R code for examples in Lessons 1.1 and 1.2.
Stationary Series
As a preliminary, we define an important concept, that of a stationary series. For an ACF to make sense, the series must be a weakly
stationary series. This means that the autocorrelation for any particular lag is the same regardless of where we are in time.
Definition: Let xt denote the value of a time series at time t. The ACF of the series gives correlations between xt and xt-h for h = 1, 2, 3, etc. Theoretically, the autocorrelation between xt and xt-h equals

$$\frac{\text{Covariance}(x_t, x_{t-h})}{\text{Std.Dev.}(x_t)\,\text{Std.Dev.}(x_{t-h})} = \frac{\text{Covariance}(x_t, x_{t-h})}{\text{Variance}(x_t)}$$

The denominator in the second formula occurs because the standard deviation of a stationary series is the same at all times.
Weak stationarity requires that the mean E(xt) is the same for all t, that the variance of xt is the same for all t, and that the theoretical value of an autocorrelation at a particular lag is the same across the whole series. An interesting property of a stationary series is that theoretically it has the same structure forwards as it does backwards.
Many stationary series have recognizable ACF patterns. Most series that we encounter in practice, however, are not stationary. A continual
upward trend, for example, is a violation of the requirement that the mean is the same for all t. Distinct seasonal patterns also violate that
requirement. The strategies for dealing with nonstationary series will unfold during the first three weeks of the semester.
We'll now look at theoretical properties of the AR(1) model. Recall from Lesson 1.1 that the 1st order autoregression model is denoted as AR(1). In this model, the value of x at time t is a linear function of the value of x at time t − 1. The algebraic expression of the model is as follows:
$$x_t = \delta + \phi_1 x_{t-1} + w_t$$

Assumptions: $w_t \overset{iid}{\sim} N(0, \sigma^2_w)$, meaning that the errors are independently distributed with a normal distribution that has mean 0 and constant variance.
Formulas for the mean, variance, and ACF for a time series process with an AR(1) model follow.

The (theoretical) mean is

$$E(x_t) = \mu = \frac{\delta}{1-\phi_1}$$

The variance of $x_t$ is

$$\text{Var}(x_t) = \frac{\sigma^2_w}{1-\phi_1^2}$$

The correlation between observations h time periods apart is

$$\rho_h = \phi_1^h$$

This defines the theoretical ACF for a time series variable with an AR(1) model. (Note: $\phi_1$ is the slope in the AR(1) model and we now see that it also is the lag 1 autocorrelation.)
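As a quick numerical check of the ACF property (a sketch; φ1 = 0.6 is an arbitrary illustrative value), base R's ARMAacf returns the theoretical ACF of an ARMA model:

phi1 = 0.6
ARMAacf(ar = phi1, lag.max = 5) # theoretical ACF at lags 0 through 5
phi1^(0:5) # the same values: 1, 0.6, 0.36, 0.216, ...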
Details of the derivations of these properties are in the Appendix to this lesson for interested students.
The ACF property defines a distinct pattern for the autocorrelations. For a positive value of φ1, the ACF exponentially decreases to 0 as the lag h increases. For negative φ1, the ACF also exponentially decays to 0 as the lag increases, but the algebraic signs for the autocorrelations alternate between positive and negative.
Following is the ACF of an AR(1) with φ1 = 0.6, for the first 12 lags. Note the tapering pattern.
The ACF of an AR(1) with φ1 = −0.7 follows. Note the alternating and tapering pattern.
Example 1: In Example 1 of Lesson 1.1, we used an AR(1) model for annual earthquakes in the world with seismic magnitude greater than 7. Here's the sample ACF of the series:
Lag    ACF
 1     0.541733
 2     0.418884
 3     0.397955
 4     0.324047
 5     0.237164
 6     0.171794
 7     0.190228
 8     0.061202
 9    -0.048505
10    -0.106730
11    -0.043271
12    -0.072305
The sample autocorrelations taper, although not as fast as they should for an AR(1). For instance, theoretically the lag 2 autocorrelation for an AR(1) equals the squared value of the lag 1 autocorrelation. Here, the observed lag 2 autocorrelation is 0.418884. That's somewhat greater than the squared value of the first lag autocorrelation (0.541733² = 0.293). But we managed to do okay (in Lesson 1.1) with an AR(1) model for the data. For instance, the residuals looked okay. This brings up an important point: the sample ACF will rarely fit a perfect theoretical pattern. A lot of the time you just have to try a few models to see what fits.
We'll study the ACF patterns of other ARIMA models during the next three weeks. Each model has a different pattern for its ACF, but in practice the interpretation of a sample ACF is not always so clear-cut.
A reminder: Residuals usually are theoretically assumed to have an ACF that has correlation = 0 for all lags.
Example 2:
Here's a time series of the weekly cardiovascular mortality rate in Los Angeles County, 1970-1979:
There is a slight downward trend, so the series may not be stationary. To create a (possibly) stationary series, we'll examine the first differences yt = xt − xt-1. This is a common time series method for creating a de-trended series and thus potentially a stationary series. Think about a straight line: there are constant differences in average y for each 1-unit change in x.
The following plot is the sample estimate of the autocorrelation function of 1st differences:
Lag    ACF
 1    -0.506029
 2     0.205100
 3    -0.126110
 4     0.062476
 5    -0.015190
This looks like the pattern of an AR(1) with a negative lag 1 autocorrelation. Note that the lag 2 correlation is roughly equal to the squared value of the lag 1 correlation, the lag 3 correlation is nearly exactly equal to the cubed value of the lag 1 correlation, and the lag 4 correlation nearly equals the fourth power of the lag 1 correlation. Thus an AR(1) model may be a suitable model for the first differences yt = xt − xt-1.
Let yt denote the first differences, so that yt = xt − xt-1 and yt-1 = xt-1 − xt-2. We can write this AR(1) model as

$$y_t = \delta + \phi_1 y_{t-1} + w_t$$

Using R, we found that the estimated model for the first differences is

$$\hat{y}_t = -0.04627 - 0.50636\, y_{t-1}$$
Some R code for this example will be given in Lesson 1.3 for this week.
Generally you won't be responsible for reproducing theoretical derivations, but interested students may want to see the derivations for the theoretical properties of an AR(1).
$$x_t = \delta + \phi_1 x_{t-1} + w_t$$

Assumptions: $w_t \overset{iid}{\sim} N(0, \sigma^2_w)$, meaning that the errors are independently distributed with a normal distribution that has mean 0 and constant variance.
Mean:

With the stationary assumption, E(xt) = E(xt-1). Let μ denote this common mean. Taking expectations of both sides of the model gives μ = δ + φ1μ. Solve for μ to get

$$\mu = \frac{\delta}{1-\phi_1}$$
Variance:

Taking variances of both sides of the model gives Var(xt) = φ1² Var(xt-1) + σw². By the stationary assumption, Var(xt) = Var(xt-1). Substitute Var(xt) for Var(xt-1) and then solve for Var(xt):

$$\text{Var}(x_t) = \frac{\sigma^2_w}{1-\phi_1^2}$$

Because Var(xt) > 0, it follows that (1 − φ1²) > 0 and therefore |φ1| < 1.
Autocorrelation function:

To start, assume the data have mean 0, which happens when δ = 0, so that xt = φ1xt-1 + wt. In practice this isn't necessary, but it simplifies matters. Values of variances, covariances, and correlations are not affected by the specific value of the mean.

Let γh = E(xt xt+h) = E(xt xt-h), the covariance between observations h time periods apart (when the mean = 0). Let ρh = the correlation between observations that are h time periods apart.

To find the covariance γh, multiply each side of the model by xt-h, then take expectations. This gives

$$\gamma_h = \phi_1 \gamma_{h-1}$$

and applying this relationship repeatedly gives $\gamma_h = \phi_1^h\,\text{Var}(x_t)$. Therefore

$$\rho_h = \frac{\gamma_h}{\text{Var}(x_t)} = \frac{\phi_1^h\,\text{Var}(x_t)}{\text{Var}(x_t)} = \phi_1^h$$
The commands below include explanatory comments, following the #. Those comments do not have to be entered for the command to work.
x=scan("quakes.dat")
x=ts(x) #this makes sure R knows that x is a time series
plot(x, type="b") #time series plot of x with points marked as o
install.packages("astsa")
library(astsa) # See note 1 below
lag1.plot(x,1) # Plots x versus lag 1 of x.
acf(x, xlim=c(1,19)) # Plots the ACF of x for lags 1 to 19
xlag1=lag(x,-1) # Creates a lag 1 of x variable. See note 2
y=cbind(x,xlag1) # See note 3 below
ar1fit=lm(y[,1]~y[,2])#Does regression, stores results object named ar1fit
summary(ar1fit) # This lists the regression results
plot(ar1fit$fit,ar1fit$residuals) #plot of residuals versus fits
acf(ar1fit$residuals, xlim=c(1,18)) # ACF of the residuals for lags 1 to 18
Note 1: The astsa library accesses R script(s) written by one of the authors of our textbook (Stoffer). In our program, the lag1.plot command is part of that script. You may read more about the library on the website for our text: http://www.stat.pitt.edu/stoffer/tsa3/xChanges.htm [1]. You must install the astsa package in R before loading it with the library statement. Not all available packages are included when you install R on your machine (cran.r-project.org/web/packages/). You only need to run install.packages("astsa") once. In subsequent sessions, the library command alone will bring the commands into your current session.
Note 2: Note the negative value for the lag in xlag1=lag(x,-1). To lag back in time in R, use a negative lag.
Note 3: This is a bit tricky. For whatever reason, R has to bind together a variable with its lags for the lags to be in the proper alignment with the original variable. The cbind and ts.intersect commands both accomplish this task. In the code above, the original variable and its lag 1 become the first and second columns of a matrix named y. The regression command (lm) uses these two columns of y as the response and predictor variables in the regression.
General Note: If a command that includes quotation marks doesn't work when you copy and paste from the course notes to R, try typing the command in R instead.
An example in Lesson 1.2 for this week concerned the weekly cardiovascular mortality rate in Los Angeles County. We used a first difference to account for a linear trend and determined that the first differences may follow an AR(1) model.
The data are in the cmort.dat file in the Week 1 folder of the course website.
Following are R commands for the analysis. Again, the commands are commented using #comment.
mort=scan("cmort.dat")
plot(mort, type="o") # plot of mortality rate
mort=ts(mort)
mortdiff=diff(mort,1) # creates a variable = x(t) - x(t-1)
plot(mortdiff,type="o") # plot of first differences
acf(mortdiff,xlim=c(1,24)) # ACF of the first differences, for 24 lags
mortdifflag1=lag(mortdiff,-1)
y=cbind(mortdiff,mortdifflag1) # bind first differences and lagged first differences
mortdiffar1=lm(y[,1]~y[,2]) # AR(1) regression for first differences
We'll leave it to you to try the code and see the output, if you wish.
Links:
[1] http://www.stat.pitt.edu/stoffer/tsa3/xChanges.htm
A moving average term in a time series model is a past error (multiplied by a coefficient).
Let $w_t \overset{iid}{\sim} N(0, \sigma^2_w)$, meaning that the wt are identically, independently distributed, each with a normal distribution having mean 0 and the same variance.

The 1st order moving average model, denoted by MA(1), is

$$x_t = \mu + w_t + \theta_1 w_{t-1}$$

The 2nd order moving average model, denoted by MA(2), is

$$x_t = \mu + w_t + \theta_1 w_{t-1} + \theta_2 w_{t-2}$$

The qth order moving average model, denoted by MA(q), is

$$x_t = \mu + w_t + \theta_1 w_{t-1} + \theta_2 w_{t-2} + \cdots + \theta_q w_{t-q}$$
Note: Many textbooks and software programs define the model with negative signs before the θ terms. This doesn't change the general theoretical properties of the model, although it does flip the algebraic signs of estimated coefficient values and (unsquared) θ terms in formulas for ACFs and variances. You need to check your software to verify whether negative or positive signs have been used in order to correctly write the estimated model. R uses positive signs in its underlying model, as we do here.
Theoretical properties of a time series with an MA(1) model:

Mean is $E(x_t) = \mu$.

Variance is $\text{Var}(x_t) = \sigma^2_w(1 + \theta_1^2)$.

The autocorrelation function (ACF) is

$$\rho_1 = \frac{\theta_1}{1+\theta_1^2}, \quad \text{and } \rho_h = 0 \text{ for } h \ge 2$$
Note that the only nonzero value in the theoretical ACF is for lag 1. All other autocorrelations are 0. Thus a sample ACF with a significant
autocorrelation only at lag 1 is an indicator of a possible MA(1) model.
For interested students, proofs of these properties are an appendix to this handout.
Example 1: Suppose that an MA(1) model is xt = 10 + wt + 0.7wt-1, where $w_t \overset{iid}{\sim} N(0, 1)$. Thus the coefficient θ1 = 0.7. The theoretical ACF is given by

$$\rho_1 = \frac{0.7}{1+0.7^2} = 0.4698, \quad \text{and } \rho_h = 0 \text{ for all lags } h \ge 2$$
The plot just shown is the theoretical ACF for an MA(1) with θ1 = 0.7. In practice, a sample won't usually provide such a clear pattern. Using R, we simulated n = 100 sample values using the model xt = 10 + wt + 0.7wt-1 where wt ~ iid N(0,1). For this simulation, a time series plot of the sample data follows. We can't tell much from this plot.
The sample ACF for the simulated data follows. We see a spike at lag 1 followed by generally non-significant values for lags past 1. Note
that the sample ACF does not match the theoretical pattern of the underlying MA(1), which is that all autocorrelations for lags past 1 will be 0.
A different sample would have a slightly different sample ACF than the one shown below, but would likely have the same broad features.
Theoretical properties of a time series with an MA(2) model:

Mean is $E(x_t) = \mu$.

Variance is $\text{Var}(x_t) = \sigma^2_w(1 + \theta_1^2 + \theta_2^2)$.

The autocorrelation function (ACF) is

$$\rho_1 = \frac{\theta_1 + \theta_1\theta_2}{1+\theta_1^2+\theta_2^2}, \quad \rho_2 = \frac{\theta_2}{1+\theta_1^2+\theta_2^2}, \quad \text{and } \rho_h = 0 \text{ for } h \ge 3$$
Note that the only nonzero values in the theoretical ACF are for lags 1 and 2. Autocorrelations for higher lags are 0. So, a sample ACF with
significant autocorrelations at lags 1 and 2, but non-significant autocorrelations for higher lags indicates a possible MA(2) model.
Example 2: Consider the MA(2) model xt = 10 + wt + 0.5wt-1 + 0.3wt-2, where wt ~ iid N(0,1). The coefficients are θ1 = 0.5 and θ2 = 0.3. Because this is an MA(2), the theoretical ACF will have nonzero values only at lags 1 and 2.
As nearly always is the case, sample data won't behave quite so perfectly as theory. We simulated n = 150 sample values for the model xt = 10 + wt + 0.5wt-1 + 0.3wt-2, where wt ~ iid N(0,1). The time series plot of the data follows. As with the time series plot for the MA(1) sample data, you can't tell much from it.
The sample ACF for the simulated data follows. The pattern is typical for situations where an MA(2) model may be useful. There are two
statistically significant spikes at lags 1 and 2 followed by non-significant values for other lags. Note that due to sampling error, the sample
ACF did not match the theoretical pattern exactly.
A property of MA(q) models in general is that there are nonzero autocorrelations for the first q lags and autocorrelations = 0 for all lags > q.
In the MA(1) model, for any value of θ1, the reciprocal 1/θ1 gives the same value for

$$\rho_1 = \frac{\theta_1}{1+\theta_1^2}$$

As an example, use +0.5 for θ1, and then use 1/(0.5) = 2 for θ1. You'll get ρ1 = 0.4 in both instances.
To satisfy a theoretical restriction called invertibility, we restrict MA(1) models to have values of θ1 with absolute value less than 1. In the example just given, θ1 = 0.5 will be an allowable parameter value, whereas θ1 = 1/0.5 = 2 will not.
Invertibility of MA models
An MA model is said to be invertible if it is algebraically equivalent to a converging infinite order AR model. By converging, we mean that the
AR coefficients decrease to 0 as we move back in time.
Invertibility is a restriction programmed into time series software used to estimate the coefficients of models with MA terms. It's not something that we check for in the data analysis. Additional information about the invertibility restriction for MA(1) models is given in the appendix.
Advanced Theory Note: For an MA(q) model with a specified ACF, there is only one invertible model. The necessary condition for invertibility is that the θ coefficients have values such that the equation $1 - \theta_1 y - \cdots - \theta_q y^q = 0$ has solutions for y that fall outside the unit circle.
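A quick way to check this root condition numerically is base R's polyroot, which returns the roots of a polynomial given its coefficients; a sketch using the two θ1 values from the MA(1) example above:

theta1 = 0.5
Mod(polyroot(c(1, -theta1))) # root of 1 - theta1*y = 0 has modulus 2 > 1: invertible
theta1 = 2
Mod(polyroot(c(1, -theta1))) # root modulus 0.5 < 1: not invertible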
In Example 1, we plotted the theoretical ACF of the model xt = 10 + wt + 0.7wt-1, and then simulated n = 100 values from this model and plotted the sample time series and the sample ACF for the simulated data. The R commands used to plot the theoretical ACF were:
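A sketch of those commands, patterned on the MA(2) commands given later in this lesson (only the object name acfma1 and θ1 = 0.7 are fixed by the surrounding text):

acfma1=ARMAacf(ma=c(0.7), lag.max=10) # theoretical ACF of an MA(1) with theta1 = 0.7, lags 0 to 10
lags=0:10
plot(lags,acfma1,xlim=c(1,10), ylab="r",type="h", main = "ACF for MA(1) with theta1 = 0.7")
abline(h=0) # adds a horizontal reference line at 0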
The first command determines the ACF and stores it in an object named acfma1 (our choice of name).
The plot command (the 3rd command) plots lags versus the ACF values for lags 1 to 10. The ylab parameter labels the y-axis and the main
parameter puts a title on the plot.
To see the numerical values of the ACF simply use the command acfma1.
The simulation and plots were done with the following commands.
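A sketch of those commands, paralleling the MA(2) simulation code given later (n = 100 matches the sample size stated above):

xc=arima.sim(n=100, list(ma=c(0.7))) # simulates n = 100 values from the MA(1), mean 0
x=xc+10 # adds 10 so the series has mean 10
plot(x, type="b", main = "Simulated MA(1) Series")
acf(x, xlim=c(1,10), main = "ACF for simulated MA(1) Data")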
In Example 2, we plotted the theoretical ACF of the model xt = 10 + wt + .5wt-1 + .3wt-2 , and then simulated n = 150 values from this model
and plotted the sample time series and the sample ACF for the simulated data. The R commands used were
acfma2=ARMAacf(ma=c(0.5,0.3), lag.max=10)
acfma2
lags=0:10
plot(lags,acfma2,xlim=c(1,10), ylab="r",type="h", main = "ACF for MA(2) with theta1 = 0.5, theta2 = 0.3")
abline (h=0)
xc=arima.sim(n=150, list(ma=c(0.5, 0.3)))
x=xc+10
plot (x, type="b", main = "Simulated MA(2) Series")
acf(x, xlim=c(1,10), main="ACF for simulated MA(2) Data")
For interested students, here are proofs for theoretical properties of the MA(1) model.
The 1st order moving average model, denoted by MA(1), is $x_t = \mu + w_t + \theta_1 w_{t-1}$, where $w_t \overset{iid}{\sim} N(0, \sigma^2_w)$.
ACF: Consider the covariance between xt and xt-h. This is $E[(x_t - \mu)(x_{t-h} - \mu)]$, which equals

$$E[(w_t + \theta_1 w_{t-1})(w_{t-h} + \theta_1 w_{t-h-1})] = E[w_t w_{t-h} + \theta_1 w_{t-1}w_{t-h} + \theta_1 w_t w_{t-h-1} + \theta_1^2 w_{t-1}w_{t-h-1}]$$

When h = 1, the previous expression = $\theta_1 \sigma^2_w$. For any h ≥ 2, the previous expression = 0. The reason is that, by definition of independence of the wt, $E(w_k w_j) = 0$ for any k ≠ j. Further, because the wt have mean 0, $E(w_j w_j) = E(w_j^2) = \sigma^2_w$.
Invertibility Restriction:

An invertible MA model is one that can be written as an infinite order AR model that converges, meaning that the AR coefficients converge to 0 as we move infinitely back in time. We'll demonstrate invertibility for the MA(1) model.
Write the MA(1) model in mean-adjusted form as

(1) $z_t = w_t + \theta_1 w_{t-1}$, where $z_t = x_t - \mu$.

At time t − 1, the model is $z_{t-1} = w_{t-1} + \theta_1 w_{t-2}$, which can be reshuffled to

(2) $w_{t-1} = z_{t-1} - \theta_1 w_{t-2}$.

Substituting (2) for wt-1 in (1) gives

(3) $z_t = w_t + \theta_1(z_{t-1} - \theta_1 w_{t-2}) = w_t + \theta_1 z_{t-1} - \theta_1^2 w_{t-2}$.

At time t − 2, equation (2) becomes

(4) $w_{t-2} = z_{t-2} - \theta_1 w_{t-3}$.

Substituting (4) into (3) gives

$$z_t = w_t + \theta_1 z_{t-1} - \theta_1^2 w_{t-2} = w_t + \theta_1 z_{t-1} - \theta_1^2(z_{t-2} - \theta_1 w_{t-3}) = w_t + \theta_1 z_{t-1} - \theta_1^2 z_{t-2} + \theta_1^3 w_{t-3}$$
If we were to continue substituting infinitely, the pattern leads to the infinite order AR representation

$$z_t = \theta_1 z_{t-1} - \theta_1^2 z_{t-2} + \theta_1^3 z_{t-3} - \cdots + w_t$$

Note, however, that if |θ1| ≥ 1, the coefficients multiplying the lags of z will increase (infinitely) in size as we move back in time. To prevent this, we need |θ1| < 1. This is the condition for an invertible MA(1) model.
In Week 3, we'll see that an AR(1) model can be converted to an infinite order MA model:

$$x_t = w_t + \phi_1 w_{t-1} + \phi_1^2 w_{t-2} + \cdots + \phi_1^k w_{t-k} + \cdots = \sum_{j=0}^{\infty}\phi_1^j w_{t-j}$$

This summation of past white noise terms is known as the causal representation of an AR(1). In other words, xt is a special type of MA with an infinite number of terms going back in time. This is called an infinite order MA or MA(∞). An invertible finite order MA is an infinite order AR, and any stationary finite order AR is an infinite order MA.
Recall from Week 1 that a requirement for a stationary AR(1) is that |φ1| < 1. Let's calculate Var(xt) using the causal representation:

$$\text{Var}(x_t) = \text{Var}\left(\sum_{j=0}^{\infty}\phi_1^j w_{t-j}\right) = \sum_{j=0}^{\infty}\phi_1^{2j}\,\text{Var}(w_{t-j}) = \sigma^2_w\sum_{j=0}^{\infty}\phi_1^{2j} = \frac{\sigma^2_w}{1-\phi_1^2}$$

This last step uses a basic fact about geometric series that requires $|\phi_1| < 1$; otherwise the series diverges.
In general, a partial correlation is a conditional correlation: the correlation between two variables, with the linear effect of one or more other variables removed. For example, in a regression setting with response y and predictors x1, x2, and x3, the partial correlation between y and x3, adjusted for x1 and x2, is

$$\frac{\text{Covariance}(y, x_3 \mid x_1, x_2)}{\sqrt{\text{Variance}(y \mid x_1, x_2)\,\text{Variance}(x_3 \mid x_1, x_2)}}$$

Note that this is also how the parameters of a regression model are interpreted. Think about the difference between interpreting the regression models:

$$y = \beta_0 + \beta_1 x \quad \text{and} \quad y = \beta_0 + \beta_1 x + \beta_2 x^2$$

In the first model, β1 can be interpreted as the linear dependency between x and y. In the second model, β2 would be interpreted as the linear dependency between x² and y WITH the dependency between x and y already accounted for.

For a time series, the partial autocorrelation between xt and xt-h is defined as the conditional correlation between xt and xt-h, conditional on xt-h+1, ..., xt-1, the set of observations that come between the time points t and t − h.
The 1st order partial autocorrelation will be defined to equal the 1st order autocorrelation.
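For instance, the lag 2 (second order) partial autocorrelation is

$$\frac{\text{Covariance}(x_t, x_{t-2} \mid x_{t-1})}{\sqrt{\text{Variance}(x_t \mid x_{t-1})\,\text{Variance}(x_{t-2} \mid x_{t-1})}}$$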
This is the correlation between values two time periods apart conditional on knowledge of the value in between. (By the way, the two
variances in the denominator will equal each other in a stationary series.)
Typically, matrix manipulations having to do with the covariance matrix of a multivariate distribution are used to determine estimates of the
partial autocorrelations.
For an AR model, the theoretical PACF shuts off past the order of the model. The phrase shuts off means that in theory the partial
autocorrelations are equal to 0 beyond that point. Put another way, the number of non-zero partial autocorrelations gives the order of the
AR model. By the order of the model we mean the most extreme lag of x that is used as a predictor.
Example: In Lesson 1.2, we identified an AR(1) model for a time series of annual numbers of worldwide earthquakes having a seismic
magnitude greater than 7.0. Following is the sample PACF for this series. Note that the first lag value is statistically significant, whereas partial
autocorrelations for all other lags are not statistically significant. This suggests a possible AR(1) model for these data.
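In R, a sample PACF like the one shown here can be obtained with the base pacf command; a sketch (x stands for the earthquake series, and the plot title is our choice):

pacf(x, main = "Sample PACF of the earthquake series")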
Identification of an MA model is often best done with the ACF rather than the PACF.
For an MA model, the theoretical PACF does not shut off, but instead tapers toward 0 in some manner. A clearer pattern for an MA
model is in the ACF. The ACF will have non-zero autocorrelations only at lags involved in the model.
Lesson 2.1 included the following sample ACF for a simulated MA(1) series. Note that the first lag autocorrelation is statistically significant
whereas all subsequent autocorrelations are not. This suggests a possible MA(1) model for the data.
Theory note: The model used for the simulation was xt = 10 + wt + 0.7wt-1. In theory, the first lag autocorrelation is $\theta_1/(1+\theta_1^2) = 0.7/(1+0.7^2) = 0.4698$, and autocorrelations for all other lags = 0.
The underlying model used for the MA(1) simulation in Lesson 2.1 was xt = 10 + wt + 0.7wt-1. Following is the theoretical PACF (partial
autocorrelation) for that model. Note that the pattern gradually tapers to 0.
R note: The PACF just shown was created in R with these two commands:
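A sketch of those two commands (the object name ma1pacf and lag.max = 36 are assumptions; ARMAacf with pacf = TRUE returns theoretical partial autocorrelations):

ma1pacf = ARMAacf(ma = c(0.7), lag.max = 36, pacf = TRUE) # theoretical PACF of the MA(1)
plot(ma1pacf, type = "h", main = "Theoretical PACF of MA(1) with theta1 = 0.7")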
Backshift Operator
Using B before either a value of the series xt or an error term wt means to move that element back one time. For instance,
$$Bx_t = x_{t-1}$$
A power of B means to repeatedly apply the backshift in order to move back a number of time periods that equals the power. As an
example,
$$B^2 x_t = x_{t-2}$$
AR models can be written compactly using an AR polynomial involving coefficients and backshift operators. Let p = the maximum order (lag) of the AR terms in the model. The general form for an AR polynomial is

$$\Phi(B) = 1 - \phi_1 B - \cdots - \phi_p B^p$$

Using this polynomial, an AR model can be written compactly as

$$\Phi(B)x_t = \delta + w_t$$
Examples
Consider the AR(1) model xt = δ + φ1xt-1 + wt, where wt ~ iid N(0, σw²). For an AR(1), the maximum lag = 1, so the AR polynomial is

$$\Phi(B) = 1 - \phi_1 B$$

and the model can be written

$$(1 - \phi_1 B)x_t = \delta + w_t$$

To check that this works, we can multiply out the left side to get

$$x_t - \phi_1 x_{t-1} = \delta + w_t$$

Then, swing the −φ1xt-1 over to the right side and we get

$$x_t = \delta + \phi_1 x_{t-1} + w_t$$
An AR(2) model is $x_t = \delta + \phi_1 x_{t-1} + \phi_2 x_{t-2} + w_t$. That is, xt is a linear function of the values of x at the previous two lags. The AR polynomial for an AR(2) model is

$$\Phi(B) = 1 - \phi_1 B - \phi_2 B^2$$

The AR(2) model could be written as $(1 - \phi_1 B - \phi_2 B^2)x_t = \delta + w_t$, or as $\Phi(B)x_t = \delta + w_t$ with an additional explanation that $\Phi(B) = 1 - \phi_1 B - \phi_2 B^2$.
An AR(p) model is $x_t = \delta + \phi_1 x_{t-1} + \phi_2 x_{t-2} + \cdots + \phi_p x_{t-p} + w_t$, where $\phi_1, \phi_2, \ldots, \phi_p$ are constants and may be greater than 1. (Recall that $|\phi_1| < 1$ for an AR(1) model.) Here xt is a linear function of the values of x at the previous p lags.
A shorthand notation for the AR polynomial is $\Phi(B)$, and a general AR model might be written as $\Phi(B)x_t = \delta + w_t$. Of course, you would have to specify the order of the model somewhere on the side.
MA Models
An MA(1) model $x_t = \mu + w_t + \theta_1 w_{t-1}$ could be written as $x_t = \mu + (1 + \theta_1 B)w_t$. A factor such as $1 + \theta_1 B$ is called the MA polynomial, and it is denoted as $\Theta(B)$.

A model that involves both AR and MA terms might be written as $\Phi(B)(x_t - \mu) = \Theta(B)w_t$, or possibly even

$$(x_t - \mu) = \frac{\Theta(B)}{\Phi(B)} w_t$$
Note: Many textbooks and software programs define the MA polynomial with negative signs rather than positive signs as above. This doesn't change the properties of the model or, with a sample, the overall fit of the model. It only changes the algebraic signs of the MA coefficients. Always check to see how your software is defining the MA polynomial. For example, is the MA(1) polynomial $1 + \theta_1 B$ or $1 - \theta_1 B$?
Differencing
Often differencing is used to account for nonstationarity that occurs in the form of trend and/or seasonality. The difference operator is

$$\nabla = 1 - B$$

Thus

$$\nabla x_t = (1 - B)x_t = x_t - x_{t-1}$$

A subscript on $\nabla$ defines a difference at a lag equal to the subscript. For example,

$$\nabla_{12}\, x_t = x_t - x_{t-12}$$

This type of difference is often used with monthly data that exhibits seasonality. The idea is that differences from the previous year may be, on average, about the same for each month of a year.
A superscript says to repeat the differencing the specified number of times. As an example,

$$\nabla^2 x_t = (1 - B)^2 x_t = (1 - 2B + B^2)x_t = x_t - 2x_{t-1} + x_{t-2}$$
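In R, these differences can be computed with the base diff command; a sketch (x stands for any time series):

d1 = diff(x) # first difference: x(t) - x(t-1)
d12 = diff(x, lag = 12) # lag 12 (seasonal) difference: x(t) - x(t-12)
d2 = diff(x, differences = 2) # second difference: (1-B)^2 applied to x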
When a model only involves autoregressive terms it may be referred to as an AR model. When a model only involves moving average
terms, it may be referred to as an MA model.
When no differencing is involved, the abbreviation ARMA may be used.
Note: This week we're only considering non-seasonal models. We'll expand our toolkit to include seasonal models next week.
In most software programs, the elements in the model are specified in the order (AR order, differencing, MA order). As examples,
A model with (only) two AR terms would be specified as an ARIMA of order (2,0,0).
A model with one AR term, a first difference, and one MA term would have order (1,1,1).
For the last model, ARIMA(1,1,1), a model with one AR term and one MA term is being applied to the differenced variable zt = xt − xt-1. A first difference might be used to account for a linear trend in the data.

The differencing order refers to successive first differences. For example, for a difference order = 2, the variable analyzed is zt = (xt − xt-1) − (xt-1 − xt-2), the first difference of first differences. This type of difference might account for a quadratic trend in the data.
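As a sketch of how these orders are specified in R (x stands for any series; sarima comes from the astsa package used in this course):

fit = arima(x, order = c(1, 1, 1)) # one AR term, one first difference, one MA term
# with the astsa package, the equivalent call is sarima(x, 1, 1, 1)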
Three items should be considered to determine a first guess at an ARIMA model: a time series plot of the data, the ACF, and the PACF.
In Lesson 1.1, we discussed what to look for: possible trend, seasonality, outliers, constant variance or nonconstant variance.
You won't be able to spot any particular model by looking at this plot, but you will be able to see the need for various possible actions.
If there's an obvious upward or downward linear trend, a first difference may be needed. A quadratic trend might need a 2nd order difference (as described above). We rarely want to go much beyond two; in those cases, we might want to think about things like smoothing, which we will cover later in the course. Over-differencing can cause us to introduce unnecessary levels of dependency (difference white noise to obtain an MA(1); difference again to obtain an MA(2), etc.).
For data with a curved upward trend accompanied by increasing variance, you should consider transforming the series with either a
logarithm or a square root.
Note: Nonconstant variance in a series with no trend may have to be addressed with something like an ARCH model, which includes a model for changing variation over time. We'll cover ARCH models later in the course.
The ACF and PACF should be considered together. It can sometimes be tricky going, but a few combined patterns do stand out. (These are listed in Table 3.1 of the book on page 108.)
AR models have theoretical PACFs with non-zero values at the AR terms in the model and zero values elsewhere. The ACF will taper to zero in some fashion. (Example [1]) An AR(1) model has an ACF with the pattern

$$\rho_k = \phi_1^k$$
Note: You might also consider examining plots of xt versus various lags of xt.
After you've made a guess (or two) at a possible model, use software such as R, Minitab, or SAS to estimate the coefficients. Most software will use maximum likelihood estimation methods to make the estimates. Once the model has been estimated, do the following.
Look at the significance of the coefficients. In R, p-values aren't given. For each coefficient, calculate z = estimated coeff. / std. error of the coeff. If |z| > 1.96, the estimated coefficient is significantly different from 0 (a small R sketch of this calculation follows this list).
Look at the ACF of the residuals. For a good model, all autocorrelations for the residual series should be non-significant. If this isn't the case, you need to try a different model.
Look at Box-Pierce (Ljung) tests for possible residual autocorrelation at various lags (see Lesson 3.2 for a description of this test).
If non-constant variance is a concern, look at a plot of residuals versus fits and/or a time series plot of the residuals.
If something looks wrong, you'll have to revise your guess at what the model might be. This might involve adding parameters or re-interpreting the original ACF and PACF to possibly move in a different direction.
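Here is a small sketch of the z calculation mentioned in the first item above, assuming fit holds the result of a call to arima() (coef and vcov are base R accessors for the estimates and their covariance matrix):

est = coef(fit) # estimated coefficients
se = sqrt(diag(vcov(fit))) # their standard errors
z = est / se
abs(z) > 1.96 # TRUE marks coefficients significantly different from 0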
Sometimes more than one model can seem to work for the same dataset. When that's the case, some things you can do to decide between the models are:
AIC, AICc, and SIC (or BIC) are defined and discussed on pages 52-53 of our book. The statistics combine the estimate of the
variance with values of the sample size and number of parameters in the model.
One reason that two models may seem to give about the same results is that, with certain coefficient values, two different models can sometimes be nearly equivalent when they are each converted to an infinite order MA model. [Every ARIMA model can be converted to an infinite order MA; this is useful for some theoretical work, including the determination of standard errors for forecast errors.] We'll see more about this in Lesson 3.2.
Example 1: The Lake Erie data from the Week 1 assignment. The series is n = 40 consecutive annual measurements of the level of Lake Erie in October.
There's a possibility of some overall trend, but it might look that way just because there seemed to be a big dip around the 15th time or so. We'll go ahead without worrying about trend.
The ACF and the PACF of the series are the following. (They start at lag 1).
The PACF shows a single spike at the first lag and the ACF shows a tapering pattern. An AR(1) model is indicated.
We used an R script written by one of the authors of our book (Stoffer) to estimate the AR(1) model. Here's part of the output:
Coefficients:
ar1 = 0.6909 (s.e. 0.1094), xmean = 14.6309
Where the coefficients are listed, notice the heading xmean. This is giving the estimated mean of the series based on this model, not the intercept. The model used in the software is of the form $(x_t - \mu) = \phi_1(x_{t-1} - \mu) + w_t$.

The estimated model can be written as $(x_t - 14.6309) = 0.6909(x_{t-1} - 14.6309) + w_t$.

The AR coefficient is statistically significant (z = 0.6909/0.1094 = 6.315). It's not necessary to test the mean coefficient. We know that it's not 0.
The author's routine also gives residual diagnostics in the form of several graphs. Here's that part of the output:
The time series plot of the standardized residuals mostly indicates that there's no trend in the residuals, no outliers, and, in general, no changing variance across time.

The Q-Q plot is a normal probability plot. It doesn't look too bad, so the assumption of normally distributed residuals looks okay.

The bottom plot gives p-values for the Ljung-Box-Pierce statistics for each lag up to 20. These statistics consider the accumulated residual autocorrelation from lag 1 up to and including the lag on the horizontal axis. The dashed blue line is at .05. All p-values are above it. That's a good result; we want non-significant values for this statistic when looking at residuals. Read Lesson 3.2 of this week for more about the Ljung-Box-Pierce statistic.

All in all, the fit looks good. There's not much need to continue, but just to show you how things look when incorrect models are used, we will present another model.
Suppose that we had misinterpreted the ACF and PACF of the data and had tried an MA(1) model rather than the AR(1) model.
Coefficients:
ma1 xmean
The MA(1) coefficient is significant (you can check it), but mostly this looks worse than the statistics for the right model. The estimate of the variance is 1.87, compared to 1.447 for the AR(1) model. The AIC and BIC statistics are higher for the MA(1) than for the AR(1). That's not good.

The diagnostic graphs aren't good for the MA(1). The ACF has a significant spike at lag 2, and several of the Ljung-Box-Pierce p-values are below .05. We don't want them there. So, the MA(1) isn't a good model.
Suppose we try a model (still for the Lake Erie data) with one AR term and one MA term. Here's some of the output:
$AICc
[1] 1.592650
$BIC
[1] 0.640745
Note that the MA(1) coefficient is not significant (z = −0.0909/0.1969 is less than 1.96 in absolute value). The MA(1) term could be dropped, so that takes us back to the AR(1). Also, the estimate of the variance is barely better than the estimate for the AR(1) model, and the AIC and BIC statistics are higher for the ARMA(1,1) than for the AR(1).
Suppose that the model for your data is white noise. If this is true for every t, then it is true for t − 1 as well; in other words:

$$x_t = w_t$$
$$x_{t-1} = w_{t-1}$$

Multiply both sides of the second equation by 0.5 and rearrange:

$$0.5x_{t-1} = 0.5w_{t-1}$$
$$0 = -0.5x_{t-1} + 0.5w_{t-1}$$

Because the data are white noise, xt = wt, so we can add xt to the left side and wt to the right side:

$$x_t = -0.5x_{t-1} + w_t + 0.5w_{t-1}$$
This is an ARMA(1,1)! The problem is that we know it is white noise because of the original equations. If we looked at the ACF, what would we see? You would see the ACF corresponding to white noise: a spike at lag zero and then nothing else. This also means that if we take a white noise process and try to fit an ARMA(1,1), R will do it and will come up with coefficients that look something like what we have above. This is one of the reasons why we need to look at the ACF and PACF plots and other diagnostics. We prefer a model with the fewest parameters. This example also says that, for certain parameter values, ARMA models can appear very similar to one another.
Here's how we accomplished the work for the example in this lesson.

We first loaded the astsa library discussed in Lesson 1. It's a set of scripts written by Stoffer, one of the textbook's authors. If you installed the astsa package during Week 1, then you only need the library command.
Use the command library("astsa"). This makes the downloaded routines accessible.
In Lesson 3.3, we'll discuss the use of ARIMA models for forecasting. Here's how you would forecast the next 4 times past the end of the series using the author's source code and the AR(1) model for the Lake Erie data.
sarima.for (xerie, 4, 1, 0, 0) # four forecasts from an AR(1) model for the erie data
You'll get forecasts for the next four times, the standard errors for these forecasts, and a graph of the time series along with the forecasts. More details about forecasting will be given in Lesson 3.3.
Links:
[1] https://onlinecourses.science.psu.edu/stat510/sites/onlinecourses.science.psu.edu.stat510/files/L03/ar1_example.png
[2] https://onlinecourses.science.psu.edu/stat510/sites/onlinecourses.science.psu.edu.stat510/files/L03/ar2_example.png
[3] https://onlinecourses.science.psu.edu/stat510/sites/onlinecourses.science.psu.edu.stat510/files/L03/ma_example.png
[4] https://onlinecourses.science.psu.edu/stat510/sites/onlinecourses.science.psu.edu.stat510/files/L03/arma11_example.png
[5] http://www.stat.pitt.edu/stoffer/tsa3
3.2 Diagnostics
Analyzing possible statistical significance of autocorrelation values
The Ljung-Box statistic, also called the modified Box-Pierce statistic, is a function of the accumulated sample autocorrelations, rj, up to any
specified time lag m. As a function of m, it is determined as
$$Q(m) = n(n+2)\sum_{j=1}^{m}\frac{r_j^2}{n-j},$$

where n = number of usable data points after any differencing operations. (Please visit forvo.com [1] for the proper pronunciation of Ljung.) As an example,

$$Q(3) = n(n+2)\left(\frac{r_1^2}{n-1} + \frac{r_2^2}{n-2} + \frac{r_3^2}{n-3}\right)$$
This statistic can be used to examine residuals from a time series model in order to see if all underlying population autocorrelations for the
errors may be 0 (up to a specified point).
For nearly all models that we consider in this course, the residuals are assumed to be white noise, meaning that they are identically,
independently distributed (from each other). Thus, as we saw last week, the ideal ACF for residuals is that all autocorrelations are 0. This
means that Q(m) should be 0 for any lag m. A significant Q(m) for residuals indicates a possible problem with the model.
Distribution of Q(m)

1. When the rj are sample autocorrelations for residuals from a time series model, the null hypothesis distribution of Q(m) is approximately a χ² distribution with df = m − p, where p = the number of coefficients in the model. (Note: m = the lag to which we're accumulating, so in essence the statistic is not defined until m > p.)

2. When no model has been used, so that the ACF is for raw data, p = 0 and the null distribution of Q(m) is approximately a χ² distribution with df = m.
p-Value Determination
In both cases, a p-value is calculated as the probability past Q(m) in the relevant distribution. A small p-value (for instance, p-value < .05)
indicates the possibility of non-zero autocorrelation within the first m lags.
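In R, this test can be carried out with the base Box.test command; a sketch for the Lake Erie residuals discussed below (resids is a hypothetical name for the model residuals; lag = 12 and fitdf = 2 correspond to m = 12 and p = 2):

Box.test(resids, lag = 12, type = "Ljung-Box", fitdf = 2) # Q(12) with df = 12 - 2 = 10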
Example 1
Below is Minitab output for the Lake Erie level data that was used for homework 1 and in Lesson 3.1. A useful model is an AR(1) with a constant, so p = 2.

Lag  12  24  36  48
DF   10  22  34   *
Minitab gives p-values for accumulated lags that are multiples of 12. The R sarima command will give a graph that shows p-values of the Ljung-Box-Pierce tests for each lag (in steps of 1) up to a lag that appears to depend on the sample size.

Notice that the p-values for the modified Box-Pierce statistic all are well above .05, indicating non-significance. This is a desirable result. Remember that there are only 40 data values, so there's not much data contributing to correlations at high lags. Thus, the results for m = 24 and m = 36 may not be meaningful.
When you request a graph of the ACF values, significance limits are shown by R and by Minitab. In general, the limits for the autocorrelation are placed at 0 ± 2 standard errors of rk. The formula used for the standard error depends upon the situation.

Within the ACF of residuals as part of the ARIMA routine, the standard errors are determined assuming the residuals are white noise. The approximate formula for any lag is that the s.e. of $r_k = 1/\sqrt{n}$.

For the ACF of raw data (the acf command), the standard error at lag k is found as if the right model was an MA(k−1). This allows the possible interpretation that if all autocorrelations past a certain lag are within the limits, the model might be an MA of order defined by the last significant autocorrelation.
What are standardized residuals in a time series framework? One of the things that we need to look at when we look at the diagnostics from a regression fit is a graph of the standardized residuals. Let's review what this is for regular regression, where the standard deviation is σ. The standardized residual at observation i,

$$\frac{y_i - \beta_0 - \sum_{j=1}^{p}\beta_j x_{ij}}{\sigma},$$

should be N(0, 1). We hope to see normality when we look at the diagnostic plots. Another way to think about this is:
$$\frac{y_i - \left(\beta_0 + \sum_{j=1}^{p}\beta_j x_{ij}\right)}{\sigma} = \frac{y_i - \hat{y}_i}{\sqrt{\text{Var}(y_i - \hat{y}_i)}}$$
In the time series setting, the standardized residual at time t is $(x_t - \tilde{x}^{t-1}_t)/\sqrt{P^{t-1}_t}$, where

$$\tilde{x}^{t-1}_t = E(x_t \mid x_{t-1}, x_{t-2}, \ldots) \quad \text{and} \quad P^{t-1}_t = E\left[(x_t - \tilde{x}^{t-1}_t)^2\right]$$
This is where the standardized residuals come from. This is also essentially how a time series model is fit using R. We want to minimize the sum of these squared values:

$$\sum_{t=1}^{n}\left(\frac{x_t - \tilde{x}^{t-1}_t}{\sqrt{P^{t-1}_t}}\right)^2$$

(In reality, it is slightly more complicated: the log-likelihood function is maximized, and this is one term of that function.)
Links:
[1] http://www.forvo.com/search/Ljung
In an ARIMA model, we express xt as a function of past values of x and/or past errors (as well as a present time error). When we forecast a value past the end of the series, on the right side of the equation we might need values from the observed series or we might, in theory, need values that aren't yet observed.
Example: Consider the AR(2) model xt = + 1xt-1 + 2xt-2 + wt. In this model, xt is a linear function of the values of x at the previous two
times. Suppose that we have observed n data values and wish to use the observed data and estimated AR(2) model to forecast the value of
xn+1 and xn+2, the values of the series at the next two times past the end of the series. The equations for these two values are
To use the first of these equations, we simply use the observed values of xn and xn-1 and replace wn+1 by its expected value of 0 (the assumed
mean for the errors).
The second equation, for forecasting the value at time n + 2, presents a problem: it requires the unobserved value of xn+1 (one time past the
end of the series). The solution is to use the forecasted value of xn+1 (the result of the first equation).
For any wj with 1 ≤ j ≤ n, use the sample residual for time point j.
For any wj with j > n, use 0 as the value of wj.
A short R sketch of this forecasting procedure follows.
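In R, the built-in predict method carries out this recursion for a fitted model; a minimal sketch (x is a placeholder series):

fit <- arima(x, order = c(2, 0, 0))   # AR(2) fit to the observed series
fc <- predict(fit, n.ahead = 2)
fc$pred   # forecasts of x_{n+1} and x_{n+2}
fc$se     # standard errors of the forecast errors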
To understand the formula for the standard error of the forecast error, we first need to define the concept of psi-weights.
Any ARIMA model can be written as an infinite-order MA model in the past errors:

\[ x_t = w_t + \psi_1 w_{t-1} + \psi_2 w_{t-2} + \cdots + \psi_k w_{t-k} + \cdots = \sum_{j=0}^{\infty} \psi_j w_{t-j}, \quad \text{where } \psi_0 = 1. \]

An important constraint on the psi-weights is

\[ \sum_{j=0}^{\infty} |\psi_j| < \infty. \]
[On page 95 of our book, the authors define a causal model as one for which this constraint is in place, along with the additional restriction that
we can't express the value of the present x as a function of future values.]
The process of finding the psi-weight representation can involve a few algebraic tricks. Fortunately, R has a routine, ARMAtoMA, that will do it
for us. To illustrate how psi-weights may be determined algebraically, we'll consider a simple example.
Suppose the model is the AR(1) xt = 40 + 0.6xt−1 + wt. For an AR(1) model, the mean μ = δ/(1 − φ1), so in this case μ = 40/(1 − 0.6) = 100. We'll define zt = xt − 100 and rewrite the model as zt = 0.6zt−1
+ wt. (You can do the algebra to check that things match between the two expressions of the model.)
To find the psi-weight expression, we'll repeatedly substitute for the z on the right side in order to make the expression become one that only
involves w values.

The model for time t − 1 is zt−1 = 0.6zt−2 + wt−1. Substitute the right side of this second expression for zt−1 in the first expression:

zt = 0.6(0.6zt−2 + wt−1) + wt = 0.36zt−2 + 0.6wt−1 + wt

Next, write zt−2 = 0.6zt−3 + wt−2.
Substituting this into the equation gives zt = 0.216zt−3 + 0.36wt−2 + 0.6wt−1 + wt.
If you keep going, you'll soon see that the pattern leads to
\[ z_t = x_t - 100 = \sum_{j=0}^{\infty} (0.6)^j w_{t-j} \]
Thus the psi-weights for this model are given by ψj = (0.6)^j for j = 0, 1, 2, ….
In R, the command ARMAtoMA(ar = .6, ma = 0, 12) gives the first 12 psi-weights, ψ1 to ψ12, in scientific notation.
Remember that ψ0 = 1; R doesn't give this value. Its listing starts with ψ1, which equals 0.6 in this case.
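A quick check that the routine matches the algebra above:

psi <- ARMAtoMA(ar = 0.6, ma = 0, 12)   # psi_1, ..., psi_12
all.equal(psi, 0.6^(1:12))              # TRUE: matches psi_j = (0.6)^j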
MA Models: The psi-weights are easy for an MA model because the model already is written in terms of the errors. The psi-weights equal 0 for
lags past the order of the MA model and equal the coefficient values for lags of the errors that are in the model. Remember that we always
have ψ0 = 1, and a quick check with ARMAtoMA is sketched below.
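For example, for an MA(1) model with coefficient 0.5, ARMAtoMA confirms that ψ1 equals the MA coefficient and all later psi-weights are 0:

ARMAtoMA(ma = 0.5, lag.max = 5)
# [1] 0.5 0.0 0.0 0.0 0.0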
Standard error of the forecast error for a forecast using an ARIMA model
The variance of the difference between the forecasted value at time n + m and the (unobserved) value at time n + m is

\[ \text{Variance of } (x_{n+m} - \hat{x}^{\,n}_{n+m}) = \sigma_w^2 \sum_{j=0}^{m-1} \psi_j^2. \]
Taking the square root gives

\[ \text{Standard error of } (x_{n+m} - \hat{x}^{\,n}_{n+m}) = \sqrt{\hat{\sigma}_w^2 \sum_{j=0}^{m-1} \psi_j^2}. \]
Note that the summation of squared psi-weights begins with ψ0² = 1 and that the summation goes to m − 1, one less than the number of times
ahead for which we're forecasting.
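A sketch of this calculation in R, using the AR(1) model with \(\hat{\sigma}_w^2 = 4\) from the example farther below:

sigma2w <- 4                                  # error variance from the example below
psi <- c(1, ARMAtoMA(ar = 0.6, ma = 0, 5))    # psi_0 = 1, then psi_1, ..., psi_5
se <- sqrt(sigma2w * cumsum(psi^2))           # cumsum gives sum_{j=0}^{m-1} psi_j^2
se                                            # 2.000, 2.332, ... increasing with m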
When forecasting m = 1 time past the end of the series, the standard error of the forecast error is
\[ \text{Standard error of } (x_{n+1} - \hat{x}^{\,n}_{n+1}) = \sqrt{\hat{\sigma}_w^2 (1)}. \]
When forecasting the value m = 2 times past the end of the series, the standard error of the forecast error is
\[ \text{Standard error of } (x_{n+2} - \hat{x}^{\,n}_{n+2}) = \sqrt{\hat{\sigma}_w^2 (1 + \psi_1^2)}. \]
Notice that the variance will not be too big when m = 1. But, as you predict farther into the future, the variance will increase. When m is very
large, you will get the total variance. In other words, if you are trying to predict very far out, the prediction variance equals the variance of the entire time series, as
if you hadn't even looked at what was going on previously.
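A numerical illustration of this, using the AR(1) with φ1 = 0.6 and σw² = 4 from the example below: the forecast-error variance climbs toward the process variance σw²/(1 − φ1²) = 6.25.

phi <- 0.6; sigma2w <- 4
psi <- c(1, ARMAtoMA(ar = phi, ma = 0, 49))   # psi_0 through psi_49
fc_var <- sigma2w * cumsum(psi^2)             # forecast-error variance for m = 1, ..., 50
fc_var[50]                                    # essentially 6.25
sigma2w / (1 - phi^2)                         # 6.25, the variance of the series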
With the assumption of normally distributed errors, a 95% prediction interval for xn+m, the future value of the series at time n + m, is
\[ \hat{x}^{\,n}_{n+m} \pm 1.96 \sqrt{\hat{\sigma}_w^2 \sum_{j=0}^{m-1} \psi_j^2}. \]
Example: Suppose that an AR(1) model is estimated to be xt = 40 + 0.6xt-1 + wt. This is the same model used earlier in this handout, so the
psi-weights we got there apply.
Suppose that we have n = 100 observations, \( \hat{\sigma}_w^2 = 4 \), and x100 = 80. We wish to forecast the values at both times 101 and 102, and create
prediction intervals for both forecasts.
The forecast of the value at time 101 is

\[ \hat{x}^{\,100}_{101} = 40 + 0.6(80) + 0 = 88. \]
The 95% prediction interval for the value at time 101 is 88 ± 2(1.96), which is 84.08 to 91.92. We are therefore 95% confident that the
observation at time 101 will be between 84.08 and 91.92. If we repeated this exact process many times, then 95% of the computed prediction intervals
would contain the true value of x at time 101.
The forecast of the value at time 102 is

\[ \hat{x}^{\,100}_{102} = 40 + 0.6(88) + 0 = 92.8. \]

Note that we used the forecasted value for time 101 in the AR(1) equation. A 95% prediction interval for the value at time 102 is 92.8 ± (1.96)(2.332).
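These hand calculations are easy to verify directly in R:

delta <- 40; phi <- 0.6; sigma2w <- 4; x100 <- 80
x101 <- delta + phi * x100                   # 88
x102 <- delta + phi * x101                   # 92.8 (uses the forecast of x_101)
se1 <- sqrt(sigma2w)                         # 2
se2 <- sqrt(sigma2w * (1 + phi^2))           # 2.332, since psi_1 = phi for an AR(1)
x101 + c(-1, 1) * 1.96 * se1                 # 84.08  91.92
x102 + c(-1, 1) * 1.96 * se2                 # 88.23  97.37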
To forecast using an ARIMA model in R, we recommend our textbook authors' script called sarima.for. (It is part of the astsa library
recommended previously.)
Example: In the homework for Week 2, problem 5 asked you to suggest a model for a time series of stride lengths measured every 30 seconds
for a runner on a treadmill.
From R, the estimated coefficients for an AR(2) model and the estimated variance are as follows for a similar data set with n = 90 observations:
Coefficients:
The command

sarima.for(stridelength, 6, 2, 0, 0)

will give forecasts and standard errors of prediction errors for the next six times past the end of the series. Here's the output (slightly edited to
fit here):
$pred
Time Series:
Start = 91
End = 96
[1] 69.78674 64.75441 60.05661 56.35385 53.68102 51.85633
$se
Time Series:
Start = 91
End = 96
[1] 3.386615 5.155988 6.135493 6.629810 6.861170 6.962654
The forecasts are given in the first batch of values under $pred and the standard errors of the forecast errors are given in the last line in the
batch of results under $se.
The procedure also gave a graph showing the series followed by the forecasts as a red line and the upper and lower prediction limits as
blue dashed lines.
Psi-Weights for the Estimated AR(2) for the Stride Length Data
If we wanted to verify the standard error calculations for the six forecasts past the end of the series, we would need to know the psi-weights. To
get them, we supply the estimated AR coefficients for the AR(2) model to the ARMAtoMA command.
This will give the psi-weights in scientific notation; the first two, used in the check below, are ψ1 = 1.148 and ψ2 = 0.982.
The output for estimating the AR(2) included this estimate of the error variance: \( \hat{\sigma}_w^2 = 11.47 \).
As an example, the standard error of the forecast error for 3 times past the end of the series is
\[ \sqrt{\hat{\sigma}_w^2 \sum_{j=0}^{3-1} \psi_j^2} = \sqrt{11.47\,(1 + 1.148^2 + 0.982^2)} = 6.1357, \]

which, except for round-off error, matches the value of 6.135493 given as the third standard error in the sarima.for output above.
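A sketch of this check in R. The AR(2) coefficient estimates are not shown above, so the values here are inferred from the quoted psi-weights (for an AR(2), ψ1 = φ1 and ψ2 = φ1² + φ2) and should be treated as assumptions:

phi1 <- 1.148                                   # assumed: equals psi_1 quoted above
phi2 <- 0.982 - phi1^2                          # assumed: about -0.336, from psi_2
psi <- ARMAtoMA(ar = c(phi1, phi2), ma = 0, 5)  # psi_1, ..., psi_5
sqrt(11.47 * sum(c(1, psi[1:2])^2))             # about 6.1357, matching $se above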
For a stationary series and model, the forecasts of future values will eventually converge to the mean and then stay there. Note below what
happened with the stride length forecasts when we asked for 30 forecasts past the end of the series. [The command was sarima.for(stridelength, 30, 2, 0, 0).]
The forecast got to 48.74753 and then essentially stayed there.
$pred
Time Series:
Start = 91
End = 120
[1] 69.78674 64.75441 60.05661 56.35385 53.68102 51.85633 50.65935 49.89811
[9] 49.42626 49.14026 48.97043 48.87153 48.81503 48.78339 48.76604 48.75676
[17] 48.75192 48.74949 48.74833 48.74780 48.74760 48.74753 48.74753 48.74755
[25] 48.74757 48.74759 48.74760 48.74761 48.74762 48.74762
The graph showing the series and the prediction intervals for these forecasts followed.