
State Space Modeling in Macroeconomics and

Finance Using SsfPack in S+FinMetrics


Eric Zivot, Jiahui Wang and Siem-Jan Koopman
August 4, 2002
This version: October 10, 2002

Abstract
This paper surveys some common state space models used in macroeconomics and finance and shows how to specify and estimate these models using
the SsfPack algorithms implemented in the S-PLUS module S+FinMetrics.
Examples include recursive regression models, time varying parameter models,
exact ARMA models and calculation of the Beveridge-Nelson decomposition,
unobserved components models, stochastic volatility models, and affine term
structure models.

1 Introduction

State space modeling in macroeconomics and finance has become widespread over
the last decade. Textbook treatments of state space models with applications in economics and finance are now common and are given in Harvey (1989, 1993), Hamilton
(1994a), West and Harrison (1997), and Kim and Nelson (1999), Shumway and Stoffer (2000), Durbin and Koopman (2001), and Chan (2002). However, until recently
there has not been much flexible software for the statistical analysis of general models
in state space form. Currently, the most modern set of state space modeling tools
is available in SsfPack, developed by Siem-Jan Koopman.¹ SsfPack is a suite of C

This paper was prepared for the Academy Colloquium Masterclass on State Space and Unobserved Components Models in honour of Professor J. Durbin, sponsored by the Royal Netherlands
Academy of Arts and Sciences. Our thanks go to Siem-Jan Koopman for inviting us to participate
in this program. Financial support from the Royal Netherlands Academy of Arts and Sciences, and
from the Gary Waterman Distinguished Scholar Fund at the University of Washington is gratefully
acknowledged.

Contact information: jwang@svolatility.com and ezivot@u.washington.edu. Updates to the paper, including data and programs used in this paper, are available at the second author's website:
http://faculty.washington.edu/ezivot/ezresearch.htm
¹ Information about SsfPack can be found at http://www.ssfpack.com.

routines for carrying out computations involving the statistical analysis of univariate
and multivariate models in state space form. The routines allow for a variety of state
space forms from simple time invariant models to complicated time-varying models.
Functions are available to put standard models like ARMA and spline models in
state space form. General routines are available for filtering, smoothing, simulation
smoothing, likelihood evaluation, forecasting and signal extraction. Full details of
the statistical analysis are provided in Durbin and Koopman (2001), and the reader is
referred to the papers by Koopman, Shephard and Doornik (1999, 2001) for technical
details on the algorithms used in the SsfPack functions.
The SsfPack routines are implemented in Ox and in Insightful's S-PLUS module
S+FinMetrics, and are based on the algorithms in SsfPack version 3.0.² The implementation of the SsfPack functions in Ox is described in Koopman, Shephard and
Doornik (1999), and the implementation of the SsfPack functions in S+FinMetrics
is described in chapter fourteen of Zivot and Wang (2003). This paper gives a selected
survey of state space modeling in economics and finance utilizing the SsfPack/S+FinMetrics
functions.
This paper is organized as follows. Section two describes the general state space
model and state space representation required for the S+FinMetrics/SsfPack state
space functions. Subsections describe the various S+FinMetrics/SsfPack functions
for putting common time series models into state space form. The process of simulating observations from a given state space model is also covered. Section three
summarizes the main algorithms used for the analysis of state space models. These include the Kalman filter, Kalman smoother, moment smoothing, disturbance smoothing and forecasting. Estimation of the unknown parameters in a state space model is
described in Section four. Section five gives several examples of state space modeling
in economics and finance. These include recursive least squares estimation, estimation and analysis of time varying parameter models, exact ARMA estimation and
the Beveridge-Nelson decomposition, estimation and analysis of unobserved component models, the estimation of a stochastic volatility model, and the estimation and
analysis of some common affine term structure models.
The following typographical conventions are used in this paper. The typewriter
font is used for S-PLUS functions, the output of S-PLUS functions and examples of
S-PLUS sessions. S-PLUS objects of a specified class are expressed in typewriter
font enclosed in quotations . For example, the S-PLUS function timeSeries
creates objects of class timeSeries. Displayed S-PLUS commands are shown with
the prompt character >. For example
² Ox is a matrix programming language developed by Jurgen Doornik. Information about
Ox is available at http://www.nuff.ox.ac.uk/users/doornik/. S+FinMetrics is an S-PLUS module for the analysis of economic and financial time series. It was conceived by the authors
and Doug Martin, and developed at Insightful, Inc. Its use and functionality is described in
detail in Zivot and Wang (2003). Further information about S+FinMetrics can be found at
http://www.insightful.com/products/default.asp.

> summary(ols.fit)
S-PLUS commands that require more than one line of input are displayed with the
continuation prompt indicated by "+" or "Continue string:".

2 State Space Representation

Many dynamic time series models in economics and finance may be represented in
state space form. Some common examples are ARMA models, time-varying regression
models, dynamic linear models with unobserved components, discrete versions of
continuous time diffusion processes, stochastic volatility models, and non-parametric and
spline regressions. The linear Gaussian state space model in S+FinMetrics/SsfPack
is represented as the system of equations
$$\underset{m\times 1}{\alpha_{t+1}} = \underset{m\times 1}{d_t} + \underset{m\times m}{T_t}\,\underset{m\times 1}{\alpha_t} + \underset{m\times r}{H_t}\,\underset{r\times 1}{\eta_t} \qquad (1)$$

$$\underset{N\times 1}{\theta_t} = \underset{N\times 1}{c_t} + \underset{N\times m}{Z_t}\,\underset{m\times 1}{\alpha_t} \qquad (2)$$

$$\underset{N\times 1}{y_t} = \underset{N\times 1}{\theta_t} + \underset{N\times N}{G_t}\,\underset{N\times 1}{\varepsilon_t} \qquad (3)$$

where t = 1, . . . , n and

$$\alpha_1 \sim N(a, P), \qquad (4)$$

$$\eta_t \sim \mathrm{iid}\; N(0, I_r) \qquad (5)$$

$$\varepsilon_t \sim \mathrm{iid}\; N(0, I_N) \qquad (6)$$

and it is assumed that

$$E[\varepsilon_t \eta_t'] = 0$$
In (4), a and P are fixed and known, but that can be generalized. The state vector α_t
contains unobserved stochastic processes and unknown fixed effects, and the transition
equation (1) describes the evolution of the state vector over time using a first order
Markov structure. The measurement equation (3) describes the vector of observations
y_t in terms of the state vector α_t through the signal θ_t and a vector of disturbances
ε_t. It is assumed that the innovations in the transition equation and the innovations
in the measurement equation are independent, but this assumption can be relaxed.
The deterministic matrices T_t, Z_t, H_t, G_t are called system matrices and are usually
sparse selection matrices. The vectors d_t and c_t contain fixed components and may
be used to incorporate known effects or known patterns into the model; otherwise
they are equal to zero.
The representation of the transition equation (1) is somewhat non-standard because the time index on the innovation is t and not t + 1. A more common
representation of the transition equation, utilized by Harvey (1989, 1993), Hamilton
(1994), Kim and Nelson (1999) and others, is

$$\alpha_t = d_t + T_t \alpha_{t-1} + H_t \eta_t \qquad (7)$$

The representation in (1) simplifies the implementation of many of the recursive
algorithms associated with the Kalman filter and smoother. The notation can be
confusing when certain models need to be put in this form, but it is always possible to
do so, although the interpretation of some state space quantities may be a bit different
than those in (7).
The state space model (1) - (6) may be compactly expressed as

$$\underset{(m+N)\times 1}{\begin{pmatrix} \alpha_{t+1} \\ y_t \end{pmatrix}} = \underset{(m+N)\times 1}{\delta_t} + \underset{(m+N)\times m}{\Phi_t}\,\underset{m\times 1}{\alpha_t} + \underset{(m+N)\times 1}{u_t}, \qquad (8)$$

$$\alpha_1 \sim N(a, P), \qquad (9)$$

$$u_t \sim \mathrm{iid}\; N(0, \Omega_t) \qquad (10)$$

where

$$\delta_t = \begin{pmatrix} d_t \\ c_t \end{pmatrix}, \quad \Phi_t = \begin{pmatrix} T_t \\ Z_t \end{pmatrix}, \quad u_t = \begin{pmatrix} H_t \eta_t \\ G_t \varepsilon_t \end{pmatrix}, \quad \Omega_t = \begin{pmatrix} H_t H_t' & 0 \\ 0 & G_t G_t' \end{pmatrix}$$

The initial value parameters are summarized in the (m + 1) × m matrix

$$\Sigma = \begin{pmatrix} P \\ a' \end{pmatrix} \qquad (11)$$

For multivariate models, i.e. N > 1, it is assumed that the N × N matrix G_t G_t' is
diagonal.
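The stacking in (8)-(11) can be made concrete with a short Python sketch (illustrative only; the matrix names mirror the paper's δ, Φ, Ω, Σ, and the numerical values in the usage example are hypothetical):

```python
import numpy as np

def compact_ssf(d, c, T, Z, H, G, a, P):
    """Stack the system matrices into the compact form (8)-(11):
    delta = (d; c), Phi = (T; Z), Omega = blockdiag(H H', G G'), Sigma = (P; a')."""
    delta = np.vstack([d, c])                       # (m+N) x 1
    Phi = np.vstack([T, Z])                         # (m+N) x m
    HH, GG = H @ H.T, G @ G.T
    Omega = np.block([                              # (m+N) x (m+N), block diagonal
        [HH, np.zeros((HH.shape[0], GG.shape[1]))],
        [np.zeros((GG.shape[0], HH.shape[1])), GG],
    ])
    Sigma = np.vstack([P, a.T])                     # (m+1) x m
    return delta, Phi, Omega, Sigma

# Hypothetical example: m = 2 states, N = 1 observation, r = 2 state innovations
m, N = 2, 1
d, c = np.zeros((m, 1)), np.zeros((N, 1))
T, Z = np.eye(m), np.ones((N, m))
H, G = 0.5 * np.eye(m), np.array([[1.0]])
a, P = np.zeros((m, 1)), np.eye(m)
delta, Phi, Omega, Sigma = compact_ssf(d, c, T, Z, H, G, a, P)
```

The block-diagonal structure of Ω reflects the independence of the transition and measurement innovations assumed after (6).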

2.1 Initial Conditions

The variance matrix P of the initial state vector α₁ is assumed to be of the form

$$P = P_* + \kappa P_\infty \qquad (12)$$

where P_* and P_∞ are symmetric m × m matrices with ranks r_* and r_∞, respectively,
and κ is a large scalar value, e.g. κ = 10⁷. The matrix P_* captures the covariance
structure of the stationary components in the initial state vector, and the matrix P_∞
is used to specify the initial variance matrix for nonstationary components. When
the ith diagonal element of P_∞ is negative, the corresponding ith column and row
of P_* are assumed to be zero, and the corresponding row and column of P_∞ will
be taken into consideration. When some elements of the state vector are nonstationary,
the S+FinMetrics/SsfPack algorithms implement an exact diffuse prior approach
as described in Durbin and Koopman (2001) and Koopman, Shephard and Doornik
(2001).

2.2 State Space Representation in S+FinMetrics/SsfPack

State space models in S+FinMetrics/SsfPack utilize the compact representation (8)


with initial value information (11). The following examples describe the specification
of a state space model for use in the S+FinMetrics/SsfPack state space modeling
functions.
Example 1 State space representation of the local level model
Consider the following simple model for the stochastic evolution of the logarithm
of an asset price yt
$$\alpha_{t+1} = \alpha_t + \eta_t, \quad \eta_t \sim \mathrm{iid}\; N(0, \sigma_\eta^2) \qquad (13)$$

$$y_t = \alpha_t + \varepsilon_t, \quad \varepsilon_t \sim \mathrm{iid}\; N(0, \sigma_\varepsilon^2) \qquad (14)$$

$$\alpha_1 \sim N(a, P) \qquad (15)$$

where it is assumed that E[ε_t η_t] = 0. In the above model, the observed asset price y_t
is the sum of two unobserved components, α_t and ε_t. The component α_t is the state
variable and represents the fundamental value (signal) of the asset. The transition
equation (13) shows that the fundamental values evolve according to a random walk.
The component ε_t represents random deviations (noise) from the fundamental value
that are assumed to be independent from the innovations to α_t. The strength of the
signal in the fundamental value relative to the random deviation is measured by the
signal-to-noise ratio of variances q = σ_η²/σ_ε². The model (13) - (15) is called the
random walk plus noise model, signal plus noise model or the local level model.³
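For readers working outside S-PLUS, the dynamics (13) - (14) are easy to simulate directly. The following Python sketch is illustrative only; the S+FinMetrics route via SsfSim is shown in Example 3 below:

```python
import numpy as np

def simulate_local_level(n, sigma_eta, sigma_eps, a1=0.0, seed=0):
    """Simulate the random walk plus noise model (13)-(14):
    alpha_{t+1} = alpha_t + eta_t,  y_t = alpha_t + eps_t."""
    rng = np.random.default_rng(seed)
    alpha = np.empty(n + 1)
    alpha[0] = a1
    y = np.empty(n)
    for t in range(n):
        y[t] = alpha[t] + sigma_eps * rng.standard_normal()       # measurement (14)
        alpha[t + 1] = alpha[t] + sigma_eta * rng.standard_normal()  # transition (13)
    return alpha, y

# signal-to-noise ratio q = 0.25^2 / 1 = 0.25
alpha, y = simulate_local_level(250, sigma_eta=0.5, sigma_eps=1.0)
```

A small q makes y_t look noisy around a slowly evolving level; a large q makes y_t track the random walk closely.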
The state space form (8) of the local level model has time invariant parameters

$$\delta = \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \quad \Phi = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \quad \Omega = \begin{pmatrix} \sigma_\eta^2 & 0 \\ 0 & \sigma_\varepsilon^2 \end{pmatrix} \qquad (16)$$

with error vector u_t = (η_t, ε_t)′. Since the state variable α_t is I(1), the unconditional distribution of the initial state α_1 doesn't have finite variance. In this case, it
is customary to set a = E[α_1] = 0 and P = var(α_1) to some large positive number,
e.g. P = 10⁷, in (15) to reflect that no prior information is available. Using (12),
the initial variance is specified with P_* = 0 and P_∞ = 1. Therefore, the initial state
matrix (11) for the local level model has the form

$$\Sigma = \begin{pmatrix} -1 \\ 0 \end{pmatrix} \qquad (17)$$

where −1 implies that P_∞ = 1.
In S+FinMetrics/SsfPack, a state space model is specified by creating either a list
variable with components giving the minimum components necessary for describing
³ A detailed technical analysis of this model is given in Durbin and Koopman (2001), chapter 2.

State Space Parameter   List Component Name
δ                       mDelta
Φ                       mPhi
Ω                       mOmega
Σ                       mSigma

Table 1: S+FinMetrics/SsfPack state space form list components


a particular state space form, or by creating an "ssf" object. To illustrate, consider
creating a list variable containing the state space parameters in (16)-(17), with
σ_η = 0.5 and σ_ε = 1:
> sigma.e = 1
> sigma.n = 0.5
> a1 = 0
> P1 = -1
> ssf.ll.list = list(mPhi=as.matrix(c(1,1)),
+ mOmega=diag(c(sigma.n^2,sigma.e^2)),
+ mSigma=as.matrix(c(P1,a1)))
> ssf.ll.list
$mPhi:
     [,1]
[1,]    1
[2,]    1
$mOmega:
     [,1] [,2]
[1,] 0.25    0
[2,] 0.00    1
$mSigma:
     [,1]
[1,]   -1
[2,]    0
In the list variable ssf.ll.list, the component names match the state space form
parameters in (8) and (11). This naming convention, summarized in Table 1, must
be used for the specification of any valid state space model. Also, notice the use of
the coercion function as.matrix. This ensures that the dimensions of the state space
parameters are correctly specified.
An ssf object may be created from the list variable ssf.ll.list using the
S+FinMetrics/SsfPack function CheckSsf:
> ssf.ll = CheckSsf(ssf.ll.list)

> class(ssf.ll)
[1] "ssf"
> names(ssf.ll)
 [1] "mDelta"  "mPhi"    "mOmega"  "mSigma"  "mJPhi"
 [6] "mJOmega" "mJDelta" "mX"      "cT"      "cX"
[11] "cY"      "cSt"
> ssf.ll
$mPhi:
     [,1]
[1,]    1
[2,]    1
$mOmega:
     [,1] [,2]
[1,] 0.25    0
[2,] 0.00    1
$mSigma:
     [,1]
[1,]   -1
[2,]    0
$mDelta:
     [,1]
[1,]    0
[2,]    0
$mJPhi:
[1] 0
$mJOmega:
[1] 0
$mJDelta:
[1] 0
$mX:
[1] 0
$cT:
[1] 0
$cX:
[1] 0
$cY:
[1] 1
$cSt:
[1] 1
attr(, "class"):
[1] "ssf"
The function CheckSsf takes a list variable with a minimum state space form, coerces the components to matrix objects and returns the full parameterization of a
state space model used in many of the S+FinMetrics/SsfPack state space modeling
functions.
Example 2 State space representation of a time varying parameter regression model
Consider the Capital Asset Pricing Model (CAPM) with time varying intercept
and slope
$$r_t = \alpha_t + \beta_{M,t} r_{M,t} + \nu_t, \quad \nu_t \sim GWN(0, \sigma_\nu^2) \qquad (18)$$

$$\alpha_{t+1} = \alpha_t + \xi_t, \quad \xi_t \sim GWN(0, \sigma_\xi^2) \qquad (19)$$

$$\beta_{M,t+1} = \beta_{M,t} + \zeta_t, \quad \zeta_t \sim GWN(0, \sigma_\zeta^2) \qquad (20)$$

where r_t denotes the return on an asset in excess of the risk free rate, and r_{M,t}
denotes the excess return on a market index. In this model, both the abnormal
excess return α_t and asset risk β_{M,t} are allowed to vary over time following a random
walk specification. Let α_t = (α_t, β_{M,t})′ denote the state vector, y_t = r_t, x_t = (1, r_{M,t})′, H_t = diag(σ_ξ, σ_ζ)
and G_t = σ_ν. Then the state space form (8) of (18) - (20) is

$$\begin{pmatrix} \alpha_{t+1} \\ y_t \end{pmatrix} = \begin{pmatrix} I_2 \\ x_t' \end{pmatrix} \alpha_t + \begin{pmatrix} H_t \eta_t \\ G_t \varepsilon_t \end{pmatrix}$$

and has parameters

$$\Phi_t = \begin{pmatrix} I_2 \\ x_t' \end{pmatrix}, \quad \Omega = \begin{pmatrix} \sigma_\xi^2 & 0 & 0 \\ 0 & \sigma_\zeta^2 & 0 \\ 0 & 0 & \sigma_\nu^2 \end{pmatrix} \qquad (21)$$
(21)

Since α_t is I(1), the initial state vector α_1 doesn't have finite variance, so it is customary to set a = 0 and P = κI_2 where κ is large. Using (12), the initial variance is
specified with P_* = 0 and P_∞ = I_2. Therefore, the initial state matrix (11) for the
time varying CAPM has the form

$$\Sigma = \begin{pmatrix} -1 & 0 \\ 0 & -1 \\ 0 & 0 \end{pmatrix}$$

The state space parameter matrix Φ_t in (21) has a time varying system element
Z_t = x_t′. In S+FinMetrics/SsfPack, the specification of this time varying element in
Φ_t requires an index matrix J_Φ and a data matrix X to which the indices in J_Φ refer.
The index matrix J_Φ must have the same dimension as Φ_t. The elements of J_Φ are
all set to −1 except the elements for which the corresponding elements of Φ_t are time
varying. A non-negative index value indicates the column of the data matrix X
which contains the time varying values. For example, in the time varying CAPM, the
index matrix J_Φ has the form

$$J_\Phi = \begin{pmatrix} -1 & -1 \\ -1 & -1 \\ 1 & 2 \end{pmatrix}$$
The specification of the state space form for the time varying CAPM requires values for the variances σ_ξ², σ_ζ², and σ_ν², as well as a data matrix X whose rows correspond
with Z_t = x_t′ = (1, r_{M,t}). For example, let σ_ξ² = (0.01)², σ_ζ² = (0.05)² and σ_ν² = (0.1)²,
and construct the data matrix X using the excess return data in the S+FinMetrics
"timeSeries" excessReturns.ts:
> X.mat = cbind(1,as.matrix(seriesData(excessReturns.ts[,"SP500"])))
The state space form may be created using
> Phi.t = rbind(diag(2),rep(0,2))
> Omega = diag(c((.01)^2,(.05)^2,(.1)^2))
> J.Phi = matrix(-1,3,2)
> J.Phi[3,1] = 1
> J.Phi[3,2] = 2
> Sigma = -Phi.t
> ssf.tvp.capm = list(mPhi=Phi.t,
+ mOmega=Omega,
+ mJPhi=J.Phi,
+ mSigma=Sigma,
+ mX=X.mat)
> ssf.tvp.capm
$mPhi:
     [,1] [,2]
[1,]    1    0
[2,]    0    1
[3,]    0    0
$mOmega:
       [,1]   [,2] [,3]
[1,] 0.0001 0.0000 0.00
[2,] 0.0000 0.0025 0.00
[3,] 0.0000 0.0000 0.01
$mJPhi:
     [,1] [,2]
[1,]   -1   -1
[2,]   -1   -1
[3,]    1    2
$mSigma:
     [,1] [,2]
[1,]   -1    0
[2,]    0   -1
[3,]    0    0
$mX:
numeric matrix: 131 rows, 2 columns.
      SP500
  1 1  0.002803
  2 1  0.017566
...
131 1 -0.0007548
Notice in the specification of Φ_t the values associated with x_t′ in the third row are
set to zero. In the index matrix J_Φ, the (3,1) element is 1 and the (3,2) element is
2, indicating that the data for the first and second columns of x_t′ come from the first
and second columns of the component mX, respectively.
In the general state space model (8), it is possible that all of the system matrices
δ_t, Φ_t and Ω_t have time varying elements. The corresponding index matrices J_δ, J_Φ
and J_Ω indicate which elements of the matrices δ_t, Φ_t and Ω_t are time varying, and
the data matrix X contains the time varying components. The naming convention
for these components is summarized in Table 2.


Parameter Index Matrix               List Component Name
J_δ                                  mJDelta
J_Φ                                  mJPhi
J_Ω                                  mJOmega

Time Varying Component Data Matrix   List Component Name
X                                    mX

Table 2: S+FinMetrics/SsfPack time varying state space components


Function        Description
GetSsfReg       Create state space form for linear regression model
GetSsfArma      Create state space form for stationary and invertible ARMA model
GetSsfRegArma   Create state space form for linear regression model with ARMA errors
GetSsfStsm      Create state space form for structural time series model
GetSsfSpline    Create state space form for nonparametric cubic spline model

Table 3: S+FinMetrics/SsfPack functions for creating common state space models

2.3 S+FinMetrics/SsfPack Functions for Specifying the State Space Form for Some Common Time Series Models

S+FinMetrics/SsfPack has functions for the creation of the state space representation of some common time series models. These functions and models are summarized
in Table 3.
A complete description of the underlying statistical models and the use of these
functions is given in Zivot and Wang (2003), chapter fourteen. The applications
section later on illustrates the use of some of these functions.

2.4 Simulating Observations from the State Space Model

Once a state space model has been specified, it is often interesting to draw simulated
values from the model. Simulation from a given state space model is also necessary for
Monte Carlo and bootstrap exercises. The S+FinMetrics/SsfPack function SsfSim
may be used for such a purpose. The arguments expected from SsfSim are
> args(SsfSim)
function(ssf, n = 100, mRan = NULL, a1 = NULL)
where ssf represents either a list with components giving a minimal state space
form or a valid "ssf" object, n is the number of simulated observations, mRan is a
user-specified matrix of disturbances, and a1 is the initial state vector.
Example 3 Simulating observations from the local level model


Figure 1: Simulated values from the local level model created using the S+FinMetrics function SsfSim.
To generate 250 observations on the state variable α_{t+1} and observations y_t in the
local level model (13) - (15) use
> set.seed(112)
> ll.sim = SsfSim(ssf.ll.list,n=250)
> class(ll.sim)
[1] "matrix"
> colIds(ll.sim)
[1] "state"    "response"
The function SsfSim returns a matrix containing the simulated state variables α_{t+1}
and observations y_t. These values are illustrated in Figure 1.


3 Algorithms

3.1 Kalman Filter

The Kalman filter is a recursive algorithm for the evaluation of moments of the normally distributed state vector α_{t+1} conditional on the observed data Y_t = (y_1, . . . , y_t).
To describe the algorithm, let a_t = E[α_t|Y_{t−1}] denote the conditional mean of α_t
based on information available at time t − 1, and let P_t = var(α_t|Y_{t−1}) denote the
conditional variance of α_t.
The filtering or updating equations of the Kalman filter compute a_{t|t} = E[α_t|Y_t]
and P_{t|t} = var(α_t|Y_t) using

$$a_{t|t} = a_t + K_t v_t \qquad (22)$$

$$P_{t|t} = P_t - P_t Z_t' K_t' \qquad (23)$$

where

$$v_t = y_t - c_t - Z_t a_t \qquad (24)$$

$$F_t = Z_t P_t Z_t' + G_t G_t' \qquad (25)$$

$$K_t = P_t Z_t' F_t^{-1} \qquad (26)$$

The variable v_t is the measurement equation innovation or prediction error, F_t = var(v_t), and K_t is the Kalman gain matrix.
The prediction equations of the Kalman filter compute a_{t+1} and P_{t+1} using

$$a_{t+1} = T_t a_{t|t} \qquad (27)$$

$$P_{t+1} = T_t P_{t|t} T_t' + H_t H_t' \qquad (28)$$
The S+FinMetrics/SsfPack function KalmanFil implements the Kalman filter
recursions in a computationally efficient way. The output of KalmanFil is primarily
used by other S+FinMetrics/SsfPack functions, but it can also be used to evaluate
the appropriateness of a given state space model through the analysis of the innovations v_t. The S+FinMetrics/SsfPack function SsfMomentEst computes the filtered
state and response estimates from a given state space model and observed data with
the optional argument task="STFIL". Predicted state and response estimates are
computed using SsfMomentEst with the optional argument task="STPRED".

3.2 Kalman Filter Initialization

To be written by Siem-Jan


3.3 Kalman Smoother

The Kalman filtering algorithm is a forward recursion which computes one-step ahead
estimates a_{t+1} and P_{t+1} based on Y_t for t = 1, . . . , n. The Kalman smoothing algorithm is a backward recursion which computes the mean and variance of specific
conditional distributions based on the full data set Y_n = (y_1, . . . , y_n). The smoothing
equations are

$$r_t^* = T_t' r_t, \quad N_t^* = T_t' N_t T_t, \quad K_t^* = N_t^* K_t,$$

$$e_t = F_t^{-1} v_t - K_t' r_t^*, \quad D_t = F_t^{-1} + K_t' K_t^* \qquad (29)$$

and the backwards updating equations are

$$r_{t-1} = Z_t' e_t + r_t^*, \quad N_{t-1} = Z_t' D_t Z_t - \langle K_t^* Z_t \rangle + N_t^* \qquad (30)$$

for t = n, . . . , 1 with initializations r_n = 0 and N_n = 0. For any square matrix A,
the operator ⟨A⟩ = A + A′. The values r_t are called state smoothing residuals and
the values e_t are called response smoothing residuals. The recursions (29) and (30)
are somewhat non-standard. Durbin and Koopman (2001) show how they may be
re-expressed in more standard form.
The S+FinMetrics/SsfPack function KalmanSmo implements the Kalman smoother
recursions in a computationally efficient way. The output of KalmanSmo is primarily
used by other S+FinMetrics/SsfPack functions, but it can also be used to evaluate
the appropriateness of a given state space model following the arguments in Harvey
and Koopman (1992) and de Jong and Penzer (1998).
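In the local level case the equivalent standard-form recursion of Durbin and Koopman (2001) is short enough to sketch. The Python fragment below (an illustration, not the KalmanSmo implementation) runs the forward filter and then the backward pass, returning the smoothed state means:

```python
import numpy as np

def smooth_local_level(y, sigma_eta2, sigma_eps2, a1=0.0, P1=100.0):
    """Forward Kalman filter followed by the backward smoothing recursion,
    specialized to the local level model (T_t = Z_t = 1).  Returns the
    smoothed state means E[alpha_t | Y_n]."""
    n = len(y)
    a = np.empty(n + 1); P = np.empty(n + 1)
    v = np.empty(n); F = np.empty(n); K = np.empty(n)
    a[0], P[0] = a1, P1
    for t in range(n):                          # forward pass
        v[t] = y[t] - a[t]
        F[t] = P[t] + sigma_eps2
        K[t] = P[t] / F[t]
        a[t + 1] = a[t] + K[t] * v[t]
        P[t + 1] = P[t] * (1 - K[t]) + sigma_eta2
    r = 0.0                                     # r_n = 0
    alpha_hat = np.empty(n)
    for t in range(n - 1, -1, -1):              # backward pass
        r = v[t] / F[t] + (1 - K[t]) * r        # r_{t-1}; L_t = 1 - K_t here
        alpha_hat[t] = a[t] + P[t] * r          # smoothed state mean
    return alpha_hat
```

A useful check is that, with a proper prior (finite P1), the recursion must reproduce the exact Gaussian conditional mean E[α|y] computed by brute-force linear algebra from the joint covariance of (α, y).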

3.4 Smoothed State and Response Estimates

The smoothed estimates of the state vector α_t and its variance matrix are denoted
α̂_t = E[α_t|Y_n] (or a_{t|n}) and var(α_t|Y_n), respectively. The smoothed estimate α̂_t is
the optimal estimate of α_t using all available information Y_n, whereas the filtered
estimate a_{t|t} is the optimal estimate only using information available at time t, Y_t.
The computation of α̂_t and its variance from the Kalman smoother algorithm is
described in Durbin and Koopman (2001).
The smoothed estimate of the response y_t and its variance are computed using

$$\hat{y}_t = E[\theta_t|Y_n] = c_t + Z_t \hat{\alpha}_t$$

$$\mathrm{var}(\hat{y}_t|Y_n) = Z_t \mathrm{var}(\alpha_t|Y_n) Z_t'$$

Smoothed estimates of the state and response may be computed using the S+FinMetrics/SsfPack
functions SsfCondDens and SsfMomentEst with the optional argument task="STSMO".
The function SsfCondDens only computes the smoothed states and responses whereas
SsfMomentEst also computes the associated variances.

3.5 Smoothed Disturbance Estimates

The smoothed disturbance estimates are the estimates of the measurement equation
innovations ε_t and transition equation innovations η_t based on all available information Y_n, and are denoted ε̂_t = E[ε_t|Y_n] (or ε_{t|n}) and η̂_t = E[η_t|Y_n] (or η_{t|n}),
respectively. The computation of ε̂_t and η̂_t from the Kalman smoother algorithm is
described in Durbin and Koopman (2001). These smoothed disturbance estimates are
useful for parameter estimation by maximum likelihood and for diagnostic checking.
See chapter seven in Durbin and Koopman (2001) for details.
The S+FinMetrics/SsfPack functions SsfCondDens and SsfMomentEst, with the
optional argument task="DSSMO", may be used to compute smoothed estimates of the
measurement and transition equation disturbances. The function SsfCondDens only
computes the smoothed disturbances whereas SsfMomentEst also computes the
associated variances.

3.6 Forecasting

The Kalman filter prediction equations (27) - (28) produce one-step ahead predictions of the state vector, a_{t+1} = E[α_{t+1}|Y_t], along with prediction variance matrices
P_{t+1}. In the Kalman filter recursions, if there are missing values in y_t then v_t = 0,
F_t^{−1} = 0 and K_t = 0. This allows out-of-sample forecasts of α_t and y_t to be computed
from the updating and prediction equations. Out-of-sample predictions, together with
associated mean square errors, can be computed from the Kalman filter prediction
equations by extending the data set y_1, . . . , y_n with a set of missing values. When
y_t is missing, the Kalman filter reduces to the prediction step described above. As a
result, a sequence of m missing values at the end of the sample will produce a set of
h-step ahead forecasts for h = 1, . . . , m.
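The missing-value device can be sketched for the local level model: filter the observed sample, then iterate the prediction step alone (v_t = 0, F_t^{−1} = 0, K_t = 0) for the h missing periods. Illustrative Python, not the SsfPack code:

```python
import numpy as np

def forecast_local_level(y, h, sigma_eta2, sigma_eps2, a1=0.0, P1=1e7):
    """h-step ahead forecasts of y by appending h missing values:
    in-sample steps use the full filter, out-of-sample steps only predict."""
    a, P = a1, P1
    for yt in y:                        # in-sample filtering
        F = P + sigma_eps2
        K = P / F
        a = a + K * (yt - a)            # a_{t+1} = a_{t|t}  (T_t = 1)
        P = P - K * P + sigma_eta2      # P_{t+1}
    fc_mean, fc_mse = [], []
    for k in range(h):                  # missing values: prediction step only
        fc_mean.append(a)               # forecast of y_{n+k+1}
        fc_mse.append(P + sigma_eps2)   # MSE adds the measurement noise
        P = P + sigma_eta2              # state uncertainty grows each period
    return np.array(fc_mean), np.array(fc_mse)
```

For the random walk state the forecast path is flat at a_{n+1}, while the forecast MSE grows by σ_η² per horizon step.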
Forecasts from state space models may be computed using the S+FinMetrics/SsfPack
function SsfMomentEst with the optional argument task="STPRED".

3.7 Simulation Smoothing

The simulation of state and response vectors α_t and y_t or disturbance vectors η_t
and ε_t conditional on the observations Y_n is called simulation smoothing. Simulation smoothing is useful for evaluating the appropriateness of a proposed state space
model and for the Bayesian analysis of state space models using importance sampling
and Markov chain Monte Carlo (MCMC) techniques. The S+FinMetrics/SsfPack
function SimSmoDraw generates random draws from the distributions of the state and
response variables or from the distributions of the state and response disturbances.


3.8 Prediction Error Decomposition of Log-Likelihood

The prediction error decomposition of the log-likelihood function for the unknown
parameters ϕ of a state space model may be conveniently computed using the output
of the Kalman filter

$$\ln L(\varphi|Y_n) = \sum_{t=1}^{n} \ln f(y_t|Y_{t-1}; \varphi) \qquad (31)$$

$$= -\frac{nN}{2}\ln(2\pi) - \frac{1}{2}\sum_{t=1}^{n}\left(\ln|F_t| + v_t' F_t^{-1} v_t\right)$$

where f(y_t|Y_{t−1}; ϕ) is a conditional Gaussian density implied by the state space
model (1) - (6). The vector of prediction errors v_t and prediction error variance
matrices F_t are computed from the Kalman filter recursions.
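As an illustration, the decomposition (31) for the local level model can be evaluated in a few lines of Python (a sketch; in S+FinMetrics this is the job of KalmanFil and SsfLogLik, with P1 = 10⁷ approximating a diffuse prior rather than using the exact diffuse treatment):

```python
import numpy as np

def loglik_local_level(params, y):
    """Gaussian log-likelihood (31) for the local level model via the
    prediction error decomposition; params = (sigma_eta2, sigma_eps2)."""
    sigma_eta2, sigma_eps2 = params
    a, P = 0.0, 1e7                     # approximate diffuse initialization
    ll = 0.0
    for yt in y:
        v = yt - a                      # prediction error v_t
        F = P + sigma_eps2              # prediction error variance F_t
        ll += -0.5 * (np.log(2 * np.pi) + np.log(F) + v * v / F)
        K = P / F                       # continue the filter recursion
        a = a + K * v
        P = P - K * P + sigma_eta2
    return ll
```

A function of this form can be handed to a numerical optimizer (e.g. scipy.optimize.minimize on the negative log-likelihood) to play the role that SsfFit plays in S+FinMetrics.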
The S+FinMetrics/SsfPack functions KalmanFil and SsfLogLik may be used
to evaluate the prediction error decomposition of the log-likelihood function for a
given set of parameters ϕ. The S+FinMetrics function SsfFit may be used to find
the maximum likelihood estimators of the unknown parameters ϕ, subject to box
constraints, using the S-PLUS function nlminb.⁴
3.8.1 Concentrated log-likelihood

In some models, e.g. linear regression models and ARMA models, it is possible to
solve explicitly for one scale factor and concentrate it out of the log-likelihood function
(31). The resulting log-likelihood function is called the concentrated log-likelihood or
profile log-likelihood and is denoted ln L_c(ϕ|Y_n). Following Koopman, Shephard and
Doornik (1999), let σ denote such a scale factor, and let

$$y_t = \theta_t + G_t^c \varepsilon_t^c$$

with ε_t^c ~ iid N(0, σ²I) denote the scaled version of the measurement equation (3).
The state space form (1) - (3) applies but with G_t = σG_t^c and H_t = σH_t^c. This
formulation implies that one non-zero element of G_t^c or H_t^c is kept fixed, usually at
unity, which reduces the dimension of the parameter vector ϕ by one. The solution
for σ² from (31) is given by

$$\tilde{\sigma}^2(\varphi) = \frac{1}{nN}\sum_{t=1}^{n} v_t' (F_t^c)^{-1} v_t$$

and the resulting concentrated log-likelihood function is

$$\ln L_c(\varphi|Y_n) = -\frac{nN}{2}\ln(2\pi) - \frac{nN}{2}\left(\ln \tilde{\sigma}^2(\varphi) + 1\right) - \frac{1}{2}\sum_{t=1}^{n} \ln|F_t^c| \qquad (32)$$

⁴ There are several optimization algorithms available in S-PLUS besides nlminb. Most notable
are the functions ms and optim (in the MASS library). SsfFit may be easily modified to use these
routines.


Function       Description                             Tasks
KalmanIni      Initialize Kalman filter                All
KalmanFil      Kalman filtering and likelihood eval    All
KalmanSmo      Kalman smoothing                        None
SsfCondDens    Conditional density/mean calculation    STSMO, DSSMO
SsfMomentEst   Moment estimation and smoothing         STFIL, STPRED, STSMO, DSSMO
SimSmoDraw     Simulation smoother draws               STSIM, DSSIM
SsfLogLik      Log-likelihood of state space model     None
SsfFit         Estimate state space model parameters   None

Table 4: General S+FinMetrics/SsfPack state space functions


Task     Description
KFLIK    Kalman filtering and log-likelihood evaluation
STFIL    State filtering
STPRED   State prediction
STSMO    State smoothing
DSSMO    Disturbance smoothing
STSIM    State simulation
DSSIM    Disturbance simulation

Table 5: Task argument to S+FinMetrics/SsfPack functions


For a given set of parameters ϕ, the concentrated log-likelihood may be evaluated
using the functions KalmanFil and SsfLogLik. Maximization of the concentrated log-likelihood function may be specified in the S+FinMetrics/SsfPack function SsfFit
by setting the optional argument conc=T.

3.9 Function Summary

The S+FinMetrics/SsfPack functions for computing the algorithms described above
are summarized in Table 4.
All of the functions except KalmanSmo have an optional argument task which
controls the task to be performed by the function. The values of the argument task
with brief descriptions are given in Table 5.

4 Applications in Macroeconomics and Finance

The following sections illustrate the use of the S+FinMetrics/SsfPack state space
modeling and analysis functions for some commonly used time series models in
macroeconomics and finance.


4.1 Recursive Least Squares Estimation and Tests for Structural Stability

Consider the CAPM regression

$$r_t = \alpha + \beta_M r_{M,t} + \xi_t, \quad \xi_t \sim GWN(0, \sigma^2)$$

where r_t denotes the return on an asset in excess of the risk free rate, and r_{M,t} denotes
the excess return on a market index. This is a linear regression model of the form

$$y_t = x_t'\beta + \xi_t, \quad \xi_t \sim GWN(0, \sigma^2),$$

where x_t = (1, r_{M,t})′ is a 2 × 1 vector of data, and β = (α, β_M)′ is a 2 × 1 fixed
parameter vector. A state space representation is

$$\begin{pmatrix} \beta_{t+1} \\ y_t \end{pmatrix} = \begin{pmatrix} I_k \\ x_t' \end{pmatrix} \beta_t + \begin{pmatrix} 0 \\ \xi_t \end{pmatrix} \qquad (33)$$

where the state vector satisfies

$$\beta_{t+1} = \beta_t = \beta$$

The state space system matrices are T_t = I_k, Z_t = x_t′, G_t = σ and H_t = 0. The coefficient vector β is fixed and unknown, so that the initial conditions are β_1 ~ N(0, κI_k)
where κ is large.
To illustrate the CAPM, we use the monthly excess return data on Microsoft and
the S&P 500 index over the period February, 1990 through December, 2000 in the
S+FinMetrics timeSeries excessReturns.ts:
> X.mat = cbind(1,as.matrix(seriesData(excessReturns.ts[,"SP500"])))
> msft.ret = excessReturns.ts[,"MSFT"]
The state space form for the CAPM as a linear regression with fixed regressors may
be created using the S+FinMetrics/SsfPack function GetSsfReg:
> ssf.reg = GetSsfReg(X.mat)
> ssf.reg
$mPhi:
     [,1] [,2]
[1,]    1    0
[2,]    0    1
[3,]    0    0
$mOmega:
     [,1] [,2] [,3]
[1,]    0    0    0
[2,]    0    0    0
[3,]    0    0    1
$mSigma:
     [,1] [,2]
[1,]   -1    0
[2,]    0   -1
[3,]    0    0
$mJPhi:
     [,1] [,2]
[1,]   -1   -1
[2,]   -1   -1
[3,]    1    2
$mX:
numeric matrix: 131 rows, 2 columns.
        SP500
  [1,] 1  0.0028027064
...
[131,] 1 -0.0007548068
By default, the variance of the regression error is set equal to unity and the state
variables are given a diffuse initialization.
4.1.1 Recursive least squares estimation

An advantage of analyzing the linear regression model in state space form is that recursive least squares (RLS) estimates of the regression coefficient vector β are readily
computed from the Kalman filter. The RLS estimates are based on estimating the
model

$$y_t = \beta_t' x_t + \xi_t, \quad t = 1, \ldots, n \qquad (34)$$

by least squares recursively for t = 3, . . . , n, giving n − 2 least squares (RLS) estimates
(β̂_3, . . . , β̂_n). If β is constant over time then the recursive estimates β̂_t should quickly
settle down near a common value. If some of the elements in β are not constant then
the corresponding RLS estimates should show instability. Hence, a simple graphical
technique for uncovering parameter instability is to plot the RLS estimates β̂_{i,t} (i =
1, 2) and look for instability in the plots.
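The equivalence between the filtered states of model (33) and RLS can be checked directly. The following Python sketch is illustrative (with the error variance normalized to one, and a large finite prior variance in place of the exact diffuse initialization); at each step t it reproduces least squares on the first t observations:

```python
import numpy as np

def rls_via_kalman(y, X, kappa=1e7):
    """Filtered state estimates a_{t|t} for the regression model (33);
    with a (near-)diffuse prior these are the RLS estimates of beta."""
    n, k = X.shape
    a = np.zeros(k)                    # a_1 = 0
    P = kappa * np.eye(k)              # P_1 = kappa * I_k, kappa large
    beta_hat = np.empty((n, k))
    for t in range(n):
        x = X[t]
        F = x @ P @ x + 1.0            # innovation variance, sigma^2 = 1
        K = P @ x / F                  # Kalman gain
        v = y[t] - x @ a               # innovation
        a = a + K * v                  # filtered estimate a_{t|t}
        P = P - np.outer(K, x @ P)     # P_{t|t}; T_t = I so prediction = update
        beta_hat[t] = a
    return beta_hat
```

The last row of beta_hat is the full-sample OLS estimate, matching the check against the S+FinMetrics OLS function below.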
The RLS estimates are simply the filtered state estimates from the model (33),
and may be computed using the S+FinMetrics/SsfPack function SsfMomentEst with
the optional argument task="STFIL":

> filteredEst.reg = SsfMomentEst(msft.ret,ssf.reg,task="STFIL")
> class(filteredEst.reg)
[1] "SsfMomentEst"
> names(filteredEst.reg)
[1] "state.moment"      "state.variance"    "response.moment"
[4] "response.variance" "task"              "positions"
The component state.moment contains the filtered state estimates a_{t|t} for t = 1, . . . , n,
which are the RLS estimates of the linear regression coefficients, and the component
response.moment contains the filtered response estimates y_{t|t}. The first column of the
component state.moment contains the RLS estimates of α, and the second column
contains the RLS estimates of β_M:⁵
> filteredEst.reg$state.moment
numeric matrix: 131 rows, 2 columns.
          state.1      state.2
  [1,] 0.05967933 0.0001672637
  [2,] 0.05044782 3.2939515044
  [3,] 0.07455946 1.1812959990
...
[131,] 0.01281735     1.525871
The last row contains the full sample least squares estimates of α and β_M, which may
be verified using the S+FinMetrics function OLS:
> ols.fit = OLS(MSFT~SP500,data=excessReturns.ts)
> coef(ols.fit)
(Intercept)    SP500
 0.01281735 1.525871
The RLS estimates may be visualized using the generic plot method for objects of
class SsfMomentEst:
> plot(filteredEst.reg,strip.text=c("alpha","beta","expected return"))
The resulting plot is illustrated in Figure 2. Notice that the RLS estimates of β_M
seem fairly constant whereas the RLS estimates of α do not.
Since the data passed to SsfMomentEst is a timeSeries object, the time and
date information is available in the positions component of the object filteredEst.reg.
Therefore, a time series plot of the RLS estimates may be computed using
> rls.coef = timeSeries(pos=filteredEst.reg$positions,
+ data=filteredEst.reg$state.moment)
> seriesPlot(rls.coef,one.plot=F,strip.text=c("alpha","beta"))
The resulting plot is illustrated in Figure 3.
⁵Since the Kalman filter is given an exact diffuse initialization, the RLS estimates are available
for t = 1, 2. The usual formula for computing RLS estimates starts at t = 3.

[Figure: "Filtered estimates: RLS" with panels alpha, beta, and expected return.]
Figure 2: RLS estimates of CAPM for Microsoft using the Kalman filter.
4.1.2 Tests for constant parameters
Formal tests for structural stability of the regression coefficients, such as the CUSUM
test of Brown, Durbin and Evans (1976), may be computed from the standardized
1-step ahead recursive residuals

w_t = v_t/√f_t = (y_t − β̂′_{t−1}x_t)/√f_t

where f_t is an estimate of the recursive error variance

σ²[1 + x′_t(X′_{t−1}X_{t−1})^{−1}x_t]

and X_t is the (t × k) matrix of observations on x_s using data from s = 1, . . . , t. These
standardized recursive residuals result as a by-product of the Kalman filter recursions
and may be extracted using the S+FinMetrics/SsfPack function KalmanFil:
> kf.reg = KalmanFil(msft.ret,ssf.reg)
> class(kf.reg)
[1] "KalmanFil"
> names(kf.reg)

[Figure: time series plots of the RLS estimates of alpha and beta, 1990-2001.]
Figure 3: RLS estimates of CAPM for Microsoft using the Kalman filter.
 [1] "mOut"         "innov"        "std.innov"    "mGain"
 [5] "loglike"      "loglike.conc" "dVar"         "mEst"
 [9] "mOffP"        "task"         "err"          "call"
[13] "positions"
> w.t = kf.reg$std.innov[-c(1,2)]  # first two innovations are equal to zero
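The recursive residual formula can likewise be checked directly from its definition. This Python/NumPy sketch (illustrative only; SsfPack obtains the same quantities as Kalman filter by-products) verifies that, for a stable model with unit error variance, the w_t behave like iid N(0, 1) draws:

```python
import numpy as np

def recursive_residuals(y, X):
    """One-step-ahead recursive residuals with sigma^2 = 1:
    w_t = (y_t - x_t' b_{t-1}) / sqrt(1 + x_t' (X'X)^{-1} x_t)."""
    n, k = X.shape
    w = []
    for t in range(k, n):               # first k obs just identify b
        b = np.linalg.lstsq(X[:t], y[:t], rcond=None)[0]
        XtX_inv = np.linalg.inv(X[:t].T @ X[:t])
        f_t = 1.0 + X[t] @ XtX_inv @ X[t]
        w.append((y[t] - X[t] @ b) / np.sqrt(f_t))
    return np.asarray(w)

rng = np.random.default_rng(7)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([0.01, 1.5]) + rng.normal(size=n)   # unit error variance
w = recursive_residuals(y, X)
```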

Diagnostic plots of the standardized innovations may be created using the plot
method for objects of class KalmanFil
> plot(kf.reg)
Make a plot selection (or 0 to exit):
1: plot: all
2: plot: innovations
3: plot: standardized innovations
4: plot: innovation histogram
5: plot: normal QQ-plot of innovations
6: plot: innovation ACF
Selection:

[Figure: "Standardized Prediction Errors" (innovations), 1990-2001.]
Figure 4: Standardized innovations, w_t = v_t/√f_t, computed from the RLS estimates
of the CAPM for Microsoft.
Selection 3 produces the graph shown in Figure 4.
The CUSUM test is based on the cumulated sum of the standardized recursive
residuals

CUSUM_t = Σ_{j=k+1}^{t} w_j/σ̂_w

where σ̂_w is the sample standard deviation of w_j and k denotes the number of estimated coefficients. Under the null hypothesis that β in (34) is constant, CUSUM_t
has mean zero and variance that is proportional to t − k − 1. Brown, Durbin and
Evans (1976) show that approximate 95% confidence bands for CUSUM_t are given
by the two lines which connect the points (k, ±0.948√(n − k − 1)) and
(n, ±3 · 0.948√(n − k − 1)). If CUSUM_t wanders outside of these bands, then the null of parameter stability may be rejected. The S-PLUS commands to compute CUSUM_t and
create the CUSUM plot are
> cusum.t = cumsum(w.t)/stdev(w.t)
> nobs = length(cusum.t)
> tmp = 0.948*sqrt(nobs)
> upper = seq(tmp,3*tmp,length=nobs)

[Figure: CUSUM statistic with 95% confidence bands, 1991-2001.]
Figure 5: CUSUM test for parameter constancy in CAPM regression for Microsoft.
> lower = seq(-tmp,-3*tmp,length=nobs)
> tmp.ts = timeSeries(pos=kf.reg$positions[-c(1,2)],
+ data=cbind(cusum.t,upper,lower))
> plot(tmp.ts,reference.grid=F,
+ plot.args=list(lty=c(1,2,2),col=c(1,2,2)))

The resulting CUSUM plot is illustrated in Figure 5. The CUSUM test indicates that
the CAPM for Microsoft has stable parameters.
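The same band construction can be written generically; here is a Python/NumPy sketch (a helper of our own, following the Brown-Durbin-Evans band formula quoted above) that returns CUSUM_t together with the two straight-line 95% bands:

```python
import numpy as np

def cusum_with_bands(w, k, n):
    """CUSUM_t = cumsum(w)/sd(w) for t = k+1..n, with 95% bands running
    from (k, +-0.948*sqrt(n-k-1)) to (n, +-3*0.948*sqrt(n-k-1))."""
    w = np.asarray(w)
    cusum = np.cumsum(w) / w.std(ddof=1)
    base = 0.948 * np.sqrt(n - k - 1)
    upper = np.linspace(base, 3.0 * base, len(w))
    return cusum, upper, -upper

# under parameter stability the statistic should usually stay inside the bands
rng = np.random.default_rng(3)
w = rng.normal(size=129)          # plays the role of w_3, ..., w_131
cusum, upper, lower = cusum_with_bands(w, k=2, n=131)
```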
As mentioned in Koopman, Shephard and Doornik (1999), the output of the
basic smoothing recursions can be used to construct t-tests for structural change in
regression models using the results of de Jong and Penzer (1998). In particular, the
null hypothesis β_i = β*_i with respect to the ith explanatory variable in the k-variable
regression

y_t = α + x_{i,t}β_i + · · · + ξ_t,  for t = 1, . . . , τ
y_t = α + x_{i,t}β*_i + · · · + ξ_t,  for t = τ + 1, . . . , n

against the alternative β_i ≠ β*_i can be tested using the standardized smoothing
residual

r_{i,τ}/√N_{ii,τ},  τ = 1, . . . , n − 1    (35)

where r_t and N_t are computed from the smoothing equation (30). Under the null
of no structural change, β_i = β*_i, (35) is distributed Student-t with n − k degrees of
freedom. The standardized smoothing residuals (35) for the CAPM regression may
be computed using the S+FinMetrics/SsfPack function KalmanSmo:
> ks.reg = KalmanSmo(kf.reg,ssf.reg)
> class(ks.reg)
[1] "KalmanSmo"
> names(ks.reg)
[1] "state.residuals"    "response.residuals" "state.variance"
[4] "response.variance"  "aux.residuals"      "scores"
[7] "positions"          "call"
The first two columns of the component aux.residuals contain the standardized
smoothing residuals (35) for the state variables and the last column contains the
standardized response smoothing residuals from (29):
> colIds(ks.reg$aux.residuals)
[1] "state.1" "state.2" "response"
The t-tests (35), illustrated in Figure 6, do not indicate any structural change in the
coefficients.
4.1.3 Least squares residuals
To compute the residuals based on the full sample least squares estimates using the
Kalman filter, the state space model must be modified so that the initial value of the
state vector is the least squares estimate and the variance of the initial vector is equal
to zero:
> ssf.reg$mSigma[3,] = filteredEst.reg$state.moment[131,]
> ssf.reg$mSigma[1,1]=ssf.reg$mSigma[2,2]=0
The Kalman filter applied to the modified state space form then gives the least squares
residuals:
> kf.ols = KalmanFil(msft.ret,ssf.reg)
> res.ols = kf.ols$innov

4.2 Estimation of CAPM with Time Varying Parameters

Consider estimating the CAPM with time varying coefficients (18)-(20) subject to
random walk evolution, using monthly data on Microsoft and the S&P 500 index over
the period February, 1990 through December, 2000 contained in the S+FinMetrics

[Figure: t-statistics t-alpha and t-beta, 1990-2001.]
Figure 6: t-tests for structural change in the parameters of the CAPM for Microsoft
based on standardized smoothing residuals.
timeSeries excessReturns.ts. Neumann (2002) surveys several estimation strategies for time varying parameter models and concludes that the state space model
with random walk specifications for the evolution of the time varying parameters
generally performs very well. The parameters of the model are the variances of
the innovations to the transition and measurement equations: σ² = (σ²_α, σ²_β, σ²_ν)′.
Since these variances must be positive, the log-likelihood is parameterized using φ =
(ln(σ²_α), ln(σ²_β), ln(σ²_ν))′, so that σ² = (exp(φ_1), exp(φ_2), exp(φ_3))′. Since the state
space form for the CAPM with time varying coefficients requires a data matrix X
containing the excess returns on the S&P 500 index, the function SsfFit requires as
input a function which takes both φ and X and returns the appropriate state space
form. One such function is
tvp.mod = function(parm,mX=NULL) {
parm = exp(parm)
ssf.tvp = GetSsfReg(mX=mX)

diag(ssf.tvp$mOmega) = parm
CheckSsf(ssf.tvp)
}
Starting values for φ are specified as
> tvp.start = c(0,0,0)
> names(tvp.start) = c("ln(s2.alpha)","ln(s2.beta)","ln(s2.y)")
The maximum likelihood estimates for φ based on SsfFit are computed using
> tvp.mle = SsfFit(tvp.start,msft.ret,"tvp.mod",mX=X.mat)
Iteration 0 : objective = 183.2072
...
Iteration 18 : objective = -124.4641
RELATIVE FUNCTION CONVERGENCE
> class(tvp.mle)
[1] "list"
> names(tvp.mle)
 [1] "parameters" "objective"  "message"    "grad.norm"
 [5] "iterations" "f.evals"    "g.evals"    "hessian"
 [9] "scale"      "aux"        "call"       "vcov"
The estimates of φ = (ln(σ²_α), ln(σ²_β), ln(σ²_ν))′ are in the component parameters:
> tvp.mle$parameters
ln(s2.alpha) ln(s2.beta)  ln(s2.y)
   -11.56685   -5.314272 -4.855237
and the estimated asymptotic variances (based on the inverse of the empirical hessian)
are in the component vcov. Estimates of the asymptotic standard errors are:
> sqrt(diag(tvp.mle$vcov))
[1] 1.8917852 2.2581790 0.1298533
The estimates for the standard deviations σ_α, σ_β and σ_ν are
> sigma.mle = sqrt(exp(tvp.mle$parameters))
> names(sigma.mle) = c("s.alpha","s.beta","s.y")
> sigma.mle
    s.alpha     s.beta        s.y
0.001950676 0.05234977 0.08996629
The asymptotic standard errors for the estimated standard deviations, from the delta
method, are
> dg = diag(exp(tvp.mle$parameters)/2)
> se.sigma = sqrt(diag(dg%*%tvp.mle$vcov%*%dg))
> names(se.sigma) = names(sigma.mle)
> se.sigma
      s.alpha      s.beta          s.y
5.332895e-006 0.004232753 0.0005200605
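The delta-method step is generic: for a transformation g with Jacobian G evaluated at the estimate, the asymptotic covariance of g(θ̂) is G V G′. A Python/NumPy sketch with hypothetical numbers (not the estimates above), using the mapping σ = exp(θ/2):

```python
import numpy as np

def delta_method_se(theta_hat, vcov, jac):
    """Asymptotic std errors of g(theta_hat): sqrt(diag(G V G'))."""
    G = jac(theta_hat)
    return np.sqrt(np.diag(G @ vcov @ G.T))

# transformation sigma = sqrt(exp(theta)) = exp(theta/2), Jacobian diag(sigma/2)
theta_hat = np.array([-11.6, -5.3, -4.9])    # hypothetical log-variances
vcov = np.diag([1.9, 2.3, 0.13]) ** 2        # hypothetical (diagonal) vcov
jac = lambda th: np.diag(np.exp(th / 2) / 2)
se = delta_method_se(theta_hat, vcov, jac)
```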

4.2.1 Filtered estimates

Given the estimated parameters, the filtered estimates of the time varying parameters
α_t and β_M,t may be computed using SsfMomentEst:
> filteredEst.tvp = SsfMomentEst(msft.ret,
+ tvp.mod(tvp.mle$parameters,mX=X.mat),task="STFIL")
> class(filteredEst.tvp)
[1] "SsfMomentEst"
> names(filteredEst.tvp)
[1] "state.moment"      "state.variance"    "response.moment"
[4] "response.variance" "task"              "positions"
The component state.moment contains the filtered state estimates, a_t|t, and the
component response.moment contains the filtered response estimates, y_t|t. The corresponding variances are in the components state.variance and response.variance,
respectively. The filtered moments, without standard error bands, may be visualized
using the plot method for objects of class SsfMomentEst:
> plot(filteredEst.tvp,strip.text=c("alpha(t)",
+ "beta(t)","Expected returns"),main="Filtered Estimates")
The resulting graph is illustrated in Figure 7. The filtered estimates of the parameters
from the CAPM with time varying parameters look remarkably like the RLS estimates
computed earlier.
Figure 8 shows time series plots of the filtered estimates along with approximate
95% confidence intervals.
4.2.2 Smoothed estimates

The smoothed estimates of the time varying parameters α_t and β_M,t as well as the
expected returns may be extracted using SsfCondDens:
> smoothedEst.tvp = SsfCondDens(msft.ret,
+ tvp.mod(tvp.mle$parameters,mX=X.mat),
+ task="STSMO")
> class(smoothedEst.tvp)
[1] "SsfCondDens"
[Figure: "Filtered Estimates" with panels alpha(t), beta(t), and Expected returns.]
Figure 7: Filtered estimates of CAPM for Microsoft with time varying parameters.
> names(smoothedEst.tvp)
[1] "state"    "response" "task"     "positions"

Notice that SsfCondDens does not compute estimated variances for the smoothed
state and response variables. The plot method may be used to visualize the smoothed
estimates:
> plot(smoothedEst.tvp,strip.text=c("alpha(t)",
+ "beta(t)","Expected returns"),main="Smoothed Estimates")
Figure 9 shows the resulting plot. Notice that the smoothed state estimates are quite
different from the filtered state estimates shown in Figure 7. If standard error bands
for the smoothed estimates are desired, then SsfMomentEst with task="STSMO" must
be used and the state variances are available in the component state.variance.

[Figure: filtered estimates of alpha(t) and beta(t) with approximate 95% confidence intervals, 1990-2001.]
Figure 8: Filtered estimates of alpha(t) and beta(t) with approximate 95% confidence
intervals.

4.3 Exact ARMA Model Estimation and the Beveridge-Nelson Decomposition of U.S. Real GDP
Consider the problem of decomposing the movements in the natural logarithm of U.S.
postwar quarterly real GDP into permanent and transitory (cyclical) components.
The levels and growth rate data, multiplied by 100, over the period 1947:I to 1998:II
are illustrated in Figure 10. Beveridge and Nelson (1980) proposed a definition for the
permanent component of an I(1) time series y_t with drift μ as the limiting forecast
as the horizon goes to infinity, adjusted for the mean rate of growth:

BN_t = lim_{h→∞} E_t[y_{t+h} − μh]

where E_t[·] denotes expectation conditional on information available at time t. The
transitory or cycle component is then defined as the gap between the present level of
the series and its long-run forecast:

c_t = y_t − BN_t

[Figure: "Smoothed Estimates" with panels alpha(t), beta(t), and Expected returns.]
Figure 9: Smoothed estimates of CAPM for Microsoft with time varying parameters.
This permanent-transitory decomposition is often referred to as the BN decomposition. In practice, the BN decomposition is obtained by fitting an ARMA(p, q) model
to Δy_t, and then computing BN_t and c_t from the fitted model.
As shown recently by Morley (2002), the BN decomposition may be easily computed using the Kalman filter by putting the forecasting model for the demeaned
growth rate Δy_t − μ in state space form. In particular, suppose Δy_t − μ is a linear
combination of the elements of the m × 1 state vector α_t:

Δy_t − μ = [z_1 z_2 · · · z_m]α_t

where z_i (i = 1, . . . , m) is the weight of the ith element of α_t in determining Δy_t − μ.
Suppose further that

α_{t+1} = Tα_t + η*_t,  η*_t ~ iid N(0, V),

such that all of the eigenvalues of T have modulus less than unity, and T is invertible.

[Figure: log postwar quarterly real GDP (levels) and quarterly growth rate, 1947-1998.]
Figure 10: U.S. postwar quarterly real GDP.


Then, Morley shows that

BN_t = y_t + [z_1 z_2 · · · z_m]T(I_m − T)^{−1}a_t|t    (36)
c_t = y_t − BN_t = −[z_1 z_2 · · · z_m]T(I_m − T)^{−1}a_t|t

where a_t|t denotes the filtered estimate of α_t.
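Equation (36) is a single matrix computation once the filtered states are in hand. A Python/NumPy sketch of the formula (toy AR(1) numbers chosen so the answer can be verified by hand):

```python
import numpy as np

def bn_from_states(y, a_filt, T, z):
    """Morley's formula: BN_t = y_t + z' T (I - T)^{-1} a_{t|t},
    cycle_t = y_t - BN_t.  y: (n,), a_filt: (n, m), T: (m, m), z: (m,)."""
    m = T.shape[0]
    M = T @ np.linalg.inv(np.eye(m) - T)
    adj = a_filt @ M.T @ z          # z' M a_{t|t} for each t
    bn = y + adj
    return bn, y - bn

# AR(1) growth with phi = 0.5: the adjustment is phi/(1-phi) = 1 times the state
T = np.array([[0.5]])
z = np.array([1.0])
a_filt = np.array([[1.0], [2.0]])
y = np.array([10.0, 11.0])
bn, cycle = bn_from_states(y, a_filt, T, z)
```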


4.3.1 Estimation of ARMA(2,2) model

To illustrate the process of constructing the BN decomposition for U.S. postwar
quarterly real GDP, we follow Morley, Nelson and Zivot (2002) (hereafter MNZ) and
consider fitting the ARMA(2,2) model

Δy_t − μ = φ_1(Δy_{t−1} − μ) + φ_2(Δy_{t−2} − μ) + ε_t + θ_1 ε_{t−1} + θ_2 ε_{t−2}
ε_t ~ iid N(0, σ²)

where y_t denotes the natural log of real GDP multiplied by 100. In S+FinMetrics/SsfPack,
the ARMA(p, q) model for a demeaned stationary variable y_t has a state space representation with transition and measurement equations

α_{t+1} = Tα_t + Hε_t,  ε_t ~ N(0, σ²)
y_t = Zα_t

and time invariant system matrices

    [ φ_1      1  0  · · ·  0 ]        [ 1       ]
    [ φ_2      0  1  · · ·  0 ]        [ θ_1     ]
T = [  ⋮              ⋱       ],   H = [  ⋮      ]
    [ φ_{m−1}  0  0  · · ·  1 ]        [ θ_{m−1} ]
    [ φ_m      0  0  · · ·  0 ]

Z = [ 1  0  · · ·  0 ]    (37)

where d, c and G of the state space form (1)-(3) are all zero and m = max(p, q + 1).
The state vector α_t has the form

      [ y_t                                                                        ]
      [ φ_2 y_{t−1} + · · · + φ_p y_{t−m+1} + θ_1 ε_t + · · · + θ_{m−1} ε_{t−m+2}  ]
α_t = [ φ_3 y_{t−1} + · · · + φ_p y_{t−m+2} + θ_2 ε_t + · · · + θ_{m−1} ε_{t−m+3}  ]    (38)
      [  ⋮                                                                         ]
      [ φ_m y_{t−1} + θ_{m−1} ε_t                                                  ]
The exact maximum likelihood estimates of the ARMA(2,2) parameters may be
computed using the S+FinMetrics/SsfPack functions GetSsfArma and SsfFit. The
function SsfFit requires as input a function which takes the unknown parameters
and produces the state space form for the ARMA(2,2). One such function is
arma22.mod = function(parm) {
phi.1 = parm[1]
phi.2 = parm[2]
theta.1 = parm[3]
theta.2 = parm[4]
sigma2 = exp(parm[5])
ssf.mod = GetSsfArma(ar=c(phi.1,phi.2),ma=c(theta.1,theta.2),
sigma=sqrt(sigma2))
CheckSsf(ssf.mod)
}
Notice that the function arma22.mod parameterizes the error variance as σ² = exp(γ),
−∞ < γ < ∞, to ensure that the estimated value of σ² is positive, and utilizes the
S+FinMetrics/SsfPack function GetSsfArma to create the state space form for the
ARMA(2,2) model. Starting values for the estimation (conditional MLEs computed
using the S-PLUS function arima.mle) are given by
> arma22.start = c(1.34,-0.70,-1.05,0.51,-0.08)
> names(arma22.start) = c("phi.1","phi.2","theta.1","theta.2","ln.sigma2")


The data used for the estimation is in the timeSeries lny.ts and the demeaned
first difference data is in the timeSeries dlny.ts.dm. The exact maximum likelihood
estimates for (φ_1, φ_2, θ_1, θ_2, γ)′ are computed using SsfFit⁶:
> arma22.mle = SsfFit(arma22.start,dlny.ts.dm,"arma22.mod")
Iteration 0 : objective = 284.6686
...
Iteration 27 : objective = 284.651
RELATIVE FUNCTION CONVERGENCE
> arma22.mle$parameters
   phi.1      phi.2   theta.1   theta.2   ln.sigma2
1.341793 -0.7058402 -1.054205 0.5187025 -0.06217022
> exp(arma22.mle$parameters["ln.sigma2"])
ln.sigma2
0.9397229
4.3.2 Residual diagnostics

Residual diagnostics to evaluate the fit of the state space model may be computed
using the S+FinMetrics/SsfPack function KalmanFil:
> kf.arma22 = KalmanFil(dlny.ts.dm,
+ arma22.mod(arma22.mle$parameters))
> class(kf.arma22)
[1] "KalmanFil"
> names(kf.arma22)
 [1] "mOut"         "innov"        "std.innov"    "mGain"
 [5] "loglike"      "loglike.conc" "dVar"         "mEst"
 [9] "mOffP"        "task"         "err"          "call"
[13] "positions"

Of particular interest are the components innov and std.innov, which contain the
innovations, v_t, and the standardized innovations, v_t/√F_t, respectively. Properties
of these components may be visualized using the plot method for objects of class
KalmanFil:
> plot(kf.arma22)
Make a plot selection (or 0 to exit):
1: plot: all
2: plot: innovations
⁶An estimate of the asymptotic covariance matrix is given in the vcov component of arma22.mle.
An estimate of the variance of σ² = exp(γ) may be computed using the delta method.

[Figure: sample autocorrelation function of the innovations, lags 1-20.]
Figure 11: ACF of innovations v_t from ARMA(2,2) model fit to real GDP growth.
3: plot: standardized innovations
4: plot: innovation histogram
5: plot: normal QQ-plot of innovations
6: plot: innovation ACF
Selection:
Selection 6 is shown in Figure 11.
4.3.3 BN decomposition

Given the maximum likelihood estimates, the filtered estimate of the state vector
may be computed using the S+FinMetrics/SsfPack function SsfMomentEst with the
optional argument task="STFIL":
> filteredEst.arma22 = SsfMomentEst(dlny.ts.dm,
+ arma22.mod(arma22.mle$parameters),task="STFIL")
> at.t = filteredEst.arma22$state.moment

[Figure: log real GDP with BN trend (top) and BN cycle (bottom), 1947-1998.]
Figure 12: BN decomposition for U.S. postwar quarterly real GDP.


The BN decomposition (36) may then be computed using

> ssf.arma22 = arma22.mod(arma22.mle$parameters)
> T.mat = ssf.arma22$mPhi[1:3,1:3]
> tmp = t(T.mat%*%solve((diag(3)-T.mat))%*%t(at.t))
> BN.t = lny[2:nobs,] + tmp[,1]
> c.t = lny[2:nobs,] - BN.t

Figure 12 illustrates the results of the BN decomposition for U.S. real GDP. The BN
trend follows the data very closely, and the BN cycle does not display much cyclical
behavior.

4.4 Unobserved Components Decompositions of Real GDP: Clark's Model
Harvey (1985) and Clark (1987) provide an alternative to the BN decomposition of
an I(1) time series with drift into permanent and transitory components based on
unobserved components structural time series models. For example, Clark's model

for the natural logarithm of postwar real GDP specifies the trend as a pure random
walk, and the cycle as a stationary AR(2) process:

y_t = τ_t + c_t
τ_t = μ + τ_{t−1} + v_t    (39)
c_t = φ_1 c_{t−1} + φ_2 c_{t−2} + w_t

where the roots of φ(z) = 1 − φ_1 z − φ_2 z² = 0 lie outside the complex unit circle. For
identification purposes, Clark assumes that the trend innovations and cycle innovations are uncorrelated and normally distributed:

[ v_t ]           ( [ 0 ]   [ σ²_v   0    ] )
[ w_t ]  ~ iid N  ( [ 0 ] , [ 0      σ²_w ] )
The Clark model may be put in state space form (8) with

          [ τ_{t+1} ]        [ μ ]        [ 1   0    0   ]
α_{t+1} = [ c_{t+1} ],   δ = [ 0 ],   Φ = [ 0   φ_1  φ_2 ]
          [ c_t     ]        [ 0 ]        [ 0   1    0   ]
                             [ 0 ]        [ 1   1    0   ]

       [ η_t ]         [ v_{t+1} ]        [ σ²_v  0     0  0 ]
u_t =  [ ξ_t ],  η_t = [ w_{t+1} ],   Ω = [ 0     σ²_w  0  0 ]
                       [ 0       ]        [ 0     0     0  0 ]
                                          [ 0     0     0  0 ]
Since the trend component is nonstationary, it is given a diffuse initialization. The
initial covariance matrix P* of the stationary cycle is determined from

vec(P*) = (I_4 − (F ⊗ F))^{−1} vec(V_w)

where

F = [ φ_1  φ_2 ],   V_w = [ σ²_w  0 ]
    [ 1    0   ]          [ 0     0 ]

The initial value parameter matrix (11) is then

    [ −1   0     0    ]
Σ = [  0   p_11  p_12 ]
    [  0   p_21  p_22 ]
    [  0   0     0    ]

where p_ij denotes the (i, j) element of P*.
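The vec formula for the initial covariance can be verified numerically: the matrix it returns must satisfy the stationarity condition P* = F P* F′ + V_w. A Python/NumPy check with values of the same magnitude as the Clark estimates (illustrative only):

```python
import numpy as np

# unconditional variance of the stationary AR(2) cycle in companion form:
# P = F P F' + V, solved via vec(P) = (I - F kron F)^{-1} vec(V)
phi1, phi2, sigma2_w = 1.53, -0.61, 0.38   # illustrative values
F = np.array([[phi1, phi2],
              [1.0,  0.0]])
V = np.array([[sigma2_w, 0.0],
              [0.0,      0.0]])
vecP = np.linalg.solve(np.eye(4) - np.kron(F, F), V.ravel())
P = vecP.reshape(2, 2)
```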

4.4.1 Estimation

The exact maximum likelihood estimates of the Clark model parameters, based on
the prediction error decomposition of the log-likelihood function, may be computed
using the S+FinMetrics/SsfPack function SsfFit. The function SsfFit requires as
input a function which takes the unknown parameters and produces the state space
form for the Clark model. One such function is
Clark.mod = function(parm) {
mu = parm[1]
phi1 = parm[2]
phi2 = parm[3]
sigma2.v = exp(parm[4])
sigma2.w = exp(parm[5])
bigV = diag(c(sigma2.v,sigma2.w))
Omega = matrix(0,4,4)
Omega[1:2,1:2] = bigV
a1 = matrix(0,3,1)
# solve for initial variance of stationary part
bigF = matrix(c(phi1,1,phi2,0),2,2)
vecV = c(sigma2.w,0,0,0)
vecP = solve(diag(4)-kronecker(bigF,bigF))%*%vecV
P.ar2 = matrix(vecP,2,2)
Sigma = matrix(0,4,3)
Sigma[1,1] = -1
Sigma[2:3,2:3] = P.ar2
# create state space list
ssf.mod = list(mDelta=c(mu,0,0,0),
mPhi=rbind(c(1,0,0),c(0,phi1,phi2),c(0,1,0),c(1,1,0)),
mOmega=Omega,
mSigma = Sigma)
CheckSsf(ssf.mod)
}
Notice that the state variances are parameterized as σ²_v = exp(γ_v) and σ²_w = exp(γ_w),
−∞ < γ_v, γ_w < ∞, to ensure positive estimates. Starting values for the parameters
are based on values near the estimates of the Clark model from MNZ:
> Clark.start=c(0.81,1.53,-0.61,-0.74,-0.96)
> names(Clark.start) = c("mu","phi.1","phi.2",
+ "ln.sigma2.v","ln.sigma2.w")
The data used for the estimation is in the timeSeries lny.ts, and is the same data
used to compute the BN decomposition earlier. The maximum likelihood estimates
and asymptotic standard errors of the parameters (μ, φ_1, φ_2, ln σ²_v, ln σ²_w)′ using
SsfFit are⁷
> Clark.mle = SsfFit(Clark.start,lny.ts,"Clark.mod")
Iteration 0 : objective = 287.5252
...
Iteration 19 : objective = 287.5243
RELATIVE FUNCTION CONVERGENCE
> Clark.mle$parameters
       mu    phi.1      phi.2 ln.sigma2.v ln.sigma2.w
0.8119143 1.530305 -0.6097297  -0.7440539   -0.956487
> sqrt(diag(Clark.mle$vcov))
[1] 0.05005272 0.10194946 0.11463581 0.30134294 0.42546854
> sqrt(exp(Clark.mle$parameters[4:5]))
ln.sigma2.v ln.sigma2.w
  0.6893357   0.6198712
The maximum likelihood estimates for the Clark model parameters are almost identical to those found by MNZ.⁸
4.4.2 Filtered estimates

The filtered estimates of the trend, τ_t|t, and cycle, c_t|t, given the estimated parameters may be computed using the function SsfMomentEst with the optional argument
task="STFIL":
> filteredEst.Clark = SsfMomentEst(lny.ts,
+ Clark.mod(Clark.mle$parameters),task="STFIL")
> names(filteredEst.Clark)
[1] "state.moment"      "state.variance"    "response.moment"
[4] "response.variance" "task"              "positions"
The filtered trend estimate is in the first column of the state.moment component
and the filtered cycle is in the second column. The plot method gives time plots of
the columns of the state.moment component and response.moment component:
⁷In the estimation, no restrictions were imposed on the AR(2) parameters φ_1 and φ_2 to ensure
that the cycle is stationary. The function SsfFit uses the S-PLUS optimization algorithm nlminb,
which performs minimization of a function subject to box constraints. Box constraints on φ_1 and
φ_2 may be used to constrain their estimated values to be near the appropriate stationary region.
⁸MNZ estimate the Clark model in GAUSS using the prediction error decomposition with the
variance of the initial state for the nonstationary component set to a large positive number. The
state space representation of the Clark model in S+FinMetrics utilizes an exact initialization of the
Kalman filter.

[Figure: "Filtered estimates" with panels trend(t), cycle(t), cycle(t-1), and y(t).]
Figure 13: Filtered estimates of the state and response variables from the Clark model
for U.S. real GDP.
> plot(filteredEst.Clark,
+ strip.text=c("trend(t)","cycle(t)","cycle(t-1)","y(t)"),
+ main="Filtered estimates")
These plots are illustrated in Figure 13.
Since the data passed to SsfMomentEst is an object of class timeSeries, the
positions component of the object filteredEst.Clark contains the time and date
positions of the data. The filtered trend and cycle estimates as timeSeries objects
may be computed using
> trend.filter = timeSeries(data=filteredEst.Clark$state.moment[,1],
+ positions=filteredEst.Clark$positions)
> cycle.filter = timeSeries(data=filteredEst.Clark$state.moment[,2],
+ positions=filteredEst.Clark$positions)

Figure 14 shows the filtered estimates of trend and cycle from the Clark model based
on the timeSeries trend.filter and cycle.filter.
The filtered trend estimate is fairly smooth and is quite similar to a linear trend. The
filtered cycle estimate is large in amplitude and has a period of about eight years.

[Figure: log real GDP with filtered trend (top) and filtered cycle (bottom) from the Clark model, 1947-1998.]
Figure 14: Filtered estimates of the trend and cycle from the Clark model fit to U.S.
real GDP.
In comparison to the BN decomposition, the trend-cycle decomposition based on the
Clark model gives a much smoother trend and longer cycle, and attributes a greater
amount of the variability of log output to the transitory cycle.
4.4.3 Smoothed estimates

The smoothed estimates of the trend, τ_t|n, and cycle, c_t|n, along with estimated standard errors, given the estimated parameters, may be computed using the function
SsfMomentEst with the optional argument task="STSMO":
> smoothedEst.Clark = SsfMomentEst(lny.ts,
+ Clark.mod(Clark.mle$parameters),task="STSMO")
The smoothed cycle estimates with 95% standard error bands are illustrated in Figure
15.
4.4.4 Specification test

The Clark model assumes that the unobserved trend evolves as a random walk with
drift. If the variance of the drift innovation, σ²_v, is zero then the trend becomes

[Figure: smoothed cycle estimate with 95% error bands, 1947-1998.]
Figure 15: Smoothed cycle estimate, c_t|n, with 95% error bands from the Clark model
for U.S. real GDP.
deterministic. A number of statistics have been proposed to test the null hypothesis that σ²_v = 0; see Harvey (2001) for a review. One of the most popular is the
KPSS statistic described in Kwiatkowski et al. (1992). The S+FinMetrics function
stationaryTest implements the KPSS test. Applying this test to the U.S. real GDP
data gives
> stationaryTest(lny.ts,trend="ct")
Test for Stationarity: KPSS Test
Null Hypothesis: stationary around a linear trend
Test Statistics:
      lny
 0.7804**
* : significant at 5% level
** : significant at 1% level
Total Observ.: 206
Bandwidth : 4
The KPSS test rejects the null that σ²_v = 0 at the 1% level.

4.5 Unobserved Components Decompositions of Real GDP: MNZ Model

Recently, Morley, Nelson and Zivot (2002) have shown that the apparent difference
between the BN decomposition and the Clark model trend-cycle decomposition is due to
the assumption of independence between trend and cycle innovations in the Clark
model. In particular, they show that the independence assumption is actually an
overidentifying restriction in the Clark model, and once this assumption is relaxed to
allow correlated components the difference between the decompositions disappears.
The MNZ model is simply Clark's model (39) where the trend and cycle innovations are allowed to be correlated with correlation coefficient ρ_vw:

[ v_t ]           ( [ 0 ]   [ σ²_v           ρ_vw σ_v σ_w ] )
[ w_t ]  ~ iid N  ( [ 0 ] , [ ρ_vw σ_v σ_w   σ²_w         ] )

The new state space system matrix becomes

    [ σ²_v          ρ_vw σ_v σ_w  0  0 ]
Ω = [ ρ_vw σ_v σ_w  σ²_w          0  0 ]
    [ 0             0             0  0 ]
    [ 0             0             0  0 ]
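The only change to the Clark setup is the off-diagonal term in the innovation covariance, which stays positive definite for |ρ_vw| < 1. A small Python/NumPy check with illustrative values near the MNZ estimates:

```python
import numpy as np

def trend_cycle_cov(sigma_v, sigma_w, rho_vw):
    """2x2 covariance of (v_t, w_t) with correlation rho_vw."""
    c = rho_vw * sigma_v * sigma_w
    return np.array([[sigma_v ** 2, c],
                     [c, sigma_w ** 2]])

V = trend_cycle_cov(1.24, 0.75, -0.91)
L = np.linalg.cholesky(V)   # succeeds because |rho_vw| < 1
```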
4.5.1 Estimation

An S-PLUS function, to be passed to SsfFit, to compute the new state space form is
MNZ.mod = function(parm) {
delta = parm[1]
phi1 = parm[2]
phi2 = parm[3]
sigma.v = exp(parm[4])
sigma.w = exp(parm[5])
rho.vw = parm[6]
sigma.vw = sigma.v*sigma.w*rho.vw
bigV = matrix(c(sigma.v^2,sigma.vw,sigma.vw,sigma.w^2),2,2)
Omega = matrix(0,4,4)
Omega[1:2,1:2] = bigV
a1 = matrix(0,3,1)
# solve for initial variance of stationary part

bigF = matrix(c(phi1,1,phi2,0),2,2)
vecV = c(sigma.w^2,0,0,0)
vecP = solve(diag(4)-kronecker(bigF,bigF))%*%vecV
P.ar2 = matrix(vecP,2,2)
Sigma = matrix(0,4,3)
Sigma[1,1] = -1
Sigma[2:3,2:3] = P.ar2
ssf.mod= list(mDelta=c(delta,0,0,0),
mPhi=rbind(c(1,0,0),c(0,phi1,phi2),c(0,1,0),c(1,1,0)),
mOmega=Omega,
mSigma = Sigma)
CheckSsf(ssf.mod)
}
Notice that no restrictions are placed on the correlation coefficient ρ_vw in the function
MNZ.mod. A box constraint −0.999 < ρ_vw < 0.999 will be placed on ρ_vw during the
estimation. Starting values for the parameters are based on values near the estimates
of the Clark model from MNZ:
> MNZ.start=c(0.81,1.34,-0.70,0.21,-0.30,-0.9)
> names(MNZ.start) = c("mu","phi.1","phi.2",
+ "ln.sigma.v","ln.sigma.w","rho")
Box constraints on the AR parameters φ_1 and φ_2, to encourage stationarity, and the
correlation coefficient ρ_vw, to enforce validity, are specified using
> low.vals = c(0,0,-2,-Inf,-Inf,-0.999)
> up.vals = c(2,2,2,Inf,Inf,0.999)
The maximum likelihood estimates and asymptotic standard errors of the parameters
(μ, φ_1, φ_2, ln σ_v, ln σ_w, ρ_vw)′ using SsfFit are
> MNZ.mle = SsfFit(MNZ.start,lny.ts,"MNZ.mod",
+ lower=low.vals,upper=up.vals)
Iteration 0 : objective = 285.6087
...
Iteration 29 : objective = 285.5696
RELATIVE FUNCTION CONVERGENCE
> MNZ.mle$parameters
       mu    phi.1      phi.2 ln.sigma.v ln.sigma.w        rho
0.8156033 1.341859 -0.7059066  0.2125406 -0.2894671 -0.9062222
> sqrt(diag(MNZ.mle$vcov))
[1] 0.08648871 0.14473281 0.14928410 0.13088972 0.38325780
[6] 0.12658984
> # estimated standard deviations

[Figure: log real GDP with filtered trend (top) and filtered cycle (bottom) from the MNZ model, 1947-1998.]
Figure 16: Filtered estimates from Clark model with correlated components for U.S.
real GDP.
> exp(MNZ.mle$parameters[4:5])
ln.sigma.v ln.sigma.w
1.236816 0.7486624
These estimates are almost identical to those reported by MNZ. Notice that the
estimated value of ρ_vw is −0.91 and that the estimated standard deviation of the
trend innovation is much larger than the estimated standard deviation of the cycle
innovation.
4.5.2 Filtered estimates

The filtered estimates of the trend, τ_t|t, and cycle, c_t|t, given the estimated parameters
are computed using
> filteredEst.MNZ = SsfMomentEst(lny.ts,
+ MNZ.mod(MNZ.mle$parameters),task="STFIL")
and are illustrated in Figure 16. Notice that the filtered estimates of trend and cycle
are identical to those determined from the BN decomposition.

4.6 Quasi-Maximum Likelihood Estimation of Stochastic Volatility Model

Let rt denote the continuously compounded return on an asset between times t - 1 and
t. Following Harvey, Ruiz and Shephard (1994), hereafter HRS, a simple stochastic
volatility (SV) model has the form

    rt = σt εt,  εt ~ iid N(0, 1)
    ht = ln σt² = γ + φ ht-1 + ηt,  ηt ~ iid N(0, ση²)               (40)
    E[εt ηt] = 0

where 0 < φ < 1. Defining yt = ln rt², and noting that E[ln εt²] = -1.27 and
var(ln εt²) = π²/2, an unobserved components state space representation for yt has
the form

    yt = -1.27 + ht + ξt,  ξt ~ iid (0, π²/2)
    ht = γ + φ ht-1 + ηt,  ηt ~ iid N(0, ση²)
    E[ξt ηt] = 0

If ξt were iid Gaussian then the parameters θ = (γ, φ, ση²)′ of the SV model could
be efficiently estimated by maximizing the prediction error decomposition of the log-
likelihood function constructed from the Kalman filter recursions. However, since
ξt = ln εt² + 1.27 is not normally distributed, the Kalman filter only provides minimum
mean squared error linear estimators of the state and future observations. Nonetheless,
HRS point out that even though the exact log-likelihood cannot be computed from
the prediction error decomposition based on the Kalman filter, consistent estimates
of θ = (γ, φ, ση²)′ can still be obtained by treating ξt as though it were iid N(0, π²/2)
and maximizing the quasi log-likelihood function constructed from the prediction
error decomposition.
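The quasi log-likelihood is just the Gaussian prediction error decomposition applied to the linearized model. As an illustrative sketch (in Python rather than S-PLUS; the function name and inputs are our own, not part of S+FinMetrics), a scalar Kalman filter that evaluates it is:

```python
import numpy as np

def sv_quasi_loglik(y, g, phi, sig2_eta):
    """Quasi log-likelihood of the linearized SV model
    y_t = -1.27 + h_t + xi_t,  h_t = g + phi*h_{t-1} + eta_t,
    treating xi_t as N(0, pi^2/2) in a scalar Kalman filter."""
    H = np.pi**2 / 2                  # variance assigned to xi_t
    a = g / (1 - phi)                 # stationary mean of h_t
    P = sig2_eta / (1 - phi**2)       # stationary variance of h_t
    ll = 0.0
    for yt in y:
        v = yt + 1.27 - a             # one-step prediction error
        F = P + H                     # prediction error variance
        ll -= 0.5 * (np.log(2 * np.pi * F) + v**2 / F)
        a_f = a + (P / F) * v         # filtered state mean
        P_f = P - P**2 / F            # filtered state variance
        a = g + phi * a_f             # one-step-ahead prediction
        P = phi**2 * P_f + sig2_eta
    return ll
```

Maximizing this function over (γ, φ, ση²) gives the QML idea in miniature; SsfFit performs the analogous computation through the SsfPack recursions.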
The state space representation of the SV model has system matrices

    δ = [   γ   ]    Φ = [ φ ]    Ω = [ ση²    0   ]
        [ -1.27 ]        [ 1 ]        [  0   π²/2  ]

Assuming that |φ| < 1, the initial value matrix has the form

    Σ = [ ση²/(1 - φ²) ]
        [  γ/(1 - φ)   ]

If φ = 1 then use the diffuse initialization

    Σ = [ -1 ]
        [  0 ]

A function to compute the state space form of the SV model given a vector of
parameters, assuming |φ| < 1, is given below. (A logistic transformation could be
used to impose the restriction 0 < φ < 1.)

sv.mod = function(parm) {
g = parm[1]
sigma2.n = exp(parm[2])
phi = parm[3]
ssf.mod = list(mDelta=c(g,-1.27),
mPhi=as.matrix(c(phi,1)),
mOmega=matrix(c(sigma2.n,0,0,0.5*pi^2),2,2),
mSigma=as.matrix(c((sigma2.n/(1-phi^2)),g/(1-phi))))
CheckSsf(ssf.mod)
}
4.6.1 Simulated Data

T = 1000 observations are simulated from the SV model using the parameters γ =
-0.3556, ση² = 0.0312 and φ = 0.9646:
> parm.hrs = c(-0.3556,log(0.0312),0.9646)
> nobs = 1000
> set.seed(179)
> e = rnorm(nobs)
> xi = log(e^2)+1.27
> eta = rnorm(nobs,sd=sqrt(0.0312))
> sv.sim = SsfSim(sv.mod(parm.hrs),
+ mRan=cbind(eta,xi),a1=(-0.3556/(1-0.9646)))

The simulated squared returns, rt², and latent squared volatility, σt², are shown in
Figure 17.
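The same simulation can be sketched directly in Python (an illustrative translation of the SsfSim call above; the variable names are our own):

```python
import numpy as np

# true parameters from HRS: g = -0.3556, phi = 0.9646, sig2_eta = 0.0312
rng = np.random.default_rng(179)
nobs, g, phi, sig2_eta = 1000, -0.3556, 0.9646, 0.0312

h = np.empty(nobs)
h_prev = g / (1 - phi)            # initialize at the stationary mean
for t in range(nobs):
    h_prev = g + phi * h_prev + rng.normal(scale=np.sqrt(sig2_eta))
    h[t] = h_prev                 # h_t = ln(sigma_t^2)

eps = rng.normal(size=nobs)
r = np.exp(h / 2) * eps           # returns r_t = sigma_t * eps_t
y = np.log(r**2)                  # observed log squared returns
```

Note the measurement identity y = h + ln ε², so the observed series is the latent log-volatility plus a log χ1² disturbance.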
4.6.2 Estimation

Starting values for the estimation of θ = (γ, ση², φ)′ are values close to the true values:
sv.start = c(-0.3,log(0.03),0.9)
names(sv.start) = c("g","ln.sigma2","phi")
Using SsfFit, the quasi-maximum likelihood (QML) estimates are
> low.vals = c(-Inf,-Inf,-0.999)
> up.vals = c(Inf,Inf,0.999)
> sv.mle = SsfFit(sv.start,sv.sim[,2],"sv.mod",
+ lower=low.vals,upper=up.vals)
Iteration 0 : objective = 5147.579
...
Iteration 13 : objective = 2218.26
RELATIVE FUNCTION CONVERGENCE

Figure 17: Simulated data from SV model (squared returns and volatility).


> sv.mle$parameters
        g ln.sigma2       phi
-0.481554 -3.561523 0.9508832
> exp(sv.mle$parameters[2])
   sigma2
0.02839554
These values are fairly close to the true values.9
4.6.3 Filtered and Smoothed Estimates

The filtered and smoothed estimates of log-volatility and volatility are computed
using
9 Currently, SsfFit does not compute the sandwich covariance matrix estimator required for
the quasi-MLEs.

Figure 18: Actual and estimated (filtered and smoothed) log-volatility from simulated
stochastic volatility model.
# compute filtered and smoothed estimates of log volatility
filteredEst.sv = SsfMomentEst(sv.sim[,2],
sv.mod(sv.mle$parameters),task="STFIL")
filtered.var = exp(filteredEst.sv$state.moment)
smoothedEst.sv = SsfMomentEst(sv.sim[,2],
sv.mod(sv.mle$parameters),task="STSMO")
smoothed.var = exp(smoothedEst.sv$state.moment)
These estimates are illustrated in Figure 18.

4.7 Monte Carlo Maximum Likelihood Estimation of Stochastic Volatility Model

The QML estimation of the SV model (40) described in the previous section is consistent but inefficient. Sandmann and Koopman (1998) consider maximization of
the exact log-likelihood function of the SV model using importance sampling-based
Monte Carlo methods to approximate the exact log-likelihood function based on the
log χ1² density of ξt = ln εt². To illustrate the method, let θ = (γ, φ, ση²)′ denote the
parameters of the SV model and let yt = ln rt² denote the log squared returns. The
exact log-likelihood of the non-Gaussian SV model may be expressed as

    ln L(y|θ) = ln LG(y|θ) + ln EG[ ptrue(ξ|θ) / pG(ξ|θ) ]           (41)

where ln LG(y|θ) is the log-likelihood function of the approximating Gaussian model,
ptrue(ξ|θ) is the true density function of the measurement disturbances (log χ1²
density), pG(ξ|θ) is the Gaussian (importance) density of the measurement disturbances
of the approximating model, and EG denotes expectation with respect to the Gaussian
(importance) density pG(ξ|θ). Equation (41) shows that the non-Gaussian log-likelihood
may be decomposed into a Gaussian log-likelihood plus a correction term
to account for departures from normality. Sandmann and Koopman propose to estimate
θ by maximizing the unbiased estimate of (41):

    ln L̂(θ) = ln LG(y|θ) + ln w̄ + sw²/(2N w̄²)                       (42)

The terms w̄ and sw² are computed using the following algorithm:
1. Choose a Gaussian approximating model from which a feasible sampling scheme
   can be deduced based on the importance density pG(ξ|y, θ).

2. Compute ln LG(y|θ) and ξ̂ = E[ξ|y, θ] for the approximating model via the
   Kalman smoother.

3. Generate a sample ξ(i) = (ξ1(i), . . . , ξT(i))′ from the importance density
   pG(ξ|y, θ) using the simulation smoother.

4. Construct an antithetic sample: ξ̃(i) = 2ξ̂ - ξ(i).

5. Compute

       w(i) = [w(ξ(i)) + w(ξ̃(i))]/2,   w(ξ) = ptrue(ξ|θ)/pG(ξ|θ)

6. Repeat steps 3-5 until N samples are drawn.

7. Calculate w̄ and sw² as the sample mean and variance of w(i), i = 1, . . . , N.
The Monte Carlo maximum likelihood (MCML) estimates of θ are those which maximize
(42). Starting values for the optimization may be based on the QML estimates.
The choice of N in step 6 determines the accuracy of (42). Sandmann and Koopman
find that N = 5 is sufficient.
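Steps 5-7 reduce to forming density ratios and a bias-corrected mean. The following Python fragment is our own illustration (for simplicity the draws are taken from the unconditional Gaussian density rather than from the simulation smoother) of the correction term ln w̄ + sw²/(2N w̄²) in (42):

```python
import numpy as np

def log_chisq1_logpdf(x):
    # log density of xi = ln(eps^2) for eps ~ N(0,1) (log chi-square(1))
    return 0.5 * x - 0.5 * np.exp(x) - 0.5 * np.log(2.0 * np.pi)

def mc_correction(xi_draws):
    """Correction term ln(w_bar) + s_w^2/(2 N w_bar^2) in (42);
    each row of xi_draws is one draw of (xi_1, ..., xi_T)."""
    N, _ = xi_draws.shape
    mu, s2 = -1.27, np.pi**2 / 2          # moments of ln(eps^2)
    log_pG = -0.5 * np.log(2 * np.pi * s2) - 0.5 * (xi_draws - mu)**2 / s2
    logw = (log_chisq1_logpdf(xi_draws) - log_pG).sum(axis=1)
    c = logw.max()                        # rescale weights for stability
    w = np.exp(logw - c)                  # w^(i) up to the factor e^c
    w_bar, s2_w = w.mean(), w.var(ddof=1)
    # the variance ratio is scale-invariant, so only ln(w_bar) needs e^c back
    return c + np.log(w_bar) + s2_w / (2 * N * w_bar**2)
```

The rescaling by the largest log-weight avoids overflow when T is large; the ratio sw²/w̄² is unaffected by the common scale factor.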
The following S-PLUS function implements the above algorithm for constructing
the non-Gaussian log-likelihood function (42).
Insert S-PLUS function code here
Add comments to code

4.8 Affine Term Structure Models

Traditionally the study of the term structure of interest rates focuses on either the
cross sectional aspect of the yield curve, or the time series properties of the interest
rate. Recently, researchers have utilized state space models and Kalman filtering
techniques to estimate affine term structure models, by combining both time series
and cross sectional data. For simple models, the state space representation is often
linear and Gaussian and analysis is straightforward. For more general models, the
unobserved state variables generally influence the variance of the transition equation
errors, making the errors non-Gaussian. In these cases, non-standard state space
methods are necessary. This section illustrates how some common affine term structure
models may be expressed in state space form, estimated and evaluated using the
Kalman filter and smoothing algorithms in S+FinMetrics/SsfPack. The notation
and examples are taken from Duan and Simonato (1999).
4.8.1 Brief Review of Affine Term Structure Models

Duffie and Kan (1996) show that many of the theoretical term structure models,
such as the Vasicek (1977) Ornstein-Uhlenbeck model, the Cox, Ingersoll and Ross
(1985) square root diffusion model and its multi-factor extensions (for example, see
Chen and Scott, 1993), the Longstaff and Schwartz (1992) two-factor model, and the
Chen (1996) three factor model, are special cases of the class of affine term structure
models. The class of affine term structure models is one in which the yields to maturity
on default-free pure discount bonds and the instantaneous interest rate are affine
(constant plus linear term) functions of m unobservable state variables Xt, which are
assumed to follow an affine diffusion process

    dXt = U(Xt; Θ)dt + Σ(Xt; Θ)dWt,                                  (43)

where Wt is an m × 1 vector of independent Wiener processes; Θ is a p × 1 vector
of model specific parameters; U(·) and Σ(·) are affine functions in Xt such that (43)
has a unique solution. The functions U(·) and Σ(·) can be obtained as the solution
to some ordinary differential equations. In this class of models, the price at time t of
a default-free pure discount bond with time to maturity τ has the form

    Pt(Xt; Θ, τ) = A(τ, Θ) exp{-B(τ, Θ)′Xt}                          (44)

where A(τ, Θ) is a scalar function and B(τ, Θ) is an m × 1 vector function. The
time-t continuously compounded yield-to-maturity on a pure discount bond with time
to maturity τ is defined as

    Yt(Xt; Θ, τ) = -ln Pt(Xt; Θ, τ)/τ,                               (45)

which, using (44), has the form

    Yt(Xt; Θ, τ) = -ln A(τ, Θ)/τ + B(τ, Θ)′Xt/τ                      (46)

4.8.2 State Space Representation

Although (46) dictates an exact relationship between the yield Yt(τ) and the state
variables Xt, in econometric estimation it is usually treated as an approximation,
giving rise to the measurement equation

    Yt(τ) = -ln A(τ, Θ)/τ + B(τ, Θ)′Xt/τ + εt(τ),                    (47)

where εt(τ) is a normally distributed measurement error with zero mean and variance
σε²(τ). For any time to maturity τ, the above equation can be naturally treated as
the measurement equation of a state space model, with Xt being the unobserved
state variable. To complete the state space representation, the transition equation
for Xt over a discrete time interval h needs to be specified. Defining Φ(Xt; Θ, h) =
var(Xt+h|Xt), Duan and Simonato (1999) show that the transition equation for Xt
has the form

    Xt+h = a(Θ, h) + b(Θ, h)Xt + Φ(Xt; Θ, h)^{1/2} ηt+h             (48)

where ηt ~ iid N(0, Im), and Φ(Xt; Θ, h)^{1/2} represents the Cholesky factorization
of Φ(Xt; Θ, h).
In general, the state space model defined by (47) and (48) is non-Gaussian because
the conditional variance of Xt+h in (48) depends on Xt. Only for the special case in
which Σ(·) in (43) is not a function of Xt is the conditional variance term Φ(Xt; Θ, h)
also not a function of Xt and the state space model Gaussian.10 See Lund (1997) for
a detailed discussion of the econometric issues associated with estimating affine term
structure models using the Kalman filter. Although the quasi-maximum likelihood
estimator of the model parameters based on the modified Kalman filter is inconsistent,
the Monte Carlo results in Duan and Simonato (1999) and de Jong (2000) show
that the bias is very small even for the moderately small samples likely to be
encountered in practice.
4.8.3 Data for Examples

The data used for the following examples are in the S+FinMetrics timeSeries
fama.bliss, and consist of four monthly yield series over the period April, 1964 to
December, 1997 for U.S. Treasury debt securities with maturities of 3, 6, 12 and
60 months, respectively. All rates are continuously compounded rates expressed on
an annual basis. These rates are displayed in Figure 19.
4.8.4 Estimation of Vasicek's Model

In the Vasicek (1977) model, the state variable driving the term structure is the
instantaneous (short) interest rate, rt, which is assumed to follow the mean-reverting
10 To estimate the non-Gaussian state space model, Duan and Simonato modify the Kalman
filter recursions to incorporate the presence of Φ(Xt; Θ, h) in the conditional variance of ηt+h. The
S+FinMetrics/SsfPack functions KalmanFil and SsfLoglike were modified accordingly.

Figure 19: Monthly yields on U.S. Treasury debt securities (3M, 6M, 12M and 60M
maturities).


diffusion process

    drt = κ(θ - rt)dt + σdWt,  κ ≥ 0, σ > 0                          (49)

where Wt is a scalar Wiener process, θ is the long-run average of the short rate, κ
is a speed of adjustment parameter, and σ is the volatility of rt. Duan and Simonato
(1999) show that the functions A(·), B(·), a(·), b(·) and Φ(·) have the form

    ln A(τ, Θ) = γ(B(τ, Θ) - τ) - σ²B²(τ, Θ)/(4κ),   B(τ, Θ) = (1 - exp(-κτ))/κ

    γ = θ + σλ/κ - σ²/(2κ²)

    a(Θ, h) = θ(1 - exp(-κh)),   b(Θ, h) = exp(-κh)

    Φ(Xt; Θ, h) = Φ(Θ, h) = (σ²/(2κ))(1 - exp(-2κh))

where λ is the risk premium parameter. The model parameters are Θ = (κ, θ, σ, λ)′.
Notice that for the Vasicek model, Φ(Xt; Θ, h) = Φ(Θ, h), so that the state variable

rt does not influence the conditional variance of the transition equation errors, and
the state space model is Gaussian.
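As a numerical check on the affine yield formula (our own sketch; the parameter values are arbitrary, not estimates), the Vasicek yield Yt(τ) = -ln A(τ, Θ)/τ + B(τ, Θ)rt/τ can be computed directly from the functions above:

```python
import numpy as np

def vasicek_yield(r, tau, kappa, theta, sigma, lam):
    """Vasicek yield Y_t(tau) = -ln A(tau)/tau + B(tau)*r/tau,
    built from the gamma, B, and ln A formulas of the model."""
    gamma = theta + sigma * lam / kappa - sigma**2 / (2 * kappa**2)
    B = (1 - np.exp(-kappa * tau)) / kappa
    lnA = gamma * (B - tau) - sigma**2 * B**2 / (4 * kappa)
    return -lnA / tau + B * r / tau

# illustrative parameter values (not the estimates reported later)
tau = np.array([0.25, 0.5, 1.0, 5.0])
y = vasicek_yield(0.05, tau, kappa=0.12, theta=0.06, sigma=0.02, lam=0.3)
```

Because B(τ, Θ) is bounded while τ grows, the yield converges to γ as τ → ∞, so γ is the model-implied infinite-maturity yield.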
The state space representation of the Vasicek model has system matrices

    δ = [ a(Θ, h)         ]    Φ = [ b(Θ, h)      ]
        [ -ln A(Θ, τ1)/τ1 ]        [ B(Θ, τ1)/τ1  ]                  (50)
        [ ...             ]        [ ...          ]
        [ -ln A(Θ, τ4)/τ4 ]        [ B(Θ, τ4)/τ4  ]

    Ω = diag(Φ(Θ, h), σε1², . . . , σε4²)

and initial value matrix

    Σ = [ σ²/(2κ) ]
        [    θ    ]

based on the stationary distribution of the short rate in (49). Notice that this is a
multivariate state space model.
A function to compute the state space form of the Vasicek model for a given set
of parameters Θ, yield maturities τ1, . . . , τN, and sampling frequency h is
vasicek.ssf = function(param, tau=NULL, freq=1/52)
{
  ## 1. Check for valid input.
  if (length(param) < 5)
    stop("param must have length greater than 4.")
  N = length(param) - 4
  if (length(tau) != N)
    stop("Length of tau is inconsistent with param.")
  ## 2. Extract parameters and impose constraints.
  Kappa = exp(param[1])        ## Kappa > 0
  Theta = exp(param[2])        ## Theta > 0
  Sigma = exp(param[3])        ## Sigma > 0
  Lamda = param[4]
  Var   = exp(param[1:N+4])    ## meas eqn stdevs
  ## 3. Compute Gamma, A, and B.
  Gamma = Theta + Sigma * Lamda / Kappa - Sigma^2 / (2 * Kappa^2)
  B = (1 - exp(-Kappa * tau)) / Kappa
  lnA = Gamma * (B - tau) - Sigma^2 * B^2 / (4 * Kappa)
  ## 4. Compute a, b, and Phi.
  a = Theta * (1 - exp(-Kappa * freq))
  b = exp(-Kappa * freq)
  Phi = (Sigma^2 / (2 * Kappa)) * (1 - exp(-2 * Kappa * freq))
  ## 5. Compute the state space form.
  mDelta = matrix(c(a, -lnA/tau), ncol=1)
  mPhi = matrix(c(b, B/tau), ncol=1)
  mOmega = diag(c(Phi, Var^2))
  ## 6. Duan and Simonato used this initial setting.
  A0 = Theta
  P0 = Sigma * Sigma / (2*Kappa)
  mSigma = matrix(c(P0, A0), ncol=1)
  ## 7. Return state space form.
  ssf.mod = list(mDelta=mDelta, mPhi=mPhi, mOmega=mOmega, mSigma=mSigma)
  CheckSsf(ssf.mod)
}
Notice that the exponential transformation is used for those parameters that should
be positive. Since the data in fama.bliss are monthly, the discrete sampling interval,
h, is set to 1/12 in the estimation below.
Starting values for the parameters and the maturity specification for the yields
are
> start.vasicek = c(log(0.1), log(0.06), log(0.02), 0.3, log(0.003),
+   log(0.001), log(0.003), log(0.01))
> names(start.vasicek) = c("ln.kappa","ln.theta","ln.sigma","lamda",
+   "ln.sig.3M","ln.sig.6M","ln.sig.12M","ln.sig.60M")
> start.tau = c(0.25, 0.5, 1, 5)

The maximum likelihood estimates of the parameters (ln κ, ln θ, ln σ, λ, ln σε1,
ln σε2, ln σε3, ln σε4)′ using SsfFit are
> ans.vasicek = SsfFit(start.vasicek, fama.bliss, vasicek.ssf,
+ tau=start.tau, freq=1/12, trace=T,
+ control=nlminb.control(abs.tol=1e-6, rel.tol=1e-6,
+ x.tol=1e-6, eval.max=1000, iter.max=500))
Iteration 0 : objective = -6347.453
...
Iteration 37 : objective = -6378.45
RELATIVE FUNCTION CONVERGENCE
> ssf.fit = vasicek.ssf(ans.vasicek$parameters,tau=start.tau,freq=1/12)
> ans.vasicek$parameters[-4] = exp(ans.vasicek$parameters[-4])
> names(ans.vasicek$parameters) = c("kappa","theta","sigma","lamda",
+ "sig.3M","sig.6M","sig.12M","sig.60M")
> dg = ans.vasicek$parameters; dg[4] = 1
> ans.vasicek$vcov = diag(dg) %*% ans.vasicek$vcov %*% diag(dg)
> summary(ans.vasicek,digits=4)
Log-likelihood: -6378.45
1620 observations

Parameters:
            Value      Std. Error  t value
kappa     0.11883000   0.0135799   8.7504
theta     0.05738660   0.0267813   2.1428
sigma     0.02138420   0.0007906  27.0489
lamda     0.34769500   0.1493230   2.3285
sig.3M    0.00283512   0.0001011  28.0453
sig.6M    0.00002155   0.0005905   0.0365
sig.12M   0.00301624   0.0001083  27.8416
sig.60M   0.00990032   0.0003718  26.6254

These results are almost identical to those reported by Duan and Simonato (1999).
All parameters are significant at the 5% level except the measurement equation
standard deviation for the six month maturity yield. The largest measurement equation
error standard deviation is for the sixty month yield, indicating that the model has
the poorest fit for this yield. The short rate is mean reverting since κ̂ > 0, and the
long run average short rate is θ̂ = 5.74% per year. The estimated risk premium
parameter, λ̂ = 0.3477, is positive, indicating a positive risk premium for bond prices.
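A quick arithmetic check (our own, not reported by Duan and Simonato) is the infinite-maturity yield implied by these point estimates, γ = θ + σλ/κ - σ²/(2κ²):

```python
# point estimates from the summary table above
kappa, theta, sigma, lam = 0.118830, 0.0573866, 0.0213842, 0.347695

# model-implied yield on a very long maturity discount bond
gamma = theta + sigma * lam / kappa - sigma**2 / (2 * kappa**2)
print(round(gamma, 4))  # prints 0.1038, i.e. about 10.4% per year
```

The positive risk premium term σλ/κ pushes the implied long yield well above the long-run average short rate θ̂.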

The smoothed estimates of the short rate and the yields are computed using
SsfCondDens with task="STSMO"
> m.s = SsfCondDens(fama.bliss, ssf.fit)
The smoothed yield estimates are displayed in Figure 20. The model fits well on the
short end of the yield curve but poorly on the long end. As another check on the
fit of the model, the presence of serial correlation in the standardized innovations is
tested using the Box-Ljung modified Q-statistic (computed using the S+FinMetrics
function autocorTest)
> autocorTest(KalmanFil(fama.bliss,ssf.fit)$std.innov)
Test for Autocorrelation: Ljung-Box
Null Hypothesis: no autocorrelation
Test Statistics:
                3M       6M      12M       60M
Test Stat  80.9471 282.4316 756.3304 3911.7736
p.value     0.0000   0.0000   0.0000    0.0000

Dist. under Null: chi-square with 26 degrees of freedom
Total Observ.: 405
The null of no serial correlation is easily rejected for the standardized innovations of
all yields.

Figure 20: Smoothed estimates of yields from Vasicek term structure model (actual
vs. smoothed for the 3M, 6M, 12M and 60M maturities).
4.8.5 Estimation of Chen and Scott's Two Factor Model

In the two factor model of Chen and Scott (1993), the unobserved state variables are
assumed to follow the mean-reverting square root diffusions

    dXi,t = κi(θi - Xi,t)dt + σi√Xi,t dWi,t,  i = 1, 2

where W1,t and W2,t are independent Wiener processes. As in the Vasicek model,
κi is a mean-reversion parameter and θi measures the long-run average level of the
state variable. However, unlike the Vasicek model, the levels of the state variables
influence their volatility. For this model, Duan and Simonato (1999) show that the
functions A(·) and B(·) have the form

    A(τ, Θ) = A1(τ, Θ) · A2(τ, Θ)
    B(τ, Θ) = [B1(τ, Θ), B2(τ, Θ)]′

where

    Ai(τ, Θ) = [ 2γi exp((κi + λi + γi)τ/2) / ((κi + λi + γi)(exp(γiτ) - 1) + 2γi) ]^(2κiθi/σi²)

    Bi(τ, Θ) = 2(exp(γiτ) - 1) / ((κi + λi + γi)(exp(γiτ) - 1) + 2γi)

    γi = √((κi + λi)² + 2σi²)
for i = 1, 2. They also show that the functions a(·), b(·) and Φ(·) are given by

    a(Θ, h) = [ θ1(1 - exp(-κ1h)) ]                                  (51)
              [ θ2(1 - exp(-κ2h)) ]

    b(Θ, h) = [ exp(-κ1h)      0      ]                              (52)
              [     0      exp(-κ2h)  ]

    Φ(Xt; Θ, h) = [ Φ1(X1,t; Θ, h)        0        ]                 (53)
                  [       0        Φ2(X2,t; Θ, h)  ]

with

    Φi(Xi,t; Θ, h) = Xi,t (σi²/κi)[exp(-κih) - exp(-2κih)]
                     + θi (σi²/(2κi))[1 - exp(-κih)]²

for i = 1, 2. The risk premium parameters are λ1 and λ2, and λi < 0 implies a
positive premium in bond prices for factor i = 1, 2. The model parameters are Θ =
(κ1, κ2, θ1, θ2, σ1, σ2, λ1, λ2)′. Notice that for the Chen-Scott model, the state
variables Xt influence the conditional variance of the transition equation errors and
so the state space model is non-Gaussian.
The state space representation of the Chen-Scott model has system matrices of
the form (50), where a(Θ, h) is replaced by (51), b(Θ, h) is replaced by (52), and the
Φ(Θ, h) term in Ω is replaced by the 2 × 1 vector of non-zero (diagonal) elements
of (53).
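To see the state dependence concretely, the conditional variance Φi(Xi,t; Θ, h) can be evaluated with a short sketch (our own illustration; the parameter values are arbitrary):

```python
import numpy as np

def cs_cond_var(x, kappa, theta, sigma, h):
    """Conditional variance Phi_i(X_it; h) of a square-root factor
    over a discrete interval h, as in (53): affine in the level x."""
    slope = sigma**2 / kappa * (np.exp(-kappa * h) - np.exp(-2 * kappa * h))
    const = theta * sigma**2 / (2 * kappa) * (1 - np.exp(-kappa * h))**2
    return x * slope + const

# the variance is affine and increasing in the current factor level
v_lo = cs_cond_var(0.02, kappa=0.9, theta=0.07, sigma=0.075, h=1/12)
v_hi = cs_cond_var(0.08, kappa=0.9, theta=0.07, sigma=0.075, h=1/12)
```

In the chen.ssf function below, const corresponds to the Phi component of mOmega and slope to the new mAffine component.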
A function to compute the state space form of the Chen-Scott model for a given
set of parameters Θ, yield maturities τ1, . . . , τN, and sampling frequency h is
chen.ssf = function(param, tau=NULL, freq=1/52, n.factors=2)
{
  ## 1. Check for valid input.
  if (length(param) <= n.factors*4)
    stop("param must have length greater than ", 4*n.factors)
  N = length(param) - 4*n.factors
  if (length(tau) != N)
    stop("Length of tau is inconsistent with param.")
  ## 2. Extract parameters and impose constraints.
  Kappa = exp(param[1:n.factors])               ## Kappa > 0
  Theta = exp(param[n.factors+1:n.factors])     ## Theta > 0
  Sigma = exp(param[2*n.factors+1:n.factors])   ## Sigma > 0
  Lambda = param[3*n.factors+1:n.factors]       ## Lambda < 0
  Var = exp(param[1:N+n.factors*4])
  ## 3. Compute Gamma, A, and B.
  Gamma = sqrt((Kappa + Lambda)^2 + 2 * Sigma^2)
  A = B = matrix(0, N, n.factors)
  for (i in 1:n.factors) {
    denom = (Kappa[i] + Lambda[i] + Gamma[i])*
      (exp(Gamma[i]*tau) - 1) + 2 * Gamma[i]
    A[,i] = (2*Gamma[i]*exp((Kappa[i]+Lambda[i]+Gamma[i])
      *tau/2)/denom)^(2*Kappa[i]*Theta[i]/(Sigma[i]^2))
    B[,i] = 2*(exp(Gamma[i]*tau)-1)/denom
  }
  AA = rep(1, length(tau))
  for (i in 1:n.factors) {
    AA = AA * A[,i]
  }
  ## 4. Compute a, b, Phi, and the new mAffine term.
  a = Theta * (1 - exp(-Kappa * freq))
  b = diag(exp(-Kappa * freq))
  Phi = Theta * (Sigma^2) / (2 * Kappa) * (1 - exp(-Kappa*freq))^2
  mAffine = Sigma^2 / Kappa * (exp(-Kappa*freq) - exp(-2*Kappa*freq))
  ## 5. Compute the state space form.
  mDelta = matrix(c(a, -log(AA)/tau), ncol=1)
  mPhi = rbind(b, B/tau)
  mOmega = diag(c(Phi, Var^2))
  ## 6. Duan and Simonato used this initial setting.
  A0 = Theta
  P0 = diag(Sigma^2 * Theta / (2*Kappa))
  mSigma = rbind(P0, A0)
  ## 7. Return state space form.
  ssf.mod = list(mDelta=mDelta, mPhi=mPhi, mOmega=mOmega,
                 mSigma=mSigma, mAffine=mAffine)
  CheckSsf(ssf.mod)
}
Since the state variables influence the conditional variance of the state equation
errors, the S+FinMetrics/SsfPack functions KalmanFil and SsfLoglike were modified
to handle this. As a result, a new component mAffine is allowed in the state space
representation to specify how the state variables influence the conditional variance.
Starting values for the parameters and the maturity specification for the yields
are
> start.chen = c(log(0.3), log(0.2), log(0.06), log(0.03), log(0.04),
+   log(0.02), -0.5, -0.5, log(0.001), log(0.001), log(0.001), log(0.001))
> names(start.chen) = c(paste("kappa", 1:2, sep="."),
+   paste("theta", 1:2, sep="."),
+   paste("sigma", 1:2, sep="."),
+   paste("lamda", 1:2, sep="."),
+   paste("s",1:4,sep=""))
> start.tau = c(0.25, 0.5, 1, 5)
The maximum likelihood estimates of the parameters (ln κ1, ln κ2, ln θ1, ln θ2,
ln σ1, ln σ2, λ1, λ2, ln σε1, . . . , ln σε4)′ using SsfFitFast are
> ans.chen = SsfFitFast(start.chen, fama.bliss, chen.ssf,
+ n.factors=2, tau=start.tau, freq=1/12, trace=T,
+ control=nlminb.control(abs.tol=1e-6, rel.tol=1e-6, x.tol=1e-6))
Iteration 0 : objective = 295973.9
...
Iteration 52 : objective = -7161.388
RELATIVE FUNCTION CONVERGENCE
> ssf.fit = chen.ssf(ans.chen$parameters, tau=start.tau, freq=1/12, n.factors=2)
> ans.chen$parameters[-c(7,8)] = exp(ans.chen$parameters[-c(7,8)])
> tmp = ans.chen$parameters
> tmp[7:8] = 1
> ans.chen$vcov = diag(tmp) %*% ans.chen$vcov %*% diag(tmp)
> summary(ans.chen,digits=4)
RELATIVE FUNCTION CONVERGENCE
Log-likelihood: -7161.39
1620 observations
Parameters:
            Value       Std. Error  t value
kappa.1    0.89165700  0.06854230  13.0089
kappa.2    0.01265400  0.01250360   1.0120
theta.1    0.06972790  0.00482700  14.4454
theta.2    0.03805760  0.04609060   0.8257
sigma.1    0.07521550  0.00292212  25.7400
sigma.2    0.27804600  0.01321010  21.0480
lamda.1   -0.55659100  0.05947970  -9.3577
lamda.2   -1.51277000  0.22298200  -6.7843
s1         0.00237142  0.00008490  27.9304
s2         0.00001733  0.00010150   0.1707
s3         0.00218652  0.00007874  27.7682
s4         0.00056721  0.00029906   1.8966

Figure 21: Smoothed estimates of state variables from the Chen-Scott two factor
model.

The estimation results for the Chen-Scott model are different from those reported by
Duan and Simonato (1999), but are qualitatively similar. The first factor appears
to be strongly mean reverting, but the small estimated value of κ2 suggests that
the second factor is close to being non-stationary. Both risk premium parameters
are estimated to be negative, indicating a positive risk premium on bonds. The
magnitudes of the estimated measurement error standard deviations indicate that
the model fits best at the six month and twelve month maturities and worst at the
three month maturity.
The smoothed estimates of the factors are given in Figure 21. The first factor
may be interpreted as a short rate. It is not clear how to interpret the second factor.
The smoothed yield estimates are displayed in Figure 22. Overall, the Chen-Scott
two factor model fits the term structure much better than the one factor Vasicek
model.


Figure 22: Smoothed estimates of yields from Chen-Scott two factor model (actual
vs. smoothed for the 3M, 6M, 12M and 60M maturities).

References
[1] Bomhoff, E. J. (1994). Financial Forecasting for Business and Economics.
Academic Press, San Diego.
[2] Carmona, R. (2001). Statistical Analysis of Financial Data, with an Implementation in Splus. Textbook under review.
[3] Chan, N.H. (2002). Time Series: Applications to Finance. John Wiley & Sons,
New York.
[4] Chen, R.-R., and L. Scott (1993). Maximum Likelihood Estimation for a
Multifactor Equilibrium Model of the Term Structure of Interest Rates, Journal
of Fixed Income, 3, 14-31.
[5] Clark, P. K. (1987). The Cyclical Component of U.S. Economic Activity,
The Quarterly Journal of Economics, 102, 797-814.
[6] Cox, J. C., J. E. Ingersoll, and S. A. Ross (1985). A Theory of the
Term Structure of Interest Rates, Econometrica, 53, 385-408.
[7] Dai, Q. and K. J. Singleton (2000). Specification Analysis of Affine Term
Structure Models, Journal of Finance, 55 (5), 1943-1978.
[8] de Jong, P. and J. Penzer (1998). Diagnosing Shocks in Time Series,
Journal of the American Statistical Association, 93, 796-806.
[9] de Jong, F. (2000). Time Series and Cross Section Information in Affine Term
Structure Models, Journal of Business and Economic Statistics, 18 (3), 300-314.
[10] Durbin, J. and S.J. Koopman (2001). Time Series Analysis by State Space
Methods. Oxford University Press, Oxford.
[12] Duan, J.-C. and J.-G. Simonato (1999). Estimating and Testing Exponential-Affine
Term Structure Models by Kalman Filter, Review of Quantitative Finance and
Accounting, 13, 111-135.
[13] Duffie, D. and R. Kan (1996). A Yield-Factor Model of Interest Rates,
Mathematical Finance, 6 (4), 379-406.
[14] Hamilton, J.D. (1994a). Time Series Analysis. Princeton University Press,
Princeton.

[15] Hamilton, J.D. (1994b). State-Space Models, in Handbook of Econometrics,
ed. R.F. Engle and D.L. McFadden, vol. 4, chapter 50, 3014-3077.
[16] Harvey, A. C. (1985). Trends and Cycles in Macroeconomic Time Series,
Journal of Business and Economic Statistics, 3, 216-27.
[17] Harvey, A. C. (1985). Applications of the Kalman Filter in Econometrics,
in Advances in Econometrics, Fifth World Congress of the Econometric Society,
ed. T. Bewley, vol. 1., 285-313.
[18] Harvey, A. C. (1989). Forecasting, Structural Time Series Models and the
Kalman Filter. Cambridge University Press, Cambridge.
[19] Harvey, A.C. (1993). Time Series Models, 2nd edition. MIT Press, Cambridge.
[20] Harvey, A.C. (2001). Testing in Unobserved Components Models, Journal
of Forecasting, 20, 1-19.
[21] Harvey, A.C. and S.-J. Koopman (1992). Diagnostic Checking of
Unobserved-Component Time Series Models, Journal of Business and Economic Statistics, 10, 377-389.
[22] Harvey, A.C., E. Ruiz and N. Shephard (1994). Multivariate Stochastic
Variance Models, Review of Economic Studies, 61, 247-264.
[23] Kim, C.-J., and C.R. Nelson (1999). State-Space Models with Regime Switching. MIT Press, Cambridge.
[24] Kwiatkowski, D., P.C.B. Phillips, P. Schmidt, and Y. Shin (1992).
Testing the Null Hypothesis of Stationarity Against the Alternative of a Unit
Root: How Sure Are We that Economic Time Series Have a Unit Root?, Journal
of Econometrics, 54, 159-178.
[25] Koopman, S.J., N. Shephard, and J.A. Doornik (1999). Statistical Algorithms for State Space Models Using SsfPack 2.2, Econometrics Journal, 2,
113-166.
[26] Koopman, S.J., N. Shephard, and J.A. Doornik (2001). SsfPack 3.0beta:
Statistical Algorithms for Models in State Space, unpublished manuscript, Free
University, Amsterdam.
[27] Lund, J. (1997). Econometric Analysis of Continuous-Time Arbitrage-Free
Models of the Term Structure of Interest Rates, unpublished manuscript, Department of Finance, The Aarhus School of Business, Denmark.


[28] Engle, R.F. and M.W. Watson (1987). The Kalman Filter: Applications to
Forecasting and Rational Expectations Models, in T.F. Bewley (ed.) Advances
in Econometrics: Fifth World Congress, Volume I. Cambridge University Press,
Cambridge.
[29] Morley, J.C. (2002). A State-Space Approach to Calculating the Beveridge-Nelson Decomposition, Economics Letters, 75, 123-127.
[30] Morley, J.C., C.R. Nelson and E. Zivot (2002). Why are Beveridge-Nelson
and Unobserved Components Decompositions of GDP So Different?,
forthcoming in Review of Economics and Statistics.
[31] Neumann, T. (2002). Time-Varying Coefficient Models: A Comparison of Alternative Estimation Strategies, unpublished manuscript, DZ Bank, Germany.
[32] Sandmann, G. and S.J. Koopman (1998). Estimation of Stochastic Volatility Models via Monte Carlo Maximum Likelihood, Journal of Econometrics, 87,
271-301.
[33] Shumway, R.H. and D.S. Stoffer (2000). Time Series Analysis and Its
Applications. Springer-Verlag, New York.
[34] Vasicek, O. A. (1977). An Equilibrium Characterization of the Term Structure, Journal of Financial Economics, 5, 177-188.
[35] West, M. and J. Harrison (1997). Bayesian Forecasting and Dynamic Models, 2nd edition. Springer-Verlag, New York.
[36] Zivot, E. and J. Wang (2003). Modeling Financial Time Series with S-PLUS.
Springer-Verlag, New York.
