
MARKOV CHAIN MONTE CARLO AND SEQUENTIAL MONTE CARLO

METHODS IN STOCHASTIC VOLATILITY MODELS


Hedibert Freitas Lopes
Associate Professor of Econometrics and Statistics
The University of Chicago Booth School of Business
5807 South Woodlawn Avenue, Chicago, IL 60637
http://faculty.chicagobooth.edu/hedibert.lopes/research
hlopes@chicagobooth.edu
III Summer School
Technical University of Catalonia
Barcelona, Spain
Course Schedule
Day Weekday Date Time Topic
1 Monday June 22nd 9:00-11:00 Bayesian inference
Monday June 22nd 11:30-13:30 Bayesian model criticism
2 Tuesday June 23rd 9:00-11:00 Monte Carlo methods
Tuesday June 23rd 11:30-13:30 Markov chain Monte Carlo methods
3 Friday June 26th 9:00-11:00 Dynamic linear models
Friday June 26th 11:30-13:30 Nonnormal, nonlinear dynamic models
4 Monday June 29th 9:00-11:00 Stochastic volatility models
Monday June 29th 11:30-13:30 Sequential Monte Carlo (SMC)
5 Wednesday July 1st 9:00-11:00 SMC with parameter learning
Wednesday July 1st 11:30-13:30 SMC in stochastic volatility models
Lecture 1 - Bayesian inference
Basic concepts such as prior, likelihood, posterior and predictive distributions are introduced.
The intuitive sequential nature of Bayesian learning is illustrated via conjugate families of distributions.
The lecture ends with Bayesian inference for normal linear models.
Lecture 2 - Bayesian model criticism
The lecture starts by introducing prior and posterior model probabilities and the Bayes factor, key ingredients in assessing model uncertainty.
Model selection as a decision problem will lead to alternative criteria, such as the posterior predictive
criterion (PPC).
The deviance information criterion (DIC) and cross-validatory measures are also presented.
Lecture 3 - Monte Carlo (MC) methods
Monte Carlo (MC) integration schemes are introduced to approximate both posterior expectations and predictive ordinates.
Similarly, acceptance-rejection and sampling importance resampling (SIR) algorithms, as well as
other iterative resampling schemes, are introduced as tools to approximately sample from posterior
distributions.
Lecture 4 - Markov chain Monte Carlo (MCMC) methods
We start by reviewing basic Markov chain concepts and results that will facilitate the introduction
of more general MCMC schemes.
A few such concepts are irreducibility, reversibility, ergodicity, limiting distributions and effective sample size.
The two most famous MCMC schemes are then introduced: the Gibbs sampler and the Metropolis-
Hastings algorithm.
Lecture 5 - Dynamic linear models (DLM)
The linear model of the first lecture is extended to accommodate time-varying regression coefficients, which is one of many instances in the class of dynamic linear models.
Sequential learning is provided in closed form by the Kalman filter and smoother.
Inference for fixed parameters, such as observational and evolutional variances, is performed by integrating out the states.
Lecture 6 - Nonnormal, nonlinear dynamic models:
the forward filtering, backward sampling (FFBS) algorithm and other MCMC schemes.
Lecture 7 - Stochastic volatility models as dynamic models
FFBS and other MCMC schemes are adapted to stochastic volatility models.
In particular, we will compare single-move and block-move MCMC schemes, where the block-move schemes are based on a mixture-of-normals approximation to the distribution of a log chi-square random variable with one degree of freedom.
Lecture 8 - Sequential Monte Carlo (SMC) methods
The lecture starts with standard particle filters, such as the sequential importance sampling with resampling (SISR) filter and the auxiliary particle filter (APF), to sequentially learn about states in nonnormal and nonlinear dynamic models.
SMC methods assist inference for fixed parameters, such as observational and evolutional variances.
Stochastic volatility models are used to illustrate the filters.
Lecture 9 - SMC with parameter learning
The APF is coupled with a mixture approximation to the posterior distribution of fixed parameters to produce online estimates of both states and parameters in dynamic systems.
The lecture concentrates on the particle learning (PL) filter, and several simulated exercises are performed to compare PL to SISR, APF and MCMC alternatives.
Lecture 10 - SMC in stochastic volatility models
PL and other filters are compared based on stochastic volatility models.
The lecture also lists the current research agenda linking Markov chain Monte Carlo methods, sequential Monte Carlo methods and general stochastic volatility models.
LECTURE 1
BAYESIAN INFERENCE
Example i. Sequential learning
John claims some discomfort and goes to the doctor.
The doctor believes John may have the disease A.
θ = 1: John has disease A; θ = 0: he does not.
The doctor claims, based on his expertise (H), that
P(θ = 1 | H) = 0.7
Examination X is related to θ as follows:
P(X = 1 | θ = 0) = 0.40, positive test given no disease
P(X = 1 | θ = 1) = 0.95, positive test given disease
Exam result: X = 1
P(θ = 1 | X = 1) ∝ l(θ = 1; X = 1) P(θ = 1) = (0.95)(0.70) = 0.665
P(θ = 0 | X = 1) ∝ l(θ = 0; X = 1) P(θ = 0) = (0.40)(0.30) = 0.120
Consequently
P(θ = 0 | X = 1) = 0.120/0.785 = 0.1528662 and
P(θ = 1 | X = 1) = 0.665/0.785 = 0.8471338
The information X = 1 increases, for the doctor, the probability that John has the disease A from
70% to 84.71%.
2nd exam: Y
John undertakes the test Y, which relates to θ as follows:
P(Y = 1 | θ = 1) = 0.99   (recall P(X = 1 | θ = 1) = 0.95)
P(Y = 1 | θ = 0) = 0.04   (recall P(X = 1 | θ = 0) = 0.40)
Predictive:
P(Y = 0 | X = 1) = (0.96)(0.1528662) + (0.01)(0.8471338) = 0.1552229.
Suppose the observed result was Y = 0. This is a reasonably unexpected result as the doctor only
gave it roughly 15% chance.
Questions
He should at least consider rethinking the model based on this result. In particular, he might want
to ask himself
1. Did 0.7 adequately reflect his P(θ = 1 | H)?
2. Is test X really so unreliable?
3. Is the sample distribution of X correct?
4. Is the test Y so powerful?
5. Have the tests been carried out properly?
What is P(θ | X = 1, Y = 0)?
Let H2 = {X = 1, Y = 0}. Using Bayes' theorem,
P(θ = 1 | H2) ∝ l(θ = 1; Y = 0) P(θ = 1 | X = 1) = (0.01)(0.8471338) = 0.008471338 and
P(θ = 0 | H2) ∝ l(θ = 0; Y = 0) P(θ = 0 | X = 1) = (0.96)(0.1528662) = 0.1467516
Summarizing,
P(θ = 1 | H0) = 0.7000   (H0: before X and Y)
P(θ = 1 | H1) = 0.8471   (H1: after X = 1 and before Y)
P(θ = 1 | H2) = 0.0546   (H2: after X = 1 and Y = 0)
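The arithmetic above is easy to reproduce. Below is a minimal Python sketch of the two sequential updates, using only the probabilities stated in the example (the helper name `update` is ours, not from the slides).

```python
# Sequential Bayesian updating for the disease example (a sketch of the
# calculations above, not code from the slides).
prior = {1: 0.70, 0: 0.30}                      # P(theta | H)
lik_X = {1: 0.95, 0: 0.40}                      # P(X = 1 | theta)
lik_Y = {1: 0.99, 0: 0.04}                      # P(Y = 1 | theta)

def update(prior, lik):
    """Posterior proportional to likelihood x prior, normalized over theta in {0, 1}."""
    unnorm = {th: lik[th] * prior[th] for th in prior}
    c = sum(unnorm.values())                    # predictive probability of the observed result
    return {th: u / c for th, u in unnorm.items()}, c

post_X, pred_X = update(prior, lik_X)           # after observing X = 1
print(post_X)                                   # {1: 0.847..., 0: 0.152...}

# second test: the observed result is Y = 0, so use P(Y = 0 | theta)
lik_Y0 = {th: 1 - p for th, p in lik_Y.items()}
post_XY, pred_Y0 = update(post_X, lik_Y0)       # after X = 1 and Y = 0
print(pred_Y0)                                  # predictive P(Y = 0 | X = 1) ~ 0.155
print(post_XY)                                  # {1: 0.0546..., 0: 0.945...}
```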
Example ii. Normal model and normal prior → normal posterior
Suppose X, conditional on θ, is modeled by
X | θ ~ N(θ, σ²)
and the prior distribution of θ is
θ ~ N(μ0, τ0²)
with σ², μ0 and τ0² known.
Posterior of θ: (θ | X = x) ~ N(μ1, τ1²), with
μ1 = w μ0 + (1 − w) x
τ1⁻² = τ0⁻² + σ⁻²
where w = τ0⁻²/(τ0⁻² + σ⁻²) measures the relative information contained in the prior distribution with respect to the total information (prior plus likelihood).
Example from Box & Tiao (1973)
Prior A: Physicist A (large experience): θ ~ N(900, 20²)
Prior B: Physicist B (not so experienced): θ ~ N(800, 80²)
Model: (X | θ) ~ N(θ, 40²)
Observation: X = 850
(θ | X = 850, HA) ~ N(890, 17.9²)
(θ | X = 850, HB) ~ N(840, 35.7²)
Information (precision)
Physicist A: from 0.002500 to 0.003120 (an increase of 25%)
Physicist B: from 0.000156 to 0.000781 (an increase of 400%)
Priors and posteriors
[Figure: prior densities for Physicists A and B, the likelihood, and the two posterior densities of θ, plotted over the range 600-1000.]
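A small Python sketch of the normal-normal update for the two physicists (the function `normal_update` is a hypothetical helper, not from the slides); it reproduces the posterior means and standard deviations quoted above.

```python
import numpy as np

def normal_update(mu0, tau0, sigma, x):
    """Posterior of theta when X | theta ~ N(theta, sigma^2) and theta ~ N(mu0, tau0^2)."""
    prec = 1 / tau0**2 + 1 / sigma**2            # posterior precision tau1^{-2}
    w = (1 / tau0**2) / prec                     # weight on the prior mean
    mu1 = w * mu0 + (1 - w) * x
    return mu1, np.sqrt(1 / prec)

print(normal_update(900, 20, 40, 850))           # Physicist A: (890.0, 17.88...)
print(normal_update(800, 80, 40, 850))           # Physicist B: (840.0, 35.77...)
```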
Example iii. Simple linear regression
A simple normal linear regression relates the dependent variable y_i and the explanatory variable x_i, for i = 1, ..., n, by
y_i | β, H ~ N(β x_i; σ²)
β | H ~ N(β0, τ0²)
Therefore H = {σ², β0, τ0², x_1, ..., x_n}.
Example iv. Simple stochastic volatility model
The simplest stochastic volatility model with first-order autoregressive log-volatilities, namely SV-AR(1), relates the log-return y_t of a financial time series to the log-volatility λ_t, for t = 1, ..., T, via
y_t | λ_t, H ~ N(0; exp(λ_t))
λ_t | λ_{t−1}, H ~ N(α + β λ_{t−1}, τ²)
Therefore H = {α, β, τ², λ_0}.
Bayesian ingredients
Posterior (Bayes' theorem):
p(θ | x, H) = p(θ, x | H)/p(x | H) = p(x | θ, H) p(θ | H)/p(x | H)
Predictive (or marginal) distribution:
p(x | H) = ∫ p(x, θ | H) dθ = E_θ[p(x | θ, H)]
p(x | H) is also known as the normalizing constant and plays an important role in Bayesian model criticism.
Bayesian ingredients (cont.)
Posterior predictive:
p(y | x, H) = ∫ p(y, θ | x, H) dθ
            = ∫ p(y | θ, x, H) p(θ | x, H) dθ
            = ∫ p(y | θ, H) p(θ | x, H) dθ
            = E_{θ|x}[p(y | θ, H)]
since, in general (but not always), X and Y are independent given θ.
It might be more useful to concentrate on prediction rather than on estimation because the former is verifiable, i.e. y is observable while θ is not.
Sequential Bayes theorem: a rule for updating probabilities
Experimental result: x1 ~ p1(x1 | θ)
p(θ | x1) ∝ l1(θ; x1) p(θ)
Experimental result: x2 ~ p2(x2 | θ)
p(θ | x2, x1) ∝ l2(θ; x2) p(θ | x1)
             ∝ l2(θ; x2) l1(θ; x1) p(θ)
Experimental results: xi ~ pi(xi | θ), for i = 3, ..., n
p(θ | xn, ..., x1) ∝ ln(θ; xn) p(θ | x_{n−1}, ..., x1)
                  ∝ [ ∏_{i=1}^{n} li(θ; xi) ] p(θ)
Example iii. Revisited
Combining likelihood and prior
y_i | β, H ~ N(β x_i; σ²)
β | H ~ N(β0, τ0²)
leads to the posterior
β | y, H ~ N(β1, τ1²)
where
τ1⁻² = τ0⁻² + σ⁻² Σ_{i=1}^{n} x_i²   and   β1 = τ1² [ τ0⁻² β0 + σ⁻² Σ_{i=1}^{n} y_i x_i ].
When τ0⁻² → 0, i.e. with little prior knowledge about β, the above moments converge to their ordinary least squares counterparts:
τ1⁻² = σ⁻² Σ_{i=1}^{n} x_i²   and   β1 = Σ_{i=1}^{n} y_i x_i / Σ_{i=1}^{n} x_i².
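A short Python sketch contrasting the conjugate posterior with its OLS limit under a vague prior; the simulated data and prior values below are illustrative assumptions, not from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta_true, sigma = 50, 2.0, 1.0               # simulated data (illustration only)
x = rng.uniform(-1, 1, n)
y = beta_true * x + sigma * rng.normal(size=n)

beta0, tau0 = 0.0, 10.0                          # prior beta ~ N(beta0, tau0^2), fairly vague
prec1 = 1 / tau0**2 + np.sum(x**2) / sigma**2    # posterior precision tau1^{-2}
beta1 = (beta0 / tau0**2 + np.sum(x * y) / sigma**2) / prec1

beta_ols = np.sum(x * y) / np.sum(x**2)          # OLS limit as tau0^{-2} -> 0
print(beta1, beta_ols)                           # nearly identical for a vague prior
```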
Example v. Multiple normal linear regression
The standard Bayesian approach to multiple linear regression is
(y | X, β, σ²) ~ N(Xβ, σ² I_n)
where y = (y1, ..., yn)′, X = (x1, ..., xn)′ is the (n × q) design matrix and q = p + 1.
The prior distribution of (β, σ²) is NIG(b0, B0, n0, S0), i.e.
β | σ² ~ N(b0, σ² B0)
σ² ~ IG(n0/2, n0 S0/2)
for known hyperparameters b0, B0, n0 and S0.
Example v. Conditionals and marginals
It is easy to show that (β, σ² | y, X) is NIG(b1, B1, n1, S1), i.e.
(β | σ², y, X) ~ N(b1, σ² B1)
(σ² | y, X) ~ IG(n1/2, n1 S1/2)
where
B1⁻¹ = B0⁻¹ + X′X
B1⁻¹ b1 = B0⁻¹ b0 + X′y
n1 = n0 + n
n1 S1 = n0 S0 + (y − X b1)′ y + (b0 − b1)′ B0⁻¹ b0.
It is also easy to derive the marginal distribution of β and the full conditional distribution of σ², i.e.
(β | y, X) ~ t_{n1}(b1, S1 B1)
(σ² | β, y, X) ~ IG(n1/2, n1 S11/2)
where
n1 S11 = n0 S0 + (y − Xβ)′(y − Xβ).
Example v. Ordinary least squares
It is well known that
β̂ = (X′X)⁻¹ X′y
σ̂² = S_e/(n − q) = (y − Xβ̂)′(y − Xβ̂)/(n − q)
are the OLS estimates of β and σ², respectively.
The conditional and unconditional sampling distributions of β̂ are
(β̂ | σ², y, X) ~ N(β, σ² (X′X)⁻¹)
(β̂ | y, X) ~ t_{n−q}(β, S_e)
respectively, with
(σ̂² | σ²) ~ IG((n − q)/2, (n − q)σ²/2).
Example v. Sufficient statistics recursions
Recall the multiple linear regression (y_t | x_t, β, σ²) ~ N(x_t′β, σ²) for t = 1, ..., n, with β | σ² ~ N(b0, σ² B0) and σ² ~ IG(n0/2, n0 S0/2).
Then, for y^t = (y1, ..., yt)′ and X_t = (x1, ..., xt)′, it follows that
(β | σ², y^t, X_t) ~ N(b_t, σ² B_t)
(σ² | y^t, X_t) ~ IG(n_t/2, n_t S_t/2)
where
n_t = n_{t−1} + 1
B_t⁻¹ = B_{t−1}⁻¹ + x_t x_t′
B_t⁻¹ b_t = B_{t−1}⁻¹ b_{t−1} + y_t x_t
n_t S_t = n_{t−1} S_{t−1} + (y_t − b_t′ x_t) y_t + (b_{t−1} − b_t)′ B_{t−1}⁻¹ b_{t−1}.
The only ingredients needed are x_t x_t′, y_t x_t and y_t².
These recursions will play an important role later on when deriving sequential Monte Carlo methods for conditionally Gaussian dynamic linear models, like many stochastic volatility models.
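A minimal Python sketch of these sufficient-statistics recursions, assuming simulated data and arbitrary hyperparameters (the function name `nig_recursions` is ours, not from the slides).

```python
import numpy as np

def nig_recursions(ys, xs, b0, B0, n0, S0):
    """One pass of the conjugate NIG recursions for (y_t | x_t, beta, sigma2) ~ N(x_t' beta, sigma2)."""
    b, Binv, n, nS = b0.copy(), np.linalg.inv(B0), n0, n0 * S0
    Binv_b = Binv @ b
    for y, x in zip(ys, xs):
        n += 1                                    # n_t = n_{t-1} + 1
        Binv_new = Binv + np.outer(x, x)          # B_t^{-1} = B_{t-1}^{-1} + x_t x_t'
        Binv_b_new = Binv_b + y * x               # B_t^{-1} b_t = B_{t-1}^{-1} b_{t-1} + y_t x_t
        b_new = np.linalg.solve(Binv_new, Binv_b_new)
        nS += (y - b_new @ x) * y + (b - b_new) @ Binv @ b   # n_t S_t recursion
        b, Binv, Binv_b = b_new, Binv_new, Binv_b_new
    return b, np.linalg.inv(Binv), n, nS / n      # b_t, B_t, n_t, S_t

rng = np.random.default_rng(1)                    # toy data, illustration only
X = np.column_stack([np.ones(100), rng.normal(size=100)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=100)
print(nig_recursions(y, X, np.zeros(2), 100 * np.eye(2), 1, 1.0))
```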
Example v. Predictive
The predictive density can be seen as the marginal likelihood, i.e.
p(y | X) = ∫∫ p(y | X, β, σ²) p(β | σ²) p(σ²) dβ dσ²
or, by Bayes' theorem, as the normalizing constant, i.e.
p(y | X) = p(y | X, β, σ²) p(β | σ²) p(σ²) / [ p(β | σ², y, X) p(σ² | y, X) ]
which is valid for all (β, σ²).
A closed-form solution is available for the multiple normal linear regression:
(y | X) ~ t_{n0}(X b0, S0 (I_n + X B0 X′)).
Unfortunately, closed-form solutions are rare.
Example iv. Revisited
The posterior distribution of λ = (λ1, ..., λ_T) is given by
p(λ | y) ∝ ∏_{t=1}^{T} p(λ_t | λ_{t−1}, H) ∏_{t=1}^{T} p(y_t | λ_t)
        ∝ ∏_{t=1}^{T} exp{ −(λ_t − α − β λ_{t−1})²/(2τ²) } ∏_{t=1}^{T} exp(−λ_t/2) exp{ −y_t² exp(−λ_t)/2 }
How to compute E(λ43 | y) or V(λ11 | y)?
How to compute 95% credible regions for (λ35, λ36 | y)?
How to sample from p(λ | y) or p(λ | y1, ..., y10)?
How to compute p(y) or p(y_t | y1, ..., y_{t−1})?
LECTURE 2
BAYESIAN MODEL CRITICISM
Outline
Prior and posterior model probabilities
Posterior odds
Bayes factor
Computing normalizing constants
Savage-Dickey density ratio
Bayesian Model Averaging
Posterior predictive criterion
Deviance information criterion
Prior and posterior model probabilities
Suppose that the competing models can be enumerated and are represented by the set M = {M1, M2, ...}.
Bayesian model comparison is commonly performed by computing posterior model probabilities,
Pr(Mj | y) ∝ f(y | Mj) Pr(Mj)
where Pr(Mj) and
f(y | Mj) = ∫ f(y | θj, Mj) p(θj | Mj) dθj
are, respectively, the prior model probability and the predictive density of model Mj, for j = 1, 2, ...
Posterior odds
Posterior odds of model Mj relative to Mk:
Pr(Mj | y)/Pr(Mk | y)  [posterior odds]  =  Pr(Mj)/Pr(Mk)  [prior odds]  ×  f(y | Mj)/f(y | Mk)  [Bayes factor Bjk]
The Bayes factor can be viewed as the weighted likelihood ratio of Mj to Mk.
Bayes factor
Jeffreys (1961) recommends the use of the following rule of thumb to decide between models j and k:
Bjk > 100: decisive evidence against k
10 < Bjk ≤ 100: strong evidence against k
3 < Bjk ≤ 10: substantial evidence against k
The posterior model probability for model j is
Pr(Mj | y) = [ Σ_{k=1}^{∞} Bkj Pr(Mk)/Pr(Mj) ]⁻¹
for j = 1, 2, ...
Computing normalizing constants
A basic ingredient for model assessment is given by the predictive density
f(y | M) = ∫ f(y | θ, M) p(θ | M) dθ,
which is the normalizing constant of the posterior distribution.
The predictive density can now be viewed as the likelihood of model M.
It is sometimes referred to as the predictive likelihood, because it is obtained after marginalization of the model parameters.
For any given model, the predictive density can be written as
f(y) = E[f(y | θ)]
where the expectation is taken with respect to the prior distribution p(θ).
Approximate methods (discussed more later)
Several approximations for f(y) based on Monte Carlo and Markov chain Monte Carlo methods are
routinely available. Amongst them are:
Laplace-Metropolis estimator
Simple Monte Carlo
Monte Carlo via importance sampling
Harmonic mean estimator
Chib's estimator
Reversible jump MCMC
Key references are DiCiccio, Kass, Raftery and Wasserman (1997), Han and Carlin (2001), and Lopes and West (2004).
Savage-Dickey Density Ratio
Suppose that M2 is described by
p(y | φ, θ, M2)
and M1 is a restricted version of M2, i.e.
p(y | θ, M1) ≡ p(y | φ = φ0, θ, M2)
Suppose also that
p(θ | φ = φ0, M2) = p(θ | M1)
Then it can be proved that the Bayes factor is
B12 = p(φ = φ0 | y, M2) / p(φ = φ0 | M2)
whose numerator can be estimated from posterior draws φ(1), ..., φ(N) ~ p(φ | y, M2). See Verdinelli and Wasserman (1995) for further details.
Example i. Normal versus Student-t
Suppose we have two competing models:
M1: x_i ~ N(μ, σ²)
M2: x_i ~ t_ν(μ, σ²)
Letting φ = 1/ν, M1 is a particular case of M2 when φ = φ0 = 0, with θ = (μ, σ²).
Let φ ~ U(0, 1), with φ = 1 corresponding to a Cauchy distribution. Assuming that
p(μ, σ² | M1) = p(μ, σ² | φ, M2),
the Savage-Dickey formula holds and the Bayes factor is
B12 = p(φ = 0 | x, M2),
the marginal posterior of φ evaluated at 0.
Results
Because p(μ, σ², φ | x, M2) has no closed-form solution, they use a Metropolis algorithm.
When n = 100 observations are drawn from a N(0, 1), then B12 = 3.79, with a standard error of 0.145.
When n = 100 observations are drawn from a Cauchy(0, 1), then B12 = 0.000405, with a standard error of 0.000240.
Bayesian model averaging
Let M denote the set that indexes all entertained models.
Assume that Δ is an outcome of interest, such as the future value y_{t+k}, an elasticity well defined across models, etc.
The posterior distribution of Δ is
p(Δ | y) = Σ_{m∈M} p(Δ | m, y) Pr(m | y)
for data y and posterior model probability
Pr(m | y) = p(y | m) Pr(m) / p(y)
where Pr(m) is the prior model probability.
See Hoeting, Madigan, Raftery and Volinsky (1999) for more details.
Posterior predictive criterion
Gelfand and Ghosh (1998) introduced a posterior predictive criterion that, under squared error loss, favors the model Mj which minimizes
Dj^G = Pj^G + Gj^G
where
Pj^G = Σ_{t=1}^{n} V(ŷ_t | y, Mj)
Gj^G = Σ_{t=1}^{n} [ y_t − E(ŷ_t | y, Mj) ]²
and (ŷ1, ..., ŷn) are predictions/replicates of y.
The first term, Pj^G, is a penalty term for model complexity.
The second term, Gj^G, accounts for goodness of fit.
More general losses
Gelfand and Ghosh (1998) also derived the criterion for more general loss functions.
The expectations E(ŷ_t | y, Mj) and variances V(ŷ_t | y, Mj) are computed under the posterior predictive densities, i.e.
E[h(ŷ_t) | y, Mj] = ∫∫ h(ŷ_t) f(ŷ_t | y, θj, Mj) p(θj | y, Mj) dθj dŷ_t
for h(ŷ_t) = ŷ_t and h(ŷ_t) = ŷ_t².
The above integral can be approximated via Monte Carlo.
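A possible Monte Carlo sketch in Python: assuming an array `yrep` of posterior predictive replicates is available (here generated from a hypothetical fitted normal model, purely for illustration), the criterion reduces to sample variances and squared errors.

```python
import numpy as np

def gelfand_ghosh(y, yrep):
    """D = P + G from posterior predictive draws yrep (M draws x n observations)."""
    P = np.var(yrep, axis=0, ddof=1).sum()        # sum_t V(yrep_t | y)  -- complexity penalty
    G = ((y - yrep.mean(axis=0))**2).sum()        # sum_t [y_t - E(yrep_t | y)]^2  -- fit
    return P + G, P, G

# toy illustration: replicates from a fitted normal model (hypothetical, not from the slides)
rng = np.random.default_rng(2)
y = rng.normal(0, 1, size=30)
yrep = rng.normal(y.mean(), y.std(ddof=1), size=(5000, 30))
print(gelfand_ghosh(y, yrep))
```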
Deviance information criterion
Inspired by Dempster's (1997) suggestion to compute the posterior distribution of the log-likelihood, D(θj) = −2 log f(y | θj, Mj), Spiegelhalter et al. (2002) introduced the deviance information criterion (DIC)
Dj^S = Pj^S + Gj^S
where
Pj^S = E[D(θj) | y, Mj] − D[E(θj | y, Mj)]
Gj^S = E[D(θj) | y, Mj].
The DIC is decomposed into two important components:
one responsible for goodness of fit (Gj^S), and
one responsible for model complexity (Pj^S).
Pj^S is also currently referred to as the effective number of parameters of model Mj.
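A sketch of the DIC computation in Python, assuming posterior draws and a log-likelihood function are available; the toy normal model below is an illustration of ours, not an example from the slides.

```python
import numpy as np

def dic(draws, loglik):
    """DIC = pD + Dbar, with D(theta) = -2 log f(y | theta).
    `draws` holds posterior draws; `loglik(theta)` returns log f(y | theta)."""
    D = np.array([-2.0 * loglik(th) for th in draws])
    Dbar = D.mean()                               # G = E[D(theta) | y]
    Dhat = -2.0 * loglik(draws.mean(axis=0))      # D evaluated at the posterior mean
    pD = Dbar - Dhat                              # effective number of parameters
    return Dbar + pD, pD

# toy example: N(theta, 1) model, posterior draws of theta (hypothetical)
rng = np.random.default_rng(3)
y = rng.normal(size=50)
loglik = lambda th: float(-0.5 * np.sum((y - th)**2) - 0.5 * len(y) * np.log(2 * np.pi))
post_draws = rng.normal(y.mean(), 1 / np.sqrt(len(y)), size=2000)
print(dic(post_draws, loglik))                    # (DIC, pD), pD should be close to 1
```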
DIC and WinBUGS
The DIC has become very popular in the applied Bayesian community due to its computational
simplicity and, consequently, its availability in WinBUGS.
Further applications appear, amongst many others, in
Berg, Meyer and Yu (2002): stochastic volatility models.
Celeux et al. (2005): mixture models, random effects models and several missing data models.
Nobre, Schmidt and Lopes (2005): space-time hierarchical models.
van der Linde (2005): variable selection.
Lopes and Salazar (2006): nonlinear time series models.
Silva and Lopes (2008): mixture of copulas models.
LECTURE 3
MONTE CARLO METHODS
Basic Bayesian computation
Main ingredients:
Posterior: π(θ) = f(x | θ) p(θ)/f(x)
Predictive: f(x) = ∫ f(x | θ) p(θ) dθ
Bayesian agenda:
Posterior modes: max_θ π(θ);
Posterior moments: E_π[g(θ)];
Density estimation: π(g(θ));
Bayes factors: f(x | M0)/f(x | M1);
Decision: max_d ∫ U(d, θ) π(θ) dθ.
Analytic approximations
Asymptotic approximation (Carlin & Louis, 2000)
Laplace approximation (Tierney & Kadane, 1986)
Gaussian quadrature (Naylor and Smith, 1982)
Stochastic approximations/simulations
Simulated annealing (Metropolis et al, 1953)
Metropolis-Hastings algorithm (Hastings, 1970)
Monte Carlo integration (Geweke, 1989)
Gibbs sampler (Gelfand and Smith, 1990)
Rejection methods (Gilks and Wild, 1992)
Importance Sampling (Smith and Gelfand, 1992)
Monte Carlo methods
In what follows we will introduce several Monte Carlo methods for integrating and/or sampling from
nontrivial densities.
Simple Monte Carlo integration
Monte Carlo integration via importance sampling
Sampling via the rejection method
Sampling via importance resampling (SIR)
Monte Carlo integration
The objective here is to compute moments
E_π[h(θ)] = ∫ h(θ) π(θ) dθ
If θ1, ..., θn is a random sample from π(θ), then
ĥ_mc = (1/n) Σ_{i=1}^{n} h(θi) → E_π[h(θ)]
If, additionally, E_π[h²(θ)] < ∞, then
V_π[ĥ_mc] = (1/n) ∫ { h(θ) − E_π[h(θ)] }² π(θ) dθ
and
v_mc = (1/n²) Σ_{i=1}^{n} ( h(θi) − ĥ_mc )² → V_π[ĥ_mc]
Example i.
The objective here is to estimate
p = ∫_0^1 [cos(50θ) + sin(20θ)]² dθ ≈ 0.965
by noticing that the above integral can be rewritten as
E_π[h(θ)] = ∫ h(θ) π(θ) dθ
where h(θ) = [cos(50θ) + sin(20θ)]² and π(θ) = 1 is the density of a U(0, 1).
Therefore
p̂ = (1/n) Σ_{i=1}^{n} h(θi)
where θ1, ..., θn are i.i.d. U(0, 1).
[Figure: the integrand h(θ) over (0, 1) and the Monte Carlo estimate of p as a function of the sample size (log10 scale, n = 10 to 10^6).]
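A minimal Python sketch of this Monte Carlo estimator (the seed and sample sizes are arbitrary choices, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(4)
h = lambda th: (np.cos(50 * th) + np.sin(20 * th))**2   # integrand

for n in [10**2, 10**4, 10**6]:
    th = rng.uniform(size=n)                      # theta_1, ..., theta_n ~ U(0,1)
    est = h(th).mean()                            # Monte Carlo estimate of p
    se = h(th).std(ddof=1) / np.sqrt(n)           # its standard error
    print(n, round(est, 4), round(se, 4))         # converges to ~0.965
```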
Monte Carlo via importance sampling
The objective is still the same, i.e. to compute
E_π[h(θ)] = ∫ h(θ) π(θ) dθ
by noticing that
E_π[h(θ)] = ∫ [ h(θ) π(θ)/q(θ) ] q(θ) dθ
where q(θ) is an importance function. Therefore, if θ1, ..., θn is a random sample from q(θ), then
ĥ_is = (1/n) Σ_{i=1}^{n} h(θi) π(θi)/q(θi) → E_π[h(θ)]
Ideally, q(θ) should be (i) as close as possible to h(θ)π(θ) and (ii) easy to sample from.
Example ii.
The objective here is to estimate
p = ∫_2^∞ 1/[π(1 + θ²)] dθ
Three Monte Carlo estimators of p are
p̂1 = (1/n) Σ_{i=1}^{n} I_{(2,∞)}(θi)
p̂2 = (1/n) Σ_{i=1}^{n} (1/2) I_{(−∞,−2)∪(2,∞)}(θi)
p̂3 = (1/n) Σ_{i=1}^{n} u_i⁻²/{ 2π[1 + u_i⁻²] } = (1/n) Σ_{i=1}^{n} 1/{ 2π(1 + u_i²) }
where θ1, ..., θn ~ Cauchy(0, 1) and u1, ..., un ~ U(0, 1/2).
n        v_mc1      v_mc2      v_mc3
50       0.051846   0.033941   0.001407
100      0.030000   0.021651   0.000953
700      0.014054   0.008684   0.000359
1000     0.011738   0.007280   0.000308
5000     0.005050   0.003220   0.000138
10000    0.003543   0.002276   0.000097
100000   0.001124   0.000721   0.000031
1000000  0.000355   0.000228   0.000010
If 0.00035 is the desired level of precision in the estimation, then 1 million draws would be necessary for estimator p̂1, while only 700 suffice for estimator p̂3, i.e. roughly 3 orders of magnitude fewer.
[Figure: the Cauchy(0,1) density with the tail areas P(X > 2) and P(X > 2) + P(X < −2) shaded, the transformed integrand on (0, 1/2), and the three Monte Carlo estimators with their standard deviations as functions of log10(n).]
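A Python sketch of the three estimators (the seed and sample size are arbitrary); it also prints the exact value 1/2 − arctan(2)/π for reference.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
theta = rng.standard_cauchy(n)                    # Cauchy(0,1) draws
u = rng.uniform(0, 0.5, n)                        # U(0, 1/2) draws

p1 = np.mean(theta > 2)                           # indicator estimator
p2 = 0.5 * np.mean(np.abs(theta) > 2)             # uses symmetry of the Cauchy
p3 = np.mean(1 / (2 * np.pi * (1 + u**2)))        # transformed integrand on (0, 1/2)
print(p1, p2, p3, 0.5 - np.arctan(2) / np.pi)     # all approximate p ~ 0.1476
```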
Rejection method
The objective is to draw from a target density
π(θ) = c_π π̃(θ)
when only draws from an auxiliary density
q(θ) = c_q q̃(θ)
are available. Here c_π and c_q are normalizing constants.
If there exists a constant A < ∞ such that
π̃(θ) ≤ A q̃(θ) for all θ,
then q(θ) becomes a blanketing density, or envelope, and A the envelope constant.
[Figure: the N(0,1) target π̃(θ) against the scaled envelopes A q̃(θ). Left: Cauchy(0, 2.5) proposal, A = 1.982, acceptance rate 49.65%. Right: Uniform(−6, 6) proposal, A = 4.787, acceptance rate 20.85%.]
Algorithm
Under the previous conditions, the following algorithm can be used to draw from π(θ):
1. Draw θ* from q(θ);
2. Draw u from U(0, 1);
3. Accept θ* if u ≤ π̃(θ*)/[A q̃(θ*)];
4. Repeat steps 1, 2 and 3 until n draws are accepted.
Proof
Applying Bayes' theorem,
p(θ | A u q̃(θ) ≤ π̃(θ)) = Pr(A u q̃(θ) ≤ π̃(θ) | θ) q(θ) / ∫ Pr(A u q̃(θ) ≤ π̃(θ) | θ) q(θ) dθ
  = Pr( u ≤ π̃(θ)/[A q̃(θ)] | θ ) q(θ) / ∫ Pr( u ≤ π̃(θ)/[A q̃(θ)] | θ ) q(θ) dθ
  = [ π̃(θ)/(A q̃(θ)) ] c_q q̃(θ) / ∫ [ π̃(θ)/(A q̃(θ)) ] c_q q̃(θ) dθ
  = π̃(θ) / ∫ π̃(θ) dθ = π(θ).
One does not need to know c_π and c_q.
The smaller A is, the larger the acceptance rate.
The theoretical acceptance rate is
Pr( u ≤ π̃(θ)/[A q̃(θ)] ) = ∫ Pr( u ≤ π̃(θ)/[A q̃(θ)] | θ ) q(θ) dθ = ∫ [ π̃(θ)/(A q̃(θ)) ] c_q q̃(θ) dθ = (c_q/A) ∫ π̃(θ) dθ = c_q/(A c_π).
Example iii.
Enveloping the N(0,1) density
π(θ) = (1/√(2π)) exp{−0.5 θ²}
by a multiple of a Cauchy density
q_C(θ) = (1/(π√2.5)) [1 + θ²/2.5]⁻¹
or a multiple of a Uniform density
q_U(θ) = (1/12) I_{(−6,6)}(θ).
With n = 2000 draws sampled from q_C(θ): observed acceptance rate of 49.65%, true acceptance rate of 1/√(1.25π) ≈ 50%.
With n = 2000 draws sampled from q_U(θ): observed acceptance rate of 20.85%, true acceptance rate of √(2π)/12 ≈ 21%.
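A minimal Python sketch of the rejection algorithm for the uniform envelope of Example iii (constants as derived above); the observed acceptance rate should be close to 1/A ≈ 21%.

```python
import numpy as np

rng = np.random.default_rng(6)
A = 12 / np.sqrt(2 * np.pi)                       # envelope constant for q = U(-6, 6): A ~ 4.787
target = lambda th: np.exp(-0.5 * th**2) / np.sqrt(2 * np.pi)   # N(0,1) density

draws, proposals = [], 0
while len(draws) < 2000:
    th = rng.uniform(-6, 6)                       # draw from the uniform envelope
    proposals += 1
    if rng.uniform() <= target(th) / (A * (1 / 12)):
        draws.append(th)                          # accept

print(len(draws) / proposals)                     # observed acceptance rate ~ 1/A ~ 0.21
```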
Sampling importance resampling
No need to rely on the existence of A!
Algorithm
1. Draw θ*_1, ..., θ*_n from q(θ);
2. Compute the weights
   w_i = [ π̃(θ*_i)/q(θ*_i) ] / Σ_{j=1}^{n} [ π̃(θ*_j)/q(θ*_j) ], i = 1, ..., n;
3. For j = 1, ..., m, sample θ~_j from {θ*_1, ..., θ*_n} such that Pr(θ~_j = θ*_i) = w_i, i = 1, ..., n.
Rule of thumb: n/m = 20.
SIR in the Bayesian context
Let the target distribution be the posterior distribution
π(θ) = c_π p(θ) f(x | θ)
A natural (but not necessarily good) choice is
q(θ) = p(θ)
so the weights
w_i = f(x | θ*_i) / Σ_{j=1}^{n} f(x | θ*_j), i = 1, ..., n
are the normalized likelihoods.
Example iv.
Assume that σ²/n = 4.5, x̄ = 7, θ0 = 0 and τ0² = 1.
Normal model:
f(x̄ | θ) = (2πσ²/n)^{−1/2} exp{ −n(θ − x̄)²/(2σ²) }
Cauchy prior:
p(θ) ∝ [ τ0² + (θ − θ0)² ]⁻¹
Posterior:
π(θ) ∝ exp{ −n(θ − x̄)²/(2σ²) } / [ τ0² + (θ − θ0)² ]
[Figure: left panel, 200 draws from the Cauchy prior with the prior, likelihood and posterior curves; right panel, the posterior density of θ based on 10000 SIR draws.]
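A Python sketch of SIR for this example, using the Cauchy prior as proposal so that the weights are the normalized likelihoods; the proposal and resample sizes follow the n/m = 20 rule of thumb and are otherwise arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
sig2_n, xbar, th0, tau2_0 = 4.5, 7.0, 0.0, 1.0    # sigma^2/n, x-bar, Cauchy location, scale^2

n_prop, m = 200_000, 10_000                       # rule of thumb n/m = 20
theta = th0 + np.sqrt(tau2_0) * rng.standard_cauchy(n_prop)   # proposal = prior draws
loglik = -0.5 * (theta - xbar)**2 / sig2_n        # log-likelihood of x-bar at each draw
w = np.exp(loglik - loglik.max())                 # normalized likelihoods as weights
w /= w.sum()

resampled = rng.choice(theta, size=m, replace=True, p=w)      # resampling step
print(resampled.mean(), resampled.std())          # posterior summaries for theta
```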
LECTURE 4
MARKOV CHAIN MONTE CARLO
Homogeneous Markov chain
A Markov chain is a stochastic process where, given the present state, past and future states are independent, i.e.
Pr(θ(n+1) ∈ A | θ(n) = x, θ(n−1) ∈ A_{n−1}, ..., θ(0) ∈ A_0) = Pr(θ(n+1) ∈ A | θ(n) = x)
for all sets A_0, ..., A_{n−1}, A ⊂ S and x ∈ S.
When the above probability does not depend on n, the chain is said to be homogeneous and a transition function, or kernel P(x, A), can be defined as:
1. for all x ∈ S, P(x, ·) is a probability distribution over S;
2. for all A ⊂ S, the function x → P(x, A) can be evaluated.
Example i. Random walk
Consider a particle moving independently left and right on the line, with successive displacements from its current position governed by a probability function f over the integers, and let θ(n) represent its position at instant n, n ∈ N. Initially, θ(0) is distributed according to some distribution π(0). The positions can be related as
θ(n) = θ(n−1) + w_n = w_1 + w_2 + ... + w_n
where the w_i are independent random variables with probability function f. So {θ(n): n ∈ N} is a Markov chain on Z.
The position of the chain at instant t = n is described probabilistically by the distribution of w_1 + ... + w_n.
[Figure: four sample paths of the random walk over 20, 100, 500 and 1000 steps, with Pr{θ(n) = θ(n−1) + i} = 1/2 for i = −1, 1 and θ(0) = 0.]
Example ii. Birth and death processes
Consider a Markov chain that from state x can only move, in the next step, to one of the neighboring states x − 1 (representing a death), x, or x + 1 (representing a birth). The transition probabilities are given by
P(x, y) = p_x, if y = x + 1
        = q_x, if y = x − 1
        = r_x, if y = x
        = 0, if |y − x| > 1
where p_x, q_x and r_x are non-negative with p_x + q_x + r_x = 1 and q_0 = 0.
[Figure: four sample paths of the birth and death chain over 20, 100, 500 and 1000 steps, with Pr{θ(n) = θ(n−1) + i} = 1/3 for i = −1, 0, 1 and θ(0) = 0.]
Discrete state spaces
If S is finite with r elements, S = {x1, x2, ..., xr}, a transition matrix P with (i, j)th element given by P(xi, xj) can be defined as
P = [ P(x1, x1) ... P(x1, xr)
         ...          ...
      P(xr, x1) ... P(xr, xr) ].
The transition probability from state x to state y over m steps, denoted by P^m(x, y), is the probability of the chain moving from state x to state y in exactly m steps. It can be obtained for m ≥ 2 as
P^m(x, y) = Pr(θ(m) = y | θ(0) = x)
          = Σ_{x1} ... Σ_{x_{m−1}} Pr(y, x_{m−1}, ..., x1 | x)
          = Σ_{x1} ... Σ_{x_{m−1}} Pr(y | x_{m−1}) ... Pr(x1 | x)
          = Σ_{x1} ... Σ_{x_{m−1}} P(x_{m−1}, y) ... P(x, x1)
Chapman-Kolmogorov equations
P^{n+m}(x, y) = Σ_z Pr(θ(n+m) = y | θ(n) = z, θ(0) = x) Pr(θ(n) = z | θ(0) = x)
             = Σ_z P^n(x, z) P^m(z, y)
and, more generally,
P^{n+m} = P^n P^m.
Marginal distributions
Let
π(n) = (π(n)(x1), ..., π(n)(xr)),
with π(0) the initial distribution of the chain (n = 0). Then
π(n)(y) = Σ_{x∈S} P^n(x, y) π(0)(x)
or, in matrix notation,
π(n) = π(0) P^n   and   π(n) = π(n−1) P.
Example iii. 2-state Markov chain
Consider {θ(n): n ≥ 0}, a Markov chain on S = {0, 1} with initial distribution
π(0) = (π(0)(0), π(0)(1))
and transition matrix
P = [ 1−p   p
       q   1−q ].
It is easy to see that
Pr(θ(n) = 0) = (1 − p) Pr(θ(n−1) = 0) + q Pr(θ(n−1) = 1)
             = (1 − p − q)^n π(0)(0) + q Σ_{k=0}^{n−1} (1 − p − q)^k
If p + q > 0,
Pr(θ(n) = 0) = q/(p + q) + (1 − p − q)^n [ π(0)(0) − q/(p + q) ]
If 0 < p + q < 2, then
lim_{n→∞} Pr(θ(n) = 0) = q/(p + q)   and   lim_{n→∞} Pr(θ(n) = 1) = p/(p + q).
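A short Python sketch iterating π(n) = π(n−1)P for an arbitrary choice of p and q; the marginal distribution converges to (q/(p+q), p/(p+q)).

```python
import numpy as np

p, q = 0.3, 0.2                                   # arbitrary transition probabilities
P = np.array([[1 - p, p],
              [q, 1 - q]])
pi_n = np.array([1.0, 0.0])                       # start at state 0 with probability 1

for n in range(50):
    pi_n = pi_n @ P                               # pi^(n) = pi^(n-1) P
print(pi_n, q / (p + q), p / (p + q))             # converges to (0.4, 0.6)
```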
Stationary distributions
A fundamental problem for Markov chains is the study of the asymptotic behavior of the chain as the number of iterations n → ∞.
A key concept is that of a stationary distribution π. A distribution π is said to be a stationary distribution of a chain with transition probabilities P(x, y) if
Σ_{x∈S} π(x) P(x, y) = π(y), for all y ∈ S,
or, in matrix notation, πP = π.
If the marginal distribution at any given step n is π, then the next step distribution is πP = π.
Once the chain reaches a stage where π is its distribution, all subsequent distributions are π.
π is also known as the equilibrium distribution.
Example iv. 10-state Markov chain
Transition matrix (rows: state today; columns: state tomorrow):
        1      2      3      4      5      6      7      8      9      10
 1   0.333  0.667  0      0      0      0      0      0      0      0
 2   0      0.328  0.567  0.104  0      0      0      0      0      0
 3   0      0.037  0.429  0.492  0.040  0.001  0      0      0      0
 4   0      0      0.080  0.537  0.365  0.018  0      0      0      0
 5   0      0      0.003  0.148  0.596  0.244  0.008  0      0      0
 6   0      0      0      0.008  0.246  0.603  0.141  0.002  0      0
 7   0      0      0      0      0.019  0.357  0.547  0.077  0.001  0
 8   0      0      0      0      0      0.033  0.489  0.446  0.032  0
 9   0      0      0      0      0      0      0.062  0.594  0.336  0.008
10   0      0      0      0      0      0      0      0      0.600  0.400
Equilibrium distribution
[Figure: bar plot of the equilibrium distribution over the 10 states, comparing the true distribution with the estimate obtained from a simulated path of the chain.]
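A Python sketch reproducing the "true" versus "estimated" comparison for the 10-state chain: the transition matrix is the one reconstructed in the table above (rows renormalized to absorb rounding), the equilibrium distribution is obtained by power iteration, and the estimate comes from the long-run frequencies of a simulated path.

```python
import numpy as np

# Transition matrix as reconstructed above (rows = state today, columns = state tomorrow).
P = np.array([
    [0.333, 0.667, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000],
    [0.000, 0.328, 0.567, 0.104, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000],
    [0.000, 0.037, 0.429, 0.492, 0.040, 0.001, 0.000, 0.000, 0.000, 0.000],
    [0.000, 0.000, 0.080, 0.537, 0.365, 0.018, 0.000, 0.000, 0.000, 0.000],
    [0.000, 0.000, 0.003, 0.148, 0.596, 0.244, 0.008, 0.000, 0.000, 0.000],
    [0.000, 0.000, 0.000, 0.008, 0.246, 0.603, 0.141, 0.002, 0.000, 0.000],
    [0.000, 0.000, 0.000, 0.000, 0.019, 0.357, 0.547, 0.077, 0.001, 0.000],
    [0.000, 0.000, 0.000, 0.000, 0.000, 0.033, 0.489, 0.446, 0.032, 0.000],
    [0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.062, 0.594, 0.336, 0.008],
    [0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.600, 0.400],
])
P /= P.sum(axis=1, keepdims=True)                 # renormalize rows to correct slide rounding

# "True" equilibrium distribution via power iteration: pi = pi P at convergence.
pi = np.full(10, 0.1)
for _ in range(2000):
    pi = pi @ P
pi /= pi.sum()

# "Estimated" equilibrium distribution: long-run frequencies of one simulated path.
rng = np.random.default_rng(8)
state, counts = 0, np.zeros(10)
for _ in range(200_000):
    state = rng.choice(10, p=P[state])
    counts[state] += 1

print(np.round(pi, 3))
print(np.round(counts / counts.sum(), 3))
```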
Ergodicity
A chain is said to be geometrically ergodic if there exist λ ∈ [0, 1) and a real, integrable function M(x) such that
||P^n(x, ·) − π(·)|| ≤ M(x) λ^n     (1)
for all x ∈ S.
If M(x) = M, then the ergodicity is uniform.
Uniform ergodicity implies geometric ergodicity, which implies ergodicity.
The smallest λ satisfying (1) is called the rate of convergence.
A very large value of M(x) may slow down convergence considerably.
Ergodic theorem
Once ergodicity of the chain is established, important limiting theorems can be stated. The first and most important one is the ergodic theorem.
The ergodic average of a real-valued function t(θ) is t̄_n = (1/n) Σ_{i=1}^{n} t(θ(i)). If the chain is ergodic and E_π[t(θ)] < ∞ for the unique limiting distribution π, then
t̄_n → E_π[t(θ)] almost surely as n → ∞,
which is a Markov chain equivalent of the law of large numbers.
It states that averages of chain values also provide strongly consistent estimates of parameters of the limiting distribution despite their dependence.
There are also versions of the central limit theorem for Markov chains.
Inefficiency factor or integrated autocorrelation time
Define the autocovariance of lag k of the chain t(n) = t(θ(n)) as γ_k = Cov_π(t(n), t(n+k)), the variance of t(n) as σ² = γ_0, the autocorrelation of lag k as ρ_k = γ_k/σ², and σ_n²/n = Var_π(t̄_n).
It can be shown that
σ_n² = σ² [ 1 + 2 Σ_{k=1}^{n−1} ((n − k)/n) ρ_k ]     (2)
and that
σ_n² → τ² = σ² [ 1 + 2 Σ_{k=1}^{∞} ρ_k ]     (3)
as n → ∞.
The term in brackets in (3) can be called the inefficiency factor or integrated autocorrelation time because it measures how far the t(n)'s are from being a random sample and how much Var_π(t̄_n) increases because of that.
Effective sample size
The inefficiency factor can be used to derive the effective sample size
n_eff = n / [ 1 + 2 Σ_{k=1}^{∞} ρ_k ]     (4)
which can be thought of as the size of a random sample with the same variance, since Var_π(t̄_n) = σ²/n_eff.
It is important to distinguish between σ² = Var_π[t(θ)] and τ², the variance of t(θ) under the limiting distribution and the limiting sampling variance of √n t̄_n, respectively.
Note that under independent sampling they are both given by σ². They are both variability measures, but the first one is a characteristic of the limiting distribution whereas the second is the uncertainty of the averaging procedure.
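A Python sketch of a simple effective-sample-size estimator that truncates the autocorrelation sum at the first negative estimate (one of several common truncation rules, not necessarily the one used in the course); the AR(1) example has a known inefficiency factor of (1+φ)/(1−φ) = 19.

```python
import numpy as np

def ess(chain, max_lag=None):
    """Effective sample size n / (1 + 2 * sum_k rho_k), truncating the sum at the
    first negative sample autocorrelation."""
    x = np.asarray(chain) - np.mean(chain)
    n = len(x)
    acf = np.correlate(x, x, mode="full")[n - 1:] / (np.arange(n, 0, -1) * x.var())
    max_lag = max_lag or n // 2
    rho_sum = 0.0
    for k in range(1, max_lag):
        if acf[k] < 0:
            break
        rho_sum += acf[k]
    return n / (1 + 2 * rho_sum)

# AR(1) chain with autocorrelation 0.9 as a test case (simulated, illustration only)
rng = np.random.default_rng(9)
phi, x = 0.9, np.zeros(50_000)
for t in range(1, len(x)):
    x[t] = phi * x[t - 1] + rng.normal()
print(len(x), ess(x))                             # ESS is roughly n / 19
```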
Central limit theorem
If a chain is uniformly (geometrically) ergodic and t²(θ) (respectively t^{2+ε}(θ), for some ε > 0) is integrable with respect to π, then
( t̄_n − E_π[t(θ)] ) / ( τ/√n ) →_d N(0, 1)     (5)
as n → ∞.
Just as the ergodic theorem provides theoretical support for the use of ergodic averages as estimates, equation (5) provides support for the evaluation of approximate confidence intervals.
Reversible chains
Let (θ(n))_{n≥0} be a homogeneous Markov chain with transition probabilities P(x, y) and stationary distribution π.
Assume that one wishes to study the sequence of states θ(n), θ(n−1), ... in reversed order. It can be shown that this sequence is a Markov chain whose transition probabilities are
P*_n(x, y) = Pr(θ(n) = y | θ(n+1) = x)
           = Pr(θ(n+1) = x | θ(n) = y) Pr(θ(n) = y) / Pr(θ(n+1) = x)
           = π(n)(y) P(y, x) / π(n+1)(x)
and in general the chain is not homogeneous.
If n → ∞ or, alternatively, π(0) = π, then
P*_n(x, y) = P*(x, y) = π(y) P(y, x)/π(x)
and the chain becomes homogeneous.
If P*(x, y) = P(x, y) for all x and y ∈ S, the Markov chain is said to be reversible. The reversibility condition is usually written as
π(x) P(x, y) = π(y) P(y, x)     (6)
for all x, y ∈ S.
It can be interpreted as saying that the rate at which the system moves from x to y when in equilibrium, π(x)P(x, y), is the same as the rate at which it moves from y to x, π(y)P(y, x).
For that reason, (6) is sometimes referred to as the detailed balance equation; balance because it equates the rates of moves through states, and detailed because it does so for every possible pair of states.
MCMC: a bit of history
Dongarra and Sullivan (2000)¹ list the top 10 algorithms with the greatest influence on the development and practice of science and engineering in the 20th century (in chronological order):
Metropolis Algorithm for Monte Carlo
Simplex Method for Linear Programming
Krylov Subspace Iteration Methods
The Decompositional Approach to Matrix Computations
The Fortran Optimizing Compiler
QR Algorithm for Computing Eigenvalues
Quicksort Algorithm for Sorting
Fast Fourier Transform
40s and 50s
Stan Ulam soon realized that computers could be used in this fashion to answer questions of neutron diffusion and mathematical physics;
He contacted John Von Neumann and they developed many Monte Carlo algorithms (importance
sampling, rejection sampling, etc);
In the 1940s Nick Metropolis and Klari Von Neumann designed new controls for the state-of-the-art
computer (ENIAC);
Metropolis and Ulam (1949) The Monte Carlo method. Journal of the American Statistical Association, 44, 335-341;
Metropolis, Rosenbluth, Rosenbluth, Teller and Teller (1953) Equations of state calculations by fast
computing machines. Journal of Chemical Physics, 21, 1087-1091.
¹ Guest Editors' Introduction: The Top 10 Algorithms, Computing in Science and Engineering, 2, 22-23.
70s
Hastings and his student Peskun showed that Metropolis and the more general Metropolis-Hastings
algorithm are particular instances of a larger family of algorithms.
Hastings (1970) Monte Carlo sampling methods using Markov chains and their applications. Biometrika,
57, 97-109.
Peskun (1973) Optimum Monte-Carlo sampling using Markov chains. Biometrika, 60, 607-612.
80s and 90s
Geman and Geman (1984) Stochastic relaxation, Gibbs distributions and the Bayesian restoration
of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6, 721-741.
Pearl (1987) Evidential reasoning using stochastic simulation. Artificial Intelligence, 32, 245-257.
Tanner and Wong (1987) The calculation of posterior distributions by data augmentation. Journal
of the American Statistical Association, 82, 528-550.
Gelfand and Smith (1990) Sampling-based approaches to calculating marginal densities. Journal of
the American Statistical Association, 85, 398-409.
Metropolis-Hastings
A sequence θ(0), θ(1), θ(2), ... is drawn from a Markov chain whose limiting equilibrium distribution is the posterior distribution π(θ).
Algorithm
1. Initial value: θ(0)
2. Proposed move: θ* ~ q(θ* | θ(i−1))
3. Acceptance scheme:
   θ(i) = θ*       with probability α
   θ(i) = θ(i−1)   with probability 1 − α
where
α = min{ 1, [ π(θ*) q(θ(i−1) | θ*) ] / [ π(θ(i−1)) q(θ* | θ(i−1)) ] }
Special cases
1. Symmetric chains: q(θ | θ*) = q(θ* | θ)
   α = min{ 1, π(θ*)/π(θ(i−1)) }
2. Independence chains: q(θ | θ*) = q(θ)
   α = min{ 1, w(θ*)/w(θ(i−1)) }
where w(θ) = π(θ)/q(θ).
Random walk Metropolis
The most famous symmetric chain is the random walk Metropolis:
q(θ | θ*) = q(|θ − θ*|)
Hill climbing: since
α = min{ 1, π(θ*)/π(θ(i−1)) },
a value θ* with density π(θ*) greater than π(θ(i−1)) is automatically accepted.
Example v. Bivariate mixture of normals
The target distribution is a two-component mixture of bivariate normal densities, i.e.
π(θ) = 0.7 f_N(θ; μ1, Σ1) + 0.3 f_N(θ; μ2, Σ2)
where
μ1 = (4.0, 5.0)′   μ2 = (0.7, 3.5)′
Σ1 = Σ2 = [ 1.0  0.7
            0.7  1.0 ].
Target distribution
[Figure: contour plot and perspective plot of the two-component mixture density over (θ1, θ2).]
Random walk Metropolis: draws
q(θ*, θ) = f_N(θ*; θ, v I_2), where v is the tuning parameter.
[Figure: random walk Metropolis draws over the contours of the target, all started at the initial value (4, 5): tuning v = 0.01 gives an acceptance rate of 93.8%, v = 1 gives 48.5%, and v = 100 gives 2.4%.]
GG G G G GGG G GG
G GG GG G G G
GG GG G G GGGG G
G G GG G G
G
G GG
G G
G G G G G
G GG G G GG GG GGG G G G G GG
G GG GGG
GGG G GG GG G
G G
G GGG G G
G G
G G
G G GG G
G
G GG GG GGG G GGG G G G G GGG G GG G G G
G
G G G GGGG GG
GGG G G GG G G G
GG G
G GGGGGGG
G GG GGG
G G G G G GG G GGG GG
G G
G GGG G
G GG G GG G GG G G G
G G G G
G G
G G
G G GG G G G G
GGG G G GG G GG G G G
G G G
G G
G G
G G GG G G G GG
GGG
GGGGG GGGG
G
G G GGG GGG
G
GG G G GG G
GG GGG G G G GG G
G G G G G G GG G G G G G GG G G GGG G
GG G GGG
G G GG GG
GG GG G G GGG
GG G G G G G
GGG GG G
G G GG G GG G GGGGGG
G G
G G G G
G G GG G G G
G G G GGGG G G G
G G
G
GG
GGGG G G G G
GG G G GG GG G G G
G GG GG G GG G GGG G G GG GGG G GG G G GG G G G G G
GGG G
GG GGG
G G
G G G G GGG G G GGGG G GGG G G
G
G GG G GG GG GG
GGGG
GGG G G G
GG
G GGGGG
GG
G GG
G G G G G G
GG G G GG
G G G G
G
G
G GGG G GG G GGG G GGGG G
G G GG
G GG G GG GG G G GG G G G G G G GGG G
GGGG
G
GG
GG G G G G GG GG G GG G
G GG
GG G
G G G G GG
G G GGGG G G G G GG GG G GG GGGGGG
G GG GGGG G G G G GGG G G GG G
G G
G G G
GG G GGG GG
GGG G G G G
GGGGG GGG GGG
G
G
G
G GG G G
G G G G
G G G
G
GGG
G
G
G G
GG
GG GG G G G
G G
G G GG G
G GG G
G GGGG GG G G G
G G GGGG G GG G
G G G G G GGGG GGG
G G G G GG
G
G GG GG G GG GG GGG GGG
G
GG G GG GG GG
G G GGG G
G G
G GG
G
G GG GG GG GGG G GG GGGG GG
G G
GGGGG GGG GG G G G G G G G GG
GG GG
GGG GGG G G
G G G G G G G GG G GG G GGG GGG G
G G GG G G G
G G GG G G G GGG G G G G GG G GG G GG G G G G
GGG G GG GG G
G G G
GGGG
GGG GG G
GG G GGG G G G G
G G GGGGGG
G G GG GG
G
G G G G
GG G
G G GG
G GG
G G G G
G G
G G G G G GG G GGG
G
GG
GGGG G GG G
G G
G G GG G G G
G
GG G GG G GG G GG G GG G
GGG G
GG
GG G
GG GG G G GG G GG G G G G
G GG
G G G G
G GG G GGG GGGG
GGG
G G GGGGG
G G GGG G G G G G
G
GG GG GG GG
GGG G GG GGG G
G GG GGG GG G
G GG G
G G G G G
G G GGG G G GG G
GG G G GG
G G G G G G
G
G
G G GG GGGGGGG
GG G GG
G G GG
GGGGG G
G GG G G G
G G G GG G G G G GG G GG G
GGG GG G G G G GG G G G GG G G
GG GG G GGGG
G G GG GG G G GGGG GGGG GG G
G G G GG G G G
G G GG G
GG G GGG
GG G GG G G
GGGG G GG G
G
GG G G G
G G G G
GG GGGG GG G G GG G GG G GG G
G G G G
GGG G G GG G GG G GGG
G G G GG GGG GG G GG G
GG G
G
G G G
G GG G GG G G G G GG GG GG G G
G
G
GG
G
G
G G GGGG G G
GG GGGG G G G G
G GG
G GGGGGG GGGGGGG GG G
GGG GG G G
G
G G G GGG G G G G GG
G GGGG GG GGGGGGG G G
G GG G
G
G G
GG G GGG GG G GG G
GGG
GGGG
G
GG GG
G G
G G G
G GGG G G G G G
GG G GGG GG G GGG GG GGGG
G GG G G GG GG G GG
G GGGGG G G
G
G
G GG
G
G
GG G G G G G
GG GGG GG
G G GG G G G
G G
GGGGG GG
GG G GG G GG G
GGGG G G G G G G G G G
G G
G
G
G G G G GG GG GG
G GG G G G G G
GGGGGGG G
G G G
G G GGG G
G G G G G G G GG G G G G GG G
GG G G GG
G G
GG G G G G G G
G
GGG G
G GG G GG G G
G G
G
GGG GG GG
G G G GG G
GGGG G G G GGGGG G GG G G G G GGGGG G G G GG GGG G
G
GG G GGGGG
G G
G G G G GGGGGG G G G G GGG GG G G GG G G
GG
G
G G G
G G
G
G G
GG
G GG
G G G
GG G
G G
G G
G
G G G GGGG G GG
G G
G GG G G
GG G GG G G G G G G G G
G G G GG G G G G GG GGG GG G G
G GGG G G GG GG G
G G G G G GGGG GGG G
G GGGGG G GG G GGG GGGG G GG G G G G GGGGGG
G GGG G G G GG G GGG GGG
G
GG G G GG G G GG GG
G GG
GG
GGG G
GGG
G G
GGGG G GGG G G GGGG
G G G GG G G G GG G G G G G GGG
G G GG G
G GG G G
G G
G G GG
GGG G
G GG G
G G G
GG
G G G G
G G G GG G GGG GGGG GG GG GG G GG
G G G G
G G G G GG G
GG
G GG G G G GG G G G G G GGGG GGG G G G
GGG
G
G GG G G
G GGG
G GG
G GGGGG G GG GG G GG GG GGG GG G GG G G G GG GG
GG G G G GGG
G G
G GGG GG G GG G
G GGG
G G
G G
G G GGG G
G G
GG
G G G GG G G
G GGG G G
G G GG G G GG G G G G G GG G G GG GG GG G G GG GG
GGG
G G G GG G G GGG G
GG
G G
GG G G GGGG G GG
GGG G
G G GG G G G G GG G
G GG GG G G GGG
G GGG
G G
G GG G
G G
G G GG
GG G GG
G GG
G
G G G
G GG G G
G
GG GGGG G GGGGG
GG G GG G GG G G G GGG GG GGGG G
GGG G G
G GGG
GG GGGGGG
GG G G
G GG G GGG G G
G GGG G G G
G G G G G G G
G
GGG G G GGGG GGG GGG G G G GG G
G
G GG
G G GG GGG G
GG GG
G G G G
G G
G GG G G
G GGG GGG G G G
G G G
G GG G G G
GGG G G G GGG
G G GG
G G G G
G
GG G GGGGGG
G G
G GG G
GG GGG
G GG G
GG
G
GGGGGGG G G GG G G G G
G G G GG GG GG GGG G G GG GG
G GGG G GG GG G G G GG G
GGG GGG
G G G GG
G G
G G G GG
G
G
G
G G GGG GG
G GG G G
G G GG
G GGGG GGGG G G GGG G GG G G G G G G G
G G GG G GGG GG GG G
GG
G GG GGG G
GG G GGG G G GGGG GG GG
G G G G GG G GGG G G GG G GG GGG
GG G G G G G G
G G G G G
G GG G GGG G
G G
G GGG G GGG G
G GG
G G G G G
G
G G GGGG GGGG
G G G
G
GG GG GG G
G G
G GGG
GGGGG GG
G GG G G G G GGG G GG
GG G G G G
GGG G
GG G G GGGG G G GG GGG GGG G G G
G G GGGGG G G G G G G GGGGG G G G G G
G G GGGGGG G
G
GG G G G G G GG G G GG
G GGGGGG GGG G G G
G G G
G GG G
G G
GG G GGGG G GGG G GG G
G
G GG GG G G
G GG G G G
G G GG G G
G G GGGGG
GG
G G G
G GG G GGGG GGG GGG G GG G GGG GG G GG G
G
G GG G
G
G G
G G GGGG G GGGGG GG
GG
G G GGG G GG GG GG G G G
G G G G G
GGGGG G G G
GG G G GG G G GG G GGGGG GG
G
GG G G G G G
GGG
G G GGGG GGG GG G
G GG G G G GG G GG GG GG G GG G
GG G GG G GGG G G
GGG G G
GG GGG G G
GGG G G GG G G GGG G
G
G GGG G
G
2 0 2 4 6
0
2
4
6
8
tuning=0.01
Initial value=(0,7)
Acceptance rate=93.3%
G
GG GGGGG
G
G
G G
G
G
GG
GGG
G GGGGGGG
GG
G
G
GG
GGGGGGGGG G
G
GG
GG
GG
G
G
GGG
GG G
G
G
GG
GG
G
GG G
G
GG
G
G
G
GGGG G
G
GG
GGG
GGGGG
G GG
G
G
G
GG GGGGG
GGG
G
GG G
GG
G
G
G
G
G
GG G
GG
GGG
G G
GG GGG
GGGG
GG
G
G
GG
G
G
G
GGGG
G
G GGGG
GGGG
G
GGGG G
GGGGGG
GGG
G GGG
GGG
G G
G
GGG
GGGGG
GGGG
G
GGG G
GGG
GG
G
GGG
G
GGGG G
GGG
G
G
G
GG G
GGGG
GGG
G
GGG
G
G
GGGGGGGG
G
G
GGG
GGGGGGGG
G G
G
G
GGGGG GG
G
GG
GGGG GGG
GGG
G
GG
GG
G
GG
GGG
G
G
G GG
G
G
GGG
GGGG
G G
GGG
GGGG
GGGGGG
G
GG
GG
G
G
G G
G
GG
GGGGG
G G
GG G
GGG
G
G
GG
G
G
G
GG
GGGGG
G
GG
GG
GGG
G
G GG GGG
G
G
GGGG
G
GGG GGG
GG
G
GG
G
G
GGG
GGG
GGG
GG
G GGGGG
GG
G
GG
G
G
G
GGGGG
G
G
G GG
GG
G
GG GG
GGG
GG
G
G
G
G GG
G G
GG
GGG
GGGG
GGGGGGGGGG
GGG
GGGG
GG
GGG
GGG
G
GG
GGGGG
G
GGGGGG
GG
G G
G
G
GGGG G
GG
GG
GGG
GGGG
GG
G
G
G
G GGG
GG
GG
G
G
G G
G G
GG GG
GGG
GGG
GGG
GG
GGGG GGG
G G
G
G
GG
G
G
G G
G
GG
G
G
G
GG
GG
G
G
G GG
GGGGGGGG
G
G
GG
G
GG G
G
G
G
G
GG
GGGG
GGGGGGGG
G
GGG
G
GG
GGGGG
GGGG
GG
G
G
G
G G
G
GG
G
G
G G
GGGG
GGG
GGG
G
GGGGG
G
G
GG
G
G
GGG
GG
G
GG
G
G
G
G
G
GG
G
G
GG
G
G
G
G
G
GGGGGG
GG
G
G
G
GG
G
G
G
GGGG
G
GG G
G
GG
G
G
GG
G
G
GG
GG
GGGGG
G
GG
GGGG
GGGGGG
GGGGG
GGG
G GG
G
G
G GGGG
G
G
G
GGG G
GGGG G
G
GGG
G
GG
GG
GGGG
GG
GGGG
GGG
G
GG
G
GGG GG
GG
G G
GGGGGGG
GGGG
G
G
GG G
G
GG
GGGG
G
G
G
GGGG
G
G
G
G GG
GG
GG
G
GG
GGGG GGG
G
G
G
GGGGGG
G G
GGG
GG
G
GGGG
GGGGG
GG
GG
G G
G
GGG
G
GG
G
GGGGGG
G
G
GGGGG
G
G
GGGG
G
G
GGGGG
GGG
G
G
G
GGG
GG
GGGG
GG GG
GG G
GG
GGG
G
G
G
GGG
GGGGGG G G
G
G
G
GG
GGGGGG
G
GGG
G
G
GGG
GGG
GGGGG
GGGGGGGG G
G G
GGGGG GGGGGGG
GGGGGG
GGGG
GGGGG
G
G G
G
GG
GG
GG
G
G
G
GG
GGGG
GGGGGG
GG
GGG G
G
G
G
GGGGG
GG
GG
GG G
G
GGGGG
GGG
GGGG
G GG
G
GG
GGGG
G
G
GG
GG
GG G
GGGG
G
G GG
G
GGGGG
GG
G
G
GGGG
GGGG
G GG
G GGGGGGGGG
G
G
G
GG
G
GG
GGGG
G
GGG
GG
GG GGGGGGGG
G
GGG
GGGGG
G
G
GG
G
GG
GG GGG
GG GG
G
G
G
G
GG
GG
GG GGG
GG
G
G G
GG
G
GGGG
GGGG
GGGG
G
G
G
G
GGGG
GGGGGG
G GG
G GG G G
G
G
G
GGGG
G
G
GGGGG G
G
G
G
GGGG
G
G
G
G
GGG
GGGGGGG
GG
G
G
G G
G
GG
GGG
GG
G
G
G
G
GG
G
GGG
G G
GG
GG
GGG
GG
G
GG GGGGGGGGGG
GG
GG
G
G
G
G
GG G
GGGG G G
GGGGG
G
GGGG
G
G
GG
GG
GG
GGGG
G
GGGG
G
G
G
GGG
G
GG
GGGG GGGGG
G
GGG
GG
G
GGGG
GG GGG
GG
GGG
G
G
GGGGG
G
GGG GG
GGGG
GGGG
G
G
G
GG
G
GG
G
GG
GGG GG
GG
G
GG
G
GGGGGG
G
GG
G GGGGG
G GG
GG
GGGGGGGGGG
GG GGGGGG
G
GGG
G
GGGG
G
G
GG
GGG
G
G
GG
G
G
GG
G
G
G
G
GGG
G
G GGGG
GGG
G
G
G
G G
GGGG
G
GGG GGGG
GG
G G
GG
G
GG
GGG
G
G
G
G
G
G
G
GGG G G
G
GG
G
G
G
G
GG
GGG G
G
GG
GG G
G
GG GGGGGGGGGGGG
G
G
GGG
GG GGGGGGGGG G
G
GGGG
G
G
GGGG
GG G
GGGGG
GG
G
GGGGGGG
GG
G
GGGG
GGG
GGGGG
GGGG
GGG
GGGGGG
GG
GG GGGGGG
G
G
G
G
GGGGGGGG
GGGGG
GG
G
G
G
GGG
GGG
G
G
GGGGG G
G
GG G GGG
G
GGG
GG
GGG
GG
GGGGGG
GG G
GG
G
GG
G
GG
G
GGGG
GG
GG
G
GGG
GGG
GGGGGGG
G
GG
GG
G G GG
G
GGG
G
GGG
G G
G
G G
GGG
G GGGG G
GG
G
GG
GG
G GG
GGGGGG G
GGG
G
GG
G
G
G
GG GG
GG
GG
G
GG
GG
G
GGGG
G
G
GG G
GGGG GG
GG GGG
G
GGG
G
GG
G
GGGGG
G
GGG G
G
G
GG
GG
G G
GGGGGG
GG
GG
GGGG
GGG GG
G
GGGGG
GGGGGG
G
G
GGGGG GGGG G
G GGGG
GGG
GG
GG GGGG
GG
GGGGG
GG
G
GGGGGG
G GGG
G
G
GGG G
GG
GGGGGG
GGGGG
G
G
GGG
G
G
GG
GG
G
GG
G
GG
G
GG
G
GGGGGG
G
GG
G
GGGGG
G
G
GGGG
GGG
GG GG
G
G
G
G
G G G
GGG
GGGGG
G
GG
G
G
GGG
G
GG
GG
G
G
G
G
G
GGGGG
G G
GGG
GGG
G
G
G
G
G
GGG
G
GGGGG
G
GGG
GGGG
GG
GG
G
G
GGGGGGGGG
GG
G
G
GGGGGG
GGGGG
G
G
G
GGG
GGG
GGGGG
GGG
GG
GGGG
GG
G
GGG
GGGG
G
G
GGGGG
G GG
G
GGGGGGGGG
G
G
G
G
G
G
G G
GG
GG
G
G
G
GGGGGG
G
G G
G
G
GG
G
GG
G
GG GG
G
G
GGG GGG
GGG
GG
GG
GG
GG
GGGGGG
G
GGG GG
GGG
G
G
G
G
GGG
GG
GGGG
GG
G
GG
GGGGG
GG
G G
G
G
G
G
GG
GGGG
GG G
GG
G
G
GG
GGGG
G
GG
GG GGG
G
G
GG
GG
G
G
G
GG
G
G
GGG
G
G
G
GGGG
GG
G
G
GG
G
GG
GGG
G
G
G
G G
G GG G
G
GGG
G
G
GG
G
GGGGGGGGG
GGG
GGGGG
G
GGGGG
GG
G
GG
G
G
GGG
G
G
G
G
GG
G
G
G
G
GG GG
GG
G
GG
G
G
G G
GG
GG
GG
GGG
G
G
GGG
G
GGG
GG
G
G
GG
G GG
GG
G GGGG
G
G
G
GGG
GG
GGG
G
G
GGG
G
GGGGGGG G G
G
GG
G
G
G
GG
GGGG
G GGGGG
G
GG
G
GG
G
GG
G G G
G
GGG
G
GG G
GGGGG
G
G
G
GG
G G
G GG
G
G GGGG
G
G
GG
G
GG
GGG
G
GGG
GG
G
GG
G
G
GGG
G
GG
GGGGG G
GGG
GG
G
G
G
GGGG
GG
GGG
G
GG
GG
GG
GG
G
GGGGGGG
G
G
GGG
GGG
GGGG
G
GGG G
G
GG
GGGGG
G
GGGGG
G GG
GGGG
G
GGG
G
GG
G
G
GG
GG
G GG
G
G
GGG
G GG
GGGG
GG
G G GG
GG
GG
GGG
G
GG
G
GGGGG
GG
G
GG
GG
GG
GGGG
G
GG
G
GGGGGGGG
GGGGG
GGG
G
GG
GGGG
G
GG
G
GG
G
GG
G
GG
GG
G
GG
G
G
G
GG
GGG
G
G
G G
GGG
G
GGG
GGG
GGG
G
GGG
GG
G
G
G
G
G
GG
G
G G
GGGGG
GG
GG
GGG
GGGG
G
G
GGG
GGG
GGGGG
GGG
G GGGGGG
GGGG G
GGGGG
GGGGG
G
G
G
G
G GGG
G G
GGGG
GGGGGGGGGG
G
G
GG
G
G
GG
G
G
G G
GGG GGGG G
G
GGGGG G
G
G
GG
G
GG
GGG
GG
G GG
GGG
GG
G GGG
GG
G
GGG
G
G
G
G
GGGG
GG GG
G
GGGG
G
GGGG
G
GG
G
G
GGG
GG
G
GG
G
G
G
G
G
G
GGG
G G
G
GGG
GG
GGG
GGGG
G
GGGGGGG
G
GG
GGG G
G
GGG
G
G GGG
G
GG
G
G
GG
GGGG
G
G
G
G GGG
G
G
GG
G
G G
G GG
GGGG
G
G
G
G
GGG
G
G
GGGG
G
GG
GG
G
GGG
GGGG
GGG GG
GG G
G
G
G GGGG
G GG
G
G
G
G
G
G
G
G
G
GGG
G G
G
G
GGGGG
G
G
G
G
G
GG
G
GGGG
GGG
GGGG
GG
G
G
G
GG
G
G
G
G
GGG G
GG
GGGG
GGG
GGGGG
GG GGGG
GG
GG GGGGG
GGGG GGGG
GGGG
GGGG
G
G GGG
G
GG
GG
GG
GGG
G
GG
G
GG
GG
GG
G
GG GGG
GG
GG
G GGGG
G
GG
GGG G
G
GG
GG GG GG
GG
G G
G
GGGGG
GG
G
GG
G
GG
G
G
G
GGG
G G
G
G
G GGGG
GGGG
G
G
G
G G
GG
GGGG
G
G
G
GGGG
G
G
G
GG
G
GGGGGG
G G
GG GGGG
GG GG
GG
G
G
GGGGGGGG
G
G
G
G
GG
G
GG
G G
G
G
G
G
GG
G
G
GG
GGGG
GG
GG
GG GGGG
G
GG G
G
GG G
G GG
GG G
G G
G
GGGG
G G GGG
G
G
GGGG
G
G
G
GG
G
G
GGGGGG
GG
GGG GGGG
GGG
G
G
G
G
GGGG
G
G
GG
G
GG
GG
G
G
GGGGG
GGG
G
GGG
GGGGG
GGG
G
GG
GG GG
GGGG
G
G
GGG
G
GGGGG
G
GG
G
GGGGGGGGGG
GG
G
G
GGG
GG
GG
G
GG
G
G
G GGGG
G
G
G
G GG
GG
GG
GGGG G
GG
GGGGG
GGG
G
GGGGGG
GGG
G
G
GGGG
GG
GGGGG
G
G
G
G
GG
GG
GG GG
G
GGGG
G
GG
GGG
GG G
GGGGG
GG
GG
G
G
G
G
G
G
G
G
G
GGG
G
GGG
G G
G
GG
GGG
GGG
G
G
G
GGGG
GG G
G
G
GGG
GG
GGGGGG
GG G
G
G
G
G
GGGG
G
G
GGGGGG
G
GG
GG
G
GG
G
GGGG
GGG
GG G
GG
GG
GG
G G
G
GG
G
GGG
G G G
GGGGGGGG
GGGG
G
GGG
G GG
G
G G
GG G
G
G
G
G
GGG
GGGGG
GGGGG
GGG
G
GGG
G
GGG
G
GGGG
G
GGGGGG
GG
GGGG
GGGGGGGGGGG
G G
G
G G GG
GGG
G
G
GGGGGGGG
G
G
G
GGG
G
G
G
GGGGG
GG
GG
G
G
G
GGG
G
GGG
GGG
G
G
G
GG
G
GG
G GGGG
G
G G
G
G G
G
GGG
GG
GGG
G GG
G GGG
G
GGG
GGGGGGGGGGG
GGGGGG
G
GGG
G
GG
GGG
G
GGG
GG
GGGGGGG
G
G
G
G G
GG
G
G
G GGG
GG
GGGGG GGG
GG
G
GG
GG
GG GGG
GG
GG
GG
G
G
G
G
G
G
G
G
G
GGGG
GG
G
GG
GG GG
GG
GG
G
GGGGGGGG GGGGGG
G
GGG G
GGG
GGG
GGGGG GG
G
GG
GG
GGG
GGG
G
GGG
G
G GGG
G
GG
GG
GG
GGGG
G
G
G
G
G
G
G
GG
G
G
GG
G
G
GG
GG
G
GG
GGGG
GG
G G
G
GGGGG G
GGGGG
GG
G
GGGGG G
GGGG GG
G
GG
GG
G
G
GG
G
G
GG GGG
GG
GG
G
GGG
G
G
GGGGGGG G
GG GGGGGG G
GGG
GGGG
G
G
GGG
G
G
G
GG
G G
GG
G
GGG
GGGGGGGGG
GGG
G
GGGG
G
GGGG
G
G
G
G
G
G
G
G
GG
G
GGG
GG
G GG
GG
GGGG
GG
GGGG
GGGGGGGG
G
GGG
G
GGGGGG
G
G
G
G
G
G
G
GG
G
G
GG
GG
G
G
G
G
GGG
G
G
GGG
GG
G
GGG
GGG
GGG
G
G GG
GGGGGG G
G
GG
GG G
GG GGG
GGG
G
G G
G
G
GG GGGG
GG
GGGGG
GG
GG
GGG
G G
GG
GGGG
GGGG
GGG
GG
GGGG
GG
GGGGGGGGG
G
GGGG
G
G
G
GG G
G
G
G
GGG
G
GG
G
G
GG
G
GG GGG
GG
G
G
GG
G
G
G
GGGG
GGGGGG
GG
GG
GG
GGGG
GGGG
G
GG GGG
GGGG
G
G
G G
GGGGGG
G
GG GG
G
G
GGG
GG
GGGG
G
GG
GG
GGG
GGGGG
G
GGGG
GG
G
G
GGGGG
GGGG GG
GGG
G G
G
G
G
GGGG
GGGGGG
G
GG
GGG
GGGGGG
G
GGG
GG
G G G
GG
GGGGGG
GGGGG
GGGGGGG
G
G
G
GG
G GGGG GG
GG
G
G
G
G
G
GG
G
GG
G
G
GG
GGGGG
GG
G
G
GG
GGGG G
G
G
GG
GGG
GG
G
G GG
G
G
G
GGG
GG
G
GGGGGG
G
GG
G
GGG G GGGG
G
G
GG
GG
G G
G
GG
G
G
G
GGGGGG GGGG
G
GGGGGGGGG
G
G
G
G G
GG GGG G
G
G
G
G
GGG
GGGG
GG
GG
GG
GG
G
GGGGGG
G
GGGGG G
G
G
G
GG
G
GG
G
GG
G
GG
G GGG
G
G GGG GGGG
GGG
GGG
GG G G
GGG
GG
G
G
G
G
GG
G
G
G
G
G
GGG
GG
G
G
GG
G
GGGGG
G
G
G
GG
GGGGGGG
GG
G
G
G
G
GGG
G GG
G
GG
GGGGGG
G
G
GGGGGGGGGGGGGGGG
GGGGGG
GG
G
G
GGG
G
GGG G
G
GG
G
GG G
G
G GGGG
GGGGGGG
G
G GGG
GG
G
G
GGG
GG
GG
GG
G
GG G G G
G
GG
GG GG
G
GGG
GGGG
G
GGG G
G
GG
GGGG
G
GGGG G
G
G GG
GGG
GGG
GGG
GG
GG
G
GG
GG
G
2 0 2 4 6
0
2
4
6
8
tuning=1
Initial value=(0,7)
Acceptance rate=49.3%
GGG
G
GGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGG
GGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGG
GGGGGGG
GGGG
GGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGG
G
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGG
GGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGG
GGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGG
GGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGG GGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGG
GGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGG
G
GGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGG
GGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGG
GGGGGGGGGG
GG GGGGGGGGGGGGG
GGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGG GGGGGGGGGGGGGGGGGGGGGGGG
G
GGGGGGGGGGGGGGG
GGGGGGGGGGGGG
GGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGG GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGG GGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGG
GGGGGGGGGGGGG
2 0 2 4 6
0
2
4
6
8
tuning=100
Initial value=(0,7)
Acceptance rate=2.4%
Random walk Metropolis: autocorrelations
0 50 100 150 200
0
.0
0
.2
0
.4
0
.6
0
.8
1
.0
lag
Tuning=0.01
Initial value=(4,5)
0 50 100 150 200
0
.0
0
.2
0
.4
0
.6
0
.8
1
.0
lag
Tuning=1
Initial value=(4,5)
0 50 100 150 200
0
.0
0
.2
0
.4
0
.6
0
.8
1
.0
lag
Tuning=100
Initial value=(4,5)
0 50 100 150 200
0
.0
0
.2
0
.4
0
.6
0
.8
1
.0
lag
Tuning=0.01
Initial value=(0,7)
0 50 100 150 200
0
.0
0
.2
0
.4
0
.6
0
.8
1
.0
lag
Tuning=1
Initial value=(0,7)
0 50 100 150 200
0
.0
0
.2
0
.4
0
.6
0
.8
1
.0
lag
Tuning=100
Initial value=(0,7)
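Only the panel labels survive from these figures, so a minimal random-walk Metropolis sketch is given here to connect the tuning constant to the acceptance rates quoted above. The bivariate normal target and all names below are stand-ins (assumptions for illustration, not the posterior or code actually used on the slides); tuning plays the role of the proposal standard deviation.

import numpy as np

def rw_metropolis(log_target, theta0, tuning, n_iter=5000, seed=0):
    """Random-walk Metropolis with N(0, tuning^2 I) increments."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    logp = log_target(theta)
    draws = np.empty((n_iter, theta.size))
    accepted = 0
    for j in range(n_iter):
        prop = theta + tuning * rng.standard_normal(theta.size)  # symmetric proposal
        logp_prop = log_target(prop)
        if np.log(rng.uniform()) < logp_prop - logp:              # Metropolis acceptance step
            theta, logp = prop, logp_prop
            accepted += 1
        draws[j] = theta
    return draws, accepted / n_iter

# Stand-in target: bivariate normal centred at (3, 4.5) -- an arbitrary illustrative choice.
def log_target(theta):
    return -0.5 * np.sum((theta - np.array([3.0, 4.5])) ** 2)

for tau in (0.01, 1.0, 100.0):
    _, rate = rw_metropolis(log_target, theta0=(0.0, 7.0), tuning=tau)
    print(f"tuning={tau}: acceptance rate {rate:.1%}")

Qualitatively this reproduces the pattern in the panels: a tiny tuning value accepts almost every proposal but moves very slowly (hence slowly decaying autocorrelations), while a very large one is rejected most of the time.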
Independent Metropolis
$q(\theta, \vartheta) = f_N(\vartheta; \tilde{\theta}_3, \tau I_2)$ and $\tilde{\theta}_3 = (3.01, 4.55)'$, where $\tau$ is the tuning constant varied in the panels below.
32
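For completeness, a minimal independence Metropolis sketch under the proposal just described, with the proposal covariance taken as tuning times the identity; the target density is again the stand-in log_target from the previous sketch, and all function names are assumptions rather than code from the course.

import numpy as np

def independence_metropolis(log_target, theta_tilde, tuning, theta0, n_iter=5000, seed=0):
    """Metropolis with the fixed proposal q(.) = N(theta_tilde, tuning * I)."""
    rng = np.random.default_rng(seed)
    theta_tilde = np.asarray(theta_tilde, dtype=float)
    d = theta_tilde.size

    def log_q(theta):  # log proposal density, up to an additive constant
        return -0.5 * np.sum((theta - theta_tilde) ** 2) / tuning

    theta = np.asarray(theta0, dtype=float)
    draws = np.empty((n_iter, d))
    accepted = 0
    for j in range(n_iter):
        prop = theta_tilde + np.sqrt(tuning) * rng.standard_normal(d)
        # acceptance ratio corrects for the non-symmetric, state-independent proposal
        log_alpha = (log_target(prop) - log_target(theta)) - (log_q(prop) - log_q(theta))
        if np.log(rng.uniform()) < log_alpha:
            theta = prop
            accepted += 1
        draws[j] = theta
    return draws, accepted / n_iter

# e.g. draws, rate = independence_metropolis(log_target, (3.01, 4.55), tuning=5.0, theta0=(0.0, 7.0))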
G
GG
G
G
G
G
G
G
GG
G G
G
G
G
G
GGG
G
G
G
GGGGGGGGGGGGGGGGGGGGGG
G
GGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGG
G
G
G
G
G
G
G
GG
G
G
GG
G G
GGG
G
G
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GG
G
GG
G
G
G
GGG
G G
GG GG
G
G
G
GG
G
G
GGGGGGGGGGGGGG
G
G
G G
GGG
GGGG
G
G
GGG
G
GGG
G
GGGGG
GG
GG GGG
GG
GG
G
G G
G
G
GGG
G
GG
G
GG
G
G
G
G
G
G G
GG
G G
G
G
G
G
G
GG
GG
G
G
G
GGGG
GG
GG G
GGG
G
G
G
G
GG
GGGG
G
GGG
GG
G
GG
GG
GGGG
GG
G
G
GGGG
G
G
G G
GGGGGG
G
G
G
G
GG
G
G
GGGGGGGGGGGGGGGGGGGGGGG
GGGGG
G
G
G GGG
G GG
GGGGGGG
G
G
GG
GGGG
GGG
G
GG
GG GG G
GGGG
G
G
G
G
G
GGGG
G
GG
G GG
G
G
G
GGGGG
G
GG
GG G
GGG
G
GG
G
G
GG
GG G
G
GG
GGGG
GGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGG
G
GGGGG
G
G
GGG
G
GG
G
G
G G GGG
G
G
G
GGGGGGGGGGGGGGGGGG
GGG
G
GGGGG
G
G
GGG
G
GG
G GGG
G
GGG G G G
GG
G
G
GG
G
GG
GGGGGGGGGGGGGGG
G
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
G
GGGGG
GGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGG
GG GG G
G
G
G
G
GGGGGGG
GGGG
G
GGGGG GGGGGG
GG GG
G
GGGG
G GG
G
GG
G
G
G
G
GGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGG
G
G
G
GG
G
G
GGGGG
GGG
G
GGGG
G
GG G
G
G
G
GG
G
G
GGGGGG
G
G
G
GGG
GGG
G
GG
G
GGG
G
G
G
G
G G
G
G
GG
G GG
G
GG
G
G
G
G G G GG
G
GGG
G G
G
G
G G
G
G
GGG
G
GG
G
G G
G
GG
GG
G
GG
GG
GG
GG
GG
G
G
G
GGG
G
GG
G
G
GGG G
G
GG
G
GG
G
G
GG
G
GG
GGGGGGGGGG
G
GGGGGG
G
G
G
G
G
G
G
G
G
GG
G
GG
G
GGGG
G
GG
GG
G
G
GG G
GGG
G
G
G
GGG
GGG
G
G G
G
G
G
G
G
G
GG
GGG
G
G
G
G
G
GGG
G
GGG
G
G G
G
G
GG
G
GGGGGG
G
G G
G
GGG
G
G
G
G
GG
G
G
G
G G
GGG
G
GG
GG
G
G
G
G
G
G
GGGGGG
GG
GG
G
G
GGG
G
GGGGGG
GGGGGGG
G
G
G
G
G
G
G G
GGGG
GGGGGG
G
G
GGGGGGGGG G
G
G
GGG
GGGGGGGGGGGG
GGG
GG
G
GGGGGGGGGG
GG
GGG G
G
G
G
GGGGGGGGG
GGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GG
G
GGG
G
GG
GG
G
GG
G
GG
GG
GG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
G
G
G
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
2 0 2 4 6
0
2
4
6
8
tuning=0.5
Initial value=(4,5)
Acceptance rate=9.9%
Acceptance rate=9.9%
GGGGGGGGGGGGG
G
G
G
GGGGGG
GG
GGGGGG
GG
GG
GGGGG
GGGGGGGG
GGG
G
GGGG
G
GGGG
GGGG
G
GGGGGG
GGG
GGG
GGG
G
GG
G
GGGG
GGGG
GG
GG G
G
GG
G
G
GGGG
GG
G
GG
GGGGG
G
GGG
GGGGG
GGGGG
GG
GGGG
GGGG
G
GG
GGGG
GG GG
GGGGGG
G
G
GG
GGGGGGGG
G GGG
G
GGGGGG
GG
GG
G
GGGGGGG
GG
GG
GGGGGGGGGGG
GGGGGGG G
GGGG
GGGGG
GG
G
GGGGGGGGGGGG
GGGGGGGGG
G
GGGGG
G GG
GGGGGGGG
GG
G
G
GGGGGGGGGG
G
GGG
GGGG
G
G
GGGGGG
GGG
GGG
G
GGG
GGGGGGGG
G
GG
G
GGGGGGGGGGGG
G
G
GG
G
GG
G
G
GG
GG
G
GGGG
GGGGGGGGG
GGG
GG
GGGG
GGGGGGG
GGG
G
GG
G
G
G
GGGGGGGG
G
GGGG
GGG GGGG
GGG
GGGGG
GGG
GGGGGGGGGGG
GGGG
GGGGGGGGG
G
GGGGGGG
GGGGGG
GG
GGG
GGGGGGGGGGGGG
GGGGGGGGGGGGGG
GG
GGGG
GGGGGGGGG
G
GGGGGGGGGG
GGGGGGG
GGGG
GG
GGGGGGG
GG
GGGGG
G
GGGGGG
GG
GGGGGGGG
GGGG
G
GGG
G
GG
G
GG
G
G
GGGGGGGG
GGGGGGG
GGGG
GGGGGGGG
GG
GGGGG
GGGGGGGGGG
GG
GGGGGGGGGGGGGG
G GGGGGG
GGGGGG
GG
GGGGGG GGGGGGGGGG
G
GG
GGG
GGGGGG
G
G
G GG
GGGGG
GGGGGGGGGG
G
GGGGGGGG
G
G
G
GGG
GG
GG
G
GG
GG
GG
GGGGGG
GGGG
G
GGGGGGGGGGGGG
GG
GGGGG
GGGGGGGG
GGG
G
G GG
G
GGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGG
G
GGGGGG
GG
GG
GGGGGGGG
GG
GGGGGGGGGGGG
GGG
G
GG
GGGGGG
GGG
GGGGG
GGGGGGGGGGG
GGGGG
GGGG
GGG
GGGGGG
G
GG
GG
GG
G
GGGGGGGGGG
GG
GGGGG
GGG
G
G
GG
GG
G
GGG
G
G
GGGGGG
G
G
GGGG
GG
GGGG
G
GGG
GGGG
GGGGGGGG
G
GG
G
G
GGG
G
GGG
G
GGG G
GG
GGGGGGGGGGGGGGGGGGG
GGGGG
G
GGGGGGGGGGGGGGG
GGGGG
GGGG
G
GG
GG
GGGGG
G
GGG
GGGGGGG
GGGG
G
GGGG
G
GG
GGG
G
G
GGG
GGG
G
GGGG
G
G
GG
G
G
G
GG
G
GGG
G
G
GGGGGGGGGGG
GGGG
G
G
GGG
GG
GGGG
GGGGGGGGGGGGGGGGGGGGGG
GGGG
GGGGGGGGGGG
GG GGG
GGGGGGG
GGGGGGG
G
G
GGG
G
GGG
G
GGG
GG
GGG
G
GGGG
GGGGGG
GGGGGGG
GGGGGGGG
GG
GG
GGGGGGGGG
G
GG
GGGGGGGG
GG
GG
G
GGG
GGGGGGGGGG
GG
GG
G
G
GG
G
G
GG
G
G
GG
G
GGGGGGGGG
GGGGGG
GG
GG G
GGGG
GGGG
GGGGG
GGG
GG
G
GGGGG
G
GG
GGGGGG
GGGG
G
GGGG
G
GGG
G
GGGGGGGGGGGGGGGGGG
GG
G
GG
G
GG
G
GGG
G
GGGGGGGG
G
G
G
G
GG
G
G
G
G
GGGGGG
GGGG
G
GG
GGG
G
GGGG
GG
G
GG
GGGGG G
GGG
G
GG
GGGGG
G
GGGG
GGGG
GGGG
GGGGGGGG
GG
G
G
GGGGGGGG
G
GGGG
G
G
GGGGGGG GGG
G
GGGGGG
G
GGGG
GGG GG
GGG
GGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGG
GGGGGGGGGGGG
G
GGGGG
GGGGG
GGGG G
GG
GG
GGGGGGGGG
G G
GG
G
GG
G
GGGGGG
GGG
GGGGGG
GGGGG
GGG
G
GG
GGG
GGGGGGG
GGGGGGGGG
G
GG
G
G
G
GGGG
GGG
G
GG
GGGGGGGG
G
GGGGGGG
G
GG
GGGG
GGGG
G
GG
GGG
GGGGGGGGGGG
GGGGGGG
GGG
GG
G
GGGGGGG
GGG
GGGGGG
GG
G
G
GGGGGGGGGGGGGGGG
GGGGG
G
GGGGGGG
GGGGG
GG
GGG
GGG
GGGGGGG
G
GGGGGGGGGG GGGGGGG
GGGG
GGGGGG
G
GG
GG GGG
G
GGGGGGG
G
G
GGGGGG
GG
GGG
GG
GG
GGG
G
GG
G
GG
G
GG
GGGGGGG
GG
GG
G
G
G
GGGG
GGGGGGGGGG GGGGGGGG
GG
GG
GGGGGG
G
G
GG
GGG
GGGGGGG
G
G
GGGGGGGG
GGGG
G
G
GGG
G
G
G
GGGGGGG
G
G
GGGGG
GGGGGGGGGGGGGGGGGG
GG
GGGG
GGG
GGGG
GG
GGG
GGGGG
GG
GG GGGGGGG
G
GGGG
GGGG
G
G
GGG
GG
GGGG
GG
GG
GG
G
GGGGG
GGG
GGG
G
G
GG
G
GGGGGGGGG
G
G
G
GG
G
G
GGGGG
G
G
G
GGGGGG
GG GG
GGG
G
GGG
GGGG
GG
GGGG
GG
G
G
G
GGGGGGG
G
G
G
G
G
GGG G
G
G
GGG
GG
G G
GG
G
GGGG
GGGGGGGGGGGGGGG
GG
GG
G
GGGG
GGG
GGGGGGG
GG
G
GGGGGG GGGGGG
G
GG
G
G
GG
G
G
G
GG
GGG
GG
G
G
GG
GGGG
GGG GG
GGGGG
GG
G
GG
GG
G
GG
G GGGGGG
GGGGG
G
GGG
GG
G
GGGGGG
GG
G
GGG
GG
G
G
GGGGGGGG
GG
G
G
GGGGG
GGG
GG
GGGGG
GG
GGGGGG
G
GG
GG
GGGGG
GG
GGG
G
G
GGG
GG
GG
GGGGGGGG
G
GG
GGGGGG
GGG
GG
GGGGGGGGGGGG
G
GGGGGGG
GGG
GGGGGG
GGG
G
G
GG
GGGGG
GGG
GGG
GG
GGGGGGG
GG
GGGGGG
G
GGGGG
GGGGGGGGGGGGGGGGGGGG
GGG
GGG
GGGGGGG
G
GGGGGGGGGG
GGGGGG
GGG
GGGG
GGGGGGGGGG
GGG
GGG GGG GG
G
GG
GGG
GG
G
G
GG
GGGGGGGG
GGGGGGG
GGGG
GGG
GGGGGGGG
GGGGGGG
GGGG
GGG
G
GGGG
G
GG
GGG
GGGG
GGGGGGG
G
G
G
GGGGGGG
G
GG
GG
GGGGGGGGGG G
GGG
GGG GGGGGGG
GGGGGG
GGGGG G
GGG
GG
G
G
GG
GGGGGG
GG
GG
G
G
GGGG
GGGGGGGG
GGGG
GGGG
GGGG
GGGG
GG
GG
G
GG
G
GGG
GG
G
G
GGGG
G
G GGGGG
GGGG
G
GGGG
G
G
G
GGGG
GGG
GGGG
GG
GG
GGGGG
GG
GG
GGGG
G
GGGG
GGGGGGG
G
GG
GGGGGG
GGG GGGGGGGGG
GG
GGGGGG
GGGGGG
G
GG
GG
GG
G
G
GGGG
GG
G
GGGG
GGG
GGG
GGGGGGG
GGGGGGGGGGGGGGG
G
G
GGGG
GGG
GG
GGGG
GG
GG
GGGGG
G
GGGG
G
GG
G
GGGGGGGGG
G
G
GG
GGGGG
GG G
GGG
GGGGGGGGGGGGGGG
GGGGGGGGGG
G
GGG
G
GGGG
GGGGGGGG
GGGGGGGGG
G
GGGGGGGGGGG
GGGGGGGGGGG
GGG
GGGGGGGG
GGGGG
GGG
GG
G
G
G
GGGGG
GGGGGG
GGGGGGGGG
GGGGGG
GG
GGG
GGG
GG
GGG
G
G
GG
GGGG
G
GG
GG
GGGGG
G
GGG
G
GGG
GGGGGG
GG
GG
G
GG
GGGGG
GGGGGGGGGG
GGG
GG
GG
G
GGGGGGGG
G
G
GGGGGGG
G
GGG
GGG
GGG
GG
GG
G
GG
G
G
GG
GGG
GGGGGG
GGGG
GGGGGG
GGGGG
GGGGGGG
GGGGG
G
G
GGGG
GG
GG
G
G
G
GGGG
GGGG
GGGG
GGG
GG
GG
GGG
GG
GGGGGG
GG
GGG
G
G
GGG
G
GGG
GG
GGGG
GG
G
GGGGGGGGGGGGGGGGG
GGGGGG G
GGGGG
G
G
G
GG
G
GGG
GGGGGGGGGGGGGGGGGGGG
G
G
G
GGGGGGGGG
GGG GG GGG
GG
GGG
GGG
G
GG GG
GG
GGGGGGGGGGG
GG
GG
G
G
GGGGGGGGGG
G
G
GGGGG
GG
GG
GG
GG
GG
GG
GG
G
GGG
GG
GGG
GG
GG GGGGGGGG
GG
G
GGGGG
G G G
GGGG
G
GGG
GGGGGG
GGGGG
GGGGGGGGGGGG
GGGG
GGG
GGGGG
GG
GGG
GGGGG
GGGG GGG
G
GGGG
G
GG
GGGGGGGGGGGG
GG
GGGG
G
GG
GGGG
G
G
GGG
G
G
GGG
G
GGGGGGGGG
GG
GGG
GG
GGG
GGGG
G
GGGGG
G
G
GGG
GGGG
G
GGG
G
GGGGGGGGGG
G
GGG
G
GG GGGGG
G
GGGGGGGG
GG
G
GG
GG
GGG
G
GGGGG
GGG GG G
G
GGG
GGG
GG
GGG
G
G
GGGGGGGGGG
GGG GG
GGG
G GGG
GGGG
GG
GGGGG
G
GG
G
G
G
GGGG
G
GGGG
G
GGGGGGGG
GG
GGGG
G
G
G
GG
G GG
GG
G
GG
G
G
GG
GGG GG
G
GGG
GG
GGGG
G
G
GGG
GGG
GGGGG
GGGGGG
GGGGGGGG GGGG GG
GGGG
GGGGGGGGGG
GGGG
GGG
G
G GG
GG
GGGGGG
GG
GG
GG
GGGGGGG
GGGGGGGG
GGGG
G
G GG
GGGG GGGGGGG
G
G
G
GG
GG
GG
GG
GG
G
GGGGG
GGGG
GG
GG
GGGGGGGG
GGGG
GGGG
GG
GGGGGGGG
GGGGGGGG
GGG
G
GGG
G
G
GGGG
G
GG
GG
GG
GG
GGGG
GGGGGG
GGGGGGG
GG
GGGGGGGGGGGGGGGGGGGGG
G
G
GGGG
GGGG
GG
GG
GG
GG
GGGG
G
G
G
GGGG
GGG
G
GGG GGG
G
G
G
G
GGGG
GGGGG
GGGGG
GGGGGG
G
G
GGGG
GG
G
G
GGGG
GG
GGGGGG
G
GGGGGG
GGGG
G
GGGG
GGG
GGG
GG GG
GGGGGG
GG
GGG
GGGGGGGG
G
G
GGG
G
G
G
G G
GG
G
GGGGGGGG
G
GG
GGG
G
G
GGGG
GG
G
G
GGGGGGGGG
GG
GGGG
GGGGGGG
G
GGGGG
G
G
G
GGGGG
G
GG
G
G
G
GGGGG
GGG
G
G
G
G
GG
GGGG
G
GGGGGG
G
GG
G
GGG
GG
G
G
G
GG GGGGGGGGGG
GGGGG
G
G
G
GGG
GGGGGGGGG
G
GGGG GG
GGG
GG
G
G
G
G
GG
G G
GGGG
GG
G
GGGGGGGGGGGGGGG
GG GGG
G
G
GG
GG GGGGGGGGGG
GGGG
GG
GG
GG
GGGG
GGGGGGGGGGGG
GGG
GGGGGG
GGGG
GG
G
G
GGGGGGGGGGGG GGGGGGGG
GGGGG
GG
G
GGG
G
GGGGGGGGG
G
GG
G
G G
GGGGGGGGGG
G
GGGG
GG
GGGGG
G
G
GGGGGG
G
GGG
GGGG
G
GGGG
G
G G GG
GGGGGGGG
GG
GG
GG
GG
GGGGGGGGG
G
G
G
G
GGGGG
GG
GGGG
GG
GGGGGGGGGGGGGGGGGGGGG G
GG G
GGGGGGGGGGGGGGGG
GGGGGGG
GG
GGG
GG GGGGGGG
G
G
GGGG
G
G
GGGGG
G
GG
GGGGGGGGGGGG
G
GGGGGGGG
G
G
G
G
GG GGGGGGGGG G
G
GG
G
GGGGGGG
GG
GGG
GG
GGGGGGGG
GGGG G
GG
G
GGG
GGGG
G
GG
GG
G
GGGG
GGGG
GGG
GGG
G
G
GGGGGG
GGGG
GGGGGGGGG
G
GGGGGGGGGG
G
G
GG
G
GGGG
G
GG
GG
G
G GGGGG
GGGGGGG
GGG
GGGGGG
GGGGGG
G
GGG
GGGGGGGG
GG
GGGG
GGG
GGGGGGGG G
G
GG
GGGGG GGGGGG
G
GGGGGGGG
GG
GG
GG
G
GGG
GGGGGGGG
GGGGGG
G
GGGGGGGGG
GGGGGGGG
G
GGGGGG
GG
GGGG
GGG
GGGGGGGGG
GG
GGGG
G
GG
GGGGGGG
G
G
G
GG
G
G
GG
GGGGG
GG
GG
G
GGGGGGGGG
G
G GGGG
GGG GG
G
GGG
GG
GGG
G GGGGGGGGGGG
GGGGG
GGGGG GG
GGGGGGGGGGGG
GGGGGGGGG
G
G
GGG
G
GGGGG
G
G
GGGGG
GG
GGGG
G
GGG
G
G
GGGGG
GGGGGGGGGG
GG GGG
G
GGGG
GGG
GGG
GGGGGGGGG
G
GG
G
G
G
2 0 2 4 6
0
2
4
6
8
tuning=5
Initial value=(4,5)
Acceptance rate=30.9%
Acceptance rate=30.9%
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGG GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
G
GGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGG
GGGGGG GGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
G
GGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGG
GGGGGGGGGGG
GGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGG GGGGGGGGGG
GGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
G
GGGGGGG
GGGGGGGGGGGGGGGGGGGGGG
GGGGGGGG
GGGGGGGG
GGGGGGG
GGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGG
GGGGGGGGGG
GGGGGGGGGGG
GGGG
GGGGG
GGGGGG
G
GGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGG
GGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGG
GGGGG GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGG
GGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGG
GGGG
GGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGG GGGGGGGGGGGGGGGGGG GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGG
GGGGG
GGG
G
GGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGG
GGGGGGGG
GGGGGG
GGGGG
GGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGG GGGGGGGGGGGGGGGGGGGGGGGGGG
GG
GGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGG
GG
GGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGG
GGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGG GGGGGGGGGGGGGGGG
GGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGG
G
GGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGG
GGGGGGGGGGGGG
GG
GGGG
G
GGGGGG
GGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGG
GGGGGG
GGGGGGGGGGGGGG
GGGGGGGGGGGGGGG
GGGGGGGGGGGG
GG
GGGGGGGGGGG
GGG
GGGGGGGG
GGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG GGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG GGGGGGGG
GGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGG
GG
GG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGG
GGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGG
GGGGG
GGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGG
GGGGGG
GGGG
GGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGG
GGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGG
GGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGG
GGGGG
GGGG
GGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGG
G
GGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGG
GGGG
GGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGG
GGGGGGGGG
G
GGG
GGGGGGGGGGG
GGGGGGGGGG
GGGGG
GGGGGGGGGGGGGG
GGGGG
GGGGGGGGG
GGGGG
GGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGG
GG
GGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGG
GGG
GGGGGG
GGGGGGGGGGGGGGGGGGGGGGG
GG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGG
GGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGG
GGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
G
GGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGG
GGGGG
GGGGGGGGGGGGGGGGG
GGGGGGGGGGGGG
GGGG
GGGGGGGGGGGGGGGG
GGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGG
GGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGG
G GGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGG
GGGGGGGGG
GGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGG
G
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGG
GGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGG GGGGGGGGGGGGG
GGGGGGGGGGGGGGGG
G
GGGGGGG
GGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGG
GGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
2 0 2 4 6
0
2
4
6
8
tuning=50
Initial value=(4,5)
Acceptance rate=5%
Acceptance rate=5%
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
G
G
GG
G
GGGGGG
G
G
G
G
GG G
GGGGGGGGGG
GG
G
G G
G
G
GGG
GG
GG
GGG
G
G
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
G
G
GGGGGGGG
G G
G
G
G
G
G
G
G
GGGG
G G G
GG
GGG
GGG
GG
G
GGG G
G
GGGGGGGG
G
G
G
GG
G
G G
GGGGGGG
GG
G
G
GGG
G
GG G
G
G
G
G
GGGG
G
G
G
G
GGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGG
G
GGG
G
G
GGGG
GGGGG GGG
G
GG
G
G
G G
GG
GGG
G
G
GGGGG
GGGGGGGGGG
G
GG
G GG G
G G
G
GG
GG
G G G
G
G
GGGGGGGGGGGGGGGGGGGGGGGG
G
GG
G G
GG
G
G
G GG
G
G
GG
GG
GGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
G
GGGG
GG
G
G
G
GG GGGG
GGGGG
G
GG
G
G
GG
GGGGGG
G G
G
GGGGGGGG GG GGG
GG
GG
G
G
GGGG
G G
G
G
G
GG
G
G
G
G
G G
G
GG
G G
G
G
GGGGGGG
G
GGGGG
GG
G
G GGG
G
G
GGGGGGGG
GGG
G
G
GGGGGGGGG
G
GG
G
G
G GGG
GG
GGG
G
G G
GGG
GGGGGGG
GG
GG
G
GGGGGGGGGGGGGGGGGGGG
GGG
GGGGGGGGGGGG G
G
G
G
G
GGGGGGGGG
G
G
GGGG
G
G
G GG
G
G
GGG
G
G
GG
GGG
G
G
G
G
G
G
G
GG
G
G
G
G
G GG
GGGG
G
G
GG
GG
GG
G GG
G
GGG
GG
G
G
G G
GGG
GGGGGGGGGGGGGG GGGGGGGG
G
GGG
G
G
G GG
G
GG G
GG
G
G G
G
G
GGGG
GGGG
GG
G
G
G
GGGG
GG
G
GGG
GGG
G
G
G
GG
GG
G
G
GGGG
G
G
G
GGGGG
G
GG
G
G
G
G G
GGGG G
G
G
GG
G
GG
GG GG
G
G
GGGG
GGGGGGGGGG
GGGGGGGG
GGGGG
GGG
GGGGG
GGG
G
GG GGG
GGG
G
G
G GGG
G
G
G
GG
G
G
G
G
G
G GG
G
G
G
GG
GGGGGGG
G
G
G
G
GGGGGG
GG
G
G
G G
GG
GGGGGGG
GGGG
GGGGGGGGG
G
G
G
GG G
G
GG
G
G
GG
G
G
G
G G
GG GG
GGG
G
G
G
G
G
G
GGGGGGGGGGGGGG
G
GG
GG
G
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
G
G
G
G
GG
GG
G GGG
GG
G
G GGG
GGGG
G
G
G
G G
GGG
GGGGGG
G
GGG
G
GGGGG
GGGGG
G
GGGGGG
G
GG
G
G
G
GGGGGGGGGGGGGGG
G
G
G
GG
GG
G
G
G
GG
G
G
G
GG
GG
G
G
G
G
G
G
G
GG
GGGGGGGGG
GG
G
G
G
GGGGGG
G
GG
GGGGG
G
G
GGG
G
GG G G
GG
GG G
G
GG
GG
G
G
G
G
G GG
GGGGGG
G
G
G
G
GG
G
GG
GGGGGGGGGGG
GG
G
G
GG
G
G
GG
G G
G
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGG G
GG
GGG
GGGGGGG
G
G
G
GG
GG
GGGGGGGGGGGGGGGGGGGG
GG
G
GG
G G
G
G
G
G
GGGGGGGGG GG
GGGG
G
G
G
G
G
G
GG
G
G
GGGG
G
GGG
GGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGG
G
GG
GGGG G
GG
G
GG
G GGG
GGG
GG
G
G
G
GGGGGGG
GGGGGGGGG
G
G
GG
G
G
G GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
G
G
G
G
GG
G G
G
G
GG
GG
GGGG
G
G GGGGG
GGG
GG
GG
G
GG GGGGG
GG G
G
G
G
G
G
G
G
G
G
G
G
G G
GG
G
GGGGGGGGGGG
G
G
G G
G
G
G GGGG
GG
G
G
GG
G
GG
GGGGGGGG
G
GG
GGG
GGGG
G G G
G
G GGGG
G
G
GG
GG
GG
G
G
G
GGGGG
GGGGGGGGGGGGGGGGGG
GGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
G G
G
G
G
GG
GGGG
GG
G GG GG
G
G
G
GGGGGGGGGGG
GGG
GG GG
GGG
G
G
GGG
GG
G
GG
GG
G G
G
G
GG
GGGGG
GGGGG
G
GGG
GG
GGGGGGG
GGGGGGG
G
G
G
G G
G
GGG G
GGGGGGGGGGGG
G
G
G
G
GGGG
GGG G
G
G
G
GG
G
GG G
G
G
G
GGG
GGGGG
G
GGGG
GGG
G
GGG
G
G
G
G
G
G
GGG G
G
G
G
GGGGG
G
GGG GGGGGGGGG
GGGGGG
G
G
GG
G G
G
G
GGGGGGG
G
G
G
GG
G
G
GGGGGGGGGGGGGGGGGGGGGGGG
G
GGGGGG
GGG
G
G
GG
GG
GGG
G
GGG
GGGG
GGGGGGGGGGG
GGGG
G
G
G
G
G G
G
G
GG
G
G
GGGGGGG
GGGGGGGGGGG
GGG GG
G G
G
GG
GGG
GG G
GGGG
G
GG
G
G
G
G G
GGG
G
G
G
G
GGG
GGGGG
G
GGG
G
G
G
G
GGGG
G
G
G
GGG
G G
G
G G
G
G
G
G
GG
GG
G
GG
G
G
G G
GG G G
G
G
G
GG
G
G
G
GGGG
GGGGGGGGGGGGGGGGGGGGGGGG
G
GG
GGGGGGGG
GGGGGGGG
GG G
GG
GG
G
G
G
GG
G
G
GGGGG
G
G
G G
G
G
GGG G
G
G
G
G
G
G
G
G
GG G
G
GG G
G
G
G
G
G
G
G GGG
GGGGG
GGGGGGGGGGGGG
G
G
G
G
GGG
G
G
G
G
G
G
G G
GG
GGGGG
GG
GGGG
G
G
GG
GGGGG
G
G GG
G
G
G
G
G
GG
GGG
GG
GG
G
GGGGGG
G
G
GGG
GG
G
GGG
G
GG
GG
GGG
G
GG GGGGGGGGGGGGGGGGGGG
GG
G
GGGGGGGGGGGGGG
G
G
GGGGGGGGGGGG
GG
G
G
G
G
GGGGGGGGGGGGGGGGGGGGGG
G
GGG
GGGGGGGGGGG GG
G
G
G
G
G
G
G
GG G
GG
G
G
G
GGGGG
G
GGGGGG
G
G
GGGGG
G
G
G
G
GGG
GGGGGGGGGGGGGGGG
GGGGGGGGGGGGGG
GGGGG
GGG
GG
G
G
G
G GGG G GG
G
G G
G
GGG
G G
G
G
G
G
GG
G GG
G
GGGGGGGGGGGG
G
G
G
GG
G
G
G
G
G
G
G
G
GGGGGGGGGGGGGG
GG G
GGGGGGGGG
G
G
G
G GG
GGG
G
G
GGG
GG
G
G
G
GGGG
GG G
G
G
GG
G
GGG
G
G
G G
G
GG
G
G
GGG
G
GGGGG
GGGG
G
GGGGGGG
GG
G
G
G
G
GG
G
GGG
G
G G
GGG
GG
G
G
GG
GG
GG
G
G
G
G
GG
G
G
G GG
G
G G
G
GGGGG
G
G GGGG
GGGGG
GGGGG
G
G
G
G
G
G
G
GGGGGGGGG
GGG
G
G
G
G
G
G
GGGGG
G
GGGG
G
GGGGG
GGGGGG
GG
G G
G G
GG
G
G
G
GGGG G
GG
GG
G
GG
GGG
GG
G
G
GG
G
G
G
G G
GGGGG
GGGGGGGGG
GG
G
GG
G
G
G
GG
G
G
G
GG G GG
G
GG
GG
G G G
G
G
GG G
G
GGG
GGGGGGGGGGGGGGGG
G
GGGGGGGGGG
G
GGG
GGG
G GG
G
G GG
G
G
G
G
G
G
G
G
G
G
G
GG
GGG
G
GGGGG G
GGG GG
G
G
GG
GG
G
G
G
GG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GG G
G
GGGGGGG
GGGGGG
G
G G
G
G G
G
GGG
G
G
GG
G
GGGGGG
G
GG
G
G
GG
G
G
G
GGGG
G
GGG
G
G
G
G
G
G
G
G
G
GGGGGG
GG G G
GGG
G
GGGGGGGGGGGGGGGGG
GGG
G
G G
G
GGG
G
G
G
GG
G
G
GG
G
G G
G
GG
G
G G
GGGGGG
G G
GGG
G
G
G
GGGGGGG GGGGG
G
G
G
GG
G G
GG
G
G G
G
GG
G
G
G
G
G
G
G
GG
G
G
G
GG G
G
G
GGG
GGG
G G G
G
G
G
G G G
G
G
G
GGG
G
GG
GGG
G
GGGGGG
GGG
G
GG
G
GGG
G
G
G
G
G
GG
GG
GG
GGG
G
GGGG
GG
G
G
G
G
G
G
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GG
G
G
GG GG
G
GGGGGGGGGGGG
GGGGGGGGGGGGG
GGG
GGG
GG
G
G
G
GG
GG
GG
G G
G
G
GG
G
GG
G
GGG
G
GGG
G
GG
G
GGGG
G
GG
GGG
G
GG
GG
G GG GG
G
G GGGGGGGGGG
GG
G
G
GG
GGG
GG
GGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGG
G
GGGGGGGGGGGGGGGGGGGG
GGG
G
GG
GGGG
G
G
GGG
GGGGGG
G
G
G G
GG
G
GGGGGGGGGGGGGGGG GG
G
G
G GG
G
G
G G
G
G
G
GGGGGGGGGGGGGGGG
GG
G
GGG
G
G
G
GGG
G
GGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GG
GGG
G G
G
G
G
G
G
GGGGGG
GG
G
G
GG
G
GG
G
G
G
G
G
G
GGGG
GG
GG
GGG
G G
GGGG
GG
GG
G
GG
G
G
GG
G
G
G
G
GGGGGGGGGGG
GGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGG
2 0 2 4 6
0
2
4
6
8
tuning=0.5
Initial value=(0,7)
Acceptance rate=29.4%
Acceptance rate=29.4%
G
GG
G
GGGGG
GGGGG
GGGGG
G
GGGGGGGGGG
G
G
GG
G
G
G
G
GG
G
G G
GGG
G
GG
G
GG
G
GG
GGGGGGGGGG
GG
G
G
GG
G
G
G
GGGG
GGGG
GG
G
GGGGG
G
GG
GGGGG
GGGGGGGGG
GG
G
GGG
GGG
GGG
GGG
GGGGG
GG
GGGGGGGGGGGG
GGGG
GGG
GG
G
GGGGGGGGGG
GGGG
GGGGGG
GGG
G
GGGG
G
GGG
GGGGGGG
GG
GGGGG
GGGG
GGGGGGGGGG GGGG
G
GG
G
G
GGGGG GGGG
GGG
G
GGG
GG
GGGGGGGGGGGGG
G
GGGGGGGG
GGGGGG G
G
GGGGG
GG
GG
G
G
GG
GG
GGG
GGG
G
GGGG
G
GG
G
G
GGG
GGGGGG
G
GGGGG
G
GGGGGGGG
GG
G
GGG
G
GGG
GGG
GGGGG GGGG
GGG
G
GG
GGG
G
G
G
GGGGG
G
GG
GG
GGGG
G
GG
G
GGGGGGGGGGGGGGGGGGGG GG
GGGGGGGGGGGG
G
GG
GGGGGGGG
G
G
GGGGGG
G
GG
GG
GGGG
GG
GGGGG
G
G
G
G
GGGGGGG
GG
GGG
GG
GGGGGGGGG
GGGG
GGGG
GGGGG
GGG
GG
GGGGGG
GG
GGG
GGG
G
GG
GGGG
G
GGGGGGGG
GGGGG
G
G
GGGGGG
GGGG
GGG
GG G
GG
G
GGG
GGGGGGG
G
GGGGGGGGG
GGG
GG
GG
G
GG
GGGGGGGG
GGGG GGGGGG
GGGGGG
GGG
GGGG
G
G
GGG
GGGGG
GGGGGGGG G
GGGGG
GGGG
GGGGG
GG
GGG
G
GGGGGGG
GGG
G G
GG GGGGGGGGGG
GGGG
GGGG
GG
GGGGGGG
G
GG
G
G
GGGGGG G
G
GGGGG
G
GGGGG
GG
G
G
G
GGGG
GGGG
GG
G
GGG
G
G GG
GG
G
GG
G G
G
G
GG
GGGG
GG
GGGG
GGGG
GGGG
G
GG
G
GGGG
GGG
G
G
GGGG
GGGG
GGGGGG
G
G
GG
GGGG
GG
G
GGG
G
GG
GGGG
GGGGG
GG
GGGGGGGGGG
G
GGG
GGG
GGGG
G
GGG
GGGGGGGG G
G
GGGGGGGGGG
GGGG
GGGG
GGGGGGGG
GG
GG
G
G
GG
GGGG
G
G
GGGGG
GGGG
G
G
G
G
GGGGGG
GGGGGG
GGGGG
GGG GG
G
GG
GGGGG
GGGGG
G
GGG
G
GGGGG
GG
GG
GGG
GGGGG
GGGGGGGGGG
GG
GGGGGGGGGGGGGGGGG
GGGG
G
GGG
GGGGGGGGGGGGGG
G
G
GGGG
GGGGGGGGGGGGGGGGGG
G
GGGG
GG
G
G
GG
GGGGG
G
GGGGGGGGG
GGGG
GG
GG G
GGGGGGGGGGGGG
GG
GGGGGGG
GGGGGGG
GG
GG
GGG GGGGGGGGG
GGG
GGGGG
G
G
G
GG
G
G
G
G
GGG
G
GG
GG
GGGGG
GGGGGGGGGGG G
G
GG
G
GGGG
G
GGGG
GG
G
GG
G
G
G
G GG
GGGGG
G
GGG
GGG
GGG
GG
G
G
G
GG
GG
G
GGG
G
G
G
G
GGGGG
GGGGG
GGG
GG
GG
GGGGG
G
GGG
GGGG
G
GGG
G
G
GG
G
GGG G
G GGGG
GGGGGG
GGGGG
GGG
GGG
GGGGGGGGGG
GGGGGG
G G
GGGGGG
GGGG
GGG
GGGG
GGGGGG
GGG
GG
G
G
GGGG
G
GG
G
G
G
GGG
GGGGG
GGG
GG
G
GGGGGGG
GG
G
G
GGGG
G
GGGGGG
GG
GG
G
GGGGGGGGGG
GGGGGGG
G
GGGG
G
G
GGGGG
GGG
GG
G
GGG
G
GGGGG
G
GGGGGGGGGGGGGGGGG
G
G
GG GG
GG
GG
G
GGG
GG
G
GG
G
GGGG
GGGG
GGGGGGGGGG GGGGGGGG
GGGGGG
G
GGGGGGGGGGGGG
GGGGGGGG
GGGG
GGGGGG
G
GG
GGG
GGGG
G
G
GG G
GG
GGGG
GG
G
GGG
GGGG
GGG
GGG
GGG
G
G
GGG
G
GGG
GG
GG
GGGGGGGG
GG
G
G
GGGG
G GGG
GGG
GG
GGG
GG
GGGGGG
G
G
GG
GGGGGGGGGGGGGGGGG
GG
G
GGGGGGGG
GGGGGG
GGGGGG
GG
G
GGGGG
G
G
GGGGGGGGGG
G
GG
GG
GGGGG
G
GGGGGGGGG
GGGGGGGGG
G
GGGG
GG
G GG
G
GGGGGGGG
GGG
GGGGGG
GGGGGGGG
G
GGGGGGGGGG
GG GGG GG
G
G G
GG
GG
GG
GGG
GGG
G
G GGGGGGGG
GG
GG
G
G
G
GGG
GGGGG GGGGG
G
GG
G
G
GGG
G
G
GGGGG
GG
G
GG
GGGGGGG
G
GG
GG
GGGG
G
G
GGG
G
G
GGGGG
G
GG
GGG
GG
GG
GGG
G G
GG GG
GGG
GG
G
GGGG
G
G
GGGGGGG
GGGGGGGGGGG
GGGG
GG
G
G
GG
GGG
GG
G
GGGGG
G
GGGGGGG
GGGGGGG
G
GGG
GGG
G
GGGGGGGGG
GGGGG
GG
G
GG
GGGGGGGG
GG
GG
GG
GGGGGGG
GGGG GGG
GGGGGGG
GG
GGGG
G
G
GG
G
GGGGGGGGGGGGGGGG
G
G
G
G
GGG
GG
GGGG
GG G
G
GGGG
G
GGGG
GGG
G
GG
GGGGG
GGGGG
GGG
GG
G
GG
G
GGG
G
GGGG
GGG
GGGG
GGGGGG GG
G
GG
GGG
G
G
G
GG
GG
GGG
G
GG
GG
GG
G
GGGGG
GGGGG GGGGGGGG
GGGGGGGGGG
GGG G
GGGGGGG
GGGGGGGGGGG
G
GGGGGGG
GGG
GGGG
G
GGGGGGG
GGG
GG
G
GGGGG
GGG
GGGGGG
GGGG
G
G
GGGGGGGGG
G
G
GG
GGGG
GGGG
GGGGG
GGG
G
GGGGGGGGGGGG GGG
GG GG
GG
GGGGGGGG
GGGGGG
GGG
G
G
GG
GGG
GG
G
GGG
GG
GGGGGG
GG
GGGG
GGG
G
GGGGGGG
GGGGGG
GGGGG
GG
GGGG
GGGGGGGGGGG
GG
GGGGGGGGGGGGGGG
G
GGG
GGG
GGG
GG
GG
G
G
G
GGG
GGG
GG
GGGGGGGGGGGG
GGG
GG
GGG G
GGG
GGGGGGGGGGGG
GGGGGGG
GGGGGGGGG
GG
GGGGGGG
G
G
GGGGGGGGGG
GG
GG
GGGGGGG
GG
G
GGG
GGG
GGG
G
GG
GGGG
GGGGG
G
GGGGG
G
G
G
GGGGGGGGGGGGG
GGGGGGG
GGG
G
GG
GGGGG
G
GGGG G
GG
G
GGGG GG
G
G
GGGGG
GGG
G
GGGGGG
GGGGGGGGGGGGGGG
GGGG
GGGGGG
G
G
GGG
GG
GGGGGGG
GGG
G
GGGGG
G
G
G
G
G
GGGG
GGGGGGGGGGGGGG
GG
GGGGGG
GG
GG
GG
GG
GGG GG G
GGGGGG
GGGG G
GGG
G
GG
G
GGGGGG
GGGG
G
GGG
GGG
GGG
G
G
G
G GG
GGG
G
GGG
G
G
GGGGGGGGGGGGGGGGG GGG
G GGG
GG
GGGGGGGG
GGG
G
G
G
G
GG
GGGGGGGGGGGG GGGG GG G
G
G
G
GG
GGGGG
GG
G
GGGGGGGGGGGGG
G
GGGGGGGG
G
G
G GGG
G
G
GGGGG
GGGGGG
GG
G
G
GG
GGG
GGGGGGGGG
GGGGG
GGGGGGG
GGGGGG
GGGGGGG
GG
G
G
G
GGGG GG
GG
GGGG
G
G
GG
G
G
GGG
GGG
GGGG
GG
GG
GGG
GGG
GGGGGGG GGG
GGGGGGGGGGGGGG
GGG
G
G
GGGGG
G
GG
G
GG
G
G
GGGG
GGGG
G
G
GG
GGG
GGGGG
G
GGGG
G
GGGG
GGG
G G GGGGG
G
GGGGGG
GGG
GGGG
G
G
G
G
GGGGG
G
GGGG
GG
G
GG
GG G
GGGGGGGGGGGGGGGGGGG
GGGGGG
GGGGGGGGGGGGGG
GG
GG G
GG
G G
G
G
GGGGGG
GGGGG
GG
GGG
GGG
GGG
GGGGGGGGG
G G
G GGGGGGG
G
GGG
GGGG
G
G
G
G
G
GGG
GGGGGGGGGGGGGGGGGG
GGGGG G
GGG
GG
GGGGGGG
GGGG
GG
G
GG
GGGGG
GG
GGGGGG
GG
G
G
GG
GGG
GG
GGG
GGG
GG
G
GG
GG
GG
GGG
GGG G
G
GGG
GG
GGG
G
G
G
GG
GG
GGGGG
GGG
G
G
G
GG GGGG
GGGGGGGGGGGGGG
GGGGGG
GG
G
GGGGG GGGG
G
GGGGGGGGG
GGG
G
G
GGGG
GG
GG
GG
G
GG
G
GGGG
G
GG
GGG
G
G
GG
GGGG
G
G
GGG
G
GG
GG
GG
GG
G
GG
GGG
GGG
G
GGG
GGGGGG
GG
GGGGGG
GGGGGG
GGG
GGGG
GGGG G
GGGG
GG
G
GGGG
G
G
GGGGGGG
GGGGGGGGG
G
G
GG
GG
GG
GGGGG
GGGGGGGG
G
G
G
GG
GGG
GGGGGG
G
G
G
G
G
GGGGG
G
GG
GGGGGGGGGG
GG
GGGGGGGGGG
G GGGG
GG
GGGGGGG
GG
GGGGGGGGG
GG
GGGGG
G
G
GG
GG
GG
G
G
GGGG
G
GG
G
GG
GGGG GGGGG
GGGGGGG
GGG
GG
GGG
GGGGG
G
G
GGG
GGG
GGGGG
G
GG
GGG
GGGG
G
G
G
G
GGGG
G
G
GGGGG
G
G
GGGG
GGG
GG
G
G
GGG
G
G
G
GGGG GGGG
GGGGG
GGGGGG
GG
G
GG
G
GGGG
GGGGGGGGGG
GG
GGGG
G
G
G
GGGGGG
GGGGGG
GGG
GGG
GGGGG
GGGGG
G
GGGGGGGGGGG
GG
GGGGGG
GGGG
G
GGGGGG GGGGG
G
G
G
GGG
G
GG
G
GG
GGGGG
GGG
GGGGGGG
GG
G
G
GGGGGGG
GGGGG
G
GGGGGGGG
GG GGG
GGG
G
GG
GGGGGG
GG
G
GGGGG
G
G
GGGGGGGG
GG
GG
GGGG
GGG
GG
GGG G
G
G
GGGGGG
GG
GG
G
GGG
G
GG
G G
GGGGGG
GG
G
GGGG
G
GGGGG
GG
G
GG
GGGGGGG
GGGGGGGGGGGG
G
GG
GG
GG
GG
GG
G
G
GGG
G
G
GGGGGGG
GGGG
G GG
GG
GG
GG
G
GG
G
GG
GG
GGGG
GGG
GGGGGG
GG
GGGGGGG
GGGGGG
GG
G
GGG GGG
GG
G GGGGG
GGG
GG
GGGGGGGGGGGGGG
G
GG
G
G
GGGG
GG
GGGGGG
GGG
GG
GGGGG
GGGGGGG
G G
GGGGGGGG
G
G
GGGG
G
G
G
GG
GG
G
GG
G
G
G
G
GGG
GGGGG
GGGGGG
GGGGGG
GG
GGGGGGGGGGGGGG
GGGGG
GG
GG
G
GGGGGGG
GGG
GGG
GGGGGGGG
GG
GGGGGGGGGGGGGGGGG
GGG
G
GGG
GGGGG
GGGGG
G
GGGG
GGGGG GGGGG
GG
GGGG G
G
G
G
GG
GGGGG
G
GGG
G
GGGG
G
GGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGG
G
GGGGGGGGG
GG
GGG G GGG
GG
GG
G
G
G
GG
GG
GGGGG
G
GGG
G
GG
G
G GG
G
GG
G
G
GGGG
G
GGG
G
G
GGG GGGGGGGGG
G
G
GG
GGGG
GG
G
GGGG
G GGG G
GGG
G
GG
GGGG
GGGG
G
GGGGGGG
GG
GGGGGGGGGG
G
GGG
GG
GGGGG
GGGGGGGGGGG
G
G
GGG
GGGGGGGGG
GGGGGGGGGG
G
G
GG
GG
GGGG
G
GGG
GGG
G
GG
GGGG
GGG
GG
G
GGGG
GGGGGGGGG GGGGGGGGGG GGGGGGG G
G
GGG
G
GGGG
G
GGG
GGG
GGG
G
GGG
G
G
G
GGGGGGGGG
GG
G
G
GG
GGGGG
GG GGG
GG
GGGGGGGGG
GGGGG
GGG
GGGGG
G
G
G
GGGG
G
G
GGGGG
GGG
GG
G
G
GGGGGGGGGGGG
GG
GGGGGG
GGG
GG
GG
GGGGGGGGG
G
GGGGG
GGG
G
GG
GGGGG G
GGGGG
GGGG
G
GGGG
G
GGGG
GG
GGGG
G
G
GGGG
GGGG
GG
G
GGGG
GGGG GGGGGGGGG
G
G
G
G
GGGGGGGGGGGGGGGGGGG
GGGG
GGGGG
G
G
GGGG
GG
GG
GGG
G
GGG
G
GGG
G
GGGGGGGGGGGGGGGGGGGG
G
G
G
GGGGG
GG
GGGGGG
G
GG
G
GGGGG
GGGGGGGGGG
GGGGGGG
GGGGGGGGGGGGGG
GGG
GGGGGGGGGG
G
GGGGGGGGGGG
G
GGGG
GGGGGGGG
GGGGG
G
G
GGGG
G
GG
GGG
G
GGGG
G
G
GGGGGG
GG
GGG
GGG
G
GGG
GGGGGGGGG
GG
GG
GGGG
GGG
GGGGG
GGGGGG
G
GGGGGGGGGGGG
G
G G
GGGGGG
GG
G GG
GGGGGGGGGG
2 0 2 4 6
0
2
4
6
8
tuning=5
Initial value=(0,7)
Acceptance rate=32.2%
Acceptance rate=32.2%
GGG
GGGGGGGGGGG
GGGGG
GGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGG
GG
GGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGG
GGG
GGGGG
GGGGG
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGG
GGGG
GGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGGGGGG
GGG
GGGGGG
[Figure: Metropolis draws for the bivariate target; tuning = 50, initial value = (0, 7), acceptance rate = 4.3%.]
Independence Metropolis: autocorrelations
[Figure: autocorrelation functions up to lag 200 for tuning = 0.5, 5 and 50 and initial values (4, 5) and (0, 7).]
Gibbs sampler
Technically, the Gibbs sampler is an MCMC scheme whose transition kernel is the product of the full
conditional distributions.
Algorithm
1. Start at θ^(0) = (θ_1^(0), θ_2^(0), ...)
2. Sample the components of θ^(j) iteratively:
   θ_1^(j) ~ π(θ_1 | θ_2^(j-1), θ_3^(j-1), ...)
   θ_2^(j) ~ π(θ_2 | θ_1^(j), θ_3^(j-1), ...)
   θ_3^(j) ~ π(θ_3 | θ_1^(j), θ_2^(j), ...)
   ...
The Gibbs sampler opened up a new way of approaching statistical modeling by combining simpler
structures (the full conditional models) to address the more general structure (the full model).
Example vi: Poisson with a change point
y_1, ..., y_n is a sample from a Poisson distribution.
There is a suspicion of a single change point m along the observation process.
Given m, the observation distributions are
   y_i | λ ~ Poi(λ), i = 1, ..., m
   y_i | φ ~ Poi(φ), i = m+1, ..., n.
Independent prior distributions
   λ ~ G(α, β)
   φ ~ G(γ, δ)
   m ~ U{1, ..., n}
with α, β, γ and δ known hyperparameters.
Posterior distribution
Combining the prior and the likelihood,
   π(λ, φ, m) ∝ f(y_1, ..., y_n | λ, φ, m) p(λ, φ, m)
             = [∏_{i=1}^m f_P(y_i; λ)] [∏_{i=m+1}^n f_P(y_i; φ)] f_G(λ; α, β) f_G(φ; γ, δ) (1/n)
Therefore,
   π(λ, φ, m) ∝ [λ^{α+s_m-1} e^{-(β+m)λ}] [φ^{γ+s_n-s_m-1} e^{-(δ+n-m)φ}]
where s_l = Σ_{i=1}^l y_i for l = 1, ..., n.
Full conditional distributions
The full conditional distributions for λ, φ and m are
   (λ | m, y) = G(α + s_m, β + m)
   (φ | m, y) = G(γ + s_n - s_m, δ + n - m)
and
   π(m | λ, φ) = [λ^{α+s_m-1} e^{-(β+m)λ} φ^{γ+s_n-s_m-1} e^{-(δ+n-m)φ}] / [Σ_{l=1}^n λ^{α+s_l-1} e^{-(β+l)λ} φ^{γ+s_n-s_l-1} e^{-(δ+n-l)φ}],
for m = 1, ..., n, respectively.
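These full conditionals translate directly into a Gibbs sampler. The Python sketch below is an illustration (not code from the original course); alpha, beta, gamma and delta are the hyperparameters α, β, γ, δ and y is a NumPy array of counts.

```python
import numpy as np

def gibbs_changepoint(y, alpha, beta, gamma, delta, n_iter=5000, m_init=None):
    """Gibbs sampler for the Poisson change-point model (sketch)."""
    rng = np.random.default_rng(42)
    n = len(y)
    s = np.concatenate(([0.0], np.cumsum(y)))        # s[l] = y_1 + ... + y_l
    m = m_init if m_init is not None else n // 2
    draws = np.empty((n_iter, 3))
    for j in range(n_iter):
        # (lambda | m, y) ~ Gamma(alpha + s_m, beta + m)
        lam = rng.gamma(alpha + s[m], 1.0 / (beta + m))
        # (phi | m, y) ~ Gamma(gamma + s_n - s_m, delta + n - m)
        phi = rng.gamma(gamma + s[n] - s[m], 1.0 / (delta + n - m))
        # (m | lambda, phi, y): discrete full conditional on {1, ..., n}
        l = np.arange(1, n + 1)
        logw = s[l] * np.log(lam) - l * lam + (s[n] - s[l]) * np.log(phi) - (n - l) * phi
        w = np.exp(logw - logw.max())
        m = rng.choice(l, p=w / w.sum())
        draws[j] = lam, phi, m
    return draws
```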
Coal mining disasters in Great Britain
This model was applied to the n = 112 observed counts of coal mining disasters in Great Britain by
year from 1851 to 1962.
Sample mean from 1851 to 1891 = 3.098
Sample mean from 1892 to 1962 = 0.901
[Figure: yearly counts of coal mining disasters, 1851-1962.]
Markov chains
The Gibbs sampler run: 5000 iterations
Starting point: m^(0) = 1891
Hyperparameters: α = β = γ = δ = 0.001

[Figure: Gibbs sampler traces and marginal posterior histograms for λ, φ and m.]
Exact and approximate posterior summary
Exact posterior can be obtained by analytically deriving π(m) and using it to derive π(λ) and π(φ).

Par.   Mean    Var     95% C.I.
λ      3.120   0.280   (2.571, 3.719)
φ      0.923   0.113   (0.684, 0.963)
m      1890    2.423   (1886, 1895)

Approximate posterior summary based on the Gibbs sampler

Par.   Mean    Var     95% C.I.
λ      3.131   0.290   (2.582, 3.733)
φ      0.922   0.118   (0.703, 1.167)
m      1890    2.447   (1886, 1896)
Example vii: AR(1) with normal errors
Let us assume that
   y_t = ρ y_{t-1} + ε_t,   ε_t ~ N(0, σ²)
for t = 1, ..., n.
Prior specification
   y_0 ~ N(m_0, C_0)
   ρ ~ N(r_0, V_0)
   σ² ~ IG(n_0/2, n_0 s_0²/2)
for known hyperparameters m_0, C_0, r_0, V_0, n_0 and s_0².
Full conditional distributions
   (ρ | σ², y_0, y_{1:n}) ~ N(r_1, V_1), where
      V_1^{-1} = V_0^{-1} + σ^{-2} Σ_{t=1}^n y_{t-1}²
      V_1^{-1} r_1 = V_0^{-1} r_0 + σ^{-2} Σ_{t=1}^n y_{t-1} y_t
   (σ² | ρ, y_0, y_{1:n}) ~ IG(n_1/2, n_1 s_1²/2), where
      n_1 = n_0 + n
      n_1 s_1² = n_0 s_0² + Σ_{t=1}^n (y_t - ρ y_{t-1})²
   (y_0 | ρ, σ², y_{1:n}) ~ N(m_1, C_1), where
      C_1^{-1} = C_0^{-1} + ρ² σ^{-2}
      C_1^{-1} m_1 = C_0^{-1} m_0 + ρ σ^{-2} y_1
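A minimal Python sketch of this Gibbs sampler follows (my own illustration, not the course code); the argument names mirror the hyperparameters above and the inverse-gamma draw is obtained from a gamma draw.

```python
import numpy as np

def gibbs_ar1(y, m0=0.0, C0=10.0, r0=0.9, V0=10.0, n0=5, s20=1.0, n_iter=2000):
    """Gibbs sampler for the AR(1) model with unknown (rho, sigma2, y_0) -- sketch."""
    rng = np.random.default_rng(0)
    n = len(y)
    rho, sig2, y0 = r0, s20, m0
    out = np.empty((n_iter, 3))
    for j in range(n_iter):
        ylag = np.concatenate(([y0], y[:-1]))        # (y_0, y_1, ..., y_{n-1})
        # (rho | sigma2, y_0, y_{1:n})
        V1 = 1.0 / (1.0 / V0 + np.sum(ylag ** 2) / sig2)
        r1 = V1 * (r0 / V0 + np.sum(ylag * y) / sig2)
        rho = rng.normal(r1, np.sqrt(V1))
        # (sigma2 | rho, y_0, y_{1:n}) ~ IG(n1/2, n1*s1^2/2)
        n1 = n0 + n
        n1s21 = n0 * s20 + np.sum((y - rho * ylag) ** 2)
        sig2 = n1s21 / (2.0 * rng.gamma(n1 / 2.0, 1.0))
        # (y_0 | rho, sigma2, y_{1:n})
        C1 = 1.0 / (1.0 / C0 + rho ** 2 / sig2)
        m1 = C1 * (m0 / C0 + rho * y[0] / sig2)
        y0 = rng.normal(m1, np.sqrt(C1))
        out[j] = rho, sig2, y0
    return out
```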
Simulated data
Set up: n = 100, y_0 = 0.0, ρ = 0.95 and σ² = 1.0.
[Figure: simulated series y_t over time and scatter plot of y_t against y_{t-1}.]
MCMC output
Gibbs sampler run: (M_0, M, L) = (1000, 1000, 1)
Starting point: true values
Prior of (ρ, σ²): r_0 = 0.9, V_0 = 10, n_0 = 5 and s_0² = 1.
Prior of y_0: m_0 = 0 and C_0 = 10.
[Figure: trace plots, autocorrelations and marginal posterior histograms for ρ and σ².]
Forecasting
[Figure: 100 forecasting paths and 95% forecasting interval for the AR(1) model.]
Example viii: AR(1) with drift and normal errors
Let us assume that
   y_t = μ + ρ y_{t-1} + ε_t,   ε_t ~ N(0, σ²)
for t = 1, ..., n.
Prior specification
   y_0 ~ N(m_0, C_0)
   μ ~ N(μ_0, W_0)
   ρ ~ N(r_0, V_0)
   σ² ~ IG(n_0/2, n_0 s_0²/2)
for known hyperparameters m_0, C_0, μ_0, W_0, r_0, V_0, n_0 and s_0².
Full conditional distributions
   (ρ | μ, σ², y_0, y_{1:n}) as before with y_t replaced by y_t - μ.
   (σ² | μ, ρ, y_0, y_{1:n}) as before with y_t replaced by y_t - μ.
   (y_0 | μ, ρ, σ², y_{1:n}) as before with y_1 replaced by y_1 - μ.
   (μ | ρ, σ², y_0, y_{1:n}) ~ N(μ_1, W_1), where
      W_1^{-1} = W_0^{-1} + n/σ²
      W_1^{-1} μ_1 = W_0^{-1} μ_0 + Σ_{t=1}^n (y_t - ρ y_{t-1})/σ²
Simulated data
Set up: n = 100, y_0 = 0.0, μ = 0.1, ρ = 0.99 and σ² = 1.0.
[Figure: simulated series y_t over time.]
MCMC output
Gibbs sampler run: (M_0, M, L) = (1000, 1000, 1)
Starting point: true values
Prior of (μ, ρ, σ²): μ_0 = 0, W_0 = 10, r_0 = 0.9, V_0 = 10, n_0 = 5 and s_0² = 1. Prior of y_0: m_0 = 0 and C_0 = 10.
[Figure: trace plots, autocorrelations and marginal posterior histograms for μ, ρ and σ².]
Joint posterior of (μ, ρ)
[Figure: scatter plot of the joint posterior draws of (μ, ρ).]
Forecasting
[Figure: 100 forecasting paths and 95% forecasting interval for the AR(1) model with drift.]
Example ix: GARCH(1,1) model with normal errors
Let us assume that
   y_t ~ N(0, σ_t²)
   σ_t² = a_1 + a_2 y_{t-1}² + a_3 σ_{t-1}²
for t = 1, ..., n.
Prior specification
   y_0 ~ N(m_0, V_0)
   σ_0² ~ IG(n_0/2, n_0 s_0²/2)
   a_i ~ N(a_{0i}, V_{0i}),   i = 1, ..., 3
for known m_0, V_0, n_0, s_0² and (a_{0i}, V_{0i}) for i = 1, 2, 3.
Metropolis-Hastings algorithm
None of the full conditionals for a_1, a_2 and a_3 is of known form nor easy to sample from.
Here we implement a very simple M-H algorithm where a new vector a* = (a_1*, a_2*, a_3*) is generated from
   a* ~ N(a^(j), v² I_3)
where a^(j) is the current state of the chain and v is a tuning standard deviation.
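A sketch of this random-walk step in Python (illustrative only; the helper garch_loglik and the positivity check are my own conveniences). The prior is the independent normal prior above, and y_0² and σ_0² are treated as fixed.

```python
import numpy as np
from scipy.stats import norm

def garch_loglik(a, y, y0sq, sig0sq):
    """Log-likelihood of the GARCH(1,1) model for coefficients a = (a1, a2, a3)."""
    a1, a2, a3 = a
    if np.any(a <= 0):                      # keep variances positive
        return -np.inf
    sig2 = np.empty(len(y))
    sig2[0] = a1 + a2 * y0sq + a3 * sig0sq
    for t in range(1, len(y)):
        sig2[t] = a1 + a2 * y[t - 1] ** 2 + a3 * sig2[t - 1]
    return np.sum(norm.logpdf(y, 0.0, np.sqrt(sig2)))

def rw_metropolis_garch(y, a_init, a0, V0, v=0.01, n_iter=5000, y0sq=0.1, sig0sq=0.1):
    """Random-walk Metropolis for (a1, a2, a3) with proposal N(a, v^2 I_3) -- sketch."""
    rng = np.random.default_rng(1)
    a = np.asarray(a_init, dtype=float)
    logpost = garch_loglik(a, y, y0sq, sig0sq) + np.sum(norm.logpdf(a, a0, np.sqrt(V0)))
    draws = np.empty((n_iter, 3))
    for j in range(n_iter):
        a_star = a + v * rng.standard_normal(3)
        lp_star = garch_loglik(a_star, y, y0sq, sig0sq) + np.sum(norm.logpdf(a_star, a0, np.sqrt(V0)))
        if np.log(rng.uniform()) < lp_star - logpost:   # accept/reject
            a, logpost = a_star, lp_star
        draws[j] = a
    return draws
```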
Simulated data
Set up: n = 300, a_1 = 0.1, a_2 = 0.4, a_3 = 0.59, y_0² = σ_0² = 0.1.
[Figure: simulated GARCH(1,1) series y_t.]
MCMC output
MCMC run: (M_0, M, L) = (10000, 1000, 100).
Starting point: true values.
Prior of (a_1, a_2, a_3): a_{0i} = 0 and V_{0i} = 10 for i = 1, 2, 3.
Metropolis-Hastings tuning variance: v² = 0.01².
[Figure: trace plots, autocorrelations and marginal posterior histograms for a_1, a_2 and a_3.]
Pairwise joint posterior distributions
[Figure: pairwise joint posterior draws of (a_1, a_2), (a_1, a_3) and (a_2, a_3), and histogram of a_2 + a_3.]
p(σ²_{n+h} | y_1, ..., y_n) for h = 1, ..., 10
[Figure: forecast densities of the volatility σ²_{n+h}.]
LECTURE 5
DYNAMIC LINEAR MODELS
Example i. local level model
The local level model (West and Harrison, 1997) is
   y_{t+1} | x_{t+1}, θ ~ N(x_{t+1}, σ²)
   x_{t+1} | x_t, θ ~ N(x_t, τ²)
where x_0 ~ N(m_0, C_0) and θ = (σ², τ²) is fixed (for now).
Example i. Evolution, prediction and updating
Let y^t = (y_1, ..., y_t). The filtering cycle is
   p(x_t | y^t) ⇒ p(x_{t+1} | y^t) ⇒ p(y_{t+1} | y^t) ⇒ p(x_{t+1} | y^{t+1})
Posterior at t: (x_t | y^t) ~ N(m_t, C_t)
Prior at t+1: (x_{t+1} | y^t) ~ N(m_t, R_{t+1})
Marginal likelihood: (y_{t+1} | y^t) ~ N(m_t, Q_{t+1})
Posterior at t+1: (x_{t+1} | y^{t+1}) ~ N(m_{t+1}, C_{t+1})
where R_{t+1} = C_t + τ², Q_{t+1} = R_{t+1} + σ², A_{t+1} = R_{t+1}/Q_{t+1}, C_{t+1} = A_{t+1} σ², and
m_{t+1} = (1 - A_{t+1}) m_t + A_{t+1} y_{t+1}.
Example i. Backward smoothing
For t = n, x_n | y^n ~ N(m_n^n, C_n^n), where
   m_n^n = m_n    C_n^n = C_n
For t < n, x_t | y^n ~ N(m_t^n, C_t^n), where
   m_t^n = (1 - B_t) m_t + B_t m_{t+1}^n
   C_t^n = (1 - B_t) C_t + B_t² C_{t+1}^n
and
   B_t = C_t / (C_t + τ²)
Example i. Backward sampling
For t = n, x_n | y^n ~ N(a_n^n, R_n^n), where
   a_n^n = m_n    R_n^n = C_n
For t < n, x_t | x_{t+1}, y^n ~ N(a_t^n, R_t^n), where
   a_t^n = (1 - B_t) m_t + B_t x_{t+1}
   R_t^n = B_t τ²
and
   B_t = C_t / (C_t + τ²)
This is basically the forward filtering, backward sampling algorithm used to sample from p(x^n | y^n)
(Carter and Kohn, 1994, and Frühwirth-Schnatter, 1994).
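The filtering and backward-sampling recursions above fit in a few lines. The following Python sketch, for the local level model with σ² and τ² known, is an illustration rather than the course code.

```python
import numpy as np

def ffbs_local_level(y, sig2, tau2, m0=0.0, C0=10.0, rng=None):
    """Forward filtering, backward sampling for the local level model (sketch)."""
    rng = rng or np.random.default_rng()
    n = len(y)
    m, C = np.empty(n), np.empty(n)
    # Forward filtering (Kalman recursions)
    mt, Ct = m0, C0
    for t in range(n):
        R = Ct + tau2                        # prior variance at time t
        Q = R + sig2                         # predictive variance
        A = R / Q                            # adaptive coefficient
        mt = (1 - A) * mt + A * y[t]         # posterior mean
        Ct = A * sig2                        # posterior variance
        m[t], C[t] = mt, Ct
    # Backward sampling
    x = np.empty(n)
    x[n - 1] = rng.normal(m[n - 1], np.sqrt(C[n - 1]))
    for t in range(n - 2, -1, -1):
        B = C[t] / (C[t] + tau2)
        x[t] = rng.normal((1 - B) * m[t] + B * x[t + 1], np.sqrt(B * tau2))
    return x, m, C
```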
Example i. Simulated data
n = 100, σ² = 1.0, τ² = 0.5 and x_0 = 0.
[Figure: simulated observations y(t) and states x(t) over time.]
Example i. p(x_t | y^t, θ) versus p(x_t | y^n, θ)
m_0 = 0.0 and C_0 = 10.0
[Figure: forward filtering and backward smoothing means over time.]
Example i. Integrating out the states x^n
We showed earlier that
   (y_t | y^{t-1}) ~ N(m_{t-1}, Q_t)
where both m_{t-1} and Q_t were presented before and are functions of θ = (σ², τ²), y^{t-1}, m_0 and C_0.
Therefore, by Bayes' rule,
   p(θ | y^n) ∝ p(θ) p(y^n | θ) = p(θ) ∏_{t=1}^n f_N(y_t | m_{t-1}, Q_t).
Example i. p(y | σ², τ²)
[Figure: contours of the integrated likelihood over (σ², τ²).]
Example i. MCMC scheme
Sample θ from p(θ | y^n, x^n):
   p(θ | y^n, x^n) ∝ p(θ) ∏_{t=1}^n p(y_t | x_t, θ) p(x_t | x_{t-1}, θ).
Sample x^n from p(x^n | y^n, θ):
   p(x^n | y^n, θ) = ∏_{t=1}^n f_N(x_t | a_t^n, R_t^n)
Example i. p(σ², τ² | y^n)
[Figure: scatter plot of the posterior draws of (σ², τ²) and marginal posterior histograms.]
Example i. p(x_t | y^n)
[Figure: posterior means and credible bands for the states x_t over time.]
Lessons from Example i.
Sequential learning in non-normal and nonlinear dynamic models with p(y_{t+1} | x_{t+1}) and p(x_{t+1} | x_t) is in general rather difficult since
   p(x_{t+1} | y^t) = ∫ p(x_{t+1} | x_t) p(x_t | y^t) dx_t
   p(x_{t+1} | y^{t+1}) ∝ p(y_{t+1} | x_{t+1}) p(x_{t+1} | y^t)
are usually unavailable in closed form.
Over the last 20 years:
   FFBS for conditionally Gaussian DLMs;
   Gamerman (1998) for generalized DLMs;
   Carlin, Polson and Stoffer (1992) for more general dynamic models.
Dynamic linear models
Large class of models with time-varying parameters.
Dynamic linear models are defined by a pair of equations, the observation equation and the evolution/system equation:
   y_t = F_t' θ_t + ε_t,   ε_t ~ N(0, V)
   θ_t = G_t θ_{t-1} + ω_t,   ω_t ~ N(0, W)
y_t: sequence of observations;
F_t: vector of explanatory variables;
θ_t: d-dimensional state vector;
G_t: d × d evolution matrix;
θ_1 ~ N(a, R).
Example ii. Linear growth model
The linear growth model is slightly more elaborate by incorporation of an extra time-varying parameter θ_{2,t} representing the growth of the level of the series:
   y_t = θ_{1,t} + ε_t,   ε_t ~ N(0, V)
   θ_{1,t} = θ_{1,t-1} + θ_{2,t} + ω_{1,t}
   θ_{2,t} = θ_{2,t-1} + ω_{2,t}
where ω_t = (ω_{1,t}, ω_{2,t})' ~ N(0, W) and
   F_t = (1, 0)'    G_t = [1 1; 0 1]
Prior, updated and smoothed distributions
Prior distributions: p(θ_t | y^{t-k}), k > 0
Updated/online distributions: p(θ_t | y^t)
Smoothed distributions: p(θ_t | y^{t+k}), k > 0
Sequential inference
Let y^t = {y_1, ..., y_t}.
Posterior at time t-1:
   θ_{t-1} | y^{t-1} ~ N(m_{t-1}, C_{t-1})
Prior at time t:
   θ_t | y^{t-1} ~ N(a_t, R_t)
with a_t = G_t m_{t-1} and R_t = G_t C_{t-1} G_t' + W.
Predictive at time t:
   y_t | y^{t-1} ~ N(f_t, Q_t)
with f_t = F_t' a_t and Q_t = F_t' R_t F_t + V.
Posterior at time t:
   p(θ_t | y^t) = p(θ_t | y_t, y^{t-1}) ∝ p(y_t | θ_t) p(θ_t | y^{t-1})
The resulting posterior distribution is
   θ_t | y^t ~ N(m_t, C_t)
with
   m_t = a_t + A_t e_t    C_t = R_t - A_t A_t' Q_t
   A_t = R_t F_t / Q_t    e_t = y_t - f_t
By induction, these distributions are valid for all times.
Smoothing
In dynamic models, the smoothed distribution π(θ | y^n) is more commonly used:
   π(θ | y^n) = p(θ_n | y^n) ∏_{t=1}^{n-1} p(θ_t | θ_{t+1}, ..., θ_n, y^n)
              = p(θ_n | y^n) ∏_{t=1}^{n-1} p(θ_t | θ_{t+1}, y^t)
Integrating with respect to (θ_1, ..., θ_{t-1}):
   π(θ_t, ..., θ_n | y^n) = p(θ_n | y^n) ∏_{k=t}^{n-1} p(θ_k | θ_{k+1}, y^t)
   π(θ_t, θ_{t+1} | y^n) = p(θ_{t+1} | y^n) p(θ_t | θ_{t+1}, y^t)
for t = 1, ..., n-1.
Marginal smoothed distributions
It can be shown that
   θ_t | y^n ~ N(m_t^n, C_t^n)
where
   m_t^n = m_t + C_t G_{t+1}' R_{t+1}^{-1} (m_{t+1}^n - a_{t+1})
   C_t^n = C_t - C_t G_{t+1}' R_{t+1}^{-1} (R_{t+1} - C_{t+1}^n) R_{t+1}^{-1} G_{t+1} C_t
Individual sampling from π(θ_t | θ_{-t}, y^n)
Let θ_{-t} = (θ_1, ..., θ_{t-1}, θ_{t+1}, ..., θ_n).
For t = 2, ..., n-1,
   π(θ_t | θ_{-t}, y^n) ∝ p(y_t | θ_t) p(θ_{t+1} | θ_t) p(θ_t | θ_{t-1})
                        ∝ f_N(y_t; F_t' θ_t, V) f_N(θ_{t+1}; G_{t+1} θ_t, W) f_N(θ_t; G_t θ_{t-1}, W)
                        = f_N(θ_t; b_t, B_t)
where
   b_t = B_t (σ^{-2} F_t y_t + G_{t+1}' W^{-1} θ_{t+1} + W^{-1} G_t θ_{t-1})
   B_t = (σ^{-2} F_t F_t' + G_{t+1}' W^{-1} G_{t+1} + W^{-1})^{-1}
for t = 2, ..., n-1.
For t = 1 and t = n,
   π(θ_1 | θ_{-1}, y^n) = f_N(θ_1; b_1, B_1)
and
   π(θ_n | θ_{-n}, y^n) = f_N(θ_n; b_n, B_n)
where
   b_1 = B_1 (σ_1^{-2} F_1 y_1 + G_2' W^{-1} θ_2 + R^{-1} a)
   B_1 = (σ_1^{-2} F_1 F_1' + G_2' W^{-1} G_2 + R^{-1})^{-1}
   b_n = B_n (σ_n^{-2} F_n y_n + W^{-1} G_n θ_{n-1})
   B_n = (σ_n^{-2} F_n F_n' + W^{-1})^{-1}
The FFBS algorithm: sampling from π(θ | y^n)
For t = 1, ..., n-1, it can be shown that
   (θ_t | θ_{t+1}, V, W, y^t)
is normally distributed with mean
   (G_{t+1}' W^{-1} G_{t+1} + C_t^{-1})^{-1} (G_{t+1}' W^{-1} θ_{t+1} + C_t^{-1} m_t)
and variance (G_{t+1}' W^{-1} G_{t+1} + C_t^{-1})^{-1}.
Sampling from π(θ | y^n) can be performed by
   sampling θ_n from N(m_n, C_n) and then
   sampling θ_t from (θ_t | θ_{t+1}, V, W, y^t), for t = n-1, ..., 1.
The above scheme is known as the forward filtering, backward sampling (FFBS) algorithm (Carter and Kohn, 1994, and Frühwirth-Schnatter, 1994).
Sampling from (V, W | y^n, θ)
Assume that
   φ = V^{-1} ~ Gamma(n_σ/2, n_σ S_σ/2)
   Φ = W^{-1} ~ Wishart(n_W/2, n_W S_W/2)
Full conditionals
   π(φ | θ, y^n) ∝ ∏_{t=1}^n f_N(y_t; F_t' θ_t, φ^{-1}) f_G(φ; n_σ/2, n_σ S_σ/2)
                ∝ f_G(φ; n_σ*/2, n_σ* S_σ*/2)
   π(Φ | θ, y^n) ∝ ∏_{t=2}^n f_N(θ_t; G_t θ_{t-1}, Φ^{-1}) f_W(Φ; n_W/2, n_W S_W/2)
                ∝ f_W(Φ; n_W*/2, n_W* S_W*/2)
where n_σ* = n_σ + n, n_W* = n_W + n - 1,
   n_σ* S_σ* = n_σ S_σ + Σ_{t=1}^n (y_t - F_t' θ_t)²
   n_W* S_W* = n_W S_W + Σ_{t=2}^n (θ_t - G_t θ_{t-1})(θ_t - G_t θ_{t-1})'
Gibbs sampler for (θ, V, W)
Sample V^{-1} from its full conditional f_G(φ; n_σ*/2, n_σ* S_σ*/2).
Sample W^{-1} from its full conditional f_W(Φ; n_W*/2, n_W* S_W*/2).
Sample θ from its full conditional π(θ | y^n, V, W) by the FFBS algorithm.
Likelihood for (V, W)
It is easy to see that
   p(y^n | V, W) = ∏_{t=1}^n f_N(y_t | f_t, Q_t)
which is the integrated likelihood of (V, W).
Jointly sampling (θ, V, W)
(θ, V, W) can be sampled jointly by
   sampling (V, W) from its marginal posterior
      π(V, W | y^n) ∝ l(V, W | y^n) π(V, W)
   by a rejection or Metropolis-Hastings step;
   sampling θ from its full conditional π(θ | y^n, V, W) by the FFBS algorithm.
Jointly sampling (θ, V, W) avoids MCMC convergence problems associated with the posterior correlation between model parameters (Gamerman and Moreira, 2002).
Example iii. Comparing sampling schemes
Based on Gamerman, Reis and Salazar (2006) Comparison of sampling schemes for dynamic linear models. International Statistical Review, 74, 203-214.
First order DLM with V = 1:
   y_t = θ_t + ε_t,   ε_t ~ N(0, 1)
   θ_t = θ_{t-1} + ω_t,   ω_t ~ N(0, W),
with (n, W) ∈ {(100, .01), (100, .5), (1000, .01), (1000, .5)}.
400 runs: 100 replications per combination.
Priors: θ_1 ~ N(0, 10) and V and W have inverse Gammas with means set at true values and coefficients of variation set at 10.
Posterior inference: based on 20,000 MCMC draws.
Schemes
Scheme I: Sampling θ_1, ..., θ_n, V and W from their conditionals.
Scheme II: Sampling θ, V and W from their conditionals.
Scheme III: Jointly sampling (θ, V, W).

Scheme   n=100   n=1000
II       1.7     1.9
III      1.9     7.2

Computing times relative to scheme I. For instance, when n = 100 it takes almost 2 times as much to run scheme III.

                    Scheme
W      n       I      II      III
0.01   1000    242    8938    2983
0.01   100     3283   13685   12263
0.50   1000    409    3043    963
0.50   100     1694   3404    923

Sample averages (based on the 100 replications) of effective sample size n_eff based on V.
Example iv. Spatial dynamic factor model
Based on Lopes, Salazar and Gamerman (2008) Spatial Dynamic Factor Models. Bayesian Analysis, 3, 759-792.
Let us consider a simple version of their model
   y_t | f_t, θ ~ N(β f_t, Σ)    (7)
   f_t | f_{t-1}, θ ~ N(Γ f_{t-1}, Λ)    (8)
where θ = (β, Σ, Γ, Λ),
y_t = (y_{1t}, ..., y_{Nt})' is the N-dimensional vector of observations (locations s_1, ..., s_N and times t = 1, ..., T),
f_t is an m-dimensional vector of common factors, for m < N, and
β = (β^(1), ..., β^(m)) is the N × m matrix of factor loadings.
Spatial loadings
The j-th column of β, denoted by β^(j) = (β^(j)(s_1), ..., β^(j)(s_N))', for j = 1, ..., m, is modeled as a conditionally independent, distance-based Gaussian random field (GRF), i.e.
   β^(j) ~ GRF(0, τ_j² ρ_j(·)) ≡ N(0, τ_j² R_j),    (9)
where the (l, k)-element of R_j is given by r_lk = ρ_j(|s_l - s_k|), l, k = 1, ..., N, for suitably defined correlation functions ρ_j(·), j = 1, ..., m.
Matérn spatial autocorrelation function
   ρ(d) = 2^{1-φ_2} Γ(φ_2)^{-1} (d/φ_1)^{φ_2} K_{φ_2}(d/φ_1)
where K_{φ_2}(·) is the modified Bessel function of the second kind and of order φ_2.
Key references on spatial statistics are Cressie (1993) and Stein (1999).
Forecasting
One is usually interested in learning about the h-steps-ahead predictive density p(y_{T+h} | y), i.e.
   p(y_{T+h} | y) = ∫ p(y_{T+h} | f_{T+h}, θ) p(f_{T+h} | f_T, θ) p(f_T, θ | y) df_{T+h} df_T dθ    (10)
where
   (y_{T+h} | f_{T+h}, θ) ~ N(β f_{T+h}, Σ)
   (f_{T+h} | f_T, θ) ~ N(μ_h, V_h)
and
   μ_h = Γ^h f_T and V_h = Σ_{k=1}^h Γ^{k-1} Λ (Γ^{k-1})'
for h > 0.
p(y_{T+h} | y) can be easily approximated by Monte Carlo integration.
Sampling the common factors
Joint distribution p(F | y, θ) = ∏_{t=0}^{T-1} p(f_t | f_{t+1}, D_t) p(f_T | D_T), where D_t = {y_1, ..., y_t}, t = 1, ..., T, and D_0 represents the initial information.
Forward filtering: Starting with f_{t-1} | D_{t-1} ~ N(m_{t-1}, C_{t-1}), it can be shown that
   f_t | D_t ~ N(m_t, C_t)
where m_t = a_t + A_t(y_t - ŷ_t), C_t = R_t - A_t Q_t A_t', a_t = Γ m_{t-1}, R_t = Γ C_{t-1} Γ' + Λ, ŷ_t = β a_t, Q_t = β R_t β' + Σ and A_t = R_t β' Q_t^{-1}, for t = 1, ..., T.
Backward sampling: f_T is sampled from p(f_T | D_T). For t ≤ T-1, f_t is sampled from p(f_t | f_{t+1}, D_t) = f_N(f_t; ã_t, C̃_t), where ã_t = m_t + B_t(f_{t+1} - a_{t+1}), C̃_t = C_t - B_t R_{t+1} B_t' and B_t = C_t Γ' R_{t+1}^{-1}.
SO2 in Eastern US
[Figure: map of SO2 monitoring stations in the eastern US, showing observed stations, left-out stations (SPD and BWR) and the study area.]
Spatial factor loadings
[Figure: posterior summaries of the spatial factor loadings at the monitoring stations.]
Dynamic factors
[Figure: posterior means of the common dynamic factors, 1998-2004.]
Spatial interpolation (stations SPD and BWR)
[Figure: observed SO2, posterior means and 95% C.I. for the left-out stations SPD and BWR.]
Forecasting
[Figure: out-of-sample SO2 forecasts for stations MCK, QAK, BEL and CAT under the SSDFM, SGSTM and SGFM models, with observed values and 95% C.I.]
LECTURE 6
NONNORMAL AND NONLINEAR DYNAMIC MODELS
Generalized linear models
An extension of regression models still preserving linearity and influence of covariates through the mean response is given by generalized linear models.
The observations remain independent but now have distributions in the exponential family.
The model is
   f(y_i | θ_i) = a(y_i) exp{y_i θ_i + b(θ_i)}
   E(y_i | θ_i) = b'(θ_i) = μ_i
   g(μ_i) = η_i
   η_i = x_{i1} β_1 + ... + x_{id} β_d
for i = 1, ..., n, where the link function g is differentiable.
Binomial regression
Consider y_i | π_i ~ bin(n_i, π_i), i = 1, ..., n, and assume that the probabilities π_i are determined by the values of a variable x.
The π_i lie between 0 and 1 and can be associated to a distribution function.
One possibility is the normal distribution and in this case
   π_i = Φ(α + β x_i), i = 1, ..., n
where Φ is the distribution function of the N(0, 1) distribution and α and β are constants.
The binomial distribution belongs to the exponential family and the link function g_1 = Φ^{-1} is differentiable.
The structure of a generalized linear model is completed with the linear predictor
   η_i = α + β x_i, i = 1, ..., n.
Other possible links include the logistic and complementary log-log transformations
   g_2(π_i) = logit(π_i) = log[π_i / (1 - π_i)]
   g_3(π_i) = log{log[1 / (1 - π_i)]}
associated respectively to the logistic and extreme-value distributions.
Note that g_1, g_2 and g_3 take numbers from [0, 1] to the real line.
Example i. O-ring data
Christensen (1997) analyzed binary observations of O-ring failures y_i (1 = failure) in relation to temperature t_i (in Fahrenheit).
y = (1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0)
t = (53, 57, 58, 63, 66, 67, 67, 67, 68, 69, 70, 70, 70, 70, 72, 73, 75, 75, 76, 76, 78, 79, 81)
[Figure: O-ring failure indicator versus temperature.]
Bernoulli model: y_i | π_i ~ Bern(π_i), for i = 1, ..., n = 23.
Link function: g(π_i) = α + β(t_i - t̄)
Prior: β ~ N(0, V_β)
Other constants: α = 1.26 and t̄ = 69.6.
Links
   Logit: g_1(π) = log[π / (1 - π)]
   Probit: g_2(π) = Φ^{-1}(π)
   Log-log: g_3(π) = log{log[1 / (1 - π)]}
Prior variances
   V_β = 1.0
   V_β = 10.0
   V_β = 100.0
Posterior model probability (PMP)
Assume that Pr(M_i) = 1/9 for all i.
V_β    Link      PMP
1      Log-log   0.361
1      Logit     0.311
10     Log-log   0.115
10     Logit     0.101
100    Log-log   0.037
100    Logit     0.033
1      Probit    0.030
10     Probit    0.010
100    Probit    0.003
Pr(y = 1 | t)
[Figure: fitted probability of O-ring failure as a function of temperature under Bayesian model averaging (black), logit models (red), probit models (green) and complementary log-log models (blue).]
Dynamic generalized linear model
Dynamic generalized models were introduced by West, Harrison and Migon (1985).
The model is
   f(y_t | θ_t) = a(y_t) exp{y_t θ_t + b(θ_t)}
   E(y_t | θ_t) = μ_t
   g(μ_t) = F_t' λ_t
   λ_t = G_t λ_{t-1} + w_t
with w_t ~ N(0, W_t) and the link function g again differentiable.
The model is completed with a prior λ_1 ~ N(a, R).
It combines the prior specification of normal dynamic models with the observational structure of generalized linear models.
Dynamic binomial and Poisson regressions
Dynamic logistic regression: a series of binomial observations y_t with respective success probabilities π_t dynamically related to explanatory variables x = (x_1, ..., x_d)' through the logistic link logit(π_t) = x' β_t.
Poisson counts with means λ_t dynamically related through multiplicative perturbations λ_t = λ_{t-1} w_t*.
After a logarithmic transformation, one obtains log λ_t = log λ_{t-1} + w_t with w_t = log w_t*.
Posterior inference via MCMC
Assuming that the variances of the system disturbances are constant, the model parameters are given by the state parameters λ = (λ_1, ..., λ_n)' and the system variance W = Φ^{-1}.
The model is specified with the observation and system equations and completed with the independent prior distributions λ_1 ~ N(a, R) and Φ ~ W(n_W/2, n_W S_W/2).
The posterior distribution is given by
   π(λ, Φ) ∝ ∏_{t=1}^n f(y_t | λ_t) ∏_{t=2}^n p(λ_t | λ_{t-1}, Φ) p(λ_1) p(Φ).
Full conditional for Φ
   π(Φ) ∝ ∏_{t=2}^n p(λ_t | λ_{t-1}, Φ) p(Φ)
        ∝ ∏_{t=2}^n |Φ|^{1/2} exp{-(1/2) tr[Φ (λ_t - G_t λ_{t-1})(λ_t - G_t λ_{t-1})']} |Φ|^{[n_W-(p+1)]/2} exp{-(1/2) tr(n_W S_W Φ)}
        ∝ |Φ|^{[n_W*-(d+1)]/2} exp{-(1/2) tr[(n_W* S_W*) Φ]},
that is, the density of the W(n_W*/2, n_W* S_W*/2) distribution with
   n_W* = n_W + n - 1
   n_W* S_W* = n_W S_W + Σ_{t=2}^n (λ_t - G_t λ_{t-1})(λ_t - G_t λ_{t-1})'
Full conditionals for λ
For the block λ:
   π(λ) ∝ ∏_{t=1}^n f(y_t | λ_t) ∏_{t=2}^n p(λ_t | λ_{t-1}, Φ) p(λ_1)
        ∝ exp{ Σ_{t=1}^n [y_t λ_t + b(λ_t)] - (1/2) Σ_{t=2}^n (λ_t - G_t λ_{t-1})' Φ (λ_t - G_t λ_{t-1}) }.
For the block λ_t, t = 2, ..., n-1:
   π_t(λ_t) ∝ f(y_t | λ_t) p(λ_t | λ_{t-1}, Φ) p(λ_{t+1} | λ_t, Φ)
            ∝ exp{y_t λ_t + b(λ_t)} exp{ -(1/2) [(λ_t - G_t λ_{t-1})' Φ (λ_t - G_t λ_{t-1}) + (λ_{t+1} - G_{t+1} λ_t)' Φ (λ_{t+1} - G_{t+1} λ_t)] }.
Similar results follow for the blocks λ_1 and λ_n.
Sampling schemes
Knorr-Held (1997) suggested the use of independence chains with prior proposals.
Shephard and Pitt (1997) used independence chains with proposals based on both prior and a normal approximation to the likelihood.
Ravines (2005) used independence normal proposals for the λ block with moments given by the approximation of West, Harrison and Migon (1985).
Singh and Roberts (1982) and Fahrmeir and Wagenpfeil (1997) extended to the dynamic setting the method of mode evaluation for static regression.
An alternative previously discussed is the reparametrization in terms of the system disturbances w_t (Gamerman, 1998).
Example ii. Generalized Spatial Dynamic Factor Model
Let s_1, ..., s_N be the N spatial locations in the region of study S, where S ⊂ R², and y_t = (y_t1, ..., y_tN) be the N-dimensional vector of measurements at time t, for t = 1, ..., T.
The GSDFM of Lopes, Gamerman and Salazar (2009) is a hierarchical model with first-level measurement equation for conditionally independent univariate observations y_ti in the one-parameter natural exponential family, i.e.
   p(y_ti | θ_ti, ψ) = exp{ψ[y_ti θ_ti - b(θ_ti)] + c(y_ti, ψ)}
where θ_ti is the natural parameter and ψ is a dispersion parameter.
The mean and variance of y_ti are, respectively, b'(θ_ti) and b''(θ_ti)/ψ.
The natural parameter θ_ti is deterministically defined by a linear combination of spatial and temporal components through the link function, i.e. θ_ti = ι(μ_ti).
The GSDFM is then completed by specifying the spatio-temporal dependence of the μ_ti's.
Temporal and spatial variations
The temporal behavior of y_t is modeled by two levels of hierarchy
   μ_t = α_t + β f_t
   f_t | f_{t-1} ~ N(Γ f_{t-1}, Λ)
where α_t is the mean level of the space-time process, f_t contains m common factors, f_0 ~ N(m_0, C_0) and Λ is the evolutional variance.
The spatial variation of y_t is modeled via the columns of the factor loadings matrix β, i.e. β_j = (β_1j, ..., β_Nj)', for j = 1, ..., m, is modeled as
   β_j ~ N(μ_{βj}, τ_j² R_j)
where μ_{βj} is an N-dimensional mean vector. The (l, k)-element of R_j = R(φ_j) is given by r_lk = ρ_j(|s_l - s_k|), l, k = 1, ..., N, for suitably defined correlation functions ρ_j(·).
Example ii. Modeling rainfall in Minas Gerais, Brazil
T = 365 daily occurrences of rain in 2005 measured at 17 meteorological stations in the state of Minas Gerais, Brazil.
[Figure: map of the stations; the solid line contours the state. Pirapora (PI) and Caratinga (CA) stations (+) were retained for a spatial interpolation exercise.]
Counts
[Figure: monthly counts of rainy days for north and northeast stations (top left), center and southeast stations (top right) and south and southwest stations (bottom).]
Posterior summary
[Figure: mean probability maps for two typical days in 2005 (January 6th and March 2nd) for gauged stations (dots) and ungauged stations (+).]
Posterior summary
[Figure: daily posterior probability of rain for the ungauged Pirapora (left) and Caratinga (right) stations in 2005. Dots are observed rain indicators, solid lines are posterior mean probabilities and dashed lines are 95% credibility intervals.]
Example iii: Nonlinear model
Let y_t, for t = 1, ..., n, be generated by the following nonlinear dynamic model
   (y_t | x_t, θ) ~ N(x_t²/20, σ²)
   (x_t | x_{t-1}, θ) ~ N(G_{x_{t-1}}' ψ, τ²)
   x_0 ~ N(m_0, C_0)
where G_{x_t}' = (x_t, x_t/(1 + x_t²), cos(1.2 t)), ψ = (α, β, γ)' and θ = (ψ', σ², τ²).
Prior distribution
   σ² ~ IG(n_0/2, n_0 σ_0²/2)
   ψ | τ² ~ N(ψ_0, τ² V_0)
   τ² ~ IG(ν_0/2, ν_0 τ_0²/2)
Sampling (θ | x_{0:n}, y^n)
Let y^n = (y_1, ..., y_n) and x_{0:n} = (x_0, ..., x_n)'.
It follows that
   (ψ, τ² | x_{0:n}) ~ N(ψ_1, τ² V_1) IG(ν_1/2, ν_1 τ_1²/2)
   (σ² | y^n, x^n) ~ IG(n_1/2, n_1 σ_1²/2)
where ν_1 = ν_0 + n, n_1 = n_0 + n,
   Z = (G_{x_0}, ..., G_{x_{n-1}})'
   V_1^{-1} = V_0^{-1} + Z'Z
   V_1^{-1} ψ_1 = V_0^{-1} ψ_0 + Z' x_{1:n}
   ν_1 τ_1² = ν_0 τ_0² + (x_{1:n} - Z ψ_1)'(x_{1:n} - Z ψ_1) + (ψ_1 - ψ_0)' V_0^{-1} (ψ_1 - ψ_0)
   n_1 σ_1² = n_0 σ_0² + Σ_{t=1}^n (y_t - x_t²/20)²
Sampling x_1, ..., x_n
Let x_{-t} = (x_0, ..., x_{t-1}, x_{t+1}, ..., x_n), for t = 1, ..., n-1, x_{-0} = x_{1:n}, x_{-n} = x_{0:(n-1)} and y_0 = ∅.
For t = 0:
   p(x_0 | x_{-0}, y_0, θ) ∝ f_N(x_0; m_0, C_0) f_N(x_1; G_{x_0}' ψ, τ²)
For t = 1, ..., n-1:
   p(x_t | x_{-t}, y^t, θ) ∝ f_N(y_t; x_t²/20, σ²) f_N(x_t; G_{x_{t-1}}' ψ, τ²) f_N(x_{t+1}; G_{x_t}' ψ, τ²)
For t = n:
   p(x_n | x_{-n}, y^n, θ) ∝ f_N(y_n; x_n²/20, σ²) f_N(x_n; G_{x_{n-1}}' ψ, τ²)
Metropolis-Hastings algorithm
A simple random walk Metropolis algorithm with tuning variance v_x² would work as follows. For t = 0, ..., n:
1. Current state: x_t^(j)
2. Sample x_t* from N(x_t^(j), v_x²)
3. Compute the acceptance probability
   α = min{ 1, p(x_t* | x_{-t}, y^t, θ) / p(x_t^(j) | x_{-t}, y^t, θ) }
4. New state: x_t^(j+1) = x_t* with probability α, and x_t^(j+1) = x_t^(j) with probability 1 - α.
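A sketch of one sweep of this sampler in Python (illustrative only; psi denotes the coefficient vector (α, β, γ) and the end-point cases t = 0 and t = n follow the densities above).

```python
import numpy as np
from scipy.stats import norm

def G(x, t, psi):
    """Evolution mean G'_{x} psi for target time t, evaluated at x = x_{t-1}."""
    return psi[0] * x + psi[1] * x / (1 + x ** 2) + psi[2] * np.cos(1.2 * t)

def rw_sweep_states(x, y, psi, sig2, tau2, m0, C0, vx=0.1, rng=None):
    """One random-walk Metropolis sweep over x_0, ..., x_n (sketch)."""
    rng = rng or np.random.default_rng()
    n = len(y)                         # x has length n + 1: (x_0, ..., x_n)
    for t in range(n + 1):
        xs = x[t] + vx * rng.standard_normal()
        def logtarget(xt):
            lp = 0.0
            if t == 0:
                lp += norm.logpdf(xt, m0, np.sqrt(C0))          # prior on x_0
            else:
                lp += norm.logpdf(y[t - 1], xt ** 2 / 20.0, np.sqrt(sig2))
                lp += norm.logpdf(xt, G(x[t - 1], t, psi), np.sqrt(tau2))
            if t < n:                                            # term from x_{t+1}
                lp += norm.logpdf(x[t + 1], G(xt, t + 1, psi), np.sqrt(tau2))
            return lp
        if np.log(rng.uniform()) < logtarget(xs) - logtarget(x[t]):
            x[t] = xs
    return x
```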
Simulation set up
We simulated n = 100 observations based on ψ = (0.5, 25, 8)', σ² = 1, τ² = 10 and x_0 = 0.1.
[Figure: simulated y_t and x_t over time, and scatter plots of y_t against x_t and of x_t against x_{t-1}.]
Prior hyperparameters
   x_0 ~ N(m_0, C_0), with m_0 = 0.0 and C_0 = 10
   ψ | τ² ~ N(ψ_0, τ² V_0), with ψ_0 = (0.5, 25, 8)' and V_0 = diag(0.0025, 0.1, 0.04)
   τ² ~ IG(ν_0/2, ν_0 τ_0²/2), with ν_0 = 6 and τ_0² = 20/3, such that E(τ²) = √V(τ²) = 10
   σ² ~ IG(n_0/2, n_0 σ_0²/2), with n_0 = 6 and σ_0² = 2/3, such that E(σ²) = √V(σ²) = 1
MCMC setup
Metropolis-Hastings tuning parameter: v_x² = (0.1)²
Burn-in period, step and MCMC sample size: M_0 = 1,000, L = 20 and M = 950 (20,000 draws in total)
Initial values: ψ = (0.5, 25, 8)', τ² = 10, σ² = 1 and x_{0:n} = x_{0:n}^{true}
Parameters
[Figure: trace plots, autocorrelations and marginal posterior histograms for α, β, γ, τ² and σ².]
States
[Figure: posterior summaries of the states over time and their autocorrelations.]
LECTURE 7
STOCHASTIC VOLATILITY MODELS
Stochastic volatility model
The canonical stochastic volatility model (SV-AR(1), hereafter) is
   y_t = e^{h_t/2} ε_t
   h_t = μ + φ h_{t-1} + τ η_t
where ε_t and η_t are N(0, 1) shocks with E(ε_t η_{t+h}) = 0 for all h and E(ε_t ε_{t+l}) = E(η_t η_{t+l}) = 0 for all l ≠ 0.
τ²: volatility of the log-volatility.
If |φ| < 1 then h_t is a stationary process.
Let y^n = (y_1, ..., y_n)', h^n = (h_1, ..., h_n)' and h_{a:b} = (h_a, ..., h_b)'.
Prior information
Uncertainty about the initial log volatility is h_0 ~ N(m_0, C_0).
Let β = (μ, φ)'; the prior distribution of (β, τ²) is normal-inverse gamma, i.e. (β, τ²) ~ NIG(β_0, V_0, ν_0, s_0²):
   β | τ² ~ N(β_0, τ² V_0)
   τ² ~ IG(ν_0/2, ν_0 s_0²/2)
For example, if ν_0 = 10 and s_0² = 0.018 then
   E(τ²) = (ν_0 s_0²/2) / (ν_0/2 - 1) = 0.0225
   Var(τ²) = (ν_0 s_0²/2)² / [(ν_0/2 - 1)² (ν_0/2 - 2)] = (0.013)²
Hyperparameters: m_0, C_0, β_0, V_0, ν_0 and s_0².
Posterior inference
The SV-AR(1) is a dynamic model and posterior inference via MCMC for the latent log-volatility states h_t can be performed in at least two ways.
Let h_{-t} = (h_{0:(t-1)}, h_{(t+1):n}), for t = 1, ..., n-1, and h_{-n} = h_{1:(n-1)}.
Individual moves for h_t:
   (μ, φ, τ² | h^n, y^n)
   (h_t | h_{-t}, μ, φ, τ², y^n), for t = 1, ..., n
Block move for h^n:
   (μ, φ, τ² | h^n, y^n)
   (h^n | μ, φ, τ², y^n)
Sampling (β, τ² | h^n, y^n)
Conditional on h_{0:n}, the posterior distribution of (β, τ²) is also normal-inverse gamma:
   (β, τ² | y^n, h_{0:n}) ~ NIG(β_1, V_1, ν_1, s_1²)
where X = (1_n, h_{0:(n-1)}), ν_1 = ν_0 + n,
   V_1^{-1} = V_0^{-1} + X'X
   V_1^{-1} β_1 = V_0^{-1} β_0 + X' h_{1:n}
   ν_1 s_1² = ν_0 s_0² + (h_{1:n} - X β_1)'(h_{1:n} - X β_1) + (β_1 - β_0)' V_0^{-1} (β_1 - β_0)
Sampling (h_0 | μ, φ, τ², h_1)
Combining
   h_0 ~ N(m_0, C_0)
and
   h_1 | h_0 ~ N(μ + φ h_0, τ²)
leads to (by Bayes' theorem)
   h_0 | h_1 ~ N(m_1, C_1)
where
   C_1^{-1} m_1 = C_0^{-1} m_0 + φ τ^{-2} (h_1 - μ)
   C_1^{-1} = C_0^{-1} + φ² τ^{-2}
Conditional prior distribution of h_t
Given h_{t-1}, μ, φ and τ², it can be shown that, for t = 1, ..., n-1,
   (h_t, h_{t+1})' | h_{t-1} ~ N( (μ + φ h_{t-1}, μ(1 + φ) + φ² h_{t-1})' , τ² [1, φ; φ, 1 + φ²] )
so E(h_t | h_{t-1}, h_{t+1}, μ, φ, τ²) and V(h_t | h_{t-1}, h_{t+1}, μ, φ, τ²) are
   μ_t = [(1 - φ)/(1 + φ²)] μ + [φ/(1 + φ²)] (h_{t-1} + h_{t+1})
   ν² = τ² (1 + φ²)^{-1}
respectively. Therefore,
   (h_t | h_{t-1}, h_{t+1}, μ, φ, τ²) ~ N(μ_t, ν²),   t = 1, ..., n-1
   (h_n | h_{n-1}, μ, φ, τ²) ~ N(μ_n, τ²)
where μ_n = μ + φ h_{n-1}.
Sampling h_t via random walk Metropolis
Let ν_t² = ν² for t = 1, ..., n-1 and ν_n² = τ². Then
   p(h_t | h_{-t}, y^n, μ, φ, τ²) ∝ f_N(h_t; μ_t, ν_t²) f_N(y_t; 0, e^{h_t})
for t = 1, ..., n.
A simple random walk Metropolis algorithm with tuning variance v_h² would work as follows.
For t = 1, ..., n:
1. Current state: h_t^(j)
2. Sample h_t* from N(h_t^(j), v_h²)
3. Compute the acceptance probability
   α = min{ 1, [f_N(h_t*; μ_t, ν_t²) f_N(y_t; 0, e^{h_t*})] / [f_N(h_t^(j); μ_t, ν_t²) f_N(y_t; 0, e^{h_t^(j)})] }
4. New state: h_t^(j+1) = h_t* with probability α, and h_t^(j+1) = h_t^(j) with probability 1 - α.
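One sweep of this single-move sampler in Python (a sketch under the notation above, not the course code); the conditional prior moments μ_t and ν_t² are recomputed from (μ, φ, τ²) and the neighbouring log-volatilities.

```python
import numpy as np
from scipy.stats import norm

def sv_rw_sweep(h, y, h0, mu, phi, tau2, vh=0.1, rng=None):
    """One random-walk Metropolis sweep over h_1, ..., h_n for the SV-AR(1) (sketch)."""
    rng = rng or np.random.default_rng()
    n = len(y)
    for t in range(n):
        hm1 = h0 if t == 0 else h[t - 1]
        if t < n - 1:      # conditional prior given both neighbours
            mu_t = (1 - phi) / (1 + phi ** 2) * mu + phi / (1 + phi ** 2) * (hm1 + h[t + 1])
            nu2_t = tau2 / (1 + phi ** 2)
        else:              # last state: only h_{n-1} available
            mu_t = mu + phi * hm1
            nu2_t = tau2
        hs = h[t] + vh * rng.standard_normal()
        def logtarget(ht):
            return (norm.logpdf(ht, mu_t, np.sqrt(nu2_t))
                    + norm.logpdf(y[t], 0.0, np.exp(ht / 2.0)))   # var(y_t) = e^{h_t}
        if np.log(rng.uniform()) < logtarget(hs) - logtarget(h[t]):
            h[t] = hs
    return h
```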
Example i. Simulated data
Simulation setup: n = 500, h_0 = 0.0, μ = 0.00645, φ = 0.99 and τ² = 0.15².
Prior distribution: μ ~ N(0, 100), φ ~ N(0, 100), τ² ~ IG(10/2, 0.28125/2) and h_0 ~ N(0, 100).
MCMC setup: M_0 = 1,000 and M = 1,000.
Time series of y_t and exp{h_t}
[Figure: simulated returns y_t and volatilities exp{h_t} over time.]
Parameters
[Figure: trace plots, autocorrelations and marginal posterior histograms for μ, φ and τ².]
Autocorrelation of h_t
[Figure: autocorrelation functions of the sampled log-volatilities.]
Volatilities
Tuning parameter: v_h² = 0.01
[Figure: true volatilities, posterior means and 90% C.I.]
Sampling h_t via independent Metropolis-Hastings
The full conditional distribution of h_t is given by
   p(h_t | h_{-t}, y^n, μ, φ, τ²) ∝ p(h_t | h_{t-1}, h_{t+1}, μ, φ, τ²) p(y_t | h_t)
                                 = f_N(h_t; μ_t, ν²) f_N(y_t; 0, e^{h_t}).
Kim, Shephard and Chib (1998) explored the fact that
   log p(y_t | h_t) = const - (1/2) h_t - (y_t²/2) exp(-h_t)
and that a Taylor expansion of exp(-h_t) around μ_t leads to
   log p(y_t | h_t) ≈ const - (1/2) h_t - (y_t²/2) [e^{-μ_t} - (h_t - μ_t) e^{-μ_t}]
so the likelihood contribution can be approximated by
   g(h_t) = exp{ -(1/2) h_t (1 - y_t² e^{-μ_t}) }
Proposal distribution
Let ν_t² = ν² for t = 1, ..., n-1 and ν_n² = τ².
Then, combining f_N(h_t; μ_t, ν_t²) and g(h_t), for t = 1, ..., n, leads to the following proposal distribution:
   q(h_t | h_{-t}, y^n, μ, φ, τ²) ≡ N(h_t; μ̃_t, ν_t²)
where μ̃_t = μ_t + 0.5 ν_t² (y_t² e^{-μ_t} - 1).
Metropolis-Hastings algorithm
For t = 1, ..., n:
1. Current state: h_t^(j)
2. Sample h_t* from N(μ̃_t, ν_t²)
3. Compute the acceptance probability
   α = min{ 1, [f_N(h_t*; μ_t, ν_t²) f_N(y_t; 0, e^{h_t*})] / [f_N(h_t^(j); μ_t, ν_t²) f_N(y_t; 0, e^{h_t^(j)})] × [f_N(h_t^(j); μ̃_t, ν_t²) / f_N(h_t*; μ̃_t, ν_t²)] }
4. New state: h_t^(j+1) = h_t* with probability α, and h_t^(j+1) = h_t^(j) with probability 1 - α.
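Relative to the random-walk sweep, only the candidate draw and the proposal-density ratio change. A sketch of a single independence update (continuing the notation of the previous snippet; illustrative only):

```python
import numpy as np
from scipy.stats import norm

def sv_indep_step(ht, yt, mu_t, nu2_t, rng):
    """One independence M-H update of a single h_t with proposal N(mu_tilde, nu2_t) (sketch)."""
    mu_til = mu_t + 0.5 * nu2_t * (yt ** 2 * np.exp(-mu_t) - 1.0)   # Taylor-corrected mean
    hs = rng.normal(mu_til, np.sqrt(nu2_t))
    def logpi(h):   # full conditional (up to a constant)
        return norm.logpdf(h, mu_t, np.sqrt(nu2_t)) + norm.logpdf(yt, 0.0, np.exp(h / 2.0))
    logalpha = (logpi(hs) - logpi(ht)
                + norm.logpdf(ht, mu_til, np.sqrt(nu2_t))
                - norm.logpdf(hs, mu_til, np.sqrt(nu2_t)))
    return hs if np.log(rng.uniform()) < logalpha else ht
```

Sweeping this step over t = 1, ..., n with the μ_t and ν_t² defined above gives the independence scheme.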
Parameters
[Figure: trace plots, autocorrelations and marginal posterior histograms for μ, φ and τ² under the independence scheme.]
Autocorrelation of h_t
[Figure: autocorrelation functions of the sampled log-volatilities.]
Autocorrelations of h_t for both schemes
[Figure: autocorrelation functions under the random walk and independence schemes.]
Volatilities
[Figure: true volatilities, posterior means and 90% C.I. under the independence scheme.]
Sampling h^n - normal approximation and FFBS
Let y_t* = log y_t² and ε_t* = log ε_t².
The SV-AR(1) is a DLM with nonnormal observational errors, i.e.
   y_t* = h_t + ε_t*
   h_t = μ + φ h_{t-1} + τ η_t
where η_t ~ N(0, 1).
The distribution of ε_t* is log χ²_1, where
   E(ε_t*) = -1.27
   V(ε_t*) = π²/2 = 4.935
Normal approximation
Let ε_t* be approximated by N(α, σ²), with z_t = y_t* - α, α = -1.27 and σ² = π²/2.
Then
   z_t = h_t + v_t
   h_t = μ + φ h_{t-1} + τ η_t
is a simple DLM where v_t ~ N(0, σ²) and η_t ~ N(0, 1).
Sampling from
   p(h^n | μ, φ, τ², σ², z^n)
can be performed by the FFBS algorithm.
log χ²_1 and N(-1.27, π²/2)
[Figure: density of log χ²_1 and its single normal approximation.]
Parameters
[Figure: trace plots, autocorrelations and marginal posterior histograms for μ, φ and τ² under the normal approximation.]
Autocorrelation of h_t
[Figure: autocorrelation functions of the sampled log-volatilities.]
Autocorrelations of h_t for the three schemes
[Figure: autocorrelation functions under the random walk, independence and normal-approximation schemes.]
Volatilities
[Figure: true volatilities and posterior means under the normal-approximation scheme.]
Sampling h^n - mixtures of normals and FFBS
The log χ²_1 distribution can be approximated by
   Σ_{i=1}^7 q_i N(μ_i, σ_i²)
where

i    q_i       μ_i         σ_i²
1    0.00730   -11.40039   5.79596
2    0.10556   -5.24321    2.61369
3    0.00002   -9.83726    5.17950
4    0.04395   1.50746     0.16735
5    0.34001   -0.65098    0.64009
6    0.24566   0.52478     0.34023
7    0.25750   -2.35859    1.26261
log χ²_1 and Σ_{i=1}^7 q_i N(μ_i, σ_i²)
[Figure: density of log χ²_1 and the mixture of 7 normals approximation.]
Mixture of normals
Using an argument from the Bayesian analysis of mixtures of normals, let z_1, ..., z_n be unobservable (latent) indicator variables such that z_t ∈ {1, ..., 7} and Pr(z_t = i) = q_i, for i = 1, ..., 7.
Therefore, conditional on the z's, y_t is transformed into log y_t²,
   log y_t² = h_t + log ε_t²
   h_t = μ + φ h_{t-1} + τ η_t
which can be rewritten as a normal DLM:
   log y_t² = h_t + v_t,   v_t ~ N(μ_{z_t}, σ_{z_t}²)
   h_t = μ + φ h_{t-1} + w_t,   w_t ~ N(0, τ²)
where μ_{z_t} and σ_{z_t}² are provided in the previous table.
Then h^n is jointly sampled by using the FFBS algorithm.
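A sketch of the indicator-sampling step in Python (illustrative). Given the current log-volatilities, each z_t is drawn from its discrete full conditional; conditional on z, the model is the Gaussian DLM above and h^n can be drawn with any FFBS routine, such as the one sketched in Lecture 5.

```python
import numpy as np
from scipy.stats import norm

# Kim, Shephard and Chib (1998) 7-component approximation to log chi^2_1
q   = np.array([0.00730, 0.10556, 0.00002, 0.04395, 0.34001, 0.24566, 0.25750])
mus = np.array([-11.40039, -5.24321, -9.83726, 1.50746, -0.65098, 0.52478, -2.35859])
s2  = np.array([5.79596, 2.61369, 5.17950, 0.16735, 0.64009, 0.34023, 1.26261])

def sample_indicators(ystar, h, rng=None):
    """Draw z_t with Pr(z_t = i | .) proportional to q_i N(ystar_t; h_t + mu_i, s2_i) (sketch)."""
    rng = rng or np.random.default_rng()
    n = len(ystar)
    z = np.empty(n, dtype=int)
    for t in range(n):
        logw = np.log(q) + norm.logpdf(ystar[t], h[t] + mus, np.sqrt(s2))
        w = np.exp(logw - logw.max())
        z[t] = rng.choice(7, p=w / w.sum())      # 0-based index into the mixture table
    return z

# Usage sketch: ystar = np.log(y**2); z = sample_indicators(ystar, h);
# then run FFBS on the DLM  ystar_t = h_t + v_t,  v_t ~ N(mus[z_t], s2[z_t]).
```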
Parameters
[Figure: trace plots, autocorrelations and marginal posterior histograms for μ, φ and τ² under the mixture scheme.]
Autocorrelation of h_t
[Figure: autocorrelation functions of the sampled log-volatilities.]
Autocorrelations of h_t for the four schemes
[Figure: autocorrelation functions under the random walk, independence, normal-approximation and mixture schemes.]
Volatilities
[Figure: true volatilities, posterior means and 90% C.I. under the mixture scheme.]
Comparing the four schemes: parameters
[Figure: marginal posterior densities of μ, φ and τ² under the four schemes.]
Comparing the four schemes: volatilities
[Figure: true volatilities and posterior means under the random walk, independence, normal and mixture schemes.]
Example ii. LSTAR-SV models (Lopes and Salazar, 2006)
Lopes and Salazar (2006) adapt the LSTAR structure to model time-varying variances, SV-LSTAR(k):
   y_t | h_t ~ N(0, e^{h_t})
   h_t ~ N(x_t' α_1 + π(γ, c, h_{t-d}) x_t' α_2, τ²)
with x_t' = (1, h_{t-1}, ..., h_{t-k}) and α_i = (α_{0i}, α_{1i}, ..., α_{ki})', for i = 1, 2. The logistic transition function is
   π(γ, c, h_{t-d}) = 1/(1 + e^{-γ(h_{t-d} - c)}).
Particular case: k = d = 1
   E(h_t | h_{t-1}, c, γ) = [α_{01} + α_{02} π(γ, c, h_{t-1})] + [α_{11} + α_{12} π(γ, c, h_{t-1})] h_{t-1}
S&P500 stock index
MCMC: Conditional on h
n
, sampling
1
,
2
, c, and
2
are relatively easy. To sample the compo-
nents of h
n
, we use the single parameter move introduced by Jacquier, Polson and Rossi (1994).
North American Standard and Poors 500 index, daily observed from January 7th, 1986 to December
31st, 1997. A total of 3127 observations. We entertained six models for the stochastic volatility:
/
1
: AR(1)
/
2
: AR(2)
/
3
: LSTAR(1) with d = 1
/
4
: LSTAR(1) with d = 2
/
5
: LSTAR(2) with d = 1
/
6
: LSTAR(2) with d = 2
Model comparison

Models                    AIC     BIC     DIC
M_1: AR(1)                12795   31697   7223.1
M_2: AR(2)                12624   31532   7149.2
M_3: LSTAR(1, d = 1)      12240   31165   7101.1
M_4: LSTAR(1, d = 2)      12244   31170   7150.3
M_5: LSTAR(2, d = 1)      12569   31507   7102.4
M_6: LSTAR(2, d = 2)      12732   31670   7159.4

AIC: Akaike's information criterion, BIC: Schwarz's information criterion, and DIC: Deviance information criterion.
Posterior inference

Parameter   M_1       M_2       M_3        M_4        M_5        M_6
α_01        -0.060    -0.066    0.292      -0.154     -4.842     -6.081
            (0.184)   (0.241)   (0.579)    (0.126)    (0.802)    (1.282)
α_11        0.904     0.184     0.306      0.572      -0.713     -0.940
            (0.185)   (0.242)   (0.263)    (0.135)    (0.306)    (0.699)
α_21        -         0.715     -          -          -1.018     -1.099
                      (0.248)                         (0.118)    (0.336)
α_02        -         -         -0.685     0.133      4.783      6.036
                                (0.593)    (0.092)    (0.801)    (1.283)
α_12        -         -         0.794      0.237      0.913      1.091
                                (0.257)    (0.086)    (0.314)    (0.706)
α_22        -         -         -          -          1.748      1.892
                                                      (0.114)    (0.356)
γ           -         -         118.18     163.54     132.60     189.51
                                (16.924)   (23.912)   (10.147)   (0.000)
c           -         -         -1.589     0.022      -2.060     -2.125
                                (0.022)    (0.280)    (0.046)    (0.000)
τ²          0.135     0.234     0.316      0.552      0.214      0.166
            (0.020)   (0.044)   (0.066)    (0.218)    (0.035)    (0.026)

Example iii. Factor SV models (Lopes and Carvalho, 2007)
Factor stochastic volatility models appear in Pitt and Shephard (1999), Aguilar and West (2000), Lopes and Migon (2002) and Lopes and Carvalho (2007), to name just a few. The model is
   (y_t | f_t, β_t, Σ_t) ~ N(β_t f_t; Σ_t)
   (f_t | H_t) ~ N(0; H_t)
with Σ_t = diag(σ_1t², ..., σ_pt²) and H_t = diag(h_1t, ..., h_qt). Let λ_t = log(σ_t²) and η_t = log(H_t). Then
   (λ_t | λ_{t-1}, ·) ~ N(α_λ + Φ_λ λ_{t-1}, V)
   (η_t | η_{t-1}, ·) ~ N(α_η + Φ_η η_{t-1}, U)
Pitt and Shephard (1999): diagonal V and U and β_t = β.
Aguilar and West (2000): nondiagonal V and U and β_t = β.
Lopes and Migon (2002): diagonal V and U and time-varying β_t.
Lopes and Carvalho (2007): diagonal V and U, time-varying β_t and Markov switching for η_t.
Daily exchange rate returns
To illustrate the time-varying loadings extension we analyze the returns on weekday closing spot prices for six currencies relative to the US dollar (as in Aguilar and West, 2000):
German Mark (DEM)
British Pound (GBP)
Japanese Yen (JPY)
French Franc (FRF)
Canadian Dollar (CAD)
Spanish Peseta (ESP)
To keep the analysis comparable with Aguilar and West (2000) we only use the first 1000 observations, ranging from 1/1/1992 to 10/31/1995.
Time-varying factor loadings
[Figure: posterior means of the time-varying factor loadings for the six currencies.]
Time-varying variance decomposition
[Figure: time-varying variance decompositions for the six currencies.]
Importance of time-varying loadings
An interesting observation that highlights the importance of time-varying loadings in the context of this example is the change in the explanatory power of factor 1, the European factor, on the British Pound.
The final months of 1992 mark the withdrawal of Great Britain from the European Union exchange-rate agreement (ERM), a fact that is captured in our analysis by changes in the British loading on factor 1 and emphasized by the changes in the percentage of variation of the British Pound explained by factors 1 and 2.
If temporal changes on the factor loadings were not allowed, the only way the model could capture this change in Great Britain's monetary policy would be by a shock on the idiosyncratic variation of the Pound, reducing, in turn, the predictive ability of the latent factor structure.
Latin American stock returns
We now illustrate the FSV+MSSV generalization in an extended version of the dataset in Lopes and Migon (2002).
Returns on weekday closing spot prices in Latin America:
Brazilian Indice Bovespa (IBOVESPA)
Mexican Indice de Precios y Cotizaciones (MEXBOL)
Argentinean Indice Merval (MERVAL)
Chilean Indice de Precios Selectivos de Acciones (IPSA)
The series are observed daily from January 3rd, 1994 to May 26th, 2005 (2974 observations), which includes several international currency crises. These crises have directly impacted Latin American markets, generating higher levels of uncertainty and consequently higher levels of volatility.
Comparing FSV and FSV+MSSV models

                      FSV                          FSV+MSSV
IBOVESPA    -0.202   0.980   0.040     -0.284    -        0.971   0.047
MEXBOL      -0.440   0.959   0.065     -0.434    -        0.957   0.051
MERVAL      -0.409   0.959   0.083     -0.508    -        0.947   0.068
IPSA        -0.600   0.947   0.058     -0.765    -        0.932   0.071
Factor      -0.305   0.971   0.067     -0.951    -0.588   0.912   0.090
E(λ_t)               -10.517            -10.807  -6.682

Common factor volatilities
[Figure: posterior means of the common factor volatilities under the FSV and FSV+MSSV models.]
Under the model with k = 2 the persistence parameter is likely to be smaller, in line with conclusions drawn in Carvalho and Lopes (2007).
The model with k = 2 estimates two unconditional means for the log-volatility process that correspond to times of high and low risk in the market. More specifically, the posterior mean of the unconditional standard deviation of the common factor in the FSV model is roughly the same as the one obtained for the low volatility regime in the FSV+MSSV; however, the factor is in a high volatility state around 6% of the time, in which the unconditional standard deviation is about eight times higher.
This allows the volatilities to react faster once a regime switch is identified, which is highlighted by the previous figure that compares the FSV and FSV+MSSV common factor volatilities.
LECTURE 8
SEQUENTIAL MONTE CARLO METHODS
Nonnormal/nonlinear dynamic models
Most nonnormal and nonlinear dynamic models are defined by
   Observation equation: p(y_t | x_t, θ)
   System or evolution equation: p(x_t | x_{t-1}, θ)
   Initial distribution: p(x_0 | θ)
The fixed parameter vector that drives the state space model, θ, is kept known and omitted for now.
Evolution and updating
Let the information regarding x_{t-1} at time t-1 be summarized by p(x_{t-1} | y^{t-1}).
Then evolution and updating are represented by
   p(x_t | y^{t-1}) = ∫ p(x_t | x_{t-1}) p(x_{t-1} | y^{t-1}) dx_{t-1}
and
   p(x_t | y^t) ∝ p(y_t | x_t) p(x_t | y^{t-1}),
respectively.
These densities are usually unavailable in closed form.
The Bayesian bootstrap filter
Gordon, Salmond and Smith's (1993) seminal paper uses SIR ideas to obtain draws from p(x_t | y^t) based on draws from p(x_{t-1} | y^{t-1}).
SIR: the goal is to draw from p(x) based on draws from q(x).
1. Draw x_1*, ..., x_N* from q
2. Compute (unnormalized) weights ω_i = p(x_i*)/q(x_i*)
3. Draw x_1, ..., x_M from {x_1*, ..., x_N*} with weights {ω_1, ..., ω_N}
Sampling from the prior: If
   p(x) ∝ π(x) l(x)
where π(x) and l(x) are prior and likelihood, respectively, then a natural (but not necessarily good, actually usually bad!) choice is
   q(x) = π(x).
Under this choice, unnormalized weights are likelihoods, i.e.
   ω(x) ∝ l(x).
Example i. Revisiting the 1st order DLM
For illustration, let us reconsider the local level model where closed form solutions are promptly available. The model is
   y_t | x_t ~ N(x_t, σ²)
   x_t | x_{t-1} ~ N(x_{t-1}, τ²)
Posterior at t = 0: (x_0 | y^0) ~ N(m_0, C_0)
Prior at t = 1: (x_1 | y^0) ~ N(m_0, C_0 + τ²)
Likelihood at time t = 1: l(x_1; y_1) ∝ f_N(x_1; y_1, σ²)
Posterior at time t = 1: (x_1 | y^1) ~ N(m_1, C_1)
where A_1 = (C_0 + τ²)/(C_0 + τ² + σ²), m_1 = (1 - A_1) m_0 + A_1 y_1 and C_1 = A_1 σ².
Example i. One step update
Let {(x_0, ω_0)^(i)}_{i=1}^N summarize p(x_0 | y^0). For example,
   E(g(x_0) | y^0) ≈ (1/N) Σ_{i=1}^N ω_0^(i) g(x_0^(i)).
Then {(x_1, ω_0)^(i)}_{i=1}^N summarizes p(x_1 | y^0), where
   x_1^(i) ~ N(x_0^(i), τ²),   i = 1, ..., N,
are draws from the prior p(x_1 | y^0).
Finally, {(x_1, ω_1)^(i)}_{i=1}^N summarizes p(x_1 | y^1), where
   ω_1^(i) = ω_0^(i) f_N(y_1; x_1^(i), σ²),   i = 1, ..., N.
Example i. Sequential importance sampling (SIS)
Let {(x_{t-1}, ω_{t-1})^(i)}_{i=1}^N summarize p(x_{t-1} | y^{t-1}).
Then {(x_t, ω_{t-1})^(i)}_{i=1}^N summarizes p(x_t | y^{t-1}), where
   Propagation: x_t^(i) ~ N(x_{t-1}^(i), τ²),   i = 1, ..., N,
and {(x_t, ω_t)^(i)}_{i=1}^N summarizes p(x_t | y^t), where
   Reweighting: ω_t^(i) = ω_{t-1}^(i) f_N(y_t; x_t^(i), σ²),   i = 1, ..., N.
Effective sample size
Liu (1996) proposed using the following measure of degeneracy of an algorithm:
   N_eff,t = 1 / Σ_{i=1}^N (w_t^(i))²
where the w_t's are normalized weights, i.e. w_t^(i) = ω_t^(i) / Σ_{j=1}^N ω_t^(j).
If w_t^(i) = 1/N (equally balanced weights), then N_eff,t = N.
If w_t^(j) = 1 for only one j (particle degeneracy), then N_eff,t = 1.
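In code, the effective sample size is essentially one line (a sketch):

```python
import numpy as np

def ess(omega):
    """Effective sample size from unnormalized weights omega."""
    w = omega / omega.sum()        # normalized weights
    return 1.0 / np.sum(w ** 2)
```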
Example i. SIS with resampling (SISR)
SIS:
   {(x_{t-1}, ω_{t-1})^(i)}_{i=1}^N summarizes p(x_{t-1} | y^{t-1}).
   {(x̃_t, ω_{t-1})^(i)}_{i=1}^N summarizes p(x_t | y^{t-1}), where x̃_t^(i) ~ N(x_{t-1}^(i), τ²), for i = 1, ..., N.
   {(x̃_t, ω_t)^(i)}_{i=1}^N summarizes p(x_t | y^t), where ω_t^(i) = ω_{t-1}^(i) f_N(y_t; x̃_t^(i), σ²), for i = 1, ..., N.
Resampling:
   Draw x_t^(1), ..., x_t^(N) from the set {x̃_t^(1), ..., x̃_t^(N)} with weights {ω_t^(1), ..., ω_t^(N)}.
Therefore, {(x_t, ω_t)^(i)}_{i=1}^N summarizes p(x_t | y^t), where ω_t = 1/N.
SIS with Resampling (SISR)
Uniform weights is the goal!
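A sketch of SISR for the local level model in Python (illustrative; resampling is done at every step, so the returned particles carry uniform weights).

```python
import numpy as np
from scipy.stats import norm

def sisr_local_level(y, sig2, tau2, m0=0.0, C0=100.0, N=1000, rng=None):
    """Bootstrap filter (SISR) for the local level model -- sketch."""
    rng = rng or np.random.default_rng()
    n = len(y)
    x = rng.normal(m0, np.sqrt(C0), N)           # particles from p(x_0 | y^0)
    xs = np.empty((n, N))
    for t in range(n):
        x = rng.normal(x, np.sqrt(tau2))         # propagate from p(x_t | x_{t-1})
        w = norm.pdf(y[t], x, np.sqrt(sig2))     # reweight by the likelihood
        w /= w.sum()
        x = rng.choice(x, size=N, p=w)           # multinomial resampling
        xs[t] = x
    return xs                                     # xs[t] approximates p(x_t | y^t)
```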
Example i. Simulated data
n = 50, x_0 = 0, σ² = 0.5 and τ² ∈ {0.25, 0.5, 1.0}.
[Figure: top: data y_t and states x_t; bottom: filtered means m_t and m_t ± 2√C_t. Left: τ/σ = 1.414; center: τ/σ = 1.000; right: τ/σ = 0.707.]
Example i. SIS, N = 1,000
[Figure: top: filtered states; bottom: effective sample size N_eff. Left: τ/σ = 1.414; center: τ/σ = 1.000; right: τ/σ = 0.707.]
Example i. SIS, N = 10,000
[Figure: top: filtered states; bottom: effective sample size N_eff. Left: τ/σ = 1.414; center: τ/σ = 1.000; right: τ/σ = 0.707.]
Example i. SISR, N = 10,000
[Figure: top: filtered states; bottom: effective sample size N_eff. Left: τ/σ = 1.414; center: τ/σ = 1.000; right: τ/σ = 0.707.]
Example i. SISR, n = 1,000 and N = 1,000
e_{t,α} = |q̂_α(x_t | y^t) - q_α(x_t | y^t)|, for α = 0.025, 0.5, 0.975.
[Figure: boxplots of e_{t,α} at the 2.5%, 50% and 97.5% quantiles. Left: τ/σ = 1.414; center: τ/σ = 1.000; right: τ/σ = 0.707.]
Auxiliary particle filter (APF)
Recall the two main steps in any dynamic model:
p(x_t|y^{t-1}) = ∫ p(x_t|x_{t-1}) p(x_{t-1}|y^{t-1}) dx_{t-1}
p(x_t|y^t) ∝ p(y_t|x_t) p(x_t|y^{t-1})
{(x_{t-1}, ω_{t-1})^(i)}_{i=1}^N summarizes p(x_{t-1}|y^{t-1}).
Approximating p(x_t|y^{t-1}) by
p_N(x_t|y^{t-1}) = Σ_{i=1}^N p(x_t|x_{t-1}^(i)) ω_{t-1}^(i)
Approximating p(x_t|y^t) by
p_N(x_t|y^t) ∝ Σ_{i=1}^N p(y_t|x_t) p(x_t|x_{t-1}^(i)) ω_{t-1}^(i)
Pitt and Shephard's (1999) idea
The previous mixture approximation suggests an augmentation scheme where the new target distribution is
p_N(x_t, k|y^t) ∝ p(y_t|x_t) p(x_t|x_{t-1}^(k)) ω_{t-1}^(k).
A natural proposal distribution is
q(x_t, k|y^t) ∝ p(y_t|g(x_{t-1}^(k))) p(x_t|x_{t-1}^(k)) ω_{t-1}^(k)
where, for instance, g(x_{t-1}) = E(x_t|x_{t-1}).
By a simple SIR argument, the weight of the particle x_t is
ω_t ∝ p(y_t|x_t) / p(y_t|g(x_{t-1}^(k)))
APF algorithm
{(x_{t-1}, ω_{t-1})^(i)}_{i=1}^N summarizes p(x_{t-1}|y^{t-1}).
For j = 1, ..., N:
Draw k_j from {1, ..., N} with weights ω̃_{t-1}^(1), ..., ω̃_{t-1}^(N), where
ω̃_{t-1}^(i) ∝ ω_{t-1}^(i) p(y_t|g(x_{t-1}^(i))).
Draw x_t^(j) from p(x_t|x_{t-1}^(k_j)).
Compute the associated weight
ω_t^(j) ∝ p(y_t|x_t^(j)) / p(y_t|g(x_{t-1}^(k_j))).
{(x_t, ω_t)^(i)}_{i=1}^N summarizes p(x_t|y^t).
Maybe add a SIR step to replenish the x_t's.
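A minimal sketch of one APF step for the same local level model, with g(x_{t-1}) = E(x_t|x_{t-1}) = x_{t-1}; all names are mine and the code is illustrative, not the original course code.

```python
import numpy as np

def apf_step(x, w, y, sig2, tau2, rng):
    """One auxiliary particle filter step for the local level model."""
    g = x                                    # g(x_{t-1}) = E(x_t | x_{t-1})
    # First-stage (look-ahead) weights: omega_{t-1}^(i) * f_N(y_t; g^(i), sig2)
    logv = np.log(w) - 0.5 * (y - g) ** 2 / sig2
    v = np.exp(logv - logv.max()); v /= v.sum()
    k = rng.choice(len(x), size=len(x), replace=True, p=v)
    # Propagate from the transition given the resampled parents
    x_new = x[k] + np.sqrt(tau2) * rng.standard_normal(len(x))
    # Second-stage weights: p(y_t | x_t^(j)) / p(y_t | g(x_{t-1}^(k_j)))
    logw = -0.5 * (y - x_new) ** 2 / sig2 + 0.5 * (y - g[k]) ** 2 / sig2
    w_new = np.exp(logw - logw.max()); w_new /= w_new.sum()
    return x_new, w_new
```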
Example i. APF, n = 1,000 and N = 1,000
e_{t,α} = |q̂_α(x_t|y^t) − q_α(x_t|y^t)|, for α = 0.025, 0.5, 0.975.
[Figure] Boxplots of e_{t,α} at the 2.5%, 50% and 97.5% quantiles. Left: τ/σ = 1.414; center: τ/σ = 1.000; right: τ/σ = 0.707.
Example i. SISR & APF, n = 1,000 and N = 100
[Figure] Boxplots comparing SISR and APF at the 2.5%, 50% and 97.5% quantiles (SISR2.5, APF2.5, SISR50, APF50, SISR97.5, APF97.5), for the three τ/σ settings.
Example ii. Nonlinear dynamic model
Recall the nonlinear dynamic model previously studied
(y_t|x_t, θ) ~ N(x_t²/20, σ²)
(x_t|x_{t-1}, θ) ~ N(G'_{x_{t-1}} β, τ²)
where x_0 ~ N(m_0, C_0), β = (α, β, γ)', θ = (β', σ², τ²) and
G'_{x_t} = (x_t, x_t/(1 + x_t²), cos(1.2t)).
Simulated data: n = 100, σ² = 1, τ² = 10, β = (0.5, 25, 8) and x_0 = 0.1.
Prior setup: m_0 = 0 and C_0 = 10.
MCMC setup: m_0 = 0, C_0 = 10, v = 0.1, M_0 = 1000 and M = 5000.
SMC setup: N = 5000.
Comparing SMCMC, SISR and APF
[Figure] True states versus SISR estimates and versus APF estimates over time.
Smoothing
Godsill, Doucet and West (2004) proposed a smoothing scheme based on particle filter draws.
The key results are
p(x^n|y^n) = p(x_n|y^n) ∏_{t=1}^{n-1} p(x_t|x_{t+1}, y^t)
and (by Bayes' rule and conditional independence)
p(x_t|x_{t+1}, y^t) ∝ p(x_{t+1}|x_t, y^t) p(x_t|y^t).
We can now jointly sample from p(x^n|y^n) by sequentially sampling from filtered particles with weights proportional to p(x_{t+1}|x_t, y^t).
Backward sampling algorithm
Repeat the following three steps N times (j = 1, ..., N).
Sample x̃_n from {x_n^(i)}_{i=1}^N with weights {ω_n^(i)}_{i=1}^N.
For t = n−1, ..., 1, sample x̃_t from {x_t^(i)}_{i=1}^N with weights {ω̃_t^(i)}_{i=1}^N, where
ω̃_t^(i) ∝ ω_t^(i) p(x̃_{t+1}|x_t^(i)), i = 1, ..., N.
Then (x̃_1^(j), ..., x̃_n^(j)) is a draw from p(x^n|y^n).
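A minimal sketch of one backward-sampling pass, assuming the filtered particles and weights for the local level model (transition N(x_t, τ²)) have been stored; the names and array layout are mine.

```python
import numpy as np

def backward_sample(xs, ws, tau2, rng):
    """One draw from p(x^n | y^n) given stored filtered particles.

    xs, ws : arrays of shape (n, N) holding the particles and normalized
             weights of p(x_t | y^t), t = 1, ..., n (local level transition).
    """
    n, N = xs.shape
    path = np.empty(n)
    # t = n: sample from the last filtered distribution
    path[-1] = xs[-1, rng.choice(N, p=ws[-1])]
    # t = n-1, ..., 1: reweight by p(x_{t+1} | x_t^(i)) and sample
    for t in range(n - 2, -1, -1):
        logw = np.log(ws[t]) - 0.5 * (path[t + 1] - xs[t]) ** 2 / tau2
        w = np.exp(logw - logw.max()); w /= w.sum()
        path[t] = xs[t, rng.choice(N, p=w)]
    return path
```

Repeating the function N times gives an equally weighted sample of smoothed trajectories.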
Example i. smoothing
n = 50, σ² = 0.5, τ² = 1, x_0 = 0, m_0 = 0, C_0 = 100, N = 1000.
[Figure] Forward filtering (SISR and APF) and backward sampling (SISR and APF).
Example i. outlier in y_t
[Figure] Forward filtering and backward sampling, SISR and APF, when there is an outlier in y_t.
Example i. outlier in x_t
[Figure] Forward filtering and backward sampling, SISR and APF, when there is an outlier in x_t.
Example i. outlier in x_t (more particles)
[Figure] Forward filtering, SISR and APF, with more particles.
LECTURE 9
SEQUENTIAL MONTE CARLO WITH
PARAMETER LEARNING
Revisiting the bootstrap and the APF filters
Consider the following general state space model:
Observation equation: p(y_{t+1}|x_{t+1})
State equation: p(x_{t+1}|x_t)
For a given time t, {(x_t, ω_t)^(i)}_{i=1}^N is a particle representation of p(x_t|y^t), where y^t = (y_1, ..., y_t).
Sample-resample
Goal: {(x_{t+1}, ω_{t+1})^(i)}_{i=1}^N ~ p(x_{t+1}|y^{t+1}).
Algorithm
Sample x_{t+1}^(i) from q(x_{t+1}|x_t^(i), y_{t+1}).
Compute weights
ω_{t+1}^(i) = ω_t^(i) p(y_{t+1}|x_{t+1}^(i)) p(x_{t+1}^(i)|x_t^(i)) / q(x_{t+1}^(i)|x_t^(i), y_{t+1}).
Special case: bootstrap filter
In the bootstrap filter
q(x_{t+1}|x_t, y_{t+1}) = p(x_{t+1}|x_t),
i.e. the transition equation.
This proposal density has no information about y_{t+1}, so we say that the scheme is blinded.
The weights are then proportional to the likelihoods:
ω_{t+1}^(i) = ω_t^(i) p(y_{t+1}|x_{t+1}^(i)).
Special case: optimal filter
In the optimal filter
q(x_{t+1}|x_t, y_{t+1}) = p(x_{t+1}|x_t, y_{t+1}).
The weights are then
ω_{t+1}^(i) = ω_t^(i) p(y_{t+1}|x_t^(i)),
so, if ω_0^(i) ∝ 1, then ω_{t+1} ∝ 1 for all t.
This is a perfectly adapted filter.
Resample-sample
Goal: {(x_{t+1}, ω_{t+1})^(i)}_{i=1}^N ~ p(x_{t+1}|y^{t+1}).
Algorithm
Resample x̃_t^(i) from {x_t^(1), ..., x_t^(N)} with weights
q_1(x_t^(j)|y_{t+1}), j = 1, ..., N.
Sample x_{t+1}^(i) from q_2(x_{t+1}|x̃_t^(i), y_{t+1}).
Compute weights
ω_{t+1}^(i) = ω_t^(i) p(y_{t+1}|x_{t+1}^(i)) p(x_{t+1}^(i)|x̃_t^(i)) / [q_1(x̃_t^(i)|y_{t+1}) q_2(x_{t+1}^(i)|x̃_t^(i), y_{t+1})].
Special case: auxiliary particle filter
In the auxiliary particle filter
q_1(x_t|y_{t+1}) = p(y_{t+1}|g(x_t))
where, for instance, g(x_t) = E(x_{t+1}|x_t).
Also,
q_2(x_{t+1}|x_t, y_{t+1}) = p(x_{t+1}|x_t),
i.e. the transition equation, so again a blinded proposal.
The weights are then equal to
ω_{t+1}^(i) = ω_t^(i) p(y_{t+1}|x_{t+1}^(i)) / p(y_{t+1}|g(x̃_t^(i))).
Special case: optimal filter
In the optimal filter both proposals q_1 and q_2 depend on y_{t+1}, i.e.
q_1(x_t|y_{t+1}) = p(y_{t+1}|x_t)
and
q_2(x_{t+1}|x_t, y_{t+1}) = p(x_{t+1}|x_t, y_{t+1}).
The weights are then equal to
ω_{t+1}^(i) = ω_t^(i),
so, if ω_0^(i) ∝ 1, then ω_{t+1} ∝ 1 for all t.
This is a perfectly adapted filter.
Resample-sample with learning
The objective is to combine {(x_t, θ_t, ω_t)^(i)}_{i=1}^N ~ p(x_t, θ|y^t) with y_{t+1} to produce {(x_{t+1}, θ_{t+1}, ω_{t+1})^(i)}_{i=1}^N ~ p(x_{t+1}, θ|y^{t+1}).
The indices t and t+1 in θ^(i) are used to facilitate the identification of the time at which the draws are being used.
Algorithm
Resample (x̃_t, θ̃_t)^(i) from {(x_t, θ_t)^(j)}_{j=1}^N with weights
q_1((x_t, θ_t)^(j)|y_{t+1}), j = 1, ..., N.
Sample (x_{t+1}, θ_{t+1})^(i) from q_2(x_{t+1}, θ|(x̃_t, θ̃_t)^(i), y_{t+1}).
Compute weights
ω_{t+1}^(i) = ω_t^(i) p(y_{t+1}|(x_{t+1}, θ_{t+1})^(i)) p((x_{t+1}, θ_{t+1})^(i)|(x̃_t, θ̃_t)^(i)) / [q_1((x̃_t, θ̃_t)^(i)|y_{t+1}) q_2((x_{t+1}, θ_{t+1})^(i)|(x̃_t, θ̃_t)^(i), y_{t+1})].
Questions:
How to choose q_1 and q_2?
What is p(x_{t+1}, θ_{t+1}|x_t, θ_t)?
Is it okay to decompose it as
p(x_{t+1}, θ_{t+1}|x_t, θ_t) = p(x_{t+1}|θ_{t+1}, x_t) p(θ_{t+1}|x_t, θ_t)?
If so, then what is p(θ_{t+1}|x_t, θ_t)?
Liu and West (2001)
They approximate p(θ|y^t) by an N-component mixture of multivariate normal distributions, i.e.
p(θ|y^t) = Σ_{i=1}^N ω_t^(i) f_N(θ | a θ_t^(i) + (1 − a) θ̄_t, (1 − a²) V_t)
where θ̄_t = Σ_{i=1}^N ω_t^(i) θ_t^(i) and V_t = Σ_{i=1}^N ω_t^(i) (θ_t^(i) − θ̄_t)(θ_t^(i) − θ̄_t)'.
This leads to
p(θ_{t+1}|x_t^(i), θ_t^(i)) = f_N(θ_{t+1} | a θ_t^(i) + (1 − a) θ̄_t, (1 − a²) V_t).
They use the same decomposition for q_2, so the weights are
ω_{t+1}^(i) = ω_t^(i) p(y_{t+1}|(x_{t+1}, θ_{t+1})^(i)) / q_1((x̃_t, θ̃_t)^(i)|y_{t+1}).
Resampling step
q_1(x_t, θ_t|y_{t+1}) = p(y_{t+1}|g(x_t), m(θ_t))
where, for instance,
g(x_t) = E(x_{t+1}|x_t, m(θ_t))
and
m(θ_t) = a θ_t + (1 − a) θ̄_t.
The weights are then
ω_{t+1}^(i) = ω_t^(i) p(y_{t+1}|x_{t+1}^(i), θ_{t+1}^(i)) / p(y_{t+1}|g(x̃_t^(i)), m(θ̃_t^(i))).
Choosing a
Liu and West (2001) use a discount factor argument (see West and Harrison, 1997) to set the parameter a:
a = (3δ − 1)/(2δ).
For example,
δ = 0.50 leads to a = 0.500
δ = 0.75 leads to a = 0.833
δ = 0.95 leads to a = 0.974
δ = 1.00 leads to a = 1.000.
In the last case, i.e. a = 1.0, the particles of θ will degenerate over time to a single particle.
The LW filter in one page
For particles {(x_t, θ_t, ω_t)^(j)}_{j=1}^N summarizing p(x_t, θ|y^t), compute the estimates
θ̄_t = Σ_{i=1}^N ω_t^(i) θ_t^(i) and V_t = Σ_{i=1}^N ω_t^(i) (θ_t^(i) − θ̄_t)(θ_t^(i) − θ̄_t)'.
Given the shrinkage parameter a, the algorithm runs as follows.
For i = 1, ..., N, compute
m(θ_t^(i)) = a θ_t^(i) + (1 − a) θ̄_t,
g(x_t^(i)) = E(x_{t+1}|x_t^(i), m(θ_t^(i))),
w_{t+1}^(i) = p(y_{t+1}|g(x_t^(i)), m(θ_t^(i))).
For i = 1, ..., N
Resample (x̃_t, θ̃_t)^(i) from {(x_t, θ_t, w_{t+1})^(j)}_{j=1}^N.
Sample θ_{t+1}^(i) ~ N(m(θ̃_t^(i)), h² V_t).
Sample x_{t+1}^(i) from p(x_{t+1}|x̃_t^(i), θ_{t+1}^(i)).
Compute the weight
ω_{t+1}^(i) = ω_t^(i) p(y_{t+1}|x_{t+1}^(i), θ_{t+1}^(i)) / p(y_{t+1}|g(x̃_t^(i)), m(θ̃_t^(i))).
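A sketch of one LW step for the first-order dynamic linear model of the next example, learning θ = log σ² with τ² known (so it roughly corresponds to the "LW1" variant below). This is my own illustration of the algorithm above, not the original course code; as one bookkeeping choice, the previous-period weights are folded into the first-stage resampling weights instead of being carried in the final weight, which is an equivalent way of organizing the same computation.

```python
import numpy as np

def lw_step(x, theta, w, y, tau2, a, rng):
    """One Liu-West step; theta holds log(sigma2) for each particle."""
    theta_bar = np.sum(w * theta)
    V = np.sum(w * (theta - theta_bar) ** 2)
    m = a * theta + (1.0 - a) * theta_bar          # m(theta_t^(i))
    g = x                                          # g(x_t^(i)) = E(x_{t+1} | x_t)
    sig2_m = np.exp(m)
    # first-stage weights: w_t^(i) * p(y_{t+1} | g^(i), m^(i))
    logv = np.log(w) - 0.5 * np.log(sig2_m) - 0.5 * (y - g) ** 2 / sig2_m
    v = np.exp(logv - logv.max()); v /= v.sum()
    k = rng.choice(len(x), size=len(x), replace=True, p=v)
    # propagate parameters from the shrunk normal kernel, h^2 = 1 - a^2
    theta_new = m[k] + np.sqrt((1.0 - a ** 2) * V) * rng.standard_normal(len(x))
    sig2_new = np.exp(theta_new)
    # propagate states from the transition p(x_{t+1} | x_t, theta_{t+1})
    x_new = x[k] + np.sqrt(tau2) * rng.standard_normal(len(x))
    # second-stage weights: likelihood at the new draw over the look-ahead density
    logw = (-0.5 * np.log(sig2_new) - 0.5 * (y - x_new) ** 2 / sig2_new
            + 0.5 * np.log(sig2_m[k]) + 0.5 * (y - g[k]) ** 2 / sig2_m[k])
    w_new = np.exp(logw - logw.max()); w_new /= w_new.sum()
    return x_new, theta_new, w_new
```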
Example i. first order dynamic linear model
We revisit the first order dynamic linear model
y_t = x_t + ε_t,  ε_t ~ N(0, σ²)
x_t = x_{t-1} + η_t,  η_t ~ N(0, τ²)
where x_0 = 25, σ² = 0.1, τ² = (0.2, 0.1, 0.05) and n = 200.
Prior setup:
σ² ~ IG(a_0, b_0)
x_0 ~ N(m_0, C_0)
where a_0 = 5, b_0 = 0.4, m_0 = 25 and C_0 = 100.
Particle filter setup:
N = 2000
δ = (0.75, 0.95)
Example i. LW + optimal propagation
Liu and West's (2001) filter with the optimal propagation proposal, i.e.
p(x_{t+1}|x_t, σ², y_{t+1}) = f_N(x_{t+1}|m_{t+1}, C_{t+1})
where
C_{t+1}^{-1} = σ^{-2} + τ^{-2}
m_{t+1} = C_{t+1}(σ^{-2} y_{t+1} + τ^{-2} x_t).
Example i. LW + optimal propagation + kernel for σ²
Optimal propagation, with the mixture approximating σ² directly, i.e.
q(σ²|x_t, σ_t², y_{t+1}) ∝ f_N(y_{t+1}; x_t, σ²) f_IG(σ²|α(σ_t²), β(σ_t²))
where
α(σ_t²) = m(σ_t²)²/v(σ_t²) + 2
β(σ_t²) = m(σ_t²)(α(σ_t²) − 1)
and
m(σ_t²) = a σ_t² + (1 − a) σ̄²
v(σ_t²) = (1 − a²) S²_{σ²}
with σ̄² and S²_{σ²} the particle approximations to the mean and variance of σ² from p(σ²|y^t).
Example i. Comparing various LW filters
LW1: LW + log σ²
LW2: LW + σ²
LW3: LW + log σ² + optimal propagation
LW4: LW + σ² + optimal propagation
Example i. τ/σ = 1.4
[Figure] Sequential summaries over time for LW1-LW4 with delta = 0.75 (top row) and delta = 0.95 (bottom row).
Example i. τ/σ = 1.0
[Figure] Sequential summaries over time for LW1-LW4 with delta = 0.75 (top row) and delta = 0.95 (bottom row).
Example i. τ/σ = 0.7
[Figure] Sequential summaries over time for LW1-LW4 with delta = 0.75 (top row) and delta = 0.95 (bottom row).
Example ii. nonlinear dynamic model
Let y_t, for t = 1, ..., n, be modeled as
(y_t|x_t, θ) ~ N(x_t²/20, σ²)
(x_t|x_{t-1}, θ) ~ N(G'_{x_{t-1}} β, τ²)
where G'_{x_t} = (x_t, x_t/(1 + x_t²), cos(1.2t)), θ = (β', σ², τ²) and β = (α, β, γ)'.
Prior distributions for x_0, β, σ² and τ² are
x_0 ~ N(m_0, V_0)
β ~ N(c_0, C_0)
σ² ~ IG(a_0, A_0)
τ² ~ IG(b_0, B_0)
Example ii. Simulation set up
We simulated n = 200 observations based on β = (0.5, 25, 8)', σ² = 10, τ² = 1 and x_0 = 0.1.
Prior hyperparameters:
m_0 = 0.0 and V_0 = 5
c_0 = (0.5, 25, 8)' and C_0 = diag(0.1, 16, 2)
a_0 = 3 and A_0 = 20
b_0 = 3 and B_0 = 2
Example ii. Simulated data
[Figure] y_t and x_t over time, and scatterplots of y_t against x_t and of x_t against x_{t-1}.
Example ii. (N, δ, a) = (2000, 0.75, 0.83)
[Figure] Posterior mean of x_t against the true x_t, and sequential summaries of alpha, beta, gamma, sigma2 and tau2.
Example ii. (N, δ, a) = (2000, 0.90, 0.94)
[Figure] Posterior mean of x_t against the true x_t, and sequential summaries of alpha, beta, gamma, sigma2 and tau2.
Example ii. (N, δ, a) = (5000, 0.90, 0.94)
[Figure] Posterior mean of x_t against the true x_t, and sequential summaries of alpha, beta, gamma, sigma2 and tau2.
Example ii. (N, δ, a) = (10000, 0.90, 0.94)
[Figure] Posterior mean of x_t against the true x_t, and sequential summaries of alpha, beta, gamma, sigma2 and tau2.
Example ii. Assessing MC error
[Figures] Sequential summaries of α, β, γ, σ² and τ² across several independent runs of the filter, for N = 5000 and for N = 10000.
Particle Learning (PL)
Carvalho, Johannes, Lopes and Polson (2009) introduce Particle Learning (PL) as the following resample-sample scheme, where s_t is the vector of sufficient statistics for θ.
Posterior at t: {(x_t, s_t, θ)^(i)}_{i=1}^N ~ p(x_t, s_t, θ|y^t).
Resampling weights: w_{t+1}^(j) ∝ p(y_{t+1}|x_t^(j), θ^(j)), j = 1, ..., N.
For i = 1, ..., N
Resample: draw (x̃_t, s̃_t, θ̃)^(i) from {(x_t, s_t, θ)^(j), w_{t+1}^(j)}_{j=1}^N.
Sample: draw x_{t+1}^(i) ~ p(x_{t+1}|(x̃_t, θ̃)^(i), y_{t+1}).
Recursive sufficient statistics: s_{t+1}^(i) = S(s̃_t^(i), x_{t+1}^(i), y_{t+1}).
Offline sampling of fixed parameters: θ^(i) ~ p(θ|s_{t+1}^(i)).
PL ingredients
Resampling distribution: p(y_{t+1}|x_t, θ)
Propagating distribution: p(x_{t+1}|x_t, θ, y_{t+1})
Recursive sufficient statistics: s_{t+1} = S(s_t, x_{t+1}, y_{t+1})
Example i. 1st order dynamic linear model via PL
p(y_{t+1}|x_t, σ²) is N(y_{t+1}; x_t, σ² + τ²).
p(x_{t+1}|x_t, σ², y_{t+1}) is N(x_{t+1}; A y_{t+1} + (1 − A) x_t, A σ²), where A = τ²/(σ² + τ²).
p(σ²|s_{t+1}) is IG(a_{t+1}, b_{t+1}), where s_{t+1} = (a_{t+1}, b_{t+1}) is recursively updated as
a_{t+1} = a_t + 1/2
b_{t+1} = b_t + (y_{t+1} − x_{t+1})²/2
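A minimal sketch of one PL step for this model, learning σ² with τ² known and tracking s_t = (a_t, b_t) per particle; the names are mine and the code is an illustration of the recursions above, not the original course code.

```python
import numpy as np

def pl_step(x, a, b, y, tau2, rng):
    """One Particle Learning step for the local level model with unknown sigma2."""
    N = len(x)
    # draw sigma2 ~ IG(a, b) for each particle (offline/parameter draw)
    sig2 = 1.0 / rng.gamma(shape=a, scale=1.0 / b)
    # resampling weights: p(y_{t+1} | x_t, sigma2) = N(y; x_t, sigma2 + tau2)
    logw = -0.5 * np.log(sig2 + tau2) - 0.5 * (y - x) ** 2 / (sig2 + tau2)
    w = np.exp(logw - logw.max()); w /= w.sum()
    k = rng.choice(N, size=N, replace=True, p=w)
    x, a, b, sig2 = x[k], a[k], b[k], sig2[k]
    # propagation: p(x_{t+1} | x_t, sigma2, y_{t+1}) = N(A y + (1 - A) x_t, A sigma2)
    A = tau2 / (sig2 + tau2)
    x_new = A * y + (1.0 - A) * x + np.sqrt(A * sig2) * rng.standard_normal(N)
    # recursive sufficient statistics for sigma2
    a_new = a + 0.5
    b_new = b + 0.5 * (y - x_new) ** 2
    return x_new, a_new, b_new
```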
Example i. learning σ² - n = 200, 500, 1000
[Figure] Sequential summaries of p(σ²|y^t) for τ/σ = 0.71, 1 and 1.41, with n = 200, 500 and 1000.
Example i. learning σ² - n = 5000 and N = 2000
[Figure] Sequential summaries of p(σ²|y^t) for τ/σ = 0.71, 1 and 1.41.
Example iii. MCMC and PL comparison
Simulation set up: For t = 1, ..., n = 300,
p(y_t|x_t, θ) ≡ f_N(y_t; x_t, σ²)    (11)
p(x_t|x_{t-1}, θ) ≡ f_N(x_t; ρ x_{t-1}, τ²)    (12)
where θ = (ρ, σ², τ²) = (1.0, 1.0, 0.25) and x_0 = 0.
Model set up: Equations (11) and (12) above plus
p(θ, x_0) ≡ f_N(ρ; r_0, W_0) f_IG(σ²; a_0, b_0) f_IG(τ²; c_0, d_0) f_N(x_0; m_0, V_0)
where r_0 = 0, W_0 = 3, a_0 = 3, b_0 = 2, c_0 = 3, d_0 = 0.5, m_0 = 0 and V_0 = 3.
MCMC set up: M_0 = 100K, L = 100 and M = 20K. A total of 2100K draws.
SMC set up: M = 20K particles.
Example iii. MCMC algorithms
Gibbs sampler (GIBBS)
Sample x from p(x|y, θ) - FFBS
Sample σ² from p(σ²|x, y)
Sample ρ from p(ρ|x, τ²)
Sample τ² from p(τ²|x, ρ)
Random-walk Metropolis (RW)
Sample x from p(x|y, θ) - FFBS
Sample θ* from
q(θ*|θ) = q_ρ(ρ, V_ρ) q_{σ²}(σ², V_{σ²}) q_{τ²}(τ², V_{τ²}),
with V_ρ = 0.01, V_{σ²} = 0.01 and V_{τ²} = 0.01, and accept with probability
α = min{1, p(y|θ*) p(θ*) q(θ|θ*) / [p(y|θ) p(θ) q(θ*|θ)]}.
Note: Since p(y|θ) = ∫ p(y|x, θ) p(x|θ) dx can be derived analytically, x and θ are jointly sampled.
Example iii. Simulated y_t and x_t
[Figure] Simulated series over t = 1, ..., 300.
Example iii. Prior of (θ, x_0)
[Figure] Prior densities of ρ (true value 1), x_0 (true value 0), σ² (true value 1) and τ² (true value 0.25).
Example iii. MCMC trace plots
[Figure] Trace plots of ρ, σ² and τ² for GIBBS and RW (20,000 iterations).
Example iii. MCMC autocorrelation plots
[Figure] Autocorrelations of ρ, σ² and τ² for GIBBS and RW (lags 0 to 40).
Example iii. MCMC marginal posteriors
[Figure] Histograms of the marginal posteriors of ρ, σ² and τ² for GIBBS and RW.
Example iii. MCMC and true marginal posteriors
[Figure] GIBBS and RW marginal posteriors of ρ, σ² and τ² compared with the true marginal posteriors.
Example iii. p(x_t|y^n) via MCMC
[Figure] Posterior medians and 95% credibility intervals of x_t under GIBBS and RW.
Example iii. PL and MCMC quantiles
[Figure] TOP: true and PL estimates of the 2.5%, 50% and 97.5% percentiles of p(θ|y^t) for all t. BOTTOM: true, GIBBS, RW and PL estimates of F(θ|y^n).
Example iii. PL and GIBBS quantiles
[Figure] TOP: true, GIBBS and PL estimates of p(θ|y^n). BOTTOM: true, GIBBS and PL estimates of F(θ|y^n).
Example iii. PL and MCMC quantiles
[Figure] LEFT: posterior medians and 95% credibility intervals of p(x_t|y^n) (GIBBS and RW). RIGHT: posterior medians and 95% credibility intervals of p(x_t|y^t) (PL).
Example iii. Effective sample sizes
ESS_t = M [1 + V(ω_t)/E²(ω_t)]^{-1}  and  ESS = M (1 + 2 Σ_{k=1}^∞ ρ_k)^{-1}.
[Figure] ESS_t over time for PL, and ESS of ρ, σ² and τ² for GIBBS and RW.
Here the ω_t are particle weights, ρ_k = cov_π(t^(n), t^(n+k))/var_π(t^(n)) and t^(n) = t(θ^(n)).
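A hedged sketch of the MCMC effective sample size above, estimating the ρ_k from the sample autocovariances of the chain of a scalar functional t(θ); the truncation rule (stop at the first negative autocorrelation) is a simple choice of mine, not part of the original material.

```python
import numpy as np

def mcmc_ess(chain, max_lag=None):
    """ESS = M / (1 + 2 * sum_k rho_k), with rho_k estimated from the chain."""
    x = np.asarray(chain, dtype=float)
    M = len(x)
    x = x - x.mean()
    var = np.mean(x ** 2)
    if max_lag is None:
        max_lag = min(M - 1, 200)
    s = 0.0
    for k in range(1, max_lag + 1):
        rho_k = np.mean(x[:-k] * x[k:]) / var
        if rho_k < 0.0:              # stop at the first negative autocorrelation
            break
        s += rho_k
    return M / (1.0 + 2.0 * s)
```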
Example iv. Dynamic factor model with switching loadings
For t = 1, ..., T, the model, taken from Carvalho, Johannes, Lopes and Polson (2009), is defined as follows:
Observation equation
y_t|z_t, θ ~ N(β_t x_t, σ² I_2)
State equations
x_t|x_{t-1}, θ ~ N(x_{t-1}, σ_x²)
λ_t|λ_{t-1}, θ ~ Ber((1 − p)^{1−λ_{t-1}} q^{λ_{t-1}})
where z_t = (x_t, λ_t)', β_t = (1, β_{λ_t})' is the vector of time-varying loadings and θ = (β_1, β_2, σ², σ_x², p, q)' is the vector of fixed parameters.
The prior distributions are conditionally conjugate:
(β_i|σ²) ~ N(b_{i0}, σ² B_{i0}) for i = 1, 2,
σ² ~ IG(ν_{00}/2, d_{00}/2)
σ_x² ~ IG(ν_{10}/2, d_{10}/2)
p ~ Beta(p_1, p_2)
q ~ Beta(q_1, q_2)
x_0 ~ N(m_0, C_0)
Particle representation
At time t, particles {(x_t, λ_t, θ, s_t^x, s_t)^(i)}_{i=1}^N approximate p(x_t, λ_t, θ, s_t^x, s_t|y^t), where
s_t^x = S(s_{t-1}^x, ·) are the state sufficient statistics and
s_t = S(s_{t-1}, x_t, λ_t) are the fixed parameter sufficient statistics.
Re-sampling (x_t, λ_t, θ, s_t^x, s_t)
Let us redefine β_i = (1, β_i)' whenever necessary.
Draw an index k(i) ~ Multi(ω) with weights
ω^(j) ∝ p(y_{t+1}|(s_t^x, λ_t, θ)^(j)),
where
p(y_{t+1}|m_t, C_t, λ_t, θ) = Σ_{j=1}^2 f_N(y_{t+1}; β_j m_t, V_j) Pr(λ_{t+1} = j|λ_t, θ)
with V_j = (C_t + σ_x²) β_j β_j' + σ² I_2, m_t and C_t the components of s_t^x, and f_N denoting the normal density function.
Propagating states
Draw the auxiliary state λ_{t+1}:
λ_{t+1}^(i) ~ p(λ_{t+1}|(s_t^x, λ_t, θ)^{k(i)}, y_{t+1})
where
Pr(λ_{t+1} = j|s_t^x, λ_t, θ, y_{t+1}) ∝ f_N(y_{t+1}; β_j m_t, V_j) Pr(λ_{t+1} = j|λ_t, θ).
Draw the state x_{t+1} conditionally on λ_{t+1}:
x_{t+1}^(i) ~ p(x_{t+1}|λ_{t+1}^(i), (s_t^x, θ)^{k(i)}, y_{t+1})
by a simple Kalman filter update.
Updating sufficient statistics for states, s_{t+1}^x
The Kalman filter recursions yield
m_{t+1} = m_t + A_{t+1}(y_{t+1} − β_{t+1} m_t)
C_{t+1} = C_t + σ_x² − A_{t+1} Q_{t+1} A_{t+1}'
where
Q_{t+1} = (C_t + σ_x²) β_{t+1} β_{t+1}' + σ² I_2
A_{t+1} = (C_t + σ_x²) β_{t+1}' Q_{t+1}^{-1}.
Updating sufficient statistics for parameters, s_{t+1}
Recall that s_{t+1} = S(s_t, x_{t+1}, λ_{t+1}). Then,
(β_i|σ², s_{t+1}) ~ N(b_{i,t+1}, σ² B_{i,t+1}) for i = 1, 2,
(σ²|s_{t+1}) ~ IG(ν_{0t}/2, d_{0,t+1}/2)
(σ_x²|s_{t+1}) ~ IG(ν_{1t}/2, d_{1,t+1}/2)
(p|s_{t+1}) ~ Beta(p_{1,t+1}, p_{2,t+1})
(q|s_{t+1}) ~ Beta(q_{1,t+1}, q_{2,t+1})
where I_{λ_{t+1}=i} = I_i, I_{λ_t=i, λ_{t+1}=j} = I_{ij}, ν_{it} = ν_{i,t-1} + 1, B_{i,t+1}^{-1} = B_{it}^{-1} + x_{t+1}², B_{i,t+1}^{-1} b_{i,t+1} = B_{it}^{-1} b_{it} + x_{t+1} y_{t+1,2} I_i, p_{i,t+1} = p_{it} + I_{1i} (similarly for q_{i,t+1}) for i = 1, 2,
d_{0,t+1} = d_{0,t} + (y_{t+1,1} − x_{t+1})² + Σ_{j=1}^2 [(y_{t+1,2} − b_{j,t+1} x_{t+1}) y_{t+1,2} + B_{j,t+1}^{-1} b_{j,t+1}] I_j,
and d_{1,t+1} = d_{1,t} + (x_{t+1} − x_t)².
CASE I: Random walk dynamic factor, static loadings
Simulation setup: n = 500, β = 2, σ² = 0.2 and σ_x² = 0.05.
Prior hyperparameters:
β: b_0 = 0, B_0 = 10
σ²: ν_{00} = 10, d_{00} = 1.8
σ_x²: ν_{10} = 10, d_{10} = 0.45
[Figures] Time series and dynamic factor; parameter learning with M = 1000 and with M = 5000 particles; dynamic factor.
CASE II: AR(1) dynamic factor, static loadings
x_t|x_{t-1}, θ ~ N(ρ x_{t-1}, 1.0)
ρ ~ N(ρ_0, V_ρ)
Simulation setup: n = 200, β = 2, σ² = 2.0 and ρ = 0.9.
Prior hyperparameters:
β: b_0 = 0, B_0 = 100
σ²: ν_{00} = 10, d_{00} = 18
ρ: ρ_0 = 1, V_ρ = 100
[Figures] Time series and state variables; parameter learning with M = 1000 particles; dynamic factor.
CASE III: Random walk dynamic factor, time-varying loadings
Simulation setup: n = 300, (β_1, β_2) = (1, 2), σ² = 0.2, σ_x² = 0.05 and p = q = 0.975.
Prior hyperparameters:
β_1: b_{10} = 0, B_{10} = 2
β_2: b_{20} = 3, B_{20} = 2
σ²: ν_{00} = 5, d_{00} = 1.0
σ_x²: ν_{10} = 5, d_{10} = 0.25
p, q: p_1 = p_2 = q_1 = q_2 = 1
[Figures] Time series and state variables; parameter learning with M = 5000 particles; discrete switching state; dynamic factor.
LECTURE 10
STOCHASTIC VOLATILITY via
SEQUENTIAL MONTE CARLO
METHODS
Example i: Stochastic volatility
Let y_t, for t = 1, ..., n, be modeled as
y_t|x_t ~ N(0, e^{x_t})
(x_t|x_{t-1}, θ) ~ N(α + β x_{t-1}, τ²)
where θ = (α, β, τ²).
Simulation setup: n = 500, α = −0.0031, β = 0.9951, τ² = 0.0074 and x_1 = α/(1 − β) = −0.632653 (13% of annualized standard deviation).
Prior setup:
x_0 ~ N(m_0, C_0), α ~ N(α_0, V_α), β ~ N(β_0, V_β) and τ² ~ IG(ν_0/2, ν_0 τ_0²/2)
where m_0 = 0.0, C_0 = 0.1, α_0 = −0.0031, V_α = 0.01, β_0 = 0.9951, V_β = 0.01, ν_0 = 3 and τ_0² = 0.0074.
LW filter with shrinkage factor a
Particles at t: {(x_t, θ)^(j), ω_t^(j)}_{j=1}^M ~ p(x_t, θ|y^t).
Summary of p(θ|y^t): θ̄ ≈ E(θ|y^t) and V ≈ V(θ|y^t).
Resample quantities: for j = 1, ..., M,
compute m^(j) = a θ^(j) + (1 − a) θ̄ and
compute g^(j) = α^(j) + β^(j) x_t^(j).
Algorithm: for l = 1, ..., M
Draw k_l ∈ {1, ..., M}, with P(k_l = j) ∝ ω_t^(j) p(y_{t+1}|g^(j)).
Sample θ^(l) from N(m^(k_l), (1 − a²) V).
Sample x_{t+1}^(l) from p(x_{t+1}|x_t^(k_l), θ^(l)).
Compute the weight ω_{t+1}^(l) ∝ p(y_{t+1}|x_{t+1}^(l)) / p(y_{t+1}|g^(k_l)).
Particles at t+1: {(x_{t+1}, θ)^(j), ω_{t+1}^(j)}_{j=1}^M ~ p(x_{t+1}, θ|y^{t+1}).
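A minimal sketch of one step of this filter for the SV-AR(1) model, with θ = (α, β, log τ²) stored per particle and θ̄ and V computed from the weighted particle set; everything below is my own illustration of the algorithm above, not the original code.

```python
import numpy as np

def lw_sv_step(x, theta, w, y, a, rng):
    """One Liu-West step for SV-AR(1); theta columns are (alpha, beta, log tau2)."""
    M = len(x)
    theta_bar = np.average(theta, axis=0, weights=w)
    V = np.cov(theta.T, aweights=w, bias=True)
    m = a * theta + (1.0 - a) * theta_bar               # m^(j)
    g = m[:, 0] + m[:, 1] * x                           # g^(j) = alpha + beta * x_t
    # first-stage weights: omega_t^(j) * p(y_{t+1} | g^(j)), with y | x ~ N(0, e^x)
    logv = np.log(w) - 0.5 * g - 0.5 * y ** 2 * np.exp(-g)
    v = np.exp(logv - logv.max()); v /= v.sum()
    k = rng.choice(M, size=M, replace=True, p=v)
    # propagate parameters from the shrunk kernel and states from the transition
    theta_new = rng.multivariate_normal(np.zeros(3), (1.0 - a ** 2) * V, size=M) + m[k]
    tau = np.exp(0.5 * theta_new[:, 2])
    x_new = theta_new[:, 0] + theta_new[:, 1] * x[k] + tau * rng.standard_normal(M)
    # second-stage weights: p(y_{t+1} | x_{t+1}) / p(y_{t+1} | g^(k))
    logw = (-0.5 * x_new - 0.5 * y ** 2 * np.exp(-x_new)
            + 0.5 * g[k] + 0.5 * y ** 2 * np.exp(-g[k]))
    w_new = np.exp(logw - logw.max()); w_new /= w_new.sum()
    return x_new, theta_new, w_new
```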
Time series y_t and p(e^{x_t}|y^t)
N = 5000 and θ = (α, β, log(τ²)).
[Figure] y_t and sequential summaries of e^{x_t}.
Parameter learning
[Figure] Sequential summaries of p(α|y^t), p(β|y^t) and p(τ²|y^t).
Example ii: SV-AR(1) via sequential MCMC and LW
We simulated n = 50 observations based on α = −0.0031, β = 0.9951 and τ² = 0.0074, with m_0 = 0.0 and C_0 = 0.1.
Also, x_1 = α/(1 − β) = −0.632653, which corresponds to annualized standard deviations around 13%.
p(x_t|y^t) when θ is known
MCMC: Kim, Shephard and Chib (1998).
SMC: Liu and West (2001) with δ = 0.75 and a = 0.9521743.
[Figure] Sequential summaries of p(x_t|y^t): sequential MCMC versus APF. MCMC: (burn, niter, lag) = (2000, 2000, 1); SMC: N = 2000.
p(x_t|y^t) when θ is known
[Figure] Posterior densities of x_t at selected times, panels labeled y(1)=0.427, y(7)=0.26, y(13)=1.179, y(19)=0.316, y(25)=0.541, y(31)=0.136, y(37)=0.868, y(43)=1.079 and y(50)=0.778.
p(x_t|y^t) when (α, β) is unknown
Prior: α ~ N(−0.0031, 0.01) and β ~ N(0.9951, 0.01).
[Figure] Sequential summaries of p(x_t|y^t): sequential MCMC versus APF. MCMC: (burn, niter, lag) = (1000, 1000, 1); SMC: N = 10000.
p(α|y^t) and p(β|y^t)
[Figure] Sequential summaries of p(α|y^t) and p(β|y^t).
Learning x_t, α, β and τ²
The prior for τ² is IG(1.5, 0.0111).
[Figures] Sequential summaries of p(x_t|y^t), p(α|y^t), p(β|y^t) and p(τ²|y^t), comparing sequential MCMC and APF under several settings: MCMC (burn, niter, lag) = (1000, 1000, 1) with SMC N = 10000; (2000, 2000, 1) with N = 2000; (10000, 5000, 1) with N = 5000; and (10000, 1000, 10) with N = 10000.
Sequential MCMC when n = 500
[Figure] Sequential summaries of the volatilities and of p(α|y^t), p(β|y^t) and p(τ²|y^t) for n = 500.
Effective sample size
ESS_t = N / [1 + V(ω_t)/E²(ω_t)]
[Figure] ESS_t over time.
Sequential volatilities
[Figure] Sequential summaries of the volatilities: sequential MCMC versus APF. MCMC: (burn, niter, lag) = (1000, 1000, 1); SMC: N = 1000.
Sequential parameter learning
[Figure] Sequential summaries of p(α|y^t), p(β|y^t) and p(τ²|y^t).
Example iii: Markov switching stochastic volatility
Carvalho and Lopes (2007) adapted the APF and LW filters to sequentially estimate states and parameters in Markov switching stochastic volatility (MSSV) models.
Let the daily returns of the IBOVESPA index, y_t, be modeled by a MSSV model, i.e.
y_t|λ_t ~ N(0, exp(λ_t))
(λ_t|λ_{t-1}, θ, s_t) ~ N(α_{s_t} + β λ_{t-1}, τ²)
where θ = (α, β, τ²), α = (α_1, ..., α_k) and the regime variables s_t follow a k-state first-order Markov process,
p_{ij} = Pr(s_t = j|s_{t-1} = i) for i, j = 1, ..., k,
with P = (p_{11}, ..., p_{1,k−1}, ..., p_{k1}, ..., p_{k,k−1}).
Particle filter
Step 0: {(λ_t, s_t, w_t)^(j)}_{j=1}^M ~ p(λ_t, s_t, θ|D_t).
Step 1: For j = 1, ..., M,
ŝ_{t+1}^(j) = arg max_{l ∈ {1,...,k}} Pr(s_{t+1} = l|s_t = s_t^(j))
λ̂_{t+1}^(j) = α^(j)_{ŝ_{t+1}^(j)} + β^(j) λ_t^(j).
Step 2: For l = 1, ..., M,
1. Sample k_l from {1, ..., M}, with Pr(k_l = j) ∝ p(y_{t+1}|λ̂_{t+1}^(j)) w_t^(j).
2. Sample θ^(l)_{t+1} from N(m^(k_l)_t, b² V_t).
3. Sample s^(l)_{t+1} from {1, ..., k} with Pr(s^(l)_{t+1}) = Pr(s^(l)_{t+1}|s^(k_l)_t).
4. Sample λ^(l)_{t+1} from p(λ_{t+1}|λ^(k_l)_t, s^(l)_{t+1}, θ^(l)_{t+1}).
Step 3: For l = 1, ..., M, compute the new weights
w^(l)_{t+1} ∝ p(y_{t+1}|λ^(l)_{t+1}) / p(y_{t+1}|λ̂^(k_l)_{t+1}).
Step 4: {(λ_{t+1}, s_{t+1}, w_{t+1})^(j)}_{j=1}^M ~ p(λ_{t+1}, s_{t+1}, θ|D_{t+1}).
Currency crisis
Carvalho and Lopes (2007) used IBOVESPA daily data from January 2nd, 1997 to January 16th, 2001 (1000 observations).
Key market events over the period:
Jul 2nd, 97: Thailand devalues the baht by as much as 20%.
Aug 11th, 97: IMF and Thailand set a rescue agreement.
Oct 23rd, 97: Hong Kong's stock index falls 10.4%. South Korean won weakens.
Dec 2nd, 97: IMF and South Korea set a bailout agreement.
Jun 1st, 98: Russia's stock market crashes.
Jun 20th, 98: IMF gives final approval to a loan package to Russia.
Aug 19th, 98: Russia officially falls into default.
Oct 9th, 98: IMF and World Bank joint meeting; the Fed cuts interest rates.
Jan 15th, 99: The real is allowed to float freely by lifting exchange controls.
Feb 2nd, 99: Arminio Fraga is named president of Brazil's Central Bank.
Fitting regime shifts
The vertical lines indicate the key market events.
[Figure] IBOVESPA returns, predicted regime (online) and predicted log-volatility (online).
Sequential inference for fixed parameters
Posterior mean, 5% and 95% quantiles of θ.
[Figure] Sequential summaries of alpha1, log(gamma1), phi, sigma2, logit(p11) and logit(p12).
Sequential Bayes factor: MSSV vs SV
[Figure] Sequential Bayes factor of the MSSV model against the SV-AR(1) model over the sample period.
SV-AR(1) via Particle Learning
This example was kindly prepared by my PhD student Samir Warty.
Recall the AR(1) stochastic volatility model:
y_{t+1} = exp(x_{t+1}/2) ε_{t+1}
x_{t+1} = α + β x_t + τ η_{t+1}
where (ε_t, η_t)' ~ N(0_2, I_2) and θ = (α, β, τ).
Prior distribution:
τ²|s_0 ~ IG(n_0/2, n_0 S_0/2) and (α, β)'|τ², s_0 ~ N(m_0, τ² C_0),
where s_0 = (n_0, S_0, m_0, C_0).
Data augmentation argument
Following Kim, Shephard and Chib's (1998) idea:
z_{t+1} ≡ log y²_{t+1} = x_{t+1} + log ε²_{t+1} ≈ x_{t+1} + u_t
where
u_t ~ Σ_{i=1}^7 π_i N(μ_i, ν_i²).
Particle learning (PL) augments the state vector to include λ_{t+1} ∈ {1, ..., 7}, the component of the normal mixture approximation.
Let s_t denote the set of sufficient statistics for (α, β, τ²) at time t.
Algorithm
Resample the old particles c_t = (x_t, s_t, θ) with weights
w_t ∝ p(z_{t+1}|c_t) = Σ_{i=1}^7 π_i f_N(z_{t+1}; μ_i + α + β x_t, ν_i² + τ²).
Propagate the new states x_{t+1} from
p(x_{t+1}|c_t, z_{t+1}) = Σ_{i=1}^7 π_i f_N(x_{t+1}; a_i, A_i)
where
A_i = (ν_i^{-2} + τ^{-2})^{-1}
a_i = A_i (ν_i^{-2}(z_{t+1} − μ_i) + τ^{-2}(α + β x_t)).
Algorithm (cont.)
Update the sufficient statistics s_{t+1} = (n_{t+1}, S_{t+1}, m_{t+1}, C_{t+1}):
n_{t+1} = n_t + 1
n_{t+1} S_{t+1} = n_t S_t + (x_{t+1} − X_t m_t)² / (1 + X_t C_t X_t')
C_{t+1}^{-1} = C_t^{-1} + X_t' X_t
C_{t+1}^{-1} m_{t+1} = C_t^{-1} m_t + X_t' x_{t+1}
where X_t = (1, x_t).
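A minimal sketch of the resample/propagate/update part of this PL algorithm. The mixture weights, means and variances (pi, mu, nu2) are passed in as 1-D arrays, e.g. the seven-component table of Kim, Shephard and Chib (1998); the names, array layout and looping style below are mine, written as an illustration of the formulas above rather than the original code.

```python
import numpy as np

def pl_sv_step(x, m, C, n, S, y, pi, mu, nu2, rng):
    """One PL step for SV-AR(1) with a normal-mixture approximation to log chi^2_1.

    Per-particle sufficient statistics: m (N,2), C (N,2,2), n (N,), S (N,).
    """
    N = len(x)
    z = np.log(y ** 2)
    # draw (alpha, beta) and tau2 from p(theta | s_t) for each particle
    tau2 = 1.0 / rng.gamma(shape=n / 2.0, scale=2.0 / (n * S))
    ab = np.stack([rng.multivariate_normal(m[i], tau2[i] * C[i]) for i in range(N)])
    mean_x = ab[:, 0] + ab[:, 1] * x
    # resampling weights: sum_i pi_i N(z; mu_i + alpha + beta x_t, nu2_i + tau2)
    dens = np.zeros(N)
    for p_i, mu_i, nu2_i in zip(pi, mu, nu2):
        var = nu2_i + tau2
        dens += p_i * np.exp(-0.5 * (z - mu_i - mean_x) ** 2 / var) / np.sqrt(var)
    w = dens / dens.sum()
    k = rng.choice(N, size=N, replace=True, p=w)
    x, m, C, n, S = x[k], m[k], C[k], n[k], S[k]
    tau2, ab, mean_x = tau2[k], ab[k], mean_x[k]
    # propagate x_{t+1}: mixture component drawn with weights pi_i, as in the
    # propagation formula above, then a normal draw with moments (a_i, A_i)
    comp = rng.choice(len(pi), size=N, p=pi)
    A = 1.0 / (1.0 / nu2[comp] + 1.0 / tau2)
    a = A * ((z - mu[comp]) / nu2[comp] + mean_x / tau2)
    x_new = a + np.sqrt(A) * rng.standard_normal(N)
    # update sufficient statistics with X_t = (1, x_t)
    X = np.stack([np.ones(N), x], axis=1)
    c = np.einsum('ni,nij,nj->n', X, C, X)                  # X_t C_t X_t'
    S_new = (n * S + (x_new - np.einsum('ni,ni->n', X, m)) ** 2 / (1.0 + c)) / (n + 1.0)
    C_new = np.linalg.inv(np.linalg.inv(C) + np.einsum('ni,nj->nij', X, X))
    m_new = np.einsum('nij,nj->ni', C_new,
                      np.einsum('nij,nj->ni', np.linalg.inv(C), m) + X * x_new[:, None])
    return x_new, m_new, C_new, n + 1.0, S_new
```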
Sample parameters
τ²|s_{t+1} ~ IG(n_{t+1}/2, n_{t+1} S_{t+1}/2) and (α, β)'|τ², s_{t+1} ~ N(m_{t+1}, τ² C_{t+1}).
Resampling weights
w_t ∝ p(z_{t+1}|c_t, λ_t, z^t)
∝ Σ_{i=1}^7 π_i ∫ p(z_{t+1}|x_t, s_t, θ, λ_{t+1} = i, λ_t, z^t, x_{t+1}) p(x_{t+1}|x_t, s_t, θ, λ_{t+1} = i, λ_t) dx_{t+1}   (marginalization over the data augmentation)
∝ Σ_{i=1}^7 π_i ∫ p(z_{t+1}|λ_{t+1} = i, x_{t+1}) p(x_{t+1}|x_t, θ) dx_{t+1}   (conditional independence)
∝ Σ_{i=1}^7 π_i ∫ f_N(z_{t+1}; μ_i + x_{t+1}, ν_i²) f_N(x_{t+1}; α + β x_t, τ²) dx_{t+1}
∝ Σ_{i=1}^7 π_i f_N(z_{t+1}; μ_i + α + β x_t, ν_i² + τ²).
Posterior distribution for new states
p(x_{t+1}|c_t, λ_t, z_{t+1}) = Σ_{i=1}^7 π_i p(x_{t+1}|c_t, λ_{t+1} = i, λ_t, z_{t+1})   (marginalization over the data augmentation)
= Σ_{i=1}^7 π_i p(z_{t+1}|x_t, s_t, θ, λ_{t+1} = i, λ_t, x_{t+1}) p(x_{t+1}|x_t, s_t, θ, λ_{t+1} = i, λ_t) / p(z_{t+1}|x_t, s_t, θ, λ_{t+1} = i, λ_t)   (Bayes' theorem)
= Σ_{i=1}^7 π_i p(z_{t+1}|λ_{t+1} = i, x_{t+1}) p(x_{t+1}|x_t, θ) / p(z_{t+1}|x_t, θ, λ_{t+1} = i)   (conditional independence)
= Σ_{i=1}^7 π_i f_N(z_{t+1}; μ_i + x_{t+1}, ν_i²) f_N(x_{t+1}; α + β x_t, τ²) / f_N(z_{t+1}; μ_i + α + β x_t, ν_i² + τ²)
= Σ_{i=1}^7 π_i f_N(x_{t+1}; a_i, A_i)
where A_i = (ν_i^{-2} + τ^{-2})^{-1} and a_i = A_i(ν_i^{-2}(z_{t+1} − μ_i) + τ^{-2}(α + β x_t)).
Recursive sufficient statistics
n_{t+1}S_{t+1} = n_tS_t + (x_{t+1} − X_t(C_t^{-1} + X_t'X_t)^{-1}(C_t^{-1}m_t + X_t'x_{t+1}))' x_{t+1}
 + (m_t − (C_t^{-1} + X_t'X_t)^{-1}(C_t^{-1}m_t + X_t'x_{t+1}))' C_t^{-1} m_t
= n_tS_t + (x_{t+1}'x_{t+1} − x_{t+1}'X_t(C_t^{-1} + X_t'X_t)^{-1}(C_t^{-1}m_t + X_t'x_{t+1}))
 + (m_t'C_t^{-1}m_t − m_t'C_t^{-1}(C_t^{-1} + X_t'X_t)^{-1}(C_t^{-1}m_t + X_t'x_{t+1}))
= n_tS_t + (x_{t+1}'x_{t+1} − x_{t+1}'X_t [C_t − C_tX_t'X_tC_t/(1 + X_tC_tX_t')](C_t^{-1}m_t + X_t'x_{t+1}))
 + (m_t'C_t^{-1}m_t − m_t'C_t^{-1}[C_t − C_tX_t'X_tC_t/(1 + X_tC_tX_t')](C_t^{-1}m_t + X_t'x_{t+1}))
= n_tS_t + x_{t+1}'(1 − c + c²/(1 + c))x_{t+1} − x_{t+1}'(1 − c/(1 + c))X_tm_t − m_t'X_t'(1 − c/(1 + c))x_{t+1} + m_t'X_t'X_tm_t/(1 + c)   (where c ≡ X_tC_tX_t')
= n_tS_t + (1/(1 + c))(x_{t+1}'x_{t+1} − 2x_{t+1}'X_tm_t + m_t'X_t'X_tm_t)
= n_tS_t + (1/(1 + c))(x_{t+1} − X_tm_t)'(x_{t+1} − X_tm_t),
where
(C_t^{-1} + X_t'X_t)^{-1} = C_t − C_tX_t'X_tC_t/(1 + X_tC_tX_t')
when X_t is a vector.
A few references: MC and MCMC
1. Casella and George (1992) Explaining the Gibbs sampler. The American Statistician, 46, 167-74.
2. Chib and Greenberg (1995) Understanding the Metropolis-Hastings algorithm. The American Statistician, 49, 327-35.
3. Gelfand and Smith (1990) Sampling-based approaches to calculating marginal densities. JASA, 85, 398-409.
4. Geman and Geman (1984) Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6, 721-41.
5. Geweke (1989) Bayesian inference in econometric models using Monte Carlo integration. Econometrica, 57, 1317-39.
6. Gilks and Wild (1992) Adaptive rejection sampling for Gibbs sampling. Applied Statistics, 41, 337-48.
7. Hastings (1970) Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57, 97-109.
8. Metropolis, Rosenbluth, Rosenbluth, Teller and Teller (1953) Equation of state calculations by fast computing machines. Journal of Chemical Physics, 21, 1087-92.
9. Smith and Gelfand (1992) Bayesian statistics without tears: a sampling-resampling perspective. The American Statistician, 46, 84-8.
A few references: Sequential Monte Carlo methods
1. Carvalho, Johannes, Lopes and Polson (2008) Particle learning and smoothing. University of Chicago Graduate School of Business.
2. Carvalho and Lopes (2007) Simulation-based sequential analysis of Markov switching stochastic volatility models. Computational Statistics and Data Analysis, 51, 4526-4542.
3. Doucet, de Freitas and Gordon (2001) Sequential Monte Carlo Methods in Practice. New York: Springer.
4. Fearnhead (2002) Markov chain Monte Carlo, sufficient statistics, and particle filters. Journal of Computational and Graphical Statistics, 11, 848-862.
5. Gordon, Salmond and Smith (1993) Novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEE Proceedings F, 140, 107-113.
6. Liu and West (2001) Combined parameters and state estimation in simulation-based filtering. In Sequential Monte Carlo Methods in Practice. New York: Springer.
7. Pitt and Shephard (1999) Filtering via simulation: auxiliary particle filters. Journal of the American Statistical Association, 94, 590-599.
8. Polson, Stroud and Müller (2008) Practical filtering with sequential parameter learning. Journal of the Royal Statistical Society, Series B, 70, 413-428.
9. Storvik (2002) Particle filters in state space models with the presence of unknown static parameters. IEEE Transactions on Signal Processing, 50, 281-289.
A few references: Dynamic models
1. Carlin, Polson and Stoffer (1992) A Monte Carlo approach to nonnormal and nonlinear state-space modeling. Journal of the American Statistical Association, 87, 493-500.
2. Carter and Kohn (1994) On Gibbs sampling for state space models. Biometrika, 81, 541-553.
3. Frühwirth-Schnatter (1994) Data augmentation and dynamic linear models. Journal of Time Series Analysis, 15, 183-202.
4. Migon, Gamerman, Lopes and Ferreira (2005) Dynamic models. In Dey and Rao (Eds.) Handbook of Statistics, Volume 25.
5. West and Harrison (1989/1997) Bayesian Forecasting and Dynamic Models. New York: Springer-Verlag.
A few references: Stochastic volatility models
1. Berg, Meyer and Yu (2004) Deviance information criterion for comparing stochastic volatility models. Journal of Business and Economic Statistics, 22, 107-20.
2. Eraker, Johannes and Polson (2003) The impact of jumps in volatility and returns. Journal of Finance, 58, 1269-300.
3. Jacquier, Polson and Rossi (1994) Bayesian analysis of stochastic volatility models. Journal of Business and Economic Statistics, 12, 371-415.
4. Jensen and Maheu (2008) Bayesian semiparametric stochastic volatility modeling. Working paper 2008-15, Federal Reserve Bank of Atlanta.
5. Johannes and Polson (2006) MCMC methods for financial econometrics. In Aït-Sahalia and Hansen (Eds.) Handbook of Financial Econometrics.
6. Kim, Shephard and Chib (1998) Stochastic volatility: likelihood inference and comparison with ARCH models. Review of Economic Studies, 65, 361-93.
7. Lopes and Polson (2009) Extracting SP500 and NASDAQ volatility: the credit crisis of 2007-2008. Handbook of Applied Bayesian Analysis.
8. Lopes and Carvalho (2007) Factor stochastic volatility with time varying loadings and Markov switching regimes. Journal of Statistical Planning and Inference, 137, 3082-3091.
9. Lopes and Salazar (2006) Time series mean level and stochastic volatility modeling by smooth transition autoregressions: a Bayesian approach. In Fomby (Ed.) Advances in Econometrics: Econometric Analysis of Financial and Economic Time Series, Part B, Volume 20, 229-242.
10. Polson, Stroud and Müller (2008) Practical filtering with sequential parameter learning. Journal of the Royal Statistical Society, Series B, 70, 413-28.