Raúl Toral
1 Rate equations. 5
1.1 The Poisson distribution . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2 Rate equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2 Master equations. 13
2.1 Master equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.1.1 Radioactive decay . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.1.2 Birth and death processes . . . . . . . . . . . . . . . . . . . . . 15
2.1.3 Birth and death from a reservoir . . . . . . . . . . . . . . . . . . 17
2.1.4 Reproduction and death . . . . . . . . . . . . . . . . . . . . . . 19
2.1.5 The autocatalytic reaction . . . . . . . . . . . . . . . . . . . . . 21
2.1.6 Gene transcription . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.1.7 The prey-predator Lotka-Volterra model . . . . . . . . . . . . . . 23
2.2 General results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.3 The mean-field theory . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.3.1 The enzymatic reaction . . . . . . . . . . . . . . . . . . . . . . 27
2.4 Beyond mean-field: The Gaussian approximation . . . . . . . . . . . . 28
2.5 The Fokker-Planck equation . . . . . . . . . . . . . . . . . . . . . . . . 28
2.6 The Langevin equation . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.7 The 1/Ω expansion of the master equation . . . . . . . . . . . . . . . . 29
Chapter 1
Rate equations.
p_λ(k) = e^{−λ} λ^k/k!,    k = 0, 1, . . . , ∞    (1.2)
The Poisson distribution is one of the most important distributions in nature, probably second only to the Gaussian distribution¹. The Poisson distribution has both mean and variance equal to the parameter λ:
⟨k⟩ = Σ_{k=0}^{∞} k p_λ(k) = λ    (1.3)

σ² = ⟨k²⟩ − ⟨k⟩² = λ    (1.4)
So σ = √⟨k⟩, a typical property of the Poisson distribution.
We can think of the Poisson distribution simply as a convenient limit which simplifies the calculations on many occasions. For instance, the probability that a person was born on a particular day, say January 1st, is approximately p = 1/365. Imagine now that we have a large group of N = 500 people. What is the probability that exactly 3 people were born on January 1st? The correct answer is given by the binomial distribution by
¹It is worth saying that in the limit λ → ∞ the Poisson distribution can itself be approximated by a Gaussian distribution (1/(σ√(2π))) e^{−(k−µ)²/(2σ²)} of mean µ = λ and variance σ² = λ.
considering the events A = "being born on January 1st", with probability p ≈ 1/365, and Ā = "not being born on January 1st", with probability 1 − p ≈ 364/365:

p_{500}(3) = \binom{500}{3} (1/365)³ (364/365)^{497} = 0.108919 . . .    (1.5)
This is well approximated by the Poisson distribution with λ = Np = 500/365 ≈ 1.37:

p_{1.37}(3) = e^{−1.37} (1.37)³/3! = 0.108900 . . .    (1.6)
which is good enough. Let us now compute the probability that at least two persons were born on May 11th.
1. A typist makes on average 5 mistakes every 100 words. Find the probability that
in a text of 1000 words the typist has made (a) exactly 10 mistakes, (b) at least
10 mistakes.
2. Use the Gaussian approximation to the Poisson distribution to find the probability
that in a group of 10000 people, at least 10 people were born on January 1st.
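The binomial-versus-Poisson comparison of Eqs. (1.5) and (1.6) is easy to check numerically. The following sketch uses the numbers of the example above (N = 500, k = 3, p = 1/365):

```python
from math import comb, exp, factorial

# Binomial probability that exactly k = 3 out of N = 500 people
# were born on a given day of probability p = 1/365, Eq. (1.5).
N, p, k = 500, 1 / 365, 3
binomial = comb(N, k) * p**k * (1 - p) ** (N - k)

# Poisson approximation with lambda = N*p ~ 1.37, Eq. (1.6).
lam = N * p
poisson = exp(-lam) * lam**k / factorial(k)

print(binomial, poisson)  # both close to 0.1089
```

The two values agree to three significant figures, as claimed in the text.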
There are occasions in which the Poisson limit occurs exactly. Imagine we distribute N dots randomly and uniformly in the interval [0, T] (we will immediately think of this as events occurring randomly in time at a uniform rate, hence the notation). We call ρ = N/T the rate at which points are distributed. We now ask the question: what is the probability that exactly k of the N dots lie in the interval [t_1, t_1 + t]? The event A = "one given dot lies in the interval [t_1, t_1 + t]" has probability p = t/T, whereas the event Ā = "the given dot does not lie in the interval [t_1, t_1 + t]" has probability q = 1 − p. It should be clear that the answer to the previous question is given by the binomial
Taking the time derivative we conclude that f(t) = ρe^{−ρt}, the well-known exponential law for radioactive decay, which allows us to identify τ = 1/ρ = T/N as the average time between emissions. Let us denote by f̂(t)dt the probability that the next electron is emitted in the interval (t, t + dt) given that there was an emission at t = 0. That implies that no other electron has been emitted in the interval (0, t). Let us denote by X the event "an electron is emitted in (t, t + dt)"; the probability of this event is P(X) = f(t)dt. Let us denote by Y the event "an electron was emitted at t = 0 but no other electron has been emitted in (0, t)". We are asking for the conditional probability
f̂(t)dt = P(X|Y) = P(X, Y)/P(Y)    (1.10)
but X and Y are independent events, since the probability of emission in the interval (t, t + dt) does not depend on whether an electron has been emitted before in the interval (0, t). Hence we have P(X, Y) = P(X)P(Y) and f̂(t)dt = P(X) = f(t)dt. This leads to f̂(t) = f(t). In other words, the waiting time between two consecutive emissions also follows the exponential law with the same rate ρ.
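This can be checked with a short simulation (a sketch; the rate ρ = 2 and the interval length T = 1000 are arbitrary choices): dots are dropped uniformly in [0, T] and the gaps between consecutive dots are collected.

```python
import random

random.seed(42)
rho, T = 2.0, 1000.0
N = int(rho * T)                 # N = rho*T dots distributed at rate rho

dots = sorted(random.uniform(0, T) for _ in range(N))
waits = [b - a for a, b in zip(dots, dots[1:])]

mean_wait = sum(waits) / len(waits)
frac_above_tau = sum(w > 1 / rho for w in waits) / len(waits)

print(mean_wait)        # close to tau = 1/rho = 0.5
print(frac_above_tau)   # close to exp(-1) ~ 0.368 for an exponential law
```

The fraction of waiting times larger than τ approaches e^{−1}, the signature of the exponential law f(t) = ρe^{−ρt}.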
switches occur uniformly and randomly at a rate ω(1 → 2). The inverse process, that of switching from 2 to 1, might or might not occur. If it occurs, its rate ω(2 → 1) has, in principle, no relation whatsoever to the rate ω(1 → 2). If we start at t = 0 in state 1, we ask for the probabilities P_1(t) and P_2(t) that the system is in state 1 or 2, respectively, at time t. Obviously, they must satisfy P_1(t) + P_2(t) = 1. We will now derive a differential equation for P_1(t). The probability that the system is in state 1 at time t + dt has two contributions: that of being in 1 at time t and not having jumped to state 2 during the interval (t, t + dt), and that of being in 2 at time t and having made a jump from 2 to 1 in the interval (t, t + dt). This leads to:
Or in terms of the currents J(i → j) = −ω(i → j)Pi (t) + ω(j → i)Pj (t):
dP_i(t)/dt = Σ_{j≠i} J(i → j)    (1.17)
Although it is not very common in practice, nothing prevents us from considering the
more general case where the transition rates depend on time. Hence, the more general
rate equations are:
dP_i(t)/dt = Σ_{j≠i} [−ω(i → j; t)P_i(t) + ω(j → i; t)P_j(t)]    (1.18)
Note that the sum Σ_{j≠i} can be replaced by Σ_{∀j}, since the term j = i does not contribute to this sum. To find the solution of these rate equations we need to specify an initial condition P_i(t = t_0).
We stress again that the rate coefficients ω_{i→j} ≡ ω(i → j) do not need to satisfy any relation amongst themselves². Remember also that the rates ω_{i→j} are not probabilities and do not need to be bounded to the interval [0, 1] (although they are positive quantities). Moreover, ω_{i→j} has units of time⁻¹. It is now easy to verify that, whatever the coefficients ω_{i→j},
(d/dt) Σ_i P_i(t) = 0    (1.19)

and, again, we have the normalization condition Σ_i P_i(t) = 1 for all times t provided that Σ_i P_i(t_0) = 1.
It is usual to define the total escape rate Ω_j from state j as the sum of the rates to all possible states:

Ω_j = Σ_{i≠j} ω_{j→i} = Σ_{i≠j} ω(j → i).    (1.20)
When the total number of states N is finite, it is possible, and sometimes useful, to define the matrix Ω as

Ω_{ij} = ω(j → i) if i ≠ j,
Ω_{ii} = −Σ_{j≠i} ω(i → j),    (1.21)

so that the rate equations become

dP_i(t)/dt = Σ_j Ω_{ij} P_j(t)    (1.22)
The matrix Ω is such that its columns add to zero, which guarantees the conservation of the normalization Σ_i P_i(t); together with the non-negativity of the off-diagonal elements, this also ensures that the solutions P_i(t) respect the positivity condition P_i(t) ≥ 0 provided that P_i(0) ≥ 0. It can also be proven that Ω has a zero eigenvalue λ_1 = 0 and that the other eigenvalues −λ_2, . . . , −λ_N have negative real part (in general they may be complex). This ensures that the functions P_i(t) can be written as:
P_i(t) = P_i^{st} + Σ_{k=2}^{N} C_{ik} e^{−λ_k t}    (1.23)
where the stationary (or steady-state) probabilities P_i^{st} = lim_{t→∞} P_i(t) are the elements of the eigenvector corresponding to the zero eigenvalue. Irreducibility and ergodicity should be discussed here. It seems that not all initial conditions P_i(t_0) need lead to the stationary distribution unless the process is ergodic.
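As an illustration, one can integrate Eq. (1.22) numerically and check both the conservation of the normalization, Eq. (1.19), and the decay towards a stationary distribution. The following sketch uses an arbitrarily chosen three-state rate matrix:

```python
# w[i][j] = omega(j -> i), arbitrary sample rates for 3 states.
w = [[0.0, 1.0, 0.5],
     [2.0, 0.0, 1.5],
     [0.3, 0.7, 0.0]]
n = 3
# Build Omega as in Eq. (1.21): diagonal chosen so each column sums to zero.
Omega = [[w[i][j] if i != j else -sum(w[k][j] for k in range(n) if k != j)
          for j in range(n)] for i in range(n)]

P = [1.0, 0.0, 0.0]      # initial condition: state 1 occupied
dt = 0.001
for _ in range(20000):   # Euler integration of dP_i/dt = sum_j Omega_ij P_j up to t = 20
    dP = [sum(Omega[i][j] * P[j] for j in range(n)) for i in range(n)]
    P = [P[i] + dt * dP[i] for i in range(n)]

print(sum(P))            # normalization stays equal to 1 at all times
print(P)                 # the stationary probabilities P_i^st
```

By t = 20 the transient terms of Eq. (1.23) have decayed and P no longer changes: the printed vector is the eigenvector of Ω with zero eigenvalue, normalized to 1.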
It would be nice to be able to give the stationary solution P_i^{st} in terms of the transition rate coefficients ω_{i→j}. However, this is not possible in general. A simple case in which
²The elements ω(i → i) are not defined and one usually takes ω(i → i) = 0, although their precise value is irrelevant in the majority of formulas.
this is possible is that of two states N = 2 (that we label as before by state 1 and state
2). In this case the solution is simply found as:
P_1(t) = P_1(t_0) e^{−(ω_{2→1}+ω_{1→2})(t−t_0)} + [ω_{2→1}/(ω_{2→1}+ω_{1→2})] (1 − e^{−(ω_{2→1}+ω_{1→2})(t−t_0)})

P_2(t) = P_2(t_0) e^{−(ω_{2→1}+ω_{1→2})(t−t_0)} + [ω_{1→2}/(ω_{2→1}+ω_{1→2})] (1 − e^{−(ω_{2→1}+ω_{1→2})(t−t_0)})    (1.24)
The stationary distribution is:

P_1^{st} = ω_{2→1}/(ω_{2→1}+ω_{1→2}),
P_2^{st} = ω_{1→2}/(ω_{2→1}+ω_{1→2}),    (1.25)
a particularly simple solution. Note that in this case the stationary distribution satisfies the detailed balance condition ω_{1→2} P_1^{st} = ω_{2→1} P_2^{st}, and, of course, P(2, t|1, t_0) = 1 − P(1, t|1, t_0), with equivalent expressions for P(1, t|2, t_0) and P(2, t|2, t_0). Let us stress, again, that no equivalent closed-form solution exists in the case of having N ≥ 3 states in the system.
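The two-state solution (1.24) and its stationary limit (1.25) are easy to verify numerically. The following sketch uses arbitrary rates ω_{1→2} = 2 and ω_{2→1} = 3 and checks the normalization, the rate equation itself, and the t → ∞ limit:

```python
from math import exp

w12, w21 = 2.0, 3.0          # assumed rates omega(1->2), omega(2->1)
W = w12 + w21

def P1(t, P1_0):
    """Eq. (1.24) for P1(t), with t0 = 0."""
    return P1_0 * exp(-W * t) + (w21 / W) * (1 - exp(-W * t))

def P2(t, P2_0):
    """Eq. (1.24) for P2(t), with t0 = 0."""
    return P2_0 * exp(-W * t) + (w12 / W) * (1 - exp(-W * t))

# normalization holds at all times...
print(P1(0.37, 1.0) + P2(0.37, 0.0))     # = 1
# ...and the stationary values (1.25) are reached for large t
print(P1(50.0, 1.0), w21 / W)            # both equal to 0.6
```

A finite-difference check of dP_1/dt = −ω_{1→2}P_1 + ω_{2→1}P_2 at any intermediate time confirms that (1.24) indeed solves the rate equations.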
Chapter 2
Master equations.
where X denotes a radioactive atom and Y the product of the disintegration. Each one of the N atoms of the substance can be in state 1 (it has not yet emitted an electron) or in state 2 (it has emitted an electron). The transitions are from state 1 to state 2 at a rate ω ≡ ω(1 → 2). The rate equations are (there are no transitions from 2 to 1):

dP_1(t)/dt = −ωP_1(t),
dP_2(t)/dt = +ωP_1(t).    (2.2)
Let n_1(t) and n_2(t) be the number of atoms in states 1 and 2, respectively, at time t. Since the particles that leave state 1 go to state 2, it is n_1(t) + n_2(t) = N. The values of n_1(t) and n_2(t) are random variables that can take values with different probabilities. Let us introduce the probability P(n_1, n_2; t) of finding n_1 atoms in state 1 and n_2 atoms in state 2 at time t. We assume that the initial condition is that all N atoms are in state 1 at time t_0 = 0. Using Kronecker-delta functions, this initial condition is:

P(n_1, n_2; t = 0) = δ_{n_1,N} δ_{n_2,0}.    (2.3)

The condition n_1(t) + n_2(t) = N means that we can actually simplify notation and focus on the probability P(n; t) of having n particles in state 1 at time t; the number of particles in state 2 will simply be N − n. We now derive a differential equation for P(n; t).
The probability P(n; t) of having n particles in state 1 at time t can change because one particle leaves state 1 to go into state 2. A single atom can jump from 1 to 2 in the
time interval (t, t + dt) with a probability ωdt. Therefore, the probability that some atom jumps from 1 to 2 is ωndt, since the atoms jump independently of one another. The probability that there are n particles in state 1 at time t + dt has two contributions: (i) there were n + 1 particles at time t and a jump occurred in (t, t + dt), or (ii) there were n particles at time t and no jumps occurred in (t, t + dt). Translated into an equation:
P(n; t + dt) = P(n + 1; t)ω(n + 1)dt + P(n; t)(1 − nωdt) + O(dt²)    (2.4)
or, taking the limit dt → 0:
∂P(n; t)/∂t = −ωnP(n; t) + ω(n + 1)P(n + 1; t)    (2.5)
This is the master equation of the radioactive process. Let us now introduce the step operator E, which acts on any function of integers as E[f(n)] = f(n + 1). With this notation, the master equation can be written as:
∂P(n; t)/∂t = (E − 1)[ωnP(n; t)]    (2.6)
This is to be solved with the initial condition P(n; 0) = δ_{n,N}. The solution can be found in this case (as well as in other cases) with the help of the generating function G(s, t), defined as:

G(s, t) = Σ_{n=−∞}^{∞} s^n P(n; t)    (2.7)

(note that the sum is effectively limited to 0 ≤ n ≤ N since P(n; t) = 0 for n < 0 or n > N). Multiplying Eq. (2.5) by s^n and summing over n we get:
Σ_n s^n ∂P(n; t)/∂t = −ω Σ_n n s^n P(n; t) + ω Σ_n (n + 1) s^n P(n + 1; t)    (2.8)
which, after a simple manipulation replacing n + 1 → n in the last term, leads to the differential equation:
∂G(s, t)/∂t = ω(1 − s) ∂G(s, t)/∂s,    (2.9)
whose solution with the initial condition G(s, t = 0) = G0 (s) is found by Lagrange’s
method:
G(s, t) = G_0(1 + (s − 1)e^{−ωt}).    (2.10)
For instance, if at t = 0 there are N particles, then P(n; t = 0) = δ_{n,N}, G_0(s) = s^N, and
G(s, t) = [(s − 1)e−ωt + 1]N = [se−ωt + 1 − e−ωt ]N (2.11)
Note that the condition G(1, t) = 1 is equivalent to the normalization condition Σ_n P(n; t) = 1. Expanding in Taylor series using the binomial expansion,
G(s, t) = Σ_{n=0}^{N} \binom{N}{n} [se^{−ωt}]^n [1 − e^{−ωt}]^{N−n}    (2.12)
which, recalling the definition of the generating function, Eq. (2.7), gives the solution for the probabilities:

P(n; t) = \binom{N}{n} e^{−nωt} [1 − e^{−ωt}]^{N−n}    (2.13)
This is nothing but a binomial distribution of parameter p(t) = e^{−ωt}, the expected result¹. The mean number of particles in state 1 at time t is Np(t):
⟨n(t)⟩ = Ne^{−ωt}    (2.14)
which is again the well-known law of radioactive decay. Alternatively, the mean value can be computed from the generating function as

⟨n(t)⟩ = ∂G(s, t)/∂s |_{s=1}    (2.15)
The variance can be computed in a similar way using

⟨n(t)²⟩ = [∂/∂s (s ∂G(s, t)/∂s)]_{s=1}    (2.16)
with the well known result for a binomial distribution, σ 2 [n(t)] = N p(1 − p), or:
σ²[n(t)] = Ne^{−ωt}(1 − e^{−ωt})    (2.17)
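The binomial result (2.13) and the moments (2.14) and (2.17) can be checked with a direct simulation. The sketch below assumes ω = 1 and N = 100: each atom decays independently, so its decay time is exponentially distributed with rate ω.

```python
import random
from math import exp

random.seed(7)
omega, N, t = 1.0, 100, 0.5
trials = 10000

# In each realization, draw an exponential decay time for each of the N
# atoms and count how many have not yet decayed at time t (still in state 1).
samples = [sum(random.expovariate(omega) > t for _ in range(N))
           for _ in range(trials)]

mean = sum(samples) / trials
var = sum((x - mean) ** 2 for x in samples) / trials

p = exp(-omega * t)
print(mean, N * p)            # <n(t)> = N e^{-omega t}
print(var, N * p * (1 - p))   # sigma^2[n(t)] = N e^{-omega t}(1 - e^{-omega t})
```

The sample mean and variance reproduce the binomial values Np(t) and Np(t)(1 − p(t)) within statistical error.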
We can make the connection with the results of the previous chapter by noticing that the number of electrons emitted is k = N − n and that the probability P_{e⁻}(k; t) of emitting k electrons in the interval (0, t) also follows a binomial distribution:

P_{e⁻}(k; t) = \binom{N}{k} e^{−(N−k)ωt} [1 − e^{−ωt}]^k,    k = 0, 1, . . . , N    (2.18)
of probability p = 1 − e^{−ωt}. In the large-N limit, this reduces to the Poisson distribution if we write ω = ρ/N, such that Np → ρt in the limit N → ∞. So everything agrees if we use the fact that the rate ρ of emission of N atoms is N times the rate of emission ω of a single atom.
Let n_1 be the number of particles in state X while n_2 is the number of particles in state Y. The total number of particles is N = n_1 + n_2, constant. We ask for the probability P(n; t) of having n particles in state X (and, consequently, N − n in state Y) at time t. The equation satisfied by P(n; t), the master equation, can be obtained by considering all the transitions that can occur during the time interval (t, t + ∆t) and that lead to there being n particles in X at time t + ∆t. They are:
(1) There were n + 1 particles in X at time t and one of them jumped to Y during the time interval (t, t + ∆t). Since each one of the n + 1 particles can make the transition independently of the others with probability ω_1∆t + O(∆t)², the total probability that any one particle jumps is (n + 1)ω_1∆t. We exclude the possibility that two (or more) particles made the transition X → Y, since this is of order (∆t)² or higher.
(2) There were n − 1 particles in X at time t and one of the N − n + 1 particles in Y made a jump to X. This occurs with probability P(n; t + ∆t|n − 1; t) = (N − n + 1)ω_2∆t + O(∆t)², again neglecting higher-order terms.
(3) There were n particles in X at time t and no particle made a jump from X to Y or from Y to X. The probability of this is 1 − [nω_1 + (N − n)ω_2]∆t + O(∆t)², since nω_1∆t + (N − n)ω_2∆t is the probability of the complementary event (a particle jumped from X to Y or a particle jumped from Y to X).
No other possibilities can occur, according to the rules of the process. Putting all the bits together we get:

P(n; t + ∆t) = Σ_{k=−1,0,1} P(n + k; t) P(n; t + ∆t|n + k; t)    (2.20)
= P(n; t)[1 − nω_1∆t − (N − n)ω_2∆t] + P(n + 1; t)(n + 1)ω_1∆t
+ P(n − 1; t)(N − n + 1)ω_2∆t + O(∆t)²    (2.21)
Arranging conveniently and taking the limit ∆t → 0 we arrive at the desired master
equation:
∂P(n; t)/∂t = ω_1(n + 1)P(n + 1; t) + ω_2(N − n + 1)P(n − 1; t) − [ω_1n + ω_2(N − n)]P(n; t)    (2.22)
or, generalizing the definition of the step operator to E^k[f(n)] = f(n + k):
∂P(n; t)/∂t = (E − 1)[ω_1nP(n; t)] + (E^{−1} − 1)[ω_2(N − n)P(n; t)]    (2.23)
Next, we need to solve this master equation. We use the generating function G(s, t),
defined by (2.7). It is a matter of simple algebra to obtain:
∂G(s, t)/∂t = (1 − s) [(ω_1 + ω_2s) ∂G(s, t)/∂s − ω_2N G(s, t)].    (2.24)
The solution to this differential equation with the initial condition G(s, t = 0) = G0 (s),
can be found by the method of the characteristics:
G(s, t) = [(ω_1 + ω_2s + ω_2e^{−(ω_1+ω_2)t}(1 − s))/(ω_1 + ω_2)]^N G_0( (ω_1 + ω_2s − ω_1e^{−(ω_1+ω_2)t}(1 − s)) / (ω_1 + ω_2s + ω_2e^{−(ω_1+ω_2)t}(1 − s)) )    (2.25)
a binomial distribution:

P(n; t) = \binom{N}{n} p(t)^n (1 − p(t))^{N−n},    with p(t) = (ω_2/ω)(1 − e^{−ωt}),    (2.27)

where ω ≡ ω_1 + ω_2.
If, on the other hand, at t = 0 all N particles are alive, then P(n; t = 0) = δ_{n,N}, G_0(s) = s^N, and:

G(s, t) = [(ω_1 + ω_2s − ω_1e^{−ωt}(1 − s))/ω]^N    (2.28)

again a binomial distribution, but now with p(t) = (ω_2 + ω_1e^{−ωt})/ω. Other initial distributions do
not yield in general a binomial form for P(n; t). Note, however, that for t → ∞ it is

G_{st}(s) = lim_{t→∞} G(s, t) = [(ω_1 + ω_2s)/ω]^N    (2.29)
(following Gillespie, the bar on top of the A means that its population is assumed to remain unchanged). If the (very large) number of particles of the reservoir is n_A, we can think that the rate ω_A is proportional to the density n_A/Ω, where Ω is a measure of the volume of the reservoir, rather than to the number n_A itself. We then write ω_A = ω_2n_A/Ω. The problem is then formally equivalent to the birth and death process considered previously, with a number of particles N → ∞ and a vanishing rate ω_2 such that ω_A = ω_2N is finite. Its solution can be obtained using that limit, but we will start from scratch.
We want to find the master equation for the probability P(n; t) that there are n particles left in X at time t. There are now three elementary contributions to P(n; t + dt), according to what happened in the time interval (t, t + dt): (i) X had n particles at time t and none was lost to the bath and none was obtained from the bath; (ii) X had n + 1 particles at time t and one particle was lost to the bath; (iii) X had n − 1 particles and one was transferred from the bath. Combining the probabilities of these three events we get the evolution equation:

P(n; t + dt) = P(n; t)[1 − nω_1dt][1 − ω_Adt]    case (i)
+ P(n + 1; t)ω_1(n + 1)dt    case (ii)    (2.33)
+ P(n − 1; t)ω_Adt + O(dt²)    case (iii)
or, taking the limit dt → 0:

dP(n; t)/dt = −(ω_1n + ω_A)P(n; t) + ω_1(n + 1)P(n + 1; t) + ω_AP(n − 1; t).    (2.34)
Again, this can be written using the step operator E as:
dP(n; t)/dt = (E − 1)[ω_1nP(n; t)] + (E^{−1} − 1)[ω_AP(n; t)].    (2.35)
This equation is solved again by introducing the generating function G(s, t). The
resulting partial differential equation is:
∂G/∂t = ω_A(s − 1)G − ω_1(s − 1) ∂G/∂s.    (2.36)
The method of Lagrange gives us the general solution satisfying the initial condition
G(s, t = 0) = G0 (s):
G(s, t) = e^{(ω_A/ω_1)(s−1)(1−e^{−ω_1t})} G_0(1 + (s − 1)e^{−ω_1t})    (2.37)
If initially there are no X particles, then P(n; t = 0) = δ_{n,0}, G_0(s) = 1, and the corresponding solution is:

G(s, t) = e^{λ(t)(s−1)},    λ(t) ≡ (ω_A/ω_1)(1 − e^{−ω_1t}),    (2.38)
which corresponds to a Poisson distribution P(n; t) = e^{−λ(t)} λ(t)^n/n!. This has first moment and variance:

⟨n(t)⟩ = λ(t),
σ²[n(t)] = λ(t).    (2.39)
Whatever the initial condition, in the stationary limit t → ∞ we have from Eq. (2.37):

G_{st}(s) = G(s, t → ∞) = e^{(ω_A/ω_1)(s−1)},    (2.40)

a Poisson distribution of parameter λ = ω_A/ω_1.
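A Gillespie-type simulation of this birth-and-death process confirms the Poissonian stationary state. The sketch below assumes the rates ω_A = 10 and ω_1 = 1, so that λ = ω_A/ω_1 = 10, and takes time-weighted averages in the stationary regime:

```python
import random

random.seed(3)
wA, w1 = 10.0, 1.0            # creation from the reservoir, death rate per particle
n, t = 0, 0.0
burn_in, t_end = 20.0, 2000.0

acc_t = acc_n = acc_n2 = 0.0  # time-weighted accumulators for <n> and <n^2>
while t < t_end:
    total = wA + w1 * n                      # total escape rate from state n
    dt = random.expovariate(total)           # waiting time to the next event
    if t > burn_in:                          # weight the current state by dt
        acc_t += dt
        acc_n += n * dt
        acc_n2 += n * n * dt
    t += dt
    n += 1 if random.random() < wA / total else -1   # birth or death

mean = acc_n / acc_t
var = acc_n2 / acc_t - mean**2
print(mean, var)   # both close to lambda = wA/w1 = 10
```

The equality of the mean and the variance is the fingerprint of the Poisson distribution (2.40).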
The reproduction rate is C_{−1}(n) = ωn and the annihilation rate is C_1(n) = γn. We write down directly the equation for the generating function G(s, t) = Σ_{n=0}^{∞} P(n; t)s^n:

∂G/∂t = (s^{−1} − 1)γs ∂G/∂s + (s − 1)ωs ∂G/∂s = (1 − s)(γ − ωs) ∂G/∂s    (2.42)
If γ ≠ ω, the solution of this partial differential equation with the initial condition G(s, t = 0) = G_0(s) is:

G(s, t) = G_0( (γ − ωs + γ(s − 1)e^{−Γt}) / (γ − ωs + ω(s − 1)e^{−Γt}) ),    (2.43)
with Γ = γ − ω. The mean value and the variance can be obtained from the derivatives
of G:
⟨n(t)⟩ = ⟨n(0)⟩e^{−Γt}    (2.44)
and

σ[n(t)]² = σ[n(0)]²e^{−2Γt} + ⟨n(0)⟩ [(γ + ω)/(γ − ω)] e^{−Γt}(1 − e^{−Γt}).    (2.45)
If γ > ω, the mean value and the fluctuations decay to 0, indicating that all particles eventually disappear. If γ < ω, both increase exponentially. The case γ = ω can be treated as a limiting case, and it yields:

⟨n(t)⟩ = ⟨n(0)⟩    (2.46)

and

σ[n(t)]² = σ[n(0)]² + ⟨n(0)⟩(γ + ω)t.    (2.47)
It is interesting to solve the case γ = ω directly. The differential equation for G(s, t)
has the solution:
G(s, t) = G_0( (1 + (γt − 1)(1 − s)) / (1 + γt(1 − s)) )    (2.48)
from where ⟨n(t)⟩ and σ[n(t)]² follow readily as before. If we take G_0(s) = s^N, corresponding to a situation in which there are exactly N particles at time t = 0, it is possible to expand this function in a power series of s to find the time evolution of the probabilities:
P_N(0, t) = b(t)^N/[1 + b(t)]^N,    (2.49)

P_N(n, t) = (b(t)^{n−N}/[1 + b(t)]^{N+n}) Σ_{ℓ=0}^{N−1} \binom{N}{ℓ} \binom{n−1}{N−ℓ−1} b(t)^{2ℓ},    n ≥ 1,    (2.50)
where b(t) = γt. This shows an interesting behavior: since lim_{t→∞} P_N(0, t) = 1, eventually all particles disappear, but since the variance increases linearly, σ[n(t)]² = 2Nγt, there is a large tail in the distribution P_N(n, t). From

P_N(0, t) − P_N(1, t) = (b(t)^{N−1}/[1 + b(t)]^{N+1}) (b(t)² + b(t) − N)    (2.51)

we derive that the collapse time t_c, at which the probability at n = 0 begins to overcome the one at n = 1, is the solution of b(t_c)² + b(t_c) = N, or t_c ∼ γ^{−1}N^{1/2} for large N.
The probabilities can be written in the alternative form:

P_N(n, t) = (b(t)^{n−N}/[1 + b(t)]^{n+N}) \binom{n−1}{N−1} ₂F₁(−N, 1 − N; n − N + 1; b(t)²),    if n ≥ N,

P_N(n, t) = (b(t)^{N−n}/[1 + b(t)]^{n+N}) \binom{N}{n} ₂F₁(−n, 1 − n; N − n + 1; b(t)²),    if n ≤ N.    (2.52)
It is very easy to add immigration to this process, i.e. to consider:

X → 2X    (rate ω)
X → ∅    (rate γ)    (2.53)
∅ → X    (rate a)
The creation rate is now C_{−1}(n) = ωn + a and the generating function satisfies the equation:

∂G/∂t = (s^{−1} − 1)γs ∂G/∂s + (s − 1)(ωs ∂G/∂s + aG) = (1 − s)[(γ − ωs) ∂G/∂s − aG]    (2.54)
The solution is:

G(s, t) = [(γ − sω + ω(s − 1)e^{−Γt})/(γ − ω)]^{−a/ω} G_0( (γ − sω + γ(s − 1)e^{−Γt}) / (γ − sω + ω(s − 1)e^{−Γt}) )    (2.55)
The mean value is:

⟨n(t)⟩ = ⟨n(0)⟩e^{−Γt} + (a/Γ)(1 − e^{−Γt}).    (2.56)
If Γ > 0 there is a limit distribution:

G_{st}(s) = [(γ − sω)/(γ − ω)]^{−a/ω} = (1 − ω/γ)^{a/ω} (1 − (ω/γ)s)^{−a/ω}    (2.57)
from where:

P_{st}(n) = (1 − ω/γ)^{a/ω} [Γ(a/ω + n)/(n! Γ(a/ω))] (ω/γ)^n,    (2.58)

where Γ(·) denotes the Euler Gamma function (not to be confused with Γ = γ − ω),
a negative-binomial distribution. The variance is σ_{st}[n]² = aγ/(γ − ω)². For γ = ω this distribution is not normalizable. In fact, in this case the population grows without limit, as ⟨n(t)⟩ = ⟨n(0)⟩ + at.
The rate at which one particle reproduces when in contact with the reservoir is ω_1. However, the destruction of one particle requires that it finds another particle. If there are n particles, the rate at which one given particle dies is ω_2(n − 1), since it can meet any of the other n − 1 particles. Consequently, the rate at which n particles become n + 1 is ω_1n, since any particle can interact with the reservoir, and the rate at which n particles become n − 1 is ω_2n(n − 1). We can now reason as we did in the previous examples to find the master equation for the probability of having n particles at time t:
dP(n; t)/dt = (E^{−1} − 1)[ω_1nP(n; t)] + (E − 1)[ω_2n(n − 1)P(n; t)].    (2.60)
The generating function satisfies the differential equation:

∂G/∂t = s(1 − s) ∂/∂s [ω_2 ∂G/∂s − ω_1G],    (2.61)
which should be solved with the initial condition G(s, t = 0) = G_0(s), but I am unable to find this general solution. The stationary solution can be found by setting ∂G/∂t = 0. This yields G_{st}(s) = c_1e^{λs} + c_2, with λ = ω_1/ω_2 and c_1, c_2 integration constants. To find them we use the general relation G(s = 1, t) = 1 and, in this case, we note that the probability that there are n = 0 particles is exactly equal to 0, as particles meet in pairs but only one gets annihilated. This means that G_{st}(0) = P_{st}(0) = 0. Implementing these two conditions we find:
G_{st}(s) = 1 + (e^{λ(s−1)} − 1)/(1 − e^{−λ}) = (1/(e^λ − 1)) Σ_{n=1}^{∞} (λ^n/n!) s^n.    (2.62)
The master equation describing this process of creation and degradation of mRNA is:
∂P(n; t)/∂t = ω_T P(n − 1; t) − ω_T P(n; t) + γ(n + 1)P(n + 1; t) − γnP(n; t)    (2.65)
This equation can be solved using the generating function technique to find that in the steady state the probability of finding n mRNAs is a Poisson distribution of parameter ω_T/γ. Hence, the average number of mRNA molecules is ⟨n⟩ = ω_T/γ. Typically, a gene of about 1500 base pairs takes about 60 s to be transcribed. That gives us an idea of the order of magnitude of ω_T ≈ 1/60 s⁻¹. The degradation rate is of the order of 4 times smaller, γ ≈ 1/240 s⁻¹. Hence the average number of mRNAs transcribed from a particular gene is of the order of ⟨n⟩ ≈ 4. This agrees with experiments, but the model has a problem: the variability is too high. This is because the fluctuations of the Poisson distribution, as measured by the root mean square σ = √⟨n⟩ ≈ 2, represent a variability of 50% in the number of mRNA molecules. This is too high.
We might want to include some other effects present in gene expression. We know that mRNA is translated into proteins inside the ribosomes. A codon is a sequence of three nucleotides (Adenine, Thymine, Cytosine or Guanine) and each codon is translated into one of the 20 possible amino acids (this is the genetic code). This translation is mediated by 20 different tRNAs. Each tRNA couples to the right codon to generate the amino acid, and the sequence of amino acids constitutes the protein. Hence we have the following process²: genes create mRNA molecules at a rate ω_r. An mRNA molecule can either degrade at a rate γ_r or produce a protein at a rate ω_p. The protein finally degrades at a rate γ_p.
If we introduce the probability P(r, n; t) of having r mRNAs and n proteins at time t, we can write the master equation of the central dogma as:
∂P(r, n; t)/∂t = ω_r P(r − 1, n; t) − ω_r P(r, n; t)    (transcription)
+ ω_p r P(r, n − 1; t) − ω_p r P(r, n; t)    (translation)
+ γ_r (r + 1) P(r + 1, n; t) − γ_r r P(r, n; t)    (degradation of mRNA)
+ γ_p (n + 1) P(r, n + 1; t) − γ_p n P(r, n; t)    (degradation of protein)    (2.66)
We can now use the generating function technique to compute the mean values and the fluctuations in the steady state. The result is

⟨r⟩ = ω_r/γ_r    (2.67)
σ²[r] = ω_r/γ_r    (2.68)
⟨n⟩ = (ω_r/γ_r)(ω_p/γ_p)    (2.69)
σ²[n]/⟨n⟩ = 1 + ω_p/(γ_r + γ_p)    (2.70)
The last equation shows that in this model the distribution of proteins is super-Poissonian, since the fluctuations are larger than in the Poisson distribution. This has been named

²This whole process is known as the central dogma of molecular biology.
noise amplification. The situation is then even worse than it was in the previous model, as far as the magnitude of the variability is concerned. It is believed that the number of proteins is regulated by a feedback mechanism between different genes. A gene B can regulate the production of gene A by producing proteins that bind to the promoters of gene A.
A recent modification of this model [M. Thattai and A. van Oudenaarden, PNAS 98, 8614 (2001)] includes the presence of inhibitory circuits in gene expression. Basically it amounts to replacing ω_r by ω_r(1 − n) with a small number (a more realistic approach could be to include some non-linear saturation terms). One can now solve the master equation and, after a lengthy calculation, find that the average number of proteins decreases to ⟨n⟩ = (ω_r/γ_r)(1 − ω_r/γ_r). The variance is then reduced to:

σ²[n]/⟨n⟩ = 1 + ω_p/(γ_r + γ_p) − (ω_r/γ_r)(ω_p/γ_p)    (2.71)
X + Y → 2Y (2.73)
with a rate ω1 . Finally, the species Y can die of natural causes at a rate ω2 :
Y →∅ (2.74)
Of course, this is a very simplified model of population dynamics, but let us analyze it
in some detail.
We denote by P(n_1, n_2; t) the probability that there are n_1 animals of species X and n_2 animals of species Y at time t. The master equation can be obtained by enumerating the elementary processes occurring in the time interval (t, t + dt) that might contribute to P(n_1, n_2; t + dt), namely:
(i) The population was (n1 , n2 ) at time t and no rabbit reproduced and no rabbit was
eaten and no fox died.
(ii) The population was (n1 − 1, n2 ) at time t and a rabbit reproduced.
(iii) The population was (n1 , n2 + 1) at time t and a fox died.
(iv) The population was (n1 + 1, n2 − 1) at time t and a fox ate a rabbit and reproduced.
∂G/∂t = −ω_A(1 − s_1)G + ω_2(1 − s_2) ∂G/∂s_2 + ω̄_1s_2(s_1 − s_2) ∂²G/∂s_1∂s_2    (2.78)

but it is wrong, I am a little bit tired now. In any case, the solution looks hopeless (?).
³It can also be considered as a kind of mean-field approach, since spatial inhomogeneities are not considered. However, we will shortly use the name mean-field to denote a situation in which correlations between the populations of prey and predators are neglected.
2.2 General results
∂P(n; t)/∂t = Σ_k (E^k − 1)[C_k(n)P(n; t)],    (2.79)

where the C_k(n) are some coefficients and E is the linear step operator, such that E^k[f(n)] ≡ f(n + k), and k runs over the integer numbers. The k-th term of this sum corresponds to the process in which −k particles are created (hence destroyed if k > 0) at a rate C_k.
It is possible to obtain the general form of the equation for the generating function G(s, t) = Σ_n s^n P(n; t), starting from:

∂G/∂t = Σ_k (s^{−k} − 1) Σ_n s^n C_k(n)P(n; t).    (2.80)
From (2.79) we get the (exact) equations for the first two moments:

d⟨n⟩/dt = −Σ_k ⟨kC_k(n)⟩,    d⟨n²⟩/dt = Σ_k ⟨k(k − 2n)C_k(n)⟩.    (2.82)
We know that X(t) = X(0)e^{−ωt}, but we want to obtain a differential equation for X(t) directly from the master equation. Taking the time derivative of the definition of X(t) and substituting Eq. (2.5):

dX(t)/dt = Σ_n n ∂P(n; t)/∂t = Σ_n n [−ωnP(n; t) + ω(n + 1)P(n + 1; t)]    (2.84)
We now make the change of variables n + 1 → n in the second term of the sum to obtain:

dX(t)/dt = −ω Σ_n nP(n; t)    (2.85)
or
dX(t)/dt = −ωX(t)    (2.86)
the desired mean-field equation, exact in this case.
If we do the same for the birth and death process, we obtain again an exact equation
for the mean value:
dX(t)/dt = −ω_1X(t) + ω_A    (2.87)
whose solution is

X(t) = X(0)e^{−ω_1t} + (ω_A/ω_1)(1 − e^{−ω_1t}),    (2.88)
in agreement with the previous treatment.
Example which is not linear.
We turn now to the prey-predator Lotka-Volterra model. We need to compute two averages, X(t) = ⟨n_1(t)⟩ and Y(t) = ⟨n_2(t)⟩. After some careful calculation one obtains:

dX(t)/dt = ω̄_A X(t) − ω̄_1⟨n_1(t)n_2(t)⟩,
dY(t)/dt = ω̄_1⟨n_1(t)n_2(t)⟩ − ω_2Y(t),    (2.89)
and the equations are not closed. This is typical of non-linear problems. We could now compute the evolution of ⟨n_1(t)n_2(t)⟩, but then it would be coupled to higher and higher order moments, a complete mess! The mean-field approach assumes that the populations are independent and hence ⟨n_1(t)n_2(t)⟩ = ⟨n_1(t)⟩⟨n_2(t)⟩ = X(t)Y(t). This is simply not true, but ...
dX(t)/dt = ω̄_A X(t) − ω̄_1X(t)Y(t),
dY(t)/dt = ω̄_1X(t)Y(t) − ω_2Y(t).    (2.90)
Now we can derive the evolution equations for the densities of the species, x(t) = X(t)/Ω and y(t) = Y(t)/Ω. With the above definitions we get:
dx(t)/dt = ω_Ac_Ax(t) − ω_1x(t)y(t),
dy(t)/dt = ω_1x(t)y(t) − ω_2y(t),    (2.91)
where c_A = n_A/Ω is the concentration of food. Now all the parameters in the equations are intensive. These are the celebrated Lotka-Volterra equations.
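A quick numerical integration of the Lotka-Volterra equations (2.91) shows the characteristic oscillations. The sketch below uses assumed parameters a = ω_Ac_A, b = ω_1 and c = ω_2; a good check of the integration is the conserved quantity H = b(x + y) − c ln x − a ln y, whose drift measures the numerical error:

```python
from math import log

a, b, c = 1.0, 0.5, 1.0      # assumed: a = omega_A*c_A, b = omega_1, c = omega_2
x, y = 2.0, 1.0              # initial prey and predator densities

def H(x, y):
    """Conserved quantity of the Lotka-Volterra equations."""
    return b * (x + y) - c * log(x) - a * log(y)

H0, dt = H(x, y), 1e-4
for _ in range(200000):      # integrate up to t = 20 (several oscillation periods)
    # second-order midpoint step for dx/dt = ax - bxy, dy/dt = bxy - cy
    fx, fy = a * x - b * x * y, b * x * y - c * y
    xm, ym = x + 0.5 * dt * fx, y + 0.5 * dt * fy
    x += dt * (a * xm - b * xm * ym)
    y += dt * (b * xm * ym - c * ym)

print(x, y)            # the densities keep cycling around the fixed point (c/b, a/b)
print(H(x, y) - H0)    # drift of H: small, and vanishing as dt -> 0
```

The conservation of H is what makes the mean-field trajectories closed orbits rather than spirals.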
In 1913 the two scientists Maud L. Menten (1879-1960) and Leonor Michaelis (1875-1949) published a famous work on the function of invertase (or saccharase). Invertase is an enzyme, found for example in yeast, which catalyses the breakdown of sucrose. What Menten and Michaelis postulated and reasoned was the following: the reaction starts with the relatively fast formation of the complex,

S + E ⇌ ES    (forward rate ω_1, backward rate ω_{−1}),    (2.93)
and is followed by a rather slow decay into the product and the enzyme,

ES ⇌ P + E    (forward rate ω_2, backward rate ω_{−2}).    (2.94)
By assuming a high energy barrier for the combination of a product with an enzyme, the backwards rate ω_{−2} can be neglected. In this way one can write down a set of differential equations for the dynamical variables s(t), e(t), c(t) and p(t), representing the substrate, enzyme, complex and product concentrations, respectively:
ṡ(t) = ω−1 c(t) − ω1 s(t)e(t) (2.95)
ė(t) = (ω−1 + ω2 )c(t) − ω1 s(t)e(t) (2.96)
ċ(t) = −(ω−1 + ω2 )c(t) + ω1 s(t)e(t) (2.97)
ṗ(t) = ω2 c(t) (2.98)
After a short initial period of rapid complex formation, the rates of complex formation and breakdown balance, leading to a constant concentration c(t), i.e. ċ(t) = 0. The sum of bound and unbound enzyme molecules is constant, c(t) + e(t) = e0, and one can eliminate e(t) in (2.97). The steady state concentration of complexes is
    c(t) = e0 s(t) / (s(t) + (ω−1 + ω2)/ω1) ≡ e0 s(t) / (s(t) + KM)      (2.99)

where KM = (ω−1 + ω2)/ω1 is called the Michaelis-Menten constant. When this equation is substituted into the dynamics of the product one finds:
    ṗ(t) = ω2 e0 s(t) / (s(t) + KM) ≡ Vmax s(t) / (s(t) + KM),           (2.100)
which is a form that can easily be compared with experiment. For large substrate concentrations the production velocity saturates at Vmax, whereas low substrate concentrations lead to a velocity of approximately Vmax s/KM. The constants KM and Vmax have been determined for many enzymes.
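The functional form (2.100) is easy to explore numerically. A minimal sketch (with made-up values of Vmax and KM) showing the two limiting regimes:

```python
def mm_rate(s, vmax, km):
    # Michaelis-Menten production rate, eq. (2.100)
    return vmax * s / (s + km)

vmax, km = 2.0, 0.5   # hypothetical values, for illustration only
print(mm_rate(50.0, vmax, km))    # s >> KM: close to vmax
print(mm_rate(0.01, vmax, km))    # s << KM: close to vmax*s/km
print(mm_rate(km, vmax, km))      # at s = KM the rate is exactly vmax/2
```

The last line illustrates the standard operational definition of KM: it is the substrate concentration at which the production velocity is half of Vmax.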
    dn(t)/dt = −ω n(t) + G ξ(t)                              (2.101)
where ξ(t) is white noise, ⟨ξ(t)ξ(t′)⟩ = δ(t − t′), and G is a function to be determined. We would like the solution of this equation to have the exact mean and variance of the process n(t), namely:
    ⟨n(t)⟩ = n(0) e−ωt
    ⟨n(t)²⟩ − ⟨n(t)⟩² = n(0) e−ωt (1 − e−ωt)                 (2.102)
Let us assume first that G = G(t) is a function of time. The solution of equation (2.101) is in this case:

    n(t) = n(0) e−ωt + e−ωt ∫0^t ds eωs G(s) ξ(s)            (2.103)
and

    ⟨n(t)²⟩ − ⟨n(t)⟩² = ⟨(n(t) − n(0)e−ωt)²⟩ = e−2ωt ∫0^t ds ∫0^t du eω(s+u) G(s)G(u) ⟨ξ(s)ξ(u)⟩      (2.105)

Replacing ⟨ξ(s)ξ(u)⟩ = δ(s − u) and the left hand side by the desired variance, we obtain:
    n(0)(eωt − 1) = ∫0^t ds e2ωs G(s)²                       (2.106)
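Differentiating (2.106) with respect to t gives n(0) ω eωt = e2ωt G(t)², i.e. G(t)² = ω n(0) e−ωt = ω⟨n(t)⟩. As a numerical sanity check (a sketch, not part of the original treatment, assuming NumPy is available), an Euler-Maruyama integration of (2.101) with this G(t) reproduces the mean and variance (2.102):

```python
import numpy as np

# G(t)**2 = omega*n(0)*exp(-omega*t), obtained by differentiating (2.106).
rng = np.random.default_rng(1)
omega, n0 = 1.0, 100.0
dt, nsteps, ntraj = 1e-3, 1000, 20000   # integrate up to t = 1

n = np.full(ntraj, n0)
for k in range(nsteps):
    t = k * dt
    G = np.sqrt(omega * n0 * np.exp(-omega * t))
    # Euler-Maruyama step for dn/dt = -omega*n + G(t)*xi(t)
    n = n - omega * n * dt + G * np.sqrt(dt) * rng.standard_normal(ntraj)

t = nsteps * dt
mean_exact = n0 * np.exp(-omega * t)
var_exact = n0 * np.exp(-omega * t) * (1 - np.exp(-omega * t))
print(n.mean(), mean_exact)
print(n.var(), var_exact)
```

With 20000 trajectories the sample mean and variance agree with (2.102) to within a few percent.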
The stochastic process is a series of jumps from one of the two states to the other.
Imagine that at time t0 we are at state A. The jump to state B will happen randomly
with a probability density function fA→B (t) such that fA→B (t)dt is the probability that
the system remains at state A for a time t and then jumps to state B in the time interval
(t, t + dt). This is equal to the probability that no jump occurred in the interval (t0, t0 + t) which, according to the discussion in section 1.2, is e−ωA→B t, times the probability that a jump does occur in the interval (t, t + dt), which is ωA→B dt, or:

    fA→B(t) = ωA→B e−ωA→B t.                                 (3.3)
This is nothing but an exponential distribution. The next thing we have to do is to generate a jump time tA→B distributed according to it. This is done by generating a random number u0 uniformly distributed in the interval (0, 1) and solving the equation

    u0 = ∫0^tA→B fA→B(t) dt = 1 − e−ωA→B tA→B                (3.4)
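Solving (3.4) gives tA→B = −ln(1 − u0)/ωA→B; since 1 − u0 is also uniformly distributed in (0, 1), the programs below simply use −ln u/ω. A short Python check of this inversion (a sketch; the Fortran listings rely on the author's dran random-number library, which is not reproduced here):

```python
import math
import random

random.seed(42)

def jump_time(omega):
    # Invert eq. (3.4): u = 1 - exp(-omega*t)  =>  t = -ln(1-u)/omega.
    # Since 1-u is uniform in (0,1) as well, -ln(u)/omega works too.
    return -math.log(random.random()) / omega

omega = 0.5
samples = [jump_time(omega) for _ in range(200000)]
mean = sum(samples) / len(samples)
print(mean)   # should be close to the exponential mean 1/omega = 2
```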
c /home/raul/COHERENCE/rate1.f
c Simulates a single two-state (A=1, B=2) jump process.
      implicit double precision(a-h,o-z)
      tmax=10000.0d0
      t=0.0d0
      wab=0.5d0
      wba=1.0d0
      call dran_ini(12345)
c     Random initial state, 1 or 2
      i=i_dran(2)
      write(66,*) t,i
      do while (t.lt.tmax)
c        Exponential time of the next jump out of the current state
         if (i.eq.1) then
            tn=-dlog(dran_u())/wab
            in=2
         else
            tn=-dlog(dran_u())/wba
            in=1
         endif
         t=t+tn
c        Write the state before and after the jump (staircase output)
         write(66,*) t,i
         i=in
         write(66,*) t,i
      enddo
      end
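A quick way to validate rate1 is to check the stationary occupation: in the long run, the fraction of time spent in state A should approach ωB→A/(ωA→B + ωB→A). A Python sketch of the same algorithm (a hypothetical translation using only the standard library):

```python
import math
import random

random.seed(12345)
wab, wba = 0.5, 1.0             # same rates as in rate1.f
tmax, t = 10000.0, 0.0
state = random.choice((1, 2))   # 1 = A, 2 = B
time_in_a = 0.0
while t < tmax:
    # Exponential waiting time before leaving the current state
    rate = wab if state == 1 else wba
    dt = -math.log(random.random()) / rate
    if state == 1:
        time_in_a += dt
    t += dt
    state = 3 - state           # flip 1 <-> 2
frac_a = time_in_a / t
print(frac_a)   # close to wba/(wab+wba) = 2/3
```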
We now consider the more general case in which there are M states labeled by 1, 2, . . . , M. Imagine that at time t0 we are at state i0. Now there can be jumps to M − 1 different states with rates ωi0→k for k = 1, . . . , M, k ≠ i0. If ωi0→j = 0, then the corresponding jump i0 → j is not permitted. We generate now M − 1 random numbers u0k for
k = 1, . . . , M , k 6= i0 and compute the jumping times to every one of these states as:
    ti0→k = −ln u0k / ωi0→k ,    k = 1, . . . , M,  k ≠ i0   (3.8)
The next jump to happen will be the one that occurs in the smallest possible time,
ti0→i1 = min(ti0→1, ti0→2, · · · , ti0→M). Then, at time t1 = t0 + ti0→i1 we jump from i0 to
i1 . Now that we are at state i1 , we generate the random numbers uk1 for k = 1, . . . , M ,
k 6= i1 and compute the times of possible jumps
    ti1→k = −ln u1k / ωi1→k ,    k = 1, . . . , M,  k ≠ i1.  (3.9)
The actual jump i1 → i2 is the one that occurs at the earliest time,
ti1→i2 = min(ti1→1, ti1→2, · · · , ti1→M). Then, at time t2 = t1 + ti1→i2 the state jumps
from i1 to i2 . The process starts again at state i2 at time t2 .
Here we present a computer program that implements this numerical method:
c /home/raul/COHERENCE/rate2.f
c First-reaction method for M states with rates w(i,j)=|i-j|.
      implicit double precision(a-h,o-z)
      parameter (M=10)
      dimension w(M,M)
c     Transition rates between the states
      do i=1,M
         do j=1,M
            w(i,j)=abs(i-j)
         enddo
      enddo
      tmax=10000.0d0
      t=0.0d0
      call dran_ini(12345)
c     Random initial state
      i=i_dran(M)
      write(66,*) t,i
      do while (t.lt.tmax)
c        First candidate target state j0 (the first state different from i)
         if (i.eq.1) then
            j0=2
         else
            j0=1
         endif
         tn=-dlog(dran_u())/w(i,j0)
         in=j0
c        Compare with the jump times to all other states, keep the earliest
         do j=j0+1,M
            if (j.ne.i) then
               if (w(i,j).gt.0.0d0) then
                  t1=-dlog(dran_u())/w(i,j)
                  if (t1.lt.tn) then
                     tn=t1
                     in=j
                  endif
               endif
            endif
         enddo
         t=t+tn
         write(66,*) t,i
         i=in
         write(66,*) t,i
      enddo
      end
The same problems can be considered from a different point of view. Imagine first
that we are interested in the behavior of an ensemble of N independent systems. Each
of the systems follows a stochastic dynamics with jumps between two possible states A
and B. In order to simulate the behavior of the ensemble, we can either run the above
program rate1 N times and then analyze the data accordingly or we can just focus on
the stochastic variable that gives the number n of systems which at time t are in state
A. By conservation, the number of systems which are at state B is N − n.
From this alternative point of view, the variable n can take any of the N + 1 values
n = 0, 1, . . . , N . So, we consider that the ensemble can be in any of N + 1 states
labeled by the value of n. This is similar to the second case explained before (program
rate2). However, the problem gets simpler, as the only transitions allowed are those that increase (or decrease) the value of n by one unit, corresponding to the transition of one system from B to A (or from A to B). The rate of the transition from n to n + 1 is (N − n)ωB→A and the rate of the transition from n to n − 1 is nωA→B. Then, if at time t0 we are in state n, we have to compute the time tn→n+1 of the next jump n → n + 1 and the time tn→n−1 of the next jump n → n − 1, and perform the transition corresponding to the minimum of these two values. Let us now give a specific program that implements this numerical method.
c /home/raul/COHERENCE/rate1b.f
c Ensemble of nsys two-state systems; n = number of systems in state A.
c nsys plays the role of N, the number of systems in the ensemble.
      implicit double precision(a-h,o-z)
      parameter (nsys=100)
      tmax=10000.0d0
      t=0.0d0
      wab=0.5d0
      wba=1.0d0
      call dran_ini(12345)
c     Random initial value of n in 0,...,nsys
      n=i_dran(nsys+1)-1
      write(66,*) t,n
      do while (t.lt.tmax)
         if (n.eq.0) then
c           Only the jump n -> n+1 is possible, with rate nsys*wba
            tn=-dlog(dran_u())/(nsys*wba)
            in=1
         elseif (n.eq.nsys) then
c           Only the jump n -> n-1 is possible, with rate nsys*wab
            tn=-dlog(dran_u())/(nsys*wab)
            in=nsys-1
         else
c           tn1: jump n -> n+1, rate (nsys-n)*wba
c           tn2: jump n -> n-1, rate n*wab
            tn1=-dlog(dran_u())/((nsys-n)*wba)
            tn2=-dlog(dran_u())/(n*wab)
            if (tn1.lt.tn2) then
               tn=tn1
               in=n+1
            else
               tn=tn2
               in=n-1
            endif
         endif
         t=t+tn
         write(66,*) t,n
         n=in
         write(66,*) t,n
      enddo
      end
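For this birth-death process the stationary distribution is binomial with parameter p = ωB→A/(ωA→B + ωB→A), so the time average of n should approach Np. A Python sketch of the same two-clock scheme (a hypothetical translation; N = 100 is chosen here for illustration):

```python
import math
import random

random.seed(12345)
N = 100
wab, wba = 0.5, 1.0
tmax, t = 1000.0, 0.0
n = random.randint(0, N)
acc = 0.0                       # time integral of n(t)
while t < tmax:
    up = (N - n) * wba          # rate of n -> n+1 (one system jumps B -> A)
    down = n * wab              # rate of n -> n-1 (one system jumps A -> B)
    t_up = -math.log(random.random()) / up if up > 0 else math.inf
    t_down = -math.log(random.random()) / down if down > 0 else math.inf
    dt = min(t_up, t_down)
    acc += n * dt
    t += dt
    n = n + 1 if t_up < t_down else n - 1
mean_n = acc / t
print(mean_n)   # close to N*wba/(wab+wba) = 200/3
```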
An extension of this description can be used in the case that an individual system can be in more than two states. Instead of giving a specific example now, we will first explain a modification introduced by Gillespie that leads to a much more efficient programming of the numerical simulations.
    ti0→i1 = −ln u0 / Ωi0 ,                                  (3.10)

where Ωi0 = Σk≠i0 ωi0→k is the total escape rate out of state i0.
Once the time of the next jump has been determined as t1 = t0 + ti0→i1, we have to determine where to jump, i.e. which is the final state i1. The probability pi0→j of reaching state j ≠ i0 is proportional to the rate ωi0→j, or

    pi0→j = ωi0→j / Ωi0                                      (3.11)

It is easy now to determine the final state i1 by using a random number v0 uniformly distributed in the interval (0, 1) and finding the smallest i1 that satisfies Σj=1..i1 pi0→j > v0.
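This selection step amounts to walking through the cumulative probabilities until they exceed v0 (sometimes called tower sampling). A minimal Python sketch (the function name is ours, for illustration):

```python
def select_next_state(rates, v0):
    # rates[j] = omega_{i0 -> j}, with rates[i0] = 0.
    # Returns the smallest j whose cumulative probability exceeds v0,
    # as in eq. (3.11).
    total = sum(rates)
    cum = 0.0
    for j, w in enumerate(rates):
        cum += w / total
        if v0 < cum:
            return j
    return len(rates) - 1   # guard against rounding when v0 is near 1

# Deterministic check: rates 1,0,3 give probabilities 0.25, 0, 0.75
print(select_next_state([1.0, 0.0, 3.0], 0.1))   # 0
print(select_next_state([1.0, 0.0, 3.0], 0.5))   # 2
```

States with zero rate are never selected, since they do not advance the cumulative sum.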
c /home/raul/COHERENCE/rate3.f
c Gillespie's direct method: draw the jump time from the total
c escape rate wt(i), then select the target state with probability
c proportional to w(i,j).
      implicit double precision(a-h,o-z)
      parameter (M=100)
      dimension w(M,M),wt(M)
c     Rates w(i,j) and total escape rates wt(i)
      do i=1,M
         wt(i)=0.0d0
         do j=1,M
            w(i,j)=abs(i-j)
            wt(i)=wt(i)+w(i,j)
         enddo
      enddo
      tmax=10000.0d0
      t=0.0d0
      call dran_ini(12345)
      i=i_dran(M)
      write(66,*) t,i
      do while (t.lt.tmax)
c        Time of the next jump, eq. (3.10)
         tn=-dlog(dran_u())/wt(i)
c        Target state: smallest j whose cumulative rate exceeds r,
c        implementing the selection of eq. (3.11)
         p=0.0d0
         j=0
         r=dran_u()*wt(i)
         do while (r.gt.p)
            j=j+1
            p=p+w(i,j)
         enddo
         t=t+tn
         write(66,*) t,i
         i=j
         write(66,*) t,i
      enddo
      end
If we take now the point of view that there are N (possibly interacting) systems, we need to consider the variables that give the number of systems nk which are in each of the possible states k = 1, . . . , M. These variables will change (typically by a small amount) and the rates of the transitions (n1, . . . , nM) → (n′1, . . . , n′M) will depend on the variables (n1, . . . , nM) themselves. It is easier if we consider a specific example.
A simple model for the spread of an epidemic is the so-called SIR model: S (for susceptible), I (for infectious) and R (for recovered). In its simplest form a population of N individuals is split into these three groups: susceptible people can get the disease; infectious people have the disease and can hence pass the infection to susceptible people; infected people recover and then become immune to another infection. In this simple version, there is no death or birth of individuals and their total number N remains constant.
3.2 The Gillespie’s algorithm. 37
c /home/raul/COHERENCE/epidemics.f
c Gillespie simulation of the SIR epidemic model.
      implicit double precision(a-h,o-z)
      double precision nu
      N=10000
      tmax=10000.0d0
      t=0.0d0
c     nu: recovery rate, beta: infection rate, Omega: system size
      nu=0.5d0
      beta=1.0d0
      Omega=100.0d0
      call dran_ini(12345)
c     Random initial number of infected; the rest are susceptible
      ni=i_dran(N+1)-1
      ns=N-ni
      nr=0
      write(66,*) t,ns,ni,nr
      do while (t.lt.tmax)
c        The epidemic is over when no infectious individuals remain
         if (ni.eq.0) stop
c        w1: rate of infections S -> I, w2: rate of recoveries I -> R
         w1=beta*ns*ni/Omega
         w2=nu*ni
         w=w1+w2
         tn=-dlog(dran_u())/w
         t=t+tn
         write(66,*) t,ns,ni,nr
c        Choose the event with probability proportional to its rate
         if (dran_u().lt.w1/w) then
            ns=ns-1
            ni=ni+1
         else
            ni=ni-1
            nr=nr+1
         endif
         write(66,*) t,ns,ni,nr
      enddo
      end
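For comparison, the same epidemic dynamics can be sketched in Python (a hypothetical translation with a smaller N). The assertion that ns + ni + nr stays equal to N at every step is a useful sanity check for any Gillespie implementation:

```python
import math
import random

random.seed(12345)
N = 1000
nu, beta, Omega = 0.5, 1.0, 100.0   # recovery rate, infection rate, system size
ni = random.randint(1, N)           # start with at least one infected
ns, nr = N - ni, 0
t, tmax = 0.0, 100.0
while t < tmax and ni > 0:
    w1 = beta * ns * ni / Omega     # rate of infections S -> I
    w2 = nu * ni                    # rate of recoveries I -> R
    w = w1 + w2
    t += -math.log(random.random()) / w
    if random.random() < w1 / w:
        ns, ni = ns - 1, ni + 1     # infection event
    else:
        ni, nr = ni - 1, nr + 1     # recovery event
    assert ns + ni + nr == N        # total population is conserved
print(t, ns, ni, nr)
```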