… Σ_{j=1,j≠i}^n γ_ij[X_j]^{c_ij} increases.  44
5.7 The Hill curve gets steeper as the value of autocatalytic cooperativity c_i increases.  44
5.8 The graph of Y = H_i([X_i]) is translated upwards by g_i units.  45
5.9 The 3-dimensional curve induced by H_i([X_1], [X_2]) + g_i and the plane induced by ρ_i[X_i], an example.  46
5.10 The intersections of Y = ρ_i[X_i] and Y = H_i([X_i]) + g_i with varying values of K_i + Σ_{j=1,j≠i}^n γ_ij[X_j]^{c_ij}, an example.  47
5.11 The possible number of intersections of Y = ρ_i[X_i] and Y = H_i([X_i]) + g_i where c_i = 1 and g_i = 0. The value of K_i + Σ_{j=1,j≠i}^n γ_ij[X_j]^{c_ij} is fixed.  49
5.12 The possible number of intersections of Y = ρ_i[X_i] and Y = H_i([X_i]) + g_i where c_i = 1 and g_i > 0. The value of K_i + Σ_{j=1,j≠i}^n γ_ij[X_j]^{c_ij} is fixed.  49
5.13 The possible number of intersections of Y = ρ_i[X_i] and Y = H_i([X_i]) + g_i where c_i > 1 and g_i = 0. The value of K_i + Σ_{j=1,j≠i}^n γ_ij[X_j]^{c_ij} is fixed.  50
5.14 The possible number of intersections of Y = ρ_i[X_i] and Y = H_i([X_i]) + g_i where c_i > 1 and g_i > 0. The value of K_i + Σ_{j=1,j≠i}^n γ_ij[X_j]^{c_ij} is fixed.  50
5.15 Finding the univariate fixed points using a cobweb diagram, an example. We define the fixed point as [X_i] satisfying H([X_i]) + g_i = ρ_i[X_i].  51
5.16 The curves are rotated making the line Y = ρ_i[X_i] the horizontal axis. Positive gradient means instability, negative gradient means stability. If the gradient is zero, we look at the left and right neighboring gradients.  51
5.17 When g_i = 0, [X_i] = 0 is a component of a stable equilibrium point.  56
5.18 When g_j > 0, [X_j] = 0 will never be a component of an equilibrium point.  56
6.1 Sample numerical solution in time series with the upper bound and lower bound.  60
6.2 Y = [X_i]^{c_i} / (K + [X_i]^{c_i}) will never touch the point (1, 1) for 1 < c_i < ∞.  70
6.3 An example where ρ_i(K_i^{1/c_i}) > β_i; Y = H_i([X_i]) and Y = ρ_i[X_i] only intersect at the origin.  71
7.1 When g_i = 0, c_i = 1 and the decay line is tangent to the univariate Hill curve at the origin, then the origin is a saddle.  76
7.2 Varying the values of parameters may vary the size of the basin of attraction of the lower-valued stable intersection of Y = H_i([X_i]) + g_i and Y = ρ_i[X_i].  77
7.3 The possible number of intersections of Y = ρ_i[X_i] and Y = H_i([X_i]) + g_i where c > 1 and g = 0. The value of K_i + Σ_{j=1,j≠i}^n γ_ij[X_j]^{c_ij} is taken as a parameter.  78
7.4 The possible topologies when Y = H_i([X_i]) essentially lies below the decay line Y = ρ_i[X_i], g_i = 0.  79
7.5 The origin is unstable while the points where [X_i] … Σ_{j=1,j≠i}^n [X_j] … are stable.  81
7.6 Increasing the value of g_i can result in an increased value of [X_i] where Y = H_i([X_i]) + g_i and Y = ρ_i[X_i] intersect.  83
7.7 Increasing the value of g_i can result in an increased value of [X_i]^*, and consequently in a decreased value of [X_j] where Y = H_j([X_j]) + g_j and Y = ρ_j[X_j] intersect, j ≠ i.  84
8.1 For Illustration 1; ODE solution and SDE realization with G(X) = 1.  88
8.2 For Illustration 1; ODE solution and SDE realization with G(X) = X.  88
8.3 For Illustration 1; ODE solution and SDE realization with G(X) = √X.  89
8.4 For Illustration 1; ODE solution and SDE realization with G(X) = F(X).  89
8.5 For Illustration 1; ODE solution and SDE realization using the random population growth model.  90
8.6 For Illustration 2; ODE solution and SDE realization with G(X) = 1.  92
8.7 For Illustration 2; ODE solution and SDE realization with G(X) = X.  92
8.8 For Illustration 2; ODE solution and SDE realization with G(X) = √X.  93
8.9 For Illustration 2; ODE solution and SDE realization with G(X) = F(X).  93
8.10 For Illustration 2; ODE solution and SDE realization using the random population growth model.  94
8.11 For Illustration 3; ODE solution and SDE realization with G(X) = 1.  96
8.12 For Illustration 3; ODE solution and SDE realization with G(X) = X.  96
8.13 For Illustration 3; ODE solution and SDE realization with G(X) = √X.  97
8.14 For Illustration 3; ODE solution and SDE realization with G(X) = F(X).  97
8.15 For Illustration 3; ODE solution and SDE realization using the random population growth model.  98
8.16 Phase portrait of [X_1] and [X_2].  98
8.17 Reactivating switched-off TFs by introducing random noise where G(X) = 1.  99
9.1 The simplified MacArthur et al. GRN.  100
A.1 Intersections of F_1, F_2 and the zero-plane, an example.  106
A.2 The intersection of Y = H_1([X_1]) + 1 and Y = 10[X_1] with [X_2] = 1.001 and [X_3] = 0.  125
A.3 The intersection of Y = H_2([X_2]) and Y = 10[X_2] with [X_1] = 0.10103 and [X_3] = 0.  126
A.4 The intersection of Y = H_3([X_3]) and Y = 10[X_3] with [X_1] = 0.10103 and [X_2] = 1.001.  126
A.5 A sample phase portrait of the system with infinitely many non-isolated equilibrium points.  127
C.1 Determining the adequate g_1 > 0 that would give rise to a sole equilibrium point where [X_1]^* > [X_2]^*.  133
C.2 An example where without g_1, [X_1]^* = 0.  135
C.3 X^* = ([X_1]^*, [X_2]^*, …, [X_n]^*) represents a certain cell type, e.g., pluripotent, tripotent, bipotent, unipotent or terminal state.
Figure 3.3: Gene expression or the concentration of the TFs can be represented by a state vector, e.g., ([X_1], [X_2], [X_3], [X_4]) [70]. For example, TFs of equal concentration can be represented by a vector with equal components, such as (2.4, 2.4, 2.4, 2.4).
Chapter 3. Preliminaries: Mathematical Models of Gene Networks

Modelers of GRN often use the function H^+, or

    H^-([X], K, c) := 1 - H^+([X], K, c) = K^c / (K^c + [X]^c)   (3.2)

for repression, where the variable [X] is the concentration of the molecule involved [69, 73, 96, 121, 144]. The parameter K is the threshold or dissociation constant and is equal to the value of [X] at which the Hill function is equal to 1/2. The parameter c is called the Hill constant or Hill coefficient and describes the steepness of the Hill curve. The Hill constant often denotes multimerization-induced cooperativity (a multimer is an assembly of multiple monomers or molecules) and may represent the number of cooperative binding sites if c is restricted to a positive integer. However, in some cases, the Hill constant can be a positive real number (usually 1 < c < n, where n is the number of equivalent cooperative binding sites) [73, 174]. If c = 1, then there is no cooperativity [38] and the Hill function becomes the Michaelis-Menten function, which is hyperbolic. If data are available, we can estimate the value of c by inference.
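As a quick numerical check of these properties, the two Hill functions can be evaluated directly. The following minimal Python sketch (the function names are ours, not from the cited references) verifies that both functions equal 1/2 at [X] = K, and that c = 1 reduces H^+ to the hyperbolic Michaelis-Menten form.

```python
def hill_plus(x, K, c):
    """Activating Hill function H+([X], K, c) = [X]^c / (K^c + [X]^c)."""
    return x**c / (K**c + x**c)

def hill_minus(x, K, c):
    """Repressing Hill function H-([X], K, c) = 1 - H+ = K^c / (K^c + [X]^c)."""
    return K**c / (K**c + x**c)

# K is the threshold: both functions equal 1/2 at [X] = K.
print(hill_plus(2.0, 2.0, 4))   # 0.5
# c = 1 gives the Michaelis-Menten form [X] / (K + [X]).
print(hill_plus(1.0, 2.0, 1))   # 1 / (2 + 1)
```

Increasing c while holding K fixed makes the curve steeper around [X] = K, which is the geometric meaning of cooperativity described above.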
Various ODE models and formulations are presented in [13, 14, 27, 30, 31, 32, 43,
47, 69, 76, 96, 115, 135, 173]. Examples of these are the neural network [166] model,
the S-systems (power-law) [167] model, the Andrecut [7] model, the Cinquin-Demongeot
2002 [36] model, and the Cinquin-Demongeot 2005 [38] model. The Cinquin-Demongeot
2002 and 2005 models can represent various GRNs and are more amenable to analysis.
3.2.1 Cinquin and Demongeot ODE formalism
According to Waddington's model [168], cell differentiation is similar to a ball rolling down a landscape of hills and valleys. The ridges of the hills can be regarded as the unstable equilibrium points, while the parts of the valleys where the ball can stay without rolling further (i.e., the relative minima of the landscape) can be regarded as stable equilibrium points (attractors). Hence, the movement of the ball and its possible location after some time can be represented by dynamical systems, specifically ODEs. However, it should be noted that existing evidence showing the presence of attractors is limited to some mammalian cells [112].

The theory that some cells can differentiate into many different cell types gives the idea that the model representing the dynamics of such cells may exhibit multistability (multiple stable equilibrium points). However, not all GRNs are reducible to a binary or boolean hierarchic decision network (see Figure (3.4)), which is why Cinquin and Demongeot formulated models that can represent cellular differentiation with more than two possible outcomes (multistability) obtained through different developmental pathways [3, 38, 35]. The simultaneous decision network (see Figure (3.4)) is a near approximation of the Waddington illustration where there are possibly many cell lineages involved.

Figure 3.4: Hierarchic decision model and simultaneous decision model. Bars represent repression or inhibition, while arrows represent activation [36].

In 2002, Cinquin and Demongeot proposed an ODE model representing the simultaneous decision network [36]. In 2005, they proposed another ODE model representing the simultaneous decision network but with autocatalysis (autoactivation) [38]. Both Cinquin-Demongeot models are based on the simultaneous decision graph where there is mutual inhibition. All elements in the Cinquin-Demongeot models are symmetric, that is, each node has the same relationship with all other nodes, and all equations in the system of ODEs have equal parameter values.
Equations (3.3) and (3.4) are the Cinquin-Demongeot ODE models without autocatalysis (2002 version, [36]) and with autocatalysis (2005 version, [38]), respectively. Let us suppose we have n antagonistic transcription factors. The state variable [X_i] represents the concentration of the corresponding TF protein such that the TF expression is subject to a first-order degradation (exponential decay). The parameters σ, c and g represent the relative speed of transcription (or strength of the unrepressed TF expression relative to the first-order degradation), cooperativity and leak, respectively. The parameter g is a basal expression of the corresponding TF and a constant production term that enhances the value of [X_i], which is possibly affected by an exogenous stimulus. For simplification, only the transcription regulation process is considered in [38]. The models are assumed to be intracellular and cell-autonomous (i.e., we only consider processes inside a single cell without the influence of other cells).

Without autocatalysis:

    d[X_i]/dt = σ / (1 + Σ_{j=1,j≠i}^n [X_j]^c) - [X_i],   i = 1, 2, …, n   (3.3)

With autocatalysis:

    d[X_i]/dt = σ[X_i]^c / (1 + Σ_{j=1}^n [X_j]^c) - [X_i] + g,   i = 1, 2, …, n   (3.4)

The terms

    σ / (1 + Σ_{j=1,j≠i}^n [X_j]^c)   and   σ[X_i]^c / (1 + Σ_{j=1}^n [X_j]^c)   (3.5)

are Hill-like functions. In this study, we only consider the Cinquin-Demongeot (2005 version) model (3.4) because autocatalysis is a common property of cell fate-determining factors known as master switches [38].
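To make the dynamics of (3.4) concrete, the model can be integrated numerically. The sketch below uses a simple forward Euler scheme; the parameter values (σ = 3, c = 2, g = 0) and the step size are illustrative choices of ours, not values taken from [38]. A biased initial condition drives the system to a state where one TF is up-regulated and the others are down-regulated.

```python
def cd2005_rhs(x, sigma=3.0, c=2, g=0.0):
    """Right-hand side of the Cinquin-Demongeot 2005 model (3.4),
    with equal parameter values for every node (symmetric system)."""
    denom = 1.0 + sum(xj**c for xj in x)
    return [sigma * xi**c / denom - xi + g for xi in x]

def euler(x, rhs, dt=0.01, steps=20000):
    """Forward Euler integration of an autonomous system."""
    for _ in range(steps):
        f = rhs(x)
        x = [xi + dt * fi for xi, fi in zip(x, f)]
    return x

# Bias the first TF above the others: it wins the mutual inhibition.
x_final = euler([1.2, 0.4, 0.4], cd2005_rhs)
print(x_final)  # first component dominates, the others decay toward 0
```

With these values the winning component settles at the positive root of σx/(1 + x²) = 1, namely (3 + √5)/2, while the repressed components decay to zero; starting all components below the unstable threshold would instead send the trajectory to the origin.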
In [38], Cinquin and Demongeot observed that their model (with autocatalysis) can show the priming behavior of stem cells (i.e., genes are equally expressed) as well as the up-regulation of one gene and down-regulation of the others. They also proved that the multistability of their model where g = 0 can be manipulated by changing the value of c (cooperativity); however, manipulating the level of cooperativity is of minimal biological relevance. Also, their model is more sensitive to stochastic noise when the equilibrium points are near each other.
3.2.2 ODE model by MacArthur et al.
MacArthur et al. [113] proposed an ODE model (Equations (3.6) and (3.7)) to represent their GRN (refer to Figure (3.2)). Let [P_i] be the concentration of the TF protein in the pluripotency module, specifically, [P_1] := [OCT4], [P_2] := [SOX2] and [P_3] := [NANOG]. Also, let [L_i] be the concentration of the TF protein in the differentiation module, where [L_1] := [RUNX2], [L_2] := [SOX9] and [L_3] := [PPARγ]. The parameter s_i represents the effect of the growth factors stimulating the differentiation towards the i-th cell lineage, specifically, s_1 := [RA+BMP4], s_2 := [RA+TGFβ] and s_3 := [RA+Insulin]. In mouse ES cells, RUNX2 is stimulated by retinoic acid (RA) and BMP4; SOX9 by RA and TGF-β; and PPAR-γ by RA and Insulin. The derivation of the ODE model and the interpretation of the parameters are discussed in the supplementary materials of [113].

    d[P_i]/dt = k_{1i}[P_1][P_2](1 + [P_3]) / {(1 + k_0 Σ_j s_j)(1 + [P_1][P_2](1 + [P_3]) + k_{PL} Σ_j [L_j])} - b[P_i]   (3.6)

    d[L_i]/dt = k_2(s_i + k_3 Σ_{j≠i} s_j)[L_i]^2 / {1 + k_{LC1}[P_1][P_2] + k_{LC2}[P_1][P_2][P_3] + [L_i]^2 + k_{LL}(s_i + k_3 Σ_{j≠i} s_j) Σ_{j≠i} [L_j]^2} - b[L_i]   (3.7)
However, this system of coupled ODEs is difficult to study using analytic techniques. MacArthur et al. [113] simply conducted numerical simulations to investigate the behavior of the system. They tried to analyze the system analytically, but only for the specific case where [P_i] = 0, i = 1, 2, 3, that is, when the pluripotency module is switched off. The ODE model (3.8) that they analyzed when the pluripotency module is switched off follows the Cinquin-Demongeot [38] formalism with c = 2, that is,

    d[L_i]/dt = [L_i]^2 / (1 + [L_i]^2 + a Σ_{j≠i} [L_j]^2) - b[L_i],   i = 1, 2, 3   (3.8)

MacArthur et al. [113] analytically proved that the three cell types (tripotent, bipotent and terminal states) are simultaneously stable for some parameter values in (3.8). However, as the effect of an exogenous stimulus is increased above some threshold value, the tripotent state becomes unstable, leaving only two stable cell types (bipotent and terminal state). If the effect of the exogenous stimulus is further increased, the bipotent state also becomes unstable, leaving the terminal state as the sole stable cell type. In addition, MacArthur et al. [113] showed that dedifferentiation is not possible without the aid of stochastic noise.
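The multistability of (3.8) can be illustrated numerically. In the sketch below, the parameter values (a = 1, b = 0.4, no stimulus terms) and the integrator are our own illustrative choices rather than those of [113]; with these values, single-high-TF lineage states coexist and the initial condition selects which lineage the trajectory commits to.

```python
def macarthur_reduced_rhs(L, a=1.0, b=0.4):
    """Right-hand side of the reduced model (3.8) with illustrative a, b."""
    total = sum(Lj**2 for Lj in L)
    # a * sum over j != i of [L_j]^2 equals a * (total - [L_i]^2).
    return [Li**2 / (1.0 + Li**2 + a * (total - Li**2)) - b * Li for Li in L]

def euler(x, rhs, dt=0.01, steps=20000):
    """Forward Euler integration of an autonomous system."""
    for _ in range(steps):
        x = [xi + dt * fi for xi, fi in zip(x, rhs(x))]
    return x

lineage1 = euler([1.5, 0.1, 0.1], macarthur_reduced_rhs)  # commits to lineage 1
lineage3 = euler([0.1, 0.1, 1.5], macarthur_reduced_rhs)  # commits to lineage 3
print(lineage1, lineage3)
```

With these parameters, the dominant component settles at the positive root of L/(1 + L²) = b, which is L = 2, while the repressed components decay to zero; two different initial conditions therefore land on two different stable equilibria.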
3.3 Stochastic Differential Equations
A time-dependent Gaussian white noise term can be added to the ODE model to investigate the effect of random fluctuations in gene expression. This Gaussian white noise term combines and averages multiple heterogeneous sources of temporal noise. Equations (3.10) to (3.13) show some of the different SDE models [71, 72, 113, 174] of the form

    dX = F(X)dt + εG(X)dW   (3.9)

that we use in this study. We employ different G(X) to observe the various effects of the added Gaussian white noise term. We let F(X) be the right-hand side of our ODE equations, ε be a diagonal matrix of parameters representing the amplitude of noise, and W be a Brownian motion (Wiener process). If the genes in a cell are isogenic (essentially identical) then we can suppose the diagonal entries of the matrix ε are all equal.

    dX = F(X)dt + ε dW   (3.10)
    dX = F(X)dt + εX dW   (3.11)
    dX = F(X)dt + ε√X dW   (3.12)
    dX = F(X)dt + εF(X) dW   (3.13)

Notice that in Equations (3.11) and (3.12), the noise term is affected by the value of X. As the concentration X increases, the effect of the noise term also increases. In Equation (3.13), the noise term is affected by the value of F(X); that is, as the deterministic change in the concentration X with respect to time (dX/dt = F(X)) increases, the effect of the noise term also increases. In Equation (3.10), the noise term is not dependent on any variable.
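SDEs of the form (3.9) are typically simulated with the Euler-Maruyama scheme, which adds a Gaussian increment of standard deviation √Δt at each step. The sketch below is a one-dimensional toy of our own (F(X) = -X, a plain decay term) comparing additive noise as in (3.10) with multiplicative noise as in (3.11); the helper name and parameter values are illustrative assumptions.

```python
import random

def euler_maruyama(f, g, x0, eps=0.1, dt=0.001, steps=5000, seed=1):
    """Simulate one path of dX = F(X)dt + eps*G(X)dW (Euler-Maruyama)."""
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        dw = rng.gauss(0.0, dt**0.5)        # Brownian increment dW ~ N(0, dt)
        x = x + f(x) * dt + eps * g(x) * dw
    return x

decay = lambda x: -x
x_add = euler_maruyama(decay, lambda x: 1.0, x0=2.0)  # additive noise, as in (3.10)
x_mul = euler_maruyama(decay, lambda x: x, x0=2.0)    # multiplicative noise, as in (3.11)
print(x_add, x_mul)
```

Both paths decay toward zero, but the multiplicative-noise path fluctuates less and less as X shrinks, which is the qualitative difference between (3.10) and (3.11) noted above.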
For a more detailed discussion about various modeling techniques, the following ref-
erences may be consulted [6, 11, 18, 21, 46, 52, 55, 60, 66, 67, 75, 77, 79, 87, 88, 92, 118,
137, 140, 143, 149, 153, 154, 163, 165, 175, 179, 182].
Chapter 4
Preliminaries
Analysis of Nonlinear Systems
This chapter gives a brief discussion of the theoretical background on the qualitative
analysis of coupled nonlinear dynamical systems.
Consider the autonomous system of ODEs

    d[X_i]/dt = F_i([X_1], [X_2], …, [X_n]),   i = 1, 2, …, n,   (4.1)

with initial condition [X_i](0) := [X_i]_0 for all i. We assume that t ≥ 0 and F_i : B → R, i = 1, 2, …, n, where B ⊆ R^n. If we have a nonautonomous system of ODEs, d[X_i]/dt = F_i([X_1], [X_2], …, [X_n], t), i = 1, 2, …, n, then we convert it to an autonomous system by defining t := [X_{n+1}] and d[X_{n+1}]/dt = 1 [134].

For simplicity, let F := (F_i, i = 1, 2, …, n), X := ([X_i], i = 1, 2, …, n) and X_0 := ([X_i]_0, i = 1, 2, …, n).
For an ODE model to be useful, it is necessary that it has a solution. The existence of a unique solution for a given initial condition is important to effectively predict the behavior of our system. Moreover, we are assured that the solution curves of an autonomous system do not intersect each other when the existence and uniqueness conditions hold [56].
Suppose X(t) is a differentiable function. The solution to (4.1) satisfies the following integral equation:

    X(t) = X_0 + ∫_0^t F(X(τ)) dτ.   (4.2)
The following are theorems that guarantee local existence and uniqueness of solutions to ODEs:

Theorem 4.1 Existence theorem (Peano, Cauchy). Consider the autonomous system (4.1). Suppose that F is continuous on B. Then the system has a solution (not necessarily unique) on [0, δ] for sufficiently small δ > 0, given any X_0 ∈ B.
Theorem 4.2 Local existence-uniqueness theorem (Picard, Lindelöf, Lipschitz, Cauchy). Consider the autonomous system (4.1). Suppose that F is locally Lipschitz continuous on B, that is, F satisfies the following condition: for each point X_0 ∈ B there is an ε-neighborhood of X_0 (denoted as B_ε(X_0), where B_ε(X_0) ⊆ B) and a positive constant m_0 such that |F(X) - F(Y)| ≤ m_0|X - Y| for all X, Y ∈ B_ε(X_0). Then the system has exactly one solution on [0, δ] for sufficiently small δ > 0, given any X_0 ∈ B.
Theorem (4.2) can be extended to a global case stated as:

Theorem 4.3 Global existence-uniqueness theorem. If there is a positive constant m such that |F(X) - F(Y)| ≤ m|X - Y| for all X, Y ∈ B (i.e., F is globally Lipschitz continuous on B) then the system has exactly one solution defined for all t ∈ R for any X_0 ∈ B.
If all the partial derivatives ∂F_i/∂[X_j], i, j = 1, 2, …, n, are continuous on B (i.e., F ∈ C^1(B)) then F is locally Lipschitz continuous on B. If the absolute values of these partial derivatives are also bounded for all X ∈ B then F is globally Lipschitz continuous on B. The global condition says that if the growth of F with respect to X is at most linear then we have a global solution. If F satisfies the local Lipschitz condition but not the global Lipschitz condition, then it is possible that after some finite time t, the solution will blow up.
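A standard example of the local-but-not-global case is dx/dt = x², whose right-hand side is C¹ (hence locally Lipschitz) but grows faster than linearly. Its closed-form solution blows up at the finite time t* = 1/x_0, which the short check below evaluates.

```python
def x_exact(t, x0):
    """Closed-form solution of dx/dt = x^2, x(0) = x0 > 0: x(t) = x0 / (1 - x0*t)."""
    return x0 / (1.0 - x0 * t)

x0 = 2.0                        # blow-up time is t* = 1/x0 = 0.5
for t in (0.0, 0.25, 0.4, 0.49):
    print(t, x_exact(t, x0))    # the value grows without bound as t -> 0.5
```

Differentiating x(t) = x_0/(1 - x_0 t) gives x_0²/(1 - x_0 t)² = x(t)², confirming that the formula solves the ODE up to, but not beyond, the blow-up time.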
We define a point X = ([X_1], [X_2], …, [X_n]) as a state of the system, and the collection of these states is called the state space. The solution curve of the system starting from a fixed initial condition is called a trajectory or orbit. The collection of trajectories given any initial condition is called the flow of the differential equation and is denoted by φ_t(X_0). The concept of the flow of the differential equation indicates the dependence of the system on initial conditions. The flow of the differential equation can be represented geometrically in the phase space R^n using a phase portrait. There exists a corresponding vector defined by the ODE that is tangent to each point in every trajectory, and the collection of all tangent vectors of the system is a vector field. A vector field is often helpful in visualizing the phase portrait of the system. Moreover, various methods are also available to numerically solve the system (4.1), such as the Euler and Runge-Kutta 4 methods.
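The classical Runge-Kutta 4 scheme mentioned above can be written in a few lines for an autonomous system. In this sketch (a toy of ours, not code from the thesis) we test it on first-order degradation dx/dt = -x, whose exact solution is e^{-t}.

```python
import math

def rk4_step(f, x, dt):
    """One classical Runge-Kutta 4 step for the autonomous ODE dx/dt = f(x)."""
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

x, dt = 1.0, 0.1
for _ in range(10):                     # integrate dx/dt = -x up to t = 1
    x = rk4_step(lambda v: -v, x, dt)
print(x, math.exp(-1.0))                # RK4 has global error O(dt^4)
```

Even with the fairly coarse step dt = 0.1, the RK4 result matches e^{-1} to several decimal places, which is why RK4 is the usual workhorse for the phase portraits discussed here.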
4.1 Stability analysis
In nonlinear analysis of systems, it is important to find points where our system is at rest and determine whether these points are stable or unstable. In modeling cellular differentiation, an asymptotically stable equilibrium point, which is an attractor, is associated with a certain cell type. For any initial condition in a neighborhood of the attractor, the trajectories tend towards the attractor even if slightly perturbed.
Definition 4.1 Equilibrium point. The point X^* := ([X_1]^*, [X_2]^*, …, [X_n]^*) ∈ R^n is said to be an equilibrium point (also called a critical point, stationary point or steady state) of the system (4.1) if and only if F(X^*) = 0.
Finding the equilibrium points corresponds to solving for the real-valued solutions to
the system of equations F(X) = 0. It is possible that this system of equations has a
unique solution, several solutions, a continuum of solutions, or no solution.
In order to describe the local behavior of the system (4.1) near a specific equilibrium point X^*, we linearize the system by forming the Jacobian matrix

    JF(X) = [ ∂F_1/∂[X_1]   ∂F_1/∂[X_2]   …   ∂F_1/∂[X_n] ]
            [ ∂F_2/∂[X_1]   ∂F_2/∂[X_2]   …   ∂F_2/∂[X_n] ]
            [      ⋮              ⋮        ⋱        ⋮      ]
            [ ∂F_n/∂[X_1]   ∂F_n/∂[X_2]   …   ∂F_n/∂[X_n] ]   (4.3)

and then evaluating JF(X^*). If no eigenvalue of JF(X^*) has zero real part, then X^* is called hyperbolic. If X^* is asymptotically stable, trajectories starting sufficiently near X^* converge to X^* as t → ∞.

Theorem 4.4 Stability of equilibrium points. If all the eigenvalues of JF(X^*) have negative real parts, then X^* is an asymptotically stable equilibrium point. If at least one eigenvalue of JF(X^*) has a positive real part, then X^* is an unstable equilibrium point.

For simplicity, we will call an asymptotically stable equilibrium point stable. There are various tests for determining the stability of an equilibrium point, such as by using Theorem (4.4), or by using geometric analysis as shown in Figure (4.1). In addition, we define X^* as a saddle point if it is unstable and JF(X^*) has at least one eigenvalue with negative real part. For further details regarding the local behavior of nonlinear systems in the neighborhood of an equilibrium point, refer to the Stable Manifold Theorem and the Hartman-Grobman Theorem [134].
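Theorem 4.4 suggests a simple numerical recipe: approximate JF(X^*) by finite differences and inspect the real parts of its eigenvalues. For a planar system the eigenvalue signs can be read off from the trace and determinant, as in this sketch (the toy vector fields are our own, chosen only to have an equilibrium at the origin).

```python
def jacobian(F, X, h=1e-6):
    """Finite-difference approximation of the Jacobian of F at the point X."""
    n = len(X)
    FX = F(X)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        Xp = list(X)
        Xp[j] += h
        FXp = F(Xp)
        for i in range(n):
            J[i][j] = (FXp[i] - FX[i]) / h
    return J

def planar_stability(J):
    """Classify a 2x2 Jacobian: det < 0 gives a saddle; otherwise the sign
    of the trace decides (negative trace -> asymptotically stable)."""
    tr = J[0][0] + J[1][1]
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    if det < 0:
        return "saddle"
    return "stable" if tr < 0 else "unstable"

# Toy planar vector fields with an equilibrium at the origin (illustrative only).
F_stable = lambda X: [-X[0] + X[1], -2.0 * X[1]]
F_saddle = lambda X: [X[0], -X[1]]
print(planar_stability(jacobian(F_stable, [0.0, 0.0])))  # stable
print(planar_stability(jacobian(F_saddle, [0.0, 0.0])))  # saddle
```

The trace-determinant shortcut works because for a 2 x 2 matrix the eigenvalues satisfy λ² - tr·λ + det = 0; in higher dimensions one computes the eigenvalues directly.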
Figure 4.1: The slope of F(X) at the equilibrium point determines the linear stability. Positive gradient means instability, negative gradient means stability. If the gradient is zero, we look at the left and right neighboring gradients. Refer to the Insect Outbreak Model: Spruce Budworm in [122].
It is also useful to determine the set of initial conditions X_0 with trajectories converging to a specific stable equilibrium point X^*, denoted by

    B_{X^*} := { X_0 : lim_{t→∞} φ_t(X_0) = X^* }.   (4.4)

In addition, a set B̃ ⊆ B is called positively invariant with respect to the flow φ_t(X_0) if for any X_0 ∈ B̃, φ_t(X_0) ∈ B̃ for all t ≥ 0, that is, the flow of the ODE remains in B̃.
There are other types of attractors, such as ω-limit cycles and strange attractors [56]. A limit cycle is a periodic orbit (a closed trajectory which is not an equilibrium point) that is isolated. An asymptotically stable limit cycle is called an ω-limit cycle. Strange attractors usually occur when the dynamics of the system is chaotic. Moreover, under some conditions, a trajectory may be contained in a non-attracting but neutrally stable center (see [56] for a discussion of centers). However, the extensive numerical simulations by MacArthur et al. [113] suggest that their ODE model (Equations (3.6) and (3.7)) does not have oscillators (periodic orbits) or strange trajectories. Cinquin and Demongeot also claim that the solutions to their model (refer to Equation (3.4)) always tend towards an equilibrium and never oscillate [38].
The existence of a center, ω-limit cycle or strange attractor that would result in recurring changes in phenotype is abnormal for a natural, fully differentiated cell. Limit cycles are associated with the concept of continuous cell proliferation (self-renewal) where there are recurring biochemical states during cell division cycles [82]. However, cell division is beyond the scope of this thesis.
Various theorems are available for checking the possible existence or non-existence of limit cycles (although most are for two-dimensional planar systems only). The Poincaré-Bendixson Theorem for planar systems [134] states that if F ∈ C^1(B) and a trajectory remains in a compact region of B whose ω-limit set (e.g., attracting set) does not contain any equilibrium point, then the trajectory approaches a periodic orbit. Furthermore, if F ∈ C^1(B), a trajectory remains in a compact region of B, and there are only a finite number of equilibrium points, then the ω-limit set of any trajectory of the planar system can be one of three types: an equilibrium point, a periodic orbit, or a compound separatrix cycle.
Some studies have shown the effects of the presence of positive or negative feedback loops in GRNs, such as possible multistability (existence of multiple stable equilibrium points) and existence of oscillations [8, 37, 45, 104, 119, 155]. It is also important to note that a strange (chaotic) attractor cannot exist for n < 3 [56].
4.2 Bifurcation analysis
The behavior of the solutions of system (4.1) depends not only on the initial conditions but also on the values of the parameters. The parameters of the model may be associated with real-world quantities that can be manipulated to control the solutions. Varying the value of a parameter (or parameters) may result in dramatic changes in the qualitative nature of the solutions, such as a change in the number of equilibrium points or a change in stability. Here, we now let F be a function of the state variables X and of the parameter matrix Θ (i.e., F(X, Θ)). We define the values of the parameters where such a dramatic change occurs as bifurcation values, denoted by Θ^*. If we simultaneously vary the values of p parameters, then we have a p-parameter bifurcation.

If a p-parameter bifurcation is sufficient for a bifurcation type to occur, then we classify the bifurcation type as codimension p. Examples of codimension one bifurcation types are the saddle-node (fold), supercritical Poincaré-Andronov-Hopf and subcritical Poincaré-Andronov-Hopf bifurcations. Transcritical, supercritical pitchfork and subcritical pitchfork bifurcations are also often regarded as codimension one. The cusp bifurcation is of codimension two.
Figure 4.2: Sample bifurcation diagram showing saddle-node bifurcation.
In a local bifurcation, the qualitative change in the behavior of the solutions occurs in a neighborhood of the equilibrium point X^*.

4.3 Fixed point iteration

We use fixed point iteration (FPI) to find approximate stable equilibrium points of the Cinquin-Demongeot [38] model. If the iteration X^(i+1) := Q(X^(i)) converges to some X^* (where X^(0) := X_0), then Q(X^*) = X^*. While |X^(i+1) - X^(i)| > ε, do X^(i+1) := Q(X^(i)). If |X^(i+1) - X^(i)| ≤ ε is satisfied, then X^(i+1) is the approximate fixed point.

Figure 4.3: An illustration of a cobweb diagram.

The geometric illustration of FPI is called a cobweb diagram, as illustrated in Figure (4.3).
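The FPI loop described above is only a few lines of code. The iteration map below is an illustrative univariate example of our own making (a Hill term with β = 3, c = 2, leak g = 0.1 and decay coefficient ρ = 1), so a fixed point satisfies [X] = 3[X]²/(1 + [X]²) + 0.1.

```python
def fpi(Q, x0, tol=1e-10, max_iter=10000):
    """Fixed point iteration: repeat x <- Q(x) until |x_new - x| <= tol."""
    x = x0
    for _ in range(max_iter):
        x_new = Q(x)
        if abs(x_new - x) <= tol:
            return x_new
        x = x_new
    raise RuntimeError("FPI did not converge within max_iter iterations")

# Illustrative univariate map: Hill term plus leak, decay coefficient 1.
Q = lambda x: 3.0 * x**2 / (1.0 + x**2) + 0.1
x_star = fpi(Q, 2.0)
print(x_star)   # the upper stable intersection of the Hill curve and decay line
```

FPI converges here because |Q'| < 1 near the upper intersection; at an unstable intersection the iteration would diverge, which is exactly what the cobweb diagram visualizes.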
4.4 Sylvester resultant method
To find the equilibrium points, we can rewrite the Cinquin-Demongeot ODE model where the exponent is a positive integer as a system of polynomial equations. Assume F(X) = 0 can be written as a polynomial system P(X) = 0. The topic of solving multivariate nonlinear polynomial systems is still in its development stage. However, there are already various available algebraic and geometric methods for solving P(X) = 0, such as Newton-like methods, homotopic solvers, subdivision methods, algebraic solvers using Gröbner bases, and geometric solvers using resultant construction [120]. In resultant construction, we treat the problem of solving P(X) = 0 as a problem of finding intersections of curves.

All P_i(X) should have no common factor of degree greater than zero so that P(X) = 0 has a finite number of complex solutions. The following Bézout Theorem gives a bound on the number of complex solutions, including multiplicities.
Theorem 4.5 Bézout theorem. Consider real-valued polynomials P_1, P_2, …, P_n, where P_i has degree deg_i. Suppose all the polynomials have no common factor of degree greater than zero (i.e., they are collectively relatively prime). Then the number of isolated complex solutions to the system P_1(X) = P_2(X) = … = P_n(X) = 0 is at most (deg_1)(deg_2)…(deg_n).
The method of using the Sylvester resultant is a classical algorithm in algebraic geometry used to find the complex solutions of a system of two polynomial equations in two variables. It can also be used for solving a polynomial system of n equations in n variables where n > 2, by repeated application of the algorithm. The idea of using Sylvester resultants for solving multivariate polynomial systems is to eliminate all but one variable. There are other resultant construction methods for solving multivariate polynomial systems with n > 2, such as the Dixon resultant, Macaulay resultant and U-resultant methods, but we will only focus on the Sylvester resultant. The algorithm for using Sylvester resultants is illustrated in the following paragraphs.
Consider two polynomials P_1([X_1], [X_2]) and P_2([X_1], [X_2]). We eliminate [X_1] by constructing the Sylvester matrix associated to the two polynomials with [X_1] as the variable (i.e., we take [X_2] as a fixed parameter). The size of the Sylvester matrix is (deg_1 + deg_2) × (deg_1 + deg_2), where deg_1 and deg_2 are the degrees of the polynomials P_1 and P_2 in the variable [X_1], respectively.
We give an example to show how to construct a Sylvester matrix. Let us suppose

    P_1([X_1], [X_2]) = 2[X_1]^3 + 4[X_1]^2[X_2] + 7[X_1][X_2]^2 + 10[X_2]^3 + 8   (4.5)
    P_2([X_1], [X_2]) = 5[X_1]^2 + 2[X_1][X_2] + [X_2]^2 + 6.   (4.6)
Since the degree of P_1 in terms of [X_1] is 3 and the degree of P_2 in terms of [X_1] is 2, the size of the Sylvester matrix (with [X_1] as variable) is 5 × 5. The Sylvester matrix of P_1 and P_2 with [X_1] as variable is

    [ 2   4[X_2]   7[X_2]^2      10[X_2]^3 + 8   0             ]
    [ 0   2        4[X_2]        7[X_2]^2        10[X_2]^3 + 8 ]
    [ 5   2[X_2]   [X_2]^2 + 6   0               0             ]
    [ 0   5        2[X_2]        [X_2]^2 + 6     0             ]
    [ 0   0        5             2[X_2]          [X_2]^2 + 6   ].   (4.7)
The first row of the Sylvester matrix contains the coefficients of [X_1]^3, [X_1]^2, [X_1]^1 and [X_1]^0 in P_1. We shift each element of the first row one column to the right to form the second row. The third row contains the coefficients of [X_1]^2, [X_1]^1 and [X_1]^0 in P_2. We shift each element of the third row one column to the right to form the fourth row. We again shift each element of the fourth row one column to the right to form the fifth row. Generally, we continue the process of shifting each element of the previous row to form the next row until the coefficient of [X_1]^0 reaches the last column. All cells of the matrix without entries coming from the coefficients of the polynomials are assigned the value zero.
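The construction just described is mechanical, so it is easy to code. The sketch below (our own helper names, not code from the thesis) builds the Sylvester matrix from two coefficient lists, evaluates its determinant by Gaussian elimination, and packages res(P_1, P_2; [X_1]) of the example (4.5)-(4.6) as a function of a numeric value of [X_2].

```python
def sylvester(p, q):
    """Sylvester matrix of two univariate polynomials given as coefficient
    lists in decreasing degree, e.g. p = [2, a, b, c] for 2x^3 + ax^2 + bx + c."""
    m, n = len(p) - 1, len(q) - 1
    M = [[0.0] * (m + n) for _ in range(m + n)]
    for i in range(n):                  # n shifted rows of p's coefficients
        for j, coef in enumerate(p):
            M[i][i + j] = coef
    for i in range(m):                  # m shifted rows of q's coefficients
        for j, coef in enumerate(q):
            M[n + i][i + j] = coef
    return M

def det(M):
    """Determinant by Gaussian elimination with partial pivoting."""
    A = [row[:] for row in M]
    n, d = len(A), 1.0
    for k in range(n):
        piv = max(range(k, n), key=lambda r: abs(A[r][k]))
        if abs(A[piv][k]) < 1e-12:
            return 0.0
        if piv != k:
            A[k], A[piv] = A[piv], A[k]
            d = -d
        d *= A[k][k]
        for r in range(k + 1, n):
            f = A[r][k] / A[k][k]
            for c in range(k, n):
                A[r][c] -= f * A[k][c]
    return d

def res_at(x2):
    """res(P1, P2; [X1]) of the example (4.5)-(4.6) at a numeric [X2]."""
    p1 = [2.0, 4.0 * x2, 7.0 * x2**2, 10.0 * x2**3 + 8.0]  # coefficients in [X1]
    p2 = [5.0, 2.0 * x2, x2**2 + 6.0]
    return det(sylvester(p1, p2))
```

Scanning res_at over values of [X_2] and locating its zeros then recovers, by the theorem stated below for the Sylvester resultant, the [X_2]-coordinates of the common solutions; exact symbolic work would instead keep [X_2] as an indeterminate.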
We use the determinant of the Sylvester matrix to find the intersections of P_1 and P_2.

Definition 4.4 Sylvester resultant. We call the determinant of the Sylvester matrix of P_1 and P_2 in [X_1] (where [X_2] is a fixed parameter) the Sylvester resultant, denoted by res(P_1, P_2; [X_1]).
Theorem 4.6 Zeroes of the Sylvester resultant. The values where res(P
1
, P
2
; [X
1
]) =
0 are the complex values of [X
2
] where P
1
([X
1
], [X
2
]) = P
2
([X
1
], [X
2
]) = 0.
We denote the complex values of [X_2] where P_1([X_1], [X_2]) = P_2([X_1], [X_2]) = 0 by
[X_2]*. To find [X_1]*, we solve P_1([X_1], [X_2]*) = P_2([X_1], [X_2]*) = 0
for all possible values of [X_2]*.
The following theorem can be used to determine if P_1 and P_2 either do not intersect,
or intersect at infinitely many points.

Theorem 4.7 None and infinitely many solutions. res(P_1, P_2; [X_1]) is nonzero
for any [X_2] if and only if P_1([X_1], [X_2]) = P_2([X_1], [X_2]) = 0 has no complex solutions.
Furthermore, the following statements are equivalent:
1. res(P_1, P_2; [X_1]) is identically zero (i.e., zero for any values of [X_2]).
2. P_1 and P_2 have a common factor of degree greater than zero.
3. P_1 = P_2 = 0 has infinitely many complex solutions.
We can extend the Sylvester resultant method to a multivariate case, say with three
polynomials P_1([X_1], [X_2], [X_3]), P_2([X_1], [X_2], [X_3]) and P_3([X_1], [X_2], [X_3]), by getting
R_1 = res(P_1, P_2; [X_1]) and R_2 = res(P_2, P_3; [X_1]). Notice that R_1 and R_2 are both in
terms of [X_2] and [X_3]. We then get R_3 = res(R_1, R_2; [X_2]), which is in terms of [X_3].
We solve the univariate polynomial equation R_3 = 0 by using available solvers to obtain
[X_3]*. After this, we find [X_2]* by substituting [X_3]* in R_1 and R_2 and solving R_1 =
R_2 = 0. We then find [X_1]* by solving P_1([X_1], [X_2]*, [X_3]*) = P_2([X_1], [X_2]*, [X_3]*) =
P_3([X_1], [X_2]*, [X_3]*) = 0.
For a more detailed discussion on solving systems of multivariate polynomial equa-
tions, the following references may be consulted [17, 49, 98, 156, 157, 178].
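As a small illustration of the elimination chain above, SymPy's `resultant` can carry out the cascade R_1, R_2, R_3. The three linear polynomials here are hypothetical, chosen only so the arithmetic is easy to follow; they have the single common solution x1 = x2 = x3 = 2.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')

# A hypothetical system with the single common solution x1 = x2 = x3 = 2
P1 = x1 + x2 + x3 - 6
P2 = x1 - x2
P3 = x1 - x3

R1 = sp.resultant(P1, P2, x1)   # eliminates x1; depends on x2, x3
R2 = sp.resultant(P2, P3, x1)   # eliminates x1; depends on x2, x3
R3 = sp.resultant(R1, R2, x2)   # eliminates x2; univariate in x3

x3_star = sp.solve(R3, x3)      # candidate values of x3 at common roots
```

Back-substituting each root of R_3 into R_1 and R_2, and then into the original system, recovers the full solutions, exactly as described in the text.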
4.5 Numerical solution to SDEs
The solutions to ODEs are functions, while the solutions to SDEs are stochastic processes.
We define a continuous-time stochastic process X as a set of random variables X(t),
where the index variable t ≥ 0 takes a continuous set of values. The index variable t may
represent time.
Suppose we have an SDE model of the form dX = F(X)dt + G(X)dW, where W
is a stochastic process called Brownian motion (Wiener process). The differential dW of
W is called white noise. Brownian motion is the continuous version of a random walk
and has the following properties:
1. For each t, the random variable W(t) is normally distributed with mean zero and
variance t.
2. For each t_i < t_{i+1}, the normal random variable ΔW(t_i) = W(t_{i+1}) − W(t_i) is
independent of the random variables W(t_j), 0 ≤ t_j ≤ t_i (i.e., W has independent
increments).
3. Brownian motion W can be represented by continuous paths (but is not differentiable).
Suppose W(t_0) = 0. We can simulate a Brownian motion using computers by
discretizing time as 0 = t_0 < t_1 < . . . and choosing a random number that would represent
W(t_i) − W(t_{i−1}) from the normal distribution N(0, t_i − t_{i−1}) = √(t_i − t_{i−1}) N(0, 1). This implies
that we obtain W(t_i) by multiplying √(t_i − t_{i−1}) by a standard normal random number and
then adding the product to W(t_{i−1}).
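A minimal sketch of this discretization (using NumPy; the grid resolution and random seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

t = np.linspace(0.0, 1.0, 501)              # 0 = t_0 < t_1 < ... < t_500
dt = np.diff(t)

# Each increment W(t_i) - W(t_{i-1}) is sqrt(t_i - t_{i-1}) times a
# standard normal random number, and the increments are independent.
dW = np.sqrt(dt) * rng.standard_normal(dt.size)

W = np.concatenate(([0.0], np.cumsum(dW)))  # cumulative sum gives W, with W(t_0) = 0
```

Each run of this sketch produces one realization of a Brownian path on [0, 1].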
The solution to an SDE model has different realizations because it is based on
random numbers. We can approximate a realization of the solution by using numerical
solvers such as the Euler-Maruyama and Milstein methods. In this thesis, we use the
Euler-Maruyama method. The Euler-Maruyama method is similar to the Euler method
for ODEs.
Algorithm 2 Euler-Maruyama method
Discretize the time as 0 < t_1 < t_2 < . . . < t_end.
Suppose Y_{t_i} is the approximate solution to X(t_i).
Input initial condition X_{t_0}. Let Y_{t_0} := X_{t_0}.
For i = 0, 1, 2, . . . , end − 1 do
    ΔW(t_i) = √(t_{i+1} − t_i) · rand_{N(0,1)}, where rand_{N(0,1)} is a standard normal random number.
    Y_{t_{i+1}} = Y_{t_i} + F(Y_{t_i})(t_{i+1} − t_i) + G(Y_{t_i}) ΔW(t_i).
end
Euler-Maruyama has order 1/2, that is, for any time t the expected value of the error
E{|X_t − Y_t|} is an element of O((Δt)^{1/2}) as Δt → 0. Note that for easy simulation, we
can suppose that we have equal step sizes Δt = (t_{i+1} − t_i). For a more detailed discussion
on Brownian motion and SDEs, the following references may be consulted [95, 147].
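Algorithm 2 can be sketched as follows (a generic scalar implementation in NumPy; the drift F, diffusion G, time grid, and parameter values are placeholders supplied by the caller):

```python
import numpy as np

def euler_maruyama(F, G, x0, t, rng):
    """Euler-Maruyama on the grid t[0] < t[1] < ...:
    Y_{i+1} = Y_i + F(Y_i)*(t_{i+1}-t_i) + G(Y_i)*dW_i, with dW_i ~ N(0, t_{i+1}-t_i)."""
    Y = np.empty(t.size)
    Y[0] = x0
    for i in range(t.size - 1):
        dt = t[i + 1] - t[i]
        dW = np.sqrt(dt) * rng.standard_normal()
        Y[i + 1] = Y[i] + F(Y[i]) * dt + G(Y[i]) * dW
    return Y

# Example: dX = -X dt + 0.1 dW, an Ornstein-Uhlenbeck-type SDE
t = np.linspace(0.0, 1.0, 1001)
path = euler_maruyama(lambda x: -x, lambda x: 0.1, 1.0, t, np.random.default_rng(1))
```

With G ≡ 0 the scheme reduces to the ordinary Euler method for ODEs, which is one way to sanity-check an implementation against a known deterministic solution.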
For a more detailed discussion on the analysis of nonlinear systems, the following
references may be consulted [3, 56, 134, 146, 147].
Chapter 5
Results and Discussion
Simplified GRN and ODE Model
In this thesis, we represent the dynamics of the simplified gene network of MacArthur
et al. [113] using a system of Ordinary Differential Equations (ODEs) based on the
Cinquin-Demongeot formalism [38]. We prove the existence and uniqueness of solutions
to the ODE model under some assumptions.
5.1 Simplified MacArthur et al. model
Figure 5.1: The original MacArthur et al. [113] mesenchymal gene regulatory network.
Let us recall the MacArthur et al. [113] GRN in Chapter (3) (see Figure (5.1)). This
GRN represents a multipotent cell that could differentiate into three cell types: bone,
cartilage and fat.
Figure 5.2: Possible paths that result in positive feedback loops. Shaded boxes denote
that the path repeats.
We refer to the group of OCT4, SOX2, NANOG and their multimers (protein complexes)
as the pluripotency module, and the group of SOX9, RUNX2 and PPAR-γ as
the differentiation module. OCT4, SOX2, NANOG and their multimers in the original
MacArthur et al. GRN [113] do not have autoactivation loops, but notice that the path
NANOG → OCT4-SOX2-NANOG → OCT4 → OCT4-SOX2 → SOX2 → OCT4-SOX2-NANOG → NANOG
is one of the positive feedback loops of the GRN (see Figure (5.2)).
A positive feedback loop that contains OCT4, SOX2, NANOG and their multimers can
be regarded as an autoactivation loop of the pluripotency module.
Both the OCT4-SOX2-NANOG and OCT4-SOX2 multimers inhibit SOX9, RUNX2
and PPAR-γ (as represented by the green bars in Figure (5.1)). On the other hand,
SOX9, RUNX2 and PPAR-γ inhibit OCT4, SOX2 and NANOG (as represented by the
blue bars in Figure (5.1)). These inhibitions imply that the pluripotency module inhibits
the differentiation module and vice versa.
Figure 5.3: The simplified MacArthur et al. GRN
Since the pluripotency module can be represented as a node with autoactivation and
mutual inhibition with the other nodes, we can simplify the GRN in Figure (5.1) by coarse-graining.
We represent the pluripotency module as one node, and we call it the sTF
(stemness transcription factor) node. From eight nodes, we only have four nodes. The
coarse-grained biological network of the MacArthur et al. GRN [113] is shown in Figure
(5.3), and from now on we shall refer to this as our simplified network. This simplified
network represents a phenomenological model of the mesenchymal cell differentiation
system. Since each node undergoes autocatalysis (autoactivation) and inhibition by
the other nodes (as shown by the arrows and bars), the simplified GRN is in the
simultaneous-decision-model form that can be translated into a Cinquin-Demongeot [38]
ODE model (refer to Figure (3.4)).
It is difficult to study the qualitative behavior of the ODE model by MacArthur et
al. [113] (see Equations (3.6) and (3.7)) using analytic methods. This is the reason for
the simplification of the MacArthur et al. [113] GRN, where the essential qualitative
dynamics are still preserved. We translate the dynamics of the simplified network into a
Cinquin-Demongeot [38] ODE model for easier analysis.
One limitation of a phenomenological model is that it excludes time-delays that may
arise from the deleted molecular details. However, a phenomenological model is sufficient
to address the general principles of cellular differentiation and cellular programming, such
as the temporal behavior of the dynamics of the GRN [70].
5.2 The generalized Cinquin-Demongeot ODE model
In [38], Cinquin and Demongeot suggested to extend their model to include combinatorial
interactions and non-symmetrical networks (i.e., each node does not have the same
relationship with other nodes and each equation in the system of ODEs does not have
equal parameter values). We include more adjustable parameters to their model to represent
a wider range of situations. In this generalized model, some differentiation factors
can be stronger than others. We generalize the Cinquin-Demongeot (2005) ODE model
as follows (X = ([X_1], [X_2], . . . , [X_n])):

    d[X_i]/dt = F_i(X) = α_i [X_i]^{c_i} / ( K_i^{c_i} + [X_i]^{c_i} + Σ_{j=1, j≠i}^{n} β_ij [X_j]^{c_ij} ) + γ_i s_i − ρ_i [X_i]    (5.1)

where i = 1, 2, . . . , n and n is the number of nodes.
In our simplified network, we have four nodes and thus n = 4. Some of our results
are applicable not only to n = 4 but to any dimension. The state variable [X_i] represents
the concentration of the corresponding TF. Specifically, let [X_1] := [RUNX2], [X_2] :=
[SOX9], [X_3] := [PPAR-γ] and [X_4] := [sTF]. To have biological significance, we
restrict [X_i] and the parameters to be nonnegative real numbers.
The parameter α_i is the relative speed of transcription, ρ_i is the assumed first-order
degradation rate associated with X_i, and β_ij is the differentiation stimulus that affects
the inhibition of X_i by X_j. If β_ij = 0 then X_j does not inhibit the growth of [X_i]. We
denote the term γ_i s_i by

    g_i := γ_i s_i,

which represents the basal or constitutive expression of the corresponding TF that is affected
by the exogenous stimulus with concentration s_i. In other words, γ_i s_i is a constant
production term that enhances the concentration of X_i. Specifically, let s_1 := [RA +
BMP4], s_2 := [RA + TGF-β], s_3 := [RA + Insulin] and s_4 := 0.
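For later numerical experiments, the right-hand side of the model can be coded directly. The sketch below assumes the parameter names used above (α_i, K_i, β_ij, c_i, c_ij, ρ_i and g_i = γ_i s_i); all numerical values are placeholders, not fitted parameters:

```python
import numpy as np

def F(X, alpha, K, beta, c, c_ij, rho, g):
    """Right-hand side of the generalized Cinquin-Demongeot model (5.1).
    X[i] is [X_i]; beta[i, j] is the strength of the inhibition of X_i by X_j."""
    n = X.size
    dX = np.empty(n)
    for i in range(n):
        inhibition = sum(beta[i, j] * X[j] ** c_ij[i, j] for j in range(n) if j != i)
        hill = alpha[i] * X[i] ** c[i] / (K[i] + X[i] ** c[i] + inhibition)
        dX[i] = hill + g[i] - rho[i] * X[i]
    return dX

# Placeholder parameters for the four-node simplified network (n = 4)
n = 4
alpha = np.full(n, 1.0); K = np.full(n, 1.0); rho = np.full(n, 1.0)
g = np.zeros(n); c = np.full(n, 2); beta = np.ones((n, n)); c_ij = np.ones((n, n))
```

Feeding this function to any standard ODE integrator gives trajectories of the model; at X = 0 only the basal terms g_i contribute, which is a quick correctness check.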
We define the multivariate function H_i by

    H_i([X_1], [X_2], . . . , [X_n]) = α_i [X_i]^{c_i} / ( K_i^{c_i} + [X_i]^{c_i} + Σ_{j=1, j≠i}^{n} β_ij [X_j]^{c_ij} )    (5.2)

which comes from the typical Hill equation. The term Σ_{j=1, j≠i}^{n} β_ij [X_j]^{c_ij} in the denominator
reflects the inhibitory influence of the other TFs on the change of concentration of X_i.
We denote the parameter K_i^{c_i} > 0 by

    K_i := K_i^{c_i},

which is related to the threshold or dissociation constant.
The parameter c_i ≥ 1 represents the Hill constant and affects the steepness of the
Hill curve associated with [X_i], and denotes the homomultimerization-induced positive
cooperativity (for autocatalysis). The parameter c_ij denotes the heteromultimerization-induced
negative cooperativity (for mutual inhibition). Cooperativity describes the interactions
among binding sites, where the affinity or relationship of a binding site positively
or negatively changes depending on itself or on the other binding sites. Note that cooperativity
requires more than one binding site.
Notice that the lower bound of H_i (5.2) is zero and its upper bound is α_i. Thus, the
parameter α_i can also be interpreted as the maximal expression rate of the corresponding
TF.
Explicitly, two mathematically unequal amounts of concentration can be regarded as
biologically equal if their difference is not significant; that is, [X_i] ≈ [X_j] if |[X_i] − [X_j]| < ε,
where ε is acceptably small. We say that [X_i] sufficiently dominates [X_j] if [X_i] >
[X_j] and [X_i] ≉ [X_j]. In addition, scientists compare the concentration of X_i to the
concentration of X_j by looking at the ratio of [X_i] and [X_j]; for example, [X_i] sufficiently
dominates [X_j] ([X_j] ≠ 0) if [X_i]/[X_j] > ε_1 ≫ 1, or [X_j] sufficiently dominates [X_i] if
[X_i]/[X_j] < ε_2 ≪ 1, where ε_1 and ε_2 are some acceptable tolerance constants.
We say that a TF is switched-off or inactive if [TF] = 0, and switched-on otherwise.
However, as an approximation, a TF with sufficiently low concentration can be
considered to be switched-off.
If no component representing a node from the differentiation module sufficiently dominates
[sTF] (e.g., [sTF] ≳ [OCT4], [sTF] ≳ [SOX2] and [sTF] ≳ [PPAR-γ]) and sTF
is switched-on, then the state represents a pluripotent cell. If all the components of a
state are (approximately) equal and all TFs are switched-on, then the state represents a
primed stem cell.
If at least one component from the differentiation module sufficiently dominates
[sTF], then the state represents either a partially differentiated or a fully differentiated
cell. If exactly three components from the differentiation module are (approximately)
equal, then the state represents a tripotent cell. If exactly two components
from the differentiation module are (approximately) equal and sufficiently dominate all
other components (possibly including [sTF]), then the state represents a bipotent cell.
If sTF is switched-off, then the cell has lost its ability to self-renew.
If exactly one component from the differentiation module sufficiently dominates all
other components (possibly including [sTF]) but sTF is still switched-on, then the state
represents a unipotent cell. If exactly one TF from the differentiation module remains
switched-on and all other TFs including sTF are switched-off, then the state represents
a fully differentiated cell.
A trajectory converging to the equilibrium point (0, 0, . . . , 0) is a trivial case because
the zero state neither represents a pluripotent cell nor a cell differentiating into bone,
cartilage or fat. The trivial case may represent a cell differentiating towards other cell
types (e.g., towards becoming a neural cell) which are not in the domain of our GRN.
The zero state may also represent a cell that is in a quiescent stage.
Definition 5.1 Stable component and stable equilibrium point. If [X_i] converges
to [X_i]*, then the i-th component of an equilibrium point X* = ([X_1]*, [X_2]*, . . . , [X_n]*)
is stable; otherwise, [X_i]* is unstable.

5.3 Geometry of the Hill function

Consider the univariate Hill function

    H_i([X_i]) = α_i [X_i]^{c_i} / ( K_i + [X_i]^{c_i} + Σ_{j=1, j≠i}^{n} β_ij [X_j]^{c_ij} )    (5.3)
where [X_j], j ≠ i, is taken as a parameter. This means that we project the high-dimensional
space onto a two-dimensional plane. If c_i = 1, the graph of the univariate
Hill function in the first quadrant of the Cartesian plane is hyperbolic (for any value of
[X_j], j ≠ i), similar to the topology shown in Figure (5.4). If c_i > 1, the graph of the
univariate Hill function in the first quadrant is sigmoidal or S-shaped (for any value of
[X_j], j ≠ i), similar to one of the topologies shown in Figure (5.5).
When the value of

    K_i + Σ_{j=1, j≠i}^{n} β_ij [X_j]^{c_ij}    (5.4)

in the denominator of H_i([X_i]) increases, the graph of the Hill curve (for any c_i ≥ 1)
shrinks, as illustrated in Figure (5.6). When the value of c_i increases, the graph of
Y = H_i([X_i]) gets steeper, as illustrated in Figure (5.7). If we add a term g_i to H_i([X_i]),
then the graph of Y = H_i([X_i]) in the Cartesian plane is translated upwards by g_i units,
as illustrated in Figure (5.8).
We investigate the geometry of the Hill function as a prerequisite to our study of
determining the behavior of equilibrium points of our system (5.1).
Figure 5.4: Graph of the univariate Hill function when c_i = 1.
Figure 5.5: Possible graphs of the univariate Hill function when c_i > 1.
Figure 5.6: The graph of Y = H_i([X_i]) shrinks as the value of K_i + Σ_{j=1, j≠i}^{n} β_ij [X_j]^{c_ij} increases.
Figure 5.7: The Hill curve gets steeper as the value of the autocatalytic cooperativity c_i increases.
Figure 5.8: The graph of Y = H_i([X_i]) is translated upwards by g_i units.
5.4 Positive invariance
We solve the multivariate equation F_i(X) = 0 (for a specific i) by solving for the intersections
of the (n+1)-dimensional curve induced by H_i([X_1], [X_2], . . . , [X_n]) + g_i and the (n+1)-dimensional
hyperplane induced by ρ_i [X_i], as illustrated in Figure (5.9). That is, we find
the real solutions to

    α_i [X_i]^{c_i} / ( K_i + [X_i]^{c_i} + Σ_{j=1, j≠i}^{n} β_ij [X_j]^{c_ij} ) + γ_i s_i = ρ_i [X_i].    (5.5)
For easier analysis, we observe the intersections of the univariate functions defined by
Y = H_i([X_i]) + g_i and Y = ρ_i [X_i] while varying the value of K_i + Σ_{j=1, j≠i}^{n} β_ij [X_j]^{c_ij} in the
denominator of the univariate Hill function H_i([X_i]) (see Figure (5.10) for illustration).
In the univariate case, we can look at Y = ρ_i [X_i] as a line in the Cartesian plane passing
through the origin with slope equal to ρ_i.

Figure 5.9: The 3-dimensional curve induced by H_i([X_1], [X_2]) + g_i and the plane induced
by ρ_i [X_i], an example.
The following theorem guarantees that the state variables of our ODE model (5.1)
will never take negative values.

Figure 5.10: The intersections of Y = ρ_i [X_i] and Y = H_i([X_i]) + g_i with varying values
of K_i + Σ_{j=1, j≠i}^{n} β_ij [X_j]^{c_ij}, an example.
Lemma 5.1 Positive invariance. The flow φ(X_0) of the generalized (multivariate)
Cinquin-Demongeot ODE model (5.1) (where X_0 = ([X_1]_0, [X_2]_0, . . . , [X_n]_0) ∈ R_≥0^n can
be any initial condition) is always in R_≥0^n.
Proof. Suppose ρ_i > 0 ∀i and we have a nonnegative initial value X_0. Figures (5.11) to
(5.14) illustrate all possible cases showing the topologies of the intersections of Y = ρ_i [X_i]
and Y = H_i([X_i]) + g_i.
We employ the concept of fixed point iteration (where we define our fixed point as
[X_i] satisfying H_i([X_i]) + g_i = ρ_i [X_i]), or the geometric analysis shown in Figure (4.1)
(where we rotate the graph of the curves, making Y = ρ_i [X_i] the horizontal axis), to each
topology of the intersections of Y = ρ_i [X_i] and Y = H_i([X_i]) + g_i. Figures (5.15) and
(5.16) illustrate how the fixed point method and the geometric analysis shown in Figure
(4.1) are done.
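The univariate analysis can also be done numerically. The sketch below (all parameter values are illustrative) locates the intersections of Y = H_i([X_i]) + g_i and Y = ρ_i[X_i] by sign changes of their difference, and reads off stability from the sign of the difference to the left of each crossing, mirroring the rotated-axis argument of Figure (5.16):

```python
import numpy as np

def univariate_equilibria(alpha, Kq, g, rho, c, xmax=10.0, num=100003):
    """Intersections of H(x) + g with rho*x, where Kq stands for the fixed
    quantity K_i + sum_j beta_ij [X_j]^c_ij. Returns (roots, stability flags)."""
    x = np.linspace(0.0, xmax, num)
    diff = alpha * x**c / (Kq + x**c) + g - rho * x   # H(x) + g - rho*x
    s = np.sign(diff)
    roots, stable = [], []
    for i in np.nonzero(s[:-1] * s[1:] < 0)[0]:       # a sign change brackets a root
        roots.append(0.5 * (x[i] + x[i + 1]))
        stable.append(bool(s[i] > 0))                 # crossing from + to - is stable
    return roots, stable

roots, stable = univariate_equilibria(alpha=2.0, Kq=1.0, g=0.0, rho=0.8, c=2)
```

For these illustrative values the nonzero intersections are near [X_i] = 0.5 (unstable) and [X_i] = 2 (stable); the tangential intersection at [X_i] = 0 produces no sign change and is handled separately, as in the g_i = 0 case of the text.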
Given specific values of [X_j], j ≠ i, the univariate curves Y = H_i([X_i]) + g_i and
Y = ρ_i [X_i] have the following possible numbers of intersections (see Figures (5.11) to
(5.14)):
- two intersections (where one is stable);
- one intersection (which is stable); and
- three intersections (where two are stable).
We can see that there exists a stable intersection located in the first quadrant (including
the axes) of the Cartesian plane that always attracts the trajectory of our ODE model
for any initial condition without escaping the first quadrant (including the axes). Hence,
the flow of the ODE model (5.1) will stay in R_≥0^n for ρ_i > 0 ∀i.
Now, suppose ρ_i = 0 for at least one i. Then d[X_i]/dt ≥ 0 for all ([X_1], [X_2], . . . , [X_n])
given nonnegative initial condition [X_i]_0; that is, the change in [X_i] with respect to
time is always nonnegative, implying that the value of [X_i] will never decrease starting
from the initial condition [X_i]_0. Since [X_i]_0 ≥ 0, then [X_i] ≥ 0 for any time t.
Thus, φ(X_0) ∈ R_≥0^n ∀ X_0 ∈ R_≥0^n.
Consequently, Lemma (5.1) implies that F_i is a function F_i : R_≥0^n → R, for i =
1, 2, . . . , n.
The following theorems are consequences of the proof of Lemma (5.1). We use
Lemma (5.2) in proving theorems in the succeeding chapters.

Lemma 5.2 Suppose ρ_i > 0 for all i. Then the generalized Cinquin-Demongeot ODE
model (5.1) with X_0 ∈ R_≥0^n always has a stable equilibrium point. Moreover, any trajectory
of the model will converge to a stable equilibrium point.
Proof. This follows from the proof of Lemma (5.1).

Proposition 5.3 Suppose ρ_i > 0 for all i. Then F_i(X) will not blow up and will not
approach infinity given any initial condition X_0 ∈ R_≥0^n.
Proof. This follows since all trajectories of our system converge to a stable equilibrium point by
Lemmas (5.1) and (5.2).
Figure 5.11: The possible number of intersections of Y = ρ_i [X_i] and Y = H_i([X_i]) + g_i where c_i = 1 and g_i = 0. The value of K_i + Σ_{j=1, j≠i}^{n} β_ij [X_j]^{c_ij} is fixed.
Figure 5.12: The possible number of intersections of Y = ρ_i [X_i] and Y = H_i([X_i]) + g_i where c_i = 1 and g_i > 0. The value of K_i + Σ_{j=1, j≠i}^{n} β_ij [X_j]^{c_ij} is fixed.
Figure 5.13: The possible number of intersections of Y = ρ_i [X_i] and Y = H_i([X_i]) + g_i where c_i > 1 and g_i = 0. The value of K_i + Σ_{j=1, j≠i}^{n} β_ij [X_j]^{c_ij} is fixed.
Figure 5.14: The possible number of intersections of Y = ρ_i [X_i] and Y = H_i([X_i]) + g_i where c_i > 1 and g_i > 0. The value of K_i + Σ_{j=1, j≠i}^{n} β_ij [X_j]^{c_ij} is fixed.
Figure 5.15: Finding the univariate fixed points using a cobweb diagram, an example. We define the fixed point as [X_i] satisfying H_i([X_i]) + g_i = ρ_i [X_i].
Figure 5.16: The curves are rotated, making the line Y = ρ_i [X_i] the horizontal axis. A positive gradient means instability; a negative gradient means stability. If the gradient is zero, we look at the left and right neighboring gradients.
5.5 Existence and uniqueness of solution
Recall Peano's Existence Theorem (4.1) stating that if each F_i is continuous on B, then the
system of ODEs has a local solution (not necessarily unique), given any initial condition
X_0 ∈ B ⊆ R_≥0^n.
Also, recall the local and global existence-uniqueness theorems (4.2) and (4.3). If the
partial derivatives ∂F_i/∂[X_j], i, j = 1, 2, . . . , n, are continuous on B ⊆ R_≥0^n, then the system of
ODEs has a unique local solution given any initial condition X_0 ∈ B. Moreover, if the
absolute values of these partial derivatives are bounded for all X ∈ B, then the system of
ODEs has exactly one solution defined for all t ∈ [0, ∞).

Theorem 5.4 Suppose F_i : R_≥0^n → R, for i = 1, 2, . . . , n. Suppose that for all i, c_i is
of type 1. Then the generalized Cinquin-Demongeot ODE model (5.1) has exactly one
solution defined for all t ∈ [0, ∞) for any initial condition X_0 ∈ R_≥0^n.
Proof. Since K_i > 0, the denominator of the Hill function H_i (5.2) is not identically
zero for any X ∈ R_≥0^n. This implies that each F_i is defined and continuous on R_≥0^n. Since
for all i, c_i is of type 1, it follows that F_i is differentiable on R_≥0^n.
The partial derivative ∂F_i/∂[X_i] is as follows:

    ∂F_i/∂[X_i] = [ ( K_i + [X_i]^{c_i} + Σ_{j=1, j≠i}^{n} β_ij [X_j]^{c_ij} ) α_i c_i [X_i]^{c_i − 1} − α_i c_i [X_i]^{2c_i − 1} ] / ( K_i + [X_i]^{c_i} + Σ_{j=1, j≠i}^{n} β_ij [X_j]^{c_ij} )^2 − ρ_i.    (5.6)
The partial derivative ∂F_i/∂[X_l], i ≠ l, is as follows:

    ∂F_i/∂[X_l] = α_i [X_i]^{c_i} (−c_il) β_il [X_l]^{c_il − 1} / ( K_i + [X_i]^{c_i} + Σ_{j=1, j≠i}^{n} β_ij [X_j]^{c_ij} )^2.    (5.7)
The denominator

    ( K_i + [X_i]^{c_i} + Σ_{j=1, j≠i}^{n} β_ij [X_j]^{c_ij} )^2    (5.8)

in the partial derivative ∂F_i/∂[X_l] (for l = i and l ≠ i) is not identically zero for all X ∈ R_≥0^n.
Hence, all the partial derivatives ∂F_i/∂[X_l], i, l = 1, 2, . . . , n, are continuous on R_≥0^n.
Notice that the degree of the denominator (5.8) is greater than the degree of its
corresponding numerator in Equations (5.6) and (5.7). It follows that as the value of
at least one state variable approaches infinity, the value of ∂F_i/∂[X_i] approaches the
constant −ρ_i and the value of ∂F_i/∂[X_l] (l ≠ i) vanishes.
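These limits can be verified symbolically. A minimal SymPy check, with c_i = 2, a single inhibitor term with c_il = 1, and the parameter names assumed above:

```python
import sympy as sp

x, xl = sp.symbols('x x_l', positive=True)                 # [X_i] and one [X_l]
alpha, K, beta, rho, g = sp.symbols('alpha K beta rho g', positive=True)

c = 2  # an integer Hill coefficient, for illustration
Fi = alpha * x**c / (K + x**c + beta * xl) + g - rho * x   # F_i with one inhibitor

dFi_dxi = sp.diff(Fi, x)    # the partial derivative in (5.6)
dFi_dxl = sp.diff(Fi, xl)   # the partial derivative in (5.7)

# As [X_i] -> infinity the derivatives stay bounded:
lim_ii = sp.limit(dFi_dxi, x, sp.oo)   # tends to -rho
lim_il = sp.limit(dFi_dxl, x, sp.oo)   # vanishes
```

The same computation with other integer c_i confirms the boundedness argument used in the proof.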
Since ∂F_i/∂[X_l] (for l = i and l ≠ i) is continuous on R_≥0^n (i.e., there are no asymptotes
on R_≥0^n that would make the partial derivatives blow up) and ∂F_i/∂[X_l] (l = i and l ≠ i)
approaches a constant as the value of at least one state variable approaches infinity,
the absolute value of ∂F_i/∂[X_l] is bounded on R_≥0^n.
Therefore, the system has a unique solution defined for all t ∈ [0, ∞).
Proposition 5.5 Suppose F_i : R_≥0^n → R, for i = 1, 2, . . . , n. Suppose that for at least
one i, c_i is of type 2. Then the generalized Cinquin-Demongeot ODE model (5.1) has
a local solution (not necessarily unique) given [X_i]_0 = 0 as an initial value. Moreover,
the generalized Cinquin-Demongeot ODE model (5.1) has a unique local solution given
[X_i]_0 ≠ 0 as an initial value.
Proof. Since K_i > 0, the denominator of the Hill function H_i (5.2) is not identically
zero for any X ∈ R_≥0^n. This implies that each F_i is defined and continuous on R_≥0^n.
By Peano's Existence Theorem, the system has a local solution (not necessarily unique)
given [X_i]_0 = 0 as an initial condition.
Suppose that for at least one i, c_i is of type 2. Then for such an i, F_i is
differentiable on R_≥0^n except when [X_i] = 0. Note that the partial derivatives ∂F_i/∂[X_l],
i, l = 1, 2, . . . , n (see Equations (5.6) and (5.7)), are continuous on R_+^n (i.e., F_i is locally
Lipschitz continuous on R_+^n). Hence, the generalized Cinquin-Demongeot ODE model
(5.1) has a unique local solution given [X_i]_0 ≠ 0 as an initial value.
Remark: From the preceding proposition, at [X_i] > 0 the trajectory of our ODE model
(5.1) is unique, but when the trajectory passes through [X_i] = 0 the ODE model (5.1)
may (i.e., we are not sure) have more than one solution. Nevertheless, this will not affect
our analysis to effectively predict the behavior of our system when g_i = 0, since [X_i] = 0
is a component of a stable equilibrium point (i.e., [X_i] will stay zero as t → ∞). Thus,
assuming g_i = 0, the flow of our ODE model does not change its qualitative behavior
even if the trajectory passes through [X_i] = 0. See Figure (5.17) for illustration.
If g_i > 0, we can show that even with the assumption that c_i is of type 2 for at least
one i, our ODE model (5.1) can still have a unique solution defined for all t ∈ [0, ∞) by
restricting the domain of F_i.
Proposition 5.6 Suppose there are F_j having g_j > 0 and c_j of type 2. Suppose there
are no F_i, i ≠ j, having g_i = 0 and c_i of type 2. Then the generalized Cinquin-Demongeot
ODE model (5.1) has exactly one solution defined for all t ∈ [0, ∞) for any initial values
[X_j]_0 ∈ R_+ and [X_i]_0 ∈ R_≥0, i ≠ j.
Proof. Notice that for g_j > 0, [X_j] = 0 will never be a component of a stable equilibrium
point (see Figure (5.18)). This implies that we can reduce the space of positive invariance
associated to [X_j] (refer to Lemma (5.1)) from R_≥0 to R_+. Thus, for initial value [X_j] > 0,
we can restrict the domain of F_j with respect to the variable [X_j] to R_+ (i.e., we eliminate
the possibility of making [X_j] = 0 an initial value).
We follow the same flow of proof as in Theorem (5.4). Since we only consider [X_j] ∈
R_+, F_j is now differentiable everywhere. The absolute values of the partial derivatives
of F_j are continuous and bounded for all X where [X_j] ∈ R_+. Moreover, since there are
no F_i, i ≠ j, having g_i = 0 and c_i of type 2, it means that F_i must have c_i of type
1. The absolute values of the partial derivatives of F_i are continuous and bounded for all
X where [X_i] ∈ R_≥0.
Hence, we conclude that the generalized Cinquin-Demongeot ODE model (5.1) has
exactly one solution defined for all t ∈ [0, ∞) for any initial values [X_j]_0 ∈ R_+ and
[X_i]_0 ∈ R_≥0, i ≠ j.
Suppose g_j > 0 but [X_j]_0 = 0 is the j-th component of the initial condition. We
can still do numerical simulations to solve the ODE model (5.1) even when F_j is not
differentiable at [X_j] = 0. However, we need to do the numerical simulation with caution
because we are not sure if multiple solutions will arise. We can use a multivariate fixed
point algorithm to investigate the corresponding stable equilibrium point for this kind of
system.
Note that a c_ij of type 2 does not affect the existence and uniqueness of a solution
because [X_j]^{c_ij} only affects the shrinkage of the graph of H_i([X_i]) (see Figure (5.6)).
Figure 5.17: When g_i = 0, [X_i] = 0 is a component of a stable equilibrium point.
Figure 5.18: When g_j > 0, [X_j] = 0 will never be a component of an equilibrium point.
Chapter 6
Results and Discussion
Finding the Equilibrium Points
In this chapter, we determine the location and number of equilibrium points of the generalized
Cinquin-Demongeot ODE model (5.1). We only consider the biologically feasible
equilibrium points, those that are real-valued and nonnegative. For the following discussions,
recall that K_i > 0 ∀i. Appendix A contains illustrations related to this chapter.
6.1 Location of equilibrium points
Lemma 6.1 Given nonnegative state variables and parameters in (5.1), if g_i > 0 then
ρ_i > 0 is a necessary and sufficient condition for the existence of an equilibrium point.
Proof. Since [X_i] ≥ 0, g_i > 0 and all other parameters are nonnegative, the decay
term −ρ_i [X_i] with ρ_i > 0 is necessary for

    F_i(X) = α_i [X_i]^{c_i} / ( K_i + [X_i]^{c_i} + Σ_{j=1, j≠i}^{n} β_ij [X_j]^{c_ij} ) − ρ_i [X_i] + g_i

to be zero. The Hill curve induced by H_i([X_1], [X_2], . . . , [X_n]) (5.2) translated upwards
by g_i > 0 and the hyperplane induced by ρ_i [X_i] will always intersect when ρ_i > 0 and
will not intersect if ρ_i = 0 (see Figures (5.11) to (5.14)).
Remark: If g_i = 0 and ρ_i = 0 then we have an equilibrium point with zero i-th component
(i.e., [X_i]* = 0 in X* = (. . . , 0, . . .)), but this equilibrium point is obviously unstable.

Theorem 6.2 The generalized Cinquin-Demongeot ODE model (5.1) has an equilibrium
point with i-th component equal to zero (i.e., [X_i]* = 0) if and only if g_i = 0.
Proof. If g_i = 0 then

    F_i(X) = α_i [X_i]^{c_i} / ( K_i + [X_i]^{c_i} + Σ_{j=1, j≠i}^{n} β_ij [X_j]^{c_ij} ) − ρ_i [X_i] + 0 = 0

has [X_i] = 0 as a root of F_i(X) = 0. Furthermore, if [X_i] = 0 is a root of F_i(X) = 0,
then by substitution,

    α_i [0]^{c_i} / ( K_i + [0]^{c_i} + Σ_{j=1, j≠i}^{n} β_ij [X_j]^{c_ij} ) − ρ_i [0] + g_i = 0,

so g_i must be zero.
The following corollary is very important because the case where the trajectory converges
to the origin is trivial. This zero state neither represents a pluripotent cell nor a
cell differentiating into bone, cartilage or fat.
Corollary 6.3 The zero state (0, 0, . . . , 0) is an equilibrium point if and only
if g_i = 0 ∀i.
Proposition 6.4 Suppose ρ_i > 0. If both α_i > 0 and g_i > 0 then g_i/ρ_i cannot be an i-th
component of an equilibrium point.
Proof. Suppose α_i > 0, g_i > 0 and g_i/ρ_i is an i-th component of an equilibrium point. Then

    F_i([X_1], . . . , g_i/ρ_i, . . . , [X_n]) = α_i (g_i/ρ_i)^{c_i} / ( K_i + (g_i/ρ_i)^{c_i} + Σ_{j=1, j≠i}^{n} β_ij [X_j]^{c_ij} ) − ρ_i (g_i/ρ_i) + g_i = 0
                                            = α_i (g_i/ρ_i)^{c_i} / ( K_i + (g_i/ρ_i)^{c_i} + Σ_{j=1, j≠i}^{n} β_ij [X_j]^{c_ij} ) = 0,

implying that α_i (g_i/ρ_i)^{c_i} = 0. Thus α_i = 0 or g_i = 0, a contradiction.
Remark: If g_i, ρ_i > 0 then [X_i] = g_i/ρ_i can only be an i-th component of an equilibrium
point if α_i = 0.
Theorem 6.5 Suppose ρ_i > 0. The value (g_i + α_i)/ρ_i is the upper bound of, but will never be
equal to, [X_i]* (where [X_i]* is the i-th component of an equilibrium point). That is, every
equilibrium point lies in the hyperspace

    [ g_1/ρ_1, (g_1 + α_1)/ρ_1 ) × [ g_2/ρ_2, (g_2 + α_2)/ρ_2 ) × . . . × [ g_n/ρ_n, (g_n + α_n)/ρ_n ).    (6.1)

Proof. From Lemma (5.2), our system (5.1) always has an equilibrium point. Note that
[X_i]* < ∞ ∀i. Since the lower bound of H_i is zero, if H_i([X_1], [X_2], . . . , [X_n]) = 0
then F_i(X) = g_i − ρ_i [X_i] = 0, implying [X_i] = g_i/ρ_i.
The upper bound of H_i is α_i, which will only happen when [X_i] = ∞. If H_i([X_1], [X_2],
. . . , [X_n]) = α_i then F_i(X) = α_i − ρ_i [X_i] + g_i = 0, implying [X_i] = (g_i + α_i)/ρ_i (but note that
this is just an upper bound and cannot be a component of an equilibrium point). See
Figure (6.1) for illustration.
Remark: The Hill curve and the line Y = ρ_i [X_i] intersect at infinity when g_i → ∞, α_i → ∞ or ρ_i → 0.
Moreover, if we have multiple stable equilibrium points lying in the hyperspace (6.1),
then one strategy for increasing the basin of attraction of a stable equilibrium point is to
increase the value of α_i (however, the number of stable equilibrium points may change
under this strategy).
In Chapter 5 Section 5.4, we were able to show the existence of an equilibrium point,
but we do not know the value of the equilibrium point. Solving the system F_i(X) = 0,
i = 1, 2, . . . , n, can be interpreted as finding the intersections of the (n + 1)-dimensional
curves induced by each F_i(X) and the (n + 1)-dimensional zero-hyperplane.
Figure 6.1: Sample numerical solution in time series with the upper bound and lower
bound.
6.2 Cardinality of equilibrium points
In this section, we use the Bezout Theorem (4.5) and Sylvester resultant method to
determine the number and exact values of equilibrium points.
Suppose c_i and c_ij are integers for all i and j. The corresponding polynomial equation to (i = 1, 2, ..., n)

F_i(X) = β_i[X_i]^{c_i} / (K_i + [X_i]^{c_i} + Σ_{j=1,j≠i}^n γ_ij[X_j]^{c_ij}) − ρ_i[X_i] + g_i = 0 (6.2)

is

P_i(X) = β_i[X_i]^{c_i} + (g_i − ρ_i[X_i]) (K_i + [X_i]^{c_i} + Σ_{j=1,j≠i}^n γ_ij[X_j]^{c_ij}) = 0

= −ρ_i[X_i]^{c_i+1} + (β_i + g_i)[X_i]^{c_i} − (K_i + Σ_{j=1,j≠i}^n γ_ij[X_j]^{c_ij}) (ρ_i[X_i]) + g_i Σ_{j=1,j≠i}^n γ_ij[X_j]^{c_ij} + g_i K_i = 0. (6.3)
Theorem 6.6 Assume that all equations in the polynomial system (6.3) have no common factor of degree greater than zero given a certain set of parameter values. Then the number of equilibrium points of the generalized Cinquin-Demongeot ODE model (5.1) (where c_i and c_ij are integers) is at most

max{c_1 + 1, c_1j + 1 ∀j} · max{c_2 + 1, c_2j + 1 ∀j} · ... · max{c_n + 1, c_nj + 1 ∀j}.

Proof. The degree of P_i is deg_i = max{c_i + 1, c_ij + 1 ∀j}. Since for some parameter values we assume that all equations in the polynomial system (6.3) have no common factor of degree greater than zero, then by the Bezout Theorem (4.5) the number of complex solutions to the polynomial system is at most max{c_1 + 1, c_1j + 1 ∀j} · max{c_2 + 1, c_2j + 1 ∀j} · ... · max{c_n + 1, c_nj + 1 ∀j}. It follows that this is the upper bound of the number of equilibrium points.
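As a quick sanity check, the bound in Theorem 6.6 can be computed mechanically. The sketch below is a hypothetical helper written for this discussion; it evaluates the product of the degrees deg_i = max{c_i + 1, c_ij + 1 ∀j}:

```python
def bezout_bound(c, cij):
    """Bezout-type bound of Theorem 6.6: the product over i of
    deg_i = max{c_i + 1, c_ij + 1 for all j != i}."""
    n = len(c)
    bound = 1
    for i in range(n):
        bound *= max([c[i] + 1] + [cij[i][j] + 1 for j in range(n) if j != i])
    return bound

# n = 4 with every cooperativity equal to 2: each P_i has degree 3, so the
# bound is 3^4 = 81 complex solutions (hence at most 81 equilibrium points).
print(bezout_bound([2, 2, 2, 2], [[2] * 4 for _ in range(4)]))  # 81
```

With c_i = c_ij = 1 the same helper returns 2^n, matching the quadratic degree of each P_i in that case.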
The Bezout Theorem (4.5) does not give the exact number of equilibrium points but only the upper bound. Also, Theorem (6.6) is dependent on the values of c_i and c_ij as well as on n. According to Cinquin and Demongeot, manipulating the strength of cooperativity (c_i and c_ij) is of minimal biological relevance [38]. Nevertheless, the possible dependence of the number of equilibrium points on n (the dimension of our state space) has a biological implication. The dependence on n may be due to the potency of the cell.

It is necessary to check if all equations in the polynomial system have no common factor of degree greater than zero, because if they do then there will be infinitely many complex solutions. Recall from Theorem (4.7) that we can determine the existence of infinitely many complex solutions by checking if res(P_1, P_2; X_i) (the determinant of the Sylvester matrix) is identically zero, or by checking if P_1 and P_2 have a non-constant common factor.

However, the infinite number of complex solutions arises if [X_i] can take any complex value. There can be solutions with negative (and possibly complex-valued) components that have no biological importance. Consequently, we need to do ad hoc investigation to remove the solutions with negative or non-real-valued components and to check whether the infinite number of solutions still arises when the [X_i] ∀i are restricted to be nonnegative real numbers. It is possible that our polynomial system (6.3) has a finite number of nonnegative real solutions even though the system has a non-constant common factor.

In order to determine the exceptions, we determine the set of parameter values (where the strengths of cooperativity are integer-valued) that would give rise to a system of equations having a non-constant common factor. We have found one case (and this is the only case) where such a common factor exists.
Theorem 6.7 Suppose c_i = c_ij = 1, g_i = 0, γ_ij = 1, β_i = β_j = β > 0, ρ_i = ρ_j = ρ > 0 and K_i = K_j = K > 0, for all i and j. Then the ODE model (5.1) has infinitely many non-isolated equilibrium points if and only if β/ρ > K. Moreover, if β/ρ ≤ K then there is exactly one equilibrium point, which is the origin.
Proof. Recall (6.3): from the nonlinear system F_i(X) = 0 (i = 1, 2, ..., n),

β_i[X_i]^{c_i} / (K_i + [X_i]^{c_i} + Σ_{j=1,j≠i}^n γ_ij[X_j]^{c_ij}) − ρ_i[X_i] + g_i = 0,

we have the corresponding polynomial system P_i(X) = 0 (i = 1, 2, ..., n),

β_i[X_i]^{c_i} − ρ_i K_i[X_i] − ρ_i[X_i]^{c_i+1} − ρ_i[X_i] Σ_{j=1,j≠i}^n γ_ij[X_j]^{c_ij} + g_i K_i + g_i[X_i]^{c_i} + g_i Σ_{j=1,j≠i}^n γ_ij[X_j]^{c_ij} = 0.

Suppose c_i = c_ij = 1, g_i = 0, γ_ij = 1, β_i = β_j = β > 0, ρ_i = ρ_j = ρ > 0 and K_i = K_j = K > 0 (notice that we have a Michaelis-Menten-like symmetric system). Then the polynomial system can be written as (i = 1, 2, ..., n)

β[X_i] − ρK[X_i] − ρ[X_i]^2 − ρ[X_i] Σ_{j=1,j≠i}^n [X_j] = 0

ρ[X_i] (β/ρ − K − [X_i] − Σ_{j=1,j≠i}^n [X_j]) = 0

[X_i] = 0 or β/ρ − K − [X_i] − Σ_{j=1,j≠i}^n [X_j] = 0. (6.4)
Notice that the factor

β/ρ − K − [X_i] − Σ_{j=1,j≠i}^n [X_j] = β/ρ − K − Σ_{j=1}^n [X_j] (6.5)

is common to all equations in the polynomial system given the assumed parameter values. Thus, by Theorem (4.7), there are infinitely many complex solutions where [X_j] can be any complex number. However, note that we have restricted [X_j] to be nonnegative, so we do further investigation to determine the conditions for the existence of an infinite number of solutions given strictly nonnegative [X_j]. We focus our investigation on real-valued solutions.
Suppose B = β/ρ − K.

Case 1: If β/ρ = K then B = 0 and thus B − Σ_{j=1}^n [X_j] will never be zero except when [X_j] = 0 ∀j (since [X_j] can take only nonnegative values). Hence, the only equilibrium point of the system is the origin.

Case 2: If β/ρ < K then B < 0 and thus B − Σ_{j=1}^n [X_j] will always be negative and will not have any zero for any nonnegative value of [X_j]. Hence, the only equilibrium point is the origin (which is derived from [X_i] = 0, i = 1, 2, ..., n in Equation (6.4)).

Case 3: If β/ρ > K then B > 0 and thus there exist solutions to the equation B − Σ_{j=1}^n [X_j] = 0. Notice that the set of nonnegative real-valued solutions to B − Σ_{j=1}^n [X_j] = 0 is a hyperplane (e.g., it is a line for n = 2 and it is a plane for n = 3). Hence, there are infinitely many non-isolated equilibrium points when β/ρ > K.
Conversely, if the generalized Cinquin-Demongeot ODE model (5.1) with c_i = c_ij = 1, g_i = 0, γ_ij = 1, β_i = β_j = β > 0, ρ_i = ρ_j = ρ > 0 and K_i = K_j = K > 0 has infinitely many equilibrium points, then the model's corresponding polynomial system has a common factor of degree greater than zero. The only possible common factor is shown in (6.5). The only case where such a factor has infinitely many non-isolated nonnegative solutions is when β/ρ > K.
Corollary 6.8 Suppose c_i = c_ij = 1, g_i = 0, γ_ij = 1, β_i = β_j = β > 0, ρ_i = ρ_j = ρ > 0 and K_i = K_j = K > 0. If β/ρ > K then the equilibrium points of system (5.1) are the origin and the non-isolated points lying on the hyperplane with equation

Σ_{j=1}^n [X_j] = β/ρ − K, [X_j] ≥ 0 ∀j. (6.6)

Proof. This is a consequence of the proof of Theorem (6.7).
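Corollary 6.8 is easy to illustrate numerically. The following sketch uses forward Euler with hypothetical values β = 4, ρ = 1, K = 1 (so β/ρ > K); it integrates the symmetric system from a positive initial condition and checks that the trajectory lands on the hyperplane Σ_j [X_j] = β/ρ − K:

```python
beta, rho, K, n = 4.0, 1.0, 1.0, 3   # illustrative values with beta/rho = 4 > K

def F(x):
    # Symmetric case c_i = c_ij = 1, gamma_ij = 1, g_i = 0:
    # the denominator K + [X_i] + sum_{j != i} [X_j] collapses to K + sum_j [X_j].
    s = sum(x)
    return [beta * xi / (K + s) - rho * xi for xi in x]

x = [1.0, 0.5, 0.2]
dt = 0.01
for _ in range(20000):                # integrate up to t = 200
    x = [max(0.0, xi + dt * dxi) for xi, dxi in zip(x, F(x))]

print(sum(x))   # approaches beta/rho - K = 3
```

Different positive initial conditions converge to different points of the same hyperplane, which is exactly the non-isolated continuum of equilibria described above.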
Theorem 6.9 The generalized Cinquin-Demongeot ODE model (5.1) (where c_i and c_ij are integers) has a finite number of equilibrium points except when all of the following conditions are satisfied: c_i = c_ij = 1, g_i = 0, γ_ij = 1, β_i = β_j = β > 0, ρ_i = ρ_j = ρ > 0, K_i = K_j = K > 0 and β/ρ > K, for all i and j.

Proof. When c_i = c_ij = 1, g_i = 0, γ_ij = 1, β_i = β_j = β > 0, ρ_i = ρ_j = ρ > 0, K_i = K_j = K > 0 and β/ρ > K for all i and j then, by Theorem (6.7), the generalized Cinquin-Demongeot ODE model (5.1) has an infinite number of equilibrium points.

Now, suppose at least one of the following conditions is not satisfied: c_i = c_ij = 1, g_i = 0, γ_ij = 1, β_i = β_j = β > 0, ρ_i = ρ_j = ρ > 0, K_i = K_j = K > 0 and β/ρ > K, for all i and j. Recall the corresponding polynomial system P_i(X) = 0 (6.3) of our generalized Cinquin-Demongeot ODE model (5.1) (where c_i and c_ij are integers), which is (for i = 1, 2, ..., n)
β_i[X_i]^{c_i} − ρ_i K_i[X_i] − ρ_i[X_i]^{c_i+1} − ρ_i[X_i] Σ_{j=1,j≠i}^n γ_ij[X_j]^{c_ij} + g_i K_i + g_i[X_i]^{c_i} + g_i Σ_{j=1,j≠i}^n γ_ij[X_j]^{c_ij} = 0. (6.7)

Suppose g_i = 0. The factorization of the polynomials in the above polynomial system (6.7) is of the form:

[X_i] (β_i[X_i]^{c_i−1} − ρ_i K_i − ρ_i[X_i]^{c_i} − ρ_i Σ_{j=1,j≠i}^n γ_ij[X_j]^{c_ij}). (6.8)
The factor [X_i] is definitely not a common factor of our system. Moreover, there will always be a [X_i]^{c_i−1} term in the factor β_i[X_i]^{c_i−1} − ρ_i K_i − ρ_i[X_i]^{c_i} − ρ_i Σ_{j=1,j≠i}^n γ_ij[X_j]^{c_ij}, which makes β_i[X_i]^{c_i−1} − ρ_i K_i − ρ_i[X_i]^{c_i} − ρ_i Σ_{j=1,j≠i}^n γ_ij[X_j]^{c_ij} not a common factor of our system.
For example, suppose g_i = 0, γ_ij = 1, β_i = β, ρ_i = ρ, K_i = K and c_i = c_ij for all i and j; then we have

Factor in equation 1: β[X_1]^{c_1−1} − ρK − ρ Σ_{j=1}^n [X_j]^{c_j}

Factor in equation 2: β[X_2]^{c_2−1} − ρK − ρ Σ_{j=1}^n [X_j]^{c_j} (6.9)

...

Factor in equation n: β[X_n]^{c_n−1} − ρK − ρ Σ_{j=1}^n [X_j]^{c_j}.
Notice that the presence of [X_i]^{c_i−1} in equation i makes at least one factor unique (at least one because at most n − 1 equations may satisfy the restriction: c_i = c_ij = 1, g_i = 0, γ_ij = 1, β_i = β_j = β > 0, ρ_i = ρ_j = ρ > 0, K_i = K_j = K > 0 and β/ρ > K, and at least one equation does not).
Suppose g_i ≠ 0 for at least one i. By the above proof (for the g_j = 0, j ≠ i) as well as by the presence of [X_i]^{c_i} (in the first term of Equation (6.7)) and [X_i] (in the second, third and fourth terms of Equation (6.7)), the polynomials in the polynomial system (6.7) are collectively relatively prime.
For example, suppose γ_ij = 1, β_i = β, ρ_i = ρ, K_i = K, g_i = g and c_i = c_ij for all i and j; then we have

β[X_1]^{c_1} − ρK[X_1] − ρ[X_1] Σ_{j=1}^n [X_j]^{c_j} + gK + g Σ_{j=1}^n [X_j]^{c_j}

β[X_2]^{c_2} − ρK[X_2] − ρ[X_2] Σ_{j=1}^n [X_j]^{c_j} + gK + g Σ_{j=1}^n [X_j]^{c_j} (6.10)

...

β[X_n]^{c_n} − ρK[X_n] − ρ[X_n] Σ_{j=1}^n [X_j]^{c_j} + gK + g Σ_{j=1}^n [X_j]^{c_j}.
Notice that the presence of [X_i] in equation i makes at least one equation relatively prime to the others (at least one because at most n − 1 equations may satisfy the restriction: c_i = c_ij = 1, g_i = 0, γ_ij = 1, β_i = β_j = β > 0, ρ_i = ρ_j = ρ > 0, K_i = K_j = K > 0 and β/ρ > K, and at least one equation does not).

Therefore, by Theorem (4.7), there is a finite number of equilibrium points.
The equilibrium points ([X_1]*, [X_2]*, [X_3]*, 0) of a system with n = 4 and g_4 = 0 are exactly the equilibrium points of the corresponding system with n = 3. Generally, we state the following theorem:
Theorem 6.10 Suppose g_n = 0. Then the n-dimensional system is more general than the (n − 1)-dimensional system. That is, we can derive the equilibrium points of the (n − 1)-dimensional system by getting the equilibrium points of the n-dimensional system where [X_n]* = 0.

Proof. When [X_n]* = 0 and g_n = 0, the n-dimensional system reduces to an (n − 1)-dimensional system.
In the following illustrations, we show how to find equilibrium points using the Sylvester resultant. We assign specific values to some parameters. Let us consider our simplified MacArthur et al. GRN where n = 4 (5.3).
6.2.1 Illustration 1
Consider that all parameters are equal to 1 except for g_2 = g_3 = g_4 = 0. We have the following polynomial system:
P_1([X_1], [X_2], [X_3], [X_4]) = [X_1] − [X_1](1 + [X_1] + [X_2] + [X_3] + [X_4]) + (1 + [X_1] + [X_2] + [X_3] + [X_4]) = 0

P_2([X_1], [X_2], [X_3], [X_4]) = [X_2] − [X_2](1 + [X_1] + [X_2] + [X_3] + [X_4]) = 0 (6.11)

P_3([X_1], [X_2], [X_3], [X_4]) = [X_3] − [X_3](1 + [X_1] + [X_2] + [X_3] + [X_4]) = 0

P_4([X_1], [X_2], [X_3], [X_4]) = [X_4] − [X_4](1 + [X_1] + [X_2] + [X_3] + [X_4]) = 0.
The Sylvester matrix associated with P_1 and P_2 with [X_1] as variable is

[ −1         1 − [X_2] − [X_3] − [X_4]              1 + [X_2] + [X_3] + [X_4]
  −[X_2]     −[X_2]^2 − [X_2][X_3] − [X_2][X_4]     0
  0          −[X_2]                                 −[X_2]^2 − [X_2][X_3] − [X_2][X_4] ]. (6.12)

Then res(P_1, P_2; [X_1]) = [X_2]^2. Therefore, [X_2]* = 0.
By doing the same procedure as above, res(P_1, P_3; [X_1]) = [X_3]^2 and res(P_1, P_4; [X_1]) = [X_4]^2. Hence, [X_3]* = [X_4]* = 0. Substituting [X_2]* = [X_3]* = [X_4]* = 0 in P_1 we have −[X_1]^2 + [X_1] + 1 = 0. This means that [X_1]* = (1 + √5)/2 (we disregard (1 − √5)/2 because this is negative).

Therefore, we only have one equilibrium point, ((1 + √5)/2, 0, 0, 0).
6.2.2 Illustration 2
Consider that all parameters are equal to 1 except for c_i = c_ij = 2 and g_i = 0, ∀i, j = 1, 2, 3, 4. We have the following polynomial system:
P_1([X_1], [X_2], [X_3], [X_4]) = [X_1]^2 − [X_1](1 + [X_1]^2 + [X_2]^2 + [X_3]^2 + [X_4]^2) = 0

P_2([X_1], [X_2], [X_3], [X_4]) = [X_2]^2 − [X_2](1 + [X_1]^2 + [X_2]^2 + [X_3]^2 + [X_4]^2) = 0 (6.13)

P_3([X_1], [X_2], [X_3], [X_4]) = [X_3]^2 − [X_3](1 + [X_1]^2 + [X_2]^2 + [X_3]^2 + [X_4]^2) = 0

P_4([X_1], [X_2], [X_3], [X_4]) = [X_4]^2 − [X_4](1 + [X_1]^2 + [X_2]^2 + [X_3]^2 + [X_4]^2) = 0.
The Sylvester matrix associated with P_1 and P_2 with [X_1] as variable is

[ a_11  a_12  a_13  a_14  0
  0     a_11  a_12  a_13  a_14
  a_31  a_32  a_33  0     0
  0     a_31  a_32  a_33  0
  0     0     a_31  a_32  a_33 ] (6.14)

where a_11 = −1, a_12 = 1, a_13 = −1 − [X_2]^2 − [X_3]^2 − [X_4]^2, a_14 = 0, a_31 = −[X_2], a_32 = 0 and a_33 = [X_2]^2 − [X_2] − [X_2]^3 − [X_2][X_3]^2 − [X_2][X_4]^2. Then

res(P_1, P_2; [X_1]) = [X_2]^3 ([X_2] + [X_4]^2 + 2[X_2]^2 + [X_3]^2 + 1)([X_2] + [X_4]^2 + [X_2]^2 + [X_3]^2 + 1). (6.15)
Notice that the factors [X_2] + [X_4]^2 + 2[X_2]^2 + [X_3]^2 + 1 and [X_2] + [X_4]^2 + [X_2]^2 + [X_3]^2 + 1 in (6.15) do not have real zeros. Therefore, [X_2]* = 0.

Since the system is symmetric, it follows that [X_1]* = [X_3]* = [X_4]* = 0, too. Hence, we only have one equilibrium point, which is the origin.

Additional illustrations are presented in Appendix A (for n = 2 and n = 3).
When all parameters are equal to 1 except for c_i = c_ij = 2 and g_i = 0 for all i, j, the only equilibrium point is the origin. Actually, this kind of system is the original Cinquin-Demongeot ODE model [38] without leak where σ = 1 and c = 2 (refer to system (3.4)). We state the following proposition:
Proposition 6.11 If c_i > 1, g_i = 0, K_i ≥ 1, β_i = 1 and ρ_i = 1 for all i, then our system has only one equilibrium point, which is the origin.
Proof. Let us first consider the case where [X_j] = 0 ∀j ≠ i and K_i = 1. The graphs of Y = H_i([X_i]) (with increasing value of c_i) and Y = [X_i] are illustrated in Figure (6.2). If c_i → ∞ then [X_i]^{c_i} → ∞ for any [X_i] > 1. As [X_i]^{c_i} → ∞ (with [X_i] > 1), the univariate Hill function H_i([X_i]) → 1. Hence, the univariate Hill curve Y = H_i([X_i]) will never touch the point (1, 1) lying on Y = [X_i] for finite c_i.

Now, as the values of γ_ij[X_j], ∀j ≠ i, and K_i increase, the univariate Hill curve Y = H_i([X_i]) will just shrink and will definitely not intersect the decay line Y = [X_i] except at the origin (see Chapter 5 Section 5.3 for the discussion regarding the geometry of the Hill curve).

Hence, for any value of [X_i] and [X_j] (for all j ≠ i), the univariate Hill curve Y = H_i([X_i]) will only intersect the decay line Y = [X_i] at the origin. In other words,

[X_i]^{c_i} / (K + [X_i]^{c_i} + Σ_{j=1,j≠i}^n γ_ij[X_j]^{c_ij}) < [X_i] (6.16)

except when [X_i] = 0.

Figure 6.2: Y = [X_i]^{c_i} / (K + [X_i]^{c_i}) will never touch the point (1, 1) for 1 < c_i < ∞.
Proposition (6.11) implies that the system with c_i > 1, g_i = 0, K_i ≥ 1, β_i = 1 and ρ_i = 1 for all i, j represents a trivial case (i.e., the fate of the cell is not in the domain of our GRN, or the cell is in a quiescent stage). This is not the only set of parameters that gives a trivial case. A generalization of the above Proposition (6.11) is stated in the following statement:

Proposition 6.12 If c_i > 1, g_i = 0 and

ρ_i K_i^{1/c_i} ≥ β_i (6.17)

for all i, then our system has only one equilibrium point, which is the origin.
Proof. Let us first consider the case where [X_j] = 0, for all j ≠ i. Recall that the upper bound of H_i([X_i]) is β_i. Also, recall that when [X_i] = K_i^{1/c_i} then H_i([X_i]) = β_i/2 (see Section 3.2 in Chapter 3). Note that (K_i^{1/c_i}, β_i/2) is the inflection point of our univariate Hill curve. We substitute [X_i] = K_i^{1/c_i} in the decay function Y = ρ_i[X_i], and if the value of ρ_i K_i^{1/c_i} is larger than or equal to the value of the upper bound β_i then Y = H_i([X_i]) and Y = ρ_i[X_i] only intersect at the origin. See Figure (6.3) for illustration.

Figure 6.3: An example where ρ_i K_i^{1/c_i} > β_i; Y = H_i([X_i]) and Y = ρ_i[X_i] only intersect at the origin.
Now, as the values of γ_ij[X_j] for all j ≠ i increase, the univariate Hill curve Y = H_i([X_i]) will just shrink and will definitely not intersect the decay line Y = ρ_i[X_i] except at the origin.

However, note that Proposition (6.12) is a sufficient but not a necessary condition. There are some cases where ρ_i K_i^{1/c_i} < β_i yet Y = H_i([X_i]) and Y = ρ_i[X_i] only intersect at the origin.
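A quick numerical check of the sufficient condition (6.17): the sketch below uses hypothetical values chosen so that ρ_i K_i^{1/c_i} = β_i exactly (the boundary case), and confirms that the univariate Hill curve stays strictly below the decay line for all sampled [X_i] > 0:

```python
c, K, beta = 3, 8.0, 1.5          # illustrative values (not from the thesis)
rho = beta / K ** (1.0 / c)       # boundary case: rho * K^(1/c) = beta

def H(x):
    """Univariate Hill function with [X_j] = 0 for all j != i."""
    return beta * x ** c / (K + x ** c)

xs = [0.01 * k for k in range(1, 5001)]          # grid over (0, 50]
assert all(H(x) < rho * x for x in xs)           # origin is the only intersection
print("H([X_i]) < rho*[X_i] for all sampled [X_i] > 0")
```

At the inflection abscissa K^{1/c} = 2 the Hill function equals β/2 = 0.75 while the decay line sits at β = 1.5, matching the geometric argument in the proof.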
Corollary 6.13 If c_i > 1, g_i = 0, K_i ≥ 1 and ρ_i ≥ β_i for all i, then our system has only one equilibrium point, which is the origin.

Proof. Since ρ_i ≥ β_i and K_i ≥ 1, then ρ_i K_i^{1/c_i} ≥ β_i. Then we invoke Proposition (6.12).
For c_i = 1 and g_i = 0, we state the following proposition:
Proposition 6.14 Suppose c_i = 1, g_i = 0 and β_i/K_i ≤ ρ_i for all i. Then our system has only one equilibrium point, which is the origin.
Proof. Let us first consider the case where [X_j] = 0, for all j ≠ i. Recall that Y = H_i([X_i]) where c_i = 1 is a hyperbolic curve. The partial derivative

∂H_i/∂[X_i] = ∂/∂[X_i] (β_i[X_i] / (K_i + [X_i])) = K_i β_i / (K_i + [X_i])^2 (6.18)

means that the slope of the hyperbolic curve is monotonically decreasing as [X_i] increases. The partial derivative at [X_i] = 0 is

∂H_i/∂[X_i] = β_i/K_i ≤ ρ_i, (6.19)

which means that the slope of Y = H_i([X_i]) at [X_i] = 0 is at most the slope of the decay line Y = ρ_i[X_i] at [X_i] = 0. Hence, the Hill curve Y = H_i([X_i]) lies below the decay line for all [X_i] > 0.
Suppose c_i ≥ 1 and g_i = 0 for all i. In general, the origin is the only equilibrium point of our ODE model (5.1) if and only if the univariate curve Y = H_i([X_i]) lies below the decay line Y = ρ_i[X_i] (i.e., H_i([X_i]) < ρ_i[X_i], ∀[X_i] > 0) for all i. This statement is similar to Theorem (7.6) in the next chapter.
Remark: We have seen the importance of the univariate Hill function H_i([X_i]). For instance, when n = 1, c_1 = 2, β_1 > 0 and g_1 = 0, the Hill curve and the decay line intersect at

[X_1]* = 0, β_1/(2ρ_1) ± √(β_1^2/(4ρ_1^2) − K_1). (6.20)

Notice that the equilibrium points depend on the parameters β_1, ρ_1 and K_1.
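The closed form (6.20) can be verified by direct substitution. A small sketch with hypothetical values β_1 = 4, ρ_1 = 1, K_1 = 2, chosen so that β_1^2/(4ρ_1^2) > K_1 and both nonzero roots are real:

```python
import math

beta, rho, K = 4.0, 1.0, 2.0          # illustrative values; beta^2/(4 rho^2) = 4 > K

def F(x):
    """F(x) = H(x) - rho*x for n = 1, c = 2, g = 0."""
    return beta * x ** 2 / (K + x ** 2) - rho * x

disc = math.sqrt(beta ** 2 / (4 * rho ** 2) - K)
fixed_points = [0.0, beta / (2 * rho) - disc, beta / (2 * rho) + disc]
for x in fixed_points:
    assert abs(F(x)) < 1e-9           # each candidate is indeed an equilibrium
print(fixed_points)
```

For these values the nonzero equilibria are 2 − √2 and 2 + √2; shrinking β_1 or growing K_1 until β_1^2/(4ρ_1^2) < K_1 makes them disappear, leaving only the origin.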
According to Cinquin and Demongeot [38], a sufficiently large c coupled with a sufficiently large β are needed for the existence of an equilibrium point with a component dominating the other components. Moreover, decreasing the value of ρ_i or adding the term g_i may result in an increased value of [X_i]*.
Chapter 7
Results and Discussion
Stability of Equilibria and Bifurcation
We determine the stability of the equilibrium points of the generalized Cinquin-Demongeot
(2005) ODE model (5.1) for a given set of parameters. We also identify if varying the
values of some parameters, such as those associated with the exogenous stimuli, can steer
the system towards a desired state.
7.1 Stability of equilibrium points
Recall Lemma (5.2): Suppose ρ_i > 0 for all i. Then the generalized Cinquin-Demongeot ODE model (5.1) with X_0 ∈ R^n always has a stable equilibrium point. Moreover, any trajectory of the model will converge to a stable equilibrium point.
Theorem 7.1 Given a set of parameters where ρ_i > 0 for all i, if the system (5.1) has only one equilibrium point then this point is stable.
Proof. This is a consequence of Lemma (5.2).
The following theorem assures us that our system (for any dimension n) will never have an asymptotically stable limit cycle:

Theorem 7.2 Suppose ρ_i > 0 for all i; then any trajectory of our system (5.1) never converges to a neutrally stable center, to an ω-limit cycle, or to a strange attractor. This also implies that (5.1) will never have an asymptotically stable limit cycle.

Proof. Since for any nonnegative initial condition the trajectory of the ODE model converges to a stable equilibrium point (see Lemma (5.2)), any trajectory will never stay orbiting a center, will never converge to an ω-limit cycle, and will never converge to a strange attractor.

Moreover, suppose an ω-limit cycle exists. Then given some initial condition, the trajectory of the system converges to this ω-limit cycle. This contradicts Lemma (5.2), which states that any trajectory always converges to a stable equilibrium point.
Now, the following is the Jacobian of our system:

JF(X) = [ a_11  a_12  ...  a_1n
          a_21  a_22  ...  a_2n
          ...
          a_n1  a_n2  ...  a_nn ] (7.1)

where

a_ii = ∂F_i/∂[X_i] = [ (K_i + [X_i]^{c_i} + Σ_{j=1,j≠i}^n γ_ij[X_j]^{c_ij}) β_i c_i[X_i]^{c_i−1} − β_i c_i[X_i]^{2c_i−1} ] / (K_i + [X_i]^{c_i} + Σ_{j=1,j≠i}^n γ_ij[X_j]^{c_ij})^2 − ρ_i

= (K_i + Σ_{j=1,j≠i}^n γ_ij[X_j]^{c_ij}) β_i c_i[X_i]^{c_i−1} / (K_i + [X_i]^{c_i} + Σ_{j=1,j≠i}^n γ_ij[X_j]^{c_ij})^2 − ρ_i (7.2)

a_il = ∂F_i/∂[X_l] = β_i[X_i]^{c_i} (−c_il) γ_il[X_l]^{c_il−1} / (K_i + [X_i]^{c_i} + Σ_{j=1,j≠i}^n γ_ij[X_j]^{c_ij})^2, i ≠ l. (7.3)
Notice that ∂F_i/∂[X_i] > 0 if

ρ_i < (K_i + Σ_{j=1,j≠i}^n γ_ij[X_j]^{c_ij}) β_i c_i[X_i]^{c_i−1} / (K_i + [X_i]^{c_i} + Σ_{j=1,j≠i}^n γ_ij[X_j]^{c_ij})^2. (7.4)

Hence, if ρ_i = 0 then the value of F_i will always increase as [X_i] increases. That is why when ρ_i = 0 for at least one i, we do not have a stable equilibrium point. Moreover, ∂F_i/∂[X_l], i ≠ l, is always non-positive because as [X_l] increases, the value of F_i decreases.
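The entries (7.2) and (7.3) can be spot-checked with finite differences. The sketch below is a hypothetical two-gene instance with c_i = c_ij = 1, γ_ij = 1 and g_i = 0 (illustrative parameter values); it approximates the Jacobian at the origin and recovers diagonal entries β_i/K_i − ρ_i with vanishing off-diagonals, which is exactly the structure used in the proof of Theorem 7.3:

```python
beta = [2.0, 3.0]; rho = [1.0, 1.0]; K = [1.0, 2.0]   # illustrative values

def F(x):
    # c_i = c_ij = 1, gamma_ij = 1, g_i = 0
    return [beta[i] * x[i] / (K[i] + x[i] + sum(x[j] for j in range(2) if j != i))
            - rho[i] * x[i] for i in range(2)]

h = 1e-6   # forward-difference step
J = [[(F([h if k == l else 0.0 for k in range(2)])[i] - F([0.0, 0.0])[i]) / h
      for l in range(2)] for i in range(2)]

for i in range(2):
    assert abs(J[i][i] - (beta[i] / K[i] - rho[i])) < 1e-4   # matches (7.2) at 0
    assert abs(J[i][1 - i]) < 1e-9                           # (7.3) vanishes at 0
print(J)
```

With these assumed values β_1/K_1 = 2 > ρ_1, so one eigenvalue is positive and the origin is unstable, consistent with Theorem 7.3.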
Theorem 7.3 In our system (5.1), suppose g_i = 0 and c_i = 1 ∀i. Then the origin is a stable equilibrium point when ρ_i > β_i/K_i ∀i, or an unstable equilibrium point when ρ_i < β_i/K_i for at least one i. When ρ_i = β_i/K_i for at least one i, then we have a nonhyperbolic equilibrium point, which is an attractor if [X_i] is restricted to be nonnegative and ρ_j ≥ β_j/K_j ∀j ≠ i.
Proof. The characteristic polynomial associated with the Jacobian of our system when X = (0, 0, ..., 0) is

|JF(0) − λI| = | β_1/K_1 − ρ_1 − λ   0                    ...   0
                0                    β_2/K_2 − ρ_2 − λ    ...   0
                ...
                0                    0                    ...   β_n/K_n − ρ_n − λ |

= (β_1/K_1 − ρ_1 − λ)(β_2/K_2 − ρ_2 − λ) ... (β_n/K_n − ρ_n − λ). (7.5)

The eigenvalues (λ) are β_1/K_1 − ρ_1, β_2/K_2 − ρ_2, ..., β_n/K_n − ρ_n. Therefore, the zero vector is a stable equilibrium point when β_i/K_i − ρ_i < 0, i.e., ρ_i > β_i/K_i, ∀i. The zero vector is an unstable equilibrium point when β_i/K_i − ρ_i > 0, i.e., ρ_i < β_i/K_i, for at least one i.
Figure 7.1: When g
i
= 0, c
i
= 1 and the decay line is tangent to the univariate Hill
curve at the origin, then the origin is a saddle.
If β_i/K_i − ρ_i = 0, i.e., ρ_i = β_i/K_i, for at least one i then we have a nonhyperbolic equilibrium point. Geometrically, we can see that this is a saddle: stable at the right and unstable at the left of [X_i]* = 0.

Theorem 7.4 Suppose g_i = 0 and c_i > 1 for all i. Then the origin is a stable equilibrium point.

Proof. The characteristic polynomial associated with the Jacobian of our system when X = (0, 0, ..., 0) is

|JF(0) − λI| = | −ρ_1 − λ   0          ...   0
                0           −ρ_2 − λ   ...   0
                ...
                0           0          ...   −ρ_n − λ |

= (−ρ_1 − λ)(−ρ_2 − λ) ... (−ρ_n − λ). (7.6)

The eigenvalues (λ) are −ρ_1, −ρ_2, ..., −ρ_n, which are all negative. Therefore, the zero state is a stable equilibrium point.
We can vary the size of the basin of attraction of the stable zero i-th component (or of any lower-valued stable component) of an equilibrium point by varying the value of β_i or K_i, or sometimes by varying the value of ρ_i. Let us consider Figure (7.2) for illustration. In Figure (7.2), the original basin of attraction of the origin is [0, +∞), but increasing the value of β_i decreases the basin of attraction to [0, C). Decreasing the value of K_i decreases the basin of attraction of the origin to [0, A), and decreasing the value of ρ_i decreases the basin of attraction of the origin to [0, B).

Figure 7.2: Varying the values of parameters may vary the size of the basin of attraction of the lower-valued stable intersection of Y = H_i([X_i]) + g_i and Y = ρ_i[X_i].
In addition, the size of the basin of attraction of an equilibrium point depends on the number of existing equilibrium points and on the size of the hyperspace (6.1). Given specific parameter values, the hyperspace (6.1) is fixed and the basin of attraction of each existing equilibrium point is distributed in this hyperspace. If there are multiple stable equilibrium points then there are multiple basins of attraction that share the size of the hyperspace.
Now, we propose two additional methods for determining the stability of equilibrium points other than the usual numerical methods for solving ODEs and other than using the Jacobian: using a multivariate fixed point algorithm and using ad hoc geometric analysis. We discuss the multivariate fixed point algorithm in Appendix B. We prove the following theorems using ad hoc geometric analysis.

Theorem 7.5 Suppose c_i > 1 and g_i = 0. Then [X_i]* = 0 is a component of a stable equilibrium point, where the value of Σ_{j=1,j≠i}^n γ_ij[X_j]^{c_ij} is taken as a parameter.

Theorem (7.5) is very important because this proves that when the pluripotency module (where g_4 = 0, see discussion in Chapter 5 Section 5.2) is switched off then it can never be switched on again, unless we make g_4 > 0 or introduce some random noise. This is consistent with the observation of MacArthur et al. in [113].
Theorem 7.6 Suppose c_i ≥ 1 and g_i = 0 for all i. The only stable equilibrium point of our ODE model (5.1) is the origin if and only if the univariate curve Y = H_i([X_i]) essentially lies below the decay line Y = ρ_i[X_i] (i.e., H_i([X_i]) ≤ ρ_i[X_i], ∀[X_i] > 0) for all i.
Proof. Suppose the curve Y = H_i([X_i]) essentially lies below the decay line Y = ρ_i[X_i]; then the intersections can be of any of the forms shown in Figure (7.4). It is clear that zero is the only stable intersection.

Figure 7.4: The possible topologies when Y = H_i([X_i]) essentially lies below the decay line Y = ρ_i[X_i], g_i = 0.

Conversely, suppose the only stable equilibrium point is the origin. Hence [X_j], j ≠ i, must converge to zero. We substitute [X_j]* = 0, j ≠ i, into H_i([X_1], [X_2], ..., [X_n]) (5.2). The intersections of Y = H_i(0, ..., [X_i], ..., 0) = H_i([X_i]) and Y = ρ_i[X_i] must contain the origin (since we assumed that the origin is an equilibrium point). Looking at the possible topologies of the intersections of Y = H_i([X_i]) and Y = ρ_i[X_i] (see Figures (5.11) and (5.13)), zero can only be the sole stable intersection if the intersection is of any of the forms shown in Figure (7.4). Therefore, the curve Y = H_i([X_i]) essentially lies below the decay line Y = ρ_i[X_i].
Theorem 7.7 Suppose c_i = c_ij = 1, g_i = 0, γ_ij = 1, β_i = β_j = β > 0, ρ_i = ρ_j = ρ > 0, K_i = K_j = K > 0 and β/ρ > K. Then the origin is an unstable equilibrium point of the system (5.1) while the points lying on the hyperplane

Σ_{j=1}^n [X_j] = β/ρ − K (7.7)

are stable equilibrium points.

Proof. From Corollary (6.8), the origin and the points lying on the hyperplane are equilibrium points of the system (5.1) given the assumed parameter values.
Suppose Σ_{j=1,j≠i}^n [X_j] = 0 in the denominator of H_i (5.2). At [X_i] = 0, the slope of the Hill curve Y = H_i([X_i]) is

∂H_i/∂[X_i] = β/K. (7.8)

Since β/ρ > K, then β/K > ρ. This implies that the slope of Y = H_i([X_i]) at [X_i] = 0 is greater than the slope of the decay line Y = ρ[X_i]. Therefore, when Σ_{j=1,j≠i}^n [X_j] = 0 in the denominator of H_i (5.2), there are two possible intersections of Y = H_i([X_i]) and Y = ρ[X_i]: the intersection at the origin (which is unstable) and at [X_i] = β/ρ − K (which is stable).

Now, suppose Σ_{j=1,j≠i}^n [X_j] in the denominator of H_i varies. Then the intersections of Y = H_i([X_i]) and Y = ρ[X_i] are at the origin (which is unstable) and at [X_i] = β/ρ − K − Σ_{j=1,j≠i}^n [X_j] (which is stable). Hence, the hyperplane [X_i] = β/ρ − K − Σ_{j=1,j≠i}^n [X_j], where [X_i] and [X_j] are nonnegative, is a set of stable equilibrium points. See Figure (7.5) for illustration.

Figure 7.5: The intersections at [X_i] = β/ρ − K − Σ_{j=1,j≠i}^n [X_j] are stable.
In GRNs, the existence of infinitely many non-isolated equilibrium points is biologically volatile. A small perturbation in the initial value of the system may lead the trajectory of the system to converge to a different equilibrium point, which may result in a change in the phenotype of the cell. The basin of attraction of each stable non-isolated equilibrium point may not be as large compared to the basin of attraction of a stable isolated equilibrium point. This special phenomenon represents competition where the co-expression, extinction and domination of the TFs depend on the value of each TF, and the dependence among TFs is a continuum. The existence of an attracting hyperplane was also discovered by Cinquin and Demongeot in [38].
7.2 Bifurcation of parameters
We have seen in Chapter 6 and in Section 7.1 (also see Appendix C) the effect of the parameters β_i, K_i, ρ_i, c_i, c_ij and g_i on the number, the size of the basins of attraction, and the behavior of equilibrium points. Varying the values of some parameters can decrease the size of the basin of attraction of an undesirable equilibrium point as well as increase the size of the basin of attraction of a desirable equilibrium point. We can mathematically manipulate the parameter values to ensure that the initial condition is in the basin of attraction of our desired equilibrium point.
Intuitively, we can make the i-th component of an equilibrium point dominate other components by increasing β_i or g_i or, in some instances, by decreasing ρ_i. Decreasing the value of K_i or sometimes increasing the value of c_i minimizes the size of the basin of attraction of the lower-valued stable intersection of Y = H_i([X_i]) + g_i and Y = ρ_i[X_i]; thus, the chance of converging to an equilibrium point with [X_i]* > [X_j]* ∀j ≠ i may increase. However, the effect of K_i and c_i in increasing the value of [X_i]* is not as drastic compared to β_i, g_i and ρ_i, since K_i and c_i do not affect the upper bound of the hyperspace (6.1). In addition, increasing the value of c_i or of c_ij may result in an increased number of equilibrium points, and probably in multistability (by Theorem (6.6)).
We show in Appendix C some numerical bifurcation analysis to illustrate possible bifurcation types that may occur.

In this section, we determine how to obtain an equilibrium point that has an i-th component sufficiently dominating the other components, especially by introducing an exogenous stimulus. We focus on the parameter g_i because the introduction of an exogenous stimulus is experimentally feasible.
Increasing the effect of exogenous stimuli
If we increase the value of g_i up to a sufficient level, then we can increase the value of [X_i] where Y = H_i([X_i]) + g_i and Y = ρ_i[X_i] intersect. We can also make such an increased [X_i] the only intersection. See Figure (7.6) for illustration.
Moreover, as we increase the value of g_i up to a sufficient level, we increase the possible value of [X_i]*. Since [X_i] inhibits [X_j], then as we increase the value of [X_i]*, we can decrease the value of [X_j], j ≠ i, where Y = H_j([X_j]) + g_j and Y = ρ_j[X_j] intersect. We can also make such a decreased [X_j] the only intersection. If g_j = 0, we can make [X_j] = 0 the only intersection of Y = H_j([X_j]) and Y = ρ_j[X_j]. See Figure (7.7) for illustration.
Figure 7.6: Increasing the value of g_i can result in an increased value of [X_i] where Y = H_i([X_i]) + g_i and Y = ρ_i[X_i] intersect.
Therefore, by changing the value of g_i we can have a sole stable equilibrium point where the i-th component dominates the others. For any initial condition, the trajectory of the ODE model (5.1) will converge to this sole equilibrium point. By varying the value of g_i, we can manipulate the cell fate of a stem cell, controlling the tripotency, bipotency, unipotency and terminal state of the cell. We present illustrations in Appendix C showing the effect of increasing the value of g_i.
Remark: Suppose, given a specific initial condition, the solution to our system tends to an equilibrium point with [X_i]* = 0; then one strategy is to add g_i > 0. The idea of adding a sufficient amount of g_i > 0 is to make the solution of our system escape a certain equilibrium point. However, it is sometimes impractical or infeasible to continuously introduce a constant amount of exogenous stimulus to control cell fates, which is why we may rather consider introducing g_i that degrades through time.
We can make g_i a function of time (i.e., g_i varies through time). This strategy means that we are adding another equation and state variable to our system of ODEs.
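For instance, a minimal numerical sketch (not from the thesis) of such a degrading injection node: we append the extra state equation dg/dt = -lam*g to a one-dimensional bistable version of the model (c_i = 2, β = 3, K = 1, ρ = 1, as in Appendix A), where the decay rate lam is a hypothetical choice.

```python
# Sketch of a degrading injection node: the stimulus g is an extra state
# variable with dg/dt = -lam*g.  A transient g(0) = 2 pushes the switched-off
# TF past the unstable threshold (~0.382) into the basin of the high stable
# state x ~ 2.618; afterwards g degrades away.  lam is a hypothetical choice.

def step(x, g, lam=0.1, beta=3.0, K=1.0, rho=1.0, dt=0.01):
    """One Euler step of dx/dt = beta*x^2/(K + x^2) + g - rho*x, dg/dt = -lam*g."""
    dx = beta * x * x / (K + x * x) + g - rho * x
    return x + dt * dx, g * (1.0 - dt * lam)

x, g = 0.0, 2.0           # switched-off TF, transient stimulus g(0) = 2
for _ in range(100_000):  # integrate up to t = 1000
    x, g = step(x, g)
print(round(x, 3), g)     # x settles near 2.618 while g has degraded away
```

Note that the same transient stimulus applied to a system whose only equilibrium is the origin would leave no lasting effect, which is why this strategy is only meaningful for multistable systems.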
Figure 7.7: Increasing the value of g_i can result in an increased value of [X_i]*, and consequently in a decreased value of [X_j] where Y = H_j([X_j]) + g_j and Y = ρ_j[X_j] intersect, j ≠ i.
We can think of g_i as an additional node of our GRN, and we call it the injection node. In our case, we consider functions that represent a degrading amount of g_i. Refer to Appendix C for an illustration.
Adding a degrading amount of g_i affects cell fate, but this strategy may not give rise to a sole equilibrium point. Moreover, this strategy is only applicable to systems with multiple stable equilibrium points where the convergence of trajectories is sensitive to initial conditions.
Chapter 8
Results and Discussion
Introduction of Stochastic Noise
We numerically investigate the effect of random noise on the cell differentiation system using Stochastic Differential Equations (SDEs). In [38], Cinquin and Demongeot suggested extending their model to include stochastic kinetics.
We have written a Scilab [150] program (see Algorithms (5)-(6) in Appendix D) to simulate the effect of stochastic noise on the dynamics of our GRN. We employ several functions G (see Section 3.3 in Chapter 3) to observe the various effects of the added Gaussian white noise term. The different functions G are:
G(X) = 1, (8.1)
G(X) = X, (8.2)
G(X) = √X, (8.3)
G(X) = F(X), and (8.4)
G(X) = √(H(X) + g + ρX). (8.5)
In function (8.1), the noise term is not dependent on any variable. This function is used by MacArthur et al. in [113].
The noise term with (8.2) or (8.3) is affected by the value of X. That is, as the concentration X increases/decreases, the effect of the noise term also increases/decreases. Function (8.2) is used by Glauche et al. in [71]. However, using (8.2) or (8.3) has an undesirable biological implication: as [X_i] dominates the concentrations of the other TFs, the effect of random noise on [X_i] intensifies.
In (8.4), the noise term is affected by the value of F(X) (the right-hand side of our corresponding ODE model); that is, as the deterministic change in the concentration X with respect to time (dX/dt = F(X)) increases/decreases, the effect of the noise term also increases/decreases. In using (8.4), we expect a decreasing amount of noise through time because our ODE system (5.1) always converges to an equilibrium point. In other words, as F(X) → 0, the effect of the noise term also vanishes.
The noise term with (8.5) corresponds to the random population growth model, where the noise is decomposed into independent sources:

dX = F(X) dt + σ_A √(H(X)) dW_A + σ_B √(ρX) dW_B + σ_C √g dW_C. (8.6)
In the following examples, we suppose n = 4 and σ_ii = 0.5. Let the simulation step size be 0.01.
Illustration 1

Suppose our generalized Cinquin-Demongeot ODE model (5.1) has parameters equal to 1 except for β_i = 5 and g_i = 0 for all i. This system has infinitely many non-isolated stable equilibrium points (see Theorem (7.7)).
The corresponding system of SDEs is as follows:

d[X_1] = ( 5[X_1] / (1 + [X_1] + [X_2] + [X_3] + [X_4]) − [X_1] ) dt + σ_11 G_1([X_1]) dW_1
d[X_2] = ( 5[X_2] / (1 + [X_1] + [X_2] + [X_3] + [X_4]) − [X_2] ) dt + σ_22 G_2([X_2]) dW_2 (8.7)
d[X_3] = ( 5[X_3] / (1 + [X_1] + [X_2] + [X_3] + [X_4]) − [X_3] ) dt + σ_33 G_3([X_3]) dW_3
d[X_4] = ( 5[X_4] / (1 + [X_1] + [X_2] + [X_3] + [X_4]) − [X_4] ) dt + σ_44 G_4([X_4]) dW_4.

We assume G_1 = G_2 = G_3 = G_4 = G. Suppose the initial condition is [X_i]_0 = 4 for all i.
Figures (8.1) to (8.5) show different realizations of the corresponding SDE model.
In the deterministic case, we expect that the solutions for [X_1], [X_2], [X_3] and [X_4] will converge to an equilibrium point with equal components because our system is symmetric and [X_1]_0 = [X_2]_0 = [X_3]_0 = [X_4]_0. However, because of the presence of noise, some TFs seem to dominate the others. It is possible that the solution to the SDE may approach a different equilibrium point. This biological volatility is due to the presence of infinitely many stable equilibrium points.
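The realizations above can be reproduced with a standard Euler-Maruyama discretization of system (8.7). The thesis uses Scilab; the following pure-Python sketch (our own, with G(X) = 1 and states clipped at 0 to keep concentrations nonnegative) is only illustrative:

```python
import math
import random

# Euler-Maruyama sketch of the symmetric system (8.7): n = 4, beta_i = 5,
# all other parameters 1, sigma_ii = 0.5, step size 0.01, [X_i]_0 = 4.

def simulate(sigma, dt=0.01, steps=50_000, x0=4.0, seed=1):
    rng = random.Random(seed)
    x = [x0] * 4
    for _ in range(steps):
        s = 1.0 + sum(x)                       # shared Hill denominator
        x = [max(0.0, xi + dt * (5.0 * xi / s - xi)
                 + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0))
             for xi in x]
    return x

det = simulate(sigma=0.0)    # deterministic limit: components stay equal
noisy = simulate(sigma=0.5)  # one realization: symmetry is broken by noise
print([round(v, 3) for v in det])  # -> [1.0, 1.0, 1.0, 1.0]
```

With σ = 0 the four components remain equal and settle at [X_i] = 1 (the symmetric point on the equilibrium set), while a noisy realization typically ends with unequal components, as in Figures (8.1) to (8.5).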
Figure 8.1: For Illustration 1; ODE solution and SDE realization with G(X) = 1.

Figure 8.2: For Illustration 1; ODE solution and SDE realization with G(X) = X.

Figure 8.3: For Illustration 1; ODE solution and SDE realization with G(X) = √X.

Figure 8.4: For Illustration 1; ODE solution and SDE realization with G(X) = F(X).

Figure 8.5: For Illustration 1; ODE solution and SDE realization using the random population growth model.
Illustration 2

Suppose our generalized Cinquin-Demongeot ODE model (5.1) has parameters equal to 1 except for c_i = c_ij = 2 (for all i, j), g_1 = 5, g_2 = 3, g_3 = 1 and g_4 = 0. This system has a sole equilibrium point, which is ([X_1]* ≈ 5.72411, [X_2]* ≈ 3.23066, [X_3]* ≈ 1.02313, [X_4]* = 0).
The corresponding system of SDEs is as follows:

d[X_1] = ( [X_1]² / (1 + [X_1]² + [X_2]² + [X_3]² + [X_4]²) − [X_1] + 5 ) dt + σ_11 G_1([X_1]) dW_1
d[X_2] = ( [X_2]² / (1 + [X_1]² + [X_2]² + [X_3]² + [X_4]²) − [X_2] + 3 ) dt + σ_22 G_2([X_2]) dW_2 (8.8)
d[X_3] = ( [X_3]² / (1 + [X_1]² + [X_2]² + [X_3]² + [X_4]²) − [X_3] + 1 ) dt + σ_33 G_3([X_3]) dW_3
d[X_4] = ( [X_4]² / (1 + [X_1]² + [X_2]² + [X_3]² + [X_4]²) − [X_4] ) dt + σ_44 G_4([X_4]) dW_4.

We assume G_1 = G_2 = G_3 = G_4 = G. Figures (8.6) to (8.10) show different realizations of the corresponding SDE model with initial condition [X_i]_0 = 3 for all i.
In the deterministic case, we expect that the solution will converge to the sole equilibrium point. From our simulation, it seems that our system is robust against the presence of moderate noise. The realization of the SDE model nearly follows the deterministic trajectory. We expect this to happen because, for any initial condition, the solution to our system will tend towards only one attractor.
Recall that one possible strategy for controlling our system to have only one stable equilibrium point is to introduce an adequate amount of exogenous stimuli (see Chapter 7 Section 7.2). To have assurance that cells will not change lineage, we need to make the desired i-th lineage have a corresponding [X_i]* that sufficiently dominates the other components.

Figure 8.6: For Illustration 2; ODE solution and SDE realization with G(X) = 1.

Figure 8.7: For Illustration 2; ODE solution and SDE realization with G(X) = X.

Figure 8.8: For Illustration 2; ODE solution and SDE realization with G(X) = √X.
Figure 8.9: For Illustration 2; ODE solution and SDE realization with G(X) = F(X).
Figure 8.10: For Illustration 2; ODE solution and SDE realization using the random
population growth model.
Illustration 3

Suppose our generalized Cinquin-Demongeot ODE model (5.1) has parameters c_i = c_ij = 2, β_i = 1, K_i = 1, γ_ij = 1/8, ρ_i = 1/21 and g_i = 0 for all i, j. This system has multiple stable equilibrium points (see Figure (8.16)).
The corresponding system of SDEs is as follows:

d[X_1] = ( [X_1]² / (1 + [X_1]² + (1/8)[X_2]² + (1/8)[X_3]² + (1/8)[X_4]²) − (1/21)[X_1] ) dt + σ_11 G_1([X_1]) dW_1
d[X_2] = ( [X_2]² / (1 + (1/8)[X_1]² + [X_2]² + (1/8)[X_3]² + (1/8)[X_4]²) − (1/21)[X_2] ) dt + σ_22 G_2([X_2]) dW_2 (8.9)
d[X_3] = ( [X_3]² / (1 + (1/8)[X_1]² + (1/8)[X_2]² + [X_3]² + (1/8)[X_4]²) − (1/21)[X_3] ) dt + σ_33 G_3([X_3]) dW_3
d[X_4] = ( [X_4]² / (1 + (1/8)[X_1]² + (1/8)[X_2]² + (1/8)[X_3]² + [X_4]²) − (1/21)[X_4] ) dt + σ_44 G_4([X_4]) dW_4.

We assume G_1 = G_2 = G_3 = G_4 = G. Figures (8.11) to (8.15) show different realizations of the corresponding SDE model with initial condition [X_i]_0 = 2 for all i.
In the deterministic case, we expect that the solutions for [X_1], [X_2], [X_3] and [X_4] will converge to an equilibrium point with equal components because our system is symmetric and [X_1]_0 = [X_2]_0 = [X_3]_0 = [X_4]_0. For a system with regulated noise, the solution to the SDE may follow the behavior of the trajectory of the ODE. However, it is also possible that [X_1], [X_2], [X_3] and [X_4] drift apart, with some components dominating the others, so that the solution converges to a different equilibrium point.

Figure 8.11: For Illustration 3; ODE solution and SDE realization with G(X) = 1.

Figure 8.12: For Illustration 3; ODE solution and SDE realization with G(X) = X.

Figure 8.13: For Illustration 3; ODE solution and SDE realization with G(X) = √X.

Figure 8.14: For Illustration 3; ODE solution and SDE realization with G(X) = F(X).

Figure 8.15: For Illustration 3; ODE solution and SDE realization using the random population growth model.

Figure 8.16: Phase portrait of [X_1] and [X_2].
Illustration 4

Suppose our generalized Cinquin-Demongeot ODE model (5.1) has parameters c_i = c_ij = 2, β_i = 1, K_i = 1, γ_ij = 1, ρ_i = 1 and g_i = 0 for all i, j. Suppose the initial condition is [X_i]_0 = 0 for all i, which means that all TFs are switched off. Figure (8.17) shows that in the presence of noise, the TFs can be reactivated. However, inactive TFs cannot be activated using any of the functions (8.2), (8.3), (8.4) or (8.5).

Figure 8.17: Reactivating switched-off TFs by introducing random noise where G(X) = 1.
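The mechanism behind this observation can be checked directly: at the zero state both the drift F and every state-dependent noise amplitude vanish, so only additive noise G(X) = 1 can move the state. A small Python sketch (ours, not the thesis Scilab code) of the two cases:

```python
import random

# Sketch of Illustration 4 (c_i = c_ij = 2, all parameters 1, g_i = 0,
# [X_i]_0 = 0): additive noise G(X) = 1 reactivates switched-off TFs,
# while multiplicative noise G(X) = X cannot, because both the drift and
# the noise amplitude vanish identically at the zero state.

def peak_activity(G, sigma=0.5, dt=0.01, steps=2_000, seed=7):
    rng = random.Random(seed)
    x = [0.0] * 4
    peak = 0.0
    for _ in range(steps):
        s = 1.0 + sum(xi * xi for xi in x)
        x = [max(0.0, xi + dt * (xi * xi / s - xi)
                 + sigma * G(xi) * (dt ** 0.5) * rng.gauss(0.0, 1.0))
             for xi in x]
        peak = max(peak, max(x))
    return peak

print(peak_activity(lambda xi: 1.0) > 0)  # True: the TFs leave the zero state
print(peak_activity(lambda xi: xi))       # 0.0: the zero state is absorbing
```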
Chapter 9
Summary and Recommendations
We simplify the gene regulatory network (GRN) model of MacArthur et al. [113] to study the mesenchymal cell differentiation system. The simplified MacArthur GRN is given in the following figure:

Figure 9.1: The simplified MacArthur et al. GRN

We translate the simplified network model into a system of Ordinary Differential Equations (ODEs) using a generalized Cinquin-Demongeot model [38]. We generalize the Cinquin-Demongeot ODE model as:
d[X_i]/dt = β_i[X_i]^(c_i) / ( K_i + [X_i]^(c_i) + Σ_{j=1, j≠i}^{n} γ_ij[X_j]^(c_ij) ) + g_i − ρ_i[X_i],

where i = 1, 2, ..., n.
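The right-hand side above can be transcribed directly into code. A small sketch (the function and parameter names are ours):

```python
# Generalized Cinquin-Demongeot right-hand side: x, beta, K, rho, c, g are
# length-n sequences; gamma and c_cross are n-by-n (only entries j != i used).

def cd_rhs(x, beta, K, rho, gamma, c, c_cross, g):
    n = len(x)
    dx = []
    for i in range(n):
        denom = K[i] + x[i] ** c[i] + sum(
            gamma[i][j] * x[j] ** c_cross[i][j] for j in range(n) if j != i)
        dx.append(beta[i] * x[i] ** c[i] / denom + g[i] - rho[i] * x[i])
    return dx

# Sanity check with the parameters of Illustration 1 in Chapter 8 (beta_i = 5,
# everything else 1, g_i = 0): (1, 1, 1, 1) lies on the equilibrium set, so
# every component of the right-hand side vanishes there.
n = 4
ones = [1.0] * n
rates = cd_rhs(ones, beta=[5.0] * n, K=ones, rho=ones,
               gamma=[[1.0] * n for _ in range(n)],
               c=[1] * n, c_cross=[[1] * n for _ in range(n)], g=[0.0] * n)
print(rates)  # -> [0.0, 0.0, 0.0, 0.0]
```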
The state variables of the ODE model represent the concentrations of the transcription factors (TFs) involved in gene expression. For our simplified network, [X_1] := [RUNX2], [X_2] := [SOX9], [X_3] := [PPAR-γ] and [X_4] := [sTF]. Some of our results are applicable not only to n = 4 but to any dimension.
An asymptotically stable equilibrium point is associated with a certain cell type, e.g., tripotent, bipotent, unipotent or terminal state. If [X_i] sufficiently dominates the concentrations of the other TFs, then the chosen lineage is towards the i-th cell type.
For an ODE model to be useful, it is necessary that it has a solution. It is difficult to predict the behavior of our system if the solution is not unique. We are able to prove that there exists a unique solution to our model for some values of c_i and c_ij. The exponents c_i and c_ij represent cooperativity among binding sites.
We propose two additional methods for determining the behavior of equilibrium points, other than the usual numerical methods for solving ODEs and other than using the Jacobian: (1) using ad hoc geometric analysis; and (2) using a multivariate fixed-point algorithm.
The geometry of the Hill function H_i is essential in understanding the behavior of the equilibrium points of our ODE system. From the geometric analysis, we are able to prove that our state variables [X_i] will never be negative (i.e., the nonnegative orthant of R^n is positively invariant with respect to the flow of our ODE model) and that our ODE model always has a stable equilibrium point. Any trajectory of the model will converge to a stable equilibrium point.
A stable equilibrium point at (0, 0, ..., 0) is trivial because this state neither represents a pluripotent cell nor a cell differentiating into bone, cartilage or fat. In our case, the cell may differentiate into other cell types which are not in the domain of our GRN. The zero state may also represent a cell that is in a quiescent stage. We are able to prove theorems associated with the existence of a stable zero state, such as

Our system has an equilibrium point with i-th component equal to zero if and only if g_i = 0. Moreover, if ρ_i > 0 and c_i > 1 then this zero i-th component is always stable.

The zero state (0, 0, ..., 0) is an equilibrium point if and only if g_i = 0 for all i. If ρ_i > 0 and c_i > 1 for all i then the zero state is stable.
Suppose g_i = 0 for all i. The only stable equilibrium point of our ODE model is the origin if and only if the univariate Hill curve Y = H_i([X_i]) essentially lies below the decay line Y = ρ_i[X_i] for all i.
If converging to the zero state is undesirable, we can decrease the size of the basin of attraction of the zero state by sufficiently increasing the value of β_i or by sufficiently decreasing the values of K_i and ρ_i. In addition, we can add g_i > 0 to escape a stable zero state.
In the case where c_i > 1 and g_i = 0, if the TF associated with [X_i] is switched off, then it can never be switched on again because the zero i-th component of an equilibrium point is stable. Two possible strategies for escaping an inactive state are to increase the value of g_i or to introduce some random noise.
The following theorems give us ideas regarding the location and number of the equilibrium points ([X_1]*, [X_2]*, ..., [X_n]*):
Suppose ρ_i > 0. The equilibrium points of our system lie in the hyperspace [g_1/ρ_1, (g_1+β_1)/ρ_1] × [g_2/ρ_2, (g_2+β_2)/ρ_2] × ... × [g_n/ρ_n, (g_n+β_n)/ρ_n].
The generalized Cinquin-Demongeot ODE model (where c_i and c_ij are integers) has a finite number of equilibrium points except when all of the following conditions are satisfied: c_i = c_ij = 1, g_i = 0, γ_ij = 1, β_i = β_j = β > 0, ρ_i = ρ_j = ρ > 0, K_i = K_j = K > 0 and β > ρK, for all i and j.
If the generalized Cinquin-Demongeot ODE model (where c_i and c_ij are integers) has a finite number of equilibrium points, then the possible number of equilibrium points is at most max{c_1 + 1, c_1j + 1 ∀j} · max{c_2 + 1, c_2j + 1 ∀j} · ... · max{c_n + 1, c_nj + 1 ∀j}.
We are able to find one case where there are infinitely many stable non-isolated equilibrium points. This happens in a symmetric Michaelis-Menten-type system. In GRNs, the existence of infinitely many non-isolated equilibrium points is biologically volatile. A small perturbation in the initial value of the system may lead the trajectory of the system to converge to a different equilibrium point, which may result in a change in the phenotype of the cell. This special phenomenon represents a competition where the co-expression, extinction and domination of the TFs continuously depend on the value of each TF.
If g_n = 0 then the n-dimensional system is more general than the (n−1)-dimensional system. That is, we can derive the equilibrium points of the (n−1)-dimensional system by getting the equilibrium points of the n-dimensional system where [X_n]* = 0. It is clear that when [X_n]* = 0 and g_n = 0, the n-dimensional system reduces to an (n−1)-dimensional system.
Furthermore, we are able to prove an additional theorem related to the behavior of the solution of our ODE model. The theorem states that if ρ_i > 0 for all i, then our system never converges to a center, to an ω-limit cycle or to a strange attractor. The existence of a center, ω-limit cycle or strange attractor, which would result in recurring changes in phenotype, is abnormal for a natural fully differentiated cell.
The parameters β_i, K_i, ρ_i, γ_ij, c_i, c_ij and g_i affect the number, the size of the basins of attraction and the behavior of the equilibrium points. We can make the i-th component of an equilibrium point dominate the other components by increasing β_i, by increasing g_i or sometimes by decreasing ρ_i. Decreasing the value of K_i or increasing the value of c_i may also increase the chance of having a dominant [X_i]*; however, the effect of K_i and c_i is not as drastic compared to that of β_i, g_i and ρ_i, since K_i and c_i do not affect the upper bound of [X_i]*.
A.1 Assume n = 2, c_i = 1

Consider c_i = c_ij = 1 for all i and j. The system of polynomial equations is as follows:

P_1([X_1], [X_2]) = −ρ_1[X_1]² + (β_1 + g_1)[X_1] − (K_1 + γ_12[X_2])(ρ_1[X_1]) + g_1γ_12[X_2] + g_1K_1 = 0 (A.1)
P_2([X_1], [X_2]) = −ρ_2[X_2]² + (β_2 + g_2)[X_2] − (K_2 + γ_21[X_1])(ρ_2[X_2]) + g_2γ_21[X_1] + g_2K_2 = 0
If P_1 and P_2 have no common factors then by Theorem (6.6), the number of complex solutions to the polynomial system (A.1) is at most 4.
The corresponding Sylvester matrix of P_1 and P_2 with X_1 as the variable is

[ a_11  a_12  a_13 ]
[ a_21  a_22  0    ]   (A.2)
[ 0     a_21  a_22 ]

where a_11 = −ρ_1, a_12 = β_1 + g_1 − K_1ρ_1 − γ_12ρ_1[X_2], a_13 = g_1γ_12[X_2] + g_1K_1, a_21 = −γ_21ρ_2[X_2] + g_2γ_21 and a_22 = −ρ_2[X_2]² + (β_2 + g_2 − K_2ρ_2)[X_2] + g_2K_2. The Sylvester resultant res(P_1, P_2; X_1) is a polynomial in the variable X_2 and is of degree at most 4. By the Fundamental Theorem of Algebra, res(P_1, P_2; X_1) = 0 has at most 4 complex solutions, which is consistent with Theorem (6.6).
It is difficult and computationally expensive to find the exact solutions to res(P_1, P_2; X_1) = 0 in terms of the arbitrary parameters. We investigate specific cases where we assign values to some parameters.
Appendix A. More on Equilibrium Points: Illustrations 108
A.1.1 Illustration 1

Suppose all parameters in the system (A.1) are equal to 1, except for arbitrary g_1 and arbitrary g_2. Assume g_1 > 0 or g_2 > 0. The Sylvester matrix with [X_1] as the variable is as follows:

[ −1            g_1 − [X_2]                g_1([X_2] + 1)             ]
[ g_2 − [X_2]   −[X_2]² + g_2[X_2] + g_2   0                          ]   (A.3)
[ 0             g_2 − [X_2]                −[X_2]² + g_2[X_2] + g_2   ]

It follows that

res(P_1, P_2; X_1) = (g_1 + g_2)[X_2]² − (g_1g_2 + g_2²)[X_2] − g_2². (A.4)
Then the roots of res(P_1, P_2; X_1) = 0 are

[X_2] = [ g_1g_2 + g_2² ± √((g_1g_2 + g_2²)² + 4(g_1 + g_2)g_2²) ] / [ 2(g_1 + g_2) ]. (A.5)

By the same procedure as above, the roots of res(P_1, P_2; X_2) = 0 are

[X_1] = [ g_1g_2 + g_1² ± √((g_1g_2 + g_1²)² + 4(g_1 + g_2)g_1²) ] / [ 2(g_1 + g_2) ]. (A.6)

Since g_1 > 0 or g_2 > 0, we have g_1g_2 + g_2² < √((g_1g_2 + g_2²)² + 4(g_1 + g_2)g_2²) and g_1g_2 + g_1² < √((g_1g_2 + g_1²)² + 4(g_1 + g_2)g_1²), so only the roots taken with the plus sign are nonnegative. Hence, we have the equilibrium point ([X_1]*, [X_2]*) equal to

( [ g_1g_2 + g_1² + √((g_1g_2 + g_1²)² + 4(g_1 + g_2)g_1²) ] / [ 2(g_1 + g_2) ] ,
  [ g_1g_2 + g_2² + √((g_1g_2 + g_2²)² + 4(g_1 + g_2)g_2²) ] / [ 2(g_1 + g_2) ] ).

Therefore, for this example, we have exactly one equilibrium point.
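The closed form can be verified numerically. In the following sketch (ours) we take the sample values g_1 = 2, g_2 = 1 with all other parameters 1, and check that the point built from the resultant roots zeroes both right-hand sides of the ODE model:

```python
import math

# Check the closed-form equilibrium for g_1 = 2, g_2 = 1 (other parameters 1):
# the candidate point must satisfy [X_i]/(1 + [X_1] + [X_2]) + g_i = [X_i].

def equilibrium(g1, g2):
    b1 = g1 * g2 + g1 * g1
    b2 = g1 * g2 + g2 * g2
    x1 = (b1 + math.sqrt(b1 * b1 + 4.0 * (g1 + g2) * g1 * g1)) / (2.0 * (g1 + g2))
    x2 = (b2 + math.sqrt(b2 * b2 + 4.0 * (g1 + g2) * g2 * g2)) / (2.0 * (g1 + g2))
    return x1, x2

x1, x2 = equilibrium(2.0, 1.0)
s = 1.0 + x1 + x2
f1 = x1 / s + 2.0 - x1   # d[X_1]/dt at the candidate point
f2 = x2 / s + 1.0 - x2   # d[X_2]/dt at the candidate point
print(x1 > x2, abs(f1) < 1e-9, abs(f2) < 1e-9)  # -> True True True
```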
Now, observe that if g_1 > g_2 then [X_1]* > [X_2]*, and if g_2 > g_1 then [X_2]* > [X_1]*.
For example, assume g_1 = 2g_2 > 0, that is, the ratio of g_1 to g_2 is 2 : 1. Then the equilibrium point will be

( [ 2g_2² + 4g_2² + √((2g_2² + 4g_2²)² + 4(2g_2 + g_2)·4g_2²) ] / [ 2(2g_2 + g_2) ] ,
  [ 2g_2² + g_2² + √((2g_2² + g_2²)² + 4(2g_2 + g_2)g_2²) ] / [ 2(2g_2 + g_2) ] )

= ( [ 6g_2² + √(36g_2⁴ + 48g_2³) ] / [ 6g_2 ] , [ 3g_2² + √(9g_2⁴ + 12g_2³) ] / [ 6g_2 ] )

= ( [ 6g_2 + 2√(9g_2² + 12g_2) ] / 6 , [ 3g_2 + √(9g_2² + 12g_2) ] / 6 ).

Clearly, [X_1]* = [6g_2 + 2√(9g_2² + 12g_2)]/6 > [X_2]* = [3g_2 + √(9g_2² + 12g_2)]/6.
In addition, if g_1 = g_2 = g > 0, then [X_1]* = [X_2]* = (g + √(g² + 2g))/2.
On the other hand, if g_1 and g_2 are both zero, then, by Theorem (6.7), the only equilibrium point is (0, 0).
A.1.2 Illustration 2

Suppose all parameters in the system (A.1) are equal to 1, except for arbitrary β_1 = β_2 = β, arbitrary g_1, and g_2 = 0. The Sylvester matrix with [X_1] as the variable is as follows:

[ −1        β + g_1 − 1 − [X_2]        g_1([X_2] + 1)              ]
[ −[X_2]    −[X_2]² + (β − 1)[X_2]    0                           ]   (A.7)
[ 0         −[X_2]                    −[X_2]² + (β − 1)[X_2]      ]

It follows that

res(P_1, P_2; X_1) = βg_1[X_2]². (A.8)

Then the root of res(P_1, P_2; X_1) = 0 is

[X_2] = 0. (A.9)

Substituting [X_2] = 0 into the polynomial system (A.1) with the assumed parameter values, we have

P_1([X_1], 0) = −[X_1]² + (β + g_1)[X_1] − [X_1] + g_1 = 0 (A.10)
P_2([X_1], 0) = 0.

Thus,

[X_1] = [ (β + g_1 − 1) ± √((β + g_1 − 1)² + 4g_1) ] / 2. (A.11)

Suppose g_1 > 0. Since β + g_1 − 1 < √((β + g_1 − 1)² + 4g_1), we have the equilibrium point ([X_1]*, [X_2]*) equal to

( [ (β + g_1 − 1) + √((β + g_1 − 1)² + 4g_1) ] / 2 , 0 ).

Therefore, we have exactly one equilibrium point, where [X_1]* > [X_2]*, when g_1 > 0.
If g_1 = 0 and β > 1 then we have two equilibrium points: (0, 0) and (β − 1, 0). If g_1 = 0 and β ≤ 1 then the only equilibrium point is (0, 0).
A.1.3 Illustration 3

Suppose all parameters in the system (A.1) are equal to 1, except for arbitrary K_1 = K_2 = K, arbitrary g_1, and g_2 = 0. The Sylvester matrix with [X_1] as the variable is as follows:

[ −1        1 + g_1 − K − [X_2]        g_1([X_2] + K)              ]
[ −[X_2]    −[X_2]² + (1 − K)[X_2]    0                           ]   (A.12)
[ 0         −[X_2]                    −[X_2]² + (1 − K)[X_2]      ]

It follows that

res(P_1, P_2; X_1) = g_1[X_2]². (A.13)

Then the root of res(P_1, P_2; X_1) = 0 is

[X_2] = 0. (A.14)

Substituting [X_2] = 0 into the polynomial system (A.1) with the assumed parameter values, we have

P_1([X_1], 0) = −[X_1]² + (1 + g_1)[X_1] − K[X_1] + g_1K = 0 (A.15)
P_2([X_1], 0) = 0.

Thus,

[X_1] = [ (1 + g_1 − K) ± √((1 + g_1 − K)² + 4g_1K) ] / 2. (A.16)

Suppose g_1 > 0. Since 1 + g_1 − K < √((1 + g_1 − K)² + 4g_1K), we have the equilibrium point ([X_1]*, [X_2]*) equal to

( [ (1 + g_1 − K) + √((1 + g_1 − K)² + 4g_1K) ] / 2 , 0 ).

Therefore, we have exactly one equilibrium point, where [X_1]* > [X_2]*, when g_1 > 0.
If g_1 = 0 and K < 1 then we have two equilibrium points: (0, 0) and (1 − K, 0). If g_1 = 0 and K ≥ 1 then the only equilibrium point is (0, 0).
A.1.4 Illustration 4

Suppose all parameters in the system (A.1) are equal to 1, except for arbitrary ρ_1 = ρ_2 = ρ, arbitrary g_1, and g_2 = 0. Assume ρ > 0. The Sylvester matrix with [X_1] as the variable is as follows:

[ −ρ         1 + g_1 − ρ − ρ[X_2]         g_1([X_2] + 1)               ]
[ −ρ[X_2]    −ρ[X_2]² + (1 − ρ)[X_2]     0                            ]   (A.17)
[ 0          −ρ[X_2]                     −ρ[X_2]² + (1 − ρ)[X_2]      ]

It follows that

res(P_1, P_2; X_1) = ρg_1[X_2]². (A.18)

Then the root of res(P_1, P_2; X_1) = 0 is

[X_2] = 0. (A.19)

Substituting [X_2] = 0 into the polynomial system (A.1) with the assumed parameter values, we have

P_1([X_1], 0) = −ρ[X_1]² + (1 + g_1)[X_1] − ρ[X_1] + g_1 = 0 (A.20)
P_2([X_1], 0) = 0.

Thus,

[X_1] = [ (1 + g_1 − ρ) ± √((1 + g_1 − ρ)² + 4ρg_1) ] / (2ρ). (A.21)

Suppose g_1 > 0. Since 1 + g_1 − ρ < √((1 + g_1 − ρ)² + 4ρg_1), we have the equilibrium point ([X_1]*, [X_2]*) equal to

( [ (1 + g_1 − ρ) + √((1 + g_1 − ρ)² + 4ρg_1) ] / (2ρ) , 0 ).

Therefore, we have exactly one equilibrium point, where [X_1]* > [X_2]*, when g_1 > 0.
If g_1 = 0 and ρ < 1 then we have two equilibrium points: (0, 0) and ((1 − ρ)/ρ, 0). If g_1 = 0 and ρ ≥ 1 then the only equilibrium point is (0, 0).
A.2 Assume n = 2, c_i = 2

A.2.1 Illustration 1

Consider c_i = 2 and c_ij = 1 for all i and j. The system of polynomial equations is as follows:

P_1([X_1], [X_2]) = −ρ_1[X_1]³ + (β_1 + g_1)[X_1]² − (K_1 + γ_12[X_2])(ρ_1[X_1]) + g_1γ_12[X_2] + g_1K_1 = 0 (A.22)
P_2([X_1], [X_2]) = −ρ_2[X_2]³ + (β_2 + g_2)[X_2]² − (K_2 + γ_21[X_1])(ρ_2[X_2]) + g_2γ_21[X_1] + g_2K_2 = 0

By Theorem (6.6), the number of complex solutions to the polynomial system (A.22) is at most 9.
The corresponding Sylvester matrix of P_1 and P_2 with X_1 as the variable is

[ a_11  a_12  a_13  a_14 ]
[ a_21  a_22  0     0    ]   (A.23)
[ 0     a_21  a_22  0    ]
[ 0     0     a_21  a_22 ]

where a_11 = −ρ_1, a_12 = β_1 + g_1, a_13 = −K_1ρ_1 − γ_12ρ_1[X_2], a_14 = g_1γ_12[X_2] + g_1K_1, a_21 = −γ_21ρ_2[X_2] + g_2γ_21 and a_22 = −ρ_2[X_2]³ + (β_2 + g_2)[X_2]² − K_2ρ_2[X_2] + g_2K_2. The Sylvester resultant res(P_1, P_2; X_1) is a polynomial in the variable X_2 of degree at most 9. By the Fundamental Theorem of Algebra, res(P_1, P_2; X_1) = 0 has at most 9 complex solutions, which is consistent with Theorem (6.6).
It is difficult and computationally expensive to find the exact solutions to res(P_1, P_2; X_1) = 0 in terms of the arbitrary parameters. We investigate specific cases where we assign values to some parameters.
Suppose all parameters in the system (A.22) are equal to 1, except for arbitrary g_1 and arbitrary g_2. The Sylvester matrix is as follows:
[ a_11  a_12  a_13  a_14 ]
[ a_21  a_22  0     0    ]   (A.24)
[ 0     a_21  a_22  0    ]
[ 0     0     a_21  a_22 ]

where a_11 = −1, a_12 = 1 + g_1, a_13 = −1 − [X_2], a_14 = g_1([X_2] + 1), a_21 = g_2 − [X_2] and a_22 = −[X_2]³ + (1 + g_2)[X_2]² − [X_2] + g_2. The Sylvester resultant res(P_1, P_2; X_1) is a polynomial in the variable X_2 of degree at most 9. By the Fundamental Theorem of Algebra, res(P_1, P_2; X_1) = 0 has at most 9 complex solutions. But it is still difficult and computationally expensive to find the exact solutions to res(P_1, P_2; X_1) = 0 in terms of the arbitrary g_1 and g_2.
If we add another assumption, g_1 = 2g_2 (thus, we only have one arbitrary parameter), the Sylvester matrix is as follows:

[ a_11  a_12  a_13  a_14 ]
[ a_21  a_22  0     0    ]   (A.25)
[ 0     a_21  a_22  0    ]
[ 0     0     a_21  a_22 ]

where a_11 = −1, a_12 = 1 + 2g_2, a_13 = −1 − [X_2], a_14 = 2g_2([X_2] + 1), a_21 = g_2 − [X_2] and a_22 = −[X_2]³ + (1 + g_2)[X_2]² − [X_2] + g_2. The Sylvester resultant res(P_1, P_2; X_1) of the above matrix is a polynomial in the variable X_2 of degree at most 9. By the Fundamental Theorem of Algebra, res(P_1, P_2; X_1) = 0 has at most 9 complex solutions. Notice that we only know the upper bound on the number of equilibrium points and not their exact values. Even with only one arbitrary parameter, it is still difficult and computationally expensive to find the exact solutions to res(P_1, P_2; X_1) = 0.
Hence, we opt not to continue finding the exact values of all the equilibrium points using the Sylvester resultant method for systems more complicated than a system with n = 2, c_i = 2, c_ij = 1 and at least one arbitrary parameter. Notice that the above Sylvester matrix only has dimension 4 × 4, which means that finding the solutions to res(P_1, P_2; X_1) = 0 for a larger Sylvester matrix with at least one arbitrary parameter may be even more difficult.
Nevertheless, in some instances where we do not have any arbitrary parameter, solving res(P_1, P_2; X_1) = 0 is easy. For example, if we further assume that g_1 = 2g_2 where g_2 = 1, then res(P_1, P_2; X_1) = 0 has only one real nonnegative solution: [X_2]* ≈ 1.3143.
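This value can be cross-checked without resultants: iterating [X_i] ← H_i([X_i]) + g_i converges here, since the system has a unique nonnegative equilibrium. A small sketch (ours):

```python
# Cross-check of the special case g_1 = 2, g_2 = 1 (c_i = 2, c_ij = 1, all
# other parameters 1) by fixed-point iteration of [X_i] <- H_i([X_i]) + g_i.

x1, x2 = 2.0, 1.0
for _ in range(500):
    x1 = x1 * x1 / (1.0 + x1 * x1 + x2) + 2.0   # H_1([X_1]) + g_1
    x2 = x2 * x2 / (1.0 + x2 * x2 + x1) + 1.0   # H_2([X_2]) + g_2
print(x1, x2)  # x2 ~ 1.3143, matching the resultant computation
```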
A.2.2 Illustration 2

If c_i = c_ij = 2, then according to Theorem (6.6), the upper bound on the number of equilibrium points is 9.
Consider that all parameters are equal to 1 except for c_i = c_ij = 2 and g_i = 0, i, j = 1, 2. The only equilibrium point is the origin.
A.2.3 Illustration 3

Consider that all parameters are equal to 1 except for c_i = c_ij = 2 (i, j = 1, 2) and g_2 = 0. The only equilibrium point is ([X_1]* ≈ 1.7549, [X_2]* = 0).
A.2.4 Illustration 4

Consider that all parameters are equal to 1 except for c_i = c_ij = 2, g_i = 0 and β_i = 3, i, j = 1, 2. There are seven equilibrium points (the following values are approximate):

([X_1]* = 2.618, [X_2]* = 0),
([X_1]* = 0, [X_2]* = 2.618),
([X_1]* = 0.38197, [X_2]* = 0),
([X_1]* = 0, [X_2]* = 0.38197),
([X_1]* = 0.5, [X_2]* = 0.5),
([X_1]* = 1, [X_2]* = 1), and
([X_1]* = 0, [X_2]* = 0).
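Each listed point can be verified by substitution into the vector field; a short check (ours, using the stated parameters c_i = c_ij = 2, β_i = 3, everything else 1, g_i = 0):

```python
# Verify the seven approximate equilibrium points listed above: each should
# make both components of the vector field (approximately) zero.

points = [(2.618, 0.0), (0.0, 2.618), (0.38197, 0.0), (0.0, 0.38197),
          (0.5, 0.5), (1.0, 1.0), (0.0, 0.0)]

def rhs(x1, x2):
    s = 1.0 + x1 * x1 + x2 * x2
    return 3.0 * x1 * x1 / s - x1, 3.0 * x2 * x2 / s - x2

residuals = [max(abs(f) for f in rhs(x1, x2)) for x1, x2 in points]
print(max(residuals))  # small (set by the rounding of the listed values)
```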
A.2.5 Illustration 5

Consider that all parameters are equal to 1 except for c_i = c_ij = 2, g_2 = 0, β_i = 20 and ρ_i = 10, i, j = 1, 2. There are three equilibrium points (the following values are approximate):

([X_1]* = 1.4633, [X_2]* = 0),
([X_1]* = 0.5, [X_2]* = 0), and
([X_1]* = 0.13668, [X_2]* = 0).
A.3 Assume n = 3

A.3.1 Illustration 1

If c_i = c_ij = 1, i, j = 1, 2, 3, then according to Theorem (6.6), the upper bound on the number of equilibrium points is 8.
Consider that all parameters are equal to 1 except for g_2 = 0 and g_3 = 0. The only equilibrium point is ([X_1]* ≈ 1.618, [X_2]* = 0, [X_3]* = 0).
A.3.2 Illustration 2

If c_i = c_ij = 2, i, j = 1, 2, 3, then according to Theorem (6.6), the upper bound on the number of equilibrium points is 27.
Consider that all parameters are equal to 1 except for c_i = c_ij = 2 (i, j = 1, 2, 3), g_2 = 0 and g_3 = 0. The only equilibrium point is ([X_1]* ≈ 1.7549, [X_2]* = 0, [X_3]* = 0).
A.3.3 Illustration 3

If c_i = c_ij = 3, i, j = 1, 2, 3, then according to Theorem (6.6), the upper bound on the number of equilibrium points is 64.
Consider that all parameters are equal to 1 except for c_i = c_ij = 3 and g_i = 0, i, j = 1, 2, 3. The only equilibrium point is the origin.
A.3.4 Illustration 4

Consider that all parameters are equal to 1 except for c_i = c_ij = 3 (i, j = 1, 2, 3), g_2 = 0 and g_3 = 0. The only equilibrium point is ([X_1]* ≈ 1.8668, [X_2]* = 0, [X_3]* = 0).
A.3.5 Illustration 5

Consider that all parameters are equal to 1 except for c_i = c_ij = 3, β_i = 3 and g_i = 0, i, j = 1, 2, 3. There are ten equilibrium points (the following values are approximate):

([X_1]* = 1.0097 × 10⁻²⁸ ≈ 0, [X_2]* = 0.6527, [X_3]* = 0),
([X_1]* = 1.5510 × 10⁻²⁵ ≈ 0, [X_2]* = 2.8794, [X_3]* = 0),
([X_1]* = 0.6527, [X_2]* = 0, [X_3]* = 0),
([X_1]* = 0, [X_2]* = 0, [X_3]* = 0.6527),
([X_1]* = 2.8794, [X_2]* = 0, [X_3]* = 0),
([X_1]* = 0, [X_2]* = 0, [X_3]* = 2.8794),
([X_1]* = 1, [X_2]* = 1, [X_3]* = 0),
([X_1]* = 1, [X_2]* = 0, [X_3]* = 1),
([X_1]* = 0, [X_2]* = 1, [X_3]* = 1), and
([X_1]* = 0, [X_2]* = 0, [X_3]* = 0).
A.3.6 Illustration 6

Consider that all parameters are equal to 1 except for c_i = c_ij = 3, β_i = 20, ρ_i = 10, g_2 = 0 and g_3 = 0, i, j = 1, 2, 3. There are seven equilibrium points (the following values are approximate):

([X_1]* = 0.10103, [X_2]* = 1.001, [X_3]* = 0),
([X_1]* = 0.10103, [X_2]* = 0, [X_3]* = 1.001),
([X_1]* = 0.10039, [X_2]* = 1.6173, [X_3]* = 0),
([X_1]* = 0.10039, [X_2]* = 0, [X_3]* = 1.6173),
([X_1]* = 0.10213, [X_2]* = 0, [X_3]* = 0),
([X_1]* = 0.83362, [X_2]* = 0, [X_3]* = 0), and
([X_1]* = 1.8123, [X_2]* = 0, [X_3]* = 0).
A.3.7 Illustration 7

Consider that all parameters are equal to 1 except for c_i = c_ij = 2, γ_ij = γ, ρ_i = ρ and g_i = 0, i, j = 1, 2, 3. Notice that this system is the system used by MacArthur et al. in [113] (refer to system (3.8)). The nonlinear system (3.8) is of the form:

[X_1]² / (1 + [X_1]² + γ[X_2]² + γ[X_3]²) − ρ[X_1] = 0
[X_2]² / (1 + [X_2]² + γ[X_1]² + γ[X_3]²) − ρ[X_2] = 0 (A.26)
[X_3]² / (1 + [X_3]² + γ[X_1]² + γ[X_2]²) − ρ[X_3] = 0.

The corresponding polynomial system is

P_1([X_1], [X_2], [X_3]) = [X_1]² − ρ[X_1] − ρ[X_1]³ − ργ[X_1][X_2]² − ργ[X_1][X_3]² = 0
P_2([X_1], [X_2], [X_3]) = [X_2]² − ρ[X_2] − ρ[X_2]³ − ργ[X_1]²[X_2] − ργ[X_2][X_3]² = 0 (A.27)
P_3([X_1], [X_2], [X_3]) = [X_3]² − ρ[X_3] − ρ[X_3]³ − ργ[X_1]²[X_3] − ργ[X_2]²[X_3] = 0.
The Sylvester matrix associated to P_1 and P_2 with [X_1] as the variable is as follows:

[ a_11  a_12  a_13  0     0    ]
[ 0     a_11  a_12  a_13  0    ]
[ a_31  0     a_33  0     0    ]   (A.28)
[ 0     a_31  0     a_33  0    ]
[ 0     0     a_31  0     a_33 ]

where a_11 = −ρ, a_12 = 1, a_13 = −ρ − ργ[X_2]² − ργ[X_3]², a_31 = −ργ[X_2] and a_33 = [X_2]² − ρ[X_2] − ρ[X_2]³ − ργ[X_2][X_3]².

The Sylvester matrix associated to P_1 and P_3 with [X_1] as the variable is as follows:

[ a_11  a_12  a_13  0     0    ]
[ 0     a_11  a_12  a_13  0    ]
[ a_31  0     a_33  0     0    ]   (A.29)
[ 0     a_31  0     a_33  0    ]
[ 0     0     a_31  0     a_33 ]

where a_11 = −ρ, a_12 = 1, a_13 = −ρ − ργ[X_2]² − ργ[X_3]², a_31 = −ργ[X_3] and a_33 = [X_3]² − ρ[X_3] − ρ[X_3]³ − ργ[X_2]²[X_3].
The following are the Sylvester resultants (with [X_1] as variable) associated to the polynomial system (A.27), where, for brevity, we write A = 1 + γ[X_2]^2 + γ[X_3]^2, C_2 = β[X_2] − 1 − [X_2]^2 − γ[X_3]^2 and C_3 = β[X_3] − 1 − [X_3]^2 − γ[X_2]^2:

res(P_1, P_2; [X_1]) = [X_2]^3 C_2 [(C_2 + γA)^2 − γβ^2 C_2]
res(P_1, P_3; [X_1]) = [X_3]^3 C_3 [(C_3 + γA)^2 − γβ^2 C_3].
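As a sanity check on these resultants, the 5×5 Sylvester determinant can be evaluated exactly for sample numeric parameter values and compared with the closed form above. The following pure-Python sketch does this with exact rational arithmetic; the values β = 2, γ = 3, [X_2] = 5, [X_3] = 7 are arbitrary choices made only for the check.

```python
from fractions import Fraction

def det(m):
    # Exact determinant by cofactor expansion along the first row (fine for 5x5).
    n = len(m)
    if n == 1:
        return m[0][0]
    total = Fraction(0)
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

# Assumed sample values for (beta, gamma, [X_2], [X_3]).
b, g, x2, x3 = map(Fraction, (2, 3, 5, 7))

A = 1 + g * x2**2 + g * x3**2        # A = 1 + gamma*[X_2]^2 + gamma*[X_3]^2
C = b * x2 - 1 - x2**2 - g * x3**2   # C_2 = beta*[X_2] - 1 - [X_2]^2 - gamma*[X_3]^2

a11, a12, a13 = Fraction(-1), b, -A  # coefficients of P_1 in [X_1] (degree 3, zero constant)
a31, a33 = -g * x2, x2 * C           # coefficients of P_2 in [X_1] (degree 2, zero linear term)

syl = [
    [a11, a12, a13, 0, 0],
    [0, a11, a12, a13, 0],
    [a31, 0, a33, 0, 0],
    [0, a31, 0, a33, 0],
    [0, 0, a31, 0, a33],
]

closed_form = x2**3 * C * ((C + g * A)**2 - g * b**2 * C)
assert det(syl) == closed_form
```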
We investigate all possible combinations of the factors of res(P_1, P_2; [X_1]) and res(P_1, P_3; [X_1]) and their simultaneous nonnegative real zeros. For example, the factors [X_2]^3 in res(P_1, P_2; [X_1]) and [X_3]^3 in res(P_1, P_3; [X_1]) have a simultaneous nonnegative real zero, namely [X_2]* = [X_3]* = 0. The simultaneous nonnegative real zeros satisfy one of the following characteristics:
1. [X_2]* = [X_3]* = 0;
2. [X_2]* = 0 and [X_3]* > [X_2]*;
3. [X_3]* = 0 and [X_2]* > [X_3]*;
4. [X_2]* = [X_3]* > 0;
5. [X_2]* > [X_3]* > 0; and
6. [X_3]* > [X_2]* > 0.
Since the equations in the nonlinear system (A.26) have similar structure, the above enumeration of solution characteristics also applies to the relationship between [X_1] and [X_2], as well as to the relationship between [X_1] and [X_3].
We can conclude that an equilibrium point satisfies one of the following characteristics (depending on the values of β and γ):
1. [X_1]* = [X_2]* = [X_3]*;
2. [X_1]* > [X_2]* = [X_3]*;
3. [X_2]* > [X_1]* = [X_3]*;
4. [X_3]* > [X_1]* = [X_2]*;
5. [X_1]* < [X_2]* = [X_3]*;
6. [X_2]* < [X_1]* = [X_3]*;
7. [X_3]* < [X_1]* = [X_2]*;
8. [X_1]* > [X_2]* > [X_3]*;
9. [X_2]* > [X_3]* > [X_1]*;
10. [X_3]* > [X_1]* > [X_2]*;
11. [X_1]* > [X_3]* > [X_2]*;
12. [X_2]* > [X_1]* > [X_3]*; and
13. [X_3]* > [X_2]* > [X_1]*.
Each characteristic may represent a cell that is tripotent (primed), bipotent, unipotent or in a terminal state. However, it is also possible to have the origin as the equilibrium point, which is a trivial case. Our observation regarding these possible characteristics of equilibrium points is also consistent with the findings of MacArthur et al. [113].
Based on Theorem (6.6), our system may have at most 27 equilibrium points.
A.3.8 Illustration 8
Consider the system in Illustration 7 (A.3.7), where γ = 1/8 and β = 21. This system has equilibrium points satisfying all the characteristics enumerated in Illustration 7 (A.3.7). Moreover, this system has 27 equilibrium points, which is equal to the upper bound on the number of possible equilibrium points. The equilibrium points are (the following values are approximate):
([X_1]* = 1.3235 × 10^−23 ≈ 0, [X_2]* = 20.952, [X_3]* = 0),
([X_1]* = 18.619, [X_2]* = 0, [X_3]* = 18.619),
([X_1]* = 18.619, [X_2]* = 18.619, [X_3]* = 0),
([X_1]* = 0, [X_2]* = 18.619, [X_3]* = 18.619),
([X_1]* = 20.832, [X_2]* = 3.1685, [X_3]* = 3.1685),
([X_1]* = 3.1685, [X_2]* = 20.832, [X_3]* = 3.1685),
([X_1]* = 3.1685, [X_2]* = 3.1685, [X_3]* = 20.832),
([X_1]* = 4.7755 × 10^−2, [X_2]* = 4.7755 × 10^−2, [X_3]* = 4.7755 × 10^−2),
([X_1]* = 20.894, [X_2]* = 3.1056, [X_3]* = 0),
([X_1]* = 20.894, [X_2]* = 0, [X_3]* = 3.1056),
([X_1]* = 3.1056, [X_2]* = 20.894, [X_3]* = 0),
([X_1]* = 3.1056, [X_2]* = 0, [X_3]* = 20.894),
([X_1]* = 0, [X_2]* = 20.894, [X_3]* = 3.1056),
([X_1]* = 0, [X_2]* = 3.1056, [X_3]* = 20.894),
([X_1]* = 4.7741 × 10^−2, [X_2]* = 0, [X_3]* = 4.7741 × 10^−2),
([X_1]* = 4.7741 × 10^−2, [X_2]* = 4.7741 × 10^−2, [X_3]* = 0),
([X_1]* = 0, [X_2]* = 4.7741 × 10^−2, [X_3]* = 4.7741 × 10^−2),
([X_1]* = 16.752, [X_2]* = 16.752, [X_3]* = 16.752),
([X_1]* = 20.952, [X_2]* = 0, [X_3]* = 0),
([X_1]* = 0, [X_2]* = 0, [X_3]* = 20.952),
([X_1]* = 2.0033 × 10^−25 ≈ 0, [X_2]* = 4.7728 × 10^−2, [X_3]* = 0),
([X_1]* = 4.7728 × 10^−2, [X_2]* = 0, [X_3]* = 0),
([X_1]* = 0, [X_2]* = 0, [X_3]* = 4.7728 × 10^−2),
([X_1]* = 18.432, [X_2]* = 18.432, [X_3]* = 5.5685),
([X_1]* = 18.432, [X_2]* = 5.5685, [X_3]* = 18.432),
([X_1]* = 5.5685, [X_2]* = 18.432, [X_3]* = 18.432), and
([X_1]* = 0, [X_2]* = 0, [X_3]* = 0).
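Several of the values listed above can be checked against one-dimensional reductions of the equilibrium conditions: setting the other components to zero, two components equal (with the third zero), or all three components equal reduces the equilibrium condition to a quadratic in [X], since β[X]^2/(1 + (1 + kγ)[X]^2) = [X] implies (1 + kγ)[X]^2 − β[X] + 1 = 0 after dividing by [X]. The following sketch (with γ = 1/8 and β = 21 as above) verifies this:

```python
import math

beta, gamma = 21.0, 1.0 / 8.0

def roots(a, b, c):
    """Real roots of a*x^2 + b*x + c = 0, larger root first."""
    r = math.sqrt(b * b - 4.0 * a * c)
    return (-b + r) / (2.0 * a), (-b - r) / (2.0 * a)

# [X_2] = [X_3] = 0:  x^2 - beta*x + 1 = 0  ->  20.952 and 4.7728e-2
hi, lo = roots(1.0, -beta, 1.0)
assert abs(hi - 20.952) < 1e-3 and abs(lo - 4.7728e-2) < 1e-5

# [X_1] = [X_2] = [X_3]:  (1 + 2*gamma)*x^2 - beta*x + 1 = 0  ->  16.752 and 4.7755e-2
hi, lo = roots(1.0 + 2.0 * gamma, -beta, 1.0)
assert abs(hi - 16.752) < 1e-3 and abs(lo - 4.7755e-2) < 1e-5

# Two equal components, third 0:  (1 + gamma)*x^2 - beta*x + 1 = 0  ->  18.619 and 4.7741e-2
hi, lo = roots(1.0 + gamma, -beta, 1.0)
assert abs(hi - 18.619) < 1e-3 and abs(lo - 4.7741e-2) < 1e-5
```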
The stability of each equilibrium point is as follows:
([X_1]* = 1.3235 × 10^−23 ≈ 0, [X_2]* = 20.952, [X_3]* = 0) stable (terminal state),
([X_1]* = 18.619, [X_2]* = 0, [X_3]* = 18.619) stable (bipotent),
([X_1]* = 18.619, [X_2]* = 18.619, [X_3]* = 0) stable (bipotent),
([X_1]* = 0, [X_2]* = 18.619, [X_3]* = 18.619) stable (bipotent),
([X_1]* = 20.832, [X_2]* = 3.1685, [X_3]* = 3.1685) unstable,
([X_1]* = 3.1685, [X_2]* = 20.832, [X_3]* = 3.1685) unstable,
([X_1]* = 3.1685, [X_2]* = 3.1685, [X_3]* = 20.832) unstable,
([X_1]* = 4.7755 × 10^−2, [X_2]* = 4.7755 × 10^−2, [X_3]* = 4.7755 × 10^−2) unstable,
([X_1]* = 20.894, [X_2]* = 3.1056, [X_3]* = 0) unstable,
([X_1]* = 20.894, [X_2]* = 0, [X_3]* = 3.1056) unstable,
([X_1]* = 3.1056, [X_2]* = 20.894, [X_3]* = 0) unstable,
([X_1]* = 3.1056, [X_2]* = 0, [X_3]* = 20.894) unstable,
([X_1]* = 0, [X_2]* = 20.894, [X_3]* = 3.1056) unstable,
([X_1]* = 0, [X_2]* = 3.1056, [X_3]* = 20.894) unstable,
([X_1]* = 4.7741 × 10^−2, [X_2]* = 0, [X_3]* = 4.7741 × 10^−2) unstable,
([X_1]* = 4.7741 × 10^−2, [X_2]* = 4.7741 × 10^−2, [X_3]* = 0) unstable,
([X_1]* = 0, [X_2]* = 4.7741 × 10^−2, [X_3]* = 4.7741 × 10^−2) unstable,
([X_1]* = 16.752, [X_2]* = 16.752, [X_3]* = 16.752) stable (tripotent),
([X_1]* = 20.952, [X_2]* = 0, [X_3]* = 0) stable (terminal state),
([X_1]* = 0, [X_2]* = 0, [X_3]* = 20.952) stable (terminal state),
([X_1]* = 2.0033 × 10^−25 ≈ 0, [X_2]* = 4.7728 × 10^−2, [X_3]* = 0) unstable,
([X_1]* = 4.7728 × 10^−2, [X_2]* = 0, [X_3]* = 0) unstable,
([X_1]* = 0, [X_2]* = 0, [X_3]* = 4.7728 × 10^−2) unstable,
([X_1]* = 18.432, [X_2]* = 18.432, [X_3]* = 5.5685) unstable,
([X_1]* = 18.432, [X_2]* = 5.5685, [X_3]* = 18.432) unstable,
([X_1]* = 5.5685, [X_2]* = 18.432, [X_3]* = 18.432) unstable, and
([X_1]* = 0, [X_2]* = 0, [X_3]* = 0) stable.
Consider again the equilibrium point ([X_1]* = 0.10103, [X_2]* = 1.001, [X_3]* = 0) obtained earlier. To
determine the stability of this equilibrium point we use ad hoc geometric analysis. First,
we look at the intersection of Y = H
1
([X
1
]) + 1 and Y = 10[X
1
] with [X
2
] = 1.001 and
[X_3] = 0. From Figure (A.2), we determine that [X_1]* = 0.10103 is stable.
Next, we test whether [X_2]* = 1.001 is stable by looking at the intersection of Y = H_2([X_2]) and Y = 10[X_2] with [X_1] = 0.10103 and [X_3] = 0. Then, we test whether [X_3]* = 0 is stable by looking at the intersection of Y = H_3([X_3]) and Y = 10[X_3] with [X_1] = 0.10103 and [X_2] = 1.001. As shown in Figures (A.3) and (A.4), we conclude that [X_2]* = 1.001 is unstable; hence, the equilibrium point ([X_1]* = 0.10103, [X_2]* = 1.001, [X_3]* = 0) is unstable.
Figure A.2: The intersection of Y = H_1([X_1]) + 1 and Y = 10[X_1] with [X_2] = 1.001 and [X_3] = 0.
Figure A.3: The intersection of Y = H_2([X_2]) and Y = 10[X_2] with [X_1] = 0.10103 and [X_3] = 0.
Figure A.4: The intersection of Y = H_3([X_3]) and Y = 10[X_3] with [X_1] = 0.10103 and [X_2] = 1.001.
A.5 Phase portrait with infinitely many equilibrium points
For example, the phase portrait of the system

d[X_1]/dt = 5[X_1] / (1 + [X_1] + [X_2]) − [X_1]        (A.30)
d[X_2]/dt = 5[X_2] / (1 + [X_1] + [X_2]) − [X_2]

is shown in Figure (A.5). The phase portrait was graphed using the Java applet at http://www.scottsarra.org/applets/dirField2/dirField2.html [145].
Figure A.5: A sample phase portrait of the system with infinitely many non-isolated equilibrium points.
Appendix B
Multivariate Fixed Point Algorithm
We have written a Scilab [150] program for finding approximate values of stable equilibrium points. The program employs the fixed point iteration method. However, when doing numerical computations, we need to be cautious about possible round-off errors.
Algorithm 3 Multivariate fixed point algorithm (1st Part)
//input
n=input("Input n=")
for i=1:n
disp(i, "FOR EQUATION")
coeffbeta(i)=input("beta=")
K(i)=input("K=")
rho(i)=input("positive rho=")
g(i)=input("g=")
disp(i, "exponent of x")
c(i)=input("ci=")
for m=1:n
if m~=i then
disp(m, "coefficient of x")
gam(i,m)=input("gamma=")
disp(m, "exponent of x")
z(i,m)=input("cij=")
else
gam(i,m)=1
end
end
end
for i=1:n
disp(i, "initial value for x")
x(i,1)=input("=")
end
Algorithm 4 Multivariate fixed point algorithm (2nd Part)
//fixed point iteration process
tol=input("tolerance error=")
j=1
y(1)=1000
while (y(j)>tol)&(j<500002) then //500,000 max number of steps
for i=1:n
summ(i)=0
for k=1:n
if k~=i then
summ(i)=summ(i)+gam(i,k)*x(k,j)^z(i,k)
end
end
x(i,j+1)=((coeffbeta(i)*x(i,j)^c(i))/(K(i)+x(i,j)^c(i)+summ(i))..
+g(i))/rho(i)
end
j=j+1
y(j)=max(abs(x(:,j)-x(:,j-1)))
end
q=input("q for test of Q-convergence=")
for i=1:j-2
lambda(i)=(norm(x(:,i+2)-x(:,i+1)))/((norm(x(:,i+1)-x(:,i)))^q)
lambdalim=lambda(i)
end
//output
disp(x(:,j), "The approx equilibrium point is ")
disp(j-1, "number of iterations is ")
disp(lambdalim, "when q=1, the approx asymptotic error constant is ")
We illustrate how to use the multivariate fixed point algorithm in determining the stability of equilibrium points.
Consider the case where all parameters are equal to 1 except for c_i = c_ij = 3, β_i = 20, ρ_i = 10, g_2 = 0 and g_3 = 0, for i, j = 1, 2, 3.
One of the equilibrium points is ([X_1]* ≈ 0.10039, [X_2]* ≈ 1.6173, [X_3]* = 0). We perturb this equilibrium point by adding and subtracting a small positive number per component (but note that the components should not be negative). Suppose we use the small positive number 0.00001; then we have ([X_1] = 0.10039 + 0.00001, [X_2] = 1.6173 + 0.00001, [X_3] = 0 + 0.00001) and ([X_1] = 0.10039 − 0.00001, [X_2] = 1.6173 − 0.00001, [X_3] = 0). We use these two perturbed points as the initial conditions in the multivariate fixed point algorithm.
If both sequences of points generated by the multivariate fixed point algorithm (using the two initial conditions) converge to the equilibrium point ([X_1]* ≈ 0.10039, [X_2]* ≈ 1.6173, [X_3]* = 0), then we can conclude that this equilibrium point is stable. If at least one of the two sequences tends away from the equilibrium point, then we can approximately conclude that the equilibrium point is unstable. We use two perturbed points to minimize the probability of converging to a saddle point. We say "approximately conclude" because a point may seem unstable yet be stable with only a very small basin of attraction.
Assuming a tolerance error of 10^−5, the multivariate fixed point algorithm shows that the equilibrium point ([X_1]* = 0.10039, [X_2]* = 1.6173, [X_3]* = 0) is stable. Moreover, in this example, the sequences of points generated by the multivariate fixed point algorithm converge linearly.
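A minimal Python sketch of this procedure for the present example is given below (the parameter values follow the example above; the iteration count and tolerance are my own choices):

```python
# Fixed point iteration x_i <- (beta*x_i^c / (K + sum_j x_j^c) + g_i) / rho
# with c = 3, beta = 20, rho = 10, K = 1, gamma = 1 and g = (1, 0, 0).
beta, rho, c = 20.0, 10.0, 3
g = (1.0, 0.0, 0.0)

def step(x):
    powers = [xi**c for xi in x]
    denom = 1.0 + sum(powers)  # K_i + x_i^c + sum_{j != i} gamma*x_j^c with K = gamma = 1
    return tuple((beta * powers[i] / denom + g[i]) / rho for i in range(3))

# Start from the equilibrium point perturbed by +0.00001 per component.
x = (0.10039 + 1e-5, 1.6173 + 1e-5, 0.0 + 1e-5)
for _ in range(200):
    x_new = step(x)
    if max(abs(a - b) for a, b in zip(x_new, x)) < 1e-10:
        x = x_new
        break
    x = x_new

# The sequence converges back to the equilibrium point, indicating stability.
assert abs(x[0] - 0.10039) < 1e-3
assert abs(x[1] - 1.6173) < 1e-3
assert x[2] < 1e-6
```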
Appendix C
More on Bifurcation of Parameters:
Illustrations
C.1 Adding g_i > 0, Illustration 1
Consider the following system

d[X_1]/dt = 3[X_1]^3 / (1 + [X_1]^3 + [X_2]^3 + [X_3]^3) − [X_1]
d[X_2]/dt = 3[X_2]^3 / (1 + [X_1]^3 + [X_2]^3 + [X_3]^3) − [X_2]        (C.1)
d[X_3]/dt = 3[X_3]^3 / (1 + [X_1]^3 + [X_2]^3 + [X_3]^3) − [X_3].
This system has the following equilibrium points (the following are approximate values):
([X_1]* = 0.6527, [X_2]* = 0, [X_3]* = 0) unstable,
([X_1]* = 0, [X_2]* = 0.6527, [X_3]* = 0) unstable,
([X_1]* = 0, [X_2]* = 0, [X_3]* = 0.6527) unstable,
([X_1]* = 2.8794, [X_2]* = 0, [X_3]* = 0) stable (terminal state),
([X_1]* = 0, [X_2]* = 2.8794, [X_3]* = 0) stable (terminal state),
([X_1]* = 0, [X_2]* = 0, [X_3]* = 2.8794) stable (terminal state),
([X_1]* = 1, [X_2]* = 1, [X_3]* = 0) stable (bipotent),
([X_1]* = 1, [X_2]* = 0, [X_3]* = 1) stable (bipotent),
([X_1]* = 0, [X_2]* = 1, [X_3]* = 1) stable (bipotent), and
([X_1]* = 0, [X_2]* = 0, [X_3]* = 0) stable.
If we add g_1 = 1 to system (C.1), that is, add 1 to the right-hand side of the first equation (system (C.2)), then the equilibrium point ([X_1]* = 3.9522, [X_2]* = 0, [X_3]* = 0) arises, which is stable.
If we also add g_2 = 1, that is, consider the following system

d[X_1]/dt = 3[X_1]^3 / (1 + [X_1]^3 + [X_2]^3 + [X_3]^3) − [X_1] + 1
d[X_2]/dt = 3[X_2]^3 / (1 + [X_1]^3 + [X_2]^3 + [X_3]^3) − [X_2] + 1        (C.3)
d[X_3]/dt = 3[X_3]^3 / (1 + [X_1]^3 + [X_2]^3 + [X_3]^3) − [X_3],

then we have the following equilibrium points (the following are approximate values):
([X_1]* = 2.4507, [X_2]* = 2.4507, [X_3]* = 0) stable (bipotent),
([X_1]* = 1.0581, [X_2]* = 3.8929, [X_3]* = 0) stable (unipotent), and
([X_1]* = 3.8929, [X_2]* = 1.0581, [X_3]* = 0) stable (unipotent).
C.2 Adding g_i > 0, Illustration 2
Consider the following system

d[X_1]/dt = 5[X_1]^2 / (1 + [X_1]^2 + [X_2]^2) − 2[X_1]        (C.4)
d[X_2]/dt = 5[X_2]^2 / (1 + [X_1]^2 + [X_2]^2) − 2[X_2].
We can add g_1 > 0 to the system if we want [X_1]* to sufficiently dominate [X_2]*. We can do ad hoc geometric analysis to determine whether the value of g_1 is enough to drive the system to have a sole equilibrium point where [X_1]* > [X_2]*.
We first graph

Y = 5[X_1]^2 / (1 + [X_1]^2) + g_1 and Y = 2[X_1],

then we determine whether the two curves have a sole intersection. If they have more than one intersection, we increase the value of g_1. We find the value of the sole intersection and denote it by [X_1]^(s).
We substitute [X_1]^(s) for [X_1] in Y = 5[X_2]^2 / (1 + [X_1]^2 + [X_2]^2). Then we determine whether

Y = 5[X_2]^2 / (1 + ([X_1]^(s))^2 + [X_2]^2) and Y = 2[X_2]

intersect at only one point. If there is more than one intersection, we increase g_1 and adjust [X_1]^(s). If there is only one intersection, then [X_2]* = 0.
Figure C.1: Determining the adequate g_1 > 0 that would give rise to a sole equilibrium point where [X_1]* > [X_2]*.
The sole stable equilibrium point of the system with adequate g_1 > 0 is the computed ([X_1]* = [X_1]^(s) ≈ 2.698, [X_2]* = 0).
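The computed intersection can be checked numerically. The sketch below assumes g_1 = 1, which is consistent with the computed value [X_1]^(s) ≈ 2.698; it verifies both the sole intersection of the first pair of curves on [2, 3] and the absence of a second positive intersection for [X_2]:

```python
g1 = 1.0  # assumed value of g_1, consistent with the computed 2.698

def f(x):
    # Zero of f marks an intersection of Y = 5x^2/(1+x^2) + g1 and Y = 2x.
    return 2.0 * x - (5.0 * x * x / (1.0 + x * x) + g1)

# Bisection on [2, 3] for the intersection [X_1]^(s).
lo, hi = 2.0, 3.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
x1s = 0.5 * (lo + hi)
assert abs(x1s - 2.698) < 1e-3

# Second curve check: 5x2^2/(1 + x1s^2 + x2^2) = 2x2 reduces (for x2 > 0) to
# 2*x2^2 - 5*x2 + 2*(1 + x1s^2) = 0, whose discriminant is negative, so the
# only nonnegative intersection is [X_2] = 0.
disc = 25.0 - 16.0 * (1.0 + x1s * x1s)
assert disc < 0.0
```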
C.3 g_i as a function of time
Making g_i a function of time (i.e., g_i changes through time) means that we are adding another equation and state variable to our system of ODEs. We can think of g_i as an additional node in our GRN, which we call the injection node. In this thesis, we consider two types of functions: a linear function with negative slope and an exponential function with negative exponent. Notice that both functions represent a g_i that degrades through time.
Suppose that, given a specific initial condition, the solution to our system tends to an equilibrium point with [X_i]* = 0; then one strategy is to add g_i > 0. The idea of adding a sufficient amount of g_i > 0 is to make the solution of our system escape a certain equilibrium point. However, it is sometimes impractical or infeasible to continuously introduce a constant amount of exogenous stimulus to control cell fates, which is why we can instead consider introducing a g_i that degrades through time.
We numerically investigate the case where adding a degrading amount of g_i affects cell fate. However, this strategy is only applicable to systems with multiple stable equilibrium points where convergence of trajectories is sensitive to initial conditions.
C.3.1 As a linear function
Suppose

g_i(t) = −α_i t + g_i(0)   or        (C.5)
dg_i/dt = −α_i

where the degradation rate α_i is positive. Whenever the formula gives g_i(t) < 0, we set g_i(t) = 0.
Here is an example where, without g_1, [X_1] converges to 0, while with g_1(t), [X_1]* ≈ 2.98745 (see Figure (C.3)). Figure (C.2) shows the numerical solution to the system

d[X_1]/dt = 3[X_1]^5 / (1 + [X_1]^5 + [X_2]^5) − [X_1]        (C.6)
d[X_2]/dt = 3[X_2]^5 / (1 + [X_1]^5 + [X_2]^5) − [X_2].
Meanwhile, Figure (C.3) shows the numerical solution to the system (with g_1(t))

d[X_1]/dt = 3[X_1]^5 / (1 + [X_1]^5 + [X_2]^5) − [X_1] + g_1
d[X_2]/dt = 3[X_2]^5 / (1 + [X_1]^5 + [X_2]^5) − [X_2]        (C.7)
dg_1/dt = −1

with the limit g_1 ≥ 0.
The initial values are [X_1]_0 = 0.5, [X_2]_0 = 1 and g_1(0) = 5. By looking at Figure (C.4), we can see that [X_1]* ≠ 0.
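A minimal Euler simulation of system (C.7) reproduces this behavior (the step size and end time below are my own choices):

```python
# Euler integration of system (C.7): without the transient stimulus, [X_1]
# would decay to 0 from this initial condition; the degrading g_1 drives it
# to the high state ~2.98745 instead.
h, steps = 0.01, 3000  # step size 0.01, end time 30
x1, x2, g1 = 0.5, 1.0, 5.0

for _ in range(steps):
    denom = 1.0 + x1**5 + x2**5
    dx1 = 3.0 * x1**5 / denom - x1 + g1
    dx2 = 3.0 * x2**5 / denom - x2
    x1, x2 = x1 + h * dx1, x2 + h * dx2
    g1 = max(g1 - h * 1.0, 0.0)  # dg_1/dt = -1, with the limit g_1 >= 0

assert abs(x1 - 2.98745) < 1e-2  # [X_1] escapes 0 and reaches the high state
assert x2 < 1e-2                 # [X_2] is suppressed
```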
Figure C.3: Time series of [X_1] and [X_2] for the system with a linearly degrading g_1.

C.3.2 As an exponential function
Suppose

g_i(t) = g_i(0)e^(−α_i t)   or        (C.8)
dg_i/dt = −α_i g_i

where the degradation rate α_i is positive. Whenever the formula gives g_i(t) < 0, we set g_i(t) = 0.
Consider the system used in the previous subsection (C.3.1). The time series in Figure (C.5) has the same behavior as Figure (C.3). Figure (C.5) shows the numerical simulation of the system

d[X_1]/dt = 3[X_1]^5 / (1 + [X_1]^5 + [X_2]^5) − [X_1] + g_1
d[X_2]/dt = 3[X_2]^5 / (1 + [X_1]^5 + [X_2]^5) − [X_2]        (C.9)
dg_1/dt = −g_1

with the limit g_1 ≥ 0, where the initial values are [X_1]_0 = 0.5, [X_2]_0 = 1 and g_1(0) = 5.
Figure C.5: Time series of [X_1] and [X_2] for the system with an exponentially degrading g_1.

…, β_2 = 3, γ_12 = 1, γ_21 = 1, K_1 = 1, K_2 = 1, ρ_1 = 1, ρ_2 = 1, g_1 = 1 and g_2 = 0.
A saddle node bifurcation arises when we vary the parameter β_2; the bifurcation diagram is shown in Figure (C.15). Also, a saddle node bifurcation arises when we vary the parameter g_2; the bifurcation diagram is shown in Figure (C.16).
Figure C.15: Saddle node bifurcation; β_2 is varied.
Figure C.16: Saddle node bifurcation; g_2 is varied.
C.5.3 Illustration 3
Suppose we have the system

d[X_1]/dt = β_1[X_1]^c / (K_1 + [X_1]^c + γ[X_2]^c + γ[X_3]^c + γ[X_4]^c) − ρ_1[X_1] + g_1
d[X_2]/dt = β_2[X_2]^c / (K_2 + [X_2]^c + γ[X_1]^c + γ[X_3]^c + γ[X_4]^c) − ρ_2[X_2] + g_2        (C.13)
d[X_3]/dt = β_3[X_3]^c / (K_3 + [X_3]^c + γ[X_1]^c + γ[X_2]^c + γ[X_4]^c) − ρ_3[X_3] + g_3
d[X_4]/dt = β_4[X_4]^c / (K_4 + [X_4]^c + γ[X_1]^c + γ[X_2]^c + γ[X_3]^c) − ρ_4[X_4] + g_4

with initial condition X_0 = (5, 5, 5, 5) as well as parameter values c = 2, β_i = 3 (i = 1, 2, 3, 4), γ = 1, K_i = 1 (i = 1, 2, 3, 4), ρ_i = 1 (i = 1, 2, 3, 4), g_1 = 1, g_2 = 0, g_3 = 0 and g_4 = 0.
Figures (C.17) and (C.18) illustrate the possible occurrence of saddle node bifurcation.
Figure C.17: Saddle node bifurcation; β_2 is varied.
Figure C.18: Saddle node bifurcation; g_2 is varied.
Appendix D
Scilab Program for Euler-Maruyama
Algorithm 5 Euler-Maruyama with Euler method (1st Part)
//input parameters
n=input("Input n=")
for i=1:n
disp(i, "FOR EQUATION")
coeffbeta(i)=input("beta=")
K(i)=input("K=")
rho(i)=input("rho=")
g(i)=input("g=")
disp(i, "exponent of x")
c(i)=input("ci=")
for m=1:n
if m~=i then
disp(m, "coefficient of x")
gam(i,m)=input("gamma=")
disp(m, "exponent of x")
z(i,m)=input("cij=")
else
gam(i,m)=1
end
end
sig(i)=input("sigma=")
end
for i=1:n
disp(i, "initial value for x")
y(i,1)=input("=")
x(i,1)=y(i,1)
end
Algorithm 6 Euler-Maruyama with Euler method (2nd Part)
//Euler-Maruyama process
tend=input("end time of simulation t_end=")
hstep=input("step size=")
j=1
while (j<tend/hstep+1) then
for i=1:n
summ(i)=0
summx(i)=0
for k=1:n
if k~=i then
summ(i)=summ(i)+gam(i,k)*y(k,j)^z(i,k)
summx(i)=summx(i)+gam(i,k)*x(k,j)^z(i,k)
end
end
G=sqrt(y(i,j)) //You can change G
rand("normal")
y(i,j+1)=y(i,j)+((coeffbeta(i)*y(i,j)^c(i))/(K(i)+y(i,j)^c(i)+..
summ(i))+g(i)-rho(i)*y(i,j))*hstep+sig(i)*(G)*..
((sqrt(hstep))*rand())
x(i,j+1)=x(i,j)+((coeffbeta(i)*x(i,j)^c(i))/(K(i)+x(i,j)^c(i)+..
summx(i))+g(i)-rho(i)*x(i,j))*hstep
if y(i,j+1)<0 then
y(i,j+1)=0
end
t(1)=0
t(j+1)=t(j)+hstep
end
j=j+1
end
plot(t,y)
plot(t,x,".")
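A minimal Python analogue of the Scilab program above, for a single gene, is sketched below. The clipping at 0 and the noise coefficient G = sqrt(x) follow the Scilab code, while the parameter values β = 5, K = 1, ρ = 2 are my own example choices:

```python
import math
import random

# Euler-Maruyama for  d[X] = (beta*[X]^2/(K + [X]^2) - rho*[X]) dt + sigma*sqrt([X]) dW.
beta, K, rho = 5.0, 1.0, 2.0

def euler_maruyama(x0, sigma, h, steps, seed=0):
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        drift = beta * x * x / (K + x * x) - rho * x
        x += drift * h + sigma * math.sqrt(x) * math.sqrt(h) * rng.gauss(0.0, 1.0)
        x = max(x, 0.0)  # clip at 0, as in the Scilab code
    return x

# With sigma = 0 the scheme reduces to the Euler method and converges to the
# stable equilibrium [X]* = 2 (the larger root of 2x^2 - 5x + 2 = 0).
assert abs(euler_maruyama(1.0, 0.0, 0.01, 2000) - 2.0) < 1e-3

# With noise, the clipped trajectory stays nonnegative.
assert euler_maruyama(1.0, 0.3, 0.01, 2000) >= 0.0
```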
List of References
[1] Understanding stem cells: An overview of the science and issues from the National Academies. Published by The National Academies, 2006. Available at www.nationalacademies.org/stemcells.
[2] Reflection paper on stem cell-based medicinal products, Tech. Report EMA/CAT/571134/2009, Committee for Advanced Therapies, European Medicines Agency, London, United Kingdom, 2011.
[3] B. D. Aguda and A. Friedman, Models of Cellular Regulation, Oxford University Press, NY, 2008.
[4] R. Albert and A.-L. Barabási, Statistical mechanics of complex networks, Reviews of Modern Physics, 74 (2002), pp. 47–95.
[5] E. Allen, A practical introduction to SDEs and SPDEs in mathematical biology. Lecture notes for Joint 2011 MBI-NIMBioS-CAMBAM Summer Graduate Workshop; available at http://www.mbi.osu.edu/eduprograms/2011materials/eallenMBI2011.pdf, July-August 2011.
[6] M. Andrecut, Monte-Carlo simulation of a multi-dimensional switch-like model of stem cell differentiation, in Applications of Monte Carlo Methods in Biology, Medicine and Other Fields of Science, InTech, 2011.
[7] M. Andrecut et al., A general model for binary cell fate decision gene circuits with degeneracy: Indeterminacy and switch behavior in the absence of cooperativity, PLoS ONE, 6 (2011), p. e19358.
[8] D. Angeli, J. E. Ferrell, Jr., and E. D. Sontag, Detection of multistability, bifurcations, and hysteresis in a large class of biological positive-feedback systems, PNAS, 101 (2004), pp. 1822–1827.
[9] J. Ansel et al., Cell-to-cell stochastic variation in gene expression is a complex genetic trait, PLoS Genetics, 4 (2008), p. e1000049.
[10] A. Arbeláez et al., A generic framework to model, simulate and verify genetic regulatory networks, in Conferencia Latinoamericana de Informática (CLEI), Santiago de Chile, Chile, August 2006.
[11] M. N. Artyomov, A. Meissner, and A. K. Chakraborty, A model for genetic and epigenetic regulatory networks identifies rare pathways for transcription factor induced pluripotency, PLoS Computational Biology, 6 (2010), p. e1000785.
[12] G. Balázsi, A. van Oudenaarden, and J. J. Collins, Cellular decision making and biological noise: From microbes to mammals, Cell, 144 (2011), pp. 910–925.
[13] P. Baldi and G. W. Hatfield, DNA Microarrays and Gene Expression: From Experiments to Data Analysis and Modeling, Cambridge University Press, Cambridge, 2002.
[14] D. Bergmann, Genetic Network Modelling and Inference, PhD thesis, The University of Nottingham, 2010.
[15] W. J. Blake et al., Noise in eukaryotic gene expression, Nature, 422 (2003), pp. 633–637.
[16] B. R. Blazar, P. A. Taylor, and D. A. Vallera, Adult bone marrow-derived pluripotent hematopoietic stem cells are engraftable when transferred in utero into moderately anemic fetal recipients, Blood, 85 (1995), pp. 833–841.
[17] I. W. M. Bleylevens, Algebraic Polynomial System Solving and Applications, PhD thesis, Universiteit Maastricht, 2010.
[18] R. Blossey, L. Cardelli, and A. Phillips, A compositional approach to the stochastic dynamics of gene networks, Lecture Notes in Computer Science, 3653/2005 (2005).
[19] K. R. Boheler, Stem cell pluripotency: A cellular trait that depends on transcription factors, chromatin state and a checkpoint deficient cell cycle, Journal of Cellular Physiology, 221 (2009), pp. 10–17.
[20] A. Bongso and E. H. Lee, Stem cells: Their definition, classification and sources, in Stem Cells - From Bench to Bedside, World Scientific, 2005.
[21] L. Bortolussi, Constraint-based approaches to stochastic dynamics of biological systems, PhD thesis, Università degli Studi di Udine, 2006.
[22] T. Burdon, A. Smith, and P. Savatier, Signalling, cell cycle and pluripotency in embryonic stem cells, Trends in Cell Biology, 12 (2002), pp. 432–438.
[23] L. A. Buttitta and B. A. Edgar, Mechanisms controlling cell cycle exit upon terminal differentiation, Current Opinion in Cell Biology, 19 (2007), pp. 697–704.
[24] N. A. Campbell and J. B. Reece, Biology, Pearson, CA, sixth ed., 2002.
[25] H. H. Chang et al., Multistable and multistep dynamics in neutrophil differentiation, BMC Cell Biology, 7:11 (2006).
[26] ———, Transcriptome-wide noise controls lineage choice in mammalian progenitor cells, Nature, 453 (2008), pp. 544–548.
[27] M. Chaves, A. Sengupta, and E. D. Sontag, Geometry and topology of parameter space: investigating measures of robustness in regulatory networks, Journal of Mathematical Biology, 59 (2009), pp. 315–358.
[28] B. Chen and C. Li, On the interplay between entropy and robustness of gene regulatory networks, Entropy, 12 (2010), pp. 1071–1101.
[29] L. Chen et al., eds., Modeling Biomolecular Networks in Cells, Springer-Verlag, London, 2010.
[30] L. Chen, R.-S. Wang, and X.-S. Zhang, Biomolecular Networks: Methods and Applications in Systems Biology, Wiley, NJ, 2009.
[31] T. Chen, H. L. He, and G. M. Church, Modeling gene expression with differential equations, in Pacific Symposium on Biocomputing, 1999.
[32] V. Chickarmane et al., Transcriptional dynamics of the embryonic stem cell switch, PLoS Computational Biology, 2 (2006), p. e123.
[33] S. Choi, ed., Introduction to Systems Biology, Humana Press, NJ, 2007.
[34] B. Christen et al., Regeneration and reprogramming compared, BMC Biology, 8 (2010).
[35] O. Cinquin, Horloges, gradients, et réseaux moléculaires : modèles mathématiques de la morphogenèse, PhD thesis, Université Joseph Fourier - Grenoble, 2005.
[36] O. Cinquin and J. Demongeot, Positive and negative feedback: striking a balance between necessary antagonists, Journal of Theoretical Biology, 216 (2002), pp. 229–241.
[37] ———, Roles of positive and negative feedback in biological systems, Comptes Rendus Biologies, 325 (2002), pp. 1085–1095.
[38] ———, High-dimensional switches and the modelling of cellular differentiation, Journal of Theoretical Biology, 233 (2005), pp. 391–411.
[39] P. Collas and A. Håkelien, Teaching cells new tricks, Trends in Biotechnology, 21 (2003), pp. 354–361.
[40] E. Conrad, Oscill8. Software; version 2.0.11.24095.
[41] F. Crick, Central dogma of molecular biology, Nature, 227 (1970), pp. 561–563.
[42] A. Crombach and P. Hogeweg, Evolution of evolvability in gene regulatory networks, PLoS Computational Biology, 4 (2008), p. e1000112.
[43] F. d'Alché-Buc et al., A dynamic model of gene regulatory networks based on inertia principle, in Studies in Fuzziness and Soft Computing, Springer, 2005.
[44] L. David et al., Looking into the black box: Insights into the mechanisms of somatic cell reprogramming, Genes, 2 (2011), pp. 81–106.
[45] H. de Jong, Modeling and simulation of genetic regulatory systems: A literature review, Journal of Computational Biology, 9 (2002), pp. 67–103.
[46] H. de Jong et al., Qualitative simulation of genetic regulatory networks: Method and application, in Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence, IJCAI-01, San Mateo, CA, 2001, pp. 67–73.
[47] H. de Jong and J. Geiselmann, Modeling and simulation of genetic regulatory networks by ordinary differential equations, in Genomic Signal Processing and Statistics, Hindawi, 2005.
[48] W. Deng, Stem cells and drug discovery: the time has never been better, Trends in Bio/Pharmaceutical Industry, 6 (2010), pp. 12–20.
[49] A. Diaz et al., Algebraic algorithms, in Handbook on Algorithms and Theory of Computation, CRC Press, 1998.
[50] P. J. Donovan and J. Gearhart, The end of the beginning for pluripotent stem cells, Nature, 414 (2001), pp. 92–97.
[51] D. Egli, G. Birkhoff, and K. Eggan, Mediators of reprogramming: transcription factors and transitions through mitosis, Nature Reviews: Molecular Cell Biology, 9 (2008), pp. 505–516.
[52] H. El Samad et al., Stochastic modelling of gene regulatory networks, International Journal of Robust and Nonlinear Control, 15 (2005), pp. 691–711.
[53] H. El Samad and M. Khammash, Stochastic stability and its application to the analysis of gene regulatory networks, in 43rd IEEE Conference on Decision and Control, vol. 3, 2004, pp. 3001–3006.
[54] ———, Regulated degradation is a mechanism for suppressing stochastic fluctuations in gene regulatory networks, Biophysical Journal, 90 (2006), pp. 3749–3761.
[55] R. Erban et al., Gene regulatory networks: A coarse-grained, equation-free approach to multiscale computation, The Journal of Chemical Physics, 124 (2006), p. 084106.
[56] J. Farlow et al., Differential Equations and Linear Algebra, Pearson, NJ, second ed., 2007.
[57] W. L. Farrar, ed., Cancer Stem Cells, Cambridge University Press, Cambridge, 2010.
[58] B. Feng et al., Reprogramming of fibroblasts into induced pluripotent stem cells with orphan nuclear receptor Esrrb, Nature Cell Biology, 11 (2008), pp. 197–203.
[59] S. Filip et al., Stem cells and the phenomena of plasticity and diversity: a limiting property of carcinogenesis, Stem Cells and Development, 17 (2008), pp. 1031–1038.
[60] T. Fournier, Stochastic Models of a Self Regulated Gene Network, PhD thesis, Université de Fribourg, 2008.
[61] P. François and V. Hakim, Design of genetic networks with specified functions by evolution in silico, PNAS, 101 (2004), pp. 580–585.
[62] P. Fu and S. Panke, eds., Systems Biology and Synthetic Biology, Wiley, NJ, 2009.
[63] A. Funahashi et al., CellDesigner: A modeling tool for biochemical networks, in Proceedings of the 2006 Winter Simulation Conference, 2006, pp. 1707–1712.
[64] C. Furusawa and K. Kaneko, Theory of robustness of irreversible differentiation in a stem cell system: chaos hypothesis, Journal of Theoretical Biology, 209 (2001), pp. 395–416.
[65] R. Ganguly and I. K. Puri, Mathematical model for the cancer stem cell hypothesis, Cell Proliferation, 39 (2006), pp. 3–14.
[66] T. S. Gardner and J. J. Faith, Reverse-engineering transcription control net-
works, Physics of Life Reviews, 2 (2005), pp. 6588.
[67] A. Garg, Implicit Methods for Modeling Gene Regulatory Networks, PhD thesis,