Advances in Physics, 2015
Vol. 64, No. 1, 1–137, http://dx.doi.org/10.1080/00018732.2015.1037068

REVIEW ARTICLE

Landscape and flux theory of non-equilibrium dynamical systems with application to biology
Jin Wang a,b,c ∗

a State Key Laboratory of Electroanalytical Chemistry, Changchun Institute of Applied Chemistry, Chinese Academy of Sciences, Jilin, People’s Republic of China; b College of Physics, Jilin University, Jilin, People’s Republic of China; c Department of Chemistry, Physics & Applied Mathematics, State University of New York at Stony Brook, Stony Brook, USA

(Received 12 September 2014; accepted 24 March 2015)

∗Email: jin.wang.1@stonybrook.edu

We present a review of the recently developed landscape and flux theory for non-equilibrium
dynamical systems. We point out that the global natures of the associated dynamics for
non-equilibrium system are determined by two key factors: the underlying landscape and,
importantly, a curl probability flux. The landscape (U) reflects the probability of states (P)
(U = − ln P) and provides a global characterization and a stability measure of the system.
The curl flux term measures how much detailed balance is broken and is one of the two main
driving forces for the non-equilibrium dynamics in addition to the landscape gradient. Equilib-
rium dynamics resembles electron motion in an electric field, while non-equilibrium dynamics
resembles electron motion in both electric and magnetic fields. The landscape and flux the-
ory has many interesting consequences including (1) the fact that irreversible kinetic paths
do not necessarily pass through the landscape saddles; (2) non-equilibrium transition state
theory at the new saddle on the optimal paths for small but finite fluctuations; (3) a gener-
alized fluctuation–dissipation relationship for non-equilibrium dynamical systems where the
response function is not just equal to the fluctuations at the steady state alone as in the equilib-
rium case but there is an additional contribution from the curl flux in maintaining the steady
state; (4) non-equilibrium thermodynamics where the free energy change is not just equal to
the entropy production alone, as in the equilibrium case, but also there is an additional house-
keeping contribution from the non-zero curl flux in maintaining the steady state; (5) gauge
theory and a geometrical connection where the flux is found to be the origin of the gauge field
curvature and the topological phase in analogy to the Berry phase in quantum mechanics; (6)
coupled landscapes where non-adiabaticity of multiple landscapes in non-equilibrium dynam-
ics can be analyzed using the landscape and flux theory and an eddy current emerges from the
non-zero curl flux; (7) stochastic spatial dynamics where landscape and flux theory can be gen-
eralized for non-equilibrium field theory. We provide concrete examples of biological systems
to demonstrate the new insights from the landscape and flux theory. These include models of
(1) the cell cycle where the landscape attracts the system down to an oscillation attractor while
the flux drives the coherent motion on the oscillation ring; the different phases of the cell cycle
are identified as local basins on the cycle path, and biological checkpoints are identified as local
barriers or transition states between the local basins on the cell-cycle path; (2) stem cell differ-
entiation where the Waddington landscape for development as well as the differentiation and
reprogramming paths can be quantified; (3) cancer biology where cancer can be described as a
disease of having multiple cellular states and the cancer state as well as the normal state can be
quantified as basins of attractions on the underlying landscape while the transitions between
normal and cancer states can be quantified as the transitions between the two attractors; (4)
evolution where more general evolution dynamics beyond Wright and Fisher can be quanti-
fied using the specific example of allele frequency-dependent selection; (5) ecology where the
landscape and flux as well as the global stability of predator–prey, cooperation and competition
are quantified; (6) neural networks where general asymmetrical connections are considered for
learning and memory, and gene self-regulators where non-adiabatic dynamics of gene expression
can be described with the landscape and flux in expanded dimensions and analytically treated;
(7) chaotic strange attractor where the flux is crucial for the chaotic dynamics; (8) development
in space where spatial landscape can be used to describe the process and pattern formation. We
also give the philosophical implications of the theory and the outlook for future studies.
PACS: 05, statistical physics, thermodynamics, and nonlinear dynamical systems; 64.60.aq, networks;
87.18.Vf, systems biology; 87.19.ll, models of single neurons and networks; 87.23.-n, ecology and
evolution; 05.30.-d, quantum statistical mechanics

Keywords: landscape; flux; non-equilibrium systems and networks; non-adiabatic; spatial; quantum

Contents PAGE
1 Introduction 4
2 Landscape and flux theory of non-equilibrium dynamical complex systems 9
2.1 Decomposition of the driving force into a landscape gradient and curl flux
for non-equilibrium dynamical complex systems 9
2.2 Global stability for non-equilibrium systems 12
2.2.1 Intrinsic non-equilibrium potential landscape as a Lyapunov function for
quantifying global stability 12
2.2.2 Force decomposition into a non-equilibrium intrinsic potential
landscape and an intrinsic curl flux for deterministic dynamics 13
2.3 The origin of non-equilibrium flux in the dynamical systems and networks 14
2.3.1 Flux and its origin in a mono-stable kinetic cycle 15
2.3.2 Flux and the potential landscape in the stable limit cycle 18
2.4 Non-equilibrium kinetic paths 21
2.5 Non-equilibrium transition state rates 26
2.5.1 Equilibrium transition state rate 27
2.5.2 The exponential contribution of the non-equilibrium transition state rate 29
2.5.3 The transition state rate theory for non-equilibrium systems 30
2.6 Non-equilibrium thermodynamics 32
2.7 FDT for intrinsic non-equilibrium systems 34
2.8 Gauge field, FDT 37
3 Multiple landscapes, curl flux, and non-adiabaticity 40
3.1 Introduction 40
3.2 Theory of non-adiabatic non-equilibrium potential and flux
landscape for general dynamical systems 42
3.3 A one variable coupled landscape 44
4 Spatial fields, landscapes, and fluxes 47
4.1 Potential and flux field landscape theory for stochastic spatial
non-equilibrium systems 48
4.2 The Lyapunov functional for spatially dependent dynamical systems 52
4.3 Non-equilibrium thermodynamics of stochastic spatial systems 54
5 The cell cycle: limit cycle oscillations 57
5.1 Model for the cell-cycle network 58
5.2 Self-consistent mean field approximation 59
5.3 Results 60
6 Cell fate decisions: stem cell differentiation and reprogramming, paths, and rates 66
6.1 The Waddington landscape for key modules of stem cell differentiation,
paths, and rates 67
6.2 The Waddington landscape and epigenetics 70


6.3 The Waddington landscape for large networks 72
7 Landscapes and paths of cancer 77
7.1 Introduction 77
7.2 Results 79
7.2.1 The underlying cancer gene regulatory network 79
7.2.2 The landscape of the cancer network 79
7.2.3 Dominant paths among normal, cancer, and apoptosis states 82
7.2.4 Changes in landscape topography of cancer 84
8 Evolution 85
8.1 Introduction 85
8.2 Evolutionary driving forces: selection, mutation, and genetic drift 86
8.2.1 Selection 86
8.2.2 Mutation 87
8.2.3 Genetic drift 87
8.3 Mean fitness as the adaptive landscape for the frequency-independent
selection systems 88
8.4 The adaptive landscape for general evolutionary dynamics 88
8.5 Potential and flux landscape of a group-help model with frequency-dependent
selection for evolution 89
8.6 The underlying potential–flux landscape for general evolutionary dynamics 90
8.7 Generalizing Fisher’s FTNS 92
8.8 The Red Queen hypothesis explained by a generalized FTNS 94
9 Ecology 95
9.1 Introduction 95
9.2 The ecological dynamical models 96
9.3 The potential landscapes and fluxes of ecosystems: predation, competition,
and mutualism 98
9.4 Discussion 100
10 Landscape and fluxes of neural networks 100
10.1 Introduction 100
10.2 The dynamics of general neural networks 102
10.3 Potential and flux landscape of neural networks 103
10.4 Flux and asymmetric synaptic connections in general neural networks 105
10.5 Potential and flux landscape for REM/non-REM cycle 106
11 Chaos: Lorenz strange attractor 107
11.1 Introduction 107
11.2 Landscape and probabilistic flux 107
11.3 Flux may provide a clue to the formation of chaos 111
12 Multiple landscapes and the curl flux for a self-regulator 113
12.1 Potential, circulation flux, and eddy current 115
12.2 Results and discussions 117
13 Spatial landscapes and development 123
13.1 Introduction 123
13.2 Master equation 123
13.3 Results and discussions 124
14 Conclusion 127
Acknowledgments 129
Notation 129
Disclosure statement 130


Funding 130
References 130

1. Introduction
The world around us and even ourselves are dynamic and not at equilibrium [1–17]. The physical
world, including the weather, oceans, the Earth, the Sun, the solar systems, the galaxies, and
even our universe (multiverses), is dynamically evolving. The biological world, including the
cells, organs, plants, animals, humans, evolution, and ecology as well as society and economics,
functions in a dynamical way. These dynamical complex systems are not closed systems. In fact,
they are open systems with constant exchange of energy, materials, and information with outside
environments. Therefore, these systems are intrinsically at non-equilibrium.
Traditionally, a successful framework and corresponding comprehensive recipes have been
laid out for studying complex closed or equilibrium systems in a global and quantitative way
without significant exchange of energy, materials, and information with the outside environments
[1,4]. Equilibrium thermodynamics and statistical mechanics have been successfully applied in
physical and biological systems [18–20]. One starts with a priori known interactions among indi-
vidual units of the systems. From these, the partition function and the free energy of the system
can be quantified. The phase diagram characterizing the global behavior of the system can then
be explored. In other words, in equilibrium systems, the global behavior can be determined by
the underlying interaction potential energy landscape, while the local dynamics is determined
by the gradient of the interaction potential energy. The funneled protein folding energy land-
scape provides a nice illustration [20]. The predictions of folding landscape can be tested by the
corresponding equilibrium thermodynamic and kinetic experiments [18–20].
Following the remarkable successes in describing near equilibrium systems, a natural ques-
tion would be: Can we apply the same recipe to dynamical non-equilibrium systems? This turns
out to be a huge challenge for several reasons. First, it is challenging to specify the rules gov-
erning the dynamics. Even when these dynamical rules are specified and the local stability and
function can be quantified from them, the global stability and behavior of the dynamics are still
often out of reach. Furthermore, since the systems are not at equilibrium, even with the dynam-
ical rules specified, the corresponding interaction energies are not a priori known in general. In
other words, the driving forces for the dynamics in general cannot be written as a gradient of the
interaction potential energy as in equilibrium systems [13–15,17]. Therefore, the recipe of equi-
librium statistical mechanics of exploring the partition function, free energy, and phase diagram
from interaction energy cannot be applied to non-equilibrium systems. In other words, the global
behavior cannot be explored in the equilibrium way. It is then challenging to see the relationship
between the dynamics and the non-equilibrium nature of the system.
Progress has been made in exploring non-equilibrium systems and addressing various
issues such as the characterization of the non-equilibrium steady state, deterministic and
stochastic dynamics [1–12,21,22], non-equilibrium thermodynamics and stochastic thermodynamics,
the relations between non-equilibrium work and equilibrium free energy, and the link between the
statistics of entropy production and entropy annihilation [1–5,9–12,21,23–37]. Transition rates and
paths have also been studied with different methods, including the Wentzel–Kramers–Brillouin
(WKB) method and methods of characteristics and optimal weights near the zero-fluctuation
limit [22,38–52]. Despite the progress made in this
area, there are still challenges remaining. These include theoretical issues on the origin of
the underlying driving force for non-equilibrium systems and how this differs from equi-
librium systems having a gradient driving force. Other issues are global stability, kinetic
paths, and rates in finite fluctuations. Also the role of the driving force and the connection
between dynamics and non-equilibrium thermodynamics needs to be understood. Other issues
are the fluctuation–dissipation theorem (FDT), and the intrinsic symmetry of non-equilibrium
systems, as well as the understanding of the underlying mechanisms for different physical
and biological systems illustrating many of the above-mentioned non-equilibrium dynamic
behaviors.
In this review, we will address all the above-mentioned challenges, with a systematic and
unified theoretical framework in terms of both a landscape and a curl flux to quantify dynam-
ical non-equilibrium complex systems in a global way based on our recent studies. We show
that the existence of the flux in addition to the landscape is the origin of many of the character-
istic non-equilibrium behaviors deviating from the naive expectations based on the equilibrium
perspectives. The theoretical framework has been laid down and already published in previous
papers [13–17,26,53–85]. We present only a review here. For more details, see the above-
mentioned references. Our starting point is to realize that for general non-equilibrium dynamical
systems, the driving force of the dynamics cannot be written as a pure gradient of an “equilibrium
like” potential landscape [13–15,17,62,72]. However, the general dynamics can be decomposed
into a gradient of a non-equilibrium potential landscape and another force which has a curl rotat-
ing nature (curl flux force) [13–15,17,62,72]. In an analogy, the dynamics of equilibrium systems
is like that of an electron moving in an electric field going straight down the electrical potential
gradient, while the dynamics of non-equilibrium systems is like an electron moving in an elec-
tric field and magnetic field spiraling down the electrical potential gradient [13–15,17,62,72].
Therefore, we can see that in non-equilibrium systems: the dynamics is determined by both the
non-equilibrium potential landscape and the curl flux, rather than by a single factor of potential
landscape as in the equilibrium systems. As we will show later on, the non-equilibrium potential
landscape is closely linked to the steady-state probability distribution for non-equilibrium sys-
tems and therefore has a probabilistic nature. This is analogous to the equilibrium case where the
equilibrium potential landscape is closely linked to the equilibrium probability of the equilibrium
systems. It turns out that the non-equilibrium potential landscape topography is essential and can
be used to quantify the global stability and behavior of the non-equilibrium dynamical systems
[13–15,17,62,72].
Furthermore, the dynamics of non-equilibrium systems is distinctly different from those of
equilibrium systems. First, the kinetic paths from one state to another do not follow the landscape
gradient due to the presence of the curl flux [14,61,62,82]. As a result, at finite fluctuations, the
dominant kinetic paths do not go through the landscape saddle point from one state basin to
another. This is in contrast to equilibrium systems where the dominant minimum energy kinetic
paths go through the saddle point of the landscape [14,61,62,82]. Moreover, the forward and
backward paths are irreversible in contrast to the case of equilibrium systems where the forward
and backward paths are reversible. The presence of the curl flux in non-equilibrium systems
thus breaks time reversal symmetry. Importantly, the kinetic rates from one state to another for
non-equilibrium systems do not follow the conventional transition state or Kramers’ theory for
equilibrium systems [14,61,62,82]. Instead of the saddle on the landscape, a new transition state
or Kramers’ theory for non-equilibrium systems emerges: the kinetic rates for non-equilibrium
systems are determined by a new “saddle” of dominant paths instead of a point on the landscape
[14,61,62,82].
The non-equilibrium thermodynamics can also be quantified based on the combined landscape
and flux theory. In conventional equilibrium thermodynamics, entropy production arises from
free energy dissipation during the relaxation to the equilibrium. In non-equilibrium systems, the
entropy production is from the free energy dissipation during the relaxation to the steady state
plus the additional house-keeping contribution for maintaining the non-equilibrium steady state
from the presence of the flux due to the energy pump from outside environments [9,15,67]. The
total entropy including the system and environment is never decreasing, while the system non-
equilibrium free energy is never increasing. We can also consider phases and phase transition in
non-equilibrium systems. Some new phases emerge only in non-equilibrium systems, while they
cannot exist in equilibrium systems [67]. An example is the limit cycle.
We also see that the FDT will have to be modified for non-equilibrium systems [15]. Instead of
the conventional FDT in equilibrium systems where the response or relaxation to the equilibrium
is closely linked to the equilibrium fluctuations, a new FDT for the non-equilibrium systems
emerges. Based on this theory, the response or relaxation to the steady state is closely linked
to the steady-state fluctuations and additional contributions from the presence of the flux for
maintaining the non-equilibrium steady state [15]. Furthermore, we can also link the presence
of the flux to the fluctuation theorem giving the statistical relationship between the forward and
backward processes [15].
We find that non-equilibrium dynamics is closely related to gauge field theory. The presence
of the curl flux provides the basis for the gauge field. In fact, it is closely related to gauge field
curvature [15]. Therefore, there is an intimate link between the non-equilibrium statistical nature
and the geometry of the gauge field (non-zero curvature). These two descriptions of the world
through statistical numbers and geometry/topology are both fundamental and seem not to be
related to one another. In our non-equilibrium descriptions, the non-equilibriumness is quantified
by the degree to which the curl probability flux vector differs from zero. On the other hand, we
also see that this is equivalent to measuring the degree of the curvature of the internal geometry
(symmetry) of the corresponding gauge field being away from zero (flat). So we can link the
non-equilibriumness to the internal geometry. As a result, there is a path-dependent gauge field
where its integral over a closed loop is not zero due to the presence of the flux [15]. This non-zero
loop integral is analogous to the Aharonov–Bohm or Berry phase factor except that it is now real
rather than imaginary. This generates the non-trivial global topology of the gauge field which is
also linked closely to the thermodynamics of heat dissipation to the environments. Therefore, the
non-equilibrium nature of systems is deeply linked to geometry and topology.
We can extend landscape and flux theory to include collective dynamics in physical spaces of
higher dimension. The original non-equilibrium statistical mechanics theory now becomes a non-
equilibrium statistical field theory [73,75,85]. The field dynamics in space can still be determined
by the key dual pair: the non-equilibrium field landscape and curl flux field. The global stabil-
ity and thermodynamics for the system with non-equilibrium field can still be specified through
a non-equilibrium statistical mechanical landscape and flux framework except now the dynam-
ical variables of fields and the correlations in physical space are included [73,75,85]. Similar
conclusions are nevertheless obtained.
We extend the approach to include not only a single landscape but multiple landscapes.
We see that a discrete number of multiple landscapes can be seen as a single landscape with
more extended dimensions. This is an effective way of exploring the non-adiabatic effects when
more timescales are involved in the coupling of the landscapes or from the extended dimen-
sions [63,65,66,74]. We find additional flux coming from the extended dimensions which can
contribute significantly to the dynamics.
We illustrate the landscape and flux theory with a few concrete examples: the cell cycle
[13,17,83], stem cell differentiation and development [60,62,66,76,77,79], cancer [84],
evolution [69], ecology [81], neural networks [72], the chaotic Lorenz attractor [68],
self-regulating genes as an example of multiple landscapes [63,74], and spatial
development [58,73,75,85].
For the cell cycle, the resulting dynamics is a limit cycle with distinct biological phases
of G0/G1, S/G2, and M and corresponding checkpoints. We point out that the underlying
non-equilibrium potential landscape has an inhomogeneous Mexican hat shape. Away from the
oscillation ring, the gradient of the non-equilibrium potential landscape attracts the system down
to the ring, while on the ring, it is the curl flux which drives the oscillation. The local basins along
the oscillation ring quantify the cell-cycle phases while the barriers or transition states between
them quantify the checkpoints [13,17,83]. Global sensitivity analysis on landscape topography
and flux can be used to explore the key hot spot structure elements in the cell-cycle network
responsible for the global function. We also illustrate how the presence of the flux from the
nutrition supply breaks the time reversal symmetry from the three-point correlation functions
[13,17,83].
For stem cell differentiation and development, we point out that stem cells and differenti-
ated cells can be quantified as basins of attractions on the underlying non-equilibrium landscape
[60,62,66,76,77,79]. The landscape and the transitions from the stem cell basin to differentiated
cell basins quantify Waddington’s qualitative metaphor picture for development. The kinetic
paths do not go through the saddle point of the landscape due to the presence of the curl flux.
As a consequence, the forward differentiation paths and backward reprogramming paths are
not mirror images. Quantifying the differentiation and reprogramming paths is crucial for tis-
sue re-engineering. The kinetic rates do not follow the equilibrium transition state rate theory.
Instead, from non-equilibrium transition state theory (TST), the transition state is determined as
the “saddle” on the path.
For cancer biology, we describe cancer as a disease of states rather than being caused purely
by mutation. Cancer states then naturally emerge from the gene–gene interactions in an underly-
ing gene regulatory network. We apply the non-equilibrium landscape and flux theory to cancer
[84]. We show that (1) The normal states and cancer states can be quantified as basins of attrac-
tions on the cancer landscape, the depth of which represents the associated probabilities. (2) The
global stability, behavior, and function of cancer through underlying regulatory networks can be
quantified through the landscape topology in terms of depths of and barriers between the normal
and cancer states. (3) The paths and transition rates from normal (cancer) state to cancer (nor-
mal) state representing the underlying tumorigenesis and reverse processes will be quantitatively
uncovered, leading to the hope for prevention and treatment. This will be developed through a
path integral approach and a non-equilibrium TST. Through global sensitivity analysis based on
landscape topography, we can identify the key genes and regulations important for the global
stability, behavior, and function of cancer networks. The results of global sensitivity analysis will
provide multiple targets for cancer intervention. This will quantitatively uncover the link between
the structure and the function of cancer networks, and provide the theoretical basis for explor-
ing cancer as a network disease and designing new anti-cancer strategies based on this view.
Our focus of study then shifts from exploring local properties (cancer as a disease of individual
driver-mutation) in the conventional approach to quantitatively study the global nature (cancer as
a disease of states) of cancer through the underlying regulatory networks.
We establish a landscape and flux theory of evolution [69]. Conventional theory is inadequate
for describing the general evolution dynamics. In Wright’s adaptive landscape picture, evolution
dynamics follows the gradient of a landscape. In Fisher’s fundamental theorem of natural selec-
tion (FTNS), however, the fitness never decreases. For general evolution dynamics, none of these
notions turns out to be correct. A good example is given by allele frequency-dependent selection
where the driving force for evolution cannot be expressed as a pure gradient. This has been called
the “Red Queen effect” where evolution can still keep running even at optimal fitness [69]. Red
Queen effects exist widely in the coevolution of species and have been used to question the valid-
ity of Wright and Fisher’s theory of evolution [69,86]. The combined landscape and flux theory
provides a quantitative and physical basis for such general evolution. Evolution dynamics is not
only determined by a landscape gradient as in the conventional evolution theory but also by the
curl flux [69]. Even when the landscape reaches an optimum (such as the limit cycle oscillation
ring), evolution still keeps on going, being driven by the curl flux. This naturally explains Red
Queen effects. The source of the curl flux can be from allele frequency-dependent selection or
mutation, recombination, epistasis, etc.
We also discuss a landscape and flux theory for ecology [81]. We uncover the underlying
intrinsic potential landscape as a global Lyapunov function. This provides a quantitative mea-
sure of the global stability of ecosystems. We can study several typical and important ecological
systems involving predation, competition, and mutualism [81]. A single attractor, multiple sta-
ble attractors, and also limit cycle attractors emerge from these systems. Both the landscape
gradient and curl flux determine the dynamics of these ecosystems. The theory provides a quan-
titative and physical framework for exploring the global stability, function, and dynamics of
ecosystems [81].
We can also use landscape and flux theory for neural networks in the brain [72]. Previously,
neural cognition in learning and memory has been thought as information storage and retrieval
from basins of attractions. Such a conclusion has been based on the assumption that neurons are
symmetrically connected in neural networks. This leads to a global energy landscape for stabil-
ity and associated gradient dynamics. However, in reality, the neurons in neural networks of the
brain are asymmetrically connected. In this general case, the conventional theory does not apply.
From combined landscape and flux theory, we can uncover the underlying landscape and Lya-
punov function for quantifying the global stability of general neural networks [72]. Furthermore,
we show that the asymmetrical connections of the neurons lead to a curl flux which contributes
to the driving force of the dynamics in addition to the gradient of the landscape. Many neural
oscillations have been found in the brain. These cannot be explained by the conventional sym-
metrically connected network dynamics. Instead, landscape and flux theory can form the basis
for describing the global nature of the neural oscillation dynamics. For learning and memory,
the oscillation may help to store and retrieve information in a continuous way for sequential
events [72].
We also map out the non-equilibrium potential for the chaotic Lorenz attractor [68]. There,
we find that the underlying potential has a butterfly shape determining the global stability. The
dynamics of chaotic attractor is determined primarily by the gradient of the potential away from
the butterfly sheet and by the curl flux within the butterfly sheet. Therefore, the flux is closely
related to the onset of the chaotic attractor behavior [68].
Using a simple example of self-regulation of genes, we explore multi-landscape dynamics
where discrete transitions of a single molecule are involved [63,74]. We show that the intra-
landscape dynamics in protein concentrations alone with inter-landscape dynamics in gene states
can be described as the dynamics in a single landscape in protein concentration and probability
of gene states. We explore the non-adiabatic effect of weak coupling between the landscapes
which again turns out to be equivalent to the emergence of a curl flux on the single land-
scape in the expanded dimensions of protein concentration and gene probability states [63,74].
This curl flux influences significantly the dynamics of the self-regulator leading to the emer-
gence of multiple states, larger variations, time irreversibility of the kinetic process, and energy
dissipation.
We also extended the landscape and flux framework to systems extended in the physical spa-
tial domain [58,73,75,85]. As a simple example of spatial development, we show that a spatially
varying landscape is crucial for specifying the local patterns for development.
We will give more detailed descriptions of the combined landscape and flux theory as well as
the associated applications in the following sections.
In the conclusion section, we will provide a brief summary. We will also give the philosophical
implications of the theory and the outlook for future studies.
2. Landscape and flux theory of non-equilibrium dynamical complex systems


2.1. Decomposition of the driving force into a landscape gradient and curl flux for
non-equilibrium dynamical complex systems
For dynamic complex systems, the dynamics are often described in the continuous representation
of dynamic equations as follows [7,87]:

ẋ = F(x), (1)

Here x is a vector representing the system variables (concentrations, densities, etc.), and ẋ represents the
change of the system variables with respect to time. The above equation can be thought of as the
over-damped limit of Newton’s second law with friction, α ẍ + ẋ = F(x), where F(x) is a
vector representing the driving force [7,87]. The dynamic equation above thus has the meaning
that the temporal evolution of the system variables is determined by the driving force. This type of
ordinary differential equations describes the dynamics of a wide range of complex systems, such
as nonlinear dynamics, chaos, biological and social networks, weather, ecology and evolution,
etc. The same type of equation has been the focus of study of nonlinear dynamics and chaotic
behavior. Protein dynamics, protein folding, molecular recognition, cellular networks and even
biological evolutions represent some biological applications.
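As a concrete illustration of Equation (1) (a textbook-style two-gene mutual-repression circuit, assumed here purely for illustration and not one of the specific models reviewed below), the driving force F(x) can be written down in a few lines; all parameter values are arbitrary:

    import numpy as np

    def toggle_switch_force(x, a=1.0, n=4, k=1.0):
        # Illustrative F(x) for a two-gene mutual-repression (toggle switch) circuit;
        # a, n, k are assumed synthesis, cooperativity, and degradation parameters.
        x1, x2 = x
        f1 = a / (1.0 + x2**n) - k * x1   # synthesis repressed by x2, linear degradation
        f2 = a / (1.0 + x1**n) - k * x2   # synthesis repressed by x1, linear degradation
        return np.array([f1, f2])

    # dx/dt = F(x): one Euler step of the deterministic dynamics
    x = np.array([0.2, 0.8])
    x = x + 0.01 * toggle_switch_force(x)
    print(x)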
The conventional way to analyze system dynamics based on the above equation is to follow
the temporal trajectories of the system variables. The local stability can be analyzed through
the identification of the fixed points of the system [7,87]. The local stability around those fixed
points can be quantitatively studied. However, if we aim to explore the global nature of the
system, we need to quantify the global stability. If the driving force can be expressed as the gra-
dient of a scalar potential, then the system’s dynamics is described by gradient dynamics and
the system’s global behavior can be quantified by this scalar potential [13,16,17,62]. In fact, this
scalar potential for some physical and biological systems is known directly from the interaction
potential energy. In these examples, equilibrium statistical mechanics can be applied to study the
global behavior, such as global phases, phase diagrams, and the corresponding global thermody-
namic behavior by quantifying the corresponding partition function and associated free energies
[13,16,17,62,69].
However, in general, the driving force of dynamic complex systems cannot always be quan-
tified as a pure gradient of a scalar potential. In such cases, equilibrium statistical mechanics
cannot be applied. The challenge is then how to study the global behavior and function, and
what are the keys to characterizing the dynamics globally. In order to address this, we look at
stochastic versions of the dynamic evolution equations. The rationale is the following: in the
natural world, fluctuations and noise never vanish. So the more realistic description of dynamics
is always stochastic. Furthermore, we will show a strategy of studying the stochastic dynamics
first and then taking the zero fluctuation limit for reaching the original deterministic dynamics
[1,2,7,88]. It turns out to be much more convenient to study the global behavior in the stochas-
tic representation than in the original deterministic format where there is no natural probability
measure to start with. The stochastic dynamic equations are given as follows [1,7]:

ẋ = F(x) + η(x, t). (2)

Here η(x, t) is a stochastic driving force whose amplitude of fluctuation is measured by its
autocorrelation function, ⟨η(x, t)η(x, t′)⟩ = 2DD δ(t − t′), where D is a scale factor representing
the magnitude of the fluctuations and D is the diffusion tensor or matrix [1,7]. Since the system
now follows stochastic dynamics, tracing the individual trajectories becomes unpredictable and
less meaningful than the deterministic case. Instead, one should trace the evolution of probability
distribution leading to a global characterization of the system as we will show later. The stochastic
equation of motion is called the Langevin equation, while the corresponding probability evolution
equation follows the Fokker–Planck equation [1,7],

∂P(x, t)/∂t + ∇ · J(x, t) = 0. (3)

The meaning of the above equation is the local conservation of probability. The flux is given
as J(x, t) = F(x)P(x, t) − ∇ · (DDP(x, t)) [1,7,13,17]. The change of the local probability at any
location is equal to the flux in or out of that region.
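As a minimal numerical sketch of this probabilistic picture (added here for illustration; the force field, noise level, and histogram grid are all assumptions), one can integrate the Langevin equation (2) with the Euler–Maruyama scheme and estimate the steady-state probability from a long trajectory, from which the population landscape U = −ln Pss discussed below can be read off:

    import numpy as np

    def force(x, y, omega=1.0):
        # Assumed linear force with a rotational (curl) part; omega = 0 would give
        # a pure gradient (detailed-balance) force.
        return -x + omega * y, -y - omega * x

    def euler_maruyama(D=0.1, dt=1e-3, n_steps=500_000, seed=0):
        # Integrate dx = F(x) dt + sqrt(2 D) dW with an identity diffusion matrix.
        rng = np.random.default_rng(seed)
        x, y = 1.0, 0.0
        traj = np.empty((n_steps, 2))
        s = np.sqrt(2.0 * D * dt)
        for i in range(n_steps):
            fx, fy = force(x, y)
            x += fx * dt + s * rng.standard_normal()
            y += fy * dt + s * rng.standard_normal()
            traj[i] = (x, y)
        return traj

    traj = euler_maruyama()
    # The long-time histogram approximates Pss; U = -ln Pss is the population landscape.
    H, xe, ye = np.histogram2d(traj[:, 0], traj[:, 1], bins=60, density=True)
    U = -np.log(H + 1e-12)
    print("landscape minimum (bin indices):", np.unravel_index(U.argmin(), U.shape))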
For the statistical steady state expected to emerge in the long-time limit, the time derivative
of the probability becomes zero and the probability no longer changes with respect to time.
From the above equation, this implies that the divergence of the flux is zero, ∇ · Jss(x) = 0,
where ss stands for steady state. There are several possibilities for how the vanishing divergence
is achieved.
(1) The flux itself is zero. Jss = 0. The zero flux implies no net flux in or out of any subpart
of the ensemble. Such a system obeys detailed balance and has come to a true equilibrium. The
driving force in this case can then be written as a pure gradient, up to a term arising from the
inhomogeneity of the diffusion coefficient: F = DD · ∇Pss/Pss + ∇ · (DD) = −DD · ∇U + ∇ · (DD), where the poten-
tial function U is defined as the logarithm of the equilibrium probability U = − ln Pss . Here
the steady-state probability is the same as the equilibrium probability due to the detailed bal-
ance condition [13,17]. Then, we can see for equilibrium systems where the detailed balance
is preserved, the Boltzmann relationship is recovered (the potential landscape is related to the
equilibrium probability), and the dynamics is determined by the gradient of the potential up to
a gradient of the inhomogeneity of the diffusion coefficient. So for equilibrium systems, the
global behavior is determined by the equilibrium probability landscape or the potential landscape
which can characterize the basins and barriers as well as the global stability and phases once the
potential landscape is known [13,17]. On the other hand, local dynamics is determined by the
potential landscape through the gradient. The dynamics of such equilibrium systems is analogous
to electrons moving straight down the electric potential gradient [13,17].
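To make the equilibrium case fully explicit, here is a one-line check (added for illustration, assuming a constant diffusion matrix so that the ∇ · (DD) term vanishes): if F = −DD · ∇U and Pss = e^(−U)/Z, then J = F Pss − ∇ · (DD Pss) = −DD · (∇U)Pss + DD · (∇U)Pss = 0. The Boltzmann-like distribution therefore carries exactly zero flux, detailed balance holds, and the dynamics is purely gradient.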
(2) When a steady state emerges, the flux itself is not necessarily zero even for the diver-
gence free conditions [13,17]. In this case, there is a net flux into or out of the regions of the
ensemble. With such a non-zero flux, the system is no longer in real equilibrium and detailed
balance is broken. In fact, the degree of non-equilibriumness can be measured by how much
the curl flux differs from zero. The divergence free condition of the flux signaling the statistical
steady state means that there are no sources and sinks for probability to go to or to come from.
Therefore, locally the flux must be rotational and be derived from a curl. The driving force then
can be decomposed to the gradient of the steady-state non-equilibrium potential U, where U is
defined as the logarithm of the steady-state probability distribution U = − ln Pss , up to a gradi-
ent of diffusion, but there is also a curl flux [13,17,67]: F = DD · ∇Pss/Pss + ∇ · (DD) + Jss/Pss =
−DD · ∇U + ∇ · (DD) + Jss/Pss. The non-equilibrium potential is still closely linked to the steady-
state probability since the steady-state probability quantifies the probability of each state and the
underlying probability landscape topography can be described in terms of basins and barriers.
Therefore, the non-equilibrium potential still can be used to quantify the global behavior and sta-
bility of non-equilibrium systems. On the other hand, the dynamics of non-equilibrium systems
is not determined by the potential gradient alone, but also by the curl flux. The non-equilibrium
potential and curl flux are a dual pair for determining the non-equilibrium dynamics [13,17,67].
The dynamics of non-equilibrium systems is analogous to electrons moving in an electric and
magnetic field. The dynamics evolves by spiraling down the electric potential gradient under the
additional effective Lorentz force.
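The following sketch (added here for illustration, with an assumed linear force and constant isotropic diffusion so that ∇ · (DD) = 0 and the steady state is Gaussian) evaluates the steady-state flux Jss = F Pss − DD · ∇Pss on a grid and checks numerically both the decomposition F = −DD · ∇U + Jss/Pss and the divergence-free condition ∇ · Jss = 0:

    import numpy as np

    D = 0.1        # fluctuation scale (assumed); the diffusion matrix is taken as the identity
    omega = 1.0    # strength of the rotational part of the assumed force

    x = np.linspace(-2.0, 2.0, 201)
    y = np.linspace(-2.0, 2.0, 201)
    X, Y = np.meshgrid(x, y, indexing="ij")

    # Assumed linear force and its known Gaussian steady state for this example
    Fx, Fy = -X + omega * Y, -Y - omega * X
    Pss = np.exp(-(X**2 + Y**2) / (2.0 * D))
    Pss /= Pss.sum() * (x[1] - x[0]) * (y[1] - y[0])

    # Steady-state flux Jss = F Pss - D grad Pss (constant isotropic diffusion)
    dP_dx, dP_dy = np.gradient(Pss, x, y, edge_order=2)
    Jx = Fx * Pss - D * dP_dx
    Jy = Fy * Pss - D * dP_dy

    # Decomposition check: F should equal -D grad U + Jss/Pss with U = -ln Pss
    U = -np.log(Pss)
    dU_dx, dU_dy = np.gradient(U, x, y, edge_order=2)
    rx = Fx - (-D * dU_dx + Jx / Pss)
    ry = Fy - (-D * dU_dy + Jy / Pss)
    c = slice(50, 151)   # high-probability region, to avoid tail discretization error
    print("max decomposition residual:", max(np.abs(rx[c, c]).max(), np.abs(ry[c, c]).max()))

    # Divergence of the steady-state flux should vanish
    divJ = np.gradient(Jx, x, axis=0, edge_order=2) + np.gradient(Jy, y, axis=1, edge_order=2)
    print("max |div Jss| in the interior:", np.abs(divJ[5:-5, 5:-5]).max())

In this example the flux velocity Jss/Pss is a pure rotation around the basin minimum, so the non-zero curl flux measures exactly how far the system is from detailed balance.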
There are some crucial differences between equilibrium and non-equilibrium systems: (1) For
equilibrium systems, the potential landscape is given a priori, while the landscapes for strictly
non-equilibrium systems are emergent. In order to determine the non-equilibrium potential, one
needs at the outset to obtain the steady-state probability distribution from the dynamics. This is
not known a priori. In other words, for equilibrium systems, the potential is an input that directly
provides the driving force for the dynamics, while the non-equilibrium potential is the result of
the dynamics. (2) The dynamics is determined by the statistical steady-state condition for non-
equilibrium systems. The non-equilibrium potential is determined by the resulting steady-state
probability, and the curl part of the flux is determined by the originally given driving forces and
the emergent steady-state probability. In other words, the steady-state behavior still influences
the temporal evolution dynamics, but it is not the whole story. (3) The force decomposition to the
non-equilibrium potential gradient and curl flux is not unique. However, we always can choose
a physical gauge here so that the non-equilibrium potential is represented as the logarithm of
the steady-state probability. This choice has a clear physical meaning and is directly related
to the probability landscape which is crucial for quantifying the global stability and behavior.
This choice is also reasonable since it naturally leads to gradient dynamics at equilibrium con-
ditions where there is precisely zero flux. (4) The non-equilibrium situation we discuss here
is the intrinsic non-equilibrium with detailed balance breaking. Such steady states must be con-
trasted to the often studied cases where the physical systems start out in equilibrium with detailed
balance, but upon perturbation the system moves away from equilibrium position and the sub-
sequent relaxation dynamics to the equilibrium state is termed “non-equilibrium” dynamics. For
us, non-equilibrium dynamics refers to the case of intrinsic non-equilibrium with detailed balance
explicitly broken. Relaxation after the perturbation moves only toward the steady state without
detailed balance rather than to a true equilibrium state with detailed balance. We will discuss this
further when we study the changes in the FDT for non-equilibrium steady states. (5) The nature
of the curl flux is global. In equilibrium systems, a detailed balance condition of zero flux Jss = 0
needs to be satisfied everywhere. This condition is local, since if we break the detailed balance
condition locally, one only needs to fix a corresponding local part to restore the detailed balance
without influencing the other parts of the state space. However, the situation for non-equilibrium
systems is very different. The flux is not zero and satisfies a divergence free condition due to
the steady-state condition which implies its rotational curl nature. The rotation is global in state
space. Therefore, any local perturbations away from the rotational curl will require global adjust-
ments to be restored back. Here the local adjustments are not enough for restoring the global
rotational curl. It turns out that the true non-equilibrium behavior is deeply connected to the
global topology of the underlying dynamical systems due to the nature of the curl flux.
Recently, several other methods of decomposing the continuous equations of motion have
been proposed [89–91] for stochastic systems. One method is aimed at global stochastic dynam-
ics [89]. The numerical solutions of this decomposition seem challenging. It was argued that
the decomposition might be specific because of the orthogonality resulting from the constraints imposed [92].
Another method suggests a different representation and mapping [90]. A decomposition
method recently proposed works only very near the zero-noise limit [91], and that analysis
reaches conclusions similar to an earlier general study [69] in the zero-fluctuation limit. The relationship
between the decomposition and conservative/non-conservative systems, as well as time reversal
symmetry, has also been discussed [92–94]. Network thermodynamics and the decomposition
for discrete Markov chains have also been discussed [3,78,95–101].
The landscape and flux theory can be applied to both zero and finite fluctuations (for
finite fluctuations, the free energy functional instead of the intrinsic potential becomes the
Lyapunov function, see more details in the later sections on non-equilibrium thermodynamics)
[13–15,62,69,73,75,82,85]. The landscape gradient and the curl flux have an emergent duality
which measures the extent of detailed balance breaking, in analogy to how an electron moves in
electric and magnetic fields at the same time.
We will now turn to the discussion of how we can quantify the global stability with the concept
of the non-equilibrium potential.

2.2. Global stability for non-equilibrium systems


2.2.1. Intrinsic non-equilibrium potential landscape as a Lyapunov function for quantifying
global stability
Global stability is often critical for understanding complex dynamical systems [1–13,16,17].
Great efforts have been made to study the subject. A conventional way of exploring the sta-
bility is first to search for the fixed points of the dynamical equations governing complex system.
Following this, the stability to small displacements around these fixed points can be analyzed.
However, such an analysis is limited to the specific fixed points which are found and therefore
can only explore local stability. For global stability, one has to search the entire space and not just
specific local regions of the state space. One way of doing this is to seek a Lyapunov function for
the system that monotonically changes with respect to time so that the global stability is quan-
tifiable. We are going to follow this route to quantify the global stability of complex dynamical
systems.
We first look at the global stability of general dynamical systems specified by the equation:
ẋ = F(x) [7,13,87]. To do so, we first add the stochastic noise to the system, making the dynam-
ical equations stochastic. Due to the stochastic nature of the problem, no single trajectory will
allow us to predict the individual outcome of an initial set of conditions. We can only obtain
information statistically. Nevertheless, the evolution of the probability is deterministic and actu-
ally linear. So instead of studying individual stochastic trajectories, we focus on the probability
evolution which is predictable and can be used to describe the global behavior of the stochastic
dynamics. The probability evolution of the continuous observables follows the Fokker–Planck
equation as described earlier. Let us look at the behavior of the probability in the small noise
limit mimicking the deterministic system. In the literature, the WKB method has been used
in the study of stochastic processes in the zero noise limit [1,2,5–7,12,22,38–52]. We can fol-
low a similar route by expanding the steady-state probability or non-equilibrium potential with
respect to the scale of the fluctuations D ≪ 1 [5–7,12,69,71,81]: U(x) = [Σ_{k=0}^∞ D^k φk(x)]/D =
φ0(x)/D + φ1(x) + Dφ2(x) + · · · . Consequently, Pss(x) = (1/Z) exp[−(Σ_{k=0}^∞ D^k φk(x))/D] and
Z = ∫ exp[−(Σ_{k=0}^∞ D^k φk(x))/D] dx. There are several motivations for doing such an expansion.
(1) When the fluctuations become small and reach the zero limit, the deterministic dynamic
equations are recovered. The challenge is whether we can find a potential landscape having
the Lyapunov property for the corresponding deterministic dynamics. (2) For realistic systems
with finite but weak fluctuations, one might be able to use the weak fluctuation approximation
through expansion in the fluctuation strengths D to address the global stability issue. (3) One can
study how the probability landscape changes with respect to the fluctuations this way. (4) With
large fluctuations, one can solve the Fokker–Planck or master equation and the corresponding
probability directly without making the weak fluctuation expansion.
By substituting the expression of P from the weak fluctuation expansion into the steady-
state Fokker–Planck equation and comparing the corresponding coefficients with the same powers
of D on both sides of the equation, a set of equations in each order of 1/D about φi (x)
[7,69,71,81] can be obtained. The equation at the leading order of the expansion, D⁻¹, resembles
the Hamilton–Jacobi (HJ) equation of classical mechanics:

F · ∇φ0 + ∇φ0 · D · ∇φ0 = 0. (4)

The solution at the zero order, φ0 , does not depend on the fluctuation strength D explicitly. The
physical meaning of φ0 can be seen clearly since φ0 up to a scale factor D is closely associated
with the steady-state probability Pss ∼ exp(−φ0 /D) at the low noise limit. So, φ0 is tied up with
probability, which characterizes the steady state globally. In fact, φ0 monotonically
decreases along a deterministic trajectory due to the positive definite nature of the diffusion
matrix D [7,69,71,81]:

dφ0(x)/dt = F · ∇φ0 = −∇φ0 · D · ∇φ0 ≤ 0. (5)

As we can see from the above equation, the temporal evolution of the φ0 will not stop (the
temporal change is non-zero, dφ0(x)/dt ≠ 0) until ∇φ0 = 0, where there are no longer changes in
time (dφ0(x)/dt = 0).
As we can see, φ0 is thus a Lyapunov function. φ0 monotonically decreases with respect
to time. This property can be used to study the global stability. With both the probability and
Lyapunov properties, the φ0 acquires its physical meaning as the intrinsic (zero-fluctuation
limit) non-equilibrium potential characterizing the global stability for general dynamical systems
[7,69,71,81].
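For a one-variable system with constant diffusion matrix D = 1, the leading-order HJ equation F φ0′ + (φ0′)² = 0 has the non-trivial solution φ0′ = −F, that is, φ0(x) = −∫ F dx. The short sketch below (added for illustration, with an assumed bistable force rather than a system from the review) builds φ0 this way and verifies that it is non-increasing along a deterministic trajectory, as Equation (5) requires:

    import numpy as np

    def F(x):
        # Assumed bistable force with stable fixed points at x = -1 and x = +1
        return x - x**3

    def phi0(x):
        # phi0' = -F is the non-trivial root of F*phi0' + (phi0')**2 = 0 (with D = 1)
        return -x**2 / 2.0 + x**4 / 4.0

    dt, n_steps = 1e-3, 20_000
    x = 0.1                      # start near the unstable fixed point at x = 0
    values = []
    for _ in range(n_steps):
        values.append(phi0(x))
        x += F(x) * dt           # deterministic dynamics dx/dt = F(x)

    diffs = np.diff(np.array(values))
    print("phi0 non-increasing along the trajectory:", bool(np.all(diffs <= 1e-12)))
    print("trajectory settles near x =", round(x, 4), "where phi0 =", round(phi0(x), 4))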
As we can see, the condition ∇φ0 = 0 provides a way of finding the attractors on the potential
landscape. For point attractors, the temporal evolution of φ0 settles at minimum values. For con-
tinuous attractors such as the line attractors, the limit cycles or the strange attractors, the φ0 must
settle to constant values. The rationale is the following: if the values of the φ0 on continuous
attractors are different, then the right side of the above equation will not be equal to zero. This
implies that the φ0 has not settled to a final value (dφ0/dt ≠ 0). This is obviously not the case.
We thus see that the condition dφ0 /dt = 0 requires that the φ0 has a constant value on continuous
attractors.
The numerical solutions of the HJ equation have been discussed and a self-consistent method
to solve it has been proposed [39]. Another numerical level-set method has been developed
for solving this equation [102]. Recently a method [50] was developed to construct landscapes
self-consistently in phase space for multi-stable systems at the zero noise limit. The local and
global mapping and global construction of the landscape at the zero noise limit was also recently
discussed [51].
In summary, due to its monotonic decreasing nature, the intrinsic non-equilibrium potential
landscape φ0 is a Lyapunov function [7,69,71,81]. Finding the Lyapunov function is crucial for
quantifying the global stability for complex dynamical systems.

2.2.2. Force decomposition into a non-equilibrium intrinsic potential landscape and an


intrinsic curl flux for deterministic dynamics
After addressing the importance of the non-equilibrium intrinsic potential landscape, it is natural
to explore the other component of the driving force, the curl flux.
In the zero-fluctuation limit, D → 0, we can expand not only the steady-state probability but
also the steady-state probability flux Jss in terms of the fluctuation strengths characterized by
the scale factor D. The leading order results are given as follows: (Jss /Pss )|D→0 = F + D · ∇φ0
[69,71,81]. We can define the intrinsic flux velocity, V = (Jss /Pss )|D→0 . From Equation (4), we
can see that


V · ∇φ0 = 0. (6)
This implies that the gradient of the non-equilibrium intrinsic potential φ0 is perpendicular to the
intrinsic flux vector (or intrinsic flux velocity) in the zero-fluctuation limit.
As we can see, in the zero-fluctuation limit for the deterministic dynamics, the driving force
can be decomposed to the gradient of intrinsic potential φ0 and the intrinsic flux velocity V
[7,69,71,81]:
F = −D · ∇φ0 + V. (7)
Therefore, the global stability for the non-equilibrium complex dynamical systems is deter-
mined and quantified by the non-equilibrium potential landscape, while the dynamics of non-
equilibrium dynamical systems is determined by both the non-equilibrium potential and the curl
flux.
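As an illustrative worked example (added here; the specific system is an assumption, not taken from the original references), consider the two-variable linear force F = (−x + ωy, −y − ωx) with unit diffusion matrix D = I. Substituting the quadratic ansatz φ0 = a(x² + y²)/2 into the leading-order HJ equation F · ∇φ0 + ∇φ0 · D · ∇φ0 = 0 gives (a² − a)(x² + y²) = 0, so a = 1 and φ0 = (x² + y²)/2. The intrinsic flux velocity is then V = F + D · ∇φ0 = (ωy, −ωx), a pure rotation, and indeed V · ∇φ0 = ωyx − ωxy = 0, consistent with Equations (6) and (7): the gradient part −∇φ0 attracts the system toward the point attractor at the origin, while the curl part V circulates around it and vanishes only when ω = 0, that is, when detailed balance holds.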
Note that the Lyapunov function φ0 is valid in the zero-noise limit. For finite fluctuations, the
free energy function F is a Lyapunov function (see details in later sections on non-equilibrium
thermodynamics). We want to point out that using the population landscape U instead of the
Lyapunov function at finite fluctuations has certain advantages. For example, the Lyapunov function
only gives the global behavior but not the local details. For limit cycle oscillations (more details in
the cell-cycle section), the Lyapunov function gives a perfect Mexican hat with an oscillation ring
of equal potential depth. On the other hand, the population landscape U reflects the differences
along the oscillation path, therefore giving an inhomogeneous oscillation ring with different
potential depths on the ring. The population landscape thus directly reflects the inhomogeneous
speed on the ring, while the Lyapunov function cannot.
The condition for the small fluctuation limit is that the scaled diffusion coefficient should
be significantly smaller than 1 for the Fokker–Planck equation to work well. In our examples, the
numerical values of the scaled diffusion coefficient D satisfy this criterion. When the fluctuations
are large, the diffusion equation with the second-order truncation may not be a good approxima-
tion. The higher order terms have to be included. The full master equation without the truncation
should be considered. In practice, stochastic simulations are often more efficient for realizing
the goal than directly solving the master equation, which is time- and memory-consuming (an
exponential size scaling bottleneck is quickly reached when the system size becomes large).
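As a minimal sketch of such a stochastic simulation (added here for illustration; the birth-death reaction scheme and rate values are assumptions, not a model from the review), the standard Gillespie algorithm samples trajectories directly from the transition rates that define the master equation:

    import numpy as np

    def gillespie_birth_death(k=10.0, gamma=1.0, n0=0, t_max=50.0, seed=0):
        # Exact stochastic simulation of 0 -> X at rate k and X -> 0 at rate gamma*n.
        rng = np.random.default_rng(seed)
        t, n = 0.0, n0
        times, counts = [t], [n]
        while t < t_max:
            a = np.array([k, gamma * n])              # reaction propensities
            a_total = a.sum()
            t += rng.exponential(1.0 / a_total)       # waiting time to the next event
            n += 1 if rng.random() < a[0] / a_total else -1   # which reaction fires
            times.append(t)
            counts.append(n)
        return np.array(times), np.array(counts)

    times, counts = gillespie_birth_death()
    print("late-time mean copy number:", counts[len(counts) // 2:].mean(),
          "(expected near k/gamma = 10)")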

2.3. The origin of non-equilibrium flux in the dynamical systems and networks
In this section, we explore the origin of the non-equilibrium flux. The non-zero flux is closely
related to the breaking of detailed balance [67]. For a typical model of dynamical systems, the
driving force is described in a phenomenological way. Therefore, it is difficult to see explicitly the
analytical expression of the two components of the driving force: gradient and flux [13–15,17].
Furthermore, it is also hard sometimes to see how precisely the detailed balance is broken and
how the flux becomes non-zero mechanistically. In this section, we explore the microscopic ori-
gin of curl probability flux. We have studied the dynamics of several systems including ones
exhibiting mono-stability and those having limit cycles in searching for the microscopic origin
of the probability flux. The origin of the probability flux turns out to be the energy pump provided
by non-equilibrium conditions, that is, the concentration differences of specific energy-producing
sources. In chemical kinetic systems, the probabilistic flux is also closely related to
the steady-state deterministic chemical flux. In a mono-stable model of the kinetic cycle, the
probabilistic flux is directly related to the deterministic flux which is generated by the chemical
potential difference (non-equilibrium energy pump) from adenosine triphosphate (ATP) hydrolysis
(the energy production source or energy supply in the cell). In the reversible Schnakenberg
model for limit cycles, the probabilistic flux is correlated to the chemical driving force [67].
We can see that the curl probability flux is closely associated with the existence of a non-
equilibrium energy pump, either in a deterministic or probabilistic way. The curl flux generated
from the energy pump is essential for keeping coherent and stable limit cycle oscillations.

2.3.1. Flux and its origin in a mono-stable kinetic cycle


Deterministic pump and flux of the kinetic cycle. The kinetic cycle model shown in Figure 1 is a
popular one in enzyme kinetics [103,104]. For example, the substrates D and E in Figure 1(b)
denote the energy providers ATP and adenosine diphosphate (ADP) in a cell. The numbers of ATP
or ADP molecules are often large enough in the cell so that their concentrations are kept almost
constant relative to other reactants due to their key specific biologic function. This provides a
relatively stable energy pump by keeping a non-equilibrium ratio of the concentrations [D]/[E],
while the internal cycle remains functional.
It turns out that this open chemical cycle can be mapped onto a closed cycle in Figure 1(a),
with the kinetic rate coefficients modified as follows: k1 = k1⁰[D] and k−1 = k−1⁰[E]. We can
include the energy pump from outside by changing k1 and k−1 [104]. The energy pump gen-
erating the chemical potential originates from the non-equilibrium ratio in concentrations of the
substrates, [D]/[E] (such as ATP and ADP concentrations ). Maintaining such a non-equilibrium
ratio in concentrations will lead to broken detailed balance.
From the law of mass action, we study the deterministic chemical reactions described by the
ordinary differential equations of the kinetic cycle. Since the total number of the molecules of A,
B, and C is conserved, the chemical reaction equations can be reduced to two variable differential
equations. x, y, and Nc − x − y denote the concentration of species A, B, and C, respectively.

dx
= k−1 y + k3 (Nc − x − y) − (k1 + k−3 )x,
dt
(8)
dy
= k1 x + k−2 (Nc − x − y) − (k−1 + k2 )y.
dt

The above equations not only describe the closed cycle but also include the open chemical
cycle, with certain modifications, k1 = k10 [D] and k−1 = k−1
0
[E].

(a) (b)

Figure 1. Kinetic cycles of the simplified three species enzyme kinetics. Comparing to the closed system
of cycle (a), cycle (b) brings in two more substrates D and E which can break the detailed balance to gener-
k10
ate a non-equilibrium steady-state flux, with the reaction: A + D −


−− B + E by keeping a non-equilibrium
0
k−1
concentration ratio [D]/[E]. By neglecting the fluctuation of concentrations of D and E, cycle (b) can be rep-
resented by cycle (a) in terms of the pseudo-first-order rate constants: k1 = k10 [D] and k−1 = k−1
0 [E](from

Ref. [67]).
16 J. Wang

For convenience, we use the substitutions below:

K1 = k−1 k−2 + k−1 k3 + k2 k3 ,


K2 = k1 k−2 + k1 k3 + k−2 k−3 , (9)
K3 = k1 k2 + k−1 k−3 + k2 k−3 .

We call the net chemical reaction flux in the steady state the deterministic flux. Taking the reac-
tions between A and B as an example, the net flux can be expressed as J1 = x0 k1 − y0 k−1 , where
x0 and y0 are the steady-state fixed point concentrations (the subscript 1 labels the deterministic
flux between A and B, subscript 2 labels deterministic flux between B and C, the subscript 3 labels
the deterministic flux between C and A) and the steady-state deterministic flux becomes [67]

k1 K1 − k−1 K2
JSS1 = x0 k1 − y0 k−1 = Nc
K1 + K2 + K3
k1 k2 k3 − k−1 k−2 k−3
= Nc . (10)
k−1 k−2 + k−1 k3 + k2 k3 + k1 k−2 + k1 k3 + k−2 k−3 + k1 k2 + k−1 k−3 + k2 k−3

The above expression also represents the non-equilibrium steady-state flux for the open
kinetic cycle, shown in Figure 1(b) with k1 = k10 [D] and k−1 = k−1 0
[E]. The deterministic steady-
state flux indicates how fast the chemical species can be transferred when the reactions reach the
steady state. For this three node cycle, the net fluxes between each pair of species are equal,
JSS = JSS1 = JSS2 = JSS3 . This means that there is only one deterministic flux in the kinetic
cycle [67].
In chemical thermodynamics, the chemical potential difference in terms of Gibbs free energy
of the kinetic cycle is defined as the ratio between the products of the forward rates and the
backward rates.
   
k1 k2 k3 k10 k2 k3 [D]
G = ln = ln 0 . (11)
k−1 k−2 k−3 k−1 k−2 k−3 [E]

When G = 0, the whole kinetic cycle is in equilibrium without any input or output. In this
case, the ratio between the products of the forward rates and backward rates is equal to one. This
preserves time reversal symmetry:
0
k1 k2 k3 [D]eq k−1 k−2 k−3
=1 or eq
= 0
. (12)
k−1 k−2 k−3 [E] k1 k2 k3

From Equation (10), this condition leads to

JSS = JSS1 = JSS2 = JSS3 = 0

Equations (12) represents the equilibrium condition in chemical reaction kinetics. However, if
the ratio of the constant energy pump concentrations [D] and [E] does not satisfy the equilibrium
condition of Equations (12), detailed balance will be broken. The Gibbs free energy difference
G will not be zero as in the equilibrium case. As a result, a non-zero deterministic flux JSS
emerges [67]. The energy pump such as ATP/ADP is then the origin of the non-zero deterministic
flux.
Probabilistic flux and energy pump of the kinetic cycle. External fluctuations as well as the
inherent stochasticity of molecular processes often lead to the stochastic chemical dynamics. For
Advances in Physics 17

stochastic chemical dynamics, we need to follow the time evolution of the probabilistic distribu-
tion of the species to describe the global behavior instead of the individual trajectories, which are
unpredictable. We start with the chemical master equation describing the probability evolution:
dpm,n
= −pm,n [(N − m − n)(k3 + k−2 ) + n(k−1 + k2 ) + m(k−3 + k1 )]
dt
+ pm+1,n (m + 1)k−3 + pm+1,n−1 (m + 1)k1
+ pm,n+1 (n + 1)k2 + pm−1,n+1 (n + 1)k−1
+ pm,n−1 [N − (m + n − 1)]k−2
+ pm−1,n [N − (m + n − 1)]k3 . (13)

Here m and n denote the number of x, y molecules, respectively. The steady-state probability
arising from Equation (13) follows a multinomial distribution [67,104].
To get an intuitive feeling and quantitative description of the probability flux, we explore
the large molecular number regime where the underlying discrete chemical master equation
in integer molecular number reduces to the Fokker–Planck equation in continuous concentra-
tions after second-order truncation [105]. We start from Equation (13) by replacing m, n with
x = m/V , y = n/V , where V is the volume of the biochemical system. For convenience, we use
p(x, y) to express p({x, y}, t) below.
After some algebra and Taylor expansions, we obtain the stationary probabilistic flux J 0 [67]:
⎛ N −x−y x ⎞
c


J0 = ⎝ N − K3 K1 ⎟ K1 + K2 + K3 J p0 (x, y)
c x−y x ⎠ Nc
SS
− +
K3 K1
⎛N − x − y x x ⎞
c
+ −
1 ⎜ K3 K1 K1 ⎟ K1 + K2 + K3
− ∇ ·⎝ x Nc − x − y x ⎠ JSS p0 (x, y). (14)
2V − − + Nc
K1 K3 K1
From Equation (11), the relationship between the probabilistic flux and chemical potential
difference becomes [67]
⎛ N −x−y x ⎞
c

⎜ K3 K1 ⎟ (1 − exp(− G))p0 (x, y)
J 0 = k1 k2 k3 ⎝ N − x−y x ⎠
c
− +
K3 K1
⎛N − x − y x x ⎞
c
+ −
k1 k2 k3 ⎜ K3 K1 K1 ⎟
− ∇ ·⎝ x Nc − x − y x ⎠ (1 − exp(− G))p (x, y). (15)
0
2V − − +
K1 K3 K1

It is clear that the stationary probability flux J 0 consists of two parts. The first is related to the
drift or driving force of stochastic dynamics (appeared in the Fokker–Planck equation), while the
second is related to the diffusion tensor due to the fluctuations.
From Equations (14) and (15), we can see that the probabilistic flux is directly correlated with
the deterministic flux as well as the chemical potential difference. In this example, the probability
flux is directly proportional to the deterministic chemical flux. When the concentrations of the
energy providers do not obey the equilibrium condition ([D]/[E] = k−1 0
k−2 k−3 /k10 k2 k3 ), an energy
18 J. Wang

pump emerges, ( G = 0), resulting in the chemical potential difference and the deterministic
flux, as well as the broken detailed balance. We can state that the energy pump is the origin of the
non-equilibrium flux [67]. The relationship between the chemical potential difference and flux is
similar to that for the electric current in electrical circuits under the voltage from a battery. This
analogy can give us insights into how to study non-equilibrium biological systems with physical
theory.

2.3.2. Flux and the potential landscape in the stable limit cycle
We will now explore the origin of the flux in an oscillation system. We prove that in an oscillation
system, it is also the energy pump that generates the chemical potential difference G which then
produces not only the deterministic chemical reaction flux JSS , but also the probabilistic flux.
Limit cycle in the reversible Schnakenberg model. We study an extension of the Schnakenberg
model [106] where all the individual reactions are reversible [67]:
k1 k2 k3
X −

− A, B−

− Y, 2X + Y −

− 3X . (16)
k−1 k−2 k−3

From the law of mass action, the deterministic equations of the model are
dx
= k−1 A − k1 x + k3 x2 y − k−3 x3 ,
dt
(17)
dy
= k2 B − k−2 y − k3 x2 y + k−3 x3 .
dt
Here the species B and A are assumed to be kept at constant concentrations, since they act
as the source of energy pump in the non-equilibrium conditions (see Figure 2). In the biological
context, these are related to ATP and ADP concentrations. For convenience, we term the ratio B/A
as the strength of the energy pump. This is because the non-equilibrium concentration ratio of B/A
provides a quantitative measure of how strong the non-equilibrium pump is. This is quantitatively
realized through the Gibbs free energy G over the reaction loop ( G = 0 is the equilibrium
condition). We choose the reaction parameters such that some of the backward rates are much
smaller than the forward rates. This will keep the current reversible model consistent with the
original Schnakenberg model [107].

k1 = k2 = k3 = k−1 = 1.0,
(18)
k−2 = k−3 = 0.01, A = 0.05.

From the linear stability analysis, in the regime of 0.1 < B < 0.9, a stable limit cycle emerges.
In the regime of 0 < B <= 0.1 or B >= 0.9, a mono-stable cycle appears. The details are shown
in Figure 3.

2 X

k2 k3 k1
B(input) Y k-3 3 X A(output)
k-2 k-1

Figure 2. Scheme for the reversible Schnakenberg model. The dash curve in the box uncovers the self-cat-
alytic mechanism of species X, and the dash box shows the key part that generates oscillation, while B and
A are energy input and output through the non-equilibrium concentrations (from Ref. [67]).
Advances in Physics 19

4
Steady States y0

↓ Hopf bifurcation point


1

0
0 10 20 30 40 50 60
Energy pump (B/A)
Figure 3. The stationary solution y0 of Equations (17) in three different regions of energy pump B/A. The
unstable fixed states are marked as the dash line within 0.1 < B < 0.9, that is, 2 < B/A < 18, which refers
to a limit cycle around the unstable fixed point. The star on the curve is noted as the Hopf bifurcation point.
All parameters are set as in Equations (18) (from Ref. [67]).

The effective variant form of the flux in the oscillation model. For this non-equilibrium
open system, from the law of mass action, we can define the forward and backward reac-
tion fluxes Jj+ , Jj− for each reaction. The chemical potential difference is then defined as
Gj = kB T ln Jj+ /Jj− . This reversible model
 leads to a simplification B  A. The total chemical
potential difference becomes GAB = j Gj = kB T ln(k1 k2 k3 B/k−1 k−2 k−3 A).
When the system reaches a steady state, the deterministic flux becomes JSS = Bk2 − y0 k−2 . In
Figure 4, we show the deterministic flux varying with G ( GAB ) in the mono-stability regime.
We can see that the non-zero chemical potential can generate a non-zero deterministic flux. It
can also create a non-zero steady-state probability flux with an exponential relationship. This is
similar to what we already have discussed analytically in the mono-stable model. That is to say,
the non-equilibrium chemical potential difference G leads to a non-zero deterministic flux.
When we increase the energy input, the mono-stable fixed point becomes unstable and a limit
cycle emerges. For the limit cycle, the deterministic flux becomes oscillatory in time. To solve
this problem, we average the deterministic flux with respect to time t over one period of the oscil-
lation. This averaged deterministic flux is termed the effective flux, analogous to the effective
electric current for an alternating current. In this way, we can compare the effective flux, instead
of the oscillatory flux in time, with the probabilistic flux and other dynamic/thermodynamic
variables.
To study the relationship between probabilistic flux and deterministic flux, we need to map the
vectors of the probabilistic flux onto the deterministic trajectory. We can then compare them using
two similar quantities under different chemical potential differences G. From the definition
G = kB T ln(k1 k2 k3 B/k−1 k−2 k−3 A), the chemical potential difference as Gibbs free energy, is
closely related to B/A, the non-equilibrium concentrations giving the non-zero energy pump G.
It is important to understand how the energy pump changes the flux.
20 J. Wang

0.06

0.05

0.04
JSS

0.03

0.02

0.01

0
7.5 8 8.5 9 9.5 10
∆G

Figure 4. The non-equilibrium chemical potential difference G can generate the non-zero deterministic
flux JSS . The parameters we use are all shown in Equations (18), while B is chosen within (0, 0.1), and with
these parameters this model shows one mono-stable steady state and leads to a steady-state deterministic
flux JSS (from Ref. [67]).

Chemical energy pump drives the flux. In the parameter regions 0.1 < B < 0.9, a limit cycle
emerges. We find that in non-equilibrium conditions the energy pump, quantified by B/A, is
the main source to keep the limit cycle going. Analogous to the case of the voltage pump that
generates the electric current and drives it in an electrical circuit, the chemical energy pump here
drives both the average deterministic flux and the probabilistic flux in a similar way.
We calculate the steady-state probabilistic flux from the general case Equation (A6) in [67].
We show the flux together with the landscape and deterministic trajectory to illustrate the rela-
tionship to each other (Figure 5). We can clearly see that the flux flows inside the groove of the
potential landscape and the direction is always pointing toward the deterministic trajectory.

J·l
FluxIntTra = L . (19)
Ll

We computed the loop integral of the probabilistic flux defined by Equation (19), where we
mapped the stochastic flux to the deterministic trajectory and averaged the tangential component
over the whole trajectory (red solid line in Figure 5). The result is shown by the solid line in
Figure 6. The dashed line denotes the deterministic effective flux. We see that the trajectory
average of the stochastic flux shows behavior similar to that of the deterministic effective flux as
the chemical potential G increases. There is indeed a correlation between the two fluxes.
These above results support our view that the energy pump is the origin of non-equilibrium
flux. The energy pump breaks the detailed balance [67]. This directly generates the chemical
potential difference. From the perspectives of chemical thermodynamics, it is the chemical poten-
tial difference that drives the molecules to start to react and transforms the chemical species into
each other. This leads to the non-zero deterministic flux. For non-equilibrium systems, we dis-
cussed the probabilistic flux in concentration space, which can be used to describe the global
nature of the dynamic system. Now we connect two kinds of fluxes together under an energy
pump. We state that the energy pump from the environments breaks the detailed balance and
Advances in Physics 21

(a) 8 Flux Flux direction


(b)

Y (B=0.2)
6
4
2
0
(c) 8
Y (B=0.4) (d)
6
4
2
0
(e) 8 (f)
Y (B=0.6)

6
4
2
0
(g) 8
(h)
Y (B=0.8)

6
4
2
0
0 2 4 6 8 0 2 4 6 8
X X

Figure 5. Flux and its direction flowing on the landscape and along the deterministic trajectory. The red
arrows in (a), (c), (e), and (g) are vectors of probabilistic flux for different energy pump strengths, and (b),
(d), (f), and (h) show their directions. The black solid line refers to the deterministic trajectory. The gradual
change color from red to blue represents hill to valley of the potential landscape, which is similar as in Ref.
[13] (from Ref. [67]).

drives the dynamics through the flux and potential landscape. This can influence the structure
and stability of the system.

2.4. Non-equilibrium kinetic paths


One of the most crucial aspects of the dynamics of complex physical and biological systems is
how to go from one state to another and furthermore how fast to go from one state to another.
We are going to address and quantify these two issues with the landscape and flux theory
[14,61,62,82].
The dynamics of complex systems is often realized through specific pathways. Identifying
and quantifying these paths will reveal how the dynamical processes actually occur and therefore
uncovers the underlying mechanisms for the dynamics. There have been various studies on the
kinetic paths of equilibrium systems. However, despite the significant efforts and progress made
[22,38–52,108–114] in the zero-fluctuation limit, there are still open questions in non-equilibrium
path studies, for example, under finite fluctuations as well as the underlying driving force iden-
tification in determining the paths. In this section, we will quantify the non-equilibrium paths
via the landscape and flux theory [14,61,62,82] for small but finite fluctuations including the
zero-fluctuation limit.
There are often many possible routes for realizing the dynamics. Different paths give different
contributions to the dynamics. Therefore, not all the paths are equivalent. The challenge then is
22 J. Wang

Figure 6. The relationship of deterministic effective flux and loop integration of probability flux. FluxAve
represents effective deterministic flux and FluxIntTra is loop averaged probability flux which can be cal-
culated by the definition in Equation (19), and G means chemical potential difference and also Gibbs
free energy. The two fluxes show the similar behavior with changing of G, while the variation range
G ∈ (10.3, 12.05) is corresponding to the limit cycle range B ∈ (0.15, 0.85) (from Ref. [67]).

how to identify and quantify the paths which give the dominant contributions to the dynamics.
We employ a path integral formulation for the non-equilibrium dynamical complex systems to
quantify the kinetic paths and identify the dominant pathways [14,61,62,82].
There are six advantages of this approach. (1) The weights of each individual path can be
quantified and the dominant paths can be identified based on the weights. (2) By varying the
positions in the state space and quantifying the weights of the associated dominant paths connect-
ing the states with the reference state, one can find the relative weights or probability landscape
of the state space. (3) The path integral formulation can be directly used to quantify the kinetic
rates from one state to another. (4) By identifying the dominant paths, the effective degrees of
freedom and therefore the associated computation times of uncovering the dynamics are reduced
from the original impossible exponentials to the manageable polynomials. (5) The identification
and quantification of the kinetic paths make it possible to visualize how the dynamical processes
are being realized step by step. (6) One can identify the key factors and driving force (landscape
and flux) in determining the dominant paths.
The dynamics of non-equilibrium complex systems as discussed before can be formulated as
[14,61,62,82]: dxdt = F(x) + η. The stochastic fluctuation term is related to the intensity fluctu-
ations either from the environmental external fluctuations or intrinsic statistical fluctuations from
a finite number of components.
We now formulate the path dynamics for the probability of starting at the initial state xinitial at
t = 0 and ending at the final state of xfinal at time t, with the path integral [14,53–55,61,62,82,115–
118] as

P(xfinal , t, xinitial , 0)
       
1 1 dx 1 dx
= DxExp − dt ∇ · F(x) + − F(x) · · − F(x)
2 4 dt D(x) dt
Advances in Physics 23

= DxExp[−S(x)]
   
= DxExp − L(x(t)) dt . (20)

The integral over Dx refers to the sum over all possible paths connecting the initial state
xinitial at time t = 0 to the final state xfinal at time t. D(x) is the diffusion coefficient matrix (or
tensor). The second term of the exponent gives the weight of a specific path due to the underlying
Gaussian fluctuations. The first term of the exponent gives the contribution from the variable
transformation of the Gaussian fluctuations η to the path x (the Jacobian). The whole exponent
represents the weight of each individual path. Therefore, the probability of the system’s going
from the initial state xinitial to the final state xfinal can be expressed as the sum of all the possible
paths, each with a different weight. S(x) denotes the action and L(x(t)) denotes the Lagrangian
for each path (Figure 7).
It is expected that not all the paths give the same contribution. Sometimes we can approximate
the transition probability with the path integrals coming from the dominant paths only. Since
each path is exponentially weighted by the action, the contributions from sub-leading paths to the
weights are from the next order in actions in the exponential. The differences of actions between
dominant and sub-leading paths transform to the differences in weights [14,61,62,82]. The weight
differences are exponentially amplified from the action differences. Therefore, when there is any
significant difference in the actions of the paths, there will be a huge difference in the weights
due to the exponential relationship between the weight and the action. Therefore, the sub-leading
path contributions to the weight or probability are often significantly smaller than the dominant
path contributions. Of course, in some cases, the actions of different paths are about the same.
In such situations, every path contributes equally and there are no dominant paths. In order to
quantify the probability, one has to sum over contributions from all the paths [14,61,62,82].
Let us explore the paths with the optimal weights from the above discussion. Since the prob-
ability or the weight is exponentially related to the action (by a negative sign), the dominant
paths should have the least action which leads to the maximum weight. We should then mini-
mize the action to identify the dominant paths. By minimization of the action, the dominant paths
should satisfy the Euler–Lagrangian (E–L) equation (see Figure 7): (d/dt)(∂L/∂ ẋ) − ∂L/∂x = 0.
If the driving force F is a gradient of a potential U, F(x) = −∇U(x), the E–L equation is simpli-
fied as follows: ẍ − 12 (∂D(x)/∂x/D(x))ẋ2 = −2D(x)(∂V (x)/∂x), Where V (x) = −(1/4D)F2 −

Figure 7. Possible kinetic paths versus time: the kinetic paths from the initial position to final position are
illustrated. The dominant path with the highest weight is illustrated along with another paths.
24 J. Wang

1
2
∇ · F(x) is the effective potential. The equation of motion of x has the acceleration term
ẍ, the frictional (positive and negative) term 12 (∂D(x)/∂x/D(x))ẋ2 and the driving force term
−2D(x)(∂V (x)/∂x). The determination of the dominant paths becomes equivalent to an n-
dimensional particle moving in a force field with friction (n is the number of different variables
in the system) [14,61,62,82].
In general, the driving force cannot be written as a pure gradient for non-equilibrium dynam-
ical systems. In such cases, we can obtain the equation of the dominant path as follows from the
minimization of the action [14,61,62,82]:

1 ∂D(x)/∂x 2
ẍ − ẋ = E − ∇(ẋ · F) + (ẋ · ∇)F, (21)
2 D(x)

where E = −2D(x)(∂V (x)/∂x). This is similar to the equation when the driving force is a pure
gradient except for the last two terms of the right side. One can check that the last two terms are
equal to zero if the input driving force is a pure gradient.
 To see the origin of the last two terms, we
look at the cross-product terms in the action: − 12 (1/D) · F · ẋ dt = − 12 (1/D) · F · dx. When
the force is a gradient F = −D · ∇U, this part of the action is a constant. It does not contribute to
the dominant path equation. However, if the driving force is not equal to a gradient, then this part
of the action is not constant. Furthermore, the loop integral back to itself will not be equal to zero
and this part of the action becomes path dependent. It will contribute to the dominant path and is
the origin of the two additional terms formed on the right side of the dominant path equation.
As discussed, the driving force F of non-equilibrium complex dynamical systems in gen-
eral can be decomposed into a gradient of a potential and an additional curl flow flux (F =
DD · ∇Pss /Pss + ∇DD + Jss /Pss = −DD · ∇U + ∇DD + Jss /Pss ). With detailed balance, the
gradient of potential U determines the dynamics. For non-equilibrium systems, both the gra-
dient of potential landscape U and the curl flux of probability determine dynamics [13]. Here
we see that the dominant path is determined by both the gradient and the curl flux force.
In fact, the path-dependent contribution to the action from the force − 12 (1/D) · F · dx =

− 12 [(1/D)(Jss /Pss ) + (1/D) · ∇ · ·D]dx. The path-dependent contribution is mostly from the
divergent-free flux Jss due to its rotational curl nature. The curl flux is the origin of there being
a non-zero action for the closed loop. This contributes to a “real phase” in the exponentials of
the probability. The “real phase” gives the distinct probabilities classified by different topological
windings. This real phase is similar to the Aharonov–Bohm effect and Berry phase in quantum
mechanics except that there the phase is “imaginary”. As we can see, the non-equilibrium dynam-
ics has a deep connection to the topological nature of the trajectories of the underlying system.
The origin of the differences in “real phase” versus “imaginary phase” comes from the fact that
the classical non-equilibrium dynamics discussed here in general comes from the curl flux. This
leads to the evolution equations containing non-Hermitian operators while traditional quantum
mechanics has Hermitian Hamiltonians and unitary evolution [14,78,82].
The above dominant path equation is general and works in any dimension. For easy under-
standing and visualization, let us consider the situation in three dimensions. When D(x) is a
constant, the friction term is zero, the right side of the equation can be written as follows:
ẍ = E + ẋ × B, where B = ∇ × (−F) (this mathematical form of × and ∇× can only be writ-
ten in equal or below three dimensions). The divergence of B is zero, ∇ · B = 0. So in general, B
is a rotational curl since there is no sink or source to go to or come out from (the right side of the
divergence equation for B is zero). As we see, if the driving force of the dynamics F is a gradient
force, then B = 0, and there is no curl component of the driving force. If F cannot be written as
a gradient force, then in general B = 0, and there is a curl flux component to the driving force.
Without friction, the dominant path equation in three dimensions looks exactly like an electron
Advances in Physics 25

moving in an electric field E and magnetic field B. Notice that in classical electrodynamics, the
electric field is the gradient of the electrostatic potential, while the magnetic field has no magnetic
charge and therefore is divergence free.
The solution of the dominant path equations is important for understanding the kinetic mecha-
nisms of non-equilibrium dynamical systems. We can solve the dominant path equations directly
using the boundary condition specifying the initial and final states. We can then quantify the dom-
inant kinetic paths from the initial to the final state. But it is not always easy to solve the problem
with two specified boundaries numerically, especially when the system is large.
On the other hand, we can evaluate the weights of the kinetic paths from the path integral
formalism directly. When the action S(x) is minimized, the most probable path can be obtained.
The Lagrangian can be written as follows [14,82]:
1 2 1
L(x) = ẋ − V (x) − F(x) · ẋ. (22)
4D 2D
A generalized momentum can be written as follows: P(x) = ∂L/∂ ẋ = (1/2D)(ẋ − F(x)). The
corresponding Hamiltonian of the system can be written as follows:

H(x) = −L(x) + P(x) · ẋ = Eeff . (23)

From the above equation, we can obtain

ẋ2
+ V (x) = Eeff , (24)
4D

|ẋ| = 4D(Eeff − V (x)). (25)

When we substitute Equation (23) into the action, we obtain S(x) = (P(x) · ẋ − H(x)) dt. We
see that the action quantifying the weights of the paths depends on the values of the Hamiltonians.
Specific values of the Hamiltonians or effective energies Eeff correspond to specific final times T.
For a given Hamiltonian, there is an optimal path associated with it which minimizes the action.
Computing the above action in terms of the integral SHJ = i pi (x) dxi , a dot product in
multidimensional space of x is still very challenging. It is also difficult to visualize. We see that
the action can be simplified further and  is equivalent to  a line integral along√ a particular one-
dimensional (1D) path so that SHJ = i pi (x) dxi = pl dl, where pl = (Eeff − V (x))/D −
(1/2D)Fl . This transformation allows us to switch from the time-dependent description to the
Hamiltonian-dependent HJ description [119,120]. The dominant pathways connecting the given
initial and final coordinates can therefore be obtained by minimizing the action in the HJ
representation:
 xf  
(Eeff − V (x)) 1
SHJ = − Fl dl, (26)
xi D 2D
where dl is an infinitesimal displacement along the 1D path. Eeff is an effective energy parameter
which determines the total time elapsed during the transition, according to
 xf 
1
t f − ti = dl . (27)
xi 4D(Eeff − V (x)

Here, we have adopted a simple choice Eeff = V (xmax ) (V (xmax ) corresponding to the extreme
(maximum) of effective potential ∇V = 0. This leads to the longest kinetic time. Using the HJ
formulation of the dynamics in length space is much more efficient in avoiding long time traps
26 J. Wang

than the conventional approach. This is realized by considering intervals of fixed displacements
in length space rather than fixed displacements in time. The numerical advantages of the HJ
formalism for describing long-time dynamics at constant energy were applied to equilibrium sys-
tems where dynamics is determined by the gradient energy [119,120]. Here, the computational
advantages can be achieved for non-equilibrium stochastic dynamical systems, when the under-
lying dynamics of the system cannot be written as a pure gradient of the potential landscape alone
(determined by both gradient of the potential and curl flux).
In the small fluctuation limit, we can identify the fixed points of the deterministic equa-
tions. The stable fixed points correspond to the basins of attraction of the underlying landscape.
When the system has only one basin of attraction, we can choose the basin position as the
reference state (final state). By exploring the state space through varying the initial states, we
can find the effective actions S and the associated weights W of each state relative to the ref-
erence state: W (x) = exp[−S(x)] (under dominant path approximations). From this, we can
quantify the generalized potential landscape from the weights for a system with mono-stability:
U(x) = − ln W (x) = S(x). When the system has multiple basins of attraction, we can then
choose all the basin positions as the reference states (final states, xf 1 , xf 2 , . . . , xfN ). For any ini-
tial state, we can calculate the actions S relative to each basin: S1 (x), S2 (x), . . . , SN (x), and the
associated weights, W: W1 = exp[−S1 (x)], W2 = exp[−S2 (x)], . . . , WN = exp[−SN (x)]. We can
then select the least action S or the dominant (largest) weight W from this state relative to the
reference states as the action or the weight of this state. By exploring the state space through vary-
ing the initial states, we can finally find the least action or dominant weight associated with each
state relative to the reference states of the basins xf 1 , xf 2 , . . . , xfN . In this way, we can quantify
the generalized potential landscape from the weights for the whole system with multi-stability:
U(x) = − ln Soptimal (x).
The advantage of quantifying the landscape from the dominant path approximation via the
path integral is that the computational task reduces from exponential of M N to polynomial of
M × N, where M is number of intervals for each variable and N is the number of variables.
Another advantage of quantifying the landscape from the dominant path approximation via path
integral is that there is no assumption regarding the coupling among variables. Therefore, this
way of quantifying the landscape can work even when the couplings among variables are strong.
This is in contrast to the cases where another approximation in terms of the self-consistent mean
field for reducing the large dimensionality is often used. The mean field approximation only
works for the weak coupling regime.

2.5. Non-equilibrium transition state rates


For complex physical, chemical, and biological systems, quantifying the rate of transition from
one state to another is critical for understanding the behavior, function, and global stability
[121,122]. Although the analytical quantification of kinetic rates from TST or Kramers’ rate the-
ory has been successfully applied to equilibrium systems, the analytical quantification of kinetic
rates for non-equilibrium systems is still challenging. The main efforts have been focused on the
zero noise limit [22,38–52,82,108].
For equilibrium systems, the global stability can be quantitatively studied when the underlying
potential landscape is known a priori. The dynamics and the corresponding dominant kinetic
paths between different states follow the gradient. The transition state or Kramers’ rate theory for
kinetic rate for equilibrium systems is determined mainly by the barrier between the stable states
characterized by the basins of attraction. Here the barrier height is determined by the difference
in potential energy between the stable fixed point and the saddle point on the underlying potential
landscape. The kinetic rates also depend on the fluctuations around one stable basin and also
Advances in Physics 27

the saddle point between the basins of attractions. This theory for kinetic rates was proposed by
Eyring from a chemistry perspectives and by Kramers from a physics perspective on thermally
activated barrier crossing more than 70 years ago [82,123,124]. This provides a good analytical
quantification of the kinetic rate from one attractor to another for equilibrium systems.
However, for general non-equilibrium dynamical systems, the equilibrium TST or Kramers’
rate formula are expected to fail. This is because the dominant kinetic paths do not follow the
gradient path of the underlying non-equilibrium potential landscape, as electrons moving in an
electric field (potential landscape gradient) in the equilibrium case do. Instead, the dominant
kinetic paths deviate from the gradient paths due to the presence of the curl flux force that breaks
detailed balance. The non-equilibrium dynamics is like that of an electron moving in both electric
(potential landscape gradient) and magnetic (flux) fields [14,61,62,82].
The equilibrium transition state rate can be analytically quantified through the path inte-
gral formulation [125]. However, the analytical quantification of the transition rate for non-
equilibrium systems is still challenging. It has been argued that in the zero noise limit, the
dominant kinetic path will pass through the saddle point between the basins of attraction. A cor-
responding analytical approximation of the escape rate from a stable basin can be found [38,43].
However, for general non-equilibrium systems, finite fluctuations are often present. The domi-
nant kinetic paths do not necessarily go through the saddle points due to the presence of the curl
flux, breaking the detailed balance as mentioned previously [14,61,62,82].
In this section, we develop an analytical TST for the kinetic rate of general non-equilibrium
dynamical systems [14,61,82]. In this theory, (1) we first quantify the dominant path according to
the path integral by minimizing the action. Here the starting point and the ending point for the path
integral are the two stable fixed points S and S  . (2) Due to the presence of the non-zero curl flux,
the dominant path will not follow the gradient path of the underlying landscape. Furthermore,
under finite fluctuations, the dominant paths may not go through the original saddle point Ŝ. (3)
On the dominant path, we search for Ŝ  , the new “saddle”, which will not likely be at the original
saddle point Ŝ of the driving force. (4) The action of the path integral from S to S  obtained in
(1) is less than the action through the original saddle Ŝ along the gradient path of the underlying
landscape. The action calculated from the stable fixed point to the new saddle along the dominant
path provides the exponential part of the new non-equilibrium transition state rate theory. For
comparison, in conventional TST for equilibrium systems, the kinetic rate is determined by the
barrier at the saddle point between the basins of attraction on the underlying landscape. In the
new non-equilibrium TST, the prefactor part of the transition rate reflects the fluctuations around
the stable basin and the local curvature around the new saddle along the optimal path.
The present non-equilibrium TST can be applied to general non-equilibrium, physical,
chemical, and biological systems.

2.5.1. Equilibrium transition state rate


In this section, we will first give a review of the transition state and Kramer’s rate theories for
the equilibrium systems as preparation of describing a TST for non-equilibrium systems. The
fluctuations in complex dynamical systems are often not constant and location dependent. The
corresponding stochastic dynamics can  be quantified in continuous spaces by Langevin equa-
tions (in Ito’s form): ẋμ = Fμ (x) + a Baμ (x)ξ a (t), where x denotes the dynamical variables
of the system. Fμ (x) represents the driving force, ξ a (t) denotes the Gaussian distributed white
noise with unit fluctuations. Baμ (x) denotes the strength of the location-dependent fluctuations:
ξ a (t)ξ b (t ) = δ ab δ(t − t ). Again, instead of following un-predicable individual stochastic tra-
jectory, we focus on the predicable probability distribution P(x, t) obeying the Fokker–Planck
28 J. Wang

equation [2]:
dP  1
= ∂μ (−Fμ P) + ∂μ ∂ν (εμν P) (28)
dt μ μ,ν
2

with the diffusion coefficient εμν (x) = a,b Baμ (x)Bbμ (x)δ ab .
We adopt the notation ∂μ ≡ ∂/∂xμ . For convenience, we also denote P(x) ≡ P(x, t) as the
time-dependent probability distribution and PSS (x) as the time independent steady-state prob-
ability distribution. Under the intrinsic noise from statistical number fluctuations, the resulting
Fokker–Planck equation, with location-dependent diffusion coefficients, can be derived from a
second-order expansion in the noise of the underlying master equations in the noise [105].
The Fokker–Planck equation can be rewritten in terms of probability conservation law. The
change in local probability is equal to the net incoming or outgoing  probability flux [14,61,82]:
dP(x, t)/dt = −∂ · j. When the steady-state flux: Fμ (x)PSS (x) − ν 12 ∂ν [εμν (x)PSS (x)] = jμSS (x)
is zero: jSS = 0, the system is in detailed balance since there is no net flux in and out of the
system. In this case, the system is in equilibrium. The equilibrium probability distribution is
closely linked to the underlying potential by the Bolzmann law. The associated driving force
can be determined  by the gradient of the equilibrium potential: U = − ln Peq and Fμ (x) =
− 12 ∂μ [U(x)] + ν 12 ∂ν [εμν (x)]. The last term reflects the contribution from the inhomogene-
ity of the diffusion coefficients in x. For general non-equilibrium systems, the net flux in and
out of the system is not zero, jSS = 0. Detailed balance is broken. The steady-state flux satisfies
the condition ∂ · jSS = 0 at steady state. This divergence free condition of the steady-state flux
reflects its rotational curl free nature. The curl flux quantifies how far the system departs from
equilibrium. For non-equilibrium dynamical systems, the global stability and dynamics (Fμ (x) =
− 12 ∂μ [U(x)] + ν 12 ∂ν [εμν (x)] + jSS /Pss ) are determined by both the steady-state probability
distribution defining the non-equilibrium landscape U = − ln Pss and the curl probability flux:
jμSS (x).
1D systems are integrable with zero flux jSS = 0 with natural boundary condition. Transition
rates from one basin to another basin based on TST turn out to be given by [82]:

eq U”(S))|U”(Ŝ)| −2[U(Ŝ)−U(S)]/ε
rK = e . (29)

U(x) is the potential energy function in the equilibrium system. The associated driving force is the
gradient of this potential energy: F(x) = −U  (x), as shown in Figure 8. The basin of the attraction
of the underlying potential energy landscape is located at S and the saddle point is located at Ŝ.
The saddle point locates the barrier between the basins of attraction. In the small fluctuation limit
ε → 0, the transition state rate in Equation (29) can be rewritten as follows [126]:

−1 dF dF
(S) (Ŝ)e−SHJ ,
eq DOM
rK = (2π ) − (30)
dx dx
 Ŝ
where SHJDOM
= S p · dx is the HJ weight action from S to Ŝ along the dominant path [14,82,125].
The p denotes the canonical momentum and dx denotes the variable displacement of the system.
The underlying physical picture is quite clear. The transition state rate for equilibrium process
is determined by two key factors. The dominant contribution comes from the exponential of the
action. The rest is a prefactor that comes from the fluctuations around stable and saddle (tran-
sition state) points of the equilibrium potential landscape. The 1D transition state rate given in
Equation (29) has been generalized to the N dimensions [124,127,128] for equilibrium systems.
Advances in Physics 29

Figure 8. The potential barrier U for calculating the transition state or Kramers’ escaping rate. The basins
of attractions are localized at S and S  . Ŝ is the saddle point (from Ref. [82]).

However, for general non-equilibrium systems, the driving force F is not a pure gradient of a
potential U, Fμ (x) = −∂μ U(x). Furthermore, without the detailed balance, the curl current flux
jss is not zero. The non-zero flux contributes to the driving force. This leads to the deviation of
the kinetic path from that of the pure gradient, resulting in a path-dependent HJ weight action
DOM
SHJ [14,82]. Therefore, the dominant paths do not necessarily go through the saddle point or
transition state. Then, the new transition state has to be specified. Therefore, finding the rate for
general non-equilibrium systems demands specifying the dominant kinetic path as well as the
new saddle on the path and the complete form of the weight action needs to be computed in order
to quantify the rates from the new transition states.

2.5.2. The exponential contribution of the non-equilibrium transition state rate


For an N-dimensional non-equilibrium system, we can develop a TST for kinetic rates that goes
beyond the equilibrium expression given
 t in Equation (30). The generalized weight action for
non-equilibrium systems is [82] S = ti f dtL with the Lagrangian

 εμν
−1 1
−1
L= (ẋμ − Fμ )(ẋν − Fν ) + εμχ ∂χ (Fν εμν ). (31)
μν
2 μνχ
2

t
In the zero-fluctuation limit ε → 0, this action S = ti f dtL leads to the exponential part of
Freidlin–Wentzell ’s theory [38]. Furthermore, in the zero-fluctuation limit, the weight ratio of
e−Sl1 /e−Sl2 between the two smooth paths l1 and l2 agrees with the Onsager–Machlup function
[115].
The optimal path that contributes the most to the path sum can be quantified by minimizing
the weight action S with respect to the paths xμ (t)’s. We then obtain the equation of motion for the
dominant path through its satisfying the E–L equation (d/dt)(∂L/∂ ẋα ) = ∂L/∂xα . The dominant
path approach gives the lowest order approximation of the full path integral weight action. When
the fluctuations are relatively small, this starting point provides a practical way to quantify the
process in the large dynamical systems. Other sub-leading terms are exponentially suppressed
compared to the leading order contribution [14,82].
Rather than directly solving the E–L equation of motion, the dominant kinetic path can also
be evaluated by minimizing the weight action S directly. Let us define the canonical momentum
30 J. Wang
 −1
pμ = ∂L/∂ ẋμ = ν εμν (ẋν − Fν ), then the total energy becomes [14,82]

− E = −H = L − pμ ẋμ . (32)

This quantity is a constant along the dominant kinetic path. Then, the HJ weight action [129], the
minimization of which giving the dominant paths, can be written as follows:

 xf   xf 
−1
SHJ (xi , xf ) = 2(E − Veff )dl − εμν Fν dxμ . (33)
xi xi μν

 that the above action is now simplified as a line integral along the dominant path dl =
Notice
−1 −1
μν εμν dxμ dxν in a curved space with distance measure εμν , where the εμν quantifies the
fluctuation or diffusion strengths.

2.5.3. The transition state rate theory for non-equilibrium systems


As shown and discussed in Figures 9 and 10, the forward and backward dominant paths (lines
with arrows) are irreversible and do not pass through the saddle point Ŝ on
the gradient path (white
−1
lines) on the landscape [14,82]. From the effective driving force Fμeff = ν εμν Fν of the second
term on the right-hand side of Equation (33), we can always find a saddle or more accurately
the “global maximum along the dominant path” Ŝ  using the fact that the projection of the force
Fleff (S  ) along the path is zero, as shown in Figure 9 in two dimensions and Figure 10 in three
dimensions. The Fleff will always change its sign moving from the neighborhood of S (pointing
to S) toward the neighborhood S  (pointing to S  ). Quite often, we only have one saddle point
along the dominant path. When there are multiple new “saddles” along the path, we pick the last
one before the trajectory reaches the ending stable fixed point S as Ŝ  . Therefore, by replacing
the saddle point Ŝ for equilibrium systems with the new saddle point Ŝ  for the non-equilibrium
system, we can finally derive a transition state rate expression analytically for non-equilibrium

Figure 9. 2D illustration of non-equilibrium landscape with the irreversible dominant transition paths
between basins S and S  (green lines with arrows) and the gradient path (white line). Here, Ŝ is the saddle
point and Ŝ  is the “global maximum along the dominant path” (from Ref. [82]).
Advances in Physics 31

Figure 10. Three-dimensional ( 3D) illustration of non-equilibrium landscape with the irreversible dominant
transition paths between basins S and S  (purple lines with arrows) and the gradient path (white line). Here,
Ŝ is the saddle point and Ŝ  is the “global maximum along the dominant path” (from Ref. [82]).

systems as follows [14,82]:



−1 λu (Ŝ  ) detM (S)
e−SHJ .
noneq DOM
rK = (Eτ ) = (34)
2π |detM (Ŝ  )|
 Ŝ
In the exponential factor, the weight action SHJ DOM
= S p · dx, as defined in Equation (33) is
integrated along the 1D dominant path l from the stable basin S to the new saddle Ŝ  .
For the prefactor part of the expression, we can give a derivation in a similar spirit as for the
zero noise limit [43]. We call λu (Ŝ  ) the positive eigenvalue of the force matrix involving the
force derivatives with respect to the coordinates, Fμ,ν (Ŝ  ) = (∂Fμ /∂xν )(Ŝ  ) at the new saddle Ŝ  .
This gives a quantitative measure of the width of the contributing fluctuations at the new saddle Ŝ 
along the dominant path. At the stable state S, we can find the stationary solution for the Fokker–
Planck equation. The associated fluctuation matrix M (S) satisfies an algebraic Equation (35) at
the stable state S:
  
εξ χ M,μξ M,νχ + M,μξ Fν,ξ + M,νξ Fμ,ξ = 0. (35)
ξχ ξ ξ

Since the new saddle Ŝ  is not a fixed point of the system (force F = 0), there is no station-
ary solution for the Fokker–Planck equation at Ŝ  . Therefore, the matrix M (Ŝ  ) representing the
second-order fluctuations satisfies the dynamic equation at Ŝ  :

dMμν (x) ∂ 2H ∂ 2H ∂ 2H ∂ 2H
= Mμξ Mνξ  − − Mμξ − Mνξ . (36)
dt ∂pξ ∂pξ  ∂xμ ∂xν ∂xν ∂pξ ∂xμ ∂pξ

From the expression in the rate formula, the detM (S) represents the second-order fluctua-
tions in terms of frequencies of all stable modes around stable basin state (in all directions),
32 J. Wang

while detM (S  ) represents the second-order fluctuations in terms of frequencies of all stable
modes
 and unstable modes (in all directions) around the “saddle point” on the dominant path
Ŝ  . detM (S)/|detM (Ŝ  )| represents the ratio of the fluctuations around the saddle and the stable
basin state along the dominant path. On the other hand, the λu (Ŝ  ) determines the fluctuations of
the single unstable mode at the saddle point Ŝ  .
In this rate expression for non-equilibrium dynamical systems, the main contribution is from
the exponential with the weight action from stable basin state to the new saddle on the dominant
path. On the other hand, the non-exponential prefactor gives a second-order correction for the
local fluctuations at the stable point S and the saddle on the dominant path Ŝ  .
In conventional TST for equilibrium systems, the kinetic rate is determined by the potential
energy barrier at the saddle point between the basins of attraction (potential energy difference
between the saddle point and stable basin on the landscape). On the other hand, in the transition
state rate theory for non-equilibrium systems, the kinetic rate is determined by the weight action
from the stable basin to the new saddle along the dominant path. It is important to realize that the
non-equilibrium “saddle point” is path and directional dependent. In other words, the forward and
backward paths do not share the same “saddle point”. In contrast, in the conventional equilibrium
case, the forward and backward dominant paths all pass through the same saddle point on the
landscape as shown in Figure 10.

2.6. Non-equilibrium thermodynamics


For equilibrium systems, once the potential interaction energy or Hamiltonian is known, we
can find the equilibrium probability distribution directly using the Bolzmann relationship. We
can furthermore construct the partition function from which the entropy and free energy of the
system can be quantified. For non-equilibrium systems, the potential function is not known a
priori. The challenge then becomes: Can we construct an appropriate thermodynamics for a
non-equilibrium dynamical system? Progress has been made in non-equilibrium thermodynam-
ics [3,4,9–11,26,69,130]. We want to formulate the non-equilibrium thermodynamics with the
landscape and flux perspective. As we have already seen earlier, the non-equilibrium intrin-
sic potential is closely related to the steady-state probability distribution at the zero-fluctuation
limit, Pss (x) = Pss (x)|D→0 = exp(−φ
 0 /D)/Z, where D = D|D→0 and Z is the intrinsic parti-
tion function defined as Z = exp(−φ0 /D) dx [26,69,71,81]. From this, we can reach φ0 =
−D ln(ZPss ). We note that the intrinsic partition function Z is independent of time t. Analogous
to the equilibrium system, we are allowed to define the intrinsic entropy of the deterministic
non-equilibrium dynamical system as follows [3,4,9–11,26,69]: S = − P(x, t) ln P(x, t) dx,
where P(x, t) = P(x, t)|D→0 . Moreover, we can define the intrinsic energy E of the non-
equilibrium dynamical system as follows: E = φ0 P(x, t) dx = −D ln(ZPss )P(x, t) dx. A
natural definition
 of the intrinsic free energy F of the non-equilibrium system becomes F = E −
DS = D[ P(x, t) ln(P(x, t)/Pss ) dx − ln Z]. For thermodynamic reasoning, we now would
like to ask whether or not the entropy of the non-equilibrium system is maximized. We will
then explore the time evolution of the entropy [3,4,9–12,26,69]. The change of the entropy in
 be decomposed into two parts: Ṡ = Ṡt − Ṡe , where the entropy production rate (EPR),
time can
Ṡt = dx(J · (DD)−1 · J)/P, becomes positive or zero. Note that entropy production is closely
related to the probability flux. The flux is the origin of the entropy production. On the other hand,
the heat dissipation rate or entropy flow rate from
 the environments to the non-equilibrium sys-
tem can be either positive or negative: Ṡe = dx(J · (DD)−1 · F ), where the effective force is
defined as F = F − D∇ · D. We can interpret Ṡ as the entropy change of the non-equilibrium
system. Ṡt then has the physical meaning of the total entropy change of both the system and its
Advances in Physics 33

environments. It is always larger or equal to zero. This is consistent with the second law of ther-
modynamics. Furthermore, we can see that the entropy of the non-equilibrium system by itself
can be increased or decreased due to the entropy flow from or to the environments. Therefore,
the temporal change for the entropy of the system can be positive or negative. This provides
the chance to create order by decreasing the system entropy. So the entropy of system for the
non-equilibrium is not always increasing and therefore is not always maximized.
Furthermore, We can investigate the derivative of the intrinsic free energy with respect to time
[3,4,9–12,26,69,71,81]:
    
dF P P
= −D 2
∇ ln · D · ∇ ln P dx ≤ 0. (37)
dt Pss Pss
The above equation indicates that the intrinsic free energy of the non-equilibrium system F
always decreases in time until reaching the minimum value F = −D ln Z. This can be under-
stood as one form of the expression for the second law of thermodynamics of non-equilibrium
systems. Therefore, the intrinsic free energy is a Lyapunov function and can also be used to
explore the global stability of the non-equilibrium system. As we see, although the system entropy
is not necessarily maximized (only the total entropy keeps increasing), the intrinsic free energy of
the system does minimize itself. This might provide a design principle for the complex dynamical
systems to search for their optima.
We can now also investigate the non-equilibrium nature of the steady state. We can do so by
the study of how the intrinsic energy, entropy, and free energy of the non-equilibrium systems
change with respect to the fluctuation
 strengths D and the underlying system parameters. The
intrinsic
 system entropy S ss = − Pss (x) ln Pss (x) dx, the intrinsic energy Ess = φ0 Pss (x) dx =
−D ln(ZPss )Pss (x) dx, and the intrinsic free energy Fss = −D ln Z at the steady state can be
naturally defined with the probability in time now replaced as the steady-state probability as
shown above [11,26,56,69,71,81]. Therefore, the intrinsic free energy of the non-equilibrium
system at steady state becomes Fss = −D ln Z = Ess − DSss . We can take this as the first law of
thermodynamics for non-equilibrium systems. We see that the fluctuation strength D here plays
the role of temperature for the non-equilibrium systems analogous to that for the equilibrium
case. We can also explore the non-equilibrium thermodynamic behavior with respective to the
changes of D. We expect entropy and disorder to dominate at high fluctuations, while energy
and order dominate at low fluctuations. Therefore, non-equilibrium phase transitions might occur
from disorder to order as fluctuation decreases or vice versa. The non-equilibrium phase transition
might also occur when the parameter changes of the systems influence significantly the energy–
entropy balance and therefore the behavior of the system free energy.
When the fluctuations D are finite, we can also define and construct the non-equilibrium
entropy, energy, and free energy of the corresponding stochastic  dynamical systems in the
following way. The entropy is − P ln P dx; The energy is E = DUP dx, where U is the non-
equilibrium population potential and  is related to the steady-state probability
 as U = − ln Pss ;
The free energy is F = E − DS = DUP dx − D[− P ln P dx] = D[ P ln(P/Pss ) dx − ln Z].
The above forms the non-equilibrium stochastic thermodynamic first law for dynamical systems
with finite fluctuations [69,71,81]. 
We can also check and see the behavior of the total entropy production (Ṡt = dx(J ·
(DD)−1 · J)/P0) as well as the free energy change of the stochastic dynamical systems with
respect to time with finite fluctuations (dF/dt = −D2 ∇ ln(P/Pss ) · D · ∇ ln(P/Pss )P dx ≤
0). We can see that the total entropy production is always greater than or equal to zero,
while the free energy of the system is always less than or equal to zero. This is the second
law of non-equilibrium stochastic thermodynamics for dynamical systems with finite fluctu-
ations. Therefore, the free energy is also a Lyapunov function monotonically decreasing in
34 J. Wang

time. At the non-equilibrium


 steady state,
 the non-equilibrium free energy becomes [69,71,81]
Fss = Ess − DSss = DUPssdx − D[− Pss ln Pss dx] = −D ln Z. Here the partition function Z
is defined as the form Z = exp(−U) dx which is related to the non-equilibrium population
potential U.

2.7. FDT for intrinsic non-equilibrium systems


The FDT plays a key role for systems near equilibrium with detail balance [131,132]. The FDT
connects the fluctuations of the system quantified by the correlation function at equilibrium with
the response or relaxation of the system to that equilibrium quantified by the response function.
Efforts have been made to extend the FDT to non-equilibrium systems [15,27,28,133–145]. It
has been found that the FDT involves the correlation function of a specific variable conjugate to
entropy [25]. Moreover, by choosing proper observables, the FDT for non-equilibrium systems
can be formulated [146].
In this study, we illustrated a way to generalize FDT for non-equilibrium processes based
on the landscape and flux theory, focusing on the direct observables such as xi , under stochastic
dynamics [15]. The stochastic dynamics in continuous observable can be described by Langevin
dynamics or Fokker–Planck equations. We find that the response function can be decomposed
into two parts. One contribution is from the correlation of the observables themselves represent-
ing the spontaneous relaxations to the steady state. This contribution also exists in systems with
detailed balance. The only difference is that state the system relaxes to is not the equilibrium
state but the non-equilibrium steady state without detailed balance. The other contribution is
related to the heat dissipation. It represents the contribution from breaking detailed balance and
is directly related to the curl flux part of the force. Remarkably, non-equilibrium thermodynamics
[10,12,147,148] can be derived from the generalized FDT in the equal time limit [15].
Again stochastic dynamics in continuous representation can be formulated by Langevin
equations [15]:
ẋi = Fi (x) + Bij (x)ξj (t), (38)
where Fi (x) represents the driving force and ξi (t) represents the Gaussian white noise describing
the statistics of the stochastic fluctuations from the environments to the system: ξi (t)ξj (t ) =
δ(t − t ), adopting the Einstein convention for the notations. The temporal evolution of probabil-
ity distribution obeys the Fokker–Planck equation [15]:

Ṗ(x, t) = L̂(x)P(x, t) (39)

with the operator L̂(x) = [−∂i Fi (x) + ∂i ∂j Dij (x)] and the diffusion coefficient Dij (x) =
1
2
(BBT )ij (x). For convenience, we adopt the notation ∂i ≡ ∂/∂xi , P(x) ≡ P(x, t) to represent the
time-dependent probability distribution and PSS (x) to represent the time independent steady-state
probability distribution.
− F̃i (x)P(x) + Dij (x)∂j P(x) = ji (x), (40)
where F̃i = Fi − ∂j Dij . Then, Fokker–Planck equation again can be rewritten in the following
form: dP(x, t)/dt = ∂ · j. The system is in detailed balance if the steady-state flux is zero: jSS = 0:

jiSS (x) = −F̃i (x)PSS (x) + Dij (x)∂j PSS (x). (41)

In non-equilibrium systems, detailed balance is broken: jSS = 0. The steady-state flux is there-
fore not zero. From the steady-state conditions, the steady-state flux is a divergence free vector
with ∂ · jSS = 0 [15]. The driving force F̃j (x) can then be decomposed of two contributions:
Advances in Physics 35

a potential gradient part −Dij (x)(∂/∂xi )U(x), where U(x) = − ln Pss (x) and a curl flux part
−jjSS (x)/PSS (x) ≡ −vSSj (x). We can define the probability velocity of the steady state as vi (x).
SS

On other hand, we can also decompose the gradient of potential − ln P (x) into the driving force
SS

and the curl flux [15]:

− ∂i ln[PSS (x)] = D−1


ij (x)[−F̃j (x) − vj (x)].
SS
(42)

FDT for equilibrium systems with detailed balance was investigated using perturbation
approach [132]. Here we will extend perturbation approach to non-equilibrium systems. By
applying a linear perturbation on the driving force of the system: Fi (x) → Fi (x) = Fi (x) +
h(t)δFi (x), we see the change of the temporal evolution operator that follows as a consequence
of the perturbations in the driving force L̂ → L̂ = L̂ − h(t)δ L̂, with δ L̂ = δFi (x)∂i + ∂i δFi (x).
The corresponding change of the probability evolution and the average of the observed variable
become [15]:
 t 
P(x, t) = exp dt(L̂ − h(t)δ L̂) P(x, t ),
t

δ(t) = (t) −  = dx(x)[P(x, t) − PSS (x)]. (43)

Therefore, for t ≥ t , the response function of the system measured by the change of the average
of observable with respect to the perturbation applied on the force reads as [15]
⏐ 
δ(t) ⏐⏐ 
R (t − t ) = = dx(x)eL̂(t−t ) (−δ L̂)PSS (x). (44)
δh(t ) ⏐δF=0

Applying the decomposition in Equation (42), we can obtain the response function for the
stochastic non-equilibrium systems [15]:

 −1
R (t − t ) = dxeL̂(t−t ) {δFi [−F̃k − vSS
k ]Dik − ∂i δFi }P
SS

= −(t)∂i δFi (t ) − [(t)δFi (t )F̃k (t )D−1   SS  −1 


ik (t ) + (t)δFi (t )vk (t )Dik (t )].
(45)

The above expression we obtained provides a general relation between the response functions
and the correlation functions of stochastic non-equilibrium dynamical systems. Here, the corre-
 1 
lation between the two observables Ω 1 and  Ω is defined as [15] CΩ 1 Ω 2 (t , t) = Ω (t )Ω (t) −
2 2

Ω 1 (t )Ω 2 (t) with Ω 1 (t )Ω 2 (t) = PSS (xi )Ωx1i Ωx2j P(xi , t |xj , t). Here, P(xi , t |xj , t) is the
transition probability from initial state xi at time t to final state xj at time t. For a vari-
able or coordinate x independent (constant) perturbation of particular component i: δFi = δi i ,
we obtain [15]

R 
i (t − t ) = −(t)∂i ln[P (x)]
SS

= −[(t)F̃k (t )D−1  SS  −1 


ik (t ) + (t)vk (t )Dik (t )]. (46)

This is a generalized FDT for non-equilibrium systems. Notice that the generalized FDT applies
to any number of dimensions not just in one dimension [147,148]. From the force decomposition
in Equation (42), the response or relaxation to the steady state of the stochastic non-equilibrium
system can be decomposed to two parts. The first contribution, analogous to that for equilibrium
36 J. Wang

systems, is related to the usual correlation function of the observable with the driving force.
This contribution exists in FDT even for equilibrium systems obeying the detailed balance. In
equilibrium systems with detailed balance, the driving force can be explicitly expressed as the
gradient of the logarithm of the steady-state probability. However, the second contribution does
not occur in equilibrium systems. It is directly related to the non-zero flux which violates detailed
balance. The curl flux gives a quantitative measure of the degree of the non-equilibrium-ness or
how far away the system is from equilibrium. In other words, for a non-equilibrium system, the
response or relaxation depends both on steady-state fluctuations and also a contribution from the
curl flux.
For constant diffusion coefficients Dij for simplicity when j = 0, the system is in equilib-
rium with detailed balance and time reversal symmetry [15]: in this case (t)Fj (x(t )) =
Fj (x(t))(t ). Applying the Langevin equation (38), Fi (x(t))(t ) = [ẋi (t) − ξi (t)](t ) =
ẋi (t)(t ), due to the fact that the stochastic random force from the environment does not cor-
relate with the system variable  of the previous times (t > t ): ξi (t)(t ) = 0. Then, we can
see  
−1 d
Ri (t − t 
) = −D ik xk (t)(t 
) . (47)
dt
For the specific observable operator (x) = xj , we find
 
d
Ri j (t − t ) = −D−1 
x
ik xk (t)xj (t ) . (48)
dt
We see that the force-observable correlation now becomes the temporal change of the correlations
of the observable variables. The equilibrium FDT then becomes that the response or relaxation to
the equilibrium system is equivalent and can be measured by the equilibrium fluctuations. This
is the FDT near equilibrium [132].
We should point out that the FDT in Equation (46) can also be generalized to the case where
the system is not in steady state but in an arbitrary state with distribution P(x). For t ≥ t , we can
then arrive at [15]


Ri (t − t ) = dx(x)eL̂(t−t ) (−δ L̂)P(x)
 
(49)

= −[(t)F̃k (t )D−1   −1 


ik (t ) + (t)vk (t )Dik (t )]

with the probability velocity of the general state: −vi (x) ≡ −ji (x)/P(x).
Now we will give a new way of deriving the non-equilibrium thermodynamics for the stochas-
tic dynamical systems. Let us choose the observable in the generalized FDT as  = vi (x) and sum
over i from Equation (49), the response function in equal time limit t = t then becomes [15]
 
vi
Ri (t) = dxvi (x)[−∂i P(x)] = dx[∂ · j(x)] ln P(x)

d
= dxP(x) ln P(x) = −Ṡ
dt
= −[vi (t)F̃k (t)D−1 −1
ik (t) + vi (t)vk (t)Dik (t)]. (50)

Therefore, the entropy production of the system S = − d(x)P(x) ln P(x) has two contributions:
Ṡ = vi ∂i ln[P(x)]
= vi D−1 −1
ij vj  + vi Dij F̃j  = ep − Ṡm . (51)
Advances in Physics 37

ep ≥ 0 is the average total EPR of the system and environments. T Ṡm = T ṡm  is the average heat
dissipation rate to the environment or medium. The heat dissipation rate for the environment is
q̇ = Fi ẋj D−1
ij = T ṡm , where the exchanged heat q between the system and the environment can be
identified with the increase in entropy sm in the environment with temperature T [147,148]. The
system entropy change rate Ṡ is closely related to the gradient of the time-dependent probability
distribution: ∂i P, which is composed of two parts. One contribution is from the total bulk entropy
production of the system and environment which is associated with the curl flux v. The other
part is from the heat dissipation into the environment (surface) which is associated with the
driving force F̃ [147,148]. We can see that the driving force for entropy production is the curl
flux. With detailed balance, the only contribution to entropy production comes from the time-
dependent flux. Without detailed balance, entropy production has both contributions from the
time-dependent and the steady-state flux. We can separate the contribution of the time-dependent
and time independent parts of the curl flux to the entropy production of the system. We can further
relate these to the relaxation of time-dependent probability and steady-state flux explicitly. Let us
take the observable  = vi (x) − vSS i (x) and sum over i, the response function in Equation (49)
with equal time limit t = t to yield [15]
  SS 
P (x) Ḟfree
vi ∂i ln =
P(x) T
−1 −1 Qhk
i Dij vj  − vi Dij vj  =
= vSS − ep . (52)
T
This leads to Tep = Qhk − Ḟfree with free energy defined as Ffree = Tln[P(x)/PSS (x)] = U −
−1 −1
TS, the house-keeping heat defined as Qhk = TvSS i (t)vj (t)D (t) = TvSS
i (t)vj (t)Dij (t) =
SS
 ij
Tep + Ḟfree = T Ṡm + U̇ and total energy defined as U = −T dxP(x) ln[PSS (x)] [10,12,147,148].
The change of the total internal energy is therefore U̇ = Tvi (t)∂i ln[PSS (x)]. We can see that
there are two different origins of the total entropy production ep . Ḟfree arises from spontaneous
non-stationary relaxation associated with the gradient of relative potential −∂i ln[P(x)/PSS (x)].
Qhk is the driving force necessary to sustain the non-equilibrium environment, which is asso-
ciated with the steady-state flux vSS (x). For the non-equilibrium steady state, Ḟfree = 0. Qhk is
equal to the environment or medium dissipated heat for maintaining the violation of detailed
−1
balance [15]: Qhk = T Ṡm = −TvSS i Dij F̃j . When the system is in equilibrium with detailed bal-
ance, Qhk = 0. The total entropy production of the system is equal to the spontaneous relaxation
of free energy Tep = −Ḟfree . Therefore, we find that the generalized FDT in the equal time limit
t = t naturally leads to non-equilibrium thermodynamics with total entropy production coming
from the contribution of both non-stationary spontaneous relaxation and stationary house-keeping
part [12,15].

2.8. Gauge field, FDT


Symmetry has played a very important role in physics and chemistry. All the current fundamental
physical laws are the result of symmetry and symmetry breaking. So investigating the symmetry
of a system will help us to uncover the underlying physical laws of that system. We devote this
section to discuss the gauge symmetry of the non-equilibrium systems.
We will explore the relationship of the non-equilibrium Fokker–Planck equation with
Abelian Gauge Theory and internal space geometry as in quantum electrodynamics [149]. The
Fokker–Planck equation can be rewritten as follows [15]: ∇t P(x, t) = ∇i Dij (x)∇j P(x, t), with
the covariant derivative with respect to observable variable ∇i = ∂i + Ai = ∂i − 12 D−1 ij F̃j and
the covariant derivative with respect to time ∇t = ∂t + At = ∂t + [Dij Ai Aj − ∂i (Dij Aj )]. Here,
38 J. Wang

the independent coordinate components Ai of the Abelian gauge field (At , Ai ) will introduce a
curvature of internal charge space written as
1
R
2 ij
= ∂i Aj − ∂j Ai = [∇i , ∇j ], (53)

where [·] is a commutator of two operators. According to Equation (41), in the equilibrium case
with detailed balance: jSS = 0. Then, Ai = ∂i ln(PSS ) is a pure gradient and the curvature is zero:
Rij = 0 which corresponds to a flat space. In the non-equilibrium case, A cannot be written as
a pure gradient. Therefore, the curvature is not zero, Rij = 0. This leads to a curved internal
space. Notice that the Rij is a gauge invariant tensor: for a gauge transformation Ai → Ai + ∂i φ,
Rij → Rij = Rij , the curvature does not change. Moreover, the presence of the probability velocity
v(x, t) and the curl flux j(x, t) are also closely related to the internal curvature as [15]:

∂i (D−1 −1
jk vk ) − ∂j (Dik vk ) = Rij . (54)

In the case of a constant diffusion coefficient Dij = Dδij : ∂i vj − ∂j vi = Rij . We notice that
Equation (54) is gauge invariant. This implies that if we change Ai → Ai + ∂φ, P(x, t), v(x, t)
and j(x, t) are all changed. However, Equation (54) is always satisfied with the same curvature
Rij . Moreover, while v(x, t) and j(x, t) depend on the solutions of P(x, t), they always satisfy
Equation (54). Therefore, Rij represents a measurement of the geometry of the internal space of
the non-equilibrium dynamics. This curvature of the internal space is associated with the heat dis-
sipation in the environment or medium along a closed loop. Along any specific path x(t), T sm
is the heat dissipation in the environment or medium [15]:
 t
T sm (x (t ), x(t)) = T ṡm dt
t
 t  t
= D−1
ij (x(t))F̃j (x(t))ẋi dt = − Ai (x(t)) dxi (t). (55)
t t

Applying the current definition in Equation (40) and Stokes’s theorem, the entropy increase in the
environment or medium sm along a close loop C becomes [15]
 
T sm = − Ai (x) dxi = − D−1
C
ij (x)vj (x) dxi
SS
C C

1
=− dσij Rij , (56)
2 

where  is the surface formed by the closed loop C, dσij is the an area element on this surface. Rij
is the curvature of the internal space due to the presence of the gauge field A. Both the curvature
Rij and the closed-loop heat dissipation in the medium or environment T sCm are gauge invari-
ant under gauge transformation Ai → Ai + ∂i φ. Therefore, we can associate the non-equilibrium
nature of the dynamics with a curved internal space. Notice that the non-equilibriumness is
thermodynamic in nature and measured by statistical number counting. The internal space is
geometric and topological in nature and is measured by the curvature. Number counting and
geometry/topology are the only two most reliable measure of the objective world. Here, we see
an intimate connection between the statistical number counting and geometry/topology. The pres-
ence of the non-zero flux destroys the detailed balance. This leads to non-zero internal curvature
and a global topological non-trivial phase in analogy to quantum mechanical Berry phase [13,14].
The difference lies in the fact that in quantum mechanics, the Berry phase is “imaginary”, while
in classical non-equilibrium dynamics such a “phase” is real.
Advances in Physics 39

The medium or environmental heat dissipation sm in Equation (55) plays a key role in the
time irreversibility for non-equilibrium systems [147,148,150–152]. We shall see it also gives an
important contribution to the generalized FDT for non-equilibrium dynamics. This contribution
is closely related to the gauge field and internal curvature. The gauge aspect of discrete case was
also recently discussed and similar conclusion was obtained [153].
When the system is in a non-equilibrium state, there is no detailed balance and the flux is
non-zero [15]: j = 0. We are often more interested in the direct observable variable xi and the
FDT in the form of the equilibrium case shown explicitly in Equation (48). Accordingly, we
can transform the original force-observable correlation to the observable-observable correlation
xk (t)xi (t ). Without detailed balance, the system becomes time irreversible: (t)Fj (x(t )) =
Fj (x(t))(t ). According to the Fluctuation theorem [147,148,151,152], we get

PSS (x )P̃(x, t|x , t ) PSS (x )


ln = sm + ln . (57)
PSS (x)P̃(x , t|x, t ) PSS (x)

Here, P̃(x, t|x , t ) and (P̃(x , t|x, t ) represent the probabilities of a forward and backward paths,
respectively. We can write (t)Fi (x(t )) − Fi (x(t))(t ) = dxdx (x)Fi (x )A(x, x , t − t )
with A(x, x , t − t ) given as

A(x, x , t − t ) = PSS (x )P(x, t|x , t ) − PSS (x)P(x , t|x, t )


  
SS    PSS (x) − sm
= P (x ) D[x]P̃(x, t|x , t ) 1 − SS  e , (58)
P (x )

D[x] is the path integral from x (t ) to x(t). Then, we arrive at [15]
 
−1 d
R
i (t − t 
) = −D ik xk (t)(t 
)
dt

− D−1 ik dx dx (x)Fk (x )A(x, x , t − t )

− D−1 SS 
ik (t)vk (t ). (59)

When we set the operator as (x) = xj , the response function becomes


 
j  −1 d 
Ri (t − t ) = −Dik xk (t)xj (t )
dt

− D−1
ik dx dx xj Fk (x )A(x, x , t − t )

− D−1 SS 
ik xj (t)vk (t ). (60)

The first part on the right-hand side of the equation is similar to that for the equilibrium case in
Equation (48). The last two parts of Equation (60) are zero for the detailed balance case. These
two parts are related to the non-zero curvature due to the presence of the gauge field originated
from the curl flux in internal space,
 as shown in Equations (54) and (56). In Equation (58), the
factor U(x, y) = e(T/2) s

m
= e− P Ai (x) dxi
is analogous to the Wilson loop or Wilson line in Abelian
gauge theory, with P representing the integral for a path from x to y [149]. This gives a quantita-
tive description of the irreversibility from the heat dissipation in the medium or environment. The
function inside the path integral of Equation (58) is [U(x, y)]−2/T (PSS (y)/PSS (x)) = e− qhk /T ,
where qhk is the house-keeping heat along a trajectory. It was proved that e− qhk /T  = 1
40 J. Wang

[147,148]. Along a closed loop, e− qhk /T = [U(x, x)]−(2/T) . Under the gauge transformation,
C

U(x, y) transforms as U(x, y) → eφ(x) U(x, y)e−φ(y) . Therefore, U(x, y) acquires a phase factor.
It satisfies a differential equation [15]:

ẋi ∇i U(x, y) = 0. (61)

This implies that the gradient of the phase factor (Wilson lines) originates from the heat dissipa-
tion or house keeping of the non-equilibrium systems that is perpendicular to the dynamics. This
is just the same case as the circular motion where the radial motion and phase motion are perpen-
dicular to each other. The origin of such behavior is from the non-zero curvature of internal space
of the gauge field caused by the presence of the non-zero flux which breaks the detailed balance
for non-equilibrium systems.

3. Multiple landscapes, curl flux, and non-adiabaticity


Physical and biological systems often involve multiple degrees of freedom with widely different
timescales. Take an (electronic) Hamiltonian system with known energy function, for example,
the interplay with different timescales of intra-landscape dynamics (motion of the electrons)
along the same and inter-landscape hopping between different (electronic) energy surfaces (for
nuclear motion) is critical for electron transfer. If the dynamics on the intra-landscape is faster
(slower) than the inter-landscape hopping, the process is often called non-adiabatic (adiabatic).
For general non-equilibrium complex dynamics where the Hamiltonian or energy function is
unknown a priori, the challenge is how to study the multiple timescale problem (adiabatic and
non-adiabatic process) of the non-equilibrium system dynamics. We extend the landscape and
flux theory for describing global non-equilibrium and non-adiabatic complex system dynam-
ics with eddy current to coupled landscapes [74]. We find that through rigorous mathematical
transformation, the coupled landscapes which are often technically difficult to deal with, can be
studied in a continuous representation, in which they become a single landscape but with addi-
tional dimensions. Intra- and inter-landscape dynamics on the coupled landscapes becomes the
dynamics along the multidimensional surface of a single landscape. On this single landscape, the
dynamics of the complex system can be decomposed to two determining factors: a gradient of
the potential landscape which is closely related to the steady-state probability distribution of the
enlarged dimensions, and a probability flux which has a curl or eddy-current nature.

3.1. Introduction
The world can be seen as composed complex systems. The complex system dynamics are under
intensive study, but due to their own natures, many properties are not well understood, espe-
cially the global quantification of the system. Complex systems can be physical or biological.
For example, in physical world, the convection in the atmosphere is important for the weather
pattern, while in the biological world, molecular motors through the ATP/ADP pumping real-
ize the function for muscle contraction. Complex systems are usually not isolated. They are in
constant exchange with energy, materials, and information with their environments. So complex
systems are usually activated with pumps and are not similar to the conservative systems often
encountered in the bulk of the physics and chemistry literatures. In a conservative system, the
energy function is often known and given a priori, the ultimate distribution of the system often
follows a Boltzmann law and the dynamics is determined by the gradient of the energy function.
Typical complex systems are usually not in equilibrium. There is no energy function given a pri-
ori. The global nature of the dynamics is a challenge to address. We previously have established
Advances in Physics 41

that the non-equilibrium dynamics can be characterized within a landscape and flux framework
[13–15,17,62].
Furthermore, there is another intriguing complexity related to the multiple timescales even
when the information on the underlying landscape is known (either for equilibrium or non-
equilibrium systems). For example, even for a Hamiltonian system with known energy function,
the interplay with different timescales of intra-landscape motion along a single surface and
inter-landscape hopping between different surfaces represent non-equilibrium systems which can
absorb the energy from energy pump (provided, e.g., from ATP hydrolysis often occurred in
biology) and move along or jump between different chemical states for realizing the muscle con-
traction. If the intra-landscape motion is faster (slower) than the inter-landscape hopping, the
process is often called non-adiabatic (adiabatic). For a Hamiltonian system with given energy
function such as multi-electronic system in atomic physics, electron transfer, etc. much has been
explored and progress has been made [154,155].
While the landscape and flux theory [13–15,17,62,69] is useful for addressing the global
nature of complex systems, as it stands, it can only be applied to a single potential surface
and is not directly applicable to multiple coupled landscapes. On the other hand, most exist-
ing treatments of the adiabatic and non-adiabatic dynamics of multiple energy surfaces taking
into account multiple timescales apply only for Hamiltonian systems where the energy function
is a prior known for each individual surfaces. It does not apply to the case where the underlying
process is non-equilibrium in nature.
The adiabatic and non-adiabatic non-equilibrium systems have been studied computation-
ally in the context of electronic transitions, electron transfer, networks, motors, and nonlinear
dynamical systems [45,63,66,77,154,155]. However, a global description and framework of
understanding is still lacking and in demand. Furthermore, although some systems have been
studied computationally, general complex systems require more intensive computation and theo-
retical guidance is needed to develop efficient algorithm to study their global dynamics. Finally,
the ultimate goal is to uncover the organizing principles underlying complex systems so as to
apply them to design and engineering. Therefore, for general non-equilibrium complex dynamics
where Hamiltonian or energy function is unknown a priori, the challenge is how to study the
multiple timescale problem (adiabatic and non-adiabatic process).
We now discuss a theoretical framework for describing global non-equilibrium and non-
adiabatic complex system dynamics that have eddy currents and require coupled landscapes [74].
We find that through a rigorous mathematical transformation, the coupled landscapes which are
often technically difficult to deal with, can be studied in a continuous representation, in which
they become equivalent to a single landscape but one with additional degrees of freedom of higher
dimension. Intra- and inter-landscape dynamics on the coupled landscapes become dynamics
along the multidimensional surface of a single landscape. On this single landscape, the dynamics
of the complex system can be decomposed into two determining factors: a gradient of the poten-
tial landscape which is closely related to the steady-state probability distribution of the enlarged
dimensions, and a probability flux which has a curl or eddy-current nature. We summarize the
approach in Figure 11.
Figure 11(a) shows the adiabatic single landscape dynamics which is determined by the
potential and gradient. Figure 11(b) shows the adiabatic dynamics of the single landscape for
non-equilibrium system which is determined by both a landscape gradient and a curl flux or
eddy current. Figure 11(c) shows the non-adiabatic multiple landscape surfaces with the energy
function of each individual surface being known a priori and the dynamics being determined
by the combination of the gradient of the energy on the surface and hopping in between the
surfaces. This is the traditionally focused approach to non-adiabatic dynamics. Figure 11(d)
shows non-adiabatic multiple landscapes where the energy function is a priori not known. The
42 J. Wang

(a) (b) (e)

(c) (d)

Figure 11. Illustrations of the equilibrium/non-equilibrium and adiabatic/non-adiabatic landscapes. (a) The
adiabatic single landscape which underlies the equilibrium gradient dynamics. (b) The adiabatic single land-
scape which underlies the non-equilibrium dynamics determined by both the landscape gradient and curl
flux. (c) The non-adiabatic multiple landscapes where the dynamics is determined by the combination of
the gradient of the individual landscape and hopping between the landscapes. (d) The non-adiabatic mul-
tiple landscapes where dynamics is determined by both the landscape gradient and curl flux as well as the
additional inter-landscape hopping. (e) The description equivalent to D with the single landscape for non-a-
diabatic non-equilibrium systems in the continuous representation, where the dynamics is determined by the
gradient of the landscape and curl flux or eddy current on the expanded space (from Ref. [74]).

non-adiabatic dynamics is determined by both the landscape gradient and curl flux on the land-
scape surface as well as by the additional interface hopping. Finally, Figure 11(e) shows the
equivalent non-adiabatic single landscape for non-equilibrium systems in continuous representa-
tion (equivalent description of Figure 11(d)) where the dynamics is determined by the gradient
of a non-equilibrium potential and a curl flux or eddy current in an expanded space with more
dimensions.

3.2. Theory of non-adiabatic non-equilibrium potential and flux landscape for general
dynamical systems
To establish a potential and flux landscape framework for studying the non-adiabatic non-
equilibrium dynamical systems, we can divide the degrees of freedom or important variables
into two groups. When the timescale for the dynamics of one group of variables y is significantly
faster than for the other group of variables x, then the system is in adiabatic limit where we can
eliminate the fast variables y and only consider the effective evolution dynamics of the system
along the x variables. We have previously discussed this adiabatic case and have shown that the
driving force of the general non-equilibrium dynamics can be decomposed into a gradient of a
potential and a curl flux. While the gradient force drives the system as an electric field on elec-
trons generating motion downhill the gradient, the curl flux provides an additional contribution
that acts like a magnetic field on electrons generating curly motion. It is the curl flux term that
breaks detailed balance and gives the possibility of a limit cycle, while the limit cycle does not
exist when the detailed balance is preserved in an equilibrium system.
Advances in Physics 43

If one group of variables y is comparable or slower in their motion than the other group of
variables x, then the system is said to be in a non-adiabatic regime where we cannot eliminate
the y variables and only consider the dynamics along the x variables. In this case, we must take
both groups of variables of x and y into the consideration. If the variables y are continuous, we
can again extend our previously studies and decompose the driving force of the system dynamics
into a gradient and curl flux in x and y spaces.
However, the y variables often can be discrete variables, in many examples that appear in
physics, chemistry, and biology. Electron motions can be thought of as motion along continuous
coordinates x while different electronic energy surfaces where electrons move on can be rep-
resented by variable y. Depending on the electronic states, the y is usually discrete (integers).
Electron transfer and motors are other examples, where the motion is best quantified by two sets
of variables, x describing the continuous motion along energy surfaces for nuclear coordinates,
while y for describing which energy surface the motion is on. This set of combined continuous
x and discrete y variables for complex systems is a challenge to study even with serious numeri-
cal computations because we are dealing with the motion on the multiple (labeled by y) coupled
(through y) energy surfaces or landscapes instead of a single energy surface or landscape. Thus,
it is harder to visualize multiple landscape motion than the pure continuous representation.
One immediate question is whether even when the motion in x along each energy surface
follows a gradient dynamics, does the whole system of multiple coupled energy surfaces or land-
scapes with x and y follow the gradient dynamics and preserve the detailed balance? The answer
to this question is “No”, in general, otherwise the cycle motion such as muscle contraction caused
by motors would not occur since there is an active pumping process with the consumption of ATP
that breaks detailed balance. However, it is not so easy to quantify and visualize how a continu-
ous curl flux emerges for detailed balance breaking with coupled landscapes using the combined
continuous x and discrete variable y representation.
Furthermore, for general dynamical systems, at each discrete y, the motion on x may not be
driven by gradient force alone. Nevertheless, we may decompose the driving force for each dis-
crete y along x to the gradient of a potential and a curl flux. So now we are dealing with again
the cases of multiple (labeled by y) coupled (through y) potential landscapes. Therefore, for the
general case of combined continuous and discrete variable systems where the timescales of x and
y are not apparently separable, the challenge is how to quantify and visualize the global dynam-
ics on such coupled landscapes. We will show that the key to resolve this issue lies in the fact
that we can transform the discrete variables y into an equivalent continuous representation with
variables z. The physical meaning of the corresponding continuous variables z is the occupation
or probability on each discrete variable y. Once the transformation from discrete variables y to z
is completed, the whole system can be described again by two sets of continuous variables x and
z. Therefore, we can decompose the driving force into a gradient and a curl flux for the general
case when the z variables are fast (adiabatic regime), or comparable/slow (non-adiabatic regime)
compared to the x variables. In this way, the physical picture is very clear. We can see how the
dynamic systems in different timescales can be decomposed into gradient of a single (not mul-
tiple coupled) potential landscape with continuous variables x and z and the curl flux defined on
this single potential landscape in x and z. In other words, we have transformed a rather difficult
problem of multiple (labeled by y) coupled landscape dynamics problem in x and y (finite size in
y because of its discrete nature) into a single landscape problem in continuous variables x and z
with expanded dimensions (infinite number of values in z, due to its continuous nature).
Our aim here is to obtain a physically intuitive picture and quantification of the dynam-
ics for non-adiabatic non-equilibrium dynamical systems. We will, in the following, develop a
theoretical framework to explore the global dynamics of non-adiabatic non-equilibrium systems
[74]. The deviation from the strong-adiabatic limit is represented by fluctuations in the discrete
44 J. Wang

state space. We first explore a simple system of single self-regulating gene to illustrate the idea.
The discrete state of our example of self-regulating gene is represented by the on or off binding
state of the gene’s DNA. The (nearly) continuous variable here is the concentration of the proteins
produced by the gene.

3.3. A one variable coupled landscape


We will use an example to show how we can transform the challenge of exploring the coupled
multiple landscapes to an equivalent single landscape with extended dimensions. To begin with,
we consider a 1D process of simple synthesis and degradation (decay) coupled with reactions.
The reactions of synthesis and decay can occur either in a gene-on landscape or on a gene-off
landscape. This is a typical non-adiabatic case mimicking many of the important physical and
biological processes, for example, electron transfer, gene regulation, motor dynamics, etc.
Set state “1” as the gene-on state and “0” as the gene-off state. Assuming the reaction requires
a dimer, we can write down the corresponding Master equations as follows:
dP1 (n) h
= [(n + 2)(n + 1)]P0 (n + 2) − fP1 (n)
dt 2
+ k[(n + 1)P1 (n + 1) − nP1 (n)] + g1 [P1 (n − 1) − P1 (n)], (62)
dP0 (n) h
= − [n(n − 1)]P0 (n) + fP1 (n − 2)
dt 2
+ k[(n + 1)P0 (n + 1) − nP0 (n)] + g0 [P0 (n − 1) − P0 (n)]. (63)

Here n is a discrete particle number. The physical meaning is clear. The population probabil-
ity change of the reaction on (off) state is controlled positively (negatively) by the on reaction,
negatively (positively) by the off reaction, its own synthesis and decay.
In the continuous space, we can set x = n/V , where V is the volume of the system. In the
large volume (V ) limit,
⎛ 1 2 ⎞ 
  H − f hx
d P1 ⎜ 1 2 ⎟ P1
=⎝ ⎠ P , (64)
dt P0 1 2 0
f H0 − hx
2
where
1 1 1 2
H1 = − ∂F1 + ∂ D1 , (65)
V V2 2
1 1 1 2
H0 = − ∂F0 + ∂ D0 (66)
V V2 2
with F1 = g1 − kx, F0 = g0 − kx and D1 = g1 + kx, D0 = g0 + kx. We can write the above
equation in an operator form from the probability.
dP
= HP (67)
dt
and  
P(t) = exp dtH P, (68)

where P is the probability and H is the operator.


Advances in Physics 45

With the coherent-state representation of the spin,


 π  2π
1
Is = sin θ dθ dφ|ŝs̄| (69)
2π 0 0

and 
Ip = dp|pp|, (70)

where
⎛ θ ⎞
eiφ/2 cos2
⎜ 2 ⎟ , s̄| = (e−iφ/2 , eiφ/2 )
|ŝ = ⎝ ⎠ (71)
−iφ/2 2 θ
e sin
2
and |p is the eigenstate of momentum operator p̂ = −i(∂/∂x): p̂|p = p|p.
Inserting into P(xf , sf , tf |xi , si , ti ), we obtain
      i=N−1  
    
   
P(xf , sf , tf |xi , si , ti ) = xf , sf T̂ exp Hdt  xi , si = xf , sf
 (1 + Hi t) xi , si
 
i=1
 i=N−1  
  
 
= xf , sf  [Is ⊗ Ip (1 + Hi t)]Is ⊗ Ip  xi , si
 
i=1
   
= const D[− cos θ ]DφDpDx exp − dtL . (72)

One can calculate matrix for production of s̄| ⊗ p|(1 + H t)]|ŝ ⊗ |q by using

1 1 2
p|H1 |x = −i pF1 − p D1 p|x, (73)
V V2
1 1 2
p|H0 |x = −i pF0 − p D0 p|x (74)
V V2

and p|x = e−ipx . Thus, we find the effective “Lagrangian”

i dφ 1 + cos θ iφ 1 2 1 − cos θ −iφ


L= cos θ − f e − hx e
2 dt 2 2 2
 
1 + cos θ h 2 1 − cos θ
+ (L1 + f ) + L0 + x , (75)
2 2 2
1 1 1 2 dx
L1 = i F1 p + p D1 − i p, (76)
V 2 V2 dt
1 1 1 2 dx
L0 = i F0 p + p D0 − i p. (77)
V 2 V2 dt
Defining (1 + cos θ )/2 = c1 , (1 − cos θ )/2 = c0 , expanding Equation (75) with respect to the
conjugate variables φ and p to the lowest order
   
dc1 h 2 1 1 1 dx
Lcl = iφ − − fc1 + x c0 + ip c1 g1 + c0 g0 − kx − . (78)
dt 2 V V V dt
46 J. Wang

Integrating φ and p leads to δ functions which gives the deterministic equations:

dc1 h
= −fc1 + x2 c0 , (79)
dt 2
dx g1 g0 k
= c1 + c0 − x. (80)
dt V V V

Taylor expands Equation (75) with respect to the conjugate variables φ and p to the second order
yielding
 
1 h 1 1
L = Lcl + φ 2 fc1 + x2 c0 + p2 2 (g1 c1 + g0 c0 + kx). (81)
2 2 2 V
This leads to the coupled-Langevin equations as

dc1 h
= −fc1 + x2 c0 + ηθ , (82)
dt 2
dx g1 g0 k
= c1 + c0 − x + ηx (83)
dt V V V

with
 
 1 h 2
ηθ (t)ηθ (t ) = fc1 + x c0 δ(t − t ), (84)
2 2
 
 1 g1 g0 k
ηx (t)ηx (t ) = c1 + c0 + x δ(t − t ). (85)
2V V V V

Setting k/V = 1, c1 = (1 − ξ )/2, g1 → g0 , g0 → g1 , h = 2h0 V 2 , Equations (82) and (83)


become

1 dξ 2
= (1 − ξ ) − κx2 (1 + ξ ) − ηθ , (86)
ω dt ω
dx g1 + g0 g1 − g0 Xad δX
= + ξ − x + ηx = + ξ − x + ηx . (87)
dt 2V 2V V V

We can define ω as a parameter which quantifies the degree of adiabaticity as the ratio ω = f /k,
where the f is the off rate of inter-landscape hopping and k is the decay rate on a landscape
representing the intro-landscape dynamics. ω being the ratio of the timescales of inter-landscape
hopping and intra-landscape motion gives a quantitative measure of the relative importance of
the non-adibaticity. The larger ω values indicates a stronger coupling among landscapes (x, i)
(i = 0, 1), resulting in an effective landscape (x). On the other hand, the smaller ω values imply
a weak coupling among landscapes, resulting in multiple landscapes (x, i), (i = 0, 1). The non-
adiabatic multiple landscapes can be further transformed into the picture of single landscape with
extra dimensions (x, ξ ).
   
2 2 1 4 h 2 1
ηθ (t) ηθ (t ) = fc1 + x c 
0 δ(t − t ) = [(1 − ξ ) + κx2 (1 + ξ )]δ(t − t ), (88)
ω ω 2 ω2 2 ω
 
 1 Xad δX
ηx (t)ηx (t ) = + ξ + x δ(t − t ). (89)
2V V V
Advances in Physics 47

The corresponding Fokker–Planck equation for p(x, c1 , t) then is as follows:


      
g1 g0 k 1 1 g1 g0 k
∂t p = −∂x c1 + c0 − x p + ∂x2 c1 + c0 + x p
V V V 2 2V V V V
      
h 1 1 h
− ∂c1 −fc1 + x2 c0 p + ∂c21 fc1 + x2 c0 p . (90)
2 2 2 2

Therefore, we can see that we have successfully transformed a non-adiabatic dynamic chal-
lenge with a multiple coupled landscape on (x, i) space (i = 0, 1) into dynamics on a landscape
with extended dimensions (x, c) or (x, ξ ). There is an important implication of this study. In the
original problem, the dynamics is 1D and therefore there is always a landscape whose derivative
determines the intra-landscape motion with occasional jumps to other landscapes. In this pic-
ture, if there is no coupling or jump to other landscapes, we will not have a flux and the system
dynamics is driven by a potential gradient. The question is how would coupling of the discrete
state change this picture. As seen, we have transformed this to the dynamics with an extra dimen-
sion. In two dimensions, the system is no longer guaranteed to have a pure gradient. In fact, the
original coupled landscape dynamics with intra-landscape driving force driven only by a poten-
tial gradient now becomes a system with driving forces from both the potential gradient and flux
components of a single landscape in expanded space. Introducing the extra dimensions provides
the extra timescales. So the flux here comes from the landscape coupling and the timescale due
to the coupling. This is the non-adiabatic origin of the flux.
In a similar spirit, the above schemes can be naturally generalized to N variable case with 2N
coupled landscapes. In general, we can transform the general issues of non-adiabatic dynamics
involved with the coupled landscapes to the non-equilibrium dynamics on a single compos-
ite landscape in expanded continuous dimensions. Then, we can explore the global behavior
and dynamics of the system by directly applying the non-equilibrium landscape and flux theory
directly [13–15,17,62, 69].

4. Spatial fields, landscapes, and fluxes


We live in a non-equilibrium world. Our Earth is a non-equilibrium system constantly receiv-
ing energy from the Sun. Our human body as a non-equilibrium system constantly consumes
energy for survival. Uncovering the principles and physical mechanisms of these activated pro-
cesses is vital for understanding the non-equilibrium systems. Significant efforts and progresses
have been made recently on the global stability and dynamics of the non-equilibrium systems
[3–6,8,9,11,13–15,69]. The current study have often been focused on the homogeneous systems
where the spatial dependence is ignored or at most characterized by an averaged mean field.
However, almost all of the realistic systems are spatial dependent. The famous Bernard convec-
tion flow is a typical physical example of spatially dependent non-equilibrium system [156]. The
drosophila differentiation and growth is another famous biological example of spatially depen-
dent non-equilibrium system [157,158]. These systems have been studied through the dynamics
of reaction diffusion processes at the mean field level, often characterized by the partial differ-
ential equations. The mean field level description can give local dynamics and local stability
analysis for these spatial-dependent non-equilibrium systems. However, the global natures of the
systems such as global stability and robustness cannot be addressed using the typical mean level
local description.
In this section, we will generalize the non-equilibrium landscape and flux theory [13–
15,17,62,69] to include quantities that vary in physical space. In other words, we will go from
48 J. Wang

non-equilibrium statistical mechanics to non-equilibrium statistical field theory [73,75,85]. We


can develop a general method to construct a Lyapunov functional to quantify the global stabil-
ity and robustness of such spatially dependent non-equilibrium systems. We find the Lyapunov
functional reflects an underlying intrinsic potential field landscape for the spatial non-equilibrium
systems. The topography of the intrinsic potential field landscape can be characterized by the
basins of attractions directly related to the global stability. In the spatially dependent equilibrium
systems, the dynamics is determined by the functional gradient of an energy functional. Usually,
the energy functional is known a priori as the interaction potential functional for the systems. For
the spatially dependent dynamical systems, in general, an energy functional giving equilibrium
gradient dynamics cannot usually be found. One has to consider the additional contributions from
a curl flux. The curl flux field quantitatively characterizes the degree of the non-equilibriumness.
Again the potential field landscape and curl flux field form a dual pair for characterizing the
global spatial temporal non-equilibrium dynamics.
In the following subsections, we will develop a potential and flux field landscape theory
for spatially dependent non-equilibrium systems [73,75,85]. We will also uncover the Lyapunov
functional for quantifying the global stability of the spatially dependent non-equilibrium systems
[73,75,85].

4.1. Potential and flux field landscape theory for stochastic spatial non-equilibrium systems
To extend the potential and flux landscape theory for the non-spatially dependent (or spatially
homogenous) dynamical systems to the general spatially dependent dynamical systems, we first
will start with the stochastic dynamics with spatial dependence. To distinguish the problem from
that with the spatially independent case, we use a different symbol φ( x, t) to represent the spatial
dependence of the dynamical variables rather than C  in the spatial-independent case [73,75,85].

 x, t)
∂ φ(
 x; φ] + ξ [x, t; φ],
= F[ 
∂t
∂φa (x, t)

= Fa [x; φ] + ξa [x, t; φ], (91)
∂t
 
where ξa [x, t; φ]  = b d3 x Gab [x, x ; φ]ζ
 b (x , t) with < ζa (x, t) >= 0 and < ζa (x, t)ζb (x , t ) >=
δab δ (3) (x − x )δ(t − t ).
Notice that φ(  x, t) instead of being a single dynamical variable as in the spatially indepen-
dent case, now is a function of space and time and therefore becomes a vector field. F[  x; φ]
becomes the deterministic driving force field which can depend on space and dynamic variables
explicitly. ξ [x, t; φ]
 becomes the stochastic driving force field depending on the space and field
variables. Therefore, the evolution of the spatially dependent dynamical systems is determined
by the deterministic and stochastic driving force fields. Gab [x, x ; φ]  gives the spatial and dynami-
cal variable dependence of the stochastic driving force field. The deterministic part of the driving
force field F[ x; φ] often contains a term which is associated with spatial diffusion of the dynam-
ical variables Diff(x), ∇ · ∇ · Diff(x)φ(  x, t). ∇ here refers to the spatial derivative. Therefore,
the stochastic ordinary differential equation for the evolutions of the dynamical variables in the
spatial-independent case becomes partial differential equation for the evolutions of the dynami-
cal field variables in the spatial-dependent case [73,75,85]. Once the stochastic partial differential
equation for the field is written up as aforementioned, we can investigate the evolution of indi-
vidual trajectories for the dynamical variables. However, since the individual trajectories are
stochastic, we cannot follow the trajectories to predict the outcome. The more appropriate quan-
tity to trace is the evolution of the probability distribution rather than the individual trajectories.
Advances in Physics 49

The next question then is what would be the corresponding Fokker–Planck probability evolu-
tion equation for the spatial dependent dynamical system. In order to do so, we must extend the
derivatives in the original Fokker–Planck equation in the spatial-independent case to the spatial-
dependent case. Since the dynamical variables themselves now are functions of space, so the
derivatives with respect to them become functional derivatives. We need to define the functional
derivative explicitly. A simple ansatz would be to divide the space into cubic cells of each with
size of l and volume of l3 [2]. We can then define the zi = l3 φ(  xi ) for each spatial cell with the
zi } of all the different spatial cells.
label i. Let us consider function of the variables z = {
The driving force F[  x; φ] becomes functions of all these cell variables z. The partial
derivatives can be defined straightforwardly. The functional derivative can be defined as
δ F[ 
 φ]  z]
∂ F[
= lim l−3 . (92)
 xi ) l→0
δ φ( ∂ zi
Therefore, the corresponding stochastic partial differential equation
 x, t)
∂ φ(
 x; φ] + ξ [x, t; φ]
= F[  (93)
∂t
becomes [2]
∂ zi  
=  zi ] +
Diffij zj + δF[ δGij ζj
∂t i j

with the spatial diffusion term explicitly put in. In this equation, the Diffij are the spatial diffusion
coefficients giving the discrete approximation of ∇ · ∇ · Diff.
 xi , t)] = lim δF[
 φ(
F[  zi ]l−3
l→0

and 
G(xi , t) = lim l−3 δGij ζj .
l→0
j

Assuming an explicit general variable-dependent correlation [73,75, 85]:


G(x, t)G(x , t ) = D(x, x )δ(t − t )
and 
D(xi , xj ) = lim l−6 δGik δGjk .
l→0
k

Finally the Fokker–Planck equation for zi becomes [2,73,75,85]


∂P(z)  ∂ 1  ∂2
=−  zi )]P(z)} +
{[Diffij zj + δij δF( ∂zj δGik δGjk P(z).
∂t ij
∂zi 2 ijk ∂zi

If we take the l → 0 limit, then we finally obtain the functional Fokker–Planck diffusion
 x).
equation for the field variable φ(
∂P[φ]  δ
=− d3 x (∇ · ∇ · Diff(x)φa (x) + Fa [x; φ]P[  φ])

∂t a
δφ a (
x)
  δ2
+ d3 x d3 x  φ]))
(Dab [x, x ; φ]P[  (94)
δφ (
x )δφ (
x)
ab a b
50 J. Wang

with spatial diffusion term explicitly put in. For convenience, we list a table for linking the vari-
ables in Fokker–Planck diffusion equation for spatial-independent case and the field variables in
functional Fokker–Planck equation in spatial-dependent case.
Notation correspondence

i ⇐⇒ a, x,
j ⇐⇒ b, x ,
k ⇐⇒ c, x”,
Ci (t) ⇐⇒ φa (x, t),
 ⇐⇒ Fa [x, φ],
Fi (C) 
 ⇐⇒ Gab [x, x ; φ],
Gij (C) 
 ⇐⇒ Ja [x; φ],
Ji (C) 

δ
∇ ⇐⇒ d3 x ,
δ φ

∂ δ
∂i ≡ ⇐⇒ d3 x ,
∂Ci δφa (x)
 ∂  δ
∇ · J ≡  ⇐⇒
Ji (C) d3 x 
Ja [x; φ]. (95)
i
∂Ci a
δφa (x)

Quantification of the potential field landscape and decomposition of the dynamics to a


potential field landscape and a curl flux field for stochastic spatially dependent dynamical systems
(1) Functional Fokker–Planck Equation
Here we can see the Fokker–Planck equation for the evolution of the probability distribution
of the spatial-independent dynamical systems and the corresponding functional Fokker–Planck
equation for the evolution of the probability functional of the spatial-dependent dynamical
systems [73,75,85].
∂P
 + ∇ · ∇ · (DP),
= −∇ · (FP)
∂t
that is,
∂P
= −∂i (Fi P) + ∂i ∂j (Dij P)
∂t
⇐⇒

  
∂P[φ] δ
=− d3 x  φ])
(Fa [x; φ]P[ 
∂t a
δφa (x)
  δ2
+ d3 x d3 x  φ])).
(Dab [x, x ; φ]P[  (96)
ab
δφa (x)δφb (x )

The physical meaning is clear. The change of the probability functional is determined by both
the driving force field and the fluctuations characterized by the diffusion in field configurations.
Here for clarity purposes, we have not explicitly written out the spatial diffusion term (Diff term
aforementioned ) contained in the deterministic force field.
Advances in Physics 51

(2) Probability flux field


 considering the spatially dependent case
Here we define the corresponding flux field Ja [x; φ]

as compared with the flux J for the spatial-independent case.

J = FP
 − ∇ · (DP),

that is,
Ji = Fi P − ∂j (Dij P)
⇐⇒
 δ
 = Fa [x; φ]P[
Ja [x; φ]  φ]  − d3 x  φ])).
(Dab [x, x ; φ]P[ 
b
δφb (x )

The physical meaning of the flux field is that the net flux determines the evolution of the
probability functional since

  
∂P[φ] δ
=− d3 x 
Ja [x; φ].
∂t a
δφa (x)

The probability functional changes in time are determined by the functional divergence of the net
flux field.
(3) Force field decomposition
Here we consider the driving force field decomposition into the functional gradient of the
potential field landscape and the curl flux field how including spatial dependence as compared to
the spatial-independent case where the driving force for the dynamics is decomposed to gradient
of the potential and curl flux [73,75,85].

 = −D · ∇U + Jss ,

Pss
that is,
Jiss
F̃i = −Dij ∂j U + ,
Pss
 =F
where F̃  − ∇ · D and U = − ln Pss
⇐⇒
 δU[φ]  
Jass [x; φ]

F̃a [x; φ] = − 
d3 x Dab [x, x ; φ] + , (97)
δφb (x ) Pss [φ] 
b
 
 = Fa [x; φ]
where F̃a [x; φ]  − b d3 x (δ/δφb (x ))Dab [x, x ; φ]  and U[φ]  = − ln Pss [φ]
 and the
 satisfies
divergent-free probability flux field J [x; φ]
 δ
d3 x  = 0.
Ja [x; φ]
a
δφa (x)

 is quantified and closely linked to the steady-state probability func-


We can see clearly, U[φ]
tional of the whole spatial-dependent dynamical systems. Since the U[φ]  directly reflects the
probability or weight of each field configuration in space, U serves as the potential field land-
scape which can be used to characterize the global nature of thesystem.
 On the other hand, for
 = 0.
steady state, the divergence of the flux field is equal to zero. a d3 x(δ/δφa (x))Ja [x; φ]
52 J. Wang

 = 0, then we have the detailed balance equilibrium condition. Under this condi-
If Ja [x; φ]
tion, the potential landscape field is related to the equilibrium probability functional U[φ]  =
 while the dynamics is determined by the functional gradient of potential F̃a [x; φ]
− ln Peq [φ],  =
 

− b d3 x Dab [x, x ; φ](δU[ 
φ]/δφ x )) with an addition to the diffusion-dependent force field
b (
  3 
b d x (δ/δφb (

x ))Dab [x, x ; φ].
When
 the flux field itself is not equal to zero, the divergence free property of the flux at steady
state a d3 x(δ/δφa (x))Ja [x; φ]  = 0 implies that the flux field J [x; φ]
 has no sinks or sources

to go into or come out of in the field configurational space φ [73,75,85]. Therefore, J [x; φ]  has
to be a curl rotating around in the field configurational space. The dynamics is determined by
the functional gradient of the potential field landscape, the curl flux field, and the additional
diffusion-dependent force field. In this way, we realize the force field decomposition in the field
configurational space for determining the non-equilibrium dynamics. While gradient dynamics
attracts the system down to the underlying potential field landscape, the flux will tend to drive
the spatial-dependent dynamical system curling around in the field configurational space φ.  The
gradient of the potential field landscape and the curl flux field are analogous to dynamics of a
charged scalar field moving in both spatial-dependent electric and magnetic fields in quantum
field theory.

4.2. The Lyapunov functional for spatially dependent dynamical systems


Global stability is essential for the function and dynamics of spatially dependent systems. Here
we will provide a general method to construct a Lyapunov functional monotonically decreasing
for the spatially dependent dynamical systems [73,75,85]. In this way, we can quantify the global
stability with this Lyapunov functional. It turns out that the Lyapunov functional is the intrinsic
potential field landscape of the dynamical systems under zero fluctuations in field configurations.
Furthermore, we also define the free energy functional of the spatially dependent dynamical sys-
tem and find that it is a Lyapunov function for finite fluctuations in field configurations. So the
free energy functional can be used to explore the global stability of spatial-dependent dynamical
systems under finite fluctuations.
Or equivalently, the Lyapunov functional of spatially dependent deterministic dynamical
systems
The corresponding deterministic field equation of the corresponding Fokker–Planck proba-
bility equation (FFPE) is as follows:

∂φ a (q, t)
= F {a,q} [φ],
∂t

where F {a,q} [φ] stands for the force field with the field variable φ with spatial variable q and
parameter a φ = φ(a, q). The zeroth-order potential field term as intrinsic potential field land-
scape (0) [φ] plays the role of a Lyapunov functional of the spatially dependent deterministic
dynamical system [73,75,85].
When D is small but not yet in the zero limit, we need to keep only the zero order (0) [φ] in
the expansion of  and thus the steady-state probability distribution is approximated by
 
1 (0) [φ]
Pss [φ] = exp − .
Zss [φ] D

For Pss [φ] to be a proper functional when D is small but not in the zero limit, Pss [φ] should have
a higher bound and thus (0) [φ] should have a lower bound. Thus, we can always add a constant
Advances in Physics 53

to (0) [φ] to make it non-negative, that is,

(0) [φ] ≥ 0.

Then we calculate its time derivative [73,75,85]:

d(0) [φ] ∂φ a (q, t)


= δ{a,q} (0) [φ]
dt ∂t
= F {a,q} [φ]δ{a,q} (0) [φ]

= −(δ{a,q} (0) [φ])D{a,q}{b,q } [φ](δ{b,q } (0) [φ])
≤ 0. (98)

where δ{a,q} = δ{a,q} /δφ(a, q) represents the functional derivative with respect to field φ(a, q).
Therefore, the intrinsic potential field landscape (0) [φ] is a Lyapunov functional of the spatially
dependent deterministic dynamical systems. We can use the intrinsic potential field landscape
to directly quantify the global stability of the spatially dependent deterministic dynamical

systems. Furthermore, from the relation (F {a,q} [φ] + D{a,q}{b,q } [φ]δ{b,q } (0) [φ])δ{a,q} (0) [φ] = 0
(0){a,q}
in zero-fluctuation limit and expression of flux field in zero-fluctuation limit Vss [φ] =

D{a,q}{b,q } [φ]δ{b,q } (0) [φ] + F {a,q} [φ], we can see that the driving force field F {a,q} [φ] in the spa-
tially dependent deterministic dynamical systems can be decomposed of two terms, one is the

gradient of the intrinsic potential field landscape −D{a,q}{b,q } [φ]δ{b,q } (0) [φ] and the other is the
(0){a,q}  (0){a,q}
curl flux velocity field Vss [φ], F {a,q} [φ] = −D{a,q}{b,q } [φ]δ{b,q } (0) [φ] + Vss [φ]. Further-
(0){a,q}
more, the Vss [φ]δ{a,q} (0) [φ] = 0 [73,75,85]. This implies that the flux velocity field of the
driving force field is perpendicular in direction to the gradient of the intrinsic potential field
landscape of the driving force field for the deterministic spatially dependent dynamical systems.
The Lyapunov functional of the stochastic spatially dependent dynamical systems
In the last section, we discussed the global stability and the Lyapunov functional of the deterministic spatially dependent dynamical systems. We now turn to exploring the Lyapunov functional for global stability under finite fluctuations.
The free energy functional F of the system as a functional of the probability distribution
P[φ, t] is a Lyapunov functional of the FFPE. The proof is as follows [73,75,85]:
\[
\begin{aligned}
F(t) &= \left\langle \ln \frac{P[\phi,t]}{P_{ss}[\phi]} \right\rangle_{P[\phi,t]} \\
&= \int P[\phi,t]\,\ln\frac{P[\phi,t]}{P_{ss}[\phi]}\, \mathcal{D}\phi \\
&= \int \left( P[\phi,t]\,\ln\frac{P[\phi,t]}{P_{ss}[\phi]} - P[\phi,t] + P_{ss}[\phi] \right) \mathcal{D}\phi \\
&= \int P_{ss}[\phi] \left( \frac{P[\phi,t]}{P_{ss}[\phi]}\,\ln\frac{P[\phi,t]}{P_{ss}[\phi]} - \frac{P[\phi,t]}{P_{ss}[\phi]} + 1 \right) \mathcal{D}\phi \\
&= \int P_{ss}[\phi]\,\bigl(R[\phi,t]\ln R[\phi,t] - R[\phi,t] + 1\bigr)\, \mathcal{D}\phi \\
&\ge 0, \qquad (99)
\end{aligned}
\]
where R[φ, t] = P[φ, t]/P_ss[φ]. The third line uses the normalization of P[φ, t] and P_ss[φ], and the last inequality follows because R ln R − R + 1 ≥ 0 for all R ≥ 0, with equality only at R = 1.
 
\[
\begin{aligned}
\frac{d}{dt} F[P[\phi,t]] &= \frac{d}{dt} \left\langle \ln \frac{P[\phi,t]}{P_{ss}[\phi]} \right\rangle_{P[\phi,t]} \\
&= \left\langle V^{\{a,q\}}[\phi,t]\, \delta_{\{a,q\}} \ln \frac{P[\phi,t]}{P_{ss}[\phi]} \right\rangle_{P[\phi,t]} \\
&= - \left\langle \left( \delta_{\{a,q\}} \ln \frac{P[\phi,t]}{P_{ss}[\phi]} \right) D^{\{a,q\}\{b,q'\}}[\phi] \left( \delta_{\{b,q'\}} \ln \frac{P[\phi,t]}{P_{ss}[\phi]} \right) \right\rangle_{P[\phi,t]} \\
&\le 0.
\end{aligned}
\]

F[P[φ, t]] = ⟨ln(P[φ, t]/P_ss[φ])⟩_{P[φ,t]} plays the same role for the FFPE in the case of finite fluctuations as Φ^{(0)}[φ] does for the deterministic equation.
When the probability distribution functional in time P[φ, t] differs from the steady-state probability distribution functional P_ss[φ], δ_{a,q} ln(P[φ, t]/P_ss[φ]) is non-zero and thus the free energy functional F(t) will continue to decrease until P[φ, t] agrees with P_ss[φ], where F(t) = 0 and (d/dt)F(t) = 0.
From this we finally obtain the Lyapunov functional of the deterministic spatially dependent dynamical systems as the intrinsic potential field landscape, and the Lyapunov functional of the spatially dependent dynamical systems with finite fluctuations as the free energy functional. We can quantify the global stability and behavior with these Lyapunov functionals [73,75,85].

4.3. Non-equilibrium thermodynamics of stochastic spatial systems


We consider spatially inhomogeneous dynamical systems described by a functional Fokker–Planck equation. Define the following three thermodynamic quantities, the internal energy functional, the entropy functional, and the free energy functional, together with their total internal energy, entropy, and free energy counterparts (for details, see [73,75,85]):

\[
\begin{aligned}
&U(t) = -\ln P_{ss}(t), &&\langle U(t)\rangle = \int P(t)\,(-\ln P_{ss}(t))\, d\tilde{q}, \\
&S(t) = -\ln P(t), &&\langle S(t)\rangle = \int P(t)\,(-\ln P(t))\, d\tilde{q}, \\
&F(t) = U(t) - S(t) = \ln\frac{P(t)}{P_{ss}(t)}, &&\langle F(t)\rangle = \int P(t)\,\ln\frac{P(t)}{P_{ss}(t)}\, d\tilde{q},
\end{aligned}
\]

where dq̃ = M dq (M is the metric tensor), P_ss(t) is the steady-state probability functional at each fixed time t, and the probability functional in time P(t) is the time-dependent solution of the functional Fokker–Planck equation. In what follows, U̇(t), Ṡ(t), and Ḟ(t) denote the time derivatives of the averaged (total) quantities ⟨U(t)⟩, ⟨S(t)⟩, and ⟨F(t)⟩.
We demonstrate that these three definitions, combined with the following equations derived from the decomposition of the force field (functional), give the non-equilibrium thermodynamic equations:

\[
\begin{aligned}
\nabla_{\{a,q\}} U(t) &= 2V^{ss}_{\{a,q\}}(t) - 2\tilde{F}_{\{a,q\}}, \\
\nabla_{\{a,q\}} S(t) &= 2V_{\{a,q\}}(t) - 2\tilde{F}_{\{a,q\}},
\end{aligned}
\]

where V^{ss}_{a,q}(t) and V_{a,q}(t) are the steady-state and time-dependent probability flux velocity fields, respectively.
The first law of thermodynamics for spatially inhomogeneous stochastic dynamical systems
We derive the rate of change of the internal energy U(t) [73,75,85]:


\[
\begin{aligned}
\dot{U}(t) &= \frac{d}{dt}\int U(t)\,P(t)\, d\tilde{q} \\
&= \int \frac{\partial U(t)}{\partial t}\,P(t)\, d\tilde{q} + \int U(t)\,\dot{P}(t)\, d\tilde{q} \\
&= \left\langle \frac{\partial U(t)}{\partial t} \right\rangle - \int U(t)\,\nabla_{\{a,q\}} J^{\{a,q\}}(t)\, d\tilde{q} \\
&= \left\langle \frac{\partial U(t)}{\partial t} \right\rangle + \int J^{\{a,q\}}(t)\,\nabla_{\{a,q\}} U(t)\, d\tilde{q} \\
&= \left\langle \frac{\partial U(t)}{\partial t} \right\rangle + \int P(t)\,V^{\{a,q\}}(t)\,\nabla_{\{a,q\}} U(t)\, d\tilde{q} \\
&= \left\langle \frac{\partial U(t)}{\partial t} \right\rangle + \left\langle V^{\{a,q\}}(t)\,\nabla_{\{a,q\}} U(t) \right\rangle. \qquad (100)
\end{aligned}
\]

The first term, ⟨∂U(t)/∂t⟩, is called the dissipative work W_d(t), which in general is due to the change of the time-dependent parameters of the system.
By plugging in the relation ∇_{a,q} U(t) = 2V^{ss}_{a,q}(t) − 2F̃_{a,q}, we can see that the second term
becomes [73,75,85]

\[
\begin{aligned}
\left\langle V^{\{a,q\}}(t)\,\nabla_{\{a,q\}} U(t) \right\rangle &= 2\left\langle V^{ss}_{\{a,q\}}(t)\,V^{\{a,q\}}(t)\right\rangle - 2\left\langle \tilde{F}_{\{a,q\}}\,V^{\{a,q\}}(t)\right\rangle \\
&= Q_{hk}(t) - h_{p}(t) \\
&= -Q_{ex}(t), \qquad (101)
\end{aligned}
\]

where h_p(t) = 2⟨F̃_{a,q} V^{a,q}(t)⟩ is the heat dissipation rate, which splits into two parts: h_p(t) = Q_hk(t) + Q_ex(t). One part is the house-keeping heat rate Q_hk(t) = 2⟨V^{ss}_{a,q} V^{a,q}(t)⟩, which originates from the non-zero steady-state flux velocity breaking detailed balance; the other part is the excessive heat rate Q_ex(t).
The convention of the sign of heat here is that it is positive when it flows from the system to
the heat bath. Thus, −Qex (t) is the excessive heat rate transferred from the environment to the
system.
Thus, we reach the following equation:
\[
\dot{U}(t) = W_{d}(t) - Q_{ex}(t).
\]
This is a generalization of the first law of thermodynamics for spatially dependent systems. It states that the increase in the internal energy of the system is due to the work done by the environment on the system, W_d(t), and the excessive heat transferred from the environment to the system, −Q_ex(t). The house-keeping heat does not contribute to the change of the internal energy of the system; instead, it is used to maintain the system steadily away from equilibrium.
The second law of thermodynamics for spatially inhomogeneous stochastic dynamical systems

\[
\begin{aligned}
\dot{S}(t) &= \frac{d}{dt}\int -P(t)\ln P(t)\, d\tilde{q} \\
&= -\int \dot{P}(t)\, d\tilde{q} + \int (-\ln P(t))\,\dot{P}(t)\, d\tilde{q} \\
&= -\frac{d}{dt}\int P(t)\, d\tilde{q} + \int S(t)\,\bigl(-\nabla_{\{a,q\}} J^{\{a,q\}}(t)\bigr)\, d\tilde{q} \\
&= 0 - \int S(t)\,\nabla_{\{a,q\}} J^{\{a,q\}}(t)\, d\tilde{q} \\
&= \int J^{\{a,q\}}(t)\,\nabla_{\{a,q\}} S(t)\, d\tilde{q} \\
&= \int P(t)\,V^{\{a,q\}}(t)\,\nabla_{\{a,q\}} S(t)\, d\tilde{q} \\
&= \left\langle V^{\{a,q\}}(t)\,\nabla_{\{a,q\}} S(t)\right\rangle. \qquad (102)
\end{aligned}
\]

By plugging in the relation ∇_{a,q} S(t) = 2V_{a,q}(t) − 2F̃_{a,q}, we obtain
\[
\dot{S}(t) = 2\left\langle V_{\{a,q\}}(t)\,V^{\{a,q\}}(t)\right\rangle - 2\left\langle \tilde{F}_{\{a,q\}}(t)\,V^{\{a,q\}}(t)\right\rangle = e_{p}(t) - h_{p}(t).
\]
The first term on the right-hand side, e_p(t) = 2⟨V_{a,q}(t) V^{a,q}(t)⟩, is the entropy production rate (EPR), which is always non-negative, and the second term, h_p(t) = 2⟨F̃_{a,q}(t) V^{a,q}(t)⟩, is the heat dissipation rate [73,75,85].
Thus, we get the entropy balance equation
\[
\dot{S}(t) = e_{p}(t) - h_{d}(t),
\]
where h_d(t) ≡ h_p(t) is the heat dissipation rate defined above. Ṡ(t) represents the entropy change of the system, and h_d(t) represents the heat dissipation and therefore the entropy change due to the environment. We can rewrite the entropy balance equation as e_p(t) = Ṡ(t) + h_d(t); e_p(t) combines the system and environmental entropy changes and therefore represents the change of the total entropy. Since the definition of the entropy production guarantees that it is non-negative, the total entropy of the system and environment together never decreases:
\[
\dot{S}_{tot}(t) = e_{p}(t) = \dot{S}(t) + h_{d}(t), \qquad e_{p}(t) \ge 0.
\]
This is the generalized second law of thermodynamics for spatially dependent systems.
Non-equilibrium thermodynamics for spatially dependent stochastic dynamical systems in the free energy functional representation
We can also derive the rate of change of F by combining the first law and the second law [73,75,85]:

\[
\begin{aligned}
\dot{F}(t) &= \dot{U}(t) - \dot{S}(t) \\
&= \bigl(W_{d}(t) - Q_{ex}(t)\bigr) - \bigl(e_{p}(t) - h_{d}(t)\bigr) \\
&= W_{d}(t) - \bigl(e_{p}(t) - Q_{hk}(t)\bigr) \\
&= W_{d}(t) - f_{d}(t), \qquad (103)
\end{aligned}
\]

where we have split the EPR into two parts: e_p(t) = Q_hk(t) + f_d(t). The first term, Q_hk(t), is the house-keeping heat; here its physical meaning is the part of the EPR that is used to maintain the system away from equilibrium. The second term, f_d(t), is the part of the EPR that comes from the dissipation of the free energy F; in other words, the Ḟ(t) term accounts for the usual free energy relaxation. This relaxation term exists even for intrinsically equilibrium systems that are away from the equilibrium state. The house-keeping term Q_hk(t) is intimately linked with the flux: when the flux is zero, Q_hk(t) is zero. When detailed balance is preserved, the system is intrinsically in equilibrium, and the free energy relaxation back to the equilibrium state is equal to the dissipation or the entropy production when W_d(t) = 0 (no work is done by the environment on the system). If the flux is not zero, detailed balance is broken and the system is in a non-equilibrium state. Then, when W_d(t) = 0, the total entropy production is partitioned into the house-keeping part maintaining the non-equilibrium steady state and the free energy relaxation back to the steady state. The non-zero flux thus plays an important role in the non-equilibrium thermodynamics. If W_d(t) ≠ 0, there is an additional contribution to the free energy relaxation from the work done on the system by the environment.
Thus, we get the equation [73,75,85]:
\[
\dot{F}(t) = W_{d}(t) - f_{d}(t).
\]

This equation, or equivalently −Ḟ(t) = −W_d(t) + f_d(t), can be understood as follows: the dissipation of the free energy does two things, it does work on the environment, −W_d(t), and it contributes a part, f_d(t), of the EPR. This is the generalization of the first law of thermodynamics to the non-equilibrium regime for spatially inhomogeneous stochastic dynamical systems.
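For convenience, the non-equilibrium thermodynamic relations derived in this section can be gathered in one place, with h_d(t) = h_p(t) = Q_hk(t) + Q_ex(t) the heat dissipation rate and e_p(t) = Q_hk(t) + f_d(t) the EPR:
\[
\begin{aligned}
\dot{U}(t) &= W_{d}(t) - Q_{ex}(t) &&\text{(first law)},\\
\dot{S}_{tot}(t) &= e_{p}(t) = \dot{S}(t) + h_{d}(t) \ge 0 &&\text{(second law)},\\
\dot{F}(t) &= W_{d}(t) - f_{d}(t) &&\text{(free energy form)}.
\end{aligned}
\]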
We now discuss the change of the free energy in time. We have already proved in an earlier section that the free energy is a Lyapunov functional. Thus, we can state the generalized second law of non-equilibrium thermodynamics for spatially inhomogeneous systems through the fact that the free energy functional never increases [73,75,85]. When the probability distribution functional in time P[φ, t] differs from the steady-state probability distribution functional P_ss[φ], δ_{a,q} ln(P[φ, t]/P_ss[φ]) is non-zero and thus the free energy functional F(t) will continue to decrease until P[φ, t] agrees with P_ss[φ], where F(t) = 0 and (d/dt)F(t) = 0 [73,75,85].

5. The cell cycle: limit cycle oscillations


The cell cycle is one of the most fundamental aspects of the cell. With sufficient nutrition, cells
grow and proliferate. New copies of DNA form and segregate from their original parents. Then,
the cells divide and new cells are born. This covers the whole life cycle of a cell. Understanding
the cell cycle is crucial for understanding the cell function. In current biology textbooks, the cell
cycle is thought of as having several major distinct phases, the G1 (resting) phase, S (synthesis) phase, G2 (second gap) phase, and M (mitosis) phase, along with their corresponding checkpoints
controlling the cell-cycle dynamics. From a molecular perspective, the cell cycle has been shown
to be controlled by the underlying gene regulatory networks [17,57,83,159–166]. The concentra-
tion dynamics of genes/proteins can be quantitatively traced and correlated with the cell cycle.
However, a physical understanding from the molecular network perspective and a global picture of what is going on still prove challenging. Some key questions remain to be answered: What controls the global stability of the underlying gene network as well as its different phases, the associated checkpoints, and the whole cell cycle? How do those controlling factors influence the function? Addressing these issues is not only crucial for understanding the underlying mechanism, but also for possible medical applications. It is known that cancer cells have a faster
cell-cycle period than normal cells. Understanding the cell-cycle mechanism will be helpful for exploring the origin of the faster speed of the cancer cell cycle. Based on this, new strategies can be designed to change the cell-cycle period for cancer prevention and treatment.

5.1. Model for the cell-cycle network


Here, we will apply landscape and flux theory to the cell cycle [17,57,83]. Progress has been
made in modeling cell-cycle dynamics as determined by the underlying gene regulatory net-
works including the yeast cell cycle [159–162] and mammalian cell cycle [163–166]. We explore
a simplified underlying gene regulatory network description based on cell-cycle biology [163–
165,167]. A detailed gene network diagram for the model of the cyclin/Cdk network driving the
mammalian cell cycle is shown in Figure 12. The model contains all four modules corresponding
to four stages of the cell cycle sequentially, which are separately centered on cyclin D/Cdk4-6
(Module 1), cyclin E/Cdk2 (Module 2), cyclin A/Cdk2 (Module 3), and cyclin B/Cdk1 (Module
4). The network also incorporates the pRB/E2F pathway, which controls progression or arrest of
the cell cycle. At the beginning of the cell cycle, the growth factor (GF) promotes the synthesis
of cyclin D, and cyclin D can form a complex with the kinase subunit Cdk4-6. The active forms of cyclin D/Cdk4-6 and cyclin E/Cdk2 ensure progression in G1 and elicit the G1/S transition by phosphorylating and inhibiting pRB. The inhibition of pRB ensures the activation of the transcription factor E2F that allows cell-cycle progression by promoting the synthesis of G1 cyclins.
During S and G2 phases, cyclin A/Cdk2 inhibits (by phosphorylation) the Cdh1 protein that pro-
motes the degradation of cyclin B. The negative feedback loops exerted (via Cdc20 activation)

Figure 12. The diagram for the mammalian cell-cycle model (see Figure S11 in SI Appendix of Ref. [83] for
a more detailed diagram). Arrows represent activation and dotted lines with short bar represent repression.
The model includes four major cyclin/Cdk complexes centered on cyclin D/Cdk4-6, cyclin E/Cdk2, cyclin
A/Cdk2, and cyclin B/Cdk1. The opposite effects of pRB and E2F control the cell-cycle progression. The
combined effects of four modules determine the cell-cycle dynamics of oscillation. Red colors represent the
key genes and regulations found by global sensitivity analysis. Blue colors represent the key genes found
by global sensitivity analysis which are consistent with experiments (from Ref. [83]).
by cyclin B/Cdk1 on itself and cyclin A/Cdk2, and the negative feedback loop exerted by cyclin
A/Cdk2 on E2F allows the reset of the cell cycle and the start of a new round of oscillations.
Inhibitory phosphorylation by the kinase Wee1 and activating dephosphorylation by the Cdc25
phosphatases regulate the activity of Cdk1 and Cdk2. The activity of the cyclin/Cdk complexes
can also be regulated by reversible association with the protein inhibitor p21/p27. The combined
effects of the four modules determine the cell-cycle oscillation dynamics. Based on mass action or Michaelis–Menten kinetics, the dynamics of the model is governed by a set of nonlinear ordinary differential equations (44 ordinary differential equations with detailed rate parameter values [83]), with the mathematical form dC/dt = F(C), where C is the concentration vector and F is the underlying chemical driving force.
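To make the structure of such network equations concrete, the following minimal sketch shows how Hill-type activation and repression terms can be assembled into a driving force F(C) and integrated. It is illustrative only: a hypothetical two-species toy circuit with made-up parameter values, not the 44-variable cell-cycle model of Ref. [83].

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical two-species toy circuit (NOT the 44-variable model of Ref. [83]),
# showing how Hill-type regulation terms are assembled into dC/dt = F(C).
def hill_act(x, K, n):
    # activation: increases from 0 toward 1 as x grows past K
    return x**n / (K**n + x**n)

def hill_rep(x, K, n):
    # repression: decreases from 1 toward 0 as x grows past K
    return K**n / (K**n + x**n)

def F(t, C, k_syn=1.0, k_deg=0.5, K=0.5, n=4):
    c1, c2 = C
    dc1 = k_syn * hill_rep(c2, K, n) - k_deg * c1   # species 2 represses species 1
    dc2 = k_syn * hill_act(c1, K, n) - k_deg * c2   # species 1 activates species 2
    return [dc1, dc2]

sol = solve_ivp(F, (0.0, 50.0), [0.1, 0.1])
print(sol.y[:, -1])   # late-time concentrations of the toy circuit
```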

5.2. Self-consistent mean field approximation


In reality, gene regulatory network dynamics is stochastic. This is due to the fact that both intrinsic
fluctuations from the finite number of molecules and extrinsic fluctuations from the environ-
ments are unavoidable. Nevertheless, probability evolution usually follows a linear equation and
is predictable [17,76,83].
It is technically challenging to solve the diffusion equation for a large regulatory network due to its inherently large dimensionality. We can apply a self-consistent mean field approximation to reduce the dimensionality [8,17,76,83] from exponential to polynomial. In this way, one can
follow the time evolution and obtain the steady-state probability in protein concentration space.
Based on the result, one can map out the underlying potential landscape, associated with the
steady-state probability distribution.
We assume that the gene network state follows a probabilistic diffusive evolution equation for P(X_1, X_2, ..., X_n, t), where X_1, X_2, ... are the concentrations of the genes/proteins or the populations of molecules. This leads to an N-dimensional partial differential equation, which is not feasible to solve exactly: if every variable can take M values, the dimensionality of the system becomes M^N. Using a self-consistent mean field approach [8,17,56,76,83], one can split the probability into a product of individual probabilities, P(X_1, X_2, ..., X_n, t) ∼ ∏_i^n P(X_i, t), and solve for the probability self-consistently. This effectively reduces the dimensionality from the exponentially large M^N to the polynomially small M × N (for example, with M = 10 discretization levels and N = 44 species, from 10^44 states to 440 quantities). This makes the computational task tractable.
As mentioned, for a multidimensional system it is still challenging to solve for the diffusion probability directly. We can further simplify the calculations using moment equations, by assuming specific probability distributions based on physical arguments; this leads to specific connections between the moments. In principle, once all the moments are known, one should be able to construct the probability distribution. For example, the Poisson distribution has only one parameter (its mean), so one can calculate all the other moments from the first moment. In this study, we assume summed Gaussian distributions as an approximation [1,7]. A Gaussian distribution requires two moments, the mean and the variance, to be specified.
When the diffusion coefficient D is small, the moment equations can be approximated as [1,7]

ẋ(t) = C[x(t)]. (104)

The above first-order ordinary differential equations follow from Michaelis–Menten kinetics based on mass action.
\[
\dot{\sigma}(t) = \sigma(t)A^{T}(t) + A(t)\sigma(t) + 2D[x(t)]. \qquad (105)
\]
These equations quantify the fluctuations of the dynamics. Here, x(t), σ(t), and A(t) are vectors and tensors, and A^T(t) is the transpose of A(t). The matrix elements of A are A_ij = ∂C_i[X(t)]/∂x_j(t). According to these equations, we can solve for x(t) and σ(t). We consider here only the diagonal elements of σ(t), from a mean field approximation. Therefore, the evolution of the distribution for one variable can be obtained using the mean and variance under the Gaussian approximation [8,17,56,76,83]:
\[
P(x,t) = \frac{1}{\sqrt{2\pi\sigma(t)}} \exp\left\{-\frac{[x - \bar{x}(t)]^{2}}{2\sigma(t)}\right\}. \qquad (106)
\]
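As a concrete illustration of this moment-closure scheme, the sketch below integrates Equations (104)–(106) for a hypothetical one-variable synthesis–degradation process with C[x] = b − kx, so that A = ∂C/∂x = −k; it only shows the bookkeeping, not the actual cell-cycle model.

```python
import numpy as np

# Moment equations (104)-(105) for a hypothetical 1D process with C[x] = b - k*x,
# so that A = dC/dx = -k and Eq. (105) reduces to sigma_dot = -2*k*sigma + 2*D.
b, k, D = 1.0, 0.5, 0.05
dt, T = 0.01, 40.0
x, sigma = 0.0, 1e-6          # initial mean and variance

for _ in range(int(T / dt)):
    x += dt * (b - k * x)                       # Eq. (104): mean
    sigma += dt * (-2.0 * k * sigma + 2.0 * D)  # Eq. (105): variance

def P(z, mean, var):
    # Eq. (106): Gaussian approximation built from the two moments
    return np.exp(-(z - mean) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

grid = np.linspace(0.0, 4.0, 401)
U = -np.log(P(grid, x, sigma))   # potential landscape U = -ln P_ss
print(x, sigma)                  # approaches b/k = 2.0 and D/k = 0.1
```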
The results can be extended to the multidimensional system using the same method. The
probability from the above corresponds to a single fixed point or basin of attraction. If the
system exhibits multi-stability, then there will be several probability distributions localized at the different basins of attraction, with different means and variances. Therefore, the total probability becomes the weighted sum of all these probability distributions. For instance, for a bi-stable system, P(x, t) = w_1 P_a(x) + w_2 P_b(x) with w_1 + w_2 = 1. Here, the weighting factors (w_1, w_2) reflect the relative sizes of the basins of attraction. The relative weights can be obtained by running multiple initial conditions and counting the fraction falling into each basin (see the illustrative sketch below).
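A minimal sketch of this weighting procedure, using a hypothetical one-dimensional bistable force dx/dt = x − x³ purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def endpoint(x0, dt=0.01, steps=2000):
    # Deterministic relaxation under the hypothetical bistable force dx/dt = x - x**3,
    # whose stable fixed points are x = -1 and x = +1.
    x = x0
    for _ in range(steps):
        x += dt * (x - x**3)
    return x

# Sample many initial conditions and count the fraction reaching each basin.
x0s = rng.uniform(-2.0, 2.0, size=5000)
finals = np.array([endpoint(x0) for x0 in x0s])
w1 = np.mean(finals > 0.0)   # weight of the x = +1 basin
w2 = 1.0 - w1                # weight of the x = -1 basin
print(w1, w2)
```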
For an oscillating system, the mean x(t) and variance σ(t) are not constant; they are explicit functions of time. One can obtain the steady-state result by integrating the probability in time over one period and dividing by the period [8,17,56,76,83]: P_oscillation(x) = (1/z) ∫_{st}^{st+z} P_o(x, t) dt. Here, z is the period of the oscillation, and st is the starting time of the integration.
Once we obtain the total probability, one can construct the potential landscape by U(x) = −ln P_ss(x) [8,17,56,76,83]. For the gene regulatory network, every parameter, node, and link contributes to the structure and dynamics of the network. Together, they determine the total probability distribution, and therefore the underlying potential landscape.
In the 44-dimensional protein concentration space, it is difficult to visualize 44-dimensional
probabilistic flux [83]. We thus will explore the associated two-dimensional (2D, variables CycE
and CycA) projection of the landscape and flux vector:


\[
\begin{aligned}
J_{1}(x_1, x_2, t) &= F_{1}(x_1, x_2)\,P - D\,\frac{\partial P}{\partial x_1}, \\
J_{2}(x_1, x_2, t) &= F_{2}(x_1, x_2)\,P - D\,\frac{\partial P}{\partial x_2}. \qquad (107)
\end{aligned}
\]
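Given the projected probability P(x1, x2) and force components F1, F2 on a grid, the flux components of Equation (107) can be evaluated by finite differences. In the sketch below, P, F1, and F2 are hypothetical placeholders used only to show the bookkeeping; in practice they come from the self-consistent mean field calculation and the projected driving force of the network.

```python
import numpy as np

# Evaluate the projected flux of Eq. (107) on a grid by finite differences.
x1 = np.linspace(0.0, 3.0, 151)
x2 = np.linspace(0.0, 3.0, 151)
X1, X2 = np.meshgrid(x1, x2, indexing="ij")
D = 0.05

# Hypothetical placeholders for the projected probability and force components.
P = np.exp(-((X1 - 1.5)**2 + (X2 - 1.5)**2))
P /= P.sum() * (x1[1] - x1[0]) * (x2[1] - x2[0])      # normalize on the grid
F1 = -(X1 - 1.5) - (X2 - 1.5)
F2 = -(X2 - 1.5) + (X1 - 1.5)

dPdx1, dPdx2 = np.gradient(P, x1, x2, edge_order=2)    # dP/dx1, dP/dx2
J1 = F1 * P - D * dPdx1                                # Eq. (107)
J2 = F2 * P - D * dPdx2
U = -np.log(P)                                         # landscape U = -ln P
```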

5.3. Results
By solving the dynamics of the underlying gene network, a limit cycle emerges. The limit cycle
starts from the beginning of each cycle for cell growth and ends at the division of each cell cycle.
This process repeats itself multiple times. From the stochastic dynamics and the corresponding
diffusive probability evolution, one finds the underlying landscape and flux of the underlying
cell-cycle gene regulatory network. The result is in the 44-dimensional (44 protein species) concentration space [83]. To visualize the result, we project the landscape and flux onto the two dimensions of the CycE and CycA protein concentrations, which are responsible for the cell-cycle dynamics, in order to present the quantitative landscape and flux picture shown in Figure 13. On the left-hand side of Figure 13, we show the typical textbook description of the cell-cycle process running through the distinct G1, S, G2, and M phases and the corresponding checkpoints [163–165]. On the right-hand side of Figure 13, we show the landscape U as a function of CycA and CycE.
Figure 13. (a) shows the four phases of the cell cycle with the three checkpoints: the G1, S, G2, and M phases. (b) shows the three phases (G1, S/G2, and M) and the two checkpoints (G1 checkpoint and S/G2 checkpoint) in our landscape view (in terms of the genes CycE and CycA). (c) shows the 2D landscape, in which white arrows represent the probabilistic flux, and red arrows represent the negative gradient of the potential. Diffusion coefficient D = 0.05 (from Ref. [83]).

We can clearly see that the global shape of the potential landscape is a Mexican hat with
a central island with a high potential and a closed cell-cycle oscillation ring valley with low
potential between the center island and outer higher plateaus of the landscape [83]. On the limit
cycle oscillation ring valley, there are three major distinct basins. We can identify each basin
according to its corresponding protein concentrations and stages of the cell cycle. We can see
that the deepest local basin along the limit cycle ring valley represents the G1 phase, the next
basin on the ring valley represents the S/G2 phase, and the third basin along the ring valley
represents the M phase. Between the G1 and S/G2 basins, we see a local barrier or transition
state which we can identify as the G1 checkpoint since it provides a natural point for the cell to
decide whether or not to continue to go forward. Between the S/G2 and M basins, we see a local
barrier or transition state which we can identify as the S/G2 checkpoint. Between the M and G1
basins, we see a local barrier or transition state which we can identify as the M checkpoint. In
summary, we can identify the different phases (G1, S/G2, and M) of the cell cycle as the local basins of attraction on the cycle path, and the checkpoints as barriers or transition states between the local basins of attraction on the cell-cycle path. The above quantitative picture of the cell cycle provides a physical foundation for the textbook description of the cell cycle. However, there is an issue. With the potential landscape alone as the driving force for the network dynamics, the underlying dynamics would prefer to stay in the G1 basin, since it has the lowest potential and highest probability. Therefore, the cell-cycle process would not continue along the limit cycle ring, and the system would most likely become trapped in the G1 phase.
On the other hand, we know from landscape and flux theory that there is another driving force
for general dynamical systems, the curl flux [13–15,17,62,69,83]. While outside the ring valley of
the cell cycle, the curl flux is not as effective as the landscape gradient for guiding the dynamics,
on the ring valley of the cell cycle the landscape gradient tends to trap the cell into the resting G1 basin. Here the curl flux comes to the rescue. The curl flux has a rotational nature; as a driving force for the dynamics, it acts along the ring valley. In fact, it is the driving force pushing and maintaining the periodic cell-cycle motion. Therefore, the local barriers and the curl flux are the keys to pushing or blocking the cell-cycle progression. In Figure 13(c), we show the 2D contour of the landscape and the flux as the driving forces for the cell cycle. The white arrows represent the curl flux and the red arrows represent the negative gradient of the potential landscape. As we can see, away from the basins the landscape attracts the system down to the basins, while on the oscillation ring the flux becomes dominant and drives the cell cycle around the ring.
One may be curious about the origin of the flux as the driving force for the limit cycle. In
fact, the origin of the flux is the energy input from the environment [13–15,17,57,62,67,69,83].
In the cell cycle, this energy input is provided by the nutrition supply. A signature of the nutrition
supply is the GF. When the nutrition supply is intact, the GF is enhanced and the cell starts to grow.
The above mapping of cell cycle to a landscape and a flux gives a quantitative and physical
picture of the cell-cycle process. While the landscape attracts the cell into different phases or
basins, the curl flux drives the cell cycle on the oscillation ring. In the following figures, we
show quantitative results to support the above physical picture. Figure 14 shows quantitatively
the interplay between the barrier and flux in determining the cell-cycle dynamics. Figure 14(a)
shows that as GF is enhanced, the flux increases, so the driving force for completing the cell cycle is enhanced. The barrier of the center island also becomes higher, which is important
for maintaining the coherence of the cell cycle. Without the center island, the state on the one
side of the cycle trajectory could easily jump to the other side of the trajectory. This would lead to the loss of the sense of direction of the cell-cycle motion and therefore to the decoherence
of the oscillations. On the other hand, we can see the barrier between G2 and M phases decreases,
promoting the cycle. The barrier between G1 and S/G2 increases with respect to the increase in
GF. However, the magnitude of the barrier between G1 and S/G2 is much smaller than that between the G2 and M phases. Therefore, the barrier between G2 and M, along with the flux, is the rate-limiting passageway determining the cell-cycle dynamics along the oscillation ring.
When we change the regulation among the genes in the network, we uncover the correlations
between the curl flux (curl flux here is quantitatively measured by the flux integrated over the
cell-cycle closed-loop path) and the period of oscillation, as shown in Figure 14(b). We can see that as the flux increases, the period of the cell-cycle oscillations becomes shorter (the oscillation becomes faster). The flux, through the nutrition supply, is thus the driving force for the periodicity of the cell cycle. We also show the correlation between the period of oscillation and the barrier between the S/G2 and M phases in Figure 14(c). As the barrier increases, the period becomes longer. Therefore, the
local barrier between S/G2 and M phases acts as a friction force for the cell-cycle oscillation.
Figure 14. (a) shows the change (percentage) of FluxIntLoop, BarrierCenter, BarrierG1/S, and BarrierG2/M when GF is changed. (b) shows the correlation between flux and period when parameters (regulation strengths or synthesis rates) are changed (correlation coefficient is −0.839). (c) shows the correlation between BarrierG2/M and period when parameters are changed (correlation coefficient is 0.879). (d) shows the correlation (correlation coefficient is 0.741) between BarrierCenter (flux for the inner plot, correlation coefficient is −0.553) and coherence when parameters are changed (from Ref. [83]).

This serves the function of the checkpoint. We can also see the correlation between the coherence of the oscillation and the barrier of the center island, as well as the flux, in Figure 14(d). Both the flux and the center island barrier height promote the coherence of the oscillations.
Furthermore, we uncover in Figure 15 the correlation between the energy consumption and the quality of the cell cycle as the amount of GF changes. We can see that both the EPR and the energy consumption per cycle increase with the nutrition supply, signaled by the increase in the GF. Since the GF promotes the flux, the energy consumption increases as the flux increases. The origin of the energy consumption is the energy input through the GF; the physical realization of the nutrition supply is through the flux acting as one of the driving forces for the cell cycle. On the other hand, we can see that as the central barrier increases, the energy consumption increases. This is due to the fact that the center barrier promotes the coherence of the oscillations, and the oscillations consume energy. Furthermore, we can see that faster oscillations consume more energy, as expected. Finally, as the barrier between G2 and M becomes smaller, the cell-cycle oscillation is faster; therefore, the resulting energy consumption is larger.
Figure 15. (a) and (b) show that the EPR and the energy per cell cycle increase as GF is increased. (c)–(f) show separately the Flux, BarrierCenter, BarrierG2/M, and Period versus the energy per cell cycle as GF is changed. It can be seen that the energy per cycle increases as the Flux and BarrierCenter increase, while it decreases as the BarrierG2/M and Period increase (from Ref. [83]).

To study the functions of the cell cycle from the network structural perspective, one can carry
out a global sensitivity analysis on the cell-cycle period, the landscape barrier, and the flux, to find
out which key genes or regulatory wirings are responsible for cell-cycle function in Figure 16.
Exploring the parameters of the synthesis rates and regulation strengths (41 parameters were
selected), Figure 16(a) shows the corresponding changes in period, flux, and the central barrier
(BarrierCenter ) when these 41 parameters are separately changed. Figure 16(b) shows the corre-
sponding changes of BarrierG1/S (G1 checkpoint) and BarrierG2/M (DNA replication checkpoint)
when these 41 parameters were separately changed.
By selecting the top genes and regulations influencing the function (those influencing the flux and period the most in Figure 16), we can identify certain key factors for cell-cycle progression. Some of the identified key genes and regulations have been confirmed by experiments. For instance, pRB serves as a key gene in controlling the G1 checkpoint, because its activation represses cell-cycle progression and thereby suppresses cancer [168,169]. The results from the global
sensitivity analysis (Figure 16) are consistent with the above findings. We see that the activation
of pRB leads to the decrease in the curl flux, the increase in the barrier between G2 and M phases
Figure 16. Global sensitivity analysis in terms of the barrier (BarrierCenter for global stability, BarrierG1/S for the G1 checkpoint, BarrierG2/M for the S/G2 checkpoint), period, and flux changes when parameters are changed. The x coordinate (1–41) corresponds to the 41 parameters (synthesis rate or regulation strength
as follows: “1: AP1”, “2: E2F”, “3: pRB”, “4: synthesis of CDC25 acting on CycE/Cdk2”, “5: Skp2”, “6:
Cdh1”, “7: synthesis of CDC25 acting on CycA/Cdk2”, “8: p27 synthesis independent of E2F”, “9: p27
induced by E2F”, “10: synthesis of CycB”, “11: Cdc20”, “12: synthesis of Cdc25 acting on CycB/cdk1”,
“13: synthesis of Wee1”, “14: CycD induced by AP1”, “15: CycD induced by E2F”, “16: CycD bind
Cdk4,6”, “17: CycD/Cdk4,6 bind P27”, “18: CycE induced by E2F”, “19: CycE/Cdk2 bind p27”, “20:
CycE bind Cdk2”, “21: CycA bind Cdk2”, “22: CycA/Cdk2 bind p27”, “23: CycB/Cdk1 bind p27”, “24:
CycB bind Cdk1”, “25: inhibition of pRB to CycD”, “26: CycE inhibited by pRB”, “27: CycA inhibited
by pRB”, “28: p27 inhibited by pRB”, “29: Cdc25 activate CycA/Cdk2”, “30: Wee1 inhibit CycA/Cdk2”,
“31: CycA/Cdk2 activate Cdc25”, “32: CycA/Cdk2 and CycB/Cdk1 inhibit Cdh1”, “33: CycE/Cdk2 inhibit
p27”, “34: Cdc25 activate CycB/Cdk1”, “35: Wee1 inhibit CycB/Cdk1”, “36: Cdc20 (inactivate form)”,
“37: CycB/Cdk1 activate Cdc20”, “38: CycB/Cdk1 activate Cdc25”, “39: CycB/Cdk1 inhibit Wee1”, “40:
CycE/Cdk2 activate Cdc45”, “41: ATR activate Chk1”). Here, every parameter is changed by 10% (from Ref. [83]).

BarrierG2/M, and eventually the lengthening of the cell cycle. The analysis also provides further predictions (Ref. [83, Table 1]), which can be tested by experiments. The key sensitivity analysis results (key genes and regulations) are indicated in red in the wiring diagram for the cell-cycle network (Figure 12).
It has been observed that the cell-cycle period of a cancer cell is significantly shorter than that of a normal cell. This leads to uncontrolled cycling. In other words, the two driving forces of the cell-cycle dynamics, the landscape barrier and the flux, are unbalanced: the curl flux dominates the effects of the potential landscape barrier along the cell-cycle path. In order to restore
the balance between the flux and potential landscape, one needs to either decrease the flux or
increase the barriers for the cancer cells to become normal. This can be realized by changing
the nutrition supply and perturbing the genes and regulatory wirings through genetic or environ-
mental changes according to the results of our global sensitivity analysis based on the underlying
landscape topography and curl flux.
In summary, an underlying Mexican-hat-shaped landscape controls the mammalian cell-cycle process. The landscape topography, quantified by the barrier heights between the basins of attraction, can provide a quantitative measure of the global stability and function of the cell cycle [17,57,83].
Different phases of the cell cycle (G1, S/G2, M) are identified as the local basins of attraction on the cycle path, and checkpoints as barriers or transition states between the local basins of attraction on the cell-cycle path. There are two driving forces which determine the progression
of the cell cycle. One driving force is from probability flux (that originates from the nutrition
supply) along the cell-cycle path. The other determinant is the potential barriers along the cell-
cycle path, characterizing the cell-cycle checkpoints. Landscape and flux theory provides a simple
physical and quantitative picture for the mechanism of cell-cycle checkpoints.

6. Cell fate decisions: stem cell differentiation and reprogramming, paths, and rates
The normal cell has a few alternative fates. One is to self-renew as a primary or stem cell through the cell cycle. Another is to change from a stem cell to a differentiated cell. The differentiation process is the basis for development. Therefore, stem cell differentiation is one of the most fundamental processes in biology. It is now known that stem cell differentiation is determined by underlying gene regulatory networks. Recently, there have been new developments in the field. Researchers have been able to use a few regulatory genes to control the reverse process of cell differentiation, thus reprogramming the cell. Researchers are able to take a differentiated cell from an animal's body and transform it back into a so-called induced multi-potent stem cell [170,171]. This opens up the door for manipulating the cell differentiation and reprogramming process. The potential applications for regenerative medicine are obvious. The dream of using one's own cells to regenerate or repair damaged organs may come true in the not too distant future.
Significant efforts have been made toward understanding the mechanisms and global picture
of cell differentiation and development. Most notably, Waddington proposed his idea for development [172]. In his original picture drawn by his artist friend, the developmental process
can be viewed as a marble rolling down from the top to the bottom of the mountain valleys. Here
the top of the mountain represents the stem cell while the bottom of the mountain valleys rep-
resent the fates of differentiated cells. The journeys from the stem cell to the differentiated cell
on the mountain become the developmental paths. The Waddington picture for development is
useful in visualizing the developmental process. It provides an intuitive understanding of devel-
opment. This picture has influenced generations of biologists in thinking about development and differentiation. However, Waddington's description of development is only at an intuitive and qualitative level. There was no physical or quantitative basis for this idea when he proposed it. This is the reason why people could only use it as a metaphor rather than as a real physical theory.
Recently, there have been important theoretical works on cell fate decision in the context of phenotypic transitions in gene regulation, cell development, differentiation, and the epithelial–mesenchymal transition in metastasis [14,22,46–52,62,66,76,77,82,84,108–114,126,173–175]. A quantitative description of the Waddington landscape can be found by applying landscape and flux theory, yielding a global, physical, and quantitative theory for development [60,62,66,79].

6.1. The Waddington landscape for key modules of stem cell differentiation, paths, and rates
To illustrate the quantitative Waddington picture for development, we first focus on a well-studied gene module for development. This gene module is a key one determining many of the cell fate decision-making processes in development. The structure of this gene module involves two self-activating genes mutually repressing each other. The dynamics of this circuit is described by the set of two-variable ordinary differential equations below, for the rates of change of the expression levels [60,62,79,176]:

\[
\begin{aligned}
\frac{dx_1}{dt} &= \frac{a_1 x_1^{n}}{S^{n} + x_1^{n}} + \frac{b_1 S^{n}}{S^{n} + x_2^{n}} - k_1 x_1 = F_1, \\
\frac{dx_2}{dt} &= \frac{a_2 x_2^{n}}{S^{n} + x_2^{n}} + \frac{b_2 S^{n}}{S^{n} + x_1^{n}} - k_2 x_2 = F_2, \qquad (108)
\end{aligned}
\]

where x1 and x2 are the time-dependent expression levels of the two cell-specific transcription
factors X1 and X2 [60,62,79,176]; parameters a1 and a2 are the self-activation strength of the
transcription factors X1 and X2 , respectively; b1 and b2 are the strength of the mutual repression
for transcription factors X1 and X2 , respectively; k1 and k2 are the first-order degradation rates
for X1 and X2 , respectively [60,62,79,176]; S represents the threshold (inflection point) of the
sigmoidal functions, that is, the minimum concentration needed for appreciable changes; and n is
the Hill coefficient which represents the cooperativity of the regulatory binding and determines
the steepness of the sigmoidal function. We can see that varying these parameters can lead to
bi-stable states or tri-stable states and to phase transitions between different sorts of behavior.
Here, the parameters for Hill function and degradation rate for X1 and X2 are specified as follows:
S = 0.5, n = 4, and k = k1 = k2 = 1.0 [60,62,79,176]. For illustration, this section uses results
for the symmetric situation a = a1 = a2 and b = b1 = b2 . Although the values of parameters can
be different in organisms under different circumstances, the mathematical model here describes
a simplified gene circuit, and these values (S = 0.5, n = 4, k = k1 = k2 = 1.0) have been used in
many previous studies [60,62,79,176].
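As a minimal illustration, the circuit of Equation (108) can be simulated with Langevin dynamics and the landscape estimated from the sampled distribution via U = −ln P_ss. The sketch below uses the stated parameters S = 0.5, n = 4, k = 1 together with a symmetric choice a = b = 1 and an assumed noise strength D; the noise level, run length, and initial condition are illustrative assumptions rather than values taken from the cited studies.

```python
import numpy as np

rng = np.random.default_rng(1)

# Parameters from the text: S = 0.5, n = 4, k = 1; the symmetric choice a = b = 1
# and the noise strength D are illustrative assumptions.
S, n, k = 0.5, 4, 1.0
a = b = 1.0
D = 0.01
dt, steps = 0.01, 500_000

def force(x1, x2):
    # Eq. (108): self-activation plus mutual repression
    f1 = a * x1**n / (S**n + x1**n) + b * S**n / (S**n + x2**n) - k * x1
    f2 = a * x2**n / (S**n + x2**n) + b * S**n / (S**n + x1**n) - k * x2
    return f1, f2

x1, x2 = 1.0, 1.0
traj = np.empty((steps, 2))
for i in range(steps):
    f1, f2 = force(x1, x2)
    x1 += f1 * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal()
    x2 += f2 * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal()
    traj[i] = x1, x2

# Landscape estimate U = -ln P from the sampled distribution
H, _, _ = np.histogram2d(traj[:, 0], traj[:, 1], bins=80, density=True)
U = -np.log(H + 1e-12)
```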
For these fixed parameters and the above dynamical equations for the gene module with exter-
nal fluctuations, one can explore the underlying stochastic dynamics of this gene circuit and map
out the corresponding landscape. The landscape is shown in Figure 17. There can be three basins
or two basins in general according to different self-activation regulation strengths. At high self-
activation regulation strengths, three basins of attraction emerge. Two basins of attraction emerge
for low values of self-activation. The central basin is symmetric and represents a moderate con-
centrations of both genes. This is identified as the IPS multi-potent stem cell basin. On the other
hand, the two side basins represent the differentiated cell fates (one marker gene has high expres-
sion and the other is suppressed). In this way, the cell fates of stem cell and differentiated cells can
be quantified as basins of attractions on the landscape. The developmental process of differentia-
tion or reprogramming can be seen as a cell fate decision process allowing a transition from one
basin of attraction to another (stem cell to differentiated cell or vice versa). The paths between
these cell fate basins represent the developmental or reprogramming paths. The barriers and the
kinetic time needed to move between the basins will quantify the degree of difficulty of
switching from one cell fate to another. The effective self-activation strengths of gene regulations
change during the developmental process. Therefore, at high self-activation in the initial stage of
Figure 17. The landscape of development at different stages, that is, for different values of the parameter a (from Ref. [62]).

cell development, the stem cell basin dominates while at a later stage of the development, the two
differentiated cell basins dominate. We can treat the self-activation gene regulation strength itself
as a dynamical variable during the developmental process [60,62,79,176].

\[
\frac{da}{dt} = -\lambda a. \qquad (109)
\]
We can see that the landscape changes from one dominated by the stem cell basin at the initial stage to one dominated by the differentiated cell basins at the final stage. Taken together, this analysis yields a quantitative Waddington landscape for differentiation and development, as shown in Figure 18.
The advantage of this approach is obvious. (1) One can now quantify the Waddington picture
for development and differentiation. The development and differentiation can be mapped quanti-
tatively to a landscape. (2) At the beginning of the development, the stem cell has a stable basin
of attraction. This is in contrast to the original Waddington picture of development where the
stem cell was described as sitting on top of the hill or barrier. In fact, this implies that in order to
transform from the stem cell to the differentiated cell, the stem state has to go over the barrier to reach the final differentiated state. (3) We can quantify the dominant paths from the stem cell to the
differentiated cell. This gives the major developmental pathways. On the other hand, we can also
quantify the dominant paths from the differentiated cell state to the stem cell. This predicts a
major reprogramming pathway. Quantification of the reprogramming pathway is critical for the
tissue engineering and regenerative medicine. Notice that these two pathways are not reversals
of each other. This is due to the fact that the underlying network does not satisfy the detailed bal-
ance and therefore there is a curl flux as an additional driving force going beyond the landscape
gradient for the dynamics.
For this concrete example of mutual repression with self-activation cell fate decision-making,
we can compare the difference in predicted rates using the equilibrium TST, the zero noise limit
(WKB) non-equilibrium TST predictions (optimal path going through the saddle point) [43],
and the present non-equilibrium TST prediction where the barrier for determining the kinetic
process is taken from the saddle on the dominant path rather than the conventional saddle on
the landscape by itself [82]. We can also compare these to direct simulation results for both
Figure 18. The quantified Waddington developmental landscape and pathways (from Ref. [62]).

the forward and backward directions as shown in Figure 19. We can see that the results based
on the present TST are more consistent with the simulations compared to other methods in
this example. The major difference of the present method from the earlier ones [22,43–52] lies in the fact that the barrier (or action) and the prefactor are evaluated at the optimum on the dominant path rather than at the saddle on the landscape. In the zero noise limit, all approximations agree that the optimal path goes through the landscape saddle point. Some differences of the present method compared to others may arise because we work with the path integral or action in the coordinate (x) space representation, and not in the (x, p) phase space representation used by some of the others; the optimal paths in the (x, p) space may not be the same as the optimal path in the coordinate x space by itself. For the small but finite noise case considered here, the reason why the optimal path does not follow the steepest descent path is the presence of the curl flux in addition to the force from the gradient of the non-equilibrium potential. The reason why the optimal path does not necessarily go through the saddle point, making the present TST (on the optimal path rather than at the landscape saddle) necessary, seems to lie in the fact that an extra term, not present in the zero noise limit, enters the path integral action: the divergence of the force, arising from the variable (Jacobian) transformation from the noise to the observables. We want to emphasize that the present non-equilibrium TST takes into account not only the exponential part of the rate, namely the barrier height on the optimal path (not at the landscape saddle), but also the prefactor evaluated at the saddle on the optimal path rather than at the landscape saddle. This contrasts with conventional rate studies, where the barrier height is measured at the landscape saddle and the prefactor is measured from the fluctuations around that saddle and the stable basin.
Figure 19. The mean first passage time (MFPT) of the reprogramming (S → S′) from our theoretical predictions (non-equilibrium TST), Langevin dynamics simulations, zero noise approximations, and equilibrium TST, for different cell volumes V (different fluctuation levels) (from Ref. [82]).

6.2. The Waddington landscape and epigenetics


Since the cellular processes are all determined by the underlying regulatory networks, there are
two components of the cell networks that can influence the cell behavior. One is the nodes, the

Figure 20. The potential landscape (3D view) in the nA – nB plane for different self-activation strength FA
and binding/unbinding speed ω (from Ref. [66]).
genes. The other is the links, or regulation strengths, among the genes. Epigenetics plays an important role in cellular processes. It provides a source of change that influences biological function through changes in regulation rather than in the genes themselves. For many gene networks or circuits, the dynamics is studied by directly following the gene products, the proteins. Implicitly, people have thus assumed that the proteins faithfully represent the genes. This is a reasonable assumption when the regulation of the genes through the proteins is relatively fast and there is a tight coupling between proteins and genes. However, due to the epigenetics of DNA methylation and histone modification during gene regulation in eucaryotic cells, the regulatory proteins have to take a longer series of steps in order to reach the target gene. Therefore, the coupling between the genes and the proteins is less strong, and we have to consider both of them through the different timescales introduced by epigenetics. Effectively, we enter the non-adiabatic, weak-coupling regime of gene dynamics instead of the usually assumed adiabatic dynamics with strong coupling between proteins and genes. In order to see this effect, we can explicitly take into account the regulatory binding steps of the proteins to the DNA. The epigenetic effects are then reflected in the slow timescales (through DNA methylation and histone modification).
The result is shown in the following figure. We use the ratio of the unbinding rate to the binding rate as the measure of the slow timescale arising from the epigenetic effect. We also use the self-activation strength to monitor the developmental process. In Figure 20, along the developmental process, we can see the gradual cell fate change from the central stem cell basin to the differentiated cell basins. On the other hand, as we turn on the epigenetic effects of slow binding, mimicking DNA methylation and histone modification, the epigenetics shows four effects on the development. (1) It provides another mechanism of differentiation. Through enhancing the effects of slow regulatory binding, we can see that the cell fate can transform from the stem cell

Figure 21. The MFPT of the differentiation and reprogramming for different self-activation strength FA and
binding/unbinding speed ω (from Ref. [66]).
(0.5,0.5) to differentiated cells (1,0) or (0,1). (2) New states can be generated. As we can see,
some intermediate or differentiated states can be formed (0,0). (3) Furthermore, there are many
more metastable basins forming around the differentiated and stem cell states. This explains the
inhomogeneous distributions of the stem and differentiated cells. The inhomogeneity is often
observed in the experiments but has puzzled many since it has had no good explanation. The
landscape study gives a quantitative explanation of the origin of the inhomogeneous distribution
of states through the epigenetic effects of slow binding. (4) The differentiation rate has an optimum with respect to the timescale of the regulatory binding, as shown in Figure 21. This
suggests that one can reach optimal differentiation through the modulation of epigenetics. Future
experiments should test this theoretical prediction.

6.3. The Waddington landscape for large networks


We have also explored a more realistic gene network underlying the human embryonic developmental process [76]. This network has been constructed by collecting data from the experimental
Figure 22. The wiring diagram for the stem cell developmental network including 52 gene nodes and their
interactions (arrows represent activation and perpendicular bars represent repression). The magenta nodes
represent 11 marker genes for the pluripotent stem cell state, cyan nodes represent 11 marker genes for the
differentiation state, and the yellow nodes represent genes activated by the stem cell marker genes. The
solid black links represent the key links found by the global sensitivity analysis, and the octagon shape
nodes represent key stem cell and differentiation markers found by global sensitivity analysis (from Ref.
[76]).
Figure 23. A bi-stable landscape picture for the stem cell network. Parameters are specified as follows:
k = 1 (degradation), b = 0.5 (repression), a = 0.37 (activation), and diffusion coefficient D = 0.01. (a) 3D
landscape and dominant kinetic paths. The yellow line represents developmental path, and the magenta line
represents reprogramming path. (b) 2D dominant kinetic path and flux on the landscape. The white arrows
represent the direction of flux, and the red arrows represent the direction of the negative gradient of potential
energy (from Ref. [76]).
literature [177]. It has 52 genes, with 11 stem cell marker genes and 11 differentiated cell marker genes, as shown in Figure 22. We found that even with many more genes involved, the basic picture of development remains the same as for the two-gene module study. As shown in Figure 23,
the resulting developmental landscape has two major basins, one is the stem cell and the other
is the differentiated cell. The developmental process can be quantified as the transformation of
cell fate from stem cell down to the differentiated cells. Furthermore, we can quantify the domi-
nant pathways for differentiation and reprogramming. They are irreversible. By quantifying these
pathways, we can see the detailed process of how the cell fate decision actually is made. We can
see in Figure 24 that the developmental pathway starts from the embryonic stem cell stage of high nanog, low GATA6, and low cdx2 protein concentrations, proceeds through the state of low nanog, low GATA6, and low cdx2 protein concentrations, and further to the state of low nanog, low GATA6, and high cdx2 protein concentrations, and finally reaches the differentiated state with low nanog, high GATA6, and high cdx2 protein concentrations. For the reprogramming pathway,

Figure 24. Differentiation and reprogramming process represented by 313 nodes (every node denotes a cell
state, characterized by expression patterns of the 22 marker genes) and 329 edges (paths). The sizes of nodes
and edges are proportional to the occurrence probability of the corresponding states and paths, respectively.
Red nodes represent states which are closer to the stem cell state in terms of gene expression pattern, and blue nodes represent states which are closer to the differentiated state. The green and magenta paths denote the dominant kinetic paths from the path integral for differentiation and reprogramming, respectively. Here, we set a probability cutoff to decrease the number of states and paths, that is, we only show the states and paths with higher probability (from Ref. [76]).
the network starts from the differentiated state with low nanog, high GATA6, and high cdx2 concentrations, proceeds through the state of high nanog, high GATA6, and high cdx2 protein concentrations, and further to the state of high nanog, low GATA6, and high cdx2 protein concentrations, and finally reaches the stem cell state with high nanog, high GATA6, and low cdx2 concentrations. These identifications and quantifications of the developmental and reprogramming pathways not only uncover the underlying mechanisms, but also quantify the route by which

Figure 25. The barrier height and MFPT results when the activation strength a, the repression strength b
as well as the noise level D change (Langevin dynamics). (a) and (b) show that when a increases, the stem cell state becomes more stable, the barrier for the stem cell state USP (or the barrier for the differentiation process Udifferentiation) increases, and the MFPT for the differentiation process from the stem cell state to the differentiation state (τdifferentiation) increases. By contrast, when a increases, the differentiation state becomes less stable, the barrier
for differentiation state USD (or the barrier for reprogramming process Ureprogramming ) decreases, and the
MFPT for reprogramming process from differentiation state to stem cell state (τreprogramming ) declines. (c)
and (d) show that when b increases, the barrier for stem cell state USP (Udifferentiation ), the barrier for differ-
entiation state USD (Ureprogramming ), the MFPT for differentiation process (τdifferentiation ), and the MFPT for
reprogramming process (τreprogramming ) all increase. (e) and (f) show that when noise level D increases, the
barrier for stem cell state USP (Udifferentiation ), the barrier for differentiation state USD (Ureprogramming ), the
MFPT for differentiation process (τdifferentiation ), and the MFPT for reprogramming process (τreprogramming )
all decrease (from Ref. [76]).
development and reprogramming are realized. This is important and should provide a quantitative
strategy for guiding reprogramming and development in regenerative medicine.
The kinetics of the transformation from stem cell to differentiated cells or vice versa is con-
trolled by the landscape topography quantified by the barrier heights between the two basins of
attraction as illustrated in Figure 25 [76]. The changes in barrier heights and kinetics quantified
by the MFPT are illustrated in Figure 25 upon changes in regulation activation, regulation repres-
sion, and fluctuations. By performing a global sensitivity analysis of the landscape topography through the barrier heights between the stem cell fate basin and the differentiated cell fate basin, one can identify which genes and regulations are important in the developmental network and responsible for the development and reprogramming processes; the degree of difficulty of the underlying cell fate decision-making process is controlled by the barrier height in between. The

Figure 26. Results of the global sensitivity analysis in terms of barrier height and MFPT when parameters are changed. The results in (a) are for six repression links (named, respectively, R1, R2, . . . , R6, see Ref. [76, Table S2]) based on the change of barrier heights (ΔBarrier). The results in (b) are for 14 activation links (named, respectively, A1, A2, . . . , A14, see Table S3) based on barrier heights. Blue bars represent the change of USP (barrier for the differentiation process), and red bars represent the change of USD (barrier for the reprogramming process). (c) and (d) separately show the corresponding results in terms of the change of the MFPT (ΔMFPT). Blue bars represent the MFPT change for the differentiation process, and red bars represent the MFPT change for the reprogramming process. (e) shows the corresponding global sensitivity for the knockdown of individual genes (from Ref. [76]).
resulting identified key genes and regulations are indicated in Figures 22 and 26. These are the
hot spots for the development and reprogramming. They are critical in determining the cell fate
decision-making process. Therefore, by exploring the landscape topography, we can study the
underlying network structure and determine the backbone for the biological function and behav-
ior. This will not only help us to understand the underlying mechanism, but may also help the
network design and new generation of network drug discovery.
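To make the connection between landscape topography and kinetics concrete, the sketch below (Python) estimates an MFPT of the kind shown in Figure 25 directly from Langevin trajectories. The two-gene toggle-switch force, the noise level D, the basin criteria, and all parameter values are illustrative assumptions and not the developmental network of Ref. [76]; run lengths and noise typically need tuning.

```python
# Minimal sketch (assumed toy model, not the network of Ref. [76]): estimate the
# mean first-passage time (MFPT) between two basins of a bistable two-gene
# circuit from Euler-Maruyama Langevin trajectories.
import numpy as np

rng = np.random.default_rng(0)
a, b, k, S, n = 0.5, 1.0, 1.0, 0.5, 4      # assumed Hill-type parameters

def force(x):
    """Self-activation plus mutual repression for two genes (toy model)."""
    x1, x2 = x
    f1 = a * x1**n / (S**n + x1**n) + b * S**n / (S**n + x2**n) - k * x1
    f2 = a * x2**n / (S**n + x2**n) + b * S**n / (S**n + x1**n) - k * x2
    return np.array([f1, f2])

def first_passage_time(x0, in_target, D=0.05, dt=0.01, t_max=1e5):
    """Run one stochastic trajectory until it first enters the target basin."""
    x, t = np.array(x0, dtype=float), 0.0
    while t < t_max and not in_target(x):
        x += force(x) * dt + np.sqrt(2 * D * dt) * rng.standard_normal(2)
        x = np.clip(x, 0.0, None)          # keep concentrations non-negative
        t += dt
    return t

# basin A: gene 1 high; basin B: gene 2 high (assumed basin criteria)
in_B = lambda x: x[1] > 1.2 and x[0] < 0.4
times = [first_passage_time([1.5, 0.05], in_B) for _ in range(20)]
print("estimated MFPT A -> B:", np.mean(times))
```

Repeating such estimates while varying the activation strength, repression strength, or D gives trends analogous to Figure 25; the barrier heights themselves are read off the landscape U = −ln Pss obtained by histogramming long trajectories.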

7. Landscapes and paths of cancer


7.1. Introduction
Cancer presents a serious threat to human health. For example, breast cancer is the most frequent
malignancy in women. Worldwide, breast cancer comprises 22.9% of all cancers (excluding non-
melanoma skin cancers) in women. About 1 in 8 US women will develop invasive breast cancer
over the course of her lifetime. In 2008, breast cancer caused 458,503 deaths worldwide [178–
180]. Though most breast cancers are benign and curable by surgery, one-quarter have a latent
and insidious character, growing slowly but metastasizing early. Current therapies delay tumor
progression significantly, but recurrence is inevitable, resulting in high mortality rates [181].
About 5–10% of breast cancers can be linked to gene mutations (abnormal changes) inherited
from one's mother or father. Mutations of the BRCA1 and BRCA2 genes are the most common.
Women with these mutations have up to an 80% risk of developing breast cancer during their
lifetime [181]. There are specific genes in the cells of our bodies that normally help to prevent
tumors from forming. One of these tumor-suppressor genes is P53. The function of P53 is to
suppress cells from growing. When it has been damaged or altered, P53 loses its ability to block
cell growth, and thus results in an increased risk of cancer. Almost 50% of all human cancer cells
contain a P53 mutation. These cancers are more aggressive and more often fatal.
Conventionally, cancer has been seen as a disease caused by mutations [182]. This has
guided the thinking of the drug discovery industry for the last 50 years. Despite these efforts, cancer
still presents a major threat to human health. Recently, it has become increasingly clear that
cancer is not a disease arising solely from single gene mutations, but a disease of state [50,84,112–
114,173,183–189]. More and more observations have appeared that contradict the paradigm of mutation-
driven tumorigenesis. Cancer should be seen as a particular natural state that originates from cell
regulation. In other words, cancer should be seen as a network disease rather than as coming
from a single mutation. The cancer state is often hidden under the complex molecular networks
and therefore normally is inaccessible [183]. These molecular networks form different kinds
of cell types through evolution. The microenvironment can be as important as mutations for
cancer formation, and perhaps even more so. While mutations can influence the network,
microenvironments, although not causing genetic changes, do change the network wiring
or the interaction strengths between the genes in the underlying gene regulatory network. The
changes of the interactions between genes can also alter the natures of the resulting states and
cell types. Therefore, driver gene mutations may not play the absolute dominant role in causing
a stepwise cancer cell phenotype formation. Rather, their role might be to allow the cells to have
the access to these hidden cellular states depending on the microenvironments. Cancer would
then be viewed as an intrinsic state and only released through a series of mutations controlled by
the microenvironments.
Cancer states should be naturally emerging functional entities, the result of the collective action
of all the gene–gene interactions in the gene regulatory network. They form basins of attraction
around these states. They become robust because the unstable states flow into them. This
implies that with a small perturbation to a state within the basin, the system will return to the

attractor state. Importantly, a single network can give rise to multiple attractors, leading to multi-
stability. Thus, to learn the mechanism of cancer, we need to investigate the underlying gene
regulatory networks and associated dynamics (usually represented by a wiring diagram including
gene nodes and their interaction links), which govern the evolution and behavior of normal and
cancer states.
The cancer gene regulatory networks are dynamical systems. Intrinsic stochasticity is present
due to statistical fluctuations from a finite number of molecules of the network, and external
stochasticity is present due to highly dynamical and inhomogeneous environments. Thus, we
should study the stochastic dynamics of cancer network dynamics in fluctuating conditions in
order to model realistically the cellular inner and outer environments. Recently, there have been
an increasing number of studies on the global topological structures of network systems [190].
The underlying nature of the networks has been explored by experimental research [191,192].
The conventional way of describing the network dynamics is often in terms of either deterministic
or stochastic chemical kinetics and follows the temporal trajectories of system variables. It often
probes only the local nature of the network, such as the local stability around a fixed-point state [159–
162,183]. However, the global nature of the system cannot be easily revealed from such an analysis.
As mentioned, the biological function is often realized by gene regulatory networks at the
cell level. The global stability and behavior of cellular networks are essential for performing the
biological functions. However, the quantifications of the global stability, function, and behavior
of cancer networks, as well as the key factors determining the underlying dynamics, present a
challenge in systems biology. Solving this problem is crucial for understanding the functions and
mechanism of cancer.
For the cancer cellular network with its huge state space, understanding how a seemingly infinite
number of genotypes can produce a finite number of functional phenotypes (i.e. normal
and cancer states) is challenging. Rather than focusing on the individual trajectory dynamics, a
probabilistic description provides a global quantification. Different states correspond to different
probabilities of appearance. The functional state should possess a higher probability of appear-
ance or lower potential energy, whereas nonfunctional states have lower probability or higher
potential [13,17].
The dynamical system of cancer gene networks is not at equilibrium due to the constant
exchange of energy, information and matter from the environments. The equilibrium landscape
recipe does not apply here. Recently, there have been increasing efforts in exploring the physics
of cancer network dynamics [50,84,112–114,173,183–189]. Despite the progress made, there are
still challenges on the origin and underlying mechanisms, the driving forces, the quantification
of landscape, the paths and speed, the key genes and regulations of cancer. Non-equilibrium
landscape and flux theory can be applied to cancer to meet some of the aforementioned challenges
[84]. Analysis shows the following:

(1) The normal states and cancer states can be quantified as basins of attractions on the cancer
landscape, the depth of which represents the associated probabilities.
(2) The global stability, behavior, and function of cancer through underlying regulatory net-
works can be quantified through the landscape topology in terms of depths of and barriers
between the normal and cancer states.
(3) The paths and transition rates from the normal (cancer) state to the cancer (normal) state
representing the underlying tumor-genesis and reverse processes will be quantitatively
uncovered, through a path integral approach and non-equilibrium TST.
(4) Global sensitivity analysis based on landscape topography, again allows one to identify
the key genes and regulations important to the global stability, behavior, and function of

cancer networks. The results of global sensitivity analysis will provide multiple targets
for cancer.

7.2. Results
7.2.1. The underlying cancer gene regulatory network
Ten hallmarks of cancer [168,169,182] have been proposed previously, characterized by some key
cancer marker genes. Each hallmark is often involved in a set of functionally linked pathways.
This makes the mapping of the functional modules and the mutated genes onto a cancer network
possible. One can use the network to uncover the underlying relationships, insights, mechanisms,
and principles of cancer [189]. Starting from cancer marker genes and certain critical tumor-
suppressor genes such as P53, RB, P21, and PTEN, through an extensive literature search [193],
one can construct a cancer gene regulatory network made of 32 gene nodes (Figure 27) and
111 edges (66 activation interactions and 45 repression interactions). In Figure 27, the arrows
represent activation and the short bars represent repression. The network has mainly three types of
marker genes: apoptosis marker genes (green nodes, including BAX, BAD, BCL2, and Caspase),
cancer marker genes (magenta nodes, including AKT, MDM2, CDK2, CDK4, CDK1, NFKB,
hTERT, VEGF, HIF1, HGF, and EGFR), and tumor repressor genes (light blue nodes, including
P53, RB, P21, PTEN, ARF, and CDH1). The brown nodes represent other genes.
The dynamics of this gene network can be described by corresponding ordinary differential
equations and the interactions among genes in terms of Hill functions representing their activation
or repression strengths and cooperativity. These equations have the mathematical form as follows:
F_i = −k X_i + a X_{ai}/(S^n + X_{ai}) + b S^n/(S^n + X_{bi}).   (110)
In Equation (110), i = 1, 2, . . . , 32. S represents the threshold (inflection point) of the sig-
moidal functions, representing the strength of the regulatory interaction, and n is the Hill
coefficient which determines the steepness of the sigmoidal function [84]. The Hill func-
tion parameters are given as S = 0.5, n = 4. Furthermore, k represents the self-degradation
constant, b represents the repression constant, and a represents the activation constant. Xai
and Xbi represent the average interaction strengths, respectively, for the activation and the
repression from the other nodes to the node i. At each node i, X_{ai} can be defined as
X_{ai} = (X_{a(1)}^n M(a(1), i) + X_{a(2)}^n M(a(2), i) + · · · + X_{a(m1)}^n M(a(m1), i))/m1, and X_{bi} can be
defined as X_{bi} = (X_{b(1)}^n M(b(1), i) + X_{b(2)}^n M(b(2), i) + · · · + X_{b(m2)}^n M(b(m2), i))/m2.
Here a(1), a(2), . . . , a(m1) denotes the list of nodes with activation interactions to node i,
and b(1), b(2), . . . , b(m2) denotes the list of nodes with repression interactions to node i.
M (j, i) (i, j = 1, 2, . . . , 32) represents the interaction matrix M with the strengths from node j to
node i. The model makes the simplified assumption that the regulation from one individual gene
j to the other genes has the same interaction strength.
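As a concrete illustration of Equation (110), the following minimal Python sketch evaluates F_i for a network encoded in an interaction matrix M. The 3-node wiring, the sign convention (positive entries for activation, negative for repression), and the treatment of nodes without repressor inputs are assumptions for illustration only; they stand in for the 32-node network and its interaction list.

```python
# Minimal sketch of Equation (110).  Sign convention (assumption): M[j, i] > 0
# means node j activates node i with strength M[j, i]; M[j, i] < 0 means node j
# represses node i with strength |M[j, i]|.  The 3-node wiring is a toy example.
import numpy as np

S, n, k, a, b = 0.5, 4, 1.0, 0.5, 0.5      # parameter values quoted in the text

M = np.array([[ 0.0,  0.5, -0.5],
              [ 0.5,  0.0,  0.5],
              [-0.5,  0.5,  0.0]])          # toy interaction matrix (assumption)

def F(X, M):
    """F_i = -k X_i + a Xai/(S^n + Xai) + b S^n/(S^n + Xbi), Equation (110)."""
    N = len(X)
    out = np.empty(N)
    for i in range(N):
        act = [j for j in range(N) if M[j, i] > 0]
        rep = [j for j in range(N) if M[j, i] < 0]
        # Xai, Xbi: averages of X_j^n weighted by the interaction strengths
        Xai = np.mean([X[j]**n * M[j, i] for j in act]) if act else 0.0
        Xbi = np.mean([X[j]**n * abs(M[j, i]) for j in rep]) if rep else 0.0
        # a node with no repressor inputs keeps the full basal term b S^n/S^n = b
        # (a modeling choice made here for simplicity)
        out[i] = -k * X[i] + a * Xai / (S**n + Xai) + b * S**n / (S**n + Xbi)
    return out

print(F(np.array([0.4, 0.6, 0.5]), M))      # deterministic drift at one state
```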

7.2.2. The landscape of the cancer network


From the above 32 ODEs, one can identify the driving force governing the cancer network
dynamics by considering the corresponding stochastic dynamics [13,17,62]. By applying the
self-consistent mean field approximation, one finds the steady-state probability distribution of
the cancer network in 32 gene concentration variables. According to the relationship between
the underlying landscape and the probability, U = − ln(Pss ) [13,17,62,76,84], one can further
acquire the potential landscape of the cancer gene network. Here, Pss represents the steady-state
probability distribution, and U denotes the dimensionless potential energy.
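As a brute-force complement to the self-consistent mean field approximation used in the text, one can visualize the projected landscape by sampling the stochastic dynamics and histogramming two coordinates while integrating out the rest. In the sketch below, `samples` stands in for long-run simulation output of all 32 genes (random numbers are used as a placeholder), and the column positions of AKT and RB are assumed.

```python
# Minimal sketch: project a sampled 32-dimensional steady-state distribution
# onto two marker genes and read off U = -ln(P_ss).  The data and the column
# indices below are placeholders, not actual simulation output.
import numpy as np

samples = np.abs(np.random.default_rng(1).normal(0.8, 0.4, size=(200_000, 32)))
AKT_IDX, RB_IDX = 9, 5                      # assumed column positions

# Marginalizing over the other 30 genes = histogramming only the two columns
P, akt_edges, rb_edges = np.histogram2d(
    samples[:, AKT_IDX], samples[:, RB_IDX], bins=50, density=True)

U = -np.log(P + 1e-12)                      # dimensionless potential U = -ln P_ss
U -= U.min()                                # set the deepest basin to zero
print("projected landscape shape:", U.shape, "  U range:", U.min(), U.max())
```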

Figure 27. The diagram for the cancer network including 32 nodes (genes) and 111 edges (66 activation
interactions and 45 repression interactions). Red arrows represent activation and blue filled circles repre-
sent repression. The network includes mainly three kinds of marker genes: apoptosis (green nodes), cancer
marker genes (magenta nodes), and tumor repressor genes (light blue nodes). The cancer marker genes
include EGFR for proliferative signal, VEGF for angiogenesis, HGF for metastasis, hTERT for unlimited
replication, HIF1 for glycolysis, and CDK2 and CDK4 for evading growth suppressors. The solid black links
represent the key links found by the global sensitivity analysis, and the octagon shape nodes represent key
genes for the transition between normal and cancer states found by global sensitivity analysis. The brown
nodes represent other genes (from Ref. [84]).

For visualization purposes, we can project the 32-dimensional landscape onto a 2D state space
by integrating out the other 30 variables and leaving the 2 key genes, AKT (an oncogene) and
RB (a tumor repressor gene). Figure 28 shows 3D landscape and 2D contour map for the entire
cancer network in gene expression state space in AKT and RB. In Figure 28(a), three stable states
or basins of attraction on the landscape (tri-stability) emerge. The landscape reflects the steady-
state probability distribution. Every basin of attraction (high probability states) can represent a
cell type in gene expression state space. These basins are separated by certain barriers, prevent-
ing easy conversions among different cell types. The bottom attractor denotes the apoptosis state,
with higher expression of the tumor repressor genes RB, P21, and PTEN, lower expression of the
oncogenes AKT, EGFR, VEGF, HGF, HIF1, hTERT, MDM2, CDK2, and CDK4, and a higher expression
level of the apoptosis marker gene Caspase. The middle attractor denotes the normal state, with
higher expression of the tumor repressor genes RB, P21, and PTEN, higher expression of the
oncogenes AKT, EGFR, VEGF, HGF, HIF1, hTERT, MDM2, CDK2, and CDK4, and a lower expression
level of the apoptosis marker gene Caspase. The top attractor denotes the cancer state, with


Figure 28. The tri-stable landscape for the cancer network from self-consistent approximation method.
The parameters are set as follows: degradation constant k = 1, activation constant a = 0.5, and repression
constant b = 0.5 (see supporting information for parameter depictions). Diffusion coefficient, D = 0.01.
(a) shows the 3D landscape and dominant kinetic paths. The yellow path represents the path from normal
state attractor to cancer state attractor, and magenta path represents the path from cancer state attractor to
normal state attractor. Black paths represent the apoptosis paths for normal and cancer states. (b) shows
the corresponding 2D landscape of cancer network. Red arrows represent the negative gradient of potential
energy, and white arrows represent the probabilistic flux (from Ref. [84]).

lower expression of the tumor repressor genes RB, P21, and PTEN, much higher expression of the
oncogenes AKT, EGFR, VEGF, HGF, HIF1, hTERT, MDM2, CDK2, and CDK4, and a lower expression
level of the apoptosis marker gene Caspase (see Table S3 in the supporting information for the detailed
relative gene expression level of three stable states of Ref. [84]). The three attractors or stable
states are consistent with our biological understanding of normal, cancer, and apoptosis state in
cancer network [168,169,182].
The tri-stable landscape exists in certain regulation ranges and provides a relatively balanced
case for 3 (normal, cancer, and apoptosis) state coexistence. We can explore the transitions among
these three attractors. Changing the nodes (to mimic mutations) or the regulation strengths
(to mimic non-genetic environmental changes) leads to changes of the landscape topography. The
effects range from a single dominant basin to bi-stable basins and to tri-stable basins,
or vice versa.

7.2.3. Dominant paths among normal, cancer, and apoptosis states


The path integral method [14,62,76,84] yields dominant paths between the normal cell state, the
cancer cell state, and the apoptosis state basin. In Figure 28, the yellow path denotes the dominant
path from the normal to the cancer attractor, and the magenta path denotes the dominant path
from the cancer to the normal attractor. We can see that the dominant paths between normal and
cancer states are irreversible. We also see the apoptosis paths for normal and cancer states (black
paths) to the death. We can quantify the probabilistic flux of the cancer network, shown on the
landscape (Figure 28(b)). The white and red arrows represent the direction of probabilistic flux
and the negative gradient of the potential landscape, respectively. We see that the dynamics of the
cancer system is determined by both the force from the gradient of potential and the force from
the curl flux [13]. The force from the curl flux leads the deviation of the dominant paths from the
steepest descent path calculated from gradient of potential. Therefore, we see the two dominant
paths from the normal to the cancer state and from the cancer to the normal state are irreversible
(yellow line and magenta line are not identical).
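The dominant paths in Figure 28 come from the path integral framework of Refs [14,62,76,84]. As a simplified stand-in, the sketch below minimizes a discretized Onsager–Machlup-type action, Σ_t |Δx/Δt − F(x)|² Δt/(4D), over paths with fixed endpoints for a toy two-dimensional force with a gradient and a small curl part; the force, the fixed time step, and the number of path points are assumptions, not the 32-gene model.

```python
# Minimal sketch: dominant path between two attractors as the minimizer of a
# discretized Onsager-Machlup action (a simplified stand-in for the path
# integral used in the text).  The toy force field and all settings are
# assumptions.
import numpy as np
from scipy.optimize import minimize

D, dt, n_pts = 0.01, 0.1, 40

def F(x):
    """Toy double-well gradient force plus a small divergence-free (curl) part."""
    x1, x2 = x[..., 0], x[..., 1]
    fx = x1 - x1**3 + 0.3 * x2
    fy = -x2 - 0.3 * (x1 - x1**3)
    return np.stack([fx, fy], axis=-1)

x_start, x_end = np.array([-1.0, 0.0]), np.array([1.0, 0.0])   # two attractors

def action(flat_interior):
    interior = flat_interior.reshape(n_pts - 2, 2)
    path = np.vstack([x_start, interior, x_end])
    vel = (path[1:] - path[:-1]) / dt
    drift = F(path[:-1])
    return np.sum((vel - drift) ** 2) * dt / (4 * D)

init = np.linspace(x_start, x_end, n_pts)[1:-1].ravel()   # straight-line guess
res = minimize(action, init, method="L-BFGS-B")
dominant_path = np.vstack([x_start, res.x.reshape(-1, 2), x_end])
print("minimized action:", res.fun)
```

Because the curl part of the force enters the action, minimizing from A to B and from B to A generally produces two different paths, mirroring the irreversibility of the yellow and magenta paths discussed above.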
The landscape in Figure 28 merely shows a 2D projection of the whole 32-dimensional state
space. To see the paths in the whole state space, one can project the continuous expression levels
of the 21 major marker genes (reducing the original dimensionality of 32) to high and low binary
expressions of each gene (2^21 cell states in total) (ATM, P53, P21, PTEN, CDH1, RB, ARF,
AR, MYC, AKT, EGFR, VEGF, HGF, HIF1, hTERT, MDM2, CDK2, CDK4, CDK1, E2F1, and
Caspase). The normal state can be represented by the binary number 001011010101110000000
(denoting the expression level from gene 1 to gene 21, 1 for high expression, 0 for low expres-
sion), and for the cancer state, it is represented by 100010000101110111100. For the apoptosis
state, it is represented by 001101011010000000001. Figure 29 shows the discrete cancer land-
scape represented by 247 cell states (nodes representing the states, characterized by expression
patterns of the 21 marker genes) and 334 transition jumps (edges) between the different cell
states. The sizes of the nodes and edges are proportional to the occurrence probability of the cor-
responding states and paths, respectively. Blue nodes denote the cell states closer to normal cell
states, red nodes denote the cell states closer to cancer states, and green nodes denote the cell
states closer to apoptosis states. The largest blue node (high RB/high AKT/low Caspase) shows
the most significant normal state, the largest red node (low RB/high AKT/low Caspase) shows
the most significant cancer state, and the largest green node (high RB/low AKT/high Caspase)
shows the most significant apoptosis state.
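The coarse-graining just described maps each sampled state of the 21 marker genes onto one of the 2^21 binary patterns. A minimal sketch of that step follows; the gene order matches the list in the text, while the expression data and the high/low thresholds are illustrative assumptions.

```python
# Minimal sketch: coarse-grain continuous expression levels of the 21 marker
# genes into a 21-bit state label (one of 2^21 coarse states).  Thresholds and
# the sample expression vector are placeholders.
import numpy as np

MARKERS = ["ATM", "P53", "P21", "PTEN", "CDH1", "RB", "ARF", "AR", "MYC",
           "AKT", "EGFR", "VEGF", "HGF", "HIF1", "hTERT", "MDM2", "CDK2",
           "CDK4", "CDK1", "E2F1", "Caspase"]

def binarize(expr, thresholds):
    """Map 21 expression levels to a binary string such as '001011010...'."""
    bits = (np.asarray(expr) > np.asarray(thresholds)).astype(int)
    return "".join(map(str, bits))

expr = np.random.default_rng(2).uniform(0, 2, size=21)   # stand-in sample
thresholds = np.full(21, 1.0)                            # assumed cutoffs
state = binarize(expr, thresholds)
print(state, "-> one of", 2**21, "possible coarse states")
```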
There are 21 dimensional dominant paths shown as green and magenta paths separately for
the normal to cancer transformation and for the cancer to normal process, and black paths label
apoptosis paths of normal and cancer states. We see again that the path from the normal to the

Figure 29. Discrete landscape for cancer network with 247 nodes (every node denotes a cell state, charac-
terized by expression patterns of the 21 marker genes) and 334 edges (paths) at default parameter values
(a = 0.5, b = 0.5, k = 1). The sizes of nodes and edges are proportional to the occurrence probability of
the corresponding states and paths, respectively. Blue nodes represent states closer to normal cell states, red
nodes represent states closer to cancer cell states, and green nodes represent states closer to apoptosis cell
states. The green and magenta paths denote dominant kinetic paths from path integral separately for normal
to cancer attractor and cancer to normal attractor, and black paths represent the apoptosis paths for normal
and cancer state. Here, we only show the states and paths with higher probability by setting a probability
cutoff. The largest blue node (high RB/high AKT/low Caspase) represents the most significant normal state,
the largest red node (low RB/high AKT/low Caspase) represents the most significant cancer state, and the largest
green node (high RB/low AKT/high Caspase) represents the most significant apoptosis state. Some key intermediate
states along the kinetic paths also have been labeled with high or low expression level (from Ref. [84]).

cancer attractor and the path from the cancer to the normal attractor are not reverses of each other.
We can track the transition from the normal to the cancer state according to certain marker genes
RB, MDM2, and CDK2. We see that the cancerization process proceeds (the green path from
the normal to the cancer attractor in Figure 29) from MDM2 on (from 0 to 1), CDK2 on (from
0 to 1), RB off (from 1 to 0), and finally to cancer state. This leads to a possible mechanism for
cancerization process. Initially, the on state of MDM2 represses the tumor repressor genes P53
and P21. This releases the genes CDK2 and CDK4 (CDK2 and CDK4 on), which are responsible for cell
growth, because the inhibition of CDK by P21 is relieved. Then, RB is switched off due to the suppression
by the activated CDK2 and CDK4, and the cell enters the cancer state. This shows the importance of the oncogene
MDM2 in inducing tumorigenesis.
For the reverse transition path of cancerization (from the cancer to the normal state), we can see
that the cell proceeds (the magenta path from the cancer to the normal attractor in Figure 29) from RB
on, to CDK2 and CDK4 off, and finally to MDM2 off. This shows that in the process of
transition from the cancer state to the normal state, the cell may first switch on the key tumor repressor
gene RB. As time goes on, the growth genes CDK2 and CDK4 are gradually inactivated

due to the repression of RB on them. Finally, the oncogene MDM2 is turned off. The cell
then goes back to the normal state. In experimental studies, inhibiting the expression of TCTP
(encoding the translationally controlled tumor protein) was suggested as an important mechanism for
tumor reversion, because this activates the expression of P53; note that TCTP promotes
MDM2-mediated degradation of P53 [194–196]. This illustrates the role of the oncogene MDM2
as an important drug target for tumor reversion and provides a verification of our theoretical
predictions. It also illustrates the importance of restoring the function of the tumor repressor gene
RB as an anti-cancer tactic.
The biological paths for cancerization and the apoptosis obtained from the landscape and flux
theory of cancer can be tested and validated by experiments in the near future, and used to guide
the design of new anti-cancer strategies.

7.2.4. Changes in landscape topography of cancer


Based on the results of global sensitivity analysis, we identified certain key regulations and
visualized the change of landscape when these regulation strengths are changed (Figure 30). In
Figure 30, the vertical axis of every sub-figure denotes the negative probability (−P correspond-
ing to potential energy U according to U = − ln(P)). The four rows are separately associated
with four specific regulations for illustration (two key activations and two key repressions). As
the activations on AKT and VEGF increase, the landscape changes from tri-stability with dom-
inant normal state to bi-stability (cancer and apoptosis coexist), and finally to mono-stability
with a dominant cancer state. This shows the role of AKT and VEGF [197–199] for inducing
the cancer, consistent with the sensitivity analysis. When the repression on RB is increased, the
landscape changes from bi-stability with a dominant normal state to tri-stability, and finally to
mono-stability with a dominant cancer state. This demonstrates the role of suppressing RB in
inducing the cancer. As the repression on AKT is increased, the landscape changes from a can-
cer dominant bi-stability to tri-stability with dominant normal state, and finally to mono-stability
with a dominant apoptosis state. This illustrates that repressing AKT can attenuate cancer through
inducing the transition between cancer and normal state or inducing cell apoptosis [197].
The changes of landscape with regulations provide a possible explanation for the mecha-
nisms of cancerization. At small AKT activation, the tri-stable landscape has a dominant normal
state with deepest potential basin. It is difficult for the cell to switch from the normal attrac-
tor to the other two attractors (cancer and apoptosis). This shows that the cells perform normal
functions and are stable against fluctuations. As the activation on AKT increases, the normal
attractor becomes less stable, and the cancer attractor becomes more and more stable. This
demonstrates the cancerization process from normal cells through the change of AKT activa-
tion strength. Finally, the cells show a landscape with only one dominant cancer attractor. This
marks the completion of the transformation from normal cells to cancer cells. A funnel-shaped
landscape guarantees the stability of the cancer state. At this stage, it is difficult for the cells to escape
from the cancer attractor. In a similar way, the landscape topography changes with respect to
the changes in the repression on AKT (the fourth row). This may provide a strategy for inducing
the death of cancer cells, reflected by the landscape topography changes from a dominant cancer
state to a dominant apoptosis state. These are some examples of regulation strength changes
that can induce landscape topography changes in the cancer gene network. In reality, there are
many different types of combinations of changing regulation strengths in the network. These can
lead to changes of the landscape topography and, further, changes of the network function. Due to
the limitation of computational cost, we performed only a single-factor sensitivity analysis here. Ideally,
a multi-factor sensitivity analysis is expected to find more realistic and interesting anti-cancer
recipes.

Figure 30. The change of landscape when some key regulation strength (activation and repression parame-
ters, Mji ) are changed. The vertical axis of every sub-figure represents −P ∗ 1000 (P is probability and −P
is corresponding to potential energy U). The columns from left to right correspond to
s = −50%, s = 0, s = 50%, and s = 100% (here s represents the percentage by which the parameters are changed). The
four rows correspond to changes of four parameters. As labeled, the first row represents
activation strength from VEGF to AKT, the second row represents the activation strength from AKT to
VEGF, the third row represents the repression strength from CDK2 to RB, and the fourth row represents
the repression strength from PTEN to AKT. The labels C, N, and A separately represent cancer attractor,
normal attractor, and apoptosis attractor (from Ref. [84]).

8. Evolution
8.1. Introduction
Evolution is the most fundamental idea in biology. According to Darwin, current biology is
shaped by evolutionary history. Evolution is governed by natural selection: the fittest to
the environment survive. A quantitative realization of Darwinian evolution was reached
by Wright and Fisher [102,200–204]. Wright quantified the adaptive landscape for evolution,
where the evolution follows a gradient of the adaptive landscape and always searches for
better fitness. Fisher, on the other hand, developed his fundamental theorem of natural selection,
stating that the fitness always increases and that this increase is determined by the
genetic variance of the fitness. Wright and Fisher's combined theory formed the foundation of
evolutionary landscapes. However, Wright and Fisher's theory of evolution was often challenged

by the following arguments for general evolution. It is known that in evolution, one often has
so-called Red Queen effects where the evolution still continues even after reaching the optimal
fitness [86]. For example, in evolution, one can often see limit cycle oscillatory cases where
dynamics continues on the oscillation ring when the apparent fitness already reaches optimum.
This is the most serious criticism of Wright and Fisher’s quantitative theory of evolution. In other
words, this clearly points out that Wright and Fisher’s evolution theory only works under very
restrictive conditions, for example, allele frequency-independent cases where there are no biotic
interactions among species or within species. In general, almost all realistic evolution processes
do not satisfy the criteria that the Wright and Fisher theory employs as assumptions. Clearly,
a more general quantitative evolution theory is required and efforts have been made toward this
direction [69,205–210].
As in our earlier examples, general evolution dynamics does not follow the gradient dynam-
ics as Wright’s theory stated. Instead, general evolution is determined by both a gradient of the
potential landscape and a flux [69]. We can identify the intrinsic potential landscape as the Lya-
punov function monotonically decreasing in the process to quantify the stability and function
for the evolution but we must realize that this Lyapunov function is not the same as the fitness
and thus defines a new fitness function due to its statistical probabilistic nature. So the evolu-
tion always searches for optimum on an intrinsic potential landscape, but not the original fitness.
Furthermore, the presence of the additional curl flux will give an extra contribution to Fisher’s
fundamental theorem, so that the temporal change of the new fitness function with respect to time
is not only equal to the genetic variance of the fitness, but also contains an extra contribution from
the curl flux. The origin of the flux can come from allele frequency-dependent selection. In fact,
this term can be negative or positive. Therefore, there is a chance that the new “fitness” is no
longer changing any further. Instead, evolution still continues with non-zero genetic variance due
to the presence of the flux. We can thus explain the famous Red Queen effects [86] with the limit
cycle evolution. The evolution does tend to the higher fitness or lower potential. However, when
it reaches an optimum such as on the limit cycle ring, the driving force for the continuation of
the evolution dynamics on the ring comes not from the landscape gradient but from the curl flux
arising from, for example, the biotic interactions among species or within species. This can create
endless evolution.
Evolution theory is based on three key ingredients: reproduction, mutation, and selection.
The mathematical evolutionary theory is based on describing the changes in allele frequencies
[69,201–203]. An allele is one of the multiple forms of a gene located at a specific position on
a particular chromosome. A specific position on the chromosome is called a locus. Most of the
higher eukaryotes are in diploid form with two copies of each gene. In sexual reproduction, only
one allele is passed onto a single gamete. The two gametes unite together to restore the double
complements of alleles. Here, we focus on a single diploid locus with multiple alleles in a ran-
domly mating diploid population. We represent the n alleles at the given locus by A_1, A_2, . . . , A_n
and their frequencies by x_1, x_2, . . . , x_n (Σ_i x_i = 1), respectively. Due to the conservation of the total
allele frequency, Σ_{i=1}^n x_i = 1, the n-allele system has n − 1 degrees of freedom. The state space
is therefore (n − 1)-dimensional, {x_k | 1 ≤ k ≤ n − 1}.

8.2. Evolutionary driving forces: selection, mutation, and genetic drift


8.2.1. Selection
Darwin proposed his evolution ideas as "survival of the fittest", which has become a metaphor for
natural selection. Natural selection can be expressed through the fitness, defined as
the average number of offspring produced by individuals with a certain genotype. We can see
how natural selection changes the allele frequencies by means of the fitness.

In a population with N newborn diploid individuals, there are x_i x_j N individuals with genotype
A_iA_j. For the next generation, the expected number of offspring with genotype A_iA_j becomes
w_ij x_i x_j N. The expected total number of the population is then Σ_{i,j}^n w_ij x_i x_j N, where w_ij represents
the fitness of genotype A_iA_j. Therefore, the proportion of the A_i allele in the next generation will be
x_i′ = Σ_j^n x_i x_j w_ij N / Σ_{i,j}^n w_ij x_i x_j N = x_i w_i/w̄, where w_i = Σ_{j=1}^n x_j w_ij and w̄ = Σ_{i,j=1}^n x_i x_j w_ij. We
call w̄ the mean fitness of the population. Therefore, the rate of change in allele frequencies under
natural selection becomes [201–203]

dx_i/dt = F_i^S = x_i(w_i − w̄)/w̄.   (111)

It is worthwhile to notice that Σ_{i=1}^n F_i^S = 0.
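A minimal numerical sketch of Equation (111) is given below: for an arbitrary 3-allele fitness matrix (the numbers are made up), it computes w_i, w̄, and F_i^S, and verifies that the components sum to zero.

```python
# Minimal sketch of the selection force of Equation (111).  The symmetric
# fitness matrix w_ij below is an arbitrary illustrative choice.
import numpy as np

w = np.array([[1.0, 1.2, 0.9],
              [1.2, 1.0, 1.1],
              [0.9, 1.1, 1.0]])

def selection_force(x, w):
    wi = w @ x                  # w_i   = sum_j x_j w_ij
    wbar = x @ w @ x            # w_bar = sum_ij x_i x_j w_ij (mean fitness)
    return x * (wi - wbar) / wbar

x = np.array([0.5, 0.3, 0.2])
FS = selection_force(x, w)
print(FS, "  sum =", FS.sum())  # the components sum to zero, as noted above
```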

8.2.2. Mutation
The process of replicating a gene is not always accurate. Mutation represents any change in a
new gene from the parental gene. Mutation can also result in the changes in allele frequency. The
rate of change in allele frequencies under mutation becomes [201–203]

dx_i/dt = F_i^M = Σ_{j=1}^n x_j m_ji − Σ_{j=1}^n x_i m_ij,   (112)

where m_ji is the rate of mutation from allele A_j to A_i. Due to the conservation of the total allele
frequency, Σ_{i=1}^n x_i = 1, we notice that Σ_{i=1}^n F_i^M = 0.

8.2.3. Genetic drift


Sexual reproduction can be described as a binomial sampling process: a new generation with N
individuals is formed as a result of sampling of 2N alleles from a large pool of gametes. Due to
the sampling nature of the reproduction, the allele frequencies change in a random fashion. The
change in allele frequencies from the random process is called genetic drift. This contributes to
a stochastic evolution force. The corresponding mathematical approach for describing genetic
drift is diffusion [201–203]:
∂P/∂t = D ∇ · ∇ · (GP),   (113)
where Gij = xi (δij − xj ) is from the sampling nature of the genetic drift and D = 1/(4Ne ), Ne is
the effective population size. The state space is n − 1 dimension. The operator ∇, therefore, has
n − 1 components.
The matrix Gij shows some distinct features. The first one is

(∇ · G)_i = 1 − n x_i,   (114)

so that Σ_{i=1}^n (∇ · G)_i = 0. The second one is that its inverse matrix is known to have the feature [202]:

(G^{-1} · F)_i = F_i/x_i − F_n/x_n,   (115)

where F_n = −Σ_{i=1}^{n−1} F_i.
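The sketch below constructs G_ij = x_i(δ_ij − x_j) for a 4-allele example (arbitrary test values) and checks the inverse-matrix property of Equation (115) numerically.

```python
# Minimal sketch verifying Equation (115) for the reduced (n-1)-dimensional
# system.  The frequencies and the force components are arbitrary test values.
import numpy as np

x_red = np.array([0.3, 0.25, 0.2])           # x_1..x_{n-1}, here n = 4
x_n = 1.0 - x_red.sum()                       # x_n from frequency conservation
F_red = np.array([0.02, -0.05, 0.01])         # F_1..F_{n-1} (test values)
F_n = -F_red.sum()                            # forces sum to zero

G = np.diag(x_red) - np.outer(x_red, x_red)   # G_ij = x_i (delta_ij - x_j)

lhs = np.linalg.solve(G, F_red)               # (G^{-1} F)_i
rhs = F_red / x_red - F_n / x_n               # right-hand side of Equation (115)
print(np.allclose(lhs, rhs))                  # True: the identity holds
```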

8.3. Mean fitness as the adaptive landscape for the frequency-independent selection systems
When the fitness of every genotype is independent of the allele frequencies, the corresponding
natural selection is called frequency-independent selection. From Equation (111), the frequency-independent
selection force can be written in the form F^S = (1/2) G · ∇ ln w̄, and thus ∇ × (G^{-1} · F^S) = 0.
Moreover, ∇ × [G^{-1} · (∇ · G)] = ∇ × [∇(Σ_{i=1}^n ln x_i)] = 0. We can see that J_ss = 0 for the
population system with both natural selection and genetic drift. In this situation, F^S − D∇ · G +
DG · ∇U = (1/2) G · ∇ ln w̄ − D∇ · G + DG · ∇U = 0. So ∇φ_0 = ∇(DU)|_{D→0} = −(1/2)∇ ln w̄.
Therefore, we see

φ_0 = −(1/2) ln w̄   (116)

for the frequency-independent selection population [69]. Notice that we ignore a constant of
integration here.
From the previous discussion, we can see that dw̄/dt = F · ∇w̄ = −2w̄ F · ∇φ_0 = 2w̄ ∇φ_0 · G ·
∇φ_0 ≥ 0. This indicates that the mean fitness is a Lyapunov function that can be used to quantify
where the evolutionary optima are. This is the picture of Wright's adaptive fitness landscape
with a gradient flow. However, Wright's adaptive fitness landscape description will break down
in the more general case, that is, under a frequency-dependent selection force.

8.4. The adaptive landscape for general evolutionary dynamics


As discussed, Wright's fitness landscape is inadequate for general evolution dynamics
such as frequency-dependent selection. The challenge then becomes whether there is an adaptive
landscape for evolution and, if so, what it is. The potential and flux landscape theory meets this
challenge and provides a solution [69].
We will start with studying the stochastic evolution dynamics through the probabilistic
description of Fokker–Planck diffusion equation including the selection, mutation, and genetic
drift,
∂P/∂t = −∇ · [(F^S + F^M)P − D∇ · (GP)].   (117)

In the steady state, the corresponding steady-state probability flux is Jss = (FS + FM )Pss −
D∇ · (GPss ) = Pss [(FS + FM − D∇ · G) + DG · ∇U]. When the steady-state flux is zero, the
system is in detailed balance. The test of detailed balance therefore becomes a test of whether
the flux is zero. According to our previous discussion, we can check
∇ × [G^{-1} · (F^S + F^M − D∇ · G)] to obtain this information. Since ∇ × (G^{-1} · F^M) = 0 and
∇ × [G^{-1} · (∇ · G)] = 0, mutation and genetic drift do not violate detailed balance. Therefore,
whether or not detailed balance is preserved is determined by whether ∇ × (G^{-1} · F^S), arising
from natural selection, is equal to zero.
The non-equilibrium intrinsic potential φ0 of this system satisfies

(FS + FM ) · ∇φ0 + ∇φ0 · G · ∇φ0 = 0. (118)

Because of the deterministic equation dx/dt = F^S + F^M, we have dφ_0/dt = ∇φ_0 · (F^S + F^M) =
−∇φ_0 · G · ∇φ_0 ≤ 0. Therefore, φ_0, decreasing monotonically with time, is a Lyapunov function
that always searches for the optimum, irrespective of whether the evolution system is in detailed
balance or not. Thus, φ_0 here defines the true adaptive landscape for evolution.

8.5. Potential and flux landscape of a group-help model with frequency-dependent selection
for evolution
For the purpose of studying the general evolution dynamics, we consider the case of frequency-
dependent selection where the fitness is dependent on the allele frequency describing the social
interactions. We investigate a group-help model (GHM) to explore the underlying evolution
dynamics that includes biotic interactions among individuals in addition to the selection, mutation
and genetic drift.
Frequency-dependent selection emerges when the fitness is dependent on the allele frequency.
A social system is a typical one with frequency-dependent selection. If the genotypes differ in
their social behavior, the fitness of a genotype may depend on the population composition. To model
such selection, we investigate the GHM to study the effect of the interactions among individuals on
the evolution.
In the GHM, it is assumed that a single diploid locus has three alleles A_1, A_2, A_3 with
frequencies x_1, x_2, x_3, respectively. The fitnesses of the genotypes are taken as follows [69]:

w_11 = α + 2βx_1x_2,  w_12 = α + x_1x_1,  w_13 = α + x_3x_3,
w_21 = α + x_1x_1,  w_22 = α + 2βx_2x_3,  w_23 = α + x_2x_2,
w_31 = α + x_3x_3,  w_32 = α + x_2x_2,  w_33 = α + 2βx_3x_1.   (119)
Due to conservation of total allele frequency, we have x3 = 1 − x1 − x2 . The fitness of each
genotype has two components influenced by the external physical environment described by a
strength α and biotic interactions characterized by the strength β.
The scheme of the GHM for evolution with frequency-dependent selection is illustrated in
Figure 31. In this random mating population, three groups are formed and within each group
two genotype members help each other. This reflects kin selection: one's relatives, carrying many
of the same genes, help it to survive and reproduce.

Figure 31. Scheme for GHM (from Ref. [69]).



The heterozygote helps the homozygote with the assistance factor β (β ≥ 0), while the homozygote
helps the heterozygote with the constant assistance factor 1. We can obtain a frequency-dependent
selection force by inserting these fitnesses (Equation (119)) into Equation (111). This selection has
∇ × (G^{-1} · F^S) ≠ 0. Therefore, it induces a non-zero steady-state probability flux, J ≠ 0,
which breaks detailed balance.
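To see numerically that this selection force breaks detailed balance, the sketch below builds the fitnesses of Equation (119), forms G^{-1} · F^S in the reduced (x_1, x_2) space via Equation (115), and evaluates its two-dimensional curl by central finite differences at one interior point. The values of α, β, and the test point are illustrative assumptions.

```python
# Minimal sketch: GHM selection force (fitnesses of Equation (119)) and a
# finite-difference check that curl(G^{-1} F^S) != 0, i.e. detailed balance is
# broken.  alpha, beta and the evaluation point are illustrative choices.
import numpy as np

alpha, beta = 1.0, 0.1

def fitness(x1, x2):
    x3 = 1.0 - x1 - x2
    return np.array([
        [alpha + 2*beta*x1*x2, alpha + x1*x1,        alpha + x3*x3],
        [alpha + x1*x1,        alpha + 2*beta*x2*x3, alpha + x2*x2],
        [alpha + x3*x3,        alpha + x2*x2,        alpha + 2*beta*x3*x1]])

def g_inv_FS(x1, x2):
    """(G^{-1} F^S) in the reduced (x1, x2) space, using Equation (115)."""
    x = np.array([x1, x2, 1.0 - x1 - x2])
    w = fitness(x1, x2)
    wi = w @ x
    wbar = x @ w @ x
    FS = x * (wi - wbar) / wbar              # Equation (111) for all three alleles
    return FS[:2] / x[:2] - FS[2] / x[2]     # Equation (115)

# two-dimensional curl d(v2)/dx1 - d(v1)/dx2 by central finite differences
h, x1, x2 = 1e-5, 0.3, 0.4
curl = ((g_inv_FS(x1 + h, x2)[1] - g_inv_FS(x1 - h, x2)[1]) / (2 * h)
        - (g_inv_FS(x1, x2 + h)[0] - g_inv_FS(x1, x2 - h)[0]) / (2 * h))
print("curl of G^{-1} F^S at (0.3, 0.4):", curl)   # non-zero => no detailed balance
```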

8.6. The underlying potential–flux landscape for general evolutionary dynamics


Wright's adaptive fitness landscape is of a gradient nature. It is invalid in the general case. An
example of this inadequacy is provided by a population system under frequency-dependent selection.
We can see this in a simple case. For instance, limit cycle dynamics can emerge under
frequency-dependent selection. The fitness cannot increase monotonically along the limit cycle,
as the gradient nature of Wright's landscape, always searching for the optimum, would require.
If the fitness is constant on the cycle, then the evolution according to Wright will stop. On the
other hand, if the fitness is not uniform on the limit cycle, another issue arises. When finishing
the cycle, the ending point is supposed to have a higher fitness, from the gradient nature of
Wright's landscape always searching for the optimum. However, the ending point has the same value
of the potential as the initial point due to the full cycle back to itself. This creates a paradox. The
potential and flux landscape provides a way of obtaining the true adaptive landscape for evolu-
tion and resolving this paradox. In the landscape and flux theory, there are two driving forces
for the dynamics of the non-equilibrium systems. One is the potential landscape related to the
steady-state probability distribution and the other is related to the steady-state probability flux.
For the limit cycle, both forces are in action. While the potential landscape attracts the system
to the oscillation ring, it is the flux that drives the coherent oscillation on the ring. Thus, one
can have a constant intrinsic landscape on the oscillation ring with the same “fitness”, but the
dynamics still goes on through the curl flux circulating around the limit cycle. To illustrate this,
we will construct the Lyapunov function φ0 to quantify the true intrinsic adaptive landscape
for GHM. We have also considered the population landscape U for the finite population with
fluctuations.
We will now study the dynamics of the GHM. Four phases emerge under different parameter
strengths: the mono-stability phase, β = 0.03; a limit cycle phase, β = 0.09; coexistence of a
limit cycle and tri-stability phase, β = 0.11; the tri-stability phase, β = 0.17. The fluctuation
strength is taken as D = 3 × 10^{-4} for quantifying the population potential U = −ln Pss.
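A minimal sketch of the deterministic GHM dynamics dx/dt = F^S + F^M in the reduced (x_1, x_2) space is given below, reusing the fitnesses of Equation (119). The value of α, the uniform symmetric mutation rate m, and the integration settings are assumptions; whether the trajectory settles onto a limit cycle at β = 0.09 for these assumed values has to be checked by inspecting the output.

```python
# Minimal sketch: integrate the deterministic GHM dynamics dx/dt = F^S + F^M
# (Equations (111) and (112)) in the reduced (x1, x2) space.  alpha, the uniform
# mutation rate m and the initial condition are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, m = 1.0, 0.09, 1e-4

def fitness(x):
    x1, x2, x3 = x
    return np.array([
        [alpha + 2*beta*x1*x2, alpha + x1*x1,        alpha + x3*x3],
        [alpha + x1*x1,        alpha + 2*beta*x2*x3, alpha + x2*x2],
        [alpha + x3*x3,        alpha + x2*x2,        alpha + 2*beta*x3*x1]])

def rhs(t, y):
    x = np.array([y[0], y[1], 1.0 - y[0] - y[1]])
    w = fitness(x)
    wi = w @ x
    wbar = x @ w @ x
    FS = x * (wi - wbar) / wbar          # selection, Equation (111)
    FM = m * (1.0 - 3.0 * x)             # uniform symmetric mutation, Equation (112)
    return (FS + FM)[:2]

sol = solve_ivp(rhs, (0.0, 5000.0), [0.5, 0.3], max_step=1.0)
print("final (x1, x2):", sol.y[0, -1], sol.y[1, -1])
# A persistent oscillation of (x1, x2) at long times indicates the limit cycle
# (Red Queen-like endless evolution); convergence to a point indicates stability.
```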
To quantify the intrinsic and population potential landscape as well as the corresponding
steady-state curl flux for the GHM of evolution dynamics, we need to solve the correspond-
ing Fokker–Planck equation (Equation (117)) in steady state for U = − ln Pss , where Pss is
the steady-state probability, and HJ equation (Equation (118)) for φ0 . We apply the zero flux
boundary condition, n · J = n · [(FS + FM )P − D∇ · (GP)] = 0, for the Fokker–Planck equation
(Equation (117)). Here n represents the unit normal vector of the boundary. This condition corresponds
to the conservation of total probability. We obtain the numerical solution of the steady-state
Fokker–Planck equation Pss and the associated population potential U [69]. Correspondingly,
the boundary condition of HJ equation (Equation (118)) is taken in the form n · (FS + FM + G ·
∇φ0 ) = 0. This is the zero-fluctuation limit of the zero flux boundary condition. The HJ equation
is difficult to solve analytically. It is also challenging to obtain the numerical solution. The notion
of the viscosity solution was introduced to help solve the HJ equation. Based on this, a numerical
method – the level-set method – was developed. We can apply Mitchell's level-set toolbox to
solve the HJ equation for the intrinsic potential φ0 [102].


Figure 32. Landscapes of different phases. Top row: the population potential landscape U ((a) β = 0.03,
(b) β = 0.09, (c) β = 0.11, and (d) β = 0.17). In (b), the lower left corner sub-picture shows the enlarged
(U is enlarged by five times) valley bottom of the population potential landscape. Purple arrows represent
the flux (Jss /Pss ), while the black arrows represent the negative gradient of population potential landscape
(−∇U). Bottom row: the adaptive landscape defined by intrinsic potential φ0 ((e) β = 0.03, (f) β = 0.09,
(g) β = 0.11, and (h) β = 0.17). In (f), the lower left corner sub-picture shows the enlarged (φ0 is enlarged
by six times) valley bottom of the intrinsic potential landscape. Purple arrows represent the intrinsic flux
(V = (Jss /Pss )D→0 ), while the black arrows represent the negative gradient of intrinsic potential (−∇φ0 )
(from Ref. [69]).

Figure 32 shows the population potential U(the top row) and intrinsic potential φ0 (the bottom
row), respectively. The arrows on the landscape in Figure 32 show schematically the two compo-
nents of driving force: negative gradient of the landscape (−∇U for top row and −∇φ0 for bottom
row)(black arrows) and the steady-state probability curl flux (Jss /Pss for the top row and intrinsic
flux velocity V = (Jss /Pss )D→0 for the bottom row)(purple arrows). The arrows at the bottom
are the projections of those arrows. The flux (purple arrows) and the negative gradient of U (black
arrows) are almost perpendicular to each other at the bottom plane of Figure 32(a)–(d). We can
see that the flux Jss/Pss (purple arrows) and the negative gradient of φ0 (black arrows) are perpendicular
to each other at the bottom plane of Figure 32(e)–(h). The non-equilibrium intrinsic potential
landscape φ0 does not change with the fluctuation strength D. The effect of D on the probability is
through −ln Pss = U ≅ φ0/D under weak fluctuations.
Figure 32(a) and 32(e) show both the population landscape U and intrinsic landscape φ0 for
the mono-stable phase (β = 0.03). While the negative gradient of potential (black arrows) attracts
the system into the minimum of the funnel (basin of attractions), around the basin the system is

also driven by the intrinsic flux velocity (purple arrows). Without the former, the system will
not be attracted to the bottom of the funneled attractor basin. Without the latter, the system will
directly go down to the low potential basin. The effect of the intrinsic flux becomes more apparent
as the system approaches the bottom of the basin in a spiral fashion. It is interesting to see
that the intrinsic potential φ0 gives the essential topography (funnel) of the landscape for global
stability.
Figure 32(b) and 32(f) show both the population landscape U and intrinsic landscape φ0 for
limit cycle phase (β = 0.09). Under finite fluctuations, the topography of the population land-
scape shows a Mexican hat shape with a closed ring valley for oscillations. The forces from
negative gradient of the potential are almost negligible along the closed ring. But they are more
significant inside and outside the ring. Therefore, away from the closed ring, the evolution is attracted
by the negative landscape gradient toward the closed ring. On the Mexican hat closed ring valley,
the flux becomes dominant and provides the driving force for coherent oscillation. The direction
of the flux near the closed ring is parallel to the oscillation path. We can also see that the under-
lying intrinsic landscape φ0 has a Mexican hat like topography with constant φ0 values on the
oscillation ring valley. Notice that population landscape U is not constant along the ring since U
is a direct reflection of steady-state probability U = − ln Pss . The population landscape U is not
a Lyapunov function in general but captures more details than the intrinsic landscape φ0 . This
is particularly true for the inhomogeneity of the steady probability due to the inhomogeneous
speed on the ring. The non-equilibrium intrinsic potential φ0 characterizes the essential global
topography of the oscillation landscape.
Figure 32(c) and 32(g) show both the population landscape U and the intrinsic landscape φ0
for the coexistence of a limit cycle oscillation and tri-stability phase (β = 0.11). The topography
of the landscape shows three basins around a Mexican hat ring valley. The steady-state probability
flux on the closed ring valley landscape of the limit cycle appears to be more significant than that
of the negative gradient of the potential landscape. It drives the coherent oscillations. But the flux
is less significant compared with gradient force around the three stable basins of attraction. The
direction of the flux on the limit cycle ring is parallel to the oscillation path. The flux curls around
near the bottom of the three stable basins of attraction. The forces from negative gradient of the
potential landscape are almost negligible on the closed ring. They are more significant away
from the oscillation ring and the bottom of the three basin valleys. Therefore, away from the
closed ring and the bottom of the three basins of attraction, the evolution dynamics is attracted
by the landscape toward them. We see again that φ0 is constant on the limit cycle oscillation
ring.
Figure 32(d) and 32(h) show both the population landscape U and the intrinsic landscape φ0
for the tri-stability phase (β = 0.17). Three basins of attraction emerge with equal depths due to
the symmetry. The direction of the flux around the linking region between the two attractors is
parallel to the link. The directions of the flux are curling around at the bottom of the three basins of
attraction. The forces from the negative gradient of the potential landscape are less significant along
the linking regions between two attractors (away from the attractors) and more significant
near the basins of attraction. Therefore, near the basins of attraction, evolution is attracted by
the landscape toward the basins.

8.7. Generalizing Fisher’s FTNS


The Wright adaptive landscape theory of never-decreasing fitness and Fisher's fundamental theorem of
natural selection (FTNS) on the adaptive rate resulting from natural selection are in fact intimately related.
The validity of Wright's adaptive fitness landscape is guaranteed by Fisher's FTNS [204]. The mathematical form of the FTNS in

constant environments is given as follows [205]:

dw̄/dt = V_A(w)/w̄,   (120)

where V_A(w) = 2 Σ_{i=1}^n x_i (w_i − w̄)^2 is the additive genetic variance. Therefore, the change of the
mean fitness comes from the effects of selection favoring the fittest individuals among the variations.
Because the variance is always greater than or equal to zero, Fisher's fundamental theorem implies that
the mean fitness never decreases for a constant environment. However, the mean fitness is not a
Lyapunov function in general. Fisher’s FTNS and Wright’s adaptive fitness landscape are only
valid in the frequency-independent case. What is the relationship between the adaptive rate and
genetic variations in the general evolutionary dynamics then? We can generalize Fisher’s FTNS
with potential–flux landscape theory.
We will study the general case for evolution dynamics and quantify the adaptive rate below.
From the landscape and flux theory, for any positive definite matrix D, the general selection force
can be decomposed into the intrinsic potential φ_0(D) and the corresponding intrinsic flux velocity
V(D) [69]:

F = −D · ∇φ_0 + V.   (121)

Since V · ∇φ_0 = 0, we get F · D^{-1} · F = ∇φ_0 · D · ∇φ_0 + V · D^{-1} · V. Thus, the adaptive rate is
given by

dφ_0/dt = −∇φ_0 · D · ∇φ_0 = −F · D^{-1} · F + V · D^{-1} · V.   (122)

In evolution dynamics, the diffusion matrix D has a special biological meaning. Namely it
describes the sampling nature of the random mating. Therefore, we are interested in the φ0 (D)
and dφ0 (D)/dt. We will see that the dφ0 (D)/dt is related to the genetic variance. From this, we
can generalize Fisher’s FTNS.
We can check that

V_A(w^{(i)})/(w̄^{(i)})^2 = 2 F^{(i)} · (D^{-1})^{(i)} · F^{(i)},   (123)

where V_A(w^{(i)}) = 2 Σ_{k=1}^n X_k^{(i)} (w_k^{(i)} − w̄^{(i)})^2. We take the sum with respect to i to get
F · D^{-1} · F = (1/2) Σ_{i=1}^N V_A(w^{(i)})/(w̄^{(i)})^2 [69]. Finally, we have

dφ_0(D)/dt = −(1/2) V_A(w)/w̄^2 + V(D) · D^{-1} · V(D).   (124)
The mono-species population system under frequency-independent selection is in equilibrium,
where the intrinsic flux velocity V(D) = 0 and the intrinsic non-equilibrium potential
φ_0 = −(1/2) ln w̄. So for the case of a frequency-independent mono-species population system,
Equation (124) reduces to Fisher's FTNS, dw̄/dt = V_A(w)/w̄. We call Equation (124) the
generalized FTNS. It uncovers the underlying connection of the adaptive rate. One component of
this connection is the genetic variance as originally proposed by Fisher, which is related to the
non-equilibrium intrinsic potential φ0 . The other component of this connection is from the corre-
sponding intrinsic flux velocity V, missing in Fisher’s original FTNS. As we can see clearly in the
general evolution dynamics, the adaptation rate is not only determined by the genetic variance,
but also by the curl flux resulting from the complex biotic interactions, which breaks the detailed
balance of the system [69].

8.8. The Red Queen hypothesis explained by a generalized FTNS


We now investigate what the generalized FTNS implies. We can gain insights from the case
when dφ_0(D)/dt = 0, that is, when the overall population system reaches its optima. For frequency-independent
selection, the system is in equilibrium with detailed balance, that is, V(D) = 0,
due to ∇ × (D^{-1} · F) = ∇ × [(1/2)∇(Σ_{i=1}^N ln w̄^{(i)})] = 0. This leads to the genetic variance of every
species being V_A(w^{(i)}) = 0 [69]. Thus, natural selection cannot change the allele frequencies.
However, for general evolution dynamics, the system is often not in equilibrium. The detailed
balance is broken and the intrinsic flux is not equal to zero, that is, for an evolution system with
frequency-dependent selection. The non-zero intrinsic flux velocity, V(D) ≠ 0, gives rise to
Σ_{i=1}^N V_A(w^{(i)})/(w̄^{(i)})^2 = 2 V(D) · D^{-1} · V(D) ≥ 0. The non-zero genetic variance implies that
the natural selection can still act on some species to change the allele frequencies
even if the system reaches its overall population optima. We explore further which
species have a non-zero genetic variance. Since D is a block diagonal matrix, we can decompose
the selection force acting on the ith species from Equation (121) in the form F^{(i)} =
−D^{(i)} · (∇φ_0)^{(i)} + V^{(i)}. Inserting this into Equation (123) and using the orthogonality feature
of V · ∇φ0 = 0, we obtain VA (w(i) )/(w̄(i) )2 = 2[(∇φ0 )(i) · D(i) · (∇φ0 )(i) + V(i) · (D−1 )(i) · V(i) ].
Moreover, when dφ0 (D)/dt = 0, from Equation (122), we obtain ∇φ0 = 0 [69]. Thus, we get

V_A(w^{(i)}) = 2(w̄^{(i)})^2 [V^{(i)}(D) · (D^{-1})^{(i)} · V^{(i)}(D)] ≥ 0.   (125)

If the intrinsic flux velocity has a component in the ith subspace, the non-zero V^{(i)} gives rise to a genetic
variance for the ith species. Consequently, the ith species will continue to evolve even when the
overall population reaches its optima.
Let us explore the origin of the non-zero intrinsic flux velocity. We can decompose the selection
force into two components, one from the external environment and the other from the internal
interactions among individuals. The former is the frequency-independent selection. It provides a
gradient force, as in Wright's picture. The non-zero intrinsic flux velocity originates from
the internal interactions within the same species or between different species. Thus, biotic interactions
can give rise to endless evolution of some species. The effect that originates from the
interactions between different species was described by Van Valen as the Red Queen hypothesis
[86]. According to the Red Queen hypothesis, the biotic interactions between different species
can lead to endless evolution for some species even when the underlying physical environments
are unchanged. The coevolving systems often enter into a limit cycle or chaos phase, leading to
Red Queen dynamics. Potential and flux landscape theory provides a theoretical foundation and
quantitative explanation for this effect [69]. We can illustrate the Red Queen hypothesis in action
using an example. In a parasite–host system, the mutual interactions can sustain a genetic vari-
ance which maintains genotypic diversification in host populations through sexual reproduction.
The host species benefit from the genotypic diversification in resisting the parasites, and thus
sexual reproduction can persist.
Conventional evolutionary theory neglects the effects of biotic interactions on the evolu-
tionary process. Consequently, the evolutionary process is one of adapting to a fixed landscape (when
the external physical environment is unchanged). More recently, evolutionary game theory has
been proposed and developed to study the coevolving systems [206]. According to evolution-
ary game theory, the landscape of a biotic system is continually changing by coevolving with
other biotic systems [208]. From potential and flux landscape theory of evolution, we can quan-
tify the Lyapunov function for the system including all the biotic interactions. We can find the
non-equilibrium intrinsic potential to quantify the underlying adaptive landscape. The intrinsic
potential landscape provides the information on where the evolution optima are. The intrinsic
flux can drive some species into an endless cyclic evolution even when the optima are reached.

9. Ecology
Ecology is the subject that is concerned with the stability and dynamic behavior of interacting
species for a specific time window. Potential and flux landscape theory can be applied to ecosys-
tems [81]. Again the driving forces of ecological dynamics are determined by both a gradient of
the potential landscape and a curl probability flux from the in or out flow of the energy, materials
or information to the ecosystems. The underlying intrinsic potential landscape provides a global
Lyapunov function giving a quantitative measure for the global stability of the ecosystems. We
can study several typical and important ecological systems that involve predation, competition,
and mutualism. Single attractor, multiple attractors, and limit cycle attractors emerge from these
studies. Both the landscape gradient and curl flux determine the dynamics of these ecosystems.
Landscape flux theory provides a way to explore the global stability, function, and dynamics of
ecosystems [81].

9.1. Introduction
Ecosystems are mutually interacting systems that exchange energy, materials, and information with their environments. The structure and the function of ecosystems are determined by
both cooperation and competition [211,212]. Ecosystems are capable of regulating themselves to
maintain a certain stability. Global stability is one of the most fundamental aspects of ecological
systems. The stability issue is directly relevant to the existence of every species. The stability is
influenced by many sources, such as the structures of the interactions and the nature of the envi-
ronment. Studies of the stability of ecosystems are important for uncovering the underlying laws and
mechanisms governing species and populations [211,212].
Ecosystems are complex and often involve many forms of interactions among their compo-
nents. The underlying interactions are usually nonlinear. The nonlinear interactions can lead to
complex dynamics. These systems can be mathematically described by a set of nonlinear differen-
tial equations [213,214]. There have been many studies on the stability of ecosystems [213–221].
Most of these studies have been focused on the local linear stability analysis. The global stabil-
ity of the ecological systems is still challenging. Moreover, the connection between the global
characterization of the ecological systems and the dynamics of the components is still not clear.
Past researchers have explored dynamical systems with the Lyapunov function approach to investigate global stability. The analytical Lyapunov function for a predation model was first
suggested by Volterra in 1931 [214]. Since then, significant efforts have been devoted to find the
analytical Lyapunov function [215–218,220,221] for a variety of ecological systems. However, it
is still a great challenge to find the Lyapunov function of the more general and realistic ecological
models even though a few highly simplified ones have been worked out [215,218]. Up to now,
there is no general approach and recipe for finding the Lyapunov function. One has to work on
a case-by-case basis. Furthermore, there is not even a guarantee that a Lyapunov function exists
for a more complex system. Importantly, the Lyapunov function and the stability of an important
class of ecosystems described by the predation model with a solution of a limit cycle have hardly
been explored. Here we will provide a general approach to explore the Lyapunov function and
therefore the global stability of ecological systems.
In the earlier studies, people have focused on deterministic models. However, both external
and intrinsic fluctuations of ecosystems are present and unavoidable. Environmental fluctuations
can influence the global ecological behaviors. The intrinsic fluctuations are due to the finite size of the ecosystem.

Figure 33. (a) Predation model, (b) competition model, and (c) mutualism model (from Ref. [81]).

It is widely believed that the analysis of global stability is important for a full
understanding of the robustness of ecological systems under fluctuations [174,215–217,221,222].
As discussed, it is still challenging to explore global stability with deterministic dynamics. A
probabilistic (P) description is necessary due to the presence of fluctuations in the real systems.
The probabilistic description has the advantage of quantifying the weights of the whole popula-
tion state space and therefore is global. The potential landscape U is linked with the probability P by U ∼ −ln P and can be used to address the global stability and function of ecosystems.
Here we develop a potential and flux landscape theory to quantify the global stability and
dynamics for ecosystems [13,16,26,57,69,71,81]. Again the underlying intrinsic potential land-
scape is a global Lyapunov function for ecosystem. The topography of the landscape therefore
provides a quantitative measure for the global stability of the ecosystems. The dynamics of the
ecosystems is determined by both the gradient of the potential landscape and curl probability flux
from the environmental exchange of energy, materials, and information, breaking the detailed
balance.
The landscape and flux theory is applied to three important ecosystems: predation, com-
petition, and mutualism. The Lotka–Volterra model of the interactions between two species is the most famous ecological model, proposed independently by Lotka [213] and Volterra [214]. Over
the years, this model has attracted significant attention for exploring the underlying dynami-
cal process of the ecology. In ecosystems, the relationship between species can be grouped
into two categories: a negative antagonism interaction ( − ) and a positive mutualism interac-
tion (+). We show the different models in Figure 33. Predation shows the activation–repression
relationship (+/−) where one species SA is disfavored, while the other species SB benefits in
Figure 33(a); competition shows a mutual repression relationship (−/−) where each species SA
or SB is influenced negatively by the other one in Figure 33(b); mutualism shows a mutual acti-
vation relationship (+/+) where both species SA and SB benefit from interactions of the other in
Figure 33(c) [212,216,221].
For predation, predators sustain their lives by the consumption of prey. Without the pres-
ence of prey, predators are not able to survive. On the other hand, the prey can sustain their
lives and grow without predators. The presence of the prey controls the growth of the predators.
The predator–prey (predation) system emerges from such interactions [212–214]. The predator–
prey system can have one stable state or stable limit cycle. Competition between species often
happens when they rely on the same resources. Competition can promote the ecological char-
acteristics such as species differentiation and generate certain biological community structures.
The system can have multi-stable states. Mutualism means mutual activation benefiting each individual. The mutualism system can also have multi-stable states. These aforementioned relations show the complexity of biological communities, their stable structures, and the ecological
balance [212,216,221]. These models are also important for population biology because of the
applications to the real biological world.

9.2. The ecological dynamical models


The ecosystems can be described by a set of nonlinear ordinary differential equations with species
interactions. We will add some restrictions on the models to make them more reasonable
and closer to the real world [212,223], such as avoidance of exponential growth and existence of
lower critical bound for each species.
(1) Predation model
The general Holling type II response for the prey, which accounts for the nonlinearity of the interactions, can be included in a two-species predator–prey model [212]:

dN1/dt = N1(1 − N1) − aN1N2/(N1 + d) = F1(N1, N2),
dN2/dt = bN2(1 − N2/N1) = F2(N1, N2),    (126)

where N1 represents the normalized population of prey, while N2 represents the normalized pop-
ulation of predator. The parameter a represents the relative death rate or the interaction strength
for the prey. The parameter b represents the ratio of the linear birth rate of the predator to that of
the prey. The parameter d represents the relative saturation rate of the prey. The system has two
saddle points: one is at (0, 0) representing that none of the species exists and the second one at
(1, 0) representing the prey at their carrying capacity in the absence of predators [212]. The sec-
ond point is stable along the N1 population axis and unstable along the N2 population axis. There
is also a critical point which is the unstable center of the limit cycle or the stable point under
different parameter ranges. The system has a stable limit cycle oscillation when the parameters
are set to a = 1.5, b = 0.1, d = 0.2.
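As a quick numerical check of the limit cycle described above, the following sketch integrates Equation (126) at the quoted parameter values a = 1.5, b = 0.1, d = 0.2; the initial condition, integration time, and solver tolerances are illustrative choices, not taken from Ref. [212].

```python
# Sketch: integrate the predation model, Equation (126), at the limit-cycle
# parameters quoted in the text (a = 1.5, b = 0.1, d = 0.2). Initial condition
# and solver settings are illustrative choices, not taken from Ref. [212].
import numpy as np
from scipy.integrate import solve_ivp

a, b, d = 1.5, 0.1, 0.2

def predation(t, N):
    N1, N2 = N
    dN1 = N1 * (1.0 - N1) - a * N1 * N2 / (N1 + d)  # prey with Holling type II predation
    dN2 = b * N2 * (1.0 - N2 / N1)                  # predator limited by prey abundance
    return [dN1, dN2]

sol = solve_ivp(predation, (0.0, 400.0), [0.5, 0.2],
                dense_output=True, rtol=1e-8, atol=1e-10)

# Discard the transient and inspect the late-time orbit: it settles onto a
# closed loop in the (N1, N2) plane, i.e. the stable limit cycle.
t_late = np.linspace(300.0, 400.0, 2000)
N1, N2 = sol.sol(t_late)
print("N1 range on the cycle:", N1.min(), N1.max())
print("N2 range on the cycle:", N2.min(), N2.max())
```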
(2) Competition model
A realistic competitive model should have a lower critical bound, which means the popula-
tions would perish once the size of the population is below this threshold. The model is shown as
follows [223]:

dN1/dt = N1(N1 − L1)(1 − N1) − a1N1N2 = F1(N1, N2),
dN2/dt = αN2((N2 − L2)(1 − N2) − a2N1) = F2(N1, N2),    (127)

where N1 and N2 represent the normalized populations of the two competitive species SA and SB .
L1 , L2 represent the lower critical bounds for species SA , SB , respectively. The ranges of L1 , L2 are
from 0 to 1. a1, a2 are the competitive capabilities of species SA, SB, respectively. α is the relative
rate of increase for species SB [223].
We have performed a phase analysis of the system. Both populations can be at the extinct state (0, 0) of the system (marked as O), because both groups have a lower critical density. When the competition from the other species is weak, the states (1, 0) (marked as A, meaning that species SA exists alone) and (0, 1) (marked as B, meaning that species SB exists alone) are locally stable. Besides the above three states, when the values of a1 and a2 meet certain conditions, the system can have another locally stable state corresponding to the coexistence of the two species (marked as C). Here the parameters are set to a1 = a2 = 0.1, L1 = L2 = 0.3, α = 1.0 so that the system has these four states.
(3) Mutualism model
We consider the case of two mutualistic species, both having a lower critical bound. This realistic mutualism model can be described as [223]:

dN1/dt = N1(N1 − L1)(1 − N1) + a1N1N2 = F1(N1, N2),
dN2/dt = αN2((N2 − L2)(1 − N2) + a2N1) = F2(N1, N2),    (128)
where N1 and N2 represent the normalized populations of the two mutualism species SA and SB .
L1 , L2 represent lower critical points for species SA and SB , respectively. The ranges of L1 , L2 are
from 0 to 1. a1, a2 are the mutualism capabilities of species SA, SB, respectively. α is the relative
rate of increase for species SB [223].
We have performed a phase analysis of this system. Both populations can be at the extinct state given by the trivial solution (0, 0) of the system (marked as O), because the two groups have a lower critical density. In the absence of mutual help from the other species, the states (1, 0) (marked as A, meaning that species SA exists alone) and (0, 1) (marked as B, meaning that species SB exists alone) are locally stable. Besides the above three states, the system has another locally stable state corresponding to the coexistence of the two species (marked as C). Here, the parameters are set to a1 = a2 = 0.1, L1 = L2 = 0.5, α = 1.0 so that the system has these four states.
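The four locally stable states quoted above for the competition model (127) can be located numerically by integrating from a grid of initial conditions and collecting the distinct end points; flipping the sign of the interaction terms turns the same sketch into the mutualism model (128). The grid, integration time, and rounding tolerance below are illustrative assumptions.

```python
# Sketch: locate the locally stable states of the competition model (127) by
# integrating from a grid of initial conditions; setting SIGN = +1 turns the
# same right-hand side into the mutualism model (128). Grid, integration time
# and rounding tolerance are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

SIGN = -1                               # -1: competition (127), +1: mutualism (128)
a1 = a2 = 0.1
L1 = L2 = 0.3 if SIGN < 0 else 0.5      # lower critical bounds used in the text
alpha = 1.0

def rhs(t, N):
    N1, N2 = N
    dN1 = N1 * (N1 - L1) * (1.0 - N1) + SIGN * a1 * N1 * N2
    dN2 = alpha * N2 * ((N2 - L2) * (1.0 - N2) + SIGN * a2 * N1)
    return [dN1, dN2]

attractors = set()
for x0 in np.linspace(0.05, 1.2, 8):
    for y0 in np.linspace(0.05, 1.2, 8):
        sol = solve_ivp(rhs, (0.0, 2000.0), [x0, y0], rtol=1e-8, atol=1e-10)
        attractors.add(tuple(np.round(sol.y[:, -1], 2)))

# Expected end points: the extinct state O = (0, 0), the survival-alone states
# A = (1, 0) and B = (0, 1), and a coexistence state C.
print("stable states found:", sorted(attractors))
```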

9.3. The potential landscapes and fluxes of ecosystems: predation, competition, and
mutualism
The population potential landscape U (the top row) and the intrinsic potential landscape φ0 (the bot-
tom row) for predation, competition, and mutualism models are shown, respectively, in Figure 34.
The negative gradient of the population potential landscape −∇U at the top row and the intrinsic
potential landscape −∇φ0 at the bottom row are represented by the black arrows, while the curl
steady-state probability flux Jss /Pss at the top row and the intrinsic flux velocity at the bottom
row are represented by the purple arrows. The arrows at the bottom of each sub-figure are the projections of the associated arrows. The curl fluxes (purple arrows) are nearly orthogonal to the negative gradient of U (black arrows), shown at the bottom plane of Figure 34(a)–(c). The flux velocities (purple arrows) are orthogonal to the negative gradient of φ0 (black arrows), shown at the bottom plane of Figure 34(d)–(f).
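As an illustration of how such a population landscape and flux could be estimated in practice, the sketch below runs a long Langevin simulation of the predation model (126) with additive noise of strength D, histograms the trajectory into Pss, and forms U = −ln Pss and Jss = FPss − D∇Pss on a grid. The additive, isotropic noise model, the grid, and the run length are simplifying assumptions and not the self-consistent procedure used in Ref. [81].

```python
# Sketch: estimate the population landscape U = -ln(P_ss) and the steady-state
# flux J_ss = F P_ss - D grad(P_ss) for the predation model (126) from a long
# Langevin run with additive noise of strength D. The noise model, grid and
# run length are simplifying assumptions, not the procedure of Ref. [81].
import numpy as np

a, b, d, D = 1.5, 0.1, 0.2, 3e-5
rng = np.random.default_rng(0)

def F(N1, N2):
    return (N1 * (1 - N1) - a * N1 * N2 / (N1 + d),
            b * N2 * (1 - N2 / np.maximum(N1, 1e-6)))

# Euler-Maruyama simulation, histogrammed into a 2D probability P.
dt, nsteps = 1e-3, 2_000_000
edges = np.linspace(0.0, 1.2, 81)
counts = np.zeros((80, 80))
x = np.array([0.5, 0.2])
for _ in range(nsteps):
    f1, f2 = F(x[0], x[1])
    x = x + np.array([f1, f2]) * dt + np.sqrt(2 * D * dt) * rng.standard_normal(2)
    x = np.clip(x, 1e-4, 1.2 - 1e-9)
    i, j = np.searchsorted(edges, x, side="right") - 1
    counts[i, j] += 1

P = counts / counts.sum()
U = -np.log(P + 1e-12)                 # population potential landscape

# Steady-state flux components on the grid centres.
c = 0.5 * (edges[:-1] + edges[1:])
G1, G2 = np.meshgrid(c, c, indexing="ij")
F1, F2 = F(G1, G2)
dP1, dP2 = np.gradient(P, c, c)
J1, J2 = F1 * P - D * dP1, F2 * P - D * dP2
print("min U (ring valley):", U.min(), " max |J|:", np.hypot(J1, J2).max())
```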
Figure 34(a) and 34(d) show the population potential landscape U and intrinsic potential land-
scape φ0 for the predation model when the parameters are set to a = 1.5, b = 0.1, d = 0.2, D =
3 × 10−5. We can see that, when the fluctuations characterized by the diffusion coefficient are small, the underlying potential landscape has a distinct Mexican hat shape with an irregular and inhomogeneous closed ring valley, as shown in Figure 34(a). We can clearly see that the population
potential landscape is not uniformly distributed along the limit cycle closed ring. The intrinsic
potential landscape φ0 has a homogeneous closed ring valley along deterministic oscillation tra-
jectories with a constant value of φ0 shown in Figure 34(d). The intrinsic potential landscape φ0
as a Lyapunov function can quantify the global topography of the oscillation landscape of preda-
tion model. The figure shows that the potential is lower along the oscillation ring. The potential
landscape is higher with a center island inside the oscillation ring and mountain outside the oscil-
lation ring. The system is attracted to the closed oscillation ring by the force generated from the negative gradient of the landscape, −∇U or −∇φ0. The other driving force for the system from
the curl flux keeps the continuous periodical oscillation dynamics. Both landscape and flux are
required to quantify the dynamics of the non-equilibrium predation ecosystems. The oscillation
study here shows that when the number of predators increases, more prey will be consumed.
Then due to the shortage of food, the number of the predator will be reduced. The reduction of
the predators leads to the growth of the prey, then the number of predators increases again from
the rich prey. This is the origin of the limit cycle predator–prey dynamics.
Figure 34. Top row: the population potential landscape U ((a) predation model, (b) competition model, and (c) mutualism model). Purple arrows represent the flux velocity (Jss/Pss), while the black arrows represent the negative gradient of the population potential (−∇U). Bottom row: the intrinsic potential landscape φ0 ((d) predation model, (e) competition model, and (f) mutualism model). Purple arrows represent the intrinsic flux velocity (V = (Jss/Pss)|D→0), while the black arrows represent the negative gradient of the intrinsic potential (−∇φ0) (from Ref. [81]).

Figure 34(b) and 34(e) show the population potential landscape U and intrinsic potential landscape φ0 for the competition model when the parameters are set to a1 = a2 = 0.1, L1 = L2 = 0.3, α = 1.0, D = 0.01. We can see both the underlying population potential landscape and intrin-
sic potential landscape have four distinct basins around four locally stable states. These four
stable states are the survival alone state A of species SA , the survival alone state B of species SB , a
coexisting state C, and a mutually extinct state O. These figures show that the potential landscape
is relatively higher (and probability is relatively lower) on the extinct state (the state O) of these
two species due to the small lower critical points L1 , L2 for species. The potential landscapes of
the survival alone states A and B are lower (more stable) in potential than that of the coexisting
state C. This shows that the coexisting state is less stable than the species survival alone states
because they have competitive relations with each other. The forces from negative gradient of
the potential landscape are more significant away from the attractors and less significant near the
basins. Therefore, the system is attracted by the gradient of the landscape toward the four basins.
The directions of the flux are curling around among the basins.
Figure 34(c) and 34(f) show the population potential landscape U and intrinsic potential
landscape φ0 for the mutualism model when the parameters are set to a1 = a2 = 0.1, L1 = L2 =
0.5, α = 1.0, D = 0.01. Both the underlying population potential landscape and intrinsic poten-
tial landscape have distinct four basins. The basins are located around the four stable states. These
figures show that the potential landscape is the highest (and the probability is the lowest) at the extinct state O of these two species. The potential of the coexisting state C is lower than those of the survival-alone states A and B and of the extinct state O. This shows that the coexisting
state is more stable than the species alone state for the two species with mutualism relationship
from each other. The directions of the flux are curling around among the four basins. The system
is also attracted by the gradient of landscape toward the four basins.

9.4. Discussion
It is interesting to compare the landscape and flux theory of ecology with landscape ecology models. The landscape ecology models concentrate on spatial heterogeneity using probabilistic
methods [224,225]. These methods as well as the present theory all focus on the dynamical evo-
lution in probability. However, the present theory concentrates on the probability landscape and
flux in the population space rather than in the geographical space as in the landscape ecology
models.

10. Landscape and fluxes of neural networks


10.1. Introduction
Understanding brain function is a grand goal of biology. The brain is a complex dynami-
cal system [72,226–231]. The individual neurons connect with each other through synapses to
form the neural networks. The neural networks of the brain generate complex patterns of activity
related to biological functions, such as learning, long-term associative memory, working mem-
ory, olfaction, decision-making and thinking [232–235]. Many models have been suggested for
understanding how neural networks function. The Hodgkin–Huxley model provides a description
of a single neuron [229]. However, the brain functions are realized by the neural networks rather
than individual neurons.
Hopfield developed a model [230,231] to explore the global features of neural networks. Hop-
field showed that, for symmetric neural connections, a global energy landscape can be constructed
that decreases with time. As shown in Figure 35, the neural network from the initial starting point
follows a gradient path down to the nearest basin of attractor of the underlying energy landscape.
The basins of attraction store the memory formed by learning from specific enhanced wiring
Figure 35. The schematic diagram of the original computational energy function landscape of Hopfield
neural network (from Ref. [72]).

patterns. The memory retrieval process can then be from a cue (incomplete initial information) to
the corresponding memory (final complete information). This gives a clear picture of how neural
networks store their memory and retrieve the functions. However, in real neural networks, the
neural connections are mostly asymmetric. The original Hopfield analysis does not hold with
asymmetrical connections. Therefore, there is no easy way of finding out the underlying energy
function. Without an energy function, the global stability and function of the neural networks
are hard to explore. In this study, we will explore the global behavior of neural networks for the
general case of both symmetric and asymmetrical synaptic connections.
Here, we will apply potential and flux landscape theory for neural networks. The driving
force of the neural network dynamics is determined by both the gradient of the potential and a
curl probability flux. The curl flux can generate coherent oscillations, which are not possible with
a pure gradient force. The potential landscape still functions as a Lyapunov function crucial for
quantifying the global stability even with asymmetrical connections [13,17,60,72] of the neural
networks.
The original Hopfield model shows good associative memory features of the underlying neural network. The network state always goes down to certain fixed point attractors with the stored memories. However, evidence has accumulated that oscillations also play important roles in cognitive processes [72,233,235]; for example, the theta rhythm is enhanced in the neocortex during working memory [236] and an enhanced gamma rhythm is related to attention [237,238]. However, the orig-
inal Hopfield model does not hold for asymmetrical connections and cannot be used to describe
this oscillation behavior. Landscape and flux theory provides a way to explore the global features
of neural networks that include the oscillations [13,14,17,72].
First, we will study a Hopfield associative memory network including 20 model neurons.
The synaptic strengths of neural connections are quantitatively represented by the connection
strength parameters Tij [230,231]. We uncovered the probabilistic potential landscape and the
corresponding Lyapunov function for this neural circuit, not only for symmetric connections of
the original Hopfield model where the dynamics is dictated by the gradient of the potential, but
also for asymmetric connections where the original Hopfield analysis does not hold. We can
explore the effects of the connections having different degrees of asymmetry on the behaviors of
the circuits and the robustness of the neural networks in terms of landscape topography through
barrier heights.
Neural networks with strong asymmetric connections can generate the limit cycle oscilla-
tions. However, the oscillations cannot occur for the symmetric connections due to the gradient
feature of the dynamics. The corresponding potential landscape for limit cycle oscillations shows
a Mexican hat closed-ring shape topology. The global stability of oscillation can be quantified
through the barrier height of the center island of the hat. The dynamics of the neural networks
is determined by both the gradient of the landscape and the probability flux. While the gradient
force attracts the network down to the ring, the flux becomes the driving force for the coherent
oscillations on the ring [13,17,59,72]. The probability flux is closely related to the asymmetric
part of the neural connections.
We can explore the period and coherence of the oscillations for asymmetric neural networks and their relationships with the landscape topography. Both the flux and the landscape
are crucial for the process of continuous memory retrievals in the oscillation attractors. We sug-
gest that flux may provide driving force for the associations among various memory basins. The
connections with different degree of asymmetry influence the capacity of memory [72].
Potential and flux theory can be applied to a rapid-eye-movement (REM) sleep cycle model
[72,239,240]. We performed a global sensitivity analysis based on the landscape topography to
explore the influences of the key factors such as release of acetylcholine and norepinephrine on
the stability and function of the underlying neural network. Furthermore, we find that the flux is
crucial for both the stability and the period of REM sleep rhythms.

10.2. The dynamics of general neural networks


We start from the dynamical equations of Hopfield neural network with N neurons [231]:
Fi = dui/dt = (1/Ci) ( Σ_{j=1}^{N} Ti,j fj(uj) − ui/Ri + Ii ),   i = 1, . . . , N. (129)

The variable ui is the effective input action potential of the neuron i. The action potential u
changes with respect to time in the process of charging and discharging of individual neuron.
The collection of action potentials of different neurons can be used to represent the states of
the neuron. Ci represents the capacitance and Ri represents the resistance of the neuron i. Ti,j
represents the strength of the connection from neuron j to neuron i. The function fi (ui ) is the
firing rate of neuron i. It has a sigmoid functional form. The strength of the synaptic current into
a postsynaptic neuron i due to a presynaptic neuron j is proportional to the product of the firing rate of the presynaptic neuron j, fj(uj), and the strength of the synaptic connection Ti,j from j to i. Therefore,
the synaptic current can be represented by Ti,j fj (uj ). The inputs of each neuron have contributions
from three sources: postsynaptic currents from other neurons, leakage current due to the finite
input resistance, and input currents Ii from other neurons outside the circuit [72,231].
In the original Hopfield model, the strength of synapse Ti,j must be equal to Tj,i . Therefore,
the connection strengths between neurons are symmetric. In this study, we will explore more
general neural networks without this restriction and include the asymmetric connections between
neurons.
We will first explore the underlying stochastic dynamics of neural networks under fluctuations
from the corresponding Fokker–Planck diffusion equation for the probability evolution. We can
then obtain the Lyapunov function φ0 for a general neural network by solving the following HJ
equation from the leading order expansion in fluctuation strength characterized by the diffusion
coefficient of the steady-state Fokker–Planck equation [69,72]:


Σ_{i=1}^{n} Fi(u) ∂φ0(u)/∂ui + Σ_{i=1}^{n} Σ_{j=1}^{n} Dij(u) (∂φ0(u)/∂ui)(∂φ0(u)/∂uj) = 0. (130)

For a symmetric neural network (Tij = Tji ), we can see right away that there exists a Lyapunov
function which is the energy E of the system as [230,231]

E = −(1/2) Σ_{i,j=1}^{N} Tij fj(uj) fi(ui) + Σ_{i=1}^{N} (1/Ri) ∫_0^{ui} ξ fi′(ξ) dξ − Σ_{i=1}^{N} Ii fi(ui). (131)

For this symmetric case, it is easy to see that


dE/dt = −Σ_{i}^{N} (dfi(ui)/dt) ( Σ_{j}^{N} Tij fj(uj) − ui/Ri + Ii )
      = −Σ_{i}^{N} (dfi(ui)/dt) Ci u̇i
      = −Σ_{i}^{N} Ci fi′(ui) u̇i². (132)

Since the Ci are always positive and each function fi increases monotonically with ui, the function E always decreases with time. In contrast to the energy defined by Hopfield, φ0 is a Lyapunov function regardless of whether the neural network is symmetrically connected or not. In fact, φ0 reduces to the energy function only when the connections of the neural network are symmetric. In general, one needs to solve the HJ equation to quantify φ0.
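The monotonic decrease in Equation (132) can be verified numerically. The short sketch below integrates Equation (129) with a randomly chosen symmetric Tij, the sigmoid fi(u) = tanh(u), and unit Ci and Ri, and checks that the energy (131) never increases; these choices are illustrative and are not the 20-neuron circuit of Ref. [72].

```python
# Numerical check of Equation (132): for symmetric connections the energy (131)
# never increases along the dynamics (129). The sigmoid f(u) = tanh(u), unit
# C_i and R_i, and the random symmetric T_ij are illustrative assumptions,
# not the 20-neuron circuit of Ref. [72].
import numpy as np

rng = np.random.default_rng(1)
N = 10
T = rng.normal(size=(N, N))
T = 0.5 * (T + T.T)                      # enforce T_ij = T_ji
np.fill_diagonal(T, 0.0)                 # no self-connections
I = rng.normal(scale=0.1, size=N)
C = np.ones(N)
R = np.ones(N)

f = np.tanh
fprime = lambda u: 1.0 / np.cosh(u) ** 2

def energy(u):
    # E = -1/2 sum_ij T_ij f_j(u_j) f_i(u_i)
    #     + sum_i (1/R_i) int_0^{u_i} xi f_i'(xi) dxi - sum_i I_i f_i(u_i)
    E = -0.5 * f(u) @ T @ f(u) - I @ f(u)
    for i, ui in enumerate(u):
        xs = np.linspace(0.0, ui, 400)
        ys = xs * fprime(xs)
        E += np.sum(0.5 * (ys[1:] + ys[:-1]) * np.diff(xs)) / R[i]
    return E

u = rng.normal(size=N)
dt = 1e-3
E_prev = energy(u)
for step in range(20000):
    u = u + dt * (T @ f(u) - u / R + I) / C     # Equation (129)
    if step % 1000 == 999:
        E_now = energy(u)
        assert E_now <= E_prev + 1e-6, "energy increased"
        E_prev = E_now
print("final energy:", energy(u), "- non-increasing, as Eq. (132) predicts")
```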
For symmetric connections, the driving force of the neural network can be written as a gradient, Fi = −A(u)∇E(u), where Aij = δij/(Ci fi′(ui)). For more general asymmetric connections, the driving force of the neural network can no longer be written as the pure gradient of a potential. Instead, according to the landscape and flux theory, the driving force is determined by both the gradient of a potential closely related to the steady-state probability distribution and a curl flux. As we will see, complex neural behaviors such as oscillations can emerge in an asymmetric neural circuit. Such oscillation behavior is impossible for the Hopfield model with symmetric neural connections. The non-zero flux J plays an important role in driving the coherent oscillations.

10.3. Potential and flux landscape of neural networks


From the dynamics of the general neural networks, we started with the corresponding proba-
bilistic diffusion equation and obtained the steady-state probability distributions Pss based on
a self-consistent mean field method. We then quantify the underlying potential landscape (the
population potential landscape U here is defined as U(x) = − ln(Pss (x))) [13,14,17,60,72]. It is
challenging to visualize the multidimensional state space of neural activity u. We can, however,
select two state variables from the 20 in the neural network model to project the landscape by
integrating out the other 18 variables.
Figure 36. The 3D potential landscape figures from restricted symmetric circuit to totally unrestricted
circuit. D = 1 for multi-stable case and D = 0.001 for oscillation (from Ref. [72]).

We first study a symmetric neural network from Hopfield model [231]. Figure 36(a) shows
the landscape of the neural network. The symmetric network has eight basins of attractors. Each
attractor represents a state that stores memory. When the neural network is cued to start with
initial condition with incomplete information, it will go down to the nearest basin of attractor
with a memory of complete information. This dynamical neural network guarantees the memory
retrieval from a cue to the corresponding memory [72].
As we discussed earlier, we quantified a Lyapunov function φ0 from the leading-order expansion of the potential U(x) in the diffusion coefficient D. It is difficult to solve the equation for φ0 directly due to the high dimensionality. We can apply a linear fit of DU with respect to the diffusion coefficient D to obtain φ0 approximately. Figure 37 illustrates the intrinsic
potential landscape φ0 of the symmetric neural circuit. There are also eight basins of attractor.
This landscape looks similar to the Hopfield energy landscape shown in Figure 35. Different
from the energy defined by the Hopfield analysis which only works for the neural networks with
symmetric connections, φ0 is a Lyapunov function for neural network with either symmetric or
asymmetrical connections. The landscape construction provides a general way to quantify the
global features of the asymmetric neural circuits with Lyapunov function φ0 .
To explore the role of asymmetry in neural connections, we chose a set of Tij randomly [72].
First, we set symmetric connections, Tij = Tji for i ≠ j. Here Tii is set to zero, indicating
that neurons do not connect with themselves. Figure 36(a) shows the potential landscapes of this
symmetric circuit, and we can see eight basins of attractor. We relaxed the restrictions on Tij .
We set Tij ≠ Tji for negative Tij when | i − j | > 4 and | i − j | > 5, and quantified the landscapes
in Figure 36(b) and Figure 36(c), respectively. The landscapes show a trend that the number
of stored memories decreases gradually. When we explored the original neural network with-
out any restrictions on the connections Tij , there exists a possibility where all the stable fixed
points disappear and a limit cycle emerges as shown in Figure 36(d). For this case, the potential
Figure 37. The potential landscape φ0 of the symmetric circuit (from Ref. [72]).

landscape has a Mexican hat shape. These figures show that as the neural network becomes less
symmetric, the number of point attractors decreases. This result does not necessarily mean that
the memory capacity of an asymmetric neural network must be smaller than a symmetric one.
In fact, the limit cycle oscillation also stores the memory, but in a continuous fashion on the
oscillation ring rather than in the isolated basin.
As shown in Figure 36(d), oscillation can emerge for unrestricted Tij . The system cannot
oscillate if it is driven only by the gradient force of the underlying landscape resulting from the
symmetric neural connections. The driving force F in the general neural networks usually cannot
be written as a pure gradient of a potential. As we have discussed earlier, the driving force F for
a neural network is determined by both gradient of a potential landscape related to steady-state
probability distribution and a curl flux [13,14,17,60,72].
In general, the neural network dynamics will stop at a point attractor only when the Lyapunov
function reaches a global minimum. As shown earlier, the neural network can also oscillate where
the values of Lyapunov function φ0 on the oscillation ring are constant. The quantified Lyapunov
function φ0 provides a good quantitative description of the global intrinsic features of the underly-
ing neural networks. The population potential U is not a Lyapunov function [69,72]. However, the
population landscape potential U captures more details of the neural networks. This is because
the population landscape is directly linked to steady-state probability distribution. For oscilla-
tions, U reflects the inhomogeneous probability distribution on the oscillation ring. This shows
the inhomogeneous speed on the limit cycle oscillation ring. φ0 being a global Lyapunov function
does not capture this inhomogeneity information.
For a symmetric circuit, the neural network cannot oscillate, since the gradient force cannot
provide the vorticity needed for oscillations [231]. We can clearly see that the curl flux plays an
important role as the connections between neurons become less symmetric. The curl flux is the
key driving force when the neural network is attracted onto the limit cycle oscillation ring [72].

10.4. Flux and asymmetric synaptic connections in general neural networks


The oscillatory patterns of neural activities are widely distributed in our brain [233,241–243]. The
oscillations play a mechanistic role in various aspects of memory including the spatial represen-
tation and memory maintenance [236,244]. Continuous attractor models have been investigated
for the mechanism of the memory of eye position [245–247]. However, the understanding of the
sequential orders of the recall is still poor [248,249]. This is because the basins of the attractions
storing the memory patterns are often isolated without any natural connections in the original
symmetrical Hopfield networks. Asymmetric neural networks are capable of recalling sequences
and the asymmetry determines the direction of flows in configuration space [250,251]. It is natural
to expect that the flux may provide the driving force for associations among different memo-
ries. Synchronization is important in neuroscience. Recently, phase-locking among oscillations
in different neuronal groups provides a new window to explore the cognitive functions involving
communications among neuronal groups such as attention [252]. The synchronization can only
occur among different groups with coherent oscillations. In the landscape and flux theory, the
flux is closely related to the frequency of oscillations. We expect the flux to play an important
role in modulation of rhythm synchrony.

10.5. Potential and flux landscape for REM/non-REM cycle


We can apply potential and flux landscape theory to a more realistic model describing the
REM/non-REM cycle with the human sleep data [72,239,240,253]. The REM sleep oscilla-
tions can be described by the interactions of two neural populations: “REM-on” neurons (mPRF,
LDT/PPT) and “REM-off” neurons (LC/DR). A limit cycle of the REM sleep system is sim-
ilar to the predator–prey model in ecology for describing the interactions between prey and
predator populations [213,214]. The mPRF neurons (“prey”) are self-activated through acetylcholine (ACh). As the activities of the “REM-on” neurons reach a certain threshold, REM sleep occurs. Being activated by ACh from the “REM-on” neurons, the LC/DR neurons (“predator”)
inhibit “REM-on” neurons through serotonin and norepinephrine, then the REM episode is termi-
nated. With less activations from “REM-on” neurons, the activities of LC/DR neurons decrease

Figure 38. (a) The potential landscape for b = 2.0 and a = 1.0. The red arrows represent the flux. (b) The
effects of parameters a and b on the barrier height. (c) The effects of parameters a and b on flux. (d) The
phase coherence versus the degree of asymmetry S (from Ref. [72]).
due to self-inhibition (norepinephrine and serotonin). This leads to the release of “REM-on”
neurons from inhibition. Another REM cycle starts right afterwards.
This circuit can be mathematically described by the following dynamic equations: dx/dt = a A(x) x S1(x) − b B(x) x y and dy/dt = −c y + d x y S2(y). Here x and y represent the activities of the “REM-on” and “REM-off” neural populations, respectively.
From the dynamics of REM sleep neural network, the potential landscape and the flux of
the REM/non-REM cycles can be quantified [72]. As illustrated in Figure 38(a), the potential
landscape U has a Mexican hat shape. The oscillations are mainly driven by the flux illustrated
by the red arrows along the cycle. The flux plays a key role for the robustness of oscillations.
Figure 38(b) and 38(c) show this explicitly: the barrier and the average flux along the ring become larger as a increases; the REM network is then more stable and the oscillations are more coherent (Figure 38(d)).
Both the potential landscape and the flux are important for the stability of this oscillatory network.

11. Chaos: Lorenz strange attractor


Landscape and flux theory can be used to explore the dynamics and the global stability of chaotic
strange attractor with intrinsic fluctuations [68]. The Lorenz strange attractor is a typical nonlinear dynamical system exhibiting chaotic behavior, which often appears in nature and biology
[254]. The underlying landscape overall has a butterfly shape. Both the landscape and curl prob-
abilistic flux determine the dynamics even when there is chaos. The landscape attracts the system
down to the chaotic attractor, while the curl flux drives the coherent motions on chaotic attractors.
The curl probabilistic flux may provide a clue to the mystery of the chaotic attractor.

11.1. Introduction
Nonlinear interactions can generate complex dynamics and patterns, such as limit cycles and
chaos. Nonlinear dynamical systems have been extensively studied and applied to many fields,
including physical systems, weather, chemical reactions, biological systems, information process-
ing, etc. [68,255–257]. However, understanding the global stability and dynamics of the chaotic
systems is a challenge.
Finite systems often have the intrinsic statistical fluctuations from the number of molecules
or components within and the external fluctuations from the environments [88,258–262]. It is
important to investigate the stability of the dynamical systems under the stochastic fluctuations.
For chaotic systems, being extremely sensitive to initial conditions, it is impractical to enu-
merate all the initial conditions to investigate the dynamical outcome of the system. Probabilistic
approaches thus provide a useful route.

11.2. Landscape and probabilistic flux


The classical Lorenz equations [254] take the following form:
dx/dt = σ(y − x),
dy/dt = rx − y − xz,
dz/dt = xy − bz. (133)
A “chemical” Lorenz model can be derived from the classical Lorenz equations by a nonlinear
transformation with the corresponding parameters as Ref. [68, Table 1]. The reactions of chemical
Lorenz model can be given through the following reaction steps [255,263]:

X + Y + Z →[k1] X + 2Z,    A1 + X + Y →[k2] 2X + Y,
A2 + X + Y →[k3] X + 2Y,    X + Z →[k4] X + P1,
Y + Z →[k5] 2Y,    2X →[k6] P2,
2Y →[k7] P3,    2Z →[k8] P4,
A3 + X →[k9] 2X,    X →[k10] P5,
Y →[k11] P6,    A4 + Y →[k12] 2Y,
A5 + Z →[k13] 2Z. (134)

Here, ki (i = 1, 2, . . . , 13) are the chemical reaction rate constants. The concentrations of species Ai (i = 1, 2, . . . , 5) and Pi (i = 1, 2, . . . , 6) are assumed to be constant. Writing x, y, and z sepa-
rately for the concentrations of species X, Y , and Z, we can derive the corresponding deterministic
(average kinetics) equations of the system:
F1 = dx/dt = k2 xy − 2k6 x² + k9 x − k10 x,
F2 = dy/dt = −k1 xyz + k3 xy + k5 yz − 2k7 y² − k11 y + k12 y,
F3 = dz/dt = k1 xyz − k4 xz − k5 yz − 2k8 z² + k13 z, (135)
where rate constants ki (i = 1, 2, . . . , 13) have included the constant concentration of species Ai .
As mentioned, the chemical Lorenz model can be derived from the classical Lorenz equations. Comparing their solutions shows that they are not completely identical, although the solutions change with the parameters in a similar fashion. When the rate constants are set to k1 = 0.001, k2 = 0.1, k3 = 0.81, k4 = 1, k5 = 1, k6 = 0.05, k7 = 0.005, k8 = 0.0133, k9 = 1000, k10 = 1000, k11 = 8100, k12 = 100, k13 = 10002.67 (the parameters for V = 1 in Ref. [68, Table 1], corresponding to σ = 10, r = 80, b = 8/3 in the classical Lorenz equations), the chemical Lorenz model gives a chaotic solution, jumping back and forth between the two butterfly wings. This chaotic behavior is very similar to that of the classical Lorenz equations. For the classical Lorenz equations, the parameter r determines how the system transforms from a stable solution into a chaotic one. In this chemical example, we also have a corresponding parameter r. We use this r as an adjustable parameter to study the different behaviors of the chemical Lorenz model in different parameter regimes, although the parameter r in the chemical Lorenz model is not exactly the same as the one used in the classical Lorenz equations.
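For a concrete picture of the chaotic regime discussed here, the sketch below integrates the classical Lorenz equations (133) at σ = 10, b = 8/3, r = 80 and counts the irregular switches between the two butterfly wings; the chemical model (135) can be integrated in exactly the same way by swapping in its right-hand side with the rate constants listed above. The initial condition and solver settings are illustrative choices of our own.

```python
# Sketch: integrate the classical Lorenz equations (133) at sigma = 10,
# b = 8/3, r = 80 (the regime the chemical model above is mapped to) and count
# the irregular switches between the two butterfly wings. Initial condition
# and solver settings are illustrative; the chemical model (135) can be
# integrated identically by swapping in its right-hand side.
import numpy as np
from scipy.integrate import solve_ivp

sigma, r, b = 10.0, 80.0, 8.0 / 3.0

def lorenz(t, s):
    x, y, z = s
    return [sigma * (y - x), r * x - y - x * z, x * y - b * z]

sol = solve_ivp(lorenz, (0.0, 100.0), [1.0, 1.0, 1.0], max_step=0.01)
x = sol.y[0]

# The sign of x labels the two wings of the butterfly attractor.
switches = int(np.sum(np.diff(np.sign(x)) != 0))
print(f"wing switches in 100 time units: {switches} (irregular, i.e. chaotic)")
```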
In the chemical dynamical systems, small numbers of molecules give rise to intrinsic fluctu-
ations. In a system with only a finite number of molecules, intrinsic perturbations can influence the system behavior, due to the fact that the fluctuation strength is proportional to 1/√N [16,68,264],
where N represents the number of molecules. Therefore, the smaller the total number of
molecules in a dynamical system, the larger fluctuations are expected. We define V as the effec-
tive dimensionless volume that scales the total molecular number and characterizes the intrinsic
fluctuations of the system [264]. In this model, when V = 1, the total molecular number is about
10,000. By changing V, we can explore the behavior of the system at different molecular numbers.
Stochastic simulations [255,263,265] and the probabilistic approach [6] applied to the model Lorenz chaotic system give the steady-state probability distribution of the molecular species concentrations. The steady-state probability distribution yields the potential landscape of the system [8,11,13,17,57,64,68,266,267]: U = −ln Pss, where Pss is the steady-state probability in the state space of concentrations.

Figure 39. Four-dimensional (4D) landscape and associated probabilistic flux of the chemical Lorenz model for the molecular number variables X, Y, Z when the volume V is 10 (molecular number is 100,000) and r = 80. For the background landscape, deep color represents lower potential energy or higher probability, and light color represents higher potential energy or lower probability. The landscape exhibits a butterfly shape with two oscillation rings coupled to each other. Magenta arrows represent the direction of the probabilistic flux (from Ref. [68]).
Figure 39 shows the three-variable (X , Y , Z) potential landscape of the chemical Lorenz
chaotic system with the given parameter. The landscape has a butterfly shape with two wings
connected together. Different colors show the depth of the landscape. Darker color shows a
lower potential with higher probability, and lighter color shows higher potential with lower
probability. The butterfly shape of the potential landscape is covered by the stochastic chaotic
trajectories. Away from the butterfly landscape, the potential is higher with lower probability. On
the coupled two wings around two eyes (holes) of the butterfly chaotic attractor, the potential is
lower, with higher probability. The two holes have higher local potentials and lower probabilities
than the surrounding butterfly wings. Therefore, the system is attracted to the chaotic attrac-
tors, and the shape of landscape guarantees the stability of the chaotic oscillator, as illustrated
clearly in the 3D landscape and 2D landscape projection with two variables X , Y (Figure 40(a)
and 40(b)).
The Lorenz chaotic system is an open system. It can reach a non-equilibrium steady state
(NESS). A non-zero flux is a distinct feature of a NESS [68]. The non-zero flux will generate the
dissipation energy for sustaining the NESS. The probabilistic flux of the system in state space
Figure 40. (a) shows 3D landscape and associated probabilistic flux of chemical Lorenz attractor for two
variables X and Y when V = 10 (the corresponding molecular number is 100,000) and r = 80. (b) shows 2D
landscape and corresponding probabilistic flux of chemical Lorenz model when V = 10. For background
landscape, deep color represents lower potential energy or higher probability, and light color represents
higher potential energy or lower probability. Magenta arrows represent flux, and white arrows represent
negative gradient of potential energy (from Ref. [68]).
of concentrations is given by [1] J(x, t) = FP − D · (∇P). Both the potential landscape and curl
probabilistic flux determine the dynamics and global features. The dynamics of the system can
be described as a spiral around the gradient (F = (1/2)D · ∇(ln Pss) + Jss/Pss + (1/2)∇ · D^T, where ss denotes the steady state).
It is easier to visualize the landscape in two dimensions. We explore the two-variable (X
and Y ) projection of Lorenz attractor for V = 10 and r = 80. In Figure 40(a) and Figure 40(b),
the landscape in two variables X and Y space has the shape of the two joint ring valleys. Each
butterfly wing has its own ring valley. In Figure 40(b), for each ring valley, the direction of the
flux is along individual orbit on the ring. This drives the oscillations. In addition, by observing
the directions of the fluxes, we can see that the curl flux flows along the individual orbits on the
ring. Flux also flows along the outer edge of the double-ring butterfly. This illustrates that flux
vector has two components. One is the individual curl flow for each ring, and the other is the flow
along the whole attractor.
In the original Lorenz model [254], the trajectory of Lorenz attractor transfers from one spiral
to the other at irregular intervals. In our model, this corresponds to the system making a transition from one oscillation ring valley to the other. Based on the analysis above, this transition
is driven by the outer flux flowing along outer edge of the ring. For the Lorenz system, both
landscape and flux determine the dynamical behavior of the chaotic attractor. The landscape
attracts system to the double-ring valleys and curl flux drives the oscillation transitions of the
system between the double wings. The landscape and flux theory provides a non-local view to
understand the global features of Lorenz strange attractor [68].

11.3. Flux may provide a clue to the formation of chaos


The Lorenz system is a non-integrable dynamical system. It is not determined by a pure gradient driving force. We can easily check this by taking the curl of the driving force, which yields non-zero values. In fact, there is no Hamiltonian energy whose gradient gives the driving force for the Lorenz system. The deviation from a pure energy function may give rise to the driving force responsible for the chaotic attractor [68].
From Equation (8), the driving force F of the chemical Lorenz system can be written as F = Fflux + Fgradient + Fdiffusion, where Fflux (Jss/Pss) represents the force from the flux and Fgradient (−(1/2)D · ∇U) represents the force from the gradient of the potential. We will mainly explore the relative magnitude of these two components, since the remaining term related to diffusion, Fdiffusion ((1/2)∇ · D^T), gives a relatively smaller contribution.
Figure 41 gives the results for the ratio Fflux/Fgradient when the volume V and the parameter r are separately changed. In Figure 41(a), when V increases, the ratio of the averaged Fflux to the averaged Fgradient increases. From the corresponding distributions (Figure 41(e)), we reach consistent conclusions. When V increases, the noise decreases, implying that the force from the gradient of the potential decreases. The force from the flux then becomes more prominent. This illustrates that the smaller the fluctuations, the larger the relative contribution of the flux to the driving force of the system. Figure 41(c) also shows consistent results, where Fflux/Fgradient increases as the coherence of the system increases.
From Figure 41(b), when the parameter r increases, the ratio of Fflux to Fgradient also increases. Figure 41(f) shows the corresponding distribution of Fflux/Fgradient at different r values. Figure 41(d) illustrates that the ratio Fflux/Fgradient increases when the coherence of the chaotic system increases. The corresponding stability of the chaotic attractor increases (as r increases) as the flux contribution to the total driving force increases.
Figure 41. (a) The change of K as V is increased at r = 80. (b) The change of K as r is increased at V = 10. Here K is defined as Fflux/Fgradient, representing the relative magnitude of the force from the flux and the force from the gradient of the potential. (c) and (d) show K versus coherence when V and r are changed, respectively. (e) and (f) show the distributions of K at different V and r, respectively (from Ref. [68]).

Figure 42. (a)–(c) show the coherence, the barrier height, the EPR, and the height of the peak of the power spectrum of the autocorrelation function versus the parameter r (from Ref. [68]).

From Figures 42(a)–(c) and 41(b), (d), and (f), an upward trend is observed for the average flux, the ratio of the flux to the gradient force, and the EPR in going from chaos to limit cycle as r increases. This
shows that as the order increases from chaos to limit cycle, the average flux, the ratio of the flux to the gradient force, and the EPR characterizing the dissipation and energy cost all increase. A more ordered system such as a limit cycle thus requires more flux and energy to sustain its stability than a more disordered chaotic system.
Many chaotic systems, including non-autonomous (driven) systems such as the forced Van der Pol and Duffing oscillators and autonomous systems such as the Lorenz, Rössler, Hénon–Heiles (Hamiltonian), and Chua circuit systems, can be studied with the present ideas. None of them is a gradient system. The driving forces of all of them have non-zero curl, giving a curl flux responsible for the chaotic behavior [68].

12. Multiple landscapes and the curl flux for a self-regulator


We consider the simplest circuit, a single self-regulating gene (see Figure 43(a) for an illustration) [74]. The self-regulating gene dynamics can be described by several key factors. One is the on and off states of the gene. The other is the protein concentration. When a regulator protein binds
to the gene, the gene is either on or off depending on whether the regulator is an activator or
repressor. When the gene is activated, through transcription and translation processes, the protein
is synthesized and produced. In the self-regulating gene system, the proteins produced by the
gene act back on their own gene and regulate its activity. The dynamics of such a system can be described by the underlying chemical reactions for protein synthesis and degradation and by the binding/unbinding of the regulating proteins to the gene. Furthermore, in the cell there is only a finite number of molecules (typically fewer than 10^4); therefore, the statistical
fluctuations in molecular numbers need to be taken into consideration.

Figure 43. Illustrations of self-regulating gene dynamics. (a) Reaction scheme in the self-regulating gene circuit. (b) The multiple landscapes and dynamics of the self-regulating gene based on the view of Figure 11(d) (the dimension of the individual landscapes is one in (b) and two in Figure 11(d)). (c) The equivalent single landscape and dynamics on the expanded space of the protein concentration ρ and the gene state ξ. Dotted lines show the basins of attraction and arrows represent the curl flux (from Ref. [74]).

The dynamical process of the self-regulating genes can be described by the following master equation [74]:
 
∂P(n)/∂t = diag(g1, g0)[P(n − 1) − P(n)] + k(n + 1)P(n + 1, t) − knP(n) + ( −h  f ; h  −f )P(n), (136)

where diag(g1, g0) is the 2 × 2 diagonal synthesis matrix and ( −h f ; h −f ) denotes the 2 × 2 binding/unbinding matrix written row by row.

Here P(n) is a vector with two components, P0(n) and P1(n); the discrete indices 0 and 1 label the on and off states of the gene. Therefore, P(n) gives the probability of
the protein concentration when the gene is on or off. Here, g0 is the production or synthesis rate
of protein at the state when a regulation protein is bound to the gene, and g1 is the production or
synthesis rate when the regulation protein is not bound to the gene. If g1 > g0 , it corresponds to
the self-repressor case. If g1 < g0 , it corresponds to the self-activator case. k is the degradation
rate of the protein. h0 is the binding rate of the regulating protein to its gene, while f is the
unbinding rate of the regulating protein from its gene. We assume that the regulation factor is a
dimer of the product proteins, so that h = h0 (n + 1)n.
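The master equation (136) can be sampled directly with a Gillespie-type stochastic simulation over the joint state (n, gene state). In the sketch below, the convention that the unbound gene synthesizes at rate g1 and the bound gene at rate g0, the dimer binding propensity h0 n(n − 1), and all numerical parameter values are illustrative assumptions rather than the parameter set of Ref. [74].

```python
# Gillespie-type stochastic simulation of the self-regulating gene described by
# the master equation (136). The convention that state 1 is the unbound gene
# (synthesis rate g1) and state 0 the bound gene (rate g0), the dimer binding
# propensity h0*n*(n-1), and all parameter values are illustrative assumptions,
# not the parameter set of Ref. [74].
import numpy as np

rng = np.random.default_rng(2)
g1, g0, k = 40.0, 4.0, 1.0        # synthesis (unbound / bound) and degradation rates
h0, f = 2e-4, 0.5                 # binding and unbinding rates

n, state = 0, 1                   # protein copy number and gene state (1 = unbound)
t, T_end = 0.0, 2000.0
samples = []
while t < T_end:
    rates = np.array([
        g1 if state == 1 else g0,                   # protein synthesis
        k * n,                                      # protein degradation
        h0 * n * (n - 1) if state == 1 else 0.0,    # regulator (dimer) binds
        f if state == 0 else 0.0,                   # regulator unbinds
    ])
    total = rates.sum()
    t += rng.exponential(1.0 / total)
    event = rng.choice(4, p=rates / total)
    if event == 0:
        n += 1
    elif event == 1:
        n -= 1
    elif event == 2:
        state = 0
    else:
        state = 1
    samples.append(n)

# Crude event-averaged estimate over the second half of the run.
print("mean protein number:", np.mean(samples[len(samples) // 2 :]))
```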
One can think of the self-regulating gene dynamics as a coupled dynamical system. On the one hand, for a specific gene state, the protein concentration follows gradient dynamics due to its one-dimensional nature in the protein concentration; on the other hand, the dynamics on different gene states are coupled through the binding/unbinding of the proteins to the gene. Therefore, we can think of the self-regulating gene dynamics as moving along the protein concentration space while jumping between two potential energy landscape surfaces labeled by the discrete values 0 and 1 representing the state of the gene. The timescale of the binding/unbinding of regulating proteins to the gene (which determines the gene state), relative to that of the synthesis and degradation of the protein (which determines the protein concentration), is the origin of the complexity of the problem (adiabaticity versus non-adiabaticity).
We follow the landscape and flux theory for multiple landscapes and the procedure described earlier (see the multiple landscape theory section). This leads from the original description, with the variable ψ for the particle number and a discrete variable representing the on and off gene states, to coupled Langevin equations in the continuous variables ψ and ξ, where ξ can be thought of as a continuous representation of the discrete on/off description (e.g. the continuous variable ξ can be related to the probability of finding the discrete on or off state) [74].

ψ̇ = −ψ + X^ad + ξδX + η_ψ, (137)
(1/ω) ξ̇ = −K(1 + ξ)ψ² + (1 − ξ) + η_θ, (138)

where η_ψ and η_θ are Gaussian random numbers with ⟨η_ψ⟩ = 0, ⟨η_θ⟩ = 0, and

⟨η_ψ(t)η_ψ(t′)⟩ = (1/2)(ψ + X^ad + ξδX)δ(t − t′),
⟨η_θ(t)η_θ(t′)⟩ = (1/ω)(K(1 + ξ)ψ² + (1 − ξ))δ(t − t′). (139)

Here, X^ad is the representative copy number of the protein, and ω is the adiabaticity parameter measuring the unbinding rate relative to the degradation rate. When ω is large, the binding/unbinding of the proteins to the gene is much faster than the synthesis and degradation of the proteins, and the system is in the adiabatic limit. When ω is small, the binding/unbinding of the proteins to the gene is comparable to or slower than the synthesis and degradation of the proteins, and the system is in the non-adiabatic regime. The parameter ω characterizes the relative dynamical timescales of the
system and will be the focus of our discussion as mentioned earlier. K is the ratio of binding
versus unbinding rate of the proteins to the gene, mimicking the equilibrium binding constant.
δX represents the difference in the synthesis rate between on and off states of the genes.
If we rewrite Equation (137) using the system volume Ω and defining ρ = ψ/Ω, x^ad = X^ad/Ω, δx = δX/Ω, and κ = KΩ², the Langevin equations are more symmetrical:

ρ̇ = −ρ + x^ad + ξδx + η_ρ, (140)
(1/ω) ξ̇ = −κ(1 + ξ)ρ² + (1 − ξ) + η_θ, (141)

with

ξ = cos θ,
⟨η_ρ(t)η_ρ(t′)⟩ = (1/(2Ω))(ρ + x^ad + ξδx)δ(t − t′),
⟨η_θ(t)η_θ(t′)⟩ = (1/ω)(κ(1 + ξ)ρ² + (1 − ξ))δ(t − t′). (142)
Here, ρ can be regarded as the density of protein, so that Equation (140) is the usual equation
of the volume -expansion, representing the statistical fluctuations from the finite number of
molecules deviating from the large number of molecules, and we can say that Equation (141) is
the corresponding equation of the adiabaticity parameter ω-expansion, representing the timescale
fluctuations deviating from the adiabatic case where the binding/unbinding of proteins to the
genes is significantly faster than the corresponding protein synthesis and degradation.
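As an illustration of how the rescaled Langevin equations (140)–(142) can be sampled, the following is a minimal Python sketch using a simple Euler–Maruyama integrator. The parameter values, the reflection of ρ at zero, and the clipping of ξ to [−1, 1] are our own illustrative choices, not those of Ref. [74].

import numpy as np

def simulate_self_regulator(x_ad=2.0, dx=1.5, kappa=1.0, omega=0.1, Omega=50.0,
                            dt=1e-3, n_steps=200000, seed=0):
    # Euler-Maruyama integration of Eqs. (140)-(142); parameter values are illustrative only.
    rng = np.random.default_rng(seed)
    rho, xi = x_ad, 0.0                    # initial protein density and gene-state variable
    traj = np.empty((n_steps, 2))
    for i in range(n_steps):
        # drifts: multiplying Eq. (141) by omega gives xi-dot = omega * (RHS of Eq. (141))
        drift_rho = -rho + x_ad + xi * dx
        drift_xi = omega * (-kappa * (1.0 + xi) * rho**2 + (1.0 - xi))
        # noise variances read off from Eq. (142); the xi noise picks up omega**2 * (1/omega) = omega
        var_rho = (rho + x_ad + xi * dx) / (2.0 * Omega)
        var_xi = omega * (kappa * (1.0 + xi) * rho**2 + (1.0 - xi))
        rho += drift_rho * dt + np.sqrt(max(var_rho, 0.0) * dt) * rng.standard_normal()
        xi += drift_xi * dt + np.sqrt(max(var_xi, 0.0) * dt) * rng.standard_normal()
        rho = max(rho, 0.0)                # keep the density non-negative (regularization choice)
        xi = min(max(xi, -1.0), 1.0)       # xi = cos(theta) is kept in [-1, 1] (regularization choice)
        traj[i] = rho, xi
    return traj

Histogramming such trajectories in the (ρ, ξ) plane gives an estimate of P_ss and hence of the landscape U = −ln P_ss discussed below.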

12.1. Potential, circulation flux, and eddy current


Our aim is to obtain an intuitive picture of relations among potential, eddy-current (circulation),
adiabatic change of states, and churn motions of non-adiabatic dynamics [74].
Let us focus our discussion on the coupled Langevin equations. As we discussed, we transformed the original self-regulating gene dynamics, described by the protein copy number n and
the discrete on and off gene states, into a problem of dynamics in the continuous variables
ψ or ρ representing the concentrations and θ or ξ (continuous representation) representing the
gene states. We first discuss the adiabatic case when ω is large, where the binding/unbinding
of the proteins to the gene is relatively fast compared with the synthesis and degradation rate
of the proteins. This means the gene switches between its on and off states frequently. In this case, from the
equation for the temporal dynamics of ξ, both the left-hand side of the equation and the noise term
tend to zero, since they are inversely proportional to ω and √ω, respectively. As a result,
we obtain a specific relationship between ρ and ξ through −κ(1 + ξ)ρ² + (1 − ξ) = 0, or
ξ = (1 − κρ²)/(1 + κρ²). If we substitute this expression into the equation for the dynamics of
ρ, we get ρ̇ = −ρ + x_ad + ((1 − κρ²)/(1 + κρ²))δx + η_ρ, or equivalently ρ̇ = −∂U/∂ρ + η_ρ,
where U is the effective potential given as U = ½ρ² − x_ad ρ − δx(2 arctan[√κ ρ]/√κ − ρ). There-
fore, for the self-regulating gene in the adiabatic limit, the strong couplings between gene states
lead to frequent flipping of the gene states. We reduce this to a 1D problem with an effective
potential landscape U giving the pure gradient dynamics. The effective potential is a result of the
fast binding/unbinding of the regulators to the genes.
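As a quick consistency check of this adiabatic reduction, one can verify numerically that the reduced drift −ρ + x_ad + δx(1 − κρ²)/(1 + κρ²) equals −dU/dρ for the effective potential quoted above; a minimal sketch with arbitrary parameter values:

import numpy as np

def U_adiabatic(rho, x_ad=2.0, dx=1.5, kappa=1.0):
    # U = rho^2/2 - x_ad*rho - dx*(2*arctan(sqrt(kappa)*rho)/sqrt(kappa) - rho)
    sk = np.sqrt(kappa)
    return 0.5 * rho**2 - x_ad * rho - dx * (2.0 * np.arctan(sk * rho) / sk - rho)

def drift_adiabatic(rho, x_ad=2.0, dx=1.5, kappa=1.0):
    # reduced drift after eliminating xi = (1 - kappa*rho^2)/(1 + kappa*rho^2)
    return -rho + x_ad + dx * (1.0 - kappa * rho**2) / (1.0 + kappa * rho**2)

rho = np.linspace(0.0, 5.0, 2001)
dU = np.gradient(U_adiabatic(rho), rho)                 # numerical dU/drho
print(np.max(np.abs(-dU - drift_adiabatic(rho))))       # small, up to discretization error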
On the other hand, if ω is not large, the binding/unbinding of the proteins to
the gene is comparable to or even slower than the synthesis and degradation rate of the
proteins. This is the non-adiabatic regime. In this regime, there is no approximation we can use
to simplify the coupled Langevin equations further, because we no longer have the
timescale separation of the adiabatic case. Therefore, the system is inherently 2D. In the non-adiabatic limit, we thus have the following picture. By solving the corre-
sponding 2D Fokker–Planck equation in the steady state, we can find the steady-state probability
distribution Pss . By the relationship of U = − ln Pss , we characterize the potential landscape of
the system [13,74]. Furthermore, we find that the driving force of our 2D problem cannot be writ-
ten as a pure gradient of a potential. The probability flux can be obtained by subtracting the gradient
and the divergence of the diffusion matrix from the deterministic driving force. The net flux is not
equal to zero. This implies that the system in the non-adiabatic case is in a non-equilibrium regime
with broken detailed balance. The degree of non-equilibriumness or detailed balance breaking is
quantified by the strength of the curl probability flux. So in the 2D space of protein concentration
ρ and gene state ξ in the non-adiabatic regime, the dynamics is determined by both the gradient
of the landscape U and the curl flux. Instead of going straight down the gradient, the motion
spirals down along the gradient. The curl probability flux provides the origin [74] of
the eddy-current motion in the dynamic trajectories [45].
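In practice, once P_ss is available on a (ρ, ξ) grid (e.g. from a histogram of long trajectories or a numerical Fokker–Planck solver), the landscape and the steady-state flux can be evaluated as U = −ln P_ss and J_ss = F P_ss − ∇·(D P_ss). The sketch below assumes a diagonal diffusion matrix and simple grid-based derivatives, which is a simplification of the general case; all function and variable names are ours.

import numpy as np

def landscape_and_flux(P_ss, F_rho, F_xi, D_rho, D_xi, drho, dxi):
    # U = -ln P_ss and J_ss = F*P_ss - grad(D*P_ss) with a diagonal diffusion matrix;
    # all inputs are arrays evaluated on the same (rho, xi) grid.
    U = -np.log(P_ss + 1e-300)
    J_rho = F_rho * P_ss - np.gradient(D_rho * P_ss, drho, axis=0)
    J_xi = F_xi * P_ss - np.gradient(D_xi * P_ss, dxi, axis=1)
    return U, J_rho, J_xi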
Let us try to connect the new picture of non-adiabatic dynamics with some previous studies.
If we view the non-adiabatic case from another angle, as in previous investigations, we can imagine
two landscapes in ρ space, the potential energy surfaces labeled by the gene states ξ = −1 and ξ = 1.
The stochastic dynamics mainly moves along one energy surface and
occasionally jumps to the other energy surface. After moving along the new energy surface for a while,
the system jumps back to the original energy surface. This process keeps iterating, so the
stochastic trajectory along and between the two energy surfaces forms a cyclic, churn-like spiraling
motion. The cause of this eddy-current motion is the jumps between the energy surfaces. By
expanding our system variables from discrete to continuous, we realize that although the dynamics
along a specific gene state can be determined by the gradient of a single potential surface, the
motion in the 2D space of protein concentration and continuous gene state can no longer be seen
in this way. In fact, the dynamics can be described by a single potential landscape U instead of
two individual landscapes for this self-regulating gene example, giving the gradient part of
the dynamics (see Figure 11). Furthermore, the detailed balance is broken. This is characterized
by the steady-state probability flux, which has a divergence-free curl nature. From this continuous
representation, we can trace the origin of the churning eddy-current motion observed in the
simulations to the strength of the curl steady-state probability flux.
Finally, let us point out the extreme non-adiabatic condition where ω goes to zero [74]. In
this case, from the equation for the dynamics of ξ, ξ̇ = 0, so ξ = constant and the motion is fixed at a
specific ξ along ρ. Again the effective dynamics is a gradient of the potential landscape along the 1D
protein concentration coordinate ρ, with effective potential U = ½ρ² − (x_ad + ξδx)ρ. There is
essentially no jumping between the energy surfaces because the binding/unbinding of regulators
to the genes is so slow. Therefore, the dynamics becomes very simple, following the gradient of the
potential along a single energy landscape surface formed by the protein synthesis and degradation,
similar to the adiabatic case although the effective potential is different. The dynamics does
not have any curl flux component and therefore no eddy-current churn motion. Therefore, for
both extreme cases of very fast or very slow binding/unbinding of regulatory proteins to the gene
relative to the synthesis and degradation of the proteins, the dynamics of the self-regulator is driven
by the pure gradient of the underlying single potential landscape. On the other hand, when the
timescale of binding/unbinding of regulatory proteins to the gene is comparable to the synthesis
and degradation of the proteins, the dynamics of the self-regulator follows a gradient of the underlying
landscape and a curl flux, giving the possibility of eddy-current and churn motion. So the
timescale is the key for non-adiabatic dynamics and is the origin of the curl probability flux.

We must point out that our conclusion above on the decomposition of the driving force for
the dynamics is only valid for the self-regulator network system [74]. For general complex dynamical
systems, even in the adiabatic limit of timescale separation, the dynamics is in general not
driven by a pure gradient of a single potential landscape but also by a curl flux force. The origin
of this curl is the underlying dynamics. On the other hand, in the non-adiabatic regime,
the dynamics is in general driven by both the gradient of the potential landscape and the curl flux
force. The curl flux then has two contributions, one from the adiabatic part and the other from the
non-adiabatic part arising from the timescale consideration.

12.2. Results and discussions


In this part, we will explore the consequences of taking the timescale into consideration for the
dynamics, and especially the role of the curl flux leading to eddy-current and churn motion [74].
In Figure 44, we show a 2D contour map of the probability landscape U = −ln Pss in the protein
concentration ρ and the gene state ξ. The landscape has a single basin of attraction. When ω is
large, the flux is small and numerically negligible. So the dynamics in the adiabatic regime is
driven by the gradient of the landscape. The orientation of the landscape is vertical, and therefore
the corresponding projections onto the protein concentration variable at the two gene states of on
and off, ξ = −1 and ξ = 1, give the same result at the same location. Therefore, there is
only a single peak in the distribution in protein concentration space when ω is large in this
adiabatic regime. On the other hand, when ω becomes smaller, the landscape is still a single
basin of attraction, but the dynamics is non-adiabatic and driven by both the gradient of the
landscape and the non-zero curl flux. The orientation of the landscape is tilted away from the vertical
direction; the smaller ω is, the more tilted the landscape becomes. As we can see, this orientation of the
landscape is driven by the non-zero curl flux giving the eddy-current-like motion in the combined
protein concentration and gene state space. The tilted orientation of the landscape leads to the
corresponding projections onto the protein concentration variable at the two gene states of on and
off, ξ = −1 and ξ = 1, giving different results at two different locations. Therefore, two
peaks in the distribution in protein concentration space are expected when ω is small in the
non-adiabatic regime. For the self-repressor, a single peak is typically expected since for many
prokaryotic cells the binding/unbinding rate is relatively fast compared with the protein synthesis
and degradation, while for eukaryotic cells the binding/unbinding can be comparable to or slower
than protein synthesis and degradation. In this non-adiabatic regime, the two peaks emerge, which

(a) (b) (c)

Figure 44. Calculated single composite landscape on the expanded space of the protein concentration ρ and
the gene state ξ at (a) ω = 0.01, (b) ω = 0.1, and (c) ω = 100. Superimposed on landscapes are Jss (white
arrows) and dominant kinetic paths between the gene-on and gene-off states (black lines and blue lines).
Red lines are distributions of ρ in the gene-off (ξ = −1) and on (ξ = 1) states (from Ref. [74]).

Figure 45. The Fano factor as a function of log ω, which is the variance divided by the mean, showing the
relative width of the steady-state probability distribution of protein concentration (from Ref. [74]).

are quite unexpected from the conventional point of view. We show here that the two peaks of the
protein concentration distribution at different locations originate from the curl flux, which
does not preserve the detailed balance and leads to the eddy current breaking the symmetry of the
original single peak at the same location for both the gene-on and gene-off states [74].
In Figure 45, we also show the Fano factor (variance divided by the mean), or relative width
of the distribution peaks, as a function of the adiabaticity parameter ω. We find that when
ω is large, binding and unbinding are frequent, and therefore the gene states are strongly coupled.
This leads to a narrower, Poisson-like distribution (Fano factor close to 1) of the protein
concentrations. This is because the dynamics essentially follows the gradient of the effective landscape
and the motion converges to the same location. On the other hand, when ω becomes smaller,
the couplings between the gene states are weaker. The dynamics is driven by both the gradient of
the potential landscape and the curl flux. The effect of the curl flux is to form eddy-current motions.
This will lead to dispersion of the motion, causing the distribution of the protein concentration
to split from one peak into two peaks at different locations. We see a broader distribution of the
protein concentrations. This deviates from Poisson statistics, and the fluctuations characterized by the
Fano factor become large. Again, we now understand that this is due to the non-zero curl flux [74].
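The Fano factor of Figure 45 can be estimated directly from a simulated trajectory of ρ once the density is converted back to a copy number n = Ωρ; a minimal helper of ours (the burn-in fraction is an arbitrary choice):

import numpy as np

def fano_factor(rho_traj, Omega=50.0, burn_in=0.2):
    # variance/mean of the copy number n = Omega*rho after discarding an initial transient
    n = Omega * np.asarray(rho_traj)[int(burn_in * len(rho_traj)):]
    return n.var() / n.mean()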
In Figure 46, we quantify the kinetic paths from the gene-on state to the gene-off state and
back. Kinetic paths are crucial for understanding how the evolution dynamics of the system is
realized. We see that for the adiabatic case when ω is large, the forward and backward kinetic
paths are almost identical and opposite in direction. This is expected because the underlying
dynamics is driven by the gradient of the potential landscape. On the other hand, for the non-
adiabatic case when ω is small, the forward and backward kinetic paths are significantly different
from each other and they are irreversible. The irreversibility of the kinetic paths means the time
reversal symmetry is violated. Again this is caused by the non-zero curl flux. The curl flux gives

(a) (b) (c)

Figure 46. Calculated single composite landscape on the expanded space of the protein concentration ρ and
the gene state ξ at (a) ω = 0.01, (b) ω = 1, and (c) ω = 100. Superimposed on landscapes are Jss (black
arrows) and dominant kinetic paths between the gene-on and gene-off states (white lines and green lines)
(from Ref. [74]).

(a) (b) (c)

Figure 47. The two time correlation function in protein concentration with (a) ω = 0.01, (b) ω = 0.1, and
(c) ω = 100 (from Ref. [74]).

the directional preference and therefore leads to the non-equivalence of forward and backward
paths. The curl flux breaks the detailed balance and therefore gives the time asymmetry [74].
In Figure 47, we show the two-point correlation function of protein concentrations in time.
We found that for the adiabatic case, when ω is large, the correlation in time follows a single exponential
law. On the other hand, when ω is small, in the non-adiabatic regime, the correlation function
in time starts to show complex behavior, ranging from multi-exponential decay to weak oscillations.
Again the curl flux is the origin of this behavior, since it can give rise to more apparent states
with higher probability, as shown earlier through the dispersion of the original single-peak distribution
in protein concentration. Furthermore, because of the curling nature of the flux, cyclic
oscillations become possible. So the complexity in the time correlation function deviating from a
single exponential is a quantitative signature of the curl flux and detailed balance breaking [74].
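The two-time correlation function of Figure 47 is C(τ) = ⟨δρ(t)δρ(t + τ)⟩ evaluated in the steady state; a simple, unoptimized estimator from a stationary trajectory (ours, for illustration):

import numpy as np

def autocorrelation(x, max_lag):
    # C(tau) = <dx(t) dx(t+tau)> for a stationary trajectory x, normalized so that C(0) = 1
    dx = np.asarray(x) - np.mean(x)
    c = np.array([np.mean(dx[:len(dx) - lag] * dx[lag:]) for lag in range(max_lag)])
    return c / c[0]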
In Figure 48, we calculated the residence time (the waiting time before the state switching)
distribution of particular gene states [74]. For the adiabatic case, when ω is large, the residence
time distribution decays monotonically. On the other hand, when ω is small, in the non-adiabatic
regime, the residence time distribution starts to form a peak. This is another signature of the
detailed balance breaking caused by the curl flux. The curl flux leads to the irreversible

(a) (b)

Figure 48. The distribution of residence times of the gene-on or gene-off state: (a) gene is on and (b) gene
is off.

motion. A physically intuitive interpretation is that if the detailed balance is preserved, forward
and backward paths are equivalent. Considering the transitions into and out of a given gene state, each forward path
A → (inside) → B will have an equivalent backward path B → (inside) → A.
This leads to the fact that all the amplitudes (A_i) of the (multiple) exponentials are positive (the distribution
can be written as Σ_i A_i exp[−λ_i t]). Since all the eigenvalues of the problem (λ_i) are real, this
leads to a peak only at zero.

Figure 49. Difference between the forward and backward three-point correlation functions of protein
concentration versus log ω (from Ref. [74]).

When the curl flux is non-zero, the detailed balance is broken and the
kinetic paths are irreversible. As a result, not all of the amplitudes (A_i) of the (multiple) exponentials
are positive. This can shift the peak position from zero to a finite residence
time [104,268]. Again, the origin of this is the presence of the curl flux.
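Residence times such as those in Figure 48 can be read off a simulated ξ(t) trajectory by thresholding; the threshold ξ = 0 and the on/off labeling below are illustrative choices of ours:

import numpy as np

def residence_times(xi_traj, dt, threshold=0.0):
    # durations of consecutive stretches with xi > threshold ('on') and xi <= threshold ('off')
    state = np.asarray(xi_traj) > threshold
    flips = np.flatnonzero(np.diff(state.astype(int)) != 0) + 1   # indices where the state switches
    bounds = np.concatenate(([0], flips, [len(state)]))
    durations = np.diff(bounds) * dt
    on_times = durations[state[bounds[:-1]]]
    off_times = durations[~state[bounds[:-1]]]
    return on_times, off_times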
In Figure 49, we calculated the three-point correlation function in time, and its time reversal,
for the protein concentration ρ [74]. The forward-in-time and backward-in-time
three-point correlations are nearly the same, and the difference between the two is small, for large
ω values. This is because for the adiabatic case, the detailed balance is effectively preserved for self-regulating
genes. There is no curl flux to break the detailed balance and lead to time asymmetry.
On the other hand, for the non-adiabatic case of small ω values, the dynamics of the system is
driven by the gradient of the potential landscape and, in addition, the curl flux. The curl flux becomes more
prominent as the ω values decrease, down to some threshold value. We can see that the difference between
the forward and backward three-point correlation functions in time becomes larger as the ω values
decrease or, in other words, as the curl flux increases. So the curl flux is a quantitative measure of
the detailed balance breaking and time asymmetry. The three-point correlation function can be
directly measured in experiments, which provides a direct experimental probe of the effect of
the curl flux.
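One convenient numerical measure of this forward–backward asymmetry is the difference between the three-point correlation ⟨ρ(t)ρ(t + τ₁)ρ(t + τ₁ + τ₂)⟩ and its time-reversed counterpart with τ₁ and τ₂ interchanged; the difference vanishes under detailed balance. The estimator below is our own illustrative construction, meant only to indicate the kind of quantity plotted in Figure 49:

import numpy as np

def three_point_asymmetry(x, lag1, lag2):
    # forward <x(t) x(t+lag1) x(t+lag1+lag2)> minus its time-reversed counterpart
    # <x(t) x(t+lag2) x(t+lag1+lag2)>; zero for a stationary, time-reversible process
    x = np.asarray(x) - np.mean(x)
    total = lag1 + lag2
    a = x[:len(x) - total]
    forward = np.mean(a * x[lag1:len(x) - lag2] * x[total:])
    backward = np.mean(a * x[lag2:len(x) - lag1] * x[total:])
    return forward - backward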
In Figure 50, we calculated the entropy production per turn around the loop of dominant
kinetic paths as a function of ω [74]. The entropy production rate (EPR) is the product of both the potential gradient and the
curl flux in non-equilibrium systems, analogous to the product of voltage and electric current in an electrical
circuit, which gives the dissipation. The EPR per turn around the loop of dominant kinetic paths is
small in the adiabatic regime where the ω values are large. This is because in the adiabatic regime,
the system is in an effective equilibrium state with detailed balance. There is no flux in and out of the
system. Therefore, there is no energy or entropy exchange, and the EPR is zero. On the other hand,

Figure 50. The entropy production per turn of the on/off gene switching as a function of log ω (from Ref.
[74]).

the EPR per turn around the loop of dominant kinetic paths increases as the ω value decreases. As
we know, when ω decreases, the curl flux measuring the degree of detailed balance breaking
increases. When the flux in and out of the system becomes larger, the dissipation is naturally
larger, analogous to increasing the current in an electric circuit, which leads to more thermal dissipation.
Again, the curl flux is the origin of the entropy production or dissipation.
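For Langevin dynamics of this type, the total steady-state entropy production rate can be written as the state-space integral of J_ss·D⁻¹·J_ss/P_ss (in units of k_B). The grid-based sketch below, which again assumes a diagonal diffusion matrix and reuses the outputs of the flux-decomposition sketch above, estimates this total EPR; it is related to, though not identical with, the dissipation per turn around the dominant path loop plotted in Figure 50, since both are driven by the same curl flux.

import numpy as np

def entropy_production_rate(P_ss, J_rho, J_xi, D_rho, D_xi, drho, dxi):
    # grid estimate of EPR = sum over cells of (J . D^{-1} . J / P) * cell area,
    # for a diagonal diffusion matrix (illustrative discretization)
    integrand = (J_rho**2 / D_rho + J_xi**2 / D_xi) / (P_ss + 1e-300)
    return np.sum(integrand) * drho * dxi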
Figure 51 shows how the above crucial dynamical quantities vary with respect to the flux inte-
grated over a closed loop of dominant kinetic paths. In Figure 51(a), we show that as adiabaticity
increases, the flux along the dominant kinetic closed paths decreases. Or in other words, the flux
along the dominant kinetic closed paths increases as the non-adiabaticity increases. Figure 51(b)
shows that the dispersion or spread quantifying the fluctuations in protein concentrations (Fano
factor) increases as the flux along the dominant kinetic closed paths increases. Figure 51(c) illus-
trates that the irreversibility of the three-point correlation function of the protein concentrations
increases as the flux along the dominant kinetic closed paths increases. Finally, Figure 51(d)
shows that as the flux along the dominant kinetic closed paths increases, the dissipation per turn
around the closed loop determined by the dominant kinetic paths also increases [74].

(a) (b)

(c) (d)

Figure 51. Anomalies arising from the non-adiabatic curl flux in the self-repressing gene. Quantities calcu-
lated from the Langevin equations for ρ and ξ are plotted as functions of log ω. (a) The effective strength
of curl flux Jpath is anti-correlated with ω and largely increases in the non-adiabatic regime of ω < 1. (b)
The Fano factor, which is the variance divided by the mean, showing the relative width of the steady-state
probability distribution of protein concentration. (c) Difference between the forward and backward three-point
correlation functions of protein concentration. (d) The entropy production per turn of the on/off gene
switching. These quantities are plotted as functions of Jpath, showing that they are correlated with the curl flux (from
Ref. [74]).

13. Spatial landscapes and development


13.1. Introduction
Cellular functions are usually realized through the underlying networks of proteins and genes.
Chemical kinetic equations are often used to describe the interactions within these networks.
The conventional chemical kinetic equations work under bulk conditions. They do not work
accurately when the statistical fluctuations from the finite number of molecules involved matter
[58,258,260,262,269–271]. Moreover, an organism is not a homogeneous system. Spatial inho-
mogeneity is crucial for many biological functions such as development, differentiation, growth,
and death [58,270,271].
Drosophila melanogaster is a typical organism for genetic and developmental biology, where noise can be relevant, for the
following reasons. Experiments are relatively easy to perform on
the system. Rich information on this system is already available. The geometry of the embryo
is essentially ellipsoidal in shape, with the anterior–posterior axis being much longer than the
other two axes. Within two hours of embryonic development, the organism has no distinct cells;
each embryo has many nuclei, but no cell membranes blocking the diffusion of proteins from
one nucleus to another. This makes D. melanogaster ideal for studying diffusion-related spatial
pattern formation, and the associated noise, in an embryo.
There are several key proteins and genes in the embryo shown to have significant effects
on development. One of these is the bicoid protein. The bicoid seems to directly regulate many
developmental genes [272,273]. Furthermore, the bicoid production appears to be independent of
other zygotic proteins. Moreover, the average bicoid concentration at any given spatial location
is basically constant in time for the main part of the blastoderm stage. Importantly, the bicoid
concentration and effects on other proteins are dependent on the underlying statistical fluctua-
tions [157,158,274]. Recent progress on studying bicoid associated with the intrinsic statistical
fluctuations has led to a discussion on how the embryo can so accurately determine the spatial
location of the sudden discrete jump of a bicoid-dependent Hunchback (gene) concentration. The
Hunchback gradient is an important regulator of other zygotic proteins [157,158,274].
We can explore the spatial stochastic dynamics of the bicoid development and pattern
formation in one dimension (the posterior–anterior direction) as an application of the stochastic
non-equilibrium statistical field theory [58,73,75].

13.2. Master equation


Three assumptions are made based on the empirical observations to simplify the calculations.
First, the production of bicoid is located in the anterior of the embryo and is stochastically gen-
erated. Second, the motion of proteins through the embryo is assumed to be diffusive. Third,
bicoid degradation is assumed to occur stochastically, Bcd → ∅, with rate constant k. With the above assumptions,
we can derive the corresponding spatially dependent chemical master equation. The probabilistic
description from the master equation provides a distribution or weights of the protein concentrations,
giving a probabilistic landscape in concentration space.
When the concentration has spatial dependence, as discussed in the spatial landscape section,
the probability becomes a probabilistic functional or field (the field being the protein concentrations,
which depend on space). Therefore, by solving for the probabilistic functional or field, we
can map out the spatially dependent field landscape of the underlying network. This is crucial
for quantifying the global stability, robustness, and function of spatially dependent dynamical
systems.
The chemical master equation can be expressed in terms of a vector, n, where the components
n = (n_0, n_Δx, n_2Δx, ...) = ({n_x}) correspond to the number of bicoid proteins at evenly spaced
spatial locations x = 0, Δx, 2Δx, ... with Δx being constant. The equation becomes [265]

dP(n)/dt = g(P(n − 0̂) − P(n)) + k Σ_x ((n_x + 1)P(n + x̂) − n_x P(n))
           + D Σ_{x,y} ((n_x + 1)P(n + x̂ − ŷ) − n_x P(n)).    (143)

Here P(n) is the probability that the number and location of proteins are described by n. g is the
rate of protein production, 0̂ is a unit vector in the 0 space (representing a single protein at the
origin, spatial location 0), and the term multiplying g represents the process of a protein being
produced at the origin. k is the degradation or decay rate, x̂ represents a protein at point x, and the
term multiplying k represents the protein decay at any spatial location. D is the spatial diffusion
coefficient, and the term multiplying it gives diffusion from each spatial location to its neighbors.
The sums over x are for all space x = 0, Δx, 2Δx, ..., and over y are for all spatial neighbors of x
(y = x ± Δx).
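Equation (143) can be sampled directly with the Gillespie algorithm using three reaction channels: production at the anterior site, decay anywhere, and hopping to a neighbouring site. The minimal sketch below uses illustrative rates and reflecting boundaries (a choice of ours, since the boundary treatment is not specified here):

import numpy as np

def gillespie_bicoid(L=50, g=10.0, k=0.1, D=1.0, t_max=500.0, seed=0):
    # Gillespie sampling of Eq. (143): production at site 0 with rate g, decay of each
    # molecule with rate k, and hopping with rate D per molecule and per direction;
    # attempted hops off the ends are rejected (reflecting boundaries, illustrative choice).
    rng = np.random.default_rng(seed)
    n = np.zeros(L, dtype=int)
    t = 0.0
    while t < t_max:
        decay = k * n                        # decay propensity per site
        hop = D * n                          # hop propensity per site and per direction
        total = g + decay.sum() + 2.0 * hop.sum()
        t += rng.exponential(1.0 / total)
        r = rng.uniform(0.0, total)
        if r < g:                            # production at the anterior site
            n[0] += 1
        elif r < g + decay.sum():            # decay at a site chosen in proportion to n_x
            x = rng.choice(L, p=decay / decay.sum())
            n[x] -= 1
        else:                                # hop from a site chosen in proportion to n_x
            x = rng.choice(L, p=hop / hop.sum())
            y = x + rng.choice([-1, 1])
            if 0 <= y < L:
                n[x] -= 1
                n[y] += 1
    return n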

13.3. Results and discussions


First, we can search for a time-independent steady-state solution, dP(n)/dt = 0 for all n, of the
master equation. It turns out that an analytical expression for the steady-state probability of the
above stochastic spatial-dependent master equation can be obtained [58]:


P(n) = ∏_x^all [e^{−gz^{x/Δx}(1−z)/k} (gz^{x/Δx}(1 − z)/k)^{n_x} / n_x!].    (144)

Here z ≡ 1 + k/(2D) − √(k²/(4D²) + k/D), and Δx is the lattice spacing between each point in space.
If we substitute this form into the original chemical master equation, we obtain dP/dt = 0.
Therefore, this is the exact analytical solution to the steady-state problem.
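Equation (144) says the steady state is a product of independent Poisson distributions with site-dependent mean λ_i = (g/k)(1 − z)z^i, where i = x/Δx is the site index. The short sketch below evaluates this profile and its Poisson standard deviation, and checks that the reconstructed z satisfies D(z − 1)² = kz, which follows algebraically from the expression for z above (parameter values are illustrative):

import numpy as np

def analytical_mean(L, g, k, D):
    # Poisson means lambda_i = (g/k)*(1-z)*z**i implied by Eq. (144),
    # with z = 1 + k/(2D) - sqrt(k^2/(4D^2) + k/D) and site index i = x/dx
    z = 1.0 + k / (2.0 * D) - np.sqrt(k**2 / (4.0 * D**2) + k / D)
    assert np.isclose(D * (z - 1.0)**2, k * z)    # sanity check on the expression for z
    return (g / k) * (1.0 - z) * z**np.arange(L)

lam = analytical_mean(50, g=10.0, k=0.1, D=1.0)
print(lam[:5])              # exponentially decaying mean profile
print(np.sqrt(lam[:5]))     # intrinsic (Poisson) standard deviations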
Since we have the full steady-state distribution, we can explore the average and fluctuations
of the bicoid. Both the mean values and the intrinsic fluctuations decay exponentially in space
from anterior to posterior. These match the current experimental data (see Figures 52 and 53)
[274]. The main portion of Figure 52 has a curve with predicted values, and two more curves
with predicted uncertainties from both intrinsic and experimental fluctuations.
The inset shows the probability distributions with only intrinsic (excluding experimental
fluctuations) fluctuations for nuclei located at 47% and 49% embryo length. Despite signifi-
cant overlap of the probability distributions in bicoid concentrations, the embryo seems capable
of distinguishing which side of the 48% embryo boundary they fall onto. It is important that
the statistical distribution of the intrinsic fluctuations can be obtained exactly and fits with the
experimentally observed fluctuations.
To quantify the statistical fluctuations, one finds σ = √(gG_x/k) = √(g e^{ln(z)x/Δx}(1 − z)/k). We
see that, since ln z < 0, the size of the fluctuation decays from anterior to posterior (shown
in Figure 53). Adding the expected experimental noise from photon counting and focal plane
alignment gives larger (no longer purely Poisson) fluctuations. The inset shows the fractional uncertainty,
σ/C, where C is the number of bicoid molecules. Importantly, the increasing trend of the
total (experimental plus intrinsic) fractional uncertainty from anterior to posterior agrees with
experiment.
The present model provides an insight into how the D. melanogaster system shapes the sta-
tistical distribution [58]. We can now quantify the underlying potential landscape U = − ln P

Figure 52. Calculation of expected distribution versus data, courtesy of Dr Thomas Gregor, from an embryo.
Error bars include intrinsic Poisson noise from proteins, photon counting noise, both of which are calculated
from first principles, and a small constant Gaussian noise intended to account for focal plane alignment.
Errors from nuclear identification are not included. This fit gives χ²/dof = 1.26 (from Ref. [58]).

Figure 53. Calculated noise from Figure 52; dotted blue line shows intrinsic noise only, while solid red
line shows both intrinsic and predicted experimental noise. Inset shows the predicted total experimental
standard deviation divided by the mean, with dotted blue and solid red lines having the same meaning. Both
solid lines follow roughly the trends as in [274], though without errors from nuclear identification they are
somewhat smaller than the real experimental uncertainties (from Ref. [58]).
(a) (b) (c) (d)
Figure 54. Potential versus number of proteins over space. The main graph shows the complete figure, in which each point in space (percent egg length) has its
own potential energy function. Three of these are shown explicitly above the main graph, at 20 %, 50 %, and 80 % egg length (from Ref. [58]).

and relate to the steady-state probabilistic functional obtained from the analytical solution of the
spatial-dependent master equation. In Figure 54, the landscape is shown in both concentration
and physical space. We can see from the bottom panel that the landscape topography at each spa-
tial point has a funnel shape, with the bottom of the potential basin corresponding to the peak of
the steady-state probability distribution at that location. We can also see this clearly from the 2D
representations of the potential versus protein number at different spatial locations (at 20%, 50%,
and 80% egg length). The widths or the spread of the funnels are measured by the variances in
potentials at each spatial locations. A funneled landscape indicates that the gene network under-
lying the developmental process is stable and robust. Therefore, the developmental system can
realize its biological function effectively and reliably. We can also see that the funneled landscape
becomes narrower from anterior to posterior. This shows varying stability and robustness along
different spatial locations.
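Because each location carries an independent Poisson distribution in the steady state, the local landscape U(n) = −ln P(n) of Figure 54 can be evaluated in closed form. The sketch below reuses analytical_mean from the previous sketch and picks three positions loosely echoing the 20%, 50%, and 80% egg-length panels; the mapping of lattice sites to egg length is our own simplification.

import numpy as np
from scipy.stats import poisson

def local_landscape(lam, n_max=200):
    # U(n) = -ln P(n) for a Poisson steady state of mean lam at a single location
    n = np.arange(n_max + 1)
    return n, -poisson.logpmf(n, lam)

lam_profile = analytical_mean(50, g=10.0, k=0.1, D=1.0)   # from the previous sketch
for frac in (0.2, 0.5, 0.8):                              # rough stand-ins for 20%, 50%, 80% egg length
    lam = lam_profile[int(frac * 50)]
    n, U = local_landscape(lam)
    print(f"site fraction {frac:.0%}: minimum of U at n = {n[np.argmin(U)]}, width ~ {np.sqrt(lam):.1f}")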

14. Conclusion
In summary, we have presented a review of the landscape and flux theory for non-equilibrium dynamical
systems. We demonstrated that the global nature of the dynamics of a non-equilibrium system
is determined by two key factors: the underlying landscape and a curl probability flux. While the
landscape (U) reflects the underlying probability of steady states (P) (U = − ln Pss ) and gives
a global characterization and stability measure of the system, the curl steady-state probability
flux provides a measure of the degree of detailed balance breaking. We found that there is an
analogy between equilibrium dynamics and electron motion in an electric field. There is also an analogy
between non-equilibrium dynamics and electron motion in combined electric and magnetic
fields. The landscape and flux theory has many interesting consequences including the presence
of irreversible kinetic paths that do not necessarily pass through the landscape saddle points;
non-equilibrium TST at the new saddle on the optimal paths for small but finite fluctuations; gen-
eralized fluctuation–dissipation relationship for non-equilibrium dynamical systems where the
response function is not just equal to the fluctuations at the steady state alone as in the equi-
librium case but with additional contribution from the curl flux in maintaining the steady state;
non-equilibrium thermodynamics where the free energy change is not just equal to the entropy
production alone as in equilibrium case but also with additional contribution from house-keeping
part with the non-zero curl flux in maintaining the steady state; gauge theory and geometrical
connection where the flux is found to be the origin of the gauge field curvature and there is
a topological phase in analogy to the Berry phase in quantum mechanics; coupled landscapes
where non-adiabaticity of multiple landscapes in non-equilibrium dynamics can be analytically
studied as a single landscape in extended dimensions using the landscape and flux theory, with
an eddy current emerging from the non-zero curl flux; and the landscape and flux theory generalized to
the non-equilibrium field theory for stochastic spatial dynamics.
The review provides concrete examples of physical and biological systems that demonstrate
the insights that we can find using the landscape and flux theory. These include the cell cycle
where the landscape attracts the system down to the oscillation attractor, while the flux drives the
coherent cycle motion on the oscillation ring, the different phases of the cell cycle can be iden-
tified and quantified as local basins of attraction on the cycle path and the checkpoints can be
identified and quantified as the barriers or transition states between the local basins of attraction;
stem cell differentiation where Waddington landscape for development as well as for the differ-
entiation and reprogramming paths can be quantified; the quantification of landscape topography
of diseases such as cancer; evolution where more general evolution dynamics beyond Wright and
Fisher can be quantified with the specific example of allele frequency-dependent selection and
the Red Queen phenomenon; ecology where the landscape and flux as well as the global stabil-
ity of predator–prey, cooperation, and competition models are quantified; neural networks where
general asymmetrical connections are considered for learning and memory; gene self-regulators
where the multi-landscape non-adiabatic dynamics of gene expression can be described with
the single landscape and flux in extended dimensions and analytically treated; chaotic strange
attractor where the flux is crucial for the chaotic dynamics.
There are many important open issues and questions that remain challenging. The underly-
ing mechanisms for diseases such as cancer are still challenging and important to study both in
principle and in detail despite the progress so far. Specific questions include how to uncover and
quantify globally and locally the genetic origin, environmental influences, epigenetics and het-
erogeneity of various types of cancer formation. For development and differentiation, it becomes
crucial to uncover and quantify the paths of reprogramming, de-differentiation, and transdiffer-
entiation among different kinds of differentiated cells for obvious tissue re-engineering purpose.
Furthermore, the relationship between development and cancer remains to be seen. For evolution,
adaptive landscape and dynamics are critical in understanding the general multi-locus evolution
including the effects of recombination and epistasis, but these are still unclear. For an ecological landscape,
the spatial effects are clearly important and still need to be taken into account. Furthermore, the
interplay between the evolution and ecology can be studied. For neural dynamics, the global
quantification of learning and memory, thinking about decision-making and neural modulations
in terms of landscapes and flux is crucial for uncovering and understanding brain function. For
chaotic dynamics, many of the examples need to be worked out to understand the interplay between
the underlying landscape–flux and the dynamics. The landscape and flux theory presented here for
classical dynamical systems can even be generalized to non-equilibrium quantum sys-
tems [78]. The role of landscape and flux in determining the quantum dynamics, the transport and
coherence, the link to dissipation and dynamical coherence are important not only for the funda-
mental understanding, but also for the application to real quantum dynamical systems [275–282]
such as the energy and charge transfer in light harvesting complex and photosynthetic reaction
center [283–287], nano-transport [288–295] as well as quantum computations [296–302].
The landscape and flux theory for non-equilibrium dynamical systems has certain philosoph-
ical implications. (1) We can see that the dynamics of complex non-equilibrium systems are
determined by the two major driving forces, the probability landscape and the probability flux.
The driving forces have a statistical/probabilistic or entropy origin. As discussed in this review,
through the curl nature and gauge symmetry, the driving force also has geometrical/topological
interpretation. The nature of the equilibrium systems is related to the flat space, while the nature
of the non-equilibrium systems is related to the curved space. The non-equilibrium dissipa-
tion can be interpreted as the cost for making flat space curved. The statistics/probability or
entropy and geometry/topology appear to be intimately linked. Gravity and curved spacetime
may provide a good example of such a connection. This implies that the foundation of the general
driving force characterizing the physical interactions of complex non-equilibrium systems may
be rooted from statistics/probability/entropy or geometry/topology. After all, the statistics and
geometry/topology are the most fundamental entities we can reliably count on. (2) The duality of
the driving force here echoes the ancient Chinese philosophy of Yin–Yang duality for the origin
of the dynamics of the world. The non-equilibrium complex world evolves under the interplay
between the landscape and the flux rather than the single dominant one (such as landscape alone
for equilibrium systems). (3) The curl nature of the probability flux explicitly breaks the detailed
balance. This leads to the time reversal asymmetry, giving the direction of the time. (4) Since a
non-equilibrium landscape emerges as a result of the dynamics rather than being known a priori as in
equilibrium dynamics, this indicates the importance of the process in determining the ultimate fates.
(5) Furthermore, the curl flux exhibits a global nature (analogous to Berry phase or topology).
This is in contrast with the detailed balance condition (zero flux) which can be locally restored.
The flux has the global curl nature and cannot be locally adjusted. It also implies that the dynamics
of the non-equilibrium complex world can no longer be locally determined but must be globally
connected. This suggests the importance of the global (non-local) connections and relationships
in addition to the individual components in determining the evolution dynamics. (6) In fact, the
global (non-local) nature can no longer be understood through the basic building blocks alone but also
requires their connections or relationships. Understanding the individual building blocks cannot give
a full picture of the behavior of the whole system. Perhaps, the most fundamental nature of the
world is embedded in the processes, connections, and relationships rather than in the individual
building blocks, since the world may be after all emergent. Therefore, one should understand the
non-equilibrium complex world as a whole rather than only in terms of individual pieces. (7) As
we see, the statistical, geometrical/topological, and dual nature of the driving forces in terms of
the probability landscape and global probabilistic curl flux, emergence, processes, relationships,
and global (non-local) connections are all related and crucial for the dynamics of the world. Many
aspects of these relations can be quantified in the theoretical framework presented in this review.
Non-equilibrium statistical mechanics is the foundation of complex physical and biological
dynamical systems. We have only presented our own view and approach among the existing ones
in this review. We expect the field to continue to flourish, with more approaches
and deeper understanding, due to the fundamental importance of the physics and especially its
applications to biological systems.

Acknowledgments
I would like to thank my students, postdoctoral researchers, and collaborators for their efforts
and contributions in establishing the landscape–flux theory and its applications for the non-
equilibrium dynamical systems. They are Dr Li Xu, Mr. Kun Zhang, Dr Chunhe Li, Dr Haidong
Feng, Dr Bo Han, Dr Dave Lepzelter, Dr Liufang Xu, Dr Feng Zhang, Dr Wei Wu, Mr. Han Yan,
Mr. Zhedong Zhang, Mr. Lei Zhao, Mr. Bo Huang, Dr Xuefeng Xia, Dr Xidi Wang, Prof. Masaki
Sasai, Prof. Zhirong Sun, Prof. Hualin Shi, Prof. Keun-Young Kim, and Prof. Erkang Wang. I
also thank Dr Li Xu for the help in preparing references and figures for this review.
I would like to thank Prof. Peter G. Wolynes for his constant encouragement and insights. I
thank Prof. Jose N. Onuchic for his support. I also thank Prof. Hong Qian for useful discussions.

Notation
x system variables (concentrations, densities, etc.)
F(x) driving force
η(x, t) stochastic driving force
D a scale factor representing the magnitude of the fluctuations
D diffusion tensor or matrix
P(x, t) probability
J(x, t) probability flux
Jss probability flux of steady state
U(x) the population potential landscape
φ0 the intrinsic potential landscape
V the intrinsic flux velocity
L(x) Lagrangian
H(x) Hamiltonian
Eeff effective energies

k kinetic rate constant


ΔG the chemical potential difference in terms of the Gibbs free energy
S(x) the action
L(x(t)) Lagrangian for each path
dl infinitesimal displacement along 1D path
SHJ the discretized target function
P the probability distribution functional
J^{a,q} probability current functional
U the internal energy functional
Vss^{a,q} steady-state probability velocity functional
S the entropy functional of the variables
Ṡ(t) entropy change for the system
hd (t) the heat dissipation
ep (t) the change of the total entropy
Mp the transition matrix in population space
Mc the transition matrix in coherence space
Mpc the coupling transition matrix between population and coherence space
ρS the reduced density matrix for non-equilibrium quantum system
Tmn the transfer matrix
σ (t) the variance
Ai Allele
Ai Aj genotype
wij the fitness of genotype Ai Aj
w̄ the mean fitness of population
VA (w) the additive genetic variance
ui the effective input action potential of the neuron i
Ci the capacitance
Ri the resistance of the neuron i
P(n) the probability of the protein concentration when the gene is on or off
g0 the production or synthesis rate when a regulation protein is bound to the gene
g1 the production or synthesis rate when the regulation protein is not bound to the gene
ω the adiabaticity parameter measuring the unbinding rate relative to the degradation rate

Disclosure statement
No potential conflict of interest was reported by the authors.

Funding
The research studies presented in this review article were supported in part by National
Science Foundation [USA: NSF-0447533, NSF-0926287, and NSF-0947767] and National Nat-
ural Science Foundation of China [NSFC-21190040, NSFC-11174105, NSFC-91225114, and
NSFC-91430217].

References
[1] N.G. Van Kampen, Stochastic Processes in Physics and Chemistry, Elsevier, Amsterdam, 2007.
[2] C.W. Gardiner, Handbook of Stochastic Methods, Springer, New York, 1983.
[3] J. Schnakenberg, Rev. Mod. Phys. 48 (1976), pp. 571–585.

[4] G. Nicolis and I. Prigogine, Self-organization in Nonequilibrium Systems: From Dissipative Struc-
tures to Order Through Fluctuations, Wiley, New York, 1977.
[5] H. Haken, Advanced Synergetics: Instability Hierarchies of Self-organizing Systems and Devices,
Springer, Berlin, 1987.
[6] R. Graham, Macroscopic potentials, bifurcations and noise in dissipative systems, in Noise in Non-
linear Dynamical Systems, Vol. 1, F. Moss, P.V.E. McClintock, eds., Cambridge University Press,
Cambridge, 1989, pp. 225–278.
[7] G. Hu, Stochastic Force and Nonlinear Systems, Shanghai Science Education, Shanghai, 1995.
[8] M. Sasai and P.G. Wolynes, Proc. Natl. Acad. Sci. USA 100 (2003), pp. 2374–2379.
[9] H. Qian, Method. Enzymol. 467 (2009), pp. 111–134.
[10] C. VandenBroeck and M. Esposito, Phys. Rev. E 82 (2010), p. 011144.
[11] P. Ao, Commun. Theor. Phys. 49 (2008), pp. 1073–1090.
[12] H. Ge and H. Qian, Phys. Rev. E 81 (2010), p. 051133.
[13] J. Wang, L. Xu, and E.K. Wang, Proc. Natl. Acad. Sci. USA. 105 (2008), pp. 12271–12276.
[14] J. Wang, K. Zhang, and E.K. Wang, J. Chem. Phys. 133 (2010), p. 125103.
[15] H.D. Feng and J. Wang, J. Chem. Phys. 135 (2011), p. 234511.
[16] S. Lapidus, B. Han, and J. Wang, Proc. Natl. Acad. Sci. USA. 105 (2008), pp. 6039–6044.
[17] J. Wang, C. H. Li, and E. K.Wang, Proc. Natl. Acad. Sci. USA. 107 (2010), pp. 8195–8200.
[18] L.E. Reichl, Modern Course in Statistical Physics, University of Texas Press, Texas, 1984.
[19] H. Frauenfelder, S.G. Sligar, and P.G. Wolynes, Science 254 (1991), pp. 1598–1603.
[20] P.G. Wolynes, J.N. Onuchic, and D. Thirumalai, Science 267 (1995), pp. 1619–1620.
[21] D.Q. Jiang and M. Qian, Mathematical Theory of Nonequilibrium Steady States: On the Frontier of
Probability and Dynamical Systems, Springer, New York, 2003.
[22] B. Zhang and P.G. Wolynes, Proc. Natl. Acad. Sci. USA. 111 (2014), pp. 10185–10190.
[23] G. Nicolis and I. Prigogine, Exploring Complexity – An Introduction, Freeman and Co, New York,
1989.
[24] P. Gaspard, Chaos, Scattering and Statistical Mechanics, Cambridge University Press, Cambridge,
1998.
[25] U. Seifert, Eur. Phys. J. B Condens. Matter Phys. 64 (2009), pp. 423–431.
[26] J. Wang, B. Huang, X.F. Xia, and Z.R. Sun, PLoS Comput. Biol. 2 (2006), pp. 1385–1394.
[27] T. Hatano and S. Sasa, Phys. Rev. Lett. 86 (2001), pp. 3463–3466.
[28] T. Harada and S. Sasa, Phys. Rev. Lett. 95 (2005), pp. 130602-01–130602-04.
[29] U. Seifert and T. Speck, Europhys. Lett. 89 (2010), pp. 10007-01–10007-04.
[30] D.J. Evans, E.G.D. Cohen, and G.P. Morriss, Phys. Rev. Lett. 71 (1993), pp. 2401–2404.
[31] G. Gallavotti and E.G.D. Cohen, Phys. Rev. Lett. 74 (1995), pp. 2694–2697.
[32] J. Kurchan, J. Phys. A 31 (1988), pp. 3719–3729.
[33] J.L. Lebowitz and H. Spohn, J. Stat. Phys. 95 (1999), pp. 333–365.
[34] C. Jarzynski, Phys. Rev. Lett. 78 (1997), pp. 2690–2693.
[35] C. Jarzynski, Phys. Rev. E 56 (1997), pp. 5018–5035.
[36] G.E. Crooks, Phys. Rev. E 60 (1999), pp. 2721–2726.
[37] G.E. Crooks, Phys. Rev. E 61 (2000), pp. 2361–2366.
[38] M.I. Freidlin and A.D. Wentzell, Random Perturbations of Dynamical Systems, Springer, New
York/Berlin, 1984.
[39] Z. Schusss, Theory and Applications of Stochastic Processes, Springer-Verlag, New York, 2010.
[40] E. Ben-Jacob, D.J. Bergman, and Z. Schuss, Phys. Rev. B 25 (1982), pp. 519–522.
[41] E. Ben-Jacob, D.J. Bergman, Y. Imry, B.J. Matkowsky, and Z. Schuss, Appl. Phys. Lett. 42 (1983),
pp. 1045–1047.
[42] E. Ben-Jacob and D.J. Bergman, Phys. Rev. A 29 (1984), pp. 20–21.
[43] R.S. Maier and D.L. Stein, SIAM J. Appl. Math. 57 (1997), pp. 752–790.
[44] R. Kupferman, M. Kaiser, Z. Schuss, and E. Ben-Jacob, Phys. Rev. A 45 (1992), pp. 745–756.
[45] A.M. Walczak, J.N. Onuchic, and P.G. Wolynes, Proc. Natl. Acad. Sci. 102 (2005), pp. 18926–18931.
[46] D.M. Roma, R.A. O’Flanagan, A.E. Ruckenstein, A.M. Sengupta, and R. Mukhopadhyay, Phys. Rev.
E 71 (2005), p. 011902.

[47] E. Aurell and K. Sneppen, Phys. Rev. Lett. 88 (2002), p. 048101.


[48] M. Assaf, E. Roberts, and Z. Luthey-Schulten, Phys. Rev. Lett. 106 (2010), p. 248102.
[49] C. Lv, X. Li, F. Li, and T. Li, PLoS ONE 9 (2014), p. e88167.
[50] M. Lu, J.N. Onuchic, and E. Ben-Jacob, Phys. Rev. Lett. 113 (2014), p. 078102.
[51] H. Ge and H. Qian, Chaos 22 (2012), p. 023140.
[52] B.S. Lindley and I.B. Schwartz, Phys. D 255 (2013), pp. 22–30.
[53] J. Wang, K. Zhang, H.Y. Lu, and E.K. Wang, Biophys. J. 89 (2005), pp. 1612–1620.
[54] J. Wang, K. Zhang, H.Y. Lu, and E.K. Wang, Phys. Rev. Lett. 96 (2006), p. 168101.
[55] J. Wang, K. Zhang, H.Y. Lu, and E.K. Wang, Biophys. J. 91 (2006), pp. 866–872.
[56] K.Y. Kim and J. Wang, PLoS Comput. Biol. 3 (2007), pp. 565–577.
[57] B. Han and J. Wang, Biophys. J. 92 (2007), pp. 3755–3763.
[58] D. Lepzelter and J. Wang, Phys.Rev. E 77 (2008), p. 041917.
[59] J. Wang, L. Xu, and E.K. Wang, Biophys. J. 97 (2009), pp. 3038–3046.
[60] J. Wang, L. Xu, E.K. Wang, and S. Huang, Biophys. J. 99 (2010), pp. 29–39.
[61] H.D. Feng, B. Han, and J. Wang, J. Phys. Chem. Lett. 1 (2010), pp. 1836–1840.
[62] J. Wang, K. Zhang, L. Xu, and E.K. Wang, Proc. Natl. Acad. Sci. USA. 108 (2011), pp. 8257–8262.
[63] H.D. Feng, B. Han, and J. Wang, J. Phys. Chem. B 115 (2011), pp. 1254–1261.
[64] C.H. Li, J. Wang, and E.K. Wang, Chem. Phys. Lett. 505 (2011), pp. 75–80.
[65] H.D. Feng, B. Han, and J. Wang, Biophys. J. 102 (2012), pp. 1001–1010.
[66] H.D. Feng and J. Wang, Sci. Rep. 2 (2012), pp. 550-01–550-06.
[67] L.F. Xu, H.L. Shi, H.D. Feng, and J. Wang, J. Chem. Phys. 136 (2012), p. 165102.
[68] C.H. Li, E.K. Wang, and J. Wang, J. Chem. Phys. 136 (2012), p. 194108.
[69] F. Zhang, L. Xu, K. Zhang, E.K. Wang, and J. Wang, J. Chem. Phys. 137 (2012), p. 065102.
[70] C.H. Li, E.K. Wang, and J. Wang, ACS Synth. Biol. 1 (2012), pp. 229–239.
[71] L. Xu, F. Zhang, E.K. Wang, and J. Wang, Nonlinearity 26 (2013), pp. 69–84.
[72] H. Yan, L. Zhao, L. Hu, X. Wang, E.K. Wang, and J. Wang, Proc. Natl. Acad. Sci. USA. 110 (2013),
pp. E4185–E4194.
[73] W. Wu and J. Wang, J. Phys. Chem. B 117 (2013), pp. 12908–12934.
[74] K. Zhang, M. Sasai, and J. Wang, Proc. Natl. Acad. Sci. USA 110 (2013), pp. 14930–14935.
[75] W. Wu and J. Wang, J. Chem. Phys. 139 (2013), p. 121920.
[76] C.H. Li and J. Wang, PLoS Comput. Biol. 9 (2013), p. e1003165.
[77] C.H. Li and J. Wang, J. R. Soc. Interface 10 (2013), p. 20130787.
[78] Z.D. Zhang and J. Wang, J. Chem. Phys. 140 (2014), p. 245101.
[79] L. Xu, K. Zhang, and J. Wang, PLoS ONE 9 (2014), p. e105216.
[80] F. Zhang, L.F. Xu, and J. Wang, Chem. Phys. Lett. 599 (2014), pp. 38–43.
[81] L. Xu, F. Zhang, K. Zhang, E.K. Wang, and J. Wang, PLoS ONE 9 (2014), p. e86746.
[82] H.D. Feng, K. Zhang, and J. Wang, Chem. Sci. 5 (2014), pp. 3761–3769.
[83] C.H. Li and J. Wang, Proc. Natl. Acad. Sci. USA. 111 (2014), pp. 14130–14135.
[84] C.H. Li and J. Wang, J. R. Soc. Interface 11 (2014), p. 20140774.
[85] W. Wu and J. Wang, J. Chem. Phys. 141 (2014), p. 105104.
[86] L. Van Valen, Evol. Theory 1 (1973), pp. 1–30.
[87] E.A. Jackson, Perspectives of Nonlinear Dynamics, Vols 1 and 2, Cambridge University Press,
Cambridge, 1989.
[88] P.S. Swain, M.B. Elowitz, and E.D. Siggia, Proc. Natl. Acad. Sci. USA. 99 (2002), pp. 12795–12800.
[89] P. Ao, J. Phys. A 37 (2004), p. L25.
[90] J. H. Xing, J. Phys. A: Math. Theory 43 (2010), p. 37500.
[91] J.X. Zhou, M.D. Aliyu, E. Aurell, and S. Huang, J. R. Soc. Interface 9 (2012), pp. 3539–3553.
[92] H. Qian, Phys. Lett. A 378 (2014), pp. 609–616.
[93] H. Risken, The Fokker-Planck Equation, 2nd ed., Springer-Verlag, New York, 1989.
[94] H. Ge, Phys. Rev. E 89 (2014), p. 0221127.
[95] T.L. Hill and O. Kedem, J. Theoret. Biol. 10 (1966), pp. 399–441.
[96] T.L. Hill, J. Theoret. Biol. 10 (1966), pp. 442–459.
[97] G. Oster and C. Desoer, J. Theoret. Biol. 32 (1970), pp. 219–241.

[98] G. Oster, A. Perelson, and A. Katchalsky, Nature 234 (1971), pp. 393–399.
[99] M.P. Qian and M. Qian, Z Wahrsch Verw Gebiete 59 (1982), pp. 203–210.
[100] M. Qian and Z.T. Hou, Reversible Markov Process, Hunan Scientific Publisher, Changsha,
1979.
[101] R.K.P. Zia and B. Schmittmann, J. Stat. Mech. Theory Exp. 7 (2007), p. 07012.
[102] I.M. Mitchell, J. Sci. Comput. 35 (2008), pp. 300–329.
[103] H. Qian and E.L. Elson, Biophys. Chem. 101–102 (2002), pp. 565–576.
[104] H. Qian, Annu. Rev. Phys. Chem. 58 (2007), pp. 113–142.
[105] D.T. Gillespie, J. Chem. Phys. 113 (2000), pp. 297–306.
[106] M. Vellela and H. Qian, Proc. R. Soc. A 466 (2010), pp. 771–788.
[107] H. Qian, S. Saffarian, and E.L. Elson, Proc. Natl. Acad. Sci. USA. 99 (2002), pp. 10376–10381.
[108] A.M. Walczak, M. Sasai, and P.G. Wolynes, Biophys. J. 88 (2005), pp. 828–850.
[109] A.H. Lang, H. Li, J.J. Collins, and P. Mehta, PLoS Comp. Biol. 10 (2014), p. e1003734.
[110] P. Wang, C. Song, H. Zhang, Z. Wu, X.-J. Tian, and J.H. Xing, J. R. Soc. Interface Focus 4 (2014),
p. 20130068.
[111] M. Lu, M.K. Jolly, R. Gomoto, B. Huang, J.N. Onuchic, and E. Ben-Jacob, J. Phys. Chem. B 117
(2013), pp. 13164–13174.
[112] M. Lu, M.K. Jolly, H. Levine, J.N. Onuchic, and E. Ben-Jacob, Proc. Natl. Acad. Sci. USA 110
(2013), pp. 18144–18149.
[113] M.K. Jolly, B. Huang, M. Lu, S. A. Mani, H. Levine, and E. Ben-Jacob, J. R. Soc. Interface 11 (2014),
p. 20140962.
[114] L. Chen and T. Lu, J. Phys. Chem. B 117 (2013), pp. 12995–13004.
[115] L. Onsager and S. Machlup, Phys. Rev. 91 (1953), pp. 1505–1512.
[116] N. Wiener, Generalized Harmonic Analysis and Tauberian Theorems, MIT Press, Boston, 1964.
[117] R.P. Feynman and A.R. Hibbs, Quantum Mechanics and Path Integrals, McGraw-Hill, New York,
1965.
[118] K.L.C. Hunt and J. Ross, J. Chem. Phys. 75 (1981), pp. 976–984.
[119] R. Olender and R. Elber, J. Chem. Phys. 105 (1996), pp. 9299–9315.
[120] R. Elber, J. Meller, and R. Olender, J. Phys. Chem. B 103 (1999), pp. 899–911.
[121] E.H. Davidson, J.P. Rast, P. Oliveri, A. Ransick, C. Calestani, C.H. Yuh, T. Minokawa, G. Amore,
V. Hinman, C. Arenas-Mena, O. Otim, C.T. Brown, C.B. Livi, P.Y. Lee, R. Revilla, A.G. Rust, Z.J.
Pan, M.J. Schilstra, P.J.C. Clarke, M.I. Arnone, L. Rowen, R.A. Cameron, D.R. McClay, L. Hood,
and H. Bolouri, Science 295 (2002), pp. 1669–1678.
[122] C.Y. Huang and J.E. Jr Ferrell, Proc. Natl. Acad. Sci. USA 93 (1996), pp. 10078–10083.
[123] H. Eyring, J. Chem. Phys. 3 (1935), pp. 107–115.
[124] H.A. Kramers, Phys. A 7 (1940), pp. 284–304.
[125] B. Caroli, H. Caro, and B. Roulet, J. Stat. Phys. 26 (1981), pp. 83–111.
[126] W. Bialek, Stability and Noise in Biochemical Switches, MIT Press, Cambridge, 2001.
[127] R. Landauer and J. Swanson, Phys. Rev. 121 (1961), pp. 1668–1674.
[128] P. Hanggi, P. Talkner, and M. Borkovec, Rev. Modern Phys. 62 (1990), pp. 251–342.
[129] L.D. Landau and E.M. Lifshitz, Mechanics, 3rd ed., Pergamon Press, Oxford, 1996.
[130] U. Seifert, Rep. Progr. Phys. 75 (2012), p. 126001.
[131] R. Kubo, Rep. Progr. Phys. 29 (1966), pp. 255–284.
[132] U. Deker and F. Haake, Phys. Rev. A 11 (1975), pp. 2043–2056.
[133] L. Cugliandolo, J. Kurchan, and L. Peliti, Phys. Rev. E 55 (1997), pp. 3898–3914.
[134] P. Hanggi and H. Thomas, Phys. Rep. 88 (1982), pp. 207–319.
[135] R. Chetrite, G. Falkovich, and K. Gawedzki, J. Stat. Mech. Theory Exp. 8 (2008), p. P08005.
[136] R. Chetrite, Phys. Rev. E 80 (2009), p. 051107.
[137] G. Verley, K. Mallick, and D. Lacoste, Europhys. Lett. 93 (2011), p. 10002.
[138] M. Baiesi, C. Maes, and B. Wynants, Phys. Rev. Lett. 103 (2009), p. 010602.
[139] E. Lippiello, F. Corberi, and M. Zannetti, Phys. Rev. E 71 (2005), p. 036104.
[140] U.M.B. Marconi, A. Puglisi, L. Rondoni, and A. Vulpiani, Phys. Rep. 461 (2008), pp. 111–195.
[141] Y. Okabe, Y. Yagi, and M. Sasai, J. Chem. Phys. 127 (2007), p. 105107.

[142] T. Lu, J. Hasty, and P.G. Wolynes, Biophys. J. 91 (2006), pp. 84–94.
[143] J.R. Gomez-Solano, A. Petrosyan, S. Ciliberto, R. Chetrite, and K. Gawedzki, Phys. Rev. Lett. 103
(2009), p. 040601.
[144] V. Blickle, T. Speck, L. Helden, U. Seifert, and C. Bechinger, Phys. Rev. Lett. 96 (2006), p. 070603.
[145] J.R. Gomez-Solano, A. Petrosyan, S. Ciliberto, and C. Maes, J. Stat. Mech. Theory Exp. (2011), p.
P01008.
[146] J. Prost, J.F. Joanny, and J.M.R. Parrondo, Phys. Rev. Lett. 103 (2009), p. 090601.
[147] U. Seifert, Phys. Rev. Lett. 95 (2005), p. 040602.
[148] T. Speck and U. Seifert, J. Phys. A 38 (2005), pp. L581–L588.
[149] M.E. Peskin and D.V. Schroeder, An Introduction to Quantum Field Theory, Addison-Wesley,
Reading, MA, 1995.
[150] G.M. Wang, E.M. Sevick, E. Mittag, D.J. Searles, and D.J. Evans, Phys. Rev. Lett. 89 (2002),
p. 050601.
[151] D.J. Evans, E.G.D. Cohen, and G.P. Morriss, Phys. Rev. Lett. 71 (1993), pp. 2401–2404.
[152] G.M. Wang, E.M. Sevick, E. Mittag, D.J. Searles, and D.J. Evans, Phys. Rev. Lett. 89 (2002),
p. 050601.
[153] M. Polettini, Eur. Phys. Lett. 97 (2012), p. 30003.
[154] R.A. Marcus, Ann. Rev. Phys. Chem. 15 (1964), pp. 155–196.
[155] J.D. Morgan and P.G. Wolynes, J. Phys. Chem. 91 (1987), pp. 874–883.
[156] A.V. Getling, Rayleigh Benard Convection: Structures and Dynamics, World Scientific, Singapore,
1998.
[157] J. Jaeger, S. Surkova, M. Blagov, H. Janssens, D. Kosman, K.N. Kozlov, Manu, E. Myas-
nikova, C.E. Vanario-Alonso, M. Samsonova, D.H. Sharp, and J. Reinitz, Nature 430 (2004), pp.
368–371.
[158] S.W. Wu, G.V. Nazin, X. Chen, X.H. Qiu, and W. Ho, Phys. Rev. Lett. 93 (2004), pp. 236802-01–
236802-04.
[159] B. Novak, A. Csikasz-Nagy, B. Gyorffy, K.C. Chen, and J.J. Tyson, Biophys. Chem. 72 (1998), pp.
185–200.
[160] K.C. Chen, A. Csikasz-Nagy, B. Gyorffy, J. Val, B. Novak, and J.J. Tyson, Mol. Biol. Cell 11 (2000),
pp. 369–391.
[161] K.C. Chen, L. Calzone, A. Csikasz-Nagy, F.R. Cross, B. Novak, and J.J. Tyson, Mol. Biol. Cell 15
(2004), pp. 3841–3862.
[162] F. Li, T. Long, Y. Lu, Q. Ouyang, and C. Tang, Proc. Natl. Acad. Sci. USA 101 (2004), pp. 4781–4786.
[163] C. Gerard and A. Goldbeter, Proc. Natl. Acad. Sci. USA 106 (2009), pp. 21643–21648.
[164] C. Gerard and A. Goldbeter, PLoS Comput. Biol. 8 (2012), p. e1002516.
[165] C. Gerard and A. Goldbeter, Chaos 20 (2010), p. 045109.
[166] R. Singhania, R.M. Sramkoski, J.W. Jacobberger, and J.J. Tyson, PLoS Comput. Biol. 7 (2011),
p. e1001077.
[167] J. Ferrell, T. Tsai, and Q. Yang, Cell 144 (2011), pp. 874–885.
[168] D. Hanahan and R.A. Weinberg, Cell 100 (2000), pp. 57–70.
[169] D. Hanahan and R.A. Weinberg, Cell 144 (2011), pp. 646–674.
[170] K. Takahashi and S. Yamanaka, Cell 126 (2006), pp. 663–676.
[171] K. Takahashi, K. Tanabe, M. Ohnuki, M. Narita, T. Ichisaka, K. Tomoda, and S. Yamanaka, Cell 131
(2007), pp. 861–872.
[172] C.H. Waddington, The Strategy of the Genes, George Allen & Unwin, London, 1957.
[173] M. Lu, M.K. Jolly, J.N. Onuchic, and E. Ben-Jacob, Cancer Res. 74 (2014), pp. 4574–4587.
[174] G. Balazsi, A. van Oudenaarden, and J.J. Collins, Cell 144 (2011), pp. 910–925.
[175] J.E. Ferrell, Curr. Biol. 22 (2012), pp. R458–R466.
[176] S. Huang, Y. Guo, G. May, and T. Enver, Dev. Biol. 305 (2007), pp. 695–713.
[177] R. Chang, R. Shoemaker, and W. Wang, PLoS Comput. Biol. 7 (2011), p. e1002300.
[178] World Cancer Report. International Agency for Research on Cancer. 2008. Available at
http://globocan.iarc.fr/factsheets/populations/factsheet.asp?uno=900.
[179] World Cancer Report. International Agency for Research on Cancer. 2008. Available at
http://www.iarc.fr/en/publications/pdfs-online/wcr/2008/wcr_2008.pdf.
[180] ONS, Cancer Survival in England, patients diagnosed 2004-08, followed up to 2009. Available at
http://www.ons.gov.uk/ons/publications/re-reference-tables.html?edition=tcm.
[181] P. Cowin, T.M. Rowlands, and S.J. Hatsell, Curr. Opin. Cell Biol. 17 (2005), pp. 499–508.
[182] R.A. Weinberg, The Biology of Cancer, Garland Science, Taylor & Francis Group, New York, 2013.
[183] S. Kauffman, J. Theoret. Biol. 31 (1971), pp. 429–451.
[184] R.A. Gatenby and T.L. Vincent, Cancer Res. 63 (2003), pp. 6212–6220.
[185] S. Huang, I. Ernberg, and S. Kauffman, Semin. Cell Dev. Biol. 20 (2009), pp. 869–876.
[186] P. Ao, D. Galas, L. Hood, and X. Zhu, Med. Hypotheses 70 (2008), pp. 678–684.
[187] P. Creixell, E.M. Schoof, J.T. Erler, and R. Linding, Nature Biotechnol. 30 (2012), pp. 842–848.
[188] Y. Bar-Yam, D. Harmon, and B. de Bivort, Science 323 (2009), pp. 1016–1017.
[189] E. Wang, A. Lenferink, and M. O’Connor-McCourt, Cell. Mol. Life Sci. 64 (2007), pp. 1752–1762.
[190] H. Jeong, Nature 407 (2000), pp. 651–654.
[191] T. Ideker, Science 292 (2001), pp. 929–933.
[192] E.H. Davidson, J.P. Rast, P. Oliveri, A. Ransick, C. Calestani, C.H. Yuh, T. Minokawa, G. Amore, V.
Hinman, C. Arenas-Mena, O. Otim, C.T. Brown, C.B. Livi, P.Y. Lee, R. Revilla, A.G. Rust, Z.J. Pan,
M.J. Schilstra, P.J.C. Clarke, M.I. Arnone, L. Rowen, R.A. Cameron, D.R. McClay, L. Hood, and H.
Bolouri, Science 295 (2002), pp. 1669–1678.
[193] S. Van Landeghem, J. Bjorne, C.H. Wei, K. Hakala, S. Pyysalo, S. Ananiadou, H.Y. Kao, Z. Lu, T.
Salakoski, Y. Van de Peer, and F. Ginter, PLoS ONE 8 (2013), p. e55814.
[194] R. Amson, J. Karp, and A. Telerman, Curr. Opin. Oncol. 25 (2013), pp. 59–65.
[195] R. Amson, S. Pece, A. Lespagnol, R. Vyas, G. Mazzarol, D. Tosoni, I. Colaluca, G. Viale, S.
Rodrigues-Ferreira, J. Wynendaele, O. Chaloin, J. Hoebeke, J.C. Marine, P.P. Di Fiore, and A.
Telerman, Nat. Med. 18 (2011), pp. 91–99.
[196] M. Tuynder, L. Susini, S. Prieur, S. Besse, G. Fiucci, R. Amson, and A. Telerman, Proc. Natl Acad.
Sci. USA 99 (2002), pp. 14976–14981.
[197] D. Altomare and J. Testa, Oncogene 24 (2005), pp. 7455–7464.
[198] A. Cardones and L. Banez, Curr. Pharm. Des. 12 (2006), pp. 387–394.
[199] L. Ellis and D. Hicklin, Nat. Rev. Cancer 8 (2008), pp. 579–591.
[200] S. Wright, Bull. Amer. Math. Soc. 48 (1942), pp. 223–246.
[201] S.H. Rice, Evolutionary Theory: Mathematical and Conceptual Foundations, Sinauer Associates,
Massachusetts, 2004.
[202] Y.M. Svirezhev and V.P. Passekov, Fundamentals of Mathematical Evolutionary Genetics, Kluwer
Academic Publishers, Massachusetts, 1990.
[203] J.F. Crow and M. Kimura, An Introduction to Population Genetics Theory, Harper and Row, New
York, 1970.
[204] R.A. Fisher, The Genetical Theory of Natural Selection, Oxford University Press, Oxford, 1930.
[205] G.R. Price, Nature 227 (1970), pp. 520–521.
[206] J. Maynard Smith and G.R. Price, Nature 246 (1973), pp. 15–18.
[207] S. Kauffman, Origins of Order: Self-organization and Selection in Evolution, Oxford University
Press, Oxford, 1993.
[208] M.A. Nowak and K. Sigmund, Science 303 (2004), pp. 793–799.
[209] T.L. Vincent and J.S. Brown, Evolutionary Game Theory, Natural Selection, and Darwinian
Dynamics, Cambridge University Press, Cambridge, 2005.
[210] P. Ao, Phys. Life Rev. 2 (2005), pp. 117–156.
[211] J. Vandermeer and D. Goldberg, Population Ecology: First Principles, Princeton University Press,
Woodstock, Oxfordshire, 2003.
[212] J. Murray, Mathematical Biology, Springer, New York/Berlin, 1998.
[213] A. Lotka, Elements of Physical Biology, Williams and Wilkins, Baltimore, 1925.
[214] V. Volterra, Leçons sur la Théorie Mathématique de la Lutte pour la Vie, Gauthier-Villars, Paris,
1931.
[215] G. Harrison, J. Math. Biol. 8 (1979), pp. 159–171.
[216] B. Goh, J. Math. Biol. 3 (1976), pp. 313–318.
[217] B. Goh, Am. Nat. 111 (1977), pp. 135–143.
[218] S. Hsu, Math. Biosci. 39 (1978), pp. 1–10.
[219] C. Holling, Mem. Entomol. Soc. Can. 45 (1965), pp. 1–60.
[220] W. Murdoch and A. Oaten, Adv. Ecol. Res. 9 (1975), pp. 1–131.
[221] A. Hastings, J. Math. Biol. 5 (1978), pp. 399–403.
[222] I. Karsai and G. Balazsi, J. Theoret. Biol. 218 (2002), pp. 549–565.
[223] A. Bazykin, Mathematical Biophysics of Interacting Populations (in Russian), Nauka, Moscow,
1985.
[224] J. Wu, Landscape ecology, in Encyclopedia of Ecology, S.E. Jorgensen, ed., Elsevier, Oxford, 2008,
pp. 2103–2108.
[225] J. Wu and R. Hobbs, Key Topics in Landscape Ecology, Cambridge University Press, Cambridge,
2007.
[226] L.F. Abbott and W.G. Regehr, Nature 431 (2004), pp. 796–803.
[227] L.F. Abbott, Neuron 60 (2008), pp. 489–495.
[228] T.P. Vogels, K. Rajan, and L.F. Abbott, Annu. Rev. Neurosci. 28 (2005), pp. 357–376.
[229] A.L. Hodgkin and A.F. Huxley, J. Physiol. 117 (1952), pp. 500–544.
[230] J.J. Hopfield, Proc. Natl. Acad. Sci. USA 79 (1982), pp. 2554–2558.
[231] J.J. Hopfield and D.W. Tank, Science 233 (1986), pp. 625–633.
[232] M. Wang, N.J. Gamo, Y. Yang, L.E. Jin, X.J. Wang, M. Laubach, J.A. Mazer, D. Lee, and A.F.T.
Arnsten, Nature 476 (2011), pp. 210–213.
[233] G. Buzsaki and A. Draguhn, Science 304 (2004), pp. 1926–1929.
[234] X.J. Wang and J. Tegner, Proc. Natl. Acad. Sci. USA 101 (2004), pp. 1368–1373.
[235] X.J. Wang, Physiol. Rev. 90 (2010), pp. 1195–1268.
[236] S. Raghavachari, M.J. Kahana, D.S. Rizzuto, J.B. Caplan, M.P. Kirschen, B. Bourgeois, J.R. Madsen,
and J.E. Lisman, J. Neurosci. 21 (2001), pp. 3175–3183.
[237] P. Fries, J.H. Reynolds, A.E. Rorie, and R. Desimone, Science 291 (2001), pp. 1560–1563.
[238] P. Fries, Annu. Rev. Neurosci. 32 (2009), pp. 209–224.
[239] R.W. McCarley and S.G. Massaquoi, Am. J. Physiol. Regul. Integr. Comp. Physiol. 251 (1986), pp. R1011–R1029.
[240] S.G. Massaquoi and R.W. McCarley, J. Sleep Res. 1 (1992), pp. 138–143.
[241] C.D. Brody and J.J. Hopfield, Neuron 37 (2003), pp. 843–852.
[242] J.J. Hopfield, Proc. Natl. Acad. Sci. USA 88 (1991), pp. 6462–6466.
[243] S. Grillner and P. Wallen, Annu. Rev. Neurosci. 8 (1985), pp. 233–261.
[244] E. Duzel, W.D. Penny, and N. Burgess, Curr. Opin. Neurobiol. 20 (2010), pp. 143–149.
[245] H.S. Seung, Proc. Natl. Acad. Sci. USA 93 (1996), pp. 13339–13344.
[246] H.S. Seung, Neural Netw. 11 (1998), pp. 1253–1258.
[247] D.J. Amit, Modeling Brain Function: The World of Attractor Neural Networks, Cambridge University
Press, Cambridge, 1992.
[248] S. Lewandowsky and B.B. Murdock, Psychol. Rev. 96 (1989), pp. 25–57.
[249] G.D.A. Brown, T. Preece, and C. Hulme, Psychol. Rev. 107 (2000), pp. 127–181.
[250] H. Sompolinsky and I. Kanter, Phys. Rev. Lett. 57 (1986), pp. 2861–2864.
[251] D. Kleinfeld and H. Sompolinsky, Biophys. J. 54 (1988), pp. 1039–1051.
[252] P. Fries, Trends Cogn. Sci. 9 (2005), pp. 474–480.
[253] R.W. McCarley and S.G. Massaquoi, J. Sleep Res. 1 (1992), pp. 132–137.
[254] E.N. Lorenz, J. Atmos. Sci. 20 (1963), pp. 130–141.
[255] H.L. Wang and H.W. Xin, J. Chem. Phys. 107 (1997), pp. 6681–6684.
[256] S.Y. Li and Z.M. Ge, Phys. Lett. A 373 (2009), pp. 4053–4059.
[257] G. Zhang, J. Tianjin Univ. 39 (2006), pp. 185–188.
[258] H.H. McAdams and A. Arkin, Proc. Natl. Acad. Sci. USA 94 (1997), pp. 814–819.
[259] M.B. Elowitz, A.J. Levine, E.D. Siggia, and P.S. Swain, Science 297 (2002), pp. 1183–1186.
[260] M. Thattai and A. van Oudenaarden, Proc. Natl. Acad. Sci. USA 98 (2001), pp. 8614–8619.
[261] J.M.G. Vilar, C.C. Guet, and S. Leibler, J. Cell Biol. 161 (2003), pp. 471–476.
[262] J. Paulsson, Nature 427 (2004), pp. 415–418.
[263] N. Samardzija, L.D. Greller, and E. Wasserman, J. Chem. Phys. 90 (1989), pp. 2296–2304.
[264] D. Gonze, J. Halloy, and A. Goldbeter, Proc. Natl. Acad. Sci. USA 99 (2002), pp. 673–678.
[265] D.T. Gillespie, J. Phys. Chem. 81 (1977), pp. 2340–2361.
[266] Y.A. Ma, Q.J. Tan, R.S. Yuan, B. Yuan, and P. Ao, Internat. J. Bifur. Chaos 24 (2014), p. 1450015.
[267] H. Qian and T.C. Reluga, Phys. Rev. Lett. 94 (2005), p. 028101.
[268] G. Li and H. Qian, Traffic 3 (2002), pp. 249–255.
[269] M. Elowitz and S. Leibler, Nature 403 (2000), pp. 335–338.
[270] F. Tostevin, P.R. ten Wolde, and M. Howard, PLoS Comput. Biol. 3 (2007), p. e78.
[271] J. Hattne, D. Fange, and J. Elf, Bioinformatics 21 (2005), pp. 2923–2924.
[272] W. Driever and C. Nusslein-Volhard, Cell 54 (1988), pp. 95–104.
[273] G. Struhl, K. Struhl, and P.M. Macdonald, Cell 57 (1989), pp. 1259–1273.
[274] T. Gregor, E. Wieschaus, A. McGregor, W. Bialek, and D. Tank, Cell 130 (2007), pp. 141–152.
[275] A.J. Leggett, S. Chakravarty, A.T. Dorsey, M.P.A. Fisher, A. Garg, and W. Zwerger, Rev. Mod. Phys.
59 (1987), pp. 1–85.
[276] J.N. Onuchic and P.G. Wolynes, J. Phys. Chem. 92 (1988), pp. 6495–6503.
[277] U. Weiss, Quantum Dissipative Systems, World Scientific Publishing Company, Singapore, 2012.
[278] E. Ben-Jacob and Y. Gefen, Phys. Lett. A 108 (1985), pp. 289–292.
[279] M.O. Scully and M.S. Zubairy, Quantum Optics, Cambridge University Press, Cambridge, 1997.
[280] L.D. Landau and E.M. Lifshitz, Quantum Mechanics (Non-relativistic Theory), 3rd ed., Reed
Educational and Professional Publishing Ltd, Oxford, 1977.
[281] L.D. Landau, E.M. Lifshitz, and L.P. Pitaevskij, Statistical Physics, Part 2: Theory of the Condensed
State, 3rd ed., Reed Educational and Professional Publishing Ltd, Oxford, 1977.
[282] S.A. Getty, C. Engtrakul, L. Wang, R. Liu, S.H. Ke, H.U. Baranger, W. Yang, M.S. Fuhrer, and L.R.
Sita, Phys. Rev. B 71 (2005), p. 241401.
[283] G. Garab, Photosynthesis: Mechanisms and Effects, Springer Science and Business Media, B.V., New
York, 1999.
[284] G.S. Engel, T.R. Calhoun, E.L. Read, T.K. Ahn, T. Mančal, Y.C. Cheng, R.E. Blankenship, and G.R.
Fleming, Nature 446 (2007), pp. 782–786.
[285] R. van Grondelle and V.I. Novoderezhkin, Phys. Chem. Chem. Phys. 8 (2006), pp. 793–807.
[286] A. Ishizaki and G.R. Fleming, Annu. Rev. Condens. Matter Phys. 3 (2012), pp. 333–361.
[287] K.E. Dorfman, D.V. Voronine, S. Mukamel, and M.O. Scully, Proc. Natl. Acad. Sci. USA 110 (2013),
pp. 2746–2751.
[288] D.K. Ferry and S.M. Goodnick, Transport in Nanostructures, Cambridge University Press,
Cambridge, 1997.
[289] D. Andrieux, P. Gaspard, T. Monnai, and S. Tasaki, New J. Phys. 11 (2009), p. 043014.
[290] M. Esposito, U. Harbola, and S. Mukamel, Rev. Mod. Phys. 81 (2009), pp. 1665–1702.
[291] M. Campisi, P. Hanggi, and P. Talkner, Rev. Mod. Phys. 83 (2011), pp. 771–791.
[292] N. Li, J. Ren, L. Wang, G. Zhang, P. Hanggi, and B. Li, Rev. Mod. Phys. 84 (2012), pp. 1045–1066.
[293] J.P. Pekola, Nature Phys. 11 (2014), pp. 118–123.
[294] D. Leitner, J. Phys. Chem. A. 117 (2013), pp. 12820–12828.
[295] N. Boudjada and D. Segal, J. Phys. Chem. B. 118 (2014), pp. 11323–11336.
[296] M.A. Nielsen and I.L. Chuang, Quantum Computation and Quantum Information, Cambridge
University Press, Cambridge, 2000.
[297] J. Preskill, Quantum Information and Quantum Computation, California Institute of Technology
Press, Pasadena, 1998.
[298] R. Landauer, IBM J. Res. Dev. 5 (1961), pp. 183–191.
[299] D. Deutsch, Phys. Rev. Lett. 48 (1982), pp. 1581–1585.
[300] C.H. Bennett, IBM J. Res. Dev. 17 (1973), pp. 525–532.
[301] W.G. Unruh, Phys. Rev. A 51 (1995), pp. 992–997.
[302] G.M. Palma, K.A. Suominen, and A.K. Ekert, Proc. R. Soc. Lond. A 452 (1996), pp. 567–584.