
Simulation and Beyond: Computing in the 21st Century

John Guckenheimer

Computers have a remarkable ability to simulate natural and artificial phenomena. Industry, science, economic affairs and national security have come to rely upon simulation as an essential technology. We bet our lives on devices like fly-by-wire aircraft and digitally controlled pacemakers that require digital computation to function properly. We have replaced testing of nuclear weapons with an aggressive program of computer simulation, and international treaties on carbon dioxide emissions are based upon computer predictions of the long term effects of human activity on global climate. The fidelity and reliability of computer simulation is a critical issue for many endeavours. As our computers become faster and cheaper, simulations become larger, more complex and more difficult to evaluate. The process of simulation itself becomes more diverse in ways described here. A conventional view of simulation is presented briefly. This is followed by a discussion of issues that extend beyond this traditional view.
The starting point for fluid simulation, and many other problems, is frequently expressed as the solution of systems of partial differential equations. We assume that these equations give a correct "first principles" description of a fluid flow. The key mathematical problems are the creation of algorithms that approximate the solutions of PDEs as the scale of discretization tends to zero. As the resolution of the calculations becomes finer and more computational resources become available, we are able to solve these problems more accurately. This viewpoint is very prevalent, but it applies fully to only a limited set of problems. These are problems in which a precise description can be given with a data set of fixed size, in which the problem behavior is stable to perturbations of the magnitude of the errors inherent in the calculations, and in which the simulation results are readily compared with test data. The issues that extend beyond these problems are considered below in terms of five dichotomies, using an example for each:
- First principles vs. phenomenology
- Determinism vs. indeterminism
- Continuous vs. discrete models
- Special structure vs. genericity
- Complexity vs. aggregation
We then discuss issues of numerical analysis that arise in simulation.

First principles vs. phenomenology

Computational neuroscience is growing rapidly. Our efforts to model the brain are rooted firmly in biophysical models of membranes and the channels they contain, but there is a vast difference between compartmental Hodgkin-Huxley models of neurons and the Navier-Stokes equations of fluid dynamics. The Hodgkin-Huxley models are based upon sound biophysical principles, but these principles do not constrain the models to a definite set of equations in the same manner that a few assumptions about fluid properties lead to the Navier-Stokes equations. Instead, many aspects of the models depend upon approximations, choices of parametric forms of relationships among quantities, and measurements that fit data to these parametric forms. The assumptions oversimplify and distort information that we know about these systems. However, when we try to increase the resolution of the models, we increase the number of parameters that must be measured to fit finer models. This creates the need for more measurements, many of which are unlikely to be feasible. The measurements that can be made create voluminous data sets that need to be analyzed to extract useful information for parametrizing and initializing the models. Thus, it is hardly clear that efforts to increase resolution in these models will lead to better fits. Uncertainty about the values of additional parameters may prevent us from obtaining the improved fidelity that we expect to obtain from finer resolution models.
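To make the role of parametric forms concrete, here is a minimal sketch of the kind of fitting step such models rely on. A Boltzmann sigmoid is a common parametric choice for a channel's steady-state activation curve; its two parameters are estimated here from synthetic "voltage-clamp" measurements. The data, parameter values, noise level and grid-search fitter are all illustrative assumptions, not taken from any particular model.

```python
import math
import random

# A common parametric choice for a channel's steady-state activation curve:
# the Boltzmann sigmoid m_inf(V) = 1 / (1 + exp((Vh - V) / k))
def boltzmann(V, Vh, k):
    return 1.0 / (1.0 + math.exp((Vh - V) / k))

# Hypothetical "voltage-clamp" measurements: synthetic data plus noise
random.seed(0)
true_Vh, true_k = -35.0, 8.0
volts = [-80.0 + 4.0 * i for i in range(26)]          # command voltages, -80..20 mV
data = [boltzmann(V, true_Vh, true_k) + random.gauss(0.0, 0.01) for V in volts]

# Least-squares fit by a brute-force parameter grid search,
# a simple stand-in for a proper optimizer
def sse(Vh, k):
    return sum((boltzmann(V, Vh, k) - d) ** 2 for V, d in zip(volts, data))

Vh_fit, k_fit = min(
    ((-60.0 + 0.5 * i, 2.0 + 0.2 * j) for i in range(101) for j in range(91)),
    key=lambda p: sse(*p),
)
print(Vh_fit, k_fit)   # should land near the "true" values used above
```

A production fit would use real recordings and a proper optimizer; the point is only that every added channel brings more such parameters to estimate.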
When data for initializing high resolution models can be collected, it is usually expensive. Some databases are maintained by the Federal government at great cost and made partially available for scientific purposes. However, large scientific collaborations like the human genome project or those centered around the NSF Long Term Ecological Research Sites are required to lay the substrate for detailed simulation of many natural processes. Building high fidelity models from data using phenomenological models requires planning and coordination greater than has been customary in most research areas. Successful efforts will require data standards and computational tools that are accepted by the researchers contributing to modeling efforts directly or indirectly. Agreement on such standards is likely to increase scientific orthodoxy, perhaps at the risk of thwarting individual creativity. Thus simulation may bear cultural costs as well as those of money and effort. This is not an argument that we should avoid a focus on simulation, but rather a plea that we should examine the enterprise. As with large telescopes in astronomy, many areas of science stand to reap enormous benefits from thoughtful planning and investment in technology that will help simulation efforts.

Determinism

Is the weather predictable? There is abundant evidence that global forecast models display sensitive dependence on initial conditions. Perturbations within the measurement error of current observations lead to different forecasts. It is likely that this sensitivity is not an artifact, but an inherent property of atmospheric circulation. Let us assume here that the atmospheric flow is turbulent and that disturbances grow to affect global circulation. What are the implications for simulation? Dynamical systems analysis of chaotic systems gives insight into the answer to this question. It clarifies how unpredictability can arise in a deterministic, clockwork universe. Sensitive dependence on initial conditions is a property of many dynamical systems, from simple nonlinear mechanical linkages to population models. The chaotic nature of such systems has been studied extensively, to the point that there are solid mathematical foundations for quantifying their unpredictability using such quantities as invariant measures, entropy, Lyapunov exponents and fractal dimensions. Still, the application of these concepts to the simulation of complex systems remains problematic.
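A minimal illustration, using the logistic map rather than any weather model: two trajectories that start within 10^-10 of each other separate to order one within a few dozen iterations, and the Lyapunov exponent quantifying that divergence can be estimated by averaging log|f'| along an orbit. For r = 4 the exact exponent is log 2; the initial conditions and iteration counts below are arbitrary choices for the sketch.

```python
import math

# Logistic map x -> r*x*(1-x) in its chaotic regime
r = 4.0
f = lambda x: r * x * (1.0 - x)

# Sensitive dependence: two initial conditions differing by 1e-10
x, y = 0.3, 0.3 + 1e-10
for _ in range(60):
    x, y = f(x), f(y)
sep = abs(x - y)
print(sep)          # order-one separation despite the tiny initial difference

# Lyapunov exponent: average of log|f'(x)| along a trajectory
x, total, N = 0.3, 0.0, 10000
for _ in range(N):
    d = abs(r * (1.0 - 2.0 * x))
    if d > 0.0:      # guard against the measure-zero case f'(x) = 0
        total += math.log(d)
    x = f(x)
lyap = total / N
print(lyap)          # should be close to log(2), about 0.693
```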
Faced with systems that display sensitive dependence on initial conditions, long term prediction of the full state of the system as a function of time is simply impossible. Weather forecasts five weeks in advance, let alone five years, can specify at best average properties of the weather. Operational weather forecasts have begun to employ ensemble forecasting based on simulations of several initial states. The usefulness of dynamical systems methods is problematic for simulations with attractors whose dimensions are large. Probabilistic descriptions of system behavior seem better suited to these situations, but assumptions about underlying statistical distributions are hard to verify. For example, the statistical properties of turbulent fluid flows related to coherent geometric structures remain a controversial subject. Monte Carlo methods embody systematic approaches to the modeling of stochastic systems. Producing high fidelity simulations in complex systems with many parameters using Monte Carlo methods is an even larger task than it is for deterministic models with stable asymptotic states. When faced with noisy, unpredictable systems like the brain, we have only a primitive understanding of which aspects of dynamical behavior of complex systems we can hope to simulate.
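The flavor of ensemble forecasting can be sketched with the Lorenz '63 equations, a standard low-dimensional stand-in for atmospheric convection. An ensemble of initial states perturbed within an assumed "measurement error" is integrated forward, and the growth of the ensemble spread is observed; the ensemble size, perturbation scale and integration time below are arbitrary choices for illustration.

```python
import math
import random

# Lorenz '63 system, a classic toy model of atmospheric convection
def lorenz(s):
    x, y, z = s
    return (10.0 * (y - x), x * (28.0 - z) - y, x * y - 8.0 / 3.0 * z)

def rk4_step(s, h):
    def add(a, b, c):                       # componentwise a + c*b
        return tuple(ai + c * bi for ai, bi in zip(a, b))
    k1 = lorenz(s)
    k2 = lorenz(add(s, k1, h / 2))
    k3 = lorenz(add(s, k2, h / 2))
    k4 = lorenz(add(s, k3, h))
    return tuple(si + h / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

# Ensemble: members perturbed within an assumed "measurement error"
random.seed(1)
base = (1.0, 1.0, 20.0)
members = [tuple(c + random.gauss(0.0, 1e-4) for c in base) for _ in range(20)]

def spread(states):                         # max distance from the ensemble mean
    mean = [sum(c) / len(states) for c in zip(*states)]
    return max(math.dist(s, mean) for s in states)

h = 0.01
s0 = spread(members)
for _ in range(1500):                       # integrate to t = 15
    members = [rk4_step(m, h) for m in members]
print(s0, spread(members))                  # spread grows by orders of magnitude
```

Operational ensembles perturb full forecast models, but the qualitative lesson is the same: the spread of the ensemble, not any single member, carries the forecast information.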

From Model Architecture to Dynamics

Modern molecular biology has invested great effort to determine the reaction pathways of living organisms. This research produces graphs that show how important biological molecules are produced, modified and used to regulate physiological processes. Frequently missing from this work is an understanding of the kinetics of the pathways. We see information about the structure of the pathway, but we do not understand how it works. In particular, we have difficulty predicting the functional consequences of drugs, mutations or other modulatory effects on the network. For example, the deleterious side effects of calcium blockers used as cardiac drugs illustrate the importance of understanding biological kinetics. Molecular biologists have assumed that function will be evident from structure. They have made the elucidation of structure the principal goal of the subject. However, the case of neural networks demonstrates that a single morphological network may support a diverse set of behaviors. Molecular biologists increasingly recognize that systems modeling will facilitate their understanding of complex biological processes.
Apart from considerations of symmetry, we have had little success in relating system structure to function. Dynamical systems theory provides a context for guiding our intuition of dynamical phenomena that we expect to observe in generic systems. Using results from differential topology and singularity theory, we have a coherent view of phenomena that we regard as typical and phenomena that are exceptional. When there is symmetry in a system, we know how to modify the theory. We believe that there are architectural principles which are important in building robust complex systems, but there is little theory to support our intuition. In the context of specific applications such as electronic circuit design, we build hierarchical systems of astounding complexity with millions of elements. Concepts such as hierarchy and feedback control have not been incorporated into a general theory of nonlinear dynamical systems. Our lack of insight into how system architecture constrains dynamical behavior limits the power of simulation as a tool for studying complex systems.

Continuous, Discrete and Hybrid Models

High fidelity simulation of human walking is a demanding task. We perceive small variations in gait, and readily distinguish departures from normality. Attempts to build two legged locomotion machines have foundered over the issue of maintaining balance in dynamically unstable states. Computational models of walking must contend with impacts. These break stride into phases in which the primary physical forces acting on the body differ. Dynamical systems in which there are discrete events in the phase space that result in discontinuous changes of the underlying model are called hybrid systems. Hybrid models can have continuous and discrete components in space and time. For example, the engagement of gears changes the dimension of the phase space of a model of two rotating shafts. Most of the machines that we build are hybrid systems. The construction of full system simulations from simulations of components frequently introduces the need for hybrid models.
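A bouncing ball is perhaps the simplest hybrid system: continuous ballistic flight between impacts, plus a discrete event that instantaneously reverses and damps the velocity. A minimal sketch, with assumed values for gravity and the coefficient of restitution; the impact times are found analytically here, whereas a general hybrid simulator needs numerical event detection.

```python
# Event-driven simulation of a bouncing ball: continuous flight phases
# punctuated by discrete impact events that reset the velocity.
g, e = 9.81, 0.8          # gravity and restitution (assumed values)

def simulate(h0, v0, t_end):
    """Piece together ballistic flight phases between impacts."""
    t, h, v, impacts = 0.0, h0, v0, 0
    while t < t_end:
        # time to next impact: solve h + v*dt - g*dt^2/2 = 0 for dt > 0
        disc = v * v + 2.0 * g * h
        dt = (v + disc ** 0.5) / g
        if t + dt > t_end:
            break
        t += dt
        h, v = 0.0, -e * (v - g * dt)   # discrete event: reverse and damp velocity
        impacts += 1
        if v < 1e-3:                    # ball has effectively stopped
            break
    return impacts

print(simulate(1.0, 0.0, 10.0))
```

Note the stopping threshold: without it the impact times accumulate at a finite instant (the "Zeno" behavior typical of hybrid models) and the loop would never terminate.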
Theoretical models and computational tools for studying dynamical systems are framed in terms of continuous or discrete time, but seldom both. Theory and numerical methods that apply to discontinuous or singular systems are more limited than those that treat analytic or smooth systems. Consequently, the foundations for simulations of hybrid systems are shaky. Dynamical systems theory has sharpened our intuition about what types of phenomena we should expect to see in continuous phase spaces of systems that operate in continuous or in discrete time. Extending that understanding to hybrid systems remains an unmet challenge, and this gap is a barrier to confident simulation of hybrid systems. We are left with weaker intuition to guide the design of machines and industrial processes that meet our desired specifications.

Scales and Aggregation

Decomposition of natural systems into different "scales" is one of the central tasks in producing high fidelity simulations. We seek to understand how macroscopic behavior results from physical laws that operate on smaller scales. For example, we would like to understand fracture in terms of atomistic properties of materials. Reducing all complex phenomena to atomic interactions is clearly a hopeless task. It is preposterous to model the effects of global climate change on natural populations and agriculture on an atomic scale. Recognizing when we can separate physical scales and encapsulate smaller scale information in models that operate at larger scales has been a central issue within condensed matter physics. In studying population biology, economies, or the brain, the issue of aggregating small scales is fuzzier and more challenging.
Only recently have we acquired the computing resources required to simulate detailed multi-scale models. One of the areas that is growing rapidly is the simulation of models with large numbers of components. In applications as varied as molecular dynamics, battlefield simulations and traffic flow, we build stochastic models from which we seek to observe emergent behavior at the system level. Such simulations form the basis for a whole set of computer games. The challenge with these efforts is to obtain results that fit the real world. We seldom know which details of component behavior are most significant for determining system properties, so it would be prudent to have systematic ways of evaluating the effects of uncertainty in model components upon system behavior. Such methods hardly exist at this time.
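One approach that does exist in embryonic form is brute-force Monte Carlo propagation: sample the uncertain component parameters, rerun the system model, and examine the spread of outcomes. A toy sketch, in which both the model (a discrete logistic map pushed into its irregular regime) and the assumed uncertainty in its growth rate are invented for illustration.

```python
import random
import statistics

# Toy component model: discrete logistic growth x -> x + r*x*(1-x),
# run for a fixed horizon from a fixed initial state
def model(r, x0=0.1, steps=30):
    x = x0
    for _ in range(steps):
        x = x + r * x * (1.0 - x)
    return x

# The growth rate r is "known" only approximately; propagate that
# uncertainty by sampling r and rerunning the model
random.seed(0)
outcomes = [model(random.gauss(2.5, 0.1)) for _ in range(1000)]
print(statistics.mean(outcomes), statistics.stdev(outcomes))
```

In this parameter regime a small input uncertainty produces a large spread of outcomes; near a stable equilibrium the same experiment would show the spread collapsing instead. Distinguishing the two cases is exactly the kind of systematic evaluation the text calls for.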

Numerical Analysis

Numerical implementation of dynamical systems models depends upon approximations that themselves are subtle. The simplest, most direct numerical integration algorithm (the Euler method) is subject to substantial errors. These errors can accumulate to give qualitatively incorrect predictions about long time dynamics, as happens with the harmonic oscillator. Historically, numerical solution methods for differential equations addressed stringent limitations on the speed and cost of performing arithmetic. The dramatic improvements of digital computers during the past fifty years have completely transformed these parts of mathematics. The speed of computation in simulation of physical systems as a limiting factor has largely been replaced by issues such as memory hierarchies, round-off errors inherent in floating-point arithmetic, and extracting useful information from very large data sets.
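The harmonic oscillator example can be made concrete in a few lines: explicit Euler injects energy at every step and the computed orbit spirals outward, while an equally cheap variant (symplectic Euler) keeps the energy bounded. A sketch, with step size and integration length chosen arbitrarily:

```python
import math

# Harmonic oscillator x'' = -x, written as (x, v)' = (v, -x).
# The exact energy 0.5*(x^2 + v^2) is conserved and equals 0.5 here.
h = 0.01
steps = int(2 * math.pi / h) * 10        # about ten periods

def energy(x, v):
    return 0.5 * (x * x + v * v)

# Explicit Euler: multiplies the energy by (1 + h^2) every step
x, v = 1.0, 0.0
for _ in range(steps):
    x, v = x + h * v, v - h * x
euler_energy = energy(x, v)

# Symplectic Euler: update v first, then use the new v for x
x, v = 1.0, 0.0
for _ in range(steps):
    v = v - h * x
    x = x + h * v
symp_energy = energy(x, v)

print(euler_energy, symp_energy)         # exact value is 0.5
```

With h = 0.01 over ten periods the Euler energy has already grown by tens of percent; halving h only slows the drift, it does not remove the qualitative error.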
Consider the problem of simulating electric power systems. Reliable electric power produced with minimal environmental impact is vital to the world today. Adequate capacity to handle anticipated loads and real-time monitoring of operations are essential for these systems. Simulating network models at first sight seems like a straightforward task in numerical integration. However, when we look a bit closer we find technical issues that are bothersome. One issue is that the equations for a network are naturally expressed as differential-algebraic equations rather than as ordinary differential equations. The mathematical theory of DAEs is more complex than that of ODEs. For DAEs, not all initial conditions in the phase space are consistent with the equations. Moreover, there are points in phase space where the algebraic constraints inherent in the equations are satisfied, but there still are no solutions (or multiple solutions) with these initial conditions. The mathematical underpinnings of the theory of DAEs remain incomplete. Restrictive assumptions on models are required to guarantee that simulation algorithms will work. Still, DAEs are common in engineering applications and cannot be ignored. In addition to power systems, DAEs arise naturally as models of constrained mechanical systems.
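A planar pendulum written in Cartesian coordinates is the textbook example of such a constrained system. The sketch below only checks consistency of initial conditions; it shows that satisfying the visible algebraic constraint is not enough, because differentiating that constraint reveals a hidden condition on the velocities. The tolerance and test states are arbitrary.

```python
# Planar pendulum of length L as a DAE in Cartesian coordinates:
#   x' = vx, y' = vy, vx' = -lam*x, vy' = -lam*y - g, 0 = x^2 + y^2 - L^2
# Differentiating the constraint once yields a hidden constraint on
# velocities: x*vx + y*vy = 0 (motion must be tangent to the circle).
L = 1.0

def consistent(x, y, vx, vy, tol=1e-9):
    on_circle = abs(x * x + y * y - L * L) < tol     # explicit constraint
    tangent = abs(x * vx + y * vy) < tol             # hidden constraint
    return on_circle and tangent

# Satisfying the position constraint alone does not give a valid state:
print(consistent(1.0, 0.0, 1.0, 0.0))   # velocity points off the circle
print(consistent(1.0, 0.0, 0.0, 2.0))   # velocity is tangential
```

A DAE integrator must either be handed consistent initial conditions or compute them itself, which is part of why simulation algorithms for DAEs need restrictive assumptions.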

Beyond Simulation

Simulation viewed as the evolution of specific initial conditions for a dynamical computer model is unlikely to directly answer many of the questions that we ask. For example, consider the problem of fitting parameters to experimental data. In Hodgkin-Huxley models for neurons, a typical vector field may have a ten dimensional phase space and forty parameters. The model of a network of ten neurons built from single compartment Hodgkin-Huxley neurons will have a phase space of dimension approximately one hundred with several hundred parameters. If there are many model parameters that cannot be measured directly, then we are left with a complex "inverse" problem of using simulation data to optimize the parameters. In the case of the neural network, the problem is further complicated by the distortion of the primary voltage measurements that occurs due to unmodeled spatial effects in the system. This means that the most useful data for model comparisons is likely to be related to qualitative properties like the period of oscillations or the stability boundaries for different dynamical states as physical parameters are varied. Obtaining this information by sampling trajectories can only be done for a small number of parameters because the number of required trajectories grows exponentially with the number of parameters. Thus, solution of these parameter identification problems seems to require algorithms that go beyond simulation. The problems of fitting model parameters inhibit the creation of high fidelity models. As described previously, increasing model resolution to include smaller scales in a problem may increase the number of parameters that must be determined faster than the fidelity of the models improves.
Bifurcation theory for dynamical systems provides a framework for direct determination of information about how system behavior changes qualitatively with parameter variations. Implementation of algorithms based on this theory is a step towards computing parameter ranges that produce desired behavior. For bifurcations of equilibria, stability boundaries can be determined without numerical integration by formulating defining equations for bifurcations from the derivatives of the vector field. These methods have been implemented in a continuation setting to compute curves of codimension one Hopf and saddle-node bifurcations in two-parameter families of vector fields and curves of codimension two bifurcations in three-parameter families of vector fields. We need better algorithms to compute multi-dimensional continuation of submanifolds of bifurcations and to treat bifurcations of periodic orbits reliably.
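For the simplest case, a saddle-node (fold) of an equilibrium, the defining equations can be written down and solved directly. The sketch below applies Newton's method to the pair f = 0, df/dx = 0 for an illustrative scalar vector field; the example field and starting guess are invented, and continuation codes solve the same kind of system while sweeping a second parameter.

```python
# Defining equations for a saddle-node (fold) bifurcation of x' = f(x, lam):
#   f(x, lam) = 0  and  f_x(x, lam) = 0,
# solved by Newton's method; no numerical integration is involved.
# Illustrative scalar vector field (an assumption for this sketch):
#   f(x, lam) = lam + x - x**3

def F(x, lam):
    """The two defining equations (f, f_x)."""
    return lam + x - x**3, 1.0 - 3.0 * x**2

x, lam = 0.8, -0.5        # initial guess
for _ in range(30):
    f1, f2 = F(x, lam)
    # Jacobian of (f, f_x) with respect to (x, lam) is [[f_x, 1], [f_xx, 0]];
    # solve the 2x2 Newton system by Cramer's rule
    fx, fxx = 1.0 - 3.0 * x**2, -6.0 * x
    dx = -f2 / fxx
    dlam = fx * f2 / fxx - f1
    x, lam = x + dx, lam + dlam

print(x, lam)   # fold at x = 1/sqrt(3), lam = -2/(3*sqrt(3))
```

The same pattern (equilibrium equations augmented by a degeneracy condition, solved with derivatives of the vector field) underlies the continuation computations described above.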
There are additional geometric questions about dynamical systems that are important for varied applications. Computation of mixing properties of fluid flows has been greatly facilitated by regarding the instantaneous velocity fields as generating a dynamical system of streamlines and computing invariant manifolds of these dynamical systems. The stagnation points of the fluid flow are saddle points of the dynamical system, and their stable and unstable manifolds give separation boundaries for the fluid flow. Computing the intersections of these manifolds and the evolution of their turnstile structures gives approximations to the mixing properties of these fluids. These techniques have been used to investigate the design of industrial reactors, chemical reaction rates and fluid transport of ocean eddies. Because invariant manifolds become highly convoluted with sharp bends, substantial care is needed to compute them accurately.
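A minimal version of this manifold computation, using the undamped Duffing flow x' = y, y' = x - x^3 instead of a fluid velocity field: the origin is a saddle, and its unstable manifold is traced by seeding a point a small distance along the unstable eigenvector and integrating forward. Because this example conserves an energy function that vanishes on the manifold, the computation can be checked against it. The seed size, step size and integration time are ad hoc choices.

```python
# Tracing the unstable manifold of the saddle at the origin for
#   x' = y, y' = x - x^3  (undamped Duffing flow).
def field(s):
    x, y = s
    return (y, x - x**3)

def rk4(s, h):
    def ax(a, b, c):                      # componentwise a + c*b
        return (a[0] + c * b[0], a[1] + c * b[1])
    k1 = field(s); k2 = field(ax(s, k1, h / 2))
    k3 = field(ax(s, k2, h / 2)); k4 = field(ax(s, k3, h))
    return (s[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            s[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def H(s):
    x, y = s
    return 0.5 * y * y - 0.5 * x * x + 0.25 * x**4   # conserved; 0 on the manifold

eps = 1e-4
s = (eps / 2 ** 0.5, eps / 2 ** 0.5)     # seed along the unstable eigenvector (1, 1)
branch = [s]
for _ in range(3000):
    s = rk4(s, 0.005)
    branch.append(s)

# Every computed manifold point should (approximately) satisfy H = 0,
# and the branch should sweep out the full homoclinic loop
print(max(abs(H(p)) for p in branch), max(abs(p[0]) for p in branch))
```

Real mixing computations face exactly the difficulty the text names: the manifolds fold and stretch, so adaptive point insertion is needed where a fixed seed-and-integrate scheme like this one loses resolution.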
We desire the ability to robustly and routinely compute far more about the qualitative properties of dynamical systems than we can today. There are phenomena that play a prominent role in the qualitative theory that occur on very fine scales in many examples. Developing consistent, converged calculations requires that these scales be resolved. Since the phenomena often involve singularities and bifurcations, classical algorithms need to be modified and extended to work with these problems. Mathematical theory has guided this work, leading to the creation of algorithms that solve challenging problems. Unlike prevailing trends in computational science, the problems have often been small and the computing highly interactive. The interplay between classical and modern mathematics, geometry and numerical analysis and computational science will continue to be important to progress in the use of simulation as a powerful scientific tool.