
Rheinisch-Westfälische Technische Hochschule Aachen

Lecture Notes

Introduction to Simulation Techniques


Lehrstuhl für Prozesstechnik
Dr.-Ing. Ralph Schneider

Version 1.2 Copyright: R. Schneider 2006

Copyright 2006 by Wolfgang Marquardt, Lehrstuhl für Prozesstechnik, RWTH Aachen University, Templergraben 55, D-52056 Aachen, Germany. Tel.: +49 (0) 241 - 80-94668, Fax: +49 (0) 241 - 80-92326, E-mail: secretary@lpt.rwth-aachen.de, WWW: http://www.lpt.rwth-aachen.de


The copyright of these lecture notes is reserved. Copies may only be made for use within the lecture Introduction to Simulation Techniques at RWTH Aachen University. Any further use requires written permission. In these lecture notes, materials of other authors are used for educational purposes. This does not imply that these materials are free of copyright. These notes for the lecture Introduction to Simulation Techniques have been created to the best of the authors' knowledge. However, neither the correctness of the given information nor the absence of typing errors can be guaranteed. The scope of the examinations in Introduction to Simulation Techniques conforms to the presentations in the lectures and exercises, not to these notes.

Preface
This manuscript accompanies the lecture Introduction to Simulation Techniques, which may be attended by students of the master programme Simulation Techniques in Mechanical Engineering, students of the Lehramtsstudiengang Technische Informatik, and students of Mechanical Engineering whose major course of study is Grundlagen des Maschinenbaus, as well as being available as a 3rd technical elective course in Mechanical Engineering. This lecture was offered for the first time in the summer semester 2001.

The manuscript aims at minimizing the effort of taking notes during the lecture as far as possible and tries to present the basics of simulation techniques in a compact manner. The topics treated in the manuscript are very extensive and can therefore be discussed only in a summarizing way in a one-term lecture. Well-known material from other lectures is only covered briefly. It is presupposed that the reader is familiar with the basics of numerics, mechanics, thermodynamics, and programming.

Above all, Martin Schlegel contributed to the success of this manuscript, both with critical remarks and helpful comments as well as with the continuous revision of the text and the figures. Aidong Yang did a lot of work in polishing the first English version of this manuscript. Beyond that, Ngamraht Kitiphimon and Sarah Jones have to be mentioned, who provided the first German and English versions of the manuscript, respectively. My thanks to all of them.

The lecture is based on the lecture Simulationstechnik offered by Professor M. Zeitz at the University of Stuttgart. I would like to express cordial thanks to him for his permission to use his lecture notes.

Despite repeated and careful revision of the manuscript, incorrect representations cannot be excluded. I am grateful for each hint about (possible) errors, gaps in the material selection, didactical weaknesses, or unclear representations, in order to improve the manuscript further.
These lecture notes are also offered on the homepage of Process Systems Engineering (http://www.lpt.rwth-aachen.de), where they can be downloaded by any interested reader. I hope that with the publication on the internet a faster correction of errors can be achieved. I would like to ask the readers to submit suggestions for changes and corrections by email (simulationtechniques@lpt.rwth-aachen.de). Each email will be answered.

Aachen, in July 2004

Ralph Schneider

Contents

1 Introduction
  1.1 What is Simulation?
  1.2 Simulation Procedure
  1.3 Introductory Example for the Simulation Procedure
    1.3.1 Problem
    1.3.2 Abstraction
    1.3.3 Mathematical Model
    1.3.4 Simulation Model
    1.3.5 Graphical Representation
    1.3.6 Analysis of the Model
    1.3.7 Numerical Solution
    1.3.8 Simulation
    1.3.9 Applications of Simulators

2 Representation of Dynamic Systems
  2.1 State Representation of Linear Dynamic Systems
  2.2 The State Space
  2.3 State Representation of Nonlinear Dynamic Systems
    2.3.1 Example
    2.3.2 Generalized Representation
  2.4 Block-Oriented Representation of Dynamic Systems
    2.4.1 Block-Oriented Representation of Linear Systems
    2.4.2 Block-Oriented Representation of Nonlinear Systems

3 Model Analysis
  3.1 Lipschitz Continuity
  3.2 Solvability
  3.3 Stationary States
  3.4 Jacobian Matrix
  3.5 Linearization of Real Functions
  3.6 Linearization of a Dynamic System around the Stationary State
  3.7 Eigenvalues and Eigenvectors
  3.8 Stability
    3.8.1 One state variable
    3.8.2 System matrix with real eigenvalues
    3.8.3 Complex eigenvalues of a 2x2 system matrix
    3.8.4 General case
  3.9 Time Characteristics
  3.10 Problem: Stiff Differential Equations
  3.11 Problem: Discontinuous Right-Hand Side of a Differential Equation

4 Basic Numerical Concepts
  4.1 Floating Point Numbers
  4.2 Rounding Errors
  4.3 Conditioning

5 Numerical Integration of Ordinary Differential Equations
  5.1 Principles of Numerical Integration
    5.1.1 Problem Definition and Terminology
    5.1.2 A Simple Integration Method
    5.1.3 Consistency
  5.2 One-Step Methods
    5.2.1 Explicit Euler Method (Euler Forward Method)
    5.2.2 Implicit Euler Method (Euler Backward Method)
    5.2.3 Semi-Implicit Euler Method
    5.2.4 Heun's Method
    5.2.5 Runge-Kutta Method of Fourth Order
    5.2.6 Consistency Condition for One-Step Methods
  5.3 Multiple-Step Methods
    5.3.1 Predictor-Corrector Method
  5.4 Step Length Control

6 Algebraic Equation Systems
  6.1 Linear Equation Systems
    6.1.1 Solution Methods for Linear Equation Systems
  6.2 Nonlinear Equation Systems
    6.2.1 Solvability of the Nonlinear Equation System
    6.2.2 Solution Methods for Nonlinear Equation Systems
      6.2.2.1 Newton's Method for Scalar Equations
      6.2.2.2 Newton-Raphson Method for Equation Systems
      6.2.2.3 Convergence Problems of the Newton-Raphson Method

7 Differential-Algebraic Systems
  7.1 Depiction of Differential-Algebraic Systems
    7.1.1 General Nonlinear Implicit Form
    7.1.2 Explicit Differential-Algebraic System
    7.1.3 Linear Differential-Algebraic System
  7.2 Numerical Methods for Solving Differential-Algebraic Systems
  7.3 Solvability of Differential-Algebraic Systems

8 Partial Differential Equations
  8.1 Introductory Example
  8.2 Representation of Partial Differential Equations
  8.3 Numerical Solution Methods
    8.3.1 Method of Lines
      8.3.1.1 Finite Differences
      8.3.1.2 Problem of the Boundaries
    8.3.2 Method of Weighted Residuals
      8.3.2.1 Collocation Method
      8.3.2.2 Control Volume Method
      8.3.2.3 Galerkin Method
      8.3.2.4 Example
  8.4 Summary

9 Discrete Event Systems
  9.1 Classification of Discrete Event Models
    9.1.1 Representation Form
    9.1.2 Time Basis
    9.1.3 States and State Transitions
  9.2 State Model
    9.2.1 Example
  9.3 Graph Theory
    9.3.1 Basic Concepts
    9.3.2 Representation of Graphs and Digraphs with Matrices
      9.3.2.1 Models for Discrete Event Systems
      9.3.2.2 Simulation Tools
  9.4 Automaton Models
  9.5 Petri Nets
    9.5.1 Discrete Time Petri Nets
    9.5.2 Simulation of Petri Nets
    9.5.3 Characteristics of Petri Nets
      9.5.3.1 Reachability
      9.5.3.2 Boundedness and Safety
      9.5.3.3 Deadlock
      9.5.3.4 Liveness
    9.5.4 Continuous Time Petri Nets

10 Parameter Identification
  10.1 Example
  10.2 Least Squares Method
  10.3 Method of Weighted Least Squares
  10.4 Multiple Inputs and Parameters
  10.5 Recursive Regression
  10.6 General Parameter Estimation Problem
    10.6.1 Search Methods
      10.6.1.1 Successive Variation of Variables
      10.6.1.2 Simplex Methods
      10.6.1.3 Nelder-Mead Method

11 Summary
  11.1 Problem Definition
  11.2 Modeling
  11.3 Numerics
  11.4 Simulators
    11.4.1 Application Level
    11.4.2 Level of Problem Orientation
    11.4.3 Language Level
    11.4.4 Structure of a Simulation System
  11.5 Parameter Identification
  11.6 Use of Simulators
  11.7 Potentials and Problems

Bibliography

1 Introduction
1.1 What is Simulation?
Simulation (virtual reality, the experiment on the computer) is also called the third pillar of science next to theory and experiment. We all know examples of simulation techniques from the area of computer games, e.g. the flight simulator (see Fig. 1.1).

Figure 1.1: Flight simulator as an example of a simulator.

In this example the reality is represented in the form of a mathematical model. The model equations are solved with a numerical algorithm. Finally, the results can be visually displayed. A more rigorous definition of (computer-aided) simulation can be found in Shannon (1975, p. 2):

Simulation is the process of designing a model of a real system and conducting experiments with this model for the purpose either of understanding the behavior of the system and its underlying causes or of evaluating various designs of an artificial system or strategies for the operation of the system.

The VDI guideline 3633 (Verein Deutscher Ingenieure, 1996) defines simulation in the following way:


Simulation is the process of emulating a system with its dynamic processes in an experimental model in order to gain knowledge that is transferable to reality. In a broader sense, simulation means the preparation, the execution, and the evaluation of targeted experiments by means of a simulation model. With the help of simulation, the temporal behavior of complex systems can be discovered (simulation method).

Examples of application areas where simulation studies are used are:

flight simulators, weather forecast, stock market, war gaming, software development, flexible manufacturing, chemical processes, power plants.
Simulation became well-known in connection with the book of ?, which presented and interpreted a world model in the seventies. The obtained simulation results predicted that, if the economic and population growth of those days continued, only a few decades were needed to lead to the exhaustion of raw material resources, worldwide undernourishment, environmental destruction and pollution, and thereby to a dramatic population breakdown.

Figure 1.2: Modeling as an abstraction.


As Fig. 1.2 shows, you should be aware of the differences between reality and its representation on the computer. This is because modeling is an intended simplification of reality through abstraction. Essentially, it is not reality that is represented on the computer, but solely an approximation! Depending on the approximation used, different models are obtained. This becomes clear through the definition of a model by Minsky (1965), illustrated in Fig. 1.3:

To an observer B, an object M is a model of an object A to the extent that B can use M to answer questions that interest him about A.

Figure 1.3: Model definition by Minsky (1965).

Although reality is not completely reproducible, models are useful. Reasons for this are that (computer) simulations (also called simulation experiments) are usually

simpler, faster, less dangerous to people, less harmful for the environment, and much more economical
than real experiments. For the significance of modeling and simulation, the following two quotations should be mentioned:

Karl Ganzhorn, IBM (IBM Nachrichten, 1982): Models and simulation techniques are considered the most important area of future information processing in technology, economy, administration, society, and politics.


Ralph P. Schlenker, Exxon (Conference on Foundations of Computer-Aided Process Operations, 1987): Modeling and simulation technology are keys to achieve manufacturing excellence and to assess risk in unit operation. [...] As we make our plant operations more flexible to respond to business opportunities, efficient modeling and simulation techniques will become commonly used tools.

1.2 Simulation Procedure


The simulation procedure (see Fig. 1.4) serves us as a guideline through this course. After this introductory chapter, different kinds of simulation models, for example steady-state models, dynamic models with lumped and distributed parameters, as well as discrete time and discrete event models, will be discussed. The different model types will be introduced with the help of examples, the numerical treatment will be briefly discussed, and a short survey of the software tools available today for these system classes will be given.


Figure 1.4: Simulation procedure.

Afterwards, we will deal with methods of parameter estimation (identification, optimization) in the context of adjusting models to experiments. Simulation techniques are not bound to any concrete field of application. Rather, they constitute a methodology that can be widely used in many applications. The examples in the lecture and the tutorials will illustrate this interdisciplinary character.


1.3 Introductory Example for the Simulation Procedure

For illustration purposes, the following example is presented, in which the main steps in the simulation procedure are mentioned.

1.3.1 Problem
First, the real-world problem has to be formulated for which we want to find answers with the help of simulation. This example deals with the determination of the vertical movement of a motor vehicle over a period of time during a ride over a wavy road (see Fig. 1.5).


Figure 1.5: Depiction of the vertical movement of a motor vehicle.

1.3.2 Abstraction
Generally it is neither our intention nor within our capability to capture all aspects of the real-world problem. For this reason, an appropriate selection of the effects to be considered and practical simplifications of reality have to be made. In our example, the problem can be simplified, e.g., by regarding the wheels and the car body as two mass points. In Fig. 1.6 this simplified model of the vehicle is depicted.

1.3.3 Mathematical Model

Based on this revised picture, you state a mathematical description of the problem with physical (and possibly chemical) equations. In our example you use the equation of motion

Figure 1.6: Depiction of the simplified motor vehicle model.

(Newton's law) for the wheel:

m·ÿ = Σ forces,                                  (1.1)
m·ÿ = −d·ẏ − cA·y − cR·(y − ys),                 (1.2)

or

m·ÿ + d·ẏ + (cA + cR)·y = cR·ys,   t > t0,      (1.3)

with the abbreviations c := cA + cR and k(t) := cR·ys(t), so that equation (1.3) reads m·ÿ + d·ẏ + c·y = k(t).

Equation (1.3) represents a second-order linear differential equation with the initial conditions

y(t0) = y0,   ẏ(t0) = ẏ0                         (1.4)

for the distance y and the velocity ẏ at time t0. The differential equation and the initial conditions form the model of the problem. The simulation experiment is defined through the following known quantities:

- the model structure (equation 1.3),
- the values of the parameters (m, d, c), which in general may be time-variant,
- the time-variant input k(t), and
- the initial conditions y0, ẏ0.

The wanted quantities are: y(t), ẏ(t), ÿ(t) for t > t0.


1.3.4 Simulation Model

For simulation purposes, you often do not rely on the model as it has been built through abstraction (in our example, equations (1.3) and (1.4)). Rather, you perform certain conversions on it. In this example, the conversion results in a system of first-order differential equations (the so-called state description). The states correspond to energy storages and are characterized by the according initial conditions. We define the variables

x1 = y   (distance),                             (1.5)
x2 = ẏ   (velocity),                             (1.6)

which leads to the following differential equations:

ẋ1 = dx1/dt = ẏ = x2,                            (1.7)
ẋ2 = dx2/dt = ÿ = (1/m)·(−d·x2 − c·x1 + k(t)),   (1.8)

with x1(t0) = y0 and x2(t0) = ẏ0. This is a time-continuous, dynamic system of order n (here n = 2). So the equation-oriented description of the model is:

ẋ1 = x2,                                         (1.9)
ẋ2 = (1/m)·(−d·x2 − c·x1 + k(t)),   x1(t0) = y0,   x2(t0) = ẏ0.   (1.10)
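The state description (1.9)-(1.10) maps directly onto code. The following Python sketch evaluates the right-hand side of the state system; the parameter values and the road excitation k(t) are illustrative assumptions, not values from the lecture.

```python
import math

# Illustrative (assumed) vehicle parameters
m = 30.0     # wheel mass in kg
d = 400.0    # damping constant in N s/m
c = 2.0e5    # total spring constant c = cA + cR in N/m

def k(t):
    """Assumed road excitation k(t) = cR * ys(t), here a sinusoidal road profile."""
    return 1.0e4 * math.sin(2.0 * math.pi * t)

def f(x, t):
    """Right-hand side of (1.9)-(1.10): x = [x1, x2] = [y, dy/dt]."""
    x1, x2 = x
    return [x2,                              # x1' = x2
            (-d * x2 - c * x1 + k(t)) / m]   # x2' = (-d x2 - c x1 + k(t)) / m
```

Any standard integration routine can now be applied to f; the function only encodes the model, not the numerical method.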

1.3.5 Graphical Representation

The model (1.9) - (1.10) can be represented in a graphical block-oriented way (see Fig. 1.7).

Figure 1.7: Block-oriented representation of the motor vehicle model.


1.3.6 Analysis of the Model

For the preparation of a reasonable simulation, an analysis of the model is necessary, e.g. in view of the following points:

a. Well-posedness
The problem must have definite solutions for all meaningful values of parameters, inputs, and initial conditions (see the existence theorem for first-order differential equation systems in Section 3.2).

b. Simulation Time
The model must be simulated at least over the time T (t ∈ [t0, t0 + T]), so that you can see the consequences of the different influential parameters. T can be determined by means of dynamic parameters (see Fig. 1.8). You obtain these, for instance, through approximations, process knowledge (experiments with the real examination object), or trial simulations with guessed quantities.

Figure 1.8: Dynamic parameters.

In our example, you can perform the following approximations:

d = 0,   k(t) = const.                           (1.11)

With this and equation (1.3), you obtain the oscillation differential equation:

ÿ + (c/m)·y = (1/m)·k = const,   with ω² := c/m, (1.12)

with the frequency f = ω/(2π) = (1/(2π))·√(c/m) and the period τ = 1/f = 2π·√(m/c).


A model is characterized through the time parameters Tmax and Tmin:

Tmax = max(period, excitation),                  (1.13)
Tmin = min(period, excitation),                  (1.14)

where 'period' is the oscillation period from above and 'excitation' is a typical time for the characterization of k(t). As an empirical formula, the simulation time T can be determined by:

T = 5·Tmax.                                      (1.15)

c. Time Step Length for Integrator
For many numerical integration algorithms you need a clue on how to choose an appropriate time step length Δt (see Fig. 1.9).

Figure 1.9: Time step length Δt.

As an empirical formula, the following choice of the time step length is valid:

Δt = min(Tmin/10, T/200),                        (1.16)

where the term Tmin/10 accounts for accuracy and T/200 for the graphical resolution.
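Equations (1.12)-(1.16) together give a complete recipe for choosing the simulation time and the step length. A short sketch; the numerical values for m, c, and the excitation time are illustrative assumptions:

```python
import math

m, c = 30.0, 2.0e5                        # assumed mass and spring constant
period = 2 * math.pi * math.sqrt(m / c)   # tau = 2*pi*sqrt(m/c), eq. (1.12)
excitation = 0.5                          # assumed typical time of k(t) in s

T_max = max(period, excitation)           # eq. (1.13)
T_min = min(period, excitation)           # eq. (1.14)
T = 5 * T_max                             # simulation time, eq. (1.15)
dt = min(T_min / 10, T / 200)             # step length, eq. (1.16)
```

For these assumed values the period is roughly 0.077 s, so the excitation dominates Tmax and the accuracy term Tmin/10 dominates the step length.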

d. Expected Solution Space
For the visualization of the solution, an idea of its order of magnitude is useful. You can get this through approximative calculations or on the basis of process knowledge. In our example, you are able to determine e.g. the extreme values of the wheel movement. For a constant k(t) = k̄, we can calculate the steady-state solution of the system. The steady state is characterized by the fact that no time-dependencies of the state variables x1 = y and x2 = ẏ occur, i.e. it can be determined with the conditions

ẋ1 = ẏ = 0,   ẋ2 = ÿ = 0.                        (1.17)


Then equation (1.3) yields an expression for the steady-state solution ȳ:

c·ȳ = k̄,                                        (1.18)

hence

ȳ = k̄/c.                                        (1.19)

The dynamic solution of equation (1.3) is given by (assuming d = 0):

y(t) = x1(t) = ȳ·(1 − cos(ωt)),                  (1.20)
ẏ(t) = x2(t) = ȳ·ω·sin(ωt),                      (1.21)

therefore

x1,max = 2ȳ,   x1,min = 0,                       (1.22)
x2,max = ȳ·ω,  x2,min = −ȳ·ω.                    (1.23)
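The bounds (1.22)-(1.23) can be checked numerically by sampling the analytic solution (1.20)-(1.21) over one period. The parameter and excitation values in this sketch are illustrative assumptions.

```python
import math

m, c = 30.0, 2.0e5            # assumed parameter values
k_bar = 1.0e4                 # assumed constant excitation k(t) = k_bar
omega = math.sqrt(c / m)      # from (1.12)
y_bar = k_bar / c             # steady-state solution (1.19)

def y(t):
    """Dynamic solution (1.20) for d = 0."""
    return y_bar * (1.0 - math.cos(omega * t))

def y_dot(t):
    """Dynamic solution (1.21) for d = 0."""
    return y_bar * omega * math.sin(omega * t)

# Sample one full period and check the bounds (1.22)-(1.23)
period = 2.0 * math.pi / omega
samples = [y(i * period / 1000.0) for i in range(1001)]
```

The sampled maximum lands at 2·ȳ and the minimum at 0, in agreement with (1.22); such a rough bound is exactly the kind of order-of-magnitude estimate needed before setting up plot axes.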

1.3.7 Numerical Solution

The analysis of the model is followed by its numerical solution. The formulated differential equations have to be integrated with respect to time. In Fig. 1.10 this is depicted symbolically with an integration block. In the following, we proceed based on the assumption that x is a scalar, not a vector variable. The same ideas, however, can also be applied to vectors.

Figure 1.10: Integration block.

A model shall be defined through

ẋ = f(x),   x(0) = x0.                           (1.24)

The time integration provides:

x(t) = x(0) + ∫_0^t f(x(τ)) dτ.                  (1.25)

The time axis will now be subdivided into an equidistant time grid t0 < t1 < ... < tk < tk+1 < ... (see Fig. 1.11). With this, equation (1.25) can be rewritten as:

x(tk+1) = x(0) + ∫_0^{tk+1} f(x(τ)) dτ
        = x(0) + ∫_0^{tk} f(x(τ)) dτ + ∫_{tk}^{tk+1} f(x(τ)) dτ
        = x(tk) + ∫_{tk}^{tk+1} f(x(τ)) dτ.      (1.26)

Figure 1.11: Numerical integration (equidistant time grid).

The term ∫_{tk}^{tk+1} f(x(τ)) dτ in equation (1.26) can be approximated (see Fig. 1.11). An example for this is the (explicit) Euler method:

xk+1 = xk + h·fk                                 (1.27)

with

xk+1 ≈ x(tk+1),                                  (1.28)
xk ≈ x(tk),                                      (1.29)
fk = f(xk),                                      (1.30)
h = tk+1 − tk.                                   (1.31)

The numerical method is coded in a computer program (see Fig. 1.12).


Figure 1.12: Numerical integration as a calculation program.
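Fig. 1.12 shows such a calculation program. As a stand-in, the following Python sketch implements the Euler recursion (1.27); the function layout and the test problem are illustrative choices, not a reproduction of the original figure.

```python
def euler(f, x0, t0, T, h):
    """Explicit Euler method (1.27): x_{k+1} = x_k + h * f(x_k, t_k).

    f  : right-hand side, f(x, t) -> list of derivatives
    x0 : initial state (list); t0: initial time; T: simulation time; h: step
    Returns the list of (t_k, x_k) pairs.
    """
    x, t = list(x0), t0
    trajectory = [(t, list(x))]
    n_steps = int(round(T / h))
    for _ in range(n_steps):
        fx = f(x, t)
        x = [xi + h * fi for xi, fi in zip(x, fx)]  # Euler update (1.27)
        t += h
        trajectory.append((t, list(x)))
    return trajectory

# Example: dx/dt = -x, x(0) = 1; exact solution x(1) = exp(-1) ~ 0.3679
traj = euler(lambda x, t: [-x[0]], [1.0], 0.0, 1.0, 0.001)
```

With h = 0.001 the result at t = 1 agrees with exp(−1) to about three decimal places; halving h roughly halves the error, reflecting the first-order accuracy of the method.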

1.3.8 Simulation
Up to now, the solution of simulation problems has been discussed. It is important to compare the results of these simulations with real experiments in order to be able to make statements on e.g. whether the model simplifications are acceptable and the numerical solution method applied is suitable:

simulation experiment: simulated distances and velocities;
real experiment: measured distances and velocities on a test track;
matching through comparable test conditions.

If relevant deviations between simulation and experiment occur (Fig. 1.13), then (after examination of comparable experimental conditions) the simulation has to be modified through

parameter adaptation (c, d, m), or
model modification (friction, sophisticated vehicle model).

Often only an iterative model adjustment leads to a satisfactory correspondence between simulation and reality.

1.3.9 Applications of Simulators

If a sufficiently accurate model of the reality is finally constructed, the simulation can be used for problem solving. Examples for applications of the simulation model in our example are:


Figure 1.13: Deviations between simulation and real experiment.

a. Analysis of the spring behavior
e.g. limits on the accelerations: |ÿR| < MR (safety), |ÿA| < MA (comfort)

b. Synthesis
e.g. active spring c(y, ẏ)

c. Hardware in the loop
e.g. active spring as a component in a simulated suspension model

d. Training
e.g. education

e. Predictive simulation
e.g. prediction

These different applications can be divided into on-line and off-line applications:

off-line: a, b, (e)
on-line, real time: c, d, (e)

Figure 1.14: Time axes in the simulation and real time.


On the basis of Fig. 1.14, simulation can be divided with regard to its time scales by a time-scaling factor λ (the ratio of the two time axes in Fig. 1.14):

λ = 1:   real time,
λ < 1:   time extension,
λ > 1:   time compression.                       (1.32)


2 Representation of Dynamic Systems

2.1 State Representation of Linear Dynamic Systems
The example of the vertical movement of a motor vehicle in the previous chapter led to a time-continuous dynamic system (with lumped parameters), which is described through ordinary differential equations with time as the independent variable. In this chapter we will generalize this system representation. Starting with one or more linear differential equations of higher order, we are going to implement a transformation such that we get a system of differential equations of first order (Zeitz, 1999). This alternative representation of the model is useful for determining certain characteristics of the system in consideration, for instance its stability. Furthermore, as we will see in Chapter 5, common numerical integration methods require a system of differential equations of first order. In Figure 2.1 a linear transfer system is depicted (comparable to systems in control engineering).


Figure 2.1: Linear transfer system.

The model for this system is given through a generic linear differential equation of order n:

a0·y + a1·ẏ + ... + an·y^(n) = b0·u + b1·u̇ + b2·ü + ... + bm·u^(m),   t > 0.   (2.1)

Note that for real systems, the condition m ≤ n always holds: the system state and the output depend only on previous states and inputs. The initial conditions y(0), ẏ(0), ..., y^(n−1)(0) as well as u(t) and all derivatives of u(t) for t ≤ 0 have to be known. Attention has to be paid to the time derivatives of the input variables! In Fig. 2.2, a more detailed model of the vertical movement of a motor vehicle depicted in Section 1.3 is given. Its mathematical representation is

m·ÿ = −c1·y − d1·ẏ + c2·(u − y) + d2·(u̇ − ẏ)   for t > 0.   (2.2)

Figure 2.2: Modeling of the vertical movement of a vehicle.

The initial conditions at t = 0 are

y(0) = y0,                                       (2.3)
ẏ(0) = ẏ0.                                       (2.4)

After a simple transformation of equation (2.2), it becomes clear that the differential equation of this model is a special case of the generic equation (2.1) with n = 2 and m = 1:

ÿ + [(d1 + d2)/m]·ẏ + [(c1 + c2)/m]·y = (c2/m)·u + (d2/m)·u̇ (+ 0·ü),   (2.5)

i.e. a2 = 1, a1 = (d1 + d2)/m, a0 = (c1 + c2)/m, b0 = c2/m, b1 = d2/m, and b2 = 0.

Note that in favor of a more general solution, we have introduced the term b_2 \ddot{u}; in our example, this term equals zero. Indeed, the representation (2.5) is unsuitable for simulation purposes because in general the inputs u(t) may not be differentiable. For example, with a step change of u at a time \bar{t} as depicted in Figure 2.3, the derivatives \dot{u}(\bar{t}) and \ddot{u}(\bar{t}) are not defined.

Figure 2.3: Step change of u(t).

2.1 State Representation of Linear Dynamic Systems

As such discontinuities cause difficulties in simulation (i.e. in the numerical methods), equations like (2.1) are transformed through successive integration at time 0 in the interval from t = 0- to t = 0+. As we want to eliminate all derivatives \dot{u}, \ddot{u}, \ldots, u^{(m)}, m integrations are required:

\underbrace{\int_{0-}^{0+} \ldots \int_{0-}^{0+}}_{m \text{ integrals}} (\text{differential equation}) \; d\tau \ldots d\tau.   (2.6)

Inspect, for instance, the general case n = m = 2 as in equation (2.5). By solving this equation with respect to the highest-order derivative of y you obtain

\ddot{y} = b_0 u - a_0 y + b_1 \dot{u} - a_1 \dot{y} + b_2 \ddot{u}   (2.7)

with the initial conditions

y(0) = y_0,   (2.8)
\dot{y}(0) = \dot{y}_0.   (2.9)

As equation (2.7) is of second order in u, two integrations are required to obtain a representation free of derivatives of u. After the first integration, we get

\dot{y}(t) = \underbrace{\int_0^t (b_0 u - a_0 y) \, d\tau}_{=: x_1} + b_1 u - a_1 y + b_2 \dot{u},   (2.10)

hence

\dot{y}(t) = x_1 + b_1 u - a_1 y + b_2 \dot{u},   (2.11)

and the second integration yields

y(t) = \underbrace{\int_0^t (x_1 + b_1 u - a_1 y) \, d\tau}_{=: x_2} + b_2 u = x_2 + b_2 u.   (2.12)

We get the following equation system, consisting of two differential equations, namely the definitions of x_1 and x_2 in equations (2.10) and (2.12) respectively, and one algebraic equation, i.e. equation (2.12):

\dot{x}_1 = b_0 u - a_0 y,   (2.13)
\dot{x}_2 = x_1 + b_1 u - a_1 y,   (2.14)
y = x_2 + b_2 u.   (2.15)


As y is an output variable of the system, it is reasonable to replace y in (2.13) and (2.14) by inserting equation (2.15):

\dot{x}_1 = -a_0 x_2 + (b_0 - a_0 b_2) u,   (2.16)
\dot{x}_2 = x_1 - a_1 x_2 + (b_1 - a_1 b_2) u,   (2.17)
y = x_2 + b_2 u.   (2.18)

This form is called the state representation of the linear system (2.7). In this representation, the evolution of the system, i.e. the temporal derivatives of the state variables x_1 and x_2, is given as a function of the current state of the system (the x_i) and the input variable u. The solution of the equation system (2.16) - (2.18) requires the initial conditions x_1(0) and x_2(0) at time t = 0. They can be obtained with the help of the initial conditions y(0) = y_0 and \dot{y}(0) = \dot{y}_0:

y(0) = x_2(0) + b_2 u(0)   \Rightarrow   x_2(0) = y_0 - b_2 u(0).   (2.19)

After differentiation of equation (2.18), you obtain

\dot{y}(0) = \dot{x}_2(0) + b_2 \dot{u}(0).   (2.20)

Applying (2.17) leads to

\dot{y}(0) = x_1(0) + b_1 u(0) - a_1 y(0) + b_2 \dot{u}(0),   (2.21)

hence

x_1(0) = \dot{y}_0 + a_1 y_0 - b_1 u(0) - b_2 \dot{u}(0).   (2.22)

So the initial conditions are known if u(t), t >= 0, is known and if \lim_{t \to 0+} u(t) exists. For instance, if the input u(t) is a step function,

u(t) = \begin{cases} 0 & t < 0, \\ 1 & t \geq 0, \end{cases}   (2.23)

then

\lim_{t \to 0+} u(t) = 1.   (2.24)

A generalization of the state representation (2.16) - (2.18) renders the following representation in matrix notation:

x = [x_1, x_2, \ldots, x_n]^T \in R^n   (state vector),   (2.25)
u = [u_1, u_2, \ldots, u_m]^T \in R^m   (input vector),   (2.26)
y = [y_1, y_2, \ldots, y_p]^T \in R^p   (output vector).   (2.27)


In our example this means:

\dot{x} = \begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \underbrace{\begin{bmatrix} 0 & -a_0 \\ 1 & -a_1 \end{bmatrix}}_{A, \text{ system matrix } n \times n} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \underbrace{\begin{bmatrix} b_0 - a_0 b_2 \\ b_1 - a_1 b_2 \end{bmatrix}}_{B, \text{ input matrix } n \times m} u,   (2.28)

y = \underbrace{\begin{bmatrix} 0 & 1 \end{bmatrix}}_{C, \text{ output matrix } p \times n} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \underbrace{\begin{bmatrix} b_2 \end{bmatrix}}_{D, \text{ transmission matrix } p \times m} u.   (2.29)

Therefore, you obtain a generic linear state representation in the following form:

\dot{x} = A x + B u,   x(0) = x_0,   (2.30)
y = C x + D u.   (2.31)

Consequently, the simulation task is to solve a system of linear differential equations of first order.

2.2 The State Space

The state vector x(t) describes the solution of the model. For a fixed t it represents a vector in the state space. For variation of t, it describes a state curve or trajectory starting from x(t_0) (Föllinger and Franke, 1982). This is illustrated in Fig. 2.4.

Figure 2.4: Trajectory in the state space (according to Föllinger and Franke, 1982).

For given inputs u(t), t >= t_0, the trajectory is uniquely determined through the initial condition x(t_0) = x_0. For different u(t) with the same x_0, you obtain different trajectories. The special case n = 2 (two-dimensional state space) is illustrated in Fig. 2.5, left. This illustration is also called a phase plot. Fig. 2.5, right, shows in contrast a time domain depiction.
Figure 2.5: Phase plot and time domain depiction.

2.3 State Representation of Nonlinear Dynamic Systems

Most real systems cannot be sufficiently represented with linear relations. Therefore, we extend the state representation to the nonlinear case (Föllinger and Franke, 1982; Zeitz, 1999).

2.3.1 Example

Now we study a common example from ecology: the predator-prey model (Lotka, 1925; Volterra, 1926). In Fig. 2.6 the corresponding ecological system is depicted schematically. We make the following assumptions:

- As an input quantity we only use the catch; there is no immigration or emigration.
- Index 1 denotes the prey, index 2 the predator.
- The death rate is proportional to the number of prey or predators, respectively.
- The birth rate is proportional, for the prey, to the number of prey and, for the predator, to the number of prey and predators.
- The death rate of prey caused by the predator is proportional to the number of prey and predators.

Figure 2.6: Predator-prey model.

Considering the balance for the temporal change of the number of animals in the ecological system, you get:

temporal change of the animals = increase through birth - loss through death - loss through catch.

Hence, for the prey:

\dot{x}_1 = a_1 x_1 - b_1 x_1 x_2 - c_1 x_1 - v_1 u,   x_1(0) = x_{10},   (2.32)

and for the predator:

\dot{x}_2 = b_2 x_1 x_2 - c_2 x_2 - v_2 u,   x_2(0) = x_{20}.   (2.33)

As this state space model includes the term x_1 x_2, it is nonlinear. Hence the question arises what the solutions x_1(t) and x_2(t) look like. A closed-form representation of x_1(t) and x_2(t) is hardly possible. We simplify the model (2.32) - (2.33) by assuming that no catch takes place (v_1 = v_2 = 0). Now we can determine the stationary states (equilibrium states) \bar{x}_1 and \bar{x}_2, for which the conditions \dot{x}_1 = 0 and \dot{x}_2 = 0 hold (see also Section 3.3):

(a_1 - b_1 \bar{x}_2 - c_1) \bar{x}_1 = 0,   (2.34)
(b_2 \bar{x}_1 - c_2) \bar{x}_2 = 0.   (2.35)

Besides the trivial solution \bar{x}_1 = \bar{x}_2 = 0 you obtain

with equation (2.35):   \bar{x}_1 = \frac{c_2}{b_2},   (2.36)

with equation (2.34):   \bar{x}_2 = \frac{a_1 - c_1}{b_1}.   (2.37)


In the following step the non-stationary solution trajectory has to be worked out. In this case you can use a trick: divide equation (2.33) by equation (2.32) and then separate the variables x_1 and x_2:

\frac{dx_2}{dx_1} = \frac{dx_2/dt}{dx_1/dt} = \frac{x_2 (b_2 x_1 - c_2)}{x_1 (a_1 - b_1 x_2 - c_1)},   (2.38)

\frac{a_1 - b_1 x_2 - c_1}{x_2} \, dx_2 = \frac{b_2 x_1 - c_2}{x_1} \, dx_1,   (2.39)

(a_1 - c_1) \int \frac{dx_2}{x_2} - b_1 \int dx_2 = b_2 \int dx_1 - c_2 \int \frac{dx_1}{x_1},   (2.40)

(a_1 - c_1) \ln|x_2| - b_1 x_2 = b_2 x_1 - c_2 \ln|x_1| + C.   (2.41)
C is an integration constant. With equation (2.41) you obtain a bundle of closed trajectories. Fig. 2.7 shows an illustration in the state space. The trajectories are traversed in the arrow direction with increasing time. We observe steady-state oscillations with different amplitudes and frequencies. Which of the curves in Fig. 2.7 describes a particular system depends on the initial conditions x_{10} and x_{20}. Fig. 2.7 also shows the behavior of the system for the two special cases x_{10} = 0 and x_{20} = 0. If x_{10} = 0, i.e. no prey at t = 0, the number of predator fish decreases; the system converges towards the stationary state [0, 0]^T. If x_{20} = 0 and x_{10} > 0, i.e. no predators, but some prey at t = 0, the amount of prey fish increases. The system converges towards [\infty, 0]^T.

Figure 2.7: State trajectories of the predator-prey model.

The representation in the state space already gives an insight into the influences of external effects on the system. For example, you can assume that the prey is a harmful insect, which


shall be reduced through the use of chemicals. Using chemicals to reduce the predator and the prey starting from a point such as p_1 has the consequence that later a much greater amount of prey will appear. A better time for an intervention would therefore be a moment with a large amount of prey, e.g. p_2. An alternative would be a continuous intervention u(t). However, this would lead to the question of determining the optimal trajectory of u(t).
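The closed trajectories predicted by (2.41) can also be checked numerically. The following Python sketch integrates the catch-free model (2.32) - (2.33) with a fixed-step classical Runge-Kutta scheme and verifies that the invariant from equation (2.41) stays constant along the trajectory; all parameter values are illustrative assumptions, not values from the text.

```python
import math

# Assumed parameter values for illustration (no catch: v1 = v2 = 0)
a1, b1, c1, b2, c2 = 1.0, 0.2, 0.1, 0.05, 0.4

def f(x1, x2):
    """Right-hand side of (2.32)-(2.33) with u = 0."""
    return (a1*x1 - b1*x1*x2 - c1*x1,   # prey
            b2*x1*x2 - c2*x2)           # predator

def rk4_step(x1, x2, h):
    """One classical Runge-Kutta (RK4) step."""
    k1 = f(x1, x2)
    k2 = f(x1 + h/2*k1[0], x2 + h/2*k1[1])
    k3 = f(x1 + h/2*k2[0], x2 + h/2*k2[1])
    k4 = f(x1 + h*k3[0], x2 + h*k3[1])
    return (x1 + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            x2 + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

def invariant(x1, x2):
    """C from equation (2.41), constant along every trajectory."""
    return (a1 - c1)*math.log(x2) - b1*x2 - b2*x1 + c2*math.log(x1)

x1, x2 = 5.0, 3.0        # initial condition away from the stationary state
C0 = invariant(x1, x2)
h = 0.001
for _ in range(20000):   # integrate up to t = 20
    x1, x2 = rk4_step(x1, x2, h)

# The trajectory is a closed curve: the invariant is (nearly) conserved.
print(abs(invariant(x1, x2) - C0))
```

Conservation of the invariant is a convenient accuracy check for the integrator itself, since any drift is purely a numerical error.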

2.3.2 Generalized Representation


The general nonlinear state space model can be described in the following way:

\dot{x} = f(x, u),   x(0) = x_0,   (2.42)
y = g(x, u).   (2.43)

If f and g are both linear in the input quantity u, this leads to a so-called nonlinear, control-affine state space model:

\dot{x} = f_1(x) + f_2(x) u,   x(0) = x_0,   (2.44)
y = g_1(x) + g_2(x) u.   (2.45)

So the simulation task consists of solving a system of nonlinear differential equations. Note that you deal with a vector model:

x = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix},   f = \begin{bmatrix} f_1 \\ \vdots \\ f_n \end{bmatrix},   \ldots   (2.46)

or

\begin{bmatrix} \dot{x}_1 \\ \vdots \\ \dot{x}_n \end{bmatrix} = \begin{bmatrix} f_1(x, u) \\ \vdots \\ f_n(x, u) \end{bmatrix},   \begin{bmatrix} x_1(0) \\ \vdots \\ x_n(0) \end{bmatrix} = \begin{bmatrix} x_{10} \\ \vdots \\ x_{n0} \end{bmatrix},   \begin{bmatrix} y_1 \\ \vdots \\ y_p \end{bmatrix} = \begin{bmatrix} g_1(x, u) \\ \vdots \\ g_p(x, u) \end{bmatrix}.   (2.47)
If the time t does not appear explicitly or implicitly through u(t) in the functions f and g, these systems are called autonomous. Every non-autonomous system

\dot{x} = f(x, u(t)) = f(x, t),   (2.48)
y = g(x, u(t)) = g(x, t)   (2.49)

can be transformed into an autonomous one with the additional state variable x_{n+1} = t and the supplementary equation \dot{x}_{n+1} = 1, x_{n+1}(0) = 0.


Example
For the non-autonomous system

\dot{y} = t y,   t > 0;   y(0) = y_0,   (2.50)

we get with x_1 := y, x_2 := t:

\dot{x} = \begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} x_1 x_2 \\ 1 \end{bmatrix},   t > 0;   \begin{bmatrix} x_1(0) \\ x_2(0) \end{bmatrix} = \begin{bmatrix} y_0 \\ 0 \end{bmatrix}.   (2.51)

2.4 Block-Oriented Representation of Dynamic Systems


2.4.1 Block-Oriented Representation of Linear Systems

The state space model can also be illustrated graphically. For this purpose you use a graph, which consists of nodes (vertices) and arcs (edges) (cf. the outline of graph theory in Section 9.3). In the graphical representation, an edge is assigned a direction, which corresponds to a signal flux. The nodes of the graph represent the functional elements, which modify the signals according to some defined rules. Such function blocks may have one or more inputs and outputs. Normally, you associate an edge in a graph with a scalar quantity, although a generalization to a vector quantity is possible and also common. In this case, a functional element has to be interpreted as a vector function, which converts a vector input into a vector output. The most important function blocks for illustrating linear dynamic systems are

- the integrator,
- the summarizer,
- the branch, and
- the gain.


These blocks are depicted graphically in Fig. 2.8. Additionally, the corresponding elements in the commercial software Simulink (MathWorks, 2002) are shown. An application of the block-oriented representation to the example of the motor vehicle is shown in Fig. 2.9. In order to improve clarity, some parts of a graph can be aggregated into a (structured) function element, which converts inputs into outputs through the rules given by those aggregated parts. This procedure is exemplarily represented for the instance of our wheel model in Fig. 2.10.


Figure 2.8: Block-oriented representation of dynamic systems.



Figure 2.9: Block-oriented representation of the wheel model.

 



Figure 2.10: Aggregation of model parts.

2.4.2 Block-Oriented Representation of Nonlinear Systems


Nonlinear systems can also be illustrated graphically. The generalization of the depiction of linear systems is straightforward: a nonlinear gain is introduced, whose output quantities are nonlinear functions of the input quantities of the function block. The block diagram of the predator-prey model (with a_0 := a_1 - c_1) is shown in Fig. 2.11. It contains a nonlinear multiplier block, which multiplies x_1 and x_2.

Figure 2.11: Block-oriented representation of the predator-prey model.


3 Model Analysis
Before solving the model equations, a model analysis should be performed (see simulation procedure, Fig. 1.4) in order to support the implementation of the model in a simulator. We look at an autonomous system of nonlinear differential equations of the form:

\frac{dx}{dt} = f(x),   x(0) = x_0.   (3.1)

The output equations do not have to be considered because they are an explicit function of states and inputs. Therefore, the outputs can also be determined after the simulation (which is normally not done for practical reasons). In the following sections, different aspects of the model analysis will be briefly introduced.

3.1 Lipschitz continuity

Before we discuss a criterion for the solvability of autonomous differential equations, we introduce the term Lipschitz continuity. A function f : R^n \to R^m is called Lipschitz continuous if a constant L > 0 exists such that for all x_1, x_2 \in R^n

\| f(x_1) - f(x_2) \| \leq L \| x_1 - x_2 \|.   (3.2)

\| \cdot \| may be an arbitrary vector norm for R^m or R^n respectively, e.g. the maximum norm

\| x \|_\infty = \max_{1 \leq i \leq n} |x_i|   (3.3)

or the Euclidean norm

\| x \|_2 = \sqrt{x_1^2 + \ldots + x_n^2}.   (3.4)

In general, continuity of a function is a necessary condition for Lipschitz continuity. For differentiable functions, the following theorem is valid: a function f is Lipschitz continuous if and only if its derivative is bounded, i.e. a constant L \in R exists such that

\forall x \in R^n:   \| f'(x) \| \leq L.   (3.5)


Figure 3.1: Lipschitz-continuous and non-Lipschitz-continuous functions. (a) f(x) = e^{-x}: not Lipschitz continuous. (b) A discontinuous function: not Lipschitz continuous. (c) f(x) = 1 + |x|: not differentiable, but Lipschitz continuous.

Example
a. The function f : R \to R, f(x) = e^{-x} (Fig. 3.1(a)) is not Lipschitz continuous, because its derivative f'(x) = -e^{-x} is not bounded.
b. As the function in Figure 3.1(b) is not even continuous, it is not Lipschitz continuous.
c. Consider the function f : R \to R, f(x) = 1 + |x| (Fig. 3.1(c)). By applying the definition above, we will show that f is Lipschitz continuous. For any x_1, x_2 \in R, the following relations hold (the last transformation is a direct result of the triangle inequality):

|f(x_1) - f(x_2)| = |(1 + |x_1|) - (1 + |x_2|)| = | |x_1| - |x_2| | \leq |x_1 - x_2|.   (3.6)

With the definition above, it follows immediately that f is Lipschitz continuous; L = 1 is a Lipschitz constant of f.
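The definition can also be explored numerically by sampling difference quotients |f(x_1) - f(x_2)| / |x_1 - x_2|: for the Lipschitz-continuous function they never exceed L = 1, while for the exponential they grow without bound as the sample interval is extended towards -infinity. A small Python sketch (the sampling grid is an arbitrary choice):

```python
import itertools, math

def max_difference_quotient(f, pts):
    """Largest |f(a)-f(b)| / |a-b| over all pairs of sample points."""
    return max(abs(f(a) - f(b)) / abs(a - b)
               for a, b in itertools.combinations(pts, 2))

pts = [i * 0.25 for i in range(-40, 41)]   # grid on [-10, 10]

# f(x) = 1 + |x|: the quotients never exceed the Lipschitz constant L = 1
r1 = max_difference_quotient(lambda x: 1 + abs(x), pts)

# f(x) = exp(-x): the quotients grow without bound for x -> -infinity
r2 = max_difference_quotient(lambda x: math.exp(-x), pts)

print(r1, r2)   # r1 stays <= 1, r2 is already huge on this grid
```

Such a sampled check can of course never prove Lipschitz continuity; it only illustrates the definition.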

3.2 Solvability

We consider an ordinary differential equation of the form

\dot{x} = f(x)   for t > 0,   x(0) = x_0.   (3.7)

If f is Lipschitz continuous, then the differential equation has a unique solution x(t). Note that this is a sufficient, but not a necessary condition (see part a of Example 3.1).

Example 3.1


Figure 3.2: Unique solutions x(t) of (a) \dot{x} = e^{-x}, x(0) = 0, and (b) \dot{x} = 1 + |x|, x(0) = -1.

a. In Section 3.1, we have seen that f(x) = e^{-x} is not Lipschitz continuous. Nevertheless, the differential equation

\dot{x} = e^{-x}   for t > 0,   x(0) = 0   (3.8)

has a unique solution (Fig. 3.2(a)), which can be calculated analytically:

\frac{dx}{dt} = e^{-x},   (3.9)
dt = e^{x} \, dx = d(e^{x}),   (3.10)
t = e^{x} - e^{x(0)} = e^{x} - 1,   (3.11)

and therefore

x(t) = \ln(1 + t)   for t \geq 0.   (3.12)

b. As f(x) = 1 + |x| is Lipschitz continuous (cf. Section 3.1), the differential equation

\dot{x} = 1 + |x|   for t > 0,   x(0) = -1   (3.13)

has a unique solution (Fig. 3.2(b)). The calculation of the analytical solution

x(t) = \begin{cases} 1 - 2e^{-t} & \text{for } t \in [0, \ln 2], \\ e^{t - \ln 2} - 1 & \text{for } t \in (\ln 2, \infty) \end{cases}   (3.14)

is left as an exercise to the reader.
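Both analytic solutions can be checked by inserting them into their differential equations and evaluating the residual between the analytic derivative and the right-hand side at sample times; a short Python sketch of this check:

```python
import math

def max_residual(x, dxdt, f, ts):
    """Max |x'(t) - f(x(t))| over sample times; x' is supplied analytically."""
    return max(abs(dxdt(t) - f(x(t))) for t in ts)

ts = [0.01 * i for i in range(1, 300)]   # sample times in (0, 3]

# Example a: x(t) = ln(1+t) solves x' = exp(-x), x(0) = 0
ra = max_residual(lambda t: math.log(1 + t),
                  lambda t: 1 / (1 + t),
                  lambda x: math.exp(-x), ts)

# Example b: piecewise solution of x' = 1 + |x|, x(0) = -1
def xb(t):
    return 1 - 2*math.exp(-t) if t <= math.log(2) else math.exp(t - math.log(2)) - 1

def dxb(t):
    return 2*math.exp(-t) if t <= math.log(2) else math.exp(t - math.log(2))

rb = max_residual(xb, dxb, lambda x: 1 + abs(x), ts)
print(ra, rb)   # both residuals vanish up to rounding errors
```

Note that the piecewise solution of example b is continuously differentiable at t = ln 2, so the residual check works across the switching point as well.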


3.3 Stationary States


In addition to the possible existence of a dynamic, i.e. time-dependent, solution, the question of whether a system provides one or more stationary (steady) states is of interest. By definition, a stationary state is characterized by the fact that the state variables x \in R^n of a system

\dot{x} = f(x, u)   for t > 0,   (3.15)
x(0) = x_0   (3.16)

are constant with respect to time:

\dot{x} = 0.   (3.17)

Therefore, for a given input \bar{u}, the stationary states \bar{x} of a system can be calculated by solving the equation

f(\bar{x}, \bar{u}) = 0.   (3.18)

This condition shows that the stationary states of a system do not depend on initial values; they are a property of the system itself. Nevertheless, in general it depends on the initial values x(0) to which stationary state a system converges and whether it converges at all.
This condition shows that the stationary states of a system do not depend on initial values; they are a property of the system itself. Nevertheless, in general it depends on the initial values x(0) to which stationary state a system converges and whether it converges at all. Example In Section 2.3.1 we have seen that the predator-prey model has two stationary states: x1 = 0 , 0
1

x2 =

x2 b2 a1 c1 b1

(3.19)

If x(0) = x or x(0) = x , then the system will remain in the respective stationary state. If x (0) = 0 and x (0) > 0, then the system will converge towards x , but it will never reach this state. If x (0) = 0 and x (0) > 0, then the system will converge towards [, 0] . For all other x(0) with x (0) > 0, x (0) > 0, the system does not converge towards
2 1 2 1 2 1 T 1 2

a stationary state, but it follows the trajectories shown in Fig. 2.7.

For the calculation of stationary states, the linear and the nonlinear case are to be distinguished:


- Linear case (\dot{x} = A x + B u):

A \bar{x} = -B \bar{u}.   (3.20)

If the matrix has full rank, rank(A) = n, i.e. \det A \neq 0, then for any input \bar{u} a solution \bar{x} such that A \bar{x} + B \bar{u} = 0 exists.

- Nonlinear case (\dot{x} = f(x, u)):

In the nonlinear case no sufficient conditions exist. As a necessary condition, by the implicit function theorem,

\det \underbrace{\left( \frac{\partial f}{\partial x} \right)}_{\text{Jacobian matrix}} \neq 0   \text{for } \| x - \bar{x} \| < \epsilon   (3.21)

has to be satisfied. The Jacobian matrix must have full rank for all x in the neighborhood of the searched stationary state \bar{x}.
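In practice, the nonlinear system (3.18) is solved iteratively. The following Python sketch applies Newton's method, which uses exactly the Jacobian matrix discussed here, to the catch-free predator-prey model and recovers the stationary state (2.36) - (2.37); the parameter values are illustrative assumptions.

```python
# Newton iteration for f(x) = 0, applied to the predator-prey model
# with v1 = v2 = 0. Parameter values are assumptions for illustration.
a1, b1, c1, b2, c2 = 1.0, 0.2, 0.1, 0.05, 0.4

def f(x):
    x1, x2 = x
    return [a1*x1 - b1*x1*x2 - c1*x1, b2*x1*x2 - c2*x2]

def jac(x):
    """Analytic Jacobian of f."""
    x1, x2 = x
    return [[a1 - b1*x2 - c1, -b1*x1],
            [b2*x2, b2*x1 - c2]]

def newton(x, steps=50):
    for _ in range(steps):
        (f1, f2), J = f(x), jac(x)
        det = J[0][0]*J[1][1] - J[0][1]*J[1][0]   # invert the 2x2 system by hand
        dx1 = (J[1][1]*f1 - J[0][1]*f2) / det
        dx2 = (-J[1][0]*f1 + J[0][0]*f2) / det
        x = [x[0] - dx1, x[1] - dx2]
    return x

xbar = newton([7.0, 4.0])   # start close to the non-trivial stationary state
# Analytic result (2.36)-(2.37): x1 = c2/b2, x2 = (a1-c1)/b1
print(xbar, [c2/b2, (a1 - c1)/b1])
```

Starting values matter: a Newton iteration started near the origin would converge to the trivial stationary state instead.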

3.4 Jacobian Matrix


The Jacobian matrix of a differential equation system is defined in the following way:

\frac{\partial f}{\partial x} = \left( \frac{\partial f_i}{\partial x_j} \right)_{i,j = 1, \ldots, n} = \begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial f_n}{\partial x_1} & \cdots & \frac{\partial f_n}{\partial x_n} \end{bmatrix}.   (3.22)

Its calculation can be done either

- analytically, e.g. manually or by a computer algebra system such as Maple or Mathematica, or
- numerically, by central differences:

\frac{\partial f_i}{\partial x_j} \approx \frac{f_i(\ldots, x_j + \Delta x_j, \ldots) - f_i(\ldots, x_j - \Delta x_j, \ldots)}{2 \Delta x_j}.   (3.23)

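A minimal Python implementation of the central-difference formula (3.23), checked here against an analytically known Jacobian:

```python
def numerical_jacobian(f, x, dx=1e-6):
    """Central-difference approximation of the Jacobian, equation (3.23)."""
    n = len(x)
    m = len(f(x))
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        xp = list(x); xp[j] += dx
        xm = list(x); xm[j] -= dx
        fp, fm = f(xp), f(xm)
        for i in range(m):
            J[i][j] = (fp[i] - fm[i]) / (2 * dx)
    return J

# Example: f(x) = [x1*x2, x1^2] has the analytic Jacobian [[x2, x1], [2*x1, 0]]
f = lambda x: [x[0] * x[1], x[0] ** 2]
J = numerical_jacobian(f, [2.0, 3.0])
print(J)   # approximately [[3, 2], [4, 0]]
```

The perturbation size dx trades truncation error against rounding error; dx around the square root of machine precision is a common rule of thumb.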
The pattern of the Jacobian matrix reflects the coupling structure of the differential equation system. Take a look at the following example, where a non-zero entry \partial f_i / \partial x_j \neq 0 is marked by * and a zero entry by 0. If the first row of a 5 \times 5 Jacobian matrix reads

\frac{\partial f_1}{\partial x} = [\, 0 \;\; * \;\; 0 \;\; * \;\; 0 \,],

the function f_1 depends only on x_2 and x_4. You distinguish between dense (far more non-zero than zero entries) and sparse (far more zero entries) matrices. Among other reasons, this is important for the storage as well as for the numerical evaluation of the matrices (sparse numerical methods).

3.5 Linearization of Real Functions


The Taylor expansion of a function f : R \to R at \bar{x} is^1

f(x) = \sum_{i=0}^{\infty} \frac{1}{i!} \left. \frac{d^i f}{dx^i} \right|_{x=\bar{x}} (x - \bar{x})^i   (3.24)

= f(\bar{x}) + \left. \frac{df}{dx} \right|_{x=\bar{x}} (x - \bar{x}) + \frac{1}{2} \left. \frac{d^2 f}{dx^2} \right|_{x=\bar{x}} (x - \bar{x})^2 + \ldots   (3.25)

If the Taylor expansion is truncated after the second (linear) term, we get a linear approximation of f at \bar{x}:

f_{lin,\bar{x}}(x) = f(\bar{x}) + \left. \frac{df}{dx} \right|_{x=\bar{x}} (x - \bar{x}).   (3.26)

If \Delta x := x - \bar{x} is sufficiently small, then f_{lin,\bar{x}}(x) \approx f(x).

Note that the linearization of a function requires differentiability of the function at the respective point. In particular, a function cannot be linearized at jump discontinuities or breakpoints (cf. Fig. 3.3).

Example
Let f(x) = e^x. The linearization of f at \bar{x} is

f_{lin,\bar{x}}(x) = e^{\bar{x}} + \left. \frac{d}{dx} e^x \right|_{x=\bar{x}} (x - \bar{x})   (3.27)
= e^{\bar{x}} + e^{\bar{x}} (x - \bar{x})   (3.28)
= e^{\bar{x}} (1 - \bar{x} + x).   (3.29)
^1 \left. \frac{df}{dx} \right|_{x=\bar{x}} is the value of f' = \frac{df}{dx} at \bar{x}.

Figure 3.3: Possibilities of linearization.

For instance, in the neighborhood of \bar{x} = 0 (see Figure 3.4), the linearization is

f_{lin,0}(x) = 1 + x,   (3.30)

and in the neighborhood of \bar{x} = 1, we get

f_{lin,1}(x) = e x.   (3.31)

Similarly, a function f : R^n \to R^m with n, m \in N can be linearized by truncating its Taylor expansion:

f_{lin,\bar{x}}(x) = f(\bar{x}) + \left. \frac{\partial f}{\partial x} \right|_{x=\bar{x}} \underbrace{(x - \bar{x})}_{=: \Delta x}.   (3.32)
Example
Let

f(x) = \begin{bmatrix} x_1 + x_2^2 + e^{x_1} \\ 4 \sin x_1 \end{bmatrix}.

The linearization at \bar{x} = [0, 1]^T is

f_{lin}(x) = \begin{bmatrix} 0 + 1^2 + e^0 \\ 4 \sin 0 \end{bmatrix} + \left. \begin{bmatrix} 1 + e^{x_1} & 2 x_2 \\ 4 \cos x_1 & 0 \end{bmatrix} \right|_{x=\bar{x}} \Delta x   (3.33)

= \begin{bmatrix} 2 \\ 0 \end{bmatrix} + \begin{bmatrix} 2 & 2 \\ 4 & 0 \end{bmatrix} \Delta x.   (3.34)
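The quality of the linear approximation can be quantified by comparing f and its linearization near and far from the expansion point; a Python sketch for the example above, using the values f(\bar{x}) = [2, 0] and the Jacobian [[2, 2], [4, 0]] at \bar{x} = [0, 1]:

```python
import math

def f(x):
    x1, x2 = x
    return [x1 + x2**2 + math.exp(x1), 4 * math.sin(x1)]

def f_lin(x, xbar=(0.0, 1.0)):
    # f(xbar) = [2, 0]; Jacobian at xbar = [[2, 2], [4, 0]]
    d1, d2 = x[0] - xbar[0], x[1] - xbar[1]
    return [2 + 2*d1 + 2*d2, 0 + 4*d1]

# Close to xbar the linearization is a good approximation ...
x_near = [0.01, 1.02]
err_near = max(abs(a - b) for a, b in zip(f(x_near), f_lin(x_near)))

# ... far away from xbar it is not.
x_far = [1.0, 3.0]
err_far = max(abs(a - b) for a, b in zip(f(x_far), f_lin(x_far)))
print(err_near, err_far)
```

The error of the linearization grows quadratically with the distance from \bar{x}, as the truncated Taylor term in (3.25) suggests.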


Figure 3.4: Linearization of f(x) = e^x at \bar{x} = 0 and \bar{x} = 1.

3.6 Linearization of a Dynamic System around the Stationary State


Consider a dynamic system \dot{x} = f(x, u) with x \in R^n and u \in R^m. If we assume that the system is and remains in the neighborhood of a stationary state \bar{x} (which in general depends on the stationary input \bar{u}), then we can replace f by its linearization with respect to x and u at the stationary state:

\dot{x} = f(x, u)   (3.35)
\approx f(\bar{x}, \bar{u}) + \left. \frac{\partial f}{\partial x} \right|_{x=\bar{x}, u=\bar{u}} \underbrace{(x - \bar{x})}_{=: \Delta x} + \left. \frac{\partial f}{\partial u} \right|_{x=\bar{x}, u=\bar{u}} \underbrace{(u - \bar{u})}_{=: \Delta u}.   (3.36)

At a stationary state, we always have f(\bar{x}, \bar{u}) = 0. Furthermore, as \bar{x} is a constant that does not depend on t, we have \dot{x} = \frac{dx}{dt} = \frac{d}{dt}(\bar{x} + \Delta x) = \frac{d \Delta x}{dt}. Thus, we get

\frac{d \Delta x}{dt} = A \Delta x + B \Delta u   (3.37)


with

A = \left. \frac{\partial f}{\partial x} \right|_{x=\bar{x}, u=\bar{u}} = \begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \vdots & & & \vdots \\ \frac{\partial f_n}{\partial x_1} & \frac{\partial f_n}{\partial x_2} & \cdots & \frac{\partial f_n}{\partial x_n} \end{bmatrix}_{x=\bar{x}, u=\bar{u}}   (3.38)

and

B = \left. \frac{\partial f}{\partial u} \right|_{x=\bar{x}, u=\bar{u}} = \begin{bmatrix} \frac{\partial f_1}{\partial u_1} & \frac{\partial f_1}{\partial u_2} & \cdots & \frac{\partial f_1}{\partial u_m} \\ \vdots & & & \vdots \\ \frac{\partial f_n}{\partial u_1} & \frac{\partial f_n}{\partial u_2} & \cdots & \frac{\partial f_n}{\partial u_m} \end{bmatrix}_{x=\bar{x}, u=\bar{u}}.   (3.39)

The system matrix A is an n \times n matrix, the input matrix B is an n \times m matrix.

3.7 Eigenvalues and eigenvectors


For a given matrix A \in C^{n \times n}, \lambda \in C and v \in C^n, v \neq 0, are called an eigenvalue and an eigenvector of A if the following equation holds:

A v = \lambda v.   (3.40)

In order to calculate the eigenvalues of a matrix, we transform equation (3.40):

(\lambda I - A) v = 0.   (3.41)

Here, I denotes the n-dimensional identity matrix. As v \neq 0, equation (3.41) holds if and only if

\det(\lambda I - A) = 0.   (3.42)

\det(\lambda I - A) is the characteristic polynomial of A.

Example
Let

A = \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 0 & -1 & 1 \end{bmatrix}.   (3.43)
The characteristic polynomial of A is

\det(\lambda I - A) = \det \begin{bmatrix} \lambda - 1 & 0 & -1 \\ 0 & \lambda - 1 & -1 \\ 0 & 1 & \lambda - 1 \end{bmatrix} = (\lambda - 1)^3 + (\lambda - 1) = (\lambda - 1) \left[ (\lambda - 1)^2 + 1 \right],   (3.44)

and its roots are \lambda_1 = 1, \lambda_2 = 1 + i, and \lambda_3 = 1 - i.
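The result can be verified numerically without any linear algebra library by evaluating the characteristic polynomial at the claimed roots and checking A v = \lambda v for an eigenvector (here v = [1, 1, i]^T for \lambda = 1 + i, found by hand from (\lambda I - A) v = 0):

```python
# Example system matrix; note the sign pattern of the lower-right
# 2x2 block, which produces the complex conjugate eigenvalue pair.
A = [[1, 0, 1],
     [0, 1, 1],
     [0, -1, 1]]

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]

def char_poly(lam):
    """det(lam*I - A) = (lam-1)^3 + (lam-1), see equation (3.44)."""
    return (lam - 1)**3 + (lam - 1)

eigenvalues = [1, 1 + 1j, 1 - 1j]
print([abs(char_poly(l)) for l in eigenvalues])   # all roots: value 0

# Check A*v = lambda*v for an eigenvector of lambda = 1 + i
lam, v = 1 + 1j, [1, 1, 1j]
Av = matvec(A, v)
print(all(abs(Av[i] - lam * v[i]) < 1e-12 for i in range(3)))
```

For larger systems one would of course use a numerical eigenvalue routine instead of the characteristic polynomial.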

3.8 Stability
A further important issue in the model analysis is the stability of a system after a perturbation \Delta x_0 of a stationary state \bar{x}. In order to make statements about the stability of a system, we will solve the homogeneous differential equation system (3.37) under the assumption that there are no perturbations of the input variables, i.e. \Delta u = 0. Furthermore, we simplify the notation by writing x instead of \Delta x. Then equation (3.37) becomes

\dot{x} = A x,   x(0) = x_0.   (3.45)

A \in R^{n \times n} is an n \times n matrix, where n is the number of state variables x_i. Assume that v \in C^n is an eigenvector of A and that \lambda is the corresponding eigenvalue, i.e.

A v = \lambda v.   (3.46)

It is evident that

x(t) = v e^{\lambda t}   (3.47)

is a solution of \dot{x} = A x, because

\frac{d}{dt} \left( v e^{\lambda t} \right) = \lambda v e^{\lambda t} = A v e^{\lambda t}.   (3.48)

It becomes obvious that the eigenvalues of A provide valuable information about the dynamic behavior of the system.

3.8.1 One state variable


In the simplest case, the system has only one state variable x = x_1, i.e. n = 1 and A = [k]. Note that k is the eigenvalue of A. The system is described by the scalar differential equation

\dot{x} = k x,   x(0) = x_0,   (3.49)


which has the solution

x(t) = x_0 e^{kt}.   (3.50)

According to the value of k, the following statements about the system's stability can be made:

- If k < 0, then \lim_{t \to \infty} x(t) = 0. Hence, the system will return to its stationary state after a (small) perturbation. The system is stable.
- If k > 0 (and x_0 \neq 0), then \lim_{t \to \infty} |x(t)| = \infty. The system will not return to the stationary state; rather, the absolute value of the state variable x grows towards infinity. The system is unstable.
- For k = 0, the system is said to be on the limit of stability.

3.8.2 System matrix with real eigenvalues

Now let A \in R^{n \times n} be a square matrix whose eigenvalues \lambda_i are all real:

\lambda_i \in R   \forall i \in \{1, \ldots, n\}.   (3.51)

It is known from linear algebra that a matrix T \in R^{n \times n}, \det(T) \neq 0, exists such that

T^{-1} A T = \text{diag}(\lambda_i) = \begin{bmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{bmatrix} = \Lambda   (3.52)

is a diagonal matrix whose entries are the eigenvalues of A. Now, the differential equation (3.45) can be transformed as follows:

\dot{x} = A x = A T T^{-1} x,   (3.53)
T^{-1} \dot{x} = \underbrace{T^{-1} A T}_{\Lambda} T^{-1} x.   (3.54)

We apply a coordinate transformation to the state vector x and get a new state vector z:

z = T^{-1} x   \Leftrightarrow   x = T z.   (3.55)

In terms of the transformed state vector, equation (3.54) yields the following differential equation:

\dot{z} = \Lambda z   (3.56)

or

\dot{z}_i = \lambda_i z_i   \forall i \in \{1, \ldots, n\}.   (3.57)

These are n independent differential equations, one for each of the transformed state variables z_i. Hence, for each z_i the argumentation in Section 3.8.1 is valid. We get the following criteria for the stability of the system:


- If all eigenvalues are negative, i.e. \lambda_i < 0 \; \forall i \in \{1, \ldots, n\}, then the system is stable.
- If there is at least one positive eigenvalue, i.e. \exists i \in \{1, \ldots, n\}: \lambda_i > 0, then the system is unstable.

3.8.3 Complex eigenvalues of a 2 \times 2 system matrix


Let A \in R^{2 \times 2} be a matrix of the form

A = \begin{bmatrix} a & b \\ -b & a \end{bmatrix},   a, b \neq 0.   (3.58)

The eigenvalues are the roots of the characteristic polynomial

0 = \det(A - \lambda I) = (a - \lambda)^2 + b^2,   (3.59)

hence

\lambda_+ = a + bi,   \lambda_- = a - bi.   (3.60)

By equation (3.47) we get that the two components x_1 and x_2 of the solution x(t) are linear combinations of

e^{\lambda_+ t} = e^{(a + bi)t} = e^{at} [\cos(bt) + i \sin(bt)]   (3.61)

and

e^{\lambda_- t} = e^{(a - bi)t} = e^{at} [\cos(bt) - i \sin(bt)].   (3.62)
Therefore, the real parts of x_1 and x_2 are oscillations of the frequency b / (2\pi), which are damped (for a < 0) or excited (for a > 0) by the factor e^{at}. For the stability of the system, this means that

- for a < 0, the system is stable, and
- for a > 0, the system is unstable.

3.8.4 General case


Let A \in R^{n \times n} be a matrix with k \in N_0 real eigenvalues \lambda_i^{real} and l \in N_0 pairs of conjugate complex eigenvalues \lambda_{j+}^{com} = a_j + b_j i, \lambda_{j-}^{com} = a_j - b_j i. Then a matrix T \in R^{n \times n}, \det(T) \neq 0, exists such that

T^{-1} A T = \begin{bmatrix} \lambda_1^{real} & & & & \\ & \ddots & & & \\ & & \lambda_k^{real} & & \\ & & & \begin{matrix} a_1 & b_1 \\ -b_1 & a_1 \end{matrix} & \\ & & & & \ddots \\ & & & & & \begin{matrix} a_l & b_l \\ -b_l & a_l \end{matrix} \end{bmatrix}.   (3.63)
38

3.8 Stability

Hence, by a similar consideration as in Section 3.8.2, we can reduce the general case to the one-dimensional case with a real eigenvalue in Section 3.8.1 and the two-dimensional case with complex eigenvalues in Section 3.8.3. As a condition for the stability of a system \dot{x} = A x, we get:

- If all real eigenvalues of A are negative and if the real parts of all complex eigenvalues of A are negative, then the system is stable.
- If there is at least one positive real eigenvalue or at least one complex eigenvalue with a positive real part, then the system is unstable.
- If A has pairs of conjugate complex eigenvalues a \pm bi, then the system shows oscillations with frequency b / (2\pi).

Figure 3.5 shows a graphical representation of these criteria. The solution of \dot{x} = A x takes the form of a linear combination

x(t) = \sum_{i=1}^{n} e^{\lambda_i t} c_i.   (3.64)

The values of the c_i depend on the initial conditions x(0) = x_0. The addends e^{\lambda_i t} c_i are also called the eigenmodes of the system.

Example
Consider the differential equation system

\dot{x} = A x = \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 0 & -1 & 1 \end{bmatrix} x,   (3.65)
x(0) = [1, 2, 0]^T.   (3.66)

In Section 3.7 we have calculated the eigenvalues

\lambda_1 = 1,   \lambda_2 = 1 + i,   \lambda_3 = 1 - i.   (3.67)

As there is at least one eigenvalue with a positive real part (in fact, all real parts are positive), the system is unstable. Furthermore, the system shows an oscillation with frequency 1 / (2\pi). Although this is rarely done in practice, we apply the coordinate transformation described above for illustrative purposes. The matrix of the eigenvalues is

\Lambda = \begin{bmatrix} \lambda_1 & 0 & 0 \\ 0 & \text{Re}(\lambda_2) & \text{Im}(\lambda_2) \\ 0 & -\text{Im}(\lambda_2) & \text{Re}(\lambda_2) \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & -1 & 1 \end{bmatrix},   (3.68)

Figure 3.5: Influence of the eigenvalues on the stability of a system. Real, negative eigenvalues: aperiodic damping, system stable. Real, positive eigenvalues: the corresponding eigenmode tends to infinity with growing t, system unstable. Conjugate complex eigenvalues \lambda_{1,2} with negative real part: damped oscillation, system stable. Conjugate complex eigenvalues \lambda_{1,2} with positive real part: excited oscillation, system unstable.

and for^2

T = \begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix},   T^{-1} = \begin{bmatrix} 1 & -1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix},   (3.69)

we have \Lambda = T^{-1} A T. If we define

z = T^{-1} x,   (3.70)

then equation (3.65) becomes

\dot{z} = \Lambda z,   (3.71)
z(0) = T^{-1} x(0) = [-1, 2, 0]^T.   (3.72)

The solution in terms of z is

z_1(t) = -e^t,   (3.73)
z_2(t) = c_1 e^{(1+i)t} + c_2 e^{(1-i)t},   (3.74)
z_3(t) = c_3 e^{(1+i)t} + c_4 e^{(1-i)t}.   (3.75)

The coefficients c_i are

c_1 = 1,   c_2 = 1,   (3.76)
c_3 = i,   c_4 = -i,   (3.77)

and thus

z_1(t) = -e^t,   (3.78)
z_2(t) = e^{(1+i)t} + e^{(1-i)t} = 2 e^t \cos t,   (3.79)
z_3(t) = i e^{(1+i)t} - i e^{(1-i)t} = -2 e^t \sin t;   (3.80)

for the latter transformation, we have used Euler's formula e^{i\varphi} = \cos \varphi + i \sin \varphi. Finally, we get the solution for x = T z:

x_1(t) = z_1(t) + z_2(t) = e^t (-1 + 2 \cos t),   (3.81)
x_2(t) = z_2(t) = 2 e^t \cos t,   (3.82)
x_3(t) = z_3(t) = -2 e^t \sin t.   (3.83)
^2 The calculation of a matrix T fulfilling the given condition is not the subject of this course. The calculation of the coefficients c_i is left as an exercise to the reader. Hint: solve the linear equation system for c_1, \ldots, c_4 that consists of the initial conditions for z_2 and z_3 as well as two further equations resulting from the differential equations \dot{z}_2 = z_2 + z_3 and \dot{z}_3 = -z_2 + z_3.
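The closed-form solution of this example can be cross-checked by integrating \dot{x} = A x numerically from x(0) = [1, 2, 0]^T and comparing with the exponential/trigonometric expressions; a Python sketch with a fixed-step Runge-Kutta integrator:

```python
import math

A = [[1, 0, 1],
     [0, 1, 1],
     [0, -1, 1]]

def f(x):
    return [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]

def rk4(x, h):
    """One classical Runge-Kutta step."""
    def ax(x, k, s): return [xi + s * ki for xi, ki in zip(x, k)]
    k1 = f(x); k2 = f(ax(x, k1, h/2)); k3 = f(ax(x, k2, h/2)); k4 = f(ax(x, k3, h))
    return [x[i] + h/6 * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(3)]

x, h = [1.0, 2.0, 0.0], 0.001
for _ in range(1000):   # integrate to t = 1
    x = rk4(x, h)

t = 1.0
analytic = [math.exp(t) * (-1 + 2 * math.cos(t)),   # x1
            2 * math.exp(t) * math.cos(t),          # x2
            -2 * math.exp(t) * math.sin(t)]         # x3
print(max(abs(a - b) for a, b in zip(x, analytic)))
```

The growing amplitudes (factor e^t) confirm the instability diagnosed from the eigenvalues.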


3.9 Time Characteristics


The time characteristics of a linearized system can be determined by means of its eigenvalues:

- For real eigenvalues \lambda_i^{real} \in R, the time constants

T_i^{time\,constant} = \frac{1}{|\lambda_i^{real}|}   (3.84)

are a measure for the increase or decrease of the corresponding eigenmode e^{\lambda_i^{real} t} c_i.

- For pairs of imaginary eigenvalues \lambda_i^{imag} = \pm b_i i \in iR,

T_i^{oscillation} = \frac{2\pi}{|\lambda_i^{imag}|} = \frac{2\pi}{|b_i|}   (3.85)

is the oscillation period of the eigenmode e^{\lambda_i^{imag} t} c_i = [\cos(b_i t) \pm i \sin(b_i t)] c_i.

- For complex eigenvalues \lambda_i^{com} = a_i \pm b_i i \in C with a_i, b_i \neq 0, the corresponding eigenmodes show damped or excited oscillations (depending on the sign of a_i) with the time constant

T_i^{time\,constant} = \frac{1}{|a_i|} = \frac{1}{|\text{Re}(\lambda_i^{com})|}   (3.86)

and the oscillation period

T_i^{oscillation} = \frac{2\pi}{|b_i|} = \frac{2\pi}{|\text{Im}(\lambda_i^{com})|}.   (3.87)

The minimal and the maximal time characteristic of the entire system are

T_{min} = \min_i (T_i^{time\,constant}, T_i^{oscillation})   (3.88)

and

T_{max} = \max_i (T_i^{time\,constant}, T_i^{oscillation}).   (3.89)

As already introduced, you choose as simulation time t_{sim} = 5 T_{max} if the system is stable. Otherwise, t_{sim} depends on an upper limit M, |x(t)| \leq M, or on the problem formulation. You choose the time step length h = \Delta t = \nu T_{min} with the time step index \nu = \frac{1}{20} \ldots \frac{1}{5}, leading to (5 \ldots 20) \, h = T_{min}.


Example
For the example in Section 3.8, we have found the real eigenvalue \lambda_1 = 1 with

T_1^{time\,constant} = \frac{1}{|1|} = 1   (3.90)

and the complex eigenvalues \lambda_{2,3} = 1 \pm i with

T_{2,3}^{time\,constant} = \frac{1}{|1|} = 1,   (3.91)

T_{2,3}^{oscillation} = \frac{2\pi}{|\pm 1|} = 2\pi.   (3.92)

The maximal time characteristic of the system is 2\pi, and its minimal time characteristic is 1.
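The bookkeeping of equations (3.84) - (3.89) is easily automated; a Python sketch that extracts T_min and T_max from a list of eigenvalues, applied to the example above:

```python
import math

def time_characteristics(eigenvalues):
    """Collect time constants (3.84)/(3.86) and oscillation periods
    (3.85)/(3.87) and return (T_min, T_max) as in (3.88)-(3.89)."""
    times = []
    for lam in eigenvalues:
        a, b = lam.real, lam.imag
        if a != 0:
            times.append(1 / abs(a))            # time constant
        if b != 0:
            times.append(2 * math.pi / abs(b))  # oscillation period
    return min(times), max(times)

# Eigenvalues of the example in Section 3.8
tmin, tmax = time_characteristics([1 + 0j, 1 + 1j, 1 - 1j])
print(tmin, tmax)   # 1.0 and 2*pi
```

With these two numbers, the recommended simulation time and step length follow directly from the rules stated above.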

3.10 Problem: Stiff Differential Equations

Differential equations with very different time characteristics (time constants, periods, eigenvalues) are called stiff differential equations. Examples are given in Fig. 3.6.

Figure 3.6: Stiff systems: very different time constants.

The stiffness measure SM compares the maximal to the minimal time constant:

SM = \frac{T_{max}}{T_{min}} > 10^3 \ldots 10^7   \Rightarrow   \text{stiff}.   (3.93)

The computation time can be considered as proportional to the number of time steps, \frac{t_{sim}}{h} = \frac{5 T_{max}}{\nu T_{min}} \sim SM. Because of this, stiff systems require long computation times. This problem can be alleviated by using integration methods with variable time steps h_k (k = 0, 1, \ldots), as will be described in Chapter 5.


3.11 Problem: Discontinuous Right-Hand Side of a Differential Equation


Discontinuous right-hand sides of a differential equation can cause problems with the integration method (examples are given in Fig. 3.7).

Figure 3.7: Discontinuities in u and f.

An improvement can be achieved with an integration method which detects the discontinuity and solves the differential equations piecewise with the corresponding new initial conditions.


4 Basic Numerical Concepts


In this chapter, a few basic concepts of numerical mathematics, which are of interest for simulation techniques, will be reviewed very briefly. For detailed presentations, the well-known fundamental books (e.g. Press et al. (1990), Engeln-Müller and Reutter (1993)) and lecture notes (e.g. Dahmen (1994)) are recommended. The requirements on numerical methods can be summarized in these key words:

- robustness (the solution should be found),
- reliability (high accuracy), and
- efficiency (short computation time).
This chapter deals with major issues regarding possible numerical errors and their effects. Generally, you can distinguish the following kinds of errors:

- model errors,
- data errors,
- rounding errors, and
- method errors.


4.1 Floating Point Numbers
Numbers are saved in computers as floating point numbers in the normalized floating point representation. A well-known example for this representation of real numbers is the scientific notation:

    12 = 1.20E1 = 1.20·10^1,    0.375 = 3.75E−1 = 3.75·10^−1.     (4.1)

In these examples, numbers are represented in the form f·10^e, where f is a three-digit number with 1 ≤ f < 10. The exponent e is an integer. This notation can be generalized in several respects. For a particular floating point representation, we have to choose:

- a base b ∈ N, b ≥ 2,

- a minimum exponent r ∈ Z and a maximum exponent R ∈ Z with r ≤ R, and
- the number of digits for the representation of a number, d ∈ N.
For instance, if we choose b = 10, r = −5, R = 5, and d = 3, the examples in equation (4.1) are valid floating point numbers. In general, a floating point number x* can be written

    x* = f·b^e,     (4.2)

where the mantissa f is a number with d digits, 1 ≤ f < b, and the exponent e is an integer with r ≤ e ≤ R. If we choose the base b = 2, then we get the following representations for the examples above:

    12 = 1·2^3 + 1·2^2 = (1·2^0 + 1·2^−1)·2^3 = 1.1'·2^11',
    0.375 = 1·2^−2 + 1·2^−3 = (1·2^0 + 1·2^−1)·2^−2 = 1.1'·2^−10'.

Binary numbers are marked with a prime ('). In a binary representation, only the two digits 0 and 1 are needed. Therefore, binary numbers are very well suited for the representation of numbers in computers: the two digits can be mapped to the two states "voltage" and "no voltage" in an electrical circuit. As the reader is assumed to be much more familiar with decimal numbers, we will use b = 10 in the following discussion. Note that all remarks are in principle valid for any choice of the base b.

4.2 Rounding Errors


For any choice of b, d, r, and R, the set A of the floating point numbers f·b^e is finite. Due to this limitation, not all real numbers x ∈ R can be written in floating point notation. For instance, for x = 1/3, we have (b = 10, d = 3)

    3.33·10^−1 < 1/3 < 3.34·10^−1.     (4.3)

Therefore, we introduce a rounding function rd : R → A that maps a real number x to the nearest floating point number (machine number); i.e. for any x ∈ R, this rounding function fulfils the condition

    |x − rd(x)| ≤ |x − g|   for all g ∈ A.     (4.4)

Consequently, a floating point number x* represents all real x within an interval of length Δx* (see Fig. 4.1). Note that Δx* depends on the absolute value of x*. As x ≠ rd(x) for x ∉ A, numeric calculations with floating point numbers contain rounding errors. As an example, we consider the calculation of the sum 1/3 + 6.789. We choose the base b = 10 and the mantissa size d = 3.


Figure 4.1: Representation of a number on the computer.

a. Input errors. The input values xi of a calculation must be represented as floating point numbers xi*. The application of the rounding function causes an input error Δxi:

    xi = xi* + Δxi.     (4.5)

In the example, we have

    rd(1/3) = 3.33·10^−1,   Δx1 = 1/3 − 0.333 = 1/3000 ≈ 0.000333,     (4.6)
    rd(6.789) = 6.79·10^0,   Δx2 = 6.789 − 6.79 = −0.001.     (4.7)

b. Computation errors. In general, the precise result of a calculation f(x1*, …, xn*) is no valid floating point number and must be rounded:

    f*(x1*, …, xn*) = rd(f(x1*, …, xn*)),     (4.8)
    f = f* + Δf.     (4.9)

Example:

    f = 3.33·10^−1 + 6.79·10^0 = 7.123·10^0,     (4.10)
    f* = rd(f) = 7.12·10^0,     (4.11)
    Δf = f − f* = 7.123 − 7.12 = 0.003.     (4.12)

The deviation between a real number x and its floating point representation x* is called the relative rounding error εx, referring to the true value:

    εx = (x − x*)/x.     (4.13)


The machine accuracy eps (resolution of the computer) is an upper limit for the relative rounding error:

    |εx| ≤ eps   for all x.     (4.14)

For an ordinary PC, this value is ≈ 10^−16. As an example for effects of rounding errors, we will study the calculation of the cosine function. Its series expansion is the following:

    cos x = Σ_{k=0}^∞ (cos^(k)(0)/k!)·x^k
          = (cos 0/0!)·x^0 − (sin 0/1!)·x^1 − (cos 0/2!)·x^2 + (sin 0/3!)·x^3 + …
          = 1 − x^2/2! + x^4/4! − x^6/6! + − …
          = Σ_{k=0}^∞ (−1)^k · x^(2k)/(2k)!.     (4.15)

In a computer program (written e.g. in Fortran or C), the calculation can be realized for instance in the following way:

    function cosine(x, eps)
      # initialization
      cosine = 0.0
      term   = 1.0
      k      = 0
      xx     = x*x
      # loop
      DO WHILE (ABS(term) > ABS(cosine)*eps)
        cosine = cosine + term
        k      = k + 2
        term   = (-term*xx)/(k*(k-1))
      END DO
      return cosine

If you apply this program (with a mantissa size of d = 7 decimal digits, i.e. single precision), you obtain for x = 3.14159265 the value cosine(x) = −1.000000E+00, but for x = 31.4159265 the value cosine(x) = 8.593623E+04, although the true result is cos(10π) = 1.


The cause for this effect are so-called cancellation errors:

    30th sum term: term30 = 3.0962506·10^12,
    60th sum term: term60 = 8.1062036·10^7.

Because the size of the mantissa is limited to d = 7, you obtain:

      term30 = 3.0962506E+12
    + term60 = 0.0000811E+12   (from the third digit on, all numerals of term60 get lost)
    ---------------------------
               3.0963317E+12

The result of the series expansion depends on the summation order. Therefore, it is sensible to change the summation order and to calculate the small terms first.
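The effect of the limited mantissa can be reproduced by rounding every intermediate sum to a fixed number of significant decimal digits (a minimal Python sketch; the function name and the 7-digit rounding mimicking single precision are our own illustration):

```python
import math

def cosine_series(x, digits=None):
    """Sum the cosine Taylor series; if `digits` is given, round every
    intermediate result to that many significant decimal digits to
    mimic a short mantissa (digits=7 mimics single precision)."""
    def rd(v):
        # round v to `digits` significant decimal digits
        return v if digits is None or v == 0.0 else float(f"{v:.{digits - 1}e}")
    total, term, k = 0.0, 1.0, 0
    while abs(term) > abs(total) * 1e-12:
        total = rd(total + term)
        k += 2
        term = rd(-term * x * x / (k * (k - 1)))
    return total

x = 10 * math.pi  # true value: cos(10*pi) = 1
print(cosine_series(x))            # full double precision: close to 1
print(cosine_series(x, digits=7))  # 7-digit mantissa: completely wrong
```

Since the largest terms are of order 10^12, the 7-digit result is off by orders of magnitude, while double precision (about 16 digits) still recovers the result to a few digits.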

4.3 Conditioning
The conditioning of a problem indicates how strongly errors in the input data, e.g. rounding errors, affect the solution. As an introductory example, we consider the calculation of the square root of an input value x:

    f(x) = √x.     (4.16)

Figure 4.2: Effect of an input error Δx in square root calculation (the tangent at x̄ has slope 1/(2√x̄)).

For a given input value x̄, let f̄ = √x̄ be the precise output value. As shown in Figure 4.2, an input error Δx causes an output error Δf. Furthermore, for small Δx, the approximation

    Δf ≈ (df/dx)|_{x=x̄} · Δx     (4.17)


is valid, hence

    Δf ≈ (df/dx)|_{x=x̄} · Δx = Δx/(2√x̄).     (4.18)

For the relative errors εx = Δx/x̄ and εf = Δf/f̄, we get the relation

    εf = Δf/f̄ ≈ (Δx/(2√x̄))/√x̄ = (1/2)·Δx/x̄ = (1/2)·εx.     (4.19)

In general, an output value f can depend on n input values xj. Consequently, an error in each of the input values affects the output. This influence can be calculated using a Taylor expansion of f which is truncated after the first-order term:

    f(x̄ + Δx) = f(x̄) + Σ_{j=1}^n (∂f/∂xj)|_{x=x̄} · Δxj + … .     (4.20)

For the relative error εf, we obtain under the conditions f(x̄) ≠ 0 and x̄j ≠ 0 for all j ∈ {1, …, n} the expression

    εf = (f(x̄ + Δx) − f(x̄))/f(x̄) ≈ Σ_{j=1}^n [ (∂f/∂xj)|_{x=x̄} · x̄j/f(x̄) ] · Δxj/x̄j,     (4.21)

    εf = Σ_{j=1}^n Kj·εxj   with   Kj = (∂f/∂xj)|_{x=x̄} · x̄j/f(x̄)   and   εxj = Δxj/x̄j.     (4.22)

The relative error εxj of each input value contributes to the relative output error. The amplification factor or relative condition number Kj is a measure for the contribution of an error in the corresponding input value. Hence, a problem is ill-conditioned with respect to an input xj if |Kj| is big (≫ 1). The smaller the absolute values of the amplification factors are, the better the conditioning of the problem is. We have already determined the amplification factor for the calculation of the square root (equation (4.19)):

    εf = (1/2)·εx,   K = 1/2.     (4.23)

For the multiplication, we get:

    f(x1, x2) = x1·x2,     (4.24)
    K1 = (∂f/∂x1)|_{x̄1,x̄2} · x̄1/f(x̄1, x̄2) = x̄2·x̄1/(x̄1·x̄2) = 1   for x̄1, x̄2 ≠ 0,     (4.25)

and, by analogy, K2 = 1. The conditioning of the arithmetic operations is given in Table 4.1.

50

4.3 Conditioning

Operation       f(x1, x2)   K1            K2             εf                                       Restrictions
Addition        x1 + x2     x1/(x1+x2)    x2/(x1+x2)     x1/(x1+x2)·εx1 + x2/(x1+x2)·εx2          x1, x2 > 0
Subtraction     x1 − x2     x1/(x1−x2)    −x2/(x1−x2)    x1/(x1−x2)·εx1 − x2/(x1−x2)·εx2          x1, x2 > 0; x1 ≠ x2
Multiplication  x1 · x2     1             1              εx1 + εx2
Division        x1 / x2     1             −1             εx1 − εx2
Square root     √x1         1/2           —              (1/2)·εx1                                x1 > 0

Table 4.1: Conditioning of some mathematical operations (x̄1, x̄2 ≠ 0).

Multiplying, dividing, and square root calculation are not dangerous operations, because the relative errors of the inputs do not propagate into the results in an amplified way; in particular, the Kj are constant. Addition is also a well-conditioned operation, because x1/(x1+x2) and x2/(x1+x2) lie in a range between 0 and 1. Especially in the case x1 ≪ x2 (or vice versa), even a large relative error in x1 yields a relatively small value of ε_{x1+x2}. This is called error damping:

    x1 = 1,  x2 = 1000,  x1 + x2 = 1001,
    Δx1 = 10,  (x1 + Δx1) + x2 = 1011,
    ε_{x1+x2} = (1011 − 1001)/1001 ≈ 0.01.

For subtraction, x1/(x1−x2) and/or x2/(x1−x2) are greater than 1 in absolute value, thus the errors εx1 and/or εx2 become amplified. A big amplification occurs if x1 ≈ x2, which causes a cancellation error during the calculation of x1 − x2:

    x1 = 100,  x2 = 99,  x1 − x2 = 1,
    Δx2 = 0.01·x2 = 0.99,  x1 − (x2 + Δx2) = 0.01,
    ε_{x1−x2} = (1 − 0.01)/1 = 0.99.

In this example, a perturbation of the input value x2 by 1% results in a cancellation error of 99% with respect to the correct result!
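This 99% blow-up can be reproduced in a few lines (a Python sketch of the numbers above):

```python
# Demonstrate error amplification in subtraction (cancellation).
x1, x2 = 100.0, 99.0
exact = x1 - x2                  # 1.0

x2_perturbed = x2 * 1.01         # 1% relative input error on x2
perturbed = x1 - x2_perturbed    # approximately 0.01

rel_input_error = abs(x2_perturbed - x2) / x2
rel_output_error = abs(perturbed - exact) / abs(exact)
print(rel_input_error, rel_output_error)  # ~0.01 vs. ~0.99
```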


5 Numerical Integration of Ordinary Differential Equations

The simulation of processes which are described by differential equations (see the vehicle example in Section 1.3.3) requires methods for numerical integration. Simulation tools such as e.g. Simulink (MathWorks, 2002) often provide various numerical integration methods for selection (compare Fig. 5.1). In this chapter, the basics of numerical integration will be discussed and different integration methods will be introduced.

Figure 5.1: List of different integration methods available in Simulink (MathWorks, 2002).

5.1 Principles of Numerical Integration

5.1.1 Problem Definition and Terminology

The general (implicit) form of a scalar ordinary differential equation of first order is

    0 = F(ẋ, x, t),  t ≥ t0,     (5.1)
    x(t0) = x0.     (5.2)


Example

    0 = 2ẋ + x,  t ≥ 0,     (5.3)
    x(0) = 1.     (5.4)

Note that in this example, there is no explicit dependency of F on t.

We will restrict our presentation to differential equations that are given in the explicit form

    ẋ = f(x, t),  t ≥ t0,     (5.5)
    x(t0) = x0.     (5.6)

Sometimes, it is possible to transform a DE in implicit form into the explicit form.

Example The example above can easily be transformed into

    ẋ = −x/2,  t ≥ 0,     (5.7)
    x(0) = 1.     (5.8)

Solving a differential equation (DE) means to find a function x : R → R such that the differential equation (5.1) as well as the initial condition (5.2) are fulfilled. We will denote this exact solution as x(t). In Section 3.2 we have discussed a sufficient criterion for the existence of a unique solution. In general, x(t) cannot be calculated analytically. Rather, we must apply a numerical integration method in order to get an approximation of x(t). More precisely, a numerical method yields a set of N pairs (tk, xk) such that

    xk ≈ x(tk)     (5.9)

for all k. Each tk represents a discrete time point, and therefore the set {tk |k = 1, ..., N } is called a discrete time grid. The time step length between tk and tk+1 is denoted hk , i.e. tk+1 = tk + hk . (5.10)

To simplify the following discussion, we will assume a constant time step length h = hk , k = 1, ..., N such that we get an equidistant time grid.


5.1.2 A Simple Integration Method


An integration of equation (5.5) from t0 to t1 = t0 + h yields

    x(t1) − x(t0) = ∫_{t0}^{t0+h} f(x(τ), τ) dτ,     (5.11)
    x(t1) = x(t0) + ∫_{t0}^{t0+h} f(x(τ), τ) dτ.     (5.12)

Figure 5.2: Illustration of the rectangle rule.

Equation (5.12) is fulfilled by the solution x(t), but it cannot be used to calculate x(t1), as an exact evaluation of the integral is not possible. Numerical integration methods are based on the idea to approximate the integral term. A simple strategy is to replace the integral, represented by the dashed area in Figure 5.2, with the product h·f(x0, t0), represented by the rectangle in Figure 5.2:

    ∫_{t0}^{t0+h} f(x(τ), τ) dτ ≈ h·f(x0, t0).     (5.13)

The error of the approximation corresponds to the area between the curve and the rectangle. By inserting equation (5.13) and the initial condition x(t0) = x0 into equation (5.12), we get an approximation x1 for x(t1):

    x1 = x0 + h·f(x0, t0).     (5.14)

In a similar way, an approximation x2 for x(t2) is calculated:

    x2 = x1 + h·f(x1, t1).     (5.15)


Note that for the calculation of x2 ≈ x(t2), we use the approximation x1 for x(t1). Therefore, in addition to the error of the integral approximation, an error in x1 influences the accuracy of x2. The general form of this integration method, known as the explicit Euler method (cf. Section 5.2.1), is

    xk+1 = xk + h·f(xk, tk).     (5.16)

The accuracy of each xk depends on the accuracy of its predecessor and on the accuracy of the approximation for the integral.

Figure 5.3: Illustration of the tangent rule.

The explicit Euler method can also be deduced by means of an alternative consideration (see Figure 5.3): The derivative of x(t) w.r.t. t, evaluated at tk, can be approximated by the difference quotient (tangent rule)

    f(xk, tk) = ẋ(tk) ≈ (x(tk+1) − x(tk))/h,     (5.17)

hence

    x(tk+1) ≈ x(tk) + h·f(xk, tk).     (5.18)

By replacing the exact values x(tk) and x(tk+1) with the corresponding approximations, we get the integration rule

    xk+1 = xk + h·f(xk, tk).     (5.19)

Example We apply the explicit Euler method to solve the differential equation

    ẋ = −x/2,  x(0) = 1,     (5.20)


i.e. f(x, t) = −x/2, t0 = 0, x0 = 1. The application of equation (5.16) to this problem leads to

    xk+1 = xk + h·f(xk, tk) = xk − h·xk/2 = (1 − h/2)·xk.     (5.21)

If we choose the step size h = 0.5, we get the following approximations of the exact result x(t) = e^(−t/2) for t1 = t0 + 1·h and t2 = t0 + 2·h (see Figure 5.4):

    x(t0 + 1·h) = x(0.5) ≈ x1 = (3/4)·x0 = 0.75,     (5.22)
    x(t0 + 2·h) = x(1.0) ≈ x2 = (3/4)·x1 = 0.5625.     (5.23)

Figure 5.4: Illustration of the explicit Euler method (exact solution e^(−t/2) and explicit Euler approximation with h = 0.5).
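The recursion (5.16) and the numbers of this example can be reproduced with a minimal sketch (Python; the function name is our own):

```python
def explicit_euler(f, x0, t0, h, n_steps):
    """Explicit Euler method: x_{k+1} = x_k + h*f(x_k, t_k)."""
    t, x = t0, x0
    trajectory = [(t, x)]
    for _ in range(n_steps):
        x = x + h * f(x, t)  # rectangle rule over [t, t+h]
        t = t + h
        trajectory.append((t, x))
    return trajectory

# Example: dx/dt = -x/2, x(0) = 1, h = 0.5 (exact solution exp(-t/2))
traj = explicit_euler(lambda x, t: -x / 2, x0=1.0, t0=0.0, h=0.5, n_steps=2)
print(traj)  # [(0.0, 1.0), (0.5, 0.75), (1.0, 0.5625)]
```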

one-step methods, multiple-step methods, and extrapolation methods (which are not subject of the lecture). explicit and implicit methods.

These methods can be further divided into

5.1.3 Consistency
The difference quotient of the exact solution x(t) between t0 and t1 = t0 + h (see Figure 5.5) is

    Δ̄(x0, t0, h) = (x(t0 + h) − x0)/h   if h ≠ 0,
    Δ̄(x0, t0, h) = f(x0, t0)            if h = 0,     (5.24)


Figure 5.5: Difference quotient of exact solution and approximation.

whereas the difference quotient of a numerical approximation of x(t) is given by

    Δ(x0, t0, h) = (x1(x0, t0, h) − x0)/h.     (5.25)

Δ depends on the integration method by which x1 is calculated. The local discretization error is defined as

    τ(x0, t0, h) = Δ̄(x0, t0, h) − Δ(x0, t0, h).     (5.26)

τ is a measure for the deviation of the numerical approximation x1 from the exact solution x(t1). The term local error emphasizes that τ describes the error of one integration step under the precondition that the calculation of xk+1 is based on an exact value xk = x(tk); this is why we have defined τ at (x0, t0): for k = 0, we know the exact value x0. For a reasonable integration method, we demand that the local discretization error is small for a small step size h:

    lim_{h→0} τ(x0, t0, h) = 0.     (5.27)

If this equation holds for an integration method, we say that the method is consistent. As lim_{h→0} Δ̄(x0, t0, h) = f(x0, t0), equation (5.27) can be rewritten as

    lim_{h→0} Δ(x0, t0, h) = f(x0, t0).     (5.28)

Example We show that the explicit Euler method is consistent. By inserting equation (5.16) into (5.25), we get

    Δ(x0, t0, h) = (x0 + h·f(x0, t0) − x0)/h = f(x0, t0).     (5.29)

As lim_{h→0} Δ(x0, t0, h) = f(x0, t0), the method is consistent.


Consistency is a condition for a reasonable integration method, but the definition above does not allow for the comparison of different integration methods. Therefore, we define the order of consistency p as the error order of the local discretization error:

    τ(x0, t0, h) ∈ O(h^p).     (5.30)

If an integration method is of consistency order p for all problems, i.e. all admissible f, x0, and t0, we say that the integration method itself is of order p.

Example For the explicit Euler method, we have

    Δ̄(x0, t0, h) = (1/h)·[x(t0 + h) − x0]
                 = (1/h)·[x(t0) + h·(dx/dt)|_{t=t0} + O(h²) − x0]
                 = (dx/dt)|_{t=t0} + O(h)
                 = f(x0, t0) + O(h),     (5.31)

    Δ(x0, t0, h) = (1/h)·[x1 − x0]
                 = (1/h)·[x0 + h·f(x0, t0) − x0]
                 = f(x0, t0),     (5.32)

and

    τ(x0, t0, h) = Δ̄(x0, t0, h) − Δ(x0, t0, h) ∈ O(h).     (5.33)

The explicit Euler method is of order 1.

5.2 One-Step Methods


One-step methods are also called Runge-Kutta methods. Hereby, one determines an average gradient s(xk, xk+1, tk, h) over the interval [tk, tk+1]. The general one-step method reads:

    xk+1 = xk + h·s(xk, xk+1, tk, h),  k = 0, …, K,     (5.34)

with

    xk+1 = xk + h·Σ_{i=1}^m wi·si,   Σ_{i=1}^m wi = 1.     (5.35)


Figure 5.6: Explicit Euler method: Influence of step size h (curves for h = 1, h = 4, and h = 6).

5.2.1 Explicit Euler Method (Euler Forward Method)


The explicit Euler method (see Fig. 5.7) is the simplest integration method. The algorithmic equation reads:

    xk+1 = xk + h·fk.     (5.36)

Figure 5.7: Explicit Euler method.

The characteristics of the explicit Euler method are:

- Efficiency: Quick calculation of one step.
- Accuracy: For a high accuracy, small step lengths are required. This leads to stability problems for stiff systems.


5.2.2 Implicit Euler Method (Euler Backward Method)


The implicit Euler method (see Fig. 5.8) uses the function value at the grid point k + 1 instead of the function value at the grid point k. Therefore, the algorithmic equation reads:

    xk+1 = xk + h·fk+1.     (5.37)

Figure 5.8: Implicit Euler method.

The method is called implicit due to the fact that fk+1 = f(xk+1). Therefore, an iteration or an approximation is required, for which the nonlinear equation

    0 = xk+1 − xk − h·f(xk+1)     (5.38)

has to be solved for xk+1. The characteristics of the implicit Euler method are:

- Efficiency: For every step the nonlinear equation (5.38) has to be solved. This increases the computational effort significantly.
- Accuracy: For a high accuracy, small step lengths are required. However, the stability of the algorithm is ensured.
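For a scalar ODE, one step of the method can be sketched by solving (5.38) with Newton's method (a Python sketch; f and its derivative df are assumed to be supplied by the user, and the function name is ours):

```python
def implicit_euler_step(f, df, x_k, t_k, h, tol=1e-12, max_iter=50):
    """One implicit Euler step: solve 0 = x - x_k - h*f(x, t_{k+1})
    for x = x_{k+1} with Newton's method, starting from x_k."""
    t_next = t_k + h
    x = x_k  # initial guess
    for _ in range(max_iter):
        g = x - x_k - h * f(x, t_next)   # residual of (5.38)
        dg = 1.0 - h * df(x, t_next)     # its derivative w.r.t. x
        x_new = x - g / dg               # Newton correction
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: dx/dt = -x/2 with h = 0.5; for this linear case the step
# can be checked against the closed form x_{k+1} = x_k/(1 + h/2).
x1 = implicit_euler_step(lambda x, t: -x / 2, lambda x, t: -0.5, 1.0, 0.0, 0.5)
print(x1)  # 0.8 = 1/(1 + 0.25)
```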

5.2.3 Semi-Implicit Euler Method


The implicit Euler method, xk+1 = xk + hf (xk+1 ) , (5.39)

requires the solution of a nonlinear equation system for xk+1 because f(xk+1) is unknown. The semi-implicit Euler method is based on the idea to approximate f(xk+1) by means of a linearization of f at xk:

    f(xk+1) = f(xk) + (∂f/∂x)|_{x=xk} · Δxk + O(‖Δxk‖²),     (5.40)

where Δxk = xk+1 − xk. Thus, equation (5.39) becomes

    xk+1 = xk + h·[ f(xk) + (∂f/∂x)|_{x=xk} · (xk+1 − xk) ].     (5.41)

This is a linear equation for xk+1. Its solution is

    xk+1 = xk + h·[ I − h·(∂f/∂x)|_{x=xk} ]^(−1) · f(xk).     (5.42)
(5.42)

5.2.4 Heun's Method


Heun's method (see Fig. 5.9) is a so-called predictor-corrector method.
Figure 5.9: Heun's method.

In a predictor step, one obtains a first approximation x^p_{k+1}:

    x^p_{k+1} = xk + h·fk.     (5.43)

In the subsequent corrector step, the gradient is averaged between this predicted value and the initial value:

    xk+1 = xk + h·(fk + f(x^p_{k+1}))/2,     (5.44)

or, in the stage notation of (5.35):

    s1 = f(xk, tk),     (5.45)
    s2 = f(xk + h·s1, tk + h),     (5.46)
    xk+1 = xk + (h/2)·(s1 + s2).     (5.47)

The averaged term in (5.44) corresponds to an average of two gradients (compare Fig. 5.9). Heun's method is a two-stage one-step method, which is also called Runge-Kutta method of second order.
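Equations (5.45)-(5.47) as a minimal sketch (Python; the function name is ours):

```python
def heun_step(f, x_k, t_k, h):
    """One step of Heun's method (second-order Runge-Kutta)."""
    s1 = f(x_k, t_k)               # gradient at the start of the interval
    s2 = f(x_k + h * s1, t_k + h)  # gradient at the predicted end point
    return x_k + h * (s1 + s2) / 2.0

# Example: dx/dt = -x/2, x(0) = 1, h = 0.5
x1 = heun_step(lambda x, t: -x / 2, 1.0, 0.0, 0.5)
print(x1)  # 0.78125 (exact solution: exp(-0.25) = 0.7788...)
```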


5.2.5 Runge-Kutta Method of Fourth Order


This method averages four gradients (see Fig. 5.10), which are determined at specific nodes. Therefore, it is called a four-stage one-step method.

Figure 5.10: Runge-Kutta method of fourth order.

    s1 = f(xk, tk),     (5.48)
    s2 = f(xk + (h/2)·s1, tk + h/2),     (5.49)
    s3 = f(xk + (h/2)·s2, tk + h/2),     (5.50)
    s4 = f(xk + h·s3, tk + h),     (5.51)
    xk+1 = xk + (h/6)·(s1 + 2·s2 + 2·s3 + s4).     (5.52)

Through this averaging of the four gradients, the accuracy of the approximation (of the gradient) is strongly increased.
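Equations (5.48)-(5.52) as a minimal sketch (Python; the function name is ours):

```python
def rk4_step(f, x_k, t_k, h):
    """One step of the classical fourth-order Runge-Kutta method."""
    s1 = f(x_k, t_k)
    s2 = f(x_k + h / 2 * s1, t_k + h / 2)
    s3 = f(x_k + h / 2 * s2, t_k + h / 2)
    s4 = f(x_k + h * s3, t_k + h)
    return x_k + h / 6 * (s1 + 2 * s2 + 2 * s3 + s4)

# Example: dx/dt = -x/2, x(0) = 1, h = 0.5; the exact value is
# exp(-0.25) = 0.77880..., which the single RK4 step already matches
# to about five digits.
x1 = rk4_step(lambda x, t: -x / 2, 1.0, 0.0, 0.5)
print(x1)
```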

5.2.6 Consistency Condition for One-Step Methods


In order to guarantee the consistency of an integration method, it must be ensured that the method converges for an infinitely small step length towards the solution of the differential equation:

    lim_{h→0} (xk+1 − xk)/h = lim_{h→0} s(xk, xk+1, tk, h) =! f(x(tk), tk) = ẋ(tk).     (5.53)

As an example this is shown for two one-step methods:


a) Heun:

    lim_{h→0} s(xk, h) = lim_{h→0} [ f(xk) + f(xk + h·fk) ]/2 = f(x(tk)) = ẋ(tk).     (5.54)

b) Runge-Kutta of fourth order:

    lim_{h→0} (1/6)·[s1 + 2s2 + 2s3 + s4] = (1/6)·[fk + 2fk + 2fk + fk] = fk = ẋ(tk).     (5.55)

5.3 Multiple-Step Methods


Multiple-step methods use an interpolation polynomial for x(t) and/or ẋ(t) = f(x) over more than one interval [t_{k−n}, t_{k+1}], n ≥ 1, for the determination of xk+1.

Figure 5.11: Multiple-step method.

Hereby one proceeds in the following way (see Fig. 5.11):

a. Interpolation of a polynomial pn(t) of order n through f_{k−n}, f_{k−n+1}, …, fk.

b. Determination of the area Fp = ∫_{tk}^{tk+1} pn(t) dt.

c. Calculation of xk+1 = xk + Fp.

The general form of a multiple-step method reads:


m m

xk+1 =
i=1

i xk+1i + h
i=0

i fk+1i

k = m 1, m, . . .

! "

#$

Hereby you distinguish again between implicit and explicit methods: 0 = 0: explicit multiple-step method (e.g. 1 = 1, i = 0: Adams-Bashforth method) 0 = 0: implicit multiple-step method (e.g. 1 = 1, i = 0: Adams-Moulton method) The following points have to be noted:

64

( ) 0 % 21
(5.56)

()
 &'
+

 


- k = 0, …, m−1 requires a start-up calculation, e.g. with a one-step method of the same accuracy.
- The computational effort for explicit multiple-step methods is only one f-evaluation per step, if f_{k−1}, f_{k−2}, … are stored interim. However, this significantly increases the organizational effort.

As an example we will investigate the Gear method of second order (β0 ≠ 0, implicit method):

    xk+1 = (4/3)·xk − (1/3)·x_{k−1} + (2/3)·h·f_{k+1}.     (5.57)

The consistency of this method is ensured:

    lim_{h→0} (xk+1 − xk)/h = lim_{h→0} [ (1/3)·xk − (1/3)·x_{k−1} + (2/3)·h·f_{k+1} ] / h
                            = lim_{h→0} [ (xk − x_{k−1})/(3h) + (2/3)·f_{k+1} ]
                            = (1/3)·ẋ(tk) + (2/3)·ẋ(tk)
                            = ẋ(tk).     (5.58)
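For the linear test equation ẋ = λ·x we have f_{k+1} = λ·x_{k+1}, so the implicit equation (5.57) can be solved in closed form; a Python sketch (the start-up with one implicit Euler step is our own choice — any one-step method of comparable accuracy would do):

```python
def gear2_linear(lam, x0, h, n_steps):
    """Gear method of second order (5.57) applied to dx/dt = lam*x.
    Since f_{k+1} = lam*x_{k+1}, the implicit equation has the closed
    form x_{k+1} = (4/3*x_k - 1/3*x_{k-1}) / (1 - 2/3*h*lam)."""
    xs = [x0]
    xs.append(x0 / (1.0 - h * lam))  # start-up: one implicit Euler step
    for _ in range(n_steps - 1):
        x_next = (4.0 / 3.0 * xs[-1] - 1.0 / 3.0 * xs[-2]) / (1.0 - 2.0 / 3.0 * h * lam)
        xs.append(x_next)
    return xs

# Example: dx/dt = -x/2, x(0) = 1, h = 0.1, integrated to t = 1;
# the exact value is exp(-0.5) = 0.6065...
xs = gear2_linear(-0.5, 1.0, 0.1, 10)
print(xs[-1])
```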

5.3.1 Predictor-Corrector Method


For the use of implicit multiple-step methods, generally fk+1 has to be determined iteratively. An initial value can be determined with the help of an explicit method (predictor). E.g., the Adams-Moulton-Bashforth method (3./4. order) comprises

- the Adams-Bashforth method of third order (polynomial of second order) as a predictor, and
- the Adams-Moulton method of fourth order (polynomial of third order) as a corrector.

5.4 Step Length Control


Methods with step length control pursue the goal of obtaining the desired accuracy within the least possible number of integration steps. One proceeds in the following way: At first, the local error in the calculation of xk is estimated with two methods of different accuracy. If this error is big, the step length is reduced. If the error is (too) small, the step length is enlarged. Moreover, with multiple-step methods the order m can be adapted.


6 Algebraic Equation Systems


In contrast to differential equation systems, in algebraic equation systems no temporal dependencies of the state variables appear. In this chapter, linear and nonlinear equation systems as well as the corresponding solution approaches will be introduced.

6.1 Linear Equation Systems


An example from chemistry serves as an introductory example. We consider two parallel equilibrium reactions of first order describing the production of the components B and C from the component A:

    A ⇌ B  (rate constants k1 forward, k2 backward),     (6.1)
    A ⇌ C  (rate constants k3 forward, k4 backward).     (6.2)

Solely component A exists at time t0. The mole fractions xA, xB, xC present in the equilibrium have to be found. In the equilibrium, the reaction rates ri depend on the reaction rate constants ki and the concentrations ci:

    rA⇌B = k1·cA − k2·cB = 0,     (6.3)
    rA⇌C = k3·cA − k4·cC = 0.     (6.4)

The equilibrium constants Ki are defined as:

    KAB = xB/xA   or   KAB·xA − xB = 0,     (6.5)
    KAC = xC/xA   or   KAC·xA − xC = 0.     (6.6)

In terms of the mole fractions, the total mole balance can be set up:

    xA + xB + xC = 1.     (6.7)

Thus, we have three equations (6.5)–(6.7) for the three unknown variables.


The general form of a linear equation system can be written as:

    A·x = b     (6.8)

with A an (n×n)-matrix and x, b ∈ R^n. The equation system is solvable if rank(A) = n, or equivalently det(A) ≠ 0. In our example this means:

    A = [ 1      1     1
          KAB   −1     0
          KAC    0    −1 ],   x = (xA, xB, xC)^T,   b = (1, 0, 0)^T.     (6.9)

This equation system is solvable because det A = 1 + KAB + KAC ≠ 0.

6.1.1 Solution Methods for Linear Equation Systems


The solution of the linear equation system (6.8) can be represented as follows: x = A^(−1)·b. x can be calculated by means of the Gauss elimination. This comprises three steps:

1. Triangulation of A = L·R,     (6.10)
2. Forward substitution: L·z = b  →  z,
3. Backward substitution: R·x = z  →  x.


In our example this means:

     1      1     1    |  1
     KAB   −1     0    |  0     (6.11)
     KAC    0    −1    |  0

Eliminating the first column (row 2 − KAB·row 1, row 3 − KAC·row 1):

     1      1            1         |  1
     0   −(1+KAB)      −KAB        | −KAB     (6.12)
     0   −KAC         −(1+KAC)     | −KAC

Eliminating the second column (row 3 replaced by (1+KAB)·row 3 − KAC·row 2):

     1      1            1            |  1       (a)
     0   −(1+KAB)      −KAB           | −KAB     (b)     (6.13)
     0      0         −(1+KAB+KAC)    | −KAC     (c)

(6.13 c) gives:

    xC = KAC/(1 + KAB + KAC).     (6.14)

By inserting this into (6.13 b), we obtain:

    (1 + KAB)·xB + KAB·xC = KAB   ⇒   xB = KAB/(1 + KAB + KAC).     (6.15)

This solution is then inserted into (6.13 a):

    xA = 1/(1 + KAB + KAC).     (6.16)

The Gauss elimination is considered to be the most important algorithm in numerical mathematics. There are many numerically robust modifications, which are often implemented into standard subroutines of commercial software (e.g. MATLAB (MathWorks, 2000)).
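The three steps can be sketched in plain Python (a minimal version without pivoting, sufficient for this well-conditioned example; robust implementations use partial pivoting, and the function name is ours):

```python
def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination without pivoting."""
    n = len(A)
    A = [row[:] for row in A]  # work on copies
    b = b[:]
    # triangulation
    for j in range(n):
        for i in range(j + 1, n):
            factor = A[i][j] / A[j][j]
            for k in range(j, n):
                A[i][k] -= factor * A[j][k]
            b[i] -= factor * b[j]
    # backward substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][k] * x[k] for k in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# Equilibrium example with KAB = 2, KAC = 1: equations (6.14)-(6.16)
# give xA = 1/4, xB = 2/4, xC = 1/4.
KAB, KAC = 2.0, 1.0
A = [[1.0, 1.0, 1.0], [KAB, -1.0, 0.0], [KAC, 0.0, -1.0]]
print(gauss_solve(A, [1.0, 0.0, 0.0]))  # ≈ [0.25, 0.5, 0.25]
```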

6.2 Nonlinear Equation Systems


We consider again the example of Section 6.1 and enlarge it such that, in addition to the components B and C, a new component D is involved in the reactions:

    A ⇌ B + D  (rate constants k1 forward, k2 backward),     (6.17)
    A ⇌ C + D  (rate constants k3 forward, k4 backward).     (6.18)

In the equilibrium, the following equations hold:

    rA⇌B+D = k1·cA − k2·cB·cD = 0,     (6.19)
    rA⇌C+D = k3·cA − k4·cC·cD = 0.     (6.20)


Again, the mole fractions xA, xB, xC, xD present in the equilibrium have to be found. The total mole balance for the four components is:

    xA + xB + xC + xD = 1.     (6.21)

Furthermore, the mole balance of the component D is given by:

    xD = xB + xC.     (6.22)

Together with the two nonlinear equations for the equilibrium constants Ki:

    KABD = xB·xD/xA   or   KABD·xA = xB·xD,     (6.23)
    KACD = xC·xD/xA   or   KACD·xA = xC·xD,     (6.24)

you obtain four equations (6.21)–(6.24) for the four unknown mole fractions.

The general form of a nonlinear equation system can be written as:

    f1(x1, x2, …, xn) = 0
    f2(x1, x2, …, xn) = 0
         ⋮                    or   f(x) = 0.     (6.25)
    fn(x1, x2, …, xn) = 0

In our example this means:

    f1(xA, xB, xC, xD) = xA + xB + xC + xD − 1 = 0,     (6.26)
    f2(xA, xB, xC, xD) = xB + xC − xD = 0,     (6.27)
    f3(xA, xB, xC, xD) = KABD·xA − xB·xD = 0,     (6.28)
    f4(xA, xB, xC, xD) = KACD·xA − xC·xD = 0.     (6.29)

6.2.1 Solvability of the Nonlinear Equation System


A necessary condition for the solvability of a nonlinear equation system is:

    det(∂f/∂x) ≠ 0   ∀x,     (6.30)

where ∂f/∂x is the Jacobian matrix. Here, the problem is that in many cases the Jacobian matrix can only be determined with high computational effort. An alternative formulation for the solvability of the nonlinear equation system, also a necessary condition, reads: The nonlinear equation system is only solvable if exactly one variable xj can be assigned to every equation fi = 0, so that this equation can be used for calculating the assigned variable under the prerequisite that all other variables are known.


For an illustration of this necessary condition, we consider the following example:

    f1:  x1 + x4 − 10 = 0     (6.31)
    f2:  x2·x3·x4·x5 − 6 = 0     (6.32)
    f3:  x1 − x2^1.7·(x4 − 5)² − 8 = 0     (6.33)
    f4:  x4 − 3·x1 + 6 = 0     (6.34)
    f5:  x1·x3·x5 + 4 = 0     (6.35)
We make use of the incidence matrix, which helps us to determine information on the structural regularity of the problem. In the incidence matrix, it is indicated which equation fi depends on which variable xj (marked by ×):

         x1   x2   x3   x4   x5
    f1   ⊗              ×
    f2        ×    ×    ×    ⊗
    f3   ×    ⊗         ×
    f4   ×              ⊗
    f5   ×         ⊗         ×

One then tries to assign to each equation exactly one variable as its solution (marked by ⊗). If this succeeds, as in our example, the structural regularity is given.

6.2.2 Solution Methods for Nonlinear Equation Systems


6.2.2.1 Newton's Method for Scalar Equations

First, we study the case of a scalar equation f(x). An iteration procedure is sought to determine the zero(s) x* satisfying f(x*) = 0. In order to achieve this, one expands a Taylor series for f(x^(i+1)) = 0 at x^i, with i as the iteration index:

    f(x^(i+1)) = f(x^i) + (df/dx)(x^i)·(x^(i+1) − x^i) + … = 0.     (6.36)

Truncating after the first-order terms yields

    x^(i+1) = x^i − f(x^i)/fx(x^i),   i = 0, 1, …     (6.37)

This method is called Newton's method for scalar equations and is graphically illustrated in Fig. 6.1.
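Equation (6.37) can be sketched directly (a Python sketch; the function name and the test equation x² − 2 = 0 are our own illustration):

```python
def newton_scalar(f, fx, x, tol=1e-12, max_iter=50):
    """Newton's method for a scalar equation f(x) = 0, eq. (6.37)."""
    for _ in range(max_iter):
        step = f(x) / fx(x)  # Newton correction
        x = x - step
        if abs(step) < tol:
            break
    return x

# Example: f(x) = x^2 - 2 has the zero sqrt(2) = 1.41421356...
root = newton_scalar(lambda x: x * x - 2.0, lambda x: 2.0 * x, x=1.0)
print(root)
```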


Figure 6.1: Newton's method for scalar equations.

6.2.2.2 Newton-Raphson Method for Equation Systems

In the vector case f(x) = 0, the procedure is similar to the scalar case. One also expands a Taylor series for f(x^(i+1)) = 0 at the vector x^i, with the iteration index i:

    f(x^(i+1)) = f(x^i) + (∂f/∂x)(x^i)·(x^(i+1) − x^i) + … = 0,     (6.38)

where ∂f/∂x is the Jacobian matrix

    (∂f/∂x)(x^i) = [ ∂f1/∂x1 (x^i)   ∂f1/∂x2 (x^i)   …   ∂f1/∂xn (x^i)
                     ∂f2/∂x1 (x^i)   ∂f2/∂x2 (x^i)   …   ∂f2/∂xn (x^i)
                        ⋮                ⋮           ⋱      ⋮
                     ∂fn/∂x1 (x^i)   ∂fn/∂x2 (x^i)   …   ∂fn/∂xn (x^i) ].     (6.39)

The solution procedure for the determination of the zero vector x*, as shown below, is referred to as the Newton-Raphson method:

    (∂f/∂x)(x^i)·Δx^i = −f(x^i).     (6.40)

This is a linear equation system of the form A·Δx^i = b and contains n linear equations for Δx^i. After its solution (see Section 6.1), starting with a sensible initial value x^0, an update (called Newton step or Newton correction) of x^(i+1) is carried out:

    x^(i+1) = x^i + Δx^i,   i = 0, 1, …, i0 ≤ imax.     (6.41)


The solution of the linear equation system and the subsequent correction are repeated until either the maximum iteration index imax is reached or the absolute and/or relative accuracy is achieved:

    ‖Δx^(i0)‖ < εabs + εrel·‖x^(i0)‖.     (6.42)

In order to come to a better understanding of the Newton-Raphson method, we want to consider the first steps of the solution procedure in a simple example. The example contains two nonlinear equations:

x1³ − x2³ − x1² x2 = 7,   (6.43)
x1² + x2² = 4,   (6.44)

or in residual form

f1(x1, x2) = x1³ − x2³ − x1² x2 − 7 = 0,   (6.45)
f2(x1, x2) = x1² + x2² − 4 = 0.   (6.46)

The Jacobian matrix reads:

∂f/∂x = [ 3x1² − 2x1 x2   −3x2² − x1² ]
        [ 2x1             2x2         ].   (6.47)

As initial value we choose

x^0 = [x1^0, x2^0]^T = [2, 0]^T.

Inserted for the first iteration step it results in:

f1(x^0) = 1,   f2(x^0) = 0,

∂f/∂x (x^0) = [ 12  −4 ]
              [  4   0 ].   (6.48)

The linear equation to be solved is:

[ 12  −4 ] [Δx1^0]     [ 1 ]
[  4   0 ] [Δx2^0] = − [ 0 ]   (6.49)

with the solution

Δx^0 = [0, 1/4]^T.   (6.50)

As new approximation you obtain:

x^1 = x^0 + Δx^0 = [2, 1/4]^T.   (6.51)

With this solution you carry out the second iteration step f (x1 ) = . . .
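The complete iteration for this example can be reproduced with a short script; the variable names are ours, the equations, Jacobian and initial value are those of (6.45)–(6.48):

```python
import numpy as np

def f(x):
    x1, x2 = x
    return np.array([x1**3 - x2**3 - x1**2 * x2 - 7.0,   # (6.45)
                     x1**2 + x2**2 - 4.0])               # (6.46)

def jac(x):
    x1, x2 = x
    return np.array([[3.0 * x1**2 - 2.0 * x1 * x2, -3.0 * x2**2 - x1**2],
                     [2.0 * x1,                     2.0 * x2           ]])

x = np.array([2.0, 0.0])                    # initial value x^0
first_dx = None
for i in range(30):
    dx = np.linalg.solve(jac(x), -f(x))     # linear system (6.40)
    if first_dx is None:
        first_dx = dx.copy()                # first step: (0, 1/4), cf. (6.50)
    x = x + dx                              # Newton correction (6.41)
    if np.linalg.norm(dx) < 1e-12:
        break
```

The first computed correction equals (6.50), and the iteration then converges to a root of the system.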


6.2.2.3 Convergence Problems of the Newton-Raphson Method

Convergence problems within the Newton-Raphson method can occur for various reasons. In the following we study graphical interpretations of various problems.

Convergence Range in the Case of Multiple Solutions (see Fig. 6.2)

The zero x*_2 can only be found, if the initial value x^0 is chosen between the two extreme values of the function. If the initial value is chosen outside this range, the solution converges towards one of the two outer zeros. This means that the convergence range of the Newton-Raphson method is problem specific.

Figure 6.2: Convergence range in the case of multiple solutions.

Divergence and Singularity (see Fig. 6.3)

The zero x* of the function can only be found, if the initial value x^0 is chosen on the right of the pole. If it is chosen on the left-hand side, the method cannot converge to the root. The method then leads to the proximity of the extreme value and fails there, because the reciprocal values of the gradient of the function tend towards infinity. The method has to be aborted due to numerical singularity.

Difficult Problem (see Fig. 6.4)

In this example, divergence appears in spite of an initial value x^0 in the proximity of the roots x*_1, x*_2. This is due to the fact that the search direction leads into an area from which the method can no longer reach the roots. The Newton-Raphson method is basically not suitable for such functions.


Figure 6.3: Divergence and singularity.

Figure 6.4: Dicult problem.


7 Differential-Algebraic Systems

Besides differential equation systems and algebraic systems, which were both introduced in the previous chapters, a combination of both can also appear: the so-called differential-algebraic (equation) systems (DAE systems). The introductory example comes from the area of electrical engineering.

7.1 Depiction of Differential-Algebraic Systems

Example 1

We observe an electrical circuit consisting of two resistors (R1, R2), a coil with the inductance L and a capacitor with the capacitance C (see Fig. 7.1).


Figure 7.1: Electrical circuit example 1.

The voltage source U0 as well as R1, R2, L and C are assumed to be known. The electrical currents i0, i1, i2, iL, iC and the voltages u1, u2, uC, uL are to be found. Ohm's law and


Kirchhoff's loop rule yield the model equations:

u1 = R1 i1,   (7.1)
u2 = R2 i2,   (7.2)
uL = L diL/dt,   (7.3)
iC = C duC/dt,   (7.4)
U0 = u1 + uC,   (7.5)
uC = u2,   (7.6)
uL = u1 + u2,   (7.7)
i0 = i1 + iL,   (7.8)
i1 = i2 + iC.   (7.9)

This leads to an equation system which consists of two differential equations, (7.3) and (7.4), and seven algebraic equations. Therefore, it is called a differential-algebraic system. For the representation of differential-algebraic systems, several different characteristic classes can be distinguished.

7.1.1 General Nonlinear Implicit Form

The general implicit form of a differential-algebraic system is:

0 = F(ż, z, d),   (7.10)

with

z ∈ R^n: state vector,   (7.11)
d: vector of the input quantities and parameters.   (7.12)

The system can be transformed to:

0 = f(ż, z, d),   f ∈ R^{n−k},   (7.13)
0 = g(z, d),   g ∈ R^k.   (7.14)

(7.13) comprises a system of ordinary differential equations, (7.14) comprises a system of (non-)linear algebraic equations.


7.1.2 Explicit Differential-Algebraic System

The explicit differential-algebraic system is a special case of 7.1.1 and often occurs e.g. in chemical engineering:

ẋ = f(x, y, d),   (7.15)
0 = g(x, y, d),   (7.16)

x: vector of the differential state variables,
y: vector of the algebraic state variables.

y consists of the variables which are not differentiated. Therefore, the vector z can be divided into differential and algebraic variables. Example 1 is an example of such an explicit DAE system. This can easily be seen after the following transformations:

ẋ = f(x, y, d):   diL/dt = (1/L) uL,
                  duC/dt = (1/C) iC,

0 = g(x, y, d):   g1: 0 = u1 − R1 i1,
                  g2: 0 = u2 − R2 i2,
                  g3: 0 = U0 − u1 − uC,
                  g4: 0 = uC − u2,
                  g5: 0 = uL − u1 − u2,
                  g6: 0 = i0 − i1 − iL,
                  g7: 0 = i1 − i2 − iC,

with

x = [iL, uC]^T,   (7.17)
y = [i0, i1, i2, iC, u1, u2, uL]^T,   (7.18)
d = [U0, R1, R2, L, C]^T.   (7.19)

7.1.3 Linear Differential-Algebraic System

In addition to nonlinear differential-algebraic systems, linear differential-algebraic systems with constant coefficients play a special role:

M ż = A z + B d,   z = [x, y]^T.   (7.20)

You obtain such models e.g. through linearization of nonlinear implicit or explicit differential-algebraic systems. Many control engineering and system theoretical concepts are based on systems of the form (7.20).


7.2 Numerical Methods for Solving Differential-Algebraic Systems

Differential-algebraic systems are often solved through a combination of differential equation solvers and nonlinear equation solvers, which were described in detail in the previous chapters.
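As an illustration of such a combination, the following sketch applies the implicit Euler method to a small semi-explicit index-1 DAE and solves the resulting nonlinear system in every time step with the Newton-Raphson method of Chapter 6. The example DAE ẋ = y − x, 0 = y − x² is our own choice (it is equivalent to the ODE ẋ = x² − x), not taken from the lecture:

```python
import numpy as np

# Illustrative semi-explicit index-1 DAE (not the circuit from the lecture):
#   dx/dt = y - x,   0 = y - x**2,   x(0) = 0.5
def F(v, x_old, h):
    x, y = v
    return np.array([x - x_old - h * (y - x),   # implicit Euler residual
                     y - x**2])                 # algebraic constraint

def J(v, h):
    x, _ = v
    return np.array([[1.0 + h, -h ],
                     [-2.0 * x, 1.0]])

h = 0.001
v = np.array([0.5, 0.25])                 # consistent initial values (x, y)
for _ in range(1000):                     # integrate to t = 1
    x_old = v[0]
    for _ in range(20):                   # Newton-Raphson per time step
        dv = np.linalg.solve(J(v, h), -F(v, x_old, h))
        v = v + dv
        if np.linalg.norm(dv) < 1e-12:
            break
x_num = v[0]                              # exact solution: x(1) = 1/(1 + e)
```

Because the constraint is solved together with the differential equation in every step, the algebraic variable stays consistent with the differential state.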

7.3 Solvability of Differential-Algebraic Systems

Differential-algebraic systems are not necessarily solvable. For the differential-algebraic system (7.13), (7.14) a solution exists, if the system is of differential index 1. In the case of an explicit DAE system, this means that rank[g_y] = k, i.e. ∂g/∂y is regular. A necessary condition is given by the analysis of the structural rank of the incidence matrix, which was already introduced in Section 6.2.1. In the case of example 1, the incidence matrix looks as follows (× marks the occurrence of a variable in an equation):

     i0   i1   i2   iC   u1   u2   uL
g1        ×              ×
g2             ×              ×
g3                       ×
g4                            ×
g5                       ×    ×    ×
g6   ×    ×
g7        ×    ×    ×

It can be seen that the electrical current i0 can only be determined with equation g6. From this it follows that iC can be determined from g7, uL from g5, i2 from g2, i1 from g1, u1 from g3 and u2 from g4. The structural regularity is given. In this example, the solution is even unique. This is not always the case, as the following example shows.
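The assignment of exactly one variable to each algebraic equation is a maximum matching in a bipartite graph; a small augmenting-path sketch for the incidence structure of example 1 (function and variable names are ours):

```python
def structural_rank(incidence):
    """Maximum bipartite matching (augmenting paths) of equations to variables."""
    match = {}                              # variable -> assigned equation
    def augment(eq, seen):
        for var in incidence[eq]:
            if var not in seen:
                seen.add(var)
                if var not in match or augment(match[var], seen):
                    match[var] = eq
                    return True
        return False
    rank = sum(augment(eq, set()) for eq in incidence)
    return rank, {eq: var for var, eq in match.items()}

# Occurrence of the algebraic unknowns in the equations g1..g7 of example 1
incidence = {
    "g1": ["u1", "i1"], "g2": ["u2", "i2"], "g3": ["u1"], "g4": ["u2"],
    "g5": ["uL", "u1", "u2"], "g6": ["i0", "i1"], "g7": ["i1", "i2", "iC"],
}
rank, assignment = structural_rank(incidence)   # rank 7: structurally regular
```

A structural rank equal to the number of algebraic equations is exactly the necessary condition discussed above; g3 and g4 force the assignments u1 → g3 and u2 → g4 in every perfect matching.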

Example 2
In example 2 a third resistor R3 is introduced, replacing the capacitor in example 1 (see Fig. 7.2).


Figure 7.2: Electrical circuit example 2.

In addition to the already known equations

u1 = R1 i1,   (7.21)
u2 = R2 i2,   (7.22)
diL/dt = (1/L) uL,   (7.23)
uL = u1 + u2,   (7.24)
i0 = i1 + iL,   (7.25)
i1 = i2 + i3,   (7.26)

you obtain

u3 = R3 i3,   (7.27)
U0 = u1 + u3,   (7.28)
u3 = u2.   (7.29)

This is an explicit differential-algebraic system, which consists of one differential equation and eight algebraic equations:

x = [iL],   (7.30)
y = [i0, i1, i2, i3, u1, u2, u3, uL]^T,   (7.31)
d = [U0, R1, R2, R3, L]^T.   (7.32)

The incidence matrix has the following appearance:


     i0   i1   i2   i3   u1   u2   u3   uL
g1        ×              ×
g2             ×              ×
g3                       ×         ×
g4                            ×    ×
g5                       ×    ×         ×
g6   ×    ×
g7        ×    ×    ×
g8                  ×              ×

i0 can only be calculated through equation g6, uL only with g5. Now you have to decide with which variable and equation you want to proceed further. E.g. u3 can be determined from g3. Doing so, you decide that u1 has to be determined from g1, i1 from g7, i2 from g2, i3 from g8 and u2 from g4. In contrast to example 1, you see here that no unique solution exists, but the problem is still structurally solvable. The decision to calculate u3 with g3 leads to the problem of algebraic loops:

u3 = g3(u1),  u1 = g1(i1),  i1 = g7(i2, i3),  i3 = g8(u3)   ⇒   u3 = g(u3, ...)

The output variable u3 is at the same time an input variable. The problem of algebraic loops can only be solved iteratively. In Simulink a solution method based on Newton's method is used. Models with algebraic loops run slower than models without. If possible, algebraic loops should be avoided. As an example (see Fig. 7.3(a)) we consider the algebraic loop in

y = A(x − By).   (7.33)

Here, remedy can be achieved through a simple transformation (see Fig. 7.3(b)):

(1 + AB) y = Ax.   (7.34)
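For this scalar loop, the transformation (7.34) can be compared directly with an iterative treatment of the loop (here a simple fixed-point iteration, which converges for |AB| < 1; Simulink itself uses a Newton-based method). The numerical values of A, B and x are our own choice:

```python
A, B, x = 2.0, 0.3, 1.0                 # example loop gains and input

# Direct solution after eliminating the loop, cf. (7.34): (1 + A*B) y = A*x
y_direct = A * x / (1.0 + A * B)

# Iterative treatment of the loop y = A*(x - B*y) by fixed-point iteration
y = 0.0
for _ in range(200):
    y = A * (x - B * y)
```

Both approaches give the same value, but the iterative one needs many evaluations per simulation step, which is why algebraic loops slow a model down.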


Figure 7.3: Algebraic loop: (a) original, (b) transformed.

Example 3
We consider a further example (see Fig. 7.4), where capacitor and coil from example 1 are exchanged.

Figure 7.4: Electrical circuit example 3.

In addition to the already known model equations

u1 = R1 i1,   (7.35)
u2 = R2 i2,   (7.36)
uL = L diL/dt,   (7.37)
iC = C duC/dt,   (7.38)

you obtain

U0 = u1 + uL,   (7.39)
uL = u2,   (7.40)
uC = u1 + u2,   (7.41)
i0 = i1 + iC,   (7.42)
i1 = i2 + iL.   (7.43)


The model includes two differential equations and seven algebraic equations. The vectors x, y and d correspond to those of example 1. The incidence matrix looks like the following:

     i0   i1   i2   iC   u1   u2   uL
g1        ×              ×
g2             ×              ×
g3                       ×         ×
g4                            ×    ×
g5                       ×    ×
g6   ×    ×         ×
g7        ×    ×

It can be seen that for the calculation of i0 and iC in each case only the equation g6 is available. Therefore, the differential-algebraic system is not solvable in this form. This is referred to as structural singularity.


8 Partial Differential Equations

8.1 Introductory Example
For the introduction of partial differential equations (PDE) we consider the example of a heat conductor. In the heat conductor there exists a heat flux from regions of higher temperature to regions of lower temperature (see Fig. 8.1).

Figure 8.1: Schematic representation of a heat conductor.

The temperature T of the heat conductor is a function of the position in space and time. The differential heat balance for a volume element with the area A and the thickness dx reads:

ρ A dx c ∂T(x, t)/∂t = A [q(x, t) − q(x + dx, t)]   (8.1)

with

ρ: density,  q: heat flux density,  c: specific heat capacity.

The heat flux density q can be represented as follows according to Fourier's law:

q(x, t) = −λ ∂T(x, t)/∂x,   λ: heat conductivity.   (8.2)

Expanding q(x + dx, t) into a Taylor series, q(x + dx, t) = q(x, t) + (∂q/∂x)(x, t) dx + ..., and inserting this together with (8.2) into (8.1), you obtain:

ρ A dx c ∂T(x, t)/∂t = λ A ∂²T(x, t)/∂x² dx,   (8.3)

∂T(x, t)/∂t = [λ/(ρc)] ∂²T(x, t)/∂x² = a ∂²T(x, t)/∂x²,   a = λ/(ρc): temperature conductivity.   (8.4)


(8.4) is the heat conduction equation. This is a partial differential equation, because it contains derivatives with respect to space as well as derivatives with respect to time. For a complete mathematical model, additional boundary conditions have to be determined. The number of conditions for every independent variable (x, t) corresponds to the order of its highest derivative. For the time t the first derivative ∂T/∂t occurs. Therefore one boundary condition for t is needed, e.g. an initial condition: T(x, t0) = T0(x), the temperature profile along the heat conductor at time t0. x occurs in the second derivative ∂²T/∂x². So two boundary conditions are needed for x. Examples for these are:

a) Dirichlet conditions

T(0, t) = T1(t),   (8.5)
T(l, t) = T2(t).   (8.6)

b) Neumann conditions

∂T(0, t)/∂x = 0,   (8.7)
∂T(l, t)/∂x = 0.   (8.8)

c) Robbins conditions

∂T(0, t)/∂x = 0,   (8.9)
−λ ∂T(l, t)/∂x = σ (Ta⁴ − T(l, t)⁴).   (8.10)

8.2 Representation of Partial Differential Equations

The highest order of derivative occurring in a (partial) differential equation is called the order of the differential equation. The general form of a partial differential equation of second order (z = z(x, t)) reads:

a(·) z_tt + 2b(·) z_tx + c(·) z_xx + d(·) z_t + e(·) z_x + f(·) z + g(·) = 0   (8.11)

You distinguish between linear, quasi-linear and nonlinear partial differential equations:

linear: a(·), b(·), c(·) constant or functions of t and x,
quasi-linear: a(·), b(·), c(·) functions of t, x, z, z_t, z_x,
nonlinear: in all other cases.


Linear partial differential equations of second order can be further distinguished into hyperbolic, parabolic and elliptic differential equations:

hyperbolic: b² − ac > 0,
parabolic: b² − ac = 0,
elliptic: b² − ac < 0.

The choice of a suitable numerical solution method strongly depends on the type of the partial differential equation. Except for linear partial differential equations that only contain a small number of variables, in most cases an analytical solution of partial differential equations is not possible. The heat conduction equation (8.4)

T_t − a T_xx = 0   (8.12)

is a linear, parabolic partial differential equation of second order, because a(·) = b(·) = 0 and c(·) = −a = const, and so b² − ac = 0. In our example we choose:

a = 1,   (8.13)
T(x, 0) = sin(πx/l),   (8.14)
T(0, t) = T(l, t) = 0.   (8.15)

The analytical solution then reads:

T(x, t) = e^{−a (π/l)² t} sin(πx/l).   (8.16)

8.3 Numerical Solution Methods

The explanation of the numerical solution methods is based on scalar partial differential equations. The solution of vector differential equations can be obtained accordingly.

8.3.1 Method of Lines

The idea of the method of lines (Schiesser, 1991) is the parameterization of the solution function in such a way that it depends only on one continuous coordinate. Usually, one retains the time coordinate and discretizes the spatial coordinate. In that way, the partial differential-algebraic system is transformed into a differential-algebraic system with lumped parameters, which can be solved according to Chapter 7.


The evaluation is done at N + 1 selected nodes x_k (see Fig. 8.2):

x_k = k l / N,   k = 0, ..., N,   (8.17)
z_k(t) = z(t, x_k),   (8.18)
z(t, x) ≈ [z_0(t), z_1(t), ..., z_N(t)]^T.   (8.19)

Figure 8.2: Discretization of the solution function z(x, t).

Inserting the above into the partial differential equation, one receives ordinary differential equations which have to be solved at the nodes x_k. This is illustrated in Fig. 8.3. In the example of the heat conduction equation, the equations are:

T_t|_{x_k} − a T_xx|_{x_k} = 0,  or  T_t^k − a T_xx^k = 0,   k = 1, ..., N − 1,   (8.20)
T^k(t = t_0) = T_0^k,   k = 1, ..., N − 1,   (8.21)
T^0(t) = T_1(t),   (8.22)
T^N(t) = T_2(t).   (8.23)
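A minimal method-of-lines sketch for the heat conduction example (a = 1, l = 1, conditions (8.13)–(8.15)) uses the central 3-point formula in space and, for simplicity, the explicit Euler method in time; the time integrator and all variable names are our own choice:

```python
import numpy as np

N = 20                                  # number of intervals; nodes x_0 .. x_N
dx = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
T = np.sin(np.pi * x)                   # initial profile (8.14) with l = 1

a, t_end = 1.0, 0.1                     # temperature conductivity and end time
dt = 0.4 * dx**2                        # explicit Euler stable for dt <= dx^2/(2a)
for _ in range(int(round(t_end / dt))):
    Txx = (T[:-2] - 2.0 * T[1:-1] + T[2:]) / dx**2   # central 3-point formula
    T[1:-1] += dt * a * Txx                           # ODEs (8.20) at inner nodes
    T[0] = T[-1] = 0.0                                # Dirichlet conditions (8.15)

T_exact = np.exp(-a * np.pi**2 * t_end) * np.sin(np.pi * x)   # solution (8.16)
```

Even on this coarse grid the numerical profile agrees with the analytical solution (8.16) to about three decimal places.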

For the use of the method of lines, the derivatives with respect to the spatial coordinate have to be determined. For this purpose you use the method of finite differences.

8.3.1.1 Finite Differences

First of all, z_{k+1} is expanded into a Taylor series:

z_{k+1} = z_k + (dz_k/dx) Δx + (1/2) (d²z_k/dx²) Δx² + ...   (8.24)

In the case of a first-order approximation, one solves this equation with respect to dz_k/dx:

dz_k/dx = (z_{k+1} − z_k)/Δx − (1/2) (d²z_k/dx²) Δx = (z_{k+1} − z_k)/Δx + O_1(Δx).   (8.25)


Figure 8.3: Illustration of the method of lines.

Admittedly, this is a bad approximation. An approximation of second order is better, in which both z_{k+1} and z_{k+2} are expanded into a Taylor series:

z_{k+1} = z_k + (dz_k/dx) Δx + (1/2) (d²z_k/dx²) Δx² + ...,   (8.26)
z_{k+2} = z_k + (dz_k/dx) (2Δx) + (1/2) (d²z_k/dx²) (2Δx)² + ...   (8.27)

In this way, a linear equation system in dz_k/dx and d²z_k/dx² is obtained. By multiplying (8.26) by four and subtracting (8.27) from this equation, you obtain:

4 z_{k+1} − z_{k+2} = 3 z_k + 2 (dz_k/dx) Δx + O(Δx³),   (8.28)

dz_k/dx = (−3 z_k + 4 z_{k+1} − z_{k+2}) / (2Δx) + O_2(Δx²).   (8.29)

In the approximation by finite differences, there are the following degrees of freedom:

number of nodes,
selection of nodes:
forward differences: as in (8.29), only points on the right-hand side are considered (compare with Table 8.1),
backward differences: only points on the left-hand side are considered,
central differences: both sides are considered (compare with Table 8.2).
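The error orders can be checked numerically: halving Δx should roughly halve the error of the 2-point formula (8.25) and quarter the error of the 3-point formula (8.29). The test function sin(x) is our own choice:

```python
import math

x0, dfdx = 1.0, math.cos(1.0)            # evaluation point and exact derivative
f = math.sin                             # test function (our choice)

def d1_2point(h):                        # (z_{k+1} - z_k)/dx, error order 1
    return (f(x0 + h) - f(x0)) / h

def d1_3point(h):                        # (-3 z_k + 4 z_{k+1} - z_{k+2})/(2 dx), order 2
    return (-3.0 * f(x0) + 4.0 * f(x0 + h) - f(x0 + 2.0 * h)) / (2.0 * h)

ratio2 = abs(d1_2point(0.1) - dfdx) / abs(d1_2point(0.05) - dfdx)   # about 2
ratio3 = abs(d1_3point(0.1) - dfdx) / abs(d1_3point(0.05) - dfdx)   # about 4
```

The observed error ratios match the error orders 1 and 2 listed in Table 8.1.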


Table 8.1: One-sided (forward) differences.

First derivative, Δx · ∂z/∂x|_{x_k}:

label     error order   z_k       z_{k+1}   z_{k+2}   z_{k+3}   z_{k+4}   z_{k+5}   z_{k+6}
2-point   1             −1        1
3-point   2             −3/2      2         −1/2
4-point   3             −11/6     3         −3/2      1/3
5-point   4             −25/12    4         −3        4/3       −1/4
6-point   5             −137/60   5         −5        10/3      −5/4      1/5
7-point   6             −49/20    6         −15/2     20/3      −15/4     6/5       −1/6

Second derivative, Δx² · ∂²z/∂x²|_{x_k}:

label     error order   z_k       z_{k+1}   z_{k+2}   z_{k+3}   z_{k+4}   z_{k+5}   z_{k+6}
3-point   1             1         −2        1
4-point   2             2         −5        4         −1
5-point   3             35/12     −26/3     19/2      −14/3     11/12
6-point   4             15/4      −77/6     107/6     −13       61/12     −5/6
7-point   5             203/45    −87/5     117/4     −254/9    33/2      −27/5     137/180
8.3.1.2 Problem of the Boundaries

At the boundaries of the defined range of the spatial coordinate x, problems may arise in using finite differences if points are not available which are necessary for the evaluation of the differences. In the central five-point formula, this applies for example to z_{k−1} and z_{k−2}. Among others, there are the following solutions:

Extrapolation method (see Fig. 8.4): Extrapolation points are introduced outside of the defined range. These points are then used in the approximation formulas.

Sliding differences (see Fig. 8.5): The center point is shifted successively.

However, both methods reduce the approximation order at the boundaries.


Table 8.2: Central differences.

First derivative, Δx · ∂z/∂x|_{x_k}:

label     error order   z_{k−3}   z_{k−2}   z_{k−1}   z_k      z_{k+1}   z_{k+2}   z_{k+3}
3-point   2                                 −1/2      0        1/2
5-point   4                       1/12      −2/3      0        2/3       −1/12
7-point   6             −1/60     3/20      −3/4      0        3/4       −3/20     1/60

Second derivative, Δx² · ∂²z/∂x²|_{x_k}:

label     error order   z_{k−3}   z_{k−2}   z_{k−1}   z_k      z_{k+1}   z_{k+2}   z_{k+3}
3-point   2                                 1         −2       1
5-point   4                       −1/12     4/3       −5/2     4/3       −1/12
7-point   6             1/90      −3/20     3/2       −49/18   3/2       −3/20     1/90

Figure 8.4: Extrapolation method.

8.3.2 Method of Weighted Residuals

The method of weighted residuals (Lapidus and Pinder, 1982) approximates the solution z(x, t) with a finite function series:

ẑ(x, t) = Σ_{i=0}^{N} γ_i(t) φ_i(x)   (8.30)

with φ_i ... known basis of the function system, γ_i ... coefficients. The quality of the approximation depends on the dimension N and the choice of the basis φ.


Figure 8.5: Sliding differences.

The task is now to determine the coefficients γ_i. In the example of the heat conduction equation you proceed as follows:

∂T/∂t − a ∂²T/∂x² = 0,   (8.31)
T(x, t0) − T0(x) = 0,   (8.32)
T(0, t) − T1(t) = 0,   (8.33)
∂T(l, t)/∂x − T2(t) = 0.   (8.34)

This corresponds to a representation in residual form, because only zeros occur on the right-hand side. The sought solution T is approximated by applying (8.30):

T̂(x, t) = Σ_{i=0}^{N} γ_i(t) φ_i(x).   (8.35)

This is inserted into the heat conduction equation, leading to equation residuals R, because only an approximation of the real solution is used:

∂T̂/∂t − a ∂²T̂/∂x² = R_PDGL(x, t) = R_PDGL(γ, dγ/dt, x) ≠ 0,   (8.36)
T̂(x, t0) − T0(x) = R_AB(x) ≠ 0,   (8.37)
T̂(0, t) − T1(t) = R_RB0(t) ≠ 0,   (8.38)
∂T̂(l, t)/∂x − T2(t) = R_RBl(t) ≠ 0.   (8.39)


An approximation ẑ is considered acceptable, if the suitably weighted equation residual R disappears in the mean over the considered range:

∫_0^l R(γ, dγ/dt, x) w_i(x) dx = 0,   i = 1, ..., N.   (8.40)

The choice of the weights w_i characterizes the weighted residuals method. The three most important methods are briefly introduced in the following sections (see Fig. 8.6).

8.3.2.1 Collocation Method

In the collocation method Dirac impulses are used as weighting functions:

∫_0^l R(γ, dγ/dt, x) δ(x − x_i) dx = 0.   (8.41)

As a consequence, you do not have to solve an integral system, but rather only an algebraic equation system:

R(γ, dγ/dt, x_i) = 0,   i = 1, ..., N.   (8.42)

8.3.2.2 Control Volume Method

The control volume method uses weights w_i only in an interval between two active nodes:

w_i(x) = 1, if x_{i−1} < x < x_i;  w_i(x) = 0 elsewhere.   (8.43)

You obtain as the resulting equation system:

∫_{x_{i−1}}^{x_i} R(γ, dγ/dt, x) dx = 0,   i = 1, ..., N.   (8.44)

8.3.2.3 Galerkin Method

The Galerkin method uses as weighting functions the sensitivity of the approximation function with respect to the parameters to be determined. In other words, the weighting functions are the basis functions:

w_i(x) = ∂ẑ(x, t)/∂γ_i = φ_i(x),   i = 1, ..., N.   (8.45)


Figure 8.6: Different weighting functions (according to Lapidus and Pinder, 1982).

8.3.2.4 Example

The method of weighted residuals shall be illustrated with the example of an ordinary differential equation of first order, Newton's cooling law. An object with the temperature T is exposed to its environment with the temperature Tu and cools down:

dT/dt + k(T − Tu) = 0,   T(0) = 1.   (8.46)

k is a proportionality constant. Let k = 2, Tu = 1/2, and the considered time interval shall be normalized to one. As basis functions φ_i(t) the so-called hat functions are used (see Fig. 8.7):

φ_i(t) = (t − t_{i−1}) / (t_i − t_{i−1}),   t_{i−1} ≤ t ≤ t_i,
         (t_{i+1} − t) / (t_{i+1} − t_i),   t_i ≤ t ≤ t_{i+1},
         0,   otherwise.   (8.47)

N is selected to be three and you receive the approximation for T:

T̂ = Σ_{j=1}^{3} T_j φ_j(t).   (8.48)


Figure 8.7: Hat function.

T_j is the value of the temperature T̂ at the node j. The condition for the weighted residuals reads:

∫_0^1 R(t) w_i dt = 0,   i = 1, 2, 3,   (8.49)

with

R(t) = dT̂/dt + k(T̂ − Tu).   (8.50)

You obtain

∫_0^1 [ Σ_{j=1}^{3} T_j (dφ_j/dt + k φ_j) − k Tu ] w_i(t) dt = 0,   i = 1, 2, 3.   (8.51)

This equation is to be solved both with the Galerkin method and the collocation method.

Galerkin method

With the Galerkin method you obtain the following expression as a result of the weighting with the basis functions:

Σ_{j=1}^{3} T_j ∫_0^1 (dφ_j/dt + k φ_j) φ_i dt = ∫_0^1 k Tu φ_i dt,   i = 1, 2, 3.   (8.52)


In matrix notation this yields

[ ∫_0^{1/2} (dφ1/dt + kφ1) φ1 dt   ∫_0^{1/2} (dφ2/dt + kφ2) φ1 dt   0                                ] [T1]   [ ∫_0^{1/2} k Tu φ1 dt ]
[ ∫_0^1     (dφ1/dt + kφ1) φ2 dt   ∫_0^1     (dφ2/dt + kφ2) φ2 dt   ∫_{1/2}^1 (dφ3/dt + kφ3) φ2 dt   ] [T2] = [ ∫_0^1     k Tu φ2 dt ]   (8.53)
[ 0                                ∫_{1/2}^1 (dφ2/dt + kφ2) φ3 dt   ∫_{1/2}^1 (dφ3/dt + kφ3) φ3 dt   ] [T3]   [ ∫_{1/2}^1 k Tu φ3 dt ]

For instance, with φ1 = 1 − 2t the first integral is

∫_0^{1/2} (dφ1/dt + kφ1) φ1 dt = ∫_0^{1/2} [−2 + 4t + k(1 − 2t)²] dt = [−2t + 2t² − (k/6)(1 − 2t)³]_0^{1/2} = −1/2 + k/6,

leading (with T1 = 1 as initial condition) to

[ −1 + k/3   1 + k/6    0        ] [T1]   [ k Tu / 2 ]   (a)
[ −1 + k/6   2k/3       1 + k/6  ] [T2] = [ k Tu     ]   (b)   (8.54)
[ 0          −1 + k/6   1 + k/3  ] [T3]   [ k Tu / 2 ]   (c)

Therefore, we have three equations for two unknown variables, yielding

(b), (c):   [T1, T2, T3] = [1, 0.678, 0.571],   (8.55)
(a), (c):   [T1, T2, T3] = [1, 0.625, 0.550],   (8.56)
(a), (b):   [T1, T2, T3] = [1, 0.625, 0.625].   (8.57)

The exact analytical solution of (8.46) reads:

T(t) = T(0) e^{−kt} + Tu (1 − e^{−kt}),

leading to [T1, T2, T3] = [1, 0.684, 0.568]. If you compare the exact solution with the numerically calculated ones, you can see that the best solution in this case is (8.55). Generally speaking, in the Galerkin method one should delete rows and columns with known coefficients.
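The Galerkin result (8.55) can be reproduced by assembling the integrals in (8.52) with numerical quadrature; the following sketch (all names are ours) imposes T1 = 1 and keeps the weightings φ2 and φ3, i.e. rows (b) and (c):

```python
import numpy as np

k, Tu = 2.0, 0.5
nodes = np.array([0.0, 0.5, 1.0])
t = np.linspace(0.0, 1.0, 20001)                      # fine quadrature grid

# Piecewise linear hat functions (8.47) and their slopes
phi = np.array([np.interp(t, nodes, np.eye(3)[j]) for j in range(3)])
dphi = np.gradient(phi, t, axis=1)

def integ(y):                                         # trapezoidal quadrature
    return float(np.dot(0.5 * (y[:-1] + y[1:]), np.diff(t)))

# Assemble (8.52): sum_j T_j * int (dphi_j/dt + k phi_j) phi_i dt = int k Tu phi_i dt
A = np.array([[integ((dphi[j] + k * phi[j]) * phi[i]) for j in range(3)]
              for i in range(3)])
b = np.array([integ(k * Tu * phi[i]) for i in range(3)])

T1 = 1.0                                              # initial condition
T23 = np.linalg.solve(A[1:, 1:], b[1:] - A[1:, 0] * T1)   # rows (b) and (c)
```

Within quadrature accuracy this gives T2 ≈ 0.678 and T3 ≈ 0.571, i.e. solution (8.55).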


Collocation Method

Applying the collocation method to our example, you receive an equation system without the integrals of the equation residuals (due to the use of Dirac impulses for weighting):

∫_0^1 [ Σ_{j=1}^{3} T_j (dφ_j/dt + k φ_j) − k Tu ] δ(t − t_i) dt = 0,   i = 1, 2, 3.   (8.58)

The collocation points can be chosen as desired, e.g.: t1 = 0 (initial condition), t2 = 0.25, t3 = 0.75. With this you obtain from (8.58):

[ 1                            0                            0                         ] [T1]   [ 1    ]
[ (dφ1/dt + kφ1)|_{0.25}       (dφ2/dt + kφ2)|_{0.25}       0                         ] [T2] = [ k Tu ]   (8.59)
[ 0                            (dφ2/dt + kφ2)|_{0.75}       (dφ3/dt + kφ3)|_{0.75}    ] [T3]   [ k Tu ]

[ 1           0           0        ] [T1]   [ 1    ]
[ −2 + k/2    2 + k/2     0        ] [T2] = [ k Tu ]   (8.60)
[ 0           −2 + k/2    2 + k/2  ] [T3]   [ k Tu ]

From that we can derive the approximated solution: [T1, T2, T3] = [1, 0.667, 0.556].
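Solving this small system numerically reproduces the result; the matrix entries follow from the hat-function values and slopes at the collocation points t2 = 0.25 and t3 = 0.75:

```python
import numpy as np

k, Tu = 2.0, 0.5
# At t2 = 0.25: dphi1 = -2, phi1 = 1/2; dphi2 = 2, phi2 = 1/2 (analogously at t3);
# the first row imposes the initial condition T_1 = 1.
A = np.array([[1.0,             0.0,            0.0          ],
              [-2.0 + k / 2.0,  2.0 + k / 2.0,  0.0          ],
              [0.0,            -2.0 + k / 2.0,  2.0 + k / 2.0]])
b = np.array([1.0, k * Tu, k * Tu])
T = np.linalg.solve(A, b)   # collocation solution T = [1, 2/3, 5/9]
```

The lower triangular structure means the nodal values can even be computed one after another, from T1 to T3.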

8.4 Summary

In the differences method many degrees of freedom exist (order, centering, boundary treatment).

The approximation quality of the method of weighted residuals is strongly dependent on the choice of the basis functions. Using the collocation method, the choice of the location of the collocation points is crucial for a high approximation quality.

In collocation methods only the equation residuals have to be determined, but not their integrals, which are frequently not analytically evaluable. If an analytical evaluation is not possible, the computational effort increases drastically.

There are no general statements on the most suitable method, because the best choice is strongly problem dependent.


9 Discrete Event Systems

In discrete event systems (DES) the system state only changes at discrete time moments at which events occur. Fig. 9.1 shows a comparison of trajectories of discrete event systems and continuous systems.
a) time and value continuous; b) time continuous and value discrete; c) time discrete and value continuous; d) time and value discrete

Figure 9.1: Trajectories for continuous and time discrete systems (according to Kiencke, 1997).

In discrete event systems, the states change at unpredictable moments of time, which only depend on the events which cause the state change (Fig. 9.1 b, d). Input signals are the discrete events. In continuous systems, the states change continuously in time and (usually) have continuous input variables (Fig. 9.1 a). A discrete time system (Fig. 9.1 c) only approximates a


continuous time system. The variables still have continuous values, but are captured only at discrete moments. Discrete events occur either as natural discrete events or through the reduction of a continuous state transition.

Natural discrete events

Quantities are defined as discrete events, if the transition process is relatively short or can even be neglected. Examples:

signal change in traffic controls,
number of trains in a railway station, neglecting the trains driving in and out,
floors during an elevator trip.

Reduction of the transition

A state passes into a successive state through a continuous transition. This transition can be (artificially) defined as a discrete event, if for example only qualitative changes are of interest.

As an example we observe the driving ability of a motor vehicle driver. The ability decreases with the alcohol content (continuous variable) in the driver's blood. Thus the ability to drive is a continuous variable. The law declares that one loses the ability to drive with an alcohol content of > 0.5 ‰. A graphical illustration as a discrete state graph is shown in Fig. 9.2.


Figure 9.2: State graph of the driving ability.

9.1 Classification of Discrete Event Models

Discrete event models are characterized in this chapter on the basis of different criteria (Kiencke, 1997).

9.1.1 Representation Form

Discrete event models can be represented either as mathematical or as graphical models.


Mathematical model: The objects are represented through mathematical variables as well as input and output functions (see Section 9.2).

Graphical model: The variables are depicted as graphical symbols. The input and output functions are executed through graphical operations (see Section 9.3). The advantage of this representation is its vividness. On the other hand, graphical symbols often cannot completely represent a real process. Every graphical model can be transformed into a mathematical model (e.g. with matrix operations).

9.1.2 Time Basis

In contrast to time continuous models, no time basis is available in discrete event models. The model analysis is carried out along a logical sequence of events.

9.1.3 States and State Transitions

You distinguish between deterministic and stochastic states and state transitions. Stochastic states only appear with a specific probability. Thereby, the state transitions take place with conditional probabilities.

9.2 State Model

A discrete event system is described with a model M, which can be represented by a 7-tuple:

M = {x, z, y, δ_IN, δ_EX, λ, τ},   (9.1)

with

x = [x1, x2, ..., xk]^T: vector of inputs (external events),
z = [z1, z2, ..., zm]^T: vector of states,
y = [y1, y2, ..., yn]^T: vector of outputs,

and the mathematical functions

δ_IN: z → z: state transition function for internal events,
δ_EX: z × x → z: state transition function for external events,
λ: z × x → y: output function,
τ: z → R+: residence time function.


State transition function for internal events

The system is in the state zi with its corresponding residence time τ(zi). A new state will be reached after the end of the residence time:

z[t + τ(zi)] = δ_IN(zi).   (9.2)

State transition function for external events

The system is in the state z(t). Because of an external event x(t) it switches to the new state z':

z'(t) = δ_EX[z(t), x(t)].   (9.3)

Output function

y(t) = λ[z(t), x(t)].   (9.4)

Input variables x

A model is called autonomous, if it works without being influenced by the environment (without inputs x):

M = {z, y, δ_IN, λ, τ}.

Time dependency

If the state transition functions δ_IN, δ_EX, the output function λ, and the residence time function τ are dependent on time, one calls the system time-variant, otherwise time-invariant.

Dependency of the state transitions on the system's previous history

A model is called memoryless, if the conditional transition probability for the successive state is only dependent on the immediate state zi. This characteristic eases the mathematical analysis.

9.2.1 Example
We consider a liquid vessel as an illustrative example. The filling and emptying of the vessel is supposed to be realized by means of a digital feedforward control (see Fig. 9.3). Three process states (filling, emptying, equilibrium) are considered in the model:

ḣ > 0, if v1 ≠ 0, v2 = 0 (filling),
ḣ < 0, if v1 = 0, v2 ≠ 0 (emptying),
ḣ = 0, if v1 = v2 (equilibrium).


Figure 9.3: Model of a liquid vessel.

The inputs are x1 = v1 and x2 = v2. For these we have:

xi = 1, if vi ≠ 0;  xi = 0, if vi = 0;  i = 1, 2.

The process states can be represented with the Boolean variables x1 and x2 in the following way (x̄ denotes the negation of x):

filling: x1 x̄2,
emptying: x̄1 x2,
equilibrium: x1 x2 + x̄1 x̄2.

The state vector z must contain two components z1 and z2, because three states have to be represented:

z = [1, 0]^T: filling,
z = [0, 1]^T: emptying,
z = [0, 0]^T: equilibrium.

This yields the following state equations:

z1 = x1 x̄2,   (9.5)
z2 = x̄1 x2.   (9.6)

9.3 Graph Theory

Graph theory provides methods which are well-suited for the description and the analysis of discrete event systems.


9.3.1 Basic Concepts

The elements of a system are called vertices. Two vertices are related to each other through a connecting edge. The set V of vertices together with the set E of edges is called a graph G (see Fig. 9.4).

Figure 9.4: Illustration of a graph.

In the following, some important concepts of graph theory will be explained.

Directed graph

A directed graph (see Fig. 9.5) is a graph in which all edges are directed. A directed edge is called an arrow.

Figure 9.5: Directed graph.

Predecessor and successor: Vertices which are directly connected with an arrow to another vertex k are called predecessor and successor of k, respectively (see Fig. 9.6).

Sink and source: Sources are vertices without any predecessor. Sinks are vertices without any successor (see Fig. 9.7).

Parallel arrows: Two arrows are called parallel, if they have the same starting and ending vertex (see Fig. 9.8).

Loop: A degenerate edge (arrow) of a graph which joins a vertex to itself is called a loop (see Fig. 9.9).


Figure 9.6: Predecessor and successor.

Figure 9.7: Sink and source.

Figure 9.8: Parallel arrows.

Figure 9.9: Loop.

Simple graph

A graph which contains neither loops nor parallel edges is called a simple graph; if loops and multiple edges are permitted, it is called a general graph (see Fig. 9.10).


Figure 9.10: Simple graph.

Finite graph

A graph is called finite, if the number of vertices as well as the number of arrows or edges of this graph is finite.

Digraph

A simple and finite graph is called a digraph.

Reachability

A vertex w of a digraph is called reachable from a vertex v, if there exists a path from the starting vertex v to the ending vertex w. This sequence of arrows is called a reachability graph.

9.3.2 Representation of Graphs and Digraphs with Matrices

One has to model discrete event systems mathematically in a matrix representation in order to simulate them. Thereby, one distinguishes between the adjacency matrix and the incidence matrix. The adjacency matrix A represents the edges between the single vertices:

    [ a11  a12  ...  a1n ]
A = [ a21  a22  ...  a2n ]
    [ ...  ...  ...  ... ]
    [ an1  an2  ...  ann ]   (9.7)

n is the number of vertices. For the elements of the adjacency matrix, the following holds: aij = 1 0 if the edge from vertex i to j exists otherwise

In the incidence matrix I the n vertices and m edges of a graph are represented. The rows correspond to the vertices, and the columns to the edges: i11 i12 i1m i21 i22 i2m I= . (9.8) . . .. . . . . . . . in1 in2 inm


The edge el is called incident with the vertex vk if it starts at vk or if it ends at vk:

    ikl = 1, if el is incident with vk        k = 1, ..., n
          0, otherwise                        l = 1, ..., m      (9.9)

In a digraph, the directions of the edges are distinguished:

    ikl = +1, if el is positively incident with vk (edge el leaves the vertex vk)
          -1, if el is negatively incident with vk (edge el leads to the vertex vk)
           0, otherwise                                          (9.10)

In a digraph the elements of the incidence matrix satisfy:

    sum_{k=1}^{n} ikl = 0,    l = 1, ..., m                      (9.11)

The column sum equals zero because every edge starts at one vertex and leads to another one. This characteristic can be used for error checking. The matrices introduced above shall be illustrated in the following example (see Fig. 9.11):

Figure 9.11: Example of a graph.

The adjacency matrix reads (the graph contains the arrows 1->2, 2->3, 2->4, 3->4, 3->5, 4->5, 5->6, and 6->2):

            1 2 3 4 5 6
        1 [ 0 1 0 0 0 0 ]
        2 [ 0 0 1 1 0 0 ]
    A = 3 [ 0 0 0 1 1 0 ]                                        (9.12)
        4 [ 0 0 0 0 1 0 ]
        5 [ 0 0 0 0 0 1 ]
        6 [ 0 1 0 0 0 0 ]

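Both matrix representations are easy to set up programmatically. The following Python sketch builds them for the example graph, with the edge list read off the adjacency matrix (9.12); the assignment of the labels a-h to the individual arrows is our assumption, since the figure itself is not reproduced here.

```python
# Sketch: adjacency and incidence matrices for the example graph of Fig. 9.11.
edges = {"a": (1, 2), "b": (2, 3), "c": (2, 4), "d": (3, 4),
         "e": (3, 5), "f": (4, 5), "g": (5, 6), "h": (6, 2)}
n = 6

# adjacency matrix: A[i][j] = 1 if the arrow from vertex i+1 to j+1 exists
A = [[0] * n for _ in range(n)]
for (i, j) in edges.values():
    A[i - 1][j - 1] = 1

# incidence matrix of the digraph: +1 where an edge leaves a vertex,
# -1 where it leads to a vertex (cf. (9.10))
I = [[0] * len(edges) for _ in range(n)]
for l, (i, j) in enumerate(edges.values()):
    I[i - 1][l] = 1      # positively incident
    I[j - 1][l] = -1     # negatively incident

# every column sums to zero (9.11) -- usable for error checking
assert all(sum(I[k][l] for k in range(n)) == 0 for l in range(len(edges)))
```

The final assertion is exactly the error check suggested by (9.11).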

The incidence matrix reads correspondingly (with the arrows labeled a = (1,2), b = (2,3), c = (2,4), d = (3,4), e = (3,5), f = (4,5), g = (5,6), h = (6,2)):

             a  b  c  d  e  f  g  h
        1 [  1  0  0  0  0  0  0  0 ]
        2 [ -1  1  1  0  0  0  0 -1 ]
    I = 3 [  0 -1  0  1  1  0  0  0 ]                            (9.13)
        4 [  0  0 -1 -1  0  1  0  0 ]
        5 [  0  0  0  0 -1 -1  1  0 ]
        6 [  0  0  0  0  0  0 -1  1 ]

9.3.2.1 Models for Discrete Event Systems

The following model types are described in more detail in the following sections:

- automaton models
- Petri net models


9.3.2.2 Simulation Tools

Various tools exist for the simulation of discrete event systems:

- common programming languages: FORTRAN, C(++), PASCAL
- simulation languages for general DES: GPSS, SIMAN, HPSIM
- simulation systems for special DES: Simple++ for manufacturing systems
9.4 Automaton Models
Examples of automaton models are pay phones, washing machines, and money transaction machines. One characteristic of an automaton is that it is operated from outside. Thus, a sequence of discrete time events from outside is present. A time basis does not exist; therefore the residence time function is not needed. Because the system reacts exclusively to inputs, no state transition function is needed for the internal events (δIN) either. The input xi at the state zi uniquely determines the successive state zi+1 and the output yi:

    δEX(zi, xi) = zi+1                                           (9.14)
    λ(zi, xi) = yi                                               (9.15)

Therefore, an automaton model is a 5-tuple (x, z, y, δEX, λ).



Figure 9.12: State graph in an automaton model.

The state transition in the automaton model (state graph) is illustrated in Fig. 9.12. A soda machine serves as an example of an automaton. After inserting a coin you can choose between two possible drinks or you can recall the coin. Upon maloperation a signal sounds. First of all we define the inputs, the states, and the outputs of the model:

1. inputs
   x1 ... insert coin
   x2 ... choose drink 1
   x3 ... choose drink 2
   x4 ... recall coin

2. states
   z1 ... soda machine ready
   z2 ... amount of money sufficient

3. outputs
   y1 ... signal sounds
   y2 ... output drink 1
   y3 ... output drink 2
   y4 ... output coin

Fig. 9.13 shows the resulting state graph of the soda machine model.
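The state graph can be executed as a small lookup-table automaton. The following Python sketch is one plausible reading of Fig. 9.13 (the arcs for maloperation, e.g. choosing a drink without a coin, are assumptions, since the figure is not reproduced here); the two dictionaries play the roles of δEX and λ.

```python
# Sketch of the soda machine automaton; the transition table is an assumption.
delta = {  # state transition function delta_EX(z_i, x_i) -> z_{i+1}
    ("z1", "x1"): "z2",                       # insert coin -> money sufficient
    ("z1", "x2"): "z1", ("z1", "x3"): "z1", ("z1", "x4"): "z1",
    ("z2", "x1"): "z2",                       # second coin: maloperation
    ("z2", "x2"): "z1",                       # drink 1 dispensed, back to ready
    ("z2", "x3"): "z1",                       # drink 2 dispensed
    ("z2", "x4"): "z1",                       # coin returned
}
lam = {  # output function lambda(z_i, x_i) -> y_i  (None: no output)
    ("z1", "x1"): None,
    ("z1", "x2"): "y1", ("z1", "x3"): "y1", ("z1", "x4"): "y1",
    ("z2", "x1"): "y1",
    ("z2", "x2"): "y2", ("z2", "x3"): "y3", ("z2", "x4"): "y4",
}

def run(inputs, z="z1"):
    """Feed a sequence of external events through the automaton."""
    outputs = []
    for x in inputs:
        outputs.append(lam[(z, x)])
        z = delta[(z, x)]
    return z, outputs

# insert coin and take drink 1; choosing a drink without paying sounds y1
assert run(["x1", "x2"]) == ("z1", [None, "y2"])
assert run(["x2"]) == ("z1", ["y1"])
```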

  

Figure 9.13: State graph of the soda machine model.


9.5 Petri Nets


A Petri net is a special graph (Hanisch, 1992). It is a directed graph with two kinds of nodes, namely places and transitions. The elements of Petri nets are (see Fig. 9.14):

- Places are represented by circles and describe (possible) states.
- Transitions describe (possible) events and are represented as bars.
- Places and transitions are connected with directed edges. Thereby, connections between two places as well as connections between two transitions are not allowed.
- The actual system states are represented through tokens, which are stored in the places.

Tokens are produced, deleted, or moved by the firing of the transitions. The dynamic flow of a Petri net is represented by the transitions between the marked states (token distribution in all places). One distinguishes between two Petri net models:

- Discrete time (causal) models: Petri nets describe logically what happens in which sequence.
- Continuous time models: Timed Petri nets predict additionally when the events occur.

Figure 9.14: Graphical elements of a Petri net.

9.5.1 Discrete Time Petri Nets


The mathematical representation of Petri nets is given in matrix form. Thereby, one distinguishes the matrices for the pre- and the post-weights:

- W^-: matrix of pre-weights (connection of a place with a transition)
- W^+: matrix of post-weights (connection of a transition with a place)


The weights state how many tokens can be taken away or can enter according to the capacity restriction of a node. With the matrices W^- and W^+ the incidence matrix W of a Petri net can be calculated: W = W^+ - W^-.

As an example we will study the message exchange between a message receiver and a message transmitter. A transmitter sends a message to a receiver and waits for an answer (see Fig. 9.15).

Figure 9.15: Petri net model for message exchange.

The incidence matrices of this Petri net read:

             t1 t2 t3 t4                      t1 t2 t3 t4
        P1 [  1  0  0  0 ]               P1 [  0  0  0  1 ]
        P2 [  0  1  0  0 ]               P2 [  1  0  0  0 ]
  W^- = P3 [  0  0  1  0 ]         W^+ = P3 [  0  1  0  0 ]      (9.16)
        P4 [  0  1  0  0 ]               P4 [  0  0  1  0 ]
        P5 [  0  0  0  1 ]               P5 [  0  0  1  0 ]
        P6 [  0  0  0  1 ]               P6 [  1  0  0  0 ]

                          t1 t2 t3 t4
                     P1 [ -1  0  0  1 ]
                     P2 [  1 -1  0  0 ]
  W = W^+ - W^- =    P3 [  0  1 -1  0 ]                          (9.17)
                     P4 [  0 -1  1  0 ]
                     P5 [  0  0  1 -1 ]
                     P6 [  1  0  0 -1 ]


9.5.2 Simulation of Petri Nets


In order to simulate Petri nets, the information concerning the position of the tokens, the capacity restrictions, as well as the firing conditions have to be represented. For this purpose the following vectors are defined:

- Marking (state) vector m: vector of the numbers of tokens mi in all n places pi at the discrete time point r:

      m(r) = [m1(r), m2(r), ..., mn(r)]^T                        (9.18)

- Capacities ki of the places pi:

      k = [k1, k2, ..., kn]^T                                    (9.19)

  Thereby, the following restriction holds: mi(r) <= ki, i = 1, 2, ..., n.

- Transitions:

      t = [t1, t2, ..., tm]^T                                    (9.20)

In order to activate transitions, which leads to a change of the marking vector (m(r-1) -> m(r)), the following conditions have to be fulfilled:

1. In all pre-places a sufficient amount of tokens has to be available:

       for all pre-places pi of tj:  mi(r-1) >= wij^-   (wij^- != 0)        (9.21)

2. The capacity in all post-places is not allowed to exceed the maximal capacity after the firing of the transition:

       for all post-places pi of tj:  mi(r-1) <= ki + wij^- - wij^+         (9.22)

After the transition, the following holds:

       mi(r) = mi(r-1) - wij^- + wij^+ <= ki                     (9.23)

Both activation conditions have to be verified individually; their fulfillment is a necessary condition for firing. As the result one obtains the activation function uj for the transition tj:

       uj(r) = 1, if tj is activated
               0, otherwise                                      (9.24)

In the following, we assume that no conflicts occur. Then we obtain the new state with:

       mi(r) = mi(r-1) - wij^- uj(r) + wij^+ uj(r)               (9.25)

The new marking vector can be described by the introduction of a switching vector u at the time moment r:

       u(r) = [u1(r), u2(r), ..., um(r)]^T                       (9.26)
       m(r) = m(r-1) - W^- u(r) + W^+ u(r)                       (9.27)
       m(r) = m(r-1) + W u(r)                                    (9.28)
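The firing rule can be sketched directly from these equations. The Python fragment below uses the matrices W^- and W^+ of the message-exchange example (9.16); the initial marking (tokens in P1 and P4) and the unit capacities are assumptions read off Fig. 9.15.

```python
# Sketch: activation conditions (9.21)/(9.22) and firing rule (9.25)
# for the message-exchange Petri net. Rows: places, columns: transitions.
W_minus = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0],
           [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 0, 1]]
W_plus  = [[0, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, 0],
           [0, 0, 1, 0], [0, 0, 1, 0], [1, 0, 0, 0]]
k = [1] * 6                     # assumed capacity: one token per place
m = [1, 0, 0, 1, 0, 0]          # assumed initial marking: P1 and P4 marked

def activated(m, j):
    """Check the two activation conditions (9.21) and (9.22) for t_j."""
    pre_ok  = all(m[i] >= W_minus[i][j] for i in range(6))
    post_ok = all(m[i] <= k[i] + W_minus[i][j] - W_plus[i][j] for i in range(6))
    return pre_ok and post_ok

def fire(m, j):
    """New marking according to (9.25)."""
    return [m[i] - W_minus[i][j] + W_plus[i][j] for i in range(6)]

# the net cycles through four markings: t1, t2, t3, t4 fire in turn
for j in range(4):
    assert activated(m, j)
    m = fire(m, j)
assert m == [1, 0, 0, 1, 0, 0]  # back to the initial marking
```

The closed cycle of four markings is exactly the reachability graph discussed in the next section.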


9.5.3 Characteristics of Petri Nets


One advantage of Petri nets lies in the fact that many characteristics of control systems can be analyzed with the help of Petri nets. Therefore, we want to explain the most important characteristics of Petri nets in the following section.

9.5.3.1 Reachability

A state Ml is called reachable from a starting state M0 if there is a firing sequence which transfers M0 into Ml. The resulting reachability graph is the basis for many analysis methods. In the example of the message exchange the reachability graph is given in Fig. 9.16.

Figure 9.16: Reachability graph for the message exchange model.

One sees clearly that all states can be reached from any starting state. One uses the reachability graph, for instance, to check whether desired states are reachable and whether undesired states are unreachable. If this is not the case, the Petri net design has to be improved. Furthermore, one can avoid predecessors of dangerous states by the use of suitable design methods.

9.5.3.2 Boundedness and Safety

A Petri net is said to be bounded if in no single place pi more than a certain maximal amount of ki tokens is present. In the case of ki = 1 the Petri net is also said to be safe.

9.5.3.3 Deadlock

A deadlock in a Petri net is a transition (or a set of transitions) which cannot fire. We consider again the slightly modified example of the message exchange (see Fig. 9.17). Through the introduction of the new transitions t5 and t6 deadlocks appear.




Figure 9.17: Deadlock in the message exchange model.

9.5.3.4 Liveness

Transitions that can no longer be activated are said to be non-live. Fig. 9.18 shows an example of a deadlock-free but non-live Petri net including the reachability graph.

Figure 9.18: Non-live Petri net.

A live Petri net is always deadlock-free. However, the converse does not hold true. For practical uses, a Petri net should be live. Fig. 9.19 shows a live variant of the


Petri net for the modified example of the message exchange.

Figure 9.19: Live message exchange model.

9.5.4 Continuous Time Petri Nets


Continuous time Petri nets offer the possibility of representing the state-dependent and the time-dependent behavior in only one model. The temporal behavior is represented by timed Petri net elements. One possibility is a temporal restriction of the token permeability of the Petri net (see Fig. 9.20).

Figure 9.20: Token in a timed place (according to Kiencke, 1997).


Another possibility is a delay in the flow of the tokens (see Fig. 9.21). Hereby, the transition fires after the delay time τ has elapsed.

Figure 9.21: Delayed ow of the tokens (according to Kiencke, 1997).


10 Parameter Identification
10.1 Example
We consider the simulation of a bioprocess as an introductory example. It concerns the fermentation of Escherichia coli. Fig. 10.1 shows the measured values for biomass, substrate (glucose), and the product, which are obtained during a fermentation.
Figure 10.1: Measured values for a high cell density fermentation of Escherichia coli.

After a so-called lag phase the biomass grows exponentially. Through the addition of an inductor (IPTG) the product generation is started. Shortly afterwards, the product generation stops and the biomass dissolves (dies). The goal of the simulation task is to determine an optimal feed profile which delivers as much product as possible at the end of the fermentation. This task is performed according to the simulation procedure (see Fig. 1.4). A so-called compartment model is used as the model structure (Schneider, 1999). The structure of a simulation model is determined by setting up basic mathematical relations (e.g. balance equations). As the result of the model building, one obtains in this case a differential-algebraic system which consists of ten coupled nonlinear differential equations of first order and two nonlinear algebraic equations. This system can be solved according to the methods introduced in Chapter 7. The parameters of the model, which finally determine the concrete behavior of the model, still have to be defined and/or identified. The compartment model contains 30 parameters


(20 model parameters and 10 initial conditions). There are 15 parameters which can be determined using preliminary considerations and biological knowledge. The task of the parameter identification is to determine the remaining parameters of the simulation model on the basis of measurements. The general parameter identification procedure is depicted in Fig. 10.2 (Norton, 1997). The measurements are compared to the simulated values. If the deviations are significant, the parameters are changed until a good fit is obtained.

Figure 10.2: Parameter identification procedure.

In order to evaluate the deviations between the real outputs (measurements) and the simulated outputs (output quantities), an objective function is formulated. With the help of identification methods the parameters are determined in such a way that the objective function is minimized. The degrees of freedom for parameter identification are the choice of the objective function and the calculation rule for the determination of the model parameters.

10.2 Least Squares Method


We consider a linear single-input single-output system (SISO system). An extension to multiple-variable systems (MIMO systems) and multiple parameters is possible. The equation of the real system is

    yk = a xk + nk,    k = 1, ..., N                             (10.1)


with

    y : output or measurement,
    x : input,
    n : disturbance,
    a : parameter.

The model equation reads:

    ŷk = â xk.                                                   (10.2)

The error between the real system and the model is given by

    ek = yk - ŷk.                                                (10.3)

The aim of the parameter identification is the determination of the parameter a in a way that the objective function Q is minimized. The objective function evaluates the deviations between the simulated and the measured values. In case of the least squares method the objective function has the following form:

    Q = sum_{k=1}^{N} ek^2 = sum_{k=1}^{N} (yk - ŷk)^2 = sum_{k=1}^{N} (yk - â xk)^2        (10.4)

N denotes the number of measurements. The error is counted quadratically in the objective function in order to prevent the positive and negative deviations of the measurements from the model predictions from balancing each other. In vector form one obtains the following representation:

    x^T = [x1, x2, ..., xN],                                     (10.5)
    y^T = [y1, y2, ..., yN],                                     (10.6)
    ŷ^T = [ŷ1, ŷ2, ..., ŷN],                                     (10.7)
    e^T = [e1, e2, ..., eN].                                     (10.8)

The objective function arises accordingly as follows:

    Q = e^T e = (y - âx)^T (y - âx) = y^T y - 2â y^T x + â^2 x^T x.        (10.9)

We study the necessary and sufficient conditions for a minimum of Q. The necessary condition for the minimum of Q is:

    dQ/dâ = -2 y^T x + 2â x^T x = 0.                             (10.10)

The sufficient condition is always satisfied, because of

    d^2Q/dâ^2 = 2 x^T x > 0.                                     (10.11)

Therefore, the sought parameter â is given by:

    â = (y^T x) / (x^T x).                                       (10.12)

This equation is called regression equation.
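The regression equation can be tried out in a few lines of Python; the data below are synthetic (true parameter a = 2 plus a fixed disturbance), not taken from the lecture.

```python
# Sketch: regression equation (10.12) on synthetic data y_k = a*x_k + n_k.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
n = [0.1, -0.2, 0.05, -0.1, 0.15]   # fixed "disturbance" values
a_true = 2.0
y = [a_true * xk + nk for xk, nk in zip(x, n)]

# a_hat = (y^T x) / (x^T x)
a_hat = sum(yk * xk for yk, xk in zip(y, x)) / sum(xk * xk for xk in x)

assert abs(a_hat - a_true) < 0.05   # close to the true parameter
```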


10.3 Method of Weighted Least Squares


The method of least squares does not use any information about the disturbance n. If such information exists, it can be used for an improvement of the result of the parameter identification. The idea of the method of weighted least squares consists of weighting the errors ek with factors λk:

    Q = sum_{k=1}^{N} (1/λk) ek^2 = e^T Λ^{-1} e.                (10.13)

Λ is a weighting matrix. Information about the disturbance can be available e.g. in form of the variance matrix C. Therefore, this variance matrix is a sensible choice for Λ:

    Λ = C.                                                       (10.14)

The variance matrix has the following form:

    C = [ σ^2(n1)   0        ...  0
          0         σ^2(n2)  ...  0
          ...
          0         0        ...  σ^2(nN) ]                      (10.15)

With this you obtain as the objective function

    Q = (y - âx)^T C^{-1} (y - âx)
      = y^T C^{-1} y - 2â y^T C^{-1} x + â^2 x^T C^{-1} x.       (10.16)

The necessary condition for the minimum of the objective function leads to:

    dQ/dâ = -2 y^T C^{-1} x + 2â x^T C^{-1} x = 0.               (10.17)

The sought parameter â arises as follows:

    â = (y^T C^{-1} x) / (x^T C^{-1} x).                         (10.18)

This identification method is called Markov estimation if Λ is the covariance matrix of the disturbance. In many cases the expectation can be characterized by

    E[n(ti)] = E[n] = 0,                                         (10.19)
    E[ni nj^T] = δij C(ti).                                      (10.20)

This leads to a simplification of the objective function:

    Q = (1/σ^2) e^T e.                                           (10.21)


10.4 Multiple Inputs and Parameters


The introduced methods can be extended to the case of m inputs x and parameters a. With one output and N measurements the following equation is obtained:

    yk = a1 xk^1 + a2 xk^2 + ... + am xk^m + nk,    k = 1, ..., N.        (10.22)

In vector form the equations are y = X a + n, or

    [ y1 ]   [ x1^1 x1^2 ... x1^m ] [ a1 ]   [ n1 ]
    [ y2 ] = [ x2^1 x2^2 ... x2^m ] [ a2 ] + [ n2 ]              (10.23)
    [ ...]   [ ...             ...] [ ...]   [ ...]
    [ yN ]   [ xN^1 xN^2 ... xN^m ] [ am ]   [ nN ]

The model equations read:

    ŷ = X â.                                                     (10.24)

Correspondingly, the objective function is:

    Q = e^T e = (y - ŷ)^T (y - ŷ) = (y - X â)^T (y - X â)
      = y^T y - â^T X^T y - y^T X â + â^T X^T X â.               (10.25)

The necessary condition

    dQ/dâ = -2 X^T y + 2 X^T X â = 0                             (10.26)

leads to the normal equation

    X^T X â = X^T y,                                             (10.27)

with which the sought parameters â can be calculated:

    â = (X^T X)^{-1} X^T y.                                      (10.28)

If X^T X is ill-conditioned, numerical problems arise when inverting the matrix.
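A minimal sketch of the normal equation (10.27) for m = 2 parameters; solving the 2x2 system by Cramer's rule is our choice for illustration only (in practice a linear solver, cf. Gauss elimination, is used). The data are invented, roughly y = 2 x + 1.

```python
# Sketch: normal equation X^T X a = X^T y for two parameters (slope, offset).
X = [[1.0, 1.0], [2.0, 1.0], [3.0, 1.0], [4.0, 1.0]]   # rows x_k^T
y = [3.1, 4.9, 7.2, 8.8]

# build X^T X (2x2) and X^T y (2-vector)
XtX = [[sum(r[i] * r[j] for r in X) for j in range(2)] for i in range(2)]
Xty = [sum(r[i] * yk for r, yk in zip(X, y)) for i in range(2)]

# solve the 2x2 normal equation by Cramer's rule
det = XtX[0][0] * XtX[1][1] - XtX[0][1] * XtX[1][0]
a1 = (Xty[0] * XtX[1][1] - XtX[0][1] * Xty[1]) / det
a2 = (XtX[0][0] * Xty[1] - Xty[0] * XtX[1][0]) / det

assert abs(a1 - 2.0) < 0.2 and abs(a2 - 1.0) < 0.5
```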

10.5 Recursive Regression


If one wants to (quickly) carry out online parameter identification, the method of recursive regression is applied. In this method, the parameter âk+1 is estimated on the basis of each new measured value yk+1 using previously calculated quantities. This possibility arises if the parameter estimation can be carried out in parallel with consecutive measuring (e.g. for adaptive control). The goal is the determination of new estimates with the help of the old ones and the updated measurements.


The k-th estimate (up to the present, k measurements are available) is given as (see (10.28)):

    âk = (Xk^T Xk)^{-1} Xk^T yk.                                 (10.29)

The (k+1)-th estimate is:

    âk+1 = (Xk+1^T Xk+1)^{-1} Xk+1^T yk+1.                       (10.30)

With Xk+1^T = [Xk^T | xk+1] and yk+1^T = [yk^T | yk+1] the right-hand side splits into the old part and the new measurement:

    Xk+1^T yk+1 = Xk^T yk + xk+1 yk+1,                           (10.31)

and one obtains

    âk+1 = (Xk+1^T Xk+1)^{-1} (Xk^T yk + xk+1 yk+1).             (10.32)

(10.29) can be transformed to

    Xk^T Xk âk = Xk^T yk.                                        (10.33)

This equation is inserted into (10.32):

    âk+1 = âk + [(Xk+1^T Xk+1)^{-1} Xk^T Xk - I] âk + (Xk+1^T Xk+1)^{-1} xk+1 yk+1,        (10.34)

where the bracketed term is the extension of the old estimate. We now define the matrix Sk as

    Sk = (Xk^T Xk)^{-1}                                          (10.35)

and obtain

    âk+1 = âk + (Sk+1 Sk^{-1} - I) âk + Sk+1 xk+1 yk+1.          (10.36)

Sk+1 reads

    Sk+1 = (Xk^T Xk + xk+1 xk+1^T)^{-1} = (Sk^{-1} + xk+1 xk+1^T)^{-1},        (10.37)

therefore Sk^{-1} can be written as

    Sk^{-1} = Sk+1^{-1} - xk+1 xk+1^T.                           (10.38)

The estimation equation of the recursive regression is

    âk+1 = âk + Sk+1 xk+1 (yk+1 - xk+1^T âk),    â0 = 0,         (10.39)

where the bracketed term is the prediction error, and with Sk+1 given (because of the matrix inversion lemma) by:

    Sk+1 = Sk - Sk xk+1 xk+1^T Sk / (1 + xk+1^T Sk xk+1),    S0 = α I.        (10.40)
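For the scalar case m = 1, (10.39) and (10.40) reduce to a few lines. The sketch below uses invented data and a large initial value S0; with this choice the recursive estimate practically coincides with the batch solution (10.12).

```python
# Sketch: recursive regression (10.39)/(10.40) for a single parameter.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.8, 6.1, 8.2, 9.9]    # roughly a = 2

a_hat, S = 0.0, 1e6              # a_0 = 0, S_0 = alpha (large)
for xk, yk in zip(x, y):
    S = S - (S * xk * xk * S) / (1.0 + xk * S * xk)      # (10.40)
    a_hat = a_hat + S * xk * (yk - xk * a_hat)           # (10.39)

# batch solution (10.12) for comparison
a_batch = sum(yk * xk for yk, xk in zip(y, x)) / sum(xk * xk for xk in x)
assert abs(a_hat - a_batch) < 1e-3
```

Each step only updates two scalars, which is what makes the method attractive for online use.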


10.6 General Parameter Estimation Problem


In contrast to the previously considered problems, we now consider the general case of a nonlinear estimation problem. Hereby a parameter vector p is sought which minimizes a given objective function Q:

    dx/dt = f(x, p, t),    x(0) = x0(p),                         (10.41)
    ŷ(ti) = g(x, p, ti),                                         (10.42)
    Q = h(y, ŷ, p).                                              (10.43)

The already introduced quadratic objective function is used here as well. A direct solution for p is not possible; rather, it involves a nonlinear optimization problem which has to be solved with optimization methods. An optimization method is an algorithm which changes the sought parameters until the value of the objective function is as small as possible. Optimization methods, as briefly introduced in the following, can be used for different purposes. Once a model exists which represents the measured values well (for example after a parameter identification), one can use optimization methods e.g. for design improvement etc. (compare with the simulation procedure, use of the simulator). Out of the wide field of mathematical optimization methods, only the so-called hill-climbing methods shall be considered in the lecture. Different hill-climbing methods can be distinguished (Hoffmann and Hofmann, 1971):

- Search methods: only the objective function Q(p) is evaluated.
- Gradient methods: the objective function Q(p) and the derivative (the gradient) Qp(p) are evaluated.
- Newton methods: the objective function Q(p), the gradient Qp(p), and the second derivative Qpp(p) are evaluated.

In the following sections the search methods are briefly discussed. In order to understand these methods, one has to clarify the connection between the sought parameters and the objective function. For this reason we consider in Fig. 10.3 the representation of an objective function in a two-dimensional space with respect to the sought parameters. The lines with the same value of the objective function (level curves) can be e.g. ellipses. On such a level curve the value of the objective function remains the same independently of the combination of the parameters. In this example the minimum is at Q*.

10.6.1 Search Methods


In search methods the optimum is determined simply through an evaluation of the objective function. This offers advantages especially in cases in which the derivatives are very difficult to calculate or cannot be made available at all. In general, search methods are less efficient than gradient or Newton methods.


  

Figure 10.3: Objective function in the two-dimensional space.

10.6.1.1 Successive Variation of Variables

The method of the successive variation of variables starts at an initial point p(1). One varies p1 (for constant p2) until Q(p) reaches a minimum. Then one varies p2 (with the new (constant) value for p1) until the objective function reaches a minimum (see Fig. 10.4). Then again p1 is varied, and so forth.


Figure 10.4: Successive variation of variables.

The new parameter vector is given by:

    p(j+1) = p(j) + αj v(j),    j = 1, ..., n.                   (10.44)

The search direction v(j) is determined through the choice of the coordinate system. The change of the objective function determines the step length αj along v(j).
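The method can be sketched in a few lines of Python for a hypothetical quadratic objective; the fixed step size and the sweep count are arbitrary choices for illustration, not part of the method description above.

```python
# Sketch: successive variation of variables on Q(p) = (p1-1)^2 + 2*(p2+0.5)^2.
def Q(p):
    return (p[0] - 1.0) ** 2 + 2.0 * (p[1] + 0.5) ** 2

p = [0.0, 0.0]            # initial point p^(1)
step = 0.05               # fixed grid step (assumption)
for _ in range(100):      # sweeps over the coordinates
    for i in range(len(p)):
        for direction in (+step, -step):
            while True:   # march along the coordinate while Q improves
                candidate = p[:]
                candidate[i] = p[i] + direction
                if Q(candidate) < Q(p):
                    p = candidate
                else:
                    break
assert Q(p) < 1e-2        # close to the minimum at (1, -0.5)
```

Note that the method only reaches points on the chosen step grid; a practical implementation would shrink the step size once no coordinate move improves Q anymore.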


10.6.1.2 Simplex Methods

For the simplex method, at the beginning n + 1 points are determined (in Fig. 10.5 the points 1, 2, 3), defining an n-dimensional simplex.








Figure 10.5: Simplex method.

For these points the objective function is evaluated. The point with the worst objective function value is reflected in the centroid of the remaining n points (reflection). For this new parameter combination (point 4 in Fig. 10.5) the objective function is evaluated, and again the point with the worst objective function value is reflected, and so on. For the use of such optimization methods it is not guaranteed that the global optimum is always found.








Figure 10.6: Problem within the simplex method.


In Fig. 10.6 such a case is depicted for the simplex method. Because of the structure of the level curves the simplex method remains in a local minimum; the global optimum is not found.

10.6.1.3 Nelder-Mead Method

The Nelder-Mead method is a modified simplex method. In addition to the reflection used in the simplex method, other possibilities of determining new parameter combinations are considered (see Fig. 10.7). In doing so, a search is performed in one direction as long as the objective function improves.
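The Nelder-Mead moves can be sketched compactly. The implementation below uses the usual coefficients (reflection 1, expansion 2, contraction 1/2, shrink 1/2) and, as a simplification, only the inside contraction; the quadratic objective is invented for illustration.

```python
# Sketch of the Nelder-Mead moves for an n-parameter objective Q.
def nelder_mead(Q, simplex, iters=300):
    for _ in range(iters):
        simplex.sort(key=Q)                       # best ... worst
        best, worst = simplex[0], simplex[-1]
        n = len(best)
        centroid = [sum(p[i] for p in simplex[:-1]) / (len(simplex) - 1)
                    for i in range(n)]
        refl = [c + (c - w) for c, w in zip(centroid, worst)]
        if Q(refl) < Q(best):                     # very good: try expansion
            exp = [c + 2.0 * (c - w) for c, w in zip(centroid, worst)]
            simplex[-1] = exp if Q(exp) < Q(refl) else refl
        elif Q(refl) < Q(simplex[-2]):            # plain reflection
            simplex[-1] = refl
        else:                                     # contract towards centroid
            con = [c + 0.5 * (w - c) for c, w in zip(centroid, worst)]
            if Q(con) < Q(worst):
                simplex[-1] = con
            else:                                 # shrink towards the best
                simplex = [[b + 0.5 * (pi - b) for b, pi in zip(best, p)]
                           for p in simplex]
    return min(simplex, key=Q)

Q = lambda p: (p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2
p_opt = nelder_mead(Q, [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
assert Q(p_opt) < 1e-6
```

A production implementation would also add a convergence test on the simplex size instead of a fixed iteration count.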


Figure 10.7: Nelder-Mead method.

With the Nelder-Mead method 15 parameters of the compartment model of the introductory example are determined. In Fig. 10.8 the simulation results calculated with the optimal parameter set are compared to the measurements.
Figure 10.8: Simulated trajectories after parameter identification.

11 Summary
This chapter gives a summary of the course oriented along the simulation procedure (see Fig. 1.4) and Page (1991).

11.1 Problem Definition


In order to define the simulation problem, we have to start with the definition of the questions of interest and the goals we want to reach with the help of simulation. The problem definition corresponds to the definition of the system's borders. Therefore, in the following we are going to define the term system in more detail. A system is a set of interrelated parts (components), which interact to reach a common goal (see Fig. 11.1).

Figure 11.1: Basic concepts in systems theory (according to Page, 1991).

If a system component cannot be further decomposed, it is called a system element. These system elements have specific characteristics (attributes). Changes in these characteristics correspond to changes in the state variables. The set of all values of the state variables forms the state of the system. One can distinguish between the following systems:


- open system: at least one input or output / closed system: no inputs or outputs
- static system: no time-dependent change of the state / dynamic system: time-dependent change of the state
- cybernetic system: with feedback loops / non-cybernetic system: without feedback loops

11.2 Modeling
Among other things, models can be classified according to their transformation and their analysis method (see Fig. 11.2), according to their state transition (see Fig. 11.3), or according to the application of the model (see Fig. 11.4).

Figure 11.2: Classification of models according to the used transformation and analysis method (according to Page, 1991).

Classification of Dynamic Mathematical Models

As illustrated in the course, dynamic (graphical) mathematical models play an important role in simulation techniques. Therefore, they are going to be discussed and classified in more detail in this section.


Figure 11.3: Classification of models according to the type of state transition (according to Page, 1991).

Figure 11.4: Classification of models according to their application (according to Page, 1991).

Time continuous and time discrete systems: In time continuous systems the state variables are continuous functions of time. Their mathematical representation can be given by systems of ordinary differential equations of first order, by algebraic equation systems, or by a combination of both (DAE system). Examples which have been discussed during the course are the vertical movement of a motor vehicle and an electrical circuit.

Discrete event systems: Discrete event systems are a special case of time discrete systems. The time steps during the process are determined by incidentally occurring external events or by functions of the actual values of the state variables. The graphical mathematical representation is carried out e.g. by Petri nets. In the course we studied the examples of a soda machine and a traffic light circuit.

Lumped and distributed systems: In contrast to distributed systems, lumped systems include a finite number of states. The mathematical representation of distributed systems is given by partial differential equations, which comprise not only derivatives with respect to time but also with respect to the space coordinates. In the course, the example of a heat conductor has been studied.

Time invariant and time variant systems: In time invariant systems the system function is independent of time (autonomous system). Therefore, the model parameters are time independent as well.

Linear and nonlinear systems: In linear systems the superposition principle holds true:

    f(u1 + u2) = f(u1) + f(u2)                                   (11.1)

The predator-prey model has been introduced as an example of a nonlinear model.

Steady-state and dynamic models: In contrast to dynamic models, the behavior of steady-state models is time independent. An example is a chemical reaction system.

11.3 Numerics

The numerical methods treated in the course shall be listed in note form in this section:

- equation systems
  - linear: Gauß elimination
  - nonlinear: Newton-Raphson method
- ordinary differential equation systems
  - implicit/explicit methods
  - one-step methods/multi-step methods
- differential-algebraic systems: solvability
- partial differential equations
  - method of lines, method of differences
  - method of weighted residuals
- discrete event systems
  - graphical representation
  - mathematical representation

11.4 Simulators
Simulation software can be distinguished according to different criteria:


11.4.1 Application Level

- Software on the level of programming languages: compilers or interpreters of a programming language which is suitable for the implementation of simulation models (e.g. Matlab)
- Software on the level of models: completely implemented models, only the input parameters are freely selectable (e.g. flight simulator)
- Software on the level of support systems: integrated, interactive working environment (e.g. Simulink)


11.4.2 Level of Problem Orientation

- universally usable software: e.g. compilers for general high-level programming languages
- simulation specific software: software especially developed for the purpose of simulation (e.g. Simulink)
- software oriented along specific classes of problems and specific model types
- software oriented along the problem definition of one application domain

11.4.3 Language Level

- general programming languages: e.g. FORTRAN, C(++), PASCAL
- simulation packages: collection (library) of procedures and functions
- simulation languages:
  - low-level simulation languages
  - high-level simulation languages: implementation of models is eased
  - systems-oriented simulation languages: direct modeling of specific system classes

11.4.4 Structure of a Simulation System


Fig. 11.5 shows the general structure of a simulation system. Thereby, the modules modeling component, graphical user interface, results analysis, model and method data base, experiment control, and simulation data base can be distinguished. Not all of these modules have to be present in a simulation system. Furthermore, the complexity of these modules varies significantly between different simulation systems.


Figure 11.5: Structure of a simulation system (according to Page, 1991).

Simulation environments can be classified according to the following criteria and characteristics:

- with or without programming environment
- which model classes are supported
- complexity of the model library
- complexity of the numerical methods
- online or offline graphics
- possible simulation experiments
- open or closed structure (interfaces)
- carrier language: FORTRAN, C, ...
- compiler or interpreter
- platform: PC, VAX, UNIX, ...
- equation- or block-oriented

The differences between equation- and block-oriented simulators are illustrated by the following example.


Figure 11.6: Block-oriented representation of a simulation model.

Example: In Fig. 11.6, the block-oriented representation of a simulation model is given.

The equations of all model blocks are given by:

    f1(x11, x12, y11) = 0                                        (11.2)
    f2(x21, y21) = 0                                             (11.3)
    f3(x31, y31, y32) = 0                                        (11.4)
    f4(x41, x42, y41) = 0                                        (11.5)

The connections between the model blocks are described by:

    x12 - y31 = 0                                                (11.6)
    x21 - y11 = 0                                                (11.7)
    x31 - y21 = 0                                                (11.8)
    x42 - y32 = 0                                                (11.9)

An equation-oriented simulator solves (11.2)-(11.9) simultaneously. There is no block structure identifiable in the equations anymore. For this reason, an equation-oriented simulator has no graphical representation (input) of the model structure.

A block-oriented simulator solves each block separately in a specific order. From (11.2)-(11.5) the equations of the block-oriented simulator can be derived:

    y11 = g11(x11, x12)                                          (11.10)
    y21 = g21(x21)                                               (11.11)
    y31 = g31(x31)                                               (11.12)
    y32 = g32(x31)                                               (11.13)
    y41 = g41(x41, x42)                                          (11.14)

The known inputs of the system are: x11 and x41 . One possible solution strategy for the equations is the following:


1. Estimate x12
2. Calculate y11, then y21, y31, y32
3. Check x12 - y31 = 0?
   a) yes: calculate y41
   b) no: set x12 = y31 and repeat step 2
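This strategy is an ordinary fixed-point iteration on the torn stream x12. A sketch with hypothetical block functions g11, g21, g31, g32, g41 (chosen here so that the iteration converges; they are not part of the lecture example):

```python
# Sketch of the tear-stream strategy above with invented block functions.
def g11(x11, x12): return 0.5 * (x11 + x12)   # block 1
def g21(x21):      return 0.8 * x21           # block 2
def g31(x31):      return x31 + 1.0           # block 3, first output
def g32(x31):      return 2.0 * x31           # block 3, second output
def g41(x41, x42): return x41 - x42           # block 4

x11, x41 = 1.0, 3.0        # known inputs
x12 = 0.0                  # step 1: estimate the torn stream x12
for _ in range(100):
    y11 = g11(x11, x12)    # step 2: evaluate the blocks in sequence
    y21 = g21(y11)
    y31 = g31(y21)         # connection x31 = y21
    if abs(x12 - y31) < 1e-10:   # step 3a: converged
        break
    x12 = y31                    # step 3b: update the estimate and repeat
y32 = g32(g21(g11(x11, x12)))    # remaining outputs
y41 = g41(x41, y32)              # connection x42 = y32
```

With these block functions the loop gain is 0.4, so the direct substitution converges; for loop gains above 1 a block-oriented simulator would need a damped or Newton-type update of the torn stream instead.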

11.5 Parameter Identification


In parameter identification, the parameters of a model are calculated in such a way that the differences between simulated and measured values are minimized. To this end, these differences are squared and summed up in an objective function, which is then minimized by a suitable algorithm. Two cases can be distinguished:

- direct solution: (weighted) least squares method
- indirect solution using optimization methods
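As a minimal sketch of the direct case: for a model that is linear in its parameters, the weighted least-squares problem can be solved in closed form via the normal equations, without an iterative optimizer. The linear model y = a*t + b and the measurement data below are invented for illustration.

```python
# Direct (weighted) least-squares identification of the parameters a, b
# of a linear model y = a*t + b: minimize sum_i w_i*(y_i - a*t_i - b)^2
# by solving the 2x2 normal equations in closed form.

def weighted_least_squares(t, y_meas, w):
    Sw  = sum(w)
    St  = sum(wi * ti for wi, ti in zip(w, t))
    Stt = sum(wi * ti * ti for wi, ti in zip(w, t))
    Sy  = sum(wi * yi for wi, yi in zip(w, y_meas))
    Sty = sum(wi * ti * yi for wi, ti, yi in zip(w, t, y_meas))
    det = Sw * Stt - St * St          # determinant of the normal matrix
    a = (Sw * Sty - St * Sy) / det
    b = (Stt * Sy - St * Sty) / det
    return a, b

t = [0.0, 1.0, 2.0, 3.0]
y_meas = [1.0, 3.1, 4.9, 7.0]        # noisy samples of y = 2*t + 1
w = [1.0, 1.0, 1.0, 1.0]             # equal measurement weights
a, b = weighted_least_squares(t, y_meas, w)
```

A nonlinear model has no such closed form; there, the indirect route applies and the objective function is handed to a general optimization method.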
11.6 Use of Simulators
The major reasons for the use of simulators are:

- The system is not available.
- Experiments within the system are too dangerous.
- The experiment costs are too high.
- The time constants of the system are too large.
- The variables of interest are difficult to measure or not measurable at all.
- There are disturbances within the real system.


11.7 Potentials and Problems



In spite of the many different possibilities and chances offered by simulation, you have to make sure that the use of simulation for a specific problem is justified (and affordable). Therefore, the potentials and problems of modeling and simulation are summarized in Fig. 11.7.

Figure 11.7: Potentials and problems of modeling and simulation (according to Page, 1991).




Bibliography
Åström, K., Elmqvist, H. and Mattsson, S. (1998). Evolution of continuous-time modeling and simulation, 12th European Simulation Multiconference, ESM 98, Manchester, pp. 1-10.
Bachmann, B. and Wiesmann, H. (2000). Objektorientierte Modellierung Physikalischer Systeme, Teil 15, at - Automatisierungstechnik 48: A57-A60.
Bauknecht, K., Kohlas, J. and Zehnder, C. (1976). Simulationstechnik, Springer, Berlin.
Beater, P. (2000). Objektorientierte Modellierung Physikalischer Systeme, Teil 10, at - Automatisierungstechnik 48: A37-A40.
Bennett, B. (1995). Simulation Fundamentals, Prentice Hall.
Bischoff, K. and Himmelblau, D. (1967). Fundamentals of Process Analysis and Simulation, number 1 in AIChE Continuing Education Series, American Institute of Chemical Engineers, New York.
Bossel, H. (1992). Modellbildung und Simulation, Vieweg & Sohn Verlagsgesellschaft mbH, Braunschweig.
Bratley, P., Fox, B. and Schrage, L. (1987). A Guide to Simulation, Springer, New York.
Breitenecker, F., Troch, I. and Kopacek, P. (eds) (1990). Simulationstechnik, Vieweg, Braunschweig.
Cellier, F. (1992). Simulation modelling formalism: Ordinary differential equations, in D. Atherton and P. Borne (eds), Concise Encyclopedia of Modelling & Simulation, Pergamon Press, Oxford, pp. 420-423.
Clauß, C., Schneider, A. and Schwarz, P. (2000a). Objektorientierte Modellierung Physikalischer Systeme, Teil 13, at - Automatisierungstechnik 48: A49-A52.
Clauß, C., Schneider, A. and Schwarz, P. (2000b). Objektorientierte Modellierung Physikalischer Systeme, Teil 14, at - Automatisierungstechnik 48: A53-A56.
Dahmen, W. (1994). Einführung in die numerische Mathematik (für Maschinenbauer), Institut für Geometrie und Praktische Mathematik, RWTH Aachen.
Engeln-Müllges, G. and Reutter, F. (1993). Numerik-Algorithmen mit Fortran 77-Programmen, BI Wissenschaftsverlag, Mannheim.



Fishman, G. (1978). Principles of Discrete Event Simulation, John Wiley, New York.
Föllinger, O. and Franke, D. (1982). Einführung in die Zustandsbeschreibung dynamischer Systeme, Oldenbourg, München.
Fritzson, P. (1998). Modelica - a language for equation-based physical modeling and high performance simulation, Applied Parallel Computing 1541: 149-160.
Fritzson, P. and Engelson, V. (1998). Modelica - a unified object-oriented language for system modeling and simulation, ECOOP'98 (the 12th European Conference on Object-Oriented Programming).
Gipser, M. (1999). Systemdynamik und Simulation, B.G. Teubner, Stuttgart.
Gordon, G. (1969). System Simulation, Prentice-Hall, Englewood Cliffs.
Hanisch, H.-M. (1992). Petri-Netze in der Verfahrenstechnik, Oldenbourg, München.
Hoffmann, U. and Hofmann, H. (1971). Einführung in die Optimierung, Verlag Chemie, Weinheim.
Hohmann, R. (1999). Methoden und Modelle der Kontinuierlichen Simulation, Shaker, Aachen.
Holzinger, M. (1996). Modellorientierte Parallelisierung von Modellen mit verteilten Parametern, Master's thesis, Technische Universität Wien.
Jain, R. (1991). The Art of Computer Systems Performance Analysis: Techniques for Experimental Design, Measurement, Simulation, and Modeling, John Wiley & Sons, New York.
Kiencke, U. (1997). Ereignisdiskrete Systeme, Oldenbourg, München.
Köcher, D., Matt, G., Oertel, C. and Schneeweiß, H. (1972). Einführung in die Simulationstechnik, Deutsche Gesellschaft für Operations Research.
Korn, G. and Wait, J. (1983). Digitale Simulation kontinuierlicher Systeme, Oldenbourg, München.
Kramer, U. and Neculau, M. (1998). Simulationstechnik, Carl Hanser, München.
Lapidus, L. and Pinder, G. (1982). Numerical Solution of Partial Differential Equations in Science and Engineering, John Wiley & Sons, New York.
Law, A. and Kelton, W. (1991). Simulation Modeling and Analysis, McGraw Hill, New York.
Lehman, R. (1977). Computer Simulation and Modeling: An Introduction, Lawrence Erlbaum, Hillsdale.



Liebl, F. (1992). Simulation - Problemorientierte Einführung, Oldenbourg, München.
Lotka, A. J. (1925). Elements of Physical Biology, Williams and Wilkins, Baltimore.
Margolis, D. (1992). Simulation modelling formalism: Bond graphs, in D. Atherton and P. Borne (eds), Concise Encyclopedia of Modelling & Simulation, Pergamon Press, Oxford, pp. 415-420.
Marquardt, W. (1991). Dynamic process simulation - recent progress and future challenges, in W. Ray and Y. Arkun (eds), Chemical Process Control - CPC IV.
MathWorks (2000). Using MATLAB, Version 6, The MathWorks, Inc., Natick, USA.
MathWorks (2002). Simulink for model-based and system-level design. Product information online available at http://www.mathworks.com/ (viewed 25 March 2002).
Meadows, D. L., Meadows, D. H. and Zahn, E. (2000). Die Grenzen des Wachstums. Bericht des Club of Rome zur Lage der Menschheit, Deutsche Verlags-Anstalt.
Mezencev, R. (1992). Simulation languages, in D. Atherton and P. Borne (eds), Concise Encyclopedia of Modelling & Simulation, Pergamon Press, Oxford, pp. 409-415.
Minsky, M. (1965). Matter, mind and models, in A. Kalenich (ed.), Proceedings International Federation of Information Processing Congress, Spartan Books, Washington, pp. 45-49.
Möller, D. (1992). Modellbildung, Simulation und Identifikation dynamischer Systeme, Springer, Berlin.
Nidumolu, S., Menon, N. and Zeigler, B. (1998). Object-oriented business process modeling and simulation: A discrete event system specification framework, Simulation Practice and Theory 6: 533-571.
Norton, J. (1997). An Introduction to Identification, Academic Press.
Odum, H. and Odum, E. (2000). Modeling for All Scales: An Introduction to System Simulation, Academic Press, San Diego.
Otter, M. (1999a). Objektorientierte Modellierung Physikalischer Systeme, Teil 1, at - Automatisierungstechnik 47: A1-A4.
Otter, M. (1999b). Objektorientierte Modellierung Physikalischer Systeme, Teil 2, at - Automatisierungstechnik 47: A5-A8.
Otter, M. (1999c). Objektorientierte Modellierung Physikalischer Systeme, Teil 3, at - Automatisierungstechnik 47: A9-A12.
Otter, M. (1999d). Objektorientierte Modellierung Physikalischer Systeme, Teil 4, at - Automatisierungstechnik 47: A13-A16.



Otter, M. (2000). Objektorientierte Modellierung Physikalischer Systeme, Teil 17, at - Automatisierungstechnik 48: A65-A68.
Otter, M. and Bachmann, B. (1999a). Objektorientierte Modellierung Physikalischer Systeme, Teil 5, at - Automatisierungstechnik 47: A17-A20.
Otter, M. and Bachmann, B. (1999b). Objektorientierte Modellierung Physikalischer Systeme, Teil 6, at - Automatisierungstechnik 47: A21-A24.
Otter, M., Elmqvist, H. and Mattson, S. (1999). Objektorientierte Modellierung Physikalischer Systeme, Teil 8, at - Automatisierungstechnik 47: A29-A32.
Otter, M., Elmqvist, H. and Mattsson, S. (1999). Objektorientierte Modellierung Physikalischer Systeme, Teil 7, at - Automatisierungstechnik 47: A25-A28.
Otter, M. and Schlegel, M. (1999). Objektorientierte Modellierung Physikalischer Systeme, Teil 9, at - Automatisierungstechnik 47: A33-A36.
Page, B. (1991). Diskrete Simulation. Eine Einführung mit Modula-2, Springer, Berlin.
Piehler, J. and Zschiesche, H.-U. (1976). Simulationsmethoden, BSB B.G. Teubner, Leipzig.

Press, W. H., Flannery, B. P., Teukolsky, S. A. and Vetterling, W. T. (1990). Numerical Recipes - The Art of Scientific Computing, Cambridge University Press, Cambridge.
Roberts, N. et al. (1994). Introduction to Computer Simulation: A System Dynamics Modeling Approach, Productivity Press.
Sadoun, B. (2000). Applied system simulation: A review study, Information Sciences 124: 173-192.
Savic, D. and Savic, D. (1989). BASIC Technical Systems Simulation, Butterworth & Co, London.
Schiesser, W. (1991). The Numerical Method of Lines, Academic Press, San Diego.
Schneider, R. (1999). Untersuchung eines adaptiven prädiktiven Regelungsverfahrens zur Optimierung von bioverfahrenstechnischen Prozessen, Fortschritt-Berichte VDI, Reihe 8, Nr. 855, VDI Verlag, Düsseldorf.
Schöne, A. (1973). Simulation Technischer Systeme, Carl Hanser.
Schumann, R. (1998). CAE von Regelsystemen, atp - Automatisierungstechnische Praxis 40: 48-63.
Schwarze, G. (1990). Digitale Simulation, Akademie-Verlag, Berlin.
Shannon, R. (1975). Systems Simulation: The Art and Science, Prentice Hall, Englewood Cliffs.



Tummescheit, H. (2000a). Objektorientierte Modellierung Physikalischer Systeme, Teil 11, at - Automatisierungstechnik 48: A41-A44.
Tummescheit, H. (2000b). Objektorientierte Modellierung Physikalischer Systeme, Teil 12, at - Automatisierungstechnik 48: A45-A48.
Tummescheit, H. and Tiller, M. (2000). Objektorientierte Modellierung Physikalischer Systeme, Teil 16, at - Automatisierungstechnik 48: A61-A64.
Tzafestas, S. (1992). Simulation modelling formalism: Partial differential equations, in D. Atherton and P. Borne (eds), Concise Encyclopedia of Modelling & Simulation, Pergamon Press, Oxford, pp. 423-428.
Verein Deutscher Ingenieure (1996). VDI-Richtlinie 3633: Simulation von Logistik-, Materialfluss- und Produktionssystemen - Begriffsdefinitionen.
Volterra, V. (1926). Variations and fluctuations of the number of individuals in animal species living together, in R. N. Chapman (ed.), Animal Ecology, McGraw-Hill, New York, pp. 409-448.
Watzdorf, R., Allgöwer, F., Helget, A., Marquardt, W. and Gilles, E. (1994). Dynamische Simulation verfahrenstechnischer Prozesse und Anlagen - Ein Vergleich von Werkzeugen, in G. Kampe and M. Zeitz (eds), Simulationstechnik, 9. ASIM-Symposium in Stuttgart, pp. 83-88.
Wöllhaf, K. (1995). Objektorientierte Modellierung und Simulation verfahrenstechnischer Mehrproduktanlagen, Shaker, Aachen.
Woolfson, M. and Pert, G. (1999). An Introduction to Computer Simulation, Oxford University Press, New York.
Zeigler, B. (1984a). Multifacetted Modelling and Discrete Event Simulation, Academic Press, chapter Multifacetted Modelling: Motivation and Perspective, pp. 3-19.
Zeigler, B. (1984b). Multifacetted Modelling and Discrete Event Simulation, Academic Press, chapter System Architectures for Multifacetted Modelling, pp. 335-371.
Zeitz, M. (1999). Simulationstechnik. Lecture notes, Institut für Systemdynamik und Regelungstechnik, Universität Stuttgart.
