
Lecture 3:

ODEs, Direction Fields, DSolve, NDSolve, Lyapunov Exponents

Differential equations are used to model physical systems.


General form of an Nth order ODE (the order is the order of the highest derivative):
d^N x / dt^N = f(t, x, dx/dt, ..., d^(N-1) x / dt^(N-1))

Plus N initial conditions (for derivatives up to order N-1):


x(0) = x0,  x'(0) = v0,  x''(0) = a0,  ...,  up to x^(N-1)(0)
If the derivative of f with respect to each of its arguments is continuous over an interval containing the initial conditions,
then the solution of the ODE with those initial conditions exists and is unique.

The equation alone has a family of solutions, or general solution. The initial conditions are necessary to define a particular
solution. This is because the solution of an ODE of Nth order is obtained by N integrations, each yielding a constant of
integration.

For example, the first order ODE that expresses Newton's 2nd law is:

dv/dt = (1/m) F(t)

The solution only requires one integration and introduces one constant of integration:

∫_t0^t (dv/dt) dt = (1/m) ∫_t0^t F(t) dt

Using the fundamental theorem of calculus:

v(t) - v(t0) = (1/m) ∫_t0^t F(t) dt

A particular solution is obtained when the initial velocity, v(t0), is specified.
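As a quick check (added for illustration, not part of the original notes), here is a minimal DSolve sketch for the special case of a constant force F0; F0, m and v0 are symbolic parameters introduced only for this example:

(* constant force: should give v(t) = v0 + F0 t / m *)
DSolve[{v'[t] == F0/m, v[0] == v0}, v[t], t]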

If initial conditions are given, the problem is called an initial value problem. In some cases we are given boundary conditions instead, for example the solution at the initial and final time, or, in the case of partial differential equations, the solution may be given on a surface that bounds a volume (you will soon encounter this case at the end of 100A, when you will solve the Poisson equation for the electrostatic potential). When boundary conditions are imposed, the problem is called a boundary value problem.

Graphical Analysis of Initial Value Problems: Direction Fields

It is sometimes possible to learn a lot about an ODE by studying the ODE itself, without computing its solution. We will
consider 4 cases:
i) First order with explicit time dependence;
ii) Second order with no explicit time dependence;
iii) Two first order equations with no explicit time dependence;
iv) Three first order equations with no explicit time dependence (exercise on the Lorenz system, not discussed here).

i) First order with explicit time dependence: y' = f(t,y)

The trick is that the function f(t,y) represents the slope of the solution y over the (t,y) plane. If we represent that slope all over the (t,y) plane, we can visualize the solution y everywhere. The slope is visualized as short straight segments (elements) that are tangent to the solution y. The t component of each element is dt and its y component is dy = f(t,y) dt, because:

(dt, dy) = dt (1, dy/dt) = dt (1, f(t,y))

So all we need to do is to plot the field (1,f(t,y)) on the (t,y) plane: this is called a direction field.
Example:

<< Graphics`;
f[t_, v_] = t - v;
PlotVectorField[{1, f[t, v]}, {t, 0, 6}, {v, -2, 3}, Axes -> True,
  ScaleFunction -> (1 &), AxesLabel -> {"t", "v"}]

[Plot output: direction field of (1, f[t, v]) over the (t, v) plane]

To visualize a particular solution, just choose an initial condition (a position on the plane) and follow the vectors. It is clear that in this region the solutions are unique, because the lines of the direction field don't intersect each other.
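As an illustration (not part of the original notebook), a minimal sketch that overlays one particular solution on the direction field above; the initial condition v(0) = 2 is an arbitrary choice, DSolve supplies the analytical solution, and PlotVectorField is the same old Graphics` function used above (VectorPlot in newer Mathematica versions):

vsol[t_] = v[t] /. DSolve[{v'[t] == t - v[t], v[0] == 2}, v[t], t][[1]];  (* t - 1 + 3 E^-t *)
Show[PlotVectorField[{1, t - v}, {t, 0, 6}, {v, -2, 3}, ScaleFunction -> (1 &)],
  Plot[vsol[t], {t, 0, 6}], PlotRange -> {-2, 3}, Axes -> True, AxesLabel -> {"t", "v"}]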

ii) Second order with no explicit time dependence.

The second order problem ends up in a 3D direction field, (1, v, f(x,v)), in the (t, x, v) space, because:

(dt, dx, dv) = dt (1, dx/dt, dv/dt) = dt (1, v, f(x,v))

However, since f(x,v) does not depend explicitly on time, the direction field looks the same at all times, so we can just plot its projection on the (x,v) plane. We are back to a 2D flow, (v, f(x,v)).
For the damped harmonic oscillator we have:

d^2 x / dt^2 = f(x, v) = -ω0^2 x - γ v

f[x_, v_] = -x - v;
PlotVectorField[{v, f[x, v]}, {x, -2, 2}, {v, -2, 2}, Axes -> True,
  ScaleFunction -> (1 &), AxesLabel -> {"x", "v"}]

[Plot output: direction field of (v, f[x, v]) over the (x, v) plane]

The solution spirals into the center in an infinite time. Still no intersection, and the solutions are unique.

iii) Two first order equations with no explicit time dependence.

It is clear that each equation requires only one dimension for its unknown (say v in case i, had it not depended on time), so the system can be represented in 2D. As an example we consider a Hamiltonian system:

dx/dt = ∂H(t,x,p)/∂p

dp/dt = -∂H(t,x,p)/∂x

An example of a Hamiltonian system with no time dependence is the one associated with motion in a potential V(x). The
Hamiltonian is the sum of kinetic and potential energy:

H(x,p) = p^2/(2m) + V(x)
We then get:

dx/dt = p/m = v

dp/dt = -∂V(x)/∂x

Now the direction field has components (v, -∂V(x)/∂x) and is graphed on the (x,p) plane.

The collection of all possible states of a system is called phase space. For our 2D field, the phase space is (x,p). It has 2 dimensions, and, in the language of Hamiltonian systems, 1 degree of freedom. If the Hamiltonian depended on time as well, then the phase space would be (t,x,p): it would have 3 dimensions and 1 + 1/2 degrees of freedom.

Autonomous (time independent) Hamiltonian systems (also called conservative systems) conserve the phase-space area. The area is bounded by a perimeter. Choose your initial conditions on that perimeter, let the solution evolve (i.e. follow the direction field), and map that perimeter (set of initial conditions) at a later time. You will find that the surface area enclosed by that perimeter is the same.

Dissipative systems (for example time dependent Hamiltonian systems) do not conserve phase-space area. This is easy to understand, because dissipative systems tend to shrink with time, meaning they explore a smaller and smaller region of phase space as time goes on.

We can use a great tool of vector calculus to check whether a flow is area conserving or not: the divergence operator. You are welcome to read in the textbook the little derivation (based on the divergence theorem) of the fact that a divergence-free flow is area conserving. That means that a flow (vx, vy) on the (x,y) plane is area conserving if:

∇·v(x,y) = ∂vx/∂x + ∂vy/∂y = 0
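As a minimal symbolic sketch (not from the original notebook), this divergence can be computed in Mathematica; Div2D is a helper name introduced here, and w0, gamma, m and V are symbolic parameters:

Div2D[{fx_, fy_}, {a_, b_}] := D[fx, a] + D[fy, b];  (* helper defined for this sketch *)
Div2D[{v, -w0^2 x - gamma v}, {x, v}]  (* damped oscillator flow: gives -gamma, area contracting *)
Div2D[{p/m, -V'[x]}, {x, p}]           (* Hamiltonian flow for H = p^2/(2m) + V(x): gives 0 *)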

The flow of a time independent Hamiltonian system has two key properties:

1. Zero divergence (it is area conserving)
2. It is everywhere tangent to surfaces of constant H

The proof is simple and is given in the textbook.

As you will learn next quarter when you will study magnetic fields, the field lines of a divergence free field are closed
loops. If the solution of the ODEs exists and is unique, the lines cannot intersect. Since they are closed loops, in 2D they
must be nested inside each other, like the altitude lines of a topographic map.
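The nested-loop picture can be checked with a minimal sketch (not in the original notebook), assuming the harmonic-oscillator potential V(x) = x^2/2 and m = 1:

h[x_, p_] = p^2/2 + x^2/2;
ContourPlot[h[x, p], {x, -2, 2}, {p, -2, 2}]  (* nested closed curves of constant H *)
PlotVectorField[{p, -x}, {x, -2, 2}, {p, -2, 2},
  ScaleFunction -> (1 &), AxesLabel -> {"x", "p"}]  (* the flow, tangent to those curves *)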

If the Hamiltonian is time dependent, or if the phase space has more than 2 dimensions, all the above considerations most
likely do not apply. Such systems can be very complex and chaotic. Since you will hardly ever meet in Nature systems that
are as simple as the examples above, most systems are actually chaotic, at least under certain conditions. This is true also for
dissipative systems: Even if they shrink into a small part of the phase space called attractor, their behavior can still be
chaotic within the attractor. But let's define what we mean by chaotic.

Chaotic Systems and Lyapunov Exponents

There are two main groups of dynamical systems:

1) Deterministic systems: They are predictable (e.g. solution of differential equations)


2) Stochastic systems (also called random systems): They are unpredictable, but the probability of a certain state may be known. For example the toss of a coin can end up in one of two states, each with 50% probability, but we do not know which one we will get.

Deterministic systems can often misbehave so much that, even if in principle they are predictable, we can predict them over
a long time only with an infinitely precise knowledge of the initial conditions.

Chaotic systems are deterministic systems with very high sensitivity to the initial conditions
(small errors do not have small consequences!).

Two trajectories that are initially very close can exponentially diverge with time. Their separation, Δx, can be expressed as:

Δx ~ exp(λt)

If λ is positive, the separation grows exponentially. If we interpret Δx as the uncertainty in the initial conditions (say the 16th decimal figure for typical machine double precision), then the solution we compute would diverge from the correct one at the exponential rate λ. λ is called the Lyapunov exponent:

λ(z0) = lim_{t→∞, |δ0|→0} ⟨ (1/t) ln( |z(t, z0 + δ0) - z(t, z0)| / |δ0| ) ⟩

where z0 = (x0, v0) is an initial condition, z is the phase-space trajectory (the solution of the ODEs) and δ0 = (Δx0, Δv0) is a small displacement from the initial condition. The average, ⟨...⟩, is computed over many small displacements δ0.

You will compute the Lyapunov exponent in one of the homework problems. Follow the example in the textbook.
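As a minimal numerical sketch (not necessarily the method used in the textbook), one can integrate two nearby trajectories of the Lorenz system with NDSolve and watch their separation grow; the standard parameter values (10, 28, 8/3) and the 10^-8 perturbation are arbitrary choices made for this illustration:

lorenz[x0_, y0_, z0_] := NDSolve[{x'[t] == 10 (y[t] - x[t]),
     y'[t] == x[t] (28 - z[t]) - y[t], z'[t] == x[t] y[t] - 8/3 z[t],
     x[0] == x0, y[0] == y0, z[0] == z0}, {x, y, z}, {t, 0, 20}][[1]];
sol1 = lorenz[1, 1, 1];
sol2 = lorenz[1 + 10^-8, 1, 1];  (* slightly displaced initial condition *)
sep[t_] := Norm[({x[t], y[t], z[t]} /. sol1) - ({x[t], y[t], z[t]} /. sol2)];
Plot[Log[sep[t]], {t, 0, 20}]  (* slope of the roughly linear part estimates the largest Lyapunov exponent *)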

Analytical and Numerical Solutions of ODEs with Mathematica:


DSolve and NDSolve

This is the general syntax for DSolve:

DSolve[{ODE,initial conditions}, unknown function, independent variable]

The same syntax can be used for a system of ODEs:

DSolve[{ODE1,ODE2,...., ICs1, ICs2,.....}, unknown function, independent variable]

Since DSolve computes an analytical solution, it is not necessary to provide initial conditions. If initial conditions are not provided, then Mathematica finds the general solution.

• DSolve can solve linear ordinary differential equations of any order with constant coefficients. It can also solve many linear equations up to second order with non-constant coefficients.
• DSolve includes general procedures that handle a large fraction of the nonlinear ordinary differential equations whose solutions are given in standard reference books such as Kamke (see the example below).
• DSolve can find general solutions for linear and weakly nonlinear partial differential equations. Truly nonlinear partial differential equations usually admit no general solutions.
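For instance, a simple nonlinear (but separable) equation such as the logistic equation is handled by these procedures; this example is added here for illustration and is not from the original notes:

DSolve[y'[t] == y[t] (1 - y[t]), y[t], t]  (* returns a general solution containing one constant C[1] *)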

Example: General solution of the harmonic oscillator:

DSolve[x''[t] == -w0^2 x[t], x[t], t]

{{x[t] -> C[2] Cos[t w0] + C[1] Sin[t w0]}}

This is the general solution, with the two constants. If we wanted a particular solution we could specify the initial conditions:

DSolve[{x''[t] == -w0^2 x[t], x[0] == x0, x'[0] == v0}, x[t], t]

{{x[t] -> (v0 Sin[t w0] + x0 Cos[t w0] w0) / w0}}

Many equations do not have analytical solutions. We can still find solutions numerically with NDSolve:

NDSolve[{ODEs,initial conditions},x[t],{t,tmin,tmax}]

With NDSolve you must:

1. Specify the initial conditions
2. Have no non-numerical constants (every parameter must have a numerical value)
3. Provide a finite interval of the independent variable

Example: Back to the oscillator, with ω0 = 1:

NDSolve[{x''[t] == -x[t], x[0] == 0, x'[1] == 3}, x[t], {t, 0, 10}]

{{x[t] -> InterpolatingFunction[{{0., 10.}}, <>][t]}}

Mathematica has generated an InterpolatingFunction object that can be evaluated like an ordinary function. You can evaluate that function like this:

x[t] /. %

{InterpolatingFunction[{{0., 10.}}, <>][t]}

And then plot the result:



Plot[%, {t, 1, 10}]

[Plot output: the numerical solution x(t)]

IMPORTANT: Often we need to use the result of NDSolve several times, and we may have solved a system of equations, or a table of many equations. A good method is to define a function and assign to it the result of NDSolve in this way:

solution[t_] = x[t] /. NDSolve[{x''[t] == -x[t], x[0] == 0, x'[1] == 3}, x[t],
    {t, 0, 10}][[1]];
Plot[solution[t], {t, 0, 10}]

[Plot output: solution[t] over 0 ≤ t ≤ 10]

Notice the [[1]] after NDSolve. That is the standard indexing notation. It extracts the first element of the list coming out of NDSolve. Weird? Yes, weird, but if you pay attention, Mathematica put the interpolation function inside double curly brackets, so the solution is inside a nested list. If you do not extract it like this you may end up in trouble when trying to extract different components of a solution, for example for a parametric plot. Another way to get rid of the extra curly brackets is to use the function Flatten[ ] (see the sketch below).
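A minimal sketch of the Flatten[ ] alternative (added here for illustration; it is equivalent to the [[1]] version above):

sol = Flatten[NDSolve[{x''[t] == -x[t], x[0] == 0, x'[1] == 3}, x[t], {t, 0, 10}]];
solution[t_] = x[t] /. sol;  (* sol is now a flat list of rules *)
Plot[solution[t], {t, 0, 10}]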
