The equation alone has a family of solutions, or general solution. The initial conditions are necessary to define a particular
solution. This is because the solution of an ODE of Nth order is obtained by N integrations, each yielding a constant of
integration.
For example, the first order ODE that expresses Newton's second law is:

\frac{dv}{dt} = \frac{1}{m} F(t)

The solution requires only one integration and introduces one constant of integration:

\int_{t_0}^{t} \frac{dv}{dt'} \, dt' = \frac{1}{m} \int_{t_0}^{t} F(t') \, dt'
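As a concrete check, the integration can be carried out explicitly in the special case of a constant force F(t) = F_0 (a hypothetical choice, just for illustration):

```latex
v(t) = v(t_0) + \frac{1}{m}\int_{t_0}^{t} F_0 \, dt' = v(t_0) + \frac{F_0}{m}\,(t - t_0)
```

The single constant of integration appears as the initial velocity v(t_0).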
If initial conditions are given, the problem is called an initial value problem. In some cases we are given boundary conditions, for example the solution at the initial and final times, or, in the case of partial differential equations, the solution may be given on a surface that bounds a volume (you will soon encounter this case at the end of 100A, when you solve the Poisson equation for the electrostatic potential). When boundary conditions are imposed, the problem is called a boundary value problem.
A particular solution is obtained when the initial velocity, v(t_0), is specified.
It is sometimes possible to learn a lot about an ODE by studying the ODE itself, without computing its solution. We will
consider 4 cases:
i) First order with explicit time dependence;
ii) Second order with no explicit time dependence;
iii) Two first order equations with no explicit time dependence;
iv) Three first order equations with no explicit time dependence (exercise on the Lorenz system, not discussed here).
The trick is that the function f(t,y) represents the slope of the solution y over the (t,y) plane. If we represent that slope all over the (t,y) plane, we can visualize the solutions everywhere. The slope is visualized as short straight segments (elements) that are tangent to the solutions. The t component of each element is dt, and its y component is f(t,y) dt, because:

dy = \frac{dy}{dt} \, dt = f(t,y) \, dt
So all we need to do is to plot the field (1,f(t,y)) on the (t,y) plane: this is called a direction field.
Example:
<< Graphics`
f[t_, v_] = t - v;
PlotVectorField[{1, f[t, v]}, {t, 0, 6}, {v, -2, 3}, Axes -> True]

[Plot: direction field (1, f(t,v)) for 0 <= t <= 6, -2 <= v <= 3]
To visualize a particular solution, just choose an initial condition (a position on the plane) and follow the vectors. It is clear that in this region the solutions are unique, because the lines of the direction field don't intersect each other.
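Following the vectors by hand is exactly Euler's method: at each step move dt along t and f(t,y) dt along y. A minimal sketch (in Python rather than Mathematica; the step size and interval are arbitrary choices) for the slope field f(t,v) = t - v of the example, whose exact solution through v(0) = 0 is v(t) = t - 1 + e^(-t):

```python
import math

def euler_follow(f, t0, y0, dt, n):
    """Follow the direction field (1, f(t, y)) from (t0, y0) in n steps of size dt."""
    t, y = t0, y0
    path = [(t, y)]
    for _ in range(n):
        y += f(t, y) * dt   # move f(t, y) * dt along y ...
        t += dt             # ... and dt along t
        path.append((t, y))
    return path

f = lambda t, v: t - v                 # the slope field from the example
path = euler_follow(f, 0.0, 0.0, 0.001, 6000)
t_end, v_end = path[-1]
exact = t_end - 1 + math.exp(-t_end)   # exact solution for v(0) = 0
print(v_end, exact)                    # the two values are close for small dt
```

Shrinking dt makes the polygonal path converge to the true solution curve.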
The second order problem leads to a 3D direction field, (1, v, f(x,v)), in the (t,x,v) space, because:

dx = v \, dt, \qquad dv = f(x,v) \, dt

However, since f(x,v) does not depend explicitly on time, the direction field looks the same at all times, so we can just plot its projection on the (x,v) plane. We are back to a 2D flow, (v, f(x,v)).
For the damped harmonic oscillator we have:
f[x_, v_] = -x - v;
PlotVectorField[{v, f[x, v]}, {x, -2, 2}, {v, -2, 2}, Axes -> True]

[Plot: direction field (v, f(x,v)) for -2 <= x <= 2, -2 <= v <= 2]
The solution spirals into the center in an infinite time. Still no intersections, and the solutions are unique.
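The spiral can be checked numerically by following the flow (v, f(x,v)) = (v, -x - v) with small Euler steps (a Python sketch; the step size and end time are arbitrary choices):

```python
import math

def follow(x, v, dt, n):
    """Take n Euler steps along the damped-oscillator flow (v, -x - v)."""
    for _ in range(n):
        x, v = x + v * dt, v + (-x - v) * dt
    return x, v

x0, v0 = 2.0, 0.0
x1, v1 = follow(x0, v0, 0.0001, 100000)  # integrate up to t = 10
r0 = math.hypot(x0, v0)                  # initial distance from the origin
r1 = math.hypot(x1, v1)                  # final distance from the origin
print(r0, r1)                            # the trajectory has spiraled inward
```

The distance from the origin decays roughly like e^(-t/2) for this flow, so at t = 10 the trajectory is already very close to the center.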
It is clear that each equation requires only one dimension (the unknown, say v in case i if it did not depend on time). So the system can be represented in 2D. As an example we consider a Hamiltonian system:
\frac{dx}{dt} = \frac{\partial H(t,x,p)}{\partial p}

\frac{dp}{dt} = -\frac{\partial H(t,x,p)}{\partial x}
An example of a Hamiltonian system with no time dependence is the one associated with motion in a potential V(x). The
Hamiltonian is the sum of kinetic and potential energy:
H(x,p) = \frac{p^2}{2m} + V(x)
We then get:
\frac{dx}{dt} = \frac{p}{m} = v

\frac{dp}{dt} = -\frac{\partial V(x)}{\partial x}
Now the direction field has components (v, -\frac{\partial V(x)}{\partial x}) and is graphed on the (x,p) plane.
The collection of all possible states of a system is called phase space. For our 2D field, the phase space is (x,p). It has 2
dimensions, and, in the language of Hamiltonian systems, 1 degree of freedom. If the Hamiltonian depended on time as
well, then the phase space would be (t,x,p), it would have 3 dimensions, and 1 +1/2 degrees of freedom.
Autonomous (time independent) Hamiltonian systems (also called conservative systems) conserve the phase-space area. The area is bounded by a perimeter. Choose your initial conditions on that perimeter, let the solutions evolve (e.g. follow the direction field), and map that perimeter (set of initial conditions) at a later time. You will find that the surface area enclosed by that perimeter is the same.
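This can be verified directly for the harmonic oscillator with m = 1 and omega = 1 (a hypothetical choice of units), whose exact phase-space flow is a rigid rotation in the (x,p) plane. A Python sketch evolves a circle of initial conditions and compares the enclosed areas with the shoelace formula:

```python
import math

def shoelace(pts):
    """Area enclosed by a closed polygon of (x, p) points."""
    area = 0.0
    n = len(pts)
    for i in range(n):
        x1, p1 = pts[i]
        x2, p2 = pts[(i + 1) % n]
        area += x1 * p2 - x2 * p1
    return abs(area) / 2

# Circle of initial conditions of radius 0.1 centered on (x, p) = (1, 0).
circle = [(1 + 0.1 * math.cos(a), 0.1 * math.sin(a))
          for a in [2 * math.pi * k / 200 for k in range(200)]]

# Exact flow of H = p^2/2 + x^2/2: a rotation in the (x, p) plane.
t = 3.0
evolved = [(x * math.cos(t) + p * math.sin(t),
            -x * math.sin(t) + p * math.cos(t)) for x, p in circle]

print(shoelace(circle), shoelace(evolved))  # the two areas agree
```

The perimeter moves and deforms its position, but the enclosed area is unchanged, as the conservation theorem predicts.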
Dissipative systems (for example time dependent Hamiltonian systems) do not conserve phase-space area. This is easy to understand, because dissipative systems tend to shrink with time, meaning they explore a smaller and smaller region of phase space as time goes on.
We can use a great tool of vector calculus to determine whether a flow is area conserving: the divergence operator. You are welcome to read in the textbook the little derivation (based on the divergence theorem) of the fact that a divergence-free flow is area conserving. That means that a flow (v_x, v_y) on the (x,y) plane is area conserving if:
\nabla \cdot \mathbf{v}(x,y) = \frac{\partial v_x}{\partial x} + \frac{\partial v_y}{\partial y} = 0
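The criterion is easy to test numerically with central differences (a Python sketch using the two example flows from these notes; the evaluation point is arbitrary):

```python
h = 1e-6  # finite-difference step

def divergence(flow, x, y):
    """Central-difference estimate of d(vx)/dx + d(vy)/dy at (x, y)."""
    dvx_dx = (flow(x + h, y)[0] - flow(x - h, y)[0]) / (2 * h)
    dvy_dy = (flow(x, y + h)[1] - flow(x, y - h)[1]) / (2 * h)
    return dvx_dx + dvy_dy

# Harmonic oscillator flow (v, -x): divergence 0, area conserving.
conservative = lambda x, v: (v, -x)
# Damped oscillator flow (v, -x - v): divergence -1, area shrinking.
damped = lambda x, v: (v, -x - v)

print(divergence(conservative, 0.5, 0.5))  # approximately 0
print(divergence(damped, 0.5, 0.5))        # approximately -1
```

A negative divergence means the flow contracts phase-space area, which is exactly the shrinking behavior of the damped oscillator seen above.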
As you will learn next quarter when you will study magnetic fields, the field lines of a divergence free field are closed
loops. If the solution of the ODEs exists and is unique, the lines cannot intersect. Since they are closed loops, in 2D they
must be nested inside each other, like the altitude lines of a topographic map.
2. It is everywhere tangent to surfaces of constant H
The proof is trivial and is given in the textbook.
If the Hamiltonian is time dependent, or if the phase space has more than 2 dimensions, all the above considerations most
likely do not apply. Such systems can be very complex and chaotic. Since you will hardly ever meet in Nature systems that
are as simple as the examples above, most systems are actually chaotic, at least under certain conditions. This is true also for
dissipative systems: Even if they shrink into a small part of the phase space called attractor, their behavior can still be
chaotic within the attractor. But let's define what we mean by chaotic.
Deterministic systems can often misbehave so much that, even if in principle they are predictable, we can predict them over
a long time only with an infinitely precise knowledge of the initial conditions.
Chaotic systems are deterministic systems with very high sensitivity to the initial conditions
(small errors do not have small consequences!).
Two trajectories that are initially very close can diverge exponentially with time. Their separation, \Delta x, can be expressed as:

\Delta x \sim e^{\lambda t}
If \lambda is positive, the separation grows exponentially. If we interpret \Delta x as the uncertainty in the initial conditions (say the 16th decimal figure for our typical machine double precision), then the solution we compute would diverge from the correct one at the exponential rate \lambda. \lambda is called the Lyapunov exponent:
\lambda(z_0) = \lim_{t \to \infty,\, |d_0| \to 0} \left\langle \frac{1}{t} \ln \frac{|z(t, z_0 + d_0) - z(t, z_0)|}{|d_0|} \right\rangle
where z_0 = (x_0, v_0) is an initial condition, z is the phase-space trajectory (the solution of the ODEs), and d_0 = (\Delta x_0, \Delta v_0) is a small displacement from the initial condition. The average, \langle \ldots \rangle, is computed over many small displacements d_0.
You will compute the Lyapunov exponent in one of the homework problems. Follow the example in the textbook.
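As a warm-up (a Python sketch, not the homework problem itself), the definition can be applied to the damped oscillator from these notes: two nearby trajectories converge rather than diverge, so the estimated exponent comes out negative, close to the decay rate -1/2. The displacement size, end time, and step size are arbitrary choices:

```python
import math

def flow(x, v, dt, steps):
    """Euler integration of the damped oscillator dx/dt = v, dv/dt = -x - v."""
    for _ in range(steps):
        x, v = x + v * dt, v + (-x - v) * dt
    return x, v

d0 = 1e-8                 # small initial displacement |d_0|
t, dt = 20.0, 1e-4
steps = int(t / dt)
z1 = flow(1.0, 0.0, dt, steps)        # reference trajectory
z2 = flow(1.0 + d0, 0.0, dt, steps)   # displaced trajectory
sep = math.hypot(z1[0] - z2[0], z1[1] - z2[1])
lam = math.log(sep / d0) / t          # finite-time estimate of lambda
print(lam)  # negative: small errors shrink, no chaos here
```

A chaotic system would instead give a positive \lambda, with the separation growing until it saturates at the size of the attractor.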
Since DSolve computes an analytical solution, it is not necessary to provide initial conditions. If initial conditions are not provided, then Mathematica finds the general solution.
† DSolve can solve linear ordinary differential equations of any order with constant coefficients. It can also solve many linear equations up to second order with non-constant coefficients.
† DSolve includes general procedures that handle a large fraction of the nonlinear ordinary differential equations whose
solutions are given in standard reference books such as Kamke.
† DSolve can find general solutions for linear and weakly nonlinear partial differential equations. Truly nonlinear partial
differential equations usually admit no general solutions.
This is the general solution, with the two constants. If we wanted a particular solution we could specify the initial conditions:
{{x[t] -> (v0 Sin[t w0] + x0 w0 Cos[t w0]) / w0}}
Many equations do not have analytical solutions. We can still find solutions numerically with NDSolve:
NDSolve[{ODEs,initial conditions},x[t],{t,tmin,tmax}]
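What NDSolve does can be mimicked by a hand-written fixed-step integrator (NDSolve itself uses adaptive methods). A minimal sketch in Python, with the classic fourth-order Runge-Kutta step, applied to the arbitrary test equation dy/dt = -y:

```python
import math

def rk4(f, t, y, dt):
    """One step of the classical fourth-order Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Solve dy/dt = -y with y(0) = 1 on 0 <= t <= 5.
f = lambda t, y: -y
t, y, dt = 0.0, 1.0, 0.01
for _ in range(500):       # 500 steps of size 0.01 reach t = 5
    y = rk4(f, t, y, dt)
    t += dt
print(y, math.exp(-5))     # the numerical and exact values agree closely
```

NDSolve goes further: it chooses the method and step size automatically and returns the whole solution as an interpolating function rather than values at fixed points.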
Mathematica has generated an interpolation function that is a pure function. You can evaluate that function like this:
x[t] /. %

[Plot: numerical solution x(t) for 0 <= t <= 10]
IMPORTANT: Often we need to use the result of NDSolve several times, and we may have solved a system of equations, or a table of many equations. A good method is to define a function and assign to it the result of NDSolve in this way:
solution[t_] = x[t] /. NDSolve[{ODEs, initial conditions}, x[t], {t, 0, 10}][[1]];
Plot[solution[t], {t, 0, 10}]

[Plot: solution[t] for 0 <= t <= 10]
Notice the [[1]] after NDSolve. That is the standard indexing notation: it extracts the first element of the list coming out of NDSolve. Weird? Yes, weird, but if you pay attention, Mathematica puts the interpolating function inside double curly brackets, so the solution is inside a nested list. If you do not extract it like this you may end up in trouble when trying to extract different components of a solution, for example for a parametric plot. Another way to get rid of the extra curly brackets is to use the function Flatten[ ].