Lee DeVille1
Author address:
111 Cummington St., Department of Mathematics, Boston University, Boston MA 02215
E-mail address: deville@math.bu.edu
1 based exclusively on notes given by G. R. Hall in MA 775, Fall 1996, at Boston University
Contents

Introduction

Chapter 1. Introduction
1. Some preliminary definitions (Sept. 4)
2. More examples, changing variables (Sept. 6)
3. Differentiation, change of time variable (Sept. 9)
4. Change of time variables (Sept. 11)
5. Example, using the 2-body problem (Sept. 13)
6. McGehee Collision Manifold (Sept. 16)
7. Finishing analysis of collision manifold (Sept. 18)

Chapter 2. The Two Big Theorems
1. Existence-Uniqueness Theorem (Sept. 20)
2. Some proof of the Existence-Uniqueness Theorem (Sept. 25)
3. Some more proof of the Existence-Uniqueness Theorem (Sept. 27)
4. Picard iteration (Sept. 30)
5. Invariant Sets (Oct. 2)
6. Sinks and conjugacy (Oct. 4 and Oct. 7)
7. Preliminary to Stable/Unstable Manifold Theorem (Oct. 9)
8. Stable/Unstable Manifold Theorem (Oct. 15)
9. More S/U Theorem (analytic version) (Oct. 16)
10. C^k version of S/U Theorem (Oct. 18)
11. Continuation of the C^k proof (Oct. 21)
12. Smoothness (Oct. 28)
13. The Stable/Unstable Manifold MetaTheorem (Oct. 30)

Chapter 3. Using maps to understand flows
1. Periodic orbits and Poincaré sections (Nov. 1)
2. Computing Floquet Multipliers (Nov. 4)
3. More computation of Floquet multipliers (Nov. 6)
4. Bifurcation Theory (Nov. 8)
5. Hyperbolicity in bifurcations (Nov. 11)
6. Bifurcation diagrams (Nov. 13)
7. Spiral sinks and spiral sources, normal form calculation (Nov. 15)
8. More normal form calculations, complexification (Nov. 18)

Chapter 4. Topics
1. Setup for Hopf Bifurcation Theorem (Nov. 20)
2. More Hopf Bifurcation Theorem (Nov. 22, 25)
Introduction
In Fall 1996, Professor G. R. Hall gave a class at Boston University titled "MA 775 - Ordinary Differential Equations". The following pages are a (hopefully accurate) copy of the notes I took in this class. He deserves all the credit for assembling the material and presenting it. The only mathematics I ended up doing was checking and making sure I had all of the inequalities pointing the right way, etc.
This document was produced using AMS-LaTeX macros on top of version 3.1415 of the LaTeX2e compiler. I used the amsbook documentclass. The only packages I had to import were epsfig and graphicx for the pictures, and theorem to define the various theorem environments. All of the pictures were made either with xfig or Mathematica, and then dumped into PostScript1.
For further information (including how to get a copy of this PostScript file), please go to http://math.bu.edu/people/deville/Notes/. The latest versions of this file will be maintained there when corrections are made. If you find any errors, or have any general comments on the notes, please send email to me at deville@math.bu.edu. I will maintain an errata page for these notes (and all future releases), and any and all comments are greatly appreciated.
1 In this particular version, a few of the pictures are still not in. They tend to be more at the end, and most of them concern the Melnikov estimates. I plan to do the rest eventually, but they're hard and complicated, so relax.
CHAPTER 1
Introduction
1. Some preliminary definitions (Sept. 4)
Definition 1.1. Given a vector field $f : \mathbb{R}^n \to \mathbb{R}^n$, the differential equation associated with $f$ is
$$(1)\qquad \dot{x} = f(x), \quad\text{i.e.}\quad \frac{dx}{dt} = f(x), \qquad x = (x_1, x_2, \dots, x_n).$$
Definition 1.2. A solution is a curve $\gamma : \mathbb{R} \to \mathbb{R}^n$ such that
$$(2)\qquad \dot\gamma(t) = \frac{d\gamma}{dt} = f(\gamma(t)) \quad\text{for all } t.$$
Definition 1.3. An initial condition is a point $x_0 \in \mathbb{R}^n$. An initial value problem is
$$(3)\qquad \dot{x} = f(x), \qquad x(0) = x_0,$$
and its solution is a curve $\gamma(t)$ with $\gamma(0) = x_0$.
Definition 1.4. A flow is a map $\phi : \mathbb{R}\times\mathbb{R}^n \to \mathbb{R}^n$ such that
$$(4)\qquad \phi(0, x_0) = x_0 \quad \forall x_0 \in \mathbb{R}^n,$$
$$(5)\qquad \phi(s + t, x_0) = \phi(s, \phi(t, x_0)).$$
Equation (5) is known as the group property.
A flow $\phi$ is a solution of the differential equation $\dot{x} = f(x)$ if
$$(6)\qquad \frac{\partial\phi}{\partial t}(t, x_0) = f(\phi(t, x_0)) \quad\text{for all } t, x_0.$$
Example: If $\dot{x} = x$ on $\mathbb{R}$, then we have
$$x(t) = x_0e^t, \quad x(0) = x_0, \qquad \phi(t, x_0) = x_0e^t.$$
Claim: This is a flow.
Proof: (4) is simple to check. For (5), we see that
$$\phi(s + t, x_0) = x_0e^{s+t},$$
and
$$\phi(s, \phi(t, x_0)) = \phi(s, x_0e^t) = (x_0e^t)e^s = x_0e^{s+t}.$$
The group property says that the "rules of evolution" (i.e. the vector field) do not change with time, or $f : \mathbb{R}^n \to \mathbb{R}^n$ does not depend on $t$. Such equations are called autonomous.
A "good theorem" would be that every "reasonable" differential equation has (almost) a flow for a solution, and that the flow is "nice".
Theorem 1.1. Given a smooth ($C^1$) flow $\phi : \mathbb{R}\times\mathbb{R}^n \to \mathbb{R}^n$, it is the solution of a differential equation, i.e. there exists a vector field $f : \mathbb{R}^n \to \mathbb{R}^n$ with $\phi$ as solution.
Proof: Define $f(x_0) = \frac{\partial\phi}{\partial t}(0, x_0)$. Check that $\phi$ is a solution for $\dot{x} = f(x)$, i.e. $\frac{\partial\phi}{\partial t}(t, x_0) = f(\phi(t, x_0))$ for all $t, x_0$. Now, since
$$\phi(t + \Delta t, x_0) = \phi(\Delta t, \phi(t, x_0)),$$
then
$$\frac{\partial\phi}{\partial t}(t, x_0) = \lim_{\Delta t\to0}\frac{\phi(\Delta t, \phi(t, x_0)) - \phi(0, \phi(t, x_0))}{\Delta t} = \frac{\partial\phi}{\partial t}(0, \phi(t, x_0)) \stackrel{\text{def}}{=} f(\phi(t, x_0)).$$
Example: the pendulum, $\ddot\theta = -\sin(\theta)$. Convert to
$$\dot\theta = \omega, \qquad \dot\omega = -\sin\theta.$$
Example:
$$\ddot{y} + 3\dot{y} + 2y = \cos(2t).$$
The $3\dot{y}$ term is damping, the $2y$ term is Hooke's Law, and the $\cos(2t)$ term is an external forcing term. Convert this to
$$\dot{y} = v, \qquad \dot{v} = -3v - 2y + \cos(2t).$$
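To make the reduction to a first-order system concrete, here is a minimal numerical sketch (my own illustration, not part of the original notes) integrating the damped, forced oscillator above; the function and variable names are illustrative only.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def forced_oscillator(t, state):
    """Right-hand side of the first-order system equivalent to
    y'' + 3y' + 2y = cos(2t)."""
    y, v = state
    return [v, -3.0 * v - 2.0 * y + np.cos(2.0 * t)]

sol = solve_ivp(forced_oscillator, (0.0, 20.0), [1.0, 0.0])
print(sol.y[:, -1])  # the state (y, v) at t = 20
\end{verbatim}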
2. More examples, changing variables (Sept. 6)
Example: for $\dot{x} = x^2$ on $\mathbb{R}$, the solution through $x_0$ is $\phi(t, x_0) = \dfrac{x_0}{1 - tx_0}$, defined on
$$\begin{cases} -\infty < t < 1/x_0 & x_0 > 0,\\ -\infty < t < \infty & x_0 = 0,\\ 1/x_0 < t < \infty & x_0 < 0.\end{cases}$$
Definition 2.2. A semiflow is a map $\phi : \mathbb{R}^+\times\mathbb{R}^n \to \mathbb{R}^n$ satisfying
$$(9)\qquad \phi(0, x_0) = x_0,$$
$$(10)\qquad \phi(s + t, x_0) = \phi(s, \phi(t, x_0)),$$
where they are defined.
Example: The Kepler Problem. Let $q$ be the difference in position of two masses; then
$$\ddot{q} = -\frac{G\mu q}{\|q\|^3},$$
where $G$ is the gravitational constant and $\mu$ is a constant related to the masses of the objects.
Make this a first-order system ($q = (q_1, q_2)$):
$$(11)\qquad \dot{q}_1 = p_1, \quad \dot{q}_2 = p_2, \quad \dot{p}_1 = -\frac{G\mu q_1}{(\sqrt{q_1^2 + q_2^2})^3}, \quad \dot{p}_2 = -\frac{G\mu q_2}{(\sqrt{q_1^2 + q_2^2})^3}.$$
This is not a vector field on all of $\mathbb{R}^4$, because
$$\frac{1}{(\sqrt{q_1^2 + q_2^2})^3} \to \infty \quad\text{as } q_1, q_2 \to 0.$$
The phase space for this problem is $\mathbb{R}^4\setminus\underbrace{\{(0, 0, p_1, p_2)\}}_{\text{"the collision set"}}$. There are solutions which approach the collision set in finite time.
We can define a differential equation on any object where you have a good notion of tangent vector, such as $\mathbb{R}^n$, open subsets of $\mathbb{R}^n$, smooth surfaces in $\mathbb{R}^n$, or manifolds (which look locally like $\mathbb{R}^n$).
Definition 2.3. Given a flow $\phi$ or a differential equation $\dot{x} = f(x)$, the orbit of $x_0$ is
$$\mathcal{O}(x_0) = \{\phi(t, x_0) : \phi \text{ defined at } t\},$$
i.e., the image of the solution curve through $x_0$. We also define the forward orbit of $x_0$:
$$\mathcal{O}^+(x_0) = \{\phi(t, x_0) : t \geq 0\}.$$
3. Differentiation, change of time variable (Sept. 9)
(Figure: a map $h : U \to V$, $x \mapsto h(x)$, whose derivative $Dh|_x$ carries $T_xU$ to $T_{h(x)}V$.)
If
$$h(x_1, x_2, \dots, x_n) = (h_1(x_1, \dots, x_n), h_2(x_1, \dots, x_n), \dots, h_n(x_1, \dots, x_n)),$$
then
$$Dh|_x = \begin{pmatrix}\frac{\partial h_1}{\partial x_1} & \cdots & \frac{\partial h_1}{\partial x_n}\\ \vdots & \ddots & \vdots\\ \frac{\partial h_n}{\partial x_1} & \cdots & \frac{\partial h_n}{\partial x_n}\end{pmatrix}.$$
For example, start at $x$ and move with velocity $(1, 0, 0, \dots, 0)$. We get
$$\left(\frac{\partial h_1}{\partial x_1}, \frac{\partial h_2}{\partial x_1}, \dots, \frac{\partial h_n}{\partial x_1}\right) = Dh|_x\begin{pmatrix}1\\0\\\vdots\\0\end{pmatrix}.$$
Example: $h : \mathbb{R}^2 \to \mathbb{R}^2$,
$$h(x, y) = (x^2 + 2xy,\ y + \cos(x)) = (z, w), \qquad h(1, 1) = (3,\ 1 + \cos(1)),$$
$$Dh = \begin{pmatrix}2x + 2y & 2x\\ -\sin(x) & 1\end{pmatrix}, \qquad Dh|_{(1,1)} = \begin{pmatrix}4 & 2\\ -\sin(1) & 1\end{pmatrix}, \qquad Dh|_{(1,1)}\begin{pmatrix}1\\0\end{pmatrix} = \begin{pmatrix}4\\ -\sin(1)\end{pmatrix}.$$
Figure 4. A conjugacy: $h$ carries orbits of the flow $\phi(t, z_0)$ in $U$ to orbits of the flow $\psi(t, h(z_0))$ in $V$.
If $h$ is not a diffeomorphism, we can still use it as a conjugacy for flows, but we can't move the vector field.
Remark: Conjugacy is a very strong notion of "the same".
Example: $X \in \mathbb{R}^n$, $\dot{X} = AX$. Do a linear change of variables, $X = PY$, where $P$ is an invertible $n\times n$ matrix. Then $h(Y) = PY$, so $Dh|_Y = P$. So, in the new variables,
$$\dot{Y} = P^{-1}\dot{X} = P^{-1}AX = P^{-1}APY.$$
Example:
$$\begin{pmatrix}\dot{x}\\\dot{y}\end{pmatrix} = \begin{pmatrix}1 & 2\\1 & 1\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}.$$
The eigenvalues are $1\pm\sqrt{2}$, so there is a $P$ such that $P^{-1}AP = \begin{pmatrix}1+\sqrt{2} & 0\\ 0 & 1-\sqrt{2}\end{pmatrix}$. So, let $X = PY$; then
$$\dot{Y} = \begin{pmatrix}1+\sqrt{2} & 0\\ 0 & 1-\sqrt{2}\end{pmatrix}Y,$$
where $Y = \begin{pmatrix}z\\w\end{pmatrix}$, so $\dot{z} = (1+\sqrt{2})z$ and $\dot{w} = (1-\sqrt{2})w$. These are conjugate linear systems.
Another change of variables is a change in the time variable. Start with $f : U \to \mathbb{R}^n$, with a (local) solution flow $\phi$. Pick $x_0 \in U$, and let $x(t)$ be the solution through $x_0$. Change to a new time variable $s = S(t)$, so the new time is a function of the old time. Let $T(s) = t$, so that $S^{-1} = T$, and the old time is a function of the new time.
So what differential equation is $x(T(s))$ a solution of?
4. Change of time variables (Sept. 11)
Start with a vector field $f : U \to \mathbb{R}^n$, $x_0 \in U$. Change time variables: the new time variable is $s = S(t)$, and $t = T(s)$. For what differential equation is $x(T(s))$ the solution? Differentiate:
$$\frac{d}{ds}x(T(s)) = \dot{x}(T(s))\,\frac{dt}{ds}\bigg|_s = f(x(T(s)))\,\frac{dt}{ds}\bigg|_s.$$
The effect of changing to a new time is only on the speed, not the direction.
Start with $f : U \to \mathbb{R}^n$, all as above. Suppose we have $\theta : U \to \mathbb{R}^+$, smooth. Then we can make a new vector field
$$g(x) = \theta(x)f(x).$$
What are the solution curves of $\dot{x} = g(x)$? Start with an initial condition $x_0$. Let $x(t)$ be the solution of $\frac{dx}{dt} = f(x)$ with $x(0) = x_0$. The goal is to change $x(t)$ into a solution of $\frac{dx}{ds} = g(x)$ by defining a new time variable.
Suppose $T(s)$ is a change of time variable such that
$$\frac{d}{ds}x(T(s)) = g(x(T(s))).$$
But then
$$\frac{d}{ds}x(T(s)) = \frac{dx}{dt}\bigg|_{T(s)}\frac{dt}{ds}\bigg|_s = f(x(T(s)))\frac{dt}{ds}\bigg|_s = \theta(x(T(s)))\,f(x(T(s))).$$
So, we need
$$\frac{dt}{ds}\bigg|_s = \theta(x(T(s))).$$
If there exists such a $T(s)$ then we can change from one system to the other just by changing the time variable. But the equation
$$\frac{dt}{ds} = \theta(x(T(s)))$$
is just a 1-dimensional ODE in which we know $\theta$ and $x$. By the Existence-Uniqueness Theorem, there is a unique solution for each initial condition.
A nice feature is that if you change the speeds (lengths) of the vector field, you don't change the phase portrait.
For example, the system
$$\dot{x} = x, \qquad \dot{y} = 2y$$
and the system
$$\dot{x} = x(x^2 + y^2 + 1), \qquad \dot{y} = 2y(x^2 + y^2 + 1)$$
have exactly the same phase portraits.
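As a quick check (my own worked verification of the recipe above, not in the original notes), take $\theta(x, y) = x^2 + y^2 + 1 > 0$. If $(x(t), y(t))$ solves the first system, define $T(s)$ by $\frac{dt}{ds} = \theta(x(T(s)), y(T(s)))$, $T(0) = 0$. Then
$$\frac{d}{ds}x(T(s)) = x(T(s))\bigl(x(T(s))^2 + y(T(s))^2 + 1\bigr), \qquad \frac{d}{ds}y(T(s)) = 2y(T(s))\bigl(x(T(s))^2 + y(T(s))^2 + 1\bigr),$$
so the reparametrized curve solves the second system while tracing the same curve in the $(x, y)$-plane, only at a different speed.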
Another notion of when flows are "the same":
Definition 4.1. Two flows $\phi : U\times\mathbb{R} \to U$, $\psi : V\times\mathbb{R} \to V$ are said to be topologically equivalent if there exists a homeomorphism $h : V \to U$ such that, for all $y_0 \in V$,
$$(14)\qquad h\{\psi(t, y_0) : t \in \mathbb{R}\} = \{\phi(t, h(y_0)) : t \in \mathbb{R}\},$$
where the orientation of increasing time is also preserved.
Example: Zhukovskii's model of glider flight (Theory of Oscillators, Andronov).
Velocity:
$$\vec{v} = v(\cos\theta, \sin\theta), \qquad \dot{\vec{v}} = \dot{v}(\cos\theta, \sin\theta) + v\dot\theta(-\sin\theta, \cos\theta).$$
The model is
$$(15)\qquad m\frac{dv}{dt} = -mg\sin\theta - \frac{1}{2}\rho FC_xv^2,$$
$$(16)\qquad mv\frac{d\theta}{dt} = -mg\cos\theta + \frac{1}{2}\rho FC_yv^2,$$
where $m$ is the mass, $g$ the acceleration due to gravity, $\rho$ the density of air, $F$ the area of the wing, $C_x$ the resistance to motion per unit area of wing, and $C_y$ the lift per unit area of the wing.
Figure 5. Here the glider is moving with speed $v$ at angle $\theta$ to the horizontal ($x$) axis.
So we get
$$\frac{dv}{dt} = -g\sin\theta - \frac{\rho FC_x}{2m}v^2, \qquad \frac{d\theta}{dt} = \frac{1}{v}\left(-g\cos\theta + \frac{\rho FC_y}{2m}v^2\right).$$
What are the essential parameters for the qualitative behavior? We have control of the units of distance and time. Use a new variable $y$ for $v$, such that $v = ky$, with $k$ constant:
$$\frac{dy}{dt} = -\frac{g}{k}\sin\theta - \frac{k\rho FC_x}{2m}y^2, \qquad \frac{d\theta}{dt} = \frac{1}{y}\left(-\frac{g}{k}\cos\theta + \frac{k\rho FC_y}{2m}y^2\right).$$
Also rescale time, $t = \alpha\tau$, so
$$\frac{dy}{d\tau} = \frac{dy}{dt}\frac{dt}{d\tau} = \alpha\frac{dy}{dt}, \qquad \frac{d\theta}{d\tau} = \alpha\frac{d\theta}{dt},$$
giving
$$\frac{dy}{d\tau} = -\frac{\alpha g}{k}\sin\theta - \frac{\alpha k\rho FC_x}{2m}y^2, \qquad \frac{d\theta}{d\tau} = \frac{1}{y}\left(-\frac{\alpha g}{k}\cos\theta + \frac{\alpha k\rho FC_y}{2m}y^2\right).$$
Pick $\alpha$ and $k$ so that $\dfrac{\alpha g}{k} = 1$ and $\dfrac{\alpha k\rho FC_y}{2m} = 1$. Thus
$$\alpha = \sqrt{\frac{2m}{g\rho FC_y}}, \qquad k = \sqrt{\frac{2mg}{\rho FC_y}},$$
and
$$\frac{dy}{d\tau} = -\sin\theta - \frac{C_x}{C_y}y^2, \qquad \frac{d\theta}{d\tau} = \frac{1}{y}\left(-\cos\theta + y^2\right).$$
5. Example, using the 2-body problem (Sept. 13)
$$m_1\dot{q}_1 = p_1, \qquad m_2\dot{q}_2 = p_2.$$
So in the $u$, $r$ variables,
$$\dot{u} = v, \qquad \dot{v} = 0, \qquad \dot{r} = s, \qquad \dot{s} = \ddot{q}_1 - \ddot{q}_2 = -\frac{G\mu r}{\|r\|^3},$$
where $\mu = m_1 + m_2$.
Remark: This is how masses of planets are determined: by looking at the orbits of their moons.
Definition 5.1. A constant of motion or integral for a differential equation $\dot{x} = f(x)$ is a function $h$ from the phase space to $\mathbb{R}$ such that $h(x(t))$ is constant for every solution.
For
$$\dot{r} = s, \qquad \dot{s} = -\frac{G\mu r}{\|r\|^3},$$
let $r = (x, y)$ and $s = (z, w)$; then
$$(17)\qquad H(x, y, z, w) = \frac{z^2 + w^2}{2} - \frac{G\mu}{\sqrt{x^2 + y^2}}$$
is a constant of motion (the energy).
Note: In fact,
$$\dot{x} = \frac{\partial H}{\partial z}, \qquad \dot{y} = \frac{\partial H}{\partial w}, \qquad \dot{z} = -\frac{\partial H}{\partial x}, \qquad \dot{w} = -\frac{\partial H}{\partial y}.$$
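As a quick check (my own computation, following the definition above), differentiate $H$ along a solution:
$$\frac{d}{dt}H = \frac{\partial H}{\partial x}\dot{x} + \frac{\partial H}{\partial y}\dot{y} + \frac{\partial H}{\partial z}\dot{z} + \frac{\partial H}{\partial w}\dot{w} = \frac{\partial H}{\partial x}\frac{\partial H}{\partial z} + \frac{\partial H}{\partial y}\frac{\partial H}{\partial w} - \frac{\partial H}{\partial z}\frac{\partial H}{\partial x} - \frac{\partial H}{\partial w}\frac{\partial H}{\partial y} = 0,$$
which is exactly the sense in which $H$ is an integral for a system of this Hamiltonian form.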
More generally, consider
$$\dot{x} = z, \qquad \dot{y} = w, \qquad \dot{z} = -\frac{G\mu x}{(\sqrt{x^2+y^2})^a}, \qquad \dot{w} = -\frac{G\mu y}{(\sqrt{x^2+y^2})^a}.$$
When $a = 3$ above, it is the Newtonian problem. Here
$$H(x, y, z, w) = \frac{z^2 + w^2}{2} - \frac{G\mu}{(a-2)(\sqrt{x^2+y^2})^{a-2}}.$$
Our goal is to see what happens near $r = 0$. First, change variables, using polar coordinates, "blowing up" the origin. Start with
$$(x, y) = (\rho^\gamma\cos\theta,\ \rho^\gamma\sin\theta),$$
with $\gamma$ a constant. We will also use complex notation, i.e. $x + iy = \rho^\gamma e^{i\theta}$. Also choose new velocity coordinates
$$z + iw = \rho^{-\beta}e^{i\theta}(u + iv).$$
Figure 6. This change of coordinates is singular at $\rho = 0$.
Now
$$z + iw = \dot{x} + i\dot{y} = \frac{d}{dt}\left(\rho^\gamma e^{i\theta}\right) = \gamma\rho^{\gamma-1}\dot\rho\,e^{i\theta} + i\rho^\gamma\dot\theta\,e^{i\theta},$$
and
$$\rho^{-\beta}e^{i\theta}(u + iv) = \gamma\rho^{\gamma-1}\dot\rho\,e^{i\theta} + i\rho^\gamma\dot\theta\,e^{i\theta},$$
so we get, finally,
$$\dot\rho = \frac{1}{\gamma}\rho^{1-\gamma-\beta}u, \qquad \dot\theta = \rho^{-(\gamma+\beta)}v.$$
Let $\gamma + \beta = 1$, and then solve for $\gamma$. In the Newtonian case $a = 3$, we get $\gamma = 2/3$, $\beta = 1/3$:
$$\dot\rho = \frac{3}{2}u, \qquad \dot\theta = \rho^{-1}v, \qquad \dot{u} = \rho^{-1}\left(-G\mu + \tfrac{1}{2}u^2 + v^2\right), \qquad \dot{v} = \rho^{-1}\left(-\tfrac{1}{2}uv\right).$$
Multiplying the vector field by $\rho$ (equivalently, rescaling time by $dt = \rho\,d\tau$, as in Section 4) and writing $'$ for $d/d\tau$ removes the factors of $\rho^{-1}$.
What do we have?
1. We have removed the singularity at $\rho = 0$, and extended the vector field to $\rho = 0$.
2. Also, the system partially decouples: there are no $\rho$'s in $\rho'$, $u'$, $v'$, and no $\theta$'s or $\rho$'s in $u'$ or $v'$.
Recall the energy in the new variables:
$$H(\rho, \theta, u, v) = \rho^{-2/3}\,\frac{u^2 + v^2}{2} - G\mu\,\rho^{-2/3},$$
since $x + iy = \rho^\gamma e^{i\theta} = \rho^{2/3}e^{i\theta}$ and $z + iw = \rho^{-\beta}e^{i\theta}(u+iv) = \rho^{-1/3}e^{i\theta}(u+iv)$, so
$$\rho^{2/3}\,H(\rho, \theta, u, v) = \frac{u^2 + v^2}{2} - G\mu.$$
In the rescaled time, the system is
$$\rho' = \tfrac{3}{2}\rho u, \qquad \theta' = v, \qquad u' = -G\mu + \tfrac{1}{2}u^2 + v^2, \qquad v' = -\tfrac{1}{2}uv.$$
So if $\rho(0) = 0$ then $\rho(\tau) = 0$ for all $\tau$. What this means is that we have added a boundary to the phase space.
So the torus will live in $(\theta, u, v)$-space with the restriction $u^2 + v^2 = 2G\mu$, and the vector field
$$\theta' = v, \qquad u' = \frac{v^2}{2}, \qquad v' = -\frac{1}{2}uv.$$
Suppose I understand what happens for this flow on the torus. The original problem was to study orbits which come close to collision, i.e. come close to $\rho = 0$, that is $x = y = 0$. An orbit coming close to $\rho = 0$ must behave almost the same as an orbit on $\rho = 0$, because of continuity of the solution flow.
$$u' = \frac{v^2}{2}, \qquad v' = -\frac{1}{2}uv.$$
7. Finishing analysis of collision manifold (Sept. 18)
We have added the boundary $\rho = 0$ to our phase space. $v = 0$ gives the rest points, and if $v \neq 0$, then $u' > 0$, i.e. $u$ is increasing.
$\theta$ increases if $v > 0$, decreases if $v < 0$. Since $u$ is always increasing, we can make $u$ the time variable. Since $u = u(t)$, we can think of $t = t(u)$, the inverse. So
$$\frac{d\theta}{du} = \frac{\theta'}{u'} = \frac{2}{v}, \qquad \frac{dv}{du} = \frac{v'}{u'} = -\frac{u}{v}.$$
Solving, we get
$$v\,dv = -u\,du, \qquad \frac{v^2}{2} = -\frac{u^2}{2} + C, \qquad v = \sqrt{C' - u^2}.$$
We already know that $u^2 + v^2 = 2G\mu$, so $C' = 2G\mu$. Thus
$$v = \sqrt{2G\mu - u^2}.$$
Plug in and solve for $\theta$:
$$\frac{d\theta}{du} = \frac{2}{\sqrt{2G\mu - u^2}}, \qquad \theta(u) = \int\frac{2}{\sqrt{2G\mu - u^2}}\,du.$$
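For completeness (my own evaluation of the integral just written, not in the original notes):
$$\theta(u) = \int\frac{2\,du}{\sqrt{2G\mu - u^2}} = 2\arcsin\!\left(\frac{u}{\sqrt{2G\mu}}\right) + \text{const},$$
so along an orbit on the collision manifold $\theta$ changes by a finite total amount ($2\pi$ under this normalization) as $u$ runs from $-\sqrt{2G\mu}$ to $\sqrt{2G\mu}$.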
Think about this flow. Why does it have rest points? In the original problem, we know that there are solutions which go to collision as $t \to t_0$. In the new variables, there must be solutions which go to $\rho = 0$ as $\tau \to \infty$.
The set of points which go to collision are the points which tend to rest points on the bottom:
$$\rho' = \tfrac{3}{2}\rho u, \qquad u' = \frac{v^2}{2} > 0.$$
A solution that goes to collision must have $u \leq 0$ for all $\tau$. The only way is to have the solution approach rest points on the bottom.
To pass close to collision means being on an orbit which is close to an orbit going to a rest point on $\rho = 0$. Follow the flow on $\rho = 0$ until near the top of $\rho = 0$, then leave near an ejection orbit. There are two ways to be close to collision in some direction $\theta$, "outside" or "inside" the sheet of orbits which go to collision.
Question: When the orbit comes close to collision, in what direction does it leave collision? This is the same question as "Given an orbit on $\rho = 0$, which as
CHAPTER 2

The Two Big Theorems

and, for $k = 1, \dots, n$,
$$f_k(x_1, x_2, \dots, x_n) = \sum_{l\in\mathbb{Z}_+^n}a^k_{l_1, l_2, \dots, l_n}x_1^{l_1}x_2^{l_2}\cdots x_n^{l_n}.$$
So, if, for example, $n = 2$, expanding about $0$,
$$f(x_1, x_2) = a_{00} + a_{10}x_1 + a_{01}x_2 + a_{02}x_2^2 + a_{11}x_1x_2 + a_{20}x_1^2 + \dots,$$
and the $f_k$'s have nonzero radius of convergence at each point. If we expand about $x^0 = (x^0_1, \dots, x^0_n)$,
$$f(x_1, x_2) = a_{00} + a_{10}(x_1 - x^0_1) + a_{01}(x_2 - x^0_2) + \cdots$$
Fix $x_0 \in U$. Then there exists $x(t)$, where $x : (-\epsilon, \epsilon) \to U$ for some $\epsilon > 0$, such that
1. $x(0) = x_0$,
2. $x(t)$ is a solution, and
3. if $x(t) = (x_1(t), \dots, x_n(t))$, then
$$x_k(t) = \sum_{l=0}^{\infty}\alpha_{kl}t^l$$
with radius of convergence at least $\epsilon$.
Idea of Proof. This comes from Siegel and Moser's Celestial Mechanics.
Assume $f$ is as above. Fix $x_0 \in U$; then there are $r > 0$ and $M$ such that for all $k$,
$$|f_k(x)| < M \quad\text{for } |x - x_0| < r.$$
This gives a "maximum speed" for the solution. Fix $\epsilon = \frac{r}{(n+1)M}$. This is how long (at least) we expect the solution to exist. Look for a solution $x(t)$ with $x(0) = x_0$ and $\|x(t) - x_0\| < r$ for $t \in (-\epsilon, \epsilon)$.
The steps in the proof:
1. Change variables so that $x_0 = 0$, $r = M = 1$. (We can rescale $x$ to get $r = 1$, and rescale time to get $M = 1$.)
2. Solve formally: write, for $k = 1, \dots, n$,
$$x_k(t) = \sum_{m=0}^{\infty}\alpha_{km}t^m.$$
Plug this into $\dot{x} = f(x)$ and solve for the $\alpha$'s. For example:
$$\dot{x}_1(t) = \sum_{m=1}^{\infty}m\alpha_{1m}t^{m-1} = f_1(x_1, \dots, x_n) = \sum_{l\in\mathbb{Z}_+^n}a^1_{l_1,\dots,l_n}x_1^{l_1}\cdots x_n^{l_n} = \sum_{l}a^1_{l_1,\dots,l_n}\left(\sum_{m=0}^\infty\alpha_{1m}t^m\right)^{l_1}\!\!\cdots\left(\sum_{m=0}^\infty\alpha_{nm}t^m\right)^{l_n}.$$
Equating coefficients of like powers of $t$:
$$\alpha_{10} = \alpha_{20} = \cdots = \alpha_{n0} = 0,$$
$$\alpha_{11}\ (\text{constant on left}) = a^1_{00\cdots0}\ (\text{constant on right}),$$
$$2\alpha_{12} = \alpha_{11}a^1_{10\cdots0} + \alpha_{21}a^1_{01\cdots0} + \cdots.$$
In general, we find that each $\alpha_{km}$ is a polynomial in the $a^k_{l_1,\dots,l_n}$ with $l_1, \dots, l_n < m$.
3. We have a formal solution by solving for the $\alpha_{km}$'s in terms of the $a$'s. Does this power series for $x_k(t)$ converge? (If yes, then it must be the solution.)
To show that the $x_k(t)$'s converge, use the "method of majorants". This is essentially the Comparison Test: we find a power series that converges and that has coefficients bigger than those of the $x_k(t)$'s.
To build the new power series (the majorant), look at
$$\dot{y} = g(y), \qquad g_k(y) = \sum_{l}b^k_{l_1,\dots,l_n}y_1^{l_1}\cdots y_n^{l_n},$$
with $|a^k_{l_1,\dots,l_n}| \leq b^k_{l_1,\dots,l_n}$. Then claim: if $y_k(t) = \sum_m\beta_{km}t^m$ is a solution with $y(0) = 0$, then for all $k, m$, $|\alpha_{km}| \leq \beta_{km}$.
Note: The equations for the $\beta$'s are the same as the equations for the $\alpha$'s, with the $a$'s replaced by $b$'s. The equation for $\alpha_{km}$ is a polynomial in the $a$'s with all positive coefficients. So replacing the $a$'s with bigger $b$'s will make the $\beta$'s bigger than the $\alpha$'s.
4. Find a nice $g$ whose solution we know. A fact from complex analysis: because $|f_k| < 1\ (= M)$ on the ball of radius $1\ (= r)$, we know
$$\left|a^k_{l_1,\dots,l_n}\right| \leq 1.$$
So let all the $b^k$'s be $1$. Solve
$$\dot{y}_k = g_k(y) = \sum_{l_1,\dots,l_n}y_1^{l_1}y_2^{l_2}\cdots y_n^{l_n}.$$
Note: each component of $g(y)$ is the same.
$$\sum_{l_1,\dots,l_n}y_1^{l_1}y_2^{l_2}\cdots y_n^{l_n} = 1 + y_1 + y_2 + \cdots + y_n + y_1^2 + y_1y_2 + y_1y_3 + \cdots$$
$$(25)\qquad = \prod_{r=1}^{n}(1 - y_r)^{-1},$$
since $1 + y + y^2 + \cdots = \frac{1}{1-y}$. So
$$\dot{y}_1 = \prod_{r=1}^{n}(1 - y_r)^{-1}$$
with initial condition $y(0) = 0$. The solution must have $y_1(t) = y_2(t) = \cdots$, so
$$\dot{y} = (1 - y)^{-n}, \qquad y(0) = 0.$$
Solving,
$$y_k(t) = y(t) = 1 - \bigl(1 - (n+1)t\bigr)^{\frac{1}{n+1}},$$
and each $y_k(t)$ is a convergent power series for $|t| < \frac{1}{n+1}$, with $|y| < 1$. So the $x_k(t)$'s converge in at least the same size ball.
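As a quick check (my own verification): with $y(t) = 1 - (1-(n+1)t)^{1/(n+1)}$ we have $1 - y = (1-(n+1)t)^{1/(n+1)}$, and
$$\dot{y} = \bigl(1-(n+1)t\bigr)^{\frac{1}{n+1}-1} = (1-y)^{1-(n+1)} = (1-y)^{-n}, \qquad y(0) = 0,$$
as required.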
3. Some more proof of the Existence-Uniqueness Theorem (Sept. 27)
Example:
$$\frac{dx}{dt} = a_1 + a_2x + a_3y + a_4x^2 + a_5xy + a_6y^2, \qquad \frac{dy}{dt} = b_1x + b_2y.$$
Assume $x(t) = \sum_0^\infty\alpha_mt^m$, $y(t) = \sum_0^\infty\beta_mt^m$, with initial condition $\alpha_0 = \beta_0 = 0$. Plug in:
$$\sum m\alpha_mt^{m-1} = a_1 + a_2\sum\alpha_mt^m + a_3\sum\beta_mt^m + a_4\Bigl(\sum\alpha_mt^m\Bigr)^2 + a_5\sum\alpha_mt^m\sum\beta_mt^m + a_6\Bigl(\sum\beta_mt^m\Bigr)^2,$$
$$\sum m\beta_mt^{m-1} = b_1\sum\alpha_mt^m + b_2\sum\beta_mt^m.$$
So we need to equate coefficients, and we find that the $\alpha_m$'s and the $\beta_m$'s are polynomials in the $a$'s and $b$'s with positive coefficients. So increasing the $a$'s and $b$'s in absolute value increases the $\alpha$'s and the $\beta$'s.
For now, we assume that we are given $f : U \to \mathbb{R}^n$ Lipschitz with constant $L$, i.e.
$$\frac{\|f(x_1) - f(x_2)\|}{\|x_1 - x_2\|} \leq L.$$
To attack the uniqueness problem, look at the "separation problem". Suppose we are given two initial conditions $x_1, x_2 \in U$, and suppose we have two solutions $x_1(t)$, $x_2(t)$ with $x_i(0) = x_i$. Can we get an estimate on the distance $\|x_1(t) - x_2(t)\|$?
The only thing we know is that $\dot{x}_i = f(x_i(t))$, so
$$(26\text{--}27)\qquad \left\|\frac{d}{dt}\bigl(x_1(t) - x_2(t)\bigr)\right\| = \|f(x_1(t)) - f(x_2(t))\| \leq L\|x_1(t) - x_2(t)\|.$$
Let $z = \|x_1(t) - x_2(t)\|$. We "sort of" have that $\dot{z} \leq Lz$. (We don't exactly, because $\bigl\|\frac{d}{dt}(x_1(t) - x_2(t))\bigr\| \neq \frac{d}{dt}\|x_1(t) - x_2(t)\|$.)
The worst case would be $\dot{z} = Lz$, i.e. $z(t) = z(0)e^{Lt}$. This means that how fast solutions separate depends on $L$, i.e. on how fast $f$ changes from point to point.
Change this to an integral equation:
$$\int_0^t\dot{x}_i(\tau)\,d\tau = \int_0^tf(x_i(\tau))\,d\tau,$$
$$x_i(t) - x_i(0) = \int_0^tf(x_i(\tau))\,d\tau, \qquad x_i(t) = x_i + \int_0^tf(x_i(\tau))\,d\tau.$$
Then we have
$$(28)\qquad \|x_1(t) - x_2(t)\| = \left\|x_1 - x_2 + \int_0^t\bigl(f(x_1(\tau)) - f(x_2(\tau))\bigr)\,d\tau\right\|$$
$$(29)\qquad \leq \|x_1 - x_2\| + \int_0^t\|f(x_1(\tau)) - f(x_2(\tau))\|\,d\tau$$
$$(30)\qquad \leq \|x_1 - x_2\| + \int_0^tL\|x_1(\tau) - x_2(\tau)\|\,d\tau.$$
Theorem 3.1 (Gronwall's Inequality). If $\alpha, \beta > 0$ and $\phi : \mathbb{R} \to [0, \infty)$ is continuous and
$$\phi(t) \leq \alpha + \beta\int_0^t\phi(\tau)\,d\tau,$$
then
$$\phi(t) \leq \alpha e^{\beta t}.$$
The proof is left as an exercise.
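A standard argument (my own sketch, since the notes leave the proof as an exercise): let $\Psi(t) = \alpha + \beta\int_0^t\phi(\tau)\,d\tau$. Then $\Psi' = \beta\phi \leq \beta\Psi$, so $\frac{d}{dt}\bigl(e^{-\beta t}\Psi(t)\bigr) = e^{-\beta t}(\Psi' - \beta\Psi) \leq 0$; hence $e^{-\beta t}\Psi(t) \leq \Psi(0) = \alpha$, and therefore $\phi(t) \leq \Psi(t) \leq \alpha e^{\beta t}$.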
So this lemma gives us that
$$\|x_1(t) - x_2(t)\| \leq \|x_1 - x_2\|\,e^{Lt}.$$
Corollary 3.1 (Uniqueness). Let $x_1(t)$, $x_2(t)$ be solutions with $x_1(0) = x_2(0)$. Then $\|x_1(t) - x_2(t)\| \leq 0$, i.e. $x_1(t) = x_2(t)$ for all $t$.
Corollary 3.2. If $\phi : \mathbb{R}\times U \to U$ is the local flow solution for $\dot{x} = f(x)$, then $\phi$ is continuous in $x$.
Proof: Fix $t$. Then
$$\|\phi_t(x_1) - \phi_t(x_2)\| \leq e^{Lt}\|x_1 - x_2\|.$$
4. Picard iteration (Sept. 30)
$$\gamma_1(t) = x_0 + \int_0^tf(\gamma_0(\tau))\,d\tau.$$
So we define an operator $\Gamma : \gamma_0 \mapsto \gamma_1$, where
$$[\Gamma(\gamma_0)](t) = x_0 + \int_0^tf(\gamma_0(\tau))\,d\tau.$$
A solution would be a $\gamma$ such that $\Gamma(\gamma) = \gamma$, so $\gamma$ would be a fixed point of $\Gamma$. We hope that there is only one fixed point, and that $\Gamma^n(\gamma_0) \to \gamma$.
So we assume that $f : U \to \mathbb{R}^n$ is Lipschitz with constant $L$. We must set up a domain for $\Gamma$, and show that $\Gamma$ has fixed points.
Note: $\gamma : [-\delta, \delta] \to U$ for some $\delta > 0$.
$$\|\Gamma(\gamma_1) - \Gamma(\gamma_2)\| < \kappa\|\gamma_1 - \gamma_2\|.$$
Thus if $\gamma_0 \in C([-\delta, \delta], U, x_0)$, we can construct $\gamma_{n+1} = \Gamma(\gamma_n)$, so that
$$\|\gamma_n - \gamma_{n+1}\| = \|\Gamma^n(\gamma_0) - \Gamma^n(\gamma_1)\| < \kappa^n\|\gamma_0 - \gamma_1\|.$$
So if $m > n$,
$$\|\gamma_m - \gamma_n\| \leq \|\gamma_n - \gamma_{n+1}\| + \cdots + \|\gamma_{m-1} - \gamma_m\| \leq (1 + \kappa + \kappa^2 + \cdots + \kappa^{m-n})\|\gamma_n - \gamma_{n+1}\| \leq \frac{1 - \kappa^{m-n+1}}{1 - \kappa}\,\kappa^n\|\gamma_0 - \gamma_1\|.$$
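To make the iteration concrete, here is a minimal numerical sketch (my own illustration, not part of the notes) of Picard iteration for the scalar problem $\dot{x} = x$, $x(0) = 1$, whose iterates are the Taylor polynomials of $e^t$; the discretization of the integral is a simple trapezoid rule, and the names are illustrative.
\begin{verbatim}
import numpy as np

def picard_iterate(f, x0, t, n_iter):
    """Apply the Picard operator Gamma[g](t) = x0 + int_0^t f(g(s)) ds
    n_iter times, starting from the constant curve g(t) = x0.
    t is an increasing grid starting at 0; integrals use the trapezoid rule."""
    g = np.full_like(t, x0, dtype=float)
    for _ in range(n_iter):
        integrand = f(g)
        # cumulative trapezoid integral from 0 up to each grid point
        integral = np.concatenate(([0.0], np.cumsum(
            0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))))
        g = x0 + integral
    return g

t = np.linspace(0.0, 1.0, 201)
approx = picard_iterate(lambda x: x, 1.0, t, n_iter=8)
print(abs(approx[-1] - np.e))  # small: the iterates converge to e^t on [0, 1]
\end{verbatim}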
We will try to classify the nonlinear behavior in the same way: linearize, and try to describe the behavior near $x = 0$. Now we can start classifying.
Definition 5.4. An isolated invariant set $I$ is called an attractor if there exists an isolating neighborhood $N$ (so $I$ is the maximal invariant set in $N$) such that for all $x \in N$, $\phi(t, x) \in N^o$ for all $t > 0$. Such an $N$ is called an attractor block. An attracting fixed point is called a sink.
Lemma 5.2. If $N$ is an attractor block for $I$, then for all $x \in N$, $\omega(x) \subset I$.
Proof: Note that $\omega(x)$ is an invariant set (exercise), so $\omega(x) \subset I$, since $I$ is maximal in $N$ and $\omega(x) \subset N$.
In a weak sense, sinks are all the same, as we see by the following
Theorem 5.2. If $\phi$, $\psi$ are flows on $\mathbb{R}^n$ with $N \subset \mathbb{R}^n$ an attractor block for both $\phi$ and $\psi$, with maximal invariant sets the fixed points $x_0$ for $\phi$ and $y_0$ for $\psi$, then $\phi|_N$ and $\psi|_N$ are conjugate, i.e. there is a homeomorphism $h : N \to N$ such that
$$\psi(t, h(x)) = h(\phi(t, x)).$$
Figure 5. A sink.
There are two big steps:
1. Compare $\dot{x} = f(x)$ to $\dot{x} = Ax$ where $A = Df|_0$.
2. Show attractor block results for $\dot{x} = Ax$.
1. Write $\dot{x} = f(x) = Ax + g(x)$ where $g(x) = f(x) - Ax$. Then there is a $K > 0$ such that for all $\|x\| < 1$,
$$\|g(x)\| \leq K\|x\|^2.$$
We know this is true in one dimension for $C^2$ functions by Taylor's Remainder Theorem. For each $x_0$ with $\|x_0\| = 1$, look at the map
$$s \mapsto f(sx_0) - A(sx_0);$$
each component satisfies the 1-dimensional Taylor theorem. So
$$\|f(sx_0) - A(sx_0)\| < K_{x_0}s^2.$$
$K_{x_0}$ is determined by $\frac{d^2}{ds^2}f(sx_0)$, which is determined by $D^2f$. But all second partials of $f$ are uniformly bounded in the unit ball, so we can replace $K_{x_0}$ by $K$.
Now the nilpotent part contributes
$$\epsilon\begin{pmatrix}0 & 1 & & \\ & 0 & \ddots & \\ & & \ddots & 1\\ & & & 0\end{pmatrix}z\cdot z \leq \epsilon\|z\|^2.$$
So provided that $\epsilon < |\lambda|/2$, then $B_1z\cdot z \leq \lambda\|z\|^2 + \epsilon\|z\|^2 < 0$, because $\lambda < 0$.
3.
$$B_i = \begin{pmatrix}a & -b\\ b & a\end{pmatrix}, \qquad (B_iz)\cdot z = a(z_1^2 + z_2^2) \leq 0.$$
4.
$$B_i = \begin{pmatrix}a & -b & & \\ b & a & \epsilon & \\ & & \ddots & \\ & & & \ddots\end{pmatrix}, \qquad (B_iz)\cdot z \leq a\|z\|^2 + \epsilon\|z\|^2,$$
which is $< 0$ provided that $\epsilon < |a|/2$.
So provided that $\epsilon$ is sufficiently small (less than half the absolute value of the real part of the eigenvalue closest to the imaginary axis), then $Bz\cdot z < \frac{\lambda}{2}\|z\|^2 < 0$; i.e. if the off-diagonal terms of $B$ are sufficiently small, then $Bz\cdot z$ stays close to $(\operatorname{Diag}B)z\cdot z < 0$.
But we wanted
$$(Bz + g_2(z))\cdot z < 0.$$
Note that
$$|g_2(z)\cdot z| \leq K_2\|z\|^2\,\|z\| = K_2\|z\|^3,$$
provided that $\|z\| < \delta_2$. Thus we have
$$(Bz + g_2(z))\cdot z = Bz\cdot z + g_2(z)\cdot z < \frac{\lambda}{2}\|z\|^2 + K_2\|z\|^3 = \|z\|^2\left(\frac{\lambda}{2} + K_2\|z\|\right).$$
So provided that $r < |\lambda|/(2K_2)$, we have
$$(Bz + g_2(z))\cdot z < 0.$$
Pull back to the original coordinates: we get a family of attractor blocks about $x = 0$ in the original variables. Thus $0$ is a sink.
Remark: We showed that two sinks are conjugate by a homeomorphism. We didn't expect the conjugacy to be smooth, because then the eigenvalues would have to be the same.
Figure 8. This is a saddle point; directions along one eigenspace are stable and along the other unstable.
We know from part 1 that near $0$, the phase space of the solution to $\dot{x} = f(x)$ "looks like" this up to homeomorphism. We want to show that the set of points whose orbits go to $0$ as $t \to \infty$, or as $t \to -\infty$, are nice smooth manifolds, i.e. recover some of the structure of the linear flow by restricting to solutions that go to $0$ as $t \to \pm\infty$.
Figure 10. These are not allowed. The first has a cusp and the second intersects itself.
Figure 11. These are allowed. The manifold can limit onto itself.
We say that $X$ is $C^k$ if $g$ is $C^k$.
An alternate definition:
Definition 7.2. $X \subset \mathbb{R}^n$ is an immersed $m$-submanifold if for every $x_0 \in X$ there is a neighborhood $U$ of $x_0$ in $\mathbb{R}^n$ such that, if $V$ is the component of $X \cap U$ containing $x_0$, then $V$ is the graph of a smooth function from a neighborhood of $0$ in $\mathbb{R}^m$ to $\mathbb{R}^{n-m}$, i.e. we can choose coordinates in $U$ such that
$$V = \{(x_1, x_2, \dots, x_m, \sigma(x_1, x_2, \dots, x_m)) : (x_1, x_2, \dots, x_m, 0, \dots, 0) \in U\}$$
and $\sigma$ is differentiable.
$V$ is a graph of a smooth function, so the only possible weirdness is "global" rather than local, i.e. if you are inside $X$, it looks like $\mathbb{R}^m$ locally.
$$x(t) = e^{(Df|_0)t}x_0.$$
For example, in $\mathbb{R}^2$,
$$Df|_0 = \begin{pmatrix}\lambda & 0\\ 0 & \mu\end{pmatrix}, \qquad \lambda < 0 < \mu.$$
So
$$\begin{pmatrix}x_1(t)\\x_2(t)\end{pmatrix} = \exp\begin{pmatrix}\lambda t & 0\\ 0 & \mu t\end{pmatrix}\begin{pmatrix}x_1(0)\\x_2(0)\end{pmatrix} = \begin{pmatrix}e^{\lambda t}x_1(0)\\ e^{\mu t}x_2(0)\end{pmatrix}.$$
So $W^s(0) = E^s$ and $W^u(0) = E^u$.
Given a (nonlinear) $f$ as above, the sets $W^s(0)$, $W^u(0)$ are immersed $C^k$-submanifolds, with $W^s(0)$ tangent to $E^s$ at $0$ and $W^u(0)$ tangent to $E^u$ at $0$.
Figure 14. The stable and unstable manifolds are tangent to the linear subspaces $E^s$ and $E^u$.
The local version of the Stable/Unstable Manifold Theorem says that there exists a neighborhood $V$ of $x_0$ such that $W^u(0; V)$ is the graph of a $C^k$ function $\sigma : E^u\cap V \to E^s$, i.e.
$$W^u(0; V) = \{\underbrace{(x_1, x_2, \dots, x_u, 0, \dots, 0)}_{\in E^u} + \underbrace{\sigma(x_1, x_2, \dots, x_u, 0, \dots, 0)}_{\in E^s}\},$$
where $\sigma(0) = 0$, $D\sigma|_0 = 0$.
Similarly, for $W^s(0; V)$, there exists a $C^k$ function $\gamma : E^s\cap V \to E^u$, where $W^s(0; V)$ is the graph of $\gamma$, $\gamma(0) = 0$, $D\gamma|_0 = 0$.
Moreover, if $f_\mu$ depends smoothly on $\mu$, then so do $W^s$, $W^u$ (i.e. for $\mu$ near $\mu_0$, $f_\mu$ has a rest point near the rest point of $f_{\mu_0}$, and the stable/unstable manifolds depend smoothly on $\mu$), as long as the fixed point of $f_\mu$ is hyperbolic.
Moreover, if $z \in V$, $z \notin W^s(0; V)$, then for some $t > 0$, $\phi(t, z) \notin V$; and if $z \notin W^u(0; V)$, then for some $t < 0$, $\phi(t, z) \notin V$.
2 = 2b20
3 = 2a2023+ b11 2 + b30
9. More S/U Theorem (analytic version) (Oct. 16)
We can solve for those $\sigma$'s formally.
1. Do they converge?
2. Are solutions on the graph of $\sigma$?
3. Unique?
Try the $C^k$ version: $\dot{x} = f(x)$, $x \in \mathbb{R}^2$, $f(0) = 0$. Look at the time-$T$ map associated to the solution flow $\phi$ for $\dot{x} = f(x)$. Let $F_T(x) = \phi(x, T)$. $F_T^{-1}$ exists, since $F_T^{-1}(x) = \phi(x, -T)$. $F_T$ is as differentiable as $\phi$ is in $x$. $F_T(0) = 0$, a rest point.
$$F_T^n(x) = \underbrace{F_T\circ F_T\circ\cdots\circ F_T}_{n\text{ times}}(x) = \phi(x, nT)$$
by the group property. Let
$$\mathcal{O}_{F_T}(x) = \{F_T^n(x) : n \in \mathbb{Z}\}.$$
Lemma 9.1. Let $W^s_\phi(0)$ be the stable set of $0$ for $\phi$ and $W^s_{F_T}(0)$ be the stable set of $0$ for $F_T$, $T > 0$. Then
$$W^s_\phi(0) = W^s_{F_T}(0).$$
Proof: Clearly, $W^s_\phi(0) \subset W^s_{F_T}(0)$, because if $x \in W^s_\phi(0)$ then $\phi(x, t) \to 0$ as $t \to \infty$, which means that $\phi(x, nT) \to 0$ as $n \to \infty$.
If $x \in W^s_{F_T}(0)$ then $F_T^n(x) \to 0$ as $n \to \infty$, i.e. $\phi(x, nT) \to 0$ as $n \to \infty$. Note that for all $\epsilon > 0$ there is a $\delta > 0$ such that if $z \in \mathbb{R}^2$ with $\|z\| < \delta$, then $\|\phi(z, t)\| < \epsilon$ for $0 \leq t \leq T$. Fix $\epsilon > 0$, and choose $N$ so large that $\|\phi(x, nT)\| < \delta$ for all $n \geq N$; then $\|\phi(x, nT + s)\| < \epsilon$ for $0 \leq s \leq T$, so that $\|\phi(x, t)\| < \epsilon$ for all $t > NT$.
So we can deal with $W^s_\phi(0)$ and $W^s_{F_T}(0)$ interchangeably.
What is a hyperbolic fixed point for a map? Suppose we have the same situation as above for $\dot{x} = f(x)$, with
$$Df|_0 = \begin{pmatrix}\lambda & 0\\ 0 & \mu\end{pmatrix}.$$
How do we compute $DF_T$? Consider the variational system
$$\dot{x} = f(x), \qquad \dot{X} = Df|_{x(t)}X,$$
where $X$ is an $n\times n$ matrix, with initial conditions $x(0) = x_0$ and $X(0) = I$. If $x(t), X(t)$ is a solution, then
$$X(t) = D_x\phi(x_0, t).$$
Recall, this comes from differentiating twice:
$$\frac{d}{dt}D_x\phi = D_x\left(\frac{d\phi}{dt}\right) = D_xf|_{\phi(x,t)}\,D_x\phi(x, t).$$
Suppose we take $x(0) = 0$ and start at the rest point. Solving the $\dot{x} = f(x)$ part, we have $x(t) \equiv 0$. So
$$\dot{X} = Df|_{x(t)}X = Df|_0X,$$
and we get
$$\dot{X} = \begin{pmatrix}\lambda & 0\\ 0 & \mu\end{pmatrix}X, \qquad X(0) = I.$$
So
$$X(t) = \exp\begin{pmatrix}\lambda t & 0\\ 0 & \mu t\end{pmatrix}, \qquad D_xF_T|_0 = \exp\begin{pmatrix}\lambda T & 0\\ 0 & \mu T\end{pmatrix} = \begin{pmatrix}e^{\lambda T} & 0\\ 0 & e^{\mu T}\end{pmatrix}.$$
The eigenvalues of $DF_T|_0$ are $e^{(\text{eigenvalues of }Df|_0)T}$. So if $\lambda$ is an eigenvalue of $Df|_0$ with real part $> 0$, then the corresponding eigenvalue for $DF_T|_0$ is $e^{\lambda T}$ with $|e^{\lambda T}| > 1$. If $\lambda$ is an eigenvalue of $Df|_0$ with real part $< 0$, then the corresponding eigenvalue for $DF_T|_0$ is $e^{\lambda T}$ with $|e^{\lambda T}| < 1$.
For small boxes, the map will look a lot like its linear part. Note that $F(W^u(0)) = W^u(0)$. Set up a map on graphs in the box and look for fixed points (the Graph Transform Method).
10. $C^k$ version of S/U Theorem (Oct. 18)
Lemma 10.1. Suppose $F$ is a $C^k$-diffeomorphism and $F(0) = 0$ is a hyperbolic fixed point. If $W^u(0; V)$ is the graph of a $C^k$ function, then $W^u(0)$ is a $C^k$ immersed submanifold, i.e. local implies global.
Proof: Recall that our definition of a $C^k$ immersed submanifold is that a neighborhood of each point in the manifold is (in nice coordinates) a $C^k$ graph. So pick $z \in W^u(0)$, i.e. $F^{-n}(z) \to 0$ as $n \to \infty$. So for any neighborhood $V$ of $0$, there is an $N$ such that $F^{-n}(z) \in V$ for all $n \geq N$. So $F^{-N}(z) \in W^u(0; V)$, then $F^{-N}(z)$
Take $\delta_v$ small.
For the top, we need
$$|\mu\delta + g_2(x, \delta)| < \delta \quad\text{for } -\delta < x < \delta,$$
i.e. $|g_2(x, \delta)| < (1 - \mu)\delta$, so
$$K_1(\sqrt{2}\delta)^2 < (1 - \mu)\delta, \qquad \delta < \frac{1 - \mu}{2K_1} \quad(\mu < 1).$$
So choose $\delta_v$ to be less than all of these.
Next, choose our set of graphs. Let
$$\mathcal{G}_L = \{\sigma : E^u\cap V \to E^s \mid |\sigma(x_1) - \sigma(x_2)| \leq L|x_1 - x_2| \text{ for all } x_1, x_2 \in [-\delta_v, \delta_v]\},$$
and assume that $L < 1$. Let
$$G_L = \{A \subset V \mid A \text{ is the graph of some } \sigma \in \mathcal{G}_L\}.$$
Define the graph transform: for $A \in G_L$, let
$$F_\#(A) = \{F(x, y) \mid (x, y) \in A\}\cap V.$$
We need that $F_\#(A) \in G_L$. We will first show that $F_\#(A)$ is the graph of some function, and then that this function is in $\mathcal{G}_L$.
Claim: for fixed $A \in G_L$, for each $x \in [-\delta_v, \delta_v]$, there is a $y \in [-\delta_v, \delta_v]$ such that $(x, y) \in F_\#(A)$.
The graph $A$ is a curve connecting the left side of $V$ to the right side. So $F(A)$ does the same thing for $F(V)$. So by the conditions on $F(V)$, we know that for each $x \in [-\delta_v, \delta_v]$ there is a $y \in [-\delta_v, \delta_v]$ such that $(x, y) \in F_\#(A)$. If the vertical line through $x$ didn't intersect $F_\#(A)$, this would be a contradiction. But what if both $(x, y_1)$ and $(x, y_2) \in F_\#(A)$?
Look at $F^{-1}(x, y_1)$, $F^{-1}(x, y_2)$ in $A$. We can write
$$F^{-1}(x, y) = (x/\lambda,\ y/\mu) + (\eta_1(x, y),\ \eta_2(x, y)),$$
where
$$DF^{-1}|_0 = (DF|_0)^{-1} = \begin{pmatrix}1/\lambda & 0\\ 0 & 1/\mu\end{pmatrix}.$$
Claim: For $\delta_v$ sufficiently small, there is an $r < 1$ such that if $(x, y) \in C_0$ and $(x_1, y_1) = F^{-1}(x, y)$, then $|x_1| < r|x|$.
Proof: We know that $x_1 = x/\lambda + \eta_1(x, y)$. Then
$$|\eta_1(x, y)| = |\eta_1(x, y) - \eta_1(0, 0)| \leq \epsilon|x| + \epsilon|y| \leq \epsilon|x| + \epsilon L|x|.$$
So
$$|x_1| \leq \frac{|x|}{\lambda} + (\epsilon + \epsilon L)|x| = \left(\frac{1}{\lambda} + \epsilon + \epsilon L\right)|x|.$$
Since $1/\lambda < 1$, take $\delta_v$ small so that
$$\frac{1}{\lambda} + \epsilon + \epsilon L = r < 1.$$
We know that $\sigma^u(0) = 0$, because for any $\sigma \in \mathcal{G}_L$, $F_\#^n(\sigma) \to \sigma^u$, and if $\sigma(0) = 0$, then the value of $F_\#(\sigma)$ at $0$ is also $0$. And $\sigma^u \in \mathcal{G}_L$, so the graph of $\sigma^u$ is in $C_0$, and if $(x, y) \in \operatorname{graph}\sigma^u$ then $F^{-1}(x, y) \in \operatorname{graph}\sigma^u$. So $F^{-n}(x, y) \in \operatorname{graph}\sigma^u \subset C_0$ for all $n \geq 0$.
So the $x$-coordinate of $F^{-n}(x, y) \to 0$; but the graph is Lipschitz, so the $y$-coordinate $\to 0$ also. Thus the graph of $\sigma^u$ is contained in $W^u(0; V)$.
If you're not in the graph, the vertical distance increases, so eventually you leave $V$.
12. Smoothness (Oct. 28)
Start with $F : \mathbb{R}^2 \to \mathbb{R}^2$, a diffeomorphism, $F(0) = 0$, and
$$DF|_0 = \begin{pmatrix}\lambda & 0\\ 0 & \mu\end{pmatrix}, \qquad \lambda > 1 > \mu > 0.$$
Lift to the unit tangent bundle
$$T^1\mathbb{R}^2 = \{((x, y), v) \mid v \in \mathbb{R}^2, \|v\| = 1\}$$
and
$$\tilde{F}((x, y), v) = \left(F(x, y),\ \frac{DF|_{(x,y)}(v)}{\|DF|_{(x,y)}(v)\|}\right).$$
At $x = y = 0$, the eigendirections at $(x, y) = (0, 0)$ give the fixed points of $\tilde{F}$. Here $\theta = 0$ is stable in the $\theta$-direction.
Compute $D\tilde{F}|_{((0,0),0)}$:
$$\begin{pmatrix}\lambda & 0 & 0\\ 0 & \mu & 0\\ 0 & 0 & ?\end{pmatrix}$$
At $x = y = 0$, what does $\tilde{F}$ do to $((0, 0), \theta)$? Note: $\theta$ corresponds to the direction $(\cos\theta, \sin\theta)$.
Figure 25. The unstable manifold at $((0, 0), 0)$. The picture may be a bit unclear: the $\theta$-axis moves back into the third dimension. Recall that there are fixed points on the $\theta$-axis, alternating between sink and source.
$R$ is the higher order terms, so we can forget about it when we take the limit, so we get
$$\lim_{n\to\infty}\frac{c(x_n - x) + d(\sigma(x_n) - \sigma(x))}{a(x_n - x) + b(\sigma(x_n) - \sigma(x))};$$
thus
$$\tilde{F} : C(x) \to C(x_1).$$
then
$$E_1 = \{(x_1, 0, \dots, 0)\}, \qquad E_2 = \{(0, x_2, x_3, \dots, 0)\}, \ \dots$$
The $E_i$ are invariant subspaces for $DF|_0$.
For $r \leq 0$, there is an invariant manifold $W^s_r(0)$ tangent to $E_1\oplus\cdots\oplus E_m$, where $\operatorname{Re}\lambda_1, \operatorname{Re}\lambda_2, \dots, \operatorname{Re}\lambda_m \leq r$.
First, $\dim W^s_r(0) = \dim(E_1\oplus\cdots\oplus E_m)$. Also, if $r < 0$ and $r < \operatorname{Re}\lambda_i < 0$ for some $i$, then $W^s_r(0)$ is called a strong stable manifold and denoted $W^{ss}_r(0)$. If $r = 0$ and there is some eigenvalue $\lambda_i$ with $\operatorname{Re}\lambda_i = 0$, then $W^s_r(0)$ is called the center stable manifold and denoted $W^{cs}_r(0)$.
Similarly for $r \geq 0$: we get invariant manifolds associated with eigenvalues $\lambda_j$ with $\operatorname{Re}\lambda_j \geq r$. If $r > 0$ and there is a $\lambda_j$ with $0 < \operatorname{Re}\lambda_j < r$, then $W^u_r(0)$ is called the strong unstable manifold and denoted $W^{su}_r(0)$. If $r = 0$ and there is a $\lambda_j$ with $\operatorname{Re}\lambda_j = 0$, then $W^u_r(0)$ is called the center unstable manifold and denoted $W^{cu}_r(0)$.
A center manifold is defined as
$$W^{cs}_r(0)\cap W^{cu}_r(0),$$
and is denoted $W^c(0)$. (It is tangent to the eigenspaces for eigenvalues with real part $0$.)
Remark: The proofs use the same sort of ideas (set contraction).
Example:
$$\dot{x} = x^2, \qquad \dot{y} = -y,$$
$$F\begin{pmatrix}x\\y\end{pmatrix} = \begin{pmatrix}x^2\\ -y\end{pmatrix}, \qquad DF|_0 = \begin{pmatrix}0 & 0\\ 0 & -1\end{pmatrix}.$$
CHAPTER 3

Using maps to understand flows

1. Periodic orbits and Poincaré sections (Nov. 1)

Figure 2. If the initial condition is far enough away, it may not actually return to $\Sigma$.
3. By transversality, we have
$$\dim W^s(\mathcal{O}(x_0)) + \dim W^u(\mathcal{O}(x_0)) = n + 1.$$
Figure 5. The fixed point at the origin has two homoclinic orbits in this picture.
Two questions:
1. What of this theory is independent of the choice of $\Sigma$?
2. Can you ever compute the eigenvalues of $DP|_{x_0}$, which are the eigenvalues of $D_x\phi(x_0, T)$?
2. Computing Floquet Multipliers (Nov. 4)
Idea. Define
$$p_1 : \Sigma_1 \to \Sigma_2, \qquad x \mapsto \phi(x, \tau_1(x)),$$
where
$$\tau_1(x) = \min\{t \geq 0 \mid \phi(x, t) \in \Sigma_2\}.$$
As before, $p_1$ is defined near $x_0$ and is as smooth as $\phi$ and $\Sigma$. Also, $p_1$ is invertible:
$$p_1^{-1}(x) = \phi(x, \tau_1^{-1}(x)) \quad\text{where } x \in \Sigma_2,$$
and
$$\tau_1^{-1}(x) = \max\{t < 0 \mid \phi(x, t) \in \Sigma_1\}.$$
Then our claim is that $P_1 = p_1^{-1}\circ P_2\circ p_1$. This is the group property of $\phi$. So
$$DP_1|_{x_0} = D(p_1^{-1})|_{x_1}\,DP_2|_{x_1}\,Dp_1|_{x_0}.$$
But
$$D(p_1^{-1})|_{x_1} = \bigl(Dp_1|_{x_0}\bigr)^{-1}.$$
So $DP_1|_{x_0}$ is similar to $DP_2|_{x_1}$, and in particular these two matrices have the same eigenvalues.
The same idea works if $x_0 = x_1$:
Figure 8. We assume that the Poincaré sections intersect at $x_0$.
Define
$$p_1 : \Sigma_1 \to \Sigma_2, \qquad x \mapsto \phi(x, \tau_1(x)),$$
where $\tau_1(x)$ is the $t$ near $0$ such that $\phi(x, t) \in \Sigma_2$.
So the eigenvalues of the return map are invariants of the orbit. But we still need a way to compute the eigenvalues of the Poincaré maps.
Claim: The eigenvalues of $D_x\phi|_{(x_0,T)}$ are $1, \mu_1, \dots, \mu_{n-1}$, where $\mu_1, \dots, \mu_{n-1}$ are the Floquet multipliers.
Proof: The time-$T$ map takes a neighborhood of $x_0$ to a neighborhood of $x_0$. But $\phi(\cdot, T)$ preserves the periodic orbit $\mathcal{O}(x_0)$.
Let $\gamma(t) = \phi(x_0, t)$, so $\gamma : \mathbb{R} \to \mathbb{R}^n$. Then
$$\phi(\gamma(t), T) = \phi(\phi(x_0, t), T) = \phi(x_0, t + T) = \phi(x_0, t) = \gamma(t),$$
i.e. all points on $\gamma$ are fixed under $\phi(\cdot, T)$. So
$$\frac{d}{dt}\phi(\gamma(t), T) = D_x\phi|_{(\gamma(t),T)}\frac{d\gamma}{dt}.$$
Also,
$$\frac{d}{dt}\phi(\gamma(t), T) = \frac{d\gamma}{dt}\bigg|_t.$$
If we let $t = 0$ (so $\gamma(0) = x_0$),
$$D_x\phi|_{(x_0,T)}\frac{d\gamma}{dt}\bigg|_0 = \frac{d\gamma}{dt}\bigg|_0,$$
i.e. $\frac{d\gamma}{dt}\big|_0$ is an eigenvector with eigenvalue $1$. Also,
$$\frac{d\gamma}{dt}\bigg|_{t=0} = F(\gamma(0)) = F(x_0),$$
so
$$D_x\phi|_{(x_0,T)}F(x_0) = F(x_0).$$
We need that the rest of the eigenvalues of $D_x\phi|_{(x_0,T)}$ are the eigenvalues of $DP|_{x_0}$ for $P : \Sigma \to \Sigma$ (i.e. a Poincaré section). Choose nice variables so that $F(x_0) = (1, 0, \dots, 0)$, and then
$$D_x\phi|_{(x_0,T)} = \begin{pmatrix}1 & ? & \cdots & ?\\ 0 & & & \\ \vdots & & B & \\ 0 & & & \end{pmatrix}.$$
From linear algebra, we know that the eigenvalues of $B$ are the rest of the eigenvalues of $D_x\phi$, so
$$D_x\phi|_{(x_0,T)}\begin{pmatrix}x_1\\ x_2\\ \vdots\\ x_n\end{pmatrix} = \begin{pmatrix}x_1 + ?\,x_2 + ?\,x_3 + \cdots + ?\,x_n\\[1mm] B\begin{pmatrix}x_2\\ \vdots\\ x_n\end{pmatrix}\end{pmatrix}.$$
Next, choose $\Sigma = \{(0, x_2, \dots, x_n)\}$, where in these nice coordinates $x_0 = 0$. Let $\tau : \Sigma \to \mathbb{R}$ be the first return time map, so the Poincaré map $P : \Sigma \to \Sigma$ is
$$P(x_1, x_2, \dots, x_n) = \pi\bigl(\phi((0, x_2, \dots, x_n),\ \tau(0, x_2, \dots, x_n))\bigr),$$
where
$$\pi(x_1, x_2, \dots, x_n) = (x_2, \dots, x_n).$$
Figure 9. The acceleration due to gravity is of magnitude $\sin(y)$.
We can see that $(\pm\pi, 0)$ are hyperbolic fixed points. The point $(0, 0)$ has eigenvalues $\pm i$. We have an invariant of motion $H(y, v) = v^2/2 - \cos y$:
$$\frac{d}{dt}H(y(t), v(t)) = \frac{\partial H}{\partial y}\dot{y} + \frac{\partial H}{\partial v}\dot{v} = \sin(y)v + v(-\sin(y)) = 0.$$
So this is a Hamiltonian system. Add a small periodic forcing and get
$$\ddot{y} = -\sin y + \epsilon\underbrace{g(t)}_{\text{external}},$$
where $g(t + T) = g(t)$. We can make this system autonomous:
$$\dot{y} = v, \qquad \dot{v} = -\sin(y) + \epsilon g(s), \qquad \dot{s} = 1.$$
The phase space is
$$\mathbb{R}^2\times S^1 = \{(y, v, s) \mid (y, v) \in \mathbb{R}^2,\ s \in S^1\},$$
where $S^1 = [0, T]$ with $0$ and $T$ identified. When $\epsilon = 0$, we get
$$\dot{y} = v, \qquad \dot{v} = -\sin y, \qquad \dot{s} = 1.$$
Figure 11. Some of the periodic orbits of the newly autonomous system
What happens to the periodic orbits when $\epsilon > 0$ but small, i.e. for small periodic forcing? Are there period-$T$ periodic orbits near $(y = 0, v = 0)$ and/or $(y = \pi, v = 0)$? This is a problem in bifurcation theory: what changes or stays the same when we change a parameter?
The first step is to understand the orbits in the $\epsilon = 0$ case. Compute the Floquet multipliers of $(0, 0, 0)$ and $(\pi, 0, 0)$. Let's start with $(\pi, 0, 0)$. We expect a saddle with 1-dimensional $W^s$ and 1-dimensional $W^u$.
Let $\phi((y, v, s), t)$ be the solution flow for
$$\dot{y} = v, \qquad \dot{v} = -\sin y, \qquad \dot{s} = 1 \qquad (= F(y, v, s)).$$
Then $\phi((\pi, 0, 0), t) = (\pi, 0, t)$ is the periodic orbit.
$$\dot{X} = DF|_{(\pi,0,t)}X = \begin{pmatrix}0 & 1 & 0\\ 1 & 0 & 0\\ 0 & 0 & 0\end{pmatrix}X.$$
If we diagonalize, with
$$P = \begin{pmatrix}1 & 1 & 0\\ 1 & -1 & 0\\ 0 & 0 & 1\end{pmatrix},$$
we see that
$$P^{-1}\begin{pmatrix}0 & 1 & 0\\ 1 & 0 & 0\\ 0 & 0 & 0\end{pmatrix}P = \begin{pmatrix}1 & 0 & 0\\ 0 & -1 & 0\\ 0 & 0 & 0\end{pmatrix}.$$
Making the change of coordinates $X = PZP^{-1}$, we get
$$\dot{Z} = \begin{pmatrix}1 & 0 & 0\\ 0 & -1 & 0\\ 0 & 0 & 0\end{pmatrix}Z, \qquad Z(0) = I.$$
So
$$Z(t) = \begin{pmatrix}e^t & 0 & 0\\ 0 & e^{-t} & 0\\ 0 & 0 & 1\end{pmatrix},$$
and changing back to the original coordinates we get
$$X(t) = PZ(t)P^{-1} = \frac{1}{2}\begin{pmatrix}e^t + e^{-t} & e^t - e^{-t} & 0\\ e^t - e^{-t} & e^t + e^{-t} & 0\\ 0 & 0 & 2\end{pmatrix}.$$
At time $T$ the eigenvalues are $e^T$, $e^{-T}$, $1$. Note that one is greater than $1$ and one is less than $1$: this is a hyperbolic periodic orbit.
Now, at $(0, 0, 0)$, $\phi((0, 0, 0), t) = (0, 0, t)$, and
$$\dot{X} = \begin{pmatrix}0 & 1 & 0\\ -1 & 0 & 0\\ 0 & 0 & 0\end{pmatrix}X, \qquad X(0) = I.$$
Thus
$$X(t) = \begin{pmatrix}\cos t & \sin t & 0\\ -\sin t & \cos t & 0\\ 0 & 0 & 1\end{pmatrix},$$
and the eigenvalues of this matrix are $\cos(T)\pm i\sin(T)$ and $1$. This is not hyperbolic. If $T = 2n\pi$ then the eigenvalues are $1, 1, 1$, and if $T = (2n+1)\pi$ then the eigenvalues are $-1, -1, 1$.
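As a numerical cross-check (my own sketch, not from the notes), one can integrate the variational equation $\dot{X} = DF|_{\gamma(t)}X$, $X(0) = I$, along the rest-point orbit and read off the Floquet multipliers as eigenvalues of $X(T)$; the function names below are illustrative only.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def monodromy_at_saddle(T):
    """Integrate X' = DF(pi,0,t) X, X(0) = I, for the unforced pendulum
    y' = v, v' = -sin(y), s' = 1 along the rest-point orbit (pi, 0, t),
    and return the eigenvalues of X(T)."""
    def rhs(t, x_flat):
        X = x_flat.reshape(3, 3)
        DF = np.array([[0.0, 1.0, 0.0],
                       [1.0, 0.0, 0.0],   # d(-sin y)/dy = -cos(pi) = 1
                       [0.0, 0.0, 0.0]])
        return (DF @ X).ravel()
    sol = solve_ivp(rhs, (0.0, T), np.eye(3).ravel(), rtol=1e-10, atol=1e-12)
    return np.linalg.eigvals(sol.y[:, -1].reshape(3, 3))

T = 2.0
print(sorted(monodromy_at_saddle(T).real))   # approximately [e^{-T}, 1, e^{T}]
\end{verbatim}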
4. Bifurcation Theory (Nov. 8)
Consider the family of systems $\dot{x} = F_\mu(x)$, where $F_\mu : \mathbb{R}^n \to \mathbb{R}^n$ for all $\mu$. If $\mu \in \mathbb{R}$, we call this a 1-parameter family, and if $\mu \in \mathbb{R}^m$, we call this an $m$-parameter family.
Figure 12
How does the solution change as $\mu$ changes? We say that a bifurcation occurs at $\mu = \mu_0$ if the flow for $\mu < \mu_0$ is "different" from the flow for $\mu > \mu_0$. It does depend on the context, however, what you mean by "different".
Example: Is the flow for
$$\begin{pmatrix}\dot{x}\\ \dot{y}\end{pmatrix} = \begin{pmatrix}1 & 0\\ 0 & 2\end{pmatrix}\begin{pmatrix}x\\ y\end{pmatrix} \quad\text{(see Figure 13)}$$
different from
The most important thing to note is that bifurcations almost never happen.
Theorem 4.1 (Straightening-Out Lemma). Given a vector field $F : \mathbb{R}^n \to \mathbb{R}^n$, suppose that $F(x_0) \neq 0$. Then there is a neighborhood $V$ of $x_0$ and coordinates on $V$ such that
$$F(x) = (1, 0, 0, \dots, 0) \quad\text{for all } x \in V.$$
In these coordinates, the solution flow is $\phi(x, t) = x + (t, 0, \dots, 0)$.
Idea of Proof.
1. Choose a surface of section $\Sigma$ at $x_0$ such that the vector field is never tangent to $\Sigma$.
2. Choose coordinates so that $\Sigma = \{(0, x_2, \dots, x_n)\}$.
3. For $x$ close to $\Sigma$, find a small time $t$ so that $\phi(x, t) \in \Sigma$. Then the coordinates of $x$ are $(-t, x_2, \dots, x_n)$, where $\phi(x, t) = (0, x_2, \dots, x_n)$.
4. Check, in these coordinates, that $F = (1, 0, \dots, 0)$.
So, given a 1-parameter family of flows, if $F_{\mu_0}(x_0) \neq 0$ for some choice of $\mu_0, x_0$, then (assuming that $F$ is at least continuous in $\mu$), for $\mu$ near $\mu_0$, $F_\mu(x_0) \neq 0$. So there is a neighborhood about $x_0$ in which the flow for $F_\mu$ is just the straight-line flow. Up to a change of coordinates, locally, there is no change in the flow. This says that bifurcations can only happen
1. at $F_\mu(x_0) = 0$ (at fixed points), or
2. globally.
These global bifurcations are hard to see; we need to find another way to make them "local". We will have three ways of attacking this problem:
1. bifurcations of fixed points (infinitely many cases),
2. bifurcations of periodic orbits (fixed points of Poincaré maps), and
3. the Melnikov method.
So, for the first: let $F_\mu$ be a 1-parameter family, and assume that $F$ is as smooth as necessary in $x$ and $\mu$. Assume we have a fixed point $F_{\mu_0}(0) = 0$. Again, we will see that bifurcations of fixed points almost never happen.
Let's try to compute $\dfrac{d\psi}{dx_1}$, $\dfrac{d\mu}{dx_1}$:
$$\begin{pmatrix}\dfrac{\partial f}{\partial x_1} + \dfrac{\partial f}{\partial x_2}\dfrac{\partial\psi}{\partial x_1} + \dfrac{\partial f}{\partial\mu}\dfrac{\partial\mu}{\partial x_1}\\[2mm] \dfrac{\partial g}{\partial x_1} + \dfrac{\partial g}{\partial x_2}\dfrac{\partial\psi}{\partial x_1} + \dfrac{\partial g}{\partial\mu}\dfrac{\partial\mu}{\partial x_1}\end{pmatrix} = \begin{pmatrix}0\\0\end{pmatrix},$$
so we have to have
$$\begin{pmatrix}\dfrac{\partial\psi}{\partial x_1}\\[2mm] \dfrac{\partial\mu}{\partial x_1}\end{pmatrix} = -\begin{pmatrix}\dfrac{\partial f}{\partial x_2} & \dfrac{\partial f}{\partial\mu}\\[2mm] \dfrac{\partial g}{\partial x_2} & \dfrac{\partial g}{\partial\mu}\end{pmatrix}^{-1}\begin{pmatrix}\dfrac{\partial f}{\partial x_1}\\[2mm] \dfrac{\partial g}{\partial x_1}\end{pmatrix}.$$
Thus we need
$$\begin{pmatrix}\dfrac{\partial f}{\partial x_2} & \dfrac{\partial f}{\partial\mu}\\[2mm] \dfrac{\partial g}{\partial x_2} & \dfrac{\partial g}{\partial\mu}\end{pmatrix}\Bigg|_{x=0,\mu=0}$$
to be invertible. We know that $\dfrac{\partial f}{\partial x_2}\Big|_{(0,0)} = 0$ and $\dfrac{\partial g}{\partial x_2}\Big|_{(0,0)} = \lambda$.
This gives us
$$\begin{pmatrix}0 & \dfrac{\partial f}{\partial\mu}\\[2mm] \lambda & \dfrac{\partial g}{\partial\mu}\end{pmatrix}\Bigg|_{x=0,\mu=0},$$
which is clearly invertible iff $\dfrac{\partial f}{\partial\mu}\Big|_{(0,0)} \neq 0$.
This means that the $x_1$-component of $F_\mu$ depends on $\mu$, i.e. changes at nonzero speed as $\mu$ changes.
Theorem 5.1. Given $F_\mu : \mathbb{R}^2 \to \mathbb{R}^2$, $F_0(0) = 0$,
$$D_xF_0|_{x=0} = \begin{pmatrix}0 & 0\\ 0 & \lambda\end{pmatrix}, \qquad \lambda < 0,$$
provided that
$$\frac{\partial}{\partial\mu}\{\text{1st component of }F_\mu\}\Big|_{(x=0,\mu=0)} \neq 0,$$
then there is an $\epsilon_1 > 0$ and a curve $\Gamma : (-\epsilon_1, \epsilon_1) \to \mathbb{R}^2$, $\Gamma(x_1) = (\psi(x_1), \mu(x_1))$, and a neighborhood $V$ of $((0, 0), 0)$ such that
1. $F_{\mu(x_1)}(x_1, \psi(x_1)) = 0$ for all $x_1 \in (-\epsilon_1, \epsilon_1)$,
2. if $((x_1, x_2), \mu) \in V$ and $F_\mu(x_1, x_2) = 0$, then $x_2 = \psi(x_1)$ and $\mu = \mu(x_1)$, and
3. $\Gamma = (\psi, \mu)$ is smooth.
Figure 17. We can see that there are no fixed points for $\mu < 0$, and two for $\mu > 0$.
Figure 18. Here, we have two fixed points for $\mu < 0$, and none for $\mu > 0$. The difference between this picture and the last is the sign of the second partial.
We need to compute more about $\mu(x_1)$, and we need its 1st and 2nd derivatives. We know from before that
$$\begin{pmatrix}\dfrac{\partial\psi}{\partial x_1}\\[2mm] \dfrac{\partial\mu}{\partial x_1}\end{pmatrix} = -\begin{pmatrix}\dfrac{\partial f}{\partial x_2} & \dfrac{\partial f}{\partial\mu}\\[2mm] \dfrac{\partial g}{\partial x_2} & \dfrac{\partial g}{\partial\mu}\end{pmatrix}^{-1}\begin{pmatrix}\dfrac{\partial f}{\partial x_1}\\[2mm] \dfrac{\partial g}{\partial x_1}\end{pmatrix} = \frac{-1}{\frac{\partial f}{\partial x_2}\frac{\partial g}{\partial\mu} - \frac{\partial f}{\partial\mu}\frac{\partial g}{\partial x_2}}\begin{pmatrix}\dfrac{\partial g}{\partial\mu}\dfrac{\partial f}{\partial x_1} - \dfrac{\partial f}{\partial\mu}\dfrac{\partial g}{\partial x_1}\\[2mm] -\dfrac{\partial g}{\partial x_2}\dfrac{\partial f}{\partial x_1} + \dfrac{\partial f}{\partial x_2}\dfrac{\partial g}{\partial x_1}\end{pmatrix}.$$
At $x = 0$, $\mu = 0$, we get $(0, 0)$, i.e. the curve $(x_1, \psi(x_1), \mu(x_1))$ is tangent to the $x_1$-axis at $(0, 0)$. We also have
$$\frac{d^2\mu}{dx_1^2} = -\frac{\partial^2f/\partial x_1^2}{\partial f/\partial\mu}, \qquad \frac{d^2\psi}{dx_1^2} = 0.$$
6. Bifurcation diagrams (Nov. 13)
We want to know if the nondegeneracy conditions from the last lecture are satisfied, so let's try to reduce dimension, since nothing much happens in the $x_2$-direction.
Consider
$$\begin{pmatrix}\dot{x}_1\\ \dot{x}_2\\ \dot{\mu}\end{pmatrix} = \begin{pmatrix}F_\mu(x_1, x_2)_1\\ F_\mu(x_1, x_2)_2\\ 0\end{pmatrix} = \hat{F}\begin{pmatrix}x_1\\ x_2\\ \mu\end{pmatrix}.$$
At the fixed point $(0, 0, 0)$,
$$D_{(x_1,x_2,\mu)}\hat{F}\big|_{(0,0,0)} = \begin{pmatrix}0 & 0 & \partial f/\partial\mu\\ 0 & \lambda & \partial g/\partial\mu\\ 0 & 0 & 0\end{pmatrix}.$$
The eigenvalues are $0$, $0$, and $\lambda$. So there is a 2-dimensional center manifold and a 1-dimensional (since $\lambda < 0$) strong stable manifold $W^{ss}$.
Note: All fixed points near $(0,0,0)$ must be on $W^c(0, 0, 0)$, since if they are not, in backwards time the distances stretch exponentially.
Consider the flow restricted to $W^c(0, 0, 0)$, and return to the 1-parameter family. If $\mu$ is considered as a coordinate on $W^c(0, 0, 0)$, then $\dot{\mu} = 0$, so we have
$$\dot{x} = h_\mu(x),$$
a 1-dimensional, 1-parameter family. We know that $h_0(0) = 0$ and $\frac{\partial h}{\partial x}(0, 0) = 0$, and we require that
$$\frac{\partial h}{\partial\mu} \neq 0 \quad\text{and}\quad \frac{\partial^2h}{\partial x^2} \neq 0.$$
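A standard illustration (my own, not in the notes) satisfying both conditions is $h_\mu(x) = \mu - x^2$:
$$\dot{x} = \mu - x^2, \qquad \frac{\partial h}{\partial\mu}\Big|_{(0,0)} = 1 \neq 0, \qquad \frac{\partial^2h}{\partial x^2}\Big|_{(0,0)} = -2 \neq 0.$$
There are no fixed points for $\mu < 0$ and two, $x = \pm\sqrt{\mu}$ (one attracting, one repelling), for $\mu > 0$ — exactly the behavior shown in the bifurcation diagrams below.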
Bifurcation diagrams (the case $\partial^2h/\partial x^2 < 0$):
Figure 20. These are the possible bifurcations if the second partial is negative. The difference between the two bottom pictures is the sign of the first partial $\partial h/\partial\mu$. When it is negative, for example, we go from no fixed points to two.
The case $\partial^2h/\partial x^2 > 0$:
Figure 21. These are the possible bifurcations if the second partial is positive. See the last figure also. Again, the sign of the first partial determines whether we go from two fixed points to none, or vice versa.
A saddle-node bifurcation occurs when there is one simple zero eigenvalue together with the nondegeneracy conditions. What about two zero eigenvalues, such as
$$DF_0|_0 = \begin{pmatrix}0 & 1\\ 0 & 0\end{pmatrix} \quad\text{or even}\quad \begin{pmatrix}0 & 0\\ 0 & 0\end{pmatrix}?$$
And why would we consider such a degenerate situation? To study these situations, we need a 2-parameter family. (See Figure 25.)
7. Spiral sinks and spiral sources, normal form calculation (Nov. 15)
Consider $F_\mu : \mathbb{R}^n \to \mathbb{R}^n$, $F_0(0) = 0$, where $DF_0|_0$ has eigenvalues $\pm i\omega, \lambda_3, \dots, \lambda_n$, with the real parts of the $\lambda_j$ all nonzero.
First, restrict attention to 2 dimensions, and consider $F_\mu : \mathbb{R}^2 \to \mathbb{R}^2$. We can justify this by saying that if the dimension is $> 2$, then we restrict to the center manifold for
$$\frac{d}{dt}\begin{pmatrix}x_1\\ \vdots\\ x_n\\ \mu\end{pmatrix} = \hat{F}\begin{pmatrix}x_1\\ \vdots\\ x_n\\ \mu\end{pmatrix} = \begin{pmatrix}F_\mu(x)_1\\ \vdots\\ F_\mu(x)_n\\ 0\end{pmatrix}.$$
Figure 23. In the left picture, we undergo a bifurcation from no fixed points, through a node, and then to a sink-source pair. The parabola represents the fixed points. Any vertical slice through the picture corresponds to the system at some value of $\mu$. The picture on the right is the same, except in reverse order.
or
$$\begin{pmatrix}\dot{z}\\ \dot{w}\end{pmatrix} = (I + Dh|_{z,w})^{-1}\left[A\begin{pmatrix}z\\w\end{pmatrix} + Ah\begin{pmatrix}z\\w\end{pmatrix} + F_2\begin{pmatrix}z\\w\end{pmatrix} + \text{h.o.t.}\right].$$
Can we write $(I + Dh)^{-1}$ as $I + B$? Well,
$$I = (I + Dh)(I + B) = I + Dh + B + Dh\,B,$$
so, up to first order, $(I + Dh)^{-1} = I - Dh$. Thus
$$\begin{pmatrix}\dot{z}\\ \dot{w}\end{pmatrix} = (I - Dh|_{z,w})\left[A\begin{pmatrix}z\\w\end{pmatrix} + Ah\begin{pmatrix}z\\w\end{pmatrix} + F_2\begin{pmatrix}z\\w\end{pmatrix} + \text{h.o.t.}\right].$$
Since
$$Dh|_{z,w} = \begin{pmatrix}2a_1z + b_1w & b_1z + 2c_1w\\ 2\alpha_1z + \beta_1w & \beta_1z + 2\gamma_1w\end{pmatrix},$$
expanding and collecting the quadratic terms gives
$$\begin{pmatrix}\dot{z}\\ \dot{w}\end{pmatrix} = \begin{pmatrix}-\omega w + (\cdots)z^2 + (\cdots)zw + (\cdots)w^2\\ \omega z + (\cdots)z^2 + (\cdots)zw + (\cdots)w^2\end{pmatrix} + \text{h.o.t.},$$
where each quadratic coefficient is the corresponding coefficient of $F_2$ plus a linear combination (with coefficients $\pm\omega$, $\pm2\omega$) of $a_1, b_1, c_1, \alpha_1, \beta_1, \gamma_1$. Make all of these coefficients $0$: this is a linear system
$$M\begin{pmatrix}a_1\\ b_1\\ c_1\\ \alpha_1\\ \beta_1\\ \gamma_1\end{pmatrix} = -\begin{pmatrix}a\\ b\\ c\\ \alpha\\ \beta\\ \gamma\end{pmatrix},$$
where $M$ is the $6\times6$ matrix of those combinations. The determinant of this matrix is nonzero, so we can change variables and eliminate all 2nd order terms.
Remark: This is an example of a normal form calculation. We will try the cubic terms next.
8. More normal form calculations, complexification (Nov. 18)
In the case we have been talking about, the linear part is a center. But we could have a change in $\dim W^u$, $W^s$ at $\mu = 0$. What is happening at $\mu = 0$?
Let $\lambda$, $\bar\lambda$ be the eigenvalues of $DF_\mu|_0$. So $\lambda = \alpha + i\beta$, where $\alpha_0 = 0$, $\beta_0 = \omega$. Write
$$\begin{pmatrix}\dot{x}\\ \dot{y}\end{pmatrix} = F_\mu\begin{pmatrix}x\\ y\end{pmatrix} = \begin{pmatrix}\alpha & \beta\\ -\beta & \alpha\end{pmatrix}\begin{pmatrix}x\\ y\end{pmatrix} + \text{h.o.t.}$$
In the last section, we saw that we can do an algebraic change of variables which eliminates all 2nd order terms of $F_0$. How many terms can we eliminate? Which terms have geometric information? Let us take advantage of the fact that we're in $\mathbb{R}^2$, and use complex-valued algebra.
Think of $(x, y) \in \mathbb{C}^2$, and diagonalize the linear part. Let
$$P = \begin{pmatrix}1 & 1\\ i & -i\end{pmatrix}.$$
Do the change of variables
$$P\begin{pmatrix}z_1\\ z_2\end{pmatrix} = \begin{pmatrix}x\\ y\end{pmatrix}$$
and get the system
$$\begin{pmatrix}\dot{z}_1\\ \dot{z}_2\end{pmatrix} = P^{-1}\begin{pmatrix}\alpha & \beta\\ -\beta & \alpha\end{pmatrix}P\begin{pmatrix}z_1\\ z_2\end{pmatrix} + \text{h.o.t.} = \begin{pmatrix}\lambda & 0\\ 0 & \bar\lambda\end{pmatrix}\begin{pmatrix}z_1\\ z_2\end{pmatrix} + \text{h.o.t.}$$
Where did the original $\mathbb{R}^2$ go, i.e. where is $\{(x, y) \mid x, y \in \mathbb{R}\}$? Since
$$\begin{pmatrix}z_1\\ z_2\end{pmatrix} = P^{-1}\begin{pmatrix}x\\ y\end{pmatrix} \quad\text{and}\quad P^{-1} = \frac{1}{2}\begin{pmatrix}1 & -i\\ 1 & i\end{pmatrix},$$
we have
$$z_1 = (x - iy)/2, \qquad z_2 = (x + iy)/2.$$
Let $\Delta = \{(z_1, z_2) \mid z_2 = \bar{z}_1\}$, a 2-dimensional subspace of the 4-dimensional $\mathbb{C}^2$. Moreover, since the space $\{(x, y) \mid x, y \in \mathbb{R}\}$ is invariant under the original equation, $\Delta$ is invariant under the transformed equation. So if we only care about $(x, y) \in \mathbb{R}^2$, then we need only consider solutions on $\Delta$, where $z_2 = \bar{z}_1$.
So we take the expansion
$$\dot{z}_1 = \lambda z_1 + F_2(z_1, z_2)$$
and replace $z_2$ with $\bar{z}_1$. So
$$\dot{z}_1 = \lambda z_1 + \sum_{i+j\geq2}a_{ij}z_1^i\bar{z}_1^j.$$
(If $F$ is $C^{N+1}$, then
$$\dot{z}_1 = \lambda z_1 + \sum_{2\leq i+j\leq N}a_{ij}z_1^i\bar{z}_1^j + \text{terms of order }N+1.)$$
Remarks:
1. This is not complex analytic!
2. The $a_{ij}$ also depend on $\mu$.
Now, try to do a change of variables which eliminates terms of the power series. For example, try to get rid of $z^k\bar{z}^l$ ($k + l \geq 2$). Try a new variable $w \in \mathbb{C}$, where
$$z = w + bw^k\bar{w}^l.$$
We see that
$$\dot{z} = \dot{w} + bkw^{k-1}\dot{w}\bar{w}^l + blw^k\bar{w}^{l-1}\dot{\bar{w}},$$
and
$$\dot{z} = \lambda z + \sum_{i+j\geq2}a_{ij}z^i\bar{z}^j = \lambda(w + bw^k\bar{w}^l) + \sum_{i+j\geq2}a_{ij}(w + bw^k\bar{w}^l)^i\,\overline{(w + bw^k\bar{w}^l)}^{\,j}.$$
So we get
$$\dot{w} = \lambda w + \sum_{\substack{i+j\geq2\\ (i,j)\neq(k,l)}}a_{ij}w^i\bar{w}^j + \bigl(\lambda b - kb\lambda - lb\bar\lambda + a_{kl}\bigr)w^k\bar{w}^l + \text{h.o.t.}$$
All we need is
$$\lambda b - kb\lambda - lb\bar\lambda + a_{kl} = 0.$$
So let
$$b = \frac{a_{kl}}{k\lambda + l\bar\lambda - \lambda},$$
and we need worry only if
$$\lambda - k\lambda - l\bar\lambda = 0.$$
Remark: This change of variable only affects the $(k, l)$ term and terms of higher order. It does change the coefficients of the higher order terms, so we must do it in order, from lowest order to highest order.
$\lambda = \alpha + i\beta$, and there are two cases:
1. $\alpha \neq 0$, $\beta \neq 0$. We only need that
$$\alpha - k\alpha - l\alpha \neq 0 \quad\text{or}\quad \beta - k\beta + l\beta \neq 0.$$
Since we have $\alpha \neq 0$, $\beta \neq 0$, we need that
$$1 - k - l \neq 0 \quad\text{or}\quad 1 - k + l \neq 0.$$
Since $k + l \geq 2$, both are never $0$ simultaneously. Thus the denominator is never $0$! So we can eliminate any nonlinear terms. Does this mean that we can linearize, i.e. change variables to $\dot{w} = \lambda w$?
No. There are infinitely many changes of variable and the domain may shrink to $0$. But there does exist a neighborhood of $0$ and a polynomial change of variables such that in the new variables
$$\dot{w} = \lambda w + \text{order }N,$$
where $F$ is $C^N$. We can linearize formally, though (if $F$ is $C^\infty$, $C^\omega$). This was Poincaré's thesis.
2. When $\alpha = 0$, $\lambda = i\beta = i\omega$: the denominator $i\omega - ki\omega + li\omega = 0$ iff $1 - k + l = 0$, i.e. $l = k - 1$. So we can eliminate all terms except $z^2\bar{z}, z^3\bar{z}^2, z^4\bar{z}^3, \dots$
CHAPTER 4

Topics
1. Setup for Hopf Bifurcation Theorem (Nov. 20)
We know from before that the eigenvalues of $DF_0|_0$ are $\pm i\omega$. Let $\lambda_0 = i\omega$, and put $\dot{z} = F_\mu(z)$ into normal form near $\mu = 0$, i.e.
$$\dot{z} = \lambda(\mu)z + a_1(\mu)z^2\bar{z} + \text{h.o.t.}$$
Take $\lambda = \alpha + i\beta$ and $a_1(\mu) = a + ib$. In polar coordinates $z = re^{i\theta}$, we get
$$\dot{r}e^{i\theta} + ir\dot\theta e^{i\theta} = (\alpha + i\beta)re^{i\theta} + (a + ib)r^3e^{i\theta} + \text{h.o.t.},$$
so
$$\dot{r} = \alpha r + ar^3 + \text{h.o.t.}, \qquad \dot\theta = \beta + br^2 + \text{h.o.t.}$$
What happens as $\mu$ is changed through $0$, with $\alpha(0) = 0$? Assume that
$$\frac{d\alpha}{d\mu}\bigg|_{\mu=0} \neq 0.$$
There are two cases, $d\alpha/d\mu > 0$ and $d\alpha/d\mu < 0$. In the first case, for $\mu < 0$, $0$ is a spiral sink, and for $\mu > 0$, $0$ is a spiral source. Also, the behavior at $\mu = 0$ depends on $a$: if $a(0) < 0$, then for $r$ near $0$, $\dot{r} < 0$, so that means that at $\mu = 0$, $0$ is a spiral sink.
Now let us look at the global picture at $\mu = 0$. There exists an attractor block around $0$. Look at
$$\dot{r} = \alpha r + ar^3.$$
The fixed points of $\alpha r + ar^3 = 0$ are $r = 0$ and $r = \sqrt{-\alpha/a}$. In two dimensions,
Do we get the same picture with higher order terms? The answer is yes. Set up regions in the $(r, \theta)$ plane: show that near $0$, even with the higher order terms, $r$ is increasing, while far from $0$, $r$ is decreasing. Inside the ring, show that the derivative of the flow in the $r$ direction is $< 1$. Look at the Poincaré return map to $\theta = 0$, and show its derivative is $< 1$.
Three other pictures: the global picture doesn't change for small changes in $\mu$ (the $r^3$ term in $\dot{r}$ is huge when $r$ is big). The behavior near $r = 0$ is changed by changing $\mu$. So the rôle of the attractor is transferred from the fixed point to the periodic orbit.
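A concrete model case (my own illustration of the normal form above, taking $\alpha(\mu) = \mu$, $a = -1$, $\beta = 1$, $b = 0$):
$$\dot{r} = \mu r - r^3, \qquad \dot\theta = 1.$$
For $\mu \leq 0$ the origin is the attractor; for $\mu > 0$ the circle $r = \sqrt{\mu}$ is a periodic orbit which attracts everything except the origin, and the origin has become a spiral source. This is exactly the transfer of the attractor described above.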
One other case: $\alpha(\mu) = 0$ for all $\mu$.
Then
$$\dot{r} = a(\mu)r^3, \qquad \dot\theta = \beta + br^2, \qquad \frac{da}{d\mu}\bigg|_{\mu=0} > 0.$$
Think of this as taking the plane of periodic orbits at $\mu = 0$ and bending it.
Now, how can periodic orbits bifurcate? Bifurcation of periodic orbits of flows corresponds to bifurcation of fixed points of maps. Given a periodic orbit, we can associate a Poincaré map. Look at a 1-parameter family of maps $P_\mu : \mathbb{R}^n \to \mathbb{R}^n$ with $P_0(0) = 0$.
Figure 5. The spiral sink becomes a center, which becomes a spiral source.
Figure 10. The sink becomes a source-sink pair.
Combine $r$ and $\theta$: adjusting the signs of $d\alpha/d\mu$ at $\mu = 0$ and of $a$ gives the other three pictures.
One can show (by building the attractor and repeller blocks) that if the higher order terms are added in, then for $\mu$ near $0$ the picture is the same.
Figure 15. This will be a picture of the phase plane for $v^2/2 - \cos(y)$.
First, draw the Poincaré map for $s = 0$. This is the period-$T$ map of the 2-dimensional flow. The 3-dimensional picture:
Now make $\epsilon > 0$. Damping or antidamping is still possible, with $G(y, v, t) = (0, -v)$.
Remark: By bifurcation theory, we know that if $\epsilon$ is small then the Poincaré map has a fixed point near $(0, 0)$.
Another (new) possibility:
Figure 17. This will be a picture of the Poincaré map for the unperturbed flow.
Figure 18. This will be the phase plane for the flow corresponding to the map in the last figure.
Figure 20. The Poincaré map, and the stable and unstable manifolds for the damped pendulum.
These are interesting flows. Are there specific examples? Start with
$$\dot{y} = v, \qquad \dot{v} = y - y^3,$$
and devise a perturbation that creates transverse homoclinics,
$$\begin{pmatrix}\dot{y}\\ \dot{v}\end{pmatrix} = \begin{pmatrix}v\\ y - y^3\end{pmatrix} + \epsilon G(y, v, t).$$
Look at the Poincaré map $P : \mathbb{R}^2 \to \mathbb{R}^2$, and pick a neighborhood $V$ as shown. Choose $Q : \mathbb{R}^2 \to \mathbb{R}^2$ such that $Q = I$ outside $V$ and is a small rotation well inside $V$. Then $Q\circ P$ creates a transverse homoclinic orbit.
(d) If $d_1(t_0) = 0$ and $d_1'(t_0) \neq 0$, then there exists $t_\epsilon$, for $\epsilon$ sufficiently small, such that
$$d(t_\epsilon, \epsilon) = 0 \quad\text{and}\quad \frac{\partial d(t_\epsilon, \epsilon)}{\partial t} \neq 0.$$
6. Compute $d_1(t)$, which is the order-$\epsilon$ term of the signed distance between $W^s(\gamma_\epsilon)$ and $W^u(\gamma_\epsilon)$ on $\Sigma$. Then check if $d_1(t)$ is ever $0$. If it is, then for $\epsilon$ sufficiently small, the system has homoclinic points.
Now why do we hope that we can compute $d_1(t)$? Look at the system
$$\begin{pmatrix}\dot{y}\\ \dot{v}\end{pmatrix} = F(y, v) + \epsilon G(y, v, t).$$
We know that $q_0(t)$ is a solution for $\epsilon = 0$, and $q_0(t) \to (0, 0)$ as $t \to \pm\infty$. When $\epsilon \neq 0$ but small, try to write solutions near $q_0(t)$, using Euler's method: if we follow the vector field for $\epsilon \neq 0$, the first step error is of order $\epsilon\Delta t$, where $\Delta t$ is the step size. There are two sources of error on the next step:
1. Because the perturbed orbit is off of $q_0(t)$, the orbits can move exponentially apart. This error is of order $\epsilon$ and fast-growing.
Let
$$F = \begin{pmatrix}f_1\\ f_2\end{pmatrix}, \qquad F^\perp = \begin{pmatrix}-f_2\\ f_1\end{pmatrix},$$
so
$$DF = \begin{pmatrix}\dfrac{\partial f_1}{\partial y} & \dfrac{\partial f_1}{\partial v}\\[2mm] \dfrac{\partial f_2}{\partial y} & \dfrac{\partial f_2}{\partial v}\end{pmatrix}, \qquad DF^\perp = \begin{pmatrix}-\dfrac{\partial f_2}{\partial y} & -\dfrac{\partial f_2}{\partial v}\\[2mm] \dfrac{\partial f_1}{\partial y} & \dfrac{\partial f_1}{\partial v}\end{pmatrix}.$$
$\operatorname{trace}DF$ measures the rate of growth of pieces of area under the flow of $F$. Our example was Hamiltonian, with
$$F = \left(\frac{\partial H}{\partial v},\ -\frac{\partial H}{\partial y}\right),$$
so
$$DF = \begin{pmatrix}\dfrac{\partial^2H}{\partial y\partial v} & \dfrac{\partial^2H}{\partial v^2}\\[2mm] -\dfrac{\partial^2H}{\partial y^2} & -\dfrac{\partial^2H}{\partial v\partial y}\end{pmatrix},$$
so $\operatorname{trace}(DF) = 0$!
So let us assume that $F$ is Hamiltonian (or at least that $\operatorname{trace}(DF) = 0$), so
$$\dot{\Delta}^s(t, t_0) = F^\perp(q_0(t - t_0))\cdot G(q_0(t - t_0), t),$$
giving
$$\Delta^s(t, t_0) = \int F^\perp(q_0(t - t_0))\cdot G(q_0(t - t_0), t)\,dt.$$
Note: $F^\perp(q_0(t - t_0)) \to 0$ as $t \to \pm\infty$. So
$$\Delta^s(t_0, t_0) = -\int_{t_0}^{\infty}F^\perp(q_0(t - t_0))\cdot G(q_0(t - t_0), t)\,dt.$$
The above, together with
$$\frac{d}{dt}\Delta^u(t, t_0) = F^\perp(q_0(t - t_0))\cdot\dot{q}_1^u(t - t_0),$$
gives
$$\Delta^u(t_0, t_0) = \int_{-\infty}^{t_0}F^\perp(q_0(t - t_0))\cdot G(q_0(t - t_0), t)\,dt$$
by the same calculation. Note $q_0(0) = q_0(t_0 - t_0)$. Putting the two calculations together, we have
$$F^\perp(z)\cdot\bigl[q_1^s(t_0, t_0) - q_1^u(t_0, t_0)\bigr] = \Delta^s(t_0, t_0) - \Delta^u(t_0, t_0) = -\int_{-\infty}^{\infty}F^\perp(q_0(t - t_0))\cdot G(q_0(t - t_0), t)\,dt.$$
So we have calculated the order-$\epsilon$ term of the distance between $W^s(\gamma_\epsilon)$ and $W^u(\gamma_\epsilon)$ on $\Sigma$.
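To make this concrete, here is a standard worked example (my own, not from the notes) for the unperturbed system $\dot{y} = v$, $\dot{v} = y - y^3$ introduced earlier, with the forcing-plus-damping perturbation $G(y, v, t) = (0,\ \cos(\omega t) - \delta v)$; the constants $\delta$, $\omega$ and the choice of $G$ are assumptions of the illustration. The unperturbed homoclinic orbit is $q_0(t) = (\sqrt{2}\,\mathrm{sech}\,t,\ -\sqrt{2}\,\mathrm{sech}\,t\,\tanh t)$, and the integral above (the Melnikov function, which equals $d_1$ up to a positive normalization) works out to
$$M(t_0) = \int_{-\infty}^{\infty}v_0(t - t_0)\bigl[\cos(\omega t) - \delta v_0(t - t_0)\bigr]\,dt = \sqrt{2}\,\pi\omega\,\mathrm{sech}\!\left(\frac{\pi\omega}{2}\right)\sin(\omega t_0) - \frac{4\delta}{3}.$$
This has zeros (and hence, by the criterion above, the perturbed system has homoclinic points for $\epsilon$ small) exactly when $\delta < \frac{3\sqrt{2}\,\pi\omega}{4}\,\mathrm{sech}\!\left(\frac{\pi\omega}{2}\right)$.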
Then
$$\dot{y} = f(y) + O(\epsilon^2) = f(y) + \epsilon^2f_1(y, t, \epsilon).$$
Compare
$$\dot{y} = f(y) + \epsilon^2f_1(y, t, \epsilon) \quad\text{to}\quad \dot{z} = f(z).$$
Let us prove the first part of the theorem:
Proof:
$$|z(t) - y(t)| = \left|z(0) - y(0) + \int_0^t\overbrace{\bigl(f(z(s)) - f(y(s))\bigr)}^{\leq L|z(s)-y(s)|}\,ds - \epsilon^2\int_0^tf_1(y(s), s, \epsilon)\,ds\right|.$$
So
$$|y(t) - z(t)| \leq |z(0) - y(0)| + \int_0^t\bigl|f(z(s)) - f(y(s))\bigr|\,ds + \epsilon^2\int_0^t|f_1(y(s), s, \epsilon)|\,ds.$$
Now let $\psi(t) = |z(t) - y(t)|$. Then
$$\psi(t) \leq \psi(0) + L\int_0^t\psi(s)\,ds + \epsilon^2\int_0^tC\,ds,$$
where $L$ is the Lipschitz constant for $f$ and $C$ is a bound for $f_1$ (on neighborhoods of $z(s), y(s)$, $0 \leq s \leq t$).
Remember Gronwall's inequality: if
$$v(t) \leq c(t) + \int_0^tu(s)v(s)\,ds,$$
then
$$v(t) \leq c(0)\exp\left(\int_0^tu(s)\,ds\right) + \int_0^tc'(s)\exp\left(\int_s^tu(\tau)\,d\tau\right)ds.$$
Let $c(t) = \psi(0) + \epsilon^2Ct$; then
$$\psi(t) \leq \psi(0)e^{Lt} + \epsilon^2C\int_0^te^{L(t-s)}\,ds.$$
So
$$\psi(t) \leq \psi(0)e^{Lt} + \frac{\epsilon^2C}{L}e^{tL} - \frac{\epsilon^2C}{L} \leq \psi(0)e^{Lt} + \frac{\epsilon^2C}{L}e^{tL},$$
so
$$\psi(t) \leq \left(\psi(0) + \frac{\epsilon^2C}{L}\right)e^{Lt}.$$
Why look at a system like this? One case may be where you have a center. If you take the right $T$, the time-$T$ map looks like the identity.