
Study of Classical Integrable Field Theory and the Defect Problem

2nd Year Master Research Project in Fundamental Physics


École Normale Supérieure de Lyon & Université Claude Bernard Lyon 1

Cheng Zhang
cheng.zhang@ens-lyon.fr
Supervised by

Vincent Caudrelier
Centre for Mathematical Science
City University, London
v.caudrelier@city.ac.uk

August 20, 2010

This research project focuses on the study of classical integrable field models and the associated defect problems. I will present a basic understanding of these topics. The emphasis of this report is on the Inverse Scattering Method (ISM). The ISM consists in solving exactly the initial value problem of integrable nonlinear evolution equations by using an auxiliary linear differential system. It can be considered either as a Fourier-type transform for solving nonlinear evolution equations, or as a canonical transformation in the sense of integrable Hamiltonian systems with infinitely many degrees of freedom. The Lax pair and the AKNS scheme are the generalised auxiliary linear differential systems used to identify and solve nonlinear equations. Among the interesting integrable models, the nonlinear Schrödinger equation (NLS) will be studied in detail. In the context of classical integrable field theories, the boundary conditions of the fields determine the integrability of the systems. An internal boundary condition can be interpreted as a defect. An ISM approach to systematically generating defects while keeping the systems completely integrable will be presented. The modified physical quantities can be identified in terms of defect matrices.

Contents

1 Introduction
2 Inverse Scattering Method
   2.1 Lax Pair
   2.2 AKNS Scheme
   2.3 Inverse Scattering Method
   2.4 Conservation Laws and Complete Integrability
3 Exact Solution of the Nonlinear Schrödinger Equation
   3.1 Soliton Solution
   3.2 General Solution of ISM
4 Defect Problem
5 Conclusions and Outlook
Appendices
A KdV Hierarchy
B Proof of Analyticity of $\phi e^{i\zeta x}$
C Symmetry Relations between q and r
D Proof of the Expression of log a
E Proof of No Zeros of a for $\lambda > 0$ in (3.1)
Bibliography

Introduction

1 Introduction

The invention of the Inverse Scattering Method (ISM), which marked the birth of integrable field theory, was one of the most important advances in mathematical physics in the second half of the 20th century. It was first introduced by Gardner, Greene, Kruskal and Miura [7] to solve the Korteweg-de Vries equation¹ (KdV) [13]
$$u_t + 6uu_x + u_{xxx} = 0, \qquad (1)$$
which appeared originally in fluid dynamics. They considered a 1D Schrödinger equation
$$\left(\frac{d^2}{dx^2} + u(x,t)\right)\psi = k^2\psi, \qquad (2)$$
whose potential u(x,t) plays the role of the solution of the KdV equation, and showed that the exact solution can be obtained by use of the solution of the inverse scattering problem for the Schrödinger equation. This method was soon expressed in a general form by Lax [14], who introduced the Lax pair to generate a class of integrable nonlinear evolution equations. In 1972, Zakharov and Shabat [23] solved the nonlinear Schrödinger equation (NLS)
$$iu_t + u_{xx} + 2|u|^2u = 0, \qquad (3)$$
by using a 2×2 Dirac-type operator
$$\psi_{1,x} = -i\zeta\psi_1 + u(x,t)\psi_2, \qquad (4a)$$
$$\psi_{2,x} = i\zeta\psi_2 - \bar u(x,t)\psi_1. \qquad (4b)$$
In the same year, Wadati [21] showed that the modified Korteweg-de Vries equation (mKdV)
$$u_t + 6u^2u_x + u_{xxx} = 0, \qquad (5)$$
can be solved by generalising the Zakharov-Shabat system. Soon afterwards, in 1973, Ablowitz, Kaup, Newell and Segur [1] showed that the sine-Gordon equation
$$u_{xt} = \sin u, \qquad (6)$$
can be solved in the same way, and they established a general framework [16, 17], now called the AKNS scheme, which identifies and generates nonlinear evolution equations solvable by the ISM.
In contrast to integrable models with finitely many degrees of freedom in the sense of Liouville [19, 2]², the physical quantity u(x,t) in the above equations is a field, and the connection between the ISM for solving nonlinear field equations and the integrability of Hamiltonian systems with infinitely many degrees of freedom was unveiled by Gardner [8] and Zakharov and Faddeev [22]. This deep insight soon led to an enormous amount of results and extended the topic to quantum mechanics.

The above classical integrable models were all first solved under boundary conditions where the field u(x,t) is defined on the infinite line, i.e. −∞ < x < ∞, and tends to 0 as |x| → ∞. Since a physical system cannot be of infinite size, people

¹ The subscripts t and x denote partial derivatives with respect to t and x respectively. We will keep this notation in the rest of this report.
² For finite-dimensional Hamiltonian systems (N coordinates, N momenta), the Liouville theorem [19, 2] asserts that if there exist N conserved quantities with linearly independent gradients that are in involution, the equations of motion can be integrated by quadrature.

then started asking whether there exist suitable changes of the boundary conditions which keep the systems integrable. Some models [12, 3, 15] with non-vanishing or periodic boundary conditions have been shown to be integrable. Another, parallel approach consists in adding internal boundary conditions at a fixed point x₀ in the domain where u(x,t) is defined. Such internal boundary conditions admit natural interpretations as defects or impurities in the systems. These steps make integrable models more physically realistic, and preserving the integrability of these systems gives us more powerful predictions for real physical systems.
The report is organised in the following way. I will first present the ISM. The Lax pair and the AKNS scheme will both be introduced, although the ISM will only be developed in the AKNS scheme. A discussion of the ISM from the point of view of integrable systems will also be presented. In section 3, the solution of the NLS equation by means of the ISM will be shown. In section 4, I will outline the associated defect problem. My conclusions and perspectives for future investigations will be discussed in the final section.

2 Inverse Scattering Method

2.1 Lax Pair

After the invention of the ISM [7], people asked whether the KdV equation was the only case solvable by the ISM. Lax introduced a new framework, now called the Lax pair, which expressed the KdV equation in a general form and generated a class of integrable nonlinear evolution equations. Consequently, new nonlinear equations solvable by the ISM, other than the KdV equation, have been found.
Suppose that u(x,t) satisfies some nonlinear evolution equation of the form
$$u_t = N(u), \qquad (7)$$
with initial value u(x, t = 0) = f(x) for −∞ < x < ∞. We assume that u ∈ Y for all t, where Y is some appropriate function space, and that N : Y → Y is some nonlinear operator which is independent of t but may involve u or derivatives of u with respect to x. For example, for the KdV equation we have
$$N(u) = -6uu_x - u_{xxx}, \qquad -\infty < x < \infty. \qquad (8)$$

Then we suppose that the evolution equation can be expressed in the form
$$L_t + [L, M] = 0, \qquad (9)$$
where L and M are linear operators on some Hilbert space H, which may depend on u(x,t). We assume that L is self-adjoint in H, so that (Lφ, ψ) = (φ, Lψ) for all φ, ψ ∈ H. We introduce the eigenvalue equation, for ψ ∈ H,
$$L\psi = \lambda\psi, \qquad \text{for } t \geq 0 \text{ and } -\infty < x < \infty, \qquad (10)$$
where λ is the eigenvalue, assumed non-degenerate. Differentiating (10) with respect to t and putting the term λ_tψ on the left, we have
$$\lambda_t\psi = L_t\psi + (L-\lambda)\psi_t = (ML - LM)\psi + (L-\lambda)\psi_t = -(L-\lambda)M\psi + (L-\lambda)\psi_t = (L-\lambda)(\psi_t - M\psi). \qquad (11)$$
The inner product of ψ with this equation gives
$$\lambda_t(\psi,\psi) = (\psi, (L-\lambda)(\psi_t - M\psi)) = ((L-\lambda)\psi, \psi_t - M\psi), \qquad (12)$$
using the self-adjointness of L and the fact that λ is then real. Since (L−λ)ψ = 0, we have
$$\lambda_t(\psi,\psi) = (0, \psi_t - M\psi) = 0. \qquad (13)$$
Hence
$$\lambda_t = 0. \qquad (14)$$
Combining (11) and (14) yields
$$L(\psi_t - M\psi) = \lambda(\psi_t - M\psi), \qquad (15)$$
and we see that (ψ_t − Mψ) is an eigenfunction of L with eigenvalue λ. Hence (ψ_t − Mψ) ∝ ψ. We can always redefine M, by adding the product of the identity operator and an appropriate function of t, in order to get the evolution equation for ψ,
$$\psi_t = M\psi, \qquad \text{for } t > 0, \qquad (16)$$
without altering (9).

In other words, the basic idea behind the Lax pair is the following. If the nonlinear evolution equation (7) can be expressed through a Lax pair L and M satisfying (9), and if Lψ = λψ with λ non-degenerate, then λ_t = 0 and ψ evolves according to (16). Once (7) is expressed as (9), by setting appropriate boundary conditions for u(x), we can obtain u(x,t) by solving the inverse scattering problem of the operator L.
In Appendix A, I will show how Lax obtained the KdV equation. I give here just the expression of the Lax pair for the KdV equation,
$$L = \frac{\partial^2}{\partial x^2} + u, \qquad (17a)$$
$$M = -4\frac{\partial^3}{\partial x^3} - 3\left(u\frac{\partial}{\partial x} + \frac{\partial}{\partial x}u\right) + A(t), \qquad (17b)$$
where A(t) is an arbitrary function of t. Substituting these into (9) yields (1).
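As a quick sanity check on (1), the one-soliton solution u(x,t) = 2κ² sech²(κ(x − 4κ²t)) can be verified directly. The snippet below is a sketch (using sympy, with an arbitrarily chosen soliton parameter κ — the soliton profile itself is standard but not stated in this report) that evaluates the residual of the KdV equation on a grid:

```python
import sympy as sp
import numpy as np

x, t = sp.symbols('x t', real=True)
kappa = sp.Rational(4, 5)  # arbitrary soliton parameter

# one-soliton of u_t + 6 u u_x + u_xxx = 0, travelling at speed 4*kappa^2
u = 2*kappa**2 / sp.cosh(kappa*(x - 4*kappa**2*t))**2

residual = sp.diff(u, t) + 6*u*sp.diff(u, x) + sp.diff(u, x, 3)
f = sp.lambdify((x, t), residual, 'numpy')

X, T = np.meshgrid(np.linspace(-10, 10, 41), np.linspace(-2, 2, 21))
max_res = np.max(np.abs(f(X, T)))
assert max_res < 1e-10  # exact solution: only floating-point noise remains
```

The residual vanishes to rounding accuracy, confirming the sign conventions used in (1).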

2.2 AKNS Scheme

Despite its elegance, the Lax method meets some difficulties. In particular, it is in general hard to find a suitable Lax pair. An important development, due to Zakharov and Shabat [23], was to solve the NLS by using a 2×2 Dirac-type operator as the spectral operator L. In 1974, Ablowitz, Kaup, Newell and Segur [16] generalised the idea of Zakharov and Shabat. They introduced a more general framework to determine integrable nonlinear evolution equations, now known as the AKNS scheme³. In this section, I will show how to derive interesting nonlinear evolution equations via the AKNS scheme. The way to solve these equations will be presented in section 2.3.
The basic idea behind the AKNS scheme is the following. Consider a pair of 2×2 operators X and T satisfying
$$v_x = Xv, \qquad v_t = Tv. \qquad (18)$$
Using the compatibility condition v_{xt} = v_{tx} yields the relation
$$X_t - T_x + [X, T] = 0, \qquad (19)$$
where v is a 2-component vector, $v^T = (v_1, v_2)$. X is of the form
$$X = \begin{pmatrix} -i\zeta & q \\ r & i\zeta \end{pmatrix}, \qquad \text{i.e.} \qquad \begin{cases} v_{1x} = -i\zeta v_1 + qv_2, \\ v_{2x} = i\zeta v_2 + rv_1. \end{cases} \qquad (20)$$
We can express (20) as a 2×2 eigenvalue problem Lv = ζv with
$$L = \begin{pmatrix} i\partial_x & -iq \\ ir & -i\partial_x \end{pmatrix}, \qquad \text{i.e.} \qquad \begin{cases} \zeta v_1 = iv_{1x} - iqv_2, \\ \zeta v_2 = -iv_{2x} + irv_1. \end{cases} \qquad (21)$$

³ In the original paper of Ablowitz, Kaup, Newell and Segur [16], they called this method the generalised Zakharov and Shabat method.

Nonlinear equations are obtained by using (19); r and q in (20), playing the role of potentials, will be the solutions of the nonlinear equations. Note that the operator X contains a spectral parameter ζ. We assume that ζ is time-independent, i.e. ζ_t = 0, and we will express it as
$$\zeta = \xi + i\eta, \qquad \xi, \eta \in \mathbb{R}. \qquad (22)$$
In contrast with the Lax method, in which the evolution operator M does not contain the spectral parameter, the operator T depends on ζ as well. Express the evolution operator T in the general form
$$T = \begin{pmatrix} A & B \\ C & D \end{pmatrix}, \qquad \text{i.e.} \qquad \begin{cases} v_{1t} = Av_1 + Bv_2, \\ v_{2t} = Cv_1 + Dv_2. \end{cases} \qquad (23)$$
Using again the compatibility condition v_{xt} = v_{tx}, applied to (20) and (23), we find that the functions A, B, C, D satisfy
$$A_x = qC - rB, \qquad (24a)$$
$$B_x + 2i\zeta B = q_t - (A-D)q, \qquad (24b)$$
$$C_x - 2i\zeta C = r_t + (A-D)r, \qquad (24c)$$
$$D_x = rB - qC. \qquad (24d)$$
Taking A = −D without loss of generality, we have
$$A_x = qC - rB, \qquad (25a)$$
$$B_x + 2i\zeta B = q_t - 2Aq, \qquad (25b)$$
$$C_x - 2i\zeta C = r_t + 2Ar. \qquad (25c)$$
Since ζ is a free parameter, we can express A, B, C as truncated series in powers of ζ. A simple expansion which yields interesting nonlinear evolution equations is
$$A = A_2\zeta^2 + A_1\zeta + A_0, \qquad (26a)$$
$$B = B_2\zeta^2 + B_1\zeta + B_0, \qquad (26b)$$
$$C = C_2\zeta^2 + C_1\zeta + C_0. \qquad (26c)$$
Substitute (26) into (25) and equate coefficients of equal powers of ζ. The coefficients of ζ³ immediately yield B₂ = C₂ = 0. For ζ², we have A₂ = a₂ = const., B₁ = ia₂q and C₁ = ia₂r; for ζ, we have A₁ = a₁ = const. and, taking a₁ = 0, B₀ = −a₂q_x/2 and C₀ = a₂r_x/2; and for ζ⁰, A₀ = a₂qr/2 + a₀. With the further choice a₀ = 0, we have the evolution equations
$$-\frac{1}{2}a_2q_{xx} = q_t - a_2q^2r, \qquad (27a)$$
$$\frac{1}{2}a_2r_{xx} = r_t + a_2qr^2. \qquad (27b)$$
If we let r = q̄ and a₂ = −2i, we find the equation
$$iq_t + q_{xx} - 2|q|^2q = 0, \qquad (28)$$
which is the nonlinear Schrödinger equation (NLS), with the functions
$$A = -2i\zeta^2 - i|q|^2, \qquad (29a)$$
$$B = 2\zeta q + iq_x, \qquad (29b)$$
$$C = 2\zeta\bar q - i\bar q_x. \qquad (29c)$$
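The algebra leading from the expansion (26) to (27) can be checked mechanically. The sketch below (assuming a₂ = −2i and keeping q, r general, as in the text) verifies with sympy that the residuals of the compatibility conditions (25) vanish identically once q_t and r_t are eliminated using the evolution equations (27):

```python
import sympy as sp

x, t, z = sp.symbols('x t zeta')
q = sp.Function('q')(x, t)
r = sp.Function('r')(x, t)

a2 = -2*sp.I
# truncated expansion (26) with a1 = a0 = 0
A = a2*z**2 + a2*q*r/2
B = sp.I*a2*q*z - a2*sp.diff(q, x)/2
C = sp.I*a2*r*z + a2*sp.diff(r, x)/2

# residuals of the compatibility conditions (25)
res_a = sp.diff(A, x) - (q*C - r*B)
res_b = sp.diff(B, x) + 2*sp.I*z*B - sp.diff(q, t) + 2*A*q
res_c = sp.diff(C, x) - 2*sp.I*z*C - sp.diff(r, t) - 2*A*r

# eliminate q_t, r_t using the evolution equations (27)
qt = sp.I*sp.diff(q, x, 2) - 2*sp.I*q**2*r
rt = -sp.I*sp.diff(r, x, 2) + 2*sp.I*q*r**2
res_b = res_b.subs(sp.Derivative(q, t), qt)
res_c = res_c.subs(sp.Derivative(r, t), rt)

assert sp.simplify(res_a) == 0
assert sp.simplify(res_b) == 0
assert sp.simplify(res_c) == 0
```

Setting r = q̄ then reduces the pair (27) to the single NLS equation (28).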

In the same way, taking
$$A = a_3\zeta^3 + a_2\zeta^2 + \left(\tfrac{1}{2}a_3qr + a_1\right)\zeta + \tfrac{1}{2}a_2qr - \tfrac{i}{4}a_3(qr_x - q_xr) + a_0, \qquad (30a)$$
$$B = ia_3q\zeta^2 + \left(ia_2q - \tfrac{1}{2}a_3q_x\right)\zeta + ia_1q + \tfrac{i}{2}a_3q^2r - \tfrac{1}{2}a_2q_x - \tfrac{i}{4}a_3q_{xx}, \qquad (30b)$$
$$C = ia_3r\zeta^2 + \left(ia_2r + \tfrac{1}{2}a_3r_x\right)\zeta + ia_1r + \tfrac{i}{2}a_3qr^2 + \tfrac{1}{2}a_2r_x - \tfrac{i}{4}a_3r_{xx}, \qquad (30c)$$
gives the evolution equations
$$q_t + \tfrac{i}{4}a_3(q_{xxx} - 6qrq_x) + \tfrac{1}{2}a_2(q_{xx} - 2q^2r) - ia_1q_x - 2a_0q = 0, \qquad (31a)$$
$$r_t + \tfrac{i}{4}a_3(r_{xxx} - 6qrr_x) - \tfrac{1}{2}a_2(r_{xx} - 2qr^2) - ia_1r_x + 2a_0r = 0. \qquad (31b)$$
Choosing a₀ = a₁ = a₂ = 0, a₃ = −4i and r = −1, we have the KdV equation (1). If r = −q, we have the mKdV equation (5).

Again, taking
$$A = \frac{a(x,t)}{\zeta}, \qquad B = \frac{b(x,t)}{\zeta}, \qquad C = \frac{c(x,t)}{\zeta}, \qquad (32)$$
yields
$$a_x = \frac{i}{2}(qr)_t, \qquad q_{xt} = -4iaq, \qquad r_{xt} = -4iar. \qquad (33)$$
With the special choice
$$a = \frac{i}{4}\cos u, \qquad b = c = \frac{i}{4}\sin u, \qquad q = -r = -\frac{u_x}{2}, \qquad (34)$$
we obtain the sine-Gordon equation (6), and with
$$a = \frac{i}{4}\cosh u, \qquad b = -c = -\frac{i}{4}\sinh u, \qquad q = r = \frac{u_x}{2}, \qquad (35)$$
we obtain the sinh-Gordon equation
$$u_{xt} = \sinh u. \qquad (36)$$
Other interesting nonlinear equations can be obtained in the same way.
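Equation (6) admits the well-known kink solution u = 4 arctan(e^{x+t}) in these light-cone coordinates. The short check below is a sketch (the kink profile is a standard test solution, not taken from this report) confirming that it satisfies u_{xt} = sin u:

```python
import sympy as sp
import numpy as np

x, t = sp.symbols('x t', real=True)
u = 4*sp.atan(sp.exp(x + t))             # sine-Gordon kink in light-cone coordinates

residual = sp.diff(u, x, t) - sp.sin(u)  # residual of u_xt = sin(u)
f = sp.lambdify((x, t), residual, 'numpy')

X, T = np.meshgrid(np.linspace(-3, 3, 25), np.linspace(-3, 3, 25))
max_res = np.max(np.abs(f(X, T)))
assert max_res < 1e-10  # the kink satisfies (6) up to rounding error
```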

2.3 Inverse Scattering Method

We have seen in the previous sections that nonlinear evolution equations can be associated with a linear scattering problem, usually called the auxiliary problem (Lψ = λψ in the Lax method and v_x = Xv in the AKNS scheme), which contains a time-independent spectral parameter ζ and potentials.

In the auxiliary system, x is the independent variable, while ζ and t appear as parameters. At each fixed t, u(x,t) satisfies some boundary conditions as |x| → ∞, and a scattering process takes place in which the potential u(x,t) can be uniquely associated with some scattering data S(ζ,t). The problem of determining S(ζ,t) for all ζ from a given potential u(x,t) for all x is known as the direct scattering problem for the auxiliary problem. Conversely, the problem of determining u(x,t) from S(ζ,t) is known as the inverse scattering problem. In order to solve the nonlinear equation and obtain u(x,t), one needs to: (i) solve the direct scattering problem for the associated auxiliary problem at t = 0, i.e. determine the initial scattering data S(ζ,0) from the initial potential u(x,0); (ii) evolve the scattering data from S(ζ,0) to S(ζ,t) for t > 0; (iii) solve the inverse scattering problem for the auxiliary problem at fixed t, i.e. determine the potential u(x,t) from the scattering data S(ζ,t).
Let us see now how the ISM works in the AKNS scheme. First we perform the direct scattering step by constructing the scattering data S(ζ,0). We assume that in (20) the potentials r, q vanish rapidly as |x| → ∞ (these constitute our boundary conditions for the fields q and r). For the moment, we assume we are at t = 0 and drop the time dependence. Consider φ(x,ζ), φ̄(x,ζ), ψ(x,ζ) and ψ̄(x,ζ) as solutions of (20) satisfying the boundary conditions
$$\phi(x,\zeta) \to \begin{pmatrix}1\\0\end{pmatrix}e^{-i\zeta x}, \quad \bar\phi(x,\zeta) \to \begin{pmatrix}0\\-1\end{pmatrix}e^{i\zeta x} \quad \text{as } x\to-\infty; \qquad \psi(x,\zeta) \to \begin{pmatrix}0\\1\end{pmatrix}e^{i\zeta x}, \quad \bar\psi(x,\zeta) \to \begin{pmatrix}1\\0\end{pmatrix}e^{-i\zeta x} \quad \text{as } x\to+\infty, \qquad (37)$$
with ζ real, i.e. ζ = ξ. Define the Wronskian of two vectors u, v by
$$W(u,v) = u_1v_2 - u_2v_1. \qquad (38)$$
W(u,v) measures the linear dependence of two vectors: if W(u,v) ≠ 0, then u and v are linearly independent. From (20) we can check that (d/dx)W(u,v) = 0 for u, v satisfying (20); hence W(u,v) is independent of x. With the boundary conditions (37), we have W(φ,φ̄) = −1 and W(ψ,ψ̄) = −1. So ψ(ζ) and ψ̄(ζ) are linearly independent, and we can express φ and φ̄ as
$$\phi = a\bar\psi + b\psi, \qquad (39a)$$
$$\bar\phi = -\bar a\psi + \bar b\bar\psi, \qquad (39b)$$
or in matrix form
$$(\phi \ \ \bar\phi) = (\bar\psi \ \ \psi)\,S, \qquad \text{with} \qquad S = \begin{pmatrix} a & \bar b \\ b & -\bar a \end{pmatrix}. \qquad (40)$$
The matrix S is a scattering matrix, and a(ζ), ā(ζ), b(ζ) and b̄(ζ) are the transition coefficients. Using (39) and W(φ,φ̄) = −1, we obtain
$$a(\zeta)\bar a(\zeta) + b(\zeta)\bar b(\zeta) = 1. \qquad (41)$$
This result justifies the terminology of scattering matrix and transition coefficients. From (20) and (39) we can also obtain
$$a = W(\phi,\psi), \qquad b = W(\bar\psi,\phi), \qquad (42a)$$
$$\bar a = W(\bar\phi,\bar\psi), \qquad \bar b = W(\bar\phi,\psi). \qquad (42b)$$
For the next steps, we need the analytic properties of the quantities φe^{iζx}, ψe^{−iζx}, φ̄e^{−iζx} and ψ̄e^{iζx} in the complex plane of the variable ζ. We state here the results; the proof is established in Appendix B. If q(x), r(x) ∈ L¹, then φe^{iζx} and ψe^{−iζx} are analytic in the upper half plane of ζ, i.e. η > 0, while in contrast φ̄e^{−iζx} and ψ̄e^{iζx} are analytic in the lower half plane (η < 0). From (42), we see that a and ā are analytic in the upper and lower half planes respectively. In addition to these properties, we assume [15, 17] that: (i) no zeros of a (ā) occur on the real axis; (ii) all zeros of a (ā) are simple.
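The direct scattering step can be carried out numerically for a concrete potential. The sketch below is an illustration under assumptions not spelled out in the report: the potential q(x) = sech x with the focusing symmetry r = −q̄, a truncation of the line to [−L, L], and a(ζ) read off as lim φ₁e^{iζx} after integrating (20) from the boundary condition for φ in (37). For this particular potential, a(ζ) is known to have a single simple zero at ζ = i/2 and to be unimodular on the real axis (reflectionless case):

```python
import numpy as np
from scipy.integrate import solve_ivp

def a_coef(zeta, L=20.0):
    """a(zeta) = lim_{x->+oo} phi_1 e^{i zeta x} for q = sech x, r = -conj(q)."""
    def rhs(x, v):
        q = 1.0/np.cosh(x)
        # auxiliary problem (20): v1' = -i zeta v1 + q v2, v2' = r v1 + i zeta v2
        return [-1j*zeta*v[0] + q*v[1], -np.conj(q)*v[0] + 1j*zeta*v[1]]
    # boundary condition (37): phi -> (1,0)^T e^{-i zeta x} as x -> -infinity
    v0 = np.array([np.exp(1j*zeta*L), 0.0], dtype=complex)
    sol = solve_ivp(rhs, [-L, L], v0, rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]*np.exp(1j*zeta*L)

# a(zeta) vanishes at the discrete eigenvalue zeta = i/2 ...
assert abs(a_coef(0.5j)) < 1e-4
# ... and |a| = 1 on the real axis, since this potential is reflectionless
assert abs(abs(a_coef(1.0)) - 1) < 1e-3
```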
We assume that the general solutions ψ and ψ̄ of (20) can be written
$$\psi = \begin{pmatrix}0\\1\end{pmatrix}e^{i\zeta x} + \int_x^\infty K(x,s)e^{i\zeta s}\,ds, \qquad (43a)$$
$$\bar\psi = \begin{pmatrix}1\\0\end{pmatrix}e^{-i\zeta x} + \int_x^\infty \bar K(x,s)e^{-i\zeta s}\,ds, \qquad (43b)$$
where K and K̄ are two-component vector kernels independent of ζ,
$$K = \begin{pmatrix}K_1\\K_2\end{pmatrix}, \qquad \bar K = \begin{pmatrix}\bar K_1\\\bar K_2\end{pmatrix}. \qquad (44)$$
Substituting (43a) into (20), we have [17]
$$\int_x^\infty e^{i\zeta s}\left[(\partial_x - \partial_s)K_1(x,s) - q(x)K_2(x,s)\right]ds - [q(x) + 2K_1(x,x)]e^{i\zeta x} + \lim_{s\to\infty}[K_1(x,s)e^{i\zeta s}] = 0, \qquad (45a)$$
$$\int_x^\infty e^{i\zeta s}\left[(\partial_x + \partial_s)K_2(x,s) - r(x)K_1(x,s)\right]ds - \lim_{s\to\infty}[K_2(x,s)e^{i\zeta s}] = 0. \qquad (45b)$$
It is necessary and sufficient to have
$$(\partial_x - \partial_s)K_1(x,s) - q(x)K_2(x,s) = 0, \qquad (46a)$$
$$(\partial_x + \partial_s)K_2(x,s) - r(x)K_1(x,s) = 0, \qquad (46b)$$
subject to the boundary conditions
$$K_1(x,x) = -\frac{1}{2}q(x), \qquad (47a)$$
$$\lim_{s\to\infty}K(x,s) = 0. \qquad (47b)$$

Consider a contour C in the complex ζ-plane starting at ζ = −∞ + i0⁺, passing over all zeros of a(ζ), and ending at ζ = +∞ + i0⁺. (39a) can be extended into the upper half plane, so that
$$\frac{\phi(x,\zeta)}{a(\zeta)} = \bar\psi(x,\zeta) + \frac{b(\zeta)}{a(\zeta)}\psi(x,\zeta). \qquad (48)$$
Substituting (43) into (48), we find
$$\frac{\phi}{a} = \begin{pmatrix}1\\0\end{pmatrix}e^{-i\zeta x} + \int_x^\infty \bar K(x,s)e^{-i\zeta s}\,ds + \frac{b}{a}(\zeta)\left[\begin{pmatrix}0\\1\end{pmatrix}e^{i\zeta x} + \int_x^\infty K(x,s)e^{i\zeta s}\,ds\right]. \qquad (49)$$
Operating on this equation with $\frac{1}{2\pi}\int_C d\zeta\,e^{i\zeta y}$ for y > x, using $\delta(x) = \frac{1}{2\pi}\int_C d\zeta\,e^{i\zeta x}$, and interchanging integrals, we obtain
$$I = \bar K(x,y) + \begin{pmatrix}0\\1\end{pmatrix}F(x+y) + \int_x^\infty K(x,s)F(s+y)\,ds, \qquad (50)$$
where
$$F(x) \equiv \frac{1}{2\pi}\int_C \frac{b}{a}(\zeta)e^{i\zeta x}\,d\zeta, \qquad (51)$$
$$I \equiv \frac{1}{2\pi}\int_C \frac{\phi(x,\zeta)}{a(\zeta)}e^{i\zeta y}\,d\zeta. \qquad (52)$$
Since φe^{iζx} is analytic in the upper half plane, y > x, and the contour C passes over all the zeros of a, we have I = 0. Hence
$$\bar K(x,y) + \begin{pmatrix}0\\1\end{pmatrix}F(x+y) + \int_x^\infty K(x,s)F(s+y)\,ds = 0. \qquad (53)$$
Performing the equivalent operations on the analytic extension of (39b) into the lower half plane yields
$$K(x,y) - \begin{pmatrix}1\\0\end{pmatrix}\bar F(x+y) - \int_x^\infty \bar K(x,s)\bar F(s+y)\,ds = 0, \qquad (54)$$
where
$$\bar F(x) \equiv \frac{1}{2\pi}\int_{\bar C}\frac{\bar b}{\bar a}(\zeta)e^{-i\zeta x}\,d\zeta. \qquad (55)$$
The contour C̄ goes from ζ = −∞ − i0⁺ to ζ = +∞ − i0⁺, passing below all zeros of ā(ζ). Contour integration of (51) and (55) gives
$$F(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{b}{a}(\xi)e^{i\xi x}\,d\xi - i\sum_{j=1}^{N}C_je^{i\zeta_jx}, \qquad (56a)$$
$$\bar F(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{\bar b}{\bar a}(\xi)e^{-i\xi x}\,d\xi + i\sum_{j=1}^{\bar N}\bar C_je^{-i\bar\zeta_jx}, \qquad (56b)$$
where
$$C_j = \frac{b}{a'}(\zeta_j)^4, \qquad \bar C_j = \frac{\bar b}{\bar a'}(\bar\zeta_j). \qquad (56c)$$
In (56), C_j and C̄_j correspond to the discrete values ζ_j and ζ̄_j at which a(ζ_j) and ā(ζ̄_j) are zero, respectively. In Appendix C, we present the interesting consequences of the symmetry relations between r and q for the transition coefficients a, ā, b and b̄.

⁴ The prime denotes the derivative with respect to ζ.
The linear integral equations (53) and (54), called the Gel'fand-Levitan-Marchenko equations [9], can be put in matrix form by defining
$$\mathbf{K} = \begin{pmatrix}\bar K_1 & K_1\\ \bar K_2 & K_2\end{pmatrix}, \qquad \mathbf{F} = \begin{pmatrix}0 & -\bar F\\ F & 0\end{pmatrix}, \qquad (57)$$
whereby we have
$$\mathbf{K}(x,y) + \mathbf{F}(x+y) + \int_x^\infty \mathbf{K}(x,s)\mathbf{F}(s+y)\,ds = 0. \qquad (58)$$
In general form, the potentials q(x) and r(x) satisfy
$$q(x) = -2K_1(x,x), \qquad (59a)$$
$$r(x) = -2\bar K_2(x,x). \qquad (59b)$$
As discussed at the beginning of this section, for the given boundary conditions (37) we can associate the potentials with a set of scattering data S = {(b/a)(ξ), (b̄/ā)(ξ), C_j, C̄_j}, and then perform the inverse scattering step: we first put the initial scattering data into (56), then solve (58), and via (59) we obtain the potentials r(x), q(x). Furthermore, some analysis [15, 17, 18] is needed to prove the existence and uniqueness of the solution of the linear integral equation (58).
To complete our presentation of the ISM, we need to evolve the scattering data S from t = 0 to t. From the form of the functions A, B, C, D in the evolution operator T (23), we can see that requiring q, r → 0 as |x| → ∞ gives us nonlinear equations with the property that A → A_∞(ζ), D → −A_∞(ζ), B, C → 0 as |x| → ∞, where A_∞ is a time-independent function of ζ (a polynomial in powers of ζ). The time-dependent eigenfunctions are defined as
$$\phi^{(t)} = e^{A_\infty t}\phi, \qquad \bar\phi^{(t)} = e^{-A_\infty t}\bar\phi, \qquad \psi^{(t)} = e^{-A_\infty t}\psi, \qquad \bar\psi^{(t)} = e^{A_\infty t}\bar\psi, \qquad (60)$$
where φ, φ̄, ψ, ψ̄ satisfy (20) and the boundary conditions (37). The time-dependent functions φ^{(t)}, φ̄^{(t)}, ψ^{(t)} and ψ̄^{(t)} satisfy the general evolution equation
$$\frac{\partial}{\partial t}\phi^{(t)} = \begin{pmatrix}A & B\\ C & D\end{pmatrix}\phi^{(t)}. \qquad (61)$$
Hence, φ satisfies
$$\frac{\partial\phi}{\partial t} = \begin{pmatrix}A - A_\infty(\zeta) & B\\ C & D - A_\infty(\zeta)\end{pmatrix}\phi. \qquad (62)$$
If we use the relation
$$\phi = a\bar\psi + b\psi \xrightarrow{\ x\to\infty\ } a\begin{pmatrix}1\\0\end{pmatrix}e^{-i\zeta x} + b\begin{pmatrix}0\\1\end{pmatrix}e^{i\zeta x}, \qquad (63)$$
then (62), as x → ∞, yields
$$\begin{pmatrix}a_te^{-i\zeta x}\\ b_te^{i\zeta x}\end{pmatrix} = \begin{pmatrix}0\\ -2A_\infty(\zeta)\,b\,e^{i\zeta x}\end{pmatrix}. \qquad (64)$$
Thus, we obtain the time dependence of the transition coefficients
$$b(\zeta,t) = b(\zeta,0)e^{-2A_\infty(\zeta)t}, \qquad (65a)$$
$$a(\zeta,t) = a(\zeta,0), \qquad (65b)$$
$$C_j(t) = C_j(0)e^{-2A_\infty(\zeta_j)t}, \qquad j = 1,2,\ldots,N, \qquad (65c)$$
where C_j(t) = b(ζ_j,t)/a'(ζ_j,t). With (65), we obtain the time evolution of the scattering data, contained in
$$F(x,t) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{b}{a}(\xi,0)\,e^{i\xi x - 2A_\infty(\xi)t}\,d\xi - i\sum_{j=1}^{N}C_j(0)\,e^{i\zeta_jx - 2A_\infty(\zeta_j)t}, \qquad (66)$$
$$\bar F(x,t) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{\bar b}{\bar a}(\xi,0)\,e^{-i\xi x + 2A_\infty(\xi)t}\,d\xi + i\sum_{j=1}^{\bar N}\bar C_j(0)\,e^{-i\bar\zeta_jx + 2A_\infty(\bar\zeta_j)t}. \qquad (67)$$

In figure 1 [17], we illustrate this idea of the ISM. We come to the observation that the ISM works precisely in the same way as the Fourier transform does in linear problems: it transforms the dependent variable, which satisfies a given partial differential equation, to a set of new dependent variables whose evolution in time is described by an infinite sequence of ordinary differential equations. The particularities of the ISM are: (i) the basis (66) in the ISM moves, in contrast to the Fourier transform, in which the basis has the form e^{i(ωt−kx)}; (ii) the spectral parameter no longer consists only of the continuum of real numbers, but includes in addition a finite number of isolated complex values. This gives rise to the entities known as solitons. Precisely, each discrete ζ_j (ζ̄_j), for which a(ζ_j) (ā(ζ̄_j)) is zero, gives a soliton solution. The reason is that, from (42a), at a zero of a (ā) the eigenfunction φ(ζ_j) is proportional to ψ(ζ_j) (and φ̄(ζ̄_j) to ψ̄(ζ̄_j)). Since φ and ψ have decaying behaviour at −∞ and +∞ respectively when η > 0 (and φ̄, ψ̄ when η < 0), these values ζ_j and ζ̄_j are associated with bounded eigenfunctions. We obtain the same decaying form of the eigenfunctions as x → −∞ and x → +∞, which indeed yields soliton solutions.
[Figure: a commutative diagram comparing the Fourier-transform solution of linear problems — q(x,0) → b(k,0) via the FT, time evolution b(k,t) = b(k,0)e^{iω(k)t} with the dispersion relation ω(k), then the inverse FT back to q(x,t) — with the ISM for nonlinear problems: q(x,0) → S(ζ,0) = {(b/a)(ζ,0); ζ_j, C_j(0)} via direct scattering, time evolution (b/a)(ζ,t) = (b/a)(ζ,0)e^{−2A_∞(ζ)t}, ζ_j = const., C_j(t) = C_j(0)e^{−2A_∞(ζ_j)t}, then inverse scattering back to q(x,t).]

Fig. 1: The Inverse Scattering Transform-Fourier analysis for nonlinear problems.

2.4 Conservation Laws and Complete Integrability

In this section, we show the connection between integrability and the ISM. Because the ISM solves nonlinear field equations completely, integrable field models should possess infinitely many conserved quantities. By a simple analysis, we will prove this point for systems defined by the AKNS scheme. Then we will make clear that the AKNS scheme naturally exhibits a Hamiltonian structure in the functional variables r and q. This allows us to conclude that the ISM amounts to a canonical transformation from r and q to action-angle type variables, in agreement with the notion of integrability.
Let us first see how to obtain infinitely many conserved quantities. Assuming that (φ₁, φ₂) is the solution of (20) that satisfies the boundary conditions (37), we define the quantity Γ as
$$\Gamma \equiv \frac{\phi_2}{\phi_1}. \qquad (68)$$
By direct calculation, i.e. differentiating (68) with respect to t and x and using (20), (21) and (29), we obtain the local conservation equation
$$(q\Gamma)_t = (A + B\Gamma)_x, \qquad (69a)$$
where A and B are the functions in the evolution operator T, and the Riccati equation
$$\Gamma_x = 2i\zeta\Gamma + r - q\Gamma^2. \qquad (69b)$$
From Appendix C, we can see that as |ζ| → ∞, φ₁e^{iζx} and φ₂e^{iζx} behave as
$$\phi_1e^{i\zeta x} = 1 - \frac{1}{2i\zeta}\int_{-\infty}^{x}r(y)q(y)\,dy + O(\zeta^{-2}), \qquad (70a)$$
$$\phi_2e^{i\zeta x} = -\frac{1}{2i\zeta}r(x) + O(\zeta^{-2}). \qquad (70b)$$
Hence Γ → 0 as |ζ| → ∞. We may expand Γ in inverse powers of ζ,
$$\Gamma = \sum_{n=1}^{\infty}\frac{\Gamma_n(x,t)}{(2i\zeta)^n}. \qquad (71)$$
Substituting this into (69b) yields
$$\Gamma_1 = -r, \qquad \Gamma_{n+1} = \Gamma_{nx} + q\sum_{k=1}^{n-1}\Gamma_k\Gamma_{n-k}, \quad \text{for } n \geq 1. \qquad (72)$$
From (69a), it follows that
$$\frac{\partial}{\partial t}\left\{\sum_{n=1}^{\infty}\frac{q\Gamma_n}{(2i\zeta)^n}\right\} = \frac{\partial}{\partial x}\left\{A + B\sum_{n=1}^{\infty}\frac{\Gamma_n}{(2i\zeta)^n}\right\}. \qquad (73)$$
The conserved quantities are expressed as
$$C_n = \int_{-\infty}^{\infty}q\Gamma_n\,dx. \qquad (74)$$
Thus the first conserved quantities are
$$C_1 = -\int_{-\infty}^{\infty}qr\,dx, \qquad C_2 = -\int_{-\infty}^{\infty}qr_x\,dx, \qquad (75)$$
$$C_3 = \int_{-\infty}^{\infty}\left[-qr_{xx} + (qr)^2\right]dx, \qquad C_4 = \int_{-\infty}^{\infty}\left[-qr_{xxx} + 4q^2rr_x + r^2qq_x\right]dx. \qquad (76)$$

For example, in the case where r = q̄ and
$$A = -2i\zeta^2 - i|q|^2, \qquad B = 2\zeta q + iq_x, \qquad (77)$$
we have the NLS equation (28). Substituting these into (73) yields
$$\partial_t\left[\sum_{n=1}^{\infty}\frac{q\Gamma_n}{(2i\zeta)^n}\right] + \partial_x\left[i\left(2\zeta^2 + |q|^2\right) - (2\zeta q + iq_x)\sum_{n=1}^{\infty}\frac{\Gamma_n}{(2i\zeta)^n}\right] = 0. \qquad (78)$$
The first local conservation laws are
$$n = 1: \qquad \partial_t[|q|^2] + i\partial_x[q\bar q_x - \bar qq_x] = 0, \qquad (79)$$
$$n = 2: \qquad \partial_t[\bar qq_x] + i\partial_x[|q_x|^2 - \bar qq_{xx} + |q|^4] = 0. \qquad (80)$$
As t plays the role of a parameter in the auxiliary problem (20), we assume the system is at t = 0 and drop the time dependence for the moment. From Appendix C, we know that for η > 0, both a(ζ) and φ₁e^{iζx} approach 1 as |ζ| → ∞; moreover, from (37) and (42a), we see that
$$a(\zeta) = \lim_{x\to\infty}\phi_1e^{i\zeta x}. \qquad (81)$$
We may express φ₁ as
$$\phi_1 = \exp(-i\zeta x + \Phi), \qquad (82)$$
where Φ vanishes as |ζ| → ∞. Substituting (82) into (20) gives
$$\Phi_x = q\Gamma. \qquad (83)$$
From (74) and (83), we have
$$\Phi(x = +\infty) = \int_{-\infty}^{\infty}q\Gamma\,dx. \qquad (84)$$
Since a is time-independent, taking the logarithm of a gives
$$\log a(\zeta) = \Phi(x = +\infty) = \sum_{n=1}^{\infty}\frac{C_n}{(2i\zeta)^n}, \qquad (85)$$
where C_n is defined in (74). This result conforms with our definition of the C_n. From Appendix D, we can deduce an explicit expression for log a(ζ). Expanding it in inverse powers of ζ,
$$\log a(\zeta) = \sum_{n=1}^{\infty}\zeta^{-n}\left\{\sum_{m=1}^{N}\frac{1}{n}\left[(\bar\zeta_m)^n - (\zeta_m)^n\right] - \frac{1}{2\pi i}\int_{-\infty}^{\infty}\xi^{n-1}\log\left[a(\xi)\bar a(\xi)\right]d\xi\right\}. \qquad (86)$$
This expression must coincide with (85), so that for n = 1, 2, 3, …,
$$C_n = -\frac{1}{\pi}\int_{-\infty}^{\infty}(2i\xi)^{n-1}\log\left[a(\xi)\bar a(\xi)\right]d\xi + \sum_{m=1}^{N}\frac{1}{n}\left[(2i\bar\zeta_m)^n - (2i\zeta_m)^n\right]. \qquad (87)$$
If r = ±q̄, then ā(ξ) = a*(ξ) on the real axis and these simplify to
$$C_n = -\frac{1}{\pi}\int_{-\infty}^{\infty}(2i\xi)^{n-1}\log|a(\xi)|^2\,d\xi + \sum_{m=1}^{N}\frac{1}{n}\left[(2i\bar\zeta_m)^n - (2i\zeta_m)^n\right]. \qquad (88)$$
These are the trace formulae for (20). They relate the infinite set of conserved constants C_n to the moments of log a(ξ) and to powers of the discrete eigenvalues.
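For the reflectionless one-soliton case (|a| = 1 on the real axis, a single eigenvalue ζ₁ = iη), the n = 1 trace formula reduces to C₁ = ∫|q|²dx = 4η. This is easy to confirm numerically (a sketch; the soliton profile 2η sech(2ηx) is the t = 0 slice of the solution derived in section 3.1):

```python
import numpy as np

eta = 0.7                          # imaginary part of the eigenvalue zeta_1 = i*eta
x = np.linspace(-40.0, 40.0, 20001)
q = 2*eta/np.cosh(2*eta*x)         # reflectionless one-soliton profile at t = 0

C1 = np.sum(np.abs(q)**2)*(x[1] - x[0])
assert abs(C1 - 4*eta) < 1e-8      # continuous part of (88) vanishes since |a| = 1
```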

Now we present the Hamiltonian structure of (20). We can state the general theorem: if A_∞ is an entire function of ζ of the form
$$A_\infty(\zeta) = \frac{1}{2i}\sum_{n=1}^{\infty}a_n(2\zeta)^{n-1}, \qquad (89)$$
then the system defined by (20) and (21) is Hamiltonian; q(x,t) and r(x,t) are the conjugate variables, and the appropriate Hamiltonian is
$$H(q,r) = -\sum_{n=1}^{\infty}a_n(-i)^nC_n(q,r), \qquad (90)$$
where C_n is defined by (74). The proof of this theorem can be found in [17, 15]. We can also show that the equations of motion can be written in the form
$$q_t = \frac{\delta H}{\delta r}, \qquad \text{and} \qquad r_t = -\frac{\delta H}{\delta q}. \qquad (91)$$
Define the Poisson structure
$$\{A, B\} = \int_{-\infty}^{\infty}\left(\frac{\delta A}{\delta q}\frac{\delta B}{\delta r} - \frac{\delta A}{\delta r}\frac{\delta B}{\delta q}\right)dx. \qquad (92)$$
We can immediately see that the infinite set of conserved quantities C_n are all in involution with H:
$$0 = \frac{dC_n}{dt} = \int_{-\infty}^{\infty}\left(\frac{\delta C_n}{\delta q}\frac{\partial q}{\partial t} + \frac{\delta C_n}{\delta r}\frac{\partial r}{\partial t}\right)dx = \int_{-\infty}^{\infty}\left(\frac{\delta C_n}{\delta q}\frac{\delta H}{\delta r} - \frac{\delta C_n}{\delta r}\frac{\delta H}{\delta q}\right)dx \qquad (93)$$
$$= \{C_n, H\}. \qquad (94)$$
We can rewrite the Hamiltonian by substituting (87) into (90),
$$H = \frac{2}{\pi}\int_{-\infty}^{\infty}A_\infty(\xi)\log\left[a(\xi)\bar a(\xi)\right]d\xi + 4i\sum_{m=1}^{N}\int_{\bar\zeta_m}^{\zeta_m}A_\infty(\zeta)\,d\zeta. \qquad (95)$$
Define a new pair of variables P(ξ) and Q(ξ) in the following way, for ζ real, i.e. ζ = ξ,
$$P(\xi) = \log\left[a(\xi)\bar a(\xi)\right], \qquad Q(\xi) = -\frac{1}{\pi}\log b(\xi). \qquad (96)$$
There may be discrete eigenvalues: for η > 0,
$$a(\zeta_m) = 0, \qquad c_m = \frac{b}{a'}\Big|_{\zeta_m}, \qquad m = 1,\ldots,N, \qquad (97a)$$
and for η < 0,
$$\bar a(\bar\zeta_j) = 0, \qquad \bar c_j = \frac{\bar b}{\bar a'}\Big|_{\bar\zeta_j}, \qquad j = 1,\ldots,\bar N. \qquad (97b)$$
Let
$$P_m = \zeta_m, \qquad Q_m = -2i\log c_m, \qquad m = 1,\ldots,N, \qquad (98a)$$
$$\bar P_l = \bar\zeta_l, \qquad \bar Q_l = -2i\log\bar c_l, \qquad l = 1,\ldots,\bar N. \qquad (98b)$$
Denote by S the variables defined in (96) and (98). One can show that the mapping (r,q) → S is a canonical transformation, with the Hamiltonian given by (95). It is now apparent that H depends on the generalised momenta (P(ξ), P_m, P̄_l) but not on the coordinates (Q(ξ), Q_m, Q̄_l). This is the defining property of action-angle variables. It is also apparent that Hamilton's equations take the form
$$\frac{\partial P}{\partial t} = 0, \qquad \frac{\partial Q}{\partial t} = \frac{\delta H}{\delta P}. \qquad (99)$$
Thus P is a time-independent variable, while Q evolves linearly in time. Explicitly, we have
$$\frac{\partial}{\partial t}\log\left[a(\xi)\bar a(\xi)\right] = 0, \qquad \frac{\partial}{\partial t}\log b(\xi) = -2A_\infty(\xi), \qquad \xi \in \mathbb{R}, \qquad (100a)$$
$$\frac{\partial\zeta_m}{\partial t} = 0, \qquad \frac{\partial}{\partial t}\log c_m = -2A_\infty(\zeta_m), \qquad m = 1,\ldots,N, \qquad (100b)$$
$$\frac{\partial\bar\zeta_l}{\partial t} = 0, \qquad \frac{\partial}{\partial t}\log\bar c_l = 2A_\infty(\bar\zeta_l), \qquad l = 1,\ldots,\bar N. \qquad (100c)$$
These results lead us to the conclusion that the ISM is indeed a canonical transformation (r,q) → S, and that the variables S are action-angle variables.

3 Exact Solution of the Nonlinear Schrödinger Equation

In this section, we use the ISM to solve the nonlinear Schrödinger equation (NLS)
$$iu_t + u_{xx} - 2\lambda|u|^2u = 0, \qquad \text{where } \lambda = \pm1. \qquad (101)$$

3.1 Soliton Solution

In Appendix E, we prove that if λ = +1, the coefficient a(ζ) has no zero in the upper half complex ζ-plane; hence there is no soliton solution. This result conforms with physical intuition: if λ = +1, the interaction in (101) is repulsive and there is no bound state, while if λ = −1, the interaction is attractive and we may have soliton solutions.

Assume λ = −1 and b = 0, which corresponds to the reflectionless condition, i.e. there is no reflection in the auxiliary problem, and take N = 1, i.e. a(ζ) has one zero in the upper half plane. Consequently (66) becomes
$$F(x,t) = -iC(t)e^{i\zeta x}, \qquad \zeta = \xi + i\eta, \quad \eta > 0. \qquad (102)$$
Dropping the time dependence for the moment, and substituting (102) and its counterpart F̄(x) = F*(x) into the Gel'fand-Levitan-Marchenko equations (53) and (54), we obtain
$$K_1(x,y) = iC^*e^{-i\zeta^*(x+y)} - |C|^2\int_x^\infty\!\!\int_x^\infty K_1(x,z)\,e^{i\zeta z}e^{is(\zeta-\zeta^*)}e^{-i\zeta^*y}\,ds\,dz. \qquad (103)$$
Define $\tilde K_1(x) = \int_x^\infty K_1(x,z)e^{i\zeta z}\,dz$. Multiplying (103) by $e^{i\zeta y}$ and operating on it with $\int_x^\infty dy$, we get
$$\tilde K_1(x) = \frac{-C^*e^{i(\zeta-2\zeta^*)x}/(\zeta-\zeta^*)}{1 - |C|^2e^{2i(\zeta-\zeta^*)x}/(\zeta-\zeta^*)^2}. \qquad (104)$$
From a direct calculation of (103), we can express K₁(x,y) in terms of K̃₁ in the form
$$K_1(x,y) = iC^*e^{-i\zeta^*(x+y)} - i|C|^2\,\tilde K_1(x)\,\frac{e^{i(\zeta-\zeta^*)x}}{\zeta-\zeta^*}\,e^{-i\zeta^*y}. \qquad (105)$$
Substituting the expression (104) for K̃₁ into (105) gives
$$K_1(x,y) = iC^*e^{-i\zeta^*(x+y)}\left[1 - \frac{|C|^2}{(\zeta-\zeta^*)^2}e^{2i(\zeta-\zeta^*)x}\right]^{-1}. \qquad (106)$$
Then the potential q(x) is given by
$$q(x) = -2K_1(x,x) = \frac{-2iC^*e^{-2i\xi x}}{e^{2\eta x} + \dfrac{|C|^2}{4\eta^2}e^{-2\eta x}}. \qquad (107)$$
We now restore the time evolution of C, with C(t) = C_0e^{-2A_\infty(\zeta)t}. Since A_∞(ζ) = −2iζ², we have C(t) = C_0e^{4i\zeta^2t}. Defining C_0 = |C_0|e^{i\theta_0} and x_0 = \log(|C_0|/2\eta), we obtain the usual expression for q(x,t),
$$q(x,t) = 2\eta\,e^{-2i\xi x - 4i(\xi^2-\eta^2)t - i(\theta_0+\pi/2)}\,\mathrm{sech}(2\eta x + 8\xi\eta t - x_0). \qquad (108)$$

3.2

General Solution of ISM

We will express the solution of the NLS equation in a general form by the ISM. Recall
that from section (2.3), we have the general setting
r = q ,

= 1,
Z

K(x, y) = F (x + y)

q = 2K(x, x),
Z
ds1
dz1 K(x1 , z1 )F (z1 + s1 )F (s1 + y),

(109)
(110)

where
Z
b
1
()eix d,
F (x)
2 c a
Z
1
b ix

F (x)
( )e
d.
2 c a
We can express F (x) and F (x) in a more general form
Z
F (x) =
(k) exp(ikx)dk,
ZC
F (x) =
(k ) exp(ik x)dk ,

(111a)
(111b)

(112a)
(112b)

where C presents the complex plane of k, in (111) is replaced by k, (k) ab (k) and

(k ) ab (k ), and dk and dk present some appropriate measures in C which keep (111)


and (112) equivalent. Iterating K in the integral equation (110), we have
Z

2
K = F (x, y)
ds1 dz1 F (x + z1 )F (z1 + s1 )F (s1 + y)
x
Z
+ 3
ds1 ds2 dz1 dz2 F (x + z2 )F (z2 + s2 )F (s2 + z1 )F (z1 + s1 )F (s1 + y) + . . .
x

= K0 + K1 + K2 + =

X
n=0

Kn ,

(113)

with
n n+1

Z
ds1 . . . dsn dz1 . . . dzn

Kn = (1)

C 2n+1

[x,]2n

exp ipn+1 x + i

n
X

(q p+1 z ) + i

=1

= (1)n n+1

Z
C 2n+1

n Z
Y
j=1

n
X

dp1 . . . dpn+1 dq1 . . . dqn

(q p )s ip1 y

=1

dp1 . . . dpn+1 dq1 . . . dqn


dsj dzj exp(i(qj pj+1 )zj + i(qj pj )sj ) exp(ipn+1 x ip1 y)

= (1)n n+1

Z
C 2n+1

dp1 . . . dpn+1 dq1 . . . dqn

n
Y

Ij exp(ipn+1 x ip1 y).

(114)

j=1

We can calculate directly Ij with


Z
Ij =
dsj dzj exp(i(qj pj+1 )zj + i(qj pj )sj )
x
!
!




1
1

=
exp(iqj ipj+1 )zj x
exp(iqj ipj )sj x

iqj ipj+1
iqj ipj
!
1
1

=
exp(i(qj ipj+1 )x + i(qj pj )x).
(115)

qj pj+1 qj pj
Hence we have
Z

n
Y

1
Kn =()
dqj
qj pk+1
C 2n+1
j=1
)




exp ix(2qj pj pj+1 ) exp i(pn+1 x + p1 y) .
n+1

dp1 . . . dpn+1 dq1 . . . dqn




1

qj pk
(116)

Adding the time dependence for \rho(k,t) in accordance with (65), knowing that A_-(k) = 2ik^2, we have

\rho(k,t) = \rho(k)\,\exp(4ik^2t),   (117a)

\bar\rho(\bar k,t) = \bar\rho(\bar k)\,\exp(-4i\bar k^2t).   (117b)
Define the following measures,

d\mu(k_{2j+1}) = dp_j, \quad with p_j \equiv k_{2j+1},   (118a)

d\mu(k_{2j}) = dq_j, \quad with q_j \equiv k_{2j},   (118b)

where j = 1, 2, \dots, n. With the relation between \rho and \bar\rho, we can see that d\bar\mu(\bar k) = d\mu(k). After an appropriate rescaling between x and t, and absorbing the extra multiplicative factors in the exponentials into the measures d\mu(k_{2j+1}) and d\mu(k_{2j}), we can finally obtain the general solution of the nonlinear Schrödinger equation written in the form


q(x,t) = \sum_{\substack{n \ge 0 \\ m = 2n+1}}\int_{C^m}\frac{\exp\big(i\Theta_m(x,t)\big)}{\prod_{j=1}^{2n}(k_j + k_{j+1})}\,d\mu(k_1)\,d\mu(k_2)\cdots d\mu(k_m),   (119)

|q(x,t)|^2 = \partial_x\sum_{n=1}^{\infty}\int_{C^{2n}}\frac{(-1)^{n-1}\exp\big(i\Theta_{2n}(x,t)\big)}{\prod_{j=1}^{2n-1}(k_j + k_{j+1})}\,d\mu(k_1)\cdots d\mu(k_{2n}),   (120)

where

\Theta_n(x,t) = \sum_{j=1}^{n}\big(k_jx + (-1)^jk_j^2t\big).   (121)

Defect Problem

Generally speaking, a defect in 2D (1D space + 1D time) integrable field theories can be viewed as an internal boundary condition in the domain where the field u(x,t) is defined. In [4, 5], Bowcock, Corrigan and Zambon introduced a Lagrangian formalism to generate defects in classical integrable models while keeping the systems completely integrable. In this section, we present another approach [6], introduced by Caudrelier, which uses the ISM in the AKNS scheme to generate defects. Integrability is proved systematically by constructing the generating function of the infinite set of modified integrals of motion. The contribution of the defects to all orders is explicitly identified in terms of a defect matrix.
Consider the following system: choose a point x_0 \in \mathbb{R} and suppose that the auxiliary problem exists for x > x_0 with the operators X and T, and for x < x_0 with \tilde X and \tilde T. X and T are defined in the form

X = -i\lambda\sigma_3 + W, \quad with W = \begin{pmatrix} 0 & q \\ r & 0 \end{pmatrix}, \quad and T = \begin{pmatrix} A & B \\ C & -A \end{pmatrix},   (122)
and \tilde X and \tilde T are defined in the same way. We assume that these two systems are connected by

\tilde v(x,t,\lambda) = L(x,t,\lambda)\,v(x,t,\lambda),   (123)

at x = x_0, where L(x,t,\lambda) is called the defect matrix, which links the solutions on the two sides of x_0. From the general definitions (122) and (123), differentiating (123) and using v_x = Xv, \tilde v_x = \tilde X\tilde v (and similarly for the time part), we obtain

L_x = \tilde XL - LX,   (124a)

L_t = \tilde TL - LT.   (124b)

We now introduce the generating function I(\lambda) for the integrals of motion,

I(\lambda) = I^L(\lambda) + I^R(\lambda) + I^D(\lambda),   (125)

where

I^L(\lambda) = \int_{-\infty}^{x_0}\tilde q\,\tilde\Gamma\,dx, \qquad I^R(\lambda) = \int_{x_0}^{\infty}q\,\Gamma\,dx,   (126a, 126b)

I^D(\lambda) = \log\big(L_{11} + \Gamma L_{12}\big)\big|_{x=x_0},   (126c)

where \Gamma = v_2/v_1, \tilde\Gamma = \tilde v_2/\tilde v_1, and L_{ij} are the entries of the defect matrix L. Let us prove that I(\lambda) is conserved. From the definitions of the first two elements of I(\lambda), we have

\partial_t\Big(\int_{-\infty}^{x_0}\tilde q\,\tilde\Gamma\,dx + \int_{x_0}^{\infty}q\,\Gamma\,dx\Big) = \big(B\Gamma + A - (\tilde B\tilde\Gamma + \tilde A)\big)\big|_{x=x_0}.   (127)

By using (122) and (123) at x = x_0, this becomes

\partial_t\Big(\int_{-\infty}^{x_0}\tilde q\,\tilde\Gamma\,dx + \int_{x_0}^{\infty}q\,\Gamma\,dx\Big) = -\frac{\partial_t\big(L_{11} + \Gamma L_{12}\big)}{L_{11} + \Gamma L_{12}}\Big|_{x=x_0},   (128)

which cancels the derivative of (126c) with respect to t. Hence

\partial_t I(\lambda) = 0.   (129)

Then we suppose that L(x,t,\lambda) may be expanded in inverse powers of \lambda,

L(x,t,\lambda) = \sum_{n=0}^{N}L_n(x,t)\,\lambda^{-n}.   (130)

For N = 1,

L(x,t,\lambda) = L_0(x,t) + \lambda^{-1}L_1(x,t).   (131)

Substituting this form into (124a) and collecting powers of \lambda gives

0 = [L_0, \sigma_3],   (132a)

L_{0x} = i[L_1, \sigma_3] + \tilde WL_0 - L_0W,   (132b)

L_{1x} = \tilde WL_1 - L_1W.   (132c)
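The power counting that produces the first of these relations can be reproduced symbolically; the entries of L_0 and L_1 below are generic placeholder symbols:

```python
import sympy as sp

lam = sp.symbols('lam')
s3 = sp.Matrix([[1, 0], [0, -1]])
q, r, qt, rt = sp.symbols('q r qt rt')     # q, r and their tilde counterparts
W  = sp.Matrix([[0, q],  [r,  0]])
Wt = sp.Matrix([[0, qt], [rt, 0]])

# generic coefficient matrices L0, L1 (entries are placeholder symbols)
L0 = sp.Matrix(2, 2, sp.symbols('a0 b0 c0 d0'))
L1 = sp.Matrix(2, 2, sp.symbols('a1 b1 c1 d1'))

X  = -sp.I*lam*s3 + W
Xt = -sp.I*lam*s3 + Wt
L  = L0 + L1/lam

# the lambda^1 coefficient of Xt*L - L*X is i(L0*s3 - s3*L0), so the
# compatibility L_x = Xt*L - L*X forces [L0, sigma3] = 0 at leading order
rhs = sp.expand(Xt*L - L*X)
coeff_lam1 = rhs.applyfunc(lambda e: e.coeff(lam, 1))
print(sp.simplify(coeff_lam1 - sp.I*(L0*s3 - s3*L0)))   # Matrix([[0, 0], [0, 0]])
```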

If T and \tilde T are polynomials in \lambda with coefficients T_j and \tilde T_j, where j = 0, \dots, N, then from (130) and (124b) we have the relations for the T_j (\tilde T_j),

L_{1t} = \tilde T_0L_1 - L_1T_0,   (133a)

L_{0t} = \tilde T_1L_1 - L_1T_1 + \tilde T_0L_0 - L_0T_0,   (133b)

0 = \tilde T_2L_1 - L_1T_2 + \tilde T_1L_0 - L_0T_1,   (133c)

\vdots   (133d)

0 = \tilde T_NL_0 - L_0T_N.   (133e)

Finally, one can show (the proof is established in [6]) that for N = 1 in (130), L(x,t,\lambda) can be written

L = I + \frac{1}{\lambda}\begin{pmatrix} \frac{a_-}{2} + \frac{1}{2}\sqrt{a_-^2 - 4a_+^2 - 4a_2a_3} & a_2 \\ a_3 & \frac{a_-}{2} - \frac{1}{2}\sqrt{a_-^2 - 4a_+^2 - 4a_2a_3} \end{pmatrix},   (134)
where

a_2 = \frac{i}{2}(\tilde q - q), \qquad a_3 = \frac{i}{2}(\tilde r - r),   (135)
and a_\pm \in \mathbb{C} are the (x,t-independent) parameters of the defect. For example, if q = u, r = \sigma u^*, \sigma = -1, u being a complex scalar field, L can be written

L = I + \frac{1}{\lambda}\begin{pmatrix} \frac{i}{2}\big(b + \sqrt{b^2 + 4a_+^2 + |\tilde u - u|^2}\big) & \frac{i}{2}(\tilde u - u) \\ -\frac{i}{2}(\tilde u^* - u^*) & \frac{i}{2}\big(b - \sqrt{b^2 + 4a_+^2 + |\tilde u - u|^2}\big) \end{pmatrix},   (136)

where b = -ia_- \in \mathbb{R}. Hence the defect matrix is determined by two arbitrary parameters a_+ and b. Taking into account r = -q^*, it turns out that the real integrals of motion can be expressed through the combination

I_{\mathrm{real}}(\lambda) = i\big(I(\lambda) - I^*(\lambda^*)\big),   (137)

and the contribution of the defect can be written

I^D_{\mathrm{real}}(\lambda) = i\Big(\log\big(L_{11} + \Gamma L_{12}\big) - \log\big(L_{11} + \Gamma L_{12}\big)^*\Big)\Big|_{x=x_0}.   (138)

Expanding (137) in inverse powers of \lambda and adding (138), we obtain the expressions of the modified conserved quantities. For the first three conserved quantities, we have
C_m = \int_{-\infty}^{x_0}|\tilde u|^2\,dx + \int_{x_0}^{\infty}|u|^2\,dx - \sqrt{b^2 + 4a_+^2 + |\tilde u - u|^2}\,\Big|_{x=x_0},   (139a)

P_m = i\int_{-\infty}^{x_0}\big(\tilde u\,\tilde u_x^* - \tilde u^*\tilde u_x\big)dx + i\int_{x_0}^{\infty}\big(u\,u_x^* - u^*u_x\big)dx - i\big(u^*\tilde u - \tilde u^*u - 2b\big)\Big|_{x=x_0},   (139b)

H_m = \int_{-\infty}^{x_0}\big(|\tilde u_x|^2 + |\tilde u|^4\big)dx + \int_{x_0}^{\infty}\big(|u_x|^2 + |u|^4\big)dx - \frac{2}{3}\Big(|\tilde u|^2 + |u|^2 - 3b^2 - ib(\tilde uu^* - u\tilde u^*)\Big)\Big|_{x=x_0}.   (139c)
This method is formulated within the AKNS scheme, so the same procedure can be carried out for the other integrable nonlinear equations it contains.
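As an illustration of the bulk part of the first conserved quantity, a short split-step Fourier integration of the NLS equation (written here as iu_t + u_{xx} + 2|u|^2u = 0; the scheme and the grid parameters are standard but arbitrary choices, not the construction of [6]) shows that \int|u|^2\,dx is conserved to machine precision:

```python
import numpy as np

N, Lbox, dt, steps = 256, 40.0, 1e-3, 2000
x = np.linspace(-Lbox/2, Lbox/2, N, endpoint=False)
k = 2*np.pi*np.fft.fftfreq(N, d=Lbox/N)
u = (1.0/np.cosh(x)).astype(complex)      # a sech initial condition

def norm(v):
    return np.sum(np.abs(v)**2) * (Lbox/N)

n0 = norm(u)
for _ in range(steps):
    u = np.fft.ifft(np.exp(-1j*k**2*dt/2) * np.fft.fft(u))   # half kinetic step
    u = u * np.exp(2j*np.abs(u)**2*dt)                       # nonlinear step
    u = np.fft.ifft(np.exp(-1j*k**2*dt/2) * np.fft.fft(u))   # half kinetic step
print(abs(norm(u) - n0) / n0 < 1e-10)     # True: the norm is conserved
```

Both sub-steps multiply by unit-modulus factors, so the L^2 norm is preserved exactly up to floating-point error.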

Conclusions and Outlook

In this report, I have presented the Inverse Scattering Method to solve integrable nonlinear evolution equations under the boundary conditions u(x,t) \to 0 as |x| \to \infty. The AKNS scheme provides a simple and efficient framework to identify and solve nonlinear equations systematically. On one hand, the ISM can be considered as a Fourier-type transform for nonlinear equations; on the other hand, since the AKNS scheme naturally exhibits a Hamiltonian structure, the ISM can be regarded as a canonical transformation which brings the systems into action-angle variables. In the context of classical integrable models, an ISM approach to generate defects, which keeps systems completely integrable, has also been shown.
There are numerous further questions related to the ISM presented here. First, the ISM in the AKNS scheme is not the only way to solve nonlinear equations. Zakharov and Shabat [24] introduced a direct version of the ISM, called the dressing method, using an operator formalism. Furthermore, the procedure of obtaining the Gelfand-Levitan-Marchenko equation is closely related to Riemann-Hilbert problems in complex analysis. There also exist Hirota's bilinear method [10], the Bäcklund transformation, etc. We expect that all these methods are related to one another. In addition, the AKNS scheme and the ISM can be generalised to a matrix form [21, 20], where the field u(x,t) becomes a vector or a matrix. From another point of view, the study of integrability reveals a rich algebraic structure: the classical r-matrix can also be introduced in the ISM, and the corresponding Hamiltonian structure can be transported to the quantum case.
Based on this project, some developments are possible for future work. We can extend the defect matrices to the matrix generalisations of integrable models. Moreover, we can study the defect problem in contexts other than the AKNS scheme. Finally, a generalisation of what is presented here would be to place defects randomly in integrable systems.

Acknowledgments
I would like to thank Vincent Caudrelier, my supervisor, who introduced me to the field of Mathematical Physics and accompanied me throughout this project. I warmly thank Andrea Cavaglià for numerous discussions and encouragements. I also extend my gratitude to all the people who helped me both in my study and in my life during my stay in London, especially Andreas Fring, Olalla Castro-Alvaredo, and Qinna Wang.

Appendices

A  KdV Hierarchy

I will show how Lax obtained the evolution operator M for the spectral operator L, defined as the 1D Schrödinger operator

L = -\frac{d^2}{dx^2} + u(x,t).   (140)

Lax remarked that L(t) is unitarily equivalent to L(0), i.e. there exists a linear operator U(t) on a Hilbert space H which satisfies

U^\dagger(t)\,L(t)\,U(t) = L(0), \qquad U(t)\,U^\dagger(t) = I,   (141)

where U^\dagger is the adjoint of U, i.e. (\varphi, U\psi) = (U^\dagger\varphi, \psi). Multiplying (141) on the right by U^\dagger gives

U^\dagger L(t) = L(0)\,U^\dagger.   (142)

Applying (142) to an eigenfunction \psi of L(t), with L(t)\psi = \lambda(t)\psi, gives

L(0)\big(U^\dagger\psi\big) = U^\dagger L(t)\psi = \lambda(t)\big(U^\dagger\psi\big),   (143)

so \lambda(t) is an eigenvalue of L(0) with eigenfunction U^\dagger\psi. Thus \lambda(t) = \lambda(0): L(t) has the same eigenvalues as L(0). Now define M on H by

U_t = M\,U, \qquad with M^\dagger = -M.   (144)

Differentiating (141) with respect to t yields

0 = U^\dagger_t\,L\,U + U^\dagger L_t U + U^\dagger L\,U_t
  = -U^\dagger M L U + U^\dagger L_t U + U^\dagger L M U
  = U^\dagger\big(L_t - [M, L]\big)U.   (145)

Hence, L and M indeed form a Lax pair satisfying

L_t - [M, L] = 0.   (146)

From the definition (144), M is a linear operator which must be antisymmetric, so that (M\varphi, \psi) = -(\varphi, M\psi) for all \varphi, \psi \in H. A natural choice is therefore to construct M from a suitable linear combination of odd derivatives in x. (Consider the inner product (\varphi, \psi) = \int_{-\infty}^{\infty}\varphi^*\psi\,dx; then

(M\varphi, \psi) = \int_{-\infty}^{\infty}\frac{\partial^n\varphi^*}{\partial x^n}\,\psi\,dx = -\int_{-\infty}^{\infty}\varphi^*\,\frac{\partial^n\psi}{\partial x^n}\,dx = -(\varphi, M\psi),   (147)

if n is odd and \varphi, \varphi_x, \dots, \psi, \psi_x, \dots \to 0 as |x| \to \infty.) The simplest choice is obviously

M = c\,\frac{\partial}{\partial x},   (148)

and this gives us

u_t - c\,u_x = 0.   (149)

For a third-order differential operator, we have the general form

M = \alpha\,\frac{\partial^3}{\partial x^3} + V\,\frac{\partial}{\partial x} + \frac{\partial}{\partial x}\,V + A,   (150)

where \alpha is a constant, V = V(x,t) and A = A(x,t). This operator identifies with (17b) for a suitable choice of the coefficients (in the normalisation u_t - 6uu_x + u_{xxx} = 0, one takes \alpha = -4 and V = 3u) together with A = A(t). Thus the KdV equation appears as the second example within the framework of the Lax pair; it is the first nontrivial case. The procedure adopted above can now be extended to higher-order nonlinear evolution equations, with M defined as

M = \alpha_{2n+1}\,\frac{\partial^{2n+1}}{\partial x^{2n+1}} + \sum_{m=1}^{n}\Big(V_m\,\frac{\partial^{2m-1}}{\partial x^{2m-1}} + \frac{\partial^{2m-1}}{\partial x^{2m-1}}\,V_m\Big) + A.   (151)

For n = 2, it can be shown that the evolution equation is

u_t + 30u^2u_x - 20u_xu_{xx} - 10uu_{xxx} + u_{xxxxx} = 0,   (152)

which is a fifth-order KdV equation. Clearly, there is an infinity of such evolution equations, and the class of equations obtained in this way is called the KdV hierarchy.
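The n = 1 case can be verified symbolically. In the normalisation u_t - 6uu_x + u_{xxx} = 0, with L = -\partial_x^2 + u and M = -4\partial_x^3 + 3(u\partial_x + \partial_x u), the Lax equation reproduces the KdV equation (the constants change with the chosen normalisation):

```python
import sympy as sp

x, t = sp.symbols('x t')
u = sp.Function('u')(x, t)
f = sp.Function('f')(x)                  # arbitrary test function

D = lambda expr: sp.diff(expr, x)

# L f = -f'' + u f ;  M f = -4 f''' + 3( u f' + (u f)' )   (alpha = -4, V = 3u)
Lop = lambda g: -D(D(g)) + u*g
Mop = lambda g: -4*D(D(D(g))) + 3*(u*D(g) + D(u*g))

# L_t - [M, L] applied to f should reduce to (u_t - 6 u u_x + u_xxx) f
expr = sp.expand(sp.diff(u, t)*f - (Mop(Lop(f)) - Lop(Mop(f))))
kdv  = sp.expand((sp.diff(u, t) - 6*u*sp.diff(u, x) + sp.diff(u, x, 3))*f)
print(sp.expand(expr - kdv))             # 0
```

All derivative terms of f cancel in the commutator, leaving a pure multiplication operator, which is what makes the Lax equation equivalent to an evolution equation for u alone.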

B  Proof of Analyticity of \varphi e^{i\lambda x}

I will prove the statement of the analyticity of \varphi e^{i\lambda x} made in section (2.3). For \lambda = \xi + i\eta with \eta > 0, applying \int_{-\infty}^{x}dy to \partial_y\varphi_2(y,\lambda)\,e^{i\lambda y} and integrating by parts gives

\int_{-\infty}^{x}dy\,\partial_y\varphi_2(y,\lambda)\,e^{i\lambda y} = \varphi_2(y,\lambda)e^{i\lambda y}\Big|_{-\infty}^{x} - i\lambda\int_{-\infty}^{x}dy\,\varphi_2(y,\lambda)e^{i\lambda y}

= \varphi_2(x,\lambda)e^{i\lambda x} - i\lambda\int_{-\infty}^{x}dy\,\varphi_2(y,\lambda)e^{i\lambda y}.   (153)

Multiplying the second line of (20) by e^{i\lambda x}, changing the variable x to y and applying \int_{-\infty}^{x}dy, we have

\varphi_2(x,\lambda)e^{i\lambda x} = 2i\lambda\int_{-\infty}^{x}dy\,\varphi_2(y,\lambda)e^{i\lambda y} + \int_{-\infty}^{x}dy\,r(y)\varphi_1(y,\lambda)e^{i\lambda y}.   (154)

Differentiating (154) with respect to x, and putting

f(x,\lambda) = \varphi_2(x,\lambda)e^{i\lambda x}, \qquad g(x,\lambda) = r(x)\varphi_1(x,\lambda)e^{i\lambda x},   (155)

yields

\frac{d}{dx}f(x,\lambda) = 2i\lambda f(x,\lambda) + g(x,\lambda).   (156)

By solving this first-order differential equation, we have

\varphi_2(x,\lambda)e^{i\lambda x} = \int_{-\infty}^{x}dy\,r(y)\,e^{2i\lambda(x-y)}\varphi_1(y,\lambda)e^{i\lambda y}.   (157)

In the same way, multiplying the first line of (20) by e^{i\lambda x}, changing x to y and applying \int_{-\infty}^{x}dy, we obtain

\varphi_1(x,\lambda)e^{i\lambda x} = 1 + \int_{-\infty}^{x}dy\,q(y)\varphi_2(y,\lambda)e^{i\lambda y}

= 1 + \int_{-\infty}^{x}dy\int_{-\infty}^{y}dz\,q(y)r(z)\,e^{2i\lambda(y-z)}\varphi_1(z,\lambda)e^{i\lambda z}.   (158)

Setting

Q_0(x) \equiv \int_{-\infty}^{x}|q(y)|\,dy, \qquad R_0(x) \equiv \int_{-\infty}^{x}|r(y)|\,dy,   (159)

we can show that

|\varphi_1(x,\lambda)e^{i\lambda x}| \le 1 + \int_{-\infty}^{x}dy_1\int_{-\infty}^{y_1}dz_1\,|q(y_1)||r(z_1)|\Big(1 + \int_{-\infty}^{z_1}dy_2\int_{-\infty}^{y_2}dz_2\,|q(y_2)||r(z_2)|\big(1 + \dots\big)\Big),   (160)

since |e^{2i\lambda(y-z)}| \le 1 for \eta > 0 and y \ge z. With the estimate for the ordered 2n-fold integral,

\int_{-\infty}^{x}dy_1\int_{-\infty}^{y_1}dz_1\cdots\int_{-\infty}^{z_{n-1}}dy_n\int_{-\infty}^{y_n}dz_n\prod_{j=1}^{n}|q(y_j)||r(z_j)| \le \frac{1}{(n!)^2}\,Q_0(x)^nR_0(x)^n,   (161)

we have

|\varphi_1(x,\lambda)e^{i\lambda x}| \le 1 + Q_0(x)R_0(x) + \frac{1}{(2!)^2}Q_0(x)^2R_0(x)^2 + \frac{1}{(3!)^2}Q_0(x)^3R_0(x)^3 + \dots = I_0\big(2\sqrt{Q_0(x)R_0(x)}\big),   (162)

where I_0 is the modified Bessel function; the series is absolutely convergent, so \varphi_1e^{i\lambda x} is bounded. The analyticity of \varphi_1e^{i\lambda x} can then be established by repeating the same procedure after differentiating (158) with respect to \lambda. In the same way, we can see that \psi(x,\lambda)e^{-i\lambda x} is also analytic in the upper half plane (\eta > 0), while \bar\varphi(x,\lambda)e^{-i\lambda x} and \bar\psi(x,\lambda)e^{i\lambda x} are analytic in the lower half plane (\eta < 0). From (42a), a(\lambda) and \bar a(\lambda) are respectively analytic in the upper and lower half plane.
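The series in (162) can be compared against the modified Bessel function numerically (the test value is arbitrary):

```python
import math
import numpy as np

# Compare the series  sum_{n>=0} x^n / (n!)^2  with  I_0(2*sqrt(x))
x = 2.0                                   # stands for Q0(x)*R0(x)
series = sum(x**n / math.factorial(n)**2 for n in range(40))
bessel = np.i0(2*math.sqrt(x))            # modified Bessel function I_0
print(abs(series - bessel) / bessel < 1e-5)   # True
```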
Furthermore, by direct calculation from the expressions (157) and (158), we can obtain the asymptotic behaviours of \varphi_1e^{i\lambda x} and \varphi_2e^{i\lambda x} as |\lambda| \to \infty,

\varphi_1(x,\lambda)e^{i\lambda x} = 1 - \frac{1}{2i\lambda}\int_{-\infty}^{x}r(y)q(y)\,dy + O\Big(\frac{1}{\lambda^2}\Big),   (163a)

\varphi_2(x,\lambda)e^{i\lambda x} = -\frac{1}{2i\lambda}\,r(x) + O\Big(\frac{1}{\lambda^2}\Big).   (163b)

Similar expressions can be obtained for \psi e^{-i\lambda x}, \bar\varphi e^{-i\lambda x} and \bar\psi e^{i\lambda x}. With these asymptotic expansions and (42), as |\lambda| \to \infty we have the asymptotic expressions of a and \bar a,

a(\lambda) = 1 - \frac{1}{2i\lambda}\int_{-\infty}^{\infty}r(y)q(y)\,dy + O\Big(\frac{1}{\lambda^2}\Big),   (164a)

\bar a(\lambda) = 1 + \frac{1}{2i\lambda}\int_{-\infty}^{\infty}r(y)q(y)\,dy + O\Big(\frac{1}{\lambda^2}\Big).   (164b)

C  Symmetry Relations between q and r

The symmetry relations between q and r in (20) yield some interesting results for the transition coefficients a, b and \bar a, \bar b.

Assuming r = q^*, (20) becomes

v_x = \begin{pmatrix} -i\lambda & q \\ q^* & i\lambda \end{pmatrix}v.   (165)

With our usual definitions of \psi(\lambda) and \bar\psi(\lambda) satisfying the boundary conditions (37), we can obtain the following relation,

\partial_x\begin{pmatrix} \psi_2^*(x,\lambda^*) \\ \psi_1^*(x,\lambda^*) \end{pmatrix} = \begin{pmatrix} -i\lambda & q \\ q^* & i\lambda \end{pmatrix}\begin{pmatrix} \psi_2^*(x,\lambda^*) \\ \psi_1^*(x,\lambda^*) \end{pmatrix}.   (166)

This identifies with (165) under the symmetry \lambda \to \lambda^*, v_1 \to \psi_2^*(x,\lambda^*), v_2 \to \psi_1^*(x,\lambda^*). Note that \big(\psi_2^*(x,\lambda^*), \psi_1^*(x,\lambda^*)\big)^T satisfies the boundary condition for \bar\psi.

Similar relations can be obtained for \varphi and \bar\varphi, and we come to the following results,

\bar\psi(x,\lambda) = \begin{pmatrix} \psi_2^*(x,\lambda^*) \\ \psi_1^*(x,\lambda^*) \end{pmatrix}, \qquad \bar\varphi(x,\lambda) = \begin{pmatrix} \varphi_2^*(x,\lambda^*) \\ \varphi_1^*(x,\lambda^*) \end{pmatrix},   (167)

which imply that

\bar b(\lambda) = b^*(\lambda^*), \qquad \bar a(\lambda) = a^*(\lambda^*),   (168a)

\bar N = N, \qquad \bar\lambda_k = \lambda_k^*, \qquad \bar C_k = C_k^*.   (168b)

In the same way, in the case where r = q, we have

\bar\psi(x,\lambda) = \begin{pmatrix} \psi_2(x,-\lambda) \\ \psi_1(x,-\lambda) \end{pmatrix}, \qquad \bar\varphi(x,\lambda) = \begin{pmatrix} \varphi_2(x,-\lambda) \\ \varphi_1(x,-\lambda) \end{pmatrix},   (169)

and

\bar b(\lambda) = b(-\lambda), \qquad \bar a(\lambda) = a(-\lambda),   (170a)

\bar N = N, \qquad \bar\lambda_k = -\lambda_k, \qquad \bar C_k = C_k.   (170b)

D  Proof of the Expression of log a

We stated in section (2.4) that the conserved quantities are known once \log a is known for \Im\lambda > 0. Now we show how to obtain the expression of \log a [15, 17]. Recall that a(\lambda) is analytic for \Im\lambda > 0 with a finite number of zeros \lambda_m, and a \to 1 as |\lambda| \to \infty. We also assume that: (i) the zeros of a are simple, (ii) no zero occurs on the real axis, (iii) for real \lambda, \lambda^n\log a(\lambda) \to 0 as |\lambda| \to \infty for all n \ge 0. Define \alpha(\lambda) as

\alpha(\lambda) = a(\lambda)\prod_{m=1}^{N}\frac{\lambda-\bar\lambda_m}{\lambda-\lambda_m}.   (171)

\alpha(\lambda) shares the same analytic properties as a(\lambda), but has no zeros for \Im\lambda > 0. Similarly, \bar a(\lambda) is analytic in the lower half plane, and we can define \bar\alpha(\lambda) as

\bar\alpha(\lambda) = \bar a(\lambda)\prod_{l=1}^{N}\frac{\lambda-\lambda_l}{\lambda-\bar\lambda_l}.   (172)

\bar\alpha(\lambda) is analytic with no zeros in the half plane \Im\lambda \le 0 and tends to 1 as |\lambda| \to \infty. By Cauchy's integral theorem, for \Im\lambda > 0, we have

\log\alpha(\lambda) = \frac{1}{2\pi i}\int_{-\infty}^{\infty}\frac{\log\alpha(\xi)}{\xi-\lambda}\,d\xi,   (173a)

0 = \frac{1}{2\pi i}\int_{-\infty}^{\infty}\frac{\log\bar\alpha(\xi)}{\xi-\lambda}\,d\xi.   (173b)
By adding these, we find that, for \Im\lambda > 0,

\log a(\lambda) = \sum_{m=1}^{N}\log\frac{\lambda-\lambda_m}{\lambda-\bar\lambda_m} + \frac{1}{2\pi i}\int_{-\infty}^{\infty}\frac{\log\big(\alpha(\xi)\bar\alpha(\xi)\big)}{\xi-\lambda}\,d\xi.   (174)

If we further assume that r = -q^*, this simplifies to

\log a(\lambda) = \sum_{m=1}^{N}\log\frac{\lambda-\lambda_m}{\lambda-\lambda_m^*} + \frac{1}{2\pi i}\int_{-\infty}^{\infty}\frac{\log|a(\xi)|^2}{\xi-\lambda}\,d\xi.   (175)

Keeping \Im\lambda > 0, we can expand the right-hand side of (174) as |\lambda| \to \infty in inverse powers of \lambda,

\log a(\lambda) = \sum_{n=1}^{\infty}\frac{1}{\lambda^n}\Bigg\{\frac{1}{n}\sum_{m=1}^{N}\big(\bar\lambda_m^{\,n} - \lambda_m^{\,n}\big) - \frac{1}{2\pi i}\int_{-\infty}^{\infty}\xi^{\,n-1}\log\big(\alpha(\xi)\bar\alpha(\xi)\big)\,d\xi\Bigg\}.   (176)

This expression corresponds to (86) in section (2.4).

E  Proof of No Zeros of a(\lambda) for \Im\lambda > 0 in (3.1)

As we have seen, the NLS equation (101) can be recast into the relation r = \sigma q^* in the AKNS scheme. According to (21), we can express the auxiliary problem as an eigenvalue problem Lv = \lambda v, where

L = \begin{pmatrix} i\partial_x & -iq \\ i\sigma q^* & -i\partial_x \end{pmatrix}, \quad and \sigma = \mp 1.   (177)

If \sigma = +1, we can show that L is a self-adjoint operator, so it can only have real eigenvalues. Suppose that there exists a \lambda_0 such that a(\lambda_0) = 0 with \Im\lambda_0 > 0. It follows from (42a) that the vectors \varphi and \psi are linearly dependent. Since \Im\lambda_0 > 0, \varphi and \psi have exponentially decaying behaviour as x \to -\infty and x \to \infty respectively. Thus (20) has a solution v decaying exponentially as |x| \to \infty. However, (20) is equivalent to the eigenvalue problem (21), in which the self-adjoint operator L would then possess the non-real eigenvalue \lambda_0. This contradiction leads us to the conclusion that a(\lambda) has no complex zeros, i.e. there are no zeros in the upper half plane, and no soliton solutions occur if \sigma = +1. If \sigma = -1, L is not self-adjoint, and a(\lambda) may have zeros.
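The self-adjointness argument can be illustrated by discretising L on a periodic grid; for \sigma = +1 the resulting matrix is Hermitian and its spectrum is real. The grid size and the sech potential below are arbitrary choices:

```python
import numpy as np

# Discretise L = [[ i d/dx, -i q ], [ i sigma q*, -i d/dx ]] on a periodic
# grid. For sigma = +1 the matrix is Hermitian, hence a real spectrum and
# no solitons; for sigma = -1 complex (bound-state) eigenvalues may appear.
N, Lbox = 200, 20.0
x = np.linspace(-Lbox/2, Lbox/2, N, endpoint=False)
h = x[1] - x[0]
q = 2.0 / np.cosh(x)                      # an arbitrary sech potential

# periodic central-difference d/dx: real antisymmetric, so i*D is Hermitian
D = (np.diag(np.ones(N-1), 1) - np.diag(np.ones(N-1), -1)) / (2*h)
D[0, -1], D[-1, 0] = -1/(2*h), 1/(2*h)

def zs_matrix(sigma):
    Q = np.diag(q).astype(complex)
    return np.block([[1j*D, -1j*Q], [1j*sigma*Q.conj(), -1j*D]])

ev_plus = np.linalg.eigvals(zs_matrix(+1.0))
print(np.max(np.abs(ev_plus.imag)) < 1e-8)    # True: the spectrum is real
```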

Bibliography

[1] M.J. Ablowitz, D.J. Kaup, A.C. Newell, and H. Segur. Method for solving the sine-Gordon equation. Phys. Rev. Lett., 30(25):1262–1264, Jun 1973.

[2] V.I. Arnold. Mathematical Methods of Classical Mechanics. Springer, 1989.

[3] P.N. Bibikov and V.O. Tarasov. Boundary-value problem for the nonlinear Schrödinger equation. Theoretical and Mathematical Physics, 79:570–579, June 1989.

[4] P. Bowcock, E. Corrigan, and C. Zambon. Affine Toda field theories with defects. Journal of High Energy Physics, 1:56, January 2004.

[5] P. Bowcock, E. Corrigan, and C. Zambon. Classically integrable field theories with defects. International Journal of Modern Physics A, 19:82–91, 2004.

[6] V. Caudrelier. On a systematic approach to defects in classical integrable field theories. International Journal of Geometric Methods in Modern Physics, 5:1085–1108, 2008.

[7] C.S. Gardner, J.M. Greene, M.D. Kruskal, and R.M. Miura. Method for solving the Korteweg-de Vries equation. Phys. Rev. Lett., 19(19):1095–1097, Nov 1967.

[8] C.S. Gardner. Korteweg-de Vries equation and generalizations. IV. The Korteweg-de Vries equation as a Hamiltonian system. Journal of Mathematical Physics, 12:1548–1551, 1971.

[9] I.M. Gelfand and B.M. Levitan. On the determination of a differential equation from its spectral function. Izv. Akad. Nauk SSSR Ser. Mat., 15:309–360, 1951.

[10] R. Hirota. The Direct Method in Soliton Theory. Cambridge University Press, 2004.

[11] J. Ieda, M. Uchiyama, and M. Wadati. Inverse scattering method for square matrix nonlinear Schrödinger equation under nonvanishing boundary conditions. Journal of Mathematical Physics, 48(1):013507, January 2007.

[12] D.J. Kaup and P.J. Hansen. An initial-boundary value problem for the nonlinear Schrödinger equation. Physica D: Nonlinear Phenomena, 18:77–84, Jan 1986.

[13] D.J. Korteweg and G. de Vries. On the change of form of long waves advancing in a rectangular channel and on a new type of long stationary waves. Phil. Mag., 39:422–443, 1895.

[14] P.D. Lax. Integrals of nonlinear equations of evolution and solitary waves. Commun. Pure Appl. Math., 21:467–490, 1968.

[15] L.D. Faddeev and L.A. Takhtajan. Hamiltonian Methods in the Theory of Solitons. Springer, 2007.

[16] M.J. Ablowitz, D.J. Kaup, A.C. Newell, and H. Segur. The inverse scattering transform - Fourier analysis for nonlinear problems. Studies in Applied Mathematics, 53:249–315, 1974.

[17] M.J. Ablowitz and H. Segur. Solitons and the Inverse Scattering Transform. SIAM Studies in Applied Mathematics. SIAM, Philadelphia, 1981.

[18] M.J. Ablowitz and P.A. Clarkson. Solitons, Nonlinear Evolution Equations and Inverse Scattering. Cambridge University Press, 1991.

[19] O. Babelon, D. Bernard, and M. Talon. Introduction to Classical Integrable Systems. Cambridge University Press, 2003.

[20] T. Tsuchida and M. Wadati. The coupled modified Korteweg-de Vries equations. J. Phys. Soc. Jpn., 67:1175–1187, December 1998.

[21] M. Wadati. The modified Korteweg-de Vries equation. J. Phys. Soc. Japan, 34:5, 1972.

[22] V.E. Zakharov and L.D. Faddeev. Korteweg-de Vries equation: a completely integrable Hamiltonian system. Functional Analysis and Its Applications, 5:280–287, Oct. 1971.

[23] V.E. Zakharov and A.B. Shabat. Exact theory of two-dimensional self-focusing and one-dimensional self-modulation of waves in nonlinear media. Soviet Physics JETP, 34:62–69, Jan 1972.

[24] V.E. Zakharov and A.B. Shabat. A scheme for integrating the nonlinear equations of mathematical physics by the method of the inverse scattering problem. I. Functional Analysis and Its Applications, 8:226–235, 1974.
