
INTRODUCTION TO PARTIAL DIFFERENTIAL EQUATIONS

Lecture Notes given by DINH NHO HAO, Universität-GH-Siegen, Summer Semester 1996

Foreword
The author worked for four years (1993-1996) at the University of Siegen in the group of Applied and Numerical Mathematics. His stay was supported by a DFG scholarship and by a guest professorship from the University of Siegen. During this time, these lectures were given for graduate students. The group of Applied and Numerical Mathematics and the students of his lectures would like to thank Priv.-Doz. Dr. Dinh Nho Hao for his great efforts in research and teaching. Comments on these lecture notes should be sent directly to the author via email (hao@thevinh.ac.vn) or to hjreinhardt@numerik.math.uni-siegen.de.

Siegen, Sept. 1997. H.-J. Reinhardt

Contents

1 Introduction 1
  1.1 Definitions 1
  1.2 Examples 1
    1.2.1 Equations 1
    1.2.2 Solutions 2
  1.3 First-order linear equations 2
    1.3.1 The constant coefficient case 2
    1.3.2 A variable coefficient case 3
  1.4 Where do PDEs come from? 4
    1.4.1 Simple Transport 4
    1.4.2 Vibration Equations 5
    1.4.3 Diffusion Equation 7
    1.4.4 Steady State Equation 8
    1.4.5 Schrödinger's Equation 8
  1.5 Classification of Linear Differential Equations 9
    1.5.1 Differential Equations with Two Independent Variables 9
    1.5.2 Differential Equations with Several Independent Variables 14

2 Characteristic Manifolds and the Cauchy Problem 17
  2.1 Notation 17
  2.2 The Cauchy problem 18
  2.3 Real Analytic Functions and the Cauchy-Kowalevski Theorem 21
    2.3.1 Real Analytic Functions 21
    2.3.2 The Cauchy-Kowalevski theorem 23
  2.4 The Uniqueness theorem of Holmgren 24
    2.4.1 The Lagrange-Green Identity 24
    2.4.2 The Uniqueness theorem of Holmgren 25

3 Hyperbolic Equations 29
  3.1 Boundary and initial conditions 29
  3.2 The uniqueness 31
  3.3 The method of wave propagation 33
    3.3.1 The D'Alembert method 33
    3.3.2 The stability of the solution 34
    3.3.3 The reflection method 36
  3.4 The Fourier method 39
    3.4.1 Free vibration of a string 39
    3.4.2 The proof of the Fourier method 42
    3.4.3 Non-homogeneous equations 44
    3.4.4 General first boundary value problem 46
    3.4.5 General scheme for the Fourier method 46
  3.5 The Goursat Problem 51
    3.5.1 Definition 51
    3.5.2 The Darboux problem. The method of successive approximations 53
  3.6 Solution of general linear hyperbolic equations 58
    3.6.1 The Green formula 58
    3.6.2 Riemann's method 59
    3.6.3 An application of the Riemann method 61

4 Parabolic Equations 63
  4.1 Boundary conditions 63
  4.2 The maximum principle 65
  4.3 Applications of the maximum principle 66
    4.3.1 The uniqueness theorem 66
    4.3.2 Comparison of solutions 67
    4.3.3 The uniqueness theorem in unbounded domains 68
  4.4 The Fourier method 69
    4.4.1 The homogeneous problem 69
    4.4.2 The Green function 72
    4.4.3 Boundary value problems with non-smooth initial conditions 73
    4.4.4 The non-homogeneous heat equation 77
    4.4.5 The non-homogeneous first boundary value problem 78
  4.5 Problems on unbounded domains 79
    4.5.1 The Green function in unbounded domains 79
    4.5.2 Heat conduction in the unbounded domain (−∞, ∞) 82
    4.5.3 The boundary value problem in a half-space 87

5 Elliptic Equations 93
  5.1 The maximum principle 93
  5.2 The uniqueness of the Dirichlet problem 94
  5.3 The invariance of the operator Δ 94
  5.4 Poisson's formula 96
  5.5 The mean value theorem 100
  5.6 The maximum principle (a strong version) 101

Bibliography 103
Index 105

Chapter 1 Introduction
1.1 Definitions
A partial differential equation (PDE) for a function u(x, y, ...) is a relation of the form

F(x, y, ..., u, ux, uy, ..., uxx, uxy, ...) = 0,    (1.1.1)

where F is a given function of the independent variables x, y, ..., of the "unknown" function u, and of a finite or infinite number of its derivatives. We call u a solution of (1.1.1) if, after substitution of u(x, y, ...) and its partial derivatives, (1.1.1) is satisfied identically in x, y, ... in some region in the space of these independent variables. For simplicity, we suppose that x, y, ... are real and that the derivatives of u occurring in (1.1.1) are continuous functions of x, y, ... in the real domain.

Several PDEs involving one or more unknown functions and their derivatives constitute a system. The order of a PDE or of a system is the order of the highest derivative that occurs. A PDE is said to be linear if it is linear in the unknown functions and their derivatives, with coefficients depending on the independent variables x, y, ... . A PDE of order m is called quasilinear if it is linear in the derivatives of order m, with coefficients that depend on x, y, ... and on the derivatives of order < m.

Remark 1.1.1 The dimension of the space to which x, y, ... belong can be infinite.

Remark 1.1.2 If derivatives of u of infinite order occur in (1.1.1), then we say that (1.1.1) is of infinite order.

1.2 Examples
1.2.1 Equations
ux + uy = 0 (transport)
ux + g(u)y = 0 (conservation law, shock wave)
uxx + uyy = 0 (the Laplace equation)
utt − uxx = 0 (the wave equation)
ut = kΔu (the heat equation)
ihψt = −(h²/2m)Δψ + Vψ (Schrödinger's equation)
ut + cuux + uxxx = 0 (the Korteweg-de Vries equation)
∑_{n=0}^∞ (1/n!) dⁿu/dxⁿ = g(x) (an equation of infinite order)

1.2.2 Solutions

Example 1.2.1 Find all u(x, y) satisfying the equation uxx = 0. We can integrate once to get ux = const (in x). Since the solution may depend on y, we have ux = f(y), where f(y) is arbitrary. Integrating again, we get u(x, y) = f(y)x + g(y).

Example 1.2.2 Find u(x, y) satisfying the PDE uxx + u = 0.

Solution: u(x, y) = f(y) cos x + g(y) sin x, where f and g are arbitrary.

Example 1.2.3 Find u(x, y) satisfying uxy = 0. First integrate in x, regarding y as fixed; we get

uy(x, y) = f(y).

Next integrate in y, regarding x as fixed. We get the solution

u(x, y) = F(y) + G(x),

where F′ = f and G is arbitrary.

1.3 First-order linear equations


1.3.1 The constant coefficient case

Consider the equation

aux + buy = 0,    (1.3.1)

where a and b are constants, not both zero.

Geometric method. The quantity aux + buy is the directional derivative of u in the direction of the vector V = (a, b) = a i + b j. It must always be zero. This means that u(x, y) must be constant in the direction of V. The vector (b, −a) is orthogonal to V. The lines parallel to V satisfy the equation bx − ay = const = c, and along each such line the solution u has a constant value, say f(c). Since c is arbitrary, we have

u(x, y) = f(c) = f(bx − ay)    (1.3.2)

for all values of x and y.

Coordinate method. Change variables to

x′ = ax + by,  y′ = bx − ay.    (1.3.3)

We have

ux = ∂u/∂x = (∂u/∂x′)(∂x′/∂x) + (∂u/∂y′)(∂y′/∂x) = a ux′ + b uy′

and

uy = ∂u/∂y = (∂u/∂x′)(∂x′/∂y) + (∂u/∂y′)(∂y′/∂y) = b ux′ − a uy′.

Hence

aux + buy = a(a ux′ + b uy′) + b(b ux′ − a uy′) = (a² + b²) ux′.

Since a² + b² ≠ 0, we have ux′ = 0. Thus u = f(y′) = f(bx − ay).
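As a quick numerical check of (1.3.2): with an arbitrarily chosen profile f(s) = tanh s and coefficients a = 2, b = 3 (illustrative values, not from the text), u = f(bx − ay) should annihilate a ux + b uy at every sample point:

```python
import math

a, b = 2.0, 3.0
f = math.tanh                      # arbitrary smooth profile
u = lambda x, y: f(b * x - a * y)  # candidate solution (1.3.2)

def dx(u, x, y, h=1e-6):
    return (u(x + h, y) - u(x - h, y)) / (2 * h)

def dy(u, x, y, h=1e-6):
    return (u(x, y + h) - u(x, y - h)) / (2 * h)

# the directional derivative along V = (a, b) should vanish everywhere
residual = max(abs(a * dx(u, x, y) + b * dy(u, x, y))
               for x in (-1.0, 0.2, 0.9) for y in (-0.5, 0.4, 1.1))
print(residual)  # close to 0
```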

1.3.2 A variable coefficient case

The equation

ux + y uy = 0    (1.3.4)

is linear and homogeneous but has a variable coefficient y. The PDE (1.3.4) itself asserts that the directional derivative in the direction of the vector (1, y) is zero. The curves in the xy plane with (1, y) as tangent vectors have slope y. Their equations are

dy/dx = y/1.    (1.3.5)

This ODE has the solution

y = Ce^x.    (1.3.6)

These curves are called the characteristic curves of the PDE (1.3.4). As C varies, the curves fill out the xy plane perfectly without intersecting. On each of the curves u(x, y) is a constant, because

(d/dx) u(x, Ce^x) = ∂u/∂x + Ce^x ∂u/∂y = ux + y uy = 0.

Thus u(x, Ce^x) = u(0, Ce^0) = u(0, C) is independent of x. Putting y = Ce^x, i.e. C = e^{-x}y, we have u(x, y) = u(0, e^{-x}y). It follows that

u(x, y) = f(e^{-x}y)    (1.3.7)

is the general solution of our PDE.
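The general solution (1.3.7) can be checked the same way; f = sin below is again an arbitrary profile chosen only for the test:

```python
import math

f = math.sin                              # arbitrary smooth profile
u = lambda x, y: f(math.exp(-x) * y)      # candidate general solution (1.3.7)

def dx(u, x, y, h=1e-6):
    return (u(x + h, y) - u(x - h, y)) / (2 * h)

def dy(u, x, y, h=1e-6):
    return (u(x, y + h) - u(x, y - h)) / (2 * h)

# u_x + y u_y should vanish along every characteristic y = C e^x
residual = max(abs(dx(u, x, y) + y * dy(u, x, y))
               for x in (-0.8, 0.1, 1.3) for y in (-1.0, 0.5, 2.0))
print(residual)  # close to 0
```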

1.4 Where do PDEs come from?


1.4.1 Simple Transport
Consider a fluid, say water, flowing at a constant rate c along a horizontal pipe of fixed cross section in the positive x direction. A substance, say a pollutant, is suspended in the water. Let u(x, t) be its concentration in grams/centimeter at time t.

(Figure: the concentration profile u(·, t) at times t = 1, 2, 3, translating to the right.)

We know that the amount of pollutant in the interval [0, b] at the time t is M = ∫₀ᵇ u(x, t) dx, in grams. At the later time t + h, the same molecules of pollutant have moved to the right by ch centimeters. Hence

M = ∫₀ᵇ u(x, t) dx = ∫_{ch}^{b+ch} u(x, t + h) dx.


Differentiating with respect to b, we get

u(b, t) = u(b + ch, t + h).

Differentiating with respect to h and putting h = 0, we get 0 = c ux(b, t) + ut(b, t). Since b is arbitrary, the equation describing our simple transport problem is

ut + c ux = 0.    (1.4.1)
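Two facts behind this derivation can be verified numerically: a translating profile u(x, t) = g(x − ct) satisfies (1.4.1), and its total mass ∫ u dx does not change in time. The Gaussian profile and the value c = 1.5 below are arbitrary choices for illustration:

```python
import math

c = 1.5
g = lambda s: math.exp(-s * s)          # arbitrary initial concentration profile
u = lambda x, t: g(x - c * t)           # travelling-wave solution of (1.4.1)

# check the PDE u_t + c u_x = 0 at one sample point
h = 1e-6
x0, t0 = 0.4, 0.8
u_t = (u(x0, t0 + h) - u(x0, t0 - h)) / (2 * h)
u_x = (u(x0 + h, t0) - u(x0 - h, t0)) / (2 * h)
pde_residual = u_t + c * u_x

# total mass by a midpoint Riemann sum over a window containing the pulse
def mass(t, lo=-20.0, hi=20.0, n=4000):
    dx = (hi - lo) / n
    return sum(u(lo + (i + 0.5) * dx, t) for i in range(n)) * dx

mass_drift = abs(mass(0.0) - mass(1.0))
print(pde_residual, mass_drift)  # both close to 0
```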

1.4.2 Vibration Equations


Consider a flexible, elastic, homogeneous string or thread of length l, which undergoes relatively small transverse vibrations. Assume that it remains in a plane. Let u(x, t) be its displacement from the equilibrium position at time t and position x. Because the string is perfectly flexible, the tension (force) is directed tangentially along the string. Let T(x, t) be the magnitude of this tension vector. Limiting ourselves to the consideration of small vibrations of the string, we shall ignore magnitudes of order greater than tan α = ∂u/∂x.

(Figure: an element (x, x + dx) of the string with tension vectors T(x, t) and T(x + dx, t).)

Any small section (a, b) of the string, after displacement from a state of equilibrium, does not change its length within the range of our approximation:

S = ∫ₐᵇ √(1 + (∂u/∂x)²) dx ≈ b − a,

and therefore, in agreement with Hooke's law, T is independent of t. We shall show now that it is also independent of x; this means

T = T0 = const.


For the projections of the tension onto the u- and x-axes (denoted by Tu and Tx, respectively) we have

Tx(x) = T(x) cos α = T(x)/√(1 + ux²) ≈ T(x),
Tu(x) = T(x) sin α ≈ T(x) tan α = T(x) ux,

where α is the angle between the tangent to the curve u(x, t) and the x-axis. The tension, the external force and the inertial forces (Trägheitskräfte) act on a small piece (x1, x2) of the string. The sum of the projections onto the x-axis of all these forces must be zero, since we consider only transverse vibrations. Because the external force and the inertial forces act in the direction of the u-axis, we have

Tx(x1) − Tx(x2) = 0, or T(x1) = T(x2).

Since x1 and x2 are arbitrary, we see that T is independent of x. Thus, for all x and t, we have T ≈ T0.

Let F(x, t) be the magnitude of the external forces acting on the string at the point x at the instant of time t, directed perpendicular to the x-axis. Finally, let ρ(x) be the linear density of the string at the point x, so that ρ(x)dx is the mass of the element (x, x + dx) of the string.

We now construct an equation of motion for the element (x, x + dx) of the string. The tension forces T(x + dx, t), −T(x, t) and the external force act upon the element (x, x + dx); their sum, according to Newton's law, must equal the product of the mass of the element and its acceleration. Projecting this vector equation onto the u-axis, on account of all that has been said, we obtain

T0 sin α|_{x+dx} − T0 sin α|_x + F(x, t)dx = ρ(x)dx ∂²u(x, t)/∂t².    (1.4.2)

However, within the range of our approximation,

sin α = tan α/√(1 + tan²α) ≈ tan α = ∂u/∂x,

and therefore

ρ(x) ∂²u(x, t)/∂t² = T0 (1/dx)[∂u(x + dx, t)/∂x − ∂u(x, t)/∂x] + F(x, t),

that is,

ρ ∂²u/∂t² = T0 ∂²u/∂x² + F.    (1.4.3)

This is indeed the equation of small transverse vibrations of a string. If the density is constant, ρ(x) = ρ, the equation of vibration of a string assumes the form

∂²u/∂t² = a² ∂²u/∂x² + f,    (1.4.4)


where we have set a² = T0/ρ (constant), f = F/ρ. We shall also call this equation the one-dimensional wave equation. In the same way one can also derive the equation of small transverse vibrations of a membrane as

ρ ∂²u/∂t² = T0 (∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z²) + F := T0 Δu + F.    (1.4.5)
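For the homogeneous string (f = 0), any superposition of travelling waves u = φ(x − at) + ψ(x + at) solves (1.4.4); a finite-difference check with the arbitrary choices φ = sin, ψ = cos, a = 2 (illustrative values, not from the text):

```python
import math

a = 2.0
phi, psi = math.sin, math.cos                 # arbitrary wave profiles
u = lambda x, t: phi(x - a * t) + psi(x + a * t)

def utt(x, t, h=1e-4):
    return (u(x, t + h) - 2 * u(x, t) + u(x, t - h)) / h**2

def uxx(x, t, h=1e-4):
    return (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2

# the wave-equation residual u_tt - a^2 u_xx should vanish
residual = max(abs(utt(x, t) - a**2 * uxx(x, t))
               for x in (0.1, 0.7) for t in (0.2, 0.9))
print(residual)  # close to 0
```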

1.4.3 Diffusion Equation


We now derive the equation of heat diffusion (or the diffusion of particles in a medium). Denote by u(x, t) the temperature of the medium at the point x = (x1, x2, x3) at an instant of time t. We consider the medium to be isotropic and denote its density by ρ(x), its specific heat by c(x), and its coefficient of thermal conductivity at the point x by k(x). F(x, t) denotes the intensity of heat sources at the point x at the instant of time t.

We calculate the balance of heat in an arbitrary volume V during the time interval (t, t + dt). The boundary of V is denoted by S, and n is the external normal to it. In agreement with Fourier's law, an amount of heat

Q1 = ∫_S k (∂u/∂n) dS dt = ∫_S (k grad u, n) dS dt,

equal, by virtue of the Gauss-Ostrogradskii formula, to

Q1 = ∫_V div(k grad u) dx dt,

enters the volume V through the surface S. An amount of heat

Q2 = ∫_V F(x, t) dx dt

arises in the volume V as a result of the heat sources. Since the temperature in the volume V during the interval of time (t, t + dt) has increased by

u(x, t + dt) − u(x, t) ≈ (∂u/∂t) dt,

it is necessary to expend for this an amount of heat

Q3 = ∫_V cρ (∂u/∂t) dx dt.

On the other hand, Q3 = Q1 + Q2, and thus

∫_V [div(k grad u) + F − cρ ∂u/∂t] dx dt = 0,

from which, since the volume V is arbitrary, we obtain the equation of heat diffusion

cρ ∂u/∂t = div(k grad u) + F.    (1.4.6)


If the medium is homogeneous, that is, if c, ρ, and k are constants, Eq. (1.4.6) assumes the form

∂u/∂t = a² Δu + f,    (1.4.7)

where a² = k/(cρ), f = F/(cρ). Eq. (1.4.7) is called the heat equation.
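A classical solution of the homogeneous one-dimensional heat equation (1.4.7) is the heat kernel u(x, t) = t^(−1/2) exp(−x²/(4a²t)); the sketch below (a = 0.7 is an arbitrary value) verifies ut = a² uxx pointwise by finite differences:

```python
import math

a = 0.7
# heat kernel (up to a constant factor): solves u_t = a^2 u_xx for t > 0
u = lambda x, t: math.exp(-x * x / (4 * a * a * t)) / math.sqrt(t)

def ut(x, t, h=1e-5):
    return (u(x, t + h) - u(x, t - h)) / (2 * h)

def uxx(x, t, h=1e-4):
    return (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2

residual = max(abs(ut(x, t) - a * a * uxx(x, t))
               for x in (0.0, 0.5, 1.2) for t in (0.5, 1.0))
print(residual)  # close to 0
```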

1.4.4 Steady State Equation


For steady-state processes F(x, t) = F(x), u(x, t) = u(x), and both the wave equation (1.4.5) and the diffusion equation (1.4.6) assume the form

−div(p grad u) + qu = F(x).    (1.4.8)

When p = const and q = 0, Eq. (1.4.8) reduces to Poisson's equation

Δu = −f, f = F/p;    (1.4.9)

when f = 0, Eq. (1.4.9) is called Laplace's equation

Δu = 0.    (1.4.10)

In the wave equation (1.4.5), let the external disturbance f(x, t) be periodic with frequency ω and amplitude f0(x), f(x, t) = f0(x)e^{iωt}. If we seek periodic solutions u(x, t) with the same frequency and unknown amplitude u0(x), then for the function u0(x) we obtain the equation

Δu0 + k²u0 = −f0(x)/a², k² = ω²/a²,    (1.4.11)

called the Helmholtz equation.

1.4.5 Schrödinger's Equation


Let a quantum particle of mass m move in an external force field with potential V(x). We denote by ψ(x, t) the wave function of this particle, so that |ψ(x, t)|²dx is the probability that the particle will be in a neighbourhood d(x) of the point x at an instant of time t; here dx is the volume of d(x). Then the function ψ satisfies Schrödinger's equation

ihψt = −(h²/2m)Δψ + Vψ,    (1.4.12)

where h = 1.054·10⁻²⁷ erg·sec is Planck's constant.

1.5 Classification of Linear (Second-Order) Differential Equations


1.5.1 Differential Equations with Two Independent Variables


Let us consider a second-order differential equation with two independent variables

a11 ∂²u/∂x² + 2a12 ∂²u/∂x∂y + a22 ∂²u/∂y² + F(x, y, u, ∂u/∂x, ∂u/∂y) = 0,    (1.5.1)

where we shall suppose that the coefficients a11, a12, and a22 belong to C² in a certain neighbourhood and do not vanish simultaneously anywhere in this neighbourhood. For definiteness we assume that a11 ≠ 0 in this neighbourhood. It may in fact happen that a22 ≠ 0 instead; but then, by reversing the roles of x and y, we obtain an equation in which a11 ≠ 0. If a11 and a22 vanish simultaneously at a point, then a12 ≠ 0 in a neighbourhood of this point; in this case the change of variables x′ = x + y, y′ = x − y leads to an equation in which both a11 and a22 are nonzero. Using the new variables

ξ = ξ(x, y), η = η(x, y), ξ, η ∈ C², D(ξ, η)/D(x, y) ≠ 0,    (1.5.2)

we have

ux = uξ ξx + uη ηx,  uy = uξ ξy + uη ηy,
uxx = uξξ ξx² + 2uξη ξx ηx + uηη ηx² + uξ ξxx + uη ηxx,
uxy = uξξ ξx ξy + uξη(ξx ηy + ξy ηx) + uηη ηx ηy + uξ ξxy + uη ηxy,
uyy = uξξ ξy² + 2uξη ξy ηy + uηη ηy² + uξ ξyy + uη ηyy.

Putting these expressions in our original equation, we get a new equation of the form

ā11 uξξ + 2ā12 uξη + ā22 uηη + F̄ = 0,    (1.5.3)

where

ā11 = a11 ξx² + 2a12 ξx ξy + a22 ξy²,
ā12 = a11 ξx ηx + a12(ξx ηy + ξy ηx) + a22 ξy ηy,
ā22 = a11 ηx² + 2a12 ηx ηy + a22 ηy²,

and F̄ is independent of the second-order derivatives of u with respect to ξ and η. We shall require that the functions ξ(x, y) and η(x, y) make the coefficients ā11 and ā22 vanish, that is, they should satisfy the equations

a11 ξx² + 2a12 ξx ξy + a22 ξy² = 0,    (1.5.4)
a11 ηx² + 2a12 ηx ηy + a22 ηy² = 0.    (1.5.5)

First we prove the following lemma.

Lemma 1.5.1
1. If z = φ(x, y) is a particular solution of the equation

a11 zx² + 2a12 zx zy + a22 zy² = 0,    (1.5.6)

then φ(x, y) = C is a general integral of the ordinary differential equation

a11 dy² − 2a12 dx dy + a22 dx² = 0.    (1.5.7)

2. Conversely, if φ(x, y) = C is a general integral of Eq. (1.5.7), then z = φ(x, y) is a solution of Eq. (1.5.6).

Proof. Since the function z = φ(x, y) is a solution of Eq. (1.5.6), dividing by φy² gives

a11 (−φx/φy)² − 2a12 (−φx/φy) + a22 = 0    (1.5.8)

for all x, y in the neighbourhood where z = φ(x, y) is defined and φy(x, y) ≠ 0. Let φ(x, y) = C. Because φy(x, y) ≠ 0, we can find a neighbourhood where y = f(x, C); then

dy/dx = [−φx(x, y)/φy(x, y)]|_{y=f(x,C)}    (1.5.9)

and

a11 (dy/dx)² − 2a12 (dy/dx) + a22 = [a11 (−φx/φy)² − 2a12 (−φx/φy) + a22]|_{y=f(x,C)} = 0.

Thus y = f(x, C) is a solution of (1.5.7); note that, by (1.5.8), the bracketed expression vanishes for all x, y.

Conversely, let φ(x, y) = C be a general integral of Eq. (1.5.7). We shall show that at any point (x, y)

a11 φx² + 2a12 φx φy + a22 φy² = 0.    (1.5.10)

Let (x0, y0) be given. Draw through it an integral curve of Eq. (1.5.7), for which we set φ(x0, y0) = C0, and consider the curve y = f(x, C0); clearly y0 = f(x0, C0). At any point of this curve we have

0 = a11 (dy/dx)² − 2a12 (dy/dx) + a22 = [a11 (−φx/φy)² − 2a12 (−φx/φy) + a22]|_{y=f(x,C0)}.

Putting x = x0 into this equation, we get

a11 φx²(x0, y0) + 2a12 φx φy(x0, y0) + a22 φy²(x0, y0) = 0.

Since x0, y0 are arbitrary, the lemma is proved. □

Eq. (1.5.7) is called the characteristic equation of the differential equation (1.5.1), and its integral curves are called characteristics. Setting ξ = φ(x, y), where φ(x, y) = const is an integral of Eq. (1.5.7), we see that the coefficient ā11 of uξξ vanishes. Analogously, setting η = ψ(x, y), where ψ(x, y) = const is another integral of Eq. (1.5.7), independent of φ(x, y), we make the coefficient ā22 of uηη vanish as well.
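Lemma 1.5.1 can be made concrete on a small example (not from the text): for the coefficients a11 = 1, a12 = 0, a22 = −x², the characteristic equation (1.5.7) reads dy² = x² dx², so dy/dx = ±x and φ(x, y) = y − x²/2 is a general integral of one family. The sketch below checks that z = φ satisfies the quadratic relation (1.5.6) identically:

```python
# coefficients of the (hypothetical) example equation u_xx - x^2 u_yy + ... = 0
def a11(x, y): return 1.0
def a12(x, y): return 0.0
def a22(x, y): return -x * x

# partial derivatives of phi(x, y) = y - x**2 / 2
def phi_x(x, y): return -x
def phi_y(x, y): return 1.0

def char_form(x, y):
    # left-hand side of (1.5.6) evaluated on z = phi
    return (a11(x, y) * phi_x(x, y)**2
            + 2 * a12(x, y) * phi_x(x, y) * phi_y(x, y)
            + a22(x, y) * phi_y(x, y)**2)

residual = max(abs(char_form(x, y))
               for x in (-2.0, 0.3, 1.7) for y in (0.0, 1.0))
print(residual)  # 0 up to rounding
```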



Eq. (1.5.7) is equivalent to the two linear equations

dy/dx = (a12 + √(a12² − a11a22))/a11,    (1.5.11)
dy/dx = (a12 − √(a12² − a11a22))/a11.    (1.5.12)

According to the terminology of the theory of second-order curves, we say that Eq. (1.5.1) at a point M is of

hyperbolic type, when at this point a12² − a11a22 > 0;
elliptic type, when at this point a12² − a11a22 < 0;
parabolic type, when at this point a12² − a11a22 = 0.

Since

ā12² − ā11ā22 = (a12² − a11a22)(ξx ηy − ξy ηx)²,

the type of Eq. (1.5.1) is invariant under our transformation (1.5.2). Note that at different points of the domain the equation may be of different types. In what follows we consider a neighbourhood G where our equation is of only one type. Through any point of G there pass two characteristics, which are real and distinct for an equation of hyperbolic type, complex and distinct for an equation of elliptic type, and real and coincident for an equation of parabolic type. We consider these cases separately.

1. For equations of hyperbolic type a12² − a11a22 > 0, so the right-hand sides of (1.5.11) and (1.5.12) are real and distinct. The general integrals φ(x, y) = C and ψ(x, y) = C of these equations define two families of characteristics. Letting

ξ = φ(x, y), η = ψ(x, y),    (1.5.13)

we bring Eq. (1.5.3), after dividing by the coefficient of uξη, into the form

uξη = F̃(ξ, η, u, uξ, uη) with ā12 ≠ 0.    (1.5.14)

This is called the canonical form of a hyperbolic equation. Usually another canonical form is also used. Let

α = (ξ + η)/2, β = (ξ − η)/2, that is, ξ = α + β, η = α − β,

where α and β are considered as the new variables. Then

uξ = (uα + uβ)/2, uη = (uα − uβ)/2, uξη = (uαα − uββ)/4,

and Eq. (1.5.3) takes the form

uαα − uββ = F1.


2. For equations of parabolic type a12² − a11a22 = 0; therefore the equations (1.5.11) and (1.5.12) coincide, and we have only one general integral of Eq. (1.5.7): φ(x, y) = const. In this case set

ξ = φ(x, y) and η = η(x, y),

where η(x, y) is an arbitrary function independent of φ. For this choice we have

ā11 = a11 ξx² + 2a12 ξx ξy + a22 ξy² = (√a11 ξx + √a22 ξy)² = 0,

since a12 = √a11 √a22. It follows that

ā12 = a11 ξx ηx + a12(ξx ηy + ξy ηx) + a22 ξy ηy = (√a11 ξx + √a22 ξy)(√a11 ηx + √a22 ηy) = 0.

Thus

uηη = Φ(ξ, η, u, uξ, uη) with Φ = −F̄/ā22 (ā22 ≠ 0)

is the canonical form of a parabolic equation. If the right-hand side of this equation does not depend on uξ, then we have an ordinary differential equation (with ξ as a parameter).

3. For the case of elliptic equations a12² − a11a22 < 0, and we see that equation (1.5.11) is the complex conjugate of (1.5.12). If φ(x, y) = C is a complex integral of Eq. (1.5.11), then φ*(x, y) = C is a complex integral of Eq. (1.5.12), where φ* is the complex function conjugate to φ. Let

ξ = φ(x, y), η = φ*(x, y).

In order to work with real functions, let further

α = (φ + φ*)/2, β = (φ − φ*)/(2i).

It is clear that α and β are real, and ξ = α + iβ, η = α − iβ. We note that, since we are working with complex variables, we have to suppose that all the coefficients a11, a12, ... are analytic. In this case

ā11 = a11 ξx² + 2a12 ξx ξy + a22 ξy²
    = (a11 αx² + 2a12 αx αy + a22 αy²) − (a11 βx² + 2a12 βx βy + a22 βy²)
      + 2i(a11 αx βx + a12(αx βy + αy βx) + a22 αy βy);

since ā11 = 0, both the real and the imaginary parts vanish, and therefore, in the variables α, β, we have ā11 = ā22 and ā12 = 0. Now we have

uαα + uββ = Φ(α, β, u, uα, uβ) with Φ = −F̄/ā22 (ā22 ≠ 0).

We summarize the results of this paragraph as follows.



a12² − a11a22 > 0 (hyperbolic type): uxx − uyy = Φ or uxy = Φ;
a12² − a11a22 < 0 (elliptic type): uxx + uyy = Φ;
a12² − a11a22 = 0 (parabolic type): uxx = Φ.

Example 1.5.2 Tricomi's equation

y ∂²u/∂x² + ∂²u/∂y² = 0

belongs to the mixed type: when y < 0 it is of hyperbolic type, when y > 0 it is of elliptic type, and when y = 0 it is of parabolic type. Tricomi's equation is of interest in gas dynamics.

(Figure: the xy plane, divided into an elliptic region y > 0, the parabolic line y = 0, and a hyperbolic region y < 0.)

With y < 0 the equations of the characteristics assume the form

y′ = ±1/√(−y).

Therefore the curves

(3/2)x + √(−y³) = C1, (3/2)x − √(−y³) = C2

are characteristics of Tricomi's equation. The transformation

ξ = (3/2)x + √(−y³), η = (3/2)x − √(−y³)

reduces Tricomi's equation to the canonical form

ũξη − (1/(6(ξ − η)))(ũξ − ũη) = 0, ξ > η.

If y > 0, then φ = (3/2)x − i√(y³), and with the change of variables

α = (3/2)x, β = −√(y³)

Tricomi's equation reduces to

ũαα + ũββ + (1/(3β)) ũβ = 0, β < 0.
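The three-way classification by the sign of the discriminant a12² − a11a22, with Tricomi's equation (a11 = y, a12 = 0, a22 = 1) as a test case, can be sketched as:

```python
def classify(a11, a12, a22):
    """Type of a11 u_xx + 2 a12 u_xy + a22 u_yy + ... = 0 at a point."""
    disc = a12 * a12 - a11 * a22
    if disc > 0:
        return "hyperbolic"
    if disc < 0:
        return "elliptic"
    return "parabolic"

# Tricomi's equation  y u_xx + u_yy = 0:  a11 = y, a12 = 0, a22 = 1
tricomi = {y: classify(y, 0.0, 1.0) for y in (-1.0, 0.0, 1.0)}
print(tricomi)  # {-1.0: 'hyperbolic', 0.0: 'parabolic', 1.0: 'elliptic'}
```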


1.5.2 Differential Equations with Several Independent Variables


We consider now the linear differential equation with real coefficients

∑_{i=1}^n ∑_{j=1}^n aij uxixj + ∑_{i=1}^n bi uxi + cu + f = 0, (aij = aji),    (1.5.15)

where a, b, c and f are functions of the variables x1, x2, ..., xn. Making the change of variables

ξk = ξk(x1, x2, ..., xn) (k = 1, ..., n),

we have

uxi = ∑_{k=1}^n uξk αik,
uxixj = ∑_{k=1}^n ∑_{l=1}^n uξkξl αik αjl + ∑_{k=1}^n uξk (ξk)xixj,

where αik := ∂ξk/∂xi. Putting these expressions in (1.5.15), we get

∑_{k=1}^n ∑_{l=1}^n ākl uξkξl + ∑_{k=1}^n b̄k uξk + cu + f = 0,

where

ākl = ∑_{i=1}^n ∑_{j=1}^n aij αik αjl,
b̄k = ∑_{i=1}^n bi αik + ∑_{i=1}^n ∑_{j=1}^n aij (ξk)xixj.

We consider now the quadratic form

∑_{i=1}^n ∑_{j=1}^n a⁰ij yi yj,    (1.5.16)

where a⁰ij := aij(x⁰1, ..., x⁰n). With the nonsingular linear transformation

yi = ∑_{k=1}^n αik ηk, det(αik) ≠ 0,

we get the following new quadratic form

∑_{k=1}^n ∑_{l=1}^n ā⁰kl ηk ηl, with ā⁰kl = ∑_{i=1}^n ∑_{j=1}^n a⁰ij αik αjl.


Thus, the coefficients of the principal part of Eq. (1.5.15) are transformed like the coefficients of a quadratic form under a nonsingular linear transformation. It is well known that there exists a nonsingular linear transformation which brings the matrix (a⁰ij) of a quadratic form into a diagonal matrix, in which

|a⁰ii| = 1 or 0, and a⁰ij = 0 (i ≠ j; i, j = 1, ..., n).

Moreover, by Sylvester's theorem on the inertia of quadratic forms, the number of coefficients a⁰ii ≠ 0 equals the rank of the matrix (a⁰ij), and the number of negative coefficients is invariant. We say that the form is then in canonical form.

We say that Eq. (1.5.15) at the point x⁰ is of elliptic type if all n coefficients a⁰ii ≠ 0 and they have the same sign; of hyperbolic type if all a⁰ii ≠ 0 and n − 1 coefficients a⁰ii have the same sign while only one has the opposite sign; of ultrahyperbolic type when m coefficients a⁰ii have one sign and the n − m others have the opposite sign (m > 1, n − m > 1); and of parabolic type if at least one of the coefficients a⁰ii vanishes.

We take new independent variables ξi such that at the point x⁰

αik = ∂ξk/∂xi = α⁰ik,

where the α⁰ik are the coefficients of the transformation which brings the quadratic form (1.5.16) into canonical form. At x⁰ the equation then takes one of the canonical forms

uξ1ξ1 + uξ2ξ2 + ... + uξnξn + ... = 0 (elliptic type);
uξ1ξ1 − ∑_{i=2}^n uξiξi + ... = 0 (hyperbolic type);
∑_{i=1}^m uξiξi − ∑_{i=m+1}^n uξiξi + ... = 0 (ultrahyperbolic type);
∑_{i=1}^{n−m} (±uξiξi) + ... = 0 (m > 0) (parabolic type).

Is it possible to use a single transformation to reduce (1.5.15) to the canonical form in a sufficiently small neighborhood of each point? To make the reduction for any equation, it is necessary that the number of conditions

āik = 0, i ≠ k, i, k = 1, 2, ..., n;
ākk = εk ā11, εk = 0, ±1, k = 2, 3, ..., n, ā11 ≠ 0,

should not be greater than the number n of unknown functions ξl, l = 1, ..., n:

n(n − 1)/2 + n − 1 ≤ n, that is, n ≤ 2.

For n = 2 we have already proved that it is possible to use a single transformation to reduce (1.5.15) to the canonical form in a sufficiently small neighborhood of each point.
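The classification by inertia can be sketched in a few lines of Python. The helper below diagonalizes a symmetric matrix by congruence (matching symmetric row and column elimination) — a simplified version that skips zero pivots and so is only valid for inputs like those shown — and counts the signs of the resulting diagonal, which by Sylvester's theorem are independent of the particular nonsingular transformation used:

```python
def inertia(A):
    """(n_pos, n_neg, n_zero) of a symmetric matrix via congruence elimination.
    Simplified: assumes no pivoting is ever needed (true for the inputs below)."""
    n = len(A)
    A = [row[:] for row in A]           # work on a copy
    for k in range(n):
        if A[k][k] == 0:
            continue                    # zero pivot: leave row/column as is
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(n):          # row operation ...
                A[i][j] -= m * A[k][j]
            for j in range(n):          # ... and the matching column operation
                A[j][i] -= m * A[j][k]
    d = [A[i][i] for i in range(n)]
    return (sum(x > 1e-12 for x in d), sum(x < -1e-12 for x in d),
            sum(abs(x) <= 1e-12 for x in d))

def classify(A):
    p, q, z = inertia(A)
    if z > 0:
        return "parabolic"
    if p == len(A) or q == len(A):
        return "elliptic"
    if min(p, q) == 1:
        return "hyperbolic"
    return "ultrahyperbolic"

laplace = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # u_xx + u_yy + u_zz
wave = [[1, 0, 0], [0, -1, 0], [0, 0, -1]]    # u_tt - u_xx - u_yy
heat = [[1, 0, 0], [0, 1, 0], [0, 0, 0]]      # principal part of the heat equation
mixed = [[2, 1], [1, 2]]                      # non-diagonal, positive definite
print(classify(laplace), classify(wave), classify(heat), classify(mixed))
```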


Chapter 2 Characteristic Manifolds and the Cauchy problem


2.1 Notation
Let x = (x1, ..., xn) ∈ IRⁿ. A vector α = (α1, ..., αn) whose components are non-negative integers αk is called a multi-index. The letters x, y, ..., ξ, η will be used for vectors, and α, β, γ, ... for multi-indices. Components are always indicated by adding a subscript to the vector symbol: α has components αk. We set

0 = (0, 0, ..., 0), 1 = (1, 1, ..., 1).    (2.1.1)

For a multi-index α set

|α| = α1 + α2 + ... + αn, α! = α1! α2! ... αn!,    (2.1.2)

and for x ∈ IRⁿ and a multi-index α

x^α := x1^α1 x2^α2 ... xn^αn.    (2.1.3)

By Cα we generally denote a coefficient depending on n nonnegative integers α1, α2, ..., αn:

Cα = Cα1α2...αn.    (2.1.4)

The Cα may be real numbers, or vectors in a space IRᵐ. The general m-th degree polynomial in x1, ..., xn is then of the form

P(x) = ∑_{|α|≤m} Cα x^α, x ∈ IRⁿ,    (2.1.5)

with Cα ∈ IR. Setting Dk = ∂/∂xk, we introduce the "gradient vector" D = (D1, ..., Dn), and define the gradient of a function u(x1, ..., xn) as the vector

Du = (D1u, ..., Dnu).    (2.1.6)

The general partial differential operator of order m is then denoted by

D^α = D1^α1 D2^α2 ... Dn^αn = ∂^m/(∂x1^α1 ∂x2^α2 ... ∂xn^αn),    (2.1.7)

where |α| = m.
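The multi-index operations (2.1.2)-(2.1.3) are straightforward to compute; a small sketch:

```python
from math import factorial, prod

def abs_alpha(alpha):
    # |alpha| = alpha_1 + ... + alpha_n  (the order of the multi-index)
    return sum(alpha)

def fact_alpha(alpha):
    # alpha! = alpha_1! * alpha_2! * ... * alpha_n!
    return prod(factorial(a) for a in alpha)

def monomial(x, alpha):
    # x^alpha = x_1^{alpha_1} * ... * x_n^{alpha_n}
    return prod(xi**ai for xi, ai in zip(x, alpha))

alpha = (1, 2, 0)
x = (2.0, 3.0, 5.0)
print(abs_alpha(alpha), fact_alpha(alpha), monomial(x, alpha))  # 3 2 18.0
```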


2.2 The Cauchy problem


Consider the mth-order linear differential equation for a function u(x) = u(x1, ..., xn)

Lu := ∑_{|α|≤m} Aα(x) D^α u = B(x).    (2.2.1)

The same formula describes the general mth-order system of N differential equations in N unknowns if we interpret u and B as column vectors with N components and the Aα as N × N square matrices. Similarly, the general mth-order quasi-linear equation (respectively, system of such equations) is

Lu := ∑_{|α|=m} Aα D^α u + C = 0,    (2.2.2)

where now the Aα and C are functions of the independent variables xk and of the derivatives D^β u of the unknown u of orders |β| ≤ m − 1. More general nonlinear equations or systems

F(x, D^α u) = 0    (2.2.3)

can be reduced formally to quasi-linear ones by applying a first-order differential operator to (2.2.3). On the other hand, an mth-order quasi-linear system (2.2.2) can be reduced to a (large) first-order one, by introducing all derivatives D^α u with |α| ≤ m − 1 as new variables, and making use of suitable compatibility conditions for the D^α u.

The Cauchy problem consists of finding a solution u of (2.2.2) or (2.2.1) having prescribed Cauchy data on a hyper-surface S ⊂ IRⁿ given by

φ(x1, x2, ..., xn) = 0.    (2.2.4)

Here φ shall have m continuous derivatives, and the surface should be regular in the sense that

Dφ = (φx1, ..., φxn) ≠ 0.    (2.2.5)

The Cauchy data on S for an mth-order equation consist of the derivatives of u of order less than or equal to m − 1. They cannot be given arbitrarily but have to satisfy the compatibility conditions valid on S (instead, the normal derivatives of order less than m can be given independently of each other). We are to find a solution u near S which has these Cauchy data on S. We call S noncharacteristic if we can get all D^α u for |α| = m on S from the linear algebraic system of equations consisting of the compatibility conditions for the data and the eq. (2.2.2) taken on S. We call S characteristic if at each point x of S the surface S is not noncharacteristic.

To get an algebraic criterion for characteristic surfaces, we first consider the special case where the hyper-surface S is the coordinate plane xn = 0. The Cauchy data then consist of the D^α u with |α| < m taken for xn = 0. Singling out the "normal" derivatives on S of orders ≤ m − 1,

∂^k u/∂xn^k = Dn^k u = ψk(x1, ..., xn−1) for k = 0, ..., m − 1 and xn = 0,    (2.2.6)

we have on S

D^α u = D1^α1 D2^α2 ... Dn−1^αn−1 ψαn    (2.2.7)

2.2. THE CAUCHY PROBLEM

19

provided that n < m. In particular for j j m ? 1 we have here the compatibility conditions expressing all Cauchy data in terms of normal derivatives on S . Let denote the multi{index = (0; : : : ; 0; m) : (2:2:8) In the eq (2.2.1) or (2.2.2) taken on S it is only the term with = that is not expressible by (2.2.7) in terms of 0; : : :; m?1 and hence in terms of the Cauchy data. All others contain derivatives D u with n m ? 1. Thus D u, and hence all D u with j j m, are determined uniquely on S , if we can solve the di erential equation for the term D u. This is always possible in a unique way if and only if the matrix A is nondegenerated, i. e., det(A ) 6= 0. For a single scalar di erential equation this condition reduces to A 6= 0. In the linear case the condition (2:2:9) det(A ) 6= 0 does not depend on the Cauchy data on S ; in the quasi{linear case, where the A depend on the D with j j m ? 1 and on x, one has to know the k in order to decide if S is characteristic. Since the condition (2.2.9) involves coe cients of mth{order derivatives, we de ne the principal part Lpr of L (both in (2.2.2) and (2.2.1)) as X Lpr := AD : (2:2:10)
j j=m

The \symbol" of this di erential operator is the matrix form (\characteristic matrix" of L): X ( )= A : (2:2:11)
j j=m

The N N matrix ( ) has elements that are mth{degree formsmin the components of the @ m vector = ( 1; : : :; n ). In particular, the multiplier of Dn = @xm in Lpr is A = ( ), n where = (0; 0; : : : ; 0; 1) = D (2:2:12) is the unit normal to the surface = xn = 0. The condition for the plane = xn = 0 to be noncharacteristic is then Q(D ) 6= 0 ; (2:2:13) where Q = Q( ) is the characteristic form de ned by Q( ) := det( ( )) (2:2:14) for any vector . (In the case of a scalar equation (N = 1) the characteristic form Q( ) coincides with the polynomial ( )). We shall see that (2.2.13) is the condition for a surface = 0 to be noncharacteristic. Consider a general S given by (2.2.4), by assumption (2.2.5) we can always suppose that in a neighborhood of a given point of S , the condition xn 6= 0 holds. Thus, the transformation ( 1 yi = xi(x ; : : :; x ) for i = n; : : : ; n ? 1 (2:2:15) 1 n for i = is then locally regular and invertible. By the chain rule, @u = X C @u ; (2:2:16) @xi k ik @yk

20 CHAPTER 2. CHARACTERISTIC MANIFOLDS AND THE CAUCHY PROBLEM

    C_{ik} = \frac{\partial y_k}{\partial x_i}   (2.2.17)

are functions of $x$ or of $y$. Denoting by $C$ the matrix of the $C_{ik}$ and introducing the gradient operator $d$ with respect to $y$ with components

    d_i = \frac{\partial}{\partial y_i},   (2.2.18)

we can write (2.2.16) symbolically as

    D = C d,   (2.2.19)

taking $D$ and $d$ to be column vectors. Generally, for $|\alpha| = m$,

    D^\alpha = (C d)^\alpha + R_\alpha,   (2.2.20)

where $R_\alpha$ is a linear differential operator involving only derivatives of order $\le m-1$. Then the principal part of $L$ in (2.2.1) or (2.2.2), transformed to $y$-coordinates, is given by

    L_{pr} = \sum_{|\alpha| = m} A_\alpha (C d)^\alpha =: \ell_{pr},   (2.2.21)

and its symbol, the characteristic matrix of $\ell$, by

    \lambda(\eta) = \sum_{|\alpha| = m} A_\alpha (C \eta)^\alpha.   (2.2.22)

Since the mapping (2.2.15) is regular, $S$ is non-characteristic for $L$ if the plane $y_n = 0$ is non-characteristic with respect to the operator $L$ transformed to $y$-coordinates, i.e. if

    \det(\lambda(\eta)) = \det\Big( \sum_{|\alpha| = m} A_\alpha (C\eta)^\alpha \Big) \ne 0   (2.2.23)

for $\eta = (0, 0, \dots, 0, 1)$. But then

    C\eta = \Big( \frac{\partial y_n}{\partial x_1}, \dots, \frac{\partial y_n}{\partial x_n} \Big) = (\varphi_{x_1}, \dots, \varphi_{x_n}) = D\varphi.

Thus, the condition for $S$ to be non-characteristic can again be written as (2.2.13). If $u$ in (2.2.2) stands for a vector with $N$ components, the condition for $S$ to be a characteristic surface is

    Q(D\varphi) = \det\Big( \sum_{|\alpha| = m} A_\alpha (D\varphi)^\alpha \Big) = 0.   (2.2.24)

Example 2.2.1 For the wave equation

    u_{tt} = c^2 (u_{x_1 x_1} + u_{x_2 x_2})   (2.2.25)

for $u = u(x_1, x_2, t)$, a characteristic surface $t = \varphi(x_1, x_2)$ satisfies the equation

    1 = c^2 (\varphi_{x_1}^2 + \varphi_{x_2}^2).   (2.2.26)
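The characteristic relation (2.2.26) can be checked numerically for a concrete surface. The sketch below is an addition to the notes, not part of the original text: it uses the forward light cone $t = \varphi(x_1, x_2) = \sqrt{x_1^2 + x_2^2}/c$, which satisfies $c^2(\varphi_{x_1}^2 + \varphi_{x_2}^2) = 1$ away from the origin. The function names and the value $c = 2$ are arbitrary choices made for illustration.

```python
import math

# Characteristic relation 1 = c^2 (phi_x1^2 + phi_x2^2), checked for the
# light cone t = phi(x1, x2) = sqrt(x1^2 + x2^2) / c  (c = 2 is arbitrary).
C = 2.0

def phi(x1, x2):
    return math.hypot(x1, x2) / C

def grad_phi(x1, x2, h=1e-6):
    # central finite differences for (phi_x1, phi_x2)
    d1 = (phi(x1 + h, x2) - phi(x1 - h, x2)) / (2 * h)
    d2 = (phi(x1, x2 + h) - phi(x1, x2 - h)) / (2 * h)
    return d1, d2

for x1, x2 in [(1.0, 0.5), (-0.3, 2.0), (3.0, -1.0)]:
    p1, p2 = grad_phi(x1, x2)
    assert abs(C * C * (p1 * p1 + p2 * p2) - 1.0) < 1e-6
```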


2.3 Real Analytic Functions and the Cauchy-Kowalevski Theorem

2.3.1 Real Analytic Functions

2.3.1.1 Multiple infinite series

We say that the multiple infinite series

    \sum_\alpha c_\alpha, \quad \alpha \in \mathbb{Z}^n,   (2.3.1)

converges if

    \sum_\alpha |c_\alpha| < \infty.   (2.3.2)

The sum of a convergent series does not depend on the order of summation.

Example 2.3.1 For $x \in \mathbb{R}^n$, $\alpha \in \mathbb{Z}^n$,

    \sum_\alpha x^\alpha = \prod_{i=1}^n \Big( \sum_{\alpha_i = 0}^\infty x_i^{\alpha_i} \Big) = \frac{1}{(1 - x_1)(1 - x_2) \cdots (1 - x_n)},   (2.3.3)

provided $|x_i| < 1$ for all $i$.

Example 2.3.2 For $x \in \mathbb{R}^n$, $\alpha \in \mathbb{Z}^n$,

    \sum_\alpha \frac{|\alpha|!}{\alpha!} x^\alpha = \sum_{j=0}^\infty \sum_{|\alpha| = j} \frac{|\alpha|!}{\alpha!} x^\alpha = \sum_{j=0}^\infty (x_1 + \cdots + x_n)^j = \frac{1}{1 - (x_1 + \cdots + x_n)},

provided that $|x_1| + \cdots + |x_n| < 1$.
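Example 2.3.1 lends itself to a quick numerical check; the following sketch (an addition, not from the original notes) compares a truncated multiple geometric series with the closed-form product. The point $x$ and the truncation order $K = 60$ per index are arbitrary choices.

```python
from itertools import product

# Truncated multiple geometric series sum_alpha x^alpha versus the closed
# form 1/((1-x1)...(1-xn)) from Example 2.3.1; requires |x_i| < 1.
x = (0.3, -0.2, 0.5)
K = 60                          # truncation of each index alpha_i

s = 0.0
for alpha in product(range(K), repeat=len(x)):
    term = 1.0
    for xi, ai in zip(x, alpha):
        term *= xi ** ai
    s += term

closed = 1.0
for xi in x:
    closed *= 1.0 / (1.0 - xi)

assert abs(s - closed) < 1e-10
```

Because $|x_i| \le 0.5$, the neglected tail is of order $0.5^{60}$ and the truncated sum agrees with the closed form to machine precision.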

Let $c_\alpha(x)$ be continuous scalar functions defined on a set $S \subset \mathbb{R}^n$. If

    |c_\alpha(x)| \le c_\alpha \quad \forall \alpha \in \mathbb{Z}^n, \; x \in S,

and $\sum_\alpha c_\alpha < \infty$, then $\sum_\alpha c_\alpha(x)$ converges uniformly for $x \in S$ and represents a continuous function. If $S$ is open and $c_\alpha \in C^j(S)$ for all $\alpha$, and if the series

    \sum_\alpha D^\beta c_\alpha(x)

converges uniformly for $x \in S$ and each $\beta$ with $|\beta| \le j$, then $\sum_\alpha c_\alpha(x) \in C^j(S)$ and

    D^\beta \sum_\alpha c_\alpha(x) = \sum_\alpha D^\beta c_\alpha(x), \quad x \in S, \; |\beta| \le j.

Of particular importance are the power series

    \sum_\alpha c_\alpha x^\alpha, \quad x \in \mathbb{R}^n, \; \alpha \in \mathbb{Z}^n, \; c_\alpha \in \mathbb{R}.   (2.3.4)

Assume that the series converges for a certain $z$:

    \sum_\alpha |c_\alpha| \, |z^\alpha| < \infty.   (2.3.5)

Then (2.3.4) converges uniformly for all $x$ with

    |x_i| \le |z_i| \quad \text{for all } i.   (2.3.6)

Hence,

    f(x) = \sum_\alpha c_\alpha x^\alpha   (2.3.7)

defines a continuous function $f$ on the set (2.3.6).

Problem: Show that all series obtained by formal differentiation of (2.3.7) converge in the interior of (2.3.6), and even uniformly on every compact subset of the interior.

2.3.1.2 Real analytic functions

Definition 2.3.3 Let $f(x)$ be a function whose domain is an open set $\Omega \subset \mathbb{R}^n$ and whose range lies in $\mathbb{R}$. For $y \in \Omega$ we call $f$ real analytic at $y$ if there exist $c_\alpha \in \mathbb{R}$ and a neighborhood $U$ of $y$ (all depending on $y$) such that

    f(x) = \sum_\alpha c_\alpha (x - y)^\alpha \quad \forall x \in U.   (2.3.8)

We say $f$ is real analytic in $\Omega$ ($f \in C^\omega(\Omega)$) if $f$ is real analytic at each $y \in \Omega$. A vector $f(x) = (f_1(x), \dots, f_m(x))$ defined in $\Omega$ is called real analytic if each of its components is real analytic.

Theorem 2.3.4 If $f = (f_1, \dots, f_m) \in C^\omega(\Omega)$, then $f \in C^\infty(\Omega)$. Moreover, for any $y \in \Omega$ there exist a neighborhood $U$ of $y$ and positive numbers $M, r$ such that

    f(x) = \sum_\alpha \frac{1}{\alpha!} (D^\alpha f(y)) (x - y)^\alpha   (2.3.9)

for all $x \in U$, and

    |D^\alpha f_k(x)| \le M \, |\alpha|! \, r^{-|\alpha|} \quad \forall \alpha \in \mathbb{Z}^n, \; \forall k, \; \forall x \in U.

Theorem 2.3.5 Let $f \in C^\omega(\Omega)$, where $\Omega$ is a connected open set in $\mathbb{R}^n$, and let $z \in \Omega$. Then $f$ is uniquely determined in $\Omega$ by the values $D^\alpha f(z)$, $\alpha \in \mathbb{Z}^n$. In particular, $f$ is uniquely determined in $\Omega$ by its values in any non-empty open subset of $\Omega$.

Proof: Let $f_1, f_2 \in C^\omega(\Omega)$ and let $D^\alpha f_1(z) = D^\alpha f_2(z)$ for all $\alpha \in \mathbb{Z}^n$. Write $g = f_1 - f_2$ and decompose $\Omega$ into

    \Omega_1 = \{ x \in \Omega \mid D^\alpha g(x) = 0 \; \forall \alpha \in \mathbb{Z}^n \},
    \Omega_2 = \{ x \in \Omega \mid D^\alpha g(x) \ne 0 \text{ for some } \alpha \in \mathbb{Z}^n \}.

Then $\Omega_2$ is open by the continuity of the $D^\alpha g$. The set $\Omega_1$ is also open: if $y \in \Omega_1$, we have $g(x) = 0$ in a neighborhood of $y$ by (2.3.9), and hence all $D^\alpha g$ vanish there. Since $z \in \Omega_1$ and $\Omega$ is connected, $\Omega_2$ must be empty.
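The Taylor representation (2.3.9) can be illustrated numerically. The sketch below is an addition to the notes: it uses the entire function $f(x_1, x_2) = e^{x_1 + 2x_2}$, for which $D^\alpha f(y) = 1^{\alpha_1} 2^{\alpha_2} f(y)$ is known in closed form; the points $x, y$ and the truncation order $K = 25$ are arbitrary choices.

```python
import math
from itertools import product

# Taylor expansion (2.3.9) for f(x1, x2) = exp(x1 + 2*x2),
# where D^alpha f(y) = 1**a1 * 2**a2 * f(y).
def f(x1, x2):
    return math.exp(x1 + 2.0 * x2)

y = (0.1, -0.2)
x = (0.4, 0.1)
K = 25                              # truncation of each index

approx = 0.0
for a1, a2 in product(range(K), repeat=2):
    deriv = (2.0 ** a2) * f(*y)     # D^alpha f(y); the factor 1**a1 is 1
    coeff = deriv / (math.factorial(a1) * math.factorial(a2))
    approx += coeff * (x[0] - y[0]) ** a1 * (x[1] - y[1]) ** a2

assert abs(approx - f(*x)) < 1e-9
```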



2.3.1.3 Analytic and real analytic functions

Let $x \in \mathbb{C}^n$, $\alpha \in \mathbb{Z}^n$, $\Omega \subset \mathbb{C}^n$, $f : \Omega \to \mathbb{C}^m$. We call $f$ analytic in $\Omega$ ($f \in C^a(\Omega)$) if for each $y \in \Omega$ we can represent $f$ in the form (2.3.8) in a neighborhood of $y$; here $c_\alpha \in \mathbb{C}^m$. It can easily be seen that

    c_\alpha = \frac{1}{\alpha!} D^\alpha f(y).   (2.3.10)

Theorem 2.3.6 A function $f(x)$ with range in $\mathbb{C}^m$ and domain an open set $\Omega \subset \mathbb{C}^n$ is analytic in $\Omega$ if and only if $f$ is differentiable with respect to the independent variables.

Theorem 2.3.7 If $f \in C^\omega(\Omega)$ with $\Omega \subset \mathbb{R}^n$, then for every compact subset $S$ of $\Omega$ there exist a neighborhood $\tilde{\Omega}$ of $S$ in $\mathbb{C}^n$ and a function $F \in C^a(\tilde{\Omega})$ such that $F(x) = f(x)$ for all $x \in S$.

2.3.2 The Cauchy-Kowalevski theorem

The theorem concerns the existence of a real analytic solution of the Cauchy problem in the case of real analytic data and equations. We restrict ourselves to quasi-linear systems of the type (2.2.2), since more general nonlinear systems can be reduced to quasi-linear ones by differentiation. We assume that $S$ is real analytic in a neighborhood of one of its points $x^0$, that is, near $x^0$ the surface $S$ is given by an equation $\varphi(x) = 0$, where $\varphi$ is real analytic at $x^0$ and $D\varphi \ne 0$ at $x^0$, say $\varphi_{x_n} \ne 0$. On $S$ we prescribe compatible Cauchy data $D^\alpha u$ for $|\alpha| < m$, which shall be real analytic at $x^0$. The coefficients $A_\alpha$, $C$ shall be real analytic functions of their arguments $x$, $D^\beta u$ at the point $x^0$ and the corresponding data values at $x^0$. We assume, moreover, $S$ to be non-characteristic at $x^0$ (and hence in a neighborhood of $x^0$) in the sense that $Q(D\varphi) \ne 0$. Then the Cauchy-Kowalevski theorem asserts that there exists a unique solution of (2.2.2) which is real analytic at $x^0$.

First of all, we can transform $x^0$ into the origin and $S$ locally, by an analytic transformation, into a neighborhood of the origin in the plane $x_n = 0$. Then, by introducing the derivatives of orders less than $m$ as new dependent variables, one reduces the system to one of first order. We make use here of the fact that the set of real analytic functions is closed under differentiation and composition. One arrives at a first-order system in which the coefficient matrix of the term with $\partial u / \partial x_n$ is non-degenerate because $S$ is non-characteristic. Hence we can solve for $\partial u / \partial x_n$, obtaining a system in the standard form

    \frac{\partial U}{\partial x_n} = \sum_{i=1}^{n-1} a^i(x, U) \frac{\partial U}{\partial x_i} + b(x, U),   (2.3.11)

where each $a^i(x, U)$ is a square matrix $(a^i_{jk})$, $b(x, U)$ is a column vector with components $b_j$, and $U$ is the new vector consisting of $u$ and its derivatives of order less than $m$. On $x_n = 0$, near $0$, we have prescribed initial values $U = f(x_1, \dots, x_{n-1})$. Here we can assume that $f = 0$, introducing $U - f$ as the new unknown. We can add $x_n$ as an additional dependent variable $U^0$, a component of $\tilde{U} = (U, U^0)$, satisfying the equation $\partial U^0 / \partial x_n = 1$ and the initial condition $U^0 = 0$. This has the effect that the $a^i$ and $b$ in (2.3.11) no longer depend explicitly on $x_n$: we can write

    a^i(x, U) = a^i(x_1, \dots, x_{n-1}, x_n, U) = \tilde{a}^i(x_1, \dots, x_{n-1}, \tilde{U}),
    b(x, U) = b(x_1, \dots, x_{n-1}, x_n, U) = \tilde{b}(x_1, \dots, x_{n-1}, \tilde{U}).

Writing (2.3.11) componentwise, we have to prove the following version of the Cauchy-Kowalevski theorem:

Theorem 2.3.8 (Cauchy-Kowalevski) Let the $a^i_{jk}$ and $b_j$ be real analytic functions of $z = (x_1, \dots, x_{n-1}, u_1, \dots, u_N)$ at the origin of $\mathbb{R}^{N+n-1}$. Then the system of differential equations

    \frac{\partial u_j}{\partial x_n} = \sum_{i=1}^{n-1} \sum_{k=1}^{N} a^i_{jk}(z) \frac{\partial u_k}{\partial x_i} + b_j(z), \quad j = 1, \dots, N,   (2.3.12)

with initial conditions

    u_j = 0 \quad \text{for } x_n = 0, \; j = 1, \dots, N,   (2.3.13)

has a unique (among the real analytic $u_j$) system of solutions $u_j(x_1, \dots, x_n)$ that is real analytic at the origin.

Proof: See, e.g., John's book [10] or Petrovskii's book [17].

Remark: The theorem of Cauchy and Kowalevski is local in character and applies only to analytic solutions of analytic Cauchy problems. It does not guarantee the global existence of solutions; it excludes neither the possibility that other, non-analytic solutions exist for the same Cauchy problem, nor the possibility that an analytic solution becomes non-analytic at some distance from the initial surface.
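The constructive idea behind the theorem, determining all Taylor coefficients of the solution at the origin from the data and the equation, can be sketched in the simplest possible setting. The example below is an addition to the notes: it treats the single equation $u_t = u_x$ (so $N = 1$, $n = 2$, $a \equiv 1$, $b \equiv 0$) with the analytic datum $u(x, 0) = 1/(1-x)$, an arbitrary illustrative choice; the exact solution is $u(x, t) = 1/(1 - (x + t))$.

```python
import math

# Cauchy-Kowalevski recursion for u_t = u_x, u(x, 0) = 1/(1-x).
# c[j][k] is the Taylor coefficient of x^j t^k at the origin.
K = 40
c = [[0.0] * (K + 1) for _ in range(K + 1)]

for j in range(K + 1):
    c[j][0] = 1.0                   # 1/(1-x) = sum_j x^j: the initial data

# Matching coefficients in u_t = u_x gives (k+1) c[j][k+1] = (j+1) c[j+1][k].
for k in range(K):
    for j in range(K - k):
        c[j][k + 1] = (j + 1) * c[j + 1][k] / (k + 1)

def u_series(x, t):
    return sum(c[j][k] * x ** j * t ** k
               for j in range(K + 1) for k in range(K + 1))

x, t = 0.1, 0.2
exact = 1.0 / (1.0 - (x + t))       # the solution 1/(1-(x+t))
assert abs(u_series(x, t) - exact) < 1e-8
```

The recursion reproduces the binomial coefficients $c_{jk} = \binom{j+k}{j}$ of $1/(1-x-t)$, so the truncated series matches the exact solution inside the domain of convergence $|x| + |t| < 1$.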

2.4 The Uniqueness Theorem of Holmgren

2.4.1 The Lagrange-Green Identity

Let $\Omega$ be a domain in $\mathbb{R}^n$ with a sufficiently regular boundary $\partial\Omega$. Denote by $d/dn$ the differentiation in the direction of the exterior unit normal $\xi = (\xi_1, \dots, \xi_n)$ of $\partial\Omega$. The Gauss-Ostrogradskii formula says that

    \int_\Omega D_k u(x) \, dx = \int_{\partial\Omega} u(\xi) \frac{dx_k}{dn} \, dS = \int_{\partial\Omega} u \, \xi_k \, dS.   (2.4.1)

If $\partial\Omega$ is sufficiently regular then this formula is applicable to all $u \in C^1(\bar\Omega)$. The theorem can be generalized to $u \in C^1(\Omega) \cap C^0(\bar\Omega)$ by approximating $\Omega$ from the interior. More generally, we have the formula for integration by parts,

    \int_\Omega v^T D_k u \, dx = \int_{\partial\Omega} v^T u \, \xi_k \, dS - \int_\Omega (D_k v)^T u \, dx,   (2.4.2)

where $u, v$ are column vectors belonging to $C^1(\bar\Omega)$, with $T$ denoting transposition.

Let now $L$ be a linear differential operator

    Lu = \sum_{|\alpha| \le m} a_\alpha(x) D^\alpha u.   (2.4.3)

Let $u, v$ be column vectors and the $a_\alpha$ square matrices in $C^m(\bar\Omega)$. Then by repeated application of (2.4.2) it follows that

    \int_\Omega v^T \sum_{|\alpha| \le m} a_\alpha(x) D^\alpha u \, dx
      = \int_\Omega \sum_{|\alpha| \le m} \big( (-1)^{|\alpha|} D^\alpha (a_\alpha(x)^T v) \big)^T u \, dx + \int_{\partial\Omega} M(v, u, \xi) \, dS.   (2.4.4)

Here $M$ in the surface integral is linear in the $\xi_k$ with coefficients which are bilinear in the derivatives of $v$ and $u$, the total number of differentiations in each term being at most $m - 1$. The expression $M$ is not uniquely determined but depends on the order in which the integrations by parts are performed. This is the Lagrange-Green identity for $L$, which we also write in the form

    \int_\Omega v^T Lu \, dx = \int_\Omega (L^* v)^T u \, dx + \int_{\partial\Omega} M(u, v, \xi) \, dS,   (2.4.5)

where $L^*$ is the (formally) adjoint operator to $L$, defined by

    L^* v := \sum_{|\alpha| \le m} (-1)^{|\alpha|} D^\alpha \big( a_\alpha(x)^T v \big).   (2.4.6)

For the Laplace operator $L = \Delta$ and scalar functions $u$ and $v$ we have

    \int_\Omega v \Delta u \, dx = \int_{\partial\Omega} \sum_i v \, u_{x_i} \xi_i \, dS - \int_\Omega \sum_i u_{x_i} v_{x_i} \, dx
                                 = \int_{\partial\Omega} v \frac{du}{dn} \, dS - \int_\Omega \sum_i u_{x_i} v_{x_i} \, dx.

Integrating once more by parts we obtain

    \int_\Omega v \Delta u \, dx = \int_\Omega u \Delta v \, dx + \int_{\partial\Omega} \Big( v \frac{du}{dn} - u \frac{dv}{dn} \Big) dS.   (2.4.7)
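The one-dimensional analogue of Green's identity (2.4.7), $\int_0^1 (v u'' - u v'') \, dx = [v u' - u v']_0^1$, is easy to verify numerically. The sketch below is an addition to the notes; the pair $u = \sin x$, $v = x^2$ and the Simpson quadrature are arbitrary choices.

```python
import math

# 1-D analogue of Green's identity (2.4.7) on [0, 1]:
#   int_0^1 (v u'' - u v'') dx = [v u' - u v']_0^1,
# checked with Simpson's rule for u = sin x, v = x^2.
def simpson(f, a, b, n=2000):        # n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3.0

u   = math.sin
du  = math.cos
d2u = lambda x: -math.sin(x)
v   = lambda x: x * x
dv  = lambda x: 2.0 * x
d2v = lambda x: 2.0

lhs = simpson(lambda x: v(x) * d2u(x) - u(x) * d2v(x), 0.0, 1.0)
rhs = (v(1.0) * du(1.0) - u(1.0) * dv(1.0)) - (v(0.0) * du(0.0) - u(0.0) * dv(0.0))
assert abs(lhs - rhs) < 1e-8
```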
2.4.2 The Uniqueness theorem of Holmgren

The Cauchy-Kowalevski theorem does not exclude the possibility that non-analytic solutions of the same Cauchy problem exist; furthermore, it works only for analytic Cauchy data. However, uniqueness can be proved for the Cauchy problem for a linear equation with analytic coefficients and for data (not necessarily analytic) prescribed on an analytic non-characteristic surface $S$. The proof (due to Holmgren) uses the Cauchy-Kowalevski theorem and the Lagrange-Green identity.

Let $u$ be a solution of a first-order system

    Lu = \sum_{k=1}^n a^k(x) \frac{\partial u}{\partial x_k} + b(x) u = 0   (2.4.8)

in a "lens-shaped" region $R$ bounded by two hypersurfaces $S$ and $Z$.

[Figure: lens-shaped region $R$ bounded above by $S$ and below by $Z$, with $u = 0$ on $Z$.]

Here $x \in \mathbb{R}^n$, $u \in \mathbb{R}^N$, and the $a^k$, $b$ are $N \times N$ matrices. Assume that $u$ has Cauchy data $u = 0$ on $Z$ and that $S$ is non-characteristic; that is, the matrix

    A = \sum_{k=1}^n a^k(x) \xi_k   (2.4.9)

is non-degenerate for $x \in S$, where $\xi$ is the unit normal of $S$ at $x$. Let $v$ be a solution of the adjoint equation

    L^* v = - \sum_{k=1}^n \frac{\partial}{\partial x_k} \big( (a^k)^T v \big) + b^T v = 0   (2.4.10)

($T$ for transposition) with Cauchy data

    v = w(x) \quad \text{for } x \in S.   (2.4.11)

Applying the Lagrange-Green identity we find that

    \int_S w^T A u \, dS = 0.   (2.4.12)

Let now $\Gamma$ be the set of functions $w$ on $S$ for which the Cauchy problem (2.4.10)-(2.4.11) has a solution $v$. If $\Gamma$ is dense in $C^0(S)$, we conclude that (2.4.12) holds for every $w \in C^0(S)$. But then $Au = 0$ on $S$, and hence also, since $A$ is non-degenerate, $u = 0$ on $S$. For if $Au \ne 0$ at some $z \in S$, then also $Au \ne 0$ for all $x$ in a neighborhood $\omega$ of $z$ on $S$. We can find a continuous non-negative scalar function $\zeta(x)$ on $S$ with support in $\omega$ and with $\zeta(z) > 0$. Then

    \int_S \zeta \, (Au)^T (Au) \, dS > 0 \quad \text{for } w = \zeta \, Au,

contrary to (2.4.12).

Now, in the case where the matrices $a^k$ and $b$ are real analytic, and $S$ and $w$ are real analytic, the Cauchy-Kowalevski theorem guarantees the existence of a solution $v$ of $L^* v = 0$ with $v = w$ on $S$ only in a sufficiently small neighborhood of $S$, and we cannot be sure that this neighborhood includes all of $R$. To bridge the gap between $S$ and $Z$ and to conclude that $u = 0$ throughout $R$, we have to cover all of $R$ by an analytic family of non-characteristic surfaces $S_\lambda$.

Definition 2.4.1 A family of hypersurfaces $S_\lambda$ in $\mathbb{R}^n$, with parameter $\lambda$ ranging over an open interval $\Lambda = (a, b)$, forms an analytic field if the $S_\lambda$ can be transformed bi-analytically into the cross sections of a cylinder whose base is the unit ball $B$ in $\mathbb{R}^{n-1}$. This means that there exists a one-to-one mapping $F : \Gamma \to \mathbb{R}^n$ of the cylinder $\Gamma = B \times \Lambda$, where $x = F(y)$ is real analytic in $\Gamma$ and has non-vanishing Jacobian; the $S_\lambda$ for $\lambda \in \Lambda$ are the sets

    S_\lambda = \{ x \mid x = F(y), \; (y_1, \dots, y_{n-1}) \in B, \; y_n = \lambda \}.   (2.4.13)

Our conditions imply that the set

    \Pi = \bigcup_{\lambda \in \Lambda} S_\lambda,   (2.4.14)

called the support of the field, is open, and that the transformation $x = F(y)$ has a real analytic inverse $y = G(x)$ mapping $\Pi$ onto $\Gamma$. In particular, $\lambda(x) = G_n(x)$ is real analytic in $\Pi$.

Uniqueness theorem. Let the $S_\lambda$ for $\lambda \in \Lambda = (a, b)$ form an analytic field in $\mathbb{R}^n$ with support $\Pi$. Consider the $m$th-order linear system

    Lu = \sum_{|\alpha| \le m} A_\alpha(x) D^\alpha u = 0,   (2.4.15)

where $x \in \mathbb{R}^n$, $u \in \mathbb{R}^N$, and the coefficient matrices $A_\alpha(x)$ are real analytic in $\Pi$. Introduce the sets

    R = \{ x \mid x \in \Pi, \; x_n \ge 0 \},   (2.4.16)
    Z = \{ x \mid x \in \Pi, \; x_n = 0 \},   (2.4.17)

and, for $\lambda \in \Lambda$,

    R_\lambda = \{ x \mid x \in S_\mu \text{ for some } \mu \text{ with } a < \mu \le \lambda \}.   (2.4.18)

We assume that $Z$ and all the $S_\lambda$ are non-characteristic with respect to $L$, and that $R_\lambda \cap R$ for any $\lambda \in \Lambda$ is a closed subset of the open set $\Pi$. Let $u$ be a solution of (2.4.15) of class $C^m(R)$ having vanishing Cauchy data on $Z$. Then $u = 0$ in $R$.

[Figure: the region $R$ above $Z$ ($x_n = 0$), swept out by the surfaces $S_a$, $S_\lambda$, $S_b$; in the $y$-coordinates these correspond to the planes $y_n = a$, $y_n = \lambda$, $y_n = b$.]

Chapter 3 Hyperbolic Equations

In this chapter we shall consider the equation

    u_{tt} = a^2 u_{xx} + f(x, t) \quad \text{for } x \in \mathbb{R}^1, \; t \in \mathbb{R}_+.

3.1 Boundary and initial conditions


Consider the equation of small transverse vibrations of a string,

    u_{tt} = a^2 u_{xx} + f(x, t),   (3.1.1)

for $0 \le x \le \ell$. If the ends of the string are fixed, then the boundary conditions

    u(0, t) = 0, \quad u(\ell, t) = 0   (3.1.2)

must be satisfied. Furthermore, the initial conditions, i.e. the shape and the velocity $\partial u / \partial t$ at the moment $t_0$ when the process begins, are given:

    u(x, t_0) = \varphi(x),   (3.1.3)
    u_t(x, t_0) = \psi(x).   (3.1.4)

These conditions must be compatible with the conditions (3.1.2) at the ends of $[0, \ell]$. Here $\varphi$ and $\psi$ are given functions. Later we shall prove that these conditions fully determine the solution of the equation (3.1.1). If the ends of the string move according to a given law, then the boundary conditions have the form

    u(0, t) = \mu_1(t), \quad u(\ell, t) = \mu_2(t),   (3.1.2')

where $\mu_1(t)$ and $\mu_2(t)$ are given. The boundary conditions can also have other forms. For example, if one end of the string is fixed, say at $x = 0$, and the other end is free, then we have

    u(0, t) = 0,   (3.1.5)

and at the free end the elastic tension

    T(\ell, t) = k \frac{\partial u}{\partial x} \Big|_{x = \ell}

is zero (no external force acts there; $k(x)$ is the Young modulus at the point $x$). Thus,

    u_x(\ell, t) = 0.   (3.1.6)

If the end $x = 0$ moves according to a given law $\mu(t)$, while at $x = \ell$ a given force $F(t)$ acts, then

    u(0, t) = \mu(t), \quad u_x(\ell, t) = \nu(t), \quad \nu(t) = \frac{1}{k} F(t).

Another type of boundary condition is

    u_x(\ell, t) = -h \, [u(\ell, t) - \theta(t)]   (3.1.7)

(elastische Befestigung, i.e. elastic attachment). Thus, we have the following types of boundary conditions:

    first kind:   u(0, t) = \mu(t),
    second kind:  u_x(0, t) = \nu(t),
    third kind:   u_x(0, t) = h \, [u(0, t) - \theta(t)].

There are also other kinds, e.g.

    u_x(\ell, t) = \frac{1}{k} F[u(\ell, t)], \; \dots

We do not pursue this direction here, but refer to the book by Tikhonov and Samarskii [22].

If the time interval is small, then the boundary conditions have little influence upon the vibration, and we can consider the problem as the limit case of the initial value problem in an unbounded domain: find a solution of the equation

    u_{tt} = a^2 u_{xx} + f(x, t), \quad -\infty < x < \infty, \; t > 0,

with the initial deformation

    u(x, 0) = \varphi(x), \quad -\infty < x < \infty,

and the initial velocity

    u_t(x, 0) = \psi(x), \quad -\infty < x < \infty.

This is in fact the Cauchy problem for a hyperbolic equation. If one wants to study the behaviour of the string near one end, assuming that the influence of the boundary condition at the other end is small, then we have, e.g.,

    u(0, t) = \mu(t), \; t \ge 0; \quad u(x, 0) = \varphi(x), \; u_t(x, 0) = \psi(x), \; 0 \le x < \infty.

For a concrete vibration process we have to find the appropriate boundary conditions.

3.2 The uniqueness

Theorem 3.2.1 The differential equation

    \rho(x) \frac{\partial^2 u}{\partial t^2} = \frac{\partial}{\partial x} \Big( k(x) \frac{\partial u}{\partial x} \Big) + f(x, t), \quad 0 < x < \ell, \; t > 0,   (3.2.1)

has no more than one solution which satisfies the initial and boundary conditions

    u(x, 0) = \varphi(x), \quad u_t(x, 0) = \psi(x),
    u(0, t) = \mu_1(t), \quad u(\ell, t) = \mu_2(t).   (3.2.2)

Here we assume that the function $u(x, t)$ and its first and second derivatives are continuous on the interval $0 \le x \le \ell$ for $t \ge 0$, and that $\rho(x) > 0$, $k(x) > 0$ are continuous, given functions.

Proof: Suppose that there exist two solutions $u_1(x, t)$, $u_2(x, t)$ of our problem. Then their difference $v(x, t) = u_1(x, t) - u_2(x, t)$ satisfies the homogeneous equation

    \rho \frac{\partial^2 v}{\partial t^2} = \frac{\partial}{\partial x} \Big( k \frac{\partial v}{\partial x} \Big)   (3.2.3)

and the homogeneous conditions

    v(x, 0) = 0, \quad v_t(x, 0) = 0, \quad v(0, t) = 0, \quad v(\ell, t) = 0.   (3.2.4)

We shall prove that under the conditions (3.2.4) $v$ is identically zero. For this purpose we consider the function

    E(t) := \frac{1}{2} \int_0^\ell \big\{ k (v_x)^2 + \rho (v_t)^2 \big\} \, dx   (3.2.5)

and prove that it is independent of $t$. The function $E(t)$ has a physical meaning: it is the total energy of the string at the moment $t$. Since $v$ is twice continuously differentiable, we can differentiate $E(t)$ and get

    \frac{dE(t)}{dt} = \int_0^\ell (k v_x v_{xt} + \rho v_t v_{tt}) \, dx.   (3.2.6)

An integration by parts in the first term of the right-hand side gives

    \int_0^\ell k v_x v_{xt} \, dx = k v_x v_t \Big|_0^\ell - \int_0^\ell v_t (k v_x)_x \, dx = - \int_0^\ell v_t (k v_x)_x \, dx   (3.2.7)

(since $v(0, t) = 0$ implies $v_t(0, t) = 0$, and $v(\ell, t) = 0$ implies $v_t(\ell, t) = 0$). Thus,

    \frac{dE(t)}{dt} = \int_0^\ell \big[ \rho v_t v_{tt} - v_t (k v_x)_x \big] \, dx = \int_0^\ell \big[ \rho v_{tt} - (k v_x)_x \big] v_t \, dx = 0.

This means that $E(t) = \mathrm{const}$. Furthermore, since $v(x, 0) = 0$ and $v_t(x, 0) = 0$, we have

    E(t) = \mathrm{const} = E(0) = \frac{1}{2} \int_0^\ell \big[ k (v_x)^2 + \rho (v_t)^2 \big] \Big|_{t=0} \, dx = 0.   (3.2.8)

On the other hand, the functions $\rho(x)$ and $k(x)$ are positive, and it follows from (3.2.5), (3.2.8) that

    v_x(x, t) \equiv 0, \quad v_t(x, t) \equiv 0.

Hence

    v(x, t) = \mathrm{const} = v(x, 0) \equiv 0.

The same result remains valid for the second boundary value problem, where $u_x(0, t) = \nu_1(t)$ and $u_x(\ell, t) = \nu_2(t)$, because in this case $v_x(0, t) = v_x(\ell, t) = 0$. The proof for the third boundary value problem,

    u_x(0, t) - h_1 u(0, t) = \nu_1(t), \quad h_1 \ge 0,
    u_x(\ell, t) + h_2 u(\ell, t) = \nu_2(t), \quad h_2 \ge 0,

is a little different. In this case

    v_x(0, t) - h_1 v(0, t) = 0, \quad v_x(\ell, t) + h_2 v(\ell, t) = 0,

and in (3.2.7) we obtain

    k v_x v_t \Big|_0^\ell = - \frac{\partial}{\partial t} \Big[ \frac{k(\ell) h_2}{2} v^2(\ell, t) + \frac{k(0) h_1}{2} v^2(0, t) \Big].

Integrating $dE/dt$ from $0$ to $t$ gives

    E(t) - E(0) = \int_0^t \int_0^\ell \big[ \rho v_{tt} - (k v_x)_x \big] v_t \, dx \, dt
                  - \frac{1}{2} \Big\{ k(\ell) h_2 \big[ v^2(\ell, t) - v^2(\ell, 0) \big] + k(0) h_1 \big[ v^2(0, t) - v^2(0, 0) \big] \Big\},

where the double integral vanishes because of the differential equation. Taking into account that

    E(0) = \frac{1}{2} \int_0^\ell \big\{ k v_x^2 + \rho v_t^2 \big\} \Big|_{t=0} \, dx = 0

and $v(\ell, 0) = v(0, 0) = 0$, we get

    E(t) = - \frac{1}{2} \big[ k(\ell) h_2 v^2(\ell, t) + k(0) h_1 v^2(0, t) \big] \le 0.

On the other hand $E(t) \ge 0$; it follows that $E(t) \equiv 0$, and so $v(x, t) \equiv 0$.
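The conservation of the energy (3.2.5) can be seen concretely on an explicit solution. The sketch below is an addition to the notes: it takes $\rho = 1$ and constant $k = a^2$, so that (3.2.1) becomes $v_{tt} = a^2 v_{xx}$, and evaluates $E(t)$ on the standing wave $v(x, t) = \sin(\pi x / \ell) \cos(\pi a t / \ell)$, which satisfies the equation and the fixed-end conditions; the values $a = 2$, $\ell = 1$ are arbitrary.

```python
import math

# Energy E(t) = (1/2) int_0^l (a^2 v_x^2 + v_t^2) dx for the standing wave
# v(x, t) = sin(pi x / l) cos(pi a t / l); E(t) must not depend on t.
a, l = 2.0, 1.0
w = math.pi * a / l

def vx(x, t):
    return (math.pi / l) * math.cos(math.pi * x / l) * math.cos(w * t)

def vt(x, t):
    return -w * math.sin(math.pi * x / l) * math.sin(w * t)

def energy(t, n=4000):
    h = l / n
    total = sum(a * a * vx((i + 0.5) * h, t) ** 2 + vt((i + 0.5) * h, t) ** 2
                for i in range(n))                    # midpoint rule
    return 0.5 * total * h

e0 = energy(0.0)
for t in (0.1, 0.37, 1.0, 2.5):
    assert abs(energy(t) - e0) < 1e-9 * max(e0, 1.0)
```

For this wave one finds $E(t) = \ell \omega^2 / 4$ with $\omega = \pi a / \ell$, a constant, as the assertions confirm.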

3.3 The method of wave propagation (Wellenausbreitungsmethode)

3.3.1 The d'Alembert method

We consider the problem

    u_{tt} - a^2 u_{xx} = 0, \quad -\infty < x < \infty, \; t > 0,   (3.3.1)
    u(x, 0) = \varphi(x), \quad u_t(x, 0) = \psi(x), \quad -\infty < x < \infty.   (3.3.2)

First, we transform the equation (3.3.1) into the canonical form containing the mixed derivative. The characteristic equation

    dx^2 - a^2 \, dt^2 = 0

is equivalent to

    dx - a \, dt = 0, \quad dx + a \, dt = 0,

the integrals of which are

    x - at = C_1, \quad x + at = C_2.

Introducing the new variables

    \xi = x + at, \quad \eta = x - at,

we get

    u_x = u_\xi + u_\eta, \quad u_{xx} = u_{\xi\xi} + 2 u_{\xi\eta} + u_{\eta\eta},
    u_t = a (u_\xi - u_\eta), \quad u_{tt} = a^2 (u_{\xi\xi} - 2 u_{\xi\eta} + u_{\eta\eta}).

Thus, the equation (3.3.1) is transformed into

    u_{\xi\eta} = 0.   (3.3.3)

It is clear that every solution of Eq. (3.3.3) satisfies $u_\xi(\xi, \eta) = f(\xi)$, where $f(\xi)$ is an arbitrary function of $\xi$ alone. Integrating this equation we obtain

    u(\xi, \eta) = \int f(\xi) \, d\xi + g(\eta) = f_1(\xi) + f_2(\eta),   (3.3.4)

where $f_1$ depends only on $\xi$ and $f_2$ depends only on $\eta$. Conversely, if $f_1$ and $f_2$ are arbitrary differentiable functions, then the function $u(\xi, \eta)$ defined by (3.3.4) is a solution of (3.3.3). It follows that

    u(x, t) = f_1(x + at) + f_2(x - at)   (3.3.5)

is the general solution of (3.3.1). Assume that a solution of our Cauchy problem (3.3.1), (3.3.2) exists; then it is given by (3.3.5). The functions $f_1$ and $f_2$ are determined by

    u(x, 0) = f_1(x) + f_2(x) = \varphi(x),   (3.3.6)
    u_t(x, 0) = a f_1'(x) - a f_2'(x) = \psi(x).   (3.3.7)

From (3.3.7) we have

    f_1(x) - f_2(x) = \frac{1}{a} \int_{x_0}^x \psi(\xi) \, d\xi + c,

where $x_0$ and $c$ are constants. The last equation and (3.3.6) yield

    f_1(x) = \frac{1}{2} \varphi(x) + \frac{1}{2a} \int_{x_0}^x \psi(\xi) \, d\xi + \frac{c}{2},
    f_2(x) = \frac{1}{2} \varphi(x) - \frac{1}{2a} \int_{x_0}^x \psi(\xi) \, d\xi - \frac{c}{2}.   (3.3.8)

It follows that

    u(x, t) = \frac{\varphi(x + at) + \varphi(x - at)}{2} + \frac{1}{2a} \Big( \int_{x_0}^{x + at} \psi(\xi) \, d\xi - \int_{x_0}^{x - at} \psi(\xi) \, d\xi \Big),

or

    u(x, t) = \frac{\varphi(x + at) + \varphi(x - at)}{2} + \frac{1}{2a} \int_{x - at}^{x + at} \psi(\xi) \, d\xi.   (3.3.9)

The formula (3.3.9) is called d'Alembert's formula. It proves the uniqueness of the solution of the problem (3.3.1), (3.3.2). It is also clear that if $\varphi$ is twice differentiable and $\psi$ is once differentiable, then (3.3.9) is a solution of (3.3.1)-(3.3.2). Thus the method of d'Alembert proves not only the uniqueness of the solution but also its existence.

[Figure: the two characteristics $x + at = C_2$ and $x - at = C_1$ through the point $M = (x_0, t_0)$ in the $(x, t)$-plane.]

Let $(x_0, t_0)$ be a point in the plane $(x, t)$, $t \ge 0$. There are two characteristics crossing the point $M = (x_0, t_0)$. Denoting the intersections of these characteristics with the line $t = 0$ by $P$ and $Q$, respectively, we see that in order to determine $u(x_0, t_0)$ we need $\varphi$ and $\psi$ only on $[P, Q]$. The triangle $MPQ$ is called the characteristic triangle of the point $M$.
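D'Alembert's formula (3.3.9) is straightforward to evaluate numerically. The sketch below is an addition to the notes: it approximates the $\psi$-integral by the trapezoid rule and checks the initial conditions and the wave equation by finite differences; the data $\varphi$, $\psi$ and the value $a = 1.5$ are arbitrary smooth test choices.

```python
import math

# d'Alembert's formula (3.3.9), with a trapezoid rule for the psi-integral.
a = 1.5

def phi(x): return math.exp(-x * x)
def psi(x): return math.sin(x)

def u(x, t, n=400):
    lo, hi = x - a * t, x + a * t
    h = (hi - lo) / n
    integral = (0.5 * (psi(lo) + psi(hi)) + sum(psi(lo + i * h) for i in range(1, n))) * h
    return 0.5 * (phi(x + a * t) + phi(x - a * t)) + integral / (2.0 * a)

# the initial position and velocity are reproduced
assert abs(u(0.7, 0.0) - phi(0.7)) < 1e-12
eps = 1e-4
assert abs((u(0.7, eps) - u(0.7, -eps)) / (2 * eps) - psi(0.7)) < 1e-4

# the PDE u_tt = a^2 u_xx, checked by central differences at one point
x0, t0, d = 0.3, 0.5, 1e-3
utt = (u(x0, t0 + d) - 2 * u(x0, t0) + u(x0, t0 - d)) / d ** 2
uxx = (u(x0 + d, t0) - 2 * u(x0, t0) + u(x0 - d, t0)) / d ** 2
assert abs(utt - a * a * uxx) < 1e-3
```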

3.3.2 The stability of the solution

The solution of Eq. (3.3.1) is uniquely determined by the conditions (3.3.2). We shall prove that it depends continuously on the Cauchy data. Namely, we prove the following result.

Theorem 3.3.1 For every time interval $[0, t_0]$ and every $\varepsilon > 0$ there exists a number $\delta = \delta(\varepsilon, t_0) > 0$ such that if

    |\varphi| < \delta, \quad |\psi| < \delta,

then

    |u(x, t)| < \varepsilon \quad (0 \le t \le t_0).

Remark 3.3.2 Let $u_i(x, t)$ be the solutions of (3.3.1) with the Cauchy data $\varphi_i$ and $\psi_i$, $i = 1, 2$. The theorem says that for every time interval $[0, t_0]$ and every $\varepsilon > 0$ there exists a number $\delta = \delta(\varepsilon, t_0) > 0$ such that if

    |\varphi_1 - \varphi_2| < \delta, \quad |\psi_1 - \psi_2| < \delta,

then

    |u_1(x, t) - u_2(x, t)| < \varepsilon \quad (0 \le t \le t_0).

Proof: From the d'Alembert formula (3.3.9) we have

    |u(x, t)| \le \frac{|\varphi(x + at)| + |\varphi(x - at)|}{2} + \frac{1}{2a} \int_{x - at}^{x + at} |\psi(\xi)| \, d\xi
              < \delta + \frac{1}{2a} \cdot 2at \cdot \delta \le \delta (1 + t_0).

Thus, if we take $\delta = \varepsilon / (1 + t_0)$, then $|u(x, t)| < \varepsilon$.

A boundary value problem is called well-posed if

1. it has a solution (existence);
2. this solution is unique (uniqueness);
3. the solution depends continuously on the initial and boundary conditions (stability).

If one of these three conditions is not met, we say that the problem is ill-posed. For our Cauchy problem we know that

    u(x, t) = \frac{\varphi(x + at) + \varphi(x - at)}{2} + \frac{1}{2a} \int_{x - at}^{x + at} \psi(\xi) \, d\xi

is the unique solution if $\varphi \in C^2$, $\psi \in C^1$, and this solution is stable in the $L^\infty$-norm. Thus, our problem is well-posed in this class. However, if $\varphi \notin C^2$ or $\psi \notin C^1$, then the d'Alembert formula gives no classical solution of our Cauchy problem. Nevertheless, we have proved that the function $u(x, t)$ defined by (3.3.9) is stable for any $\varphi$ and $\psi$. Thus, if $\varphi \notin C^2$, $\psi \notin C^1$, we can approximate them by $\varphi_\varepsilon \in C^2$, $\psi_\varepsilon \in C^1$ and obtain the classical solution $u_\varepsilon$ given by (3.3.9) with respect to these data; letting $\varepsilon \to 0$, we have $u_\varepsilon \to u$. We say that $u$ is a generalized solution of our Cauchy problem with respect to the non-smooth Cauchy data $\varphi$ and $\psi$.

Another example of ill-posedness. The Cauchy problem for the Laplace equation has the form

    u_{xx} + u_{yy} = 0, \quad -\infty < x < \infty, \; y > 0,
    u(x, 0) = \varphi(x), \quad u_y(x, 0) = \psi(x), \quad -\infty < x < \infty.

[Figure: the half-plane $y > 0$ with $u_{xx} + u_{yy} = 0$, and the data $u(x, 0) = \varphi(x)$, $u_y(x, 0) = \psi(x)$ prescribed on the $x$-axis.]

The functions

    u^{(1)}(x, y) = 0, \quad u^{(2)}(x, y) = \frac{1}{\lambda} \sin(\lambda x) \cosh(\lambda y)

satisfy the Laplace equation; furthermore,

    u^{(1)}(x, 0) = 0, \quad u^{(2)}(x, 0) = \varphi(x) = \frac{1}{\lambda} \sin(\lambda x),
    u_y^{(1)}(x, 0) = 0, \quad u_y^{(2)}(x, 0) = \psi(x) = 0.

If $\lambda \to \infty$, then $|u^{(1)}(x, 0) - u^{(2)}(x, 0)| \to 0$; however, for $y > 0$,

    |u^{(1)}(x, y) - u^{(2)}(x, y)| = \frac{1}{\lambda} |\sin(\lambda x)| \cosh(\lambda y) \not\to 0,

and in fact this difference grows beyond all bounds as $\lambda \to \infty$ wherever $\sin(\lambda x) \ne 0$. Thus, the Cauchy problem for the Laplace equation is ill-posed.
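The blow-up in this Hadamard-type example is dramatic even for moderate $\lambda$. The sketch below is an addition to the notes; the evaluation point and the sequence of $\lambda$ values are arbitrary choices.

```python
import math

# Hadamard's example: u2 = (1/lam) sin(lam x) cosh(lam y) solves Laplace's
# equation; its Cauchy data at y = 0 shrink like 1/lam, while u2 itself
# grows like cosh(lam y) for any fixed y > 0.
def u2(x, y, lam):
    return math.sin(lam * x) * math.cosh(lam * y) / lam

x, y = 0.3, 0.1
lams = (10.0, 100.0, 1000.0)
data_size = [abs(math.sin(lam * x)) / lam for lam in lams]   # |u2(x, 0)|
sol_size = [abs(u2(x, y, lam)) for lam in lams]

assert data_size[-1] < data_size[0]       # the Cauchy data go to zero
assert sol_size[-1] > 1e30 * sol_size[0]  # the solution at y > 0 explodes
```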

3.3.3 The reflection method

Consider the problem on the half axis $x > 0$:

    u_{tt} - a^2 u_{xx} = 0, \quad 0 < x < \infty, \; t > 0,
    u(0, t) = \mu(t) \quad (\text{or } u_x(0, t) = \nu(t)),
    u(x, 0) = \varphi(x), \quad u_t(x, 0) = \psi(x), \quad 0 < x < \infty.

For simplicity, suppose first that $u(0, t) = 0$ (or $u_x(0, t) = 0$).

Lemma 3.3.3 Consider the problem (3.3.1), (3.3.2).

1. If the Cauchy data $\varphi$ and $\psi$ are odd functions with respect to some point $x_0$, then the corresponding solution $u(x, t)$ is equal to zero at this point.
2. If the Cauchy data $\varphi$ and $\psi$ are even functions with respect to $x_0$, then the derivative of $u$ with respect to $x$ is equal to zero at this point.

Proof:

1. We may suppose that $x_0 = 0$, and thus $\varphi(x) = -\varphi(-x)$, $\psi(x) = -\psi(-x)$. For $x = 0$ we have

    u(0, t) = \frac{\varphi(at) + \varphi(-at)}{2} + \frac{1}{2a} \int_{-at}^{at} \psi(\xi) \, d\xi = 0.

2. If $\varphi(x) = \varphi(-x)$, $\psi(x) = \psi(-x)$, then

    u_x(0, t) = \frac{\varphi'(at) + \varphi'(-at)}{2} + \frac{1}{2a} \big[ \psi(at) - \psi(-at) \big] = 0.

Now we consider the problem

    u_{tt} = a^2 u_{xx}, \quad 0 < x < \infty, \; t > 0,
    u(x, 0) = \varphi(x), \quad u_t(x, 0) = \psi(x), \quad 0 < x < \infty,
    u(0, t) = 0, \quad t > 0.

Let

    \Phi(x) = \begin{cases} \varphi(x), & x > 0, \\ -\varphi(-x), & x < 0, \end{cases} \qquad
    \Psi(x) = \begin{cases} \psi(x), & x > 0, \\ -\psi(-x), & x < 0, \end{cases}

be the odd extensions of $\varphi$ and $\psi$, respectively. It is clear that the function

    u(x, t) = \frac{\Phi(x + at) + \Phi(x - at)}{2} + \frac{1}{2a} \int_{x - at}^{x + at} \Psi(\xi) \, d\xi

is well defined. Furthermore, taking Lemma 3.3.3 into account, we see that

    u(0, t) = 0.

Besides, for $x > 0$,

    u(x, 0) = \Phi(x) = \varphi(x), \quad u_t(x, 0) = \Psi(x) = \psi(x).

Thus $u(x, t)$ is a solution of our problem. We can write the solution in the following way:

    u(x, t) = \frac{\varphi(x + at) + \varphi(x - at)}{2} + \frac{1}{2a} \int_{x - at}^{x + at} \psi(\xi) \, d\xi \quad \text{for } t < \frac{x}{a}, \; x > 0,

    u(x, t) = \frac{\varphi(x + at) - \varphi(at - x)}{2} + \frac{1}{2a} \int_{at - x}^{x + at} \psi(\xi) \, d\xi \quad \text{for } t > \frac{x}{a}, \; x > 0.

[Figure: the quadrant $x > 0$, $t > 0$ divided by the characteristic $x = at$ into the regions $t < x/a$ and $t > x/a$.]
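The odd-reflection construction can be tested directly against d'Alembert's formula. The sketch below is an addition to the notes; $\varphi$, $\psi$ are arbitrary smooth choices with $\varphi(0) = 0$ (compatible data), and $a = 1$.

```python
import math

# Odd reflection for the half line: with PHI, PSI the odd extensions of
# phi, psi, d'Alembert's formula gives u(0, t) = 0 (Lemma 3.3.3).
a = 1.0

def phi(x): return x * math.exp(-x)
def psi(x): return math.sin(x)

def PHI(x): return phi(x) if x >= 0 else -phi(-x)
def PSI(x): return psi(x) if x >= 0 else -psi(-x)

def u(x, t, n=2000):
    lo, hi = x - a * t, x + a * t
    h = (hi - lo) / n
    integral = (0.5 * (PSI(lo) + PSI(hi)) + sum(PSI(lo + i * h) for i in range(1, n))) * h
    return 0.5 * (PHI(x + a * t) + PHI(x - a * t)) + integral / (2.0 * a)

def u_plain(x, t, n=2000):
    # unreflected d'Alembert formula with the original data phi, psi
    lo, hi = x - a * t, x + a * t
    h = (hi - lo) / n
    integral = (0.5 * (psi(lo) + psi(hi)) + sum(psi(lo + i * h) for i in range(1, n))) * h
    return 0.5 * (phi(x + a * t) + phi(x - a * t)) + integral / (2.0 * a)

# the fixed-end condition holds for all t
for t in (0.2, 1.0, 3.0):
    assert abs(u(0.0, t)) < 1e-12

# ahead of the characteristic (t < x/a) the boundary is not felt
assert abs(u(2.0, 0.5) - u_plain(2.0, 0.5)) < 1e-12
```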

In the domain $t < x/a$ the boundary condition has no influence on the solution, and it coincides with the solution of (3.3.1) on $(-\infty, \infty)$. We consider now the case

    u_x(0, t) = 0.

For this purpose we take the even extensions of $\varphi$ and $\psi$:

    \Phi(x) = \begin{cases} \varphi(x), & x > 0, \\ \varphi(-x), & x < 0, \end{cases} \qquad
    \Psi(x) = \begin{cases} \psi(x), & x > 0, \\ \psi(-x), & x < 0. \end{cases}

As a solution of our equation (3.3.1) we have

    u(x, t) = \frac{\Phi(x + at) + \Phi(x - at)}{2} + \frac{1}{2a} \int_{x - at}^{x + at} \Psi(\xi) \, d\xi,

or

    u(x, t) = \frac{\varphi(x + at) + \varphi(x - at)}{2} + \frac{1}{2a} \int_{x - at}^{x + at} \psi(\xi) \, d\xi, \quad t < \frac{x}{a},

    u(x, t) = \frac{\varphi(x + at) + \varphi(at - x)}{2} + \frac{1}{2a} \Big( \int_0^{x + at} \psi(\xi) \, d\xi + \int_0^{at - x} \psi(\xi) \, d\xi \Big), \quad t > \frac{x}{a}.

It is clear that $u(x, t)$ satisfies the equation, the initial conditions, and the boundary condition $u_x(0, t) = 0$.

It remains to consider the non-homogeneous cases

    u(0, t) = \mu(t) \ne 0 \quad \text{and} \quad u_x(0, t) = \nu(t) \ne 0.

Let us consider the case of homogeneous Cauchy data:

    u(x, 0) = 0, \quad u_t(x, 0) = 0, \quad u(0, t) = \mu(t), \; t > 0.

The general solution of the equation has the form

    u(x, t) = f_1(x + at) + f_2(x - at).

Since $u(x, 0) = 0$ and $u_t(x, 0) = 0$, we see that $f_1(s) = -f_2(s) = c$ for $s \ge 0$, where $c$ is a constant. From the boundary condition,

    \mu(t) = u(0, t) = f_1(at) + f_2(-at) = c + f_2(-at), \quad t > 0,

so that $f_2(s) = \mu(-s/a) - c$ for $s < 0$. Therefore, if $x - at < 0$, i.e. $t > x/a$, we have

    u(x, t) = f_1(x + at) + f_2(x - at) = c + \mu\Big( t - \frac{x}{a} \Big) - c = \mu\Big( t - \frac{x}{a} \Big).

[Figure: the region $x - at < 0$ above the characteristic $x = at$.]

If $x - at \ge 0$, i.e. $t < x/a$, then

    u(x, t) = f_1(x + at) + f_2(x - at) = c - c = 0.

Now it is easily seen that the solution of our boundary value problem with general data is

    u(x, t) = \frac{\varphi(x + at) + \varphi(x - at)}{2} + \frac{1}{2a} \int_{x - at}^{x + at} \psi(\xi) \, d\xi, \quad t < \frac{x}{a},

    u(x, t) = \mu\Big( t - \frac{x}{a} \Big) + \frac{\varphi(x + at) - \varphi(at - x)}{2} + \frac{1}{2a} \int_{at - x}^{x + at} \psi(\xi) \, d\xi, \quad t > \frac{x}{a}.
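The boundary-driven part of this solution, $u(x, t) = \mu(t - x/a)$ behind the front and $u = 0$ ahead of it, is easy to check on its own. The sketch below is an addition to the notes; $\mu$ is an arbitrary smooth pulse with $\mu(0) = 0$ (compatible with the zero initial data), and $a = 2$.

```python
import math

# Boundary-driven solution: u(x, t) = mu(t - x/a) for t > x/a, 0 otherwise.
a = 2.0

def mu(t):
    return t * t * math.exp(-t) if t > 0 else 0.0

def u(x, t):
    return mu(t - x / a)

# boundary condition u(0, t) = mu(t)
for t in (0.1, 0.5, 2.0):
    assert u(0.0, t) == mu(t)

# wave equation u_tt = a^2 u_xx behind the front, by central differences
x0, t0, d = 0.5, 1.0, 1e-4
utt = (u(x0, t0 + d) - 2 * u(x0, t0) + u(x0, t0 - d)) / d ** 2
uxx = (u(x0 + d, t0) - 2 * u(x0, t0) + u(x0 - d, t0)) / d ** 2
assert abs(utt - a * a * uxx) < 1e-4

# zero ahead of the front (t < x/a) and zero initial data
assert u(3.0, 1.0) == 0.0 and u(1.0, 0.0) == 0.0
```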

3.4 The Fourier method

3.4.1 Free vibration of a string

Consider the problem

    u_{tt} = a^2 u_{xx}, \quad 0 < x < \ell, \; t > 0,   (3.4.1)
    u(0, t) = u(\ell, t) = 0, \quad t > 0,   (3.4.2)
    u(x, 0) = \varphi(x), \quad u_t(x, 0) = \psi(x), \quad 0 \le x \le \ell.   (3.4.3)

The equation (3.4.1) is linear and homogeneous, and therefore the sum of its special solutions is again a solution. We shall try to find the solution of our problem (3.4.1)-(3.4.3) as a sum of special solutions with appropriate coefficients. For this purpose we consider the auxiliary problem: find a solution of the equation (3.4.1) which is not identically vanishing, satisfies the homogeneous boundary conditions (3.4.2), and can be represented in the form

    u(x, t) = X(x) T(t),   (3.4.4)

where $X$ depends only on $x$ and $T$ depends only on $t$.

40 Putting (3.4.4) into (3.4.1) we get or, dividing by XT ,

CHAPTER 3. HYPERBOLIC EQUATIONS


1 X 00T = a2 T 00X ; (3:4:5)

X 00(x) = 1 T 00(t) : (3:4:6) X (x) 2a2 T (t) The function (3.4.4) is a solution of (3.4.1), if (3.4.5) or (3.4.6) are satis ed. The right{hand side of (3.4.6) depends only on t, meanwhile the left{hand side depends only on x. It follows that they must be equal to a constant, say ? : X 00(x) = 1 T 00(t) = ? ; (3:4:7) X (x) a2 T (t) here, we put the sign minus before only for our convenience. From (3.4.7) we obtain two ordinary di erential equations for X and T X 00(x) + X (x) = 0 ; (3.4.8) 00(t) + a2 T (t) = 0 : T (3.4.9) The boundary value conditions (3.4.2) give u(0; t) = X (0) T (t) = 0 ; u(`; t) = X (`) T (t) = 0 : It follows that X (0) = X (`) = 0 (3:4:10) otherwise T (t) 0 and u(x; t) 0, and u is not a non{trivial solution of our problem. Thus, in order to nd X (x) we get an eigenvalue problem: Find such that there exists a non{trivial solution of the problem ) X 00 + X = 0 ; (3:4:11) X (0) = X (`) = 0 ; and nd the solutions corresponding to these . The , for which a non{trivial solution exists, are called eigenvalues, and the corresponding solutions to them are called eigenfunctions. In what follows we distinguish between three cases:
1. For $\lambda < 0$ the problem has no non-trivial solution. In fact, the general solution of (3.4.11) has the form
$$X(x) = C_1 e^{\sqrt{-\lambda}\,x} + C_2 e^{-\sqrt{-\lambda}\,x}.$$
This solution must satisfy the boundary conditions:
$$X(0) = C_1 + C_2 = 0, \qquad X(\ell) = C_1 e^{\ell\sqrt{-\lambda}} + C_2 e^{-\ell\sqrt{-\lambda}} = 0.$$
Thus $C_1 = -C_2$ and $C_1\big(e^{\ell\sqrt{-\lambda}} - e^{-\ell\sqrt{-\lambda}}\big) = 0$. Because $\lambda < 0$, the number $\sqrt{-\lambda}$ is real and positive, so $e^{\ell\sqrt{-\lambda}} - e^{-\ell\sqrt{-\lambda}} \ne 0$. Hence $C_1 = C_2 = 0$ and $X(x) \equiv 0$.

3.4. THE FOURIER METHOD


2. For $\lambda = 0$ there exists no non-trivial solution either, since the general solution is
$$X(x) = ax + b,$$
and the boundary conditions then give
$$X(0) = b = 0, \qquad X(\ell) = a\ell = 0.$$
Thus $a = b = 0$ and $X(x) \equiv 0$.

3. For $\lambda > 0$ the general solution has the form
$$X(x) = D_1\cos\sqrt{\lambda}\,x + D_2\sin\sqrt{\lambda}\,x.$$
The boundary conditions give
$$X(0) = D_1 = 0, \qquad X(\ell) = D_2\sin\sqrt{\lambda}\,\ell = 0.$$
Since $X(x)$ must not vanish identically, $D_2 \ne 0$, and therefore
$$\sin\sqrt{\lambda}\,\ell = 0, \qquad (3.4.12)$$
say
$$\sqrt{\lambda} = \frac{n\pi}{\ell}, \quad n \in \mathbb{Z}.$$
Thus, a non-trivial solution is possible only for the eigenvalues
$$\lambda_n = \left(\frac{n\pi}{\ell}\right)^2, \quad n \in \mathbb{N}, \qquad (3.4.13)$$
with the corresponding eigenfunctions
$$X_n(x) = \sin\frac{n\pi}{\ell}x, \qquad (3.4.14)$$
which are uniquely determined up to a constant factor. For these $\lambda_n$, the solutions of (3.4.9) are
$$T_n(t) = A_n\cos\frac{n\pi a}{\ell}t + B_n\sin\frac{n\pi a}{\ell}t, \qquad (3.4.15)$$
where $A_n$ and $B_n$ are still to be determined. It follows that the functions
$$u_n(x,t) = X_n(x)T_n(t) = \left(A_n\cos\frac{n\pi a}{\ell}t + B_n\sin\frac{n\pi a}{\ell}t\right)\sin\frac{n\pi}{\ell}x \qquad (3.4.16)$$

are special solutions of (3.4.1) which satisfy the boundary conditions (3.4.2). Since (3.4.1) is linear and homogeneous, the function
$$u(x,t) = \sum_{n=1}^{\infty} u_n(x,t) = \sum_{n=1}^{\infty}\left(A_n\cos\frac{n\pi a}{\ell}t + B_n\sin\frac{n\pi a}{\ell}t\right)\sin\frac{n\pi}{\ell}x \qquad (3.4.17)$$
(provided it converges and can be differentiated termwise twice with respect to $x$ and $t$) is a solution of (3.4.1) and satisfies the boundary conditions (3.4.2). (We shall come back to this question in the next paragraph.) The initial conditions give
$$u(x,0) = \varphi(x) = \sum_{n=1}^{\infty} u_n(x,0) = \sum_{n=1}^{\infty} A_n\sin\frac{n\pi}{\ell}x,$$
$$u_t(x,0) = \psi(x) = \sum_{n=1}^{\infty}\frac{\partial u_n}{\partial t}(x,0) = \sum_{n=1}^{\infty}\frac{n\pi a}{\ell}B_n\sin\frac{n\pi}{\ell}x. \qquad (3.4.18)$$
Let $\varphi$ and $\psi$ be piecewise continuous and differentiable; then
$$\varphi(x) = \sum_{n=1}^{\infty}\varphi_n\sin\frac{n\pi}{\ell}x, \qquad \varphi_n = \frac{2}{\ell}\int_0^{\ell}\varphi(\xi)\sin\frac{n\pi\xi}{\ell}\,d\xi, \qquad (3.4.19)$$
$$\psi(x) = \sum_{n=1}^{\infty}\psi_n\sin\frac{n\pi}{\ell}x, \qquad \psi_n = \frac{2}{\ell}\int_0^{\ell}\psi(\xi)\sin\frac{n\pi\xi}{\ell}\,d\xi. \qquad (3.4.20)$$
A comparison of these two series with (3.4.18) gives
$$A_n = \varphi_n, \qquad B_n = \frac{\ell}{n\pi a}\,\psi_n. \qquad (3.4.21)$$
Thus the series (3.4.17) is completely determined.
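As an illustration (not part of the original text), the truncated series (3.4.17) with coefficients (3.4.21) is easy to evaluate numerically. The helper names `wave_series`, `phi_n`, `psi_n` are ours; the initial data $\varphi(x) = \sin(\pi x/\ell)$, $\psi = 0$ are chosen so that only the $n = 1$ term survives and the exact solution $u = \cos(\pi a t/\ell)\sin(\pi x/\ell)$ is known.

```python
import math

def wave_series(phi_n, psi_n, a, ell, N, x, t):
    # Truncated series (3.4.17): u = sum_n (A_n cos + B_n sin) * sin(n pi x / ell),
    # with A_n = phi_n and B_n = ell * psi_n / (n pi a) as in (3.4.21).
    total = 0.0
    for n in range(1, N + 1):
        A = phi_n(n)
        B = ell * psi_n(n) / (n * math.pi * a)
        w = n * math.pi / ell
        total += (A * math.cos(w * a * t) + B * math.sin(w * a * t)) * math.sin(w * x)
    return total

# Example: phi(x) = sin(pi x) (so phi_1 = 1, all other coefficients 0), psi = 0.
a, ell = 1.0, 1.0
u = lambda x, t: wave_series(lambda n: 1.0 if n == 1 else 0.0,
                             lambda n: 0.0, a, ell, 10, x, t)
# Exact solution for this data: u = cos(pi a t) * sin(pi x).
```

The boundary conditions (3.4.2) are satisfied automatically because every term of the series vanishes at $x = 0$ and $x = \ell$.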

3.4.2 The proof of the Fourier method


Let $L(u)$ be a linear differential operator; that is, $L(u)$, the action of $L$ on $u$, is a sum of derivatives of $u$ with coefficients independent of $u$.

Lemma 3.4.1 Let the $u_i$ $(i = 1, 2, \ldots)$ be special solutions of a linear homogeneous differential equation $L(u) = 0$. Then the series $u = \sum_{i=1}^{\infty} C_i u_i$ is also a solution of this equation, provided the series can be differentiated termwise under the derivatives occurring in $L(u)$.

We come back to our boundary value problem. First we have to establish the continuity of the function
$$u(x,t) = \sum_{n=1}^{\infty} u_n(x,t) = \sum_{n=1}^{\infty}\left(A_n\cos\frac{n\pi a}{\ell}t + B_n\sin\frac{n\pi a}{\ell}t\right)\sin\frac{n\pi}{\ell}x. \qquad (3.4.22)$$
Since $|u_n(x,t)| \le |A_n| + |B_n|$, we conclude that if
$$\sum_{n=1}^{\infty}\big(|A_n| + |B_n|\big) \qquad (3.4.23)$$
converges, then the series (3.4.22) converges uniformly and $u(x,t)$ is continuous. Analogously, in order to prove the continuity of $u_t(x,t)$ we have to prove the uniform convergence of the series
$$u_t(x,t) = \sum_{n=1}^{\infty}\frac{\partial u_n}{\partial t} = a\sum_{n=1}^{\infty}\frac{n\pi}{\ell}\left(-A_n\sin\frac{n\pi a}{\ell}t + B_n\cos\frac{n\pi a}{\ell}t\right)\sin\frac{n\pi}{\ell}x, \qquad (3.4.24)$$
or the convergence of the majorant series
$$\frac{a\pi}{\ell}\sum_{n=1}^{\infty} n\big(|A_n| + |B_n|\big).$$
We also have to prove the uniform convergence of the series
$$u_{xx} = \sum_{n=1}^{\infty}\frac{\partial^2 u_n}{\partial x^2} = -\left(\frac{\pi}{\ell}\right)^2\sum_{n=1}^{\infty} n^2\left(A_n\cos\frac{n\pi a}{\ell}t + B_n\sin\frac{n\pi a}{\ell}t\right)\sin\frac{n\pi}{\ell}x,$$
$$u_{tt} = \sum_{n=1}^{\infty}\frac{\partial^2 u_n}{\partial t^2} = -\left(\frac{a\pi}{\ell}\right)^2\sum_{n=1}^{\infty} n^2\left(A_n\cos\frac{n\pi a}{\ell}t + B_n\sin\frac{n\pi a}{\ell}t\right)\sin\frac{n\pi}{\ell}x.$$
These correspond, up to a constant factor, to the majorant series
$$\sum_{n=1}^{\infty} n^2\big(|A_n| + |B_n|\big). \qquad (3.4.25)$$
Since
$$A_n = \varphi_n, \qquad B_n = \frac{\ell}{n\pi a}\,\psi_n,$$
where
$$\varphi_n = \frac{2}{\ell}\int_0^{\ell}\varphi(x)\sin\frac{n\pi x}{\ell}\,dx, \qquad \psi_n = \frac{2}{\ell}\int_0^{\ell}\psi(x)\sin\frac{n\pi x}{\ell}\,dx,$$
our method is justified if we can prove the convergence of the series
$$\sum_{n=1}^{\infty} n^k|\varphi_n| \quad (k = 0, 1, 2), \qquad \sum_{n=1}^{\infty} n^k|\psi_n| \quad (k = -1, 0, 1). \qquad (3.4.26)$$

Results from the theory of Fourier series

Let $F$ be a $2\ell$-periodic function. Then we can expand $F$ into its Fourier series
$$F(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left(a_n\cos\frac{n\pi}{\ell}x + b_n\sin\frac{n\pi}{\ell}x\right),$$
$$a_n = \frac{1}{\ell}\int_{-\ell}^{\ell} F(\xi)\cos\frac{n\pi\xi}{\ell}\,d\xi, \qquad b_n = \frac{1}{\ell}\int_{-\ell}^{\ell} F(\xi)\sin\frac{n\pi\xi}{\ell}\,d\xi.$$
If $F(x)$ is odd, then $a_n = 0$ and
$$F(x) = \sum_{n=1}^{\infty} b_n\sin\frac{n\pi}{\ell}x, \qquad b_n = \frac{1}{\ell}\int_{-\ell}^{\ell} F(\xi)\sin\frac{n\pi\xi}{\ell}\,d\xi = \frac{2}{\ell}\int_0^{\ell} F(\xi)\sin\frac{n\pi\xi}{\ell}\,d\xi.$$
If $F(x)$ is defined only on $(0,\ell)$, then we can extend it to an odd function on $(-\ell,\ell)$ and use the above expansion. It is known that if $F \in C^k$ and $F^{(k)}$ is piecewise continuous, then the series
$$\sum_{n=1}^{\infty} n^k\big(|a_n| + |b_n|\big)$$
converges. If a function $f(x)$ is defined only on $(0,\ell)$ and we extend it to an odd function $F(x)$ on $(-\ell,\ell)$, then, since $F(x)$ must be continuous, $f(0)$ must be $0$; further, $f(\ell)$ must also be $0$, since $F(x)$ must be $2\ell$-periodic and continuous. The continuity of the first derivative at $x = 0$ and $x = \ell$ is automatically satisfied. In general one has to require
$$f^{(k)}(0) = f^{(k)}(\ell) = 0 \quad (k = 0, 2, 4, \ldots, 2n).$$
From these results we conclude:

1. In order to guarantee the convergence of the series
$$\sum_{n=1}^{\infty} n^k|\varphi_n| \quad (k = 0, 1, 2),$$
the function $\varphi$ must be twice continuously differentiable with a piecewise continuous third derivative, and
$$\varphi(0) = \varphi(\ell) = 0, \qquad \varphi''(0) = \varphi''(\ell) = 0. \qquad (3.4.27)$$

2. In order to guarantee the convergence of the series
$$\sum_{n=1}^{\infty} n^k|\psi_n| \quad (k = -1, 0, 1),$$
the function $\psi$ must be continuously differentiable with a piecewise continuous second derivative, and
$$\psi(0) = \psi(\ell) = 0. \qquad (3.4.28)$$

Under these conditions the representation (3.4.17) is in fact a solution of the problem (3.4.1), (3.4.2). Note that this solution is unique.
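A minimal numerical check of the coefficient formulas (3.4.19), (3.4.20), added for illustration: for $\varphi(x) = x(1-x)$ on $(0,1)$, which satisfies the compatibility conditions $\varphi(0) = \varphi(1) = 0$, the sine coefficients have the classical closed form $8/(n^3\pi^3)$ for odd $n$ and $0$ for even $n$, exhibiting the $n^{-3}$ decay discussed above. The function name `sine_coeff` and the trapezoidal quadrature are our own choices, not the text's.

```python
import math

def sine_coeff(f, ell, n, m=2000):
    # phi_n = (2/ell) * int_0^ell f(xi) sin(n pi xi / ell) d xi,
    # approximated with the composite trapezoidal rule on m subintervals.
    h = ell / m
    s = 0.0
    for k in range(m + 1):
        x = k * h
        w = 0.5 if k in (0, m) else 1.0
        s += w * f(x) * math.sin(n * math.pi * x / ell)
    return 2.0 / ell * s * h

phi = lambda x: x * (1.0 - x)   # smooth, phi(0) = phi(1) = 0
c1 = sine_coeff(phi, 1.0, 1)    # exact value: 8 / pi^3
c3 = sine_coeff(phi, 1.0, 3)    # exact value: 8 / (27 pi^3)
```

The computed coefficients match the exact values to quadrature accuracy, and the even-index coefficients vanish by the odd symmetry of the integrand about $x = 1/2$.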

3.4.3 Non-homogeneous equations


We consider the non-homogeneous hyperbolic equation
$$u_{tt} = a^2 u_{xx} + f(x,t), \quad 0 < x < \ell,\ t > 0, \qquad (3.4.29)$$
with the initial conditions
$$u(x,0) = \varphi(x), \quad u_t(x,0) = \psi(x), \quad 0 \le x \le \ell, \qquad (3.4.30)$$
and homogeneous boundary conditions
$$u(0,t) = u(\ell,t) = 0, \quad t > 0. \qquad (3.4.31)$$

We try to find the solution of (3.4.29)-(3.4.31) in the form
$$u(x,t) = \sum_{n=1}^{\infty} u_n(t)\sin\frac{n\pi}{\ell}x, \qquad (3.4.32)$$
where the $u_n(t)$ are to be determined. For this purpose we expand $f$, $\varphi$ and $\psi$ in Fourier series:
$$f(x,t) = \sum_{n=1}^{\infty} f_n(t)\sin\frac{n\pi}{\ell}x, \qquad f_n(t) = \frac{2}{\ell}\int_0^{\ell} f(\xi,t)\sin\frac{n\pi\xi}{\ell}\,d\xi,$$
$$\varphi(x) = \sum_{n=1}^{\infty}\varphi_n\sin\frac{n\pi}{\ell}x, \qquad \varphi_n = \frac{2}{\ell}\int_0^{\ell}\varphi(\xi)\sin\frac{n\pi\xi}{\ell}\,d\xi, \qquad (3.4.33)$$
$$\psi(x) = \sum_{n=1}^{\infty}\psi_n\sin\frac{n\pi}{\ell}x, \qquad \psi_n = \frac{2}{\ell}\int_0^{\ell}\psi(\xi)\sin\frac{n\pi\xi}{\ell}\,d\xi.$$
Plugging (3.4.32) into (3.4.29), we get
$$\sum_{n=1}^{\infty}\left[-a^2\left(\frac{n\pi}{\ell}\right)^2 u_n(t) - u_n''(t) + f_n(t)\right]\sin\frac{n\pi}{\ell}x = 0.$$
Thus,
$$u_n''(t) + \left(\frac{n\pi}{\ell}\right)^2 a^2 u_n(t) = f_n(t). \qquad (3.4.34)$$
On the other hand,
$$u(x,0) = \varphi(x) = \sum_{n=1}^{\infty} u_n(0)\sin\frac{n\pi}{\ell}x = \sum_{n=1}^{\infty}\varphi_n\sin\frac{n\pi}{\ell}x,$$
$$u_t(x,0) = \psi(x) = \sum_{n=1}^{\infty}\dot{u}_n(0)\sin\frac{n\pi}{\ell}x = \sum_{n=1}^{\infty}\psi_n\sin\frac{n\pi}{\ell}x.$$
It follows that
$$u_n(0) = \varphi_n, \qquad \dot{u}_n(0) = \psi_n. \qquad (3.4.35)$$
Consequently, we can find $u_n(t)$ in the form
$$u_n(t) = u_n^{(I)}(t) + u_n^{(II)}(t),$$
where
$$u_n^{(I)}(t) = \frac{\ell}{n\pi a}\int_0^t\sin\frac{n\pi a}{\ell}(t-\tau)\,f_n(\tau)\,d\tau \qquad (3.4.36)$$
and
$$u_n^{(II)}(t) = \varphi_n\cos\frac{n\pi a}{\ell}t + \frac{\ell}{n\pi a}\,\psi_n\sin\frac{n\pi a}{\ell}t. \qquad (3.4.37)$$
Thus
$$u(x,t) = \sum_{n=1}^{\infty}\frac{\ell}{n\pi a}\int_0^t\sin\frac{n\pi a}{\ell}(t-\tau)\,f_n(\tau)\,d\tau\,\sin\frac{n\pi}{\ell}x$$
$$\qquad + \sum_{n=1}^{\infty}\left(\varphi_n\cos\frac{n\pi a}{\ell}t + \frac{\ell}{n\pi a}\,\psi_n\sin\frac{n\pi a}{\ell}t\right)\sin\frac{n\pi}{\ell}x$$
$$:= u^{(I)}(x,t) + u^{(II)}(x,t). \qquad (3.4.38)$$
Taking (3.4.33) into account for $f_n(t)$, we can represent $u^{(I)}(x,t)$ in the form
$$u^{(I)}(x,t) = \int_0^t\!\!\int_0^{\ell}\left[\frac{2}{a\ell}\sum_{n=1}^{\infty}\frac{\ell}{n\pi}\sin\frac{n\pi a}{\ell}(t-\tau)\sin\frac{n\pi}{\ell}x\,\sin\frac{n\pi}{\ell}\xi\right]f(\xi,\tau)\,d\xi\,d\tau$$
$$:= \int_0^t\!\!\int_0^{\ell} G(x,\xi,t-\tau)\,f(\xi,\tau)\,d\xi\,d\tau, \qquad (3.4.39)$$
where
$$G(x,\xi,t-\tau) := \frac{2}{\pi a}\sum_{n=1}^{\infty}\frac{1}{n}\sin\frac{n\pi a}{\ell}(t-\tau)\sin\frac{n\pi}{\ell}x\,\sin\frac{n\pi}{\ell}\xi. \qquad (3.4.40)$$
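The Duhamel integral (3.4.36) for a single mode can be checked numerically; this sketch (ours, not the text's) uses the forcing $f(x,t) = \sin(\pi x)$ with $a = \ell = 1$, for which $f_1(t) = 1$ and (3.4.34) with zero initial data has the closed-form solution $u_1(t) = (1 - \cos\pi t)/\pi^2$. The function name `duhamel_un` is hypothetical.

```python
import math

def duhamel_un(fn, n, a, ell, t, m=4000):
    # u_n^{(I)}(t) = (ell/(n pi a)) * int_0^t sin(n pi a (t - tau)/ell) f_n(tau) d tau,
    # formula (3.4.36), evaluated by the trapezoidal rule on m subintervals.
    if t == 0.0:
        return 0.0
    h = t / m
    w = n * math.pi * a / ell
    s = 0.0
    for k in range(m + 1):
        tau = k * h
        wt = 0.5 if k in (0, m) else 1.0
        s += wt * math.sin(w * (t - tau)) * fn(tau)
    return (ell / (n * math.pi * a)) * s * h

# Forcing f(x,t) = sin(pi x) gives f_1(t) = 1; for a = ell = 1 the exact mode is
# u_1(t) = (1 - cos(pi t)) / pi^2, which solves u'' + pi^2 u = 1, u(0) = u'(0) = 0.
u1 = duhamel_un(lambda tau: 1.0, 1, 1.0, 1.0, 0.8)
```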

3.4.4 General first boundary value problem


Find a solution of the problem
$$u_{tt} = a^2 u_{xx} + f(x,t), \quad 0 < x < \ell,\ t > 0, \qquad (3.4.41)$$
$$u(x,0) = \varphi(x), \quad u_t(x,0) = \psi(x), \quad 0 < x < \ell, \qquad (3.4.42)$$
$$u(0,t) = \mu_1(t), \quad u(\ell,t) = \mu_2(t), \quad t > 0. \qquad (3.4.43)$$
To deal with this problem we look for a differentiable function $U$ such that
$$U(0,t) = \mu_1(t), \qquad U(\ell,t) = \mu_2(t), \qquad (3.4.44)$$
and an equation of the type (3.4.41) of which $U(x,t)$ is a solution. It is easily seen that the function
$$U(x,t) = \mu_1(t) + \frac{x}{\ell}\big[\mu_2(t) - \mu_1(t)\big] \qquad (3.4.45)$$
satisfies (3.4.44) and the equation
$$U_{tt} = a^2 U_{xx} + \ddot{\mu}_1(t) + \frac{x}{\ell}\big[\ddot{\mu}_2(t) - \ddot{\mu}_1(t)\big] =: a^2 U_{xx} + \bar{f}(x,t). \qquad (3.4.46)$$
Now consider the problem
$$v_{tt} = a^2 v_{xx} + f - \bar{f}, \qquad v(x,0) = \varphi(x) - U(x,0), \qquad v_t(x,0) = \psi(x) - U_t(x,0), \qquad v(0,t) = v(\ell,t) = 0. \qquad (3.4.47)$$
It is clear that we can find $v$ in the form (3.4.38), and then
$$u(x,t) = U(x,t) + v(x,t). \qquad (3.4.48)$$
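The lifting function (3.4.45) is simple enough to sketch directly. The helper name `lift` and the particular boundary data below are ours, chosen only to exercise the property (3.4.44) that $U$ matches $\mu_1$ at $x = 0$ and $\mu_2$ at $x = \ell$.

```python
import math

def lift(mu1, mu2, ell):
    # U(x,t) = mu1(t) + (x/ell) * (mu2(t) - mu1(t)), formula (3.4.45):
    # linear interpolation in x between the two boundary values at time t.
    return lambda x, t: mu1(t) + (x / ell) * (mu2(t) - mu1(t))

# Hypothetical boundary data, chosen only for illustration.
U = lift(lambda t: math.sin(t), lambda t: t * t, 2.0)
```

Subtracting $U$ from $u$ then reduces (3.4.41)-(3.4.43) to the homogeneous-boundary problem (3.4.47).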

3.4.5 General scheme for the Fourier method


Let $\rho(x)$, $k(x)$ and $q(x)$ be positive functions.

Consider the problem
$$L[u] := \frac{\partial}{\partial x}\left[k(x)\frac{\partial u}{\partial x}\right] - q(x)u = \rho(x)\frac{\partial^2 u}{\partial t^2}, \qquad (3.4.49)$$
$$u(0,t) = 0, \quad u(\ell,t) = 0, \quad t > 0, \qquad (3.4.50)$$
$$u(x,0) = \varphi(x), \quad u_t(x,0) = \psi(x), \quad 0 < x < \ell. \qquad (3.4.51)$$
We again use the Fourier method to solve this problem. Namely, we try to find a non-trivial solution of (3.4.49) which satisfies the boundary conditions $u(0,t) = 0$, $u(\ell,t) = 0$ and can be represented in the form $u(x,t) = X(x)T(t)$. With the same argument as in § 3.4.1 we arrive at the problems
$$\frac{d}{dx}\left[k(x)\frac{dX}{dx}\right] - qX + \lambda\rho X = 0$$
and
$$T'' + \lambda T = 0.$$
To find $X(x)$ we get the following eigenvalue problem: find the values of $\lambda$ for which the boundary value problem
$$L[X] + \lambda\rho X = 0, \quad 0 < x < \ell, \qquad (3.4.52)$$
$$X(0) = 0, \quad X(\ell) = 0, \qquad (3.4.53)$$
has a non-trivial solution. These $\lambda$ are called eigenvalues, and the associated non-trivial solutions of (3.4.52), (3.4.53) are called eigenfunctions.

Properties of the problem (3.4.52), (3.4.53)

1. There exist countably many eigenvalues $\lambda_1 < \lambda_2 < \cdots < \lambda_n < \cdots$ with associated non-trivial solutions (eigenfunctions) $X_1(x), X_2(x), \ldots, X_n(x), \ldots$

2. For $q \ge 0$, all $\lambda_n$ are positive.

3. The eigenfunctions are orthogonal on the interval $[0,\ell]$ with the weight $\rho(x)$, in the sense that
$$\int_0^{\ell} X_m(x)X_n(x)\rho(x)\,dx = 0, \quad m \ne n. \qquad (3.4.54)$$

4. Any function $F(x)$ defined on $[0,\ell]$ which is twice continuously differentiable and satisfies the boundary conditions $F(0) = F(\ell) = 0$ can be expanded into a uniformly and absolutely convergent series in the $X_n(x)$:
$$F(x) = \sum_{n=1}^{\infty} F_n X_n(x), \qquad F_n = \frac{1}{N_n}\int_0^{\ell} F(x)X_n(x)\rho(x)\,dx, \qquad N_n = \int_0^{\ell} X_n^2(x)\rho(x)\,dx. \qquad (3.4.55)$$
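The orthogonality property 3 can be observed numerically in the constant-coefficient case $k = \rho = 1$, $q = 0$, where the eigenfunctions are $X_n = \sin(n\pi x/\ell)$ as in § 3.4.1. The helper `weighted_inner` below (our name) approximates the weighted inner product (3.4.54) by the trapezoidal rule.

```python
import math

def weighted_inner(Xm, Xn, rho, ell, mpts=2000):
    # Trapezoidal approximation of int_0^ell Xm(x) Xn(x) rho(x) dx, cf. (3.4.54).
    h = ell / mpts
    s = 0.0
    for k in range(mpts + 1):
        x = k * h
        w = 0.5 if k in (0, mpts) else 1.0
        s += w * Xm(x) * Xn(x) * rho(x)
    return s * h

# For k = rho = 1, q = 0, ell = 1 the eigenfunctions are X_n = sin(n pi x);
# distinct eigenfunctions are orthogonal, and int sin^2(pi x) dx = 1/2.
X = lambda n: (lambda x: math.sin(n * math.pi * x))
ip12 = weighted_inner(X(1), X(2), lambda x: 1.0, 1.0)
ip11 = weighted_inner(X(1), X(1), lambda x: 1.0, 1.0)
```

Dividing each $X_n$ by $\sqrt{N_n}$ (here $\sqrt{1/2}$) yields the orthonormal system used below.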

Properties 1 and 4 are provided by the theory of integral equations, and we do not prove them here. We shall prove properties 2 and 3. First we give the Green formula. Let $u$ and $v$ be defined on $[a,b]$ with $u, v \in C^2(a,b)$ and $u, v \in C^1[a,b]$. Then
$$uL[v] - vL[u] = u(kv')' - v(ku')' = \big[k(uv' - vu')\big]'.$$
Integrating the last identity from $a$ to $b$ we get
$$\int_a^b\big(uL[v] - vL[u]\big)\,dx = \big[k(uv' - vu')\big]_a^b.$$
Proof of property 3. Let $X_m(x)$ and $X_n(x)$ be two eigenfunctions and $\lambda_m$, $\lambda_n$ their associated eigenvalues. From the boundary conditions (3.4.53) and the Green formula we have
$$\int_0^{\ell}\big(X_m L[X_n] - X_n L[X_m]\big)\,dx = 0.$$
Taking Eq. (3.4.52) into account we obtain
$$(\lambda_n - \lambda_m)\int_0^{\ell} X_m(x)X_n(x)\rho(x)\,dx = 0.$$
Since $\lambda_n \ne \lambda_m$, it follows that
$$\int_0^{\ell} X_m(x)X_n(x)\rho(x)\,dx = 0. \qquad (3.4.56)$$

We shall prove that to every eigenvalue there corresponds, up to a constant factor, only one eigenfunction. In order to prove this fact, we use the result that the solution of a linear ordinary differential equation is uniquely determined by its value and the value of its first derivative at a point. Let $X$ and $\tilde{X}$ be two eigenfunctions associated with the same eigenvalue $\lambda$. Since $X(0) = \tilde{X}(0) = 0$ and neither vanishes identically, we have $X'(0) \ne 0$ and $\tilde{X}'(0) \ne 0$. Set
$$X^*(x) := \frac{X'(0)}{\tilde{X}'(0)}\,\tilde{X}(x).$$
The function $X^*(x)$ satisfies the equation (3.4.52); furthermore
$$X^*(0) = \frac{X'(0)}{\tilde{X}'(0)}\,\tilde{X}(0) = 0 = X(0), \qquad (X^*)'(0) = \frac{X'(0)}{\tilde{X}'(0)}\,\tilde{X}'(0) = X'(0).$$
Thus $X^*(x) = X(x)$, and therefore
$$X(x) = \frac{X'(0)}{\tilde{X}'(0)}\,\tilde{X}(x) = A\tilde{X}(x).$$

We conclude that if $X_n(x)$ is an eigenfunction associated with the eigenvalue $\lambda_n$, then $A_n X_n(x)$ ($A_n$ a nonvanishing constant) is also an eigenfunction. Since these eigenfunctions differ only by a constant factor, the value of this factor is not important, and it can be normalized through the equality
$$N_n = \int_0^{\ell} X_n^2(x)\rho(x)\,dx = 1.$$
If an eigenfunction $\hat{X}_n(x)$ is not normalized, then we can normalize it by
$$X_n(x) = \frac{\hat{X}_n(x)}{\sqrt{\int_0^{\ell}\hat{X}_n^2(x)\rho(x)\,dx}}.$$
In this way we obtain an orthonormal system of eigenfunctions $X_n(x)$ of the problem (3.4.52), (3.4.53):
$$\int_0^{\ell} X_m(x)X_n(x)\rho(x)\,dx = \begin{cases} 0, & m \ne n, \\ 1, & m = n. \end{cases}$$

Proof of property 2. Let $X_n(x)$ be the normalized eigenfunction associated with the eigenvalue $\lambda_n$. From (3.4.52),
$$L[X_n] = -\lambda_n\rho(x)X_n(x).$$
Multiplying both sides of this equation by $X_n(x)$ and integrating with respect to $x$ from $0$ to $\ell$, we obtain
$$\lambda_n\int_0^{\ell} X_n^2(x)\rho(x)\,dx = -\int_0^{\ell} X_n(x)L[X_n]\,dx.$$
Since $\int_0^{\ell} X_n^2(x)\rho(x)\,dx = 1$, we get
$$\lambda_n = -\int_0^{\ell} X_n\frac{d}{dx}\left[k(x)\frac{dX_n}{dx}\right]dx + \int_0^{\ell} q(x)X_n^2(x)\,dx.$$
Integration by parts gives
$$\lambda_n = -\big[X_n kX_n'\big]_0^{\ell} + \int_0^{\ell} k(x)\big[X_n'(x)\big]^2\,dx + \int_0^{\ell} q(x)X_n^2(x)\,dx = \int_0^{\ell} k(x)\big[X_n'(x)\big]^2\,dx + \int_0^{\ell} q(x)X_n^2(x)\,dx.$$
Since $k(x) > 0$, $q(x) \ge 0$, $X_n \in C^2(0,\ell)$ and $X_n(x)$ is not identically vanishing, we have
$$\lambda_n > 0.$$

For a function $F(x)$ defined on $(0,\ell)$ we can formally expand
$$F(x) = \sum_{n=1}^{\infty} F_n X_n(x), \qquad F_n = \frac{\int_0^{\ell}\rho(x)F(x)X_n(x)\,dx}{\int_0^{\ell}\rho(x)X_n^2(x)\,dx}, \quad n \in \mathbb{N}.$$
We say that the $F_n$ are the Fourier coefficients of the function $F$ with respect to the system $\{X_1, X_2, \ldots\}$. With the same method as in § 3.4.1 one can prove that
$$u(x,t) = \sum_{n=1}^{\infty}\left(A_n\cos\sqrt{\lambda_n}\,t + B_n\sin\sqrt{\lambda_n}\,t\right)X_n(x),$$
where
$$A_n = \varphi_n, \qquad B_n = \frac{\psi_n}{\sqrt{\lambda_n}}.$$
Here $\varphi_n$ and $\psi_n$ are the Fourier coefficients of $\varphi$ and $\psi$, respectively. This method was developed by Steklov. We do not pursue it in this lecture.

3.5 The Goursat Problem

3.5.1 Definition
Consider the equation

$$u_{xy} = f(x,y). \qquad (3.5.1)$$
Let $C_1$ and $C_2$ be two curves described by equations $x = \sigma_1(y)$ and $x = \sigma_2(y)$ and emanating from a common point, which we take to be the origin of the coordinate system. We assume that they do not intersect elsewhere. We assume further that the curves $C_1$ and $C_2$ are nowhere tangent to the characteristics situated in the first quadrant and that the functions $\sigma_1(y)$ and $\sigma_2(y)$ are increasing, i.e. the curves $C_1$ and $C_2$ have time-like orientation. The functions $\sigma_1(y)$ and $\sigma_2(y)$ have inverses $\vartheta_1(x)$ and $\vartheta_2(x)$, respectively. For definiteness suppose that $\sigma_1(y) < \sigma_2(y)$ for the positive values of the variable $y$ for which both functions exist.
(Figure: the curves $C_1$ and $C_2$ emanating from the origin, with the points $A$, $B$ and the characteristics through $P_0$.)
Let $D$ be the domain contained between these arcs and the two characteristics $P_0A$ and $P_0B$. We seek a solution $u(x,y)$ of (3.5.1) defined in the closure $\bar{D}$ of the domain $D$ and satisfying the Dirichlet boundary conditions on the arcs $C_1$ and $C_2$:

$$u\big|_{OA} = u(\sigma_1(y), y) = \psi_1(y), \quad 0 \le y \le \eta_1, \qquad (3.5.2)$$
$$u\big|_{OB} = u(\sigma_2(y), y) = \psi_2(y), \quad 0 \le y \le \eta_2, \qquad (3.5.3)$$
where $\eta_1$ and $\eta_2$ are the $y$-coordinates of the points $A$ and $B$. We assume that $\psi_1(0) = \psi_2(0) = 0$. Conditions (3.5.2), (3.5.3) can also be represented in the form

$$u(x, \vartheta_1(x)) = \varphi_1(x), \quad 0 \le x \le \xi_1, \qquad (3.5.4)$$
$$u(x, \vartheta_2(x)) = \varphi_2(x), \quad 0 \le x \le \xi_2, \qquad (3.5.5)$$
where $\xi_1$ and $\xi_2$ are the abscissae of the points $A$ and $B$. This problem is called the Goursat problem. If $\psi_1(0) = \psi_2(0) = c \ne 0$, it would be sufficient to solve the Goursat problem with the conditions $u = \psi_1 - c$ on $C_1$ and $u = \psi_2 - c$ on $C_2$ and then add the constant $c$ to the derived solution. The problem formulated above is the Goursat problem in a narrow sense. In a wider sense, in the Goursat problem the function $\sigma_1(y)$ appearing in the equation of the curve $C_1$ is

monotonically non-decreasing, and the curve $C_2$ is represented by an equation of the form $y = \vartheta(x)$, where the function $\vartheta(x)$ is monotonically non-decreasing. If $C_1$ is the interval $[0,\beta]$ of the axis $Oy$ and $C_2$ is the interval $[0,\alpha]$ of the axis $Ox$, then we have the Darboux problem:

(Figure: the rectangle $OMRN$ with $M = (\alpha, 0)$, $R = (\alpha, \beta)$, $N = (0, \beta)$.)

Find the solution of (3.5.1) in $OMRN$, where $M = (\alpha, 0)$, $R = (\alpha, \beta)$, $N = (0, \beta)$, and
$$u(x,0) = \varphi(x), \quad 0 \le x \le \alpha, \qquad u(0,y) = \psi(y), \quad 0 \le y \le \beta.$$
(Figure: the curve $C$ from $O$ to $A = (\alpha,\beta)$, the axis $Ox$, and the characteristic $AB$ perpendicular to $Ox$.)

Let the arc $OA$ of a curve $C$ with equation $y = \vartheta(x)$, beginning at the origin of the coordinate system, belong to the class $C^1$ with $\vartheta'(x) > 0$. Let $D$ be the domain contained between the arc $OA$, the axis $Ox$ and the characteristic $AB$ perpendicular to $Ox$. Let $\alpha$ and $\beta$ be the coordinates of the point $A$. If we seek the solution of (3.5.1) in the domain $D$ which satisfies the boundary conditions
$$u(x,0) = \varphi(x), \quad 0 \le x \le \alpha, \qquad u(\sigma(y), y) = \psi(y), \quad 0 \le y \le \beta,$$
then we have the Picard problem.

Remark: The Darboux problem and the Picard problem are special cases of the Goursat problem.

Remark: The Goursat problem has many applications in gas dynamics, etc. We restrict ourselves to the Darboux problem in this introductory lecture.

3.5.2 The Darboux problem. The method of successive approximations


We consider the Darboux problem
$$u_{xy} = f(x,y), \quad x \ge 0,\ y \ge 0, \qquad (3.5.1)$$
$$u(x,0) = \varphi(x), \quad x \ge 0, \qquad (3.5.6)$$
$$u(0,y) = \psi(y), \quad y \ge 0. \qquad (3.5.7)$$


(Figure: the first quadrant with $u_{xy} = f(x,y)$ inside, $u(0,y) = \psi(y)$ on the $y$-axis and $u(x,0) = \varphi(x)$ on the $x$-axis.)

Let $\varphi$ and $\psi$ be differentiable and $\varphi(0) = \psi(0)$. Integrating (3.5.1) first with respect to $x$ and then with respect to $y$, we get
$$u_y(x,y) = u_y(0,y) + \int_0^x f(\xi, y)\,d\xi,$$
$$u(x,y) = u(x,0) + u(0,y) - u(0,0) + \int_0^y\!\!\int_0^x f(\xi,\eta)\,d\xi\,d\eta.$$
Thus,
$$u(x,y) = \varphi(x) + \psi(y) - \varphi(0) + \int_0^y\!\!\int_0^x f(\xi,\eta)\,d\xi\,d\eta. \qquad (3.5.8)$$
It is clear that $u(x,y)$ defined by (3.5.8) is a solution of the Darboux problem (3.5.1), (3.5.6), (3.5.7); furthermore, it is unique. We consider now a more general hyperbolic equation
$$u_{xy} = a(x,y)u_x + b(x,y)u_y + c(x,y)u + f(x,y) \qquad (3.5.9)$$

with the boundary conditions (3.5.6), (3.5.7). We assume again that $\varphi(0) = \psi(0)$, and that the coefficients $a$, $b$ and $c$ are continuous functions of $x$ and $y$. From (3.5.9) we see that a solution $u(x,y)$ of (3.5.9) satisfies the integral equation
$$u(x,y) = \int_0^y\!\!\int_0^x\big[a(\xi,\eta)u_\xi + b(\xi,\eta)u_\eta + c(\xi,\eta)u\big]\,d\xi\,d\eta + \varphi(x) + \psi(y) - \varphi(0) + \int_0^y\!\!\int_0^x f(\xi,\eta)\,d\xi\,d\eta. \qquad (3.5.10)$$
The solution of (3.5.10) will be constructed via the method of successive approximations. As the zeroth approximation we take the function
$$u_0(x,y) = 0.$$

The equation (3.5.10) then gives the following expressions for the successive approximations:
$$u_1(x,y) = \varphi(x) + \psi(y) - \varphi(0) + \int_0^y\!\!\int_0^x f(\xi,\eta)\,d\xi\,d\eta,$$
$$u_n(x,y) = u_1(x,y) + \int_0^y\!\!\int_0^x\left[a(\xi,\eta)\frac{\partial u_{n-1}}{\partial\xi} + b(\xi,\eta)\frac{\partial u_{n-1}}{\partial\eta} + c(\xi,\eta)u_{n-1}\right]d\xi\,d\eta. \qquad (3.5.11)$$
Note also that
$$\frac{\partial u_n}{\partial x} = \frac{\partial u_1}{\partial x} + \int_0^y\left[a(x,\eta)\frac{\partial u_{n-1}}{\partial x} + b(x,\eta)\frac{\partial u_{n-1}}{\partial\eta} + c(x,\eta)u_{n-1}\right]d\eta,$$
$$\frac{\partial u_n}{\partial y} = \frac{\partial u_1}{\partial y} + \int_0^x\left[a(\xi,y)\frac{\partial u_{n-1}}{\partial\xi} + b(\xi,y)\frac{\partial u_{n-1}}{\partial y} + c(\xi,y)u_{n-1}\right]d\xi. \qquad (3.5.12)$$
In order to prove the uniform convergence of the sequences
$$\{u_n(x,y)\}, \qquad \left\{\frac{\partial u_n}{\partial x}(x,y)\right\}, \qquad \left\{\frac{\partial u_n}{\partial y}(x,y)\right\},$$
we consider the differences
$$z_n(x,y) = u_{n+1}(x,y) - u_n(x,y) = \int_0^y\!\!\int_0^x\left[a(\xi,\eta)\frac{\partial z_{n-1}}{\partial\xi} + b(\xi,\eta)\frac{\partial z_{n-1}}{\partial\eta} + c(\xi,\eta)z_{n-1}\right]d\xi\,d\eta,$$
$$\frac{\partial z_n}{\partial x}(x,y) = \int_0^y\left[a(x,\eta)\frac{\partial z_{n-1}}{\partial x} + b(x,\eta)\frac{\partial z_{n-1}}{\partial\eta} + c(x,\eta)z_{n-1}\right]d\eta,$$
$$\frac{\partial z_n}{\partial y}(x,y) = \int_0^x\left[a(\xi,y)\frac{\partial z_{n-1}}{\partial\xi} + b(\xi,y)\frac{\partial z_{n-1}}{\partial y} + c(\xi,y)z_{n-1}\right]d\xi.$$
Assume that $|a(x,y)|, |b(x,y)|, |c(x,y)| \le M$ and
$$|z_0| < H, \qquad \left|\frac{\partial z_0}{\partial x}\right| < H, \qquad \left|\frac{\partial z_0}{\partial y}\right| < H,$$
for $x \in [0,L]$, $y \in [0,L]$, where $L$ is a given positive number. We want to estimate $z_n$, $\partial z_n/\partial x$, $\partial z_n/\partial y$ from above. First we note that
$$|z_1| < 3HMxy \le \frac{3HM(x+y)^2}{2!}, \qquad \left|\frac{\partial z_1}{\partial x}\right| < 3HMy \le 3HM(x+y), \qquad \left|\frac{\partial z_1}{\partial y}\right| < 3HMx \le 3HM(x+y).$$

We shall prove by induction that
$$|z_n| < 3HM^nK^{n-1}\frac{(x+y)^{n+1}}{(n+1)!}, \qquad \left|\frac{\partial z_n}{\partial x}\right| < 3HM^nK^{n-1}\frac{(x+y)^n}{n!}, \qquad \left|\frac{\partial z_n}{\partial y}\right| < 3HM^nK^{n-1}\frac{(x+y)^n}{n!}, \qquad (3.5.13)$$
where $K = 2L + 2$. For $n = 1$ the estimates are already proved. Suppose that they hold for $n$; we show that they are also valid for $n+1$. In fact, from our definition of $z_n$ we have
$$|z_{n+1}| = \left|\int_0^y\!\!\int_0^x\left[a\,\frac{\partial z_n}{\partial\xi} + b\,\frac{\partial z_n}{\partial\eta} + c\,z_n\right]d\xi\,d\eta\right| \le 3HM^{n+1}K^{n-1}\int_0^y\!\!\int_0^x\left[\frac{2(\xi+\eta)^n}{n!} + \frac{(\xi+\eta)^{n+1}}{(n+1)!}\right]d\xi\,d\eta.$$
Noting that
$$\int_0^y\!\!\int_0^x\frac{(\xi+\eta)^k}{k!}\,d\xi\,d\eta = \int_0^y\left[\frac{(x+\eta)^{k+1}}{(k+1)!} - \frac{\eta^{k+1}}{(k+1)!}\right]d\eta = \frac{(x+y)^{k+2}}{(k+2)!} - \frac{x^{k+2}}{(k+2)!} - \frac{y^{k+2}}{(k+2)!} \le \frac{(x+y)^{k+2}}{(k+2)!},$$
we have
$$|z_{n+1}| \le 3HM^{n+1}K^{n-1}\left[\frac{2(x+y)^{n+2}}{(n+2)!} + \frac{(x+y)^{n+3}}{(n+3)!}\right] = 3HM^{n+1}K^{n-1}\frac{(x+y)^{n+2}}{(n+2)!}\left[2 + \frac{x+y}{n+3}\right] < 3HM^{n+1}K^n\frac{(x+y)^{n+2}}{(n+2)!},$$
since $x + y \le 2L$ and $K = 2L + 2$. Analogously,
$$\left|\frac{\partial z_{n+1}}{\partial x}\right| \le 3HM^{n+1}K^{n-1}\frac{(x+y)^{n+1}}{(n+1)!}\left[2 + \frac{x+y}{n+2}\right] < 3HM^{n+1}K^n\frac{(x+y)^{n+1}}{(n+1)!}$$
and
$$\left|\frac{\partial z_{n+1}}{\partial y}\right| \le 3HM^{n+1}K^{n-1}\frac{(x+y)^{n+1}}{(n+1)!}\left[2 + \frac{x+y}{n+2}\right] < 3HM^{n+1}K^n\frac{(x+y)^{n+1}}{(n+1)!}.$$
Thus the estimates in (3.5.13) are proved. From (3.5.13) we see that
$$|z_n| < 3HM^nK^{n-1}\frac{(x+y)^{n+1}}{(n+1)!} < \frac{3H}{K^2}\,\frac{(2KLM)^{n+1}}{M\,(n+1)!}, \qquad \left|\frac{\partial z_n}{\partial x}\right|,\ \left|\frac{\partial z_n}{\partial y}\right| < 3HM^nK^{n-1}\frac{(x+y)^n}{n!} < \frac{3H}{K}\,\frac{(2KLM)^n}{n!}.$$
It follows that
$$\sum_{n=0}^{\infty}|z_n| < \frac{3H}{K^2}\sum_{n=0}^{\infty}\frac{(2KLM)^{n+1}}{M\,(n+1)!} = \frac{3H}{K^2 M}\left(e^{2KLM} - 1\right),$$
$$\sum_{n=0}^{\infty}\left|\frac{\partial z_n}{\partial x}\right| < \frac{3H}{K}\sum_{n=0}^{\infty}\frac{(2KLM)^n}{n!} = \frac{3H}{K}\,e^{2KLM}, \qquad \sum_{n=0}^{\infty}\left|\frac{\partial z_n}{\partial y}\right| < \frac{3H}{K}\,e^{2KLM}.$$
We thus conclude that the sequences
$$u_n = u_0 + z_0 + z_1 + \cdots + z_{n-1}, \qquad \frac{\partial u_n}{\partial x} = \frac{\partial u_0}{\partial x} + \frac{\partial z_0}{\partial x} + \cdots + \frac{\partial z_{n-1}}{\partial x}, \qquad \frac{\partial u_n}{\partial y} = \frac{\partial u_0}{\partial y} + \frac{\partial z_0}{\partial y} + \cdots + \frac{\partial z_{n-1}}{\partial y}$$
converge uniformly as $n \to \infty$. We denote their limit functions by
$$u(x,y) := \lim_{n\to\infty} u_n(x,y), \qquad v(x,y) := \lim_{n\to\infty}\frac{\partial u_n}{\partial x}(x,y), \qquad w(x,y) := \lim_{n\to\infty}\frac{\partial u_n}{\partial y}(x,y).$$
Taking the limit in the integral equations (3.5.11), (3.5.12), we get
$$u(x,y) = u_1(x,y) + \int_0^y\!\!\int_0^x\big[a(\xi,\eta)v + b(\xi,\eta)w + c(\xi,\eta)u\big]\,d\xi\,d\eta,$$
$$v(x,y) = \frac{\partial u_1}{\partial x}(x,y) + \int_0^y\big[a(x,\eta)v + b(x,\eta)w + c(x,\eta)u\big]\,d\eta,$$
$$w(x,y) = \frac{\partial u_1}{\partial y}(x,y) + \int_0^x\big[a(\xi,y)v + b(\xi,y)w + c(\xi,y)u\big]\,d\xi.$$
From the first equation it follows that
$$v = u_x, \qquad w = u_y,$$
and therefore
$$u(x,y) = \varphi(x) + \psi(y) - \varphi(0) + \int_0^y\!\!\int_0^x f(\xi,\eta)\,d\xi\,d\eta + \int_0^y\!\!\int_0^x\big[a\,u_\xi + b\,u_\eta + c\,u\big]\,d\xi\,d\eta, \qquad (3.5.10)$$
so that $u$ satisfies (3.5.9) and clearly also the boundary conditions. Thus there exists a solution of the Darboux problem for (3.5.9) with the data (3.5.6), (3.5.7). We shall prove that the solution is unique. In fact, suppose that there are two solutions $u_1(x,y)$ and $u_2(x,y)$, and let
$$U(x,y) = u_1(x,y) - u_2(x,y).$$
The function $U$ satisfies the homogeneous integral equation
$$U(x,y) = \int_0^y\!\!\int_0^x\big(aU_\xi + bU_\eta + cU\big)\,d\xi\,d\eta.$$
Let $H_1$ be a majorant of $|U|$, $|U_x|$ and $|U_y|$ for $x \in [0,L]$, $y \in [0,L]$. Following the successive estimates above we get
$$|U| < 3H_1 M^{n+1}K^n\frac{(x+y)^{n+2}}{(n+2)!} \le \frac{3H_1}{K^2 M}\,\frac{(2KLM)^{n+2}}{(n+2)!}$$
for every $n$. It follows that $U(x,y) \equiv 0$, i.e. $u_1(x,y) \equiv u_2(x,y)$. Thus the solution of the Darboux problem is unique.

If $a$, $b$ and $c$ are constant, we make the transformation
$$u = v\,e^{\lambda x + \mu y},$$
for which
$$u_x = (v_x + \lambda v)e^{\lambda x + \mu y}, \qquad u_y = (v_y + \mu v)e^{\lambda x + \mu y}, \qquad u_{xy} = (v_{xy} + \mu v_x + \lambda v_y + \lambda\mu v)e^{\lambda x + \mu y}.$$
Thus
$$v_{xy} + \mu v_x + \lambda v_y + \lambda\mu v = a(v_x + \lambda v) + b(v_y + \mu v) + cv + fe^{-\lambda x - \mu y},$$
and it follows that
$$v_{xy} = (a - \mu)v_x + (b - \lambda)v_y + (a\lambda + b\mu + c - \lambda\mu)v + fe^{-\lambda x - \mu y}.$$
Letting $\lambda = b$, $\mu = a$, we get
$$v_{xy} = (ab + c)v + fe^{-(bx + ay)}. \qquad (3.5.14)$$
If $ab + c = 0$, then we can find the solution explicitly in the form (3.5.8); if $ab + c \ne 0$, then we use the method of the next paragraph.
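As a sanity check on the simplest case, the explicit Darboux formula (3.5.8) can be evaluated numerically. The helper name `darboux` and the test data below are ours; we verify against $u(x,y) = xy + x + y$, for which $u_{xy} = 1$, $\varphi(x) = x$, $\psi(y) = y$ and $\varphi(0) = 0$.

```python
def darboux(phi, psi, f, x, y, m=200):
    # u(x,y) = phi(x) + psi(y) - phi(0) + int_0^y int_0^x f(xi,eta) d xi d eta,
    # the explicit solution (3.5.8) of u_xy = f with u(x,0)=phi, u(0,y)=psi.
    # The double integral is approximated by the midpoint rule on an m x m grid.
    hx, hy = x / m, y / m
    s = 0.0
    for i in range(m):
        for j in range(m):
            s += f((i + 0.5) * hx, (j + 0.5) * hy)
    return phi(x) + psi(y) - phi(0.0) + s * hx * hy

# Check against u(x,y) = x*y + x + y, i.e. u_xy = 1, phi(x) = x, psi(y) = y.
val = darboux(lambda x: x, lambda y: y, lambda x, y: 1.0, 0.7, 0.4)
```

The compatibility condition $\varphi(0) = \psi(0)$ holds here, as required in the derivation above.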

3.6 Solution of general linear hyperbolic equations


3.6.1 The Green formula
Let
$$L[u] = u_{xx} - u_{yy} + a(x,y)u_x + b(x,y)u_y + c(x,y)u, \qquad (3.6.1)$$
where $a(x,y)$, $b(x,y)$, $c(x,y)$ are differentiable functions, and let
$$L^*[v] = v_{xx} - v_{yy} - (av)_x - (bv)_y + cv \qquad (3.6.2)$$
be the adjoint operator. We have
$$vL[u] - uL^*[v] = \frac{\partial H}{\partial x} + \frac{\partial K}{\partial y}, \qquad (3.6.3)$$
where we take
$$H = vu_x - v_xu + avu = (vu)_x - (2v_x - av)u = -(vu)_x + (2u_x + au)v,$$
$$K = -vu_y + v_yu + bvu = -(vu)_y + (2v_y + bv)u = (uv)_y - (2u_y - bu)v. \qquad (3.6.4)$$
The Green formula says that
$$\iint_G\big(vL[u] - uL^*[v]\big)\,d\xi\,d\eta = \iint_G\left(\frac{\partial H}{\partial x} + \frac{\partial K}{\partial y}\right)d\xi\,d\eta = -\int_S\big(H\cos(n,x) + K\cos(n,y)\big)\,ds = \int_S\big(H\,dy - K\,dx\big). \qquad (3.6.5)$$
Here $n$ is the inward normal vector to $S$, that is,
$$dx = \cos(n,y)\,ds, \qquad dy = -\cos(n,x)\,ds,$$
assuming that $S$ is traced out anticlockwise, so as to keep the domain of interest always on the left, and $ds$ is taken to be positive.

3.6.2 Riemann's method


We consider the problem of finding the solution of the linear hyperbolic equation
$$L[u] = u_{xx} - u_{yy} + a(x,y)u_x + b(x,y)u_y + c(x,y)u = -f(x,y), \qquad (3.6.6)$$
which satisfies the Cauchy data on a curve $C$:
$$u\big|_C = \varphi(x), \qquad u_n\big|_C = \psi(x). \qquad (3.6.7)$$
Here $u_n$ is the normal derivative of $u$ with respect to the curve $C$. The conditions posed on $C$ are as follows: $C$ is described by $y = f(x)$, where $f(x)$ is a differentiable function, and every characteristic $y - x = \mathrm{const}$, $y + x = \mathrm{const}$ intersects the curve $C$ at most once (for this it is necessary that $|f'(x)| \le 1$).

(Figure: the curve $C$ with points $P$ and $Q$, and the characteristic triangle with apex $M$.)

Let $P$ and $Q$ be two points on $C$. Through $P$ and $Q$ we draw the two characteristics; they intersect at a point $M$. Consider the domain $MPQ$ bounded by the arc $PQ$ of the curve $C$ and by the segments $QM$ and $MP$. From the Green formula we have
$$\iint_{MPQ}\big(vL[u] - uL^*[v]\big)\,d\xi\,d\eta = \int_P^Q(H\,dy - K\,dx) + \int_Q^M(H\,dy - K\,dx) + \int_M^P(H\,dy - K\,dx). \qquad (3.6.8)$$
On $QM$ and $MP$ we have
$$dy = -dx = -\frac{ds}{\sqrt{2}} \ \text{ on } QM, \qquad dy = dx = -\frac{ds}{\sqrt{2}} \ \text{ on } MP,$$
where $s$ is the arc length along $QM$ and $MP$. Thus, using the forms (3.6.4) of $H$ and $K$ and integrating the exact differential $d(uv)$,
$$\int_Q^M(H\,dy - K\,dx) = -\int_Q^M d(uv) + \int_Q^M\left(2\frac{\partial v}{\partial s} - \frac{a+b}{\sqrt{2}}\,v\right)u\,ds = -(uv)_M + (uv)_Q + \int_Q^M\left(2\frac{\partial v}{\partial s} - \frac{a+b}{\sqrt{2}}\,v\right)u\,ds.$$
Analogously,
$$\int_M^P(H\,dy - K\,dx) = -(uv)_M + (uv)_P + \int_P^M\left(2\frac{\partial v}{\partial s} - \frac{b-a}{\sqrt{2}}\,v\right)u\,ds.$$
The last two equalities and (3.6.8) yield
$$(uv)_M = \frac{(uv)_P + (uv)_Q}{2} + \int_P^M\left(\frac{\partial v}{\partial s} - \frac{b-a}{2\sqrt{2}}\,v\right)u\,ds + \int_Q^M\left(\frac{\partial v}{\partial s} - \frac{b+a}{2\sqrt{2}}\,v\right)u\,ds$$
$$\qquad + \frac{1}{2}\int_P^Q(H\,dy - K\,dx) - \frac{1}{2}\iint_{MPQ}\big(vL[u] - uL^*[v]\big)\,d\xi\,d\eta. \qquad (3.6.9)$$
Let $u$ be a solution of the problem (3.6.6), (3.6.7), and let $v$ be the solution of the following problem, depending on $M$ as a parameter:
$$L^*[v] = v_{\xi\xi} - v_{\eta\eta} - (av)_\xi - (bv)_\eta + cv = 0 \quad \text{in } MPQ, \qquad (3.6.10)$$
$$\frac{\partial v}{\partial s} = \frac{b-a}{2\sqrt{2}}\,v \ \text{ on the characteristic } MP, \qquad \frac{\partial v}{\partial s} = \frac{b+a}{2\sqrt{2}}\,v \ \text{ on the characteristic } MQ, \qquad v(M) = 1. \qquad (3.6.11)$$
The conditions (3.6.11) yield
$$v = \exp\left(\int_{s_0}^s\frac{b-a}{2\sqrt{2}}\,ds\right) \ \text{ on } MP, \qquad v = \exp\left(\int_{s_0}^s\frac{b+a}{2\sqrt{2}}\,ds\right) \ \text{ on } MQ,$$
where $s_0$ is the value of $s$ at the point $M$. Thus we obtain a Darboux problem for $v$, which is uniquely determined in $MPQ$. The function $v$ is called the Riemann function. Hence, from (3.6.6) and (3.6.9) we have
$$u(M) = \frac{(uv)_P + (uv)_Q}{2} + \frac{1}{2}\int_P^Q\Big[v(u_\xi\,dy + u_\eta\,dx) - u(v_\xi\,dy + v_\eta\,dx) + uv(a\,dy - b\,dx)\Big] + \frac{1}{2}\iint_{MPQ} vf\,d\xi\,d\eta. \qquad (3.6.12)$$
By this formula our problem (3.6.6), (3.6.7) is solved, since $v$ is known in $MPQ$, and on $C$ both $u$ and $\frac{du}{dn}$ are given:
$$u\big|_C = \varphi(x),$$
$$u_x = u_s\cos(x,s) + u_n\cos(x,n) = \frac{\varphi'(x) - \psi(x)f'(x)\sqrt{1 + f'^2(x)}}{1 + f'^2(x)},$$
$$u_y = u_s\cos(y,s) + u_n\cos(y,n) = \frac{\varphi'(x)f'(x) + \psi(x)\sqrt{1 + f'^2(x)}}{1 + f'^2(x)}.$$
The formula (3.6.12) ensures the existence and uniqueness of the solution of the problem (3.6.6), (3.6.7). One can also show that the function $u$ defined by (3.6.12) satisfies in fact the conditions of the problem. We do not pursue this in this lecture.

3.6.3 An application of the Riemann method


Consider the problem
$$u_{yy} = u_{xx} + f_1(x,y), \quad -\infty < x < \infty,\ y > 0 \qquad \left(y = at,\ f_1 = \frac{f}{a^2}\right),$$
$$u(x,0) = \varphi(x), \quad -\infty < x < \infty,$$
$$u_y(x,0) = \psi_1(x), \qquad \psi_1 = \frac{\psi}{a}, \quad -\infty < x < \infty.$$
The operator $L(u) = u_{xx} - u_{yy}$ is self-adjoint, that is $L(u) = L^*(u)$, and $PQ$ is now an interval of the axis $y = 0$.

(Figure: the characteristic triangle with apex $M(x,y)$ and base points $P(x-y, 0)$, $Q(x+y, 0)$ on the axis $y = 0$.)

From (3.6.11) we see that $v \equiv 1$ in $MPQ$. Since $dy = 0$ on $PQ$, we get
$$u(M) = \frac{u(P) + u(Q)}{2} + \frac{1}{2}\int_P^Q u_\eta\,d\xi + \frac{1}{2}\iint_{MPQ} f_1(\xi,\eta)\,d\xi\,d\eta.$$
Let the coordinates of $M$ be $(x,y)$; then the coordinates of $P$ are $(x-y, 0)$ and those of $Q$ are $(x+y, 0)$. It follows that
$$u(x,y) = \frac{\varphi(x-y) + \varphi(x+y)}{2} + \frac{1}{2}\int_{x-y}^{x+y}\psi_1(\xi)\,d\xi + \frac{1}{2}\int_0^y\!\!\int_{x-(y-\eta)}^{x+(y-\eta)} f_1(\xi,\eta)\,d\xi\,d\eta.$$

Changing the variable back ($y = at$, $\eta = a\tau$), we get
$$u(x,t) = \frac{\varphi(x-at) + \varphi(x+at)}{2} + \frac{1}{2a}\int_{x-at}^{x+at}\psi(\xi)\,d\xi + \frac{1}{2a}\int_0^t\!\!\int_{x-a(t-\tau)}^{x+a(t-\tau)} f(\xi,\tau)\,d\xi\,d\tau.$$
Chapter 4 Parabolic Equations


In this chapter we shall consider the parabolic equation
$$c\rho\,\frac{\partial u}{\partial t} = \frac{\partial}{\partial x}\left(k\,\frac{\partial u}{\partial x}\right)$$
($k$: thermal conductivity (Warmeleitfahigkeit); $c$: specific heat capacity (spezifische Warmekapazitat); $\rho$: density (die Dichte)) in a simple form. Namely, we shall consider the heat equation
$$u_t = a^2 u_{xx}, \qquad a^2 = \frac{k}{c\rho}.$$

4.1 Boundary conditions


In contrast to hyperbolic equations, for parabolic equations we need only one initial condition, $u(x,t_0)$ at the initial time $t = t_0$. Suppose that we consider our process on a bar $[0,l]$.

a) At one end of the bar ($x = 0$ or $x = l$) the temperature is prescribed:
$$u(0,t) = \mu(t) \quad (\text{or } u(l,t) = \mu_l(t)),$$
where $\mu(t)$ (respectively $\mu_l(t)$) is a given function defined on $[t_0, T]$. Here $T$ characterizes the time interval on which our heat transfer process is considered.

b) At one end of the interval $[0,l]$ the derivative of $u$ is given:
$$\frac{\partial u}{\partial x}(l,t) = \nu(t).$$
Normally, the heat flux is what is prescribed:
$$Q(l,t) = k\,\frac{\partial u}{\partial x}\Big|_{x=l}.$$
Thus
$$\frac{\partial u}{\partial x}(l,t) = \frac{Q(l,t)}{k} = \nu(t).$$
Note that the heat flux at $x = 0$ is
$$Q(0,t) = -k\,\frac{\partial u}{\partial x}\Big|_{x=0}.$$

c) At one end one has a relation between the solution and its derivative:
$$\frac{\partial u}{\partial x}(l,t) = -\lambda\big[u(l,t) - \theta(t)\big].$$
This corresponds to Newton's law of heat exchange between the boundary of the body and its outside environment, whose temperature $\theta(t)$ is given. At $x = 0$ the condition has the other sign:
$$\frac{\partial u}{\partial x}(0,t) = \lambda\big[u(0,t) - \theta(t)\big].$$

d) If the interval $[0,l]$ is very long and the time interval $[t_0, T]$ is small, then we may ignore the boundary conditions and consider the initial value problem for the parabolic equation in the domain $-\infty < x < \infty$, $t \ge t_0$, with the initial condition
$$u(x,t_0) = \varphi(x) \quad (-\infty < x < \infty),$$
where $\varphi(x)$ is a given function.

e) Similarly, if the temperature at one end of the interval is given and the other end is very far away, then we can consider the parabolic equation in the domain $0 \le x < \infty$, $t \ge t_0$, with the conditions
$$u(x,t_0) = \varphi(x) \quad (0 < x < \infty), \qquad u(0,t) = \mu(t) \quad (t \ge t_0).$$
Here $\varphi$ and $\mu$ are given functions.

f) There are also nonlinear boundary conditions, for example
$$k\,\frac{\partial u}{\partial x}(0,t) = \sigma\big[u^4(0,t) - \theta^4(0,t)\big].$$
This condition is called the Stefan-Boltzmann law.

Definition 4.1.1 A function $u$ is called a solution of the first boundary value problem for a parabolic equation if
i) it is defined and continuous in the closed domain $0 \le x \le l$, $t_0 \le t \le T$,
ii) it satisfies the parabolic equation in the domain $0 < x < l$, $t_0 < t < T$,
iii) it satisfies the initial and boundary conditions
$$u(x,t_0) = \varphi(x), \qquad u(0,t) = \mu_1(t), \qquad u(l,t) = \mu_2(t),$$
where $\varphi(x)$, $\mu_1(t)$ and $\mu_2(t)$ are continuous functions with
$$\varphi(0) = \mu_1(t_0)\ \big[= u(0,t_0)\big], \qquad \varphi(l) = \mu_2(t_0)\ \big[= u(l,t_0)\big].$$
Definition 4.1.2 If condition iii) in Definition 4.1.1 is replaced by
$$u(x,t_0) = \varphi(x), \qquad \frac{\partial u}{\partial x}(0,t) = \nu_1(t), \qquad \frac{\partial u}{\partial x}(l,t) = \nu_2(t),$$
then we have "the second boundary value problem".
Definition 4.1.3 If condition iii) in Definition 4.1.1 is replaced by
$$u(x,t_0) = \varphi(x), \qquad \frac{\partial u}{\partial x}(0,t) = \lambda\big[u(0,t) - \theta_1(t)\big], \qquad \frac{\partial u}{\partial x}(l,t) = -\lambda\big[u(l,t) - \theta_2(t)\big],$$
then we have "the third boundary value problem".

We shall consider in the sequel the following questions: i) Is the solution of our problems unique? ii) Does a solution exist? iii) Do the solutions depend continuously on the data?

4.2 The maximum principle


In the sequel we shall consider the equation with constant coefficients
$$v_t = a^2 v_{xx} + \beta v_x + \gamma v. \qquad (4.2.1)$$
If we make the transformation
$$v = e^{\lambda x + \mu t}u, \qquad \lambda = -\frac{\beta}{2a^2}, \qquad \mu = \gamma - \frac{\beta^2}{4a^2},$$
then we have the equation
$$u_t = a^2 u_{xx}. \qquad (4.2.2)$$
Thus, it suffices to consider the equation (4.2.2).

The maximum principle: Let a function $u(x,t)$ be defined and continuous in the closed domain $0 \le t \le T$, $0 \le x \le l$ and satisfy the equation (4.2.2) in the domain $0 < t \le T$, $0 < x < l$. Then the function $u(x,t)$ attains its maximum (minimum) at $t = 0$, or at $x = 0$, or at $x = l$.

Proof: Let $M$ be the maximum of $u(x,t)$ for $t = 0$ ($0 \le x \le l$), or for $x = 0$, $x = l$ ($0 \le t \le T$). Assume, contrary to the assertion, that $u(x,t)$ attains its maximum at some interior point $(x_0,t_0)$ with $0 < x_0 < l$, $0 < t_0 \le T$:
$$u(x_0,t_0) = M + \varepsilon, \qquad \varepsilon > 0. \qquad (4.2.3)$$
Since $0 < x_0 < l$ and $u(x,t)$ attains its maximum at $(x_0,t_0)$, we have
$$\frac{\partial u}{\partial x}(x_0,t_0) = 0, \qquad \frac{\partial^2 u}{\partial x^2}(x_0,t_0) \le 0. \qquad (4.2.4)$$

Further, since the function $u(x_0,t)$ attains its maximum at $t_0 \in (0,T]$, we have
$$\frac{\partial u}{\partial t}(x_0,t_0) \ge 0. \qquad (4.2.5)$$
(In fact, $\frac{\partial u}{\partial t}(x_0,t_0) = 0$ if $t_0 < T$, and $\frac{\partial u}{\partial t}(x_0,t_0) \ge 0$ if $t_0 = T$.)

We shall find a point $(x_1,t_1) \in (0,l) \times (0,T]$ such that $\frac{\partial^2 u}{\partial x^2}(x_1,t_1) \le 0$ and $\frac{\partial u}{\partial t}(x_1,t_1) > 0$. Consider the function
$$v(x,t) = u(x,t) + k(t_0 - t), \qquad (4.2.6)$$
where $k$ is a constant. We have
$$v(x_0,t_0) = u(x_0,t_0) = M + \varepsilon \qquad \text{and} \qquad |k(t_0 - t)| \le kT.$$
Choose $k > 0$ such that $kT < \frac{\varepsilon}{2}$, that is $k < \frac{\varepsilon}{2T}$; then the maximum of $v(x,t)$ at $t = 0$ ($0 \le x \le l$), or $x = 0$, or $x = l$ ($0 \le t \le T$) is not greater than $M + \frac{\varepsilon}{2}$:
$$v(x,t) \le M + \frac{\varepsilon}{2} \quad (\text{for } t = 0, \text{ or } x = 0,\ x = l), \qquad (4.2.7)$$
since the first term on the right-hand side of (4.2.6) is not greater than $M$ and the second one is not greater than $\frac{\varepsilon}{2}$. Since the function $v(x,t)$ is continuous on $[0,l] \times [0,T]$, there is a point $(x_1,t_1) \in [0,l] \times [0,T]$ where $v(x,t)$ attains its maximum:
$$v(x_1,t_1) \ge v(x_0,t_0) = M + \varepsilon.$$
From (4.2.7) we see that $0 < x_1 < l$ and $0 < t_1 \le T$. It follows that
$$v_{xx}(x_1,t_1) = u_{xx}(x_1,t_1) \le 0 \qquad \text{and} \qquad v_t(x_1,t_1) = u_t(x_1,t_1) - k \ge 0.$$
The last inequality says that $u_t(x_1,t_1) \ge k > 0$. Thus, at $(x_1,t_1)$ the function $u(x,t)$ does not satisfy the equation (4.2.2), since $u_t > 0 \ge a^2 u_{xx}$ there. This contradicts the assumption of the maximum principle, so the first part of the maximum principle is proved. To prove the second part we note that the function $u_1 = -u$ satisfies (4.2.2) and all other conditions of the maximum principle in the case of "maximum". $\Box$
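The maximum principle has a discrete analogue worth illustrating (this sketch is ours, not part of the original text). For the explicit finite-difference scheme for (4.2.2) with $r = a^2\Delta t/h^2 \le 1/2$, each new value is a convex combination of three old values, so the numerical solution can never exceed its initial and boundary extremes. The name `heat_explicit` is hypothetical.

```python
import math

def heat_explicit(u0, a, ell, T, m, nsteps):
    # Explicit scheme for u_t = a^2 u_xx with u = 0 at x = 0 and x = ell.
    # For r = a^2 dt / h^2 <= 1/2, the update
    #   u_i <- (1 - 2r) u_i + r u_{i+1} + r u_{i-1}
    # is a convex combination, so the scheme satisfies a discrete maximum principle.
    h = ell / m
    dt = T / nsteps
    r = a * a * dt / (h * h)
    assert r <= 0.5, "discrete maximum principle / stability condition violated"
    u = [u0(i * h) for i in range(m + 1)]
    u[0] = u[m] = 0.0
    for _ in range(nsteps):
        u = [0.0] + [u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
                     for i in range(1, m)] + [0.0]
    return u

# Initial data sin(pi x): the exact solution is exp(-pi^2 t) sin(pi x).
u = heat_explicit(lambda x: math.sin(math.pi * x), 1.0, 1.0, 0.1, 20, 200)
```

The computed values stay within the initial range $[0,1]$ and decay at roughly the exact rate $e^{-\pi^2 t}$.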

4.3 Applications of the maximum principle


4.3.1 The uniqueness theorem
The uniqueness theorem: Let $u_1(x,t)$ and $u_2(x,t)$ be continuous functions defined on $0 \le x \le l$, $0 \le t \le T$, satisfying the equation

$u_t = a^2 u_{xx} + f(x,t), \quad 0 < x < l,\ t > 0,$  (4.3.1)

and the same initial and boundary value conditions

$u_1(x,0) = u_2(x,0) = \varphi(x), \quad 0 \le x \le l,$
$u_1(0,t) = u_2(0,t) = \mu_1(t), \quad 0 \le t \le T,$
$u_1(l,t) = u_2(l,t) = \mu_2(t), \quad 0 \le t \le T.$

Then $u_1(x,t) \equiv u_2(x,t)$.

Proof: Let

$v(x,t) := u_1(x,t) - u_2(x,t).$

Since $u_1$ and $u_2$ are continuous in $[0,l] \times [0,T]$, the function $v$ is also continuous there. Further, $v$ satisfies the equation $v_t = a^2 v_{xx}$ for $0 < x < l$, $t > 0$. Thus, $v$ satisfies the conditions of the maximum principle; that is, $v$ attains its maximum and its minimum for $t = 0$, or $x = 0$, or $x = l$. Since $v(x,0) = 0$, $v(0,t) = 0$ and $v(l,t) = 0$, we have $v(x,t) \equiv 0$. Thus, $u_1(x,t) \equiv u_2(x,t)$. $\Box$

4.3.2 Comparison of solutions


a) Let $u_1(x,t)$ and $u_2(x,t)$ be solutions of the heat equation with

$u_1(x,0) \le u_2(x,0), \quad u_1(0,t) \le u_2(0,t), \quad u_1(l,t) \le u_2(l,t).$

Then

$u_1(x,t) \le u_2(x,t)$

for $(x,t) \in [0,l] \times [0,T]$. In fact, the function $v = u_2 - u_1$ satisfies the heat equation, and $v(x,0) \ge 0$, $v(0,t) \ge 0$, $v(l,t) \ge 0$. From the maximum principle we have

$v(x,t) \ge 0 \quad \text{for } 0 < x < l,\ 0 < t \le T.$

b) If the solutions $\underline{u}(x,t)$, $u(x,t)$ and $\overline{u}(x,t)$ of the heat equation satisfy the inequalities

$\underline{u}(x,t) \le u(x,t) \le \overline{u}(x,t) \quad \text{for } t = 0,\ x = 0 \text{ and } x = l,$

then these inequalities remain valid for all $(x,t) \in [0,l] \times [0,T]$. This statement follows immediately from a).

c) Let $u_1(x,t)$ and $u_2(x,t)$ be two solutions of the heat equation which satisfy the inequality

$|u_1(x,t) - u_2(x,t)| \le \varepsilon \quad \text{for } t = 0,\ x = 0,\ x = l.$

Then

$|u_1(x,t) - u_2(x,t)| \le \varepsilon \quad \forall (x,t) \in [0,l] \times [0,T].$

The proof of this statement follows immediately from b) with $\underline{u}(x,t) = -\varepsilon$, $u(x,t) = u_1(x,t) - u_2(x,t)$, $\overline{u}(x,t) = \varepsilon$.

Now we turn to the question of the stability of the solution of the heat equation. Let $u(x,t)$ be the solution of the first boundary value problem for the heat equation with the initial and boundary value conditions

$u(x,0) = \varphi(x), \quad u(0,t) = \mu_1(t), \quad u(l,t) = \mu_2(t).$

Let these functions be given approximately by $\varphi^*(x)$, $\mu_1^*(t)$ and $\mu_2^*(t)$, respectively:

$|\varphi(x) - \varphi^*(x)| \le \varepsilon, \quad |\mu_1(t) - \mu_1^*(t)| \le \varepsilon, \quad |\mu_2(t) - \mu_2^*(t)| \le \varepsilon,$

such that there is a solution $u^*(x,t)$ of the heat equation with these data. Then

$|u^*(x,t) - u(x,t)| \le \varepsilon.$

4.3.3 The uniqueness theorem in unbounded domains


Consider the problem

$u_t = a^2 u_{xx}, \quad -\infty < x < \infty,\ t > 0.$  (4.3.2)

Uniqueness Theorem: Let $u_1(x,t)$ and $u_2(x,t)$ be two solutions of (4.3.2) which are continuous and bounded by $M$:

$|u_1(x,t)| < M, \quad |u_2(x,t)| < M \quad \text{for } -\infty < x < \infty,\ t \ge 0.$

If $u_1(x,0) = u_2(x,0)$, $-\infty < x < \infty$, then $u_1(x,t) \equiv u_2(x,t)$, $-\infty < x < \infty$, $t \ge 0$.

Proof: Let $v(x,t) = u_1(x,t) - u_2(x,t)$. The function $v(x,t)$ is continuous, satisfies the heat equation and is bounded by $2M$:

$|v(x,t)| \le |u_1(x,t)| + |u_2(x,t)| < 2M, \quad -\infty < x < \infty,\ t \ge 0.$

Furthermore, $v(x,0) = 0$. We shall apply the maximum principle to prove that $v(x,t) = 0$, $-\infty < x < \infty$, $t \ge 0$. Let $L$ be a positive number. Consider the function

$V(x,t) = \frac{4M}{L^2}\left(\frac{x^2}{2} + a^2 t\right).$  (4.3.3)

The function $V(x,t)$ is continuous and satisfies the heat equation (4.3.2). Furthermore,

$V(x,0) \ge |v(x,0)| = 0, \qquad V(\pm L, t) \ge 2M > |v(\pm L, t)|.$

Applying the maximum principle in the domain $|x| \le L$ (to $V - v$ and $V + v$), we get

$-\frac{4M}{L^2}\left(\frac{x^2}{2} + a^2 t\right) \le v(x,t) \le \frac{4M}{L^2}\left(\frac{x^2}{2} + a^2 t\right).$

Keeping $(x,t)$ fixed and letting $L$ tend to infinity, we get $v(x,t) = 0$. Thus, $v(x,t) \equiv 0$. $\Box$


4.4 The Fourier method


In this section we shall apply the Fourier method to the following problem:

$u_t = a^2 u_{xx} + f(x,t), \quad 0 < x < l,\ 0 < t \le T,$  (4.4.1)
$u(x,0) = \varphi(x), \quad 0 \le x \le l,$  (4.4.2)
$u(0,t) = \mu_1(t), \quad u(l,t) = \mu_2(t), \quad 0 \le t \le T.$  (4.4.3)

4.4.1 The homogeneous problem


Find the solution of the problem

$u_t = a^2 u_{xx}, \quad 0 < x < l,\ 0 < t \le T,$  (4.4.4)
$u(x,0) = \varphi(x), \quad 0 \le x \le l,$  (4.4.5)
$u(0,t) = u(l,t) = 0, \quad 0 \le t \le T.$  (4.4.6)

To do this we first seek a solution of the equation $u_t = a^2 u_{xx}$ which does not vanish identically and satisfies the boundary conditions

$u(0,t) = 0, \quad u(l,t) = 0.$  (4.4.7)

Furthermore, this solution is to have the form

$u(x,t) = X(x)\,T(t),$  (4.4.8)

where $X(x)$ depends only on $x$ and $T(t)$ depends only on $t$. Putting (4.4.8) into (4.4.4) and then dividing both sides by $a^2 X T$, we get

$\frac{1}{a^2}\,\frac{T'}{T} = \frac{X''}{X} = -\lambda,$  (4.4.9)

where $\lambda = \text{const}$, since $T'/T$ depends only on $t$ and $X''/X$ depends only on $x$. This leads us to consider the equations

$X'' + \lambda X = 0,$  (4.4.10)
$T' + a^2 \lambda T = 0.$  (4.4.11)

Because of (4.4.7) we have

$X(0) = X(l) = 0.$  (4.4.12)

Thus, we obtain the following eigenvalue problem for $X(x)$:

$X'' + \lambda X = 0, \quad X(0) = 0, \quad X(l) = 0.$  (4.4.13)

We have proved that the eigenvalues of this problem are

$\lambda_n = \left(\frac{n\pi}{l}\right)^2, \quad n = 1, 2, 3, \ldots$  (4.4.14)

70

CHAPTER 4. PARABOLIC EQUATIONS


and the eigenfunctions associated with them are

$X_n(x) = \sin\frac{n\pi x}{l}.$  (4.4.15)

With these values $\lambda_n$ the solutions of (4.4.11) have the form

$T_n(t) = c_n\,e^{-a^2\lambda_n t},$  (4.4.16)

where the $c_n$ will be determined later. We see that the function

$u_n(x,t) = X_n(x)\,T_n(t) = c_n\,e^{-a^2\lambda_n t}\,X_n(x)$  (4.4.17)

is a special solution of (4.4.4) which satisfies the homogeneous boundary conditions (4.4.6). Consider the formal series

$u(x,t) = \sum_{n=1}^{\infty} c_n\,e^{-\left(\frac{n\pi}{l}\right)^2 a^2 t}\sin\frac{n\pi x}{l}.$  (4.4.18)

The function $u(x,t)$ satisfies (4.4.6), since every term of the series does so. If $u(x,t)$ is also to satisfy the initial condition, then

$\varphi(x) = u(x,0) = \sum_{n=1}^{\infty} c_n \sin\frac{n\pi x}{l}.$  (4.4.19)

Thus, the $c_n$ are the Fourier coefficients of $\varphi(x)$ when it is expanded on $(0,l)$ in a sine series:

$c_n = \varphi_n = \frac{2}{l}\int_0^l \varphi(\xi)\sin\frac{n\pi\xi}{l}\,d\xi.$  (4.4.20)

The series (4.4.18) is now completely defined. We shall find conditions which guarantee the convergence of the series and, furthermore, allow us to differentiate (4.4.18) termwise twice w.r.t. $x$ and once w.r.t. $t$. We shall prove that the series

$\sum_{n=1}^{\infty}\frac{\partial u_n}{\partial t} \quad \text{and} \quad \sum_{n=1}^{\infty}\frac{\partial^2 u_n}{\partial x^2}$

converge uniformly for $t \ge \bar{t} > 0$ ($\bar{t}$ fixed). For this, we note that

$\frac{\partial u_n}{\partial t} = -c_n\left(\frac{n\pi a}{l}\right)^2 e^{-\left(\frac{n\pi}{l}\right)^2 a^2 t}\sin\frac{n\pi x}{l}.$

Let $\varphi$ be bounded: $|\varphi(x)| < M$. Then

$|c_n| = \left|\frac{2}{l}\int_0^l \varphi(\xi)\sin\frac{n\pi\xi}{l}\,d\xi\right| < 2M.$



Hence,

$\left|\frac{\partial u_n}{\partial t}\right| < 2M\left(\frac{\pi a n}{l}\right)^2 e^{-\left(\frac{n\pi}{l}\right)^2 a^2 \bar{t}}, \quad t \ge \bar{t},$

and

$\left|\frac{\partial^2 u_n}{\partial x^2}\right| < 2M\left(\frac{\pi n}{l}\right)^2 e^{-\left(\frac{n\pi}{l}\right)^2 a^2 \bar{t}}, \quad t \ge \bar{t}.$

Generally,

$\left|\frac{\partial^{k+l} u_n}{\partial t^k\,\partial x^l}\right| < 2M\left(\frac{n\pi}{l}\right)^{2k+l} a^{2k}\,e^{-\left(\frac{n\pi}{l}\right)^2 a^2 \bar{t}}, \quad t \ge \bar{t}.$

Consider the majorant series $\sum_{n=1}^{\infty}\alpha_n$, where

$\alpha_n = A\,n^q\,e^{-\left(\frac{n\pi}{l}\right)^2 a^2 \bar{t}}.$  (4.4.21)

Series of this kind converge by the d'Alembert (ratio) criterion:

$\lim_{n\to\infty}\frac{\alpha_{n+1}}{\alpha_n} = \lim_{n\to\infty}\left(1 + \frac{1}{n}\right)^q e^{-\left(\frac{\pi}{l}\right)^2 a^2 (2n+1)\bar{t}} = 0.$

Thus, the series (4.4.18) is infinitely differentiable for $t \ge \bar{t} > 0$. Since $\bar{t}$ is arbitrary, we can say that $u(x,t)$ is infinitely differentiable for $t > 0$; furthermore, it is clear that for $t > 0$ it satisfies the equation (4.4.4).

Theorem 4.4.1 If $\varphi$ is piecewise differentiable and $\varphi(0) = \varphi(l) = 0$, then the series

$u(x,t) = \sum_{n=1}^{\infty} c_n\,e^{-\left(\frac{n\pi}{l}\right)^2 a^2 t}\sin\frac{n\pi x}{l}$  (4.4.22)

is a continuous function for $t \ge 0$.

In fact, the inequalities

$|u_n(x,t)| \le |c_n| \quad (t \ge 0,\ 0 \le x \le l)$

yield the uniform convergence of the series (4.4.18) for $t \ge 0$, $0 \le x \le l$. For $t > 0$ we have proved this already; for $t = 0$ it follows directly, because $\varphi(0) = \varphi(l) = 0$ and $\varphi$ is piecewise differentiable.
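The construction (4.4.18)–(4.4.20) can be sketched directly. The following snippet computes a truncated Fourier solution for an illustrative datum $\varphi(x) = x(l-x)$ (the datum, the number of terms and the quadrature rule are assumptions for the example):

```python
import numpy as np

# Truncated series (4.4.18)/(4.4.22) with coefficients (4.4.20).
a, l = 1.0, 1.0

def phi(x):
    return x * (l - x)          # piecewise differentiable, phi(0) = phi(l) = 0

def fourier_u(x, t, n_terms=50):
    m = 2000
    dxi = l / m
    xi = (np.arange(m) + 0.5) * dxi       # midpoint rule for (4.4.20)
    u = np.zeros_like(x, dtype=float)
    for n in range(1, n_terms + 1):
        cn = (2.0 / l) * np.sum(phi(xi) * np.sin(n * np.pi * xi / l)) * dxi
        u += cn * np.exp(-(n * np.pi / l)**2 * a**2 * t) * np.sin(n * np.pi * x / l)
    return u

x = np.linspace(0.0, l, 101)
err0 = np.max(np.abs(fourier_u(x, 0.0) - phi(x)))   # series reproduces phi at t = 0
decay = np.max(np.abs(fourier_u(x, 0.05)))          # amplitude decays for t > 0
print(err0, decay)
```

The small value of `err0` reflects the uniform convergence at $t = 0$ asserted in Theorem 4.4.1, and `decay` reflects the exponential damping of each mode.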


4.4.2 The Green function


From (4.4.22) we have

$u(x,t) = \sum_{n=1}^{\infty} c_n\,e^{-\left(\frac{n\pi}{l}\right)^2 a^2 t}\sin\frac{n\pi x}{l}$
$\qquad = \sum_{n=1}^{\infty}\left[\frac{2}{l}\int_0^l \varphi(\xi)\sin\frac{n\pi\xi}{l}\,d\xi\right] e^{-\left(\frac{n\pi}{l}\right)^2 a^2 t}\sin\frac{n\pi x}{l}$
$\qquad = \int_0^l\left[\frac{2}{l}\sum_{n=1}^{\infty} e^{-\left(\frac{n\pi}{l}\right)^2 a^2 t}\sin\frac{n\pi x}{l}\sin\frac{n\pi\xi}{l}\right]\varphi(\xi)\,d\xi.$

We can change the order of summation and integration for $t > 0$, since the series

$\sum_{n=1}^{\infty} e^{-\left(\frac{n\pi}{l}\right)^2 a^2 t}\sin\frac{n\pi x}{l}\sin\frac{n\pi\xi}{l} \quad (t > 0)$

converges uniformly w.r.t. $\xi$. Set

$G(x,\xi,t) = \frac{2}{l}\sum_{n=1}^{\infty} e^{-\left(\frac{n\pi}{l}\right)^2 a^2 t}\sin\frac{n\pi x}{l}\sin\frac{n\pi\xi}{l}.$  (4.4.23)

The function $u(x,t)$ can then be written in the form

$u(x,t) = \int_0^l G(x,\xi,t)\,\varphi(\xi)\,d\xi.$  (4.4.24)

The function $G(x,\xi,t)$ is called the Green function. It has a physical meaning. We shall show that $G(x,\xi,t)$, as a function of $x$, represents the temperature in the bar $0 \le x \le l$ at time $t$, if the temperature at the initial moment $t = 0$ equals zero while, at the same moment, a heat quantity $Q$ (to be determined below) is produced at the point $x = \xi$, and at the ends of the bar ($x = 0$, $x = l$) the temperature is kept at zero.

The notion "a heat quantity $Q$ is produced at the point $\xi$" means that it is produced in a sufficiently small neighbourhood of $\xi$. The temperature change $\varphi_\varepsilon(\xi')$ which produces this heat quantity vanishes outside the interval $(\xi-\varepsilon, \xi+\varepsilon)$ and, inside this interval, $\varphi_\varepsilon$ is a positive, continuous and differentiable function such that

$\int_{\xi-\varepsilon}^{\xi+\varepsilon} c\rho\,\varphi_\varepsilon(\xi')\,d\xi' = Q.$  (4.4.25)

Note that the left-hand side of (4.4.25) represents the heat quantity produced by $\varphi_\varepsilon$. The temperature in this case is

$u_\varepsilon(x,t) = \int_0^l G(x,\xi',t)\,\varphi_\varepsilon(\xi')\,d\xi'.$  (4.4.26)

Since $G(x,\xi',t)$ for $t > 0$ is a continuous function of $\xi'$, the mean value theorem gives

$u_\varepsilon(x,t) = \int_{\xi-\varepsilon}^{\xi+\varepsilon} G(x,\xi',t)\,\varphi_\varepsilon(\xi')\,d\xi' = G(x,\bar{\xi},t)\int_{\xi-\varepsilon}^{\xi+\varepsilon}\varphi_\varepsilon(\xi')\,d\xi' = G(x,\bar{\xi},t)\,\frac{Q}{c\rho},$  (4.4.27)

where $\bar{\xi}$ is a point lying between $\xi-\varepsilon$ and $\xi+\varepsilon$. Letting $\varepsilon \to 0$, and taking into account that $G(x,\xi,t)$ is a continuous function of $\xi$ for $t > 0$, we get

$\lim_{\varepsilon\to 0} u_\varepsilon(x,t) = \frac{Q}{c\rho}\,G(x,\xi,t) = \frac{Q}{c\rho}\,\frac{2}{l}\sum_{n=1}^{\infty} e^{-\left(\frac{n\pi}{l}\right)^2 a^2 t}\sin\frac{n\pi x}{l}\sin\frac{n\pi\xi}{l}.$  (4.4.28)

(Here we suppose that the limit of $\int_{\xi-\varepsilon}^{\xi+\varepsilon}\varphi_\varepsilon(\xi')\,d\xi'$ as $\varepsilon \to 0$ exists and that $\lim_{\varepsilon\to 0}\int_{\xi-\varepsilon}^{\xi+\varepsilon} c\rho\,\varphi_\varepsilon(\xi')\,d\xi' = Q$.) Thus, $G(x,\xi,t)$ represents the temperature influence at the instant $t$ of a heat pole of intensity $Q = c\rho$, placed at time $t = 0$ at the point $\xi \in (0,l)$.

We shall prove that $G(x,\xi,t) \ge 0$ for any $x$, $\xi$ and $t > 0$. Consider the function $\varphi_\varepsilon$ defined above. Since $\varphi_\varepsilon(\xi') > 0$ for $\xi' \in (\xi-\varepsilon, \xi+\varepsilon)$ and $u_\varepsilon(0,t) = u_\varepsilon(l,t) = 0$, the maximum principle gives

$u_\varepsilon(x,t) \ge 0, \quad 0 \le x \le l,\ t > 0.$

Thus,

$u_\varepsilon(x,t) = G(x,\bar{\xi},t)\,\frac{Q}{c\rho} \ge 0 \quad (t > 0).$

Letting $\varepsilon \to 0$, we obtain

$G(x,\xi,t) \ge 0, \quad 0 \le x,\xi \le l,\ t > 0.$
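Two structural properties of (4.4.23) — symmetry in $x$ and $\xi$, and the nonnegativity just proved — can be checked on a truncation of the series. All numerical parameters below are illustrative assumptions:

```python
import numpy as np

# Truncation of the Green function (4.4.23).
a, l, N = 1.0, 1.0, 200

def G(x, xi, t):
    n = np.arange(1, N + 1)
    return (2.0 / l) * np.sum(np.exp(-(n * np.pi / l)**2 * a**2 * t)
                              * np.sin(n * np.pi * x / l)
                              * np.sin(n * np.pi * xi / l))

t = 0.01
pts = np.linspace(0.05, 0.95, 19)
# each term is symmetric in (x, xi), so G(x, xi, t) = G(xi, x, t)
sym_err = max(abs(G(x, 0.3, t) - G(0.3, x, t)) for x in pts)
# nonnegativity, as proved via the maximum principle above
min_val = min(G(x, xi, t) for x in pts for xi in pts)
print(sym_err, min_val)
```

For $t > 0$ the series converges so fast that the truncation with $N = 200$ terms is accurate to machine precision at $t = 0.01$.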

4.4.3 Boundary value problems with non-smooth initial conditions


We have considered our first boundary value problem in the class of functions continuous in the closed domain $[0,l] \times [0,T]$. This is somewhat restrictive: for example, if $u(x,0) = u_0 \ne 0$ while the boundary values vanish, then the solution must be discontinuous at the points $(0,0)$ and $(l,0)$. This leads to the idea of considering our problem for non-smooth initial conditions.

Theorem 4.4.2 Let $\varphi$ be a continuous function defined on $[0,l]$ with $\varphi(0) = \varphi(l) = 0$. Then the solution of the equation

$u_t = a^2 u_{xx} \quad (0 < x < l,\ t > 0),$  (4.4.4)

which is continuous in $[0,l] \times [0,T]$ and satisfies the conditions

$u(0,t) = u(l,t) = 0, \quad t \in [0,T],$  (4.4.6)
$u(x,0) = \varphi(x), \quad x \in [0,l],$  (4.4.5)

is unique and can be represented by

$u(x,t) = \int_0^l G(x,\xi,t)\,\varphi(\xi)\,d\xi.$  (4.4.24)

Proof: The case where $\varphi$ is continuous and piecewise differentiable is already proved. Consider a sequence of continuous, piecewise differentiable functions $\varphi_n(x)$ (with $\varphi_n(0) = \varphi_n(l) = 0$) which converges uniformly to $\varphi(x)$ (as $\varphi_n(x)$ we can take the piecewise linear function which coincides with $\varphi(x)$ at the points $\frac{lk}{n}$, $k = 0, 1, \ldots, n$). For each $\varphi_n$ we can define $u_n(x,t)$ by (4.4.24), since $\varphi_n$ is continuous and piecewise differentiable. The functions $u_n(x,t)$ converge uniformly. In fact, for $\varepsilon > 0$ there exists an $n(\varepsilon)$ such that

$|\varphi_{n_1}(x) - \varphi_{n_2}(x)| < \varepsilon \quad (0 \le x \le l) \quad \forall\, n_1, n_2 \ge n(\varepsilon),$

since the $\varphi_n$ converge uniformly. It then follows by the maximum principle that

$|u_{n_1}(x,t) - u_{n_2}(x,t)| < \varepsilon \quad (0 \le x \le l,\ 0 \le t \le T)$

if $n_1, n_2 \ge n(\varepsilon)$. Thus, $\{u_n(x,t)\}$ converges uniformly to a function $u(x,t)$. Since each $u_n(x,t)$ is continuous, $u(x,t)$ is continuous, too. For any fixed $(x,t) \in [0,l] \times [0,T]$ we have

$u(x,t) = \lim_{n\to\infty} u_n(x,t) = \lim_{n\to\infty}\int_0^l G(x,\xi,t)\,\varphi_n(\xi)\,d\xi = \int_0^l G(x,\xi,t)\,\varphi(\xi)\,d\xi.$

Thus, the function $u(x,t)$ is continuous and satisfies the condition (4.4.5). Further, as we have proved in § 4.4.1, this function also satisfies (4.4.6). The uniqueness follows immediately from the maximum principle. $\Box$

Theorem 4.4.3 Let $\varphi$ be a piecewise continuous function. Then the function $u(x,t)$ defined by (4.4.24) represents a solution of the equation (4.4.4), satisfies the condition (4.4.5) at each point of continuity of $\varphi$, and is continuous where $\varphi$ is continuous.

Proof: First, let $\varphi$ be linear:

$\varphi(x) = cx.$  (4.4.29)

Consider the sequence

$\varphi_n(x) = \begin{cases} cx, & 0 \le x \le l\left(1 - \frac{1}{n}\right), \\[2pt] c(n-1)(l-x), & l\left(1 - \frac{1}{n}\right) \le x \le l, \end{cases} \qquad n \in \mathbb{N}.$

[Figure: graph of $\varphi_n(x)$, increasing linearly up to $x = l\left(1 - \frac{1}{n}\right)$ and then decreasing linearly to $0$ at $x = l$.]
The functions $\varphi_n(x)$ are continuous and $\varphi_n(0) = \varphi_n(l) = 0$; hence the functions $u_n(x,t)$ defined by (4.4.24) for $\varphi_n$ are solutions of (4.4.4) which satisfy (4.4.6) and

$u_n(x,0) = \varphi_n(x).$

Since

$\varphi_n(x) \le \varphi_{n+1}(x) \quad (0 \le x \le l),$

the maximum principle gives

$u_n(x,t) \le u_{n+1}(x,t).$

The function $U_0(x) = cx$ is a continuous solution of the heat equation. The maximum principle now gives

$u_n(x,t) \le U_0(x),$

as this is valid for $t = 0$, $x = 0$ and $x = l$. The sequence $\{u_n(x,t)\}$ is monotonically increasing and bounded above by the bounded function $U_0(x)$, so it converges. We have

$u(x,t) = \lim_{n\to\infty} u_n(x,t) = \lim_{n\to\infty}\int_0^l G(x,\xi,t)\,\varphi_n(\xi)\,d\xi = \int_0^l G(x,\xi,t)\,\varphi(\xi)\,d\xi \le U_0(x),$

because we can pass to the limit under the integral. In § 4.4.1 we have proved that $u(0,t) = u(l,t) = 0$ and that $u(x,t)$ satisfies (4.4.4) for $t > 0$. It remains to show that it is continuous for $t = 0$ and $0 \le x < l$. Let $x_0 < l$. We choose $n$ such that $x_0 < l\left(1 - \frac{1}{n}\right)$. In this case $\varphi_n(x_0) = U_0(x_0)$. Noting that

$u_n(x,t) \le u(x,t) \le U_0(x)$

and

$\lim_{\substack{x\to x_0\\ t\to 0}} u_n(x,t) = \lim_{x\to x_0} U_0(x) = \varphi(x_0),$

we conclude that the limit

$\lim_{\substack{x\to x_0\\ t\to 0}} u(x,t) = \varphi(x_0)$

exists and is independent of how $x \to x_0$ and $t \to 0$. This implies the continuity of $u(x,t)$ at $(x_0,0)$; moreover, the function is bounded by $U_0(x)$. Thus, the theorem is proved for $\varphi(x) = cx$. Replacing $x$ by $l-x$, we see that the theorem also holds for $\varphi(x) = b(l-x)$.


Thus, the theorem is correct for functions of the form $\varphi(x) = B + Ax$. Further, the theorem is correct for any continuous function $\varphi(x)$ which need not satisfy the condition $\varphi(0) = \varphi(l) = 0$. In fact, any such function can be represented in the form

$\varphi(x) = \varphi(0) + \frac{x}{l}\left(\varphi(l) - \varphi(0)\right) + \Phi(x),$

where $\Phi(x)$ is a continuous function which vanishes at the ends of the interval: $\Phi(0) = \Phi(l) = 0$. By superposition, the theorem is then proved as far as continuity is concerned.

Let $\varphi$ now be a piecewise continuous function; we shall prove that the function $u(x,t)$ defined by (4.4.24) gives a solution of (4.4.4) and satisfies (4.4.6). Suppose that $x_0$ is a point where $\varphi$ is continuous. We shall prove that for any positive $\varepsilon$ there exists a $\delta(\varepsilon)$ such that if $|x - x_0| < \delta(\varepsilon)$ and $t < \delta(\varepsilon)$, then $|u(x,t) - \varphi(x_0)| < \varepsilon$. Because $\varphi$ is continuous at $x_0$, there exists an $\eta(\varepsilon)$ such that

$|\varphi(x) - \varphi(x_0)| \le \frac{\varepsilon}{2} \quad \text{for } |x - x_0| < \eta(\varepsilon).$

It follows that

$\varphi(x_0) - \frac{\varepsilon}{2} \le \varphi(x) \le \varphi(x_0) + \frac{\varepsilon}{2} \quad \text{for } |x - x_0| < \eta(\varepsilon).$  (4.4.30)

Let $\overline{\varphi}(x)$ and $\underline{\varphi}(x)$ be continuously differentiable functions such that

$\overline{\varphi}(x) = \varphi(x_0) + \frac{\varepsilon}{2}$ for $|x - x_0| < \eta(\varepsilon)$, and $\overline{\varphi}(x) \ge \varphi(x)$ for $|x - x_0| \ge \eta(\varepsilon)$,  (4.4.31)
$\underline{\varphi}(x) = \varphi(x_0) - \frac{\varepsilon}{2}$ for $|x - x_0| < \eta(\varepsilon)$, and $\underline{\varphi}(x) \le \varphi(x)$ for $|x - x_0| \ge \eta(\varepsilon)$.  (4.4.32)

From (4.4.30) we have

$\underline{\varphi}(x) \le \varphi(x) \le \overline{\varphi}(x).$

Consider the functions

$\overline{u}(x,t) = \int_0^l G(x,\xi,t)\,\overline{\varphi}(\xi)\,d\xi, \qquad \underline{u}(x,t) = \int_0^l G(x,\xi,t)\,\underline{\varphi}(\xi)\,d\xi.$  (4.4.33)

Since $\overline{\varphi}$ and $\underline{\varphi}$ are continuous, $\overline{u}(x,t)$ and $\underline{u}(x,t)$ are continuous at $x_0$. Hence there exists a $\delta(\varepsilon)$ such that

$|\overline{u}(x,t) - \overline{\varphi}(x)| \le \frac{\varepsilon}{2}, \quad |\underline{u}(x,t) - \underline{\varphi}(x)| \le \frac{\varepsilon}{2} \quad \text{for } |x - x_0| < \delta(\varepsilon),\ t < \delta(\varepsilon).$

It means

$\overline{u}(x,t) \le \overline{\varphi}(x) + \frac{\varepsilon}{2} = \varphi(x_0) + \varepsilon, \qquad \underline{u}(x,t) \ge \underline{\varphi}(x) - \frac{\varepsilon}{2} = \varphi(x_0) - \varepsilon \quad \text{for } |x - x_0| < \delta(\varepsilon),\ t < \delta(\varepsilon).$

Since the function $G(x,\xi,t)$ is nonnegative, the inequalities $\underline{\varphi} \le \varphi \le \overline{\varphi}$ and (4.4.33) yield

$\underline{u}(x,t) \le u(x,t) \le \overline{u}(x,t).$  (4.4.34)

Consequently,

$\varphi(x_0) - \varepsilon \le u(x,t) \le \varphi(x_0) + \varepsilon \quad \text{for } |x - x_0| < \delta(\varepsilon),\ t < \delta(\varepsilon).$

That is,

$|u(x,t) - \varphi(x_0)| \le \varepsilon \quad \text{for } |x - x_0| < \delta(\varepsilon),\ t < \delta(\varepsilon).$

Furthermore, (4.4.34) also yields the boundedness of $u(x,t)$.

4.4.4 The non-homogeneous heat equation


We consider now the non-homogeneous heat equation

$u_t = a^2 u_{xx} + f(x,t), \quad 0 < x < l,\ 0 < t \le T,$  (4.4.1)

with the initial condition

$u(x,0) = 0, \quad 0 \le x \le l,$  (4.4.35)

and the boundary conditions

$u(0,t) = u(l,t) = 0, \quad 0 < t \le T.$  (4.4.6)

We shall find a solution of this problem in the form

$u(x,t) = \sum_{n=1}^{\infty} u_n(t)\sin\frac{n\pi x}{l}.$  (4.4.36)

In order to determine $u(x,t)$, we have to find the $u_n(t)$. For this purpose we represent $f(x,t)$ in the form

$f(x,t) = \sum_{n=1}^{\infty} f_n(t)\sin\frac{n\pi x}{l},$

where

$f_n(t) = \frac{2}{l}\int_0^l f(\xi,t)\sin\frac{n\pi\xi}{l}\,d\xi.$  (4.4.37)

Putting (4.4.36) and (4.4.37) into (4.4.1), we get

$\sum_{n=1}^{\infty}\left\{\dot{u}_n(t) + \left(\frac{n\pi}{l}\right)^2 a^2 u_n(t) - f_n(t)\right\}\sin\frac{n\pi x}{l} = 0.$

It follows that

$\dot{u}_n(t) = -a^2\left(\frac{n\pi}{l}\right)^2 u_n(t) + f_n(t).$  (4.4.38)

On the other hand,

$u(x,0) = \sum_{n=1}^{\infty} u_n(0)\sin\frac{n\pi x}{l} = 0,$

and therefore

$u_n(0) = 0.$  (4.4.39)

Solving (4.4.38) with the initial condition (4.4.39) we obtain

$u_n(t) = \int_0^t e^{-\left(\frac{n\pi}{l}\right)^2 a^2 (t-\tau)} f_n(\tau)\,d\tau.$  (4.4.40)

Hence,

$u(x,t) = \sum_{n=1}^{\infty}\left[\int_0^t e^{-\left(\frac{n\pi}{l}\right)^2 a^2 (t-\tau)} f_n(\tau)\,d\tau\right]\sin\frac{n\pi x}{l}.$  (4.4.41)

Taking (4.4.37) into account, we can write

$u(x,t) = \int_0^t\int_0^l\left\{\frac{2}{l}\sum_{n=1}^{\infty} e^{-\left(\frac{n\pi}{l}\right)^2 a^2 (t-\tau)}\sin\frac{n\pi x}{l}\sin\frac{n\pi\xi}{l}\right\} f(\xi,\tau)\,d\xi\,d\tau$  (4.4.42)
$\qquad =: \int_0^t\int_0^l G(x,\xi,t-\tau)\,f(\xi,\tau)\,d\xi\,d\tau,$  (4.4.43)

where

$G(x,\xi,t-\tau) = \frac{2}{l}\sum_{n=1}^{\infty} e^{-\left(\frac{n\pi}{l}\right)^2 a^2 (t-\tau)}\sin\frac{n\pi x}{l}\sin\frac{n\pi\xi}{l}$

is in fact our Green function defined by (4.4.23). A physical interpretation of (4.4.42), similar to that of (4.4.24), could be given here; we omit it.
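For a forcing consisting of a single Fourier mode, formulas (4.4.40)–(4.4.41) collapse to one explicit term, which can be cross-checked against a direct discretisation. The forcing $f(x,t) = \sin(\pi x/l)$ and all numerical parameters are ad-hoc assumptions for this sketch:

```python
import numpy as np

# (4.4.40)-(4.4.41) for f(x,t) = sin(pi x / l): only n = 1 contributes,
# f_1 = 1, and the time integral is explicit.
a, l = 1.0, 1.0
lam1 = (np.pi / l)**2 * a**2

def u_series(x, t):
    u1 = (1.0 - np.exp(-lam1 * t)) / lam1     # int_0^t e^{-lam1 (t - tau)} dtau
    return u1 * np.sin(np.pi * x / l)

# cross-check with an explicit finite-difference solution
nx, dt, T = 101, 2.0e-5, 0.2
dx = l / (nx - 1)
x = np.linspace(0.0, l, nx)
u = np.zeros(nx)                               # zero initial and boundary data
for _ in range(int(round(T / dt))):
    u[1:-1] += dt * ((u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
                     + np.sin(np.pi * x[1:-1] / l))
err = np.max(np.abs(u - u_series(x, T)))
print(err)     # small discretisation error
```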

4.4.5 The non-homogeneous rst boundary value problem


Find the solution of the problem

$u_t = a^2 u_{xx} + f(x,t), \quad 0 < x < l,\ 0 < t \le T,$  (4.4.1)
$u(x,0) = \varphi(x), \quad 0 \le x \le l,$  (4.4.2)
$u(0,t) = \mu_1(t), \quad u(l,t) = \mu_2(t), \quad 0 < t \le T.$  (4.4.3)

In order to solve this problem, we reduce it to a problem with homogeneous boundary values. In fact, consider the function

$U(x,t) = \mu_1(t) + \frac{x}{l}\left[\mu_2(t) - \mu_1(t)\right].$  (4.4.44)

The function $U$ satisfies the boundary conditions

$U(0,t) = \mu_1(t) \quad \text{and} \quad U(l,t) = \mu_2(t),$

and the equation

$U_t = a^2 U_{xx} + f_1(x,t), \quad \text{where } f_1(x,t) = U_t(x,t) - a^2 U_{xx}(x,t).$

It is clear now that the function

$v(x,t) = u(x,t) - U(x,t)$

is a solution of the problem

$v_t = a^2 v_{xx} + (f - f_1), \quad v(0,t) = v(l,t) = 0, \quad v(x,0) = \varphi(x) - U(x,0).$

The solution $v$ can be found by the Fourier method.
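The homogenisation step (4.4.44) is purely algebraic and easy to sketch; the boundary data `mu1`, `mu2` below are illustrative choices, not from the text:

```python
import numpy as np

# The linear-in-x correction U(x,t) = mu1(t) + (x/l)(mu2(t) - mu1(t)),
# which absorbs the non-homogeneous boundary values.
l = 1.0

def mu1(t):
    return np.cos(t)        # example boundary datum at x = 0

def mu2(t):
    return 1.0 + t          # example boundary datum at x = l

def U(x, t):
    return mu1(t) + (x / l) * (mu2(t) - mu1(t))

t = 0.37
# U matches the boundary conditions (up to rounding), so v = u - U vanishes there
print(U(0.0, t), mu1(t), U(l, t), mu2(t))
```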

4.5 Problems on unbounded domains


4.5.1 The Green function in unbounded domains
We have derived the Green function

$G_l(x,\xi,t) := \frac{2}{l}\sum_{n=1}^{\infty} e^{-\left(\frac{n\pi}{l}\right)^2 a^2 t}\sin\frac{n\pi x}{l}\sin\frac{n\pi\xi}{l}$  (4.5.1)

for the problem in the finite domain $(0,l)$. Here we have used the subscript $l$ to indicate the domain $(0,l)$. We want to develop this theory when the domain is expanded to the whole of $\mathbb{R}$. In doing so, we rewrite (4.5.1) in another form, such that the ends of the bar are $-\frac{l}{2}$ and $\frac{l}{2}$. Let $x' = x - \frac{l}{2}$, $\xi' = \xi - \frac{l}{2}$. Then $x'$ and $\xi'$ lie in $\left(-\frac{l}{2}, \frac{l}{2}\right)$ and the Green function has the form

$G_l(x',\xi',t) = \frac{2}{l}\sum_{n=1}^{\infty} e^{-\left(\frac{n\pi}{l}\right)^2 a^2 t}\sin\frac{n\pi}{l}\left(x' + \frac{l}{2}\right)\sin\frac{n\pi}{l}\left(\xi' + \frac{l}{2}\right).$  (4.5.2)

We rewrite (4.5.2) by the following arguments. If $n$ is even, that is $n = 2m$, then

$\sin\frac{2m\pi}{l}\left(x' + \frac{l}{2}\right)\sin\frac{2m\pi}{l}\left(\xi' + \frac{l}{2}\right) = \sin\frac{2m\pi x'}{l}\sin\frac{2m\pi\xi'}{l}.$

Further, if $n$ is odd, that is $n = 2m+1$, then

$\sin\frac{(2m+1)\pi}{l}\left(x' + \frac{l}{2}\right)\sin\frac{(2m+1)\pi}{l}\left(\xi' + \frac{l}{2}\right) = \cos\frac{(2m+1)\pi x'}{l}\cos\frac{(2m+1)\pi\xi'}{l}.$

Thus,

$G_l(x',\xi',t) = \frac{2}{l}{\sum_n}'' e^{-\left(\frac{n\pi}{l}\right)^2 a^2 t}\sin\frac{n\pi x'}{l}\sin\frac{n\pi\xi'}{l} + \frac{2}{l}{\sum_n}' e^{-\left(\frac{n\pi}{l}\right)^2 a^2 t}\cos\frac{n\pi x'}{l}\cos\frac{n\pi\xi'}{l},$  (4.5.3)

where $\sum''$ runs over the even $n$ and $\sum'$ over the odd $n$.

We study the limit of $G_l(x',\xi',t)$ as $l \to \infty$. With $\sigma_n = \frac{n\pi}{l}$ and $\Delta\sigma = \frac{2\pi}{l}$ we have

$\frac{2}{l}{\sum_n}'' e^{-\sigma_n^2 a^2 t}\sin(\sigma_n x')\sin(\sigma_n \xi') = \frac{1}{\pi}{\sum_n}'' f_1(\sigma_n)\,\Delta\sigma,$  (4.5.4)

where

$f_1(\sigma) = e^{-\sigma^2 a^2 t}\sin(\sigma x')\sin(\sigma\xi').$

The sum (4.5.4) is an integral sum for the function $f_1(\sigma)$ over the interval $0 \le \sigma < \infty$. For $l \to \infty$, $\Delta\sigma \to 0$. Taking the limit under the integral, we obtain

$\lim_{\Delta\sigma\to 0}\frac{1}{\pi}{\sum_n}'' f_1(\sigma_n)\,\Delta\sigma = \frac{1}{\pi}\int_0^{\infty} f_1(\sigma)\,d\sigma = \frac{1}{\pi}\int_0^{\infty} e^{-\sigma^2 a^2 t}\sin(\sigma x')\sin(\sigma\xi')\,d\sigma.$  (4.5.5)

Remark 4.5.1 Here we have used the following result: For a continuous function $f$ defined on $[0,\infty)$, if the integral sums

$\sum_{i=1}^{\infty} f(\bar{\sigma}_i)(\sigma_i - \sigma_{i-1}) \qquad (\sigma_{i-1} \le \bar{\sigma}_i \le \sigma_i)$

converge for any choice of the intermediate points $\bar{\sigma}_i$, then the integral

$\int_0^{\infty} f(\sigma)\,d\sigma$

exists.

Analogously,

$\frac{2}{l}{\sum_n}' e^{-\sigma_n^2 a^2 t}\cos(\sigma_n x')\cos(\sigma_n \xi') = \frac{1}{\pi}{\sum_n}' f_2(\sigma_n)\,\Delta\sigma,$  (4.5.6)

where

$f_2(\sigma) = e^{-\sigma^2 a^2 t}\cos(\sigma x')\cos(\sigma\xi'), \qquad \Delta\sigma = \frac{2\pi}{l}, \qquad \sigma_n = \frac{n\pi}{l}.$



As

81

! 0, we get
1 X lim0 1 0 f2( n ) !
n=1

1 Z1 f ( ) d = 1 Z1e? 2a2t cos( x0) cos( 0 ) d : = 2


0 0

(4:5:7)

Finally, we have

G(x; ; t) = llim Gl(x; ; t) !1


1 Z1
0 0

1 Z1 e? 2a2t sin( x) sin( ) d + 1 Z1e? 2a2t cos( x) cos( ) d =


2 2 e? a t cos (x ? ) d : 0

The Green function for the unbounded domain (?1; 1) has thus the form 1 Z1 e? 2a2t cos (x ? ) d : G(x; ; t) =
0

(4:5:8)

We now calculate the integral

$I = \int_0^{\infty} e^{-\alpha\sigma^2}\cos(\mu\sigma)\,d\sigma \qquad (\alpha > 0),$  (4.5.9)

where the parameters $\alpha$ and $\mu$ are given. In order to calculate this integral, we fix $\alpha$ and vary $\mu$, so that $I$ is a function of $\mu$: $I(\mu)$. It is clear that

$\frac{dI}{d\mu} = -\int_0^{\infty}\sigma\,e^{-\alpha\sigma^2}\sin(\mu\sigma)\,d\sigma = \left.\frac{e^{-\alpha\sigma^2}}{2\alpha}\sin(\mu\sigma)\right|_0^{\infty} - \frac{\mu}{2\alpha}\int_0^{\infty} e^{-\alpha\sigma^2}\cos(\mu\sigma)\,d\sigma = -\frac{\mu}{2\alpha}\,I(\mu).$

Thus,

$\frac{I'(\mu)}{I(\mu)} = -\frac{\mu}{2\alpha}.$

Hence

$I(\mu) = C\,e^{-\frac{\mu^2}{4\alpha}}.$

On the other hand,

$I(0) = \int_0^{\infty} e^{-\alpha\sigma^2}\,d\sigma = \frac{1}{\sqrt{\alpha}}\int_0^{\infty} e^{-z^2}\,dz = \frac{1}{2}\sqrt{\frac{\pi}{\alpha}}.$

Thus,

$I(\mu) = \int_0^{\infty} e^{-\alpha\sigma^2}\cos(\mu\sigma)\,d\sigma = \frac{1}{2}\sqrt{\frac{\pi}{\alpha}}\;e^{-\frac{\mu^2}{4\alpha}}.$  (4.5.10)

Applying (4.5.10) to (4.5.8) (with $\alpha = a^2 t$ and $\mu = x - \xi$), we get

$G(x,\xi,t) = \frac{1}{2\sqrt{\pi a^2 t}}\;e^{-\frac{(x-\xi)^2}{4a^2 t}},$  (4.5.11)

which is the Green function for the unbounded domain $(-\infty,\infty)$. This function is often called the fundamental solution of the heat equation.

Properties of the fundamental solution

i) The function $G(x,\xi,t-t_0)$, as a function of $x$ and $t$, is a solution of the heat equation. In fact, writing $A = a^2(t-t_0)$,

$G_x = -\frac{x-\xi}{4\sqrt{\pi}\,A^{3/2}}\;e^{-\frac{(x-\xi)^2}{4A}},$

$G_{xx} = \left[-\frac{1}{4\sqrt{\pi}\,A^{3/2}} + \frac{(x-\xi)^2}{8\sqrt{\pi}\,A^{5/2}}\right] e^{-\frac{(x-\xi)^2}{4A}},$

$G_t = a^2\left[-\frac{1}{4\sqrt{\pi}\,A^{3/2}} + \frac{(x-\xi)^2}{8\sqrt{\pi}\,A^{5/2}}\right] e^{-\frac{(x-\xi)^2}{4A}}.$

Thus,

$G_t = a^2 G_{xx}.$

ii) $\displaystyle\int_{-\infty}^{\infty} G(x,\xi,t-t_0)\,dx = 1$ for $t > t_0$. Indeed, with $z = \frac{x-\xi}{2\sqrt{A}}$,

$\int_{-\infty}^{\infty} G(x,\xi,t-t_0)\,dx = \frac{1}{2\sqrt{\pi A}}\int_{-\infty}^{\infty} e^{-\frac{(x-\xi)^2}{4A}}\,dx = \frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty} e^{-z^2}\,dz = 1.$
4.5.2 Heat conduction in the unbounded domain $(-\infty,\infty)$

Find a bounded function $u(x,t)$ ($-\infty < x < \infty$, $t \ge 0$) which satisfies the equation

$u_t = a^2 u_{xx} \quad (-\infty < x < \infty,\ t > 0)$  (4.5.12)

and the initial condition

$u(x,0) = \varphi(x), \quad -\infty < x < \infty.$  (4.5.13)

Since $u$ is bounded, the solution of our problem is unique (§ 4.3.3). We shall prove that if $\varphi$ is bounded, say $|\varphi| < M$, then for $t > 0$ the Poisson integral

$u(x,t) = \frac{1}{2\sqrt{\pi a^2 t}}\int_{-\infty}^{\infty} e^{-\frac{(x-\xi)^2}{4a^2 t}}\,\varphi(\xi)\,d\xi$  (4.5.14)



is a solution of (4.5.12), and that $\lim_{x\to x_0,\,t\to 0} u(x,t) = \varphi(x_0)$ at every point $x_0$ where $\varphi$ is continuous.

We note that

$|u(x,t)| \le \int_{-\infty}^{\infty} G(x,\xi,t)\,|\varphi(\xi)|\,d\xi < M\int_{-\infty}^{\infty} G(x,\xi,t)\,d\xi = M.$
Thus, the function $u(x,t)$ is bounded. We shall prove that for $t > 0$, $u(x,t)$ satisfies the heat equation (4.5.12). In doing so, we prove that we can differentiate (4.5.14) under the integral. For example, formally differentiating both sides of (4.5.14) we get

$\frac{\partial u}{\partial x} = \int_{-\infty}^{\infty}\frac{\partial}{\partial x}G(x,\xi,t)\,\varphi(\xi)\,d\xi,$

and it remains to prove that the integral on the right-hand side converges uniformly. It is enough to prove the differentiability of $u$ at a point $(x_0,t_0)$, that is, to prove the uniform convergence of the above integral in a neighbourhood of this point:

$t_1 \le t_0 \le t_2, \qquad |x - x_0| \le \Delta x.$

[Figure: the rectangle $t_1 \le t \le t_2$, $x_0 - \Delta x \le x \le x_0 + \Delta x$ around the point $(x_0,t_0)$ in the $(x,t)$-plane.]

In doing so, we shall find a positive function $F(\xi)$ which does not depend on $x$ and $t$ and dominates $|G_x(x,\xi,t)\,\varphi(\xi)|$ from above:

$|G_x(x,\xi,t)\,\varphi(\xi)| \le F(\xi),$  (4.5.15)

and

$\int_{\xi_1}^{\infty} F(\xi)\,d\xi < \infty, \qquad \int_{-\infty}^{-\xi_1} F(\xi)\,d\xi < \infty,$  (4.5.16)

where $\xi_1$ is a number from which on (4.5.15) is valid. For $|x - x_0| \le \Delta x$, $t_1 \le t \le t_2$ and $|\xi - x_0| \ge \Delta x$ we have

$\left|\frac{\partial G}{\partial x}(x,\xi,t)\right||\varphi(\xi)| = \frac{|\xi - x|}{4\sqrt{\pi}\,(a^2 t)^{3/2}}\;e^{-\frac{(x-\xi)^2}{4a^2 t}}\,|\varphi(\xi)| \le M\,\frac{|\xi - x_0| + \Delta x}{4\sqrt{\pi}\,(a^2 t_1)^{3/2}}\;e^{-\frac{(|\xi - x_0| - \Delta x)^2}{4a^2 t_2}} =: F(\xi).$  (4.5.17)



Let $\xi_1$ be a number from which on (4.5.17) is valid. Then, with the substitution $\eta = |\xi - x_0| - \Delta x$,

$\int_{\xi_1}^{\infty} F(\xi)\,d\xi = \int_{\xi_1}^{\infty} M\,\frac{|\xi - x_0| + \Delta x}{4\sqrt{\pi}\,(a^2 t_1)^{3/2}}\;e^{-\frac{(|\xi - x_0| - \Delta x)^2}{4a^2 t_2}}\,d\xi = \int_{\xi_1 - x_0 - \Delta x}^{\infty} M\,\frac{\eta + 2\Delta x}{4\sqrt{\pi}\,(a^2 t_1)^{3/2}}\;e^{-\frac{\eta^2}{4a^2 t_2}}\,d\eta.$

This integral is clearly convergent. Thus,

$\frac{\partial u}{\partial x} = \int_{-\infty}^{\infty}\frac{\partial G}{\partial x}(x,\xi,t)\,\varphi(\xi)\,d\xi.$  (4.5.18)

Similarly, one proves that for $t > 0$ the function $u(x,t)$ can be differentiated twice w.r.t. $x$ and once w.r.t. $t$, and that $u$ satisfies the heat equation (4.5.12).

Let $x_0$ be a point where $\varphi(x)$ is continuous. It remains to prove that

$u(x,t) \longrightarrow \varphi(x_0) \quad \text{as } t \to 0 \text{ and } x \to x_0.$

For any $\varepsilon > 0$ we have to show that there is a $\delta(\varepsilon)$ such that

$|u(x,t) - \varphi(x_0)| < \varepsilon$

if $|x - x_0| < \delta(\varepsilon)$ and $t < \delta(\varepsilon)$. Since $\varphi$ is continuous at $x_0$, there exists an $\eta(\varepsilon)$ such that

$|\varphi(x) - \varphi(x_0)| < \frac{\varepsilon}{6} \quad \text{for } |x - x_0| < \eta.$  (4.5.19)

We break the integral for $u(x,t)$ into three parts as follows:

$u(x,t) = \frac{1}{2\sqrt{\pi a^2 t}}\int_{-\infty}^{\infty} e^{-\frac{(x-\xi)^2}{4a^2 t}}\varphi(\xi)\,d\xi = \frac{1}{2\sqrt{\pi a^2 t}}\left\{\int_{-\infty}^{x_1} + \int_{x_1}^{x_2} + \int_{x_2}^{\infty}\right\} e^{-\frac{(x-\xi)^2}{4a^2 t}}\varphi(\xi)\,d\xi =: u_1(x,t) + u_2(x,t) + u_3(x,t),$  (4.5.20)

where

$x_1 = x_0 - \frac{\eta}{2} \quad \text{and} \quad x_2 = x_0 + \frac{\eta}{2}.$  (4.5.21)

For the second integral we have

$u_2(x,t) = \varphi(x_0)\,\frac{1}{2\sqrt{\pi a^2 t}}\int_{x_1}^{x_2} e^{-\frac{(x-\xi)^2}{4a^2 t}}\,d\xi + \frac{1}{2\sqrt{\pi a^2 t}}\int_{x_1}^{x_2} e^{-\frac{(x-\xi)^2}{4a^2 t}}\left[\varphi(\xi) - \varphi(x_0)\right]d\xi =: I_1 + I_2.$

The integral $I_1$ can be calculated as

$I_1 = \varphi(x_0)\,\frac{1}{2\sqrt{\pi a^2 t}}\int_{x_1}^{x_2} e^{-\frac{(x-\xi)^2}{4a^2 t}}\,d\xi = \frac{\varphi(x_0)}{\sqrt{\pi}}\int_{\frac{x_1 - x}{2\sqrt{a^2 t}}}^{\frac{x_2 - x}{2\sqrt{a^2 t}}} e^{-\sigma^2}\,d\sigma,$

with $\sigma = \frac{\xi - x}{2\sqrt{a^2 t}}$, $d\sigma = \frac{d\xi}{2\sqrt{a^2 t}}$. As $|x - x_0| < \delta$ (with $\delta < \frac{\eta}{2}$), $x_1 = x_0 - \frac{\eta}{2}$, $x_2 = x_0 + \frac{\eta}{2}$, we see that

$\frac{x_1 - x}{2\sqrt{a^2 t}} \longrightarrow -\infty \quad \text{and} \quad \frac{x_2 - x}{2\sqrt{a^2 t}} \longrightarrow +\infty \quad \text{as } t \to 0.$

It follows that

$\lim_{\substack{x\to x_0\\ t\to 0}} I_1 = \varphi(x_0).$

Thus, there is a $\delta_1(\varepsilon)$ such that

$|I_1 - \varphi(x_0)| < \frac{\varepsilon}{6} \quad \text{for } |x - x_0| < \delta_1 \text{ and } t < \delta_1.$  (4.5.22)

We estimate $I_2$ as follows:

$|I_2| \le \frac{1}{2\sqrt{\pi a^2 t}}\int_{x_1}^{x_2} e^{-\frac{(x-\xi)^2}{4a^2 t}}\,|\varphi(\xi) - \varphi(x_0)|\,d\xi.$

From (4.5.21), for $x_1 < \xi < x_2$, we see that

$|\xi - x_0| < \eta.$

Taking (4.5.19) and the inequality

$\frac{1}{\sqrt{\pi}}\int_{x'}^{x''} e^{-\sigma^2}\,d\sigma < \frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty} e^{-\sigma^2}\,d\sigma = 1 \quad \forall\, x', x''$

into account, we obtain

$|I_2| \le \frac{\varepsilon}{6}\,\frac{1}{2\sqrt{\pi a^2 t}}\int_{x_1}^{x_2} e^{-\frac{(x-\xi)^2}{4a^2 t}}\,d\xi = \frac{\varepsilon}{6}\,\frac{1}{\sqrt{\pi}}\int_{\frac{x_1 - x}{2\sqrt{a^2 t}}}^{\frac{x_2 - x}{2\sqrt{a^2 t}}} e^{-\sigma^2}\,d\sigma < \frac{\varepsilon}{6}.$  (4.5.23)

Further,

$|u_3(x,t)| = \left|\frac{1}{2\sqrt{\pi a^2 t}}\int_{x_2}^{\infty} e^{-\frac{(x-\xi)^2}{4a^2 t}}\varphi(\xi)\,d\xi\right| < \frac{M}{\sqrt{\pi}}\int_{\frac{x_2 - x}{2\sqrt{a^2 t}}}^{\infty} e^{-\sigma^2}\,d\sigma \longrightarrow 0 \quad \text{as } t \to 0,\ x \to x_0,$  (4.5.24)

$|u_1(x,t)| = \left|\frac{1}{2\sqrt{\pi a^2 t}}\int_{-\infty}^{x_1} e^{-\frac{(x-\xi)^2}{4a^2 t}}\varphi(\xi)\,d\xi\right| < \frac{M}{\sqrt{\pi}}\int_{-\infty}^{\frac{x_1 - x}{2\sqrt{a^2 t}}} e^{-\sigma^2}\,d\sigma \longrightarrow 0 \quad \text{as } t \to 0,\ x \to x_0.$  (4.5.25)

(Note that as $x \to x_0$, $x_2 - x > 0$ and $x_1 - x < 0$.) Thus, there is a $\delta_2(\varepsilon)$ such that

$|u_3(x,t)| < \frac{\varepsilon}{3} \quad \text{and} \quad |u_1(x,t)| < \frac{\varepsilon}{3} \quad \text{for } |x - x_0| < \delta_2 \text{ and } t < \delta_2.$  (4.5.26)

A combination of (4.5.22), (4.5.23) and (4.5.26) gives

$|u(x,t) - \varphi(x_0)| \le |u_1| + |I_1 - \varphi(x_0)| + |I_2| + |u_3| < \frac{\varepsilon}{3} + \frac{\varepsilon}{6} + \frac{\varepsilon}{6} + \frac{\varepsilon}{3} = \varepsilon$

for $|x - x_0| < \delta$ and $t < \delta$, where $\delta = \min\{\delta_1, \delta_2\}$. Thus, we have proved that the function

$u(x,t) = \frac{1}{2\sqrt{\pi a^2 t}}\int_{-\infty}^{\infty} e^{-\frac{(x-\xi)^2}{4a^2 t}}\varphi(\xi)\,d\xi$

is bounded and satisfies the heat equation as well as the initial condition. If the initial condition is given at $t = t_0$, then the solution has the form

$u(x,t) = \frac{1}{2\sqrt{\pi a^2 (t-t_0)}}\int_{-\infty}^{\infty} e^{-\frac{(x-\xi)^2}{4a^2(t-t_0)}}\varphi(\xi)\,d\xi.$  (4.5.27)

Example: We find the solution of the problem (4.5.12), (4.5.13) in the case

$u(x,0) = \varphi(x) = \begin{cases} T_1 & \text{for } x < 0, \\ T_2 & \text{for } x > 0. \end{cases}$

In this case

$u(x,t) = \frac{1}{2\sqrt{\pi a^2 t}}\int_{-\infty}^{\infty} e^{-\frac{(x-\xi)^2}{4a^2 t}}\varphi(\xi)\,d\xi = \frac{T_1}{2\sqrt{\pi a^2 t}}\int_{-\infty}^{0} e^{-\frac{(x-\xi)^2}{4a^2 t}}\,d\xi + \frac{T_2}{2\sqrt{\pi a^2 t}}\int_{0}^{\infty} e^{-\frac{(x-\xi)^2}{4a^2 t}}\,d\xi$
$\qquad = \frac{T_1}{\sqrt{\pi}}\int_{-\infty}^{-\frac{x}{2\sqrt{a^2 t}}} e^{-\sigma^2}\,d\sigma + \frac{T_2}{\sqrt{\pi}}\int_{-\frac{x}{2\sqrt{a^2 t}}}^{\infty} e^{-\sigma^2}\,d\sigma$
$\qquad = \frac{T_1 + T_2}{2} + \frac{T_2 - T_1}{\sqrt{\pi}}\int_0^{\frac{x}{2\sqrt{a^2 t}}} e^{-\sigma^2}\,d\sigma,$  (4.5.28)

since

$\frac{1}{\sqrt{\pi}}\int_{-\infty}^{-z} e^{-\sigma^2}\,d\sigma = \frac{1}{2} - \frac{1}{\sqrt{\pi}}\int_0^{z} e^{-\sigma^2}\,d\sigma \quad \text{and} \quad \frac{1}{\sqrt{\pi}}\int_{-z}^{\infty} e^{-\sigma^2}\,d\sigma = \frac{1}{2} + \frac{1}{\sqrt{\pi}}\int_0^{z} e^{-\sigma^2}\,d\sigma.$

If $T_1 = 1$, $T_2 = 0$, then

$u(x,t) = \frac{1}{2}\left(1 - \frac{2}{\sqrt{\pi}}\int_0^{\frac{x}{2\sqrt{a^2 t}}} e^{-\sigma^2}\,d\sigma\right).$

The function

$\Phi(z) = \frac{2}{\sqrt{\pi}}\int_0^{z} e^{-\sigma^2}\,d\sigma$

is called the error function; it has many applications in probability theory.
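The closed form (4.5.28) can be checked against a direct numerical evaluation of the Poisson integral; the temperatures $T_1$, $T_2$, the evaluation point and the quadrature grid are illustrative assumptions:

```python
import numpy as np
from math import erf, sqrt, pi

# (4.5.28) for step data: u = (T1+T2)/2 + (T2-T1)/2 * erf(x / (2 a sqrt(t))).
a, T1, T2 = 1.0, 2.0, 5.0
x, t = 0.7, 0.25

u_exact = 0.5 * (T1 + T2) + 0.5 * (T2 - T1) * erf(x / (2 * a * sqrt(t)))

# direct quadrature of the Poisson integral with the same step datum
xi = np.linspace(-30.0, 30.0, 600001)
phi = np.where(xi < 0.0, T1, T2)
ker = np.exp(-(x - xi)**2 / (4 * a**2 * t))
u_quad = np.sum(ker * phi) * (xi[1] - xi[0]) / (2 * sqrt(pi * a**2 * t))

err = abs(u_quad - u_exact)
print(u_exact, err)
```

Here $\frac{2}{\sqrt{\pi}}\int_0^z e^{-\sigma^2}d\sigma$ is exactly `math.erf(z)`, so the coefficient $\frac{T_2-T_1}{\sqrt{\pi}}$ in (4.5.28) becomes $\frac{T_2-T_1}{2}\,\mathrm{erf}(\cdot)$.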

4.5.3 The boundary value problem in a half-space


Consider the heat equation in the first quadrant

$u_t = a^2 u_{xx}, \quad x > 0,\ t > 0,$  (4.5.12)

with the initial condition

$u(x,0) = \varphi(x), \quad x > 0,$  (4.5.13)

and one of the boundary value conditions

$u(0,t) = \mu(t), \quad t \ge 0$ (the first b.v.p.),
$\frac{\partial u}{\partial x}(0,t) = \nu(t), \quad t \ge 0$ (the second b.v.p.),
$\frac{\partial u}{\partial x}(0,t) = \lambda\left[u(0,t) - \theta(t)\right]$ (the third b.v.p.).

In order to guarantee the uniqueness of the solution, we assume that the solution is bounded:

$|u(x,t)| < M, \quad 0 < x < \infty,\ t \ge 0,$

where $M$ is a constant. It follows also that $|\varphi(x)| < M$. We represent the solution of the first b.v.p. in the form

$u(x,t) = u_1(x,t) + u_2(x,t),$

where $u_1$ is the solution of (4.5.12) with the conditions

$u_1(x,0) = \varphi(x), \quad u_1(0,t) = 0,$

and $u_2$ is the solution of (4.5.12) with the conditions

$u_2(x,0) = 0, \quad u_2(0,t) = \mu(t).$

Below we give two lemmas about the Poisson integral

$u(x,t) = \frac{1}{2\sqrt{\pi a^2 t}}\int_{-\infty}^{\infty} e^{-\frac{(x-\xi)^2}{4a^2 t}}\,\psi(\xi)\,d\xi.$  (4.5.14)

Lemma 4.5.2 If $\psi$ is a bounded and odd function, $\psi(x) = -\psi(-x)$, then $u(0,t) = 0$.

Proof: Since $\psi$ is bounded, the integral (4.5.14) converges, and

$u(0,t) = \frac{1}{2\sqrt{\pi a^2 t}}\int_{-\infty}^{\infty} e^{-\frac{\xi^2}{4a^2 t}}\,\psi(\xi)\,d\xi = 0,$

since the integrand is odd. $\Box$
Lemma 4.5.3 If is a bounded and even function, (x) = (?x);


@u then @x (0; t) = 0.

4.5. PROBLEMS ON UNBOUNDED DOMAINS

89

Proof:

(x ? )2 @u = ? p Z1 (x ? ) e? 4a2t ( ) d 1 = 0: @x x=0 2 ?1 2pa2t3 x=0

Let (x) be a function de ned by

8 '(x) ; x > 0 < (x) = : : ?'(?x) ; x < 0

Since ' is bounded (by M ), is bounded. It follows that the function

Z1 1 ? (x ? )2 1 p 2 e 4a2t ( ) d U (x; t) = 2p ?1 a t
is well de ned. Furthermore, since is even, U (0; t) = 0. Thus,

u1(x; t) = U (x; t) for x 0:


The function U (x; t) can be represented as follows: (x ? )2 (x ? )2 1 1 Z0 p1 e? 4a2t ( ) d + p Z1 p1 e? 4a2t ( ) d U (x; t) = 2p 2 2 0 a2 t ?1 a t (x + )2 (x ? )2 1 1 Z1 p1 e? 4a2t '( ) d + p Z1 p1 e? 4a2t '( ) d = ? 2p 2 2 0 a2 t 0 at 9 8 Z1 1 > ? (x ? )2 ? (x + )2 > = < 1 = 2p p 2 e 4a2t ? e 4a2t '( ) d : > a t> ; :
0

Thus,

9 8 Z1 1 > ? (x ? )2 ? (x + )2 > = < 1 u1(x; t) = 2p p 2 >e 4a2t ? e 4a2t > '( ) d : ; 0 a t:

(4:5:29)

Analogously, for the solution u1(x; t) of the second boundary value problem with @u1 (0; t) = @x 0 and u1(x; 0) = '(x) we have 8 9 Z1 1 > ? (x ? )2 ? (x + )2 > < = 1 u1(x; t) = 2p p 2 >e 4a2t + e 4a2t > '( ) d : (4:5:30) a t: ; 0 Now we try to nd u2(x; t) and u2(x; t). In doing so we note that if we consider the equation v1t = a2 v1xx, for 0 < x < 1 and t t0, with the conditions v1(x; t0) = T and v1(0; t) = 0,
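The odd-reflection formula (4.5.29) (the method of images) is easy to evaluate numerically. The initial bump `phi` and the quadrature grid below are illustrative assumptions:

```python
import numpy as np
from math import sqrt, pi

# Half-line solution (4.5.29): kernel minus its mirror image about x = 0.
a = 1.0

def phi(x):
    return np.exp(-(x - 2.0)**2)       # bump supported well inside x > 0

def u1(x, t, L=40.0, n=400001):
    xi = np.linspace(0.0, L, n)
    ker = (np.exp(-(x - xi)**2 / (4 * a**2 * t))
           - np.exp(-(x + xi)**2 / (4 * a**2 * t)))
    return np.sum(ker * phi(xi)) * (xi[1] - xi[0]) / (2.0 * sqrt(pi * a**2 * t))

t = 0.3
bdry = abs(u1(0.0, t))    # at x = 0 the two exponentials cancel identically
peak = u1(2.0, t)         # positive temperature near the initial bump
print(bdry, peak)
```

The exact cancellation at $x = 0$ is the discrete counterpart of Lemma 4.5.2.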


then from (4.5.29) we have

$v_1(x,t) = \frac{T}{2\sqrt{\pi a^2(t-t_0)}}\int_0^{\infty}\left\{e^{-\frac{(x-\xi)^2}{4a^2(t-t_0)}} - e^{-\frac{(x+\xi)^2}{4a^2(t-t_0)}}\right\}d\xi.$  (4.5.31)

With the changes of variables

$\sigma = \frac{\xi - x}{2\sqrt{a^2(t-t_0)}}, \qquad \sigma_1 = \frac{\xi + x}{2\sqrt{a^2(t-t_0)}},$

we get

$v_1(x,t) = \frac{T}{\sqrt{\pi}}\left[\int_{-\frac{x}{2\sqrt{a^2(t-t_0)}}}^{\infty} e^{-\sigma^2}\,d\sigma - \int_{\frac{x}{2\sqrt{a^2(t-t_0)}}}^{\infty} e^{-\sigma_1^2}\,d\sigma_1\right] = \frac{2T}{\sqrt{\pi}}\int_0^{\frac{x}{2\sqrt{a^2(t-t_0)}}} e^{-\sigma^2}\,d\sigma.$

Thus,

$v_1(x,t) = T\,\Phi\!\left(\frac{x}{2\sqrt{a^2(t-t_0)}}\right),$  (4.5.32)

where

$\Phi(z) = \frac{2}{\sqrt{\pi}}\int_0^{z} e^{-\sigma^2}\,d\sigma$

is the error function.

Let $\mu(t) = \mu_0 = \text{const}$. Then the function

$v(x,t) = \mu_0\,\Phi\!\left(\frac{x}{2\sqrt{a^2(t-t_0)}}\right)$

is the solution of the heat equation (4.5.12) for $t \ge t_0$ which satisfies the conditions

$v(x,t_0) = \mu_0, \qquad v(0,t) = 0.$

It follows that the function

$\bar{v}(x,t) := \mu_0 - v(x,t) = \mu_0\left[1 - \Phi\!\left(\frac{x}{2\sqrt{a^2(t-t_0)}}\right)\right]$  (4.5.33)

is the solution of (4.5.12) for $t \ge t_0$ which satisfies the conditions

$\bar{v}(x,t_0) = 0 \ (x > 0) \quad \text{and} \quad \bar{v}(0,t) = \mu_0 \ (t > t_0).$



We rewrite $\bar{v}(x,t)$ in the form

$\bar{v}(x,t) = \mu_0\,U(x,t-t_0),$

where

$U(x,t) = 1 - \Phi\!\left(\frac{x}{2\sqrt{a^2 t}}\right) = \frac{2}{\sqrt{\pi}}\int_{\frac{x}{2\sqrt{a^2 t}}}^{\infty} e^{-\sigma^2}\,d\sigma \quad (t > 0).$

The function $U$ corresponds to the case $\mu_0 = 1$. We extend $U(x,t)$ by zero for $t \le 0$. It then corresponds to the step boundary condition

$U(0,t) = \begin{cases} 1, & t > 0, \\ 0, & t < 0. \end{cases}$

Consider now the solution $v(x,t)$ of (4.5.12) which satisfies the conditions

$v(x,t_0) = 0, \qquad v(0,t) = \mu(t) = \begin{cases} \mu_0, & t_0 < t < t_1, \\ 0, & t > t_1. \end{cases}$

It can be verified that

$v(x,t) = \mu_0\left[U(x,t-t_0) - U(x,t-t_1)\right].$

Further, if $\mu$ has the form

$\mu(t) = \begin{cases} \mu_0, & t_0 < t \le t_1, \\ \mu_1, & t_1 < t \le t_2, \\ \ \vdots \\ \mu_{n-1}, & t_{n-1} < t \le t_n, \end{cases}$  (4.5.34)

then the solution of (4.5.12), (4.5.13) can be represented in the form

$u(x,t) = \sum_{i=0}^{n-2}\mu_i\left[U(x,t-t_i) - U(x,t-t_{i+1})\right] + \mu_{n-1}\,U(x,t-t_{n-1}).$

With the aid of the mean value theorem, we have

$u(x,t) = \sum_{i=0}^{n-2}\frac{\partial U}{\partial t}(x,t-\tau_i)\,\mu_i\,(t_{i+1} - t_i) + \mu_{n-1}\,U(x,t-t_{n-1}), \qquad t_i \le \tau_i \le t_{i+1}.$  (4.5.35)

We consider the problem (4.5.12), (4.5.13) with $u(x,0) = 0$. If the function $\mu(t)$ is piecewise continuous, then we obtain an approximation to $u(x,t)$ of the form (4.5.34) when $\mu(t)$ is approximated by a piecewise constant function. Refining this approximation, we get

$u(x,t) = \int_0^t \frac{\partial U}{\partial t}(x,t-\tau)\,\mu(\tau)\,d\tau,$

since for $x > 0$
\[
\lim_{t - t_{n-1} \to 0} u_{n-1}\,U(x,t-t_{n-1}) = 0.
\]
We do not go into the details of when this limit process is meaningful, and so formally take
\[
u_2(x,t) = \int_0^t \frac{\partial U}{\partial t}(x,t-\tau)\,\mu(\tau)\,d\tau. \tag{4.5.36}
\]
Further, with the heat kernel $G(x,\xi,t) = \frac{1}{2a\sqrt{\pi t}}\,e^{-\frac{(x-\xi)^2}{4a^2 t}}$,
\[
\frac{\partial U}{\partial t}(x,t) = \frac{\partial}{\partial t}\left(\frac{2}{\sqrt{\pi}}\int_{\frac{x}{2a\sqrt{t}}}^{\infty} e^{-\xi^2}\,d\xi\right) = \frac{x}{2a\sqrt{\pi}\,t^{3/2}}\,e^{-\frac{x^2}{4a^2 t}} = 2a^2\,\frac{\partial G}{\partial \xi}(x,0,t) = -2a^2\,\frac{\partial G}{\partial x}(x,0,t).
\]
Thus,
\[
u_2(x,t) = 2a^2\int_0^t \frac{\partial G}{\partial \xi}(x,0,t-\tau)\,\mu(\tau)\,d\tau = \frac{x}{2a\sqrt{\pi}}\int_0^t \frac{e^{-\frac{x^2}{4a^2(t-\tau)}}}{(t-\tau)^{3/2}}\,\mu(\tau)\,d\tau. \tag{4.5.37}
\]
We note that in deriving (4.5.37) we have used only the linearity of the heat equation and nothing more. Furthermore, the boundary and initial conditions on $U$ are explicitly given:
\[
U(0,t) = 1, \ t > 0; \qquad U(x,0) = 0, \ x > 0,
\]
or
\[
U(0,t) = \begin{cases} 1, & t > 0, \\ 0, & t < 0. \end{cases}
\]
If the boundary condition for a given differential equation is $u(0,t) = \mu(t)$, $t > 0$, together with the homogeneous initial condition, then
\[
u(x,t) = \int_0^t \frac{\partial U}{\partial t}(x,t-\tau)\,\mu(\tau)\,d\tau.
\]
This is called the Duhamel principle; it reduces a boundary value problem with general boundary data $\mu$ to the single problem with the step (piecewise constant) boundary condition $U$. By the method of odd extension of the data, we can easily find the solution of the problem
\[
u_t = a^2 u_{xx} + f(x,t) \quad (0 < x < \infty,\ t > 0), \qquad u(0,t) = 0, \quad u(x,0) = 0,
\]
in the form
\[
u_3(x,t) = \frac{1}{2a\sqrt{\pi}}\int_0^t \int_0^{\infty} \frac{1}{\sqrt{t-\tau}}\left\{ e^{-\frac{(x-\xi)^2}{4a^2(t-\tau)}} - e^{-\frac{(x+\xi)^2}{4a^2(t-\tau)}} \right\} f(\xi,\tau)\,d\xi\,d\tau.
\]
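The Duhamel integral can be tested against the step response itself: taking $\mu \equiv 1$ in (4.5.36) must reproduce $U(x,t)$, since $\int_0^t \partial_t U(x,s)\,ds = U(x,t)$ for $x>0$. A small numerical sketch (hypothetical helper names, midpoint quadrature, standard-library Python):

```python
import math

a = 1.0

def U(x, t):
    # step response (boundary value 1 switched on at t = 0)
    return math.erfc(x / (2.0 * a * math.sqrt(t))) if t > 0 else 0.0

def dUdt(x, t):
    # the derivative computed in the text: x/(2a*sqrt(pi)*t^{3/2}) * exp(-x^2/(4a^2 t))
    return x / (2.0 * a * math.sqrt(math.pi) * t**1.5) * math.exp(-x * x / (4.0 * a * a * t))

def duhamel(x, t, mu, n=20000):
    # u(x,t) = integral_0^t dU/dt(x, t - tau) * mu(tau) d tau  (midpoint rule)
    h = t / n
    s = 0.0
    for k in range(n):
        tau = (k + 0.5) * h
        s += dUdt(x, t - tau) * mu(tau)
    return s * h

# With mu = 1 the Duhamel integral must reproduce the step response.
x, t = 0.7, 1.3
print(duhamel(x, t, lambda tau: 1.0), U(x, t))   # nearly equal
```

The integrand vanishes smoothly as $\tau \to t$ for $x>0$, so a plain midpoint rule suffices here.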

Chapter 5 Elliptic Equations


We shall consider briefly in this chapter the Laplace equation
\[
\Delta u = u_{xx} + u_{yy} = 0.
\]
A solution of the Laplace equation is called a harmonic function. The inhomogeneous version of the Laplace equation,
\[
\Delta u = f,
\]
with $f$ being a given function, is called the Poisson equation. The basic mathematical problem is to solve the Laplace or the Poisson equation in a given domain $\Omega$ with a condition on the boundary $\partial\Omega$ of $\Omega$:
\[
\Delta u = f \ \text{in}\ \Omega, \qquad u = h \ \ \text{or}\ \ \frac{\partial u}{\partial n} = h \ \ \text{or}\ \ \frac{\partial u}{\partial n} + au = h \ \text{on}\ \partial\Omega.
\]

5.1 The maximum principle


Let $\Omega$ be a connected bounded domain in $\mathbb{R}^2$. Let $u$ be a harmonic function in $\Omega$ that is continuous in $\bar\Omega = \Omega \cup \partial\Omega$. Then the maximum and the minimum values of $u$ are attained on $\partial\Omega$.

Proof: Consider the function
\[
v(x,y) = u(x,y) + \varepsilon(x^2 + y^2), \qquad \varepsilon > 0.
\]
We have
\[
\Delta v = \Delta u + \varepsilon\,\Delta(x^2 + y^2) = 0 + 4\varepsilon > 0 \quad \text{in } \Omega.
\]
Thus $v$ has no maximum in $\Omega$: at an interior maximum we would have $v_{xx} \le 0$, $v_{yy} \le 0$, and so $v_{xx} + v_{yy} \le 0$. Furthermore, since $v$ is a continuous function, it attains its maximum in the bounded closed domain $\bar\Omega$. As $v$ has no maximum in $\Omega$, $v$ attains its maximum at some point $(x_0,y_0) \in \partial\Omega$. Hence, for all $(x,y) \in \bar\Omega$,
\[
u(x,y) \le v(x,y) \le v(x_0,y_0) = u(x_0,y_0) + \varepsilon(x_0^2 + y_0^2) \le \max_{\partial\Omega} u + \varepsilon(x_0^2 + y_0^2).
\]
Since $\Omega$ is bounded, $x_0^2 + y_0^2$ is bounded, and since $\varepsilon$ is an arbitrary positive number, we have
\[
u(x,y) \le \max_{\partial\Omega} u, \qquad \forall (x,y) \in \bar\Omega.
\]
Thus $u$ attains its maximum at some point of the boundary $\partial\Omega$. The proof of the "minimum part" is similar.
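The maximum principle can be illustrated numerically (this is a demonstration, not the proof): sample a harmonic function such as $u = x^2 - y^2$ on a fine grid over the unit square and compare interior and boundary extrema.

```python
# For the harmonic function u(x,y) = x^2 - y^2 on the unit square,
# the extrema over a fine grid are attained at boundary points only.
n = 200

def u(x, y):
    return x * x - y * y     # harmonic: u_xx + u_yy = 2 - 2 = 0

pts = [(i / n, j / n) for i in range(n + 1) for j in range(n + 1)]
interior = [u(x, y) for (x, y) in pts if 0 < x < 1 and 0 < y < 1]
boundary = [u(x, y) for (x, y) in pts if x in (0.0, 1.0) or y in (0.0, 1.0)]

print(max(interior) <= max(boundary))   # True
print(min(interior) >= min(boundary))   # True
```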

5.2 The uniqueness of the Dirichlet problem


Consider the Dirichlet problem
\[
\Delta u = f \ \text{in}\ \Omega, \qquad u|_{\partial\Omega} = h.
\]
Let $u_1$ and $u_2$ be two solutions of this problem. Put $v = u_1 - u_2$. Then $\Delta v = 0$ in $\Omega$ and $v|_{\partial\Omega} = 0$. By the maximum principle (applied both to the maximum and to the minimum of $v$), $v \equiv 0$ in $\bar\Omega$. Thus $u_1(x,y) = u_2(x,y)$.

5.3 The invariance of the operator


The operator $\Delta$ is invariant under translations and rotations. In fact, a translation in the plane is a transformation
\[
x' = x + a, \qquad y' = y + b.
\]
The invariance under translations means that $u_{xx} + u_{yy} = u_{x'x'} + u_{y'y'}$. A rotation in the plane through the angle $\alpha$ is given by
\[
x' = x\cos\alpha + y\sin\alpha, \qquad y' = -x\sin\alpha + y\cos\alpha.
\]
By the chain rule we calculate
\begin{align*}
u_x &= u_{x'}\cos\alpha - u_{y'}\sin\alpha,\\
u_y &= u_{x'}\sin\alpha + u_{y'}\cos\alpha,\\
u_{xx} &= (u_{x'}\cos\alpha - u_{y'}\sin\alpha)_{x'}\cos\alpha - (u_{x'}\cos\alpha - u_{y'}\sin\alpha)_{y'}\sin\alpha,\\
u_{yy} &= (u_{x'}\sin\alpha + u_{y'}\cos\alpha)_{x'}\sin\alpha + (u_{x'}\sin\alpha + u_{y'}\cos\alpha)_{y'}\cos\alpha.
\end{align*}
Adding, we have
\[
u_{xx} + u_{yy} = (u_{x'x'} + u_{y'y'})(\cos^2\alpha + \sin^2\alpha) + u_{x'y'}(-2\sin\alpha\cos\alpha + 2\sin\alpha\cos\alpha) = u_{x'x'} + u_{y'y'}.
\]
This proves the invariance of the Laplace operator. In engineering the Laplacian $\Delta$ is a model for isotropic physical situations, in which there is no preferred direction.
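Rotation invariance can also be checked numerically with a discrete Laplacian: if $u$ is harmonic, then $u$ expressed in rotated coordinates is harmonic too. A sketch (illustrative function names; finite differences, standard-library Python):

```python
import math

def u(x, y):
    return math.exp(x) * math.sin(y)    # harmonic: u_xx + u_yy = e^x sin y - e^x sin y = 0

alpha = 0.6
c, s = math.cos(alpha), math.sin(alpha)

def w(xp, yp):
    # u composed with the inverse of the rotation x' = x cos a + y sin a, y' = -x sin a + y cos a
    x = c * xp - s * yp
    y = s * xp + c * yp
    return u(x, y)

def laplacian(f, x, y, h=1e-3):
    # standard 5-point discrete Laplacian
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h) - 4 * f(x, y)) / (h * h)

print(abs(laplacian(u, 0.3, 0.7)))   # ~0
print(abs(laplacian(w, 0.3, 0.7)))   # ~0: the rotated function is harmonic as well
```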

The rotational invariance suggests that the two-dimensional Laplacian $\Delta_2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}$ should take a particularly simple form in polar coordinates. The transformation has the form
\[
x = r\cos\theta, \qquad y = r\sin\theta.
\]
It follows that
\[
r = \sqrt{x^2 + y^2}, \qquad \theta = \arccos\frac{x}{\sqrt{x^2+y^2}} = \arcsin\frac{y}{\sqrt{x^2+y^2}}.
\]
We have
\[
\frac{\partial x}{\partial r} = \cos\theta, \quad \frac{\partial y}{\partial r} = \sin\theta, \quad \frac{\partial x}{\partial\theta} = -r\sin\theta, \quad \frac{\partial y}{\partial\theta} = r\cos\theta,
\]
\[
\frac{\partial r}{\partial x} = \frac{x}{r} = \cos\theta, \quad \frac{\partial r}{\partial y} = \frac{y}{r} = \sin\theta, \quad \frac{\partial\theta}{\partial x} = -\frac{y}{r^2} = -\frac{\sin\theta}{r}, \quad \frac{\partial\theta}{\partial y} = \frac{x}{r^2} = \frac{\cos\theta}{r}.
\]
Thus, the transformation $x = r\cos\theta$, $y = r\sin\theta$ has the Jacobian matrix
\[
J = \begin{pmatrix} \frac{\partial x}{\partial r} & \frac{\partial y}{\partial r} \\[2pt] \frac{\partial x}{\partial\theta} & \frac{\partial y}{\partial\theta} \end{pmatrix} = \begin{pmatrix} \cos\theta & \sin\theta \\ -r\sin\theta & r\cos\theta \end{pmatrix}
\]
with the inverse
\[
J^{-1} = \begin{pmatrix} \frac{\partial r}{\partial x} & \frac{\partial\theta}{\partial x} \\[2pt] \frac{\partial r}{\partial y} & \frac{\partial\theta}{\partial y} \end{pmatrix} = \begin{pmatrix} \cos\theta & -\frac{\sin\theta}{r} \\[2pt] \sin\theta & \frac{\cos\theta}{r} \end{pmatrix}.
\]
Hence,
\[
\frac{\partial u}{\partial x} = \frac{\partial u}{\partial r}\frac{\partial r}{\partial x} + \frac{\partial u}{\partial\theta}\frac{\partial\theta}{\partial x} = \frac{\partial u}{\partial r}\cos\theta - \frac{\partial u}{\partial\theta}\frac{\sin\theta}{r},
\]
\[
\frac{\partial u}{\partial y} = \frac{\partial u}{\partial r}\frac{\partial r}{\partial y} + \frac{\partial u}{\partial\theta}\frac{\partial\theta}{\partial y} = \frac{\partial u}{\partial r}\sin\theta + \frac{\partial u}{\partial\theta}\frac{\cos\theta}{r},
\]
\begin{align*}
\frac{\partial^2 u}{\partial x^2} &= \frac{\partial}{\partial r}\!\left(\frac{\partial u}{\partial r}\cos\theta - \frac{\partial u}{\partial\theta}\frac{\sin\theta}{r}\right)\cos\theta - \frac{1}{r}\frac{\partial}{\partial\theta}\!\left(\frac{\partial u}{\partial r}\cos\theta - \frac{\partial u}{\partial\theta}\frac{\sin\theta}{r}\right)\sin\theta\\
&= \frac{\partial^2 u}{\partial r^2}\cos^2\theta + \frac{\partial^2 u}{\partial\theta^2}\frac{\sin^2\theta}{r^2} - 2\frac{\partial^2 u}{\partial r\,\partial\theta}\frac{\sin\theta\cos\theta}{r} + \frac{\partial u}{\partial r}\frac{\sin^2\theta}{r} + 2\frac{\partial u}{\partial\theta}\frac{\sin\theta\cos\theta}{r^2}.
\end{align*}

Thus,
\begin{align*}
u_{xy} = u_{yx} &= u_{rr}\sin\theta\cos\theta - u_{\theta\theta}\frac{\sin\theta\cos\theta}{r^2} + u_{r\theta}\frac{\cos^2\theta - \sin^2\theta}{r} - u_\theta\frac{\cos^2\theta - \sin^2\theta}{r^2} - u_r\frac{\sin\theta\cos\theta}{r},\\
u_{yy} &= u_{rr}\sin^2\theta + u_{\theta\theta}\frac{\cos^2\theta}{r^2} + 2u_{r\theta}\frac{\sin\theta\cos\theta}{r} + u_r\frac{\cos^2\theta}{r} - 2u_\theta\frac{\sin\theta\cos\theta}{r^2}.
\end{align*}
Adding $u_{xx}$ and $u_{yy}$, the angular terms cancel and we obtain
\[
u_{xx} + u_{yy} = u_{rr} + \frac{1}{r}u_r + \frac{1}{r^2}u_{\theta\theta} \tag{5.3.1}
\]
\[
= \frac{1}{r^2}\left\{ r\frac{\partial}{\partial r}\!\left(r\frac{\partial u}{\partial r}\right) + \frac{\partial^2 u}{\partial\theta^2} \right\}. \tag{5.3.2}
\]
It is also natural to look for special harmonic functions that are themselves rotationally invariant, i.e. for solutions depending only on $r$. Thus, by (5.3.2), if $u$ does not depend on $\theta$,
\[
0 = u_{xx} + u_{yy} = \frac{1}{r}\frac{\partial}{\partial r}\!\left(r\frac{\partial u}{\partial r}\right).
\]
Hence $r\frac{\partial u}{\partial r} = c_1$, and so $u = c_1\ln r + c_2$. The function $\ln r$ will play a central role later.
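The radial solution $\ln r$ can be checked directly: for $u = \ln r$ one has $u_{rr} = -1/r^2$, $u_r = 1/r$, $u_{\theta\theta} = 0$, so (5.3.1) sums to zero. A numerical sanity check away from the origin (illustrative sketch, standard-library Python):

```python
import math

def u(x, y):
    return 0.5 * math.log(x * x + y * y)    # = ln r

def laplacian(f, x, y, h=1e-3):
    # 5-point discrete Laplacian
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h) - 4 * f(x, y)) / (h * h)

# ln r is harmonic away from the origin.
print(abs(laplacian(u, 1.2, -0.4)))   # ~0
```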

5.4 Poisson's formula


Consider the problem
\[
u_{xx} + u_{yy} = 0, \qquad x^2 + y^2 < a^2, \tag{5.4.1}
\]
\[
u = f, \qquad x^2 + y^2 = a^2. \tag{5.4.2}
\]
Here $a$ is a given number and $f$ is a given function. In polar coordinates $(r,\theta)$, the equation (5.4.1) has the form
\[
\frac{1}{r^2}\left\{ r\frac{\partial}{\partial r}\!\left(r\frac{\partial u}{\partial r}\right) + \frac{\partial^2 u}{\partial\theta^2} \right\} = 0. \tag{5.4.3}
\]
We shall find solutions of this equation in the form $u(r,\theta) = R(r)\Theta(\theta)$. Plugging this into (5.4.3), we get
\[
\frac{r\frac{d}{dr}\!\left(r\frac{dR}{dr}\right)}{R} = -\frac{\Theta''}{\Theta} = \lambda, \tag{5.4.4}
\]
where $\lambda$ is a constant. From this we get
\[
\Theta'' + \lambda\Theta = 0,
\]
and the equation (5.4.4) gives
\[
r\frac{d}{dr}\!\left(r\frac{dR}{dr}\right) - \lambda R = 0. \tag{5.4.5}
\]
From the first equation,
\[
\Theta(\theta) = A\cos\sqrt{\lambda}\,\theta + B\sin\sqrt{\lambda}\,\theta.
\]
As $u(r,\theta)$ is periodic in $\theta$, we have $\Theta(\theta + 2\pi) = \Theta(\theta)$. Thus $\sqrt{\lambda} = n$ is an integer, and so
\[
\Theta_n(\theta) = A_n\cos(n\theta) + B_n\sin(n\theta), \qquad \lambda = n^2 \ (n \ge 0).
\]
We shall find $R(r)$ in the form $R(r) = r^\mu$. Putting this into (5.4.5) we get $\mu^2 = n^2$, and thus
\[
R(r) = Cr^n + Dr^{-n},
\]
where $n > 0$ and $C$ and $D$ are some constants. We see that $D$ must be equal to zero, since otherwise $R(r)$ is not bounded as $r \to 0$. Thus, special solutions of our problem have the form
\[
u_n(r,\theta) = r^n\left[A_n\cos(n\theta) + B_n\sin(n\theta)\right].
\]
If the series
\[
u(r,\theta) = \sum_{n=0}^{\infty} r^n\left[A_n\cos(n\theta) + B_n\sin(n\theta)\right]
\]
converges, then it represents a harmonic function. Note that the equation (5.4.3) has no meaning for $r = 0$, so we have to prove that $\Delta u_n = 0$ also for $r = 0$. In fact, the special solutions $r^n\cos(n\theta)$ and $r^n\sin(n\theta)$ are the real and the imaginary parts of the function
\[
r^n e^{in\theta} = (re^{i\theta})^n = (x + iy)^n,
\]
which are polynomials in $x$ and $y$. It is now clear that a polynomial which satisfies the equation $\Delta u = 0$ for $r > 0$ satisfies, due to the continuity of its second derivatives, this equation also for $r = 0$. In order to determine $A_n$ and $B_n$, we use the boundary condition
\[
u(a,\theta) = \sum_{n=0}^{\infty} a^n\left[A_n\cos(n\theta) + B_n\sin(n\theta)\right] = f(\theta). \tag{5.4.6}
\]

Taking into account that $f$ is a function of $\theta$, we expand it in a Fourier series:
\[
f(\theta) = \frac{\alpha_0}{2} + \sum_{n=1}^{\infty}\left[\alpha_n\cos(n\theta) + \beta_n\sin(n\theta)\right], \tag{5.4.7}
\]
where
\[
\alpha_0 = \frac{1}{\pi}\int_{-\pi}^{\pi} f(\tau)\,d\tau, \qquad \alpha_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(\tau)\cos(n\tau)\,d\tau, \qquad \beta_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(\tau)\sin(n\tau)\,d\tau.
\]
A comparison of (5.4.6) with (5.4.7) gives
\[
A_0 = \frac{\alpha_0}{2}, \qquad A_n = \frac{\alpha_n}{a^n}, \qquad B_n = \frac{\beta_n}{a^n}.
\]
Thus,
\[
u(r,\theta) = \frac{\alpha_0}{2} + \sum_{n=1}^{\infty}\left(\frac{r}{a}\right)^n\left[\alpha_n\cos(n\theta) + \beta_n\sin(n\theta)\right]. \tag{5.4.8}
\]
We shall find conditions guaranteeing the convergence of this series. Consider the function
\[
u_n = t^n\left[\alpha_n\cos(n\theta) + \beta_n\sin(n\theta)\right], \qquad t = \frac{r}{a} \le 1.
\]
We have
\[
\frac{\partial^k u_n}{\partial\theta^k} = t^n n^k\left[\alpha_n\cos\!\left(n\theta + \frac{k\pi}{2}\right) + \beta_n\sin\!\left(n\theta + \frac{k\pi}{2}\right)\right].
\]
Denoting by $M$ a common bound of the $|\alpha_n|$, $|\beta_n|$, $n = 0,1,\dots$,
\[
|\alpha_n| \le M, \qquad |\beta_n| \le M, \tag{5.4.9}
\]
we have
\[
\left|\frac{\partial^k u_n}{\partial\theta^k}\right| \le 2M\,t^n n^k.
\]
For $0 \le t \le t_0 = \frac{r_0}{a} < 1$,
\[
\sum_{n=1}^{\infty} t^n n^k\left(|\alpha_n| + |\beta_n|\right) \le 2M\sum_{n=1}^{\infty} t_0^n n^k < \infty.
\]
It follows that the series on the right-hand side of (5.4.8), together with its termwise derivatives, is uniformly convergent for $t \le t_0 < 1$. Hence $u(r,\theta)$ for $r \le r_0 < a$ is infinitely differentiable w.r.t. $\theta$. Analogously, it is also infinitely differentiable w.r.t. $r$ for $r \le r_0 < a$. As $r_0 < a$ is arbitrary, the function $u$ defined by (5.4.8) satisfies (5.4.1) and for $r < a$ it is infinitely differentiable w.r.t. $r$ and $\theta$. Up to this point, we have used only the conditions in (5.4.9). These conditions are satisfied if $|f(\theta)|$ is bounded by $M/2$. In order to guarantee the continuity of the solution up to the boundary (up to $r = a$), we need to suppose that $f$ is continuous and differentiable. Putting the expressions of the Fourier coefficients $\alpha_n$, $\beta_n$ into (5.4.8), we get
\begin{align*}
u(r,\theta) &= \frac{1}{\pi}\int_{-\pi}^{\pi} f(\tau)\left\{\frac{1}{2} + \sum_{n=1}^{\infty}\left(\frac{r}{a}\right)^n\left[\cos(n\tau)\cos(n\theta) + \sin(n\tau)\sin(n\theta)\right]\right\} d\tau \tag{5.4.10}\\
&= \frac{1}{\pi}\int_{-\pi}^{\pi} f(\tau)\left\{\frac{1}{2} + \sum_{n=1}^{\infty}\left(\frac{r}{a}\right)^n\cos n(\theta - \tau)\right\} d\tau.
\end{align*}
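The series (5.4.8) can be exercised on a concrete boundary function. A sketch (not from the notes; the test function $f$, the radius $a$, and the rectangle-rule coefficient computation are all illustrative assumptions):

```python
import math

a = 2.0

def f(theta):
    # assumed test boundary data: a finite trigonometric polynomial
    return 1.0 + math.cos(theta) + 0.5 * math.sin(3 * theta)

# Fourier coefficients alpha_n, beta_n of f on (-pi, pi), rectangle rule
M = 4096
thetas = [-math.pi + 2 * math.pi * k / M for k in range(M)]

def alpha(n):
    return sum(f(t) * math.cos(n * t) for t in thetas) * (2 * math.pi / M) / math.pi

def beta(n):
    return sum(f(t) * math.sin(n * t) for t in thetas) * (2 * math.pi / M) / math.pi

def u(r, theta, N=10):
    # partial sum of (5.4.8)
    s = alpha(0) / 2
    for n in range(1, N + 1):
        s += (r / a) ** n * (alpha(n) * math.cos(n * theta) + beta(n) * math.sin(n * theta))
    return s

# Value at the center is the mean of f; at r = a the boundary data is recovered.
print(u(0.0, 0.3))            # ~1 (the mean of f)
print(u(a, 1.0), f(1.0))      # nearly equal
```

The design choice here (rectangle rule on a uniform grid) is natural because the trapezoidal/rectangle rule is spectrally accurate for smooth periodic integrands.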

On the other hand, with $t = \frac{r}{a} < 1$,
\begin{align*}
\frac{1}{2} + \sum_{n=1}^{\infty} t^n\cos n(\theta-\tau)
&= \frac{1}{2} + \frac{1}{2}\sum_{n=1}^{\infty} t^n\left(e^{in(\theta-\tau)} + e^{-in(\theta-\tau)}\right)\\
&= \frac{1}{2}\left[1 + \frac{te^{i(\theta-\tau)}}{1 - te^{i(\theta-\tau)}} + \frac{te^{-i(\theta-\tau)}}{1 - te^{-i(\theta-\tau)}}\right]\\
&= \frac{1}{2}\,\frac{1 - t^2}{1 - 2t\cos(\theta-\tau) + t^2}.
\end{align*}
Putting the last equality into (5.4.10), we get
\[
u(r,\theta) = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(\tau)\,\frac{1 - \left(\frac{r}{a}\right)^2}{1 - 2\frac{r}{a}\cos(\theta-\tau) + \left(\frac{r}{a}\right)^2}\,d\tau.
\]
Thus,
\[
u(r,\theta) = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(\tau)\,\frac{a^2 - r^2}{r^2 - 2ar\cos(\theta-\tau) + a^2}\,d\tau. \tag{5.4.11}
\]
This formula is called Poisson's formula. The function
\[
K(r,\theta;a,\tau) = \frac{a^2 - r^2}{r^2 - 2ar\cos(\theta-\tau) + a^2}
\]
is called the Poisson kernel. For $r < a$ it is positive, since $2ar < a^2 + r^2$ for $r \ne a$.

We now prove that Poisson's formula (5.4.11) (or (5.4.10)) gives also a continuous solution to (5.4.1), (5.4.2) if $f$ is continuous. If $f$ is continuous, then $f$ is bounded, and so we have already proved that for $r < a$ the equation $\Delta u = 0$ is satisfied. It remains to prove that $u(r,\theta)$ is continuous up to the boundary. Let $f_1(\theta), f_2(\theta), \dots$ be a sequence of continuous and differentiable functions which converges uniformly to $f(\theta)$. For any $f_k(\theta)$ we can determine the corresponding $u_k(r,\theta)$ by the Poisson formula. Since $\{f_k(\theta)\}$ converges uniformly to $f(\theta)$, for any $\varepsilon > 0$ there exists a $k_0(\varepsilon) > 0$ such that
\[
|f_k(\theta) - f_{k+l}(\theta)| < \varepsilon, \qquad \forall k > k_0(\varepsilon),\ l > 0.
\]
Thus, from the maximum principle,
\[
|u_k(r,\theta) - u_{k+l}(r,\theta)| < \varepsilon \qquad \text{for } r \le a,\ k > k_0(\varepsilon),\ l > 0.
\]
Hence $\{u_k\}$ converges uniformly to a function $u$, $u = \lim_{k\to\infty} u_k(r,\theta)$. The function $u$ is continuous in the closed domain, since all the functions
\[
u_k(r,\theta) = \frac{1}{2\pi}\int_{-\pi}^{\pi} \frac{a^2 - r^2}{r^2 - 2ar\cos(\theta-\tau) + a^2}\,f_k(\tau)\,d\tau
\]
are continuous in the closed domain. Thus,
\[
u(r,\theta) = \lim_{k\to\infty} u_k(r,\theta) = \begin{cases} \dfrac{1}{2\pi}\displaystyle\int_{-\pi}^{\pi} \dfrac{a^2 - r^2}{r^2 - 2ar\cos(\theta-\tau) + a^2}\,f(\tau)\,d\tau, & r < a, \\[6pt] f(\theta), & r = a, \end{cases}
\]
since $\{f_k(\theta)\}$ converges uniformly to $f(\theta)$.

Let $X = (x,y)$ have the polar coordinates $(r,\theta)$ and let $X' = (x',y')$ have the polar coordinates $(a,\tau)$ (that means that $X'$ lies on the boundary). Then
\[
|X - X'|^2 = a^2 + r^2 - 2ar\cos(\theta - \tau).
\]
The arc length element on the circumference is $ds' = a\,d\tau$. Therefore, Poisson's formula takes the alternative form
\[
u(X) = \frac{a^2 - |X|^2}{2\pi a}\int_{|X'| = a} \frac{u(X')}{|X - X'|^2}\,ds' \tag{5.4.12}
\]
for $X \in \Omega$, where we write $u(X') = f(\tau)$.
5.5 The mean value theorem


Let $u$ be a harmonic function in a domain $\Omega \subset \mathbb{R}^2$, continuous in $\bar\Omega$. Let $M_0$ be a point of $\Omega$ and let $B_a$ be the disk of radius $a$ with center at $M_0$ which lies entirely in $\Omega$. Then
\[
u(M_0) = \frac{1}{2\pi a}\int_{\partial B_a} u(\xi)\,ds. \tag{5.5.1}
\]
Proof: Without loss of generality we may suppose that $M_0 = 0$. Since $\bar B_a \subset \Omega$, the function $u$ is harmonic in $\bar B_a$, and from Poisson's formula in the form (5.4.12) we have (with $X = 0$)
\[
u(0) = \frac{a^2}{2\pi a\,a^2}\int_{|X'| = a} u(X')\,ds' = \frac{1}{2\pi a}\int_{|X'| = a} u(X')\,ds'.
\]
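The mean value property (5.5.1) is easy to test numerically: the average of a harmonic function over any circle equals its value at the center. A sketch (illustrative function and parameters, standard-library Python):

```python
import math

def u(x, y):
    return x * x - y * y + 3 * x     # harmonic: u_xx + u_yy = 2 - 2 + 0 = 0

def circle_average(f, x0, y0, radius, M=2000):
    # (1/(2 pi a)) * integral over the circumference, written as an average over the angle
    return sum(f(x0 + radius * math.cos(2 * math.pi * k / M),
                 y0 + radius * math.sin(2 * math.pi * k / M)) for k in range(M)) / M

x0, y0, radius = 0.5, -0.2, 0.8
print(circle_average(u, x0, y0, radius), u(x0, y0))   # nearly equal
```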

5.6 The maximum principle (a strong version)

We now prove the following: Let $\Omega$ be a connected bounded domain in $\mathbb{R}^2$. Let $u$ be a harmonic function in $\Omega$, continuous in $\bar\Omega$. Then the maximum and the minimum values of $u$ are attained on $\partial\Omega$ and nowhere inside $\Omega$, unless $u \equiv \mathrm{const}$.

Proof: Since $u$ is continuous in $\bar\Omega$, it attains its maximum $M$ somewhere, say at $x_M \in \bar\Omega$. We shall show that $x_M \notin \Omega$ unless $u \equiv \mathrm{const}$. By the definition of $M$, we know that
\[
u(x) \le u(x_M) = M, \qquad \forall x \in \bar\Omega.
\]
We draw a circle around $x_M$ entirely contained in $\Omega$. By the mean value theorem, $u(x_M)$ is equal to its average over the circumference. Since the average is not greater than the maximum, we have
\[
M = u(x_M) = \text{average of } u \text{ over the circumference} \le M,
\]
so the average equals $M$; as $u \le M$ is continuous, this forces $u(x) = M$ for all $x$ on the circumference. This is true for every such circle, and hence $u(x) = M$ in the whole disk around $x_M$. Now we repeat the argument with a different center; in this way we can fill the whole domain up with circles. Using the assumption that $\Omega$ is connected, we deduce that $u(x) = M$ throughout $\Omega$. So $u \equiv \mathrm{const}$.

102

CHAPTER 5. ELLIPTIC EQUATIONS

Bibliography
[1] R. A. Adams: Sobolev Spaces. Academic Press, New York, 1975.
[2] S. Agmon: Lectures on Elliptic Boundary Value Problems. Van Nostrand, 1965.
[3] R. Courant and D. Hilbert: Methods of Mathematical Physics. Interscience Publishers, Vol. I, 1953; Vol. II, 1962.
[4] A. Friedman: Partial Differential Equations of Parabolic Type. Prentice Hall, 1964.
[5] A. Friedman: Partial Differential Equations. Holt, Rinehart and Winston, 1969.
[6] P. R. Garabedian: Partial Differential Equations. John Wiley & Sons, Inc., 1964.
[7] I. M. Gelfand and G. E. Shilov: Generalized Functions. Academic Press, New York, 1964.
[8] D. Gilbarg and N. S. Trudinger: Elliptic Partial Differential Equations of Second Order. Springer-Verlag, Berlin, 1977.
[9] L. Hormander: Linear Partial Differential Operators. Springer-Verlag, Berlin, 1963.
[10] F. John: Partial Differential Equations. Springer-Verlag, Berlin, 1982.
[11] M. Krzyzanski: Partial Differential Equations of Second Order, Vol. 1 & 2. PWN, Warszawa, 1971.
[12] O. A. Ladyzhenskaya: The Boundary Value Problems of Mathematical Physics. Springer-Verlag, New York-Berlin-Heidelberg-Tokyo, 1985.
[13] O. A. Ladyzenskaja, V. A. Solonnikov and N. N. Ural'ceva: Linear and Quasilinear Equations of Parabolic Type. Amer. Math. Soc., 1968.
[14] O. A. Ladyzhenskaya et al.: Linear and Quasilinear Equations of Elliptic Type. Amer. Math. Soc.
[15] J.-L. Lions and E. Magenes: Non-Homogeneous Boundary Value Problems and Applications, Vol. I-III. Springer-Verlag, Berlin, 1972.
[16] J. T. Marti: Introduction to Sobolev Spaces and Finite Element Solution of Elliptic Boundary Value Problems. Academic Press, New York, 1986.
[17] I. G. Petrovskii: Lectures on Partial Differential Equations. Interscience Publishers, 1954.
[18] M. H. Protter and H. F. Weinberger: Maximum Principles in Differential Equations. Springer-Verlag, Berlin, 1984.
[19] S. L. Sobolev: Partial Differential Equations of Mathematical Physics. Pergamon, New York, 1964.
[20] W. A. Strauss: Partial Differential Equations. An Introduction. John Wiley, New York, 1992.
[21] F. Treves: Basic Linear Partial Differential Equations. Academic Press, 1975.
[22] A. N. Tykhonov and A. A. Samarski: Differentialgleichungen der Mathematischen Physik. VEB, Berlin, 1959.
[23] V. S. Vladimirov: Equations of Mathematical Physics. Marcel Dekker, Inc., New York, 1971.
[24] V. S. Vladimirov: Generalized Functions in Mathematical Physics. Mir Publishers, Moscow, 1979.
[25] D. Widder: The Heat Equation. Academic Press, New York, 1975.
[26] J. Wloka: Partial Differential Equations. Cambridge Univ. Press, 1987.

Index
Boundary condition: first kind, 30; second kind, 30; third kind, 30
Cauchy data, 18, 23
Cauchy problem, 18, 30, 33, 35
Cauchy-Kowalevski theorem, 24
characteristic, 18, 59
characteristic curves, 4
characteristic equation, 10, 33
characteristic form, 19
characteristic matrix, 19
characteristic triangle, 34
characteristics, 10
Coordinate method, 3
D'Alembert criterion, 71
D'Alembert's formula, 34
Darboux problem, 52
differential equation, second-order, 9
Duhamel principle, 92
eigenvalue problem, 40, 47
elasticity tension, 30
elastic attachment (elastische Befestigung), 30
energy, total, 31
equation: adjoint, 26; elliptic, 12; hyperbolic, 30; canonical form, 11; nonlinear, 18; parabolic, 12; quasi-linear, 18
error function, 87, 90
field, analytic, 26
Fourier coefficients, 50
Fourier series, 43, 45
Fourier's law, 7
function: harmonic, 93, 96; real analytic, 22
Gauss-Ostrogradskii formula, 7, 24
Geometric method, 2
Goursat problem, 51
gradient operator, 20
gradient vector, 17
Green formula, 48, 58
Green function, 72, 78, 81, 82
heat equation, 8, 63, 86
heat flux, 63
Helmholtz equation, 8
Hooke's law, 5
hyperbolic equation, 44, 53
ill-posedness, 35
inertia forces (Tragheitskrafte), 6
integration by parts, 24
Lagrange-Green identity, 25
Laplace equation, 8, 35, 93
Laplace operator, 25
Laplacian, 94
maximum principle, 65
membrane, 7
multi-index, 17
Newton's law, 6; of heat transfer, 64
noncharacteristic, 18
operator: (formally) adjoint, 25; partial differential, 17
order, 1; infinite, 1
parabolic equation, 63
PDE: linear, 1; quasilinear, 1
Picard problem, 52
Planck's constant, 8
Poisson equation, 8, 93
Poisson integral, 82, 88
Poisson kernel, 99
Poisson's formula, 99
principal part, 19, 20
problem: ill-posed, 35; second boundary value, 65; third boundary value, 65; well-posed, 35
region, lens-shaped, 26
Riemann function, 60
rotation, 94
Schrodinger's equation, 8
solution, fundamental, 82
standard form, 23
successive approximations, 54
support, 27
symbol, 19, 20
translation, 94
Tricomi's equation, 13
type: elliptic, 11, 13, 15; hyperbolic, 11, 13, 15; parabolic, 11, 13, 15; ultrahyperbolic, 15
vibrations of a string, 29
wave equation, 20; one-dimensional, 7
Young modulus, 30
