
Physics 129a

051018 Frank Porter

Revision 151029 F. Porter

Introduction

In this note we consider the solution of the operator equation

    Lu = g,    (1)

including both some theoretical remarks and practical approaches. Some

foundational background may be found in the notes on Hilbert Spaces and

on Distributions.

We will denote here whatever Hilbert space we are working in by the

symbol H.

Self-adjoint Operators

Definition: If L is an operator with domain DL , and

hLu|vi = hu|Lvi

(2)

The common appearance of Hermitian operators in physics has to do with

the reality of their eigenvalues, e.g., the eigenvalues of a Hermitian matrix

are real.

However, in general the specification that an operator be Hermitian may

not be restrictive enough, once we consider differential operators on function

spaces. We define the notion of the adjoint of an operator:

Definition: Given an operator L, with domain D_L such that D_L is dense in
H (i.e., D̄_L = H), the adjoint L† of L is defined according to:

    ⟨L†u|v⟩ = ⟨u|Lv⟩,  ∀ v ∈ D_L,    (3)

for all u in the domain D_{L†} for which such an identification can be made.

Definition: The operator L is self-adjoint if

    D_{L†} = D_L,    (4)

and

    L†u = Lu,  ∀ u ∈ D_L.    (5)

Note that a self-adjoint operator is Hermitian, but the converse may not

hold, due to the added condition on domain.

Let us remark further on the issue of domain. A differential equation

is not completely specified until we give certain boundary conditions which

the solutions must satisfy. Thus, for a function to belong to DL , not only

must the expression Lu be defined, but u must also satisfy the boundary

conditions. If the boundary conditions are too restrictive, then we might

have D_L ⊂ D_{L†}, but D_L ≠ D_{L†}, so that a Hermitian operator may not be

self-adjoint.

For example, consider the momentum operator, p = (1/i)d/dx, in quantum mechanics. Suppose x is confined to the region [a, b], and we'll leave the

boundary conditions to be specified. Supposing u and v to be continuous

functions, we may evaluate, using integration by parts:

    ⟨u|pv⟩ = ∫_a^b u*(x) (1/i)(dv/dx)(x) dx    (6)
           = (1/i) u*(x)v(x)|_a^b − ∫_a^b (1/i)(du*/dx)(x) v(x) dx    (7)
           = (1/i)[u*(b)v(b) − u*(a)v(a)] + ⟨qu|v⟩    (8)
           = ⟨p†u|v⟩.    (9)

Hence,

    ⟨p†u|v⟩ − ⟨qu|v⟩ = ⟨(p† − q)u|v⟩ = (1/i)[u*(b)v(b) − u*(a)v(a)].    (10)

We have used the symbol q instead of p here to indicate the operator (1/i)d/dx,
but without implication that the domain of q has the same boundary conditions as p. That is, it may be that D_p is a proper subset of D_q. If p is to be
Hermitian, we must have that the integrated part vanishes:

    u*(b)v(b) − u*(a)v(a) = 0,  ∀ u, v ∈ D_p.    (11)

When this holds, ⟨(p† − q)u|v⟩ = 0 for all v ∈ D_p; since D_p is dense, the only
vector orthogonal to all v is the null vector. Hence, p†u = qu for all u ∈
D_{p†} ⊂ D_q.

Let's try a couple of explicit boundary conditions:

1. Suppose the boundary condition on v is the periodic condition v(a) = v(b).
   Then we can have

       ⟨p†u|v⟩ = ⟨qu|v⟩,  ∀ v ∈ D_p,    (12)

   if and only if u(a) = u(b), in order that the integrated part vanishes.
   Hence D_{p†} = D_p and we have a self-adjoint operator.

2. Now suppose the boundary condition on v is v(a) = v(b) = 0. In this

case, u can be any differentiable function and p will still be a Hermitian

operator. We have that D_{p†} ≠ D_p, hence p is not self-adjoint.

Why are we worrying about this distinction between Hermiticity and self-adjointness? Stay tuned.

Recall that we are interested in solving the equation Lu(x) = g(x) where
L is a differential operator, and a ≤ x ≤ b. One approach to this is to find
the inverse of L, called the Green's function:

    L_x G(x, y) = δ(x − y),  a ≤ x, y ≤ b,    (13)

where the subscript on L indicates the
variable involved.

We may show how u is obtained with the above G(x, y):

    g(x) = ∫_a^b δ(x − y)g(y) dy
         = ∫_a^b L_x G(x, y)g(y) dy
         = L_x [∫_a^b G(x, y)g(y) dy]
         = L_x u(x),    (14)

where

    u(x) = ∫_a^b G(x, y)g(y) dy.    (15)

Now also consider the eigenvalue problem:

    L_x u(x) = λu(x).    (16)

Then

    u(x) = λ ∫_a^b G(x, y)u(y) dy,    (17)

or,

    ∫_a^b G(x, y)u(y) dy = u(x)/λ,    (18)

as long as λ ≠ 0. Thus, the eigenvectors of the (nice) integral operator are
the same as those of the (unnice) differential operator and the corresponding
eigenvalues (with redefinition of the integral operator eigenvalues from that
in the Integral Note) are the inverses of each other.
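This reciprocity is easy to check numerically. The sketch below is an added illustration, not part of the original notes; it assumes NumPy is available, and uses a finite-difference matrix as a stand-in for L = −d²/dx² on [0, π] with Dirichlet boundary conditions, with the matrix inverse playing the role of the Green's function operator:

```python
import numpy as np

# Finite-difference discretization of L = -d^2/dx^2 on [0, pi] with
# u(0) = u(pi) = 0; the exact eigenvalues are n^2, n = 1, 2, 3, ...
N = 200
h = np.pi / (N + 1)
L = (np.diag(2.0 * np.ones(N))
     - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / h**2

# The discrete Green's function operator is the matrix inverse of L.
G = np.linalg.inv(L)

lam_L = np.sort(np.linalg.eigvalsh(L))         # ascending: ~ 1, 4, 9, ...
lam_G = np.sort(np.linalg.eigvalsh(G))[::-1]   # descending: ~ 1, 1/4, 1/9, ...

print(lam_L[:3])         # approximately [1, 4, 9]
print(1.0 / lam_G[:3])   # the same values: the spectra are reciprocal
```

The eigenvectors of the two matrices coincide as well, since matrix inversion preserves them.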

We have the theorem:

Theorem: The eigenvectors of a compact, self-adjoint operator form a complete set (when we include eigenvectors corresponding to zero eigenvalues).

Our integral operator is self-adjoint if G(x, y) = G*(y, x), since then

    ∫_a^b ∫_a^b φ*(x)G(x, y)ψ(y) dx dy = ∫_a^b ∫_a^b [G(y, x)φ(x)]* ψ(y) dx dy,    (19)

that is, ⟨φ|Gψ⟩ = ⟨Gφ|ψ⟩, which is
the definition of self-adjointness. Note that there are no sticky domain questions here, if we take our Hilbert space to be the space of square-integrable
functions on (a, b), denoted L²(a, b). Our integral operator is also compact,
if

    ∫_a^b ∫_a^b |G(x, y)|² dx dy < ∞.    (20)

If the Green's function of a differential
operator L is compact and self-adjoint, then by appending to the set of
eigenvectors of the integral operator the linearly independent solutions of
the homogeneous differential equation

    Lφ_0(x) = 0,    (21)

we obtain a complete set of functions in
L²(a, b). In practice, we have that one means of obtaining the eigenfunctions
of L is to find G and then find the eigenfunctions of the integral equation.

Our problem reduces in some sense to one of finding the conditions on L

so that G is self-adjoint and compact. If L is self-adjoint, then so is G, but remember the added considerations we had to deal with for the self-adjointness

of L. If [a, b] is a finite range, then the theory of ordinary differential equations guarantees that G is compact. If [a, b] is infinite, then we must examine

the particular case. However, for many important cases in physics, G is compact here as well.

Let us see explicitly that the
eigenvalues are real, and that eigenvectors corresponding to distinct eigenvalues are orthogonal. With eigenvectors

    L|φ_i⟩ = λ_i|φ_i⟩,    (22)

we have

    ⟨φ_j|Lφ_i⟩ = λ_i⟨φ_j|φ_i⟩.    (23)

We also have:

    ⟨Lφ_j|φ_i⟩ = λ_j*⟨φ_j|φ_i⟩.    (24)

But ⟨φ_j|Lφ_i⟩ = ⟨Lφ_j|φ_i⟩ by Hermiticity. Hence,

    (λ_i − λ_j*)⟨φ_j|φ_i⟩ = 0.    (25)

Setting j = i shows that the eigenvalues
are real. If i ≠ j and λ_i ≠ λ_j then ⟨φ_j|φ_i⟩ = 0, that is, the eigenvectors
corresponding to distinct eigenvalues are orthogonal. If i ≠ j but λ_i = λ_j
then we may still construct an orthonormal basis of eigenfunctions by using
the Gram-Schmidt algorithm. Thus, we may always pick our eigenfunctions
of a Hermitian operator to be orthonormal:

    ⟨φ_i|φ_j⟩ = δ_ij.    (26)

If the operator is compact and self-adjoint, the eigenfunctions form a complete set
(other cases require further examination). We'll usually assume completeness
of the set of eigenfunctions.

Now let us write our inhomogeneous differential equation in a form suggestive of our study of integral equations:

    Lu(x) − λu(x) = g(x).    (27)

Expand in eigenfunctions:

    |u⟩ = Σ_i |φ_i⟩⟨φ_i|u⟩,    (28)

    |g⟩ = Σ_i |φ_i⟩⟨φ_i|g⟩.    (29)

Substituting into Eqn. 27:

    Σ_i (λ_i − λ)|φ_i⟩⟨φ_i|u⟩ = Σ_i |φ_i⟩⟨φ_i|g⟩.    (30)

Taking the scalar product with an eigenvector gives

    ⟨φ_i|u⟩ = ⟨φ_i|g⟩/(λ_i − λ),    (31)

if λ ≠ λ_i. Thus,

    |u⟩ = Σ_i |φ_i⟩ ⟨φ_i|g⟩/(λ_i − λ) = Σ_i [φ_i(x)/(λ_i − λ)] ∫_a^b φ_i*(y)g(y) dy.    (32)

If λ = λ_i, we will have a solution only if ⟨φ_i|g⟩ = 0.

As in the study of integral equations, we are led to the alternative theorem:

Theorem: Either (L − λ)u = g, where L is a compact linear operator, has
a unique solution:

    |u⟩ = Σ_i |φ_i⟩ ⟨φ_i|g⟩/(λ_i − λ),    (33)

or the homogeneous equation (L − λ)u = 0 has a solution (i.e., λ is
an eigenvalue of L). In this latter case, the original inhomogeneous
equation has a solution if and only if ⟨φ_i|g⟩ = 0 for all eigenvectors φ_i
belonging to eigenvalue λ. If so, the solution is not unique:

    |u⟩ = Σ_{i: λ_i ≠ λ} |φ_i⟩ ⟨φ_i|g⟩/(λ_i − λ) + Σ_{i: λ_i = λ} c_i|φ_i⟩,    (34)

where the c_i are arbitrary constants.
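The unique-solution branch, Eqn. 33, can be exercised numerically. The sketch below is an added illustration (assuming NumPy; the matrix again discretizes −d²/dx² on [0, π] with Dirichlet conditions): it builds the solution of (L − λ)u = g from the eigen-expansion and checks it against a direct linear solve.

```python
import numpy as np

# Discretized L = -d^2/dx^2 on [0, pi] with Dirichlet boundary conditions.
N = 300
h = np.pi / (N + 1)
x = np.linspace(h, np.pi - h, N)
L = (np.diag(2.0 * np.ones(N))
     - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / h**2

lam = 2.5                    # not an eigenvalue (those sit near 1, 4, 9, ...)
g = x * (np.pi - x)          # an arbitrary smooth source term

w, V = np.linalg.eigh(L)     # columns of V: orthonormal eigenvectors phi_i
# eq. (33): u = sum_i phi_i <phi_i|g> / (lambda_i - lambda)
u_series = V @ (V.T @ g / (w - lam))
u_direct = np.linalg.solve(L - lam * np.eye(N), g)

print(np.max(np.abs(u_series - u_direct)))   # agreement to round-off
```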

The Green's function for the operator L − λ, defined by

    u(x) = ∫_a^b G(x, y)g(y) dy,    (35)

is given by:

    G(x, y) = Σ_i φ_i(x)φ_i*(y)/(λ_i − λ).    (36)

1. If λ is real, then G(x, y) = G*(y, x), since λ_i is real. If the eigenfunctions are real, then G is real and symmetric.

2. G(x, y) satisfies the boundary conditions of the differential equation,

since the eigenfunctions do.

3. Since

       g(x) = (L − λ)u(x) = (L_x − λ) ∫_a^b G(x, y)g(y) dy,    (37)

   G satisfies
   the differential equation (an operator equation):

       L_x G(x, y) − λG(x, y) = δ(x − y).    (38)

   Thus, G may be regarded as the solution (including boundary conditions) to the original problem, but for a point source (rather than
   the extended source described by g(x)).

Thus, in addition to the method of finding the eigenfunctions, we have a
second promising practical approach to finding the Green's function, that of
solving the differential equation

    (L_x − λ)G(x, y) = δ(x − y).    (39)

Consider the general second order linear homogeneous differential equation
in one dimension:

    a(x)(d²u/dx²)(x) + b(x)(du/dx)(x) + c(x)u(x) = 0.    (40)

Under quite general conditions, as the reader may discover, this may be
written in the form of a differential equation involving a self-adjoint (with
appropriate boundary conditions) Sturm-Liouville operator:

    Lu = 0,    (41)

where

    L = (d/dx) p(x) (d/dx) − q(x).    (42)

We are especially interested in the eigenvalue equation corresponding
to L, in the form:

    Lu + λwu = 0,    (43)

or

    (d/dx)[p(x)(du/dx)(x)] − q(x)u(x) + λw(x)u(x) = 0.    (44)

Here, λ is an eigenvalue for eigenfunction u(x), satisfying the boundary
conditions. Note the sign change from the eigenvalue equation we often write;
this is just a convention. The function w(x) is a real, non-negative weight
function.

3.1  An Example

Consider the operator

    L = d²/dx² + 1,  x ∈ [0, π],    (45)

with homogeneous boundary conditions u(0) = u(π) = 0. This is of Sturm-Liouville form with p(x) = 1 and q(x) = −1. Suppose we wish to solve the
inhomogeneous equation:

    Lu = cos x.    (46)

A solution to this problem is u(x) = (x/2) sin x, as may be readily verified,
including the boundary conditions. But we notice also that

    (d²/dx²) sin x + sin x = 0.    (47)

Hence we may add to the particular solution any multiple of the homogeneous
solution sin x:

    u(x) = (x/2) sin x + A sin x.    (48)
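The claimed solution is quickly verified numerically. The following sketch (added for illustration, assuming NumPy) checks the boundary values and the residual Lu − cos x using a central-difference second derivative:

```python
import numpy as np

x = np.linspace(0.0, np.pi, 2001)
u = 0.5 * x * np.sin(x)          # candidate solution (x/2) sin x

h = x[1] - x[0]
upp = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2   # u'' on the interior points

residual = upp + u[1:-1] - np.cos(x[1:-1])      # Lu - cos x, L = d^2/dx^2 + 1
print(np.max(np.abs(residual)))                 # O(h^2), i.e. tiny
print(u[0], u[-1])                              # boundary values, both ~ 0
```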

Let's examine how this fits in with our theorems about expansions in
terms of eigenfunctions, and perhaps imagine that the (x/2) sin x solution did
not occur to us. We'll assert (and leave it to the reader to demonstrate)
that our operator with these boundary conditions is self-adjoint. Thus, the
eigenfunctions must form a complete set.

Consider Lu_n = λ_n u_n and Lu = f. We expand:

    u = Σ_n |u_n⟩⟨u_n|u⟩,    (49)

    f = Σ_n |u_n⟩⟨u_n|f⟩.    (50)

Thus, Lu = f implies

    u = Σ_n |u_n⟩ ⟨u_n|f⟩/λ_n,    (51)

provided no λ_n vanishes; if some λ_n = 0 we need ⟨u_n|f⟩ = 0 for that n, and then the
solution is not unique.

Let us proceed to find the eigenfunctions. We may write Lu_n = λ_n u_n in
the form

    u_n″ + (1 − λ_n)u_n = 0.    (52)

With our boundary conditions, the solutions are

    u_n(x) = A_n sin nx,  n = 1, 2, 3, . . . ,    (53)

with eigenvalues λ_n = 1 − n². Choosing A_n = √(2/π) gives an
orthonormal set.

The zero eigenvalue looks like trouble for equation 51. Either the solution
to the problem Lu = cos x doesn't exist, or it is of the form:

    u(x) = Σ_{n=2}^∞ [(2/π) ∫_0^π sin ny cos y dy / (1 − n²)] sin nx + C sin x.    (54)

We start
the sum at n = 2 because of the difficulty with λ_1 = 0. However, we must
check that this is permissible:

    ∫_0^π sin y cos y dy = 0,    (55)

so far so good.

In general, we need, for n = 2, 3, . . .:

    I_n ≡ ∫_0^π sin nx cos x dx
        = (1/2) ∫_0^π [sin(n + 1)x + sin(n − 1)x] dx
        = { 0,             n odd,
            2n/(n² − 1),   n even.    (56)

Hence,

    u(x) = Σ_{n=2,even}^∞ (2/π) [2n/(n² − 1)]/(1 − n²) sin nx + C sin x
         = C sin x − (4/π) Σ_{n=2,even}^∞ [n/(n² − 1)²] sin nx.    (57)

But does the summation above give (x/2) sin x? We should check that our
two solutions agree. In fact, we'll find that

    (x/2) sin x ≠ −(4/π) Σ_{n=2,even}^∞ [n/(n² − 1)²] sin nx.    (58)

But we'll also find that this is all right, because the difference is just something proportional to sin x, the solution to the homogeneous equation. To
answer this question, we evidently wish to find the sine series expansion of
(x/2) sin x:

    f(x) = Σ_{n=1}^∞ a_n √(2/π) sin nx,    (59)

where f(x) = (x/2) sin x, and the interval of interest is (0, π).

We need to determine the expansion coefficients a_n:

    a_n = √(2/π) ∫_0^π f(x) sin nx dx
        = √(2/π) (1/2) ∫_0^π x sin x sin nx dx
        = √(2/π) (1/4) ∫_0^π x [cos(n − 1)x − cos(n + 1)x] dx.

For n ≠ 1, using ∫_0^π x cos kx dx = (cos kπ − 1)/k² (for k ≠ 0), this gives

    a_n = { 0,                         n odd, n ≠ 1,
            −√(2/π) · 2n/(n² − 1)²,   n ≥ 2, even,    (60)

while for n = 1, using ∫_0^π x sin²x dx = π²/4,

    a_1 = √(2/π) · π²/8.    (61)

Hence,

    (x/2) sin x = (π/4) sin x − (4/π) Σ_{n=2,even}^∞ [n/(n² − 1)²] sin nx.    (62)

Comparing with Eqn. 57, the two solutions differ only by (π/4) sin x, a
multiple of the homogeneous solution, as anticipated.
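The sine-series identity in Eqn. 62 can be checked numerically. The sketch below (an added illustration, assuming NumPy) sums the even-n series and compares against the closed form:

```python
import numpy as np

x = np.linspace(0.0, np.pi, 501)
lhs = 0.5 * x * np.sin(x)

rhs = (np.pi / 4.0) * np.sin(x)
for n in range(2, 400, 2):                        # even n only
    rhs -= (4.0 / np.pi) * n / (n**2 - 1)**2 * np.sin(n * x)

print(np.max(np.abs(lhs - rhs)))   # small: the terms decay like 1/n^3
```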

Suppose, instead, that we wished to solve

    Lu = cos x,    (63)

again with L = d²/dx² + 1, and x ∈ [0, π], but now with boundary conditions
u(0) = u′(0) = 0. Certainly the solution u(x) = (x/2) sin x still works, and
satisfies the boundary conditions.

Let us try the eigenfunction expansion approach once more. Start with
the eigenvalue equation:

    u″ + (1 − λ)u = 0.    (64)

The general solution is of the form

    u(x) = A sin ax + B cos ax,  a = √(1 − λ).    (65)

The boundary condition u(0) = 0 implies B = 0.
The boundary condition u′(0) = 0 implies a = 0 or A = 0, again depending
on λ. There is no non-trivial solution to the eigenvalue equation with these
boundary conditions! In particular, there is no non-trivial solution to the
homogeneous equation Lu = 0, so the above solution must be unique.

We see that our attempt to solve this problem by expanding in eigenfunctions failed. What went wrong? The problem is that L is not self-adjoint, or

even Hermitian, so our nice theorem does not apply. For L to be Hermitian,

we must have

    ⟨Lu|v⟩ = ⟨u|Lv⟩    (66)

for all u, v ∈ D_L, that is, for all u, v satisfying the boundary conditions. Let's
evaluate:

    ⟨Lu|v⟩ − ⟨u|Lv⟩ = ∫_0^π (d²u*/dx²)(x) v(x) dx − ∫_0^π u*(x)(d²v/dx²)(x) dx
                    = [(du*/dx)(x)v(x) − u*(x)(dv/dx)(x)]_0^π
                    = (du*/dx)(π)v(π) − u*(π)(dv/dx)(π),    (67)

where the "+1" pieces of L cancel, and the boundary conditions at x = 0 have
been used in the last step. This does not vanish in general.

Can we still find a Green's function for this operator? We wish to find
G(x, y) such that:

    (∂²G/∂x²)(x, y) + G(x, y) = δ(x − y).    (68)

Away from x = y the general solution is

    G(x, y) = { A(y) sin x + B(y) cos x,  x < y,
                C(y) sin x + D(y) cos x,  x > y.    (69)

The boundary conditions give

    G(0, y) = 0 = B(y),    (70)

    (dG/dx)(0, y) = 0 = A(y).    (71)

That is, G(x, y) = 0 for x < y. Notice that we have made no requirement
on boundary conditions for y. Since L is not self-adjoint, there is no such
constraint!

At x = y, G must be continuous, and its first derivative must have a
discontinuity of one unit, so that we get δ(x − y) when we take the second
derivative:

    0 = C(y) sin y + D(y) cos y,    (72)

    1 = C(y) cos y − D(y) sin y.    (73)

Thus, C(y) = cos y and D(y) = −sin y, and the Green's function is:

    G(x, y) = { 0,                                        x < y,
                sin x cos y − cos x sin y = sin(x − y),   x > y.    (74)

Let's check that we can find the solution to the original equation Lu(x) =
cos x. It should be given by

    u(x) = ∫_0^x sin(x − y) cos y dy
         = (x/2) sin x.    (75)

The lesson is that, even when the eigenvector approach fails, the method of
Green's functions may remain fruitful.
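The quadrature in Eqn. 75 is also easy to confirm numerically (an added check, assuming SciPy is available):

```python
import numpy as np
from scipy.integrate import quad

def u(x):
    # G(x, y) = sin(x - y) for x > y, and 0 otherwise; the source is cos y
    val, _ = quad(lambda y: np.sin(x - y) * np.cos(y), 0.0, x)
    return val

for xv in [0.5, 1.0, 2.0, 3.0]:
    print(u(xv), 0.5 * xv * np.sin(xv))   # the two columns agree
```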

3.2  Weights

Let us consider further the role of the weight function w(x)
in the Sturm-Liouville eigenvalue equation. Suppose u_i and u_j are solutions,
corresponding to eigenvalues λ_i and λ_j:

    Lu_i = −λ_i w u_i,    (76)

    Lu_j = −λ_j w u_j.    (77)

Then,

    ∫_a^b u_j* (Lu_i) dx = −λ_i ∫_a^b u_j* w u_i dx,    (78)

    ∫_a^b u_i (Lu_j)* dx = −λ_j* ∫_a^b u_j* w u_i dx,    (79)

where we have used the fact that w(x) is real. By Hermiticity, the two lines
above are equal, and we have:

    (λ_i − λ_j*) ∫_a^b u_j* w u_i dx = 0.    (80)

Setting j = i shows that the eigenvalues
are real. With i ≠ j, u_i and u_j are orthogonal when the weight function
w is included, at least for λ_i ≠ λ_j. When λ_i = λ_j we may again use the
Gram-Schmidt procedure, now including the weight function, to obtain an
orthogonal set.

Normalizing, we may express our orthonormality of eigenfunctions as:

    ∫_a^b u_j*(x)u_i(x)w(x) dx = δ_ij.    (81)

Since w(x) ≥ 0 (and we suppose it vanishes at most on a set
of measure zero), this weighted integral defines a scalar product. We'll see
several examples in the following section.
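As a concrete instance of such a weighted scalar product (an added illustration, assuming NumPy and SciPy; the weight w(x) = (1 − x²)^{−1/2} is the one that will appear for the Chebyshev equation in Section 4.9), the substitution x = cos θ tames the endpoint singularity:

```python
import numpy as np
from scipy.integrate import quad
from numpy.polynomial.chebyshev import Chebyshev

# Weighted inner product  int_{-1}^{1} T_n(x) T_m(x) (1 - x^2)^{-1/2} dx,
# evaluated via x = cos(theta), so the weighted measure becomes d(theta).
def inner(n, m):
    Tn, Tm = Chebyshev.basis(n), Chebyshev.basis(m)
    val, _ = quad(lambda t: Tn(np.cos(t)) * Tm(np.cos(t)), 0.0, np.pi)
    return val

print(inner(2, 3))   # ~ 0 : orthogonal once the weight is included
print(inner(2, 2))   # ~ pi/2 : the normalization for n >= 1
```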

4  Sturm-Liouville Examples

Many of the second order differential equations encountered in physics are,
or can be put, into Sturm-Liouville form. We give several important examples in this section.

4.1  Legendre Equation

Legendre's equation is:

    (1 − x²)(d²u/dx²) − 2x(du/dx) + ℓ(ℓ + 1)u = 0.    (82)

The typical situation is for x to be the cosine of a polar angle, and hence

|x| 1. When ` is an integer, the solutions are the Legendre polynomials

P` (x) and the Legendre functions of the second kind Q` (x). These solutions

may be obtained by assuming a series solution and substituting into the

differential equation to discover the recurrence relation.

We may put the Legendre equation in Sturm-Liouville form by letting

    p(x) = 1 − x²,    (83)
    q(x) = 0,    (84)
    w(x) = 1,    (85)
    λ = ℓ(ℓ + 1).    (86)

4.2  Associated Legendre Equation

Closely related is the associated Legendre
equation:

    (1 − x²)(d²u/dx²) − 2x(du/dx) + ℓ(ℓ + 1)u − [m²/(1 − x²)]u = 0.    (87)

This may be put in Sturm-Liouville form the same as the Legendre equation,
except now with

    q(x) = m²/(1 − x²).    (88)

Again, this equation typically arises for x = cos θ. The additional term
arises when the azimuthal symmetry is broken. That is, when dealing with
the Laplacian in spherical coordinates, the term

    (1/sin²θ)(∂²/∂φ²)    (89)

acting on an azimuthal dependence of the form e^{imφ} produces
−m²/sin²θ = −m²/(1 − cos²θ). In
this case ℓ is a non-negative integer, and m takes on values −ℓ, −ℓ + 1, . . . , ℓ.
The solutions are the associated Legendre polynomials P_ℓ^m(x), and a second set
(singular at |x| = 1), Q_ℓ^m(x). The associated Legendre polynomials may be
obtained from the Legendre polynomials according to:

    P_ℓ^m(x) = (−1)^m (1 − x²)^{m/2} (d^m/dx^m) P_ℓ(x).    (90)

4.3  Bessel Equation

While the Legendre equation appears when one applies separation of variables to the Laplacian in spherical coordinates, the Bessel equation shows up

similarly when cylindrical coordinates are used. In this case, the interpretation of x is as a cylindrical radius, so typically the region of interest is x ≥ 0.

But the Bessel equation shows up in many places where it might not be so

expected. The equation is:

    x²(d²u/dx²) + x(du/dx) + (x² − n²)u = 0.    (91)

We could put this in Sturm-Liouville form by letting (noting that simply
letting p(x) = x² doesn't work, we divide the equation by x):

    p(x) = x,    (92)
    q(x) = n²/x,    (93)
    w(x) = x,    (94)
    λ = 1.    (95)

However, it is more instructive to exhibit the Sturm-Liouville structure in a context
where the equation arises. Thus, suppose we are interested in solving the
Helmholtz equation (or wave equation) in cylindrical coordinates:

    ∇²ψ + k²ψ = 0,    (96)

or

    (1/r)(∂/∂r)(r ∂ψ/∂r) + (1/r²)(∂²ψ/∂φ²) + (∂²ψ/∂z²) + k²ψ = 0.    (97)

We try a separated solution of
the form:

    ψ(r, φ, z) = R(r)Φ(φ)Z(z).    (98)

The general solution is constructed by linear combinations of such separated solutions.

Substituting the proposed solution back into the differential equation,
and dividing through by ψ = RΦZ, we obtain:

    (1/rR)(d/dr)(r dR/dr) + (1/r²Φ)(d²Φ/dφ²) + (1/Z)(d²Z/dz²) + k² = 0.    (99)

The third term can depend only on z,
since the other terms do not depend on z. Thus, the third term must be a
constant, call it μ² − k².

If we multiply the equation through by r², we now have an equation in
two variables:

    (r/R)(d/dr)(r dR/dr) + r²μ² + (1/Φ)(d²Φ/dφ²) = 0.    (100)

The third term does not depend on r. It also cannot depend on φ, since the
first two terms have no φ dependence. It must therefore be a constant, call
it −c². Then

    (d²Φ/dφ²) = −c²Φ,    (101)

with solutions of the form:

    Φ = Ae^{icφ}.    (102)

But φ is an angle, so we require periodic boundary conditions:

    Φ(φ + 2π) = Φ(φ).    (103)

Hence c = n, an integer.

Thus, we have a differential equation in r only:

    (r/R)(d/dr)(r dR/dr) + r²μ² − n² = 0,    (104)

or

    (d/dr)(r dR/dr) + μ²rR − (n²/r)R = 0.    (105)

It is natural here to regard n as fixed by the Φ equation,
and to treat μ² as the eigenvalue, related to k² and the boundary conditions
of the z equation. The equation is then in Sturm-Liouville form with:

    p(r) = r,    (106)
    q(r) = n²/r,    (107)
    w(r) = r,    (108)
    λ = μ².    (109)

The substitution x = μr recovers
Eqn. 91, that we started with.

The solutions to Bessel's equation are denoted J_n(x) = J_n(μr) and Y_n(x) =
Y_n(μr). The orthogonality condition for the J_n solutions reads:

    ∫_a^b J_n(αx)J_n(βx) x dx = [x (β J_n(αx)J_n′(βx) − α J_n′(αx)J_n(βx)) / (α² − β²)]_a^b.    (110)

When the endpoints are taken at zeros of J_n (or at x = 0), the corresponding
normalization condition reads:

    ∫_a^b J_n²(αx) x dx = [(x²/2) J_{n+1}²(αx)]_a^b.    (111)

For example, suppose we wish to expand a function f(x) on [0, b]:

    f(x) = Σ_{m=1}^∞ c_m J_n(k_m x),    (112)

where the k_m are chosen such that J_n(k_m b) = 0. Then

    ∫_0^b J_n²(k_m x) x dx = (b²/2)[J_{n+1}(k_m b)]²,    (113)

and

    c_m = [∫_0^b f(x)J_n(k_m x) x dx] / [(b²/2)[J_{n+1}(k_m b)]²].    (114)
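A short numerical sketch of this Fourier-Bessel expansion (an added illustration, assuming SciPy; the choice f(x) = 1 and b = 1 is for concreteness):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, j1, jn_zeros

# Expand f(x) = 1 on [0, b] in J_0(k_m x), with J_0(k_m b) = 0.
b = 1.0
M = 40
k = jn_zeros(0, M) / b           # first M positive zeros of J_0

def coeff(m):
    num, _ = quad(lambda x: 1.0 * j0(k[m] * x) * x, 0.0, b, limit=200)
    return num / (0.5 * b**2 * j1(k[m] * b)**2)

c = np.array([coeff(m) for m in range(M)])

# For this f the coefficients have the closed form 2 / (k_m J_1(k_m b)).
print(np.max(np.abs(c - 2.0 / (k * j1(k * b)))))   # ~ 0

x0 = 0.4
print(np.sum(c * j0(k * x0)))    # partial sum, close to f(x0) = 1
```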

4.4

Consider the familiar equation

    (d²u/dx²) + ω²u = 0.    (115)

This is in Sturm-Liouville form with

    p(x) = 1,    (116)
    q(x) = 0,    (117)
    w(x) = 1,    (118)
    λ = ω².    (119)

With boundary conditions u(0) = u(a) = 0, the eigenfunctions are

    u(x) = sin(nπx/a),  n = 1, 2, . . . ,    (120)

corresponding to ω = nπ/a.

4.5  Hermite Equation

The Schrödinger equation for the simple harmonic oscillator is

    −(d²ψ/dx²) + x²ψ = Eψ,    (121)

in units where ħ = 1, mass m = 1/2, and spring constant k = 2. This is
already essentially in Sturm-Liouville form.

However, we expect ψ to satisfy the boundary conditions ψ(±∞) = 0. A useful
approach, when we have such criteria, is to divide out the known behavior.
For large |x|, ψ(x) → 0 and the Eψ term is negligible compared with x²ψ, so in this regime the equation becomes, approximately:

    (d²ψ/dx²) − x²ψ = 0.    (122)

The acceptable behavior at large |x| is ψ ∼ exp(−x²/2). Thus, we'll let

    ψ(x) = u(x)e^{−x²/2}.    (123)

Substituting this form for ψ(x) into the original Schrödinger equation
yields the following differential equation for u(x):

    (d²u/dx²) − 2x(du/dx) + 2λu = 0,    (124)

where 2λ = E − 1. This is the Hermite equation.

From the way we obtained the Hermite equation, we have a clue for
putting it into Sturm-Liouville form. If we let

    p(x) = e^{−x²},    (125)

we have

    (d/dx)[p(x)(du/dx)] = e^{−x²}(d²u/dx²) − 2x e^{−x²}(du/dx).    (126)

Hence the Hermite equation is in Sturm-Liouville form with

    q(x) = 0,
    w(x) = e^{−x²},    (127)

and eigenvalue 2λ. For λ = n = 0, 1, 2, . . . the polynomial solutions are the
Hermite polynomials, H_n(x).
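Numerically, the first few Hermite polynomials can be checked against Eqn. 124 with λ = n (an added illustration; numpy.polynomial supplies the physicists' Hermite polynomials used here):

```python
import numpy as np
from numpy.polynomial.hermite import Hermite

x = np.linspace(-3.0, 3.0, 25)
for n in range(6):
    H = Hermite.basis(n)                                 # physicists' H_n
    res = H.deriv(2)(x) - 2.0 * x * H.deriv(1)(x) + 2.0 * n * H(x)
    print(n, np.max(np.abs(res)))   # ~ 0: H_n solves H'' - 2x H' + 2n H = 0
```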

4.6  Laguerre Equation

The Laguerre equation is

    x(d²u/dx²) + (1 − x)(du/dx) + λu = 0.    (128)

It arises, for example, in the radial equation for the hydrogen atom Schrödinger
equation. With a similar approach as that for the Hermite equation above,
we can put it into Sturm-Liouville form with:

    p(x) = xe^{−x},
    q(x) = 0,
    w(x) = e^{−x},    (129)

and eigenvalue λ.

For λ = n = 0, 1, 2, . . . the solutions are the
Laguerre polynomials, L_n(x):

    L_n(x) = (e^x/n!) (d^n/dx^n)(x^n e^{−x}).    (130)

4.7  Associated Laguerre Equation

The associated Laguerre equation is

    x(d²u/dx²) + (k + 1 − x)(du/dx) + (λ − k)u = 0,    (131)

where k = 0, 1, 2, . . .. It may likewise be put into Sturm-Liouville form.
For λ = n = 0, 1, 2, . . . we have polynomial solutions:

    L_n^k(x) = (d^k/dx^k) L_n(x),    (132)

the associated Laguerre polynomials. These polynomials
arise also in the radial dependence of the hydrogen atom energy states with
orbital angular momentum.

4.8  Hypergeometric Equation

The hypergeometric equation is

    x(1 − x)u″ + [c − (a + b + 1)x]u′ − abu = 0.    (133)

It may be put into Sturm-Liouville form with:

    p(x) = [x/(1 − x)]^c (1 − x)^{a+b+1},
    q(x) = 0,
    w(x) = p(x)/[x(1 − x)],
    λ = −ab.    (134)

We leave it to the reader to explore this equation further, including
its partner the confluent hypergeometric equation, and the relation of the
solutions to a variety of other special functions.

4.9  Chebyshev Equation

In the hypergeometric equation, let a = −n and b = n, with n a non-negative
integer. Also, let c = 1/2. Then the differential equation is:

    x(1 − x)u″ + (1/2 − x)u′ + n²u = 0.    (135)

The substitution x = (1 − z)/2 turns this into the Chebyshev equation:

    (1 − z²)(d²u/dz²) − z(du/dz) + n²u = 0    (136)

(the reader may be amused by the variety of spellings encountered
in the translation of the name).

This may be put into Sturm-Liouville form with:

    p(z) = (1 − z²)^{1/2},
    q(z) = 0,
    w(z) = 1/p(z),
    λ = n².    (137)

Solutions include the Chebyshev polynomials of the first kind, T_n(z). These
polynomials have the feature that their oscillations are of uniform amplitude
in the interval [−1, 1].

5  Orthogonal Polynomials

We find that a large class of our special functions can be described as orthogonal polynomials in the real variable x, with real coefficients:

Definition: A set of polynomials, {f_n(x) : n = 0, 1, 2, . . .}, where f_n is of
degree n, defined on x ∈ [a, b] such that

    ∫_a^b f_n(x)f_m(x)w(x) dx = 0,  n ≠ m,    (138)

is called a system of orthogonal polynomials with weight w. It is assumed
that w(x) ≥ 0.

5.1

The different systems of orthogonal polynomials are distinguished by

the weight function and the interval. That is, the system of polynomials in [a, b] is uniquely determined by w(x) up to a constant for each

polynomial. The reader may consider proving this by starting with the

n = 0 polynomial and thinking in terms of the Gram-Schmidt algorithm.


The choice of value for the remaining multiplicative constant for each

polynomial is called standardization.

After standardization, the normalization condition is written:

    ∫_a^b f_n(x)f_m(x)w(x) dx = h_n δ_nm.    (139)

We write the polynomials as

    f_n(x) = k_n x^n + k_n′ x^{n−1} + k_n″ x^{n−2} + . . . + k_n^{(n)},    (140)

where n = 0, 1, 2, . . .

Note that with suitable weight function, infinite intervals are possible.

We'll discuss orthogonal polynomials here from an intuitive perspective.

Start with the following theorem:

Theorem: The orthogonal polynomial fn (x) has n real, simple zeros, all in

(a, b) (note that this excludes the boundary points).

Let us argue the plausibility of this. First, we notice that since a polynomial

of degree n has n roots, there can be no more than n real, simple zeros in

(a, b). Second, we keep in mind that fn (x) is a continuous function, and that

w(x) ≥ 0.

Then we may adopt an inductive approach. For n = 0, f0 (x) is just a

constant. There are zero roots, in agreement with our theorem. For n > 0,

we must have

Z b

fn (x)f0 (x)w(x)dx = 0.

(141)

a

If fn nowhere goes through zero in (a, b), we cannot accomplish this. Hence,

there exists at least one real root for each fn with n > 0.

For n = 1, f_1(x) = α + βx, a straight line. It must go through zero

somewhere in (a, b) as we have just argued, and a straight line can have only

one zero. Thus, for n = 1 there is one real, simple root.

For n > 1, consider the possibilities: We must have at least one root in

(a, b). Can we accomplish orthogonality with exactly two degenerate roots?

Certainly not, because then we cannot be orthogonal with f0 (the polynomial will always be non-negative or always non-positive in (a, b)). Can we be

orthogonal to both f0 and f1 with only one root? By considering the possibilities for where this root could be compared with the root of f1 , and imposing

orthogonality with f0 , it may be readily seen that this doesn't work either.


The reader is invited to develop this into a rigorous proof. Thus, there must

be at least two distinct real roots in (a, b) for n > 1. For n = 2, there are

thus exactly two real, simple roots in (a, b).

The inductive proof would then assert the theorem for all k < n, and

demonstrate it for n.

Once we are used to this theorem, the following one also becomes plausible:

Theorem: The zeros of fn (x) and fn+1 (x) alternate (and do not coincide)

on (a, b).

These two theorems are immensely useful for qualitative understanding

of solutions to problems, without ever solving for the polynomials. For example, the qualitative nature of the wave functions for the different energy

states in a quantum mechanical problem may be pictured.
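The interlacing theorem is easy to observe numerically. The sketch below (an added illustration, assuming NumPy) checks it for the Legendre polynomials:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

# Between consecutive zeros of P_{n+1} there lies exactly one zero of P_n.
for n in range(1, 8):
    r_n = np.sort(Legendre.basis(n).roots().real)
    r_np1 = np.sort(Legendre.basis(n + 1).roots().real)
    interlaced = bool(np.all((r_np1[:-1] < r_n) & (r_n < r_np1[1:])))
    print(n, interlaced)    # True for every n
```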

The following theorem is also intuitively plausible:

Theorem: For any given subinterval of (a, b) there exists an N such that

whenever n > N , fn (x) vanishes at least once in the subinterval. That

is, the zeros of {fn (x)} are dense in (a, b).

Note the plausibility: The alternative would be that the zeros somehow decided to bunch up at particular places. But this would then make satisfying

the orthogonality difficult (impossible).

Finally, we have the approximation theorem:

Theorem: Given an arbitrary function g(x), and all polynomials {p_k : k ≤
n} of degree less than or equal to n (linear combinations of {f_k}), there
exists exactly one polynomial q_n for which ‖g − q_n‖ is minimized:

    q_n(x) = Σ_{k=0}^n a_k f_k(x),    (142)

where

    a_k = ⟨f_k|g⟩/h_k.    (143)

The difference g(x) − q_n(x) changes sign at least n + 1 times
in (a, b), or else vanishes identically.

Implicit in this theorem is that any polynomial of degree less than or equal to n
can be expressed as a linear combination of the orthogonal polynomials of
degree ≤ n. This is clearly possible, as an explicit construction along the
lines of Eqs. 142 and 143 can be performed.

The content of the theorem may be understood by
noticing that

    Σ_{k=0}^n |f_k⟩⟨f_k|g⟩/h_k    (144)

projects out the component of g in the subspace spanned by f0 , f1 , . . . , fn .

Any component of g which is orthogonal to this subspace is unreachable,

the best we can do is perfectly approximate the component in the subspace.

That is, write g = g_∥ + g_⊥, as the decomposition into the components of
g within and orthogonal to the subspace spanned by {f_k, k = 0, 1, . . . , n}.
We'll take h_k = 1 for simplicity here. By definition,

    ⟨g_⊥|f_k⟩ = 0,  k = 0, 1, 2, . . . , n,    (145)

and we expect that

    g_∥ = Σ_{k=0}^n a_k f_k = q_n.    (146)

Indeed,

    ‖g − q_n‖² = ⟨g_∥ + g_⊥ − q_n|g_∥ + g_⊥ − q_n⟩
               = ⟨g_∥ − q_n|g_∥ − q_n⟩ + ⟨g_⊥|g_⊥⟩
               = ⟨g_⊥|g_⊥⟩,    (147)

while for any polynomial u of degree ≤ n,

    ‖g − u‖² = ‖g_∥ − u‖² + ‖g_⊥‖².    (148)

Hence we have done the best we can; what remains is the component
g_⊥, orthogonal to the subspace.

5.2  Rodrigues Formula

The Legendre polynomials may be obtained by means of the Rodrigues formula:

    P_n(x) = [(−1)^n/(2^n n!)] (d^n/dx^n)(1 − x²)^n.    (149)

This form may be generalized to include some of the other systems of polynomials, obtaining the generalized Rodrigues formula:

    f_n(x) = [1/(K_n w(x))] (d^n/dx^n){w(x)[s(x)]^n},    (150)

where K_n is a standardization constant and s(x) is a polynomial (of at most
degree 2), independent of n. We remark that not all possible weight functions

will produce polynomials with this equation. For the Legendre polynomials,

w(x) = 1 and the normalization is:

    ∫_{−1}^1 [P_ℓ(x)]² dx = 2/(2ℓ + 1).    (151)
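Both the Rodrigues formula (in the equivalent form P_n = [1/(2^n n!)] dⁿ/dxⁿ (x² − 1)ⁿ) and this normalization can be checked numerically (an added sketch, assuming NumPy and SciPy):

```python
import numpy as np
from math import factorial
from numpy.polynomial import Polynomial
from numpy.polynomial.legendre import Legendre
from scipy.integrate import quad

def rodrigues(n):
    # P_n(x) = 1/(2^n n!) d^n/dx^n (x^2 - 1)^n, equivalent to eq. (149)
    p = Polynomial([-1.0, 0.0, 1.0]) ** n
    return p.deriv(n) / (2**n * factorial(n))

for n in range(5):
    Pn = rodrigues(n)
    ref = Legendre.basis(n).convert(kind=Polynomial)
    norm, _ = quad(lambda x: Pn(x)**2, -1.0, 1.0)
    print(n, np.allclose(Pn.coef, ref.coef), norm, 2.0 / (2*n + 1))
```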

Corresponding to the generalized Rodrigues formula, we have the Sturm-Liouville equation for orthogonal polynomials:

    (d/dx)[s(x)w(x)(df_n/dx)(x)] + λ_n w(x)f_n(x) = 0,    (152)

where

    λ_n = −n [K_1 (df_1/dx) + (1/2)(n − 1)(d²s/dx²)].    (153)

"

Recurrence Relation

the form:

fn+1 (x) = (an + bn x) fn (x) cn fn1 (x),

(154)

where

kn+1

kn

!

0

kn+1

kn0

= bn

kn+1 kn

hn kn+1 kn1

=

,

hn1

kn2

bn =

(155)

an

(156)

cn

(157)

We may define the following projection operator, onto the subspace
spanned by the first n + 1 orthogonal polynomials:

    J_n(x, y) ≡ Σ_{j=0}^n f_j(x)f_j(y)/h_j.    (158)

Theorem:

    J_n(x, y) = [k_n/(h_n k_{n+1})] [f_{n+1}(x)f_n(y) − f_n(x)f_{n+1}(y)]/(x − y).    (159)

This is known as the Christoffel-Darboux theorem.

To verify the theorem, multiply both sides of Eqn. 159 by (x − y) and take matrix elements
between ⟨f_i| and |f_k⟩. The right side gives:

    [k_n/(h_n k_{n+1})] (⟨f_i|f_{n+1}⟩⟨f_n|f_k⟩ − ⟨f_i|f_n⟩⟨f_{n+1}|f_k⟩)
    = [k_n h_i h_k/(h_n k_{n+1})] (δ_{i(n+1)}δ_{nk} − δ_{in}δ_{(n+1)k})
    = { (k_n/k_{n+1}) h_{n+1},    i = n + 1 and k = n,
        −(k_n/k_{n+1}) h_{n+1},   i = n and k = n + 1,
        0                          otherwise.    (160)

The left side gives:

    ⟨f_i|(x − y) Σ_{j=0}^n f_j(x)f_j(y)/h_j |f_k⟩
    = Σ_{j=0}^n (1/h_j)(⟨f_i|xf_j⟩⟨f_j|f_k⟩ − ⟨f_i|f_j⟩⟨f_j|xf_k⟩)
    = Σ_{j=0}^n (1/h_j)(h_k δ_{kj}⟨f_i|xf_j⟩ − h_i δ_{ij}⟨f_j|xf_k⟩)
    = { 0,             i ≤ n and k ≤ n,
        0,             i > n and k > n,
        ⟨f_i|xf_k⟩,    i > n and k ≤ n,
        −⟨f_i|xf_k⟩,   i ≤ n and k > n.    (161)

Thus, we must evaluate, for example, ⟨f_i|xf_k⟩, with i > n and k ≤ n. We use the recurrence relation:

    f_{k+1} = (a_k + b_k x)f_k − c_k f_{k−1},    (162)

or,

    xf_k = (1/b_k)(f_{k+1} − a_k f_k + c_k f_{k−1}).    (163)

Therefore,

    ⟨f_i|xf_k⟩ = (1/b_k)(⟨f_i|f_{k+1}⟩ − a_k⟨f_i|f_k⟩ + c_k⟨f_i|f_{k−1}⟩)
               = (h_i/b_k)(δ_{i(k+1)} − a_k δ_{ik} + c_k δ_{i(k−1)}).    (164)

For i > n and k ≤ n this is non-zero only for i = n + 1 and k = n. Then

    ⟨f_i|xf_k⟩ = h_{n+1}/b_n = (k_n/k_{n+1}) h_{n+1},    (165)

which is the desired result. A similar analysis applies to the −⟨f_i|xf_k⟩
case, with i ≤ n and k > n.

The Christoffel-Darboux theorem thus recasts the kernel
of an operator which projects onto the subspace spanned by the first n + 1
polynomials into the form of a simple, symmetric kernel.
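The identity can be verified directly for the Legendre polynomials, for which k_n = (2n)!/(2ⁿ(n!)²) and h_n = 2/(2n + 1) (an added numerical sketch, assuming NumPy):

```python
import numpy as np
from math import factorial
from numpy.polynomial.legendre import Legendre

def k(n):                    # leading coefficient of P_n
    return factorial(2 * n) / (2**n * factorial(n)**2)

def h(n):                    # normalization integral of P_n
    return 2.0 / (2 * n + 1)

n = 6
x, y = 0.3, -0.7
P = [Legendre.basis(j) for j in range(n + 2)]

lhs = sum(P[j](x) * P[j](y) / h(j) for j in range(n + 1))
rhs = (k(n) / (h(n) * k(n + 1))) * (P[n + 1](x) * P[n](y)
                                    - P[n](x) * P[n + 1](y)) / (x - y)
print(lhs, rhs)              # equal, per Christoffel-Darboux
```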

5.4  Hermite Polynomials

The Hermite polynomials satisfy the differential equation

    (d²/dx²)H_n(x) − 2x(d/dx)H_n(x) + 2nH_n(x) = 0,    (166)

which is in Sturm-Liouville form with

    p(x) = e^{−x²},
    q(x) = 0,
    w(x) = e^{−x²},    (167)
    λ = 2n.    (168)

This gives:

    (d/dx)[e^{−x²}(dH_n/dx)] + 2ne^{−x²}H_n(x) = 0.    (169)

To obtain the generalized Rodrigues formula for the Hermite polynomials, we note that since p(x) = s(x)w(x), we must have s(x) = 1. Hence, the
generalized Rodrigues formula is:

    H_n(x) = [1/(K_n w(x))](d^n/dx^n)[w(x)s(x)^n]
           = (e^{x²}/K_n)(d^n/dx^n)e^{−x²}.    (170)

Since

    (d/dx)e^{−x²} = −2xe^{−x²},    (171)

the order-x^n leading term is

    e^{x²}(d^n/dx^n)e^{−x²} = (−2)^n x^n + O(x^{n−1}).    (172)

With the standardization K_n = (−1)^n, we hence have k_n = 2^n. The reader may demonstrate that k′_n = 0, and also that the
normalization is

    h_n = √π 2^n n!.    (173)

6  Green's Function for the Sturm-Liouville Equation

Let us now consider the construction of the Green's
function for the inhomogeneous Sturm-Liouville problem. The problem is to
find u satisfying:

    Lu = (pu′)′ − qu = −φ,    (174)

for x ∈ [a, b], and where φ(x) is a continuous function of x. We look for a
solution in the form of the integral equation:

    u(x) = ∫_a^b G(x, y)φ(y) dy.    (175)

We make some initial observations to guide our determination of G:

1. Since u(x) satisfies the boundary conditions, G(x, y) must also, at least
   in x.

2. Presuming p and q are continuous functions, G(x, y) is a continuous
   function of x.

3. G′ ≡ ∂G(x, y)/∂x and G″ are continuous functions of x, except at x = y,
   where

       lim_{ε→0} [G′(y + ε, y) − G′(y − ε, y)] = −1/p(y).    (176)

4. Except at x = y, G satisfies the differential equation L_x G = 0.

We remark that properties 2-4 follow from the operator equation:

    L_x G(x, y) = −δ(x − y),    (177)

where the minus sign here is from the −φ on the right hand side of Eqn. 174.
Writing this out:

    L_x G = pG″ + p′G′ − qG = −δ(x − y).    (178)

The second derivative term must be the term that gives the delta function.
That is, in the limit as ε → 0:

    ∫_{y−ε}^{y+ε} p(x)G″(x, y) dx → −1.    (179)

Since p is continuous, it may be taken out of
the integral in this limit, yielding

    G′(y + ε, y) − G′(y − ε, y) → −1/p(y).    (180)

As we have remarked before, there are various approaches to finding G.
Here, we'll attempt to find an explicit solution to L_x G(x, y) = −δ(x − y).
Suppose that we have found two linearly independent solutions u_1(x) and
u_2(x) to the homogeneous equation Lu = 0, without imposing the boundary
conditions. It may be noted here that a sufficient condition for two solutions
to be independent is that the Wronskian not vanish. Here,
the Wronskian is:

    W(x) = u_1(x)u_2′(x) − u_1′(x)u_2(x).    (181)

To see this, recall that linear independence means that
k_1u_1 + k_2u_2 = 0 implies k_1 = k_2 = 0. Consider the derivative of this:

    k_1u_1′ + k_2u_2′ = 0.    (182)

Multiplying the first relation by u_2′, the second by u_2, and subtracting:

    k_1(u_1u_2′ − u_1′u_2) = k_1W(x) = 0.    (183)

Similarly for k_2: hence, if the Wronskian does
not vanish, then the two solutions are linearly independent.

In some cases, the Wronskian is independent of x; consider the derivative:

    W'(x) = u₁(x)u₂''(x) − u₁''(x)u₂(x).    (184)

The solutions satisfy

    p(x)u_j''(x) + p'(x)u_j'(x) − q(x)u_j(x) = 0.    (185)

Thus,

    u_j''(x) = [q(x)/p(x)] u_j(x) − [p'(x)/p(x)] u_j'(x).    (186)

Hence,

    W'(x) = −[p'(x)/p(x)] [u₁(x)u₂'(x) − u₁'(x)u₂(x)]
          = −[p'(x)/p(x)] W(x).    (187)

In particular, the Wronskian is constant if p' = 0, that is, if there is no first order derivative in the differential equation. When this is true, it is very convenient, since it means that the Wronskian can be evaluated at any convenient point.
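This behavior is easy to check numerically. As an illustrative sketch (the equation (xu')' = 0 on x > 0 is a hypothetical choice, with p(x) = x and q = 0), the Wronskian of u₁ = 1 and u₂ = ln x varies as 1/p(x), so that the combination p(x)W(x) stays constant, consistent with Eqn. 187:

```python
import math

# Hypothetical illustration: (p u')' - q u = 0 with p(x) = x, q = 0,
# i.e., (x u')' = 0 on x > 0, solved by u1(x) = 1 and u2(x) = ln x.
u1, du1 = (lambda x: 1.0), (lambda x: 0.0)
u2, du2 = math.log, (lambda x: 1.0 / x)
p = lambda x: x

def W(x):
    # Wronskian, Eqn. 181
    return u1(x) * du2(x) - du1(x) * u2(x)

# W(x) = 1/x is not constant, but p(x) W(x) is (here identically 1),
# as implied by W' = -(p'/p) W:
for x in (0.5, 1.0, 2.0, 7.3):
    print(x, W(x), p(x) * W(x))
```

The last column is the same at every sampled point, while the Wronskian itself varies.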

With two linearly independent solutions, the general solution to Lu = 0 is

    u = c₁u₁ + c₂u₂.    (188)

Since G(x, y) satisfies the homogeneous differential equation L_x G = 0 except at x = y, we may form a left and a right solution:

    G(x, y) = (A − α)u₁(x) + (B − β)u₂(x),  x ≤ y,
            = (A + α)u₁(x) + (B + β)u₂(x),  x ≥ y.    (189)

The constants have been written in this way for following convenience. Note that x = y has been included for the two solutions, since G must be continuous. Imposing continuity at x = y:

    (A − α)u₁(y) + (B − β)u₂(y) = (A + α)u₁(y) + (B + β)u₂(y).    (190)

This yields

    αu₁(y) + βu₂(y) = 0.    (191)

The discontinuity condition of Eqn. 176 on the first derivative gives:

    1/p(y) = G'(y − ε, y) − G'(y + ε, y)
           = (A − α)u₁'(y) + (B − β)u₂'(y) − (A + α)u₁'(y) − (B + β)u₂'(y)
           = −2[αu₁'(y) + βu₂'(y)].    (192)

We thus have two equations in the unknowns α and β. Solving, we obtain:

    α(y) = u₂(y) / [2p(y)W(y)],    (193)

    β(y) = −u₁(y) / [2p(y)W(y)].    (194)

The Green's function is thus

    G(x, y) = A(y)u₁(x) + B(y)u₂(x) ± [u₁(x)u₂(y) − u₂(x)u₁(y)] / [2p(y)W(y)],    (195)

with the − sign for x ≤ y and the + sign for x ≥ y.

The remaining constants A and B are to be determined by the boundary conditions of the specific problem. The solution to the inhomogeneous equation Lu = −φ is:

    u(x) = ∫_a^b G(x, y)φ(y) dy.    (196)

We remark again that the Green's function for a self-adjoint operator (as in the Sturm-Liouville problem with appropriate boundary conditions) is symmetric, and thus the Schmidt-Hilbert theory applies.
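To make the construction concrete, here is a small numerical sketch for an assumed case: L = d²/dx² on [0, 1] with boundary conditions u(0) = u(1) = 0. Choosing u₁(x) = x and u₂(x) = 1 − x, the construction above yields G(x, y) = x_<(1 − x_>), where x_< = min(x, y) and x_> = max(x, y); we then verify that u(x) = ∫ G φ dy reproduces a known solution of Lu = −φ:

```python
import math

# Green's function of L = d^2/dx^2 on [0,1], u(0) = u(1) = 0 (assumed example):
# L_x G = -delta(x - y)  =>  G(x, y) = min(x, y) * (1 - max(x, y)).
def G(x, y):
    return min(x, y) * (1.0 - max(x, y))

# Take phi(x) = pi^2 sin(pi x); then Lu = -phi has exact solution u = sin(pi x).
phi = lambda y: math.pi**2 * math.sin(math.pi * y)

def u(x, steps=2000):
    # midpoint quadrature of u(x) = \int_0^1 G(x, y) phi(y) dy
    dy = 1.0 / steps
    return dy * sum(G(x, (j + 0.5) * dy) * phi((j + 0.5) * dy)
                    for j in range(steps))

for x in (0.25, 0.5, 0.8):
    print(x, u(x), math.sin(math.pi * x))
```

The quadrature reproduces sin(πx) to a few parts in 10⁶, and one may also check the jump G'(y+, y) − G'(y−, y) = −1 = −1/p by hand.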

It may happen, however, that we encounter boundary conditions such that we cannot find A and B to satisfy them. In this case, we may attempt to find an appropriate modified Green's function as follows: Suppose that we can find a solution u₀ to Lu₀ = 0 that satisfies the boundary conditions (that is, suppose there exists a solution to the homogeneous equation). Let's suppose here that the specified boundary conditions are homogeneous. Then cu₀ is also a solution, and we may assume that u₀ has been normalized:

    ∫_a^b [u₀(x)]² dx = 1.    (197)

Now find a Green's function which satisfies all the same properties as before, except that now:

    L_x G(x, y) = u₀(x)u₀(y),  for x ≠ y,    (198)

and

    ∫_a^b G(x, y)u₀(x) dx = 0.    (199)

The resulting Green's function will still be symmetric, and the solution to our problem is still

    u(x) = ∫_a^b G(x, y)φ(y) dy.    (200)

To check this, we apply L:

    Lu = ∫_a^b L_x G(x, y)φ(y) dy
       = ∫_a^b [−δ(x − y)]φ(y) dy + ∫_a^b u₀(x)u₀(y)φ(y) dy
       = −φ(x) + u₀(x) ∫_a^b u₀(y)φ(y) dy.    (201)

Thus, u solves Lu = −φ provided that ∫_a^b u₀(y)φ(y) dy = 0. If this solubility condition doesn't hold, then there is no solution to the problem which satisfies the boundary conditions.

6.1 Example

Consider the operator L = d²/dx², plus interval and boundary conditions to be specified. Find the Green's function: Two solutions to Lu = 0 are:

    u₁(x) = 1,  u₂(x) = x.    (202)

Since p = 1 here, the Wronskian is constant, and we may evaluate it at x = 0 (this really doesn't save any work in this simple example, but we do it anyway for illustration):

    W(x) = u₁(0)u₂'(0) − u₁'(0)u₂(0) = 1.    (203)

Thus, from Eqn. 195:

    G(x, y) = A(y)u₁(x) + B(y)u₂(x) ± (1/2)[u₁(x)u₂(y) − u₂(x)u₁(y)]
            = A(y) + B(y)x ± (1/2)(y − x),    (204)

with the − sign for x ≤ y and the + sign for x ≥ y.

Now suppose that the range of interest is x ∈ [−1, 1]. Let the boundary conditions be periodic:

    u(−1) = u(1),    (205)
    u'(−1) = u'(1).    (206)

Applying the first condition, G(−1, y) = G(1, y), gives:

    A + B + (1/2)(y − 1) = A − B − (1/2)(y + 1).    (207)

This yields B = −y/2, so that

    G(x, y) = A(y) − xy/2 ± (1/2)(y − x),    (208)

with − for x ≤ y and + for x ≥ y. The derivative is:

    ∂G/∂x (x, y) = −y/2 + 1/2,  x ≤ y,
                 = −y/2 − 1/2,  x ≥ y.    (209)

Hence G'(−1, y) = −y/2 + 1/2 ≠ −y/2 − 1/2 = G'(1, y), and the boundary condition in the derivative cannot be satisfied.

We turn to the modified Green's function in hopes of rescuing the situation. We want to find a solution, u₀, to Lu₀ = 0 which satisfies the boundary conditions and is normalized:

    ∫_{−1}^{1} u₀²(x) dx = 1.    (210)

The normalized solution is the constant u₀(x) = 1/√2. We thus look for a modified Green's function so that

    L_x G(x, y) = ∂²G/∂x² (x, y) = u₀(x)u₀(y) = 1/2,  x ≠ y.    (211)

Since ∂²/∂x² (x²/4) = 1/2, we add the particular solution x²/4 to obtain:

    G(x, y) = A(y) − xy/2 + x²/4 ± (1/2)(y − x),    (212)

with − for x ≤ y and + for x ≥ y. Note that our condition on B remains the same as before, since the added x²/4 term separately satisfies the u(−1) = u(1) boundary condition. The boundary condition on the derivative involves

    ∂G/∂x (x, y) = −y/2 + x/2 + 1/2,  x ≤ y,
                 = −y/2 + x/2 − 1/2,  x ≥ y.    (213)

We find that

    G'(−1, y) = −y/2 = G'(1, y).    (214)

Finally, we fix A by requiring

    ∫_a^b G(x, y)u₀(x) dx = 0.    (215)

Substituting in (the constant u₀ factors out),

    ∫_{−1}^{1} [A − xy/2 + x²/4] dx + (1/2)∫_{−1}^{y} (x − y) dx − (1/2)∫_{y}^{1} (x − y) dx = 0.    (216)

The result is

    A = 1/6 + y²/4.    (217)

Thus,

    G(x, y) = 1/6 + (1/4)(x − y)² ± (1/2)(y − x),    (218)

with − for x ≤ y and + for x ≥ y, which may be written compactly as

    G(x, y) = 1/6 + (1/4)(x − y)² − (1/2)|x − y|.    (219)
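As a quick numerical sketch, we can confirm the two defining properties of this modified Green's function, namely its symmetry and the orthogonality requirement ∫ G(x, y)u₀(x) dx = 0:

```python
# Modified Green's function of Eqn. 219 on [-1, 1], with u0 = 1/sqrt(2):
def G(x, y):
    return 1.0 / 6.0 + (x - y) ** 2 / 4.0 - abs(x - y) / 2.0

def integral_G(y, steps=4000):
    # midpoint rule for \int_{-1}^{1} G(x, y) dx; should vanish for every y
    dx = 2.0 / steps
    return dx * sum(G(-1.0 + (j + 0.5) * dx, y) for j in range(steps))

for y in (-0.7, 0.0, 0.4):
    print(y, integral_G(y))            # all ~ 0
print(G(0.3, -0.5) == G(-0.5, 0.3))    # symmetric: True
```

The vanishing integral is just the orthogonality condition of Eqn. 215, with the constant u₀ factored out.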

Now let us suppose that φ(x) = 1. But our procedure for choosing A implies that in this case,

    u(x) = ∫_{−1}^{1} G(x, y)φ(y) dy = 0.    (220)

This cannot be a solution; indeed, the solubility condition fails:

    ∫_{−1}^{1} φ(x)u₀(x) dx = (1/√2) ∫_{−1}^{1} dx ≠ 0.    (221)

This problem is sufficiently simple that the reader is encouraged to demonstrate the non-existence of a solution by elementary methods.

Let us suppose, instead, that φ(x) = x, that is, we wish to solve Lu = −x. Note that

    ∫_{−1}^{1} φ(x)u₀(x) dx = (1/√2) ∫_{−1}^{1} x dx = 0,    (222)

so now we expect that a solution will exist. Let's find it:

    u(x) = ∫_{−1}^{1} G(x, y)φ(y) dy
         = (1/6)∫_{−1}^{1} y dy + (1/4)∫_{−1}^{1} (x − y)²y dy + (1/2)∫_{−1}^{x} (y − x)y dy − (1/2)∫_{x}^{1} (y − x)y dy
         = −x³/6 + x/6 + arbitrary constant C.    (223)

The arbitrary constant appears because any multiple of u₀, a solution of the homogeneous equation (including boundary conditions), can be added to the solution to obtain another solution. Our solution is thus not unique.
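A direct check (a sketch using finite differences) confirms that this u satisfies u'' = −x together with the periodic boundary conditions:

```python
# Check that u(x) = -x^3/6 + x/6 solves u'' = -phi with phi(x) = x,
# and is periodic on [-1, 1] in both u and u'.
u = lambda x: -x**3 / 6.0 + x / 6.0
du = lambda x: -x**2 / 2.0 + 1.0 / 6.0

def d2u(x, h=1e-4):
    # central-difference second derivative
    return (u(x + h) - 2.0 * u(x) + u(x - h)) / h**2

for x in (-0.5, 0.1, 0.9):
    print(x, d2u(x), -x)       # u'' = -x

print(u(-1.0), u(1.0))         # both 0: u(-1) = u(1)
print(du(-1.0), du(1.0))       # both -1/3: u'(-1) = u'(1)
```

Adding any constant C leaves all of these checks unchanged, which is the non-uniqueness just noted.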

6.2 Discussion

Let us consider further the example we have just investigated, in the context of our more general discussion. Recall from the alternative theorem that when a solution to the homogeneous equation (with homogeneous boundary conditions) exists, the solution to the inhomogeneous equation either does not exist, or is not unique. A solution exists if and only if φ is perpendicular to any solution of the homogeneous equation, e.g.,

    ∫_a^b φ(x)u₀(x) dx = 0.    (224)

When we considered the problem Lu = g, we obtained the expansion:

    G = Σ_i |u_i⟩⟨u_i| / λ_i.    (225)

If zero is an eigenvalue of L, as in our example, the expansion above doesn't work in this case. However, we also considered the problem Lu − λu = g, and the expansion:

    G = Σ_i |u_i⟩⟨u_i| / (λ_i − λ).    (226)

This form remains useful as long as ⟨u₀|g⟩ = 0 (so that taking the λ → 0 limit will not cause trouble). But then any constant times u₀, added to the solution, will also be a solution.

For our explicit approach to finding the Green's function we are led to consider a source term, or "force", in LG = source(x, y) that contains not only the −δ(x − y) point source, but an additional source that prevents the homogeneous solution from blowing up; note that δ(x − y) is not orthogonal to u₀. We add a "restoring force" to counteract the presence of a u₀ piece in the source term.

There is some arbitrariness to this additional source, but it cannot be orthogonal to the eigenfunction u₀ if it is to prevent the excitation of this mode by the source. Thus, the usual choice, with nice symmetry, is

    L_x G(x, y) = −δ(x − y) + u₀(x)u₀(y).    (227)

Note that:

    ∫_a^b [−δ(x − y) + u₀(x)u₀(y)] u₀(x) dx = −u₀(y) + u₀(y) = 0,    (228)

since we have ⟨u₀|u₀⟩ = 1. The additional term exactly cancels the u₀ component of the source.

Since Lu₀ = 0, this equation, including the boundary conditions, and the discontinuity condition on G' only determines G(x, y) up to the addition of a multiple of u₀(x). We remove this remaining arbitrariness with the requirement

    ∫_a^b G(x, y)u₀(x) dx = 0.    (229)

To see why this works, consider:

    L_x ∫_a^b G(x, y)u₀(y) dy = ∫_a^b [−δ(x − y) + u₀(x)u₀(y)] u₀(y) dy = 0.    (230)

Thus, ∫_a^b G(x, y)u₀(y) dy is a function which satisfies Lu = 0, and satisfies the boundary conditions. But the most general such function (we're considering only second order differential equations here) is cu₀(x), hence

    ∫_a^b G(x, y)u₀(y) dy = cu₀(x).    (231)

On the other hand, ⟨u₀|u₀⟩ = 1 ≠ 0, while L(cu₀) = 0, so there is no solution to Lu = u₀. The only way to reconcile this with Eqn. 231 is with c = 0. That is, the integral on the left is well-defined, but it cannot be a non-trivial multiple of u₀(x), because this would imply that the solution to Lu = u₀ exists. Hence, the only possible value for the integral is zero:

    ∫_a^b G(x, y)u₀(y) dy = 0.    (232)

By the symmetry of G, it follows also that

    ∫_a^b G(x, y)u₀(x) dx = 0.    (233)

This is precisely the condition we imposed in our example, but it must be remembered that the requirement is really arbitrary, since we can add any multiple of u₀ to our solution and obtain another solution.

We'll conclude with some further comments about the Sturm-Liouville problem.

1. Recall that our expression for G(x, y) had a factor of 1/p(y)W(y) in the term

    [u₁(x)u₂(y) − u₁(y)u₂(x)] / [2p(y)W(y)].    (234)

In fact, the product pW is constant:

    d/dx (pW) = pW' + p'W
              = pu₁u₂'' − pu₁''u₂ + p'u₁u₂' − p'u₁'u₂
              = u₁(pu₂'' + p'u₂') − u₂(pu₁'' + p'u₁')
              = u₁qu₂ − u₂qu₁ = 0.    (235)

Thus, p(x)W(x) is independent of x, and the evaluation of the denominator may be made at any convenient point, not only at x = y. This can be very useful when the identity yielding the constant isn't so obvious.

2. Let's quickly consider the complication of the weight function. Suppose the source is

    Φ(x) = λw(x)u(x) + φ(x),    (236)

corresponding to

    Lu + λwu = −φ.    (237)

Then we have

    u(x) = ∫_a^b G(x, y)Φ(y) dy
         = g(x) + λ ∫_a^b G(x, y)w(y)u(y) dy,    (238)

where

    g(x) = ∫_a^b G(x, y)φ(y) dy.    (239)

This is an integral equation for u, but G(x, y)w(y) is not a symmetric kernel. We can fix this by defining a new unknown function, f(x), by

    f(x) = √(w(x)) u(x).    (240)

Substituting this into Eqn. 238, including multiplication of the equation by √(w(x)), we have:

    f(x) = √(w(x)) g(x) + λ ∫_a^b √(w(x)w(y)) G(x, y) f(y) dy.    (241)

The kernel √(w(x)w(y)) G(x, y) is symmetric.
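A tiny sketch makes the point (the weight w(x) = 1 + x and the Green's function G(x, y) = min(x, y)(1 − max(x, y)) are assumed illustrative choices, not taken from the text): the kernel G(x, y)w(y) is not symmetric, but √(w(x)w(y)) G(x, y) is:

```python
import math

# Assumed ingredients: a symmetric G and a nonconstant weight w.
G = lambda x, y: min(x, y) * (1.0 - max(x, y))
w = lambda x: 1.0 + x

K_unsym = lambda x, y: G(x, y) * w(y)                      # not symmetric
K_sym = lambda x, y: math.sqrt(w(x) * w(y)) * G(x, y)      # symmetric

x, y = 0.2, 0.7
print(K_unsym(x, y), K_unsym(y, x))   # differ
print(K_sym(x, y), K_sym(y, x))       # equal
```

Symmetry of the kernel is what lets the Schmidt-Hilbert machinery apply to the integral equation for f.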

3. So far, we have assumed homogeneous boundary conditions. Let us consider with an example the treatment of inhomogeneous boundary conditions. Suppose that we have the homogeneous differential equation Lu = 0, with L = (d/dx) p(x) (d/dx) − q(x), with the inhomogeneous boundary conditions u(0) = 0 and u(1) = 1.

We can transform this to an inhomogeneous problem with homogeneous boundary conditions: Let

    v(x) = u(x) − x.    (242)

Then v(0) = v(1) = 0. The differential equation becomes:

    (d/dx) p(x) (d/dx) u(x) − q(x)u(x) = (d/dx) p(x) (d/dx) [v(x) + x] − q(x)v(x) − q(x)x = 0,    (243)

or Lv = −φ, where

    φ(x) = dp/dx (x) − q(x)x.    (244)

Thus, we now have an inhomogeneous Sturm-Liouville problem with homogeneous boundary conditions. Note that we could just as well have picked v(x) = u(x) − x², or v(x) = u(x) − sin(πx/2). The possibilities are endless; one should try to pick something that looks like it is going to make the problem easy.
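For instance (an assumed concrete case, not worked in the text): with p = 1 and q = 1, the problem u'' − u = 0, u(0) = 0, u(1) = 1 is solved by u = sinh(x)/sinh(1); Eqn. 244 then gives φ(x) = −x, and v = u − x should satisfy Lv = −φ = x with homogeneous boundary conditions. A numerical sketch:

```python
import math

# Assumed example: L = (d/dx) p (d/dx) - q with p = 1, q = 1.
# Lu = 0 with u(0) = 0, u(1) = 1 is solved by u = sinh(x)/sinh(1).
u = lambda x: math.sinh(x) / math.sinh(1.0)
v = lambda x: u(x) - x          # the shift of Eqn. 242

def Lv(x, h=1e-4):
    # Lv = v'' - v, with v'' from central differences
    vpp = (v(x + h) - 2.0 * v(x) + v(x - h)) / h**2
    return vpp - v(x)

print(v(0.0), v(1.0))           # homogeneous BCs: both 0
for x in (0.2, 0.5, 0.8):
    print(x, Lv(x), x)          # Lv = -phi = x, since phi(x) = -x here
```

The shifted function indeed vanishes at both endpoints and satisfies the inhomogeneous equation.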

We have concentrated on the Sturm-Liouville problem, Lu = −φ, because so many real problems are of this form. However, the Green's function method isn't limited to solving this problem. We briefly remark on other applications here.

For a linear differential equation of order n, we can also look for a Green's function solution, with the properties:

    LG(x, y) = δ(x − y),    (245)

or

    ∂^n G/∂x^n (x, y) + a₁(x) ∂^{n−1}G/∂x^{n−1} (x, y) + ... + a_n(x)G(x, y) = δ(x − y).    (246)

Away from x = y, G(x, y) and its first n − 2 derivatives are continuous, but the (n − 1)st derivative has a discontinuity of magnitude one:

    lim_{ε→0} [∂^{n−1}G/∂x^{n−1} (y + ε, y) − ∂^{n−1}G/∂x^{n−1} (y − ε, y)] = 1.    (247)

Note that for a differential equation of order greater than two, it may happen that there is more than one independent eigenvector associated with eigenvalue zero. In this case, we form the modified Green's function as follows: Suppose that u₀(x), u₁(x), ..., u_m(x) are the orthonormal eigenfunctions corresponding to λ = 0 (and satisfying the boundary conditions). Require that

    L_x G(x, y) = Σ_{i=0}^{m} u_i(x)u_i(y),  x ≠ y,    (248)

and

    ∫_a^b G(x, y)u_i(x) dx = 0,  i = 0, 1, 2, ..., m.    (249)

A solution then exists provided

    ∫_a^b u_i(x)φ(x) dx = 0,  i = 0, 1, 2, ..., m.    (250)

Problems in more than one dimension may also be treated. The reader may wish to consult Courant and Hilbert for some of the subtleties in this case.

Exercises

1. Consider the general linear second order homogeneous differential equation in one dimension:

    a(x) d²u/dx² + b(x) du/dx + c(x)u(x) = 0.    (251)

Determine the conditions under which this may be written in the form of a differential equation involving a self-adjoint (with appropriate boundary conditions) Sturm-Liouville operator:

    Lu = 0,    (252)

where

    L = (d/dx) p(x) (d/dx) − q(x).    (253)

2. Show that the operator

    L = d²/dx² + 1,  x ∈ [0, ∞].    (254)

3. Let us consider somewhat further the momentum operator, p = (1/i) d/dx, discussed briefly in the differential equation note. We let this operator be an operator on the Hilbert space of square-integrable (normalizable) functions, with x ∈ [a, b].

(a) Find the most general boundary condition such that p is Hermitian.

(b) What is the domain, D_p, of p such that p is self-adjoint?

(c) What is the situation when [a, b] → [−∞, ∞]? Is p bounded or unbounded?

4. Prove that the different systems of orthogonal polynomials are distinguished by the weight function and the interval. That is, the system of

polynomials in [a, b] is uniquely determined by w(x) up to a constant

for each polynomial.

5. We said that the recurrence relation for the orthogonal polynomials may be expressed in the form:

    f_{n+1}(x) = (a_n + b_n x) f_n(x) − c_n f_{n−1}(x).    (255)

6. We discussed some theorems for the qualitative behavior of classical orthogonal polynomials, and illustrated this with the one-electron atom radial wave functions. Now consider the simple harmonic oscillator (in one dimension) wave functions. The potential is

    V(x) = (1/2)kx².    (256)

The Schrodinger equation is

    −(1/2m) d²ψ/dx² (x) + (1/2)kx²ψ(x) = Eψ(x).    (257)

Make a sketch showing the qualitative features you expect for the wave functions corresponding to the five lowest energy levels. Try to do this with some care: There is really a lot that you can say in qualitative terms without ever solving the Schrodinger equation. Include a curve of the potential on your graph. Try to illustrate what happens at the classical turning points (that is, the points where E = V(x)).

7. Find the Green's function for the operator

    L = d²/dx² + k².    (258)

For what values of k does your result break down? You may assume x ∈ [0, 1].

8. An integral that is encountered in calculating radiative corrections in e⁺e⁻ collisions is of the form:

    I(t; a, b) = ∫_a^b x^{t−1}/(1 − x) dx.    (259)

Show that this integral may be expressed in terms of the hypergeometric function ₂F₁. Make sure to check the t = 0 case.

9. We consider the Helmholtz equation in three dimensions:

    ∇²u + k²u = 0,    (260)

inside a sphere of radius a, subject to the boundary condition u(r = a) = 0. Such a situation may arise, for example, if we are interested in the electric field inside a conducting sphere. Our goal is to find G(x, y) such that

    (∇²_x + k²)G(x, y) = δ(x − y),    (261)

with G(r = a, y) = 0. We'll do this via one approach in this problem, and try another approach in the next problem.

Find G(x, y) by obtaining solutions to the homogeneous equation

    (∇² + k²)G = 0,    (262)

on either side of r = |y|, satisfying the boundary conditions at r = a, and the appropriate matching conditions at r = |y|.

10. We return to the preceding problem. This is the problem of the Helmholtz equation:

    ∇²u + k²u = 0,    (263)

inside a sphere of radius a, subject to the boundary condition u(r = a) = 0. Such a situation may arise, for example, if we are interested in the electric field inside a conducting sphere. Our goal is to find G(x, y) such that

    (∇²_x + k²)G(x, y) = δ(x − y),    (264)

with G(r = a, y) = 0.

In problem 9, you found G(x, y) by obtaining solutions to the homogeneous equation

    (∇² + k²)G = 0,    (265)

on either side of r = |y|, satisfying the boundary conditions at r = a, and the appropriate matching conditions at r = |y|.

Now we take a different approach: Find G by directly solving (∇²_x + k²)G(x, y) = δ(x − y). You should ignore the boundary conditions at first and obtain a solution by integrating the equation over a small volume containing y. Then satisfy the boundary conditions by adding a suitable function g(x, y) that satisfies (∇²_x + k²)g(x, y) = 0 everywhere.

11. Let's continue our discussion of the preceding two problems. This is the problem of the Helmholtz equation:

    ∇²u + k²u = 0,    (266)

inside a sphere of radius a, with u(r = a) = 0. Our goal is to find G(x, y) such that

    (∇²_x + k²)G(x, y) = δ(x − y),    (267)

with G(r = a, y) = 0.

In problem 10, you found G(x, y) by directly solving (∇²_x + k²)G(x, y) = δ(x − y), ignoring the boundary conditions at first. This is called the fundamental solution; it is independent of the boundary structure, and hence has to do with the source. Now find the fundamental solution by another technique: Put the origin at y and solve the equation

    (∇²_x + k²)f(x) = δ(x),    (268)

by using Fourier transforms. Do you get the same answer as last week?

12. Referring still to the Helmholtz problem (problems 9-11), discuss the relative merits of the solutions found in problems 9 and 10. In particular, analyze, by making a suitable expansion, a case where the problem 10 solution is likely to be preferred, stating the necessary assumptions clearly.

13. We noted that the Green's function method is applicable beyond the Sturm-Liouville problem. For example, consider the differential operator:

    L = d⁴/dx⁴ + d²/dx².    (269)

As usual, we wish to find the solution to Lu = φ. Let us consider the case of boundary conditions u(0) = u'(0) = u''(0) = u'''(0) = 0.

(a) Find the Green's function for this operator.

(b) Find the solution for x ∈ [0, ∞] and φ(x) = e^{−x}.

You are encouraged to notice, at least in hindsight, that you could probably have solved this problem by elementary means.

14. Using the Green's function method, we derived in class the time development transformation for the free-particle Schrodinger equation in one dimension:

    U(x, y; t) = (1/√2)[1 − i t/|t|] √(m/(2π|t|)) exp[im(x − y)²/(2t)].    (270)

One expects that if you transform by time t, followed by a transformation by time −t, you should get back to where you started. Check whether this is indeed the case or not.

15. Using the Christoffel-Darboux formula, find the projection operator onto the subspace spanned by the first three Chebyshev polynomials.

16. We discussed the radial solutions to the one-electron Schrodinger equation. Investigate orthogonality of the result: are our wave functions orthogonal or not?

17. Consider the Hamiltonian

    H = −(1/2m) d²/dx²,    (271)

on the space x ∈ [a, b] (infinite square well).

(a) Construct the Green's function, G(x, y; z), for this problem.

(b) From your answer to part (a), determine the spectrum of H.

(c) Notice that, using

    G(x, y; z) = Σ_{k=1}^{∞} ψ_k(x)ψ_k(y)/(λ_k − z),    (272)

you may obtain the normalized eigenfunctions from the residue of G at the pole z = λ_k. Do this calculation, and check that your result is properly normalized.

(d) Consider the limit a → −∞, b → ∞. Show, in this limit, that G(x, y; z) tends to the Green's function we obtained in class for this Hamiltonian on x ∈ (−∞, ∞):

    G(x, y; z) = −i√(m/(2z)) e^{i√(2mz)|x−y|}.    (273)

18. Let us investigate the Green's function for a slightly more complicated situation. Consider the square-well potential (Δ denotes its half-width):

    V(x) = V,  |x| ≤ Δ,
         = 0,  |x| > Δ.    (274)

(a) Construct the Green's function, G(x, y; z), for a particle in this potential.

Remarks: You will need to construct your left and right solutions by considering the three different regions of the potential, matching the functions and their first derivatives at the boundaries. Note that the right solution may be very simply obtained from the left solution by the symmetry of the problem. In your solution, let

    κ' = √(2m(z − V)),    (275)
    κ = √(2mz).    (276)

Make sure that you describe any cuts in the complex plane, and your selected branch. You may find it convenient to express your answer to some extent in terms of the force-free Green's function:

    G₀(x, y; z) = −(im/κ) e^{iκ|x−y|}.    (277)

(b) Assume V > 0. Show that G(x, y; z) is analytic in your cut plane, with a branch point at z = 0.

(c) Assume V < 0. Show that G(x, y; z) is analytic in your cut plane, except for a finite number of simple poles at the bound states of the Hamiltonian.

19. In class, we obtained the free particle propagator for the Schrodinger equation in quantum mechanics:

    U(x, t; x₀, t₀) = (1/√2)[1 − i(t − t₀)/|t − t₀|] √(m/(2π|t − t₀|)) exp[im(x − x₀)²/(2(t − t₀))].    (278)

Let's actually use this to evolve a wave function. Thus, let the wave function at time t = t₀ = 0 be:

    ψ(x₀, t₀ = 0) = (1/(πa²))^{1/4} exp[−x₀²/(2a²) + ip₀x₀],    (279)

where a and p₀ are real constants. Since the absolute square of the wave function gives the probability, this wave function corresponds to a Gaussian probability distribution (i.e., the probability density function to find the particle at x₀) at t = t₀:

    |ψ(x₀, t₀)|² = (1/(πa²))^{1/2} e^{−x₀²/a²}.    (280)

The standard deviation of this distribution is σ = a/√2. Find the probability density function, |ψ(x, t)|², to find the particle at x at some later (or earlier) time t. You are encouraged to think about the physical interpretation of your result.
