
Lecture 7: Variational method: introduction (10/11/2005)
Several, and perhaps many, problems in quantum mechanics can be solved
exactly. However, the majority of them cannot, and we must often use
approximate methods. Perturbation theory is one approach; the variational
principle is another.
Additional reading if you wish: Griffiths 6.1, 6.2 (review of what
you should already know), and 7.1 for the new material.

Review of time-independent perturbation theory


Recall that perturbation theory is a method of approximately solving a
certain problem, such as finding the spectrum of a Hamiltonian H, by treating
the problem as a small modification of another problem H_0 whose solution
is known exactly. You may write H as

    H = H_0 + \lambda H', \qquad \lambda = 1,

and try to find all eigenstates and eigenvalues as Taylor expansions in
\lambda around \lambda = 0, i.e. around H = H_0. (Normally, a more natural
parameter instead of \lambda occurs, and its value is naturally small,
a number smaller than one.)
The eigenstates are then written as

    \psi_n = \psi_n^0 + \lambda\,\psi_n^1 + \lambda^2\,\psi_n^2 + \ldots

where \psi_n^0 are eigenstates of H_0; the upper indices are powers when
attached to \lambda, but they are labels when attached to the \psi's. The
corresponding eigenvalues are

    E_n = E_n^0 + \lambda\,E_n^1 + \lambda^2\,E_n^2 + \ldots

where E_n^0 are eigenvalues of H_0. And perturbation theory as discussed in
143a allows you to compute the corrections in a systematic way:
    E_n^1 = \langle \psi_n^0 | H' | \psi_n^0 \rangle,

    |\psi_n^1\rangle = \sum_{m \neq n} \frac{\langle \psi_m^0 | H' | \psi_n^0 \rangle}{E_n^0 - E_m^0}\, |\psi_m^0\rangle,

    E_n^2 = \sum_{m \neq n} \frac{|\langle \psi_m^0 | H' | \psi_n^0 \rangle|^2}{E_n^0 - E_m^0},
0
and so forth. Recall that if some of the eigenvalues E_m^0 are degenerate, you
must first diagonalize H' in the subspace corresponding to this eigenvalue
(and use the corresponding eigenstates) before you apply the formulae above;
this is the content of degenerate perturbation theory. Also note that the
second order correction to the energy of the ground state is never positive,
because for n = g every denominator E_g^0 - E_m^0 is negative, so all the
terms in the sum are non-positive.
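These formulas are easy to check numerically on a small matrix model. A minimal sketch (the matrices H0 and H' below are arbitrary illustrative choices, not taken from the lecture):

```python
import numpy as np

# Unperturbed Hamiltonian with a non-degenerate spectrum, plus a small perturbation.
H0 = np.diag([0.0, 1.0, 3.0])
Hp = np.array([[0.0, 0.2, 0.1],
               [0.2, 0.0, 0.3],
               [0.1, 0.3, 0.0]])
lam = 1e-3

# First- and second-order corrections for level n from the formulas above.
n = 0
E1 = Hp[n, n]                                   # <psi_n^0|H'|psi_n^0>
E2 = sum(abs(Hp[m, n])**2 / (H0[n, n] - H0[m, m])
         for m in range(3) if m != n)           # sum_{m!=n} |<m|H'|n>|^2 / (E_n^0 - E_m^0)

E_exact = np.linalg.eigvalsh(H0 + lam * Hp)[n]  # eigvalsh returns ascending eigenvalues
E_pert = H0[n, n] + lam * E1 + lam**2 * E2
print(abs(E_exact - E_pert))                    # residual is of third order in lam
```

Note that E2 for the ground state comes out negative here, as it must.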
Perturbation theory is not the universal cure for all of our problems, for
example because

- sometimes we don't find any good H_0 at all;
- sometimes we find it, but H' = H - H_0 is too large;
- sometimes H_0 exists and H' is small, but the perturbation theory does
  not converge anyway.

Therefore we must look for other alternatives, too.

Variational method
One of the most important ones is the variational method. Its starting
observation is that the energy of the ground state E_g satisfies

    E_g \leq \frac{\langle\psi|H|\psi\rangle}{\langle\psi|\psi\rangle},

i.e. it is never higher than the expectation value of the energy in any
properly normalized state. It holds simply because the expectation value is
a weighted average of the energy eigenvalues, and all energy eigenvalues
except for the ground energy E_g are greater than E_g. The inequality is
only saturated if \psi is the ground state. In mathematical terms, this is
the proof, using the decomposition of |\psi\rangle as \sum_n c_n |\psi_n\rangle:

    \frac{\langle\psi|H|\psi\rangle}{\langle\psi|\psi\rangle} = \frac{\sum_n |c_n|^2 E_n}{\sum_n |c_n|^2} \geq \frac{\sum_n |c_n|^2 E_g}{\sum_n |c_n|^2} = E_g.
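The inequality is easy to illustrate numerically, with a random Hermitian matrix standing in for H (an arbitrary choice for this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random Hermitian "Hamiltonian" (an arbitrary stand-in, not from the lecture).
A = rng.normal(size=(6, 6))
H = (A + A.T) / 2
Eg = np.linalg.eigvalsh(H)[0]          # exact ground state energy

# <psi|H|psi>/<psi|psi> for random trial states never undershoots Eg.
for _ in range(1000):
    psi = rng.normal(size=6)
    rayleigh = psi @ H @ psi / (psi @ psi)
    assert rayleigh >= Eg - 1e-12
```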
Dynamical interpretation: if a system is in |\psi\rangle, it will radiate
energy until it reaches the ground state |\psi_g\rangle. The inequality is
connected to a fact about time-independent perturbation theory we discussed
previously: take the state \psi to be \psi_g^0. You see that the expectation
value of H is

    \langle\psi_g^0|(H_0 + H')|\psi_g^0\rangle = E_g^0 + E_g^1 \geq E_g,

where the inequality is taken from the variational principle. This means that
E_g^2 + E_g^3 + \ldots must be non-positive, which essentially means that
E_g^2 itself (the second order correction to the ground state energy), the
term that dominates the sum, must be non-positive.
However, the second order perturbation theory can still overshoot the
ground state energy because E_g^3 may be negative.
The strategy of the variational method
The method is a way to look for the ground state energy (and other
energy eigenvalues, as discussed later today). We first define a large enough
family of good candidates \psi(p_i) for the ground state wavefunction,
depending on some parameters p_i, calculate the average value of energy in
each state, and minimize it with respect to the variables p_i. The resulting
minimum E(p_i^{min}) will still be bigger than the actual ground state energy
E_g, because of the general inequality we started with, but it will converge
to it if we choose a sufficiently good family of candidates.
The variational method has a disadvantage: in perturbation theory, the
size of the error may be estimated by looking at the first term we neglected;
the variational method, on the other hand, has no universal rule to estimate
the errors.

Energies estimated better than wavefunctions


We are still talking about the variational method. Why is it better at
estimating the energy of the ground state than the state itself? Imagine
that it chooses for you a ground state candidate that is very close to the
exact ground state:

    \psi = \psi_0 + \delta\psi.

The expectation value of energy is

    E = \frac{\langle\psi|H|\psi\rangle}{\langle\psi|\psi\rangle}, \qquad \text{i.e.} \qquad E\,\langle\psi|\psi\rangle = \langle\psi|H|\psi\rangle.

Let's now substitute for \psi as well as E = E_0 + \delta E into the latter
equation:

    (E_0 + \delta E)\,\langle\psi_0 + \delta\psi|\psi_0 + \delta\psi\rangle = \langle\psi_0 + \delta\psi|H|\psi_0 + \delta\psi\rangle.

Expand this equation to first order, neglecting terms with more than one
delta:

    E_0\langle\psi_0|\psi_0\rangle + E_0\left[\langle\delta\psi|\psi_0\rangle + \langle\psi_0|\delta\psi\rangle\right] + \delta E\,\langle\psi_0|\psi_0\rangle =
    = \langle\psi_0|H|\psi_0\rangle + \langle\psi_0|H|\delta\psi\rangle + \langle\delta\psi|H|\psi_0\rangle.

But the right hand side may be written, up to first order terms, as

    = E_0\langle\psi_0|\psi_0\rangle + E_0\left[\langle\delta\psi|\psi_0\rangle + \langle\psi_0|\delta\psi\rangle\right]

because H|\psi_0\rangle = E_0|\psi_0\rangle exactly; notice that E_0 does not
mean the zeroth approximation but the exact ground state energy here. If you
compare it with the previous expression, you see that

    \delta E = 0

at first order. This means that the errors in the energy only occur
at second order; their size is proportional to the squared error of the
wavefunction, which means that the energy estimate is more accurate. In
other words, even if the wavefunction is not quite perfect, the variational
method works well because it squares, and thereby reduces, the imperfection,
and gives us a good ground state energy estimate.
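A toy two-level model makes this scaling explicit: if the trial state is off by O(eps), the energy comes out off only by O(eps^2). A minimal sketch (the level values are arbitrary stand-ins):

```python
import numpy as np

# Two-level toy model: exact ground state (1, 0) with energy E0,
# contaminated by an O(eps) admixture of the excited state.
E0, E1 = 0.5, 1.5                     # arbitrary stand-in levels
H = np.diag([E0, E1])

errors = []
for eps in [1e-1, 1e-2, 1e-3]:
    psi = np.array([1.0, eps])        # wavefunction off by O(eps)
    E = psi @ H @ psi / (psi @ psi)
    errors.append(E - E0)             # energy off by O(eps^2) only
print(errors)
```

Shrinking eps by a factor of 10 shrinks the energy error by roughly a factor of 100.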

Examples
Harmonic oscillator: lucky choice
Try to solve the harmonic oscillator, whose Hamiltonian is

    H = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + \frac{1}{2}\,m\omega^2 x^2,

and choose a pretty lucky family of candidate wavefunctions

    \psi(x) = A \exp(-bx^2).

A useful integral that follows from the Euler integral (and the fact that
(1/2)! = \sqrt{\pi}/2) is

    \int_{-\infty}^{\infty} x^{2n} \exp(-ax^2)\,dx = \frac{1\cdot 3\cdot 5\cdots(2n-1)}{(2a)^n}\sqrt{\frac{\pi}{a}}.

With this knowledge, you can see that

    \langle\psi|\psi\rangle = \sqrt{\frac{\pi}{2b}}\,A^2.

The calculation of the expectation value of the energy is slightly more
difficult:

    \langle\psi|H|\psi\rangle = -\frac{\hbar^2 A^2}{2m}\int \exp(-bx^2)\,\frac{d^2}{dx^2}\exp(-bx^2)\,dx + \frac{1}{2}\,m\omega^2 A^2\int x^2\exp(-2bx^2)\,dx

    = A^2\left[\frac{\hbar^2}{2m}\int \exp(-bx^2)\,(2b - 4b^2x^2)\exp(-bx^2)\,dx + \frac{1}{2}\,m\omega^2\int x^2\exp(-2bx^2)\,dx\right]

    = \left(\frac{\hbar^2 b}{2m} + \frac{m\omega^2}{8b}\right)\sqrt{\frac{\pi}{2b}}\,A^2.

Now we must extremize the ratio:

    0 = \frac{d}{db}\,\frac{\langle\psi|H|\psi\rangle}{\langle\psi|\psi\rangle} = \frac{\hbar^2}{2m} - \frac{m\omega^2}{8b^2},

which gives us

    b = \frac{m\omega}{2\hbar}, \qquad E = \frac{\langle\psi|H|\psi\rangle}{\langle\psi|\psi\rangle} = \frac{\hbar\omega}{2}.

You see: this is the exact result. Of course this could only happen because
we chose the Gaussian Ansatz, which includes the exact ground state
wavefunction for the right value of b (the normalization A did not matter and
cancelled, of course). But still, we got the right b without solving any
differential equations.
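The minimization can be reproduced numerically; a quick sketch in units \hbar = m = \omega = 1, where the exact answer is b = 1/2 and E_g = 1/2:

```python
import numpy as np

hbar = m = omega = 1.0                # units in which the exact E_g is 1/2

# <psi|H|psi>/<psi|psi> for the Gaussian trial function A exp(-b x^2)
b = np.linspace(0.01, 5.0, 200001)
E = hbar**2 * b / (2 * m) + m * omega**2 / (8 * b)

i = E.argmin()
print(b[i], E[i])                     # b -> m*omega/(2*hbar) = 0.5, E -> hbar*omega/2 = 0.5
```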

Harmonic oscillator: unlucky choice


What happens if you start with a less prophetic form of the wavefunction?
For example, consider

    \psi(x) = A \exp(-b|x|).

Then

    \frac{d\psi}{dx}(x) = -\varepsilon(x)\,bA \exp(-b|x|),

where \varepsilon(x) is +1 for positive x, otherwise -1. The discontinuity of
d\psi(x)/dx at x = 0 is equal to -2bA. This is why d^2\psi(x)/dx^2 has a term
proportional to -2bA\,\delta(x): a delta-function located at x = 0. To see
why the derivative of a step function is a delta-function, prove the opposite
statement, that the integral of a delta-function is the step function:

    \int_{-\infty}^{a} \delta(x)\,dx = \theta(a),

where \theta(a) is zero for a < 0 and 1 otherwise. We have collected enough
facts to calculate the norms and expectation values. The norm is

    \langle\psi|\psi\rangle = A^2\left[\int_{-\infty}^{0} e^{2bx}\,dx + \int_{0}^{\infty} e^{-2bx}\,dx\right] = \frac{A^2}{b}.

The calculation involving the Hamiltonian is again more difficult:

    \langle\psi|H|\psi\rangle = A^2\Bigg\{-\frac{\hbar^2}{2m}\left[\int_{-\infty}^{0} e^{bx}\,\frac{d^2}{dx^2}e^{bx}\,dx + \int_{0}^{\infty} e^{-bx}\,\frac{d^2}{dx^2}e^{-bx}\,dx - \int_{-\infty}^{\infty} e^{-2b|x|}\,2b\,\delta(x)\,dx\right]

    + \frac{1}{2}\,m\omega^2\left[\int_{-\infty}^{0} e^{2bx}x^2\,dx + \int_{0}^{\infty} e^{-2bx}x^2\,dx\right]\Bigg\}.

Here, a useful integral is the Euler integral itself:

    \int_{0}^{\infty} x^n e^{-ax}\,dx = \frac{n!}{a^{n+1}}.

With its help, the average energy becomes

    \frac{\langle\psi|H|\psi\rangle}{\langle\psi|\psi\rangle} = b\left\{-\frac{\hbar^2}{2m}\left[\frac{b}{2} + \frac{b}{2} - 2b\right] + \frac{1}{2}\,m\omega^2\left[\frac{2}{(2b)^3} + \frac{2}{(2b)^3}\right]\right\} = \frac{\hbar^2 b^2}{2m} + \frac{m\omega^2}{4b^2}.

Its b-derivative should vanish, which means

    0 = \frac{\hbar^2 b}{m} - \frac{m\omega^2}{2b^3}, \qquad b = \left(\frac{m^2\omega^2}{2\hbar^2}\right)^{1/4}.

Substituting this b into the average energy, we obtain

    E = \frac{\hbar\omega}{\sqrt{2}},

i.e. \sqrt{2} times the true ground state energy. It's not great, but it's
not a disaster. Usually we get better results if we choose smooth
wavefunctions and wavefunctions with the right asymptotics.
Let's draw the potential energy and the wavefunctions (and their squares)
from both of these harmonic oscillator examples.
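The same quick numerical check works for this Ansatz too, in units \hbar = m = \omega = 1:

```python
import numpy as np

hbar = m = omega = 1.0                # units in which the exact E_g is 1/2

# <H> for the trial function A exp(-b|x|), as derived above
b = np.linspace(0.01, 5.0, 200001)
E = hbar**2 * b**2 / (2 * m) + m * omega**2 / (4 * b**2)

i = E.argmin()
print(b[i]**4, E[i])                  # b^4 -> 1/2, E -> 1/sqrt(2), i.e. sqrt(2) * E_g
```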
Infinite well
Consider the potential V(x) being infinite for |x| > a and otherwise zero.
We know the exact solution for the ground state:

    \psi = \frac{1}{\sqrt{a}}\cos\frac{\pi x}{2a}

and

    E = \frac{\hbar^2\pi^2}{8ma^2} \approx 1.23370\,\frac{\hbar^2}{ma^2}.

Imagine that, for whatever reason, you don't like trigonometric functions
and prefer polynomials. Moreover, you realize that the ground state is an
even function. Also, you want your wavefunction to vanish at |x| = a, and you
want it to depend on one more parameter c that will be used for minimization.
Your simplest guess is therefore

    \psi(x) = A\,(a^2 - x^2)(1 + cx^2) \quad \text{for} \quad |x| < a,

and \psi(x) = 0 otherwise. Assuming that you are patient and you know how
to integrate polynomials, you obtain

    \frac{\langle\psi|H|\psi\rangle}{\langle\psi|\psi\rangle} = \frac{\hbar^2}{ma^2}\cdot\frac{3}{4}\cdot\frac{11a^4c^2 + 14a^2c + 35}{a^4c^2 + 6a^2c + 21}.
Note that u/v is extremized iff u'v - uv' = 0, which means

    \frac{d}{dc}\,\frac{\langle\psi|H|\psi\rangle}{\langle\psi|\psi\rangle} = 0 \quad\Longleftrightarrow\quad 26a^4c^2 + 196a^2c + 42 = 0.

This is a quadratic equation for c. Actually only one of the roots is a good
candidate (the other one gives a much higher energy and may even be a local
maximum of the energy, not a minimum; check). This root is

    c \approx -0.22075\,a^{-2}

and the corresponding energy is

    E \approx 1.23372\,\frac{\hbar^2}{ma^2} \approx 1.000016\,E_g.

That's a pretty impressive accuracy, isn't it?
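The quadratic equation and the resulting energy can be verified numerically; a sketch in units a = 1 and \hbar^2/(ma^2) = 1:

```python
import numpy as np

# Energy in units of hbar^2/(m a^2), with a = 1
def E(c):
    return 0.75 * (11*c**2 + 14*c + 35) / (c**2 + 6*c + 21)

# stationarity condition: 26 c^2 + 196 c + 42 = 0 (with a = 1)
roots = np.roots([26.0, 196.0, 42.0])
c = roots[np.argmin(E(roots))]        # keep the root with the lower energy

Eg = np.pi**2 / 8                     # exact ground state energy in the same units
print(c, E(c) / Eg)                   # c ~ -0.22075; the ratio exceeds 1 by a few 1e-5
```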

How to calculate the first excited state


The previous method applies directly to ground states only, but the general
variational strategy works for other states, too. And sometimes the
calculation is just as easy as for the ground state. If you want to find
the (exact) first excited state, it is obvious that it is orthogonal to the
(exact) ground state. So minimizing the expectation value of energy among
the states that are orthogonal to the ground state gives you the energy of
the first excited state. You may argue that it is difficult to impose the
condition that the state be orthogonal to the ground state because we don't
know the exact ground state. But sometimes it is very simple to define
candidate wavefunctions that are automatically orthogonal to the ground
state because of symmetry considerations. That's because two eigenstates of
a Hermitean operator with different eigenvalues are automatically orthogonal
to each other. We want to choose a conserved Hermitean observable because
it commutes with the Hamiltonian and may therefore be simultaneously
diagonalized with it.
Examples with parity and angular momentum
For example, one-dimensional problems with V(x) = V(-x) conserve parity:

    P\psi(x) = \psi(-x), \qquad [P, H] = 0.

The ground state \psi_g(x) is guaranteed to have positive parity P = +1 (no
zeroes), while all states with negative parity are automatically perpendicular
to the ground state and are good candidates for the first excited state. A
variation of the original argument implies that the energy of the first excited
state is smaller than or equal to the average energy in any negative-parity state.
Another example involves the angular momentum. All pairs of states
with different values of l are automatically orthogonal to each other. All
states with different values of m are also orthogonal to each other, but this
is not so useful because there are always (2l + 1) states with different m but
the same l that have the same energy.
Lemma, including the proof. (We mentioned it previously.) For all
states |\psi\rangle that are orthogonal to the ground state,

    \langle\psi_0|\psi\rangle = 0,

the inequality derived below will hold. This |\psi\rangle may be expanded into
energy eigenstates:

    |\psi\rangle = \sum_n c_n |\psi_n\rangle

where c_0 = 0 for the ground state component because c_0 \propto
\langle\psi_0|\psi\rangle = 0. Then we have

    \frac{\langle\psi|H|\psi\rangle}{\langle\psi|\psi\rangle} = \frac{\sum_n |c_n|^2 E_n}{\sum_n |c_n|^2} \geq \frac{E_1 \sum_n |c_n|^2}{\sum_n |c_n|^2} = E_1.
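The lemma can be illustrated numerically as well, with a random Hermitian matrix standing in for H (an arbitrary choice for this sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6))
H = (A + A.T) / 2                     # a random Hermitian stand-in for H

evals, evecs = np.linalg.eigh(H)      # eigh returns ascending eigenvalues
psi0, E1 = evecs[:, 0], evals[1]      # exact ground state; first excited energy

# any state orthogonal to the ground state bounds E1 from above
for _ in range(1000):
    psi = rng.normal(size=6)
    psi -= (psi0 @ psi) * psi0        # project out the ground state component
    assert psi @ H @ psi / (psi @ psi) >= E1 - 1e-12
```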
The higher excited states

The variational approach may also be applied to the second (and, if you wish,
higher) excited states, as long as we require that the trial wavefunction is
orthogonal to the ground state, the first excited state, and perhaps all other
previous states if there are any. The Gram-Schmidt orthogonalization
procedure may be helpful.
The Gram-Schmidt method
What is this procedure? It is a procedure that starts with a set of linearly
independent states |\psi_n\rangle that are not necessarily orthogonal and ends
up with an orthogonal basis of states |\phi_n\rangle. How do you get them?
You expect them to be of the form

    |\phi_1\rangle = |\psi_1\rangle
    |\phi_2\rangle = |\psi_2\rangle + a_{21}|\phi_1\rangle
    |\phi_3\rangle = |\psi_3\rangle + a_{32}|\phi_2\rangle + a_{31}|\phi_1\rangle
    \ldots
    |\phi_n\rangle = |\psi_n\rangle + a_{n,n-1}|\phi_{n-1}\rangle + \ldots + a_{n1}|\phi_1\rangle

and then you just find the values of a_{kl} from the orthogonality relations.
For l < k,

    0 = \langle\phi_l|\phi_k\rangle = \langle\phi_l|\psi_k\rangle + a_{kl}\langle\phi_l|\phi_l\rangle \quad\Longrightarrow\quad a_{kl} = -\frac{\langle\phi_l|\psi_k\rangle}{\langle\phi_l|\phi_l\rangle}.
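The recursion is straightforward to implement; a minimal sketch with numpy (the random input vectors are arbitrary test data):

```python
import numpy as np

def gram_schmidt(psis):
    """Turn linearly independent |psi_n> into orthogonal |phi_n>, subtracting
    projections with a_kl = -<phi_l|psi_k>/<phi_l|phi_l> as in the formulas above."""
    phis = []
    for psi in psis:
        phi = psi.copy()
        for prev in phis:
            phi -= (prev @ psi) / (prev @ prev) * prev
        phis.append(phi)
    return phis

rng = np.random.default_rng(2)
phis = gram_schmidt([rng.normal(size=4) for _ in range(4)])

# all pairwise overlaps <phi_l|phi_k> with l != k vanish
for l in range(4):
    for k in range(l + 1, 4):
        assert abs(phis[l] @ phis[k]) < 1e-10
```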

The variational method then means that you write down the same kind
of Ansatz for the first n wavefunctions |\psi_k\rangle; you find the ground
state, here denoted |\phi_1\rangle, by minimizing the energy; you find
|\phi_2\rangle of the form above, with a_{21} calculated from the scalar
products we explained, by minimizing the expectation value of the Hamiltonian
in |\phi_2\rangle with respect to the same parameters; and so forth.
It is no longer true that the expectation value of the Hamiltonian in
the state |\phi_m\rangle is an upper bound on the energy E_m of the m-th
excited state, except for the ground state and the states whose orthogonality
to the previous ones is guaranteed by symmetries, but the method may still
drive you arbitrarily close to the right values of E_m if you choose general
and good enough families of trial functions.
Harmonic oscillator: first excited state
If we chose the odd trial function

    \psi(x) = Ax\exp(-bx^2)

for the first excited state of the harmonic oscillator, we would again obtain
the exact result, which is too boring. Let us instead try x times our previous,
second, bad Ansatz:

    \psi(x) = Ax\exp(-b|x|).

This is automatically orthogonal to the exact ground state because it is
again an odd function. The first derivative is therefore an even function,
so it only depends on |x| and you may just calculate it for x > 0 to get

    \frac{d\psi}{dx}(x) = A\exp(-b|x|)\,(1 - b|x|).

Note that it has no discontinuity at x = 0, so no delta-functions will appear
below in this case. The calculation of the average energy is therefore easier
than before:

    \langle\psi|\psi\rangle = 2A^2\int_0^\infty x^2 e^{-2bx}\,dx = \frac{A^2}{2b^3},

    \langle\psi|H|\psi\rangle = 2A^2\left[-\frac{\hbar^2}{2m}\int_0^\infty xe^{-bx}\,\frac{d^2}{dx^2}\!\left(xe^{-bx}\right)dx + \frac{m\omega^2}{2}\int_0^\infty x^4 e^{-2bx}\,dx\right]

    = \frac{A^2}{2b^3}\left[\frac{\hbar^2 b^2}{2m} + \frac{3m\omega^2}{2b^2}\right].

It is now easy to find the minimum:

    0 = \frac{d}{db}\,\frac{\langle\psi|H|\psi\rangle}{\langle\psi|\psi\rangle} = \frac{\hbar^2 b}{m} - \frac{3m\omega^2}{b^3},

which gives us

    b = \left(\frac{3m^2\omega^2}{\hbar^2}\right)^{1/4}, \qquad E_1 = \sqrt{3}\,\hbar\omega \approx 1.732\,\hbar\omega > 1.5\,\hbar\omega.

You see that the true first excited energy, 3\hbar\omega/2, was again below
our estimate, but the relative error was smaller than for the ground state,
especially because d\psi/dx was continuous in this case.
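The minimization is again easy to check numerically, in units \hbar = m = \omega = 1 (so the exact first excited energy is 3/2):

```python
import numpy as np

hbar = m = omega = 1.0                # units in which the exact E_1 is 3/2

# <H> for the odd trial function A x exp(-b|x|), as derived above
b = np.linspace(0.01, 5.0, 200001)
E = hbar**2 * b**2 / (2 * m) + 3 * m * omega**2 / (2 * b**2)

i = E.argmin()
print(b[i]**4, E[i])                  # b^4 -> 3, E -> sqrt(3) ~ 1.732 > 3/2
```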

