
NUMERICAL QUANTUM DYNAMICS

Progress in Theoretical Chemistry and Physics


VOLUME 9

Honorary Editors:

W.N. Lipscomb (Harvard University, Cambridge, MA, U.S.A.)


I. Prigogine (Université Libre de Bruxelles, Belgium)

Editors-in-Chief:

J. Maruani (Laboratoire de Chimie Physique, Paris, France)


S. Wilson (Rutherford Appleton Laboratory, Oxfordshire, United Kingdom)

Editorial Board:

H. Ågren (Royal Institute of Technology, Stockholm, Sweden)


D. Avnir (Hebrew University of Jerusalem, Israel)
J. Cioslowski (Florida State University, Tallahassee, FL, U.S.A.)
R. Daudel (European Academy of Sciences, Paris, France)
E.K.U. Gross (Universität Würzburg Am Hubland, Germany)
W.F. van Gunsteren (ETH-Zentrum, Zürich, Switzerland)
K. Hirao (University of Tokyo, Japan)
(Komensky University, Bratislava, Slovakia)
M.P. Levy (Tulane University, New Orleans, LA, U.S.A.)
G.L. Malli (Simon Fraser University, Burnaby, BC, Canada)
R. McWeeny (Università di Pisa, Italy)
P.G. Mezey (University of Saskatchewan, Saskatoon, SK, Canada)
M.A.C. Nascimento (Instituto de Quimica, Rio de Janeiro, Brazil)
J. Rychlewski (Polish Academy of Sciences, Poznan, Poland)
S.D. Schwartz (Yeshiva University, Bronx, NY, U.S.A.)
Y.G. Smeyers (Instituto de Estructura de la Materia, Madrid, Spain)
S. Suhai (Cancer Research Center, Heidelberg, Germany)
O. Tapia (Uppsala University, Sweden)
P.R. Taylor (University of California, La Jolla, CA, U.S.A.)
R.G. Woolley (Nottingham Trent University, United Kingdom)
Numerical Quantum Dynamics
by
Wolfgang Schweizer
Departments of Theoretical Astrophysics and Computational Physics,
University of Tübingen,
Germany

KLUWER ACADEMIC PUBLISHERS


NEW YORK, BOSTON, DORDRECHT, LONDON, MOSCOW
eBook ISBN: 0-306-47617-7
Print ISBN: 1-4020-0215-7

©2002 Kluwer Academic Publishers


New York, Boston, Dordrecht, London, Moscow

Print ©2002 Kluwer Academic Publishers


Dordrecht

All rights reserved

No part of this eBook may be reproduced or transmitted in any form or by any means, electronic,
mechanical, recording, or otherwise, without written consent from the Publisher

Created in the United States of America

Visit Kluwer Online at: http://kluweronline.com


and Kluwer's eBookstore at: http://ebooks.kluweronline.com
Progress in Theoretical Chemistry and Physics
A series reporting advances in theoretical molecular and material
sciences, including theoretical, mathematical and computational
chemistry, physical chemistry and chemical physics

Aim and Scope


Science progresses by a symbiotic interaction between theory and experiment: theory is
used to interpret experimental results and may suggest new experiments; experiment
helps to test theoretical predictions and may lead to improved theories. Theoretical
Chemistry (including Physical Chemistry and Chemical Physics) provides the concep-
tual and technical background and apparatus for the rationalisation of phenomena in the
chemical sciences. It is, therefore, a wide ranging subject, reflecting the diversity of
molecular and related species and processes arising in chemical systems. The book
series Progress in Theoretical Chemistry and Physics aims to report advances in
methods and applications in this extended domain. It will comprise monographs as well
as collections of papers on particular themes, which may arise from proceedings of
symposia or invited papers on specific topics as well as initiatives from authors or
translations.
The basic theories of physics – classical mechanics and electromagnetism, relativity
theory, quantum mechanics, statistical mechanics, quantum electrodynamics – support
the theoretical apparatus which is used in molecular sciences. Quantum mechanics
plays a particular role in theoretical chemistry, providing the basis for the valence
theories which allow us to interpret the structure of molecules and for the spectroscopic
models employed in the determination of structural information from spectral patterns.
Indeed, Quantum Chemistry often appears synonymous with Theoretical Chemistry: it
will, therefore, constitute a major part of this book series. However, the scope of the
series will also include other areas of theoretical chemistry, such as mathematical
chemistry (which involves the use of algebra and topology in the analysis of molecular
structures and reactions); molecular mechanics, molecular dynamics and chemical
thermodynamics, which play an important role in rationalizing the geometric and
electronic structures of molecular assemblies and polymers, clusters and crystals;
surface, interface, solvent and solid-state effects; excited-state dynamics, reactive
collisions, and chemical reactions.
Recent decades have seen the emergence of a novel approach to scientific research,
based on the exploitation of fast electronic digital computers. Computation provides a
method of investigation which transcends the traditional division between theory and
experiment. Computer-assisted simulation and design may afford a solution to complex
problems which would otherwise be intractable to theoretical analysis, and may also
provide a viable alternative to difficult or costly laboratory experiments. Though
stemming from Theoretical Chemistry, Computational Chemistry is a field of research


in its own right, which can help to test theoretical predictions and may also suggest
improved theories.
The field of theoretical molecular sciences ranges from fundamental physical
questions relevant to the molecular concept, through the statics and dynamics of
isolated molecules, aggregates and materials, molecular properties and interactions, and
the role of molecules in the biological sciences. Therefore, it involves the physical basis
for geometric and electronic structure, states of aggregation, physical and chemical
transformations, thermodynamic and kinetic properties, as well as unusual properties
such as extreme flexibility or strong relativistic or quantum-field effects, extreme
conditions such as intense radiation fields or interaction with the continuum, and the
specificity of biochemical reactions.
Theoretical chemistry has an applied branch – a part of molecular engineering,
which involves the investigation of structure–property relationships aiming at the
design, synthesis and application of molecules and materials endowed with specific
functions, now in demand in such areas as molecular electronics, drug design or genetic
engineering. Relevant properties include conductivity (normal, semi- and supra-),
magnetism (ferro- or ferri-), optoelectronic effects (involving nonlinear response),
photochromism and photoreactivity, radiation and thermal resistance, molecular recog-
nition and information processing, and biological and pharmaceutical activities, as well
as properties favouring self-assembling mechanisms and combination properties needed
in multifunctional systems.
Progress in Theoretical Chemistry and Physics is made at different rates in these
various research fields. The aim of this book series is to provide timely and in-depth
coverage of selected topics and broad-ranging yet detailed analysis of contemporary
theories and their applications. The series will be of primary interest to those whose
research is directly concerned with the development and application of theoretical
approaches in the chemical sciences. It will provide up-to-date reports on theoretical
methods for the chemist, thermodynamician or spectroscopist, the atomic, molecular or
cluster physicist, and the biochemist or molecular biologist who wish to employ
techniques developed in theoretical, mathematical or computational chemistry in their
research programmes. It is also intended to provide the graduate student with a readily
accessible documentation on various branches of theoretical chemistry, physical chem-
istry and chemical physics.

Contents

List of Figures ix
List of Tables xv
Preface xvi
1. INTRODUCTION TO QUANTUM DYNAMICS 1
1. The Schrödinger Equation 1
2. Dirac Description of Quantum States 7
3. Angular Momentum 9
4. The Motion of Wave Packets 14
5. The Quantum-Classical Correspondence 19
2. SEPARABILITY 35
1. Classical and Quantum Integrability 35
2. Separability in Three Dimensions 38
3. Coordinates and Singularities 49
3. APPROXIMATION BY PERTURBATION 55
1. The Rayleigh-Schrödinger Perturbation Theory 56
2. 1/N-Shift Expansions 70
3. Approximative Symmetry 77
4. Time-Dependent Perturbation Theory 85
4. APPROXIMATION TECHNIQUES 95
1. The Variational Principle 95
2. The Hartree-Fock Method 104
3. Density Functional Theory 106
4. The Virial Theorem 108
5. Quantum Monte Carlo Methods 118
5. FINITE DIFFERENCES 133
1. Initial Value Problems for Ordinary Differential Equations 133
2. The Runge-Kutta Method 135

3. Predictor-Corrector Methods 139


4. Finite Differences in Space and Time 140
5. The Numerov Method 150
6. DISCRETE VARIABLE METHOD 155
1. Basic Idea 155
2. Theory 157
3. Orthogonal Polynomials and Special Functions 165
4. Examples 182
5. The Laguerre Mesh 200
7. FINITE ELEMENTS 209
1. Introduction 210
2. Unidimensional Finite Elements 212
3. Adaptive Methods: Some Remarks 230
4. B-Splines 232
5. Two-Dimensional Finite Elements 235
6. Using Different Numerical Techniques in Combination 248
8. SOFTWARE SOURCES 255
Acknowledgments 261

Index 263
List of Figures

1.1 Graphic example for an active coordinate rotation. At
the top on the lefthand side, the body-fixed and laboratory
coordinate systems coincide. To follow the coordinate
rotation, the sphere is covered with a pattern. On the
lefthand side of the sphere there is a white strip running
from north pole to south pole. The pole axis is the
Z-axis. First step: rotation around the Z-axis. This
gives the sphere on the top righthand side. Second
step: rotation around the new axis, which leads to
the sphere on the bottom lefthand side, and last step,
rotation around the new axis. 12
1.2 Potential well and the classical turning points
defined by 30
2.1 Energy in Rydberg for the 750th to 759th pos-
itive parity diamagnetic Rydberg states in dependence
of the magnetic field strength measured in units of
(From [7]) 37
2.2 Histogram of the nearest neighbor distribution for the
diamagnetic hydrogen atom. For the computation about
350 positive parity, Rydberg eigenstates have
been used. On the lefthand side for
and on the righthand side for with
the magnetic field strength measured in units of
and E the energy in Hartrees. For the left-
hand side the classical corresponding system is almost
regular and for the righthand side almost completely
chaotic. For more details see [9]. For comparison the
Poisson and the Wigner distribution are additionally
plotted. 38

2.3 The nuclei are situated at points “P1” and “P2”, the
electron at “e”. R is the distance between the two nuclei,
and r1 and r2 are the distances of the electron to
each of the nuclei. The Cartesian coordinate system is
chosen such that the internuclear axis is the z-axis, with
the origin at the midpoint between the nuclei. 46

3.1 On top of the figure we show the energy as a function of


the magnetic field strength for the states (left, top)
and The prime indicates that this labeling would
only be correct in the field free limit. At a certain
magnetic field strength the two lines cross each other –
the two states are degenerate. At the bottom we show
the corresponding probability of presence in cylindrical
coordinates. The magnetic field B points into the z-
direction. The wave functions remain undistorted in
the region where the two lines cross each other. (From
[5]) 63

3.2 The states are the same as in Fig. (3.1) and their cor-
responding energies are plotted in dependence of the
magnetic field strength, but now with an additional par-
allel electric field of magnitude The
parity is no longer a good quantum number and hence
degeneracy becomes forbidden. Therefore, we observe
an avoided crossing: the states interact with each other
and the wave functions are distorted clearly close to the
point of the forbidden crossing. (From [5]) 64

3.3 Transition probability in units of as a func-


tion of 88

4.1 Coulomb (C), exchange (A) and overlap (S) integral as


a function of the nuclei distance in atomic
units. 117

5.1 Diagrammatic picture of the time propagation around a pulse. 141



5.2 In all rows the height of the initial Gaussian wave packet
and of the potential wall is selected such that it equals
the relative ratio between the kinetic and the potential
energy. On the left hand side tunneling occurs. In the
middle the kinetic energy of the initial wave packet is
higher than the potential energy, and on the right hand
side we have the same situation but now with two Gaus-
sians traveling in opposite directions. From top to bot-
tom: the propagated wave packet is shown in equal
time steps of T/6. (The numerical values are selected
as described in Chapt. 5.4.3.1.) 145

5.3 Potential and the lowest


five corresponding eigenfunctions, labeled by to 148
5.4 Ground state and first excited state of the harmonic os-
cillator. On the lefthand side computed with the correct
initial conditions, on the righthand side the erroneous
computation with the opposite parity. 152

6.1 On top the Legendre polynomials and


(solid lines), and bottom and (solid lines). 173
6.2 Energy of the 3rd eigenstate for the quartic anharmonic
oscillator as a function of ‘pert.’ are the results in
1st order perturbation theory. The 3-dimensional DVR
positive parity computations are the bold solid line and
the diamonds indicate the 6-dimensional computations.
The thin solid line is obtained by diagonalizing the 6-
dimensional positive parity Hamiltonian matrix. 185
6.3 Eigenenergies and potential curve for the general anhar-
monic oscillator. Superimposed (dots) are the analytic
results of the harmonic oscillator for comparison. On
the lefthand side for and on the righthand side
From top to bottom: in steps of 0.05. 186
6.4 Eigenfunctions of the discrete variable represen-
tation for N = 3. On top for the fixed-node and on bot-
tom for the periodic DVR. The nodes for the fixed-node
representation are and for the
periodic representation
obviously the Kronecker delta property is
satisfied. 197

7.1 Top: Graphical example for a radial hydrogen wave


function. The entire space is divided into small el-
ements. On each of the elements the wave function
is expanded via interpolation polynomials. Here each
element has 5 nodal points at 0, 0.25, 0.5, 0.75 and 1.
Bottom: For Coulomb systems convergence is signifi-
cantly improved by quadratically spacing the elements.
This means that the size of the elements increases
linearly and the distance from the origin increases
quadratically with the number of elements n. 214
7.2 Top: Graphical example for the nodes of the lowest 10
excited radial hydrogen wave functions for l = 0,
Bottom: The nodes of the first 10 oscillator eigen-
states. The distance between neighboring nodes of a
radial hydrogen wave function increases significantly
as a function of the distance from the origin, whereas
the distances between neighboring nodes of the oscil-
lator eigenstates are always of the same order. 216
7.3 The lowest four Lagrange interpolation polynomials.
The nodes are equidistant in [–1,1]. Thus for two
nodes, (–1, +1), the interpolation polynomials are lin-
ear; for three nodes, (–1, 0,+1) quadratic and so on.
Obviously each of the interpolation polynomials fulfills
the Kronecker delta property. 219
7.4 Hermite interpolation polynomials. Left hand side for
two and right hand side for three nodes. Top the in-
terpolation polynomials, bottom the derivative of the
interpolation polynomials. "1" and "2" on the left hand
side and "1", "2" and "3" on the right hand side la-
bel the polynomials the others label the Hermite
interpolation polynomials see Eq.(7.35a). 222
7.5 Extended Hermite interpolation polynomials for two
nodes. Left hand side the interpolation polynomials,
center its first and right hand side its second derivative.
"1" and "2" label the polynomials , "3" and "4" and
"5" and "6" 224
7.6 Example for the global Hamiltonian matrix for 5 finite
elements. Each of the blocks is given by the local
matrix. 226

7.7 Convergence of the energy


for the state as a function of the num-
ber of non-vanishing Hamiltonian matrix elements
Top, "1", for linear Lagrange interpolation polynomials,
"3" for third order and "5" for 5-th order interpolation
polynomial. Thick lines for Lagrange and thin lines for
Hermite interpolation polynomials. 228
7.8 Convergence for the state in depen-
dence of the element structure for Hermite interpola-
tion polynomials. and
is the number of non-vanishing Hamiltonian matrix el-
ements. "const." for finite elements of constant size,
"quadr." for quadratically spaced finite elements, and
"lag." for finite elements whose dimension is given by
the zeros of a Laguerre polynomial. 229
7.9 Triangle in global coordinates. 236
7.10 Triangular element in local coordinates. 237
7.11 Grid and grid labels for triangular finite elements. 238
7.12 Linear interpolation polynomial 239
7.13 Quadratic interpolation polynomials. (Lefthand side
and righthand side 239
7.14 Lefthand side finite element for quadratic and righthand
side for cubic two-dimensional interpolation polynomi-
als. 241
7.15 Cubic interpolation polynomials. From lefthand side
top to righthand side bottom: interpolation polynomial
and 242
7.16 Interpolation polynomials for finite elements with 15
nodes. Top: For finite elements with nodes only on
the border; bottom for finite elements with nodes on
the border and three internal nodes. Lefthand inter-
polation polynomial righthand side interpolation
polynomial 243
7.17 Complex energy of the hydrogen ground state for F=0.3
as a function of the complex rotation angle To un-
cover the “convergence trajectory” the deviation from
the converged value is plotted:
and with the
real part of the complex energy eigenvalue and the
FWHM of the resonance. 251
List of Tables

1.1 Some partial differential equations in physics. 3


3.1 Contribution in % of the 0th to 4th perturbational orders
to the final energy for different nuclei, for the ground
"0" and the first excited "1" breathing modes. 77
3.2 Eigenvalues of the diamagnetic contribution
for some diamagnetic
hydrogenic eigenstates. indicates the field-free
quantum number and the upper index which states
form a common adiabatic multiplet. States without
upper index are adiabatic singlet states. 84
3.3 Comparison of the energy of the state as func-
tion of various external field strengths obtained by the
approximate invariant (inv.) and by ‘exact’ numerical
computations via discrete variable and finite elements. 85
4.1 Ionization potential for two-electron systems in Rydberg 100
5.1 The lowest five eigenenergies for the Pöschl-Teller po-
tential in
dependence of the number of finite difference steps. 149
5.2 The lowest five eigenenergies for the harmonic oscilla-
tor. denotes the integration border and dx the step
size. 150
5.3 The lowest five eigenenergies for the anharmonic oscil-
lator with potential
denotes the left and the right
integration border and dx the step size. 150

6.1 Parameters for the model potential


is the
averaged relative error over all computed eigenstates
up to principal quantum number n= 10 and the dif-
ference between our computation and the experimental
result. 188
6.2 Energy in Rydberg for the ground state of the hydrogen
atom for spin down states for The
Laguerre mesh was obtained by with
respectively 203
7.1 Some polynomial coefficients for the Lagrange inter-
polation polynomials as defined by Eq.(7.30) 220
7.2 Eigenvalues for the two-dimensional harmonic oscil-
lator. For both computations quadratic interpolation
polynomials are used. 247
Preface

It is an indisputable fact that computational physics forms part of the essential
landscape of physical science and physics education. When writing such a
book, one is faced with numerous decisions, e.g.: Which topics should be
included? What should be assumed about the readers’ prior knowledge? How
should balance be achieved between numerical theory and physical application?
This book is not elementary. The reader should have a background in quan-
tum physics and computing. On the other hand, the topics discussed are not
addressed to the specialist. This work will hopefully bridge the gap between ad-
vanced students, graduates and researchers looking for computational ideas
beyond their own fence, and the specialist working on a special topic. Many impor-
tant topics and applications are not considered in this book. The selection is
of course a personal one, by no means exhaustive, and the material presented
obviously reflects my own interests.

What is Computational Physics?

During the past two decades computational physics became the third funda-
mental physical discipline. Like the ‘traditional partners’ experimental physics
and theoretical physics, computational physics is not restricted to a special area,
e.g., atomic physics or solid state physics. Computational physics is a method-
ical ansatz useful in all subareas and not necessarily restricted to physics. Of
course these methods are related to computational aspects, which means numeri-
cal and algebraic methods, but also the interpretation and visualization of huge
amounts of data.
Similar to theoretical physics, computational physics studies the properties
of mathematical models of nature. In contrast to analytical results obtained
by theoretical models, computational models can be much closer to the real
world. Due to the numerical backbone of computational physics more details
can be studied. Experimental physics is an empirical ansatz to obtain a better

understanding of the world. Computational physics, too, has a strong em-
pirical component. For fixed parameters of our computational model of the
real-world physical system we obtain a single and isolated answer. Parameter
studies result in a bundle of numbers. Intuition and interpretation are necessary
to obtain a better understanding of nature from these numbers: a number on
its own, without any interpretation, is nothing.

Mathematical models describing classical systems are dominated by ordi-


nary differential equations, nonrelativistic quantum dynamics by the Schrö-
dinger equation, which leads for bound states to an elliptic boundary value
problem. The eigenfunctions of the Schrödinger equation for bound states
will vanish at infinity. Thus the computational methods useful in solving
physical models will also differ between classical mechanics, electrodynamics,
statistical mechanics, quantum mechanics and so forth, although there will
be a certain overlap. Methods useful for a small number of degrees of freedom
cannot be used for many-particle systems. Even though computer power (storage
and speed) has increased dramatically during the last decades, many-body systems
cannot be treated on the same computational footing as, e.g., systems with two
degrees of freedom. In this book we will concentrate on numerical methods
adequate mainly for small quantum systems.

The book consists of eight chapters. Chapter one formalizes some of the
ideas of quantum mechanics and introduces the notation used throughout the
book. The Dirac description will be briefly discussed, and angular momentum, the
Euler angles and the Wigner rotation function introduced. The motion of wave
packets and some comments about the discretization of the time propagator
can also be found there. This chapter will be completed by a discussion of
the quantum classical correspondence, including the WKB approximation and
the representation of quantum wave functions in phase space. Chapter two,
devoted to the discussion of integrability and separability, contains a list of
all coordinate systems for which the three-dimensional Schrödinger equation
could be separable.
Chapters three and four revisit approximation techniques. We will start with
Schrödinger perturbation theory and briefly discuss the effect and the qualitative
consequences of perturbing degenerate states. Approximative symmetries and
dynamical Lie algebras will also be a topic. Advanced computational
methods are often justified by variational principles. Thus we will discuss
the Rayleigh-Ritz variational principle. Many numerical techniques will lead
to banded or sparse Hamiltonian matrices. Effective linear algebra routines
are based on Krylov space techniques like the Arnoldi and Lanczos method,
which will be introduced in this chapter. The many-body Schrödinger equation
cannot be solved directly. Thus approximation models have to be used to obtain
sensible results. Hartree-Fock and density functional theory are two of the most

important techniques to deal with many-body problems. We will briefly discuss
these two methods, followed by a discussion of the virial theorem. The last
section of chapter four deals with quantum Monte Carlo methods, which, due
to the increasing computer power, will become ever more important for quantum
computations in the future.
Chapter five is devoted to finite differences. Here we will also discuss initial
value problems for ordinary differential equations. Central to this chapter is
the discretization of the uni-dimensional time-dependent Schrödinger equation
in the space and time variable. This chapter will be completed by deriving the
Numerov method, useful for computing bound states, resonances and scattering
states.
The theory of discrete variables is the subject of chapter six. The essential
mathematical background is provided. Orthogonal polynomials are discussed
and the computation of their nodes is explained. These nodes play an im-
portant rôle in applying the discrete variable technique to quantum systems.
Some examples are discussed emphasizing the combination of discrete variable
techniques with other numerical methods.
Chapter seven is devoted to the discussion of finite elements. We will discuss
one- and two-dimensional finite elements; the discussion of two-dimensional
elements will be restricted to elements with triangular shape. We will derive
interpolation polynomials useful for quantum applications, introduce spline
finite elements, and discuss, by an example, the combination of finite elements
for the radial coordinate with discrete variables in the angular coordinate.
Many computational techniques necessitate linear algebra routines to obtain
the results. Adaptive techniques for the grid generation are useful to optimize
finite element computations. Other routines are necessary to compute the
nodes of orthogonal polynomials, fast Fourier transformations to obtain wave
functions in phase space, graphical routines to visualize numerical results and
so on. This list could easily become endless. For the benefit of the reader,
the final chapter contains a - of course incomplete - list of useful sources for
software.
Chapter 1

INTRODUCTION TO QUANTUM DYNAMICS

This chapter gives a brief review of quantum mechanics. Although the reader
is expected to have some experience in the subject already, the presentation
starts at the beginning and is self-contained. A more thorough introduction to
quantum mechanics can be found in numerous monographs, e.g., [1, 2, 3].
In the present state of scientific knowledge the importance of quantum me-
chanics is commonplace. Quantum mechanics plays a fundamental rôle in the
description and understanding of physical, chemical and biological phenom-
ena. In 1900, Max Planck [4] was the first to realize that quantum theory is
necessary for a correct description of the radiation of incandescent matter. With
this insight he opened new horizons in science. Two further historic cor-
nerstones are Einstein’s explanation of the photoelectric effect and, in 1923,
de Broglie’s at that time speculative wave-particle duality. In accordance with
Einstein’s insight and de Broglie’s hypothesis, Schrödinger1 derived in 1925
his famous Schrödinger equation.

1. THE SCHRÖDINGER EQUATION


This section is an introduction to the Schrödinger equation. No attempt is
made here to be complete. The essential goal is to lay the basis for the rest of
the book – the numerical aspects of non-relativistic quantum theory.

1.1 INTRODUCTION
The emission of electrons from the surface of a metal was discovered by
Hertz in 1887. Later experiments by Lenard showed that the kinetic energy of
the emitted electron was independent of the intensity of the incident light and
that there was no emission unless the frequency of the light was greater than
a threshold value typical for each element. Einstein realized that this is what

is to be expected on the hypothesis that light is absorbed in quanta of amount hν.

In the photoelectric effect an electron at the surface of the metal gains an
energy hν by the absorption of a photon. The maximum kinetic energy of the
ejected electron is E_max = hν - W, where W is the contact potential or work
function of the surface. Einstein’s theory predicts that the maximum kinetic
energy of the emitted photoelectron is a linear function of the frequency of the
incident light, a result which was later confirmed experimentally by Millikan
(1916) and which allowed the value of the Planck constant h to be measured. These
results led to the conclusion that the interaction of an electromagnetic wave
with matter occurs by means of particles, the photons. Corpuscular parameters
(energy E and momentum p) of the photons are related to wave parameters
(frequency ν and wave vector k) of the electromagnetic field by the
fundamental Planck-Einstein relations:

E = hν = ħω ,   p = ħk ,      (1.1)

where ħ = h/2π, with h ≈ 6.626 × 10^-34 Js the Planck constant.
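Einstein’s linear relation between frequency and maximum kinetic energy is easy to tabulate numerically. The short Python sketch below is an illustration only; the work function value used is an assumed example (roughly that of sodium), not a quantity taken from the text.

```python
# Photoelectric effect: Einstein relation E_max = h*nu - W.
# Illustrative sketch; W = 2.3 eV below is an assumed example value.
h = 6.62607015e-34      # Planck constant in J s
e = 1.602176634e-19     # elementary charge in C (J per eV)

def max_kinetic_energy_eV(frequency_hz, work_function_eV):
    """E_max = h*nu - W in eV; zero below the threshold frequency."""
    E = h * frequency_hz / e - work_function_eV
    return max(E, 0.0)

W = 2.3                       # assumed work function in eV
nu_threshold = W * e / h      # threshold frequency nu_0 = W / h

# Above threshold E_max grows linearly with nu; below it no emission occurs,
# independent of the light intensity.
print(max_kinetic_energy_eV(1.5 * nu_threshold, W))
print(max_kinetic_energy_eV(0.5 * nu_threshold, W))
```

The second call returns zero however intense the light is, which is precisely the experimental observation that motivated the photon hypothesis.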


Newton was the first to resolve white light into separate colors by dispersion
with a prism. The study of atomic emission and absorption spectra uncovered
that these spectra are composed of discrete lines. In 1913 Bohr assumed that an
electron in an atom moves in an orbit around the nucleus under the influence
of the electrostatic attraction of the nucleus, just like the planets in the solar
system. According to classical electrodynamics an accelerated electron will radiate
and lose energy, so such a classical system would be unstable. Therefore
Niels Bohr assumed that the electron moves on quantized orbits and radiates
only when it jumps from a higher to a lower orbit. Of course the origin of
these quantization rules, the Bohr-Sommerfeld quantization, remained mysterious.
In 1923 de Broglie formulated the hypothesis that material particles, just
like photons, can have wavelike aspects. Hence he associated with the particle
parameters energy and momentum wave frequencies and wave vectors, similar
to the Planck-Einstein relation (1.1). Therefore the corresponding wavelength
of a particle with momentum p is

λ = h/p .      (1.2)

By this relation de Broglie could derive the Bohr-Sommerfeld quantization rule.
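To get a feeling for the scales involved, the de Broglie wavelength of an electron accelerated through a potential difference U follows from p = sqrt(2 m_e e U). The non-relativistic sketch below is illustrative; the 100 V example value is chosen only because it yields a wavelength of atomic dimensions.

```python
import math

# de Broglie wavelength lambda = h / p for an electron accelerated
# through a potential difference U (non-relativistic sketch, eU << m c^2).
h  = 6.62607015e-34    # Planck constant, J s
me = 9.1093837015e-31  # electron mass, kg
e  = 1.602176634e-19   # elementary charge, C

def de_broglie_wavelength_m(U_volts):
    """lambda = h / sqrt(2 * m_e * e * U)."""
    p = math.sqrt(2.0 * me * e * U_volts)
    return h / p

# A 100 V electron has a wavelength of roughly 1.2 angstrom, comparable
# to atomic dimensions -- the reason electron diffraction off crystals works.
print(de_broglie_wavelength_m(100.0))
```

Note the scaling lambda ∝ 1/sqrt(U): quadrupling the voltage halves the wavelength.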


At that time Erwin Schrödinger was working at the institute of Peter Debye in
Zürich, and Peter Debye suggested that he derive a wave equation for these strange
de-Broglie-waves, which resulted in the famous time-dependent Schrödinger
equation

iħ ∂ψ(r,t)/∂t = [ - (ħ²/2m) ∇² + V(r,t) ] ψ(r,t) .      (1.3)

By acting with the potential-free Schrödinger equation on a plane de-Broglie-
wave he obtained the correct de Broglie relation
(1.2), and he could derive the hydrogen eigenspectrum2.
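This consistency check can be reproduced numerically: a plane de Broglie wave exp(i(kx - ωt)) solves the free one-dimensional Schrödinger equation exactly when ω = ħk²/2m. The sketch below works in units ħ = m = 1; the wavenumber and sample point are arbitrary choices.

```python
import cmath

# Check (hbar = m = 1): psi = exp(i(k x - w t)) solves the free
# Schroedinger equation  i d_t psi = -(1/2) d_xx psi  iff w = k^2 / 2.
k = 1.7
w = k * k / 2.0          # dispersion relation of a free matter wave

def psi(x, t):
    return cmath.exp(1j * (k * x - w * t))

# Evaluate both sides by central finite differences at an arbitrary point.
h = 1e-4
x0, t0 = 0.3, 0.2
dpsi_dt   = (psi(x0, t0 + h) - psi(x0, t0 - h)) / (2 * h)
d2psi_dx2 = (psi(x0 + h, t0) - 2 * psi(x0, t0) + psi(x0 - h, t0)) / h**2

# residual = i d_t psi + (1/2) d_xx psi, which vanishes for w = k^2/2
residual = 1j * dpsi_dt + 0.5 * d2psi_dx2
print(abs(residual))
```

Replacing `w` by any other value makes the residual of order one instead of the finite-difference error, which is the numerical face of the dispersion relation E = p²/2m.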

Mathematical aspects: partial differential equations. The mathematical
typology of quasilinear partial differential equations of 2nd order

a(x,y) ∂²u/∂x² + 2 b(x,y) ∂²u/∂x∂y + c(x,y) ∂²u/∂y² + (lower-order terms) = f      (1.4)

distinguishes three types of such equations: hyperbolic, parabolic or elliptic.
A quasilinear partial differential equation of 2nd order, Eq.(1.4), is called
hyperbolic if b² - ac > 0, parabolic if b² - ac = 0, and elliptic if b² - ac < 0.
In physical applications hyperbolic and parabolic partial differential equa-
tions usually describe initial value problems and elliptic equations boundary
value problems. In Tab. (1.1) some typical physical examples are listed.
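For a second-order quasilinear equation in two independent variables, a u_xx + 2b u_xy + c u_yy + … = f, the type is fixed by the sign of the discriminant b² - ac. A minimal sketch of this classification (the coefficient names follow the standard two-variable form):

```python
def pde_type(a, b, c):
    """Classify a*u_xx + 2*b*u_xy + c*u_yy + (lower order) = f
    by the discriminant b^2 - a*c (two independent variables)."""
    d = b * b - a * c
    if d > 0:
        return "hyperbolic"
    if d == 0:
        return "parabolic"
    return "elliptic"

# Wave equation u_tt - u_xx = 0:            a=1, b=0, c=-1 -> hyperbolic
# Heat / time-dependent Schroedinger type:  a=1, b=0, c=0  -> parabolic
# Laplace equation u_xx + u_yy = 0:         a=1, b=0, c=1  -> elliptic
print(pde_type(1, 0, -1), pde_type(1, 0, 0), pde_type(1, 0, 1))
```

The three example rows correspond to the typical physical equations of Tab. (1.1): hyperbolic and parabolic equations carry initial data forward in time, while the elliptic case is a boundary value problem.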

Physical aspects. The solutions of the Schrödinger equation are com-


plex functions. Due to the interpretation of Max Born, the modulus square
|ψ(r, t)|² of the normalized wave function at a position r gives us the probability
4 NUMERICAL QUANTUM DYNAMICS

of finding the particle at that point. Heisenberg discovered
that the product of the position and momentum uncertainties Δq and Δp is greater
than or equal to half of Planck's constant ℏ:

Δq · Δp ≥ ℏ/2 .     (1.5)
This uncertainty can be generalized to arbitrary operators. The expectation
value of an operator Â is defined by

⟨Â⟩ = ⟨ψ|Â|ψ⟩     (1.6)

and its variance by

(ΔA)² = ⟨(Â − ⟨Â⟩)²⟩ = ⟨Â²⟩ − ⟨Â⟩²

(ΔA carries the dimensions of Â). A vanishing variance of an Hermitian operator
means that its expectation value can be measured exactly. The commutator of
two operators Â, B̂ is defined by

[Â, B̂] = ÂB̂ − B̂Â .

If Â and B̂ are two Hermitian operators which do not commute, they cannot
both be sharply measured simultaneously. Let

[Â, B̂] = iĈ ,

then the uncertainties ΔA and ΔB satisfy

ΔA · ΔB ≥ |⟨Ĉ⟩| / 2 .     (1.9)
Coming back to the probability interpretation, we must require that

∫ |ψ(r, t)|² d³r = 1 ,

since at some initial time t₀ the particle must be somewhere in space. Let

ρ(r, t) = |ψ(r, t)|²

be the probability density and

j(r, t) = (ℏ/2mi) ( ψ* ∇ψ − ψ ∇ψ* )

the probability current density or flux; then we get the following law of proba-
bility conservation:

∂ρ/∂t + ∇ · j = 0 .

This is a continuity equation analogous to the one between the charge and
current densities in electrodynamics and ensures that the physical requirement
of probability conservation is satisfied.
The momentum operator has the coordinate-space representation p̂ = −iℏ∇,
and thus the differential operator −iℏ∇/m in the probability current density
can be interpreted as the velocity operator of the quantum particle under con-
sideration. Therefore if the particle is placed in an electromagnetic field the
momentum operator has to be replaced by p̂ − qA, with q the particle's charge
and A the vector potential of the external electromagnetic field.

1.2 THE SCHRÖDINGER EQUATION AS AN


EIGENVALUE PROBLEM: HAMILTONIAN
OPERATOR
Let us return to the time-dependent Schrödinger equation (1.3) under the
assumption of a time-independent potential V(r). By the standard mathematical
procedure

ψ(r, t) = φ(r) f(t)     (1.12)

we can separate the time and space variables in the time-dependent Schrödinger
equation (1.3). This gives

iℏ (df/dt) / f(t) = [ ( −(ℏ²/2m) Δ + V(r) ) φ(r) ] / φ(r) ,

and because only the lefthand side of the differential equation depends on the
time variable t, respectively the righthand side only on space variables, this
equation can be satisfied only if both sides are equal to a constant. Calling this
constant E, we obtain

f(t) = exp(−iEt/ℏ)     (1.14)

and the time-independent Schrödinger equation

[ −(ℏ²/2m) Δ + V(r) ] φ(r) = E φ(r) .     (1.15)

Consequently this can also be written as an eigenvalue equation

Ĥ φ = E φ ,

with Ĥ the Hamiltonian, φ the eigenfunction of the Hamilton operator and
the constant E its eigenvalue. Thus whereas the time-dependent Schrödinger
equation describes the development of the system in time, the time-independent
equation is an eigenvalue equation and its (real) eigenvalue E is the total energy
of the system. This set of eigenvalues could be discrete with normalizable
eigenvectors, or continuous, or mixed as in the case of the hydrogen
atom. A mixed spectrum may have a finite or even infinite number of discrete
eigenvalues. Bound states are always discrete. Thus the question occurs under
which condition an attractive potential possesses bound states.
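As a numerical illustration of the eigenvalue problem (our own sketch, not from the text), the time-independent Schrödinger equation can be discretized by central finite differences and diagonalized; for the harmonic oscillator in units ℏ = m = ω = 1 the lowest eigenvalues approach E_n = n + 1/2:

```python
import numpy as np

# Finite-difference sketch of H = -1/2 d^2/dx^2 + x^2/2 (hbar = m = omega = 1);
# the second derivative is replaced by central differences, giving a
# tridiagonal matrix whose lowest eigenvalues approximate E_n = n + 1/2.
n = 1000
x = np.linspace(-10.0, 10.0, n)
h = x[1] - x[0]
H = (np.diag(1.0 / h**2 + 0.5 * x**2)
     + np.diag(np.full(n - 1, -0.5 / h**2), 1)
     + np.diag(np.full(n - 1, -0.5 / h**2), -1))
E = np.linalg.eigvalsh(H)
print(E[:4])  # ≈ [0.5, 1.5, 2.5, 3.5]
```

The discretization error is of second order in the grid spacing, so refining the grid improves the eigenvalues systematically.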
In one-dimensional potentials it is a well-known fact that a bound state
exists if V(x) ≤ 0 for all x or even if V satisfies the weaker condition
∫ V(x) dx ≤ 0. More of interest are attractive, regular 3-dimensional
central potentials. Regular potentials are those which are less singular than
r⁻² at the origin, and go to zero faster than r⁻² at infinity, r being the radial
coordinate. More precisely, they satisfy the condition

∫₀^∞ r |V(r)| dr < ∞ .

For those potentials Bargmann has proved the inequality (in units 2m = ℏ = 1)

n_l < (2l + 1)⁻¹ ∫₀^∞ r |V(r)| dr ,

where n_l is the number of bound states with angular momentum l. Calogero [6]
has shown that the number of bound states for the radial Schrödinger equation
in the S-wave (l = 0) admits the upper bound

n₀ ≤ (2/π) ∫₀^∞ |V(r)|^{1/2} dr .

These upper bounds have been recently generalized [7] to central regular po-
tentials.
(For more details and additional inequalities for upper bounds see [7].)
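The bounds above can be illustrated numerically; the following sketch (our own example, units 2m = ℏ = 1, square-well parameters arbitrary) counts the S-wave bound states of an attractive square well by diagonalizing the discretized radial equation and compares the count with the Bargmann integral:

```python
import numpy as np

# Attractive square well V(r) = -V0 (r < a), 0 otherwise; units 2m = hbar = 1.
# Radial S-wave equation -u'' + V u = E u with u(0) = u(R) = 0, discretized
# by central differences; bound states are the negative eigenvalues.
V0, a = 25.0, 1.0
R, n = 20.0, 2000
r = np.linspace(0.0, R, n + 2)[1:-1]   # interior grid points
h = r[1] - r[0]
V = np.where(r < a, -V0, 0.0)
H = (np.diag(2.0 / h**2 + V)
     + np.diag(np.full(n - 1, -1.0 / h**2), 1)
     + np.diag(np.full(n - 1, -1.0 / h**2), -1))
E = np.linalg.eigvalsh(H)
n_bound = int(np.sum(E < 0.0))
bargmann = V0 * a**2 / 2.0             # Integral_0^inf r |V(r)| dr
print(n_bound, bargmann)               # 2 bound states, bound 12.5
```

For this well depth two S-wave bound states exist, comfortably below the Bargmann bound; the bound is far from sharp but rigorous.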

Resonances. Strictly speaking, the continuous spectrum of an Hamiltonian


has no eigenvectors, because the corresponding states are no longer normalizable.
Any real quantum system has quasibound states or resonances which are im-
portant for understanding its dynamics. Like the continuum and scattering
states resonances are associated with non-normalized solutions of the time-
independent Schrödinger equation. However, the complex coordinate method

or complex coordinate rotation [8] enables us to isolate the resonance states


from the other states and to discretize the continuum.
In the complex coordinate method the real configuration space coordinates
are transformed by a complex dilatation or rotation. The Hamiltonian of the
system is thus continued into the complex plane. This has the effect that,
according to the boundaries of the representation, complex resonances are
uncovered with square-integrable wavefunctions and hence the space boundary
conditions remain simple. This square integrability is achieved through an
additional exponentially decreasing term

After the coordinates entering the Hamiltonian have been transformed, the
Hamiltonian is no longer hermitian and thus can support complex eigenenergies
associated with decaying states. Thereby a complex Schrödinger eigenvalue
equation is obtained. The spectrum of a complex-rotated Hamiltonian has the
following features [8]: Its bound spectrum remains unchanged, but continuous
spectra are rotated about their thresholds into the complex plane by an angle
of 2θ, with θ the rotation angle. Resonances are uncovered by the rotated
continuum spectra, with complex eigenvalues and square-integrable (complex
rotated) eigenfunctions. The complex eigenvalue E = E_r − iΓ/2 yields the
resonance position E_r and width Γ. The inverse of the width gives the lifetime
of the state. This can be understood easily by the separation ansatz Eq. (1.12).
Due to Eq. (1.14) we obtain for an eigenstate of the complex coordinate rotated
Hamiltonian

f(t) = exp(−iEt/ℏ) = exp(−iE_r t/ℏ) exp(−Γt/2ℏ) ,

and thus this state will exponentially decay. The complex coordinate method
has been applied to various physical phenomena and systems. For a recent
review and more details see [9].
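A minimal numerical illustration of the rotated continuum (our own sketch, ℏ = m = 1): for the free particle the complex dilatation x → x e^{iθ} multiplies the kinetic energy p²/2 by e^{−2iθ}, so every discretized continuum eigenvalue is turned into the lower complex plane by the angle 2θ:

```python
import numpy as np

# Complex coordinate method for a free particle: under x -> x e^{i theta}
# the Hamiltonian p^2/2 becomes e^{-2 i theta} p^2/2; the discretized
# continuum is rotated by the angle 2*theta into the lower half plane.
theta = 0.3
n = 300
L = 30.0
h = L / (n + 1)
T = (np.diag(np.full(n, 1.0 / h**2))            # finite-difference p^2/2
     + np.diag(np.full(n - 1, -0.5 / h**2), 1)
     + np.diag(np.full(n - 1, -0.5 / h**2), -1))
H_rot = np.exp(-2j * theta) * T                  # complex-rotated Hamiltonian
ev = np.linalg.eigvals(H_rot)
print(np.allclose(np.angle(ev), -2 * theta))     # True: rotated by 2*theta
```

With an additional potential supporting a resonance, the bound eigenvalues would stay on the real axis while resonance poles become isolated complex eigenvalues; this trivial free-particle case only exhibits the rotated continuum.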
In Eq. (1.6) we have already used Dirac's "bras" and "kets" as an abbreviation
for computing the expectation value of an operator. In the following section we
will briefly discuss Dirac’s bra-ket notation, which has the advantage of great
convenience and which uncovers more clearly the mathematical formulation of
quantum mechanics.

2. DIRAC DESCRIPTION OF QUANTUM STATES


This section is an introduction to the Dirac notation, to the “bra” and “kets”,
which will be mainly used throughout the book. For more details see Dirac’s
famous book [2].
Observables of quantum systems are represented by Hermitian operators Ô
in Hilbert space, the states by generalized vectors, the state vector or wave
function. Following Dirac we call such a vector a ket and denote it by |ψ⟩. This
ket is postulated to contain complete information about the physical state. The
eigenstates of an observable Ô are specified by the eigenvalues oₙ and the
corresponding eigenvalue equation is

Ô |n⟩ = oₙ |n⟩ ,

with |n⟩ the eigenstate or eigenvector. The eigenstates form an orthonormal
set, so

⟨m|n⟩ = δ_{mn} .

By this equation the bra³ vector ⟨m| is defined, an element of the bra space, a
vector space dual to the ket space. If |ψ⟩ is an Hilbert space element, ⟨φ|ψ⟩ is a
complex number. Thus by the "bras" we have defined a map from the Hilbert
space onto the complex numbers, the inner or scalar product.
We postulate two fundamental properties of the inner product. First,

⟨φ|ψ⟩ = ⟨ψ|φ⟩* ,

in other words ⟨φ|ψ⟩ and ⟨ψ|φ⟩ are complex conjugate to each other, and second,

⟨ψ|ψ⟩ ≥ 0 .

This is also called the postulate of positive definite metric.


The eigenkets |r⟩ of the position operator satisfying

r̂ |r⟩ = r |r⟩

are postulated to form a complete set and are normalized such that
⟨r|r′⟩ = δ(r − r′). Hence the ket |ψ⟩ of an arbitrary physical state can be expanded in
terms of |r⟩,

|ψ⟩ = ∫ d³r |r⟩⟨r|ψ⟩ ,

and ψ(r) = ⟨r|ψ⟩ is the coordinate representation of the state |ψ⟩. In analogue,
we define the momentum representation ψ̃(p) = ⟨p|ψ⟩ of the state |ψ⟩, and
thus

ψ(r) = ∫ d³p ⟨r|p⟩ ψ̃(p) ,

hence the kernel ⟨r|p⟩ = (2πℏ)^{−3/2} exp(ip·r/ℏ) is the Fourier transformation (FT) from momen-
tum space to coordinate space.

3. ANGULAR MOMENTUM
3.1 BASIC ASPECTS
In the early times of quantum mechanics the hydrogen spectrum was ex-
plained by quantization conditions for the electron's motion on orbits in three
dimensions. Thus it is not surprising that in quantum dynamics the concept
of angular momentum plays a crucial role. The classical definition of the an-
gular momentum is L = q × p, with q and p canonical conjugate coordinates
and momenta, and we will adopt the same definition for the (orbital) angular
momentum operator L̂ = q̂ × p̂.
The components of the angular momentum operator satisfy the following
commutator relations:

[L̂_x, L̂_y] = iℏ L̂_z , [L̂_y, L̂_z] = iℏ L̂_x , [L̂_z, L̂_x] = iℏ L̂_y ,

which are sometimes summarized by the easy-to-memorize expression

L̂ × L̂ = iℏ L̂ .
What follows from the existence of the commutation relations? Due to Eq. (1.9)
we cannot simultaneously assign eigenvalues to all three components of the
angular momentum operator. Thus a measurement of any component of angular
momentum introduces an uncontrollable uncertainty of the order ℏ in our
knowledge of the other two components. In contrast we can simultaneously
measure the angular momentum square L̂² = L̂_x² + L̂_y² + L̂_z² with any component
of L̂, since these operators commute:

[L̂², L̂_i] = 0 , i = x, y, z .

Thus the angular momentum eigenstates can be labeled by the eigenvalues of
L̂² and one angular momentum component. It is customary to choose L̂² and
L̂_z. Denoting the corresponding eigenvalues by l and m we obtain

L̂² |l, m⟩ = ℏ² l(l+1) |l, m⟩ ,  L̂_z |l, m⟩ = ℏ m |l, m⟩ .
Due to the angular momentum commutator relations the expectation values
⟨Ô⟩ vanish for the observables Ô = L̂_x and Ô = L̂_y in the eigenstates |l, m⟩
(e.g., ⟨L̂_x⟩ = 0 due to Eq. (1.9) and because the states are eigenstates of L̂_z).
For practical purposes it is more comfortable to define the non-hermitian
ladder operators

L̂_± = L̂_x ± i L̂_y ,

which fulfill the commutator relations

[L̂_z, L̂_±] = ±ℏ L̂_± , [L̂_+, L̂_−] = 2ℏ L̂_z ,

and

L̂_± |l, m⟩ = ℏ √(l(l+1) − m(m±1)) |l, m±1⟩ .
In spherical coordinates (see chapter 2.2.2) the angular momentum operators
satisfy

L̂_z = −iℏ ∂/∂φ , L̂_± = ℏ e^{±iφ} ( ±∂/∂θ + i cot θ ∂/∂φ ) ,

and the eigenfunctions of L̂² and L̂_z in spherical coordinates are the spherical
harmonics

Y_{lm}(θ, φ) = N_{lm} P_l^m(cos θ) e^{imφ} ,

with P_l^m the associated Legendre functions and N_{lm} a normalization constant
(see Chapt. 5.3.2 for more details).
In addition to the orbital angular momentum quantum particles possess spin,
an "angular-momentum-like" operator. Since the orbital angular momentum
depends only on space coordinates and spin has nothing to do with space, both
operators commute. The total angular momentum is defined by Ĵ = L̂ + Ŝ,
with Ŝ the spin operator. In general, let ĵ₁ and ĵ₂ be two angular momenta,
and let [ĵ₁, ĵ₂] = 0; then the total angular momentum is Ĵ = ĵ₁ + ĵ₂. The
corresponding eigenkets can be labeled either by |j₁, m₁; j₂, m₂⟩ or by |j₁, j₂; J, M⟩,
with j₁, m₁, j₂, m₂ the quantum numbers of the single angular momenta and J, M
the quantum numbers of the total angular momentum. The Clebsch-Gordan
coefficients (CG) are the unitary transformation between both sets [10]:

|j₁, j₂; J, M⟩ = Σ_{m₁, m₂} ⟨j₁, m₁; j₂, m₂ | J, M⟩ |j₁, m₁; j₂, m₂⟩ .

The permitted values of J range from |j₁ − j₂| to j₁ + j₂ in steps of one, and
M = m₁ + m₂. The total number of states for all possible J's
satisfies

Σ_{J=|j₁−j₂|}^{j₁+j₂} (2J + 1) = (2j₁ + 1)(2j₂ + 1) .

The 3j-symbols are modified Clebsch-Gordan coefficients, but computationally
more useful due to their symmetry properties. They are defined by

( j₁ j₂ j₃ ; m₁ m₂ m₃ ) = (−1)^{j₁−j₂−m₃} (2j₃ + 1)^{−1/2} ⟨j₁, m₁; j₂, m₂ | j₃, −m₃⟩ .

The symmetry properties of the 3j-symbols are best uncovered by the Regge
symbols [11], which are defined by the 3×3 array

‖ −j₁+j₂+j₃   j₁−j₂+j₃   j₁+j₂−j₃ ‖
‖  j₁−m₁      j₂−m₂      j₃−m₃    ‖
‖  j₁+m₁      j₂+m₂      j₃+m₃    ‖ .

The Regge symbol vanishes if one of its elements becomes negative or if the sum of one
of its columns or rows is unequal j₁+j₂+j₃. Hence all symmetries and
selection rules of the 3j-symbol are uncovered by the Regge symbol (see also
Chapt. 5.4.2.2). (Unfortunately the symbols used and the exact definition of
the symbols, e.g. the phase conventions, differ from author to author.)

3.2 COMPUTATIONAL ASPECTS


The following formulas are based on the conventions used in the book by
Edmonds [12].

The Euler angles. To define the orientation of a three-dimensional object,


three angles are needed. Most common are the Euler angles, which can be
defined by a finite coordinate rotation between the laboratory and the body fixed
frame. Transforming the coordinate system is called passive transformation,
while rotating the physical object is called active transformation. In most
textbooks, see e.g [12], the coordinate transformation is graphically presented.
In order to appreciate the Euler angles, Fig. (1.1) shows as an example the active
view of a finite coordinate rotation defined by Euler angles.
The generally accepted definition of the Euler angles is to start with the
initial system X, Y, Z and a first rotation by an angle α about its Z axis.
This results in a new system X′, Y′, Z′ = Z, which is now rotated through an
angle β about its Y′ axis, leading to another intermediary system
X″, Y″, Z″, which by rotation through the third Euler angle γ
about the Z″ axis passes finally into the system X‴, Y‴, Z‴. These
transformations have been summarized in a short-hand notation in Eq. (1.43).



Let Ĵ_a denote the angular momentum operator with respect to the axis a. The
rotation of a wave function ψ is given by

ψ′ = R(α, β, γ) ψ ,

with (ℏ = 1)

R(α, β, γ) = e^{−iγ Ĵ_{Z″}} e^{−iβ Ĵ_{Y′}} e^{−iα Ĵ_Z} .

In order to perform these rotations by rotations about the original axes (X, Y, Z)
we replace the rotation by first turning back to the original axis and then
rotating instead of the new axis around the original axis and going back to the
new coordinate system. Thus we obtain, e.g., for the rotation about Y′

e^{−iβ Ĵ_{Y′}} = e^{−iα Ĵ_Z} e^{−iβ Ĵ_Y} e^{iα Ĵ_Z} .

Continuing with this process will finally lead to

R(α, β, γ) = e^{−iα Ĵ_Z} e^{−iβ Ĵ_Y} e^{−iγ Ĵ_Z} .
The Wigner Rotation Function. Let us now apply these rotations to the
spherical harmonics, R(α, β, γ) Y_{lm}(θ, φ).
Because the angular momentum operator can only affect the magnetic quantum
number, l will be conserved. Therefore

R(α, β, γ) Y_{lm} = Σ_{m′} D^l_{m′m}(α, β, γ) Y_{lm′} ,

with

D^l_{m′m}(α, β, γ) = e^{−iα m′} d^l_{m′m}(β) e^{−iγ m} ,

where d^l_{m′m}(β) is called the Wigner rotation function, given by the direct
summation

d^l_{m′m}(β) = Σ_s (−1)^{m′−m+s} [ (l+m)! (l−m)! (l+m′)! (l−m′)! ]^{1/2}
    / [ (l+m−s)! s! (m′−m+s)! (l−m′−s)! ]
    × (cos β/2)^{2l+m−m′−2s} (sin β/2)^{m′−m+2s} .

Instead of using the direct summation, the Wigner rotation function can be
computed alternatively via Jacobi polynomials (here for m′ ≥ |m|):

d^l_{m′m}(β) = [ (l+m′)! (l−m′)! / ((l+m)! (l−m)!) ]^{1/2}
    (cos β/2)^{m′+m} (−sin β/2)^{m′−m} P^{(m′−m, m′+m)}_{l−m′}(cos β) .

Jacobi polynomials are discussed in Chapt. 5.3.8 and useful recursion relations
can be found there. (In symbolic computer languages like Maple the Jacobi
polynomials are predefined and thus the equation above is especially useful for
computing the Wigner rotation function with Maple.)
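Assuming SciPy is available for the Jacobi polynomials, the formula above can be sketched as follows (our own code, restricted to m′ ≥ |m|; sign conventions differ between authors, here Edmonds-compatible):

```python
import numpy as np
from math import factorial, sqrt
from scipy.special import eval_jacobi   # assumes SciPy is available

def wigner_d(l, mp, m, beta):
    """d^l_{m'm}(beta) via Jacobi polynomials (Edmonds-compatible signs);
    this sketch is restricted to m' >= |m|."""
    if mp < abs(m):
        raise ValueError("sketch requires m' >= |m|")
    coeff = sqrt(factorial(l + mp) * factorial(l - mp)
                 / (factorial(l + m) * factorial(l - m)))
    return (coeff * np.cos(beta / 2.0) ** (mp + m)
            * (-np.sin(beta / 2.0)) ** (mp - m)
            * eval_jacobi(l - mp, mp - m, mp + m, np.cos(beta)))

beta = 0.7
print(np.isclose(wigner_d(1, 0, 0, beta), np.cos(beta)))                # d^1_00
print(np.isclose(wigner_d(1, 1, 0, beta), -np.sin(beta) / np.sqrt(2)))  # d^1_10
```

The checks compare against the closed forms d¹₀₀ = cos β and d¹₁₀ = −sin β/√2; the remaining index combinations follow from the symmetry relations of the d-functions.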

3j-symbols. By combining the definition of the 3j-symbols with the Wigner
rotation functions one arrives at

( j₁ j₂ j₃ ; m₁ m₂ m₃ ) = (−1)^{j₁−j₂−m₃} [ Δ(j₁ j₂ j₃) Π_{i=1}^{3} (j_i+m_i)! (j_i−m_i)! ]^{1/2}
    × Σ_s (−1)^s / [ s! (j₁+j₂−j₃−s)! (j₁−m₁−s)! (j₂+m₂−s)! (j₃−j₂+m₁+s)! (j₃−j₁−m₂+s)! ] ,

with Δ(j₁ j₂ j₃) = (j₁+j₂−j₃)! (j₁−j₂+j₃)! (−j₁+j₂+j₃)! / (j₁+j₂+j₃+1)! ,

where the sum runs over all values of s for which the arguments of the factorials are
nonnegative. (Additional recursion relations can be found in [12].)
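The factorial sum can be implemented directly; the following sketch (our own code, integer angular momenta only) reproduces known values such as (1 1 2; 0 0 0) = √(2/15):

```python
from math import factorial, sqrt

def three_j(j1, j2, j3, m1, m2, m3):
    """3j-symbol via the factorial (Racah) sum; integer arguments only
    in this sketch."""
    if m1 + m2 + m3 != 0:
        return 0.0
    if not (abs(j1 - j2) <= j3 <= j1 + j2):
        return 0.0                      # triangle condition violated
    delta = (factorial(j1 + j2 - j3) * factorial(j1 - j2 + j3)
             * factorial(-j1 + j2 + j3) / factorial(j1 + j2 + j3 + 1))
    pref = sqrt(delta
                * factorial(j1 + m1) * factorial(j1 - m1)
                * factorial(j2 + m2) * factorial(j2 - m2)
                * factorial(j3 + m3) * factorial(j3 - m3))
    total = 0.0
    for s in range(0, j1 + j2 - j3 + 1):
        args = (s, j1 + j2 - j3 - s, j1 - m1 - s, j2 + m2 - s,
                j3 - j2 + m1 + s, j3 - j1 - m2 + s)
        if min(args) < 0:
            continue                    # skip negative factorial arguments
        den = 1
        for a in args:
            den *= factorial(a)
        total += (-1) ** s / den
    return (-1) ** (j1 - j2 - m3) * pref * total

print(three_j(1, 1, 2, 0, 0, 0))   # sqrt(2/15) ≈ 0.36515
```

Half-integer angular momenta would require rational arithmetic on the factorial arguments; symbolic packages handle the general case.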

4. THE MOTION OF WAVE PACKETS


4.1 THE TIME PROPAGATOR
In contrast to coordinates and momenta, time is just a parameter in quantum
mechanics. Thus time is not connected with an observable or an operator. Our
basic concern in this section is the motion of a wave packet or ket state. Suppose
we have a physical system prepared at time t₀. At later times, we do not expect
the system to remain in the initially prepared state. The time evolution of
the state is governed by the Schrödinger equation. Because the Schrödinger
equation is a linear equation, there exists a linear (unitary) operator U(t, t₀), which
maps the initial wave packet onto the wave packet at a later time t,

ψ(t) = U(t, t₀) ψ(t₀) .
Because of

ψ(t₀) = U(t₀, t₀) ψ(t₀)

and due to

ψ(t₂) = U(t₂, t₁) ψ(t₁) = U(t₂, t₁) U(t₁, t₀) ψ(t₀) ,

the time evolution operator or time propagator is transitive,

U(t₂, t₀) = U(t₂, t₁) U(t₁, t₀) ,

and because of the conservation of the norm U holds

U†(t, t₀) U(t, t₀) = 1 .

From the time-dependent Schrödinger equation we get immediately

iℏ ∂U(t, t₀)/∂t = Ĥ U(t, t₀) , U(t₀, t₀) = 1 ,

and thus the integral equation

U(t, t₀) = 1 − (i/ℏ) ∫_{t₀}^{t} Ĥ(t′) U(t′, t₀) dt′ .     (1.51)
Conservative quantum systems. For a conservative quantum system the
Hamiltonian is time independent. The solution to Eq. (1.51) in such a case is
given by

U(t, t₀) = exp( −iĤ(t − t₀)/ℏ ) .     (1.52)

To evaluate the effect of the time propagator on a general initial wave packet,
we can either expand the wave packet with respect to the Hamiltonian eigen-
functions or expand the time evolution operator in terms of the eigenprojectors.
Because both ways are educational we will discuss both briefly; of course the
results are entirely equivalent.
Thus let us start with the complete eigenbasis of the Hamiltonian,
Ĥ φₙ = Eₙ φₙ, and expand our wave packet with respect to the eigen-
basis:

ψ(t) = U(t, t₀) Σₙ cₙ φₙ = Σₙ cₙ e^{−iEₙ(t−t₀)/ℏ} φₙ , cₙ = ⟨φₙ|ψ(t₀)⟩ ,     (1.55)

because Ĥ φₙ = Eₙ φₙ. Equivalent to the method above is the direct expan-
sion of the time evolution operator by the idempotent projectors P̂ₙ = |φₙ⟩⟨φₙ|,

U(t, t₀) = Σₙ e^{−iEₙ(t−t₀)/ℏ} |φₙ⟩⟨φₙ| ,

and thus again Eq. (1.55).
Of course this equation could also be derived directly using the time-dependent
Schrödinger equation.
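The eigenfunction expansion translates directly into a propagation scheme; the following sketch (our own illustration, ℏ = 1) diagonalizes a finite-difference harmonic-oscillator Hamiltonian and propagates a packet through the phase factors e^{−iEₙt}:

```python
import numpy as np

# Propagation by eigenfunction expansion (hbar = 1):
# psi(t) = sum_n c_n exp(-i E_n t) phi_n with c_n = <phi_n|psi(0)>,
# for a finite-difference harmonic-oscillator Hamiltonian.
n = 300
x = np.linspace(-8.0, 8.0, n)
h = x[1] - x[0]
H = (np.diag(1.0 / h**2 + 0.5 * x**2)
     + np.diag(np.full(n - 1, -0.5 / h**2), 1)
     + np.diag(np.full(n - 1, -0.5 / h**2), -1))
E, V = np.linalg.eigh(H)                      # eigenbasis of H
psi0 = np.exp(-(x - 1.0)**2).astype(complex)  # displaced Gaussian packet
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * h)
c = V.T @ psi0                                # expansion coefficients
t = 2.0
psi_t = V @ (np.exp(-1j * E * t) * c)         # propagated packet
norm_t = np.sum(np.abs(psi_t)**2) * h
print(norm_t)  # = 1 up to round-off: the propagator is unitary
```

This works well because the Gaussian packet is dominated by a few low eigenstates; the numerical problems mentioned below arise when many, or continuum, contributions are needed.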

Non-conservative systems. For non-conservative systems Ĥ is explicitly
time dependent and thus the integral equation (1.51) cannot be solved directly.
Nevertheless there is a simplification for Hamiltonians which commute at dif-
ferent times, which is rather the exception than the rule. The formal propagator
for those systems can be computed by

U(t, t₀) = exp( −(i/ℏ) ∫_{t₀}^{t} Ĥ(t′) dt′ ) .

To derive a formal solution in the general case let us iteratively interpret the
integral equation (1.51). Hence

U(t, t₀) = 1 + Σ_{n=1}^{∞} (−i/ℏ)ⁿ ∫_{t₀}^{t} dt₁ ∫_{t₀}^{t₁} dt₂ … ∫_{t₀}^{t_{n−1}} dtₙ Ĥ(t₁) Ĥ(t₂) … Ĥ(tₙ) ,
and this series is sometimes called Neumann series or Dyson series, after F. J.
Dyson, who developed a similar perturbation expansion for the Green function
in quantum field theory. Note that, because Ĥ(t) does not commute with Ĥ(t′),
the time order becomes important.

4.2 DISCRETIZATION IN TIME


4.2.1 THE CAYLEY METHOD
Let us restrict the following discussion to conservative systems. Hence the
time evolution of a wave packet could be described by Eq. (1.52). Suppose
we know a sufficiently large subset of the eigenstates of the system; then
we could expand our initial wave packet with respect to these eigenfunctions.
Such a formal expansion could cause several numerical problems: In case we
know the eigensolutions only approximately and/or the summation converges
only weakly, computational errors will accumulate by adding further and further
contributions to Eq. (1.55). In case of a mixed spectrum not only the bound states
but in addition the continuum states have to be considered, and of course even
the computation of the eigenstates could be hard work. Therefore, especially
if the continuum becomes important, e.g. via tunneling, a direct approximate
solution of Eq. (1.52) seems more favorable.
The naïve approximation by a first order Taylor expansion,

U(t + Δt, t) ≈ 1 − iĤΔt/ℏ ,

fails because the approximating operator is non-unitary and hence the norm of
the wave packet will not be conserved. This problem could be overcome easily
and additionally the accuracy improved. Let us again start with the correct
propagator:

U(t + Δt, t) = e^{−iĤΔt/ℏ} ≈ (1 + iĤΔt/2ℏ)^{−1} (1 − iĤΔt/2ℏ) ,

and this approximation is called Cayley or Crank-Nicholson approximation.


A closer look reveals that this formula is now of second order in Δt and that
the approximative propagator is again unitary, because for Hermitian Ĥ

[ (1 + iĤΔt/2ℏ)^{−1} (1 − iĤΔt/2ℏ) ]† (1 + iĤΔt/2ℏ)^{−1} (1 − iĤΔt/2ℏ) = 1 .
A didactically very nicely written paper, in which time-dependent aspects of
tunneling and reflection of wave packets for one-dimensional barrier potentials
are discussed, can be found in [13]. These authors combined a finite difference
method in the space coordinate with the Cayley method for the time variable
(for more details see chapter 4.4). For one-dimensional systems the implicit
equations resulting from this discretization procedure can be solved iteratively.
For higher dimensions this would no longer be possible and thus a coupled
system of linear equations has to be solved. An application can be found in
[14].
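A minimal Cayley/Crank-Nicholson step for a free packet on a grid might look as follows (our own sketch, ℏ = m = 1; a production code would exploit the tridiagonal structure instead of dense solves):

```python
import numpy as np

# One-dimensional Cayley / Crank-Nicholson propagation (hbar = m = 1):
# (1 + i H dt/2) psi(t+dt) = (1 - i H dt/2) psi(t),
# with a finite-difference free-particle Hamiltonian.
n = 400
L = 40.0
x = np.linspace(-L / 2, L / 2, n)
h = x[1] - x[0]
H = (np.diag(np.full(n, 1.0 / h**2))
     + np.diag(np.full(n - 1, -0.5 / h**2), 1)
     + np.diag(np.full(n - 1, -0.5 / h**2), -1))
dt = 0.01
I = np.eye(n)
A = I + 0.5j * dt * H                         # implicit (left) operator
B = I - 0.5j * dt * H                         # explicit (right) operator
psi = np.exp(-x**2 + 2j * x)                  # moving Gaussian packet
psi /= np.sqrt(np.sum(np.abs(psi)**2) * h)
for _ in range(100):                          # propagate to t = 1
    psi = np.linalg.solve(A, B @ psi)
norm = np.sum(np.abs(psi)**2) * h
print(norm)  # = 1 up to round-off: the Cayley propagator is unitary
```

Adding a diagonal potential to H yields barrier tunneling studies of the type discussed in [13]; the implicit solve is then the tridiagonal system mentioned above.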

4.2.2 THE PEACEMAN-RACHFORD METHOD


The essential idea of the Peaceman-Rachford computational scheme is to
split the propagator in two parts. Let Â and B̂ be two operators that do not
necessarily commute, but whose commutator fulfills

[Â, [Â, B̂]] = [B̂, [Â, B̂]] = 0 .

Then

e^{x(Â+B̂)} = e^{xÂ} e^{xB̂} e^{−x²[Â,B̂]/2} ,     (1.58)

with x a complex number. This is called the Campbell-Baker-Hausdorff the-
orem. Thus for commuting operators Â, B̂ the operator sum in the
exponential function can be mapped onto the operator product of the exponen-
tial operators. Therefore if the Hamiltonian can be written as a sum it could be
of numerical interest first to separate the propagator into two sub-propagators
and to expand each of these sub-propagators in a Cayley-like expansion. For
sufficiently small time steps Δt and the Hamiltonian Ĥ = Â + B̂ we obtain

e^{−iĤΔt/ℏ} ≈ e^{−iÂΔt/ℏ} e^{−iB̂Δt/ℏ} ,

and with Eq. (1.58) each sub-propagator can be expanded in a Cayley form, thus

e^{−iĤΔt/ℏ} ≈ (1 + iÂΔt/2ℏ)^{−1} (1 − iÂΔt/2ℏ) (1 + iB̂Δt/2ℏ)^{−1} (1 − iB̂Δt/2ℏ) .
Computationally it is more efficient to separate the action of this approximate
time propagator on the wave packet into two implicit schemes by introducing
an artificial intermediate wave packet ψ*:

(1 + iB̂Δt/2ℏ) ψ* = (1 − iB̂Δt/2ℏ) ψ(t) ,     (1.65a)
(1 + iÂΔt/2ℏ) ψ(t + Δt) = (1 − iÂΔt/2ℏ) ψ* .     (1.65b)

It is very common in the literature to denote this intermediate state by
ψ(t + Δt/2). But note, this notation is misleading because this is not a
propagated wave packet; it is only a "book keeping" time-label and numeri-
cally an intermediate step towards the propagated wave packet at time t + Δt.
Thus numerically we solve first Eq. (1.65a) and then substitute this result in
Eq. (1.65b). Like the Cayley method this propagation scheme preserves unitar-
ity and is for commuting operators Â, B̂ of second order in Δt, but for
non-commuting operators only of first order.
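The two implicit half-schemes can be sketched as follows (our own illustration, ℏ = 1, with Â the finite-difference kinetic part and B̂ the diagonal potential part of a harmonic oscillator):

```python
import numpy as np

# Split scheme with intermediate packet psi_star (hbar = 1): the Cayley
# factors of B (potential) and A (kinetic) are applied one after the other,
# each through an implicit solve; every factor is unitary.
n = 300
x = np.linspace(-8.0, 8.0, n)
h = x[1] - x[0]
A = (np.diag(np.full(n, 1.0 / h**2))
     + np.diag(np.full(n - 1, -0.5 / h**2), 1)
     + np.diag(np.full(n - 1, -0.5 / h**2), -1))   # kinetic operator
B = np.diag(0.5 * x**2)                             # potential operator
dt = 0.01
I = np.eye(n)
psi = np.exp(-(x - 1.0)**2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * h)
for _ in range(50):
    psi_star = np.linalg.solve(I + 0.5j * dt * B, (I - 0.5j * dt * B) @ psi)
    psi = np.linalg.solve(I + 0.5j * dt * A, (I - 0.5j * dt * A) @ psi_star)
norm = np.sum(np.abs(psi)**2) * h
print(norm)  # = 1 up to round-off
```

Since B̂ is diagonal, its implicit solve is trivial, and the Â solve is tridiagonal; that is where the computational advantage over a full Cayley step with Ĥ = Â + B̂ lies in higher dimensions.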

5. THE QUANTUM-CLASSICAL CORRESPONDENCE


In the early 20th century physicists realized that describing atoms in purely
classical terms is doomed to failure. Ever since the connection between clas-
sical mechanics and quantum mechanics has interested researchers. The rich
structure observed in low-dimensional non-integrable classical systems has re-
newed and strengthen this interest [15]. For atoms, it was always expected
that for energy regions in which the energy spacing between neighboring states
is so small as to resemble the continuum as predicted by classical mechanics,
quantum effects become small. Thus todays textbooks state that the classical
limit will be reached for highly excited states. That this is an oversimplifica-
tion was shown by Gao [16]. By associating the semiclassical limit with states
where the corresponding particle possesses a short wavelength this apparent
contradiction could be explained.

5.1 QUANTUM DYNAMICS IN PHASE SPACE


Due to the Heisenberg uncertainty relation phase space and quantum dynam-
ics seem to be incompatible. In classical dynamics it is always theoretically
possible to prepare a system knowing exactly its coordinates and momenta;
for quantum systems this would be impossible. Nevertheless, by considering
mean values instead of the quantum operators themselves, Ehrenfest could de-
rive equations of motion equivalent to those known from classical Hamiltonian
dynamics.
Because either the coordinates are multiplicative operators and the momenta
derivative operators or vice versa, the wave function can only depend on either
the coordinates or the momenta, but not on canonically conjugate observables
at the same time. Of course non-canonical coordinates and momenta could
be mixed. To map the wave function onto a phase space object (actually we
should call that phase space a mock phase space⁴), Wigner considered instead
of the wave function the density operator. The density operator or density
matrix depends, for systems of f degrees-of-freedom, on 2f coordinates in-
dependently. Using one set of these coordinates and Fourier-transforming this
set, Wigner could derive a quantum object in phase space.

5.1.1 THE EHRENFEST THEOREM


The Ehrenfest theorems relate not to the dynamical variables of quantum mechanics
themselves but to their expectation values. Let Â be an observable and ψ(t)
a normalized state. The mean value of the observable at time t is

⟨Â⟩(t) = ⟨ψ(t)|Â|ψ(t)⟩ ,

thus a complex function of t. By differentiating this equation with respect to t
we obtain

d⟨Â⟩/dt = (1/iℏ) ⟨[Â, Ĥ]⟩ + ⟨∂Â/∂t⟩ ,
with Ĥ the Hamiltonian of the system under consideration. If we apply this
formula to the observables q̂ and p̂ we obtain for stationary potentials, with
Ĥ = p̂²/2m + V(q̂),

d⟨q̂⟩/dt = ⟨p̂⟩/m , d⟨p̂⟩/dt = −⟨∇V(q̂)⟩ .

These two equations are called Ehrenfest's theorem. Their form recalls that of
the classical equations of motion.

5.1.2 THE WIGNER FUNCTION IN PHASE SPACE


The concept of density operators was originally derived to describe mixed
states. Mixed states occur when one has incomplete information about a system.
For those systems one has to appeal to the concept of probability. For quantum
systems this has important consequences; one of the most obvious is vanishing
interference patterns. Although this is a fascinating and important problem in
its own right, we will restrict the following discussion to pure states, even if most
of the relations presented will also hold for mixed states.
The expectation or mean value of an observable Â is given by ⟨Â⟩ = ⟨ψ|Â|ψ⟩;
therefore in a complete Hilbert space basis {|n⟩} we obtain with cₙ = ⟨n|ψ⟩

⟨Â⟩ = Σ_{n,m} cₙ* c_m ⟨n|Â|m⟩ = tr(ρ̂ Â) ,

and thus

ρ̂ = |ψ⟩⟨ψ|

is called the density operator and ρ_{mn} = ⟨m|ρ̂|n⟩ the density matrix. Its diagonal
element is

ρ_{nn} = |cₙ|² ,

which gives the probability to find the system in the state |n⟩.

The von Neumann Equation. The von Neumann equation is the equation of
motion for the density operator. Let us start with an arbitrary wave packet

|ψ(t)⟩ = Σₙ cₙ(t) |n⟩

and a complete Hilbert space basis {|n⟩}. From the time-dependent Schrödinger
equation (1.3), we obtain immediately

iℏ Σₙ ċₙ(t) |n⟩ = Ĥ Σₙ cₙ(t) |n⟩ .

Multiplying from the left with ⟨m| and using the orthonormalization of the
Hilbert space basis, results in

iℏ ċ_m(t) = Σₙ H_{mn} cₙ(t) ,

respectively in

−iℏ ċ_m*(t) = Σₙ cₙ*(t) H_{nm} .

For the density matrix we obtain with ρ_{mn} = c_m cₙ*

iℏ ρ̇_{mn} = Σ_k ( H_{mk} ρ_{kn} − ρ_{mk} H_{kn} ) ,

and thus for the density operator

iℏ dρ̂/dt = [Ĥ, ρ̂] .

This equation is called the von Neumann equation and has exactly the same
structure as the classical Liouville equation.
The diagonal elements of the density operator in the coordinate represen-
tation are

ρ(r, r) = ⟨r|ρ̂|r⟩ = |ψ(r)|² ,

which is just the probability of presence at position r. Some additional impor-
tant properties of the density operator are: it is hermitian, ρ̂† = ρ̂,
normal and positive definite, which means tr(ρ̂ Â²) ≥ 0 for all hermitian operators Â.

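For a pure state these properties are easily verified numerically (our own sketch with a random state vector):

```python
import numpy as np

# Pure-state density matrix: rho = |psi><psi| is Hermitian, idempotent
# (rho^2 = rho, a signature of pure states) and has unit trace.
rng = np.random.default_rng(0)
psi = rng.normal(size=5) + 1j * rng.normal(size=5)
psi /= np.linalg.norm(psi)                      # normalized state vector
rho = np.outer(psi, np.conj(psi))
print(np.allclose(rho, rho.conj().T))           # Hermitian
print(np.allclose(rho @ rho, rho))              # idempotent: pure state
print(np.isclose(np.trace(rho).real, 1.0))      # probability normalization
```

For a mixed state, ρ̂ = Σ p_k |ψ_k⟩⟨ψ_k| with Σ p_k = 1, the trace and hermiticity checks still pass but the idempotency check fails, since tr(ρ̂²) < 1.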
The Wigner Function in Phase Space. In the coordinate representation the
density operator depends for quantum systems with f degrees-of-freedom on
2f coordinates, ρ(q, q′). Defining new coordinates via

x = (q + q′)/2 , y = (q′ − q)/2 ,

we obtain the Wigner function [17]

W(x, p) = (πℏ)^{−f} ∫ dy ψ*(x + y) ψ(x − y) e^{2ip·y/ℏ} .     (1.77)

For a better understanding let us rewrite this equation: The Fourier transform
can be written as a transform of the density matrix, and thus equivalently

W(x, p) = (πℏ)^{−f} ∫ dy ⟨x − y|ρ̂|x + y⟩ e^{2ip·y/ℏ} .

The new coordinates x and y bear some similarities to relative and center of
mass coordinates in many particle systems. Integrating Eq. (1.77) over the
momenta gives
Thus, integrating the Wigner function in phase space over the momenta results
in the probability of presence in coordinate space, and vice versa, integrating
over the coordinate space will lead to the probability of presence in momentum
space:

∫ W(x, p) dp = |ψ(x)|² , ∫ W(x, p) dx = |ψ̃(p)|² .     (1.79)
Additional properties of the Wigner function in phase space can be found in


[17] and many text books about quantum optics.
Because the Wigner function can be obtained by Fourier transformation of
the 2f-dimensional density operator, a simple computational scheme is given
by discretizing the density operator in coordinate space and using a fast Fourier
transformation to obtain the Wigner function [18]. Projecting onto a
surface-of-section allows one to compare quantum Poincaré surfaces with
classical surfaces. By synthesizing wave packets localized along classical
structures, quantum and classical dynamics can be compared and quantum
effects for chaotic systems uncovered. Some examples can, e.g., be found in [19]
and references therein.
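For a single pure state the Wigner function can also be computed by direct quadrature of Eq. (1.77); the following sketch (our own illustration, ℏ = 1) evaluates W(x, p) for the Gaussian ground state, for which W = e^{−x²−p²}/π is everywhere positive:

```python
import numpy as np

# Wigner function by direct quadrature (hbar = 1, f = 1):
# W(x,p) = (1/pi) Int dy psi*(x+y) psi(x-y) exp(2 i p y);
# for the Gaussian ground state W(x,p) = exp(-x^2 - p^2)/pi >= 0.
y = np.linspace(-8.0, 8.0, 1601)
dy = y[1] - y[0]
psi = lambda q: np.pi**-0.25 * np.exp(-q**2 / 2.0)   # ground state

def wigner(xv, pv):
    integrand = np.conj(psi(xv + y)) * psi(xv - y) * np.exp(2j * pv * y)
    return float(np.real(np.sum(integrand) * dy / np.pi))

print(wigner(0.0, 0.0), 1.0 / np.pi)            # both ≈ 0.3183
print(wigner(1.0, 0.5), np.exp(-1.25) / np.pi)  # both ≈ 0.0912
```

For excited or superposition states the same routine produces the negative regions that distinguish the Wigner function from a classical phase space density; on large grids the FFT-based scheme of [18] is the efficient choice.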
A generalization of the Wigner function is given by the Weyl transformation.
The Weyl transformation maps an arbitrary quantum operator Â onto a quantum
function over phase space and is given by

A(x, p) = (πℏ)^{−f} ∫ dy ⟨x − y|Â|x + y⟩ e^{2ip·y/ℏ} .
Using the density operator as quantum operator will give again the Wigner
function. Note, even if the quantum function above is well defined at any phase
space point, this does not mean that the quantum system can be prepared
for any exact phase space value. Of course the Heisenberg uncertainty princi-
ple still holds. One should not confuse the well defined behavior of quantum
functions, respectively quantum operators, with preparing or measuring quan-
tum systems. Let us consider, e.g., a wave packet. Such a wave packet has
always well defined values in coordinate space and at the same time its Fourier
transform possesses exact and well defined values in momentum space. The
corresponding probability of presence in coordinate respectively momentum
space of this wave packet can be computed by integrating the Wigner func-
tion, see Eq. (1.79). Of course calculating the variance of the coordinate and
momentum operator with this wave packet keeps the quantum uncertainty.
By folding the Wigner function with a Gaussian in phase space the Husimi
function [20] is defined,

H(x, p) = ∫ dx′ dp′ W(x′, p′) g(x − x′, p − p′) ,

with (for one degree of freedom)

g(x, p) = (πℏ)^{−1} exp( −x²/2σ² − 2σ²p²/ℏ² )

and σ a real number. Instead of using this definition it is much easier to compute
the Husimi function by projection onto a coherent state [21]. In contrast to
the Wigner function, the Husimi function is positive definite, which allows its
interpretation as a quantum phase space probability.
Once one has obtained the Wigner function for a given quantum state, one
may study its evolution in time. There are two possibilities: Either one
computes the evolution of the wave packet and again and again the Wigner
function from the evolved wave packet, or one tries to solve the equation of
motion for the Wigner function directly. Computationally easier is the first way.
Nevertheless let us also briefly discuss the second possibility.
To obtain the equation of motion for the Wigner function we have to map
the von Neumann equation onto the phase space. This results in

∂W/∂t = {H, W}_M ,

with the Moyal brackets [22], [23] defined by

{A, B}_M = (2/ℏ) A sin[ (ℏ/2)( ∂_x^← ∂_p^→ − ∂_p^← ∂_x^→ ) ] B ,

where the arrows indicate whether the derivative acts to the left or to the right.
By inserting the Taylor expansion of the sine function into this formal expres-
sion one obtains finally a series expansion of the Moyal bracket. The Moyal
bracket is the quantum correspondence to the Poisson bracket. Therefore
the Moyal brackets are in addition useful in checking, e.g., the integrability of a quantum
system (see Chapt. 2). Because vanishing commutator relations for quantum
operators correspond to vanishing Moyal brackets for the quantum functions in
phase space, this could be useful, e.g., for those quantum systems for which only
the corresponding classical invariants of motion are known (see, e.g., [24]).

5.2 THE WKB-APPROXIMATION


5.2.1 THE WKB-APPROXIMATION IN COORDINATE SPACE
Now we will review the WKB approximation (Wentzel-Kramers-Brillouin),
first to bring out the similarity between the Schrödinger equation and the
Hamilton-Jacobi equation of classical dynamics. By applying the time-independent
Schrödinger equation (1.15) onto the ansatz

ψ(q) = A(q) e^{iS(q)/ℏ}

we obtain

(1/2m)(∇S)² + V(q) − E − (iℏ/2m)( ΔS + 2 (∇A/A)·∇S ) − (ℏ²/2m)(ΔA/A) = 0 .

This equation is still exact. Because the last term could also be treated as a
potential-like term, −(ℏ²/2m)(ΔA/A) is called quantum potential.
In the limit ℏ → 0 the equation above becomes

(1/2m)(∇S)² + V(q) = E     (1.85)

and

ΔS + 2 (∇A/A)·∇S = 0 , i.e. ∇·(A² ∇S) = 0 .
For a classical conservative system H(q, p) = p²/2m + V(q) = E holds, and with
p = ∇S(q) this is the
Hamilton-Jacobi equation H(q, ∇S) = E.
Thus S is interpreted as the action of the corresponding classical path.
By rewriting the action of the classical
system in terms of the potential we obtain as validity condition

ℏ m |∇V| / p³ ≪ 1 ,

and hence one condition for the validity of the WKB-approximation can be
translated into a criterion on the potential: the above approximation will be
satisfied for slowly varying potentials. This will not hold at turning points,
which will become more obvious after discussing Eq. (1.85).
From Eq. (1.85) we obtain p(q) = √(2m(E − V(q))), and in one dimension the
transport equation above becomes (A² p)′ = 0,
and this equation can be immediately solved:

A(q) = C / √(p(q)) .

At the turning points of the classical trajectory in coordinate space the canonical
conjugate momentum vanishes and thus the wave amplitude function A(q)
diverges.
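The role of the action can be illustrated by the Bohr-Sommerfeld quantization ∮p dq = 2πℏ(n + 1/2); for the harmonic oscillator (our own sketch, ℏ = m = ω = 1) the numerically computed action reproduces the exact spectrum:

```python
import numpy as np

# Bohr-Sommerfeld sketch (hbar = m = omega = 1): the closed action integral
# over one period, S(E) = 2 * Int_{-a}^{a} sqrt(2(E - q^2/2)) dq, equals
# 2*pi*E for the harmonic oscillator, so S(E) = 2*pi*(n + 1/2) reproduces
# the exact spectrum E_n = n + 1/2.
E = 2.5
a = np.sqrt(2.0 * E)                    # classical turning points ±a
q = np.linspace(-a, a, 200001)
p = np.sqrt(np.maximum(2.0 * (E - 0.5 * q**2), 0.0))
action = 2.0 * np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(q))  # trapezoid rule
n_quant = action / (2.0 * np.pi) - 0.5
print(action / (2.0 * np.pi), E)        # agree: S/(2 pi) = E
print(n_quant)                          # ≈ 2
```

That the WKB quantization is exact here is a special property of the harmonic oscillator; for general potentials it is only the leading semiclassical approximation.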

5.2.2 THE WKB-APPROXIMATION IN MOMENTUM SPACE


Classically, coordinates and canonical conjugate momenta can interchange
their rôle by a canonical transformation. Similarly, quantum coordinate space is
connected with the momentum space by a Fourier transformation. Therefore
Maslov solved the singularity problem mentioned above by considering the
WKB-ansatz in momentum space.
The Hamiltonian in momentum space is given by

H(p, iℏ ∂/∂p) = p²/2m + V(iℏ ∂/∂p)

and the WKB-ansatz by

ψ̃(p) = Ã(p) e^{iS̃(p)/ℏ} .
In momentum space the kinetic energy operator becomes multiplicative and


the potential operator a derivative operator. Therefore but
this would not hold for the potential operator. Let us assume
Thus we obtain from the potential operator up to orders in

and with
only two contributions remain due to the restrictions in the summation above.
In we get and thus and hence and
In we have which results in
and therefore Putting these pieces all together
we obtain

and thus

with

Therefore we get in analogy to the WKB-equations in coordinate space the


following WKB-equations in momentum space:

and

With and we obtain

and therefore

Thus again we get divergences, but now at the turning points in momentum
space.
Hence we have two solutions, both diverging, but fortunately at different
places in phase space. Therefore the essential idea is to combine the solution in
coordinate space with the Fourier transformed solution of the momentum space,
which will lead to a coordinate space solution with divergences at different
points in phase space. Note, because both solutions are only approximations
they are not Fourier transformed solutions to each other. Let be the
WKB-soIution in coordinate space at point q(t) and the WKB-solution
in momentum space. Its Fourier transformation is given by

Because we are only interested in the lowest order in it is sufficient to compute
this integral in the stationary phase approximation.
The solutions of the WKB approximation are given by periodic trajectories
in the corresponding classical system. A periodic trajectory can run through the
same point in coordinate space several times. Therefore we get our solution in
coordinate space by an overlap of all contributions for fixed q, but at different
times. Of course this holds equivalently in momentum space. If we summed
only over those contributions, we would still face the
original singularity. Therefore we multiply our original solution in coordinate
space with a suitably selected function and our Fourier transformed
solution with a second function. The main task of these functions is to
vanish at the singularities; additionally, they are periodic functions with
the period of the classical trajectory under consideration, and the sum of both
functions equals one:

where denotes the singularities of the coordinate space and the singularities
of the momentum space. Thus we obtain two new solutions

and

and our final singularity-free solution becomes

5.2.3 THE CONNECTION FORMULAS


As described above, singularities occur at the classical turning points in both
the momentum and the coordinate formulation of the WKB-method. This
problem could be solved by combining the solution in coordinate space with
the Fourier transformed momentum space solution, or, if we are interested
in a solution in momentum space, vice versa. Obtaining the solution in
momentum space and Fourier transforming it is, however, often an unjustifiably
laborious route in view of its approximate character. Nevertheless the solution
presented above holds in any dimension and is rather general. A simpler way
for one-dimensional and radial systems is given by the connection formulas,
which we will briefly discuss for bound states and one-dimensional systems.
Let us assume that around the turning point the potential variation is small,
such that the potential can be linearly approximated,
with the tangent at the singularity and E the energy, because
at the turning point in coordinate space the kinetic energy vanishes and thus
the potential energy equals the total energy. Thus in the vicinity of the turning
point the Schrödinger equation becomes

and with the variable transformation we obtain

The Airy functions are the solutions of this equation. Because we are interested
in bound states we are only interested in those Airy functions which go to
zero for

Example. In the following we will discuss the formulas derived above by a


simple example.
Fig.(1.2) shows a potential well with turning points. To derive our
solution we divide space at the turning points into three regions: the two
outside the well and the area inside the well. For

the solution outside the well the momentum becomes imaginary because the
potential becomes larger than the energy. Thus we get from our WKB-ansatz
(1.84) outside the interior region

For both and the wave function has to vanish. Hence


we obtain on the left-hand side the ’positive’ and on the right-hand side the
’negative’ sign in the exponential function. These solutions have to connect
to the oscillatory solution inside the well. The ’inside’ solution is given by a
linear combination of the two solutions of the action S(q) in Eq.(1.86). This
inside solution will be given by the Airy function which goes smoothly through
the turning points.
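The Airy functions are tabulated in standard numerical libraries. A small sketch using scipy.special.airy (chosen here purely for illustration) shows the selection made above for bound states: Ai decays in the classically forbidden region, whereas the second solution Bi diverges and must be discarded:

```python
from scipy.special import airy

# airy(z) returns the tuple (Ai, Ai', Bi, Bi').  For z -> +infinity
# (the classically forbidden region) only Ai goes to zero; Bi blows
# up and is therefore not admissible for a bound state.
for z in [1.0, 5.0, 10.0]:
    ai, aip, bi, bip = airy(z)
    print(f"z = {z:4.1f}   Ai = {ai:.3e}   Bi = {bi:.3e}")
```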
In the neighborhood of the turning point we get from the outside

and therefore

and inside the well

The asymptotic solution of the Airy function is



and thus the WKB solution inside the well will only match this Airy solution
if we take a linear combination of exp and exp such that

Next, let us discuss the turning point on the left-hand side. By a similar
analysis we get

These two equations represent the same area in space and thus they have to be
identical up to a normalization constant N. Therefore we obtain

and thus

By the analysis above we have derived a quantum condition

for a bound state in a potential well with smoothly varying turning points.
Because outside the well the wave function converges exponentially to zero,
the quantum number gives the number of nodes of the wave function; see
Eq.(1.97). The WKB wave function is given by

Further aspects of the WKB solution, especially in the context of EBK6 and
torus quantization, can be found in the very readable book of Brack and Bhaduri
[15].
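The quantum condition derived above translates into a small numerical recipe: compute the classical action over one period and solve it against 2πħ(n + 1/2) for the energy. A sketch for the harmonic oscillator V(q) = q²/2 (ħ = m = ω = 1, an illustrative choice for which the WKB energies happen to be exact):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def action(E):
    # Action over one period, 2 * integral of p(q) dq between the
    # turning points, for V(q) = q**2 / 2.
    qt = np.sqrt(2.0 * E)
    val, _ = quad(lambda q: np.sqrt(2.0 * (E - 0.5 * q**2)), -qt, qt)
    return 2.0 * val

# Solve action(E) = 2*pi*(n + 1/2); for the oscillator this gives
# the exact spectrum E_n = n + 1/2.
for n in range(4):
    E_n = brentq(lambda E: action(E) - 2.0 * np.pi * (n + 0.5), 1e-6, 50.0)
    print(n, round(E_n, 6))
```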

Notes
1 Those readers able to understand German are recommended to read Schrö-
dinger’s original papers – masterpieces in science and language! (See, e.g.,
E. Schrödinger, Von der Mikromechanik zur Makromechanik, Naturwis-
senschaften (1926), 14, 664).
2 Schrödinger published his results in 1926, a few months after Pauli derived the
hydrogen spectrum based on Heisenberg’s “matrix mechanics”.
3 The names bra and ket are built from bra-c-ket:

4 In phase space each position has a well defined value. But for the quantum
system this does not allow us to prepare a single point in phase space or to
detect a single point in phase space – of course the uncertainty principle still
holds. Therefore we call the phase space in the quantum picture a “mock
phase space”.
5 stands for trace.
6 EBK = Einstein-Brillouin-Keller

References

[1] Cohen-Tannoudji, C., Diu, B. and Laloë, F. (1977). Quantum Mechanics I,


II. John Wiley & Sons, New York

[2] Dirac, P. A. M. (1967). The Principles of Quantum Mechanics (4th edition).


Oxford Science Publ., Clarendon Press, Oxford

[3] Merzbacher, E. (1970). Quantum mechanics. John Wiley & Sons, New
York

[4] Planck, M. (1995). Die Ableitung der Strahlungsgesetze. (A collection of


reprints of M. Planck published from 1895 – 1901, in German.) Verlag Harri
Deutsch, Frankfurt am Main

[5] Simon, B. (1976). “The bound states of weakly coupled Schrödinger op-
erators in one and two dimensions,” Ann. Phys. (N.Y.) 97, 279–288

[6] Calogero, F. (1965). “Sufficient conditions for an attractive potential to


possess bound states,” J. Math. Phys. 6, 161–164
[7] Chadan, K., Kobayashi, R. and Stubbe, J. (1996). “Generalization of the
Calogero-Cohn bound on the number of bound states,” J. Math. Phys. 37,
1106–1114
[8] Ho, Y. K. (1983). “The method of complex coordinate rotation and its
applications to atomic collision processes,” Phys. Rep. 99, 1 – 68
[9] Moiseyev, N. (1998). “Quantum theory of resonances: Calculating ener-
gies, widths and cross-sections by complex scaling,” Phys. Rep. 302, 211–
296
[10] Biedenharn, L. C. and Louck, J. D. (1981). Angular momentum in quantum
physics. Addison-Wesley, Reading
[11] Regge, T. (1958). “Symmetry properties of Clebsch-Gordan’s coeffi-
cients”, Nuovo Cim. 10, 544–546
[12] Edmonds, A. R. (1957). Angular momentum in quantum mechanics. Princeton
University Press, New York
[13] Goldberg, A., Schey, H. M. and Schwartz J. L. (1967). “Computer-
generated motion pictures of one-dimensional quantum-mechanical trans-
mission and reflection phenomena,” Am. Journ. of Phys. 35, 177 – 186
[14] Faßbinder, P., Schweizer, W. and Uzer, T (1997). “Numerical simulation
of electronic wavepacket evolution,” Phys. Rev. A 56, 3626 – 3629
[15] Brack, M. and Bhaduri, R. (1997). Semiclassical Physics Addison-
Wesley, Reading
[16] Gao, B. (1999). “Breakdown of Bohr’s Correspondence Principle,” Phys.
Rev. Lett. 83, 4225 – 4228
[17] Hillery, M., O’Connell, R. F., Scully, M. O., and Wigner, E. P. (1984).
“Distributions Functions in Physics: Fundamentals”, Phys. Reports 106, 121
– 167
[18] Schweizer, W., Schaich, M., Jans, W., and Ruder, H. (1994). “Classical
chaos and quantal Wigner distributions for the diamagnetic H-atom”, Phys.
Lett. A 189, 64 – 71
[19] Schweizer, W., Jans, W., and Uzer, T. (1998). “Optimal Localization of
Wave Packets on Invariant Structures”, Phys. Rev. A58, 1382 – 1388; (1998)
“Wave Packet Evolution along Periodic Structures of Classical Dynamics”
ibid. A60, 1414 – 1419

[20] Takahashi, K. (1986). “Wigner and Husimi function in quantum mechan-


ics”, J. Phys. Soc. Jap. 55, 762 – 779
[21] Groh, G., Korsch, H. J., and Schweizer, W. (1998). “Phase space en-
tropies and global quantum phase space organisation: A two-dimensional
anharmonic system”, J. Phys. A 31, 6897 – 6910
[22] Moyal, E. J. (1949). “Quantum Mechanics as a Statistical Theory”, Proc.
Cambridge Phil. Soc. 45, 99 – 124
[23] Takabayasi, T. (1954). “The Formulation of Quantum Mechanics in Terms
of Ensemble in Phase Space” Prog. Theor. Phys. 11, 341 – 373
[24] Hietarinta, J. (1984). “Classical versus quantum integrability”, J. Math.
Phys. 25, 1833 – 1840
Chapter 2

SEPARABILITY

The Schrödinger equation is a partial differential equation and in the coordinate
representation its kinetic energy operator is a Laplacian. Thus separability
is a great simplification of the numerical task. Partial differential equations
involving the three-dimensional Laplacian operator are known to be separable
in eleven different coordinate systems [1, 2]. We will label all eleven coordinate
systems in section two, but discuss in detail only some of them. In section three
we will briefly discuss coordinate singularities, exemplified by the Coulomb
and the centrifugal singularity.
Each separable system is integrable but not vice versa. Conservative systems
can be divided into two types, integrable and non-integrable. Integrable systems
have as many independent integrals of motion as they have degrees-of-freedom.
Thus there is a qualitative difference between integrable and non-integrable
systems. A decisive stimulus came from the discovery that even simple systems
exhibit non-integrable - or, more popularly, chaotic - behavior. Since this is an
exciting phenomenon, this is where we will begin our discussion.

1. CLASSICAL AND QUANTUM INTEGRABILITY


Classical integrability. Let us consider a classical system with f degrees-
of-freedom. Its phase space has 2f dimensions. A phase space coordinate is
denoted by with the position vector and its canonical conjugate
momentum vector. A constant of motion is a function over phase space
whose Poisson bracket with the Hamiltonian vanishes


A conservative Hamiltonian system with f degrees-of-freedom is called inte-


grable if and only if there exist f analytical functions with
1. the analytical functions are analytically independent, which means
are linearly independent vectors in phase space
2. the analytical functions are constants of motion
3. all are in involution, which means the mutual Poisson brackets vanish,

If a system is integrable all orbits lie on f-dimensional surfaces in the 2f-


dimensional phase space and there are no internal resonances leading to chaos.
Due to Noether’s theorem constants of motion result from symmetries. For
more details see, e.g., [3, 4].
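The involution conditions can be checked symbolically. The sketch below (using sympy purely as an illustration) verifies that the z-component of the angular momentum is a constant of motion of the Kepler Hamiltonian in atomic units:

```python
import sympy as sp

# Phase-space variables of one particle in three dimensions.
x, y, z, px, py, pz = sp.symbols('x y z p_x p_y p_z')
qs, ps = (x, y, z), (px, py, pz)

def poisson(f, g):
    # {f, g} = sum_i (df/dq_i * dg/dp_i - df/dp_i * dg/dq_i)
    return sp.simplify(sum(sp.diff(f, qi) * sp.diff(g, pi)
                           - sp.diff(f, pi) * sp.diff(g, qi)
                           for qi, pi in zip(qs, ps)))

r = sp.sqrt(x**2 + y**2 + z**2)
H = (px**2 + py**2 + pz**2) / 2 - 1 / r     # Kepler Hamiltonian (a.u.)
Lz = x * py - y * px                        # angular momentum, z-component

print(poisson(H, Lz))    # 0: Lz is a constant of motion
print(poisson(Lz, px))   # nonzero: px is not in involution with Lz
```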

Quantum integrability. Classical integrability is our model for quantum in-


tegrability and thus integrability for quantum systems is defined in an analogous
manner [5, 6].
A conservative quantum system with f degrees-of-freedom is called in-
tegrable if and only if there exist f globally defined hermitian operators
whose mutual commutators vanish

Independent constants of motion imprison classical trajectories in lower


dimensional subspaces of the 2f-dimensional phase space. Hence, for each
constant of motion which becomes obsolete the dimension of the dynamically
accessible space will be increased by one. Due to the Heisenberg uncertainty
principle trajectories do not exist in quantum dynamics. Loosely speaking,
quantum numbers will take their rôle. Observables in quantum dynamics are
given by hermitian operators. Therefore integrability of an f-dimensional Hamil-
tonian system requires the existence of f commuting observables.
Mutually commuting observables have common eigenvectors and their eigen-
values can serve as quantum-labels for the wave functions. Therefore the
number of constants of motion equals the number of commuting observables
and corresponds to the number of conserved or good quantum numbers. For
separable systems the eigenvalues (separation constants) of each of the f uni-
dimensional differential equations can be used to label the eigenfunctions and
hence serve as quantum numbers. If the number of commuting observables
decreases, the number of good quantum numbers will also decrease and the
quantum system becomes non-integrable.
A simple example is given by spherically symmetric systems. In this case the
Hamiltonian will commute with the angular momentum operators

1, 2, 3 and because of

the mutually commuting observables are and one angular momentum


component, e.g., Ergo, eigenvectors of the system can be labeled by the
corresponding eigenvalues: the energy E, from which we get the principal
quantum number the angular momentum l obtained from
and the magnetic quantum number this holds, e.g., for the
hydrogen atom. If the hydrogen atom is placed in an external magnetic field
the spherical symmetry will be perturbed and l is no longer a conserved quantum
number. Because l is no longer conserved degeneracy of states with equal parity
and equal magnetic quantum number is now forbidden, and hence avoided
crossings or level repulsion occur. An example is shown in Fig.(2.1) (see also
Chapt.6.?.). Of course this holds in general for any quantum system. Thus if we
compare, e.g., the statistical properties of spectra for quantum integrable and
quantum non-integrable systems there will be a qualitative difference. Those
degeneracies which were related to quantum numbers existing only in the
integrable limit, will become forbidden for the non-integrable quantum system.
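The role of commuting observables can be made concrete with the l = 1 matrix representation of the angular momentum algebra (ħ = 1; a numerical sketch): L² commutes with Lz, so l and m can label states simultaneously, whereas Lx and Lz do not commute and hence cannot be diagonalized together:

```python
import numpy as np

# l = 1 angular momentum matrices in the |m = 1, 0, -1> basis (hbar = 1).
s = 1.0 / np.sqrt(2.0)
Lx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Ly = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex)
Lz = np.diag([1.0, 0.0, -1.0]).astype(complex)
L2 = Lx @ Lx + Ly @ Ly + Lz @ Lz      # L^2 = l(l+1) * identity

comm = lambda A, B: A @ B - B @ A
print(np.allclose(L2, 2.0 * np.eye(3)))   # True: l(l+1) = 2 for l = 1
print(np.allclose(comm(L2, Lz), 0.0))     # True: l and m are good labels
print(np.allclose(comm(Lx, Lz), 0.0))     # False: no common eigenbasis
```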
One of the most established statistical investigations in quantum chaology is
the nearest neighbor statistic. The basic idea is to compute the probability of
finding a definite spacing between any two neighboring energy levels. Because in
general the density of states is energy dependent, the quantum spectrum
under consideration first has to be transformed to a spectrum with a constant

averaged density of states, the so-called unfolded spectrum. Then in this
unfolded spectrum the probability of finding a certain spacing, S, between any
two neighboring eigenlevels in the interval will be computed.
For integrable systems the highest probability is to find degeneracy and the
histogram is best fitted by a Poisson distribution

P(S) = exp(-S)
For quantum non-integrable systems degeneracy is forbidden and hence for
vanishing spacing the nearest neighbor distribution function will
vanish. The correct distribution will depend on discrete symmetry proper-
ties. For time-reversal invariant non-integrable quantum systems this will be
a Wigner distribution

P(S) = (π/2) S exp(-π S²/4)
An example is presented in Fig.(2.2). For more details about quantum chaos


and statistics see, e.g., [8].
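The two limiting cases can be reproduced with a few lines of code. The sketch below uses a crude global unfolding (dividing by the mean spacing in the bulk of the spectrum), which is an illustrative simplification of the unfolding procedure described above:

```python
import numpy as np

rng = np.random.default_rng(1)

def spacings(levels):
    # Normalize to unit mean spacing (a crude unfolding).
    s = np.diff(np.sort(levels))
    return s / s.mean()

# Integrable-like case: uncorrelated levels -> Poisson statistics,
# the most probable spacing is zero.
poisson_s = spacings(rng.uniform(0.0, 1.0, 5000))

# Non-integrable, time-reversal invariant case: eigenvalues of a random
# real symmetric (GOE) matrix -> Wigner distribution, level repulsion.
A = rng.normal(size=(1000, 1000))
eig = np.linalg.eigvalsh((A + A.T) / 2)
goe_s = spacings(eig[250:750])          # bulk of the spectrum only

# Fraction of near-degenerate pairs: large for Poisson, tiny for GOE.
print(np.mean(poisson_s < 0.1), np.mean(goe_s < 0.1))
```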

2. SEPARABILITY IN THREE DIMENSIONS


Integrability of a 3-dimensional Hamiltonian system requires the existence
of 3 commuting observables. For a three-dimensional system it can be shown
that the Laplace-Beltrami operator can be separated in exactly 11 different
curvilinear coordinates. For each of these coordinates the potential has
to fulfill certain properties so that the Schrödinger equation becomes separable.

For all orthogonal curvilinear coordinates in three dimensions the follow-


ing conditions hold:

with and is called Laplace-Beltrami operator, and


these equations are the most relevant expressions in curvilinear coordinates for
quantum systems: The components of the momentum operator are given by
the nabla-operator, the kinetic energy by the Laplace-Beltrami operator and
to obtain, e.g., the probability of presence, the volume element in the new
coordinates is needed. The only remaining task is to select the coordinate
system for which, if any, the Schrödinger equation becomes separable. This is
the case if the potential can be written as

Using Eq.(2.7) it can easily be checked whether and for which coordinates a given
one-particle quantum system is separable.

(1) Cartesian coordinates.

A quantum system becomes separable in Cartesian coordinates if the potential


can be written as

With a product ansatz for the wave function

we obtain immediately

with E the energy, and and separa-


tion constants. Separable in Cartesian coordinates is, e.g., the 3d-harmonic
oscillator.
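For the 3d-harmonic oscillator the Cartesian separation gives E = (nx + ny + nz + 3/2) ħω. A small sketch (ħ = ω = 1) counts the degeneracy of each shell N = nx + ny + nz, which equals (N+1)(N+2)/2:

```python
from itertools import product

# Energies of the 3d isotropic harmonic oscillator from the three
# Cartesian quantum numbers (hbar = omega = 1): E = nx + ny + nz + 3/2.
levels = {}
for nx, ny, nz in product(range(6), repeat=3):
    E = nx + ny + nz + 1.5
    levels[E] = levels.get(E, 0) + 1

# Degeneracy of the shell N = nx + ny + nz is (N + 1)(N + 2) / 2.
for N in range(4):
    print(N, levels[N + 1.5], (N + 1) * (N + 2) // 2)
```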
The scheme of the derivation of the one-dimensional differential equations
above for quantum systems separable in Cartesian coordinates holds also for
curvilinear coordinates. The first step is to derive the Laplace-Beltrami operator
in the new coordinates and then to act with the Schrödinger equation in the new
coordinates on a product ansatz to derive the
uni-dimensional differential equations. In the following list we will discuss
only for some examples what the Laplace-Beltrami operator looks like and for
which type of potentials the Schrödinger equation becomes separable, but leave
the complete derivation to the reader.

(2) Spherical coordinates.

In spherical coordinates the metric tensor is

and the Laplace-Beltrami operator

A quantum system becomes separable in spherical coordinates, if the potential


fulfills

Of course this holds for all systems with spherical symmetry, e.g., the spheri-
cal oscillator, the Kepler system, the phenomenological alkali metal potentials,
the Morse oscillator and the Yukawa potential to name only a few. (Many exam-
ples are discussed in [10].) These quantum systems possess a central potential.
Since the angular momentum operator in spherical coordinates is given by
Eq.(1.37c) and the Laplace-Beltrami operator by Eq.(2.11) the central field
Hamiltonian can be written as

with the particle mass or (for two particle systems) reduced mass. Labeling
the eigenstates by the angular momentum quantum number l, the magnetic
quantum number and an additional quantum number related to the energy,
we obtain the Hamiltonian eigenvalue equation

With a product ansatz for the wave function

the radial Schrödinger equation reads

This solution depends on the angular momentum l and thus the radial wave
function has to be labeled by both the quantum number n and the total angular
momentum l.
For bound states it is convenient to rewrite the radial Schrödinger equation.
Using u(r) = r R(r) instead of R(r)

the radial contribution to the Laplace-Beltrami operator becomes

and thus the radial Schrödinger equation is actually simpler in terms of u
and becomes similar to the uni-dimensional Schrödinger equation.
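The substitution u(r) = r R(r) makes a direct numerical treatment straightforward. A finite-difference sketch for hydrogen in atomic units (l = 0; the grid parameters are illustrative choices) recovers the Bohr energies E_n = -1/(2n²):

```python
import numpy as np

# Radial equation for u(r) = r * R(r), hydrogen in atomic units, l = 0:
#   -u''/2 - u/r = E u,   with u(0) = u(r_max) = 0.
n, rmax = 1500, 150.0
h = rmax / (n + 1)
r = h * np.arange(1, n + 1)

# Tridiagonal Hamiltonian: three-point Laplacian plus Coulomb potential.
main = 1.0 / h**2 - 1.0 / r
off = -0.5 / h**2 * np.ones(n - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)[:3]
print(E)   # close to -1/2, -1/8, -1/18
```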



(3) Cylindrical coordinates.

In cylindrical coordinates the metric tensor is

and the Laplace-Beltrami operator

A quantum system becomes separable in cylindrical coordinates, if the potential


fulfills

An important example for a system which possesses cylindrical symmetry is


the free electron in a homogeneous magnetic field.

(4) Parabolic coordinates.

The metric tensor becomes in parabolic coordinates

and the Laplace-Beltrami operator

A quantum system becomes separable in parabolic coordinates, if



Two important applications for parabolic coordinates are Rutherford scattering


and the Stark effect (hydrogen atom in an external homogeneous electric field),
which we will discuss briefly.
The Hamiltonian of the hydrogen atom in an external electric field F in
atomic units reads

Hence in parabolic coordinates we obtain

and with a product ansatz

with and the separation constants. Because is connected with the


charge it is also called fractional charge. For hydrogen-like ions with charge
Z the Coulomb potential becomes and thus the two separation constants
and holds respectively in Eq.(2.27c) we get instead
of. Due to the electric field, tunneling becomes possible and thus all
former bound states become resonances. Elliptic differential equations allow
only bound states. Therefore the mathematical structure of the Kepler equation
in an additional electric field had to change: Eq.(2.27b) is of elliptic type hence
supports bound states, whereas Eq.(2.27c) is of hyperbolic type, ergo possesses
only continuum states. For more details about the hydrogen Stark effect see
[11].

(5) Elliptic cylindrical coordinates.

(6) Prolate spheroidal coordinates.

(7) Oblate spheroidal coordinates.

(8) Semiparabolic cylindrical coordinates.

(9) Paraboloidal coordinates.

(10) Ellipsoidal coordinates.



(11) Cone coordinates.

Semiparabolic coordinates. The semiparabolic coordinates can be derived


from the semiparabolic cylindrical coordinates and the spherical coordinates
and therefore are not independent. Nevertheless they play an important role in
lifting the Coulomb singularity, see section 3.1, and therefore we will discuss
this coordinate system

in addition.
The metric tensor of the semiparabolic coordinates is

and the Laplace-Beltrami operator becomes

Quantum systems are separable in semiparabolic coordinates if the potential


reads

For further discussions see sect.(3.1).



Elliptic coordinates.

The only molecule for which the Schrödinger equation can be solved exactly
is the hydrogen molecule-ion, in Born-Oppenheimer approximation. The
basic idea of the Born-Oppenheimer approximation is that we regard the nuclei
frozen in a single arrangement and solve the Schrödinger equation for the
electron moving in this stationary Coulomb potential they generate. This
approximation is justified by the different masses of the electron and the nuclei.
The mass ratio between electron and proton is approximately 1 : 1836. The
geometry of such a molecule-ion is presented in Fig.(2.3). Transforming the
Schrödinger equation to elliptic coordinates results in a separable differential
equation.

The Hamiltonian of the electron motion in Born-Oppenheimer approxima-


tion is

with

and the energy is a parametric function of the internuclear distance R. Thus the
correct energy is given by the minimum of By introducing an angle
and coordinates

we arrive at

and these are exactly the elliptic coordinates defined above.


The Laplace-Beltrami operator becomes

and with the ansatz

and some arithmetic we obtain finally

and

with separation constants. Solving these equations, we find a discrete


spectrum for each value of R. We will further discuss this example in Sect.3 in
the context of variational calculations.

Polar coordinates.

Polar coordinates are often used to partially separate higher dimensional partial
differential equations. Due to their similarity to spherical, cylindrical and
hyperspherical coordinates we will not discuss them in detail here (see Chapt.
3.2).

Jacobi coordinates. Jacobi and hyperspherical coordinates go beyond the


above-discussed 3-dimensional coordinates. Jacobi coordinates are used for
N-particle systems. The following discussion will be restricted to systems
of particles with identical mass. Otherwise mass-weighted coordinates have
to be used. Jacobi coordinates are typically used for n-electron systems in
atomic physics or n-nucleon systems in nuclear physics. Hence the particles are
fermions and symmetry properties like the Pauli principle are simplified. For
two-particle systems the Jacobi coordinates are identical to the relative and
center-of-mass coordinates

with the individual particle coordinates. Jacobi coordinates for an N-


particle system are defined as the relative distance between the (k+1)-th
particle and the center of mass of the first k particles

The kinetic energy in Jacobi coordinates reads

with the particle mass.
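The definition translates directly into code. A sketch for equal masses (the function name and data layout are illustrative):

```python
import numpy as np

def jacobi(x):
    # Jacobi coordinates for N equal-mass particles: the k-th relative
    # coordinate points from the center of mass of particles 1..k to
    # particle k+1; the last entry is the total center of mass.
    x = np.asarray(x, dtype=float)          # shape (N, 3)
    rel = [x[k] - x[:k].mean(axis=0) for k in range(1, len(x))]
    rel.append(x.mean(axis=0))
    return np.array(rel)

# Three particles on a line:
x = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
print(jacobi(x))   # [[1, 0, 0], [1.5, 0, 0], [1, 0, 0]]
```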



Hyperspherical coordinates. Hyperspherical coordinates generalize polar


and spherical coordinates to N dimensions via

thus hyperspherical coordinates hold

3. COORDINATES AND SINGULARITIES


The success story of quantum dynamics began by explaining the hydrogen
spectrum back in the nineteen-twenties. This is the Kepler problem in quantum
mechanics, the motion of an electron in a Coulomb potential. There is a slight
problem in the radial Schrödinger equation, the singularity at the origin. A
standard textbook example is to remove this singularity by an ansatz for the
wave function with for the regular and
for the irregular solution. In this section we will discuss coordinate systems
for which the Schrödinger equation of the Kepler system will be mapped onto a
singularity-free form.

3.1 THE KEPLER SYSTEM IN SEMIPARABOLIC


COORDINATES
It is of interest that different information about a system is uncovered by using
different coordinate systems. The hydrogen atom is usually treated in spherical
coordinates, because in these coordinates wave functions and energies can be
conveniently computed. By using semiparabolic coordinates the hydrogen
atom becomes similar to a harmonic oscillator. Let us start with the Kepler-
Hamiltonian

with the radial distance of the electron to the origin and E the eigenenergy.
We obtain in semiparabolic coordinates with and Eq.(2.36)

and with ansatz

Therefore the Kepler system is separable in semiparabolic coordinates. For


bound states the energy becomes negative and a real number, and for continuum
solutions is imaginary.
Let us first consider the subspace For we obtain two similar
ordinary differential equations of the kind

and the separation constants. This is the differential equation for


a two-dimensional harmonic oscillator in polar coordinates. With

we obtain for

and by the variable transformation

the Kummer equation [12]. The solutions of the Kummer equation1 are the
confluent hypergeometric series, which converge for
Thus we obtain

and for the hydrogen energy for

Therefore we have of course recovered the equation for the hydrogen energy.
Because the quantum numbers are non-negative integers, the ground-state energy
is again –1/2 and the degeneracy of the states is recovered as well. Thus
the description above is equivalent to the Kepler description for this subspace,
but the Coulomb singularity is removed.
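The n² degeneracy can be recovered by simple counting, assuming the standard relation n = n1 + n2 + |m| + 1 between the oscillator-type quantum numbers and the principal quantum number (a sketch for illustration):

```python
def degeneracy(n):
    # Count the states (n1, n2, m) with n1, n2 >= 0 and
    # n1 + n2 + |m| + 1 = n.
    count = 0
    for n1 in range(n):
        for n2 in range(n):
            for m in range(-(n - 1), n):
                if n1 + n2 + abs(m) + 1 == n:
                    count += 1
    return count

for n in range(1, 5):
    print(n, degeneracy(n))   # 1, 4, 9, 16: the familiar n**2
```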

3.2 THE KUSTAANHEIMO-STIEFEL COORDINATES


Circular oscillator. The Schrödinger equation for the circular oscillator is

In polar coordinates

we obtain for the metric tensor and thus for the Laplace -
Beltrami operator

Together with the separation ansatz

this leads to

for the Schrödinger equation of the circular oscillator in polar coordinates. (We
have already discussed this equation above. But our focus is not on the
two-dimensional oscillator. More details about the two-dimensional harmonic
oscillator can be found in most textbooks on quantum mechanics.)

The Kustaanheimo-Stiefel transformation. Eq.(2.59) and Eq.(2.52) are


equivalent in structure, but Eq.(2.52) depends on two and not only on one
coordinate. Therefore we will briefly discuss the four dimensional harmonic
oscillator. The Hamiltonian of the degenerate four dimensional oscillator is

By the following bi-circular transformation



we obtain for the Hamiltonian of the four dimensional harmonic oscillator with
the ansatz,

Compared to Eq.(2.52) there are two shortcomings: the harmonic oscillator


energy E could in general be unequal to 2, and, more importantly, the two angular
momentum quantum numbers and are not necessarily equal. The first
problem can be solved easily by restricting to those values for which E
equals 2. The second problem can be solved by introducing an additional
constraint. Restricting the solution above to leads to the scleronomic
constraint

Transformation (2.60) together with the transformation from the semiparabolic


coordinates to the spherical coordinates under the constraint Eq.(2.63) is
called Kustaanheimo-Stiefel transformation and the coordinates of Eq.(2.60)
Kustaanheimo-Stiefel coordinates [13, 14]. By this transformation, both the
Coulomb singularity and the centrifugal singularity are lifted. Of course there
is no necessity to introduce the intermediate semiparabolic transformation and
one could also transform the spherical coordinates directly to the four-dimensional
space together with the constraint, but the intermediate semiparabolic
transformation uncovers the physical structure behind it more clearly.
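That the transformation lifts the Coulomb singularity can be checked numerically. The sketch below uses one common sign convention of a KS-type map from R⁴ to R³ (an illustrative assumption; conventions in the literature differ); it satisfies r = |u|², so 1/r = 1/|u|² is regular everywhere except the single point u = 0:

```python
import numpy as np

def ks(u):
    # One convention of a Kustaanheimo-Stiefel-type map R^4 -> R^3
    # (illustrative choice of signs).
    u1, u2, u3, u4 = u
    x = 2.0 * (u1 * u3 - u2 * u4)
    y = 2.0 * (u1 * u4 + u2 * u3)
    z = u1**2 + u2**2 - u3**2 - u4**2
    return np.array([x, y, z])

rng = np.random.default_rng(0)
u = rng.normal(size=4)
print(np.linalg.norm(ks(u)), np.dot(u, u))   # equal: r = |u|^2
```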

Notes
1 For the two-dimensional harmonic oscillator we would obtain

References

[1] Morse, P. M. and Feshbach, H. (1953). Methods of Theoretical Physics


Mc-Graw-Hill, NewYork
[2] Miller, W. (1968). Lie theory and special functions Academic Press, New
York

[3] Lichtenberg, A. J. and Lieberman, M. A. (1983). Regular and stochastic


motion Springer-Verlag Heidelberg
[4] Arnold, V. I. (1979). Mathematical methods of classical mechanics. Springer-
Verlag Heidelberg

[5] Hietarinta, J. (1982). “Quantum integrability is not a trivial consequence


of classical integrability”, Phys. Lett. A93, 55 – 57
[6] Hietarinta, J. (1984). “Classical versus quantum integrability”, J. Math.
Phys. 25, 1833–1840

[7] Schweizer, W. (1995). Das diamagnetische Wasserstoffatom. Ein Beispiel


für Chaos in der Quantenmechanik, Verlag Harri Deutsch Frankfurt am
Main

[8] Haake, F. (1991). Quantum signatures of chaos Springer-Verlag Berlin


[9] Jans, W., Monteiro, T., Schweizer, W. and Dando, P. (1993). “Phase-space
distributions and spectral properties for non-hydrogenic atoms in magnetic
fields” J. Phys. A26, 3187 – 3200
[10] Flügge, S. (1994). Practical Quantum mechanics (2nd printing) Springer-
Verlag Berlin

[11] Harmin, D. A. (1982). “Theory of the Stark effect” Phys. Rev. A 26, 2656
– 2681
[12] Abramowitz, M. and Stegun, I. A. (1970). Handbook of Mathematical
Functions. Dover Publications, Inc., New York
[13] Kustaanheimo, P. and Stiefel, E. (1964). “Perturbation theory of Kepler
motion based on spinor regularization”, Journal für Mathematik, 204 – 219
[14] Cahill, E. (1990). “The Kustaanheimo-Stiefel transformation applied to
the hydrogen atom: using the constraint equation and resolving a wavefunc-
tion discrepancy”, J. Phys. A 23, 1519–1522
Chapter 3

APPROXIMATION BY PERTURBATION

The necessary step in dealing with real-world quantum systems is to develop


approximation techniques. The next two chapters are devoted to perturbation
and variational techniques. The term ‘perturbation’ and the idea of this method
were introduced in quantum dynamics by analogy with the perturbation method
of classical mechanics. These perturbations can be imposed externally or they
may represent interactions within the system itself. This numerical technique
works so long as we can treat the perturbation as small - which immediately
raises the question: what is small?

There are two kinds of perturbation, time-dependent and time-independent.


First we will discuss time-independent methods and illustrate their application
by some examples. We will start with the Rayleigh-Schrödinger perturbation
theory. With the advent of computers, this seems to be old-fashioned. But
perturbation theory retains its usefulness because of a certain physical clar-
ity it gives us, and not only as a testing method for more advanced numerical
techniques in certain parameter limits. Therefore whenever it is applicable,
low-order perturbation theory should be used to gain additional physical insight,
and more advanced numerical techniques to get accurate numbers.

In the first section we will develop the Rayleigh-Schrödinger perturbation


technique which can be also found in many quantum mechanic textbooks,
see, e.g., [1], [2]. We will discuss both, ordinary and degenerate perturbation
theory. The next two sections will be devoted to more advanced perturbation
techniques, the 1/N-shift theory and the approximative symmetry. Perturbation
acting on a system are very non-stationary. Thus we will close this chapter
with a brief discussion of time-dependent perturbation theory.
55
56 NUMERICAL QUANTUM DYNAMICS

1. THE RAYLEIGH-SCHRÖDINGER PERTURBATION THEORY
1.1 NONDEGENERATE STATES
Let us first consider the simplest case, in which the Hamiltonian of the quantum system does not depend explicitly on time. Suppose we know the eigenvalues $E_n^{(0)}$ and the eigenstates $|n\rangle$ of what seems from physical considerations the major part of the Hamiltonian, called $H_0$,

$$H_0\,|n\rangle = E_n^{(0)}\,|n\rangle\,,$$

where we will assume additionally in this subsection that $E_n^{(0)}$ is nondegenerate. We further assume that the total Hamiltonian can be written as

$$H = H_0 + \lambda H'\,,$$

where we add the formal parameter $\lambda$ for convenience. For $\lambda = 0$ the Hamiltonian becomes the unperturbed Hamiltonian, and for $\lambda = 1$ the Hamiltonian goes over to the Hamiltonian proper for the system under consideration. Let us denote the eigenstates of the complete Hamiltonian as $|\psi_n\rangle$ with eigenvalues $E_n$,

$$H\,|\psi_n\rangle = E_n\,|\psi_n\rangle\,. \tag{3.3}$$

We seek a solution of this equation by a power series expansion of $E_n$ and $|\psi_n\rangle$. We will assume that the unperturbed eigenstates $|n\rangle$ form a complete orthonormal set. Thus we start with the ansatz

$$E_n = \sum_k \lambda^k E_n^{(k)}\,, \qquad |\psi_n\rangle = \sum_k \lambda^k\, |\psi_n^{(k)}\rangle\,, \qquad |\psi_n^{(0)}\rangle = |n\rangle\,.$$

Then we substitute this ansatz into the Schrödinger equation (3.3), which gives

$$(H_0 + \lambda H') \sum_k \lambda^k\, |\psi_n^{(k)}\rangle = \Big( \sum_j \lambda^j E_n^{(j)} \Big) \sum_k \lambda^k\, |\psi_n^{(k)}\rangle\,. \tag{3.5}$$

In order that equation (3.5) can be satisfied for arbitrary $\lambda$, the coefficient of each power of $\lambda$ on both sides must be equal. Thus we obtain for $\lambda^1$

$$H_0\, |\psi_n^{(1)}\rangle + H'\, |n\rangle = E_n^{(0)}\, |\psi_n^{(1)}\rangle + E_n^{(1)}\, |n\rangle\,.$$

Multiplying with the bra $\langle n|$ from the left and using the orthogonality, we obtain the first order correction to the energy

$$E_n^{(1)} = \langle n|\, H'\, |n\rangle\,.$$

This is a very important formula. It states that the first order energy correction to any unperturbed state is just the expectation value of the perturbing Hamiltonian in the unperturbed state under consideration. But this is not the only information hidden in the equation above. By multiplying with the bra $\langle m|$, $m \neq n$, from the left we get the first order expansion coefficients

$$c_m^{(1)} = \frac{\langle m|\, H'\, |n\rangle}{E_n^{(0)} - E_m^{(0)}}\,. \tag{3.8}$$

Therefore the expansion coefficients in first order perturbation theory depend on both the non-vanishing off-diagonal matrix elements and the energy difference between the states under consideration of the unperturbed system. Therefore, if all off-diagonal matrix elements are of the same order, neighboring states mix more than far away ones. In addition, we can now see that what "small" means for a perturbation depends on the magnitude of the matrix elements and on the level separation of the unperturbed system, due to the denominator of Eq.(3.8).

In 2nd order we obtain

$$H_0\, |\psi_n^{(2)}\rangle + H'\, |\psi_n^{(1)}\rangle = E_n^{(0)}\, |\psi_n^{(2)}\rangle + E_n^{(1)}\, |\psi_n^{(1)}\rangle + E_n^{(2)}\, |n\rangle\,,$$

and thus, by multiplying from the left with the bra $\langle n|$ respectively with the bra $\langle m|$, we obtain the second order energy correction and the second order expansion coefficients, and finally

$$E_n^{(2)} = \sum_{m \neq n} \frac{|\langle m|\, H'\, |n\rangle|^2}{E_n^{(0)} - E_m^{(0)}}\,. \tag{3.9}$$

Therefore, if $|n\rangle$ is the ground state, that is the state of lowest energy, then the denominator in Eq.(3.9) is always negative. Due to the hermiticity of the Hamiltonian, the numerator of Eq.(3.9) is always positive. Hence the ground state energy correction in second order perturbation theory is always negative.
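These formulas can be checked against exact diagonalization for a finite matrix model. The following sketch is illustrative (the matrices and tolerances are chosen here, not taken from the text): it evaluates the first and second order corrections for the lowest level and confirms that the second order ground state shift is negative.

```python
import numpy as np

def rs_corrections(H0_diag, V, n):
    """First and second order Rayleigh-Schroedinger corrections for level n.
    H0_diag: nondegenerate eigenvalues of H0; V: perturbation matrix
    expressed in the unperturbed eigenbasis (Hermitian)."""
    E1 = V[n, n].real                       # first order: diagonal element
    E2 = sum(abs(V[m, n])**2 / (H0_diag[n] - H0_diag[m])
             for m in range(len(H0_diag)) if m != n)
    return E1, E2

H0 = np.array([0.0, 1.0, 3.0])              # unperturbed, nondegenerate levels
V = 0.1 * np.array([[0.0, 1.0, 0.5],
                    [1.0, 0.2, 1.0],
                    [0.5, 1.0, 0.0]])       # weak Hermitian perturbation

E1, E2 = rs_corrections(H0, V, 0)
assert E2 < 0                               # ground state: always negative

# second order estimate versus exact diagonalization
approx = H0[0] + E1 + E2
exact = np.linalg.eigvalsh(np.diag(H0) + V)[0]
assert abs(approx - exact) < 1e-2
```

The residual difference between `approx` and `exact` is of third order in the perturbation, consistent with the expansion.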

Energy correction - summary. With the abbreviation $H'_{mn} = \langle m|\, H'\, |n\rangle$ we get for the first four orders in the energy correction


1.1.1 ACCELERATION TECHNIQUES


By low order perturbation theory we can gain additional physical insight, as already mentioned. Acceleration techniques, of course, are only useful for high order perturbation expansions. In the next chapter we will briefly mention some of these methods. In many situations discretization and finite element techniques or quantum Monte Carlo methods are more accurate and, due to the power of modern low-priced computers, even more easily available.
A discussion of large order perturbation theory, especially of semi-convergent and divergent series for quantum systems, can be found, e.g., in [3], and a brief numerical example for series in [4].
A quadratically convergent series converges in general faster than a linearly convergent one. Nevertheless there are several tricks for accelerating the rate of convergence of a series. Given a (convergent) series with partial sums $S_n$, thus $\lim_{n \to \infty} S_n = A$. If there exists a transformation $S_n \mapsto \hat S_n$ with

$$\lim_{n \to \infty} \frac{\hat S_n - A}{S_n - A} = 0\,,$$

$\hat S_n$ will converge quicker than $S_n$, and the rate of convergence of $\hat S_n$ to the limit $A$ will be higher than the rate of convergence of $S_n$.
Aitken’s process is one of the standard methods to accelerate the convergence of series. For the following construction let us assume for simplification that the $S_n - A$ have the same sign and that $n$ is sufficiently large, that

$$\frac{S_{n+1} - A}{S_n - A} \approx \frac{S_{n+2} - A}{S_{n+1} - A}\,.$$

Thus

$$(S_{n+1} - A)^2 \approx (S_n - A)(S_{n+2} - A)$$

and hence

$$A\, \big( S_n - 2 S_{n+1} + S_{n+2} \big) \approx S_n S_{n+2} - S_{n+1}^2\,,$$

which gives immediately

$$A \approx S_n - \frac{(S_{n+1} - S_n)^2}{S_{n+2} - 2 S_{n+1} + S_n}\,.$$

Based on this simple derivation we define Aitken’s transformation to speed up the convergence of series by

$$\hat S_n = S_n - \frac{(\Delta S_n)^2}{\Delta^2 S_n}\,, \qquad \Delta S_n = S_{n+1} - S_n\,, \quad \Delta^2 S_n = S_{n+2} - 2 S_{n+1} + S_n\,. \tag{3.15}$$

A simple example shows the power of acceleration techniques: take for $S_n$ the partial sums of a slowly convergent series with limit 1, and $\hat S_n$ given by Eq.(3.15). Of course one should never forget that computing $\hat S_n$ necessitates knowing $S_{n+2}$, and therefore one should always compare $\hat S_n$ with $S_{n+2}$ and not partial sums of the same order. But nevertheless $\hat S_n$ converges significantly quicker than $S_n$ to 1.
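A minimal sketch of Aitken's process in a few lines; the series used below (the alternating harmonic series, converging slowly to ln 2) is chosen here purely for illustration and is not the example of the text.

```python
import math

def aitken(S):
    """Aitken transformation of a sequence of partial sums."""
    return [S[n] - (S[n+1] - S[n])**2 / (S[n+2] - 2*S[n+1] + S[n])
            for n in range(len(S) - 2)]

# Partial sums of the alternating harmonic series, limit ln 2.
S, total = [], 0.0
for k in range(1, 12):
    total += (-1)**(k + 1) / k
    S.append(total)

A = aitken(S)
# Fair comparison: A[-1] is built from S[-3:], so compare it with S[-1].
err_plain = abs(S[-1] - math.log(2))
err_aitken = abs(A[-1] - math.log(2))
assert err_aitken < err_plain     # accelerated sums are much closer
```

Note that, exactly as warned above, each transformed value `A[n]` consumes the partial sums up to `S[n+2]`, so the comparison is made at equal information content.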

1.2 DEGENERATE STATES


We now assume that the eigenvalue $E_n^{(0)}$ whose perturbation we want to study is degenerate. We denote the orthonormal eigenstates by $|n, i\rangle$, $i = 1, \dots, \alpha$, with $H_0 |n, i\rangle = E_n^{(0)} |n, i\rangle$. Without degeneracy we started with an ansatz whose zeroth order state is the unperturbed eigenstate itself. Because the n-th level is degenerate with a multiplicity $\alpha$, the former nondegenerate state in the expansion above has to be replaced by an unknown linear combination of the degenerate eigenstates,

$$|\psi_n^{(0)}\rangle = \sum_{i=1}^{\alpha} c_i\, |n, i\rangle\,.$$

Without degeneracy we got the first order energy correction from a single diagonal matrix element, but for degenerate states this will lead to a homogeneous linear system of equations for the expansion coefficients $c_i$,

$$\sum_{j=1}^{\alpha} \Big( \langle n, i|\, H'\, |n, j\rangle - E_n^{(1)}\, \delta_{ij} \Big)\, c_j = 0\,.$$

Hence the eigenenergies in first order perturbation theory are obtained from the roots of the characteristic polynomial, i.e., the zeros of the determinant

$$\det\!\Big( \langle n, i|\, H'\, |n, j\rangle - E_n^{(1)}\, \delta_{ij} \Big) = 0\,.$$

In first order degenerate perturbation theory we thus obtain a linear system of equations whose roots $E_{n,i}^{(1)}$ with eigenvectors $c^{(i)}$ will give us the eigenstates in zeroth order by

$$|\psi_{n,i}^{(0)}\rangle = \sum_j c_j^{(i)}\, |n, j\rangle$$

and the eigenenergies in first order by $E_{n,i} = E_n^{(0)} + E_{n,i}^{(1)}$.

Example. Let us examine more closely the case of a two-fold degenerate system. For simplicity we will suppress the level quantum number $n$. Thus as ansatz for the zero order eigenstates we choose

$$|\psi^{(0)}\rangle = c_1\, |1\rangle + c_2\, |2\rangle\,, \qquad |c_1|^2 + |c_2|^2 = 1\,,$$

and thus orthonormalization will be conserved. Substituting this ansatz in Eq.(3.17) we obtain the system of homogeneous equations

$$\begin{pmatrix} H'_{11} - E^{(1)} & H'_{12} \\ H'_{21} & H'_{22} - E^{(1)} \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = 0$$

with $H'_{ij} = \langle i|\, H'\, |j\rangle$. The eigenvalues are given by the zeros of the determinant

$$\big( H'_{11} - E^{(1)} \big)\big( H'_{22} - E^{(1)} \big) - |H'_{12}|^2 = 0\,,$$

hence in first order perturbation theory we get for the energy shift

$$E^{(1)}_{\pm} = \frac{H'_{11} + H'_{22}}{2} \pm \sqrt{ \Big( \frac{H'_{11} - H'_{22}}{2} \Big)^2 + |H'_{12}|^2 }\,,$$

and the eigenfunctions are given via the corresponding eigenvectors $(c_1, c_2)$.

Therefore the former degenerate eigenstates become nondegenerate under a perturbation, with energy distance $E^{(1)}_{+} - E^{(1)}_{-}$, even for an arbitrarily small perturbation. Thus the level structure is qualitatively changed under a small perturbation.
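The two-fold degenerate example can be checked numerically; the perturbation matrix below is an arbitrary illustrative choice.

```python
import numpy as np

# Doubly degenerate unperturbed level E0; Hermitian perturbation matrix
# W_ij = <i|H'|j> in the degenerate subspace (illustrative values).
E0 = 1.0
W = np.array([[0.30, 0.10],
              [0.10, 0.18]])

# First order energies: eigenvalues of W shift the degenerate level.
shifts, C = np.linalg.eigh(W)     # columns of C: zeroth order eigenstates
E_first = E0 + shifts

# Closed form for the 2x2 case: mean shift plus/minus the splitting term.
mean = (W[0, 0] + W[1, 1]) / 2
split = np.sqrt(((W[0, 0] - W[1, 1]) / 2)**2 + abs(W[0, 1])**2)
assert np.allclose(sorted(E_first), [E0 + mean - split, E0 + mean + split])
# Any nonzero off-diagonal element lifts the degeneracy.
```

The eigenvectors returned alongside the shifts are exactly the zeroth order linear combinations $(c_1, c_2)$ discussed above.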
Without perturbation the levels are degenerate and a quantum number for labeling these degenerate states exists. A simple example is given by the hydrogen atom, for which the levels are degenerate with respect to the angular momentum and the magnetic quantum number. Both quantum numbers are associated with operators commuting with the Hamiltonian of the system. Under some perturbations these operators are no longer conserved and thus the corresponding quantum numbers are no longer "good" quantum numbers. Then there are no additional quantum labels for distinguishing degenerate levels, and degeneracy becomes forbidden, even under an arbitrarily small perturbation. This effect is called avoided crossing or level repulsion.
An example is given by the hydrogen atom in external magnetic and electric fields. Under a magnetic field pointing into the z-direction, the z-component of the angular momentum and the z-parity are conserved. Thus if we plot the eigenenergies as a function of the field strength (for fixed z-component of the angular momentum and of the spin), degeneracy occurs for eigenstates with the same m-quantum number but different z-parity. An example is shown in Figs.(3.1) and (3.2). Note, the quantum number $l$ is no longer conserved due to the diamagnetic contribution. For the hydrogen atom in parallel electric and magnetic fields the symmetry with respect to reflection on the plane $z = 0$ is broken by the electric field and hence the z-parity is no longer conserved; thus degeneracy with respect to the z-parity becomes forbidden and an avoided crossing occurs.
In Fig.(3.1) the electric field strength vanishes, the parity is conserved and hence crossings of eigenstates with different parity are allowed. In Fig.(3.2) some results for a non-vanishing electric field strength are shown. The degeneracy is broken by a small avoided crossing. One important aspect is the behavior of the eigenstates. In case of an allowed crossing, Fig.(3.1), the wave functions run without any distortion through this crossing. In Fig.(3.2) the states interact with each other. Far away from the crossing the wave functions are similar to the ones without an additional electric field. Close to the energy value at which the avoided crossing occurs, both wave functions are distorted and mixed with each other.
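The allowed versus avoided crossing scenario can be reproduced with a two-level toy model (this is a generic sketch, not the actual hydrogen computation behind Figs.(3.1) and (3.2)): when the two states carry different symmetry labels the coupling vanishes and the levels cross; any nonzero coupling opens a gap of twice the coupling strength.

```python
import numpy as np

def levels(g, coupling):
    """Eigenvalues of a two-level Hamiltonian whose diagonal elements
    cross at g = 0; g plays the role of the external field strength."""
    H = np.array([[g, coupling],
                  [coupling, -g]])
    return np.linalg.eigvalsh(H)

gs = np.linspace(-1.0, 1.0, 201)

# coupling = 0 (different symmetry): the levels really cross
gap_allowed = min(levels(g, 0.0)[1] - levels(g, 0.0)[0] for g in gs)
# coupling != 0 (same symmetry): avoided crossing, minimal gap 2|coupling|
gap_avoided = min(levels(g, 0.05)[1] - levels(g, 0.05)[0] for g in gs)

assert gap_allowed < 1e-12             # gap closes at the crossing
assert abs(gap_avoided - 0.1) < 1e-9   # gap never closes
```

The eigenvectors behave exactly as described in the text: away from $g = 0$ they coincide with the uncoupled states, while near $g = 0$ they are strongly mixed.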
By the simple hydrogen atom in external fields we thus have a very graphic example showing the interaction of neighboring eigenstates for non-integrable quantum systems, an effect which holds in general and which also results in a completely different statistical behavior of the energy eigenvalues (see, e.g., [6]).
For integrable quantum systems (see chapter 2 and Fig.(2.1)) degeneracy is allowed and hence the most probable energy distance between neighboring energy levels is zero. Thus the probability of finding a certain energy distance for a spectrum with constant density follows a Poisson distribution. The question arises how this behavior changes if degeneracy becomes forbidden due to an additional perturbation. For simplification the following discussion will be restricted to a two-level system invariant under time-reflection and rotation.
To derive the probability of finding a certain energy distance between neighboring energy levels let us first ask ‘what is the probability of finding a certain Hamiltonian matrix?’
The Hamiltonian matrix of our two-level system is given by

$$H = \begin{pmatrix} H_{11} & H_{12} \\ H_{12} & H_{22} \end{pmatrix}$$

with real $H_{11}$, $H_{12}$ and $H_{22}$. The matrix is given by its elements, thus the probability of finding a certain matrix $H$, $P(H)$, is given by the probability of finding certain matrix elements,

$$P(H)\, dH = P(H_{11}, H_{12}, H_{22})\, dH_{11}\, dH_{12}\, dH_{22}\,.$$

This result has to be invariant under unitary basis transformations and hence under an infinitesimal basis transformation $H \to U^{\top} H U$ with

$$U = \begin{pmatrix} 1 & -\varepsilon \\ \varepsilon & 1 \end{pmatrix}.$$

The phrase "infinitesimal" means that terms in $\varepsilon^2$ are negligible, hence

$$H_{11}' = H_{11} + 2 \varepsilon H_{12}\,, \qquad H_{22}' = H_{22} - 2 \varepsilon H_{12}\,, \qquad H_{12}' = H_{12} - \varepsilon\, (H_{11} - H_{22})\,.$$

Because of the infinitesimal transformation, the probability of finding the new primed matrix elements is given in first order Taylor expansion. Both matrices are equivalent and therefore the probability of finding the new unitarily transformed matrix equals the probability of finding the original matrix. (More technically speaking, we are only interested in the ensemble probability, which is independent of the selected basis.)

Thus $P(H') = P(H)$, which results in

$$\frac{1}{H_{12}}\, \frac{\partial \ln P}{\partial H_{12}} = \frac{2}{H_{11} - H_{22}} \left( \frac{\partial \ln P}{\partial H_{11}} - \frac{\partial \ln P}{\partial H_{22}} \right).$$

The left-hand side depends only on contributions from the "12" element, and the right-hand side is completely independent of that "12" element. Therefore both sides of this equation have to be constant, and we obtain

$$P(H_{12}) \propto \exp\!\big( -C\, H_{12}^2 \big)\,.$$

The probability of finding any matrix element is one, which fixes the normalization. Repeating the same game for the "11" and "22" elements gives Gaussian factors for these elements as well, and finally

$$P(H) = N \exp\!\big( -B\, \mathrm{tr}\, H^2 + A\, \mathrm{tr}\, H \big)$$

with $A = 0$.
From the beginning we were only interested in the probability of finding a certain energy distance between neighboring levels, but not in the matrix elements themselves. Therefore let us now compute the probability function as a function of the energies $E_1$, $E_2$ and the mixing angle $\theta$, given by Eq.(3.22) and Eq.(3.23):

The matrix elements are given by

$$H_{11} = E_1 \cos^2\theta + E_2 \sin^2\theta\,, \quad H_{22} = E_1 \sin^2\theta + E_2 \cos^2\theta\,, \quad H_{12} = (E_1 - E_2) \sin\theta \cos\theta\,,$$

and the Jacobian determinant by

$$\frac{\partial (H_{11}, H_{12}, H_{22})}{\partial (E_1, \theta, E_2)} = E_1 - E_2\,.$$

Therefore we obtain for the probability of finding a matrix $H$, as a function of the physical parameters energy and mixing angle,

$$P(E_1, E_2, \theta)\, dE_1\, dE_2\, d\theta \propto |E_1 - E_2|\, \exp\!\big( -B\, (E_1^2 + E_2^2) \big)\, dE_1\, dE_2\, d\theta\,.$$


Hence the probability of finding a certain energy spacing $S = |E_1 - E_2|$ is obtained by integrating over the remaining variables. With the averaged energy spacing transformed to $D = 1$ (the spacing then being denoted by $S$), we arrive finally at

$$P(S) = \frac{\pi S}{2}\, \exp\!\Big( -\frac{\pi}{4}\, S^2 \Big)\,,$$

and this probability function is called the Wigner distribution. The probability of finding a degenerate level, $P(S = 0)$, vanishes. Because this probability distribution is by construction invariant under orthogonal transformations, it is also called the Gaussian orthogonal ensemble, abbreviated GOE. Other important probability functions are those of the Gaussian unitary (GUE) and the Gaussian symplectic (GSE) ensembles, where the names reflect the invariance of the Hamiltonian matrix under certain transformations. Of course these transformation properties are due to certain physical qualities like time-reversal invariance and so forth. For more details see, e.g., [7]. Remember that for integrable quantum systems degeneracy is allowed and the energy spacing follows a Poisson distribution $P(S) = \exp(-S)$.
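Level repulsion can be made visible by direct Monte Carlo sampling of 2×2 real symmetric matrices with independent Gaussian elements (off-diagonal variance half the diagonal one, matching the matrix probability derived above); the sample size and tolerances below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# independent Gaussian elements of H = [[a, c], [c, b]]
a = rng.normal(0.0, 1.0, n)
b = rng.normal(0.0, 1.0, n)
c = rng.normal(0.0, np.sqrt(0.5), n)

# level spacing of a 2x2 symmetric matrix
S = np.sqrt((a - b)**2 + 4 * c**2)
S /= S.mean()                        # normalize the mean spacing to D = 1

# Level repulsion: small spacings are strongly suppressed compared to
# the Poisson law exp(-S) of integrable systems.
frac = np.mean(S < 0.1)
wigner_cdf = 1 - np.exp(-np.pi * 0.1**2 / 4)   # Wigner prediction
poisson_cdf = 1 - np.exp(-0.1)                 # Poisson prediction
assert abs(frac - wigner_cdf) < 0.002          # matches Wigner ...
assert frac < poisson_cdf / 2                  # ... far below Poisson
```

The sampled histogram of `S` follows the Wigner distribution, vanishing linearly at small spacings, whereas a Poisson spectrum would peak there.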

1.2.1 QUADRATIC AND LINEAR STARK EFFECT


To illustrate the rôle of degeneracy in perturbation theory, we will consider the effect of an external electric field on the energy levels of the hydrogen atom, the Stark effect. The Hamiltonian for a hydrogen atom in an electric field of strength $F$ with field axis pointing into the z-direction reads (in atomic units)

$$H = -\tfrac{1}{2} \Delta - \frac{1}{r} + F z\,.$$

The hydrogen ground state $|100\rangle$ is nondegenerate and thus the energy correction in first order perturbation theory is found to be

$$E^{(1)} = F\, \langle 100|\, z\, |100\rangle\,,$$

and this vanishes because the state is a state of definite parity. Thus for the ground state there is no energy shift in first order perturbation theory. So we have to go to the second order to find any Stark effect on the ground state energy, and this will give rise to the so-called quadratic Stark effect.
The energy shift in second order is

$$E^{(2)} = F^2 \sum_{n \neq 1} \frac{|\langle n\,1\,0|\, z\, |1\,0\,0\rangle|^2}{E_1^{(0)} - E_n^{(0)}}\,,$$

where only the $l = 1$, $m = 0$ intermediate states contribute, with $z = r \cos\vartheta$ in spherical coordinates. The radial integration above can be done analytically [8]. Because the resulting series converges poorly, it is worthwhile to find an upper bound with the help of a sum rule. This is easy to evaluate, since the ground state wave function is spherically symmetric, and the bound then follows directly.

To illustrate degenerate perturbation theory, we calculate the first order Stark effect for the $n = 2$ level. There are four degenerate states, but because the perturbation couples only states with equal $m$ and opposite parity, the 4 × 4 matrix reduces to a 2 × 2 one in the $\{|200\rangle, |210\rangle\}$ subspace,

$$W = F \begin{pmatrix} 0 & \langle 200|\, z\, |210\rangle \\ \langle 210|\, z\, |200\rangle & 0 \end{pmatrix},$$

with eigenvalues $E^{(1)} = \pm 3 F$ in atomic units. Thus the linear Stark effect of the degenerate $n = 2$ level yields a splitting for the $m = 0$ states but not for the $m = \pm 1$ states.
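A quick numerical check of this splitting; the only physical input is the well-known hydrogen matrix element $\langle 200| z |210\rangle = -3$ in atomic units.

```python
import numpy as np

F = 1e-4                 # electric field strength (atomic units, toy value)
z_elem = -3.0            # <200|z|210> = -3 a.u. for hydrogen

# Degenerate n = 2, m = 0 subspace {|200>, |210>}: the perturbation F*z
# has vanishing diagonal (definite parity) and couples the two states.
W = F * np.array([[0.0, z_elem],
                  [z_elem, 0.0]])
shifts = np.linalg.eigvalsh(W)
# eigenvalues -3F and +3F: the linear Stark splitting of the m = 0 pair
assert np.allclose(shifts, [-3 * F, 3 * F])
# the m = +/-1 states receive no first order shift at all
```

Note the splitting is linear in $F$, in contrast to the quadratic shift of the nondegenerate ground state.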

2. 1/N-SHIFT EXPANSIONS
The shifted 1/N-expansion yields exact results for the harmonic oscillator and for the Coulomb potential [9]. Because it is simple to derive approximate analytic results for spherically symmetric potentials, this approach is useful for power-law potentials in the radial coordinate. In this section we will formulate the shifted 1/N expansion for arbitrary potentials and derive the energy correction in 4th order perturbation theory.
The radial Schrödinger equation in N spatial dimensions is

The essential idea of the 1/N-shift expansion is to reformulate this Schrödinger equation by using an expansion parameter and a shift parameter a such that an expansion with respect to those parameters becomes exact for the two limiting cases: Coulomb potential and harmonic potential. This expansion will then be used for a perturbational ansatz to obtain the approximate eigensolutions of the system under consideration. With the ansatz

for the wave function we obtain

which will serve as our starting differential equation.

1st step. Let us rewrite Eq. (3.55) by shifting the parameter

and rescaling the potential

with

By deriving a suitable coordinate shift we will obtain as expansion of our potential a harmonic oscillator potential plus quickly converging corrections. To obtain such an expansion it is necessary to remove the parts linear in the coordinates from the potential. Therefore we expand the potential around its minimum.

2nd step. Search for a suitable minimum:

For large values of the expansion parameter the leading contribution to the energy comes from the effective potential

and this gives

Let us restrict ourselves to those potentials for which the effective potential has a minimum; thus

which allows one to compute the expansion parameter once the position of the minimum is determined.

3rd step. Shifting the origin of the coordinates to the minimum position:

In order to shift the origin of the coordinates to the position of the minimum of the effective potential it is convenient to define a new variable x

which yields for the radial Schrödinger equation (3.55)

4th step. Taylor expansion around the minimum of the effective potential:
With

and

the radial Schrödinger equation (3.62) becomes

By reordering this equation with respect to powers in the expansion parameter we obtain

and together with Eq.(3.60) for the terms denoted by (1)

The terms labeled by (2) are independent of the expansion parameter and define an oscillator potential

where

which becomes with

5th step. Expansion with respect to orders in the expansion parameter:

For convenience let us introduce the following abbreviations, ordered with respect to powers in the expansion parameter:

and thus we obtain for the potential

respectively for the radial Schrödinger equation

with

6th step. Optimized choice of the shift parameter a:


The main contribution to the energy comes from

and the next order in energy is given by

Thus we choose a such that this second order contribution vanishes:



7th step. Computation of


From Eq.(3.60) we obtain with

Summary of the computational steps. From the implicit equation (3.69) we obtain first the position of the minimum, and from Eq.(3.60) the expansion parameter. The oscillator frequency can then be computed via Eq.(3.66) and finally the shift from Eq.(3.68), which completes all necessary steps to obtain the potential expansion. Finally the eigensolutions are computed in Rayleigh-Schrödinger perturbation theory.
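The leading order of this recipe can be sketched in a few lines. The formulas below follow the standard shifted-1/N conventions of the literature and are an assumption here, since the book's own numbered equations are not reproduced: with a radial potential $V(r)$, $\omega^2 = 3 + r_0 V''(r_0)/V'(r_0)$, shift $a = 2 - (2 n_r + 1)\,\omega$, $\bar k = N + 2l - a$, and the implicit equation $\bar k^2 = 4 r_0^3 V'(r_0)$ fixing $r_0$; the leading energy is $\bar k^2/(8 r_0^2) + V(r_0)$. Under these assumptions the sketch recovers the exact ground state energies of the two limiting cases named above.

```python
def bisect(f, lo, hi, steps=200):
    """Simple bisection root finder (assumes one sign change in [lo, hi])."""
    flo = f(lo)
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

def leading_1N(V, dV, d2V, N=3, l=0, nr=0, lo=1e-3, hi=10.0):
    """Leading-order shifted 1/N energy for a radial potential V(r)."""
    def kbar(r0):
        omega = (3.0 + r0 * d2V(r0) / dV(r0)) ** 0.5
        a = 2.0 - (2 * nr + 1) * omega          # shift parameter
        return N + 2 * l - a
    # implicit equation fixing the expansion point r0
    r0 = bisect(lambda r: kbar(r)**2 - 4 * r**3 * dV(r), lo, hi)
    return kbar(r0)**2 / (8 * r0**2) + V(r0)

# Coulomb potential V = -1/r: exact hydrogen ground state -0.5 a.u.
E_coul = leading_1N(lambda r: -1/r, lambda r: r**-2, lambda r: -2 * r**-3)
# isotropic harmonic oscillator V = r*r/2: exact ground state 3/2
E_osc = leading_1N(lambda r: r * r / 2, lambda r: r, lambda r: 1.0)
print(round(E_coul, 9), round(E_osc, 9))   # -0.5 1.5
```

For the Coulomb case the self-consistency gives $\omega = 1$, $a = 1$, $\bar k = 2$, $r_0 = 1$; for the oscillator $\omega = 2$, $a = 0$, $\bar k = 3$ — exactly the two limits for which the expansion was constructed to be exact.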

Perturbation expansion. The new potential is equivalent to a shifted harmonic oscillator with an anharmonic perturbation. Thus we use the harmonic oscillator eigenfunctions and rewrite the anharmonic potential in terms of creation and annihilation operators. With the following abbreviation

and introducing the dimensionless new variable

and rescaling the abbreviations defined above via

we obtain after a tedious but straightforward computation the energy in 4th perturbational order:

Example. is the most important parameter with respect to the speed of


convergency in the expansion derived above. As larger as quicker the conver-
gence of the energy for similar potentials. To show this let us discuss briefly the
breathing mode of spherical nuclei [10]. Spherical nuclei are 3n-dimensional
problems for n nuclei. Thus we will use hyperspherical coordinates, Eq.(2.48).
The Hamiltonian is given by

With the product ansatz and the corresponding Casimir operator

we obtain the radial Hamiltonian in 3n dimensions

Note that for n = 1 we obtain a 3-dimensional problem and the Casimir operator becomes the angular momentum operator. The nuclei compared here are spherical, where the figure in front of the element symbol gives the number of nucleons. Because the expansion parameter grows with the spatial dimension 3n, the energy will converge for heavier nuclei much quicker than for light nuclei. Using e.g. Skyrme or Brink-Boeker forces [10], which are phenomenological nucleon-nucleon forces taking into account spin/isospin corrections, the convergence for the heavier nuclei is much quicker than for the lighter ones. An example is shown in Table (3.1).

3. APPROXIMATIVE SYMMETRY
Exact continuous symmetries are related, for classical systems, to constants of motion, and for quantum systems to conserved quantum numbers, respectively commuting operators. Approximative symmetries and the corresponding approximative invariants are neither exact symmetries nor exact invariants. Nevertheless they are useful in the perturbational regime of the system under consideration, because for weak perturbations they uncover the qualitative behavior of the system and simplify the computation considerably.
An effective way to derive approximative symmetries is the combination of projection and Lie algebra techniques. The first step is to project the system onto a lower dimensional subsystem, in which approximately conserved quantum numbers play the role of parameters, and to reformulate this subsystem by Lie-group generators. On the next few pages we will illustrate this idea [11] by an example: the diamagnetic hydrogen atom in an additional parallel electric field.

3.1 THE DYNAMICAL LIE-ALGEBRA


The dynamical group of a physical system is given by its symmetry group and its spectral generating or transition group. The symmetry group represents, e.g., the degeneracy of the quantum system and the existence of constants of motion; the generators of the spectral generating group create the whole quantum spectrum by action onto the ‘ground states’ (the states of lowest weight in representation theory). In this context I used the plural ‘ground states’ because under additional discrete symmetries it might happen that we can create the whole spectrum only by applying the generating operators on several different states, which are states of lowest energy for one specific discrete symmetry, e.g., the parity. This will become immediately clear by discussing as an example the harmonic oscillator:
SU(1,1) is the dynamical group of the harmonic oscillator. Well known is the special unitary group SU(2), whose generators are given by the angular momentum operators. The corresponding Lie algebra su(2) is compact; physically this means that the representation consists of a finite basis. In contrast, su(1,1) is non-compact and the unitary representation is countably infinite dimensional. A simple example is, for su(2), the set of states for fixed angular momentum $j$, which could be half-integer, the corresponding basis running from $-j$ to $j$ in steps of one; and for su(1,1), the infinite dimensional spectrum of the harmonic oscillator for fixed parity. The corresponding commutator relations are

$$[K_0, K_{\pm}] = \pm K_{\pm}\,, \qquad [K_+, K_-] = -2 K_0\,.$$

The generators of SU(1,1) are given by the bilinear products

$$K_+ = \tfrac{1}{2}\, \hat a^{\dagger} \hat a^{\dagger}\,, \qquad K_- = \tfrac{1}{2}\, \hat a \hat a\,, \qquad K_0 = \tfrac{1}{2} \big( \hat a^{\dagger} \hat a + \tfrac{1}{2} \big)\,,$$

with $\hat a$ and $\hat a^{\dagger}$ the boson annihilation and creation operators. Because $K_+$ will increase the oscillator quantum number by two, the multiple action of $K_+$ on the oscillator ground state will create only states with even quantum number, and these are the states with positive parity. To create the states with odd oscillator quantum number, $K_+$ has to act on the first excited oscillator state, which serves as the ‘ground state’ for the odd parity oscillator spectrum. Thus the eigenstates of the harmonic oscillator belong to two different irreducible representations, the even and the odd parity representation spaces. For more details see, e.g., [12], [13] or [14].
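These algebraic statements can be verified directly in a truncated boson (oscillator) basis — a standard numerical check, not from the text — with $K_+ = \tfrac12 \hat a^\dagger \hat a^\dagger$, $K_- = \tfrac12 \hat a \hat a$, $K_0 = \tfrac12(\hat a^\dagger \hat a + \tfrac12)$. The truncation only corrupts the top few basis states, so the commutators are compared on a lower sub-block.

```python
import numpy as np

N = 40                                     # truncated oscillator basis size
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # annihilation operator
ad = a.T                                   # creation operator

# su(1,1) realization generating the even/odd oscillator spectra
Kp = ad @ ad / 2
Km = a @ a / 2
K0 = (ad @ a + np.eye(N) / 2) / 2

def comm(A, B):
    return A @ B - B @ A

# Truncation spoils the top rows only; compare on a lower sub-block.
s = slice(0, N - 4)
assert np.allclose(comm(K0, Kp)[s, s],  Kp[s, s])      # [K0, K+] = +K+
assert np.allclose(comm(K0, Km)[s, s], -Km[s, s])      # [K0, K-] = -K-
assert np.allclose(comm(Kp, Km)[s, s], -2 * K0[s, s])  # [K+, K-] = -2 K0

# K+ raises the oscillator quantum number by two, so repeated action on
# |0> (or |1>) builds the even (odd) parity representation space.
```

Note the opposite sign of $[K_+, K_-]$ compared to the su(2) relation — this sign flip is exactly the non-compactness that makes the representation infinite dimensional.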
SO(4,2;R) is the dynamical group of the hydrogen atom in external fields, where the notation (4,2) reflects the invariant metric. Thus the elements of this group leave invariant the 6-dimensional space

The operators $L_{ab}$ with commutator relations

and metric tensor

are the basis of the corresponding Lie algebra, respectively the generators of the group [13]. The symmetry group SO(4;R) of the field-free hydrogen atom is a subgroup of the dynamical group SO(4,2;R). The operators $L_{ij}$, $i, j = 1, 2, 3$, are the angular momentum operators and the $L_{i4}$ are the three Runge-Lenz operators. The most important subgroups of the dynamical group SO(4,2;R) are:
– The maximal compact subgroup built by the generators
– The subgroup with generators for describing the basis in spherical coordinates.
– The subgroup useful to describe the hydrogen atom in the oscillator representation.
The first and the last subgroup mentioned above will play an important rôle in the following derivation. The equivalence of the corresponding operators projected onto fixed principal quantum number is best uncovered in the boson representation of the hydrogen eigenstates [12].

and $\hat a$, $\hat b$ are the annihilation operators for two pairs of bosons, the $\sigma_i$ are Pauli spin matrices and

is the metric tensor of SO(2,1;R).

In this representation the eigenstates of the hydrogen atom are given by

The principal quantum number is given by

For the SO(4;R) operators we obtain the expectation values

and

and

3.2 APPROXIMATIVE INVARIANT


In [11] the approximative invariant for the hydrogen atom in strong magnetic fields was derived by the projection and Lie-algebra techniques mentioned above. The first derivation dates back to the classical analysis of [16]. Starting point of the derivation is the Hamiltonian represented by generators projected onto a fixed principal quantum number. Thus the relevant state space is given by the fixed n-multiplet. A perturbation expansion of this Hamiltonian will give us the approximatively invariant operator, hence the generator of a symmetry conserved in first order perturbation theory.
The Hamiltonian of the hydrogen atom in external fields in atomic units reads

with the magnetic field pointing into the z-direction and, for parallel fields, the electric field along the same axis. In scaled semiparabolic coordinates

the Hamiltonian becomes

Because $L_z$ and $S_z$ are conserved in parallel electric and magnetic fields, only the Hamiltonian corrected by the diamagnetic and the electric field contribution is of computational interest. The Zeeman contribution leads to a constant energy shift for fixed magnetic quantum number and fixed spin. Of course this contribution cannot be neglected in computing spectroscopic quantities like wave lengths and oscillator strengths.
A representation of the SO(2,1;R) generators is given by

Let us call these operators $S_{..}$ for the first semiparabolic coordinate and $T_{..}$ for the second. Thus the Hamiltonian reads

The action of these Lie-algebra operators onto the hydrogen eigenstates in the boson representation is given by

where we have rewritten the standard hydrogenic boson quantum numbers in such a way that the action onto the principal quantum number is uncovered.
With the projection

onto a fixed principal quantum number n we obtain

With

and similarly for the operators $T_{..}$. In this manner we get from

Expanding this equation with respect to the perturbation we obtain

and in first order



The operators above are rather abstract. To obtain a more physical picture let us rewrite these results in terms of the angular momentum and Runge-Lenz operators. On the n-manifold, and only on the n-manifold, the following isomorphism holds:

Using these relations finally results in [17]

and thus we obtain for the approximate invariant

Note that these Runge-Lenz operators build an ideal, which means

whereas in the standard representation (the non-energy-weighted representation) [12] the Runge-Lenz operators are given by

This gives for the approximate invariant

in agreement with the classical representation [16].

Example. For vanishing electric field strength the eigenvalues of the approximate invariant become independent of the field strength and thus can be computed by diagonalization. This leads to

with the magnetic quantum number and the spin. The action of the approximate invariant onto the hydrogen eigenstates can be computed easily by

The results for the diamagnetic contribution are listed for some hydrogen eigenstates in Table (3.2), where we have used the field-free notation with an additional label indicating that l is no longer a conserved quantum number and thus will mix with all other allowed angular momenta.

In Table (3.3) we have compared the results for one selected state as a function of various external field strengths, obtained by the approximate invariant and by numerical computations based on the discrete variable representation and finite elements. These results indicate that even for relatively strong electric and magnetic fields good results can be obtained by using the approximate invariant. For this state the results obtained by the approximate invariant are accurate to about 1% or even better for magnetic fields up to 5000 Tesla and correspondingly strong electric fields. These limiting field strengths scale roughly with powers of the principal quantum number, both for the magnetic and for the electric field.

4. TIME-DEPENDENT PERTURBATION THEORY


In this section we will derive an approximate solution of the time dependent Schrödinger equation. The time dependence can enter in two different ways: either by a time-dependent perturbation to a zero order time-independent part, or by an initial state which is not an eigenstate of a time-independent Hamiltonian (see Chapt. 1). An example of the first case is an atom under the influence of a time dependent external electromagnetic field. Even if we are talking about a time-dependent perturbation, this does not mean that we will get a small correction to the energy, as we would expect for a time-independent perturbation, because for a time-dependent Hamiltonian the energy will no longer be conserved.

Thus the question arises how we could obtain the wave function of the full system from the stationary states of the time-independent system.
The time-dependent Schrödinger equation takes the form

$$i \hbar\, \partial_t \Psi = \big( H_0 + V(t) \big)\, \Psi\,,$$

and the eigensolutions (with $n$ the necessary set of quantum numbers)

$$H_0\, \psi_n = E_n\, \psi_n$$

of the unperturbed system are known. These eigenfunctions serve as a basis for expanding the perturbed wave function,

$$\Psi(t) = \sum_n c_n(t)\, e^{-i E_n t / \hbar}\, \psi_n\,,$$

with the time dependent coefficients given by the projection of the wave function onto the n-th basis state.
The next step is to apply the time dependent Schrödinger equation to our wave function expansion and to multiply this equation from the left with a basis state $\psi_f$, which results in
This can be rearranged to give

$$i \hbar\, \dot c_f = \sum_n V_{fn}(t)\, e^{i \omega_{fn} t}\, c_n \tag{3.102}$$

with $\omega_{fn} = (E_f - E_n)/\hbar$. At first glance this looks like the solution we are seeking, because Eq.(3.102) is exact, but unfortunately each of the unknown coefficients becomes a function of all the other time-dependent coefficients. A way around that obstacle is provided, in the weak perturbation limit, by first order perturbation theory, in which we replace all the coefficients on the right-hand side of Eq.(3.102) by their values at the initial time.
Let us turn on the time dependent perturbation only for a finite time interval. Thus before the perturbation is switched on

and after it is switched off

with time-independent coefficients. Of course, the final state will depend on the initial one; thus we label the coefficients with the index of the initial state, $c_{fi}$. The probability that the system, initially in state $i$, is found in state $f$ afterwards is given by $|c_{fi}|^2$ and called the transition probability.
Let us start with an initially stationary state $i$ and a perturbation which has no diagonal elements, thus $V_{nn} = 0$. In first order we use the ansatz $c_n = \delta_{ni} + c_n^{(1)}$, which gives

and because of

we obtain finally in first order time-dependent perturbation theory

$$c_f^{(1)}(t) = -\frac{i}{\hbar} \int_0^t V_{fi}(t')\, e^{i \omega_{fi} t'}\, dt'\,. \tag{3.106}$$

To gain a better understanding we will discuss two simple cases: the constant and the periodic perturbation.

Constant perturbation. For a perturbation which is constant during a certain period, $V(t) = V$ for $0 \le t \le T$, the matrix elements are time independent. Thus we obtain from Eq.(3.106)

$$c_f^{(1)}(T) = \frac{V_{fi}}{\hbar \omega}\, \big( 1 - e^{i \omega T} \big)\,,$$

and hence the transition probability becomes (where we will skip in the further discussion the upper index (1))

$$P_{fi}(T) = \frac{|V_{fi}|^2}{\hbar^2}\, \frac{\sin^2(\omega T / 2)}{(\omega / 2)^2} \tag{3.108}$$

with $\omega = \omega_{fi} = (E_f - E_i)/\hbar$.

If the energy of the final state equals the energy of the initial state, which is
the limit of degenerate states, we get from Eq.(3.108)

and thus the transition probability becomes largest. For respec-


tively the transition probability vanishes in first order,
If the interaction time becomes very large compared to the eigentime
we obtain for the transition probability with

With we get immediately

and thus in the limit of an infinite interaction time transitions occur only be-
tween degenerate (stationary) states. This result is somewhat surprising, but,
strictly speaking, the limit of an infinite interaction time with a constant
potential is a time-independent, conservative system. Thus we have simply
recovered the conservation of energy. Nevertheless we can also learn something
from this result about finite interaction times: significant transitions for a
finite interaction time will only occur for small energy differences between the
initial and final state under consideration. Thus only those states will be
populated significantly for which

For the transition energy we obtain directly

Note that the right-hand side of this equation is constant. Thus the smaller
the interaction time, the larger the transition energy. Eq.(3.113) represents the
energy-time uncertainty relation. It is important to note that this uncertainty
differs significantly in its interpretation from the Heisenberg uncertainty:
for the Heisenberg uncertainty the coordinates and momenta are taken at
exactly the same time for the same state of the system under consideration, and
the Heisenberg uncertainty relation tells us that both cannot be measured
together exactly. Each energy, in contrast, can be measured exactly at any time. is the

difference between two energies measured at different times, but this does not
imply that the energy of one of the stationary states under consideration
possesses an uncertainty at a certain time.
Due to Eq.(3.111), for long interaction times the differential transition
probability in first-order time-dependent perturbation theory becomes time
independent and proportional to the squared interaction potential

Usually final and initial states are not isolated bound states but states in the
continuum. Therefore, practically speaking, the transition probability is the
probability of populating all states sufficiently close in energy to the final state
Of course, in general is a multi-valued quantum number, and thus all
other quantum numbers (e.g. the angular momentum l and the magnetic quantum
number which do not refer to the energy have to be considered in addition.
It depends on the particular physical situation under consideration whether only
those states have to be taken into account which have identical additional
quantum numbers, or whether one has to sum over all or some degenerate quantum numbers.
Let be the density of final states. Then the total differential transition
probability is given by

and this equation is called Fermi’s Golden Rule.
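Fermi's Golden Rule can be checked with a minimal numerical experiment (the coupling, the band of final-state energies and the density of states below are hypothetical model values; ħ = 1). Summing the first-order probabilities over a dense band of final states yields a total transition probability that grows linearly in time, with slope 2π|W|²ρ/ħ.

```python
import numpy as np

hbar = 1.0
W = 0.001        # coupling, assumed equal for all final states (illustrative)
rho = 200.0      # density of final states (states per unit energy, illustrative)
# dense band of final-state energies around E_n = 0 (offset so w never hits zero)
E_f = (np.arange(-800, 800) + 0.5) / rho
w = E_f / hbar

def total_P(t):
    """Total first-order probability summed over all final states in the band."""
    return np.sum(4 * W**2 * np.sin(w * t / 2) ** 2 / (hbar * w) ** 2)

# numerical growth rate of the total probability between two late times
rate_num = (total_P(40.0) - total_P(20.0)) / 20.0
# golden-rule prediction: 2*pi*|W|^2*rho/hbar
rate_golden = 2 * np.pi * W**2 * rho / hbar
```

Because the grid spacing is 1/ρ, the discrete sum approximates ρ times the energy integral, and the numerical rate agrees with the golden-rule value to within a few percent.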

Periodic perturbation. Let us consider finally the case of a periodic pertur-


bation

where ± refers to the sign in the exponential function. Again, initially


the system will be in the n-th stationary state. In first order time-dependent
perturbation theory, we obtain from Eq.(3.106)

and thus

Similarly to the discussion above, we obtain for large interaction times

and thus for the transition probability

and for the differential transition probability

Due to this periodic perturbation those states are populated for which

Hence if we define again the density of final states in the continuum (see dis-
cussion above) by the (total) differential transition probability becomes

The ±-sign distinguishes between two situations: for the +-sign, states with energy
are populated. Thus by the transition the final states
are lower in energy, which means the system loses the energy This
happens for stimulated emission, in which the system radiates due to the external
periodic perturbation. Stimulated emission is exemplified by the output of lasers
and is the exact opposite of absorption. In the case of absorption the system
gains the energy by the transition from and thus the final states
will be higher in energy; this is exactly the situation encountered in the
absorption of light, labeled above by the −-sign.

Notes
1 is the magnetic induction measured in atomic units
. In addition to the Zeeman contribution the diamagnetic con-
tribution becomes important for strong magnetic fields and/or
highly excited states, the so-called Rydberg states.
2 The quantum number is still conserved
3 We will not consider the coupling to the continuum due to tunneling below
the classical ionization limit.
4 The relation

is called a sum rule.



5 The subgroup SO(2,1;R) [15] is generated by the operators

which leave the angular momentum quantum numbers invariant and shift
the principal quantum number by one.

References

[1] Cohen-Tannoudji, C., Diu, B. and Laloë, F. (1977). Quantum Mechanics


II John Wiley & Sons, New York
[2] Flügge, S. (1994). Practical Quantum mechanics (2nd printing) Springer-
Verlag Berlin
[3] Čížek, J. and Vrscay, E. R., (1982). “Large order perturbation theory in
the context of atomic and molecular physics - Interdisciplinary aspects”, Int.
Journ. Quant. Chem. XXI, 27–68
[4] Press, W. H., Teukolsky, S. A., Vetterling, W. T. and Flannery, B. P. (1992).
Numerical Recipes Cambridge University Press Cambridge
[5] Schweizer, W. and Faßbinder, P., (1997). “Discrete variable method for
nonintegrable quantum systems”, Comp. in Phys. 11, 641–646
[6] Bohigas, O., Giannoni, M.-J. and Schmit, C. (1986). in Lecture Notes in
Physics Vol. 262, edts. Albeverio, S., Casati, G. and Merlini, D. Springer-
Verlag Berlin
[7] Reichl, L. E. (1992). The transition to chaos Springer Verlag, New York
[8] Bethe, H. A. and Salpeter E. E. (1957). Quantum Mechanics of One- and
Two-Electron Atoms Academic Press, New York
[9] Imbo, T., Pagnamenta, A., and Sukhatme U., (1984), “Energy eigenstates
of spherically symmetric potentials using the shifted l / N expansion”, Phys.
Rev. D 29, 1669–1681

[10] Ring, P. and Schuck, P., (1980). The nuclear many body problem Springer,
Heidelberg
[11] Delande, D., Gay, J. C., (1984). J. Phys. B17, L335 – 337
[12] Wybourne, B. G., (1974). Classical Groups for Physicists John Wiley &
Sons, New York
[13] Barut, A. O., (1972). Dynamical Groups and Generalized Symmetries in
Quantum Theory Univ. of Canterbury Pub., Christchurch, New Zealand
[14] Kleinert, H., (1968). Group Dynamics of the Hydrogen Atom in Lect. Theor.
Physics XB, eds. Barut, A. O., and Brittin, W. E., Gordon & Breach, New
York
[15] Bednar, M., (1977) “Calculation of infinite series representing high-order
energy corrections for nonrelativistic hydrogen atoms in external fields”,
Phys. Rev. A15, 27–34
[16] Solov’ev, E. A., (1981) “Approximate motion integral for a hydrogen
atom in a magnetic field”, Sov. Phys. JETP Lett 34, 265–268
[17] Bivona, S., Schweizer, W., O’Mahony, P. F., and Taylor, K. T., (1988)
“Wavefunction localization for the hydrogen atom in parallel electric and
magnetic fields”, J. Phys. B21, L617
Chapter 4

APPROXIMATION TECHNIQUES

As a matter of fact, even the simplest system, the hydrogen atom, represents a
system of two particles. Thus in this chapter we will concentrate on techniques
suitable for systems which cannot be reduced to a single particle problem. Of
course this does not mean that, e.g., finite element techniques or perturbation
techniques are only useful for single particle problems. But in most practical
computations different techniques will be used in combination, e.g., one- or
two-dimensional finite elements as an ansatz for a part of the higher dimensional
wave function.
In the first section we discuss the variational principle, followed by the
Hartree-Fock method. Put simply, the Hartree-Fock technique is based on a
variational method in which the many-fermion wave function is an antisym-
metrized product of single-particle wave functions. This ansatz leads to an
effective one-particle Schrödinger equation with a potential given by the self-
consistent field of all the other fermions together. Density functional theory is
an exact theory based on the charge density of the system. Density functional
theory includes all exchange and correlation effects and will be discussed in
section three. Section four is devoted to the virial theorem, which plays an
important role in molecular physics, and the last section to quantum Monte
Carlo methods. With the increasing power of computer facilities, quantum
Monte Carlo techniques provide a practical method for solving the many-body
Schrödinger equation.

1. THE VARIATIONAL PRINCIPLE


In understanding classical systems, variational principles play an important
role and are in addition computationally useful. Thus two questions arise:
Quantum systems are governed by the Schrödinger equation. Is it possible to
formulate a quantum variational principle? If we have found such a variational

principle, will it enable us to solve the Schrödinger equation in situations
where neither an exact solution has been found nor a perturbative treatment
is amenable?
In the following we will show that we are able to replace the Schrödinger
equation by a variational method - the Rayleigh-Ritz method. Even if the
Rayleigh-Ritz method in its naive numerical application seems to be rather
restricted, many other numerical techniques, e.g. the finite element method,
are justified by the equivalence of the Schrödinger equation to a variational
principle.

1.1 THE RAYLEIGH-RITZ VARIATIONAL


CALCULUS
The variational method is based on the simple fact that for conservative
systems the Hamiltonian expectation value in an arbitrary state has to be greater
than or equal to the ground-state energy of the system.
Let us assume, for simplification, that the Hamiltonian spectrum is discrete
and non-degenerate. The eigenstates satisfy

and each wave function can be expanded with respect to this complete
Hilbert-space basis. Thus

which proves the contention above.

The Ritz Theorem. The Hamiltonian expectation value becomes stationary


in the neighborhood of each of its discrete eigenstates:

Proof:

For any state is a real number. Thus

Let

then

Because this holds for any varied ket, it is also true for

and thus becomes stationary if and only if is an eigenstate.


Therefore the ‘art of variation’ is to choose judiciously a family of trial
wave functions, , that contains free parameters The variational
approximation is then obtained by

1.1.1 EXAMPLES
To illustrate the utility of the variational procedure we will discuss two
examples, the harmonic oscillator and the helium atom.

The harmonic oscillator. The Hamiltonian of the harmonic oscillator is

and the trial wave function is given by



Thus

and therefore we obtain

and with Eq.(4.5)

which is the correct ground state energy and wave function of the harmonic
oscillator. With the ansatz we would have obtained the correct
first excited state, which is the state of lowest energy with negative parity.
The calculations above are of pedagogical use only, to familiarize ourselves
with the variational technique. Because the trial functions chosen already
included the exact wave function, we are not able to judge from this example its
effectiveness as a method of approximation. Therefore we will now discuss a
second example.
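The oscillator minimization can also be carried out purely numerically, without the analytic integrals. The sketch below (grid size and parameter range are assumed; ħ = m = ω = 1) evaluates the expectation value of H for the Gaussian trial family on a grid and locates the minimum, which reproduces the exact ground-state energy 1/2 up to grid error.

```python
import numpy as np

# Variational sketch for H = p^2/2 + x^2/2 (hbar = m = omega = 1) with the
# Gaussian trial family psi_a(x) ~ exp(-a x^2), a > 0 the free parameter.
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]

def energy(a):
    psi = np.exp(-a * x**2)
    psi /= np.sqrt(np.sum(psi**2) * dx)          # normalize on the grid
    dpsi = np.gradient(psi, dx)
    kinetic = 0.5 * np.sum(dpsi**2) * dx         # <T> = (1/2) int |psi'|^2 dx
    potential = 0.5 * np.sum(x**2 * psi**2) * dx
    return kinetic + potential

a_grid = np.linspace(0.1, 2.0, 400)
E = np.array([energy(a) for a in a_grid])
a_best, E_best = a_grid[np.argmin(E)], E.min()
```

The minimum sits at a ≈ 1/2 with E ≈ 1/2, i.e. the exact ground state, as expected since the trial family contains it.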

The helium atom. The helium atom is composed of two electrons in the
s shell. If the nucleus is placed at the origin, and if the electron coordinates are
labeled and then the Hamiltonian is (in atomic units)

with

Since we are interested in the ground state, both electrons must occupy the
lowest energy state. Therefore the simplest variational ansatz is the product of
two hydrogen-like ground state eigenfunctions

Because we are concerned with a two electron system, hence a fermion system,
the total wave function has to fulfill the Pauli principle. Thus we have to anti-
symmetrize the trial function against exchange of the two electrons. Because
the spatial part is symmetric the spin part has to be antisymmetric. Coupling
the two electron spins leads to the triplet S = 1 state being the
symmetric and the singlet S = 0 state the antisymmetric state under exchange
of the two electron spin coordinates. Therefore the total correct trial function
for the ground state is given by
With

we obtain

and thus the energy becomes after some elementary integration steps

Minimizing Eq.(4.11) as a function of leads to

and

The computation above holds for any two-electron system. The experimentally
measured quantity is the ionization potential I given by where
is the ground-state energy of the hydrogen-like singly ionized atom. Thus
equals respectively

Table (4.1) lists the variational and the measured values for some two-electron
systems. Surprisingly, the difference between the variational and the experi-
mental result is approximately invariant, at least for the first four entries. This
is an effect of the electron-electron interaction, which becomes relatively less
important for more and more highly ionized atoms, because the nucleus-electron
attraction becomes much stronger.
Higher approximations in the variational treatment take the electron-electron
interaction explicitly into account. Hylleraas suggested the trial function

With this ansatz Pekeris [1] obtained and the experimental
result is This exhibits one of the strengths of variational
methods: in many cases they allow one to obtain highly accurate values
for single selected states, which in turn allows one to study, e.g., small
relativistic atomic effects.
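The screened-charge minimization behind Table (4.1) can be sketched numerically. For a two-electron ion of nuclear charge Z described by hydrogen-like 1s orbitals with effective charge z, the standard closed-form expectation value is E(z) = z² − 2Zz + (5/8)z in atomic units; minimizing reproduces z = Z − 5/16 and E = −(Z − 5/16)², i.e. E ≈ −2.848 a.u. for helium.

```python
import numpy as np

# Screened-charge variational energy for a two-electron system (atomic units):
#   E(z) = z**2 - 2*Z*z + (5/8)*z,   z = effective nuclear charge.
Z = 2  # helium
z = np.linspace(0.5, 3.0, 100001)
E = z**2 - 2 * Z * z + 0.625 * z
z_best = z[np.argmin(E)]
E_best = E.min()
# analytic minimum for comparison: z = Z - 5/16, E = -(Z - 5/16)**2
```

The grid minimum matches the analytic screening result, and the gap to the experimental −2.9037 a.u. is the correlation energy missed by the simple product ansatz.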

1.2 DIAGONALIZATION VERSUS VARIATION


1.2.1 DIAGONALIZATION AS VARIATIONAL METHOD
In the variational computation the bound states of the Hamiltonian are found
within a subspace of the Hilbert space. This subspace is given by the variational
parameters in the examples above. Of course this set of trial functions could
also be expanded in a set of basis vectors of the complete Hilbert-space basis.
Therefore we could reinterpret the variational calculus above and equivalently
use a finite set of N Hilbert-space basis vectors which span the relevant
submanifold.
For a trial state

the Hamiltonian expectation value is given by

with

For an orthonormal Hilbert-subspace basis the elements of the overlap matrix
are

For the Hamiltonian bound states the derivative of the expectation value with
respect to the expansion coefficients vanishes, which leads to the linear system of equations

Eq.(4.16) is a generalized eigenvalue equation which can be written in matrix


notation

Thus diagonalization procedures based on a Hilbert subspace are equivalent
to a variational ansatz. The variational eigenvalues, for both the ground state
and excited states, always lie above the corresponding exact ones. Including
more basis functions in our set, the Hilbert subspace becomes larger. Conse-
quently the computational results will converge towards the exact ones with
increasing basis size.
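A small illustration of this convergence (not from the book): the anharmonic oscillator H = p²/2 + x²/2 + λx⁴, with ħ = m = ω = 1 and an assumed λ, diagonalized in a truncated harmonic-oscillator basis built from ladder operators. The ground-state eigenvalue settles rapidly as the basis grows.

```python
import numpy as np

# H = p^2/2 + x^2/2 + lam*x^4 in a truncated harmonic-oscillator basis.
# Note: powering the truncated x matrix is itself an approximation near the
# basis edge, so only the converged large-N values are trustworthy.
lam = 0.1  # anharmonicity (illustrative value)

def ground_energy(N):
    n = np.arange(N - 1)
    x = np.zeros((N, N))
    x[n, n + 1] = x[n + 1, n] = np.sqrt((n + 1) / 2)   # x = (a + a†)/sqrt(2)
    H = np.diag(np.arange(N) + 0.5) + lam * np.linalg.matrix_power(x, 4)
    return np.linalg.eigvalsh(H)[0]

E = [ground_energy(N) for N in (4, 8, 16, 32)]
```

For this mild anharmonicity the eigenvalue is converged to many digits already at moderate basis sizes, and it must lie between the unperturbed value 0.5 and the first-order estimate 0.575.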
A very effective way of solving large-scale eigenvalue problems is provided
by Arnoldi and Lanczos methods [2, 3]. In the following subsection we will
discuss the Lanczos method from a slightly different point of view.

1.2.2 THE LANCZOS METHOD


Spectrum Transformation Techniques. Standard diagonalization techniques
like Householder's method scale with if is the dimension of the matrix.
Lanczos methods are usually used to compute a few eigenvalues of a sparse
matrix and scale with Therefore Lanczos techniques become effi-
cient for large eigenvalue problems. Because the Lanczos method converges
quickly only for extremal eigenvalues, spectrum transformation techniques [2]
are used to compute non-extremal interior eigenvalues of the complete spec-
trum. Practically speaking, if one needs to compute a large set of
eigenvalues of the complete spectrum, the total set is windowed into smaller
subsets. Each of these subsets is spectrum transformed, to map its interior
eigenvalues onto a new spectrum for which the new transformed eigenvalues
become extremal. The essential idea is very simple. Let us assume that the
subset of eigenvalues we are interested in is covered by the interval and that
is an element of this interval but not an eigenvalue. Instead of computing
the eigenvalues of our original matrix we derive a new matrix with a shifted
and inverted spectrum, such that this spectrum becomes extremal around the
selected value
Let

be our original generalized eigenvalue problem with positive. Thus we can


map the generalized problem onto an ordinary one with shifted eigenvalues

which we can rewrite as

and therefore we arrive at

Thus the eigenvalues around are mapped onto new eigenvalues which
become extremal. The original eigenvalues are given by

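A dense-matrix sketch of this shift-and-invert transformation (matrix size and shift are illustrative; a production code would keep the matrix sparse and replace the explicit inverse by a factorization): eigenvalues of (A − σI)⁻¹ are 1/(λ − σ), so the eigenvalue of A closest to the shift σ becomes the extremal eigenvalue of the transformed matrix and is recovered as λ = σ + 1/μ.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = (M + M.T) / 2                      # symmetric test matrix (illustrative)
sigma = 0.3                            # shift inside the spectrum, not an eigenvalue
B = np.linalg.inv(A - sigma * np.eye(50))
mu = np.linalg.eigvalsh(B)
mu_ext = mu[np.argmax(np.abs(mu))]     # extremal transformed eigenvalue
lam_near = sigma + 1.0 / mu_ext        # back-transformed eigenvalue of A
exact = np.linalg.eigvalsh(A)
closest = exact[np.argmin(np.abs(exact - sigma))]
```

The back-transformed extremal eigenvalue coincides with the eigenvalue of A closest to σ, which is exactly what a windowed Lanczos run exploits.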
The Lanczos algorithm. Let be the Hamiltonian of the system under con-
sideration and a randomly selected normalized state. With the Lanczos
procedure we will derive a tridiagonal Hamiltonian matrix, whose eigensolu-
tions will converge towards the exact ones.
1st step: We compute a new state by applying the Hamiltonian to our
initial state. Because the initial state was randomly chosen, it will not be an
eigenstate of the Hamiltonian, and thus and
are not parallel. By Gram-Schmidt orthonormalization
are not parallel. By Gram-Schmidt orthonormalization

we obtain a state orthonormal to our randomly chosen first state. The Hamil-
tonian matrix elements are

Note that by reusing the numerical results from the Gram-Schmidt orthonor-
malization the total computation can be reduced significantly, because all
matrix elements have already been computed in the preceding step.
2nd step: We compute the next state again by applying the Hamiltonian onto
our previously derived state:

and by Gram-Schmidt orthonormalization

The new matrix elements are

The elements vanish because can be written as a linear com-
bination of the two states which are by construction orthonormal
to The same holds for all non-tridiagonal elements, and thus

Due to the construction, states whose labels differ by more than two
are already orthogonal, and thus the Gram-Schmidt orthonormalization runs only
over the two predecessors. The general construction is

and by Gram-Schmidt orthonormalization

Continuing this process finally leads to a tridiagonal matrix whose eigen-
values converge to those of the Hamiltonian. This construction is based on the
Krylov space [3]: the Krylov space is spanned by all basis vectors generated
by repeatedly applying the Hamiltonian to the first, randomly selected one,
The eigenvalues and eigenvectors of tridiagonal matrices are efficiently cal-
culated by QR-decomposition. The QR-decomposition is based on the follow-
ing construction: Let be our tridiagonal Lanczos Hamiltonian matrix. Then

we decompose this matrix into an upper triangular and an orthogonal matrix

Reversing the order of the factors in this decomposition results in a new matrix
and a new decomposition

Continuing this process leads to

and hence all matrices have exactly the same eigensolutions,

and converges towards a diagonal matrix. For general symmetric matri-
ces the QR-decomposition converges slowly, but it is an efficient algorithm for
tridiagonal matrices. Routines can be found, e.g., in [4].
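The iteration can be sketched with NumPy's qr routine (small illustrative matrix; practical codes add shifts and deflation for speed): each step A_k = Q_k R_k, A_{k+1} = R_k Q_k is an orthogonal similarity transformation, so every iterate shares the same eigenvalues, and the diagonal converges to them.

```python
import numpy as np

# Unshifted QR iteration on a small symmetric tridiagonal matrix (assumed values).
A = (np.diag([4.0, 3.0, 2.0, 1.0])
     + np.diag([0.5, 0.5, 0.5], 1)
     + np.diag([0.5, 0.5, 0.5], -1))
exact = np.sort(np.linalg.eigvalsh(A))
for _ in range(300):
    Q, R = np.linalg.qr(A)
    A = R @ Q                       # similarity transform: Q^T A Q
qr_eigs = np.sort(np.diag(A))
```

After enough iterations the off-diagonal elements have decayed geometrically and the diagonal holds the eigenvalues.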

2. THE HARTREE-FOCK METHOD


In this section we will briefly discuss the Hartree-Fock method. Hartree-
Fock theory provides a simplification of the full problem of many electrons
moving in a potential field. The essential idea is to retain the simplicity of
an independent single-particle picture. This means that in the spirit of the
Ritz variational principle we search for the ‘best’ many-particle eigenstate
approximated by a product of single-particle states.
The Hamiltonian for a Z electron system (atom or ion) is given by

with eigenstates (The Hamiltonian above is already an ap-
proximation, because we have neglected the motion of the atomic nucleus.) In a
first step we approximate the Z-electron wave function by a product state

Physically this ansatz implies that each electron in the multi-electron system
is described by its own wavefunction, i.e., that each single electron
experiences an effective potential due to all other electrons and the nucleus.

Because our wave function is normalized all single electron wave func-
tions are also normalized. Therefore the Hamiltonian expectation value is
given by

Variation with respect to under the additional constraint of orthonormal


single particle states leads to

respectively

This integro-differential equation is called the Hartree equation. The integral
potential term, the effect of all other particles on each particle, is a self-
consistent potential, and the idea of approximating a physical system by a self-
consistent potential is the basis of the self-consistent field method [4]. Eq.(4.30)
describes a single electron moving in the Coulomb potential plus a potential
generated by all other electrons. However, the self-consistent potential

depends on the charge density which we know only after solving


Eq. (4.30).
The total energy E is given by the Hamiltonian expectation value (4.29). The
expectation value of the Hartree equation (4.30) counts the repulsive electronic
interaction twice, hence the correct total energy is not obtained by the sum of
the single particle energies but by

In the Hartree solution above, we have not taken into account that electrons
are fermions and that thus the total wave function has to be antisymmetric. The

antisymmetrization of a product of N single particle states can be obtained


with the aid of the antisymmetrizer

with the permutation operator; the sum extends over all possible per-
mutations, where is +1 if the permutation is even and –1 otherwise. It is
more convenient to write the antisymmetric wave function of orthonormalized
single-particle states as a Slater determinant

where includes spin and space coordinates. This means that


we search in the spirit of the self-consistent field theory for the ‘best’ Slater
determinant. Thus the Pauli principle is taken into account, and we get from the
variational procedure above the Hartree-Fock equation, which differs from the
Hartree equation (4.30) by the appearance of an additional exchange-potential
term.
It can easily be verified that the Slater determinant is normalized. We obtain
for the one electron part of the Hamiltonian

and the electron-electron interaction

therefore the total energy is given by

For more details see, e.g., [5].
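The self-consistent idea can be illustrated with a deliberately simple model that is not from the book: two opposite-spin electrons sharing one orbital in a one-dimensional harmonic trap with a soft-Coulomb repulsion (all model parameters are assumed). The loop alternates between building the mean-field potential from the current density and re-solving the single-particle eigenvalue problem, exactly in the spirit of Eq.(4.30).

```python
import numpy as np

# Toy self-consistent-field loop (model system, atomic-style units assumed).
n = 301
x = np.linspace(-8, 8, n)
dx = x[1] - x[0]
# finite-difference kinetic energy: -1/2 d^2/dx^2
T = (np.diag(np.full(n, 1.0)) - np.diag(np.full(n - 1, 0.5), 1)
     - np.diag(np.full(n - 1, 0.5), -1)) / dx**2
V_ext = 0.5 * x**2
soft = 1.0 / np.sqrt((x[:, None] - x[None, :])**2 + 1.0)   # soft-Coulomb kernel

phi0 = np.exp(-x**2 / 2)
phi0 /= np.sqrt(np.sum(phi0**2) * dx)
density = 2 * phi0**2                       # start from the noninteracting density
for it in range(500):
    V_H = soft @ density * dx               # mean-field (Hartree) potential
    # each electron feels the field of the *other* electron, i.e. half the density
    H = T + np.diag(V_ext + 0.5 * V_H)
    eps, U = np.linalg.eigh(H)
    phi = U[:, 0] / np.sqrt(np.sum(U[:, 0]**2) * dx)
    new_density = 2 * phi**2
    change = np.sum(np.abs(new_density - density)) * dx
    density = 0.5 * density + 0.5 * new_density   # simple linear mixing
    if change < 1e-10:
        break
```

The iteration converges to a density that generates the very potential it was computed from; the repulsion pushes the orbital energy well above the noninteracting value 1/2.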

3. DENSITY FUNCTIONAL THEORY



In the Hartree-Fock formalism the many-body ansatz for the wave function
is written in the form of a Slater determinant and the solution is obtained in a
self-consistent way. Similar to the Hartree idea, an effective independent-particle
Hamiltonian is derived here. The solutions are then self-consistent in the density

where the sum runs over the N spin-orbitals lowest in energy of the quasi
single-particle Hamiltonian

N being the number of electrons in the system.


The first three terms are the kinetic energy, the electrostatic interaction
between the electrons and the nuclei, and the self-consistent potential. Therefore
this equation differs from the corresponding Hartree-Fock equation only in the
exchange and correlation potential
The total energy E of the many-electron system is given by

with defined by the functional derivative

These equations were first derived by Kohn and Sham [6]. For more details see,
e.g., [7].
To apply the density functional method, the ground-state energy as a
function of the density has to be known. The difference to the Hartree-Fock
theory above is the replacement of the Hartree-Fock potential by the exchange-
correlation term, which is a function of the local density. The exact form of the
exchange-correlation functional is unknown. The simplest approxima-
tion is the local density approximation (LDA).
In the local density approximation the single-particle exchange-correlation
energy is taken locally at the position approximated by that of a
homogeneous electron gas, and

For a homogeneous electron gas the single-particle exchange-correlation energy
can be separated into an exchange and a correlation part

Rewriting the electron density in terms of the Wigner-Seitz radius we obtain

and

and by Monte Carlo computations [8]

Hence we are able to compute the exchange-correlation energy and potential


term

Of course the local density approximation is only a crude approximation,


which can be further improved by gradient terms in the density, pair correlation
terms and so forth. For more details see [9].
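For the exchange part the LDA expressions are known in closed form (atomic units): the per-particle exchange energy of the homogeneous electron gas is ε_x(n) = −(3/4)(3n/π)^{1/3}, and the exchange potential follows from the functional derivative as v_x = d(n ε_x)/dn = −(3n/π)^{1/3} = (4/3)ε_x. The sketch below checks this derivative relation numerically at an assumed sample density.

```python
import numpy as np

# LDA exchange for the homogeneous electron gas (atomic units).
def eps_x(n):
    """Exchange energy per particle."""
    return -0.75 * (3.0 * n / np.pi) ** (1.0 / 3.0)

def v_x(n):
    """Exchange potential v_x = d(n*eps_x)/dn = (4/3)*eps_x."""
    return -((3.0 * n / np.pi) ** (1.0 / 3.0))

# verify the functional derivative numerically at a sample density
n0 = 0.05          # sample density (illustrative value)
h = 1e-6
v_numeric = ((n0 + h) * eps_x(n0 + h) - (n0 - h) * eps_x(n0 - h)) / (2 * h)
```

The central-difference derivative of the exchange energy density reproduces the closed-form potential, mirroring the functional-derivative definition of the Kohn-Sham potential quoted above.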

4. THE VIRIAL THEOREM


The virial theorem provides exact relations between the energy and the
expectation value of the kinetic and potential energy operator. In this section
we will derive the virial theorem, discuss its physical consequence and its
numerical usefulness.

4.1 THE EULER AND THE HELLMAN-FEYNMAN


THEOREM
The Euler Theorem. A function is called homogeneous
of degree s if

For example, the harmonic oscillator in n dimensions is homogeneous of degree
"2", and the two-particle Coulomb potential is homogeneous of degree "–1".

Euler’s theorem states that any function f which is homogeneous of degree
s satisfies

The Hellman-Feynman Theorem. Let be a Hermitian operator which


depends on a real parameter and

with
The Hellman-Feynman theorem states that

This relation can be derived easily:


By definition we have

If we differentiate this relation with respect to we obtain

which proves the theorem above.
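The theorem is easy to check numerically. In the sketch below (a finite-difference grid and the λ-dependent oscillator H(λ) = p²/2 + λx²/2 are illustrative choices, with ħ = m = 1) the derivative of the ground-state energy with respect to λ is compared with the expectation value ⟨ψ|∂H/∂λ|ψ⟩ = ⟨x²/2⟩; for the discretized Hamiltonian the two agree essentially to machine precision.

```python
import numpy as np

n = 400
x = np.linspace(-10, 10, n)
dx = x[1] - x[0]
# finite-difference kinetic energy: -1/2 d^2/dx^2
T = (np.diag(np.full(n, 1.0)) - np.diag(np.full(n - 1, 0.5), 1)
     - np.diag(np.full(n - 1, 0.5), -1)) / dx**2

def ground(lam):
    E, U = np.linalg.eigh(T + np.diag(lam * x**2 / 2))
    psi = U[:, 0] / np.sqrt(np.sum(U[:, 0]**2) * dx)
    return E[0], psi

lam, h = 1.0, 1e-4
E0, psi = ground(lam)
dE_num = (ground(lam + h)[0] - ground(lam - h)[0]) / (2 * h)   # dE/d(lambda)
dE_hf = np.sum(x**2 / 2 * psi**2) * dx                          # <dH/d(lambda)>
```

For the continuum oscillator E(λ) = √λ/2, so both quantities should also be close to 1/(4√λ) = 0.25 up to discretization error.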


For the mean value of the commutator of with any operator  in a
Hamiltonian eigenstate we obtain immediately

since

4.2 THE VIRIAL THEOREM: THEORY AND


EXAMPLES
4.2.1 DERIVATION
In classical mechanics the derivation of the virial theorem is based on the
time average of the relevant phase space functions. In quantum dynamics we
mimic this ansatz by studying the expectation or mean values.
Let (or in general with twice differentiable and

From

and the Schrödinger equation

we obtain

and with Eq. (4.49)


hence

For we obtain the virial theorem:

with the kinetic energy operator. Using Eq.(4.51b) we find for the harmonic
oscillator

and for the (one-dimensional) hydrogen atom

Setting

leads to the hypervirial theorem

useful in perturbation expansions.
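The oscillator relation can be verified directly on a grid (discretization parameters assumed; ħ = m = ω = 1): for the harmonic oscillator, whose potential is homogeneous of degree 2, the ground state should give ⟨T⟩ = ⟨V⟩ = E/2 up to discretization error.

```python
import numpy as np

n = 600
x = np.linspace(-12, 12, n)
dx = x[1] - x[0]
# finite-difference kinetic energy: -1/2 d^2/dx^2
T = (np.diag(np.full(n, 1.0)) - np.diag(np.full(n - 1, 0.5), 1)
     - np.diag(np.full(n - 1, 0.5), -1)) / dx**2
V = np.diag(0.5 * x**2)
E, U = np.linalg.eigh(T + V)
psi = U[:, 0]                      # normalized eigenvector (psi @ psi = 1)
T_mean = psi @ T @ psi
V_mean = psi @ (0.5 * x**2 * psi)
```

Both expectation values come out close to E/2 = 0.25, and their sum reproduces the total energy exactly by construction.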

4.2.2 THE VIRIAL THEOREM APPLIED TO MOLECULAR


SYSTEMS
Consider an arbitrary molecule composed of nuclei and q electrons. We
shall denote by the classical positions of the nuclei, and by and the
positions and momenta of the electrons. We shall use the Born-Oppenheimer
approximation here, considering the positions of the nuclei as given non-
dynamical parameters. The Born-Oppenheimer approximation is based on the
great difference between the masses of the electrons and the nuclei in a molecule,
which allows one to consider the nuclear motion as frozen compared to the
electronic motion. Therefore only the electronic coordinates and momenta are
treated as quantum operators.
In the Born-Oppenheimer approximation the Hamiltonian is given by

with

where the first two potential terms are quantum operators due to the electron
interaction and the only effect of the last term is to shift all energies equally.
The second potential operator depends parametrically on the position of the
nuclei. This position is derived by minimizing the total energy. The electronic
Hamiltonian is given by

and depends parametrically on the nuclear positions with eigenenergies

and wavefunctions
Due to the Euler theorem the potential is homogeneous of order –1, and due to
the Hellman-Feynman theorem the Hamiltonian satisfies To
derive the virial theorem for molecules we start by computing this commutator.

Therefore

because due to the Born-Oppenheimer approximation depends only para-
metrically, via the potential term, on the nuclei’s positions. Application of the
Hellman-Feynman theorem results in

and because obviously



we find for the virial theorem applied to molecules

By this simple result we are able to compute the average kinetic and potential
energies from the variation of the total energy as a function of the position of
the nuclei. In analogy to the derivation above we can also derive the virial
result for the electronic potential energy

(Of course due to the Born-Oppenheimer approximation the total kinetic energy
is exactly the kinetic energy of the electronic motion.)

4.2.3 EXAMPLE
For diatomic molecules the energy depends only parametrically on the
internuclear distance and we obtain from Eqs.(4.60a,4.60b)

because is homogeneous of order 1 and thus

Loosely speaking, the chemical bond is due to a lowering of the electronic
potential energy as a function of the nuclear distance. Let be the total
energy of the system when the nuclei are infinitely far apart from each other.
To build a stable molecule or molecule-ion, the total energy has to have a
minimum at a finite nuclear distance Thus due to the virial theorem

and at infinite distance



Therefore stability of the molecular system necessitates

The molecule-ion. As an application of the virial theorem we will
discuss the molecule-ion. The coordinates are shown in Fig.(2.3) and the
Hamiltonian is given by

where the nuclear distance serves as parameter.


In the limit "electron close to the first nucleus, second nucleus far away"
respectively "electron close to the second nucleus, first nucleus far away"
the total energy of the molecule-ion is approximately given by the hydrogen
ground-state energy. Therefore we choose as trial states a linear combination of
hydrogen-like ground states

with

for hydrogen. The variational task is therefore to compute the optimized
nuclear distance of lowest energy for the selected trial state, i.e., we
have to calculate the stationary states of

From

we obtain

and a non-trivial solution exists for

The wavefunctions are normalized and real



The following calculations are simplified by using elliptic coordinates (2.41),


with
For the overlap S we obtain

For symmetry reasons the Hamiltonian matrix satisfies

with – the hydrogen ground state energy and C the Coulomb integral. The
Coulomb integral can be calculated easily in elliptic coordinates

and thus

Note

and hence, as expected, the energy equals the hydrogen ground state energy for
infinitely distant nuclei.

with

the exchange integral and hence



To compute the energy eigenvalues we have to solve the energy determinant


which is given by

using the abbreviations

this results in

which holds

Hence the energy becomes the hydrogen ground-state energy for infinitely
distant nuclei; denotes the bonding and the antibonding case. The
eigenstates are given by

where the ground state (bonding state) is given by , Eq.(4.78b), and is
symmetric under particle exchange.
In Fig.(4.1) the Coulomb, exchange and overlap integrals are plotted. For
infinitely distant nuclei the overlap and exchange integrals go to zero. This is
obvious, because the system then consists of a hydrogen atom and an
additional infinitely distant proton, and the interaction between the two becomes
arbitrarily small. In the infinite-distance limit the Coulomb integral converges
to the Coulomb potential and thus goes like
It is instructive to calculate the potential and kinetic energy directly. But note
that this direct computation will lead to wrong results! The potential expectation
value is given by

and the kinetic energy by



Therefore for the kinetic energy difference we obtain

Thus by direct computation the kinetic energy difference becomes smaller than
zero, because the exchange integral is smaller than the overlap integral, see
Fig.(4.1). But this result is in contrast to our statement above that for binding
states the kinetic energy will increase when the two components are brought
closer together from infinity. Therefore, in general, variational results might
lead to wrong values for particular operators. Variational results are only
guaranteed to minimize the energy with respect to the class of trial states. The
only way to compute the correct kinetic or potential contribution is given by
the virial theorem. From the virial theorem we obtain

and thus the correct behavior of the mean values of the kinetic and the potential
energy operators.

5. QUANTUM MONTE CARLO METHODS


Monte Carlo methods are based on using random numbers to perform numer-
ical integration. The term quantum Monte Carlo does not refer to a particular
numerical technique but rather to any method using Monte Carlo type inte-
grations. Thus the question is for which systems are Monte Carlo methods
effective? To answer this question, let us first consider standard numerical
integration based on an order-k algorithm, with an error of the order h^k (step
size h) for a d-dimensional integral. Let us assume for simplicity that the
integration volume is a hypercube with length l. If the hypercube contains N
integration points, the step size of our conventional numerical integration
method is h = l·N^(-1/d), and therefore the error scales as¹ N^(-k/d). The Monte Carlo
integration converges very slowly, with an error of order N^(-1/2), but this error is
independent of the dimension. Thus we obtain the same scaling behavior
for d = 2k, and hence Monte Carlo integration becomes more efficient
than the selected conventional order-k algorithm when d > 2k. Therefore
quantum Monte Carlo methods are efficient for solving the few- or many-body
Schrödinger equation.

5.1 MONTE CARLO INTEGRATION


5.1.1 THE BASIC MONTE CARLO METHOD
One of the most impressive Monte Carlo simulations is the experimental
computation of the number π, which was presented at the exhibition “phenomena”
in Zurich, Switzerland, 1984. Suppose there is a square with side length l and
an interior circular hole with radius r. Each visitor of the “phenomena”
was advised to throw a ball into the direction of the square without aiming at
the hole. Thus each visitor is a random number generator. The surface of the
square is l², the surface of the interior hole is πr², and hence the
probability p to hit the hole is

p = πr² / l².

Counting the number of balls flying through the hole will give us (a few
thousand visitors later) a good approximation of the number π.
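The “phenomena” experiment can be imitated on a computer. The following Python sketch (our own illustration, not from the text) throws random points into the unit square and counts hits inside the inscribed quarter circle, so that the hit probability is π/4:

```python
import random

def estimate_pi(n_throws, seed=0):
    """Estimate pi by throwing points uniformly into the unit square
    and counting how many land inside the inscribed quarter circle."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_throws):
        x, y = rng.random(), rng.random()
        if x * x + y * y < 1.0:   # inside the quarter circle of radius 1
            hits += 1
    # p = (area of quarter circle) / (area of square) = pi/4
    return 4.0 * hits / n_throws
```

With a hundred thousand “visitors” the estimate is typically good to a few parts in a thousand, in line with the N^(-1/2) error law.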
Suppose we want to calculate the integral of an integrable function f over
the real interval [a, b]

Standard methods are the trapezoidal rule [4]


Approximation Techniques 119

where we have used N equally spaced subintervals of width h = (b − a)/N and applied
the trapezoidal rule to each of the intervals. The trapezoidal rule is exact for linear
polynomials. A very efficient method consists of repeating the integration
technique based on Eq.(4.82) for successive values of h, each having half the
size of the previous one. This gives a sequence of approximations, which can
be fitted to a polynomial; the value of this polynomial at h = 0 yields a very
accurate approximation to the exact result. This accurate quadrature is called
Romberg integration.
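The composite trapezoidal rule and its Romberg extrapolation can be sketched as follows (an illustrative Python fragment; the function names are ours):

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal subintervals of width h."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def romberg(f, a, b, levels=5):
    """Romberg integration: successively halve h and extrapolate to
    h = 0 via Richardson extrapolation of the trapezoidal results."""
    r = [[trapezoid(f, a, b, 2 ** k)] for k in range(levels)]
    for k in range(1, levels):
        for j in range(1, k + 1):
            r[k].append(r[k][j - 1]
                        + (r[k][j - 1] - r[k - 1][j - 1]) / (4 ** j - 1))
    return r[-1][-1]
```

Already a handful of halving levels gives near machine-precision results for smooth integrands.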
Suppose we want to integrate a function f over a region with a complicated
shape. Remember the experiment above. Having this idea in mind it should be
no problem: the first step will be to find a simpler region which contains the
integration area as a subarea. Creating random samples uniformly distributed in
the simpler region, we have to decide whether each sample lies inside or outside the
integration area. If it lies inside it will count, if it is outside it does not
contribute. Therefore we obtain

where the integrand is taken as f inside the integration area and otherwise zero. The basic theorem of the Monte


Carlo integration [10] of a function f over a multidimensional volume V is

with the expectation value of the function defined by its arithmetic mean


over N sample points.
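A minimal implementation of this basic theorem might look as follows (illustrative Python; the second return value is the one-sigma error estimate from the variance of the integrand):

```python
import random

def mc_integrate(f, volume, sampler, n, seed=0):
    """Basic Monte Carlo quadrature: integral ~ V * <f>, together with
    the one-sigma error estimate V * sqrt((<f^2> - <f>^2) / N)."""
    rng = random.Random(seed)
    s = s2 = 0.0
    for _ in range(n):
        y = f(sampler(rng))
        s += y
        s2 += y * y
    mean = s / n
    var = s2 / n - mean * mean
    return volume * mean, volume * (var / n) ** 0.5

# example: integral of x^2 over [0, 2], exact value 8/3
value, error = mc_integrate(lambda x: x * x, 2.0,
                            lambda rng: 2.0 * rng.random(), 100000)
```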

5.1.2 THE METROPOLIS STRATEGY


The discussion above shows that Monte Carlo integration involves two basic
operations: generating randomly distributed vectors over the integration
volume, and evaluating the function at these values. While the second task is
straightforward, the generation of uniformly distributed random numbers is not
obvious. A quick and dirty method [4] is given by the recurrence relation

where m is called the modulus and a and c are positive integers. If a, c and
m are properly chosen, Eq. (4.84) will create a sequence with period m, where
the initial value is called the seed. In this case, all integers between 0 and
m − 1 will occur.
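A sketch of such a “quick and dirty” generator in Python (the multiplier and increment below are one common textbook choice, not necessarily the values intended by Eq.(4.84)):

```python
def lcg(seed, a=1103515245, c=12345, m=2 ** 31):
    """Linear congruential generator x_{n+1} = (a*x_n + c) mod m,
    yielding pseudo-random floats in [0, 1).  The constants a, c, m
    are one common choice; the initial value is the seed."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m
```

The same seed always reproduces the same sequence, which makes the deterministic character of such generators explicit.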

Suppose we have generated a uniformly distributed sequence of random


numbers. How can we obtain from this sequence a different probability
distribution? Let p(x) and p(y) be the probability distributions of x and of the
transformed variable y, respectively. From

we obtain obviously

Therefore by selecting a suitable transformation function we can compute other


probability distributions from the ‘quick and dirty’ method mentioned above.
As an example let us assume that For an uniformly distributed
sequence the probability is which leads immediately to a Poisson
distribution:

But note, the sequences of numbers created above are not really randomly
distributed. The computer is a deterministic machine and thus most methods
create only a pseudo-random number sequence of finite period.
Now let us return to the basic Monte Carlo theorem above. It states that the integral over a
d-dimensional scalar function f can be approximated by

where we have scaled the coordinates such that V = 1. The uncertainty in the
Monte Carlo quadrature is proportional to the variance of the integrand. By
introducing a positive scalar weight function

we can rewrite the integral above

and change the variables so that the Jacobian determinant becomes

hence the integral



The potential benefit of the change of variables is to select the scalar weight
function such that the squared variance

will be minimized and thus the Monte Carlo integration optimized. If we choose
the weight function such that it behaves qualitatively like the integrand, the variance will be reduced and
the approximation improved. This method is called importance sampling Monte
Carlo integration.
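As an illustration of importance sampling, the following sketch estimates the integral of exp(x) over [0, 1] by drawing samples from the density p(x) = (1 + x)/1.5, which roughly mimics the growth of the integrand; the choice of p and the inverse-CDF formula are our own illustrative example:

```python
import math
import random

def importance_sample(n, seed=0):
    """Estimate I = integral_0^1 exp(x) dx by sampling x from the
    density p(x) = (1 + x)/1.5 and averaging the ratio f(x)/p(x)."""
    rng = random.Random(seed)
    s = 0.0
    for _ in range(n):
        u = rng.random()
        x = -1.0 + math.sqrt(1.0 + 3.0 * u)     # inverse CDF of p(x)
        s += math.exp(x) / ((1.0 + x) / 1.5)    # f(x) / p(x)
    return s / n
```

Because the ratio f/p varies far less than f itself, the variance (and hence the error) is substantially reduced compared to uniform sampling.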
Now let us go one step further. The Monte Carlo quadrature can be carried
out using any set of sample points. But obviously the efficiency will depend
on the selected probability distribution. A uniform random distribution will
collect values of the function in areas where the function nearly vanishes as
well as in areas where the function dominates the sum. Thus the convergence
will be poor. The transformation above leads to a sample which concentrates
the points where the function being integrated is large. The essential idea
of the Metropolis algorithm is to select a random path in space such that the
convergence is improved, which means avoiding those areas where the function
does not noticeably contribute to the total sum. Suppose that the current position
on the random walk is

and that the random move would lead to the new position

Each of these positions in space is associated with a certain probability


The move from one position to the next is performed by a walker. If
the move is accepted the walker will move to the new position, otherwise the walker
remains where it is and the next trial step will be selected. In this way it will
be possible for the walker to explore the configuration space of the problem.
Thus the only remaining task is a rule to decide when the walker will
move and when the walker will stay and take the next trial.
Let us introduce the probability to move from the old position to the new one. In
the Metropolis algorithm the probability of accepting a random move is given
by

Thus if a number in the range [0, 1], either arbitrarily selected or chosen
randomly at the beginning of the walk, is smaller than this acceptance probability, the move
will be accepted, otherwise rejected. Pictorially the walker will therefore
mainly move uphill. Practically the computation starts by putting N walkers at

random positions, then selecting a walker at random. This walker will be shifted
to a new position and the Metropolis criterion will be evaluated to decide whether the
move is accepted or rejected. This procedure will be repeated until all
walkers have moved through their individual random paths. Therefore the summation
runs over many walkers and their individual paths. This technique guarantees
that a selected random path will not mainly explore the region where the
function nearly vanishes, and on the other hand the selection of many random
walkers guarantees that the whole area of interest will be explored.
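A one-dimensional Metropolis walk might be sketched as follows (illustrative Python; the target distribution, a standard Gaussian, is our own choice):

```python
import math
import random

def metropolis_chain(log_p, x0, n_steps, step=1.0, seed=0):
    """Metropolis random walk: propose x' = x + uniform(-step, step)
    and accept with probability min(1, p(x')/p(x)); rejected moves
    repeat the old point in the chain."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        d = log_p(x_new) - log_p(x)
        if rng.random() < math.exp(min(0.0, d)):
            x = x_new                 # move accepted
        samples.append(x)
    return samples

# sample a standard Gaussian, p(x) proportional to exp(-x^2/2)
chain = metropolis_chain(lambda x: -0.5 * x * x, 0.0, 50000)
```

The chain spends most of its time where the target probability is large, which is exactly the concentration of sample points the text describes.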

5.2 THE VARIATIONAL QUANTUM MONTE CARLO


METHOD
In chapter 3.5 we studied the variational calculus to obtain the Hamiltonian
eigensolutions. The essential idea of the Rayleigh-Ritz variational principle
was to parameterize the wave function and to compute the minimum of the
energy as a function of the non-linear parameter. The necessary computation
of Hamiltonian expectation values is based on the evaluation of integrals. The
variational quantum Monte Carlo method is a combination of the variational
ansatz and the Monte Carlo integration.
The first step of the variational ansatz is to choose a trial wave function
with parameters and to compute the energy expectation value

With

the energy expectation value above can be rewritten as

with

the local energy, and



the stationary distribution.


The procedure is now as follows. The Metropolis algorithm is used to sample
a series of points in configuration space. The path is selected such that the
fraction is

At each point of these paths the local energy is evaluated and finally
the mean

computed. To optimize the computation a large collection of walkers is used. A


generalization to obtain also excited states via diffusion quantum Monte Carlo
can be found in [11] and an application to atomic systems in [12].
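As an illustration, the following sketch applies the variational quantum Monte Carlo idea to the one-dimensional harmonic oscillator (ħ = m = ω = 1) with the Gaussian trial function ψ_α(x) = exp(−αx²/2), for which the local energy is E_L = α/2 + x²(1 − α²)/2. This model problem and all parameter values are our own example, not taken from the text:

```python
import math
import random

def vmc_energy(alpha, n_steps=50000, step=1.0, seed=0):
    """Variational Monte Carlo for the 1D harmonic oscillator
    (hbar = m = omega = 1) with trial function exp(-alpha*x^2/2).
    A Metropolis walk samples |psi|^2 and the local energy
    E_L = alpha/2 + x^2*(1 - alpha^2)/2 is averaged along the path."""
    rng = random.Random(seed)
    x, e_sum = 0.0, 0.0
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # Metropolis test on |psi(x_new)|^2 / |psi(x)|^2
        if rng.random() < math.exp(-alpha * (x_new ** 2 - x ** 2)):
            x = x_new
        e_sum += 0.5 * alpha + 0.5 * (1.0 - alpha ** 2) * x * x
    return e_sum / n_steps
```

For α = 1 the trial function is the exact ground state: the local energy is constant, the variance vanishes, and the estimate equals 1/2 exactly. For other α the estimated energy lies above the exact ground state energy, as the variational principle demands.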

5.3 THE DIFFUSION QUANTUM MONTE CARLO


A further quantum Monte Carlo method is the so-called diffusion quantum
Monte Carlo method. The idea is to interpret the time-dependent Schrödinger
equation (1.3) as a diffusion equation with potential, by using its imaginary
time form

Expanding with respect to the Hamiltonian eigensolution, we


obtain

and for the imaginary-time evolution

In the long-time limit this state will converge to the state of lowest energy in
the wave function expansion. Thus, as long as the initial state is not orthogonal to the
Hamiltonian ground state, we obtain

Introducing an energy shift leads to



with the total potential. Note that for n-body systems the position as well as the gradient will be
3n-dimensional vectors. The potential term can be reinterpreted as a rate
term describing the branching processes.
The energy shift is an arbitrary parameter that affects only the unobservable
phase of the wave function but not its modulus, and can be understood as a
trial energy. If the energy shift is larger than the ground state energy, the solution will grow
exponentially in time, and for a shift below the ground state energy it will decrease
exponentially. Thus solving Eq.(4.95) for various values of the shift and monitoring
the temporal behavior of the solution could help us to optimize the energy shift. If
the solution remains stationary we would have found the correct energy and the
correct ground state wave function.
A computer calculation could be set up in the following way: An initial
ensemble of diffusing particles representing the initial state is constructed. The
imaginary-time evolution is accomplished by considering a snapshot after an
imaginary-time step. The entire equation is then simulated by a combination
of diffusion and branching processes. For most problems this algorithm is
not satisfactory and a very inefficient way to solve the Schrödinger equation,
because whenever the potential becomes very negative, the branching process
blows up. This leads to large fluctuations in the number of diffusing particles
and thus to a large variance and hence an inaccurate estimate of the energy.
As already discussed above, importance sampling will reduce the variance
and hence improve the approximation. Thus the first step is to find a weight
function to optimize the Monte Carlo steps. This will be achieved with the help
of a guiding function. For this purpose a trial approximation to the
ground state wave function is introduced and a new distribution

defined. The trial wave function could be derived, e.g., from a Hartree-Fock
calculation. Inserting Eq.(4.96) into Eq.(4.93) yields a Fokker-Planck type
of equation

with

the so-called quantum force, and

the branching term, which gives rise to the important guiding function

What do we gain by this method? If the trial function were an exact eigenstate,
the local energy would become independent of the position. Thus if the trial function
is a reasonable approximation to the ground state, the local energy will be sufficiently
flat. Guide functions were first introduced by Kalos et al. [13].
Equation (4.97) is a drift-diffusion equation for the new distribution. The branching
term is proportional to the local energy, which for a trial function close to a
Hamiltonian eigenstate need not become singular when the potential does. Note
that for a true Hamiltonian eigenstate the variance in the local energy would vanish.
The procedure outlined here might fail in some cases. The distribution of walkers can only
represent a density which is positive everywhere. The ground state of a boson
system is everywhere positive, but this would not be true for fermion systems.
This leads to the so-called fermion problem. Besides reducing the fluctuations
in the number of diffusing particles, the trial function has an additional important
role for fermionic systems: it determines the position of the nodes of the final
wave function. Thus the accuracy of the initial nodes due to the trial function
determines how good the estimate of the ground state energy will be. This can be
understood easily by considering the long imaginary-time limit

Thus the trial function has to vanish at the nodes of the exact state to obtain the
true fermionic ground state in the imaginary-time limit. This approximation is called the
fixed-node method. Normally the positions of the zeros are only approximately
known. In these cases the nodes are estimated and empirically varied until a
minimum of the energy is found. This advanced variant is called the released-node
method [11].
The concrete algorithm runs as follows:
We start with an initial set of random walkers distributed according to the trial distribution
and let the walkers diffuse for an imaginary time step with a certain transition
probability given by

with the approximate Green function in the limit of a vanishing


energy shift

In order to deal with the nodes in the fermion ground state we must use the
fixed-node or released-node approximation. A trial displacement is given by

with the position a 3n-dimensional vector for n particles in 3 space dimensions, the
displacement a 3n-dimensional vector, and the time step the interval for which the particles
diffuse independently. The three-dimensional subspace displacement vector
for each particle is a Gaussian random variable with a mean of zero and a
variance proportional to the time step. This trial displacement is accepted with probability

Then branching is performed according to the branching probability

Next, produce

copies of the walker, with a uniform random number. The steps


described above should be repeated several times before computing the mean
of the local energy and the new trial energy from

This procedure is then repeated until there is no longer a detectable trend in the
mean. At this point the steady state has been reached. More details
about quantum Monte Carlo methods in general can be found in [14] and a
brief and graphic presentation in [15].
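A deliberately minimal diffusion Monte Carlo sketch, again for the one-dimensional harmonic oscillator and without importance sampling (the model problem and all parameter values are our own ad hoc choices): walkers diffuse, branch according to exp(−Δτ(V − E_T)), and the trial energy is steered to keep the population stable. Here the energy is measured through the average potential over the walkers, which for a constant guiding function converges to the ground state energy:

```python
import math
import random

def simple_dmc(n_target=400, n_steps=1500, dt=0.05, seed=1):
    """Minimal diffusion Monte Carlo for the 1D harmonic oscillator
    (hbar = m = omega = 1) without importance sampling.  Walkers
    diffuse with variance dt and branch with weight exp(-dt*(V - E_T));
    E_T is steered to keep the population near n_target.  The energy
    is estimated from <V> over the walkers, which for a constant
    guiding function converges to E_0 = 0.5."""
    rng = random.Random(seed)
    walkers = [rng.uniform(-1.0, 1.0) for _ in range(n_target)]
    e_t, v_sum, n_meas = 0.5, 0.0, 0
    for step in range(n_steps):
        new_walkers = []
        for x in walkers:
            x += rng.gauss(0.0, math.sqrt(dt))              # free diffusion
            w = math.exp(-dt * (0.5 * x * x - e_t))         # branching weight
            for _ in range(min(int(w + rng.random()), 3)):  # stochastic copies
                new_walkers.append(x)
        walkers = new_walkers or [0.0]                      # guard against extinction
        vbar = sum(0.5 * x * x for x in walkers) / len(walkers)
        # population control: nudge E_T toward a stable population
        e_t = vbar + 0.1 * math.log(n_target / len(walkers))
        if step > n_steps // 2:                             # measure after equilibration
            v_sum += vbar
            n_meas += 1
    return v_sum / n_meas
```

The estimate carries a finite-time-step bias of order Δτ and population-control noise, so it reproduces the exact value 1/2 only approximately; this is exactly the kind of fluctuation problem that importance sampling is designed to suppress.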

5.4 THE PATH INTEGRAL QUANTUM MONTE


CARLO
Fundamental path integral concepts. Instead of describing quantum systems
by their Schrödinger equation there is the alternative of quantizing a classical
system via the Feynman path integral, which is equivalent to Schwinger’s action
principle. The first step on the way to quantizing a system is to rewrite the
problem in Lagrangian form. Assume that we have prepared the wave function
at some time and position and we want to observe the wave function at a later
time and position. Both wave functions are connected according to

where the integral kernel follows the time dependent Schrödinger equation

Thus the integral kernel is a retarded propagator which describes the
preparation of the quantum system at the initial point and its observation at the
final point. Following Feynman, this propagator can be obtained by

with the classical Lagrangian. For causality reasons the time
difference always has to be positive.
The propagator K fulfills the important Kolmogoroff-Smoluchovski or fold-
ing property

Thus we could introduce an infinite number of arbitrarily small intermediate
steps and approximate the propagator piecewise in each intermediate
step by the free particle Lagrangian of the same total energy, due to Eq.(4.105).
Note that the integration in Eq.(4.106) runs over all possible space vectors.
Thus, after putting all pieces together, all paths, not only the classically allowed
one, will contribute to the Feynman path integral. In the semiclassical limit
these paths will reduce to the classically allowed one. It turned out that the path
integral formalism becomes important especially in studying quantum chaos or
quantum systems which can only be described by density operators of mixed
states. Examples are quantum systems which are in contact with a heat
bath of temperature T > 0. Due to the importance of paths in the Feynman
formalism it is straightforward to try to merge path integral ideas with Monte
Carlo methods. For more details about Feynman path integrals see [16] and for
semiclassical applications [17].
For quantum systems which are in contact with a heat bath of temperature
the density matrix in the coordinate representation (see Chapt. 1.5.1)
is given by

with

The expectation value of an observable Â is given by the trace over the density
matrix. Evidently the normalization is the canonical partition function. If we knew
the quantum density function, the road would be open for a Monte Carlo simulation
in the spirit of the classical case. However, the explicit form of the density matrix is
usually unknown.

Density matrix of a free particle. In the absence of an external potential


the Hamiltonian of a one-dimensional particle with mass m is purely kinetic.
Let us assume that the particle is confined to an interval [–L/2, L/2]. Then
the eigensolutions are given by

Therefore we obtain for the density matrix

In the limit L → ∞ the discrete wave number will become continuous and
the summation can be rewritten as an integral

and thus the density matrix of a free particle becomes

Path integral quantum Monte Carlo. The basic idea of the path integral
quantum Monte Carlo method is to express the density matrix of any given
system in terms of the free particle density Eq.(4.111). This is justified by the
following integral transformation

Thus in the equation above the density matrix for a system with temperature T
will be computed with the help of the density matrix of a system with double
the original temperature. But the higher the temperature, the smaller the effect
of the potential, and thus the better the approximation above becomes.
Due to the derivation above we can rewrite the density matrix by

The number f of intermediate steps is called the Trotter number, which should
be chosen such that the temperature becomes high compared to the energy
spacing. For a particle in an external potential Û the following approximation
turned out to be useful

This approximation would hold exactly for commuting kinetic and potential operators.


For the diagonal element we obtain

with

The equations above are derived for a single particle, but can easily be
generalized to n particles. In this case the coordinate becomes a 3n-dimensional vector and
the pre-factor in Eq.(4.114) has to be taken to the corresponding power.
Quantum statistical averages, especially the partition function2 Z

can be obtained by playing the classical Monte Carlo game. All we have to
do is to replace the n-particle vector by the corresponding vector of all n·f beads, with f the Trotter
number. Thus the algorithm is:
Start with a random position. The external potential Û of a particle will be
reinterpreted as

with f the Trotter number, and the effective potential given by Eq.(4.114). The


random path is given by displacing the 3f-dimensional vector as a whole and
each of its 3-dimensional subvectors individually. For the new
configuration the change of the effective potential is computed and a Metropolis
decision is initiated. Let u be a random number in [0,1]. If

the trial displacement will be accepted. Repeat until the arithmetic
mean is converged. For more details about path integral quantum
Monte Carlo see, e.g., [18].
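A primitive path integral Monte Carlo sketch for the one-dimensional harmonic oscillator (our own illustration with ħ = m = ω = 1; the ring polymer of f beads represents the Trotter decomposition above, and single-bead Metropolis moves sample the paths):

```python
import math
import random

def pimc_x2(beta=4.0, f=20, n_sweeps=20000, width=0.5, seed=0):
    """Primitive path integral Monte Carlo for the 1D harmonic
    oscillator: the density matrix at temperature 1/beta is sampled
    as a ring polymer of f beads; <x^2> is accumulated over the beads."""
    rng = random.Random(seed)
    tau = beta / f                       # imaginary-time slice
    x = [0.0] * f

    def bead_action(k, xk):
        left, right = x[(k - 1) % f], x[(k + 1) % f]
        spring = ((xk - left) ** 2 + (right - xk) ** 2) / (2.0 * tau)
        return spring + tau * 0.5 * xk * xk      # kinetic springs + potential

    x2_sum, n_meas = 0.0, 0
    for sweep in range(n_sweeps):
        for k in range(f):
            trial = x[k] + rng.uniform(-width, width)
            d = bead_action(k, x[k]) - bead_action(k, trial)
            if rng.random() < math.exp(min(0.0, d)):
                x[k] = trial                     # Metropolis acceptance
        if sweep > n_sweeps // 2:                # measure after equilibration
            x2_sum += sum(xi * xi for xi in x)
            n_meas += f
    return x2_sum / n_meas
```

At β = 4 the result should lie near the quantum value ⟨x²⟩ = (1/2) coth(β/2) ≈ 0.52 rather than the classical value 1/β = 0.25, up to Trotter and sampling errors, which illustrates that the ring polymer indeed captures the zero-point motion.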

Notes
1.

2 The partition function is given by

and thus the energy expectation value by

Therefore we can also use the partition function to obtain the energy expec-
tation value.

References

[1] Pekeris, C. L., (1958) “Ground state of two-electron atoms”, Phys. Rev.
112, 1649 – 1658
[2] Ericsson, T., and Ruhe, A., (1980) “The spectral transformation Lanczos
method for the numerical solution of large sparse generalized symmetric
eigenvalue problems”, Math. Comput. 35, 1251 – 1268

[3] Saad, Y., (1992). Numerical Methods for Large Eigenvalue Problems:
Theory and Algorithms Wiley, New York

[4] Bethe, H. A. and Jackiw, R., (1986). Intermediate Quantum Mechanics


Addison-Wesley, Reading
[5] Lindgren, I. and Morrison, J. (1986). Atomic Many-Body Theory Springer-
Verlag, Berlin
[6] Kohn, W. and Sham, L. J. (1965) “Self-consistent equations including exchange
and correlation effects”, Phys. Rev. 140, A1133 – A1138
[7] Lundqvist, S. and March, N. (1983). Theory of the Inhomogeneous Electron
Gas Plenum, New York

[8] Perdew, J. and Zunger, A. (1981) “Self-interaction correction to density-


functional approximations for many-body systems”, Phys. Rev. B 23, 5048
– 5079
[9] Dreizler, R. M. and Gross E. K. U. (1990). Density functional theory
Springer Verlag, Berlin

[10] James, F., (1980) “Monte Carlo theory and practice”, Rep. Prog. Phys.
43, 1145 – 1189

[11] Ceperley, D. M. and Bernu, B., (1988) “The calculation of excited state
properties with quantum Monte Carlo”, J. Chem. Phys. 89, 6316 – 6328
[12] Jones, M. D., Ortiz, G., and Ceperley, D. M., (1997) “Released-phase
quantum Monte Carlo”, Phys. Rev. E 55, 6202 – 6210
[13] Kalos, M. H., Levesque, D., and Verlet, L., (1974) “Helium at zero tem-
perature with hard-sphere and other forces”, Phys. Rev. A 9, 2178 – 2195
[14] Kalos, M. H. (Ed.) (1984). Monte Carlo Methods in Quantum Problems
Reidel, Dordrecht
[15] Ceperley, D. and Alder, B., (1986) “Quantum Monte Carlo”, Science 231,
555 – 560.
[16] Feynman, R. P. and Hibbs, A. R. (1965). Quantum Mechanics and Path
Integrals McGraw-Hill, New York
[17] Gutzwiller M. C. (1990). Chaos in Classical and Quantum Mechanics
Springer Verlag, Berlin
[18] Ceperley, D. M. (1995) “Path integrals in the theory of condensed helium”,
Rev. Mod. Phys. 67, 279 – 355
Chapter 5

FINITE DIFFERENCES

The equations and operators in physics are functions of continuous variables,


real or complex. Computers are always finite machines and hence can only
provide rational approximations to these continuous quantities. Instead of
representing these values with the maximum accuracy possible, they are usually
defined on a much coarser grid and each single value with a smaller resolution,
specified by the user depending on the problem under consideration and the
computing facilities available.
In this chapter we will discuss some methods for discretizing differential
operators and integrating differential equations. We will not discuss in detail
the numerous solvers available for initial value problems of ordinary differential
equations, but list them briefly and focus on methods useful for the partial
differential Schrödinger equation.

1. INITIAL VALUE PROBLEMS FOR ORDINARY


DIFFERENTIAL EQUATIONS
Solvers for initial value problems are divided roughly into single-step and
multi-step, explicit and implicit methods. Single-step solvers depend only on
one predecessor and (nomen est omen) multi-step solvers on further steps
computed “in the past”. Explicit methods depend only on already calculated
solutions; implicit methods depend in addition on the present solution.
In the following we will denote the coordinate at time by

and the time step by


and we always assume that the differential equation under consideration is


formulated by

Of course we can rewrite any ordinary differential equation

as a system of first-order equations by making the substitutions

1.1 SIMPLE METHODS


1.1.1 THE EULER TECHNIQUE
The Euler methods are based on the definition of the derivative

Therefore the explicit Euler method is given by

and the implicit Euler method by

Implicit methods are often used to solve stiff differential equations. A mixture
of both is the trapezoidal quadrature

Although these methods seem to work quite well, they are unsatisfactory for most
systems of differential equations due to their low-order accuracy. Higher-order
methods usually offer a much more rapid increase of accuracy with
decreasing step size. One class is derived from Taylor series expansions
about the current point.

From

we obtain

and thus we arrive at
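The explicit and implicit Euler steps can be compared on the linear test equation y′ = λy, for which the implicit step can be solved in closed form (an illustrative Python sketch; the test problem is our own choice):

```python
def euler_explicit(f, y0, t0, t1, n):
    """Explicit Euler: y_{k+1} = y_k + h*f(t_k, y_k)."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = y + h * f(t, y)
        t += h
    return y

def euler_implicit_linear(lam, y0, t0, t1, n):
    """Implicit Euler for the linear test equation y' = lam*y:
    y_{k+1} = y_k + h*lam*y_{k+1}, solved as y_{k+1} = y_k/(1 - h*lam)."""
    h = (t1 - t0) / n
    y = y0
    for _ in range(n):
        y = y / (1.0 - h * lam)
    return y
```

For λ = −1 both variants converge to e^(−t), but the implicit step remains stable even for large step sizes, which is why implicit methods are preferred for stiff equations.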

1.1.2 2ND ORDER DIFFERENTIAL EQUATIONS


A simple method to integrate second order differential equations

is given by the Verlet algorithm. The Verlet algorithm is explicitly time-reversal
invariant and is often used in molecular dynamics simulations. The starting point
is again the Taylor expansion

The Verlet method above is often reformulated in its leap-frog form. To derive
the leap-frog algorithm we start with the inter-step velocity

and

and thus

Note that this leap-frog form is exactly equivalent to the Verlet algorithm and
thus also of 4th order.
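A sketch of the Verlet iteration for an acceleration field a(x) (the harmonic oscillator test case is our own choice):

```python
def verlet(accel, x0, v0, dt, n_steps):
    """Position Verlet: x_{k+1} = 2*x_k - x_{k-1} + dt^2 * a(x_k),
    started with one Taylor step from the initial position and velocity."""
    x_prev = x0
    x = x0 + v0 * dt + 0.5 * accel(x0) * dt * dt   # first step from Taylor series
    traj = [x_prev, x]
    for _ in range(n_steps - 1):
        x_next = 2.0 * x - x_prev + dt * dt * accel(x)
        x_prev, x = x, x_next
        traj.append(x)
    return traj

# harmonic oscillator a(x) = -x with exact solution x(t) = cos(t)
traj = verlet(lambda x: -x, 1.0, 0.0, 0.01, 1000)
```

Note that the velocity never enters the iteration itself; if needed it can be recovered from the inter-step differences, as in the leap-frog formulation.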

2. THE RUNGE-KUTTA METHOD


2.1 DERIVATION
The Runge-Kutta algorithms form one of the most widely used classes of methods.
To explain the basic idea we will derive the lowest-order Runge-Kutta algorithm and
list the 4th order Runge-Kutta. The advantage of these single-step algorithms
is the possibility of easily optimizing the step size after each single step. Thus
we will in addition discuss the Runge-Kutta-Fehlberg method, which allows
optimizing the step size while minimizing the necessary computations.

is an exact solution of the 1st order differential equation (5.3). This integral
can be approximated by the mid-point of the integration interval

From the Taylor expansion

we obtain the lowest-order Runge-Kutta algorithm

Runge-Kutta algorithms are explicit single-step algorithms; the m-th order
version is exact to order m, and thus the error is of order h^(m+1). One of the
most widely used versions is the 4th order Runge-Kutta algorithm.

4th order Runge-Kutta algorithm. A 4th-order algorithm has a local
accuracy of order h^5 and has been found by experience to give the best balance
between accuracy and computational effort. The necessary auxiliary calcula-
tions are

to yield the next step

In general the auxiliary variables are given by



with

and the next step by

with

The accuracy is of order h^5. How, then, could we
optimize the integration step size?
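The classical 4th order Runge-Kutta step reads, in an illustrative Python form:

```python
def rk4_step(f, t, y, h):
    """One classical 4th order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2.0, y + h / 2.0 * k1)
    k3 = f(t + h / 2.0, y + h / 2.0 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def rk4_solve(f, y0, t0, t1, n):
    """Integrate from t0 to t1 with n equal RK4 steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = rk4_step(f, t, y, h)
        t += h
    return y
```

Even a modest number of steps already gives far higher accuracy than the Euler variants above, reflecting the h^5 local error.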

2.2 THE RUNGE-KUTTA-FEHLBERG METHOD


In Runge-Kutta schemes of m-th and (m+1)-th order, respectively, the solutions satisfy

with unknown error coefficients. Because

Let ε be the maximum error allowed. Thus

and hence

The advantage of the method above is the possibility of optimizing the integration
step size. The disadvantage is that we have to evaluate the auxiliary variables
for both orders separately. This problem can be overcome by selecting Runge-Kutta
algorithms which use the same auxiliary variables in both orders. The
Runge-Kutta-Fehlberg algorithm [1] is an example of such a combination for 4th and
5th order.

The auxiliary variables are

and the successors are

The error is given by

and thus the new optimized step size by

where we have reduced the allowed error for safety by a factor of 0.9. Hence
the actual computational path is: first select an initial step size and compute the
error estimate. If the error is smaller than the tolerance, accept the step and use the
(possibly larger) new step size for the next step; otherwise recompute the current
step with the new, smaller step size obtained from the actual error.
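The step-size update, including the safety factor 0.9, can be written as a one-line helper (the argument names are ours):

```python
def adaptive_step(h, err, tol, order=5, safety=0.9):
    """Step-size update for an embedded Runge-Kutta pair:
    h_new = safety * h * (tol/err)**(1/order), with the safety
    factor 0.9 mentioned in the text."""
    return safety * h * (tol / err) ** (1.0 / order)
```

When the measured error equals the tolerance, the step is merely scaled down by the safety factor; when the error is much smaller, the step size grows for the next step.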

Control of typing errors. Because in many publications the coefficients are
erroneous due to typing errors¹, the Runge-Kutta formulas should always be
checked. This can be done with the general equations derived in the section
above and by testing the algorithm against a system with a well-known
exact solution.

3. PREDICTOR-CORRECTOR METHODS
Multi-step algorithms are algorithms for which the next quadrature step
depends on several predecessors. The advantages are their high accuracy
and their numerical robustness. The disadvantage is that it is not as easy
as for single-step algorithms to optimize the step size adaptively. Because
the next step needs several steps computed in the past, changing the step size
necessitates recomputing a new set of predecessor points. Usually this can be
done with interpolation routines based on polynomial approximations. In this
context Lagrange interpolation polynomials have the advantage of equidistantly
spaced nodes, and thus the expansion coefficients of this approximation equal
the predecessor values.
In general an m-step algorithm is defined by

and is called explicit if the new value does not enter the right-hand side, otherwise
implicit. The most important are:

the m-th order Adams-Bashforth algorithm, which is explicit;
the m-th order Adams-Moulton algorithm, in which the new value enters the
right-hand side and which is thus implicit;
the m-th order Gear algorithm, which is implicit as well and useful especially
for stiff differential equations.
Each of the coefficients is selected such that an m-th order polynomial
expansion would be integrated exactly. In general the error coefficients of implicit
algorithms are smaller than those of explicit algorithms. Implicit algorithms are
usually used in combination with explicit algorithms, the so-called predictor-corrector
method. Predictor-corrector methods are numerically robust and highly accurate.
The essential idea is first to approximate (to predict) the next step by an explicit
algorithm. This step is then used in the implicit algorithm (the corrector)
instead of the implicit contribution, to obtain finally the next step. An m-step
algorithm needs m predecessors. Thus multi-step algorithms have to be combined
with other methods, e.g. a Runge-Kutta algorithm, to obtain the necessary first
steps. A typical predictor-corrector combination is the 4th order Adams-Bashforth
formula as predictor

and the 4th order Adams-Moulton algorithm as corrector
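A sketch of this 4th order Adams-Bashforth/Adams-Moulton predictor-corrector, with the first steps generated by Runge-Kutta (the standard coefficient sets are used; as the earlier remark on typing errors warns, any transcription of such coefficients should be verified against a known exact solution):

```python
def abm4_solve(f, y0, t0, t1, n):
    """Adams-Bashforth 4th order predictor combined with the
    Adams-Moulton 4th order corrector; the first three steps are
    generated with classical Runge-Kutta."""
    h = (t1 - t0) / n

    def rk4(t, y):
        k1 = f(t, y)
        k2 = f(t + h / 2.0, y + h / 2.0 * k1)
        k3 = f(t + h / 2.0, y + h / 2.0 * k2)
        k4 = f(t + h, y + h * k3)
        return y + h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

    ts = [t0 + k * h for k in range(n + 1)]
    ys = [y0]
    for k in range(3):                       # starting values from RK4
        ys.append(rk4(ts[k], ys[k]))
    fs = [f(ts[k], ys[k]) for k in range(4)]
    for k in range(3, n):
        # predictor (Adams-Bashforth)
        yp = ys[k] + h / 24.0 * (55 * fs[k] - 59 * fs[k - 1]
                                 + 37 * fs[k - 2] - 9 * fs[k - 3])
        # corrector (Adams-Moulton), using the predicted derivative
        fp = f(ts[k + 1], yp)
        yc = ys[k] + h / 24.0 * (9 * fp + 19 * fs[k]
                                 - 5 * fs[k - 1] + fs[k - 2])
        ys.append(yc)
        fs.append(f(ts[k + 1], yc))
    return ys[-1]
```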



The algorithms for ordinary differential equations described in the previous


two sections can be used, e.g., to compute the one-dimensional or the
radial solution of scattering states, formulated as an initial value problem. We
will discuss this aspect more closely in section 4.5 about the Numerov method.
They are also useful in the context of finite elements, for which we also
obtain systems of ordinary differential equations.

4. FINITE DIFFERENCES IN SPACE AND TIME


In this section we study discretization techniques using a one-dimensional
example [3].

4.1 DISCRETIZATION OF THE HAMILTONIAN


ACTION
Starting point is the time-dependent one-dimensional Schrödinger equation
(1.3). With suitable abbreviations we obtain

which will be mapped onto a finite difference equation.


The discretized wave function will be indexed in the following way

with a time index and a space index. A Taylor


expansion with respect to the space coordinate gives

where the index labels the discretization points with a fixed
step size. Thus the Taylor expansion with respect to the
space coordinate leads to (we suppress for the moment the time index)

and thus in analogy to the Verlet algorithm

and

Therefore the Hamiltonian action on the space difference grid is given by

with
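As a concrete sketch (our own helper, with ħ = m = 1), the action of the discretized Hamiltonian on a grid wave function reads:

```python
def apply_hamiltonian(psi, potential, dx):
    """Three-point finite-difference action of H = -1/2 d^2/dx^2 + V
    (hbar = m = 1) on a wave function sampled on a uniform grid;
    psi is assumed to vanish outside the grid."""
    n = len(psi)
    h_psi = [0.0] * n
    for j in range(n):
        left = psi[j - 1] if j > 0 else 0.0
        right = psi[j + 1] if j < n - 1 else 0.0
        h_psi[j] = (-0.5 * (right - 2.0 * psi[j] + left) / dx ** 2
                    + potential[j] * psi[j])
    return h_psi
```

Applied to a free sine wave of wave number k, the interior grid points reproduce the kinetic energy k²/2 up to the discretization error of order (kΔx)².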

4.2 DISCRETIZATION IN TIME


Straightforward discretization of the time variable of the Schrödinger equation via
Taylor expansion results in a non-unitary algorithm. Therefore we will use the
Cayley method as described in Chapt. 1.4.2.1. From Eq.(1.58) we obtain with
the time step

4.2.1 KICKED SYSTEMS


Suppose we want to study a system with a potential pulse (or a potential
step) at time T. In this case we have to guarantee that we hit the system exactly
at this time, and thus T has to be an integer multiple of the time step.
The wave function before and after the pulse (or step) can then be computed in
a Cayley-like way. This can be understood easily by a simple example.

Let be the wave function just before the pulse and
immediately after the pulse, with. The Schrödinger equation
for a single pulse is given by

and thus the wave packet propagation by

and therefore with

Hence the practical computation, see Fig.(5.1), runs in the following way:
For compute by Eq. (5.34). This will give us finally
By Eq.(5.37) we obtain then and for the
wave function will again be computed by Eq.(5.34).

4.3 THE RECURRENCE-ITERATION


Putting the pieces together and reordering the relevant equations will give us
a simple procedure to compute wave packet propagation for a one-dimensional
time-independent system. From Eq.(5.34) we obtain the time iteration pro-
cedure and from Eq.(5.32) we know how the Hamiltonian acts on the finite
difference grid. Both together lead to

With the abbreviation

we get

and by defining the righthand side as

and with the ansatz

and thus

By construction

and therefore

Taking the inverse and reordering these two equations gives

Thus for time-independent potentials becomes time-independent, and
we can skip the upper index.

As boundary condition we assume

for all time steps. Therefore we have to select a sufficiently large space area,
such that the wave packet becomes zero at the boundary, and we have to restrict
the time interval to such a range that this boundary condition remains fulfilled
(in the numerical sense).
From the original ansatz, Eq.(5.41), we get

Therefore we obtain from the lefthand side boundary condition

and from the righthand side boundary condition

and thus

and by definition

Thus we get the following computational structure:


We know from the previous run and, because the potential is of course
given, we can compute from the initial wave packet at the beginning.
From the wave packet at time step we get and thus we know
and can compute all iteratively. Now we have all quantities available to
compute the wave packet of the next time step, which we will do
‘spacely upside down’.
Due to the boundary condition we know that

and all other wave function values on our finite difference space grid can be
computed iteratively in inverse order via

which completes the computation for time step and thus we are now
ready to compute the next time step
The computation above can be generalized to any spatial dimension. But
note that only in one spatial dimension explicit equations can be derived.
For higher dimensional systems implicit linear algebra equations remain. For
conservative systems the relevant matrices are time-independent, and thus for
fixed time steps some linear algebra tasks, like matrix decompositions, have to
be carried out only once.
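Putting the Cayley step and the spatial discretization together, the whole propagation fits in a short routine. The sketch below is not the book's program: it assumes units hbar = 2m = 1 and replaces the explicit forward-backward recursion by SciPy's banded LU solver, which performs the same elimination on the same tridiagonal system:

```python
import numpy as np
from scipy.linalg import solve_banded

def crank_nicolson(psi, V, dx, dt, nsteps):
    """Cayley / Crank-Nicolson propagation of a 1D wave packet for
    H = -d^2/dx^2 + V (units hbar = 2m = 1), psi = 0 at both boundaries.
    Each step solves (1 + i dt H/2) psi_new = (1 - i dt H/2) psi_old;
    for a time-independent potential the banded matrix is fixed."""
    n = len(psi)
    lam = 1j * dt / (2.0 * dx**2)
    diag = 1.0 + 2.0 * lam + 0.5j * dt * V
    ab = np.zeros((3, n), dtype=complex)
    ab[0, 1:] = -lam                     # superdiagonal
    ab[1, :] = diag
    ab[2, :-1] = -lam                    # subdiagonal
    for _ in range(nsteps):
        rhs = (2.0 - diag) * psi         # (1 - i dt H/2) psi, diagonal part
        rhs[1:-1] += lam * (psi[2:] + psi[:-2])
        rhs[0] += lam * psi[1]
        rhs[-1] += lam * psi[-2]
        psi = solve_banded((1, 1), ab, rhs)
    return psi
```

For a free Gaussian packet the norm is conserved to machine precision and the packet drifts with its group velocity, as expected from the unitary Cayley form.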

4.3.1 EXAMPLE

As a graphic example we will discuss wave packet propagation through a
potential wall. As wall we selected a step-like symmetric potential. Of course
our potential could have any shape. The only important point is that the
spatial resolution is sufficiently high. In our example the basic line has the
length L = 1 and the total width of the potential is 0.2. Thus each stair has a
length of 0.04. The number of spatial steps is 2000, which is fairly sufficient
to resolve this potential.
The ratio between the squared spatial and time resolution is given by
Eq.(5.38). It turned out that is numerically stable. The group velocity
of a wave packet is given by

with L/2 the distance the wave packet could travel under the assumption that
its tail vanishes approximately at the spatial border, and T the total time. From
the equation above we obtain

and thus

Of course these equations are only rough estimates to obtain the correct
order of magnitude of the relevant values and have to be checked numerically.
Some results are presented in Fig.(5.2).

4.4 FINITE DIFFERENCES AND BOUND STATES


The Hamiltonian action on the discretized wave function is given by Eq.(5.32).
In this section we will derive a simple equation to compute the Hamiltonian
eigensolutions on a finite difference grid. Bound states are square integrable and
thus have to vanish numerically outside a certain space area. Let us discuss, for
simplification, a uni-dimensional system with eigenvectors

Thus we obtain the following finite difference equation for the eigensolutions

This corresponds to a matrix equation



with

Thus solving this simple tridiagonal matrix will give us the corresponding
eigensolutions. Tridiagonal matrices can be efficiently solved using the QR-
algorithm (see Chapt. 3.5.2.2).
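A sketch of this eigenvalue computation, with SciPy's tridiagonal eigensolver standing in for an own QR routine; units hbar = 2m = 1 are assumed, so the harmonic oscillator H = -d²/dx² + x² has exact eigenvalues 2n + 1:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def fd_eigenvalues(V, a, b, n, k=5):
    """Lowest k eigenvalues of H = -d^2/dx^2 + V(x) (units hbar = 2m = 1)
    on [a, b] with psi(a) = psi(b) = 0, from the tridiagonal finite
    difference matrix with n interior grid points."""
    x, dx = np.linspace(a, b, n + 2, retstep=True)
    x = x[1:-1]                          # interior points only
    d = 2.0 / dx**2 + V(x)               # diagonal of the matrix
    e = -np.ones(n - 1) / dx**2          # off-diagonal
    return eigh_tridiagonal(d, e, eigvals_only=True,
                            select='i', select_range=(0, k - 1))
```

With 1000 grid points on [-8, 8] the five lowest harmonic oscillator eigenvalues 1, 3, 5, 7, 9 come out to better than 1e-2.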
The advantage of finite difference methods is their simplicity. E.g., using
standard software like MATLAB allows one to derive the relevant programs in only
a few hours. The disadvantage of the approach derived above is its fixed step
size. Even if this could be generalized to optimized non-constant step sizes,
we will not stress that point further. The finite difference method should only
be used for oscillator-like potentials and to get a rough first estimate. For a
more detailed study, or in cases for which this simple approach converges only
poorly, other methods, like finite elements, are more valuable.
For separable 3-dimensional systems, finite difference methods can be used
for each of the single differential equations. E.g., for the radial Schrödinger
equation we get

By

the radial Laplace-Beltrami operator can be mapped onto. For other co-
ordinate systems, e.g. cylindrical coordinates, the Laplace-Beltrami operator
remains a mixture of first and second order derivatives, and thus the equation for
the discretized wave functions becomes more complicated and the convergence
weaker. Thus again, for higher dimensional systems the finite element method
becomes more favorable.

4.4.1 EXAMPLES
As simple examples we will discuss the Pöschl-Teller potential [4], and the
harmonic and anharmonic oscillator. In contrast to the oscillator potentials the
Pöschl-Teller potential is of finite range and thus ideal for the finite difference
technique, because the wave function becomes exactly zero at the singularities
of the potential.

The Pöschl-Teller potential is given by

with and potential parameters. We will restrict the following discussion
to and with. (Of course it would be
no computational problem to obtain additional results for other values.)
The potential (5.57) is shown in Fig.(5.3). For the Pöschl-Teller
potential is symmetric around its minimum and possesses, for the
values above, singularities at and. Thus the wave function
becomes zero at these points and vanishes outside the potential. By some
tricky substitutions the corresponding Schrödinger equation can be mapped
onto a hypergeometric equation [4], and the exact eigenvalues are given by
In Fig.(5.3) we present the lowest five eigenfunctions and in Tab.(5.1) the
eigenvalues as a function of the number of finite difference steps.

By computing the potential and the wave function at only 7 steps the ground
state energy already shows a remarkable accuracy. It differs only by 3.5% from
the correct value. Of course the eigenenergies of the higher excited states are
- as expected - less accurate. Using a finite difference grid of 25 steps gives
already reasonable eigenvalues for the first five eigenstates. The deviation of
the 5th eigenstate from its correct value is approximately 3.5%. For hundred
steps the accuracy of the ground state energy is of the order of and for the
5th excited state of about 0.1%. Thus with only 100 finite difference steps the
lowest eigenvalues and their eigenfunctions can be computed sufficiently accu-
rately, and this will take, even on very small computers, only a few seconds.
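Because the parameter values are not fixed here, the following sketch chooses its own: the trigonometric Pöschl-Teller well V(x) = [κ(κ-1)/sin²x + λ(λ-1)/cos²x]/2 with κ = λ = 2, in units hbar = m = 1, for which the exact eigenvalues are E_n = (κ + λ + 2n)²/2 [4]:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Trigonometric Poeschl-Teller well on (0, pi/2), units hbar = m = 1,
# kappa = lam = 2 (an illustrative choice):  E_n = (kappa + lam + 2n)**2 / 2
kappa, lam, n = 2.0, 2.0, 2000
x, dx = np.linspace(0.0, np.pi / 2, n + 2, retstep=True)
x = x[1:-1]                      # the wave function vanishes at both singularities
V = 0.5 * (kappa * (kappa - 1) / np.sin(x)**2 + lam * (lam - 1) / np.cos(x)**2)
d = 1.0 / dx**2 + V              # H = -psi''/2 + V psi on the grid
e = -0.5 * np.ones(n - 1) / dx**2
E = eigh_tridiagonal(d, e, eigvals_only=True, select='i', select_range=(0, 4))
exact = np.array([(kappa + lam + 2 * k)**2 / 2 for k in range(5)])
```

With 2000 grid points the five lowest levels 8, 18, 32, 50, 72 are reproduced to a relative accuracy well below one part in a thousand.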
As a second example we will briefly discuss the harmonic and anharmonic
oscillator. In those systems the coordinate space is infinitely large and
thus can only be restricted on numerical grounds. Therefore we now have
to take care of two parameters to obtain converged results: the size of the
selected coordinate space and its coarsening due to the finite step size.
In the upper part of Tab.(5.2) the step size is increased but the integration
border in coordinate space is kept fixed. By increasing the finite difference step
the results become less accurate, as expected. In the lower part of Tab.(5.2) the
step size is kept fixed, but now the size of the space is shrunk. This can
have a much more tremendous effect on the accuracy of the results. Outside
the border the wave function is set equal to zero, and thus this parameter has to
be treated with care. Physically this means that the quantum system is confined
by impenetrable walls.
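The confinement effect is easy to reproduce numerically; a sketch for the harmonic oscillator in units hbar = 2m = 1 (H = -d²/dx² + x², exact ground state energy 1; the units are our assumption):

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def ground_state(border, n=800):
    """Finite difference ground state energy of H = -d^2/dx^2 + x^2
    (units hbar = 2m = 1, exact value 1) confined to [-border, border]."""
    x, dx = np.linspace(-border, border, n + 2, retstep=True)
    x = x[1:-1]
    d = 2.0 / dx**2 + x**2
    e = -np.ones(n - 1) / dx**2
    return eigh_tridiagonal(d, e, eigvals_only=True,
                            select='i', select_range=(0, 0))[0]
```

With the border at ±8 the ground state energy is correct to about three digits; moving the walls to ±1 raises it far above the exact value, the impenetrable-wall effect described above.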
In Tab.(5.3) some results for an anharmonic oscillator are presented. Again
decreasing the step size leads to increasing accuracy. The left and the right border
have to be selected such that the wave function vanishes computationally. For
non-symmetric potentials there is no longer any reason to choose the borders
symmetrically. This is documented by the last line in Tab.(5.3), for which the left
border became smaller, but the accuracy for the first 5 eigenvalues equals the
“symmetric computation”.

5. THE NUMEROV METHOD


5.1 DERIVATION
The Numerov method is a particularly simple and efficient method for integrat-
ing differential equations of Schrödinger type. It can be used for computing
eigenstates, resonances and scattering states. The essential idea is similar to the
derivation of the finite difference method, but now we include in addition 4th
order terms in the Taylor expansion of the wave function. This results in an
expansion with a 6th order local error.
A Taylor expansion of the wave function, Eq.(5.32), with respect to the space
coordinate is given by

and thus with

we obtain

From the time-independent Schrödinger equation

we get

and thus the Numerov approximation becomes

This approximation can be generalized easily to higher space dimensions


and can be used in two different ways: Reordering this equation leads to an
eigenvalue problem to compute bound states or, by merging the Numerov
approximation with complex coordinate rotation, resonances. Interpreting this
equation as a recursion equation allows one to solve initial value problems, thus
to compute, e.g., scattering states by recursion. Programming this recursion
equation is rather simple and straightforward. After obtaining the first two steps,
the Numerov scheme is very efficient, as each step requires the computation
only at the grid points. To obtain the first two steps it might be necessary to
start, e.g., with a Taylor expansion and to use at the beginning some additional
intermediate steps.

How to start. For systems with definite parity, which is rather the rule than
the exception, we know already initial values at the origin. Because
and normalization is only of importance after finishing the recursion, we can
always start with

for states with positive parity and with

for states with negative parity. Taylor expansion up to 4th order will give us
the next step. But note, choosing an erroneous parity will of course also
lead to a result, a result which has nothing to do with the physical system under
consideration. An illustrative example is shown in Fig.(5.4). On the lefthand
side the ground state and the first excited state of the harmonic oscillator com-
puted with the Numerov approximation are plotted. On the righthand side
the Numerov scheme is used to obtain the results for the harmonic oscillator
Schrödinger equation using the ground state energy and the energy of the first
excited state, but with the erroneous initial condition of a negative parity state
for the ground state and of a positive parity state for the first excited state. Due
to the symmetry only the positive half space is plotted. The computations with
the wrong initial condition lead to diverging results of the corresponding dif-
ferential equation. (But note, not for every system will an erroneous initial
condition lead to such an obvious failure.)
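A sketch of such a Numerov run, in units hbar = 2m = 1, so that the harmonic oscillator equation reads psi'' = (x² - E) psi with ground state energy E = 1 and psi(x) = e^{-x²/2}; the positive-parity start and the Taylor-expanded second point follow the recipe above (the concrete units are our assumption):

```python
import numpy as np

def numerov(f, psi0, psi1, x):
    """Numerov integration of psi'' = f(x) psi on the uniform grid x,
    given the first two values psi0, psi1 (local error of 6th order)."""
    h2 = (x[1] - x[0])**2
    w = 1.0 - h2 * f(x) / 12.0          # Numerov weight factors
    psi = np.empty_like(x)
    psi[0], psi[1] = psi0, psi1
    for k in range(1, len(x) - 1):
        psi[k + 1] = ((12.0 - 10.0 * w[k]) * psi[k]
                      - w[k - 1] * psi[k - 1]) / w[k + 1]
    return psi

# harmonic oscillator ground state: psi'' = (x**2 - 1) psi, positive parity
h = 1.0e-3
x = np.arange(0.0, 2.0 + h / 2, h)
f = lambda t: t**2 - 1.0
psi = numerov(f, 1.0, 1.0 - h**2 / 2, x)   # psi(h) from a 2nd order Taylor step
```

At x = 2 the result agrees with the exact e^{-x²/2} to far better than single-step accuracy, since the local error is of 6th order.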

Notes
1 Fehlberg’s [2] original publication had a typing error (4494/1025 instead of
the correct 4496/1025) and the celebrated Runge-Kutta-Nyström 5th(6th)
order algorithm was corrected 4 years after publication.
2 Remember is time-independent. Thus it is only necessary to compute
these coefficients once.

References

[1] Dormand, J. R. and Prince, P. J. (1989). “Practical Runge-Kutta Processes”,
SIAM J. Sci. Stat. Comput. 10, 977–989
[2] Fehlberg, E. (1969). “Klassische Runge-Kutta-Formeln fünfter und siebter
Ordnung mit Schrittweitekontrolle”, Computing 4, 93–106; correction in
(1970) Computing 5, 184
[3] Goldberg, A., Schey, H. M. and Schwartz, J. L. (1967). “Computer-
generated motion pictures of one-dimensional quantum-mechanical trans-
mission and reflection phenomena”, Am. Journ. of Phys. 35, 177–186
[4] Flügge, S. (1994). Practical Quantum Mechanics, Springer-Verlag, Hei-
delberg
Chapter 6

DISCRETE VARIABLE METHOD

This chapter gives a review of the discrete variable representation applied
to bound state systems, scattering and resonance states and time-dependent
problems. We will focus on the combination of the discrete variable method
with other numerical methods and discuss some examples, such as the alkali
metal atoms in external fields, which are of interest with respect to astrophysical
questions, laser-induced wave packet propagation, and quantum chaos. Further
examples are the anharmonic oscillator, an application to periodic discrete
variable representations, and the discussion of the Laguerre mesh applied to
the radial Schrödinger equation.

1. BASIC IDEA
Suppose that we wish to solve the Schrödinger equation
with the Hamiltonian and the ith eigenvalue and eigenfunction,
respectively, by an expansion of our postulated wave function with respect
to an orthonormalized complete Hilbert space basis. By
inserting this ansatz into the Schrödinger equation and taking the inner Hilbert
space product, we arrive in general at an infinite set of equations

For non-integrable Hamiltonians one is forced, in the complete quantum
theoretical treatment, to resort to numerical methods. This means the infinite
set of equations has to be restricted to a finite one. Hence expression (6.1) re-
stricted to a finite basis expansion will become equivalent to a truncated matrix
representation, respectively an eigenvalue problem, and the variational parameters
are then usually determined by diagonalization. By increasing the dimension

of the corresponding matrices this procedure leads monotonically to more accurate
eigenvalues and eigenfunctions, provided the inner Hilbert space product
(which means the single matrix elements) can be computed exactly. Because
this procedure is equivalent to a variational process it is called variation basis
representation or spectral representation. In the finite basis representation1 the
matrix elements are computed by numerical quadrature rather than by continuous
integration. If a Gaussian quadrature rule consisting of a set of quadrature
points and weights is used to compute the matrix elements, there exists an
isomorphism between the finite basis representation and a discrete representation
of the coordinate eigenfunctions based on the quadrature points, the discrete
variable representation. This equivalence was first pointed out by Dickinson
and Certain [1]. In brief, therefore, the discrete variable method is applicable
to a quantum system under the condition that there exists a basis set expansion
and that a Gaussian quadrature rule can be used to compute the matrix elements
in this expansion.
Light et al. [2] have pioneered the use of the discrete variable method in
quantum mechanical problems; first applications in atomic physics in strong
external fields can be found in Melezhik [3]. A combination of the discrete
variable method in the angular space with a finite element method is described
in Schweizer et al. [4], applications to the hydrogen atom in strong magnetic
and electric fields under astrophysical conditions are discussed in [5] and to
laser-induced wave packet propagation in [6]. A pedagogically nicely written
paper is [7]. A discussion of the discrete variable method in the context of
quantum scattering problems can be found in [8] and, in combination with the
Kohn variational principle, e.g., in [9, 10]. A computation of the density of
states using discrete variables is presented in [11]. The authors reduced the
infinite algebraic equations to finite ones by using the analytical properties of
the Toeplitz matrix and obtained very accurate results for one-dimensional
potential scattering. Of
course this is a very individual selection of examples, therefore see in addition
the references in the papers cited above.
A theoretical description of the discrete variable representation is outlined in
the following section. Section 3 gives a brief review of orthogonal polynomials
as they are needed for use in the discrete variable method. A more thorough
discussion can be found in some textbooks, e.g. [12, 13]. In section 4 we
will discuss some examples. Finally, Sect. 5 is devoted to the subject of the
Laguerre meshes.
Discrete Variable Method 157
2. THEORY
2.1 ONE-DIMENSIONAL PROBLEMS
For simplicity let us start with one-dimensional problems. The general-
ization to Cartesian multi-dimensional problems is straightforward. In many
situations the symmetry of the underlying physical systems can be exploited
more effectively by using symmetry-adapted non-Cartesian coordinates (see
Chapter 2). In those situations the discrete variable method has to be gen-
eralized, leading to non-unitary transformations, or has to be combined with
other numerical methods. We will discuss these problems and some ansätze
in the next subsection. The following discussion will be restricted to one-
dimensional problems but can be generalized easily to higher dimensional
systems in a Cartesian coordinate description.
In the variation basis or spectral representation a wave function is
expanded in a truncated orthonormal Hilbert space basis

Due to the orthonormality of the basis with respect to the continuous space
coordinate variable, the expansion coefficients are given by

By approximating this equation via discretization on a grid with quadrature


or grid points 2 we obtain

under the assumption that the basis functions remain orthonormal

in the discretized Hilbert space.


Now let us start with some linear algebra gymnastics, following closely the
ideas of [2]. The discretized expansion coefficients equal the expansion
coefficients if we use a sufficiently high number of grid points, respectively
if our integration remains exact. Therefore we assume that (at least) in the
numerical limit our quadrature (6.4) becomes exact. By expanding our wave
function (6.2) in the discretized frame we obtain

with

and

Hence is a function which acts on the continuous space variable but is
labeled by the selected grid points. In analogy to Eq.(6.2), defines a
basis in the discrete representation which is orthonormal (see discussion below)
due to the inclusion of the quadrature weight in the definition. Therefore
it makes sense to interpret (6.7) as a transformation from the variation basis
representation to the discrete Hilbert space. By choosing both dimensions equal,
we avoid linear algebra problems with left- and right-transformations
and obtain

with

From the discrete orthonormality relation (6.5) we get

and due to the Christoffel-Darboux relation for orthogonal polynomials [2, 13]

This second orthonormality relation can also be derived directly: due to Eq.
(6.8) and because the trace is invariant under commuting the
matrix product, we also have. Additionally we obtain
therefore is idempotent. Because idempotent matrices have eigenvalues
equal to either zero or one, and because must have N
eigenvalues equal to one, hence and T is unitary. T was the transformation
from the orthonormal basis functions in Hilbert space onto the basis
functions, and hence this set of basis functions is
also orthonormal.
Now let us consider the kinetic and potential operator and of the
Hamiltonian. In the spectral representation we obtain

where this approximation is the finite basis representation already mentioned
above, which would be exact if our quadrature were exact. In the coordinate
representation the potential operator is simply multiplicative and hence we can
define

which holds for any position and ergo also for quadrature grid points. There-
fore we can transform this operator from the continuous Hilbert space onto the
discretized Hilbert space

where FBR denotes the finite basis representation. Because this transformation
is unitary we can construct directly the discrete variable representation
(DVR) via:

and since T is unitary the discrete variable representation becomes isomorphic
to the finite basis representation. Note that in the discrete variable
representation the potential contribution, or more generally the contribution of
multiplicative operators, is simply obtained by computing the potential at the
grid points, and the contribution of derivative operators, like the kinetic energy,
by acting with this operator on the basis and taking the results at the
selected quadrature grid points. In general the expansion coefficients of the
wave function are related by

(For some examples see subsection 4.)
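For a Hermite (harmonic oscillator) basis the whole FBR-to-DVR construction fits in a few lines: the grid points are the eigenvalues of the position matrix X in the FBR, and T is built from its eigenvectors. The sketch assumes units hbar = 2m = 1 (the basis functions are eigenfunctions of -d²/dx² + x²); as an illustration it treats the quartic oscillator H = -d²/dx² + x⁴, whose ground state energy is known to be approximately 1.060362:

```python
import numpy as np

def hermite_dvr(V, N):
    """Discrete variable representation built from an N-function Hermite
    FBR (eigenfunctions of -d^2/dx^2 + x^2, units hbar = 2m = 1).  The
    grid points are the eigenvalues of the position matrix X in the FBR,
    the transformation T its eigenvectors; the DVR Hamiltonian is
    H = T^T K T + diag(V(x_i)) with K the FBR kinetic energy matrix."""
    n = np.arange(N)
    X = np.diag(np.sqrt((n[:-1] + 1) / 2.0), 1)
    X = X + X.T                                  # position matrix in the FBR
    x, T = np.linalg.eigh(X)                     # grid points and transformation
    off = -np.sqrt((n[:-2] + 1) * (n[:-2] + 2)) / 2.0
    K = np.diag(n + 0.5) + np.diag(off, 2) + np.diag(off, -2)  # K = -d^2/dx^2
    H = T.T @ K @ T + np.diag(V(x))
    return x, np.linalg.eigvalsh(H)
```

For V = x² the low eigenvalues 1, 3, 5 come out essentially exactly, as they must, since the DVR is unitarily equivalent to the FBR; for V = x⁴ a modest basis already reproduces the quartic ground state to several digits.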

2.2 MULTI-DIMENSIONAL PROBLEMS


The multi-dimensional generalization of the discrete variable method in a
direct product basis is straightforward. In a direct product basis the quan-
tum numbers of the one-dimensional sub-states are not related to each other.
That means the wave function corresponding to a specific degree-of-freedom
are labeled by indices which are only associated with this specific degree-
of-freedom. Therefore the total transformation T for a n-dimensional sys-
tem is given by a direct product of the transformation for each single
degree-of-freedom E.g., for three Cartesian
degrees-of-freedom [9] the spacing of the discrete variable grid in each
dimension is equidistant and mass-weighted in case the three masses are dif-
ferent. Hence and
and the Hamiltonian becomes

with the kinetic energy matrices given by

A counterexample to direct product states are the well-known hydrogen
eigenstates with the principal quantum number, the
angular momentum and the magnetic quantum number. The ap-
proach mentioned above cannot be used for multi-dimensional eigenstates with
quantum numbers associated with each other. This can be easily understood,
because even the dimension of the variational basis representation will not
equal the dimension in the discrete variable basis representation by simply
copying the “direct product” approach. Direct product states are rather the ex-
ception than the rule. Nevertheless the multi-dimensional grid can be derived
by a direct product of one-dimensional grids. The main task is then to derive
a numerical integration scheme on this multidimensional grid that preserves
the orthonormality of the variational basis function. For many problems the
symmetry of the physical system can be exploited most effectively in a non-
Cartesian coordinate system (see Chapter 2). Since progress in this subject is
helped by concrete examples, let us study, as a prototype, spherical coordinates.
Spherical coordinates. The kinetic energy operator in spherical coordi-


nates is given by

with the angular momentum operator

The spherical harmonics are the eigenfunctions of the angular momentum
operator with eigenvalues. Thus, the spherical harmonics
are the most appropriate spectral basis for the angular momentum operator, and
the rotational kinetic energy operator will become diagonal in this basis. The
spherical harmonics are related to the associated Legendre functions by3

where for we have made use of

Because l and are not independent of each other, the spherical har-
monics are not a direct product state. In numerical computations the basis
has to be restricted to a maximum angular momentum l, and to a max-
imum value in the magnetic quantum number and
positive. Therefore in the variational basis representation the dimension is,
provided runs from
to. Thus, for maximal magnetic quantum number the
dimension of the basis becomes. For
a direct product state the total dimension would equal the product of the sin-
gle dimension in each direction, respectively quantum number. Hence a direct
relation between a direct product ansatz, as necessary for the discrete variable
representation, and the variational respectively finite basis representation is
only possible for
If the quantum system possesses azimuthal symmetry4 the magnetic quantum
number will be conserved and the can be separated off.
Therefore the angular wave function can be expanded in a basis of the associated
Legendre functions, with the value of the magnetic quantum number m fixed.
The associated Legendre function is a polynomial of degree
multiplied by a prefactor and orthogonal on the interval [–1, 1],


Thus the discrete grid points are given by the zeros
of the associated Legendre polynomial and the (see
Eq.(6.4)) are the corresponding Gaussian quadrature weights for this interval
[–1,1].
The variational or spectral basis is given by

Hence in analogy to Eq.(6.2) we obtain

and, see Eq.(6.5), the discrete orthonormal relation

Thus the Hilbert space basis in the finite basis representation, see Eq.(6.7), is
given by

and the transformation between the discrete variable representation and the
finite basis representation, see Eq.(6.7), by

For systems for which the magnetic quantum number m is not conserved, we
have to define a two-dimensional grid in the angular coordinates. As
mentioned above the dimension of both unidimensional sub-grids has to be
equal. Because the is not associated with an orthonormal function,
there is no similar rule for how to select the appropriate grid points. Widely
accepted, but nevertheless not a strict rule, is an equidistant ansatz for. As
already discussed, for conserved magnetic quantum number the depends
on. Hence this angular grid is, and because is no longer a
conserved quantity, the discrete variable representation would no longer be a
convenient representation for Hamiltonians without azimuthal symmetry. This
complexity can be avoided by choosing an identical angular grid for all values
of Thus we choose for the discrete grid points the zeros of
the Legendre polynomial with corresponding weights


given by the N-point Gauss-Legendre quadrature. Following the ideas
of Melezhik [3] we define our (non-normalized) angular basis by

To summarize, the nodes of the corresponding grid with
are given by the zeros of the Legendre polynomial
and with weights. Therefore our
Hilbert space basis in the finite basis representation becomes

For practical computations it is more convenient to combine the multi-label
into a single label and to re-normalize the ansatz during the actual computation.
Note that, because the is not related to orthogonal polynomials,
this ansatz becomes orthogonal only in the limit, and hence the
corresponding Hamiltonian matrix will be weakly non-symmetric. We will
discuss these questions and related problems, and we will derive the DVR
representation of the Hamiltonian by a concrete example in subsection 4.
An alternative way to discretize the two-dimensional angular space is dis-
cussed in [14]. Because in their discretization the number of grid points for
the azimuthal angle is 2N – 1 for N rotational states (l = 0 ... N – 1),
the grid dimension equals N(2N – 1) and hence is approximately twice the
dimension of the finite basis representation. The advantage of such an ansatz is
the higher resolution in the azimuthal angle; the disadvantage is obviously the
different dimensions, which do not allow a simple similarity transformation
between the Hamiltonians in both representations.
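The discrete orthonormality relation underlying the Legendre grid can be checked directly: an N-point Gauss-Legendre rule integrates polynomials up to degree 2N - 1 exactly, so the normalized Legendre polynomials sqrt((2l+1)/2) P_l with l < N are exactly orthonormal on the grid. A short self-contained check:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

N = 10
x, w = leggauss(N)                        # Gauss-Legendre nodes and weights
# normalized Legendre polynomials, evaluated at the grid points
P = np.array([np.sqrt((2 * l + 1) / 2.0) * Legendre.basis(l)(x)
              for l in range(N)])
# discrete overlap matrix  S_ij = sum_k w_k P_i(x_k) P_j(x_k)
S = np.einsum('k,ik,jk->ij', w, P, P)
```

S reproduces the identity matrix to machine precision, which is exactly the unitarity of the transformation T discussed above.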

2.2.1 TIME-DEPENDENT SYSTEMS, RESONANCES AND


SCATTERING STATES
In this subchapter we will briefly discuss time-dependent systems and add
a few remarks on resonances and scattering states. The basic idea in using the
discrete variable representation to describe wave packet propagation is to map
the time-dependent Schrödinger equation onto a system of linear equations
and to describe the wave packet and the Hamiltonian in the discrete variable
representation.
In order to describe the time evolution of a wave packet, one has to solve
the time dependent Schrödinger equation

A formal solution at time t+dt can be written as

In chapter (1.5) we discussed in detail effective methods to compute the time
propagation of a wave packet. To apply the discrete variable approach to those
methods is straightforward. To demonstrate the principal way in which the
discrete variable method can be used, let us restrict the following discussion to
conservative quantum systems. One possibility for bound state systems is the
expansion of the initial wave packet with respect to the eigensolutions of the
Hamiltonian. In this case the time dependence is given by the eigenenergies
of the system under consideration and the expansion coefficients of the initial
wave packet, as described in chapter (1.?). On the contrary, this method will, e.g.,
fail if tunneling becomes important or if the number of eigenstates becomes
numerically hardly feasible. Hence in many situations it is necessary, or at
numerically hardly feasible. Hence in many situations it is necessary or at
least numerically more stable to solve directly the time-dependent Schrödinger
equation. Methods based on Chebychev expansions, the Cayley or Crank-
Nicholson ansatz, or on direct expansions of the propagator and subspace
iterations lead to operator equations. Every action of an operator on a wave
packet can be mapped onto the action of the corresponding matrix, Eq.(6.14),
on the discretized wave packet represented by a vector. Hence we end up with
a linear system of equations. An example is discussed in subsection (4.2.3).
In many 3-dimensional applications the wave function is expanded with
respect to angular and radial coordinates and the discrete variable method
is restricted to the angular part. In this case the 3-dimensional Schrödinger
equation is mapped onto a system of uni-dimensional differential equations.
Hence the discrete variable part does not differ in its application for bound or
continuum solutions, because the boundary condition will only affect the radial
component. Hence methods like the complex coordinate rotation will only act
on the radial component. Ergo taking into account special boundary conditions,
e.g. via Numerov integration, is only necessary for the radial component. An
example for the combination of discrete variables, finite elements and complex
coordinate rotation will be discussed in chapter 6.
The disadvantage of orthogonal polynomials in describing continuum solutions
is their finite range. In some applications the Hamiltonian system will
be placed in a large box with impenetrable walls, or the focus is on an evaluation
of the wave function in a finite range. In all those methods the first step is to find
Discrete Variable Method 165

an ansatz which is able to describe the asymptotic behavior by a finite-range evaluation. After formulating such an ansatz the wave packet has to be described only in a finite space, and hence it is straightforward to use the discrete variable method. The same philosophy holds for time-dependent approaches to continuum solutions.

3. ORTHOGONAL POLYNOMIALS AND SPECIAL


FUNCTIONS
In the previous sections we investigated the discrete variable method. Basic
requisites of this discretization technique are orthogonal polynomials and their
nodes. Orthogonal polynomials and special functions are playing a crucial role
in computational physics, not only in quantum dynamics. In this section we will report on orthogonal polynomials, giving preference to discrete variable applications. Nevertheless the following discussion will not be restricted to this application alone and will additionally include some remarks about special functions.

3.1 GENERAL DEFINITIONS


Orthogonal polynomials can be produced by starting with the monomials 1, x, x², … and employing the Gram-Schmidt orthogonalization process. However, although this is quite general, we take a more elegant approach that simultaneously uncovers aspects of interest to physicists. The following definitions, lemmas
and theorems can be found in [12, 13, 16, 17].
In this section we will denote by [a, b] an interval of the real axis, by w(x) a weight function for [a, b], and by p_n(x) an orthogonal polynomial of degree n. A function w(x) is called a weight function for [a, b] if and only if

mu_n = Int_a^b x^n w(x) dx

exists and remains finite for each n = 0, 1, 2, … and w(x) >= 0 on [a, b]. The numbers mu_n defined above are called moments of w(x). The polynomials p_n(x) of degree n are called orthonormal polynomials associated with the weight function w(x) and the interval [a, b] if and only if

Int_a^b p_m(x) p_n(x) w(x) dx = delta_mn.

The inner or scalar product between two functions f, g on the interval [a, b] is defined by

(f, g) = Int_a^b f(x) g(x) w(x) dx.
166 NUMERICAL QUANTUM DYNAMICS

Because p_n(x) is a polynomial of degree n it has, of course, n zeros.


More important with respect to discrete variable techniques is the follow-
ing theorem:
The zeros of the orthogonal polynomial p_n(x) are
1. real
2. distinct
3. elements of the interval [a,b]
One important (and obvious) property of orthogonal polynomials is that any polynomial of degree n can be written as a linear combination of p_0(x), …, p_n(x). More precisely:

By writing the orthogonal polynomial in the form p_n(x) = k_n x^n + …, a three-term recurrence relation can be derived:

x p_n(x) = b_n p_{n-1}(x) + a_n p_n(x) + b_{n+1} p_{n+1}(x).

This recurrence relation leads to a remarkable proof of the reality of the nodes of the orthonormal sequence {p_n}. Let us rewrite (6.35) as the following matrix equation:

Now suppose we choose x as a node of p_{n+1}; then we arrive at the matrix equation:

and hence the eigenvalues of the symmetric tridiagonal matrix J are the zeros of p_{n+1}. J is called the Jacobi matrix of the orthonormal polynomial. Since
J is a symmetric matrix, all eigenvalues and thus all roots of the corresponding
orthogonal polynomial are real. From the matrix equation above we can also
derive the Christoffel-Darboux relation

For completeness we will also mention the Rodrigues formula, in which the n-th orthogonal polynomial is obtained from an n-fold derivative, multiplied by a factor independent of x. The Rodrigues formula allows one to derive directly a sequence of orthogonal polynomials via n-fold differentiation. It
is also possible to generate all orthogonal polynomials of a certain kind from a
single two-variable function by repeated differentiation. This function is called
generating function of the orthogonal polynomial. Assume that a function G(x, t) fulfills

G(x, t) = Sum_n c_n p_n(x) t^n;

then the orthogonal polynomial is (up to normalization) given by the n-th partial derivative of G with respect to t, evaluated at t = 0.
Some examples will be listed in the following subsections.


One of the most important and useful applications of orthogonal polynomials occurs in the quest for numerical integration techniques. Let f(x) be a polynomial of degree at most 2n – 1. Then the formula

Int_a^b f(x) w(x) dx = Sum_{i=1}^{n} w_i f(x_i),

with weights w_i and the zeros x_i of the orthogonal polynomial p_n(x), is called Gauss quadrature. The generalized Lagrangian interpolation polynomials

fulfill clearly L_i(x_j) = delta_ij, and hence the quadrature weights can be computed by w_i = Int_a^b L_i(x) w(x) dx.

The Gauss quadrature is one of the most successful integration techniques for approximating integrals over general functions. Depending on the selected orthogonal polynomial, which of course also depends on the integration interval, the Gauss quadrature takes the polynomial name as an additional classifying label, e.g., Gauss-Legendre quadrature.
In the following subsections we will tabulate important properties and discuss
applications and numerical aspects of some selected polynomials.
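A concrete sketch of such a quadrature is the following (anticipating the Legendre polynomials of the next subsection; the node search by Newton's method and all tolerances are our own choices). The nodes are computed from the three-term recurrence, and the weights from the standard Gauss-Legendre relation w_i = 2 / ((1 – x_i²) P_n'(x_i)²).

```python
import math

def legendre_and_derivative(n, x):
    """Evaluate P_n(x) and P_n'(x) from the recurrence
    (l+1) P_{l+1} = (2l+1) x P_l - l P_{l-1}."""
    p0, p1 = 1.0, x
    for l in range(1, n):
        p0, p1 = p1, ((2 * l + 1) * x * p1 - l * p0) / (l + 1)
    # derivative from (1 - x^2) P_n' = n (P_{n-1} - x P_n)
    dp = n * (p0 - x * p1) / (1.0 - x * x)
    return p1, dp

def gauss_legendre(n):
    """Nodes and weights of the n-point Gauss-Legendre quadrature on [-1, 1].
    Nodes by Newton's method from a standard cosine initial guess."""
    nodes, weights = [], []
    for i in range(n):
        x = math.cos(math.pi * (i + 0.75) / (n + 0.5))  # initial guess
        for _ in range(100):
            p, dp = legendre_and_derivative(n, x)
            step = -p / dp
            x += step
            if abs(step) < 1e-15:
                break
        p, dp = legendre_and_derivative(n, x)
        nodes.append(x)
        weights.append(2.0 / ((1 - x * x) * dp * dp))
    return nodes, weights
```

A five-point rule obtained this way integrates polynomials up to degree nine exactly, e.g. the integral of x^8 over [–1, 1], which equals 2/9.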

3.2 LEGENDRE POLYNOMIALS


The spherical harmonics are the eigenfunctions of the angular momentum operator. These functions have an explicit representation which is based on the Legendre polynomials, respectively the associated Legendre functions. Applications can be found not only in quantum mechanics but also in electrodynamics, gravitational physics, potential theory, and any other theory in which the Laplace operator plays an important role. Thus there is a large body of literature about Legendre functions and their applications.
The Legendre polynomial P_l(x) is a polynomial of degree l defined by (Rodrigues formula)

P_l(x) = (1 / (2^l l!)) (d/dx)^l (x² – 1)^l,

with l zeros in the interval [–1, 1]. The associated Legendre functions are defined by

The associated Legendre function P_l^m(x) has l – m nodes in the interior of the interval [–1, 1] and is, for even m, a polynomial of degree l, sometimes called the associated Legendre polynomial. With respect
to the discrete variable method only those interior zeros of the associated
Legendre function are of importance. Associated Legendre functions with
respect to negative index m are defined by

Legendre functions are useful in situations in which solutions of the quantum system can be computed efficiently by an expansion with respect to spherical harmonics or, in the spirit of this discussion, by the corresponding discrete variable expansion. Note that the efficiency of the discrete variable
discrete variable expansion. Note, that the efficiency of the discrete variable
method does not necessitate that a spherical expansion of the wave function
is efficient. Hence, discrete variable techniques are not related to symmetric
or near-symmetric situations of the quantum system under consideration. In
dependence of the necessary numerical accuracy we have to choose a certain
number of grid points, which are given by the zeros of the (associated) Leg-
endre polynomial. Increasing the number of grid points leads to higher order
polynomials. In Fig.(6.1) some examples of Legendre polynomials are shown
and one important disadvantage of discrete variable techniques is uncovered.
Suppose our Hamiltonian potential, as a function of the discrete variable coordinate, shows strongly localized peak structures. In such a situation increasing the numerical accuracy necessitates a higher number of grid points only locally. Doubling the number of grid points in one sub-interval, however, also doubles the number of zeros in other areas of the interval, and hence we have to pay an unnecessarily high computational price. This is shown in Fig.(6.1) top to bottom. On top the polynomial degree increases from 7 to 9 and hence the number of grid points is increased, but in a smooth way, polynomially distributed. On the bottom the polynomial degree is roughly doubled, which leads to the numerically required local increase of nodes, but also to approximately a doubling of zeros in areas where it might not be necessary. Hence in situations in which the potential as a function of the discrete variable coordinate shows only locally strong fluctuations, discrete variable techniques are numerically not the most efficient technique.
In such situations it might be more favorable to resort to finite element methods
as described in the next chapter.
To compute associated Legendre functions the following recursion relation is very useful:

(l – m) P_l^m(x) = x (2l – 1) P_{l–1}^m(x) – (l + m – 1) P_{l–2}^m(x).

By starting with P_m^m we get secondly P_{m+1}^m(x) = x (2m + 1) P_m^m(x), because P_{m–1}^m vanishes identically. Some examples are listed in the following equation
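A minimal Python sketch of this upward recursion follows (the Condon-Shortley phase (–1)^m is assumed here in the starting value; as emphasized elsewhere in this section for other special functions, sign and normalization conventions differ between tables).

```python
import math

def assoc_legendre(l, m, x):
    """Associated Legendre function P_l^m(x) for 0 <= m <= l, |x| <= 1,
    with the Condon-Shortley phase (-1)^m (conventions differ!).
    Upward recursion in l:
    (l-m) P_l^m = x (2l-1) P_{l-1}^m - (l+m-1) P_{l-2}^m,
    started from P_m^m = (-1)^m (2m-1)!! (1-x^2)^{m/2}."""
    pmm = 1.0
    somx2 = math.sqrt((1.0 - x) * (1.0 + x))
    fact = 1.0
    for _ in range(m):                     # build P_m^m
        pmm *= -fact * somx2
        fact += 2.0
    if l == m:
        return pmm
    pmmp1 = x * (2 * m + 1) * pmm          # P_{m+1}^m
    for ll in range(m + 2, l + 1):
        pmm, pmmp1 = pmmp1, (x * (2 * ll - 1) * pmmp1
                             - (ll + m - 1) * pmm) / (ll - m)
    return pmmp1
```

For m = 0 the routine reduces to the ordinary Legendre polynomials, e.g. P_2(1/2) = –1/8.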



Legendre polynomials are connected with hypergeometric functions by


Murphy’s equation

The associated Legendre differential equation is given by

which reduces for m = 0 to the Legendre differential equation. It is straightforward to show by the Rodrigues equation that P_n(x) is a solution of the Legendre differential equation for integer n. Legendre polynomials, respectively the
associated Legendre functions, hold the following orthogonality relation

By orthonormalizing the Legendre polynomials we arrive at the recursion relation for the normalized polynomials, from which we can derive directly the Jacobi matrix (6.37) to compute the zeros of the Legendre polynomial: its diagonal elements vanish and the off-diagonal elements are given by J_{n,n+1} = (n + 1)/sqrt((2n + 1)(2n + 3)).

By normalizing the associated Legendre functions a similar equation can be derived for the roots of the associated Legendre polynomials. Eigenvalues of tridiagonal matrices can be computed easily by the QL- or QR-algorithm [18]. Hence this is an alternative to the Müller method, which we will explain in subsection (3.7).
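The following sketch illustrates the Jacobi-matrix route for the Legendre polynomial. For brevity it does not implement the QR- or QL-algorithm; instead it uses a bisection based on the Sturm sequence of the tridiagonal matrix, a standard alternative for symmetric tridiagonal eigenvalue problems (the off-diagonal elements k/sqrt(4k² – 1) follow from the normalized recursion).

```python
def legendre_nodes_bisection(n):
    """Zeros of P_n as eigenvalues of its symmetric tridiagonal Jacobi
    matrix: zero diagonal, off-diagonal b_k = k / sqrt(4 k^2 - 1).
    The number of negative values of the Sturm sequence
    d_1 = a_1 - x,  d_k = (a_k - x) - b_{k-1}^2 / d_{k-1}
    equals the number of eigenvalues below x; bisection then isolates
    each eigenvalue (here a_k = 0 for all k)."""
    b2 = [(k * k) / (4.0 * k * k - 1.0) for k in range(1, n)]  # b_k^2

    def count_below(x):
        cnt, d = 0, -x
        if d < 0:
            cnt += 1
        for k in range(1, n):
            if d == 0.0:
                d = 1e-300                  # guard against division by zero
            d = -x - b2[k - 1] / d
            if d < 0:
                cnt += 1
        return cnt

    roots = []
    for i in range(n):                      # (i+1)-th smallest eigenvalue
        lo, hi = -1.0, 1.0                  # all zeros lie in [-1, 1]
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            if count_below(mid) <= i:
                lo = mid
            else:
                hi = mid
        roots.append(0.5 * (lo + hi))
    return roots
```

For n = 4 the routine must reproduce the well-known Gauss-Legendre nodes ±0.3399810436 and ±0.8611363116.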
The geometric and historical background of the Legendre polynomials is due to the expansion of the gravitational potential in the radial distance. Suppose we have a point mass at space point A and we want to derive the gravitational potential at point P. The distances to the origin are r' and r, with the enclosed angle theta between them. Hence the distance between point A and P is R = sqrt(r² + r'² – 2 r r' cos theta), and thus 1/R could be

expanded for r' < r in powers of r'/r, and for r < r' in powers of r/r'. Let us call this relative variable h; then the coefficients of this expansion,

1/sqrt(1 – 2xh + h²) = Sum_{l=0}^{inf} P_l(x) h^l,   x = cos theta,

are the Legendre polynomials. Thus this equation can also serve as a generating function for the Legendre polynomials. The derivative of the associated Legendre function can be computed by

and the parity relation is given by

More equations and relations for Legendre polynomials and associated Legendre functions and their relation to other special functions can be found, e.g., in [19, 20].

3.3 LAGUERRE POLYNOMIALS


Generalized Laguerre polynomials L_n^alpha(x) are polynomials of degree n in x. Their Rodrigues equation is given by

L_n^alpha(x) = (e^x x^{–alpha} / n!) (d/dx)^n (e^{–x} x^{n+alpha}).

Generalized Laguerre polynomials play an important role as part of the solution


of the radial Schrödinger equation

with

The orthogonality relation for the generalized Laguerre polynomials reads

The following recursion relation is very useful, because it allows a numerically efficient and stable evaluation of the Laguerre polynomial for a given index:

(n + 1) L_{n+1}^alpha(x) = (2n + 1 + alpha – x) L_n^alpha(x) – (n + alpha) L_{n–1}^alpha(x),

with

Similar to the recursion relation for the Legendre polynomial the equation above
can be reformulated to obtain a recursion relation for the normalized generalized
Laguerre polynomial, which possesses the same zeros as the standard ones.
Hence we obtain again a Jacobi formulation which allows computing the zeros.
The following equation shows as an example the Jacobi-recursion for the
Laguerre polynomial:

and hence the Jacobi matrix becomes

The generating function for the generalized Laguerre polynomial is given


by

and the derivative by

Some relations with the Hermite polynomials will be listed in the next subsection; more equations and relations can be found in many tables and quantum mechanics textbooks. Unfortunately, in some publications the Laguerre polynomials contain an additional factor n!, and some publications make use of differently normalized identities. Hence in any case one has to take care about the selected definition.
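The recursive evaluation above can be sketched as follows (a toy implementation in the convention without the additional n! factor; the variable names are our own choices).

```python
def laguerre(n, alpha, x):
    """Generalized Laguerre polynomial L_n^alpha(x) by upward recursion
    (n+1) L_{n+1} = (2n + 1 + alpha - x) L_n - (n + alpha) L_{n-1},
    with L_0 = 1 and L_1 = 1 + alpha - x (check the convention of your
    tables before comparing with other sources!)."""
    if n == 0:
        return 1.0
    p0, p1 = 1.0, 1.0 + alpha - x
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1 + alpha - x) * p1 - (k + alpha) * p0) / (k + 1)
    return p1
```

Simple checks: L_2(x) = (x² – 4x + 2)/2, so L_2(1) = –1/2, and L_1^1(x) = 2 – x.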

3.4 HERMITE POLYNOMIALS


Hermite polynomials play an important role in quantum mechanics because the eigenfunctions of the harmonic oscillator (in dimensionless oscillator units) are given by

psi_n(x) = N_n H_n(x) e^{–x²/2},

with H_n(x) the Hermite polynomial of n-th order and N_n a normalization constant. The Hermite polynomials


hold the orthogonality relation

Int_{–inf}^{+inf} H_m(x) H_n(x) e^{–x²} dx = 2^n n! sqrt(pi) delta_mn,

and are defined by the Rodrigues formula

H_n(x) = (–1)^n e^{x²} (d/dx)^n e^{–x²},

and the generating function reads

e^{2xt – t²} = Sum_{n=0}^{inf} H_n(x) t^n / n!.

The following recursion formula is of importance for the computational approach:

H_{n+1}(x) = 2x H_n(x) – 2n H_{n–1}(x),

with the starting values H_0(x) = 1 and H_1(x) = 2x.
By normalizing the Hermite polynomial

we obtain

and hence the Jacobi matrix, whose eigenvalues are the zeros of the Hermite polynomial, becomes tridiagonal with vanishing diagonal and off-diagonal elements J_{n,n+1} = sqrt((n + 1)/2).
The derivative of the Hermite polynomials satisfies

H_n'(x) = 2n H_{n–1}(x),

which leads together with the above recursion formula to

and

Thus the Hermite differential equation is given by

H_n''(x) – 2x H_n'(x) + 2n H_n(x) = 0.

H_{2n}(x) are even and H_{2n+1}(x) are odd polynomials in x. Therefore we obtain for the parity of the Hermite polynomial

H_n(–x) = (–1)^n H_n(x),

and they are connected with the confluent hypergeometric function by

Because the Laguerre polynomials satisfy the corresponding relations in the variable x², we obtain the following connection between Laguerre and Hermite polynomials of definite parity:

H_{2n}(x) = (–1)^n 2^{2n} n! L_n^{–1/2}(x²),   H_{2n+1}(x) = (–1)^n 2^{2n+1} n! x L_n^{1/2}(x²).
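The recursion and derivative relations above translate into a few lines of code (a toy implementation with our own function names; for large n and x one would evaluate the orthonormalized polynomials instead, to avoid overflow).

```python
def hermite(n, x):
    """Hermite polynomial H_n(x) from the recursion
    H_{n+1} = 2x H_n - 2n H_{n-1}, with H_0 = 1, H_1 = 2x."""
    if n == 0:
        return 1.0
    h0, h1 = 1.0, 2.0 * x
    for k in range(1, n):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * k * h0
    return h1

def hermite_derivative(n, x):
    """Derivative via H_n'(x) = 2 n H_{n-1}(x)."""
    return 0.0 if n == 0 else 2.0 * n * hermite(n - 1, x)
```

Quick checks: H_3(x) = 8x³ – 12x, so H_3(1) = –4 and H_3'(1) = 12, and the parity relation H_n(–x) = (–1)^n H_n(x) must hold.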

3.5 BESSEL FUNCTIONS


Bessel functions were introduced into science for the first time in 1824 by Bessel, in the context of astronomical questions. In quantum dynamics Bessel functions are of importance with respect to scattering solutions of those systems in which the potential decays faster than the centrifugal potential. In such situations the radial function can be asymptotically written as a linear combination of spherical Bessel functions (see below).
The ordinary Bessel function is a solution of the following second order
differential equation

and fulfills the regular solution near the origin

This ordinary Bessel function is connected with the spherical Bessel function
by

Note, that unfortunately in the literature there are different definitions for the
spherical Bessel function. A common definition is also

and thus The derivative of the Bessel function with integer


index n is given by

and fulfills the following recursion relation

All Bessel functions with integer index can be derived from by

with

The Rodrigues formula of the spherical Bessel function is given by

and they both fulfill the same recursion relation (similar to the ordinary Bessel
function)

The first two functions are given by

j_0(x) = sin x / x,   j_1(x) = sin x / x² – cos x / x.

Many more relations for integer and half-integer Bessel functions and some integral definitions can be found in [19]. In many situations in which the analytic solution of the Schrödinger equation can be obtained by Bessel functions, it is nevertheless less laborious to obtain the solution directly by optimized numerical routines.
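As a small illustration of such a direct numerical evaluation, the following sketch computes spherical Bessel functions by downward (Miller-type) recursion with subsequent normalization to j_0; the starting offset and seed are our own choices. Upward recursion in the order, while algebraically identical, is numerically unstable for n > x.

```python
import math

def spherical_jn(n, x):
    """Spherical Bessel function j_n(x) for x > 0 via downward recursion
    j_{k-1} = (2k+1)/x * j_k - j_{k+1}, started from an arbitrary seed
    well above n and normalized with the exact j_0 = sin(x)/x."""
    if n == 0:
        return math.sin(x) / x
    m = n + 15 + int(x)            # starting order well above n
    jp, j = 0.0, 1e-30             # seed values j_{m+1}, j_m
    result = 0.0
    for k in range(m, 0, -1):
        jm = (2 * k + 1) / x * j - jp   # unnormalized j_{k-1}
        jp, j = j, jm
        if k - 1 == n:
            result = j                  # store unnormalized j_n
    return result * (math.sin(x) / x) / j   # j now holds unnormalized j_0
```

The result can be checked against the closed forms j_1(x) = sin x / x² – cos x / x and j_2(x) = (3/x³ – 1/x) sin x – (3/x²) cos x.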

3.6 GEGENBAUER POLYNOMIALS


In the coordinate representation the momentum operator becomes a deriva-
tive operator and the position operator is multiplicative. In the momentum
representation this is vice versa. The eigenfunctions of the hydrogen atom in
the momentum representation are discussed nicely in [21]. The Gegenbauer
or ultraspherical polynomials characterize the momentum wave func-
tion of the Coulomb system in a similar way as the Laguerre polynomials in
the coordinate representation. Both are connected with each other by Fourier
transformation. The generating function of the Gegenbauer polynomial is

Hence for lambda = 1/2 the Gegenbauer polynomials equal the Legendre polynomials, see Eq.(6.58), and thus the C_n^lambda(x) are generalizations of the Legendre polynomials.
The recurrence formula for fixed index lambda is

n C_n^lambda(x) = 2x (n + lambda – 1) C_{n–1}^lambda(x) – (n + 2 lambda – 2) C_{n–2}^lambda(x),

with

and the derivative holds

The orthogonality relation is given by

and its Rodrigues equation by

and the are solutions of the following differential equation

More relations, special values and representations by other hypergeometric


functions and other orthogonal polynomials can be found in [20].
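The recurrence can be sketched as follows (with the conventional starting values C_0 = 1 and C_1^lambda(x) = 2 lambda x; for lambda = 1/2 the routine must reproduce the Legendre polynomials, for lambda = 1 the Chebyshev polynomials of the second kind).

```python
def gegenbauer(n, lam, x):
    """Gegenbauer (ultraspherical) polynomial C_n^lambda(x) from
    n C_n = 2x (n + lambda - 1) C_{n-1} - (n + 2 lambda - 2) C_{n-2},
    with C_0 = 1 and C_1 = 2 lambda x."""
    if n == 0:
        return 1.0
    c0, c1 = 1.0, 2.0 * lam * x
    for k in range(2, n + 1):
        c0, c1 = c1, (2.0 * x * (k + lam - 1.0) * c1
                      - (k + 2.0 * lam - 2.0) * c0) / k
    return c1
```

E.g., C_2^{1/2}(1/2) = P_2(1/2) = –1/8 and C_2^1(1/2) = U_2(1/2) = 4·(1/4) – 1 = 0.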

3.7 COMPUTATION OF NODES


The computation of nodes means the computation of the roots of polynomials. One way is to use the Jacobi representation of the recursion relation, see Eq.(6.37). Of course it is straightforward to generalize this equation to arbitrary orthogonal polynomials, leading to generalized eigenvalue problems. Numerically this means simply shifting the normalization of the polynomial into the generalized eigenvalue problem. Because the normalization matrix is diagonal, this is in both cases simply a multiplication by numbers, and hence there is no advantage of one computational way over the other.
One of the numerically most stable ways to compute the eigenvalues of tridiagonal matrices is the QR-decomposition. Once the Jacobi matrix is derived using the relevant recursion relation, the Jacobi matrix J will be decomposed into two matrices, J = QR, with Q an orthogonal and R an upper triangular matrix. By the iterative process J_{k+1} = R_k Q_k we finally arrive at a diagonal matrix. Because

J_{k+1} = R_k Q_k = Q_k^T J_k Q_k,

with Q^T denoting the transposed matrix, and as all matrices Q_k are orthogonal by construction, J_k and J_{k+1} possess the same eigenvalues, and hence the

diagonal limit matrix consists of the eigenvalues of the original Jacobi matrix J. For general symmetric eigenvalue problems QR-decompositions are not very effective, because they necessitate of the order of n³ operations per iteration to obtain the results. For tridiagonal symmetric matrices this is reduced to of the order of n operations per iteration, making the algorithm numerically highly efficient. QR- and QL-decompositions are equivalent; the only difference is that in QL-decompositions the matrix will be decomposed into an orthogonal and a lower triangular matrix instead of an upper triangular matrix. QR and QL algorithms can be found in most standard linear algebra libraries, e.g., in the NAG-, Lapack- or the older Eispack-library, and additionally in many textbooks about numerical methods.
Frequently found in the literature is an iterative computation of the characteristic polynomial of the tridiagonal matrix. If D_n(lambda) denotes the determinant of the n-dimensional Jacobi eigenvalue problem, det(J_n – lambda I_n), with I_n the n-dimensional identity matrix, then the determinant can be computed by a three-term recursion in n. For orthogonal polynomials this approach means simply recovering the original polynomial root problem and is of no computational help. In general the computation of large determinants should be numerically avoided, because the roundoff errors in building differences between large numbers could lead to an erroneous result, and hence the algorithm mentioned above could become numerically unstable.
In many situations the simplest way is to compute directly the zeros of the orthogonal polynomial or special function under consideration. Perhaps the most celebrated of all one-dimensional root-finding routines is Newton's method. The basic idea is to extrapolate the local derivative to find the next estimate of the zero by the iterative process

x_{k+1} = x_k – f(x_k) / f'(x_k).
The advantage of orthogonal polynomials is that we know how many zeros exist and in which real interval. Nevertheless, for polynomials of higher order we might fail. One of the problems is that, due to the derivative, the next estimated point is the crossing of the tangent with the x-axis, and this could lie outside the definition interval. To overcome this problem it might be necessary to start with a finer and finer coarse grid of starting points in the theoretically allowed interval. A second cause of failure is running into a non-convergent cycle due to the same tangent at two successive iterative points. Hence it is useful to use more advanced techniques like the Müller method. The Müller method generalizes the secant method, in which the next estimate of the zero is taken where the approximating line, a secant through two functional values, crosses the axis. The first two guesses are points for which the polynomial values lie on opposite sides of the axis. Müller's method uses a quadratic interpolation among three points, instead of the linear approximation of the secant method. Given three previous guesses for the zero of the polynomial, the next approximation is computed by the following iteration procedure

In principle this method is not restricted to orthogonal polynomials but applies to general functions, and complex roots can also be computed.
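A compact sketch of the Müller iteration follows (a generic implementation with our own stopping criteria; the complex square root is what makes complex roots accessible).

```python
import cmath

def muller(f, x0, x1, x2, tol=1e-13, maxit=60):
    """Root search by Mueller's method: fit a parabola through the last
    three iterates and step to the nearer of its two roots."""
    for _ in range(maxit):
        f0, f1, f2 = f(x0), f(x1), f(x2)
        h1, h2 = x1 - x0, x2 - x1
        d1, d2 = (f1 - f0) / h1, (f2 - f1) / h2
        a = (d2 - d1) / (h2 + h1)          # curvature of the parabola
        b = a * h2 + d2                    # slope at x2
        disc = cmath.sqrt(b * b - 4 * f2 * a)
        # choose the denominator of larger modulus (numerical stability)
        denom = b + disc if abs(b + disc) > abs(b - disc) else b - disc
        dx = -2 * f2 / denom
        x0, x1, x2 = x1, x2, x2 + dx
        if abs(dx) < tol:
            break
    return x2
```

Applied to the Legendre polynomial P_3(x) = (5x³ – 3x)/2 with starting guesses 0.5, 0.7, 0.9, the iteration converges to the positive root sqrt(3/5).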

3.8 MISCELLANEOUS
Different basis representations lead to different structures of the corresponding operator matrix. One of the simplest computational forms, besides diagonal representations, is the tridiagonal representation. The radial Coulomb Hamiltonian can be transformed directly into such a tridiagonal form. The orthogonal polynomials associated with the corresponding basis are the Pollaczek polynomials [22]. The Schrödinger equation of the Coulomb problem has an exact solution in both spherical and parabolic coordinates (see Chapter 2). The wave functions in both coordinate systems are connected by Clebsch-Gordan coefficients. These coefficients can be expressed in terms of the Hahn polynomials. For details see [13, 16].
Hypergeometric series or functions play an important role in the theory of partial differential equations. They are a generalization of the geometric series, defined by

2F1(a, b; c; x) = Sum_{n=0}^{inf} [(a)_n (b)_n / (c)_n] x^n / n!,

with

(a)_n = a (a + 1) … (a + n – 1) = Gamma(a + n)/Gamma(a)

the Pochhammer symbol. The derivative of the hypergeometric function is given by

and because of

Hypergeometric functions are related to many special functions and orthogonal polynomials. For more details see, e.g., [19, 20]. To obtain starting values for iterative computations the Gauss theorem

2F1(a, b; c; 1) = Gamma(c) Gamma(c – a – b) / (Gamma(c – a) Gamma(c – b)),   Re(c – a – b) > 0,

is helpful. The hypergeometric differential equation is given by

Well-known examples of quantum systems which lead exactly to this hypergeometric differential equation are molecules approximated by a symmetric top, see, e.g., problem 46 in [23].
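A direct termwise summation of the series is the simplest numerical sketch (the fixed truncation is our own choice and is adequate only well inside the convergence region |x| < 1).

```python
def hyp2f1(a, b, c, x, terms=300):
    """Gauss hypergeometric series
    2F1(a,b;c;x) = sum_n (a)_n (b)_n / (c)_n * x^n / n!,
    summed termwise via the ratio of successive terms."""
    term, total = 1.0, 1.0
    for n in range(terms):
        term *= (a + n) * (b + n) / (c + n) * x / (n + 1)
        total += term
    return total
```

Convenient analytic checks are 2F1(1, 1; 2; x) = –ln(1 – x)/x and 2F1(a, b; b; x) = (1 – x)^{–a}.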
By substituting x → x/b and taking the limit b → infinity we arrive at a new differential equation, the confluent hypergeometric equation, and its solution is given by the confluent hypergeometric function

Important applications for the confluent hypergeometric function are scatter-


ing problems in atomic systems and, beyond quantum dynamics, boundary
problems in potential theory. Hypergeometric functions are generalized by the functions pFq(a_1, …, a_p; b_1, …, b_q; x) with p numerator and q denominator parameters.

E.g., 3F2 is connected with the already mentioned Hahn polynomials. More important is the following relation with the Jacobi polynomials:

There are several equivalent definitions, but unfortunately also some differing ones in the mathematical literature. Hence in any case the exact definition has to be checked before using different tables.
The Jacobi polynomial follows the three-term recursion relation

and its generating function

with

From this equation it is straightforward to compute the lowest two contributions to the recursion relation above. The Jacobi polynomials are the only integral rational solutions of the hypergeometric differential equation. Many orthogonal polynomials can be treated as Jacobi polynomials with special indices. Among the most important relations are the Legendre polynomial, P_n(x) = P_n^{(0,0)}(x), and the Gegenbauer polynomial
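The three-term recursion for the Jacobi polynomials can be sketched as follows (coefficients as in Abramowitz & Stegun, Eq. 22.7.1 — one of the conventions just cautioned about; for alpha = beta = 0 the routine must reproduce the Legendre polynomials).

```python
def jacobi_poly(n, a, b, x):
    """Jacobi polynomial P_n^{(a,b)}(x) by the three-term recursion
    (Abramowitz & Stegun 22.7.1), started from
    P_0 = 1 and P_1 = (a - b)/2 + (a + b + 2) x / 2."""
    if n == 0:
        return 1.0
    p0, p1 = 1.0, 0.5 * (a - b) + 0.5 * (a + b + 2.0) * x
    for k in range(2, n + 1):
        s = 2.0 * k + a + b
        c1 = 2.0 * k * (k + a + b) * (s - 2.0)
        c2 = (s - 1.0) * (s * (s - 2.0) * x + a * a - b * b)
        c3 = 2.0 * (k + a - 1.0) * (k + b - 1.0) * s
        p0, p1 = p1, (c2 * p1 - c3 * p0) / c1
    return p1
```

E.g., P_2^{(0,0)}(1/2) = P_2(1/2) = –1/8 and P_3^{(0,0)}(1/2) = P_3(1/2) = –7/16.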

4. EXAMPLES
This chapter is devoted to some applications of the discrete variable method.
The harmonic oscillator and the hydrogen atom are particularly important phys-
ical systems. Actually, a large number of systems are, at least approximately, governed by the corresponding Schrödinger equations, and an innumerable number of physical systems build on these two. The eigenfunctions of the harmonic oscillator (Kepler system) are related to Hermite (Legendre) polynomials. Therefore let us start with the Hermite polynomials, followed by an example for Legendre polynomials.

4.1 HERMITE POLYNOMIALS AND THE


ANHARMONIC OSCILLATOR

As already mentioned above, results obtained in the study of oscillator systems are applicable to numerous cases in physics, e.g., vibration of the nuclei in diatomic molecules, torsional oscillations in molecules, motion of a muon inside a heavy nucleus, giant resonances in atomic nuclei, to name only a few. As a didactic example for the discrete variable technique we will discuss the anharmonic oscillator with a general fourth-order potential.
The Hamiltonian of the anharmonic oscillator reads

By using the ladder operators

we obtain

where the two anharmonicity constants are the prefactors of the cubic, respectively the quartic, term. In first-order perturbation theory we get for the energy a shift from the quartic term only: because <n|x³|n> = 0 by parity, the energy does not undergo an alteration due to the cubic potential in first order. For a vanishing fourth-order term the potential curve exhibits a maximum, so that the bound states die out at higher energy. Strictly speaking, even the low-lying states become quasi-bound by coupling to the continuum via tunneling.
For the discrete variable computation we use the following ansatz for our wave function: an expansion in the harmonic oscillator eigenstates, evaluated at the roots of the Hermite polynomial. Because the basis functions are harmonic oscillator eigenstates, the matrix elements can

be computed easily. By this ansatz it is straightforward, and a standard textbook example, to show that it satisfies the Hermite differential equation. Hence for the harmonic oscillator we obtain

and because the potential term is multiplicative (see also Eq.(6.12))

Thus we have to solve the following eigenvalue problem

Of course it is straightforward to generalize this result to arbitrary potentials by evaluating the potential at the grid points. For even potentials, eigenfunctions of even and odd parity do not mix, and the computations can be optimized by projecting the ansatz above onto the even, respectively odd, parity subspace. Hence the dimension of the eigenvalue problem will be reduced from N + 1 to (N + 1)/2 for N odd, and for N even to (N + 2)/2 for the positive parity and to N/2 for the negative parity states.
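The matrix formulation can be illustrated by a closely related spectral variant (our own sketch, not the grid DVR of the text: instead of evaluating the potential on the Hermite grid, the exact x⁴ matrix is built from the ladder-operator representation of x, whose only non-zero elements are X[n][n+1] = X[n+1][n] = sqrt((n+1)/2); a simple cyclic Jacobi rotation stands in for a library QR/QL eigensolver, and basis sizes and tolerances are arbitrary choices).

```python
import math

def anharmonic_spectrum(lam, nbasis=32, nstates=3):
    """Lowest eigenvalues of H = p^2/2 + x^2/2 + lam * x^4 in the
    harmonic-oscillator basis: harmonic part diagonal (n + 1/2),
    quartic part as the fourth power of the position matrix."""
    n = nbasis
    X = [[0.0] * n for _ in range(n)]
    for k in range(n - 1):
        X[k][k + 1] = X[k + 1][k] = math.sqrt((k + 1) / 2.0)

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(n))
                 for j in range(n)] for i in range(n)]

    X4 = matmul(matmul(X, X), matmul(X, X))
    H = [[lam * X4[i][j] + ((i + 0.5) if i == j else 0.0)
          for j in range(n)] for i in range(n)]

    # cyclic Jacobi rotations (stand-in for a library QR/QL solver)
    for _ in range(30):
        off = sum(H[i][j] ** 2 for i in range(n) for j in range(n) if i != j)
        if off < 1e-20:
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(H[p][q]) < 1e-15:
                    continue
                th = 0.5 * math.atan2(2.0 * H[p][q], H[q][q] - H[p][p])
                c, s = math.cos(th), math.sin(th)
                for k in range(n):          # column rotation
                    hkp, hkq = H[k][p], H[k][q]
                    H[k][p] = c * hkp - s * hkq
                    H[k][q] = s * hkp + c * hkq
                for k in range(n):          # row rotation
                    hpk, hqk = H[p][k], H[q][k]
                    H[p][k] = c * hpk - s * hqk
                    H[q][k] = s * hpk + c * hqk
    return sorted(H[i][i] for i in range(n))[:nstates]
```

For lam = 0 the exact harmonic spectrum n + 1/2 must be recovered; for lam > 0 the ground state is bounded from above by the variational estimate <0|H|0> = 1/2 + 3 lam/4 and from below by 1/2, since x⁴ is a positive operator.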
As a concrete example we will first discuss the quartic anharmonic oscillator.
For the positive/negative parity eigenstates we have to solve the following
eigenvalue equation for the positive parity states

respectively

for the negative parity eigenstates, with the potential projected onto the positive/negative subspace. In Fig.(6.2) we compare some results for the 2nd excited state obtained by 1st-order perturbation theory, by DVR with a 3- and a 6-dimensional positive parity matrix, and by diagonalization of a positive parity Hamiltonian matrix. The results computed by the Hamiltonian matrix and by the 6-dimensional DVR are in excellent agreement. As expected, the perturbational results are much worse. In Fig.(6.3) some DVR results computed with an 11-dimensional matrix are plotted. For comparison the analytic harmonic oscillator results are superimposed. On the left-hand side some results for the cubic anharmonic oscillator are plotted. With increasing anharmonicity constant the potential curve reaches its extrema at lower and lower energy. At the bottom, only the lowest two eigenstates 'survive'; all others lie in the continuum. This is an example where one has to exercise caution: by construction the DVR method will produce real eigenvalues, even if only resonances survive. Nevertheless, resonances can also be computed by combining


the discrete variable method with complex coordinate rotation, or by taking the

correct boundary condition into account. In the complex coordinate method the momentum operator is mapped onto p exp(–i theta) and the coordinate operator onto x exp(i theta). Therefore, if P and X are the relevant matrices, we obtain the complex rotated Hamiltonian matrix from the same building blocks. E.g., the first excited state shown at the left-hand side bottom of Fig.(6.3) will have a complex eigenvalue whose width is still sufficiently small. We will discuss some results in the context of the finite element technique in the next chapter. On the right-hand side of Fig.(6.3) we present some results for a positive quartic contribution. In this case this contribution will dominate the potential for sufficiently large distances from the origin, and hence all eigenstates remain real. Of course this would not be true for a negative quartic term, which we will discuss as an example in chapter 6.

4.2 LEGENDRE POLYNOMIALS AND COULOMB


SYSTEMS
In this section we will apply the discrete variable method to Legendre polynomials and, more generally, to spherical harmonics. Possible applications are, besides the top, any one-particle Hamiltonian. As an example we will discuss Alkali atoms in external fields. This example has the advantage that it is not only of interest in principle but in addition uncovers many computational aspects related to Legendre polynomials.
The kinetic energy of a single electron in a magnetic field is given by (p – qA)²/2m, with A the vector potential and q the charge. In the symmetric gauge A = (B × r)/2 this leads to three contributions: the field-free kinetic energy, the paramagnetic contribution, which leads to the well-known Zeeman effect, and the diamagnetic contribution, which becomes important for extremely strong magnetic fields or for highly excited states. Hence the Hamiltonian for a one-electron atom with infinite nuclear mass becomes, in the symmetric gauge, appropriate units and spherical coordinates,

where the magnetic field axis points into the z-direction. For the hydrogen atom in external magnetic and electric fields the potential becomes

with the magnetic induction and the electric field strength measured in the corresponding atomic units, and with the enclosed angle between the two external fields. The energy is measured in Hartrees (27.21 eV) and lengths in units of the Bohr radius. Effective one-electron systems like Alkali atoms and Alkali-like ions can be treated by a suitable phenomenological potential, which mimics the multi-electron core. The basic idea of model potentials is to represent the influence between the non-hydrogenic multi-electron core and the valence electron by a semi-empirical extension of the Coulomb term, which results in an analytical potential function. The influence of the non-hydrogenic core on the outer electron is represented by an exponential extension to the Coulomb term [24]:

Z is the nuclear charge and the ionization stage enters as a parameter. (For neutral atoms the ionization stage is 1, for singly ionized atoms 2, and correspondingly higher for multiply ionized atoms.) The coefficients are optimized numerically so as to reproduce the experimental field-free energy levels and hence the quantum defects of the Alkali atom or Alkali-like ion. In Tab.(6.1) we show some parameters

for Alkali-like ions. This model potential has the advantage that it has no additional higher-order singularity beyond the Coulomb singularity. In many computations, singularities up to order –2 are lifted due to the integration measure r² dr in spherical coordinates. The form of this model potential is based on the work of [24]; more data can be found in [25].

4.2.1 TIME-INDEPENDENT PROBLEMS


Let us now start to discuss the discrete variable method applied to the
Hamiltonian (6.128) above. Of course we could add in addition further potential
terms, e.g., a van-der-Waals interaction.
As already mentioned, the Legendre polynomials are related to the eigen-
functions of the angular momentum operator. Therefore a simple computational
ansatz is given by using the complete set of orthonormal eigenfunctions of the
angular momentum operator

where

and

As discussed in chapter (2.2), for a conserved magnetic quantum number the azimuthal coordinate can be separated off, and the N nodal points are given by the roots of the Legendre polynomial. In general we design a grid by choosing N nodal points with respect to the polar angle by the zeros of the (associated) Legendre polynomial and, for the azimuthal variable, N equidistant nodal points. The values of the first eigenfunctions (6.131) at these nodal points define a square matrix. For N odd this set becomes a Chebyshev system and hence the existence of the inverse matrix is guaranteed [26].
To simplify the computation we expand the wave function in terms of these grid eigenfunctions and the radial functions. This expansion has the advantage that all those parts of the Hamiltonian which are independent of angular derivative operators become diagonal:

For the “angular terms” in the Hamiltonian we get

with the corresponding coupling coefficients. Separating the radial function into its real and imaginary part leads to coupled uni-dimensional differential equations:

with

This system of differential equations can be written more clearly in matrix-vector notation

and hence finally

In general the coupling matrix K and thus the differential equation system
(6.138) will not be symmetric. As we will show this problem can only be
partially solved. But let us first discuss the two-dimensional case.

Cylindrical symmetry. For systems with cylindrical symmetry the magnetic quantum number is conserved, and hence the azimuthal degree of freedom can be separated off. In our example this is the case either for vanishing electric fields or for parallel electric and magnetic fields. Thus the angular momentum operator can be reduced to

Therefore we have only N grid points instead of N² grid points. In addition our radial wave function becomes real, and hence the system is reduced to a system of N coupled uni-dimensional differential equations in

with

Symmetrization. In order to keep the Hamiltonian matrix symmetric, we


have to normalize the three-dimensional wave function in the space of the
discrete variables

By defining

and in analogy to Eq.(4.2.1)



and renormalizing the differential equation (6.138) we arrive after some simple
algebraic gymnastics at

with

and

If we were able to design grid points and integration weights in such a way that the corresponding Gaussian quadrature became exact, the rows of the matrix would form an orthogonal system

and hence our normalization matrix (6.144) would become diagonal


In the case of the Legendre polynomials, or rather the associated Legendre
functions, the Gaussian integration is exact. Hence for cylindrical symmetry, in
which the discretization reduces to the Eq.(6.149) becomes exact.
Thus the normalization matrix is diagonal and the differential equation system
(6.146) becomes symmetric. For the full system the has also to
be discretized. is related to trigonometric functions and hence Eq.(6.149)
holds only approximately, i.e., only in the limit Therefore the
differential equation system (6.146) is only approximately symmetric.
In summary, by using the discrete variable method for spherical harmonics
the 3-dimensional Schrödinger equation is mapped onto a system of ordinary
differential equations in the radial coordinate. Hence finally we have to solve
this radial equation by a suitable method, e.g., by using the finite element
method as described in chapter (7).

4.2.2 SPECTROSCOPIC QUANTITIES


So far we have considered only the computation of wave functions and
energies. A complete discussion of spectral properties must also take into
account the probability that an atom or molecule undergoes a transition from
one state to another. In this context the dipole strength

becomes important. The dipole strength is by definition the transition matrix


element between two states and Related to the dipole
strength are the oscillator strength

and the transition probability

with the Bohr radius and the reduced mass. Due to the spherical harmonic
in the transition matrix element, only those transitions are allowed for
which the magnetic quantum number remains unchanged (linear polarization) or
changes by ±1 (circular polarization). (For more details see, e.g., [27].) Hence
from our ansatz (6.133) we get for the dipole strength

Both parts and of the integral can be computed independently.


The parameter q distinguishes between the different possible polarizations.
For q = 0 we have linear and for q = ±1 right and left circular
polarization.
To compute the angular part we rewrite the ansatz (6.131) in terms of
the spherical harmonics

Ergo, computing the angular integral reduces to integrating products of
three spherical harmonics, and hence well-known expressions from the theory of
angular momentum coupling can be used. Here we will make use of the Regge
symbols [28]. Of course it is straightforward to use or Clebsch-Gordan
coefficients instead of Regge symbols.

This representation has the advantage that it best reveals the symmetries of the
angular momentum coupling. The Regge symbol [ ] does not vanish only if all
its elements are non-negative integers and if each column and each row sums to
the sum of the three coupled angular momenta. Of course in our
case this leads immediately to the dipole selection rules
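These selection rules can be checked numerically. A small sketch (in Python with SymPy, which implements the equivalent Wigner 3j symbols rather than Regge symbols) contrasts a non-vanishing and a vanishing case; the chosen angular momenta are illustrative:

```python
from sympy.physics.wigner import wigner_3j

# The dipole angular integral over three spherical harmonics is proportional
# to 3j symbols of the form (l' 1 l; 0 0 0); these vanish unless l' = l +/- 1.
allowed = wigner_3j(2, 1, 1, 0, 0, 0)    # l = 1 -> l' = 2: non-zero
forbidden = wigner_3j(1, 1, 1, 0, 0, 0)  # Delta l = 0: vanishes
print(allowed, forbidden)
```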
Therefore we get the following results:
For

for

and for

Hence with these expressions we can compute directly



4.2.3 TIME-DEPENDENT PROBLEMS


In this chapter we will only add a few remarks on time-dependent problems.
E.g., in order to describe the time evolution of a laser-induced wave packet we
have to solve the time dependent Schrödinger equation

where H is the non-relativistic single particle Hamiltonian of an alkali atom


or ion in external fields. (Of course the following approach is not restricted to
that example. But we will continue this discussion in chapter 7, where we will
use finite elements to solve the linear system of uni-dimensional differential
equations derived below.) For the spatial integration we use the combined
discrete variable and finite element approach mentioned above. A solution for
the time development of the discretized wave functions is given by:

which can be approximated, e.g., by the Cayley or Crank-Nicolson method

Inserting (6.161) into the time-evolution (6.160) leads to an implicit system of


algebraic equations

which must be solved for each time step Hence in general we are led to
operator equations of the form

with f, polynomial functions. (This equation also includes Chebyshev approximations.) Therefore applying the discrete variable method is straightforward
and acts pointwise in time. Thus, instead of the eigenvalue problem for
time-independent questions, we get a linear system of equations, which has
to be solved in each time step. For example from Eq.(6.162) we will get in
analogy to the eigenvalue equation (6.146)

For explicitly time-independent Hamiltonians the computations are simplified
because the Hamiltonian matrix remains invariant in time and hence, e.g.,
LU-decompositions of the Hamiltonian matrix have to be done only once.
In chapter 7 we will continue this discussion and present some
results for these time-dependent and time-independent examples by solving the
radial part via finite elements.
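The reuse of a single LU decomposition can be sketched as follows (a minimal Python/SciPy illustration with an arbitrary one-dimensional grid Hamiltonian and a harmonic potential, not the alkali-atom Hamiltonian of the text); because the Cayley form is unitary for a Hermitian Hamiltonian, the norm of the discretized wave packet is conserved:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# illustrative 1D grid Hamiltonian (hbar = m = 1) with a harmonic potential
N, L, dt = 200, 20.0, 0.01
x = np.linspace(-L/2, L/2, N)
h = x[1] - x[0]
H = (np.diag(np.full(N, 1.0/h**2))               # second-difference kinetic term
     - np.diag(np.full(N - 1, 0.5/h**2), 1)
     - np.diag(np.full(N - 1, 0.5/h**2), -1)
     + np.diag(0.5*x**2))
A = np.eye(N) + 0.5j*dt*H                        # implicit (left) Cayley factor
B = np.eye(N) - 0.5j*dt*H                        # explicit (right) Cayley factor
lu = lu_factor(A)                                # factor once: H is time independent
psi = np.exp(-(x - 1.0)**2).astype(complex)      # displaced Gaussian wave packet
psi /= np.sqrt(h*np.vdot(psi, psi).real)
for _ in range(100):                             # each step: one back-substitution
    psi = lu_solve(lu, B @ psi)
print(h*np.vdot(psi, psi).real)                  # stays ~1: unitary propagation
```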

4.3 PERIODIC AND FIXED-NODE DISCRETE


VARIABLES
The following discussion is based on [7]. We start by defining a basis set,
not necessarily of orthogonal polynomials. The basic idea of this approach is that
the defined discrete variable basis will have all nodes spaced evenly in a fixed
selected coordinate interval in contrast to the “standard” orthogonal polynomial
approach.
Let our original Hilbert space basis and coordinate functions such
that For a complete Hilbert space basis, we can expand our
unknown coordinate function with respect to this basis

and get by a Gaussian integration

Hence, by construction, we obtain

or

This equation is the central result of this approach. Therefore there are two
steps left: the selection of the Hilbert space basis and of a convenient
quadrature rule. Of course there is no general way how to select both, but this
freedom allows one to incorporate special features of the system under consideration. Because the Hilbert space basis can be defined exactly, the DVR basis
is in addition independent of any numerical procedure. Let us now turn to
two examples [7] to clarify the approach presented above.

Fixed-node boundary conditions. As an example for fixed-node boundary


conditions we select a quantum system which can be described accurately by a
Hilbert space basis built by particle-in-box eigenfunctions

To fit any other boundaries this interval can be transformed by shifting and
rescaling. With this basis we get from Eq.(6.168)

To find an appropriate quadrature rule, which will give us the relevant nodes
and weights let us first discuss the Gauss-Chebyshev quadrature of the
second kind [19].
The Gauss-Chebyshev quadrature of the second kind is defined by

where the nodes are given by the roots of the Chebyshev


polynomials of the 2nd kind, and the weights by
Guided by the fact that sin is the lowest order function not included
in the original basis, Eq.(6.169), and has evenly spaced zeros

we will reformulate the quadrature given above to derive the correct weights
With we get for an arbitrary integrable function f

with

Hence by using this quadrature rule we get equally spaced nodes and the
weights become Therefore finally we arrive at

An example of this fixed-node discrete variable basis for N = 3 is presented


in Fig.(6.4).
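A short numerical check (a Python/NumPy sketch with an illustrative basis size) confirms that with these equally spaced nodes and equal weights the discrete variable functions are cardinal, i.e. each equals one at its own node and zero at all others:

```python
import numpy as np

N = 5
n = np.arange(1, N + 1)
x = n*np.pi/(N + 1)                             # evenly spaced nodes in (0, pi)
w = np.pi/(N + 1)                               # equal quadrature weights
phi = np.sqrt(2/np.pi)*np.sin(np.outer(n, x))   # phi[k-1, i] = phi_k(x_i)
Delta = w * phi.T @ phi                         # DVR functions at the nodes
print(np.allclose(Delta, np.eye(N)))            # True: cardinality at the nodes
```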

Periodic discrete variables. In this example we start with the Hilbert space
basis

with and and N an odd integer. Because our


discrete variable basis has to fulfill it is straightforward to show
that becomes

By using the Gauss-Chebyshev quadrature of the first kind

we obtain with for an arbitrary integrable function f

and

An example of this periodic discrete variable basis for N = 3 is presented
in Fig.(6.4).
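Analogously, a brief Python/NumPy sketch for the periodic case (with an illustrative M) verifies the cardinality of the discrete variable basis on the equally spaced angular grid:

```python
import numpy as np

M = 3
N = 2*M + 1                                  # N odd
k = np.arange(-M, M + 1)
x = 2*np.pi*np.arange(N)/N                   # equally spaced nodes on [0, 2*pi)
phi = np.exp(1j*np.outer(k, x))/np.sqrt(2*np.pi)
w = 2*np.pi/N                                # equal quadrature weights
Delta = (w * phi.conj().T @ phi).real        # imaginary parts cancel pairwise
print(np.allclose(Delta, np.eye(N)))         # True
```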

5. THE LAGUERRE MESH


Similar to the discrete variable computation, general mesh calculations make
use of Gaussian integration techniques and hence avoid the necessity of
calculating complicated matrix elements.
interpolation polynomials based on orthogonal functions (see Eq.(6.43)). In
the following we will first describe the general idea but restrict the applica-
tion to Laguerre meshes and unidimensional systems. Using meshes based
on other orthogonal functions or polynomials as well as the generalization to
multidimensional problems is straightforward. Nevertheless in most cases a
combination of discrete variable and finite element systems, as described in the
next chapter, seems to be more accurate and efficient. Applications of mesh
methods can be found in nuclear [29], atomic [30, 31] and molecular physics.
In the following we will discuss mesh calculations for a one-dimensional
Schrödinger equation with a local potential and eigensolutions, which
approximately vanish outside an interval [a, b]

The approximation of the wave function is based on its values at the mesh
points

thus

For generalized Lagrange interpolation polynomials, based on orthogonal


functions Eq.(6.43), the Gaussian quadrature is given by

with the roots of Hence let us consider the corresponding family of


orthogonal polynomials with a weight function and norm A
set of functions normalized on the interval [a, b] is given by

and the mesh points by the zeros of Note, in contrast


to the discrete variable technique, these computations are only based on the
highest normalized orthogonal polynomial of the selected set A
discussion of generalized meshes, including Hermite meshes can be found in

[32]. The following discussion will be restricted to interpolation functions


based on Laguerre polynomials.
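The mesh points and weights themselves are routinely available. The following Python/NumPy sketch (with an illustrative mesh size) computes them and verifies the exactness of the underlying Gauss-Laguerre quadrature, on which the mesh method relies, for polynomials up to degree 2N-1:

```python
import numpy as np
from math import factorial
from numpy.polynomial.laguerre import laggauss

N = 8
x, w = laggauss(N)      # mesh points = zeros of L_N, Gaussian weights
# Gauss-Laguerre quadrature integrates p(x) e^{-x} exactly for polynomials p
# of degree <= 2N - 1; here int_0^inf x^k e^{-x} dx = k!
for k in range(2*N):
    assert np.isclose(np.sum(w * x**k), factorial(k))
```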
For Laguerre meshes the generalized Lagrange interpolation polynomials
are based on

Hence the mesh points will be given by the zeros of and therefore will
be different for each (angular momentum) value l. Because the coordinate is
restricted to positive values, we expect the corresponding mesh to be
useful for radial or polar coordinates and hence most applications are given by
the radial Schrödinger equation. Thus our kinetic energy operator for a given
orbital angular momentum l is given by

and its matrix elements by

with the corresponding Christoffel numbers

Making use of the Gauss-Laguerre quadrature [32] leads to

with

for the kinetic energy, and for the potential matrix elements we obtain

One of the advantages of the Laguerre mesh method is that the centrifugal
potential in the kinetic energy is taken correctly into account. By introducing
a coordinate scaling to optimize the convergence, the discretized
Schrödinger equation becomes

Hence solving the eigenvalue equation (6.192) will give us the eigenenergies and
as eigenvectors the corresponding wave functions at the mesh points.
In some applications, e.g. in cases in which the potential is dominated by
a modified Laguerre mesh [31] with mesh points defined by

is more appropriate. In this case the interpolation polynomials are given by

with

and

It is of interest to compare the asymptotic behavior for large of the
generalized Lagrange interpolation functions. For the Laguerre mesh the
asymptotic behavior for large is given by due to Eq.(6.185) and
for the “squared mesh” by due to Eq.(6.195). The asymptotic
behavior of the radial hydrogen eigenfunction with principal quantum number
n is Thus for a scaling factor both the hydrogen eigenfunction and the interpolation function of the Laguerre mesh show exactly the
same asymptotic behavior. In contrast to the hydrogen atom, the asymptotic
behavior of harmonic oscillator eigenfunctions for large is given by
Hence the Laguerre mesh is much better adapted to Coulomb-dominated
potentials, whereas the modified Laguerre mesh, Eq.(6.193), is
expected to be more appropriate for parabola-dominated potentials. An example
for the hydrogen atom in extremely strong magnetic fields will be presented
below. Let us consider the kinetic energy for a two-dimensional problem in
polar coordinates

hence [31]

for the modified Laguerre mesh.



Example. The hydrogen atom in extremely strong magnetic fields is an example where both the modified Laguerre mesh and the ordinary Laguerre mesh
can be used. The Hamiltonian of the hydrogen atom in strong magnetic fields
in spherical coordinates is given by Eqs.(6.128) and (6.129). In cylindrical
coordinates we obtain in the non-relativistic limit

with the magnetic field strength measured in units of Tesla, and
neglecting the conserved Zeeman contribution of the electron spin.
The eigenfunction of the free electron in an external magnetic field is given
by the Landau function [33]

Thus the solution for the hydrogen atom in strong magnetic fields can be
expanded with respect to the Landau functions for dominant magnetic fields.
This is the case for neutron star magnetic fields or for highly
excited Rydberg states with the principal quantum number).
In Tab.(6.2) we compare results for the ground state of the hydrogen atom
for obtained by a product ansatz for the wave function. In the
we used a finite element ansatz and in the a Laguerre
and a modified Laguerre mesh, respectively. The modified Laguerre mesh is
consistent with the Landau function (6.199).

In semiparabolic coordinates the Schrödinger equation for the hydro-


gen atom in strong magnetic fields becomes

with

and this equation is in exact agreement with the kinetic energy operator (6.197).
Thus the generalized Laguerre mesh seems more appropriate. Ergo the
optimal mesh depends in addition on the selected coordinates. The wave
function converges for large distances in the semiparabolic coordinates like
but because this is equivalent to the
Coulomb-like behavior mentioned above.
In the next chapter we will discuss another discretization technique, the finite
element method. In the discrete variable method or the mesh technique the basis
is defined globally, which means in the whole space, but considered locally at
the nodes. Finite elements go one step further. The space will be discretized by
elements, small areas in space, and on each of this elements a local coordinate
system possessing interpolation polynomials. Hence both, the coordinate space
and the approximation of the wave function, will be considered locally. Again
this ansatz will lead to a (symmetric) eigenvalue problem, whose solution will
give the eigenenergy and corresponding wave function of the quantum system
under consideration.

Notes
1 This somewhat unfortunate naming is due to the finite number of quadrature
points in coordinate space. Loosely speaking, in the finite basis representa-
tion the coordinate space is represented by the finite number of quadrature
points, in contrast to the variational basis representation in which the coordinate space remains continuous.
2 To distinguish between continuous and discretized variables, functions and
so on we will use tilded letters and symbols for the discretized ones.
3 Unfortunately different authors use different phase conventions in defining
the
4 Systems with cylindrical or spherical symmetry have azimuthal symmetry,
which means symmetric under rotation around the azimuthal (z) axis.

References

[1] Dickinson, A. S., and Certain, P. R. (1968). “Calculation of Matrix Elements for One-Dimensional Problems,” J. Chem. Phys. 49, 4209–4211

[2] Light, J. C., Hamilton, I. P., and Lill, J. V. (1985). “Generalized discrete
variable approximation in quantum mechanics,” J. Chem. Phys. 82, 1400–1409

[3] Melezhik, V. S. (1993). “Three-dimensional hydrogen atom in crossed
magnetic and electric fields,” Phys. Rev. A48, 4528–4537
[4] Schweizer, W. and P. (1997). “The discrete variable method for
non-integrable quantum systems,” Comp. in Phys. 11, 641–646

[5] P. and Schweizer, W. (1996). “Hydrogen atom in very strong


magnetic and electric fields,” Phys. Rev. A53, 2135–2139

[6] P., Schweizer, W. and Uzer, T. (1997). “Numerical simulation


of electronic wave-packet evolution,” Phys. Rev. A56, 3626–3629

[7] Muckerman, J. T. (1990). “Some useful discrete variable representations


for problems in time-dependent and time-independent quantum dynamics,”
Chem. Phys. Lett. 173, 200–205

[8] Lill, J. V., Parker, G. A., and Light, J. C. (1982). “Discrete variable representations and sudden models in quantum scattering theory,” Chem. Phys.
Lett. 89, 483–489

[9] Colbert, D. T., and Miller, W. H. (1992). “A novel discrete variable representation for quantum mechanical reactive scattering via the S-matrix Kohn
method,” J. Chem. Phys. 96, 1982–1991

[10] Groenenboom, G. C., and Colbert, D. T. (1993). “Combining the discrete
variable representation with the S-matrix Kohn method for quantum reactive
scattering,” J. Chem. Phys. 99, 9681–9696

[11] Eisenberg, E., Baram, A. and Baer, M. (1995). “Calculation of the density of states using discrete variable representation and Toeplitz matrices,” J.
Phys. A28, L433–438. Eisenberg, E., Ron, S. and Baer, M. (1994). “Toeplitz
matrices within DVR formulation: Application to quantum scattering problems,” J. Chem. Phys. 101, 3802–3805

[12] Szegö, G. (1975). Orthogonal Polynomials. Amer. Math. Soc., Providence, Rhode Island

[13] Nikiforov, A. F., Suslov, S. K., and Uvarov, V. B. (1991). Classical Orthogonal Polynomials of a Discrete Variable. Springer-Verlag, Berlin

[14] Corey, G. C., Lemoine, D. (1992). “Pseudospectral method for solving
the time-dependent Schrödinger equation in spherical coordinates,” J. Chem.
Phys. 97, 4115–4126

[15] Kosloff, R. (1994). “Propagation methods for quantum molecular dynamics,” Annu. Rev. Phys. Chem. 45, 145–178

[16] Nikiforov, A. F., and Uvarov, V. B. (1988). Special Functions of Mathematical Physics. Birkhäuser, Basel
[17] Tricomi, F. (1955). Vorlesungen über Orthogonalreihen. Springer, Hei-
delberg
[18] Press, W. H., Teukolsky, S. A., Vetterling, W. T., and Flannery, B. P.
(1992). Numerical Recipes. Cambridge University Press
[19] Abramowitz, M. and Stegun, I. A. (1970). Handbook of Mathematical
Functions. Dover Publications, Inc., New York
[20] Gradstein, I. and Ryshik, I. (1981). Tafeln - Tables. Verlag Harry Deutsch,
Frankfurt
[21] Hey, J. (1993). “On the momentum representation of hydrogenic wave
functions: Some properties and an application,” Am. J. Phys. 61, 28–35
[22] Broad, J. T. (1985). “Calculation of two-photon processes in hydrogen
with an basis,” Phys. Rev. A31, 1494–1514
[23] Flügge, S. (1994). Practical Quantum Mechanics. Springer-Verlag, Berlin
[24] Hannsen, J., McCarroll, R. and Valiron, P. (1979). “Model potential calculations of the Na-He system,” J. Phys. B12, 899–908
[25] Schweizer, W., Faßbinder, P. and González-Férez, R. (1999). “Model
potentials for Alkali metal atoms and Li-like Ions,” ADNDT 72, 33–55
[26] Berezin, I. S. and Zhidkov, N. P. (1966). Numerical Methods, Nauka,
Moscow (in Russian), (German translation in VEB Leipzig)
[27] Bethe, H. A. and Salpeter, E. E. (1957). Quantum Mechanics of One- and
Two-Electron Systems. Springer-Verlag, Berlin
[28] Regge, T. (1958). “Symmetry properties of Clebsch-Gordan’s coeffi-
cients”, Nuovo Cim. 10, 544–545
[29] Baye, D. (1997). “Lagrange-mesh calculations of halo nuclei,” Nucl. Phys.
A 627, 305–323
[30] Godefroid, M., Liévin, J. and Heenen, P.-H. (1989). “Laguerre meshes in
atomic structure calculations,” J. Phys. B 22, 3119–3136
[31] Baye, D. and Vincke, M. (1991). “Magnetized hydrogen atom on a La-
guerre mesh,” J. Phys. B 24, 3551–3564
[32] Baye, D. and Heenen, P.-H. (1986). “Generalized meshes for quantum
mechanical problems,” J. Phys. A 19, 2041–2059

[33] Canuto, V. and Ventura, J. (1977). Quantizing magnetic fields in astro-


physics. Gordon and Breach Sci. Pub., New York
Chapter 7

FINITE ELEMENTS

Partial differential equations are used as mathematical models for describing


phenomena in many branches of physics and engineering, e.g., heat transfer in
solids, electrostatics of conductive media, plane stress and strain in structural
mechanics and so forth. Obviously, for many physical problems the geometry
for which a solution of the partial differential equation has to be found is
the main source of the difficulty. Take, e.g., a simple wrench. The complex
shape of this tool is too complicated to compute directly, e.g., the stress on
the top. Thus approximation techniques had to be derived to find solutions
of partial differential equations on complicated geometries. These difficulties
gave birth in the 1950s to the finite element method, developed to solve
differential equations arising from engineering problems. The essential idea of the
finite element technique is to formulate the partial differential problem as a
boundary value problem and then to divide the object under consideration into
small pieces on which the partial differential equation will be solved using
interpolation polynomials.
Thus, even though the finite element technique was not originally developed to
calculate quantum eigensolutions, it can be applied to partial differential equations of Schrödinger type, and it has turned out to provide a convenient, flexible
and accurate procedure for the calculation of energy eigenvalues, for deriving
scattering solutions or for studying wave packet propagation. One of the first
studies for quantum systems is [1], in which the finite element method was
used to obtain continuum states. The computation of one dimensional bound
states with finite elements is didactically well explained in [2]. In the follow-
ing chapter we will discuss the basic mathematical background of the finite
element technique followed by applications of one- and two-dimensional finite
elements to quantum systems.

1. INTRODUCTION
The finite element method can be applied to any partial differential equation
formulated as boundary value problem. The questions that define a boundary
value problem are: What equations are satisfied in the interior of the region of
interest and what equations are satisfied by the points on the boundary of that
region of interest?
In general there are three possibilities. Let be a solution of the partial
differential equation. The Dirichlet boundary condition specifies the values of
the boundary points as a function, thus

and the generalized Neumann boundary condition specifies the values of
the normal gradients on the boundary

with the outward unit normal, and complex-valued functions defined on
the boundary. Thus the boundary condition can be either of Neumann
type, of Dirichlet type, or a mixture of both. In the following we are interested
in solving the time-independent Schrödinger equation. Bound states are square
integrable, scattering and resonance states can be mapped onto square integrable
functions by complex coordinate rotation. This means that the solutions we
are interested in will vanish approximately outside a suitably selected area and
thus the following discussion will be restricted to boundary value problems of
Dirichlet type. For a thorough and general discussion of finite elements see,
e.g., the excellent monograph by Akin [3].
Let be a differential operator and

and

with the boundary of the bounded domain and a complex function.


For continuous problems the boundary value problem can be formulated by an
equivalent variational problem

with the functional

This is also called the weak formulation or the Ritz variational principle.

Example. For a better understanding of this equation let us discuss a simple


example:

Multiplying Eq.(7.3) by a test function and integrating over the domain


we obtain

and by integration by parts and using the identity

Thus the variational task is to find a solution u such that Eq.(7.4) holds, where
for this example in addition the boundary integral vanishes. After this
didactic example let us return to our original discussion.
The next step in our derivation is to divide the domain into subdomains,
the finite elements. Thus we obtain from Eq.(7.2)

To compute an approximate solution of Eq.(7.2) we expand with respect
to local approximation functions defined only on the single elements

with the summation over all elements, the label of the basis approximation
functions on each element, local coordinates defined locally on each of the
elements, see Fig.(7.1), and the expansion coefficients. Thus Eq.(7.5) leads
after variation to

Because the expansion coefficients are independent of each other we obtain a


linear system of equations to calculate the coefficients

Thus solving this linear system of equations will approximately solve our
original Dirichlet boundary value problem. The derivation above is based on
the Ritz-Galerkin principle.
In general deriving the correct functional is a non-trivial problem. For a
system of linear differential equations

a self-adjoint differential operator, a complex function, the functional is
given by [4]

and the corresponding linear system of equations by

with symmetric and banded. After this excursion into the mathematical
background of finite elements let us now turn to the quantum applications.

2. UNIDIMENSIONAL FINITE ELEMENTS


2.1 GLOBAL AND LOCAL BASIS
One-dimensional finite elements can be applied either to one degree-of-freedom quantum systems or to single components of the multidimensional
Schrödinger equation, e.g., the radial Schrödinger equation. Let us first discuss
bound states of a one-dimensional quantum system for simplification. Thus the
corresponding Schrödinger equation in the coordinate representation is

Due to the simplicity of the kinetic energy operator and the arbitrarily complex
shape of the potential, finite element techniques are only practicable in the
coordinate representation of the quantum system. For simplification we set

Because bound states are normalizable we can always find
a left and a right border, in coordinate space beyond which the wave
function effectively vanishes:

By multiplication of the Schrödinger equation with the wave function


from the left, we arrive at the equivalent variational equation with

the functional

Because the wave function vanishes approximately outside the interval
we approximate the functional by a finite integration. Integration by parts leads
to

Hence we arrive finally at

and with the following preliminary expansion

at

Up to now the basis functions are still arbitrary and not restricted to a
finite element approach. In the finite element approximation the space will be
divided into small pieces and the wave function will be expanded on each of
the elements via interpolation polynomials. On each of these elements we define
a local basis, the nodal point coordinates:

with the length of the n-th finite element, the global position in
space of the left-hand border of the element and the local coordinate. In
Fig.(7.1) a simple example is shown.

The size of each of the local elements can be optimized individually and
may differ between neighboring elements. This freedom allows one to refine the
finite element grid in those regions of space in which the potential shows strong
fluctuations and to use a coarser grid in those regions in which the potential
behaves rather smoothly. An oversimplification of this loosely formulated statement
results in finite elements of constant size for nearly any potential. (That this
makes no sense becomes obvious if we compare the wave functions of different
systems with each other. Even their asymptotic behavior is quite different.) The
task of the finite element grid is not to optimize the potential shape, but to optimize
the computation of the wave functions, which means the eigensolutions of the
Schrödinger equation. Thus let us study two examples: the harmonic oscillator
wave function and the hydrogen radial wave function.
The normalized oscillator eigenfunctions are given as

with the oscillator mass and frequency and the Hermite polynomials
to the excited state. Thus the nodes of the eigenfunctions are given by
the zeros of the Hermite polynomial. The radial hydrogen eigenfunctions are
given by

with Z the charge, the Bohr radius and principal and angu-
lar momentum quantum number. Hence for the radial hydrogen eigenfunction
the nodes are given by the zeros of the generalized Laguerre polynomials.
Hermite and Laguerre polynomials are connected with each other by the square
of the function argument, see Chapt. 5.3.4. Already from this simple consideration it would be astonishing if both wave functions were optimized by, up to
scaling factors, an identical finite element grid.
In Fig.(7.2, top) the nodes of the radial hydrogen eigenfunctions with prin-
cipal quantum number and are plotted. In contrast
to the rather equidistantly distributed nodes of the harmonic oscillator eigen-
functions, Fig.(7.2, bottom), their distribution is roughly quadratic. Thus to
optimize the computations the finite element grid will be selected accordingly.
Let be the size or length of the finite element and the left
border of the element. The finite elements of constant dimension are
given by

and the wave functions vanish approximately for As already men-


tioned above, a quadratic spacing is numerically more favorable for Coulomb

systems:

and hence the size of the elements increases linearly

The wave function on the finite element is given by

where is the correct value of the wave function at the nodal points and the
summation runs over all interpolation polynomials. Hence the values
play the role of the expansion coefficients on each of the finite elements, and the
entire wave function is given by, loosely speaking, putting all pieces together.
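To make the element-wise construction concrete, the following Python/SciPy sketch assembles the standard linear ("hat") element matrices for a particle in a box (an illustration with hbar = m = 1, not an example from the text) and solves the resulting generalized eigenvalue problem:

```python
import numpy as np
from scipy.linalg import eigh

# particle in a box [0, 1] with hbar = m = 1, linear ("hat") elements
N = 200                                             # number of elements
h = 1.0/N
Ke = (0.5/h)*np.array([[1.0, -1.0], [-1.0, 1.0]])   # element kinetic matrix
Me = (h/6.0)*np.array([[2.0, 1.0], [1.0, 2.0]])     # element overlap matrix
K = np.zeros((N + 1, N + 1))
M = np.zeros((N + 1, N + 1))
for e in range(N):                                  # assemble element by element
    K[e:e+2, e:e+2] += Ke
    M[e:e+2, e:e+2] += Me
# Dirichlet boundary conditions: drop the first and last nodal value
E, c = eigh(K[1:-1, 1:-1], M[1:-1, 1:-1])
print(E[0])        # close to the exact ground state energy pi^2/2 = 4.9348...
```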

2.2 INTERPOLATION POLYNOMIALS


Common interpolation polynomials are the Lagrange, the Hermite and the
extended Hermite interpolation polynomials. Unfortunately these interpolation
polynomials are often called simply Lagrangian and Hermitian, which should
not be mixed up with the polynomial functions of the same name.

2.2.1 LAGRANGE INTERPOLATION POLYNOMIALS


Let us first discuss in more detail Lagrange interpolation polynomials, labeled by the index In the following discussion we assume that the whole
space of interest is divided into smaller elements, each carrying a local coordinate system running over [–1, +1]. A simple change of variables will allow
us to map this interval onto any other region

Derivation from a polynomial expansion. Let be an arbitrary function,


which can be expanded in a Taylor series

Through any two points there is a unique line, through any three points a unique
quadratic and so forth. Thus approximating the function by interpolation
polynomials will necessitate polynomials of order if points of the
function are given.
Approximating function by a polynomial under the following assumption

and

results in

with "1" (in general n) the polynomial degree and the label for the single
interpolation polynomials. Thus this interpolation polynomial fulfills

and the corresponding Lagrange interpolation polynomial

and

The general definition of a Lagrange interpolation polynomial of order n is

with equidistant nodes and with

and therefore

Of course Eq.(7.27) could be generalized such that the nodes are non-equidistantly or even randomly distributed in the interval [–1, +1]. For finite element
approaches the nodes are usually equidistant in the local coordinates inside the
single finite element. An example for the lowest four Lagrange interpolation
polynomials is shown in Fig.(7.3).
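The defining property of these polynomials, one at "their own" node and zero at all others, can be checked with a few lines of Python (a sketch, not part of the text; the equidistant node placement in [–1, +1] follows the definition above):

```python
import numpy as np

def lagrange_basis(n, j, x):
    """Evaluate the j-th Lagrange interpolation polynomial of order n
    on the n + 1 equidistant nodes in [-1, +1] at the points x."""
    nodes = np.linspace(-1.0, 1.0, n + 1)
    result = np.ones_like(np.asarray(x, dtype=float))
    for k in range(n + 1):
        if k != j:
            # Each factor vanishes at node k and equals one at node j.
            result *= (x - nodes[k]) / (nodes[j] - nodes[k])
    return result

nodes = np.linspace(-1.0, 1.0, 4)      # cubic case, n = 3
vals = lagrange_basis(3, 1, nodes)     # one at "its own" node, zero at the rest
```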

Derivation from a system of linear equations. Instead of the derivation


above, Lagrange interpolation polynomials can also be derived from a system
of linear equations for the polynomial coefficients, obtained by the polynomials
expanded at the nodes  This point of view has the advantage that it is straightforward to additionally take derivatives of the original wave function into account. (In the following we will denote all interpolation
polynomials by and omit the polynomial order
To derive the Lagrange interpolation polynomial, we only assume that our
interpolation polynomial fulfills

at the (equidistant) nodes Therefore in local coordinates on a single finite


element the wave function is given by

with correct values at the nodes For any other coordinate values, the wave
function will be approximated by the Lagrange interpolation polynomial

Example. A polynomial of order has linearly independent coefficients


and hence nodal points in which are equidistantly
distributed. E.g. for 3 nodes we get and
hence a system of linear equations

and additional similar equations for and This linear system of equations
can be condensed to a matrix equation:

whose solution will give us the unknown coefficients for the Lagrange
interpolation polynomial.
The example above can be generalized easily to a polynomial of arbitrary
order. Thus we get the following linear equation:

Computing the inverse matrix will lead to the unknown coefficients of the
Lagrangian interpolation polynomial. Some results are listed in Tab.(7.1).
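The inversion of the nodal matrix can be carried out numerically; a Python sketch for the three-node example above (the nodes –1, 0, +1 are assumed to be the equidistant local nodes):

```python
import numpy as np

# Equidistant nodes of a quadratic (order-2) Lagrange element in [-1, +1].
nodes = np.array([-1.0, 0.0, 1.0])

# Nodal matrix A: row i contains the monomials (1, x_i, x_i**2) at node i.
A = np.vander(nodes, N=3, increasing=True)

# Column j of C = A^{-1} holds the monomial coefficients of the j-th
# interpolation polynomial, which is one at node j and zero at the others.
C = np.linalg.inv(A)

coeffs_middle = C[:, 1]    # the middle polynomial, 1 - x**2
```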

2.2.2 HERMITE INTERPOLATION POLYNOMIALS


For Lagrange interpolation polynomials the wave function was computed
exactly at the nodal points. Nothing could be said about the first derivative
of the wave function. For Hermite interpolation polynomials we assume in
addition that the first derivative of the wave function will be correctly computed

at the nodal points. Hence on a finite element we make the following ansatz
for the wave function

and for the derivative of the wave function

which leads to

hence nodal points lead to a total of linear equations and thus a polynomial
of order

Example. With we need for two nodes, and


Hermite interpolation polynomials of order three. Thus for we obtain

and similar equations for At the nodal points this will give us the
following system of linear equations:

which could be, in analogy to the derivation of the Lagrangian interpolation


polynomials, condensed into a matrix equation

Therefore again the coefficient matrix C will be given by the inverse of the
nodal matrix A. Generalization of the equation above is straightforward.
Examples of Hermitian interpolation polynomials are shown in Fig.(7.4).
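The same inversion yields the Hermite case; a Python sketch for two nodes in [–1, +1] (the row ordering of the nodal matrix, function values first and then derivatives, is an assumption of this sketch):

```python
import numpy as np

nodes = np.array([-1.0, 1.0])

# Nodal matrix: for each node one row with the monomials (1, x, x^2, x^3)
# and one row with their derivatives (0, 1, 2x, 3x^2).
A = np.array([[1.0, x, x**2, x**3] for x in nodes] +
             [[0.0, 1.0, 2*x, 3*x**2] for x in nodes])

# Column j of C = A^{-1} contains the coefficients of the j-th cubic
# Hermite interpolation polynomial.
C = np.linalg.inv(A)

def hermite_poly(j, x):
    return np.polyval(C[:, j][::-1], x)   # polyval wants highest degree first
```

The first polynomial, for instance, is one at the left node with zero slope there and has zero value and zero slope at the right node.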

2.2.3 EXTENDED HERMITE INTERPOLATION POLYNOMIALS


The next higher interpolation step is given by extended Hermite interpolation polynomials, in which not only the wave function and its first derivative but in

addition its second derivative are taken into account. Hence we get

and therefore the polynomials have to fulfill

For nodal points a complete description will be given by coefficients and


hence the corresponding polynomial is of the order  All further derivations are equivalent to those described in detail for Lagrangian interpolation polynomials and thus left to the reader. An example is shown in Fig.(7.5).
Obviously, as defined by Eq.(7.38a), the derivatives of vanish at all nodal points and the interpolation polynomial respectively at the nodal points.

2.3 EXAMPLE: THE HYDROGEN ATOM


A better understanding of how finite elements can be applied to quantum systems is gained from concrete examples. Thus we will study in this subchapter in detail the radial Schrödinger equation of the hydrogen atom.
The Schrödinger equation of the hydrogen atom reads

and thus the radial Schrödinger equation in atomic units for vanishing angular
momentum

Under the assumption and integration by parts we obtain



Thus together with Eq.(7.23) this leads to

where in the first line [. . .] is given by Eq.(7.40) and the derivative, e.g. in
is with respect to the radial coordinate. In our finite element equation all terms

should be formulated in the local coordinates. Therefore we have to transform


these derivatives from the global to the local coordinate:

In Eq.(7.41) we sum over all finite elements. Thus by Eq.(7.41) for each
single finite element the following local matrices are given

By this local approach the computation is significantly simplified, because the


structure of the local matrices is independent of the individual finite element
and thus the single integrations, e.g.,

have to be done only once for each and hold for any finite element.
The contributions from all the elements via local matrices are assembled to construct the global matrices H and S, which finally results in a generalized real-symmetric eigenvalue problem

Because S is symmetric and positive definite this can be easily transformed


into an ordinary eigenvalue problem. By a Cholesky decomposition we get
and thus

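A Python sketch of this reduction, using triangular solves instead of explicit inverses (the small 2 × 2 matrices below are illustrative stand-ins, not actual finite element matrices):

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def solve_generalized(H, S):
    """Solve H v = E S v via the Cholesky factorization S = L L^T:
    the equivalent ordinary problem is (L^-1 H L^-T) w = E w, w = L^T v."""
    L = cholesky(S, lower=True)
    Y = solve_triangular(L, H, lower=True)          # Y = L^-1 H
    A = solve_triangular(L, Y.T, lower=True).T      # A = L^-1 H L^-T
    E, W = np.linalg.eigh(A)
    V = solve_triangular(L.T, W, lower=False)       # back-transform v = L^-T w
    return E, V

# illustrative problem with symmetric H and positive definite S
H = np.array([[2.0, 1.0], [1.0, 3.0]])
S = np.array([[2.0, 0.5], [0.5, 1.0]])
E, V = solve_generalized(H, S)
```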
Of course we have to bear in mind that the wave function has to behave smoothly at the element boundaries. This means the local wave function of one element at the right border equals the local wave function at the left border of the neighboring finite element. Thus in the global matrices the local matrices overlap at the borders of the elements. An example is presented
in Fig.(7.6) for five elements. For Lagrange interpolation polynomials with

three nodes each block of the Hamiltonian matrix, except the first and the last
one, has the following structure

and of course similarly for the mass or normal matrix S. For Lagrange interpolation
polynomials only the first and last matrix element in each matrix block will
overlap. Instead of these single block matrix elements for Hermite interpo-
lation polynomials a 2 × 2 sub-block and for extended Hermite interpolation

polynomials a 3 × 3 sub-block will overlap, because now not only the wave function but also the first and second derivatives, respectively, at the borders of the elements have to be equal.
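The overlapping of local blocks can be sketched as follows (Python; the identity matrices are placeholders for the actual element integrals, and one shared node per element boundary corresponds to the Lagrange case):

```python
import numpy as np

def assemble(local_mats):
    """Assemble element matrices into a global matrix. Each local matrix
    acts on n nodes; neighboring elements share one node, so consecutive
    blocks overlap in one row/column, which receives contributions from
    both elements."""
    n = local_mats[0].shape[0]
    dim = len(local_mats) * (n - 1) + 1
    G = np.zeros((dim, dim))
    for e, M in enumerate(local_mats):
        i0 = e * (n - 1)       # global index of the element's first node
        G[i0:i0 + n, i0:i0 + n] += M
    return G

# Two 3-node elements: the shared node's diagonal entry adds up.
G = assemble([np.eye(3), np.eye(3)])     # 5 x 5 global matrix
```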

2.3.1 CONVERGENCE
As an example we will discuss the convergence of the Coulomb problem
under different finite element structures and interpolation polynomials. The
dimension of the Hamiltonian matrix and the number of non-vanishing matrix elements do not depend on the physical system under consideration. Because the width of the matrices for Hermite interpolation polynomials is larger than the width for Lagrange interpolation polynomials, we will use the number of non-vanishing matrix elements as parameter.
Let be the degree-of-freedom of the interpolation polynomials, where this
number tells us the number of derivatives of the interpolation polynomial which
will be taken into account. Thus

interpolation polynomials. For finite elements with nodes the interpolation


polynomial is of the order For elements, the dimension of the
banded, symmetric finite element matrices is given by

and the number of its non-vanishing matrix elements by

In Fig.(7.7) the dependence of the convergence of the energy eigenvalue


for the hydrogen eigenstate is presented. Obviously for interpolation
polynomials of higher order the energy is much better converged than for low
order interpolation polynomials. There is always an interplay between the
size of the single finite elements and the order of the interpolation polynomial.
Higher order interpolation polynomials allow coarser grids. Thus depending on the system under consideration one has to find the optimized choice by experience. Higher order polynomials lead to a smaller number of elements,
but on the other hand the width of the matrix bands becomes larger. The difference
between Lagrange and Hermite interpolation polynomials of the same order seems rather small for a fixed number of non-vanishing matrix elements, but nevertheless in all our studies Hermite polynomials are slightly better. Thus by experience, 5th order Hermite interpolation polynomials seem to be best

for many quantum systems. Increasing the polynomial order beyond 5th order in some examples results in additional wiggles in the wave functions and thus
deteriorates the convergence. But the most important point to obtain accurate
results is to select a fairly large coordinate space. Outside the finite element
area the wave function is equal to zero by construction, and thus if this area is too small the energy will be increased significantly beyond its correct value.
One of the advantages of finite element calculations is that the accuracies of eigenvalues and eigenfunctions are roughly of the same order.
Besides the size of the selected space and the choice of the interpolation
polynomials, the structure of the finite elements determines the efficiency of the computation. Let us assume that the radial space we have selected is of the order of 100 a.u. and the size of the first element 0.07 a.u. For
finite elements of constant size, this necessitates 1429 elements. For 5th order
Hermite interpolation polynomials the dimension of the matrices becomes 5718
and the number of non-vanishing elements For quadratically
widened elements only 38 elements are necessary. Thus again for a 5th order
Hermite interpolation polynomial, we obtain now finite element matrices of
dimension 154 and only non-vanishing matrix elements. This is

only 2.7% of the value for finite elements of constant size. The only remaining question is whether we still obtain convergence.
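The element counts quoted above follow from a short calculation (Python sketch with the values from the text, a first element of 0.07 a.u. and a 100 a.u. radial box):

```python
import numpy as np

h1 = 0.07        # size of the first element (a.u.)
r_max = 100.0    # border of the radial box (a.u.)

# Quadratically widened mesh: borders at r_i = h1 * i**2, so the i-th
# element has size h1 * (2 i - 1), i.e. the sizes grow linearly.
n_el = int(np.ceil(np.sqrt(r_max / h1)))       # 38 elements
borders = h1 * np.arange(n_el + 1) ** 2
sizes = np.diff(borders)

# A mesh of constant size h1 would instead need r_max / h1, i.e. 1429,
# elements to cover the same radial space.
n_const = int(np.ceil(r_max / h1))
```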
In Fig.(7.8) some results for different finite element structures are shown.
Again the hydrogen state was selected and the convergence, measured by the relative deviation of the computed energy E from the exactly known energy as a function of the number of non-vanishing Hamiltonian matrix elements, is presented. The weakest convergence was obtained for finite elements of
fixed size, labeled ’const.’. More efficient are finite elements where the interval
length increases linearly with the distance from the origin and thus the distance
of the border of the finite element from the origin increases quadratically. Even
better, but more laborious to construct, are finite elements which are ‘Laguerre-spaced’. These finite elements are selected such that the size of the n-th finite element in units of the first finite element equals the n-th zero of the Laguerre polynomial in units of the first zero. Thus for a total of elements the sizes of the finite elements are given by the zeros of the m-th Laguerre polynomial. In many cases the structure of the physical system is such that it is not really worth using adaptive methods. This holds for example

for the hydrogen atom in external fields, even if the system runs from spherically symmetric to cylindrically symmetric and becomes non-integrable and chaotic.
Of course the best structure for the selected grid depends on the physical system under consideration. Hydrogen-like systems, as presented above, favor quadratic spacing. Adaptive methods become important for multicentre problems.
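A ‘Laguerre-spaced’ mesh can be generated with NumPy's Laguerre routines (a sketch; m = 10 elements and a first element of 0.07 a.u. are only illustrative choices):

```python
import numpy as np
from numpy.polynomial import laguerre

def laguerre_mesh(m, h1):
    """Element sizes proportional to the zeros of the m-th Laguerre
    polynomial, scaled so that the first element has size h1."""
    zeros = np.sort(laguerre.lagroots([0.0] * m + [1.0]))  # zeros of L_m
    sizes = h1 * zeros / zeros[0]
    borders = np.concatenate(([0.0], np.cumsum(sizes)))
    return borders, sizes

borders, sizes = laguerre_mesh(10, 0.07)
```

Since the zeros of a Laguerre polynomial spread out with increasing index, the element sizes grow automatically towards larger radii.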

3. ADAPTIVE METHODS: SOME REMARKS


The purpose of selfadaptive methods is to seek an optimized grid, such that
the error becomes sufficiently small without increasing unnecessarily the finite
element space. Global refinements are often inefficient, because the finite element space will be refined even in those areas which do not contribute significantly to the total error. The strategy of adaptive methods can be summarized as:
1st Evaluate the residual of the current solution on the initial mesh.
2nd Compute the local error, that means the error related to a single finite element. Error computations are based on different norms. For the computation of quantum bound states the so-called energy norm seems most appropriate.
3rd Estimate the global error.
4th Refine the grid according to the error analysis.
(Several adaptive finite element solvers are freely available via the Internet, e.g., KASKADE [5].)
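The four steps can be illustrated on a toy problem (Python sketch; the midpoint residual of a piecewise-linear interpolant serves as a stand-in for the energy-norm error estimates discussed below):

```python
import numpy as np

def adaptive_refine(f, borders, tol, max_iter=20):
    """Toy version of the four-step strategy: estimate a local error per
    element and bisect every element whose error exceeds tol."""
    for _ in range(max_iter):
        left, right = borders[:-1], borders[1:]
        mid = 0.5 * (left + right)
        # local error estimate: residual of the linear interpolant at the midpoint
        err = np.abs(f(mid) - 0.5 * (f(left) + f(right)))
        bad = err > tol
        if not bad.any():                # global error small enough
            break
        # refine only the flagged elements by inserting their midpoints
        borders = np.sort(np.concatenate((borders, mid[bad])))
    return borders

# the resulting mesh is automatically finer where f varies strongly
mesh = adaptive_refine(np.exp, np.linspace(0.0, 5.0, 6), 1e-3)
```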
The essential step of grid optimization is the error analysis. For finite
elements the total error is based on the restriction of the total space to a
subspace. This means, e.g., for bound states the selection of the border at which
the wave function is set equal to zero. Of course this is part of the formulation
of the boundary value problem and hence the mesh construction will have no
influence on this error. The second error might occur due to the finite accuracy
in computing the integrals related to the local matrices. Again this will not be
influenced directly by the grid selection. The third error is related to the mesh
size and the order of the interpolation polynomials. There are two possibilities
to react on this error: Modifying the grid or changing the order or even type of
interpolation polynomials. Selfadaptive methods are based on an analysis of
this third error. In the following discussion we will assume that we keep the
interpolation polynomial fixed and that we will only modify the mesh, either
by coarsening or by refining the grid.
The error analysis based on the work of Zienkiewicz [4] has become popular
for adaptive applications. (For the non-German speaking reader this can also
be found in [3], especially Chapt. 14.) Let us start by defining an inner product
on finite element space as

with the domain of our boundary problem, see Eq.(7.1). The is


defined by

By an error analysis it could be estimated that the norm of the deviation of


the approximated solution from the true solution is proportional to the residual
of the computed solution and the size of the finite element to the power of a constant which depends on the smoothness of the exact solution. Thus the only important message at the moment (which seems to be obvious to practitioners)
is that the error depends on the selected grid size.
In finite element space our quantum system is approximated by the boundary
value problem

Thus can be written as

with a lower order operator. For the Laplacian would be the nabla operator,
but restricted to Let us define

Thus in case of the Laplacian would be the gradient of the solution


Let us denote the finite element approximation with and let be the
normal coordinate of the single finite element. Then the local errors are

(Note that already and are approximations because is a finite


approximation to the infinite coordinate space. If this space is too small, this approximation will be the main source of the total error, which cannot be corrected by the selected mesh.) Of course only the absolute values of the
local errors defined above are of interest. To steer the selfadaptive process for quantum systems, error estimates employ one of the following two norms¹: the energy norm of the error or the flux norm of the error, defined as

The square of any of these norms is the sum of the squares of the corresponding norms over the local finite elements. E.g.

where labels the single finite elements. The remaining problem is that we do not know the exact solution and thus cannot evaluate the equations above.
There are several possibilities to approximate the local errors defined above.
Computing the residual of our approximated solution will only tell us how small
the total error is, and hence can be used to control convergence. One simple method
is to refine the mesh globally in the first step, or practically speaking to start with a coarse grid, and to use the new solution as a first-order approximation to the correct solution to compute the local error. Another method commonly used is
an averaging based on the number and size of elements contributing to a node.
For quantum systems we expect and to be globally continuous. Thus a
solution for which these values are continuous across element boundaries should be more accurate. Therefore we approximate and by an averaging of
and  The basis of the adaptive remeshing is then that we want the energy norm of the error to be the same in all the elements and the locally allowed error per element to stay below a given value. An application of adaptive
finite elements based on the software package KASKADE, mentioned above,
is presented in [6]. They studied the two-dimensional linear system with
triangular finite elements and an adaptive grid generation and obtained results
superior to global basis expansions.

4. B-SPLINES
Splines are quite common in approximating sets of discrete data points by piecewise polynomials. For approximations of data sets cubic splines are mainly used. Other applications cover computer-aided design, surface approximations,
numerical analysis or partial differential equations. Here we will concentrate
on B-splines.
For simplicity we will restrict the following discussion to one-dimensional systems. The first step in using B-spline finite elements is again to confine
the quantum system under consideration to a finite space area outside
which the wave function effectively vanishes for bound states. This finite area
will be divided into single finite elements. In contrast to the interpolation
polynomials discussed above, there will be no internal nodes in the single

finite element. Thus by the endpoints of the finite elements a knot sequence
will be defined. The B-splines of order on
this knot sequence are defined by the de Boor–Cox recursion formula

In particular, is a piecewise polynomial zero outside the interval


thus covering finite elements and normalized such that

For a general discussion of splines see, e.g., [7] and for quantum applications,
e.g. [8, 9].
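The recursion can be coded directly (Python sketch; B-splines of order k are piecewise polynomials of degree k – 1, and on a uniform knot sequence the overlapping interior splines sum to one, the partition-of-unity normalization):

```python
import numpy as np

def bspline(i, k, t, x):
    """B-spline B_{i,k} of order k on the knot sequence t, evaluated at x
    via the de Boor-Cox recursion."""
    if k == 1:
        return np.where((t[i] <= x) & (x < t[i + 1]), 1.0, 0.0)
    out = np.zeros_like(np.asarray(x, dtype=float))
    d1 = t[i + k - 1] - t[i]
    if d1 > 0:
        out = out + (x - t[i]) / d1 * bspline(i, k - 1, t, x)
    d2 = t[i + k] - t[i + 1]
    if d2 > 0:
        out = out + (t[i + k] - x) / d2 * bspline(i + 1, k - 1, t, x)
    return out

# Order-3 B-splines on a uniform knot sequence: away from the ends the
# overlapping splines sum to one.
t = np.arange(8.0)
x = np.linspace(2.0, 5.0, 50, endpoint=False)
total = sum(bspline(i, 3, t, x) for i in range(5))
```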
To optimize the computation for B-spline finite elements the choice of the
grid may be varied in different parts of the space. With respect to grid optimiza-
tion there is no difference in principle between finite elements with interpolation polynomials and finite elements with splines. In contrast to the interpolation
polynomials discussed above, B-splines are defined as polynomials piecewise
on intervals which are bounded by neighbored grid points. E.g. Lagrange
interpolation polynomials of any order will be restricted to one single finite
element, but B-splines of order k will have a non-vanishing contribution for the interval thus covering finite elements. Thus the structure of the
global matrices differs significantly

In Eq.(7.56) the show the structure of a global matrix with four finite
elements and Lagrangian interpolation polynomials of third order. For third

order B-spline finite elements² the global matrices will have additional non-vanishing elements, in Eq.(7.56) marked by  Thus comparing the efficiency of B-splines with interpolation polynomials should also take into account the structure of the global matrices and not only the number of nodes, as is quite common in the literature.
B-spline techniques are mainly used for solving the radial Schrödinger equa-
tion or a system of coupled uni-dimensional Schrödinger equations. For the
radial Schrödinger equation

with ansatz

we expand in terms of B-splines. The radial space is divided


into finite elements. The endpoints of the finite elements are given by the knot
sequence for B-splines of order For the
B-splines vanish at their endpoints, except at where the first B-spline is equal to one and all others vanish, and also at where the last B-spline has the same behavior. The knot sequence above can in general be distributed arbitrarily in
radial space. Hence the expansion of the radial wave function is given by

where the boundary conditions are implemented by restricting the above sum-
mation not to include and Due to the boundary condition
the wave function has to vanish for and due to the computa-
tional assumption for  Thus using the Galerkin variational principle the generalized eigenvalue problem is given by

and

The structure of these symmetric banded matrices is shown in Eq.(7.56). Gen-


eralization is possible in two directions: To include resonances and scattering

states, either the boundary conditions have to be modified or complex coordi-


nate rotations have to be employed. Generalization towards higher dimensions
is possible by including tensor B-splines. But the application of B-splines is
mainly restricted to uni-dimensional systems.

5. TWO-DIMENSIONAL FINITE ELEMENTS


For uni-dimensional systems only the size of the finite elements can vary
between neighboring elements. In multi-dimensional systems we have in ad-
dition the possibility to select different shapes. In two dimensions two different shapes are common: rectangular and triangular finite elements. The nodes can be located only on the border of the finite element or in addition
in the interior.
Rectangular finite elements become squares in local coordinates. If the two
local coordinates of a rectangular finite element are and the interpolation
polynomial could be given by a product of two interpolation polynomials
or as a function In the first case we could use, e.g.,
the one-dimensional interpolation polynomials already discussed above. Of
course if, e.g., becomes zero the total interpolation polynomial will become
zero and if equals one, at each position at which becomes one, the total
interpolation polynomial is also one. Thus the total number of nodes is given by
the product of the single nodes of and A didactically very well presented
application to a two-dimensional quantum system, the hydrogen atom in strong
magnetic fields, is [10].
Each polygonal shape can be covered by triangular elements. Even if most potentials in quantum dynamics are not limited by borders, we will discuss in detail only triangular finite elements. Using differently shaped finite elements is straightforward.

5.1 TRIANGULAR FINITE ELEMENTS


5.1.1 LOCAL COORDINATES
In one dimension we have introduced local coordinates such that the size of
each finite element becomes one, independent of its true length in space. Thus
similar to the uni-dimensional construction we will construct local coordinates
for triangular finite elements.
Let us start with an arbitrary triangle in global coordinates, see Fig.(7.9),
with a point P inside. Connecting this interior point with all corners and labeling these corners counterclockwise, (P1, P2, P3), will separate the triangle into three triangles with areas  Thus we can define local coordinates as the ratios of the subareas of the triangle to the total area F of the triangle

Because we started with two dimensions but derived three local coordinates,
we need in addition a constraint, which is obviously

These local coordinates have the following properties: and


because for the coordinates become in P1
and similar, in and in P3
In Fig.(7.10) the triangular finite element is presented
in local coordinates. Of course to obtain the local matrices we have to integrate
only over two local coordinates

with {. . .} the integrand, J = 2F the Jacobian determinant and F the total area of the triangle. Again, similar to the uni-dimensional system, the contributions from all the elements via local matrices are assembled to construct the

global matrices, which results finally in a generalized real-symmetric eigen-


value problem.
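The area-coordinate construction can be sketched in a few lines (Python; the triangle vertices below are only illustrative):

```python
import numpy as np

def area_coordinates(tri, p):
    """Local (area) coordinates of the point p inside the triangle tri:
    the ratios of the three sub-triangle areas to the total area F."""
    def area(a, b, c):
        # half the absolute cross product of the edge vectors
        return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                         - (b[1] - a[1]) * (c[0] - a[0]))
    p1, p2, p3 = tri
    F = area(p1, p2, p3)
    return np.array([area(p, p2, p3), area(p1, p, p3), area(p1, p2, p)]) / F

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
L = area_coordinates(tri, np.array([0.25, 0.25]))   # sums to one by construction
```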
In one dimension the ordering of the finite elements is simple. In two dimensions we have several possibilities to order and hence to label the finite elements.
Because common nodes of neighboring elements result in an overlap of the
block matrices corresponding to these finite elements, the construction of the
finite element net will have an influence on the band width of the matrix. Hence
the organization of the finite element grid becomes important. An example is
shown in Fig.(7.11). With this construction, e.g., finite element "9" will have
common nodes with "3", "10", "11", "18", "17", "16", "15", "14", "7", "8", "1"
and "2". Thus it could even be worthwhile to construct different nets depending on the selected interpolation polynomials. The finite element grid presented in
Fig.(7.11) is efficient for all interpolation polynomials discussed below.

5.1.2 INTERPOLATION POLYNOMIAL


In this subsection we will discuss two-dimensional interpolation polynomials useful for quantum computations. All interpolation polynomials are of Lagrangian type. In contrast to engineering problems it turned out that for most applications, due to the simply shaped quantum potentials, interpolation polynomials with 10 or even 15 nodes are more efficient than using finer grids and linear interpolation polynomials.

Linear two-dimensional interpolation polynomials. The simplest interpo-


lation polynomials are linear interpolation polynomials. For linear interpola-
tion polynomials the finite element has three nodes located at the corners. An
example is shown in Fig.(7.12).

Quadratic two-dimensional interpolation polynomials. For quadratic in-


terpolation polynomials we need six nodes. These nodes are labeled counterclockwise, first the nodes at the three corners and then the remaining line-centered nodes, see Fig.(7.14). An example of quadratic two-dimensional interpolation polynomials is shown in Fig.(7.13). Obviously each interpolation polynomial becomes one at a single node and zero at all other nodes.
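In area coordinates these six functions take the standard quadratic-triangle form (a Python sketch under that assumption; corner nodes first, then the mid-side nodes, as in the labeling above):

```python
import numpy as np

def quad_shape(L):
    """Six quadratic shape functions of the triangle in area coordinates
    L = (L1, L2, L3): three corner functions, then three mid-side ones."""
    L1, L2, L3 = L
    return np.array([L1 * (2 * L1 - 1), L2 * (2 * L2 - 1), L3 * (2 * L3 - 1),
                     4 * L1 * L2, 4 * L2 * L3, 4 * L3 * L1])

corner = np.array([1.0, 0.0, 0.0])        # first corner node
midside = np.array([0.5, 0.5, 0.0])       # node between corners 1 and 2
```

Each function is one at its own node, zero at the other five, and the six functions sum to one everywhere on the element.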

Cubic two-dimensional interpolation polynomials. For third order inter-


polation polynomials in two dimensions we need finite elements with 10 nodes.
10 nodes cannot be selected uniformly on the border of the triangle, hence one node will be centered inside the triangle. Because interior nodes increase the width of the banded matrices, interior nodes are usually avoided. Again the nodes are labeled counterclockwise, first at the corners of the triangle, then uniformly distributed on the border lines and at last the interior nodes, here of course only one, see Fig.(7.14). An example for cubic interpolation polynomials is shown in Fig.(7.15).

Interpolation polynomials for elements with fifteen nodes. For finite ele-
ments with fifteen nodes there are several possibilities to select the interpolation
polynomials. In contrast to uni-dimensional finite elements the same number of
nodes now offers the possibility to select interpolation polynomials of different
order. If all 15 nodes are uniformly distributed on the border of the triangular
finite element the interpolation polynomials are of 5th order:

15 nodes, uniformly distributed on the border, will lead to a ‘crowding’ of


nodes on the border and to a huge empty area inside the triangular finite element.

Thus in this case nodes distributed rather uniformly over the whole finite element are more favorable. This can be obtained by fourth order interpolation
polynomials. For quartic interpolation polynomials 12 nodes are uniformly
distributed on the border and three uniformly in the interior of the triangular
finite element. An example for both types of interpolation polynomials with
15 nodes is shown in Fig.(7.16).

with

5.2 EXAMPLES
The following two examples, the hydrogen atom in strong magnetic fields and the harmonic oscillator, document the principal way to compute quantum eigensolutions via two-dimensional finite elements. The hydrogen atom shows

in addition the transformation to a different coordinate set as an alternative to


varying the size of the finite elements as a function of the distance from the
origin.

5.2.1 THE HYDROGEN ATOM IN STRONG MAGNETIC FIELDS


In [10] eigenenergies of the hydrogen atom in strong magnetic fields com-
puted with rectangular two-dimensional finite elements are discussed. Here we
will present a similar type of calculation, but for triangular finite elements. Due
to the potential we will use semiparabolic coordinates³ instead of Cartesian or
cylindrical coordinates. The Hamilton operator of the hydrogen atom in strong
magnetic fields in atomic units reads

with the angular and the spin operator. Due to


the cylindrical symmetry will be conserved and thus the angular momentum
part will only give rise to a constant energy shift for fixed magnetic quantum
number and fixed spin. Due to Eqs.(2.36) and (2.51) the Schrödinger
equation for the hydrogen atom in strong magnetic fields in semiparabolic
coordinates becomes

Thus for the kinetic energy we obtain

and for the potential energy



thus

To compute the local matrices, the equation above has to be evaluated on


each of the single elements and the coordinates have to be transformed to local
coordinates. Due to the constraint for the local coordinates
we arrive finally for the finite element at

and for the local matrix elements

Merging all together we obtain finally the generalized real-symmetric eigen-


value problem (see Eq.(7.44))

whose solution will give us the eigenenergies and corresponding wave func-
tions.
Using in each direction 100 elements and 2-dimensional interpolation poly-
nomials of 3rd order results in a 90601-dimensional total matrix, which allows one to compute for about 3.5 Tesla the lowest 150 m=0 eigenstates. Of course
it is also easily possible to compute eigensolutions for the hydrogen atom in

stronger magnetic fields. To obtain converged results three parameters have to


be checked carefully: the order of the interpolation polynomials, the number
of finite elements and the size of the net, which means the border at which the
wave function vanishes effectively.

5.2.2 THE TWO-DIMENSIONAL HARMONIC OSCILLATOR


The Hamiltonian of the two-dimensional isotropic harmonic oscillator reads
for mass

Thus the local matrices become

which are again merged together in the generalized eigenvalue equation

to obtain eigenenergies and eigenfunctions. In Tab.(7.2) some results are


presented to document the convergence.
By “computation I” (Tab.(7.2)) we obtain about 100 eigenstates with an accuracy better than 1%. Using a much smaller grid and a much lower number of finite elements (computation II) will result in only 30 computed eigenstates.
Using the same size of the net but increasing the number of finite elements,
thus refining the grid, will lead to only a few additional eigensolutions and a slightly higher accuracy. Thus computing a higher number of eigensolutions
necessitates a larger size of the net. A higher accuracy without increasing the
total number of converged eigensolutions can be obtained either by refining the
grid or using higher dimensional interpolation polynomials.
Two-dimensional finite elements are widely used in quantum computations.
One of the early works [11] discusses a comparison between finite element

methods and spectral methods for bound state problems. A finite element analysis for electron-hydrogen scattering can be found in [12]. In many problems the number of degrees-of-freedom is higher than two. In those cases a combination of finite element methods with other computational schemes is often more favorable than working with higher-dimensional finite elements. [13] used a
combination of the closed coupling schemes with finite elements to compute
eigenstates for a five degrees-of-freedom system, the helium atom in strong
magnetic fields. In the next section we will describe a combination of discrete
variable and finite element methods.
6. USING DIFFERENT NUMERICAL TECHNIQUES IN COMBINATION
For multi-dimensional systems there is the possibility to combine different numerical techniques to obtain converged solutions. The motivation is either to obtain an optimized numerical technique, or because otherwise storage and/or computational time become too large. The fundamental idea is quite simple. Suppose we have a system with $d$ degrees of freedom. Therefore the wave function will depend on $d$ space coordinates. In case of partially separable systems, the wave function will be expanded by a product ansatz

$$\Psi = \chi\cdot\psi$$

with $\chi$ the separable and $\psi$ the non-separable part. For non-separable systems, respectively the non-separable subsystem, the wave function can be expanded as

$$\psi(x_1, x_2) = \sum_i \phi_i(x_1)\,\varphi_i(x_2)$$

with $x_1, x_2$ the space coordinates and $\phi_i, \varphi_i$ wave functions related to the coordinate splitting. To obtain numerical results, different techniques can be applied to each part of the wave function.
For systems with three degrees of freedom it is quite common to use angular coordinates $\vartheta, \varphi$ and a radial coordinate $r$. For systems with cylindrical symmetry the azimuthal angle $\varphi$ can be separated off and the magnetic quantum number $m$ is conserved. The non-separable part of the Hamiltonian will thus only depend on the angle $\vartheta$ and the radial coordinate $r$. An expansion of the angular part with respect to discrete variables (see Chapt. 5) and of the radial part with respect to finite elements [14] turns out to be highly effective and flexible. Because a better understanding of this subject is helped by concrete examples, we will discuss as an example alkali metal atoms in strong magnetic fields.
The Hamiltonian of alkali metal atoms in strong magnetic fields is given by equation (6.128) with the potentials (6.129, 6.130), and the ansatz for the wave function by Eq.(6.133). Thus the wave function is expanded with respect to the angular and the radial part. For the angular part, the Hamiltonian equation and the wave function are discretized with respect to the nodes of the Legendre polynomial and, for cylindrical symmetry, symmetrized. The details are described in Chapter (5.4.2.1). This finally leads to the differential eigenvalue equation (6.146), with a matrix carrying the radial differential part as well as the centrifugal and angular part. For solving this differential eigenvalue equation we expand the radial wave function with respect to a finite element ansatz as described in Chapter (6.2.1) and Chapter (6.3). Thus from the equation above we finally obtain a generalized eigenvalue equation

$$\mathbf{H}\,\mathbf{c} = E\,\mathbf{S}\,\mathbf{c}$$
with H the global Hamiltonian and S the global mass matrix. Due to the angular discretization, the elements of these global matrices are themselves matrices: for non-cylindrical systems with respect to both angles, and for conserved magnetic quantum number with respect to the angle $\vartheta$ only, with N the number of nodes of the respective discretization. Because in case of a conserved quantum number⁴ the radial wave functions are real,⁵ the dimension is in addition reduced by a factor of 2. Thus the global matrices have a banded structure assembled from the local matrices, with the local matrix $h^{(e)}$ corresponding to the finite element $e$ and overlapping its neighbors at the shared nodes; each of its elements is again an angular matrix, respectively of dimension N × N. The dimension of these banded matrices is determined by the number of degrees of freedom of the interpolation polynomials, the number of finite elements and their nodes, and the number of discretization points in the angle, and the number of non-vanishing matrix elements follows accordingly. For the linear algebra routines the important value is the density of non-vanishing matrix elements.
and thus the density of the matrix becomes independent of the degrees of freedom of the interpolation polynomials and of the number of nodes used in the discretization of the angular space via discrete variables. This is one of the reasons why this combination is so favorable. E.g., for the hydrogen atom in strong magnetic fields of white dwarf stars (100 to 100,000 Tesla) no convergence could be obtained using Sturmian functions up to a basis size of 325,000. Such a large basis necessitates supercomputers. For exactly the same problem we could obtain convergent results [14] by a combination of finite elements and discrete variables already on small workstations. Of course this combination is not restricted to bound states. It can be used as well in combination with complex coordinate rotations to compute resonances, or in combination with propagator techniques [15] to compute wave packet propagations.
In the complex coordinate method the real configuration space coordinates are transformed by a complex dilatation, $r \to r\,e^{i\theta}$. The Hamiltonian of the system is thus continued into the complex plane. This has the effect that, according to the boundaries of the representation, complex resonances are uncovered with square-integrable wavefunctions, and hence the space boundary conditions remain simple. This square integrability is achieved through an additional exponentially decreasing term: an outgoing resonance wave $e^{ikr}$ with $k = |k|e^{-i\varphi}$ becomes

$$e^{ikr e^{i\theta}} = e^{i|k|r\cos(\theta-\varphi)}\;e^{-|k|r\sin(\theta-\varphi)},$$

which decays for $\theta > \varphi$. After the coordinates entering the Hamiltonian have been transformed, the Hamiltonian is no longer hermitian and thus can support complex eigenenergies associated with decaying states. The spectrum of a complex-rotated Hamiltonian has the following features [16]: Its bound spectrum remains unchanged, but continuous spectra are rotated about their thresholds into the lower complex plane by an angle of $2\theta$. Resonances are uncovered by the rotated continuum spectra, with complex eigenvalues and square-integrable (complex rotated) eigenfunctions.
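These features can be checked with a minimal sketch that is independent of the finite-element and discrete-variable machinery of the text: a finite-difference discretization of the complex-rotated harmonic oscillator, $H(\theta) = -e^{-2i\theta}\,\partial_x^2/2 + e^{2i\theta}\,x^2/2$, whose bound spectrum should remain at the real values $n + 1/2$. (Function name, grid size and rotation angle are illustrative choices, not taken from the text.)

```python
import numpy as np

def rotated_oscillator_spectrum(theta=0.2, n=400, box=20.0):
    """Finite-difference eigenvalues of the complex-rotated harmonic oscillator
    H(theta) = -exp(-2i*theta) d^2/dx^2 / 2 + exp(2i*theta) x^2 / 2."""
    x = np.linspace(-box / 2, box / 2, n)
    h = x[1] - x[0]
    # standard 3-point Laplacian with Dirichlet boundaries
    lap = (np.diag(np.full(n - 1, 1.0), -1) - 2.0 * np.eye(n)
           + np.diag(np.full(n - 1, 1.0), 1)) / h**2
    H = (-0.5 * np.exp(-2j * theta) * lap
         + 0.5 * np.exp(2j * theta) * np.diag(x**2))
    ev = np.linalg.eigvals(H)           # complex symmetric, non-hermitian
    return ev[np.argsort(ev.real)]

ev = rotated_oscillator_spectrum()
# the lowest eigenvalues stay numerically real at n + 1/2, independent of theta
print(ev[:3])
```

Varying `theta` leaves these bound eigenvalues fixed up to discretization error, which is precisely the θ-independence one exploits when locating the accumulation points of resonance trajectories.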
In the Stark effect the whole real energy axis is the continuum spectrum. Therefore no threshold exists for the continuous spectrum to rotate about. The Hamiltonian for the hydrogen atom in parallel electric and magnetic fields, after the above complex transformation has been applied, reads

$$H(\theta) = -\frac{e^{-2i\theta}}{2}\,\Delta + \frac{\gamma}{2}\,L_z + \frac{\gamma^2}{8}\,e^{2i\theta} r^2\sin^2\vartheta - \frac{e^{-i\theta}}{r} + f\,e^{i\theta}\,r\cos\vartheta,$$

using atomic units and spherical coordinates, with $\gamma$ the magnetic and $f$ the electric field strength. Thus the complex coordinate rotation affects only the radial coordinate $r$ but not the angle coordinates $\vartheta, \varphi$, and hence the discrete variable part remains unchanged. Because the radial coordinate is extended into the complex plane, but, of course, the interpolation polynomials in the finite element ansatz remain real, the expansion coefficients become complex, and thus this results in a complex symmetric generalized eigenvalue problem. Fig.(7.17) shows an example of the convergence of a complex eigenvalue as a function of the complex rotation angle $\theta$. Convergence to a complex eigenvalue is systematic and follows a pattern with an accumulation point at the correct complex value.

Thus combining different computational strategies opens efficient ways to calculate quantum results. Nevertheless one should be aware of the limits of each method. E.g., for extremely highly excited states and magnetic fields of a few Tesla (1 to 10) we got no convergence with discretization techniques, but we could compute converged eigenstates (up to about 1200) with a global basis (special Sturmian eigenstates). Thus there is no general computational technique useful for all quantum systems - and even that makes studying the power of computational techniques and developing computational methods so interesting! Developing physical models of real world systems and studying them by numerical methods have to be done with the same care as real experiments.

Notes
1 The third common method is error control with linear functionals.
2 The definition of the order of B-splines differs from author to author. Thus
some authors would define this as a fourth order B-spline.
3 The radial coordinate is given by $r = \frac{1}{2}(\mu^2 + \nu^2)$, with $\mu, \nu$ the semiparabolic coordinates. Thus with linearly spaced finite elements in semiparabolic coordinates we get quadratic spacing in $r$.
4 The magnetic quantum number $m$ is conserved for systems with cylindrical symmetry.
5 In this example the only imaginary part in the Hamiltonian comes from the
linear Zeeman term. Of course for other quantum systems this might not be
true and thus the radial wave function might remain complex.

References

[1] Nordholm, S., and Bacskay, G. (1976). “Generalized finite element method
applied to the calculation of continuum states,” Chem. Phys. Lett. 42, 259 –
263

[2] Searles, D. J., and von Nagy-Felsobuki, E. I. (1988). “Numerical experi-


ments in quantum Physics: Finite-element method,” Am. J. Phys. 56, 444 –
448

[3] Akin, J. E. (1998). Finite Elements for Analysis and Design, Academic Press, London

[4] Zienkiewicz, O. C. (1984). Methode der Finiten Elemente, Carl Hanser Verlag, München

[5] Beck, R., Erdmann, B., and Roitzsch, R. (1995). KASKADE 3.0: An object-oriented adaptive finite element code, Technical Report TR 95-4, Konrad-Zuse-Zentrum, Berlin

[6] Ackermann, J., and Roitzsch, R. (1993). “A two-dimensional multilevel


adaptive finite element method for the time-independent Schrödinger equa-
tion,” Chem. Phys. Lett. 214, 109 – 117

[7] de Boor, C. (1978). A Practical Guide to Splines, Springer-Verlag, Heidelberg

[8] Sapirstein, J., and Johnson, W. R. (1996). “The use of basis splines in theoretical atomic physics,” J. Phys. B 29, 5213 – 5225

[9] Johnson, W. R., Blundell, S. A., and Sapirstein, J. (1988). “Finite basis sets
for the Dirac equation constructed from B splines,” Phys. Rev. A 37, 307 –
315
[10] Ram-Mohan, L. R., Saigal, S., Dossa, D., and Shertzer, J. (1990). “The
finite-element method for energy eigenvalues of quantum mechanical sys-
tems,” Comp. Phys. Jan/Feb, 50 – 59

[11] Duff, M., Rabitz, H., Askar, A., Cakmak, A., and Ablowitz, M. (1980). “A
comparison between finite element methods and spectral methods as applied
to bound state problems,” J. Chem. Phys. 72, 1543 – 1559
[12] Shertzer, J., and Botero, J. (1994). “Finite-element analysis of electron-hydrogen scattering,” Phys. Rev. A 49, 3673 – 3679
[13] Braun, M., Schweizer, W., and Herold, H. (1993). “Finite-element cal-
culations for S-states of helium,” Phys. Rev. A 48, 1916 – 1920; Braun,
M., Elster, H., and Schweizer, W. (1998). “Hyperspherical close coupling
calculations for helium in a strong magnetic field,” ibid. 57, 3739 – 3744
[14] Schweizer, W., and Faßbinder, P. (1997). “Discrete variable method for nonintegrable quantum systems,” Comp. in Phys. 11, 641 – 646
[15] Faßbinder, P., Schweizer, W., and Uzer, T. (1997). “Numerical simulation of electronic wavepacket evolution,” Phys. Rev. A 56, 3626 – 3633
[16] Ho, Y. K. (1983). “The method of complex coordinate rotation and its
applications to atomic collision processes,” Phys. Rep. 99, 1 – 68
Chapter 8

SOFTWARE SOURCES

The first step in computational physics is to derive a mathematical model of the real world system we want to describe. Of course this model will be limited by the numerical methods known and by the computer facilities available. Many tasks in computing are based on standard library routines and well-known methods. Thus it is a question of scientific economy to use already available codes, or to modify those codes for our own needs. The following list of both freely available and commercial software is of course incomplete, but might nevertheless be useful.

General sources for information


GAMS. is a data bank in which the subroutines of the most important mathematical libraries are documented. GAMS is an abbreviation for “Guide to Available Mathematical Software” and can be found under
http://gams.nist.gov

NETLIB. is a source for mathematical software, papers, list of addresses and


other mathematically related information under
http://www.netlib.org

Physics Database and Libraries. Links and a list of data of special interest
for physicists can be found under
http://www.tu.utwent.nl/links/physdata.html

Numerical and Computer Data Base. Links to and a list of numerical and
computer related resources can be found under
http://www.met.utah.edu/ksassen/homepages/zwang/computers.html
ELIB. is a numerically related data bank from the Konrad-Zuse Centre


(Berlin)
http://elib.zib.de

MATH-NET. is an information service for mathematicians


http://www.math-net.de

Linux. a list of scientific applications under Linux can be found here
http://sal.kachinatech.com.sall.shtml
and in addition Linux software for scientists and other useful information
under
http://www-ocean.tamu.edu/min/wesites/linuxlist.html

GOOGLE. general non-specialized information


http://www.google.com

Commercial Software

MATLAB. is a numerical software system for solving scientific and engineering problems. It provides advanced numerical codes, including sparse matrix computations based on Krylov techniques, as well as advanced graphical methods. Matlab is the basic package, which allows one to solve algebraic, optimization, ordinary differential equation, and partial differential equation problems in one space and time variable, and allows one to integrate FORTRAN, C and C++ code. The basic functionality can be extended by additional toolboxes. Some are of interest for physicists, e.g.: “the optimization toolbox”, “the statistical toolbox”, “the partial differential equation toolbox”, “the spline toolbox” and “simulink” for the simulation of dynamical systems, and “the symbolic math toolbox” and “the extended symbolic math toolbox”, which allow symbolic computations based on a MAPLE kernel. In addition there are toolboxes for developing stand-alone applications: “the matlab compiler”, “the matlab C/C++ math-library” and “the matlab C/C++ graphics library”.
http://www.mathworks.com

FEMLAB. is a numerical software system based on Matlab for solving partial


differential equations via finite elements.
http://www.femlab.com

MAPLE. is a symbolic programming system with symbolic, numerical and


graphical possibilities.
http://www.maplesoft.com
MATHEMATICA. like MAPLE, Mathematica is a symbolic programming


system with symbolic, numerical and graphical possibilities. There are several
additional libraries available.
http://www.mathematica.com

Axiom. http://www.nag.com

Derive. is one of the smallest computer algebra systems and can already be
installed on small computers.
http://www.derive.com

Macsyma. http://www.macsyma.com

MATHCAD. offers symbolic and numerical computations. C/C++ code can be integrated, and graphical functions are available.
http://www.mathsoft.com

MUPAD. http://www.uni-paderborn.de/MuPAD

NAG. in contrast to the software listed above, NAG does not offer its own
computational language. NAG is an extensive collection of library subroutines
in FORTRAN 77/90 and C. Between Matlab and NAG there is a connection via
the “NAG foundation toolbox”. NAG offers more than 1000 routines covering
a wide area of numerical mathematics and statistics and is one of the “standard
libraries” for physical applications.
http://www.nag.com

IMSL. consists of more than 500 routines written in C or FORTRAN and


covers numerical and statistical methods.
http://www.vni.com and http://www.visual-numerics.de

Numerical Recipes. The Numerical Recipes (Press, W. H., Teukolsky, S. A., Vetterling, W. T. and Flannery, B. P. (1992). Numerical Recipes, Cambridge University Press, Cambridge) is a well-known book on different numerical methods. Source codes are available on CD in C, Fortran 77 and Fortran 90.
http://www.nr.com

Public Domain Software

BLAS. is an abbreviation for “Basic Linear Algebra Subroutines” and offers basic linear algebra routines. Blas was first implemented in 1979. In the meantime three levels are available. Blas-level-1 routines are elementary vector operations like scalar products, i.e. vector-vector or O(n) operations. For vector computers these routines are not sufficient; therefore Blas-2 codes were developed. These codes support matrix-vector computations, i.e. O(n²) operations. Because level-2 routines are insufficient for workstations based on RISC processors, Blas-3 routines for matrix-matrix operations, i.e. O(n³) operations, were developed. (Lapack is based on Blas-level-3 routines.)
http://www.netlib.org/blas
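The three levels correspond to the familiar complexity classes of dense linear algebra. In NumPy, which delegates to an optimized BLAS underneath, the hierarchy can be sketched as follows (names and sizes are arbitrary illustration choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x, y = rng.standard_normal(n), rng.standard_normal(n)
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))

s = x @ y        # level 1: vector-vector, O(n) work   (cf. ddot)
v = A @ x        # level 2: matrix-vector, O(n^2) work (cf. dgemv)
C = A @ B        # level 3: matrix-matrix, O(n^3) work (cf. dgemm)

print(v.shape, C.shape)
```

Level-3 operations move O(n²) data for O(n³) arithmetic, which is why they saturate cache-based processors so much better than the lower levels.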

EISPACK. program codes for eigenvalue problems
http://www.netlib.org/eispack
http://www.scd.ucar.edu/softlib/eispack.html
Many routines published in the eispack library are based on Algol routines discussed in Wilkinson, J. H. (1965). The Algebraic Eigenvalue Problem, Oxford University Press, New York.

FFTPACK. program codes for fast Fourier transformations


http://www.netlib.org/fftpack

FISHPACK. program codes for solving the Laplace-, Poisson- and Helmholtz-
equation in two dimensions.
http://www.netlib.org/fishpack

HOMEPACK. program codes for solving non-linear systems of equations

ITPACK. software with iterative solvers for large linear systems of equations
with sparse matrices.
http://www.netlib.org/itpack

LAPACK. stands for linear algebra package and consists of FORTRAN 77


routines for linear problems. Lapack covers all standard linear problems and
offers additional routines for banded and sparse matrices. Lapack is the succes-
sor of linpack and eispack and most routines are more robust than the former
ones. In addition there is a parallel version of Lapack available and a C/C++
and Java version.
http://www.netlib.org/lapack
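As a small illustration of the banded routines, here called through SciPy's LAPACK bindings rather than from Fortran directly, the symmetric banded eigensolver applied to the finite-difference 1D Laplacian reproduces its known closed-form spectrum (problem and grid size are an arbitrary example):

```python
import numpy as np
from scipy.linalg import eig_banded

n = 100
h = 1.0 / (n + 1)
# -u'' on (0,1) with u(0)=u(1)=0, 3-point finite differences, stored in
# lower symmetric banded form: row 0 = diagonal, row 1 = subdiagonal
bands = np.zeros((2, n))
bands[0, :] = 2.0 / h**2
bands[1, :-1] = -1.0 / h**2
w = eig_banded(bands, lower=True, eigvals_only=True)

k = np.arange(1, n + 1)
exact = (2.0 - 2.0 * np.cos(k * np.pi * h)) / h**2   # closed-form FD eigenvalues
print(np.max(np.abs(w - exact)))
```

Only the two bands are stored and factored, so cost and memory scale linearly with the matrix dimension, which is exactly what makes the banded finite-element matrices of the previous chapter tractable.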

LINPACK. codes for solving linear systems of equations. The successor of


linpack is lapack.
http://www.netlib.org/linpack

MINPACK. codes for non-linear systems of equations and optimization rou-


tines.
http://www.netlib.org/minpack
ODEPACK. software package for solving initial value problems for ordinary
differential equations.
http://www.netlib.org/odepack

ODRPACK. FORTRAN subroutines for orthogonal distance regression and least-squares fits.
http://www.netlib.org/odrpack

QUADPACK. routines for the computation of integrals and integral trans-


formations.
http://www.netlib.org/quadpack

OCTAVE. numerical computations, but restricted graphical possibilities.


http://www/che.wisc.edu/octace/

CERN. Information of special interest for physicists and a numerical library are available from CERN. CERN also offers a “Physics Analysis Workstation” under the abbreviation “PAW”.
http://www.info.cern.ch/asd

ARPACK. A library for Lanczos and Arnoldi routines can be found under
http://www.caam.rice.edu/software/arpack

PORT. this mathematical library from Bell Laboratories covers many numerical applications and is freely available to scientists for non-commercial use.
http://www.bell-labs.org/liblist.html
Acknowledgments

Many people have been involved, directly and indirectly, throughout the progress of this book, and their contributions were invaluable to me. Earning my money in industry and teaching at the University of Tübingen unsalaried, I wrote this book during my “leisure time”. Therefore I have discovered - as many authors before - that the acknowledgment to the author’s family, often expressed in an apologetic tone, cannot be disqualified as a cliché. My biggest debt is to my wife Ursula for her patience and perpetual support during the long, and sometimes difficult, time of writing this book.
My thanks to the Ph.D. students I supervised: Peter Faßbinder, Rosario
González-Férez, Wilfried Jans, Matthias Klews, Roland Niemeier, Michael
Schaich, Ingo Seipp, Matthias Stehle, Christoph Stelzer. Each one has taught
me some aspects of life in general and contributed in their own way to my
intuition and to the topics discussed in the book. Special thanks are due to
Matthias Klews and Rosario González-Férez for a critical reading and their
helpful comments and criticisms. Of course, it is customary to thank someone
for doing an excellent job of typing. I only wish I could do so. It was also
a pleasure for me to teach and to learn from the many students who took my courses or wrote their one-year diploma theses under my supervision.
I hope it is clear that a book does not grow in a vacuum. I have taken help from any book or publication that I myself have found worthwhile; those especially useful in writing a chapter have been referenced at the end of that chapter. Thus I apologize for not citing all the contributions to the field which are worth reading.
It is a pleasure to thank the many colleagues and friends from whom I
learnt so much during the past two decades. Special thanks to my supervisor
Peter Kramer, who taught me the beauty of symmetry structures in physics,
Hanns Ruder whose enthusiasm makes him a perfect instructor, and to Turgay
Uzer, Günter Wunner, Moritz Braun, Harald Friedrich, Stefan Jordan, Pat
O’Mahony, Vladimir Melezhik, Jesús Sánchez-Dehesa, Peter Schmelcher, and
Ken Taylor for the constant encouragement and valuable discussions, and to Susanne Friedrich and Roland Östreicher. Without their help I would not have had the unique experience of observing white dwarf stars with the 3.5-m telescope at Calar Alto (Spain), and I would not have had the possibility to compare my own computations, based on discrete variable techniques and finite elements, with spectroscopic data from my own observations.
I welcome comments and suggestions from readers. Throughout the book I chose to use the first person plural. Thus who is this anonymous ghostwriter - who are we? We are the author and the reader together. It is my hope that the
reader will follow me when we consider a numerical technique, a derivation,
or when we show that something is true. In this way we make contributions
towards a better understanding of computational physics.
Index

1/N-shift expansion, 70
3j-symbol, 11, 14
Absorption, 91
Adam-Bashforth algorithm
  4th order, 139
Adam-Moulton algorithm
  4th order, 139
Adams-Bashforth algorithm, 139
Adams-Moulton algorithm, 139
Aitken’s process, 59
Alkali atoms, 187–188
  external fields, 187
Alkali-like ions, 188
Angular coordinates, 162
Angular momentum, 9
  coupling, 10
  SO(4, 2, R), 79
Anharmonic oscillator, 182
  1st order perturbation, 183
  DVR, 183
  parity, 184
Antisymmetrizer, 106
Associated Legendre differential equation, 170
Associated Legendre function, 10, 161, 168
Avoided crossing, 37, 62
Azimuthal symmetry, 161
Bessel function, 176
  ordinary, 176
  Rodrigues formula, 177
  spherical, 176
Blas, 258
Bohr N., 2
Bohr-Sommerfeld quantization, 2
Boor-Cox recursive formula, 233
Born-Oppenheimer approximation, 46, 111
Boundary condition
  Dirichlet, 210
  generalized von Neumann, 210
Boundary value problem, 210
Bound states, 6
Bra, 8
B-splines, 232
Campbell-Baker-Hausdorff theorem, 18
Cayley approximation, 17
Cayley method, 141
Chaos, 35
Chebyshev polynomial, 182
Christoffel-Darboux relation, 158, 167
Clebsch-Gordan coefficient, 10
Commutator, 4
Complex coordinate method, 6, 250
Complex coordinate rotation, 7, 164, 250
Confluent hypergeometric equation, 181
Confluent hypergeometric function, 181
Constant of motion, 35
Contact potential, 2
Continuous states, 6
Coordinates
  Cartesian, 39
  cone, 45
  cylindrical, 42
  ellipsoidal, 44
  elliptic, 46
  elliptic cylindrical, 44
  hyperspherical, 48
  Jacobi, 48
  oblate spheroidal, 44
  parabolic, 42
  paraboloidal, 44
  polar, 48
  prolate spheroidal, 44
  semiparabolic, 45
  semiparabolic cylindrical, 44
  spherical, 40
Coulomb integral, 115
Coulomb systems, 187
Crank-Nicholson approximation, 17
De Broglie L.-V., 1
De Broglie relation, 2
Defolded spectrum, 38
Degenerate perturbation theory, 60
Density functional theory, 95, 106
Density matrix, 21
Density operator, 20–21
Dipole strength, 193
Direct product basis, 160
Dirichlet boundary condition, 210
Discrete variable representation, 156, 159
  fixed-node boundary condition, 196, 198
  hermite polynomial, 183
  Legendre polynomial, 189
    cylindrical symmetry, 191
  periodic, 196, 199
  spectroscopic quantities, 193
  time-dependent systems, 195
  wave packet propagation, 195
DVR, 159
Dynamical group, 78
Dynamical Lie algebra, 78
Dyson series, 16
Ehrenfest’s theorem, 20
Einstein A., 1
Eispack, 258
Energy-time uncertainty relation, 89
Euler angles, 11
Euler method
  explicit, 134
  implicit, 134
Euler methods, 134
Euler theorem, 109
Exchange integral, 115
Expectation value, 4
FBR, 159
Fermion problem, 125
Fermi’s Golden Rule, 90
Feynman path integral, 126
Finite basis representation, 156, 159
Finite differences
  bound states, 146
    Pöschl-Teller potential, 147
  Hamiltonian action, 140
  propagation, 140
  potential wall, 146
Finite element matrix
  dimension, 227
  non-vanishing elements, 227
Finite elements, 209, 211
  adaptive methods, 230
  error, 230
  local coordinates, 235
  rectangular, 235
  selfadaptive methods, 230
  splines, 232
  triangular, 235
  two-dimensional, 235
Fixed-node method, 125
Fractional charge, 43
Free electron
  magnetic field, 203
Galerkin principle, 212
Gauss-Chebyshev quadrature
  1st kind, 199
  2nd kind, 198
Gaussian orthogonal ensemble, 68
Gauss quadrature, 167
Gauss-Symplectic-Ensemble, 68
Gauss theorem, 181
Gauss-Unitary-Ensemble, 68
Gear algorithm, 139
Gegenbauer polynomials, 177
  generating function, 177
  Rodrigues formula, 178
Generalized Laguerre polynomial, 171
Generating function, 167
Guiding function, 124
Hahn polynomials, 180
Hamiltonian, 6
Harmonic oscillator, 182
Hartree equation, 105
Hartree-Fock equation, 106
Hartree-Fock method, 95, 104
Heisenberg uncertainty, 89
Heisenberg uncertainty relation, 4
Hellman-Feynman theorem, 109
Hermite interpolation polynomial, 220
  extended, 222
Hermite mesh, 200
Hermite polynomial, 174
  finite elements, 215
  generating function, 174
  Jacobi matrix, 175
  parity, 175
  Rodrigues formula, 174
Homogeneous function, 108
Husimi function, 23
Hydrogen atom
  electric field, 43
  external fields, 62
  extremely strong magnetic field, 203
  finite elements
    two-dimensional, 244
  strong external fields, 187
Hydrogen eigenstates
  boson representation, 79
Hydrogen molecule-ion, 46
Hylleraas ansatz
  helium, 99
Hypergeometric functions, 180
Hypergeometric series, 180
Hypervirial theorem, 111
Imsl, 257
Inner product, 8
Integrability, 35
Interpolation polynomial
  degree of freedom, 227
  two-dimensional
    cubic, 239
    linear, 237
    quadratic, 238
    quartic, 241
    quintic, 240
Jacobi matrix, 167
Jacobi polynomial, 14, 181
  generating function, 182
Kepler problem, 49
Ket, 8
Kicked systems, 141
Kohn-Sham equations, 107
Krylov space, 103
Kustaanheimo-Stiefel coordinates, 51–52
Kustaanheimo-Stiefel transformation, 51–52
Lagrange interpolation polynomial, 217
  linear equations, 218
  n-th order, 218
Laguerre mesh, 200–201
  modified, 202
Laguerre polynomial, 171, 175
  finite elements, 215
  generalized, 171
  generating function, 172
  Jacobi matrix, 172
  Rodrigues formula, 171
Lanczos
  software, 259
Lanczos method, 101
  algorithm, 102
  spectrum transformation, 101
Landau function, 203
Lapack, 258
Laplace-Beltrami operator, 39
  cylindrical coordinates, 42
  elliptic coordinates, 47
  parabolic coordinates, 42
  polar coordinates, 51
  semiparabolic coordinates, 45
  spherical coordinates, 40
LDA, 107
Leap-frog algorithm, 135
Legendre differential equation, 170
Legendre function
  associated, 168
  parity, 171
Legendre polynomial, 168, 187
  generating function, 171
  Jacobi matrix, 170
  Rodrigues formula, 168
Legendre polynomials, 161
Level repulsion, 37
Lie algebra
  dynamical, 78
  spectral generating, 78
Liouville equation, 22
Local basis, 213
local matrices, 225
Local density approximation, 107
Maple, 256
Mathematica, 257
Matlab, 256
Mesh
  Laguerre, 200
Mesh calculation, 200
Metropolis algorithm, 121
Model potentials, 188
Monte Carlo method, 118
  important sampling, 121, 124
  quantum, 118
    diffusion, 123
    guiding function, 124
    path integral, 128
    variational, 122
Moyal brackets, 24
Müller method, 179
Murphy equation, 170
Nag, 257
Nearest neighbor distribution, 37, 65
Neumann series, 16
Newton’s method, 179
Noether’s theorem, 36
Non-integrable systems, 35
N-particle systems, 48
Numerov integration, 164
Numerov method, 150–151
  eigenvalue problem, 151
Ordinary Bessel function, 176
Orthogonal polynomials, 165
  three-term recurrence relation, 166
  zeros, 166
Orthonormal polynomials, 165
Oscillator
  anharmonic, 182
  harmonic, 182
Oscillator strength, 193
Partial differential equation
  elliptic, 3
  hyperbolic, 3
  parabolic, 3
Particle-in-box eigenfunction, 198
Perturbation theory, 55
  degenerate, 60
  Rayleigh-Schrödinger, 56
  two-fold degenerate, 61
Phase space, 35
Photoelectric effect, 2
Photoelectron, 2
Photons, 2
Planck constant, 2
Planck-Einstein relations, 2
Planck M., 1
Pochhammer symbol, 180
Poisson bracket, 35
Poisson distribution, 38, 65, 120
Polar coordinate, 51
Polarization, 194
Pollaczek polynomials, 180
Pöschl-Teller potential, 147
Predictor-corrector method, 139
Probability conservation, 5
Probability current density, 5
Probability density, 4
Probability distribution, 120
QR-decomposition, 103, 178
Quadratic Stark effect, 69
Quantum force, 124
Quantum integrability, 36
Quantum Monte Carlo, 95, 118
Quantum number
  good, 36
Quantum potential, 25
Quasibound state, 6
Random number, 119
  quasi, 120
Rayleigh-Ritz variational principle, 96
Rayleigh-Schrödinger perturbation theory, 56
Regge symbol, 11
Regge symbols, 194
Released-node method, 125
Resonance, 6
Ritz-Galerkin principle, 212
Ritz variational principle, 210
Rodrigues formula, 167
Romberg integration, 119
Runge-Kutta algorithm, 135
Runge-Kutta-Fehlberg algorithm, 137
Runge-Lenz operator, 79
Scalar product, 8
Schrödinger equation, 2
  radial, 41
  time-dependent, 3
  time-independent, 5
Schwinger’s action principle, 126
Secant method, 179
Self-consistent field, 105
Self-consistent potential, 105
Separability
  Cartesian coordinates, 39
  cylindrical coordinates, 42
  elliptic coordinates, 47
  parabolic coordinates, 42
  semiparabolic coordinates, 45
  spherical coordinates, 40
Separation of variables, 35
Simulink, 256
Slater determinant, 106
Software
  arpack, 259
  axiom, 257
  bell-labs, 259
  blas, 258
  cern, 259
  derive, 257
  differential equations, 259
  eigenvalue problems, 258
  eispack, 258
  elib, 256
  femlab, 256
  FFT, 258
  fftpack, 258
  fishpack, 258
  gams, 255
  google, 256
  Helmholtz equation, 258
  homepack, 258
  imsl, 257
  itpack, 258
  lanczos, 259
  lapack, 258
  Laplace equation, 258
  linpack, 258
  macsyma, 257
  maple, 256
  mathcad, 257
  mathematica, 257
  math-net, 256
  matlab, 256
  minpack, 258
  mupad, 257
  nag, 257
  netlib, 255
  numerical recipes, 257
  octave, 259
  odepack, 259
  odrpack, 259
  paw, 259
  Poisson equation, 258
  port, 259
  quadpack, 259
  simulink, 256
Solver
  explicit, 133
  multi-step, 133
  single-step, 133
Spectral generating group, 78
Spectral representation, 156–157, 159
Spherical Bessel function, 176
Spherical coordinates, 161
Spherical harmonics, 10, 161
Spin, 10
Splines, 232
Stark effect, 43, 68
  quadratic, 69
Stimulated emission, 91
Sum rule, 69
Symmetry group, 78
Time discretization, 141
  pulse, 141
Time evolution operator, 15
Time propagator, 15
Transformation
  active, 11
  passive, 11
Transition group, 78
Transition probability, 87–88, 193
Trapezoidal rule, 118
Trotter number, 129
Ultraspherical polynomials, 177
Uncertainty relation, 4
  energy time, 89
Variance, 4
Variational method
  diagonalization, 100
  harmonic oscillator, 97
  helium ground state, 98
Variational principle, 95
  Rayleigh-Ritz, 96
Variation basis representation, 156–157
Verlet algorithm, 135
Virial theorem, 108, 111, 117
  molecules, 113
  variational ansatz, 117
Von Neumann boundary condition, 210
  generalized, 210
Von Neumann equation, 21–22
Walker, 121
Wave packet propagation, 195
Weight function, 120, 165
  moment, 165
Weyl transformation, 23
Wigner distribution, 38
Wigner function, 20, 22, 68
Wigner rotation function, 13
Wigner-Seitz radius, 108
WKB approximation, 24
  momentum space, 26
