
Eigenvalue problems

Name/ Surname: Dionysios Zelios

Email: dionisis.zel@gmail.com

Course:

Computational Physics (FK8002)

Date of submission: 14/04/2014

CONTENTS

Description of the problem
    i.   Introduction
    ii.  Finite difference method
         Five-point method
    iii. Diagonalization
         Inverse power iteration
         Inverse power iteration with shift

Results
    i.   Standard routine
    ii.  Inverse power iteration with shift routine
    iii. Time duration of our methods

References

Description of the problem

Introduction

When we want to treat a quantum mechanical system, we usually have to solve an eigenvalue problem HΨ = EΨ, where H is the Hamilton operator. For a one-dimensional problem:

$$H = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} + V(x)$$

As a test case we will use the one-dimensional harmonic oscillator Hamiltonian. This problem can be solved analytically, and we will use that to check our numerical approach. The importance of the harmonic oscillator problem stems from the fact that whenever there is a local potential minimum, the harmonic oscillator model gives the first approximation to the physics. If the potential V(x) has a minimum at $x = x_0$, we can expand it in a Taylor series around the minimum:

$$V(x) \approx V(x_0) + V'(x_0)(x - x_0) + \frac{1}{2}V''(x_0)(x - x_0)^2$$

$$\Rightarrow \quad V(x) \approx V(x_0) + \frac{1}{2}k(x - x_0)^2$$

where we have used that $V'(x_0) = 0$ since $x = x_0$ is a minimum. We have further put $V''(x_0) = k$, where $V''(x_0) > 0$ (which follows since we are at a minimum). A classical harmonic oscillator (a spring, for example) is governed by a restoring force $F = -k(x - x_0)$

and its potential energy is $V(x) = \frac{1}{2}k(x - x_0)^2$. Such an oscillator will oscillate with period $T = 2\pi/\omega$, where $\omega = \sqrt{k/m}$. Using ω instead of k, and putting $x_0$ at the origin, we can rewrite the Hamilton operator as:

$$H = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} + \frac{1}{2}m\omega^2 x^2$$

Hence we have:

$$\left(-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} + \frac{1}{2}m\omega^2 x^2\right)\Psi = E\Psi$$

This can be transformed to a dimensionless system by setting $z = \sqrt{m\omega/\hbar}\,x$, so that:

$$\frac{1}{2}\left(-\frac{\partial^2}{\partial z^2} + z^2\right)\Psi = \frac{E}{\hbar\omega}\Psi \equiv \varepsilon\Psi \qquad (1)$$

But we also know the analytical eigenvalues, $E_n = \hbar\omega\,\bigl(n + \tfrac{1}{2}\bigr)$ with n = 0, 1, 2, …, so:

$$\varepsilon_n = n + \frac{1}{2} \qquad (2)$$

Finite difference method

There are several ways to solve the eigenvalue equation with this Hamiltonian. We shall start with a so-called finite difference approach. In principle there is an infinite number of eigenstates, $\Psi(x)$, of H, and these can extend to $x \to \pm\infty$. However, we are usually only interested in a finite number of states. We search for states with rather low energy, which are essentially confined to a region of space close to x = 0. We span a space which we think is appropriate for what we are interested in with a linear grid from Xmin to Xmax. Outside these boundaries we assume that the wave function is zero. In principle this means that we have put our harmonic oscillator in a potential well: at the well boundaries we assume that the potential goes to infinity, and thus the eigenstates have to go to zero there.

The purpose of discretization is to obtain a problem that can be solved by a finite procedure. By discretizing we may rewrite derivatives in terms of finite differences, which eventually enables us to formulate the original problem as a matrix equation that can be solved by inverse iteration. In this section we use a very intuitive approach to find expressions of different accuracy for the finite-difference operator representing the second derivative. The formulas we derive here to calculate second-order derivatives will always include the point at which the derivative is evaluated, since in the discrete regime it seems reasonable to assume that this point stores valuable information about the function.

Five-point method

We deduce the five-point method in order to obtain good precision. We approximate the second-order derivative of a function f at a grid point using two neighboring points to the left and two to the right on the grid. We take:

$$f_i \equiv f(x_i), \qquad x_i = x + ih, \quad i = 0, \pm 1, \pm 2,$$

and we look for the constants $c_0, c_{\pm 1}, c_{\pm 2}$ such that:

$$\frac{\partial^2 f}{\partial x^2} \approx c_0 f_i + c_{-1} f_{i-1} + c_1 f_{i+1} + c_{-2} f_{i-2} + c_2 f_{i+2}$$

The constants are determined by Taylor expanding around $x_i$; after a few calculations and solving the resulting system of equations, we find:

$$f''(x) = c_0 f(x) + c_{-1} f(x - h) + c_1 f(x + h) + c_{-2} f(x - 2h) + c_2 f(x + 2h) + O(h^4)$$

$$f''(x) = \frac{1}{12h^2}\Bigl(-f(x - 2h) + 16 f(x - h) - 30 f(x) + 16 f(x + h) - f(x + 2h)\Bigr) + O(h^4)$$
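A quick way to convince oneself of the fourth-order accuracy is to apply the stencil to a function with a known second derivative. The following MATLAB sketch (ours, not part of the original report) uses sin(x) and a few illustrative step sizes:

    % Sketch (not from the report): numerical check of the five-point stencil.
    % The test function sin(x) and the step sizes are illustrative choices.
    f = @(x) sin(x);          % exact second derivative is -sin(x)
    x = 0.7;                  % arbitrary evaluation point
    for h = [0.2 0.1 0.05]
        d2f = (-f(x-2*h) + 16*f(x-h) - 30*f(x) + 16*f(x+h) - f(x+2*h)) / (12*h^2);
        fprintf('h = %.2f   error = %.3e\n', h, abs(d2f + sin(x)));
    end
    % The error drops by roughly a factor 16 per halving of h,
    % confirming the O(h^4) behavior of the formula above.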

Equation (1) carries a factor $(-1/2)$ in front of the second derivative. Multiplying each coefficient by this factor and taking the above equations into account, we obtain:

$$c_0 = \frac{15}{12h^2}, \qquad c_{-1} = c_1 = -\frac{8}{12h^2}, \qquad c_{-2} = c_2 = \frac{1}{24h^2},$$

where h is the step size that we are using in our model.

We can now set up the matrix eigenvalue equation HX = EX, where

$$H = \begin{pmatrix}
c_0 + V(x_1) & c_1 & c_2 & 0 & 0 & \cdots \\
c_1 & c_0 + V(x_2) & c_1 & c_2 & 0 & \cdots \\
c_2 & c_1 & c_0 + V(x_3) & c_1 & c_2 & \cdots \\
0 & c_2 & c_1 & c_0 + V(x_4) & c_1 & \cdots \\
0 & 0 & c_2 & c_1 & c_0 + V(x_5) & \cdots \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix}
\qquad \text{and} \qquad
X = \begin{pmatrix} f(x_1) \\ f(x_2) \\ f(x_3) \\ f(x_4) \\ \vdots \end{pmatrix} .$$

The matrix above is banded and it is also symmetric.
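A minimal MATLAB sketch of how this banded matrix might be assembled (the variable names are our own; the grid matches the one used later in the Results section):

    % Sketch: assembling the five-point Hamiltonian on a linear grid.
    h  = 0.1;
    x  = (-7:h:7)';              % linear grid; the wave function is zero outside
    N  = numel(x);               % 141 points for this grid
    V  = 0.5 * x.^2;             % dimensionless harmonic potential
    c0 =  15/(12*h^2);           % diagonal coefficient (includes the -1/2 factor)
    c1 =  -8/(12*h^2);           % first off-diagonal
    c2 =   1/(24*h^2);           % second off-diagonal
    H  = diag(c0 + V) ...
       + c1*(diag(ones(N-1,1),1) + diag(ones(N-1,1),-1)) ...
       + c2*(diag(ones(N-2,1),2) + diag(ones(N-2,1),-2));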

H is a Hermitian operator. We know then that eigenstates corresponding to different eigenvalues should be orthogonal,

$$\int \Psi_i^*(x)\,\Psi_j(x)\,dx = \delta_{ij},$$

where normalization is assumed. Further, the eigenstates of a Hermitian operator form a complete set. The matrix H we just obtained by discretization is a Hermitian matrix (this means that it is self-adjoint, $H = H^\dagger = (H^T)^*$), and its eigenvectors will be orthogonal in a similar manner:

$$X_i^\dagger X_j = \delta_{ij}, \qquad \text{when } E_i \neq E_j .$$

Our matrix above actually belongs to the subclass of Hermitian matrices that are real and symmetric; then the eigenvectors $X_i$ will also be real. If H is an n×n matrix there will be n eigenvectors. These eigenvectors, like the eigenvectors of the Hamilton operator, form a complete set. The difference, though, is that while the eigenfunctions of the operator H can span any function Φ(x) defined anywhere on the real x-axis, the finite set of eigenvectors of our matrix H can span only functions defined on our grid.

Diagonalization

There are several methods to solve the matrix eigenvalue equation. One method is to diagonalize, finding all eigenvalues and (optionally) all eigenvectors. The scheme is then to find the similarity transformation of the matrix H such that

$$X^{-1} H X = D,$$

where D is a diagonal matrix. The eigenvalues are then found on the diagonal of D, and the eigenvectors are the columns of X. A method that uses the fact that the matrix is banded and symmetric is much faster than a general diagonalizer. Another possibility is to find a few eigenvalues (the lowest, the highest, or those in a specified energy region) and their eigenvectors with an iterative method. Methods of this type are often faster when we want a particular solution and have a sparse matrix. One such method is the (inverse) power iteration method.

Inverse power iteration

For solutions of the Schrödinger equation we are more interested in the smallest eigenvalue than in the largest. For this we use the Inverse iteration scheme which

formally corresponds to the performance of power iteration for

A

1 (where A is a nxn

matrix). Since the computation of the inverse of a matrix is as time-consuming as the full eigenvalue problem the practical calculation is, however, through a different path. The

eigenvalues of

A

1

are

1

n

1

, if

1

n

n

are the eigenvalues of A.

Hence:

n

AX X

n

X A X

n

n

n

We will now solve the system of linear equations $A Y_2 = Y_1$, where $Y_1$ is our first guess for an eigenvector. In the next step we put the solution, $Y_2$, on the right-hand side and solve again, so that we have an iterative scheme $A Y_{i+1} = Y_i$. To analyze the situation we note that any vector can be expanded in eigenvectors of the matrix. After the first step we have, for instance:

$$Y_2 = A^{-1} Y_1 = A^{-1} \sum_n c_n X_n = \sum_n \frac{c_n}{\lambda_n} X_n .$$

It is clear that in the iterative procedure the solution $Y_{i+1}$ will converge towards the eigenvector with the largest value of $1/|\lambda_n|$, i.e. towards the eigenvector with the smallest eigenvalue. At every step in the iteration the current approximation of the inverse of the smallest eigenvalue is given by:

$$\frac{Y_i^\dagger Y_{i+1}}{Y_i^\dagger Y_i} = \frac{Y_i^\dagger A^{-1} Y_i}{Y_i^\dagger Y_i} \;\longrightarrow\; \frac{1}{\min_n |\lambda_n|}, \qquad \text{when } i \to \infty .$$

Also here it is a good idea to normalize in every step. Finally, at every step in the iteration we solve a system of linear equations $A Y_{i+1} = Y_i$. The left-hand side matrix is the same every time, but the right-hand side changes.

This is a typical situation where it is an advantage to first perform an LU decomposition, A = LU, for fast solutions in the following iterations.
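In MATLAB the factor-once, solve-many pattern might look like this (a sketch with a small stand-in matrix; in our problem A would be the Hamiltonian):

    % Sketch: LU-factorize once, then reuse the factors in every iteration.
    A = gallery('tridiag', 5, -1, 2, -1);   % small stand-in matrix for A
    y = ones(5, 1);                         % initial guess
    [L, U, P] = lu(full(A));                % one O(N^3) factorization: P*A = L*U
    for it = 1:3
        y = U \ (L \ (P*y));                % each solve costs only O(N^2)
        y = y / norm(y);                    % normalize in every step
    end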

Inverse Power Iteration with Shift

If we shift the matrix by a scalar constant ξ, forming $A - \xi I$, the power iterations will converge to the largest (smallest) eigenvalue of the shifted matrix, i.e. to $\max_n |\lambda_n - \xi|$, or to $1/\min_n |\lambda_n - \xi|$ for the inverse power iteration. In this way it is possible to find more than one eigenvalue. The shift can also be used to improve the convergence, since the rate of convergence depends on $|\lambda_1| / |\lambda_2|$, where $\lambda_1$ is the largest and $\lambda_2$ is the second largest eigenvalue, or vice versa for the inverse power iteration.
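The full scheme can be put together as in the following sketch (our reconstruction of such a routine, not the report's actual code; the function name and the Rayleigh-quotient eigenvalue estimate are our own choices):

    function [lambda, y] = invPowerShift(H, xi, maxIter, tol)
    % Sketch: inverse power iteration with shift xi on a symmetric matrix H.
        n = size(H, 1);
        [L, U, P] = lu(H - xi*eye(n));  % factor the shifted matrix once
        y = ones(n, 1) / sqrt(n);       % constant initial guess, as in the report
        lambda = xi;
        for it = 1:maxIter
            y = U \ (L \ (P*y));        % solve (H - xi*I) * y_new = y_old
            y = y / norm(y);            % normalize in every step
            lambdaOld = lambda;
            lambda = y' * H * y;        % Rayleigh quotient: eigenvalue estimate
            if abs(lambda - lambdaOld) < tol
                break                   % converged to the eigenvalue nearest xi
            end
        end
    end

With xi = 0 this targets the smallest eigenvalue; shifts such as 1.5 or 2 would pick out the states discussed in the Results section below.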

Results

To begin with, we take a linear grid x ∈ [−7, 7] and a step size h = 0.1. Our potential is given by the formula

$$V(x) = \frac{1}{2} x^2 .$$

Below we plot the potential as a function of distance:

[Figure: the harmonic potential V(x) = x²/2 as a function of x.]

Standard routine

Then we use the built-in Matlab command 'eig' to get the eigenvalues and eigenvectors of our Hamiltonian. We check that the Hamiltonian matrix is symmetric by calculating the quantity H(i,j) − H(j,i), where i, j are the row and column indices of the matrix, respectively. We find that this quantity is zero, hence the matrix is indeed symmetric. We also check that the eigenvectors we have found are correct by inserting them into the eigenvalue equation.
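The corresponding MATLAB step might look as follows (a sketch; H is the matrix assembled earlier):

    % Sketch: diagonalizing with the built-in eig and checking the results.
    fprintf('max asymmetry: %g\n', max(max(abs(H - H'))));  % H(i,j) - H(j,i) test
    [X, E] = eig(H);             % columns of X: eigenvectors; diag(E): eigenvalues
    [E, idx] = sort(diag(E));    % sort eigenvalues in ascending order
    X = X(:, idx);
    res = norm(H*X(:,1) - E(1)*X(:,1));   % insert back into HX = EX as a check
    fprintf('lowest eigenvalues: %.4f %.4f %.4f %.4f,  residual: %g\n', E(1:4), res);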

For the lowest energies we notice that equation (2) holds, since, as we can see in the table below, the energies (eigenvalues) are described by the formula (n + 1/2), where n = 0, 1, 2, 3, …

State (n)    Energy
0            0.5
1            1.5
2            2.5
3            3.4999

We also present the eigenfunctions for the quantum harmonic oscillator for the first 4 states.


We notice that the wave functions for higher n have more "humps" within the potential well. This corresponds to a shorter wavelength, and by the de Broglie relation they may therefore be seen to have higher momentum and therefore higher energy.

Below, we present the two highest energy solutions, with E = 284.1615. We expect the higher energy solutions to be unphysical and to depend on the approximations, for instance the finite box, the grid, and the representation of the derivative.


Doubling our grid to [−14, 14], we plot the highest energy solutions, with E = 353.57515.


Inverse Power Iteration with shift routine

We also write an inverse power iteration with shift routine to obtain the first few eigenvalues and eigenvectors of the matrix (as described above). As an initial guess, we take a constant vector (of ones) with the same length as the dimension of the Hamiltonian matrix. The shift is set to zero in order to get the smallest eigenvalue and its corresponding eigenvector. First, we plot the ground state for a few iterations until we find the correct answer. We conclude that after 3 iterations we get the desired value.

The plot is given below:

Below, we present the lowest odd solution, obtained after 3 iterations with the shift set to 1.5.


In addition, we present the lowest even solution, obtained after five iterations with the shift set to 2.


We conclude that the convergence rate depends on the value of the shift. For instance, had we set the shift in the previous case to 2.5, one iteration would have given the right answer, while we need 5 iterations with the shift set to 2. In order to examine the shift dependence further, we present a table varying the shift from 1.9 to 2.5 for this solution, noting the number of iterations needed to find the exact solution.

Shift    Iterations
1.9      6
2.0      5
2.1      4
2.2      3
2.3      2
2.4      2
2.5      1

As expected, the closer the shift is to the eigenvalue, the fewer iterations our routine needs in order to return the right eigenvalue and its corresponding eigenvector.

Time duration of our methods

We now want to check how time-consuming the two methods are. Using the built-in commands tic and toc, we read the elapsed time from Matlab's stopwatch. Below we present our results in a table (a timing sketch follows the table), concluding that the inverse power iteration with shift method is much faster than the standard routine.

Step size    Matrix size    Built-in command (s)    Routine, 1 it. (s)    2 iterations (s)    3 iterations (s)
0.1          141×141        2.183901                0.016805              0.018968            0.022248
0.01         1401×1401      26.217114               0.774165              0.937791            0.963312
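The measurements above might be produced along the following lines (a sketch; H is the matrix assembled earlier, invPowerShift the routine sketched above, and the exact times depend on the machine):

    % Sketch: timing the built-in diagonalizer against the iterative routine.
    tic;  [X, E] = eig(H);                           tEig = toc;
    tic;  [e0, y0] = invPowerShift(H, 0, 3, 1e-10);  tInv = toc;
    fprintf('eig: %.6f s,  inverse iteration: %.6f s\n', tEig, tInv);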

For step size 0.1, our routine needs about 130 times less time to compute the eigenvalue and the corresponding eigenvector, while for step size 0.01 it is about 33 times faster than the standard routine.

Last but not least, we want to compare our calculation with the analytical solutions of the harmonic oscillator problem. We can see that for the lower states we get the half-integer values (n + 1/2) (since we work in a dimensionless model), as expected from theory. However, from the fourth eigenvalue on, we start to see a small deviation from the expected value in the third decimal. This can be explained by the fact that the second-order derivative is approximated with a finite-difference formula, which introduces a specific error. Moreover, we force our solution to be zero outside the last point in our grid, since this is necessary to make it possible to normalize the wave function. Our solutions are normalized and tend to be orthogonal to each other: their inner product is almost, but not exactly, zero. This can also be explained by the fact that we have used an approximation for the second derivative; a more accurate method could be used to get better results.
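The normalization and orthogonality statements can be checked directly on the grid; a short sketch (X as returned by eig above):

    % Sketch: checking normalization and (near-)orthogonality of the solutions.
    overlap = X(:,1)' * X(:,2);   % inner product of two different states, ~0
    nrm     = X(:,1)' * X(:,1);   % squared norm of the ground state, = 1
    fprintf('overlap = %.2e,  norm = %.6f\n', overlap, nrm);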

References

1. G. B. Arfken, H. J. Weber and F. E. Harris, Mathematical Methods for Physicists.
2. L. Edsberg, Introduction to Computation and Modeling for Differential Equations.
3. E. Lindroth, Lecture notes, Computational Physics course (FK8002).
4. W. H. Press, S. A. Teukolsky, W. T. Vetterling and B. P. Flannery, Numerical Recipes.