
A Molecular Modeler's Guide to Statistical Mechanics

Course notes for BIOE575


Daniel A. Beard
Department of Bioengineering, University of Washington
Box 3552255, dbeard@bioeng.washington.edu, (206) 685 9891
April 11, 2001

Contents
1 Basic Principles and the Microcanonical Ensemble
  1.1 Classical Laws of Motion
  1.2 Ensembles and Thermodynamics
    1.2.1 An Ensemble of Particles
    1.2.2 Microscopic Thermodynamics
    1.2.3 Formalism for Classical Systems
  1.3 Example Problem: Classical Ideal Gas
  1.4 Example Problem: Quantum Ideal Gas


2 Canonical Ensemble and Equipartition
  2.1 The Canonical Distribution
    2.1.1 A Derivation
    2.1.2 Another Derivation
    2.1.3 One More Derivation
  2.2 More Thermodynamics
  2.3 Formalism for Classical Systems
  2.4 Equipartition
  2.5 Example Problem: Harmonic Oscillators and Blackbody Radiation
    2.5.1 Classical Oscillator
    2.5.2 Quantum Oscillator
    2.5.3 Blackbody Radiation
  2.6 Example Application: Poisson-Boltzmann Theory
  2.7 Brief Introduction to the Grand Canonical Ensemble


3 Brownian Motion, Fokker-Planck Equations, and the Fluctuation-Dissipation Theorem
  3.1 One-Dimensional Langevin Equation and Fluctuation-Dissipation Theorem
  3.2 Fokker-Planck Equation
  3.3 Brownian Motion of Several Particles
  3.4 Fluctuation-Dissipation and Brownian Dynamics


Chapter 1 Basic Principles and the Microcanonical Ensemble


The first part of this course will consist of an introduction to the basic principles of statistical mechanics (or statistical physics), which is the set of theoretical techniques used to understand microscopic systems and how microscopic behavior is reflected on the macroscopic scale. In the later parts of the course we will see how the tool set of statistical mechanics is key in its application to molecular modeling. Along the way in our development of basic theory we will uncover the principles of thermodynamics. This may come as a surprise to those familiar with the classical engineering paradigm, in which the laws of thermodynamics appear as if from the brain of Jove (or from the brain of some wise old professor of engineering). This is not the case. In fact, thermodynamics arises naturally from basic principles. So with this foreshadowing in mind we begin by examining the classical laws of motion.¹

1.1 Classical Laws of Motion

Recall Newton's famous second law of motion, often expressed as $\mathbf{F} = m\mathbf{a}$, where $\mathbf{F}$ is the force acting to accelerate a particle of mass $m$ with the acceleration $\mathbf{a}$. For a collection of $N$ particles located at Cartesian positions $\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_N$ the law of motion becomes

$$m_i \ddot{\mathbf{r}}_i = \mathbf{F}_i(\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_N), \qquad i = 1, 2, \ldots, N, \tag{1.1.1}$$

where $\mathbf{F}_i$ are the forces acting on the $N$ particles.²

We shall see that in the absence of external fields or dissipation the Newtonian equation of motion preserves total energy:

$$E = K + U = \sum_{i=1}^{N} \frac{m_i}{2}\,\dot{\mathbf{r}}_i \cdot \dot{\mathbf{r}}_i + U(\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_N), \tag{1.1.2}$$

¹ This course will be concerned primarily with classical physics. Much of the material presented will be applicable to quantum mechanical systems, and occasionally such references will be made.
² A note on notation: Throughout these notes vectors are denoted by bold lower case letters (e.g., $\mathbf{r}_i$, $\mathbf{p}_i$). The notation $\dot{\mathbf{r}}_i$ denotes the time derivative of $\mathbf{r}_i$, i.e., $\dot{\mathbf{r}}_i = d\mathbf{r}_i/dt$, and $\ddot{\mathbf{r}}_i = d^2\mathbf{r}_i/dt^2$.


where $U$ is some potential energy function and $K = \sum_{i=1}^{N} \frac{m_i}{2}\,\dot{\mathbf{r}}_i \cdot \dot{\mathbf{r}}_i$ is the kinetic energy.

Another way to pose the classical law of motion is the Hamiltonian formulation, defined in terms of the particle positions and momenta. It is convenient to adopt the notation (from quantum mechanics) $\mathbf{q}$ and $\mathbf{p}$ for positions and momenta, and to consider the scalar quantities $q_i$ and $p_i$, which denote the entries of the vectors $\mathbf{q}$ and $\mathbf{p}$. For a collection of $N$ particles, $\mathbf{q}$ and $\mathbf{p}$ are the collective positions and momenta vectors listing all $3N$ entries. The so-called Hamiltonian function is an expression of the total energy of a system:

$$H(\mathbf{q}, \mathbf{p}) = \sum_{i=1}^{3N} \frac{p_i^2}{2m_i} + U(q_1, q_2, \ldots, q_{3N}). \tag{1.1.3}$$

Hamilton's equations of motion are written as:

$$\dot{q}_i = \frac{\partial H}{\partial p_i}, \tag{1.1.4}$$

$$\dot{p}_i = -\frac{\partial H}{\partial q_i}. \tag{1.1.5}$$

Hamilton's equations are equivalent to Newton's:

$$\dot{q}_i = p_i/m_i, \qquad \dot{p}_i = -\frac{\partial U}{\partial q_i} = F_i. \tag{1.1.6}$$

So why bother with Hamilton when we are already familiar with Newton? The reason is that the Hamiltonian formulation is often convenient. For example, starting from the Hamiltonian formulation, it is straightforward to prove energy conservation:

$$\frac{dH}{dt} = \sum_{i=1}^{3N}\left(\frac{\partial H}{\partial q_i}\dot{q}_i + \frac{\partial H}{\partial p_i}\dot{p}_i\right) = \sum_{i=1}^{3N}\left(\frac{\partial H}{\partial q_i}\frac{\partial H}{\partial p_i} - \frac{\partial H}{\partial p_i}\frac{\partial H}{\partial q_i}\right) = 0. \tag{1.1.7}$$
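To make Eqs. (1.1.4)-(1.1.7) concrete, the following sketch integrates Hamilton's equations numerically for a single particle in a one-dimensional harmonic potential $U(q) = \gamma q^2/2$. The potential, parameter values, and leapfrog step size are illustrative choices, not part of the notes.

```python
# Leapfrog (velocity Verlet) integration of Hamilton's equations
# (1.1.4)-(1.1.5) for H = p^2/(2m) + gamma*q^2/2. All parameters are
# illustrative, in arbitrary units.
m, gamma = 1.0, 4.0          # mass and spring constant
q, p = 1.0, 0.0              # initial position and momentum
dt, nsteps = 1.0e-3, 10_000

def hamiltonian(q, p):
    return p * p / (2.0 * m) + 0.5 * gamma * q * q

E0 = hamiltonian(q, p)
for _ in range(nsteps):
    p -= 0.5 * dt * gamma * q    # half kick:  dp/dt = -dH/dq = -gamma*q
    q += dt * p / m              # full drift: dq/dt =  dH/dp = p/m
    p -= 0.5 * dt * gamma * q    # half kick
drift = abs(hamiltonian(q, p) - E0)
print(drift)   # small: the trajectory stays near the constant-H surface
```

The same kick-drift structure, with the kick taken from $-\partial H/\partial q$ and the drift from $\partial H/\partial p$, underlies the dynamics integrators used in molecular modeling.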
1.2 Ensembles and Thermodynamics

With our review of the equations of classical mechanics complete, we undertake our study of statistical physics with an introduction to the concepts of statistical thermodynamics. In this section thermodynamics will be briefly introduced as a consequence of the interaction of ensembles of large numbers of particles. The material loosely follows Chapter 1 of Pathria's Statistical Mechanics [3], and additional information can be found in that text.

1.2.1 An Ensemble of Particles

Consider a collection of $N$ particles confined to a volume $V$, with total internal energy $E$. A system of this sort is often referred to as an NVE system, as $N$, $V$, and $E$ are the three thermodynamic variables that are held fixed. [In general three variables are necessary to define the thermodynamic state of a system. Other thermodynamic properties, such as temperature for example, cannot be assigned in an NVE ensemble without changing at least one of the variables $N$, $V$, or $E$.] We will refer to the thermodynamic state as the macrostate of the system.


For a given macrostate, there is likely to be a large number of possible microstates, which correspond to different microscopic configurations of the particles in the system. According to the principles of quantum mechanics there is a finite fixed number of microscopic states that can be adopted by our NVE system.³ We denote this number of states as $\Omega(N, V, E)$. For a classical system, the microstates are of course not discrete and the number of possible states for a fixed ensemble is in general not finite. To see this imagine a system of a single particle ($N = 1$) travelling in an otherwise empty box of volume $V$. There are no external force fields acting on the particle so its total energy $E$ is purely kinetic. The particle could be found in any location within the box, and its velocity could be directed in any direction without changing the thermodynamic macrostate defined by the fixed values of $N$, $V$, and $E$. Thus there are an infinite number of allowable states. Let us temporarily ignore this fact and move on with the discussion based on a finite (yet undeniably large) $\Omega(N, V, E)$. This should not bother those of us familiar with quantum mechanics. For classical applications we shall see that bookkeeping of the state space for classical systems is done as an integration of the continuous state space rather than a discrete sum as employed in quantum statistical mechanics.

At this point don't worry about how you might go about computing $\Omega(N, V, E)$, or how $\Omega$ might depend on $N$, $V$, and $E$ for particular systems. We'll address these issues later. For now just appreciate that the quantity $\Omega(N, V, E)$ exists for an NVE system.

1.2.2 Microscopic Thermodynamics

Consider two such so-called NVE systems, denoted system 1 and system 2, having macrostates defined by $(N_1, V_1, E_1)$ and $(N_2, V_2, E_2)$, respectively.

[Figure 1.1: Two NVE systems in thermal contact.]

Next, bring the two systems into thermal contact (see Fig. 1.1). By thermal contact we mean that the systems are allowed to exchange energy, but nothing else. That is, $E_1$ and $E_2$ may change, but $N_1$, $N_2$, $V_1$, and $V_2$ remain fixed. Of course the total energy remains fixed as well, that is,

$$E^{(0)} = E_1 + E_2 = \text{constant}, \tag{1.2.8}$$

if the two systems interact only with one another. Now we introduce a fundamental postulate of statistical mechanics: At any time, system 1 is equally likely to be in any one of its $\Omega_1(E_1)$ microstates and system 2 is equally likely to be in any one of its $\Omega_2(E_2)$ microstates (more on this assumption later). Given this assumption, the composite system is equally likely to be in any one of its $\Omega^{(0)}(E_1, E_2)$ possible microstates. The number $\Omega^{(0)}$ can be expressed as the multiplication:

$$\Omega^{(0)}(E_1, E_2) = \Omega_1(E_1)\,\Omega_2(E_2). \tag{1.2.9}$$

³ The number $\Omega$ corresponds to the number of independent solutions to the Schrödinger equation that the system can adopt for a given eigenvalue of the Hamiltonian.


Next we look for the value of $E_1$ (or equivalently, $E_2$) for which the number of microstates $\Omega^{(0)}$ achieves its maximum value. We will call this achievement equilibrium, or more specifically thermal equilibrium. The assumption here being that physical systems naturally move from improbable macrostates to more probable macrostates.⁴ Due to the large numbers with which we deal on the macro-level ($N \sim 10^{23}$), the most probable macrostate is orders of magnitude more probable than even closely related macrostates. That means that for equilibrium we must maximize $\Omega^{(0)}$ under the constraint that the sum $E^{(0)} = E_1 + E_2$ remains constant. At the maximum $\partial \Omega^{(0)}/\partial E_1 = 0$, or

$$\frac{\partial \Omega^{(0)}}{\partial E_1} = \Omega_2(E_2)\,\frac{\partial \Omega_1}{\partial E_1} + \Omega_1(E_1)\,\frac{\partial \Omega_2}{\partial E_2}\frac{\partial E_2}{\partial E_1} = 0, \tag{1.2.10}$$

where $(\bar{E}_1, \bar{E}_2)$ denote the maximum point. Since $\partial E_2/\partial E_1 = -1$ from Eq. (1.2.8), Equation (1.2.10) reduces to:

$$\frac{1}{\Omega_1}\frac{\partial \Omega_1}{\partial E_1}\bigg|_{\bar{E}_1} = \frac{1}{\Omega_2}\frac{\partial \Omega_2}{\partial E_2}\bigg|_{\bar{E}_2}, \tag{1.2.11}$$

which is equivalent to

$$\frac{\partial}{\partial E_1}\ln\Omega_1(E_1)\bigg|_{\bar{E}_1} = \frac{\partial}{\partial E_2}\ln\Omega_2(E_2)\bigg|_{\bar{E}_2}. \tag{1.2.12}$$

To generalize, for any number of systems in equilibrium thermal contact,

$$\beta \equiv \frac{\partial}{\partial E}\ln\Omega = \text{constant} \tag{1.2.13}$$
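The sharpness of the maximum of $\Omega^{(0)} = \Omega_1\Omega_2$, and the equal-slope condition (1.2.11), can be checked with a toy model that does not appear in the notes: two "Einstein solids" in which $N$ oscillators share $q$ quanta of energy, so that $\Omega(q, N) = \binom{q+N-1}{q}$.

```python
from math import comb, log

# Omega(q, n): microstate count for n harmonic oscillators sharing q quanta
# (a standard counting exercise; an illustrative stand-in for Omega_1, Omega_2).
def omega(q, n):
    return comb(q + n - 1, q)

N1, N2, E_tot = 300, 200, 100   # system sizes and total quanta (arbitrary)

# Composite count Omega1(E1) * Omega2(E_tot - E1), Eq. (1.2.9), for each split.
counts = [omega(e1, N1) * omega(E_tot - e1, N2) for e1 in range(E_tot + 1)]
e1_star = max(range(E_tot + 1), key=lambda e1: counts[e1])
print("most probable E1:", e1_star)

# At the maximum the discrete slopes of ln(Omega) match, as in Eq. (1.2.11):
b1 = log(omega(e1_star + 1, N1)) - log(omega(e1_star, N1))
b2 = log(omega(E_tot - e1_star + 1, N2)) - log(omega(E_tot - e1_star, N2))
print("slopes:", b1, b2)   # nearly equal: the systems share a "temperature"
```

The most probable split puts the quanta in proportion to the system sizes, and the composite count falls off steeply on either side of it, which is the content of the equilibrium argument above.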

for each system. Let us pause and think for a moment: From our experience, what do we know about systems in equilibrium thermal contact? One thing that we know is that they should have the same temperature. Most people have an intuitive understanding of what temperature is. At least we can often gauge whether or not two objects are of equal or different temperatures. You might even know of a few ways to measure temperature. But do you have a precise physical definition of temperature? It turns out that the constant $\beta$ is related to the temperature via

$$\beta = \frac{1}{kT}, \tag{1.2.14}$$

where $k$ is Boltzmann's constant. Therefore the temperature of the NVE ensemble is expressed as

$$\frac{1}{T} = k\,\frac{\partial}{\partial E}\ln\Omega(N, V, E). \tag{1.2.15}$$

Until now some readers may have had a murky and vague mental picture of what the thermodynamic variable temperature represents. And now we all have a murky and vague mental picture of what temperature represents. Hopefully the picture will become more clear as we proceed.

Next consider that systems 1 and 2 are not only in thermal contact, but also their volumes are allowed to change in such a way that the total volume $V^{(0)} = V_1 + V_2$ remains constant. For this

⁴ Again, the term macrostate refers to the thermodynamic state of the composite system, defined by the variables $N_1$, $V_1$, $E_1$ and $N_2$, $V_2$, $E_2$. A more probable macrostate will be one that corresponds to more possible microstates than a less probable macrostate.


example imagine a flexible wall separates the two chambers; the wall flexes to allow pressure to equilibrate between the chambers, but the particles are not allowed to pass. Thus $N_1$ and $N_2$ remain fixed. For such a system we find that maximizing $\Omega^{(0)}$ yields

$$\frac{\partial \Omega^{(0)}}{\partial V_1} = \Omega_2(V_2)\,\frac{\partial \Omega_1}{\partial V_1} + \Omega_1(V_1)\,\frac{\partial \Omega_2}{\partial V_2}\frac{\partial V_2}{\partial V_1} = 0, \tag{1.2.16}$$

or

$$\frac{1}{\Omega_1}\frac{\partial \Omega_1}{\partial V_1} = \frac{1}{\Omega_2}\frac{\partial \Omega_2}{\partial V_2}, \tag{1.2.17}$$

or

$$\frac{\partial}{\partial V_1}\ln\Omega_1(V_1) = \frac{\partial}{\partial V_2}\ln\Omega_2(V_2), \tag{1.2.18}$$

or

$$\eta \equiv \frac{\partial}{\partial V}\ln\Omega = \text{constant}. \tag{1.2.19}$$

We shall see that the parameter $\eta$ is related to pressure, as you might expect. But first we have one more case to consider, that is mass equilibration. For this case, imagine that the partition between the chambers is perforated and particles are permitted to freely travel from one system to the next. The equilibrium statement for this system is

$$\zeta \equiv \frac{\partial}{\partial N}\ln\Omega = \text{constant}. \tag{1.2.20}$$

To summarize, we have the following:

1. Thermal (Temperature) Equilibrium: $\beta = \frac{\partial}{\partial E}\ln\Omega$ constant.
2. Volume (Pressure) Equilibrium: $\eta = \frac{\partial}{\partial V}\ln\Omega$ constant.
3. Number (Concentration) Equilibrium: $\zeta = \frac{\partial}{\partial N}\ln\Omega$ constant.

How do these relationships apply to the macroscopic world with which we are familiar? Recall the fundamental expression from thermodynamics:

$$dE = T\,dS - P\,dV + \mu\,dN, \tag{1.2.21}$$

which tells us how to relate changes in energy to changes in the variables entropy $S$, volume $V$, and number of particles $N$, occurring at temperature $T$, pressure $P$, and chemical potential $\mu$. Equation (1.2.21) arose as an empiricism which relates the three intrinsic thermodynamic properties $T$, $P$, and $\mu$ to the three extrinsic properties $S$, $V$, and $N$. In developing this relationship, it was necessary to introduce a novel idea, entropy, which we will try to make some sense of below. For constant $N$ and $V$, Equation (1.2.21) gives us

$$\left(\frac{\partial S}{\partial E}\right)_{N,V} = \frac{1}{T}. \tag{1.2.22}$$



Going back to Equation (1.2.13) we see that

$$S = k\ln\Omega, \tag{1.2.23}$$

which makes sense if we think of entropy as a measure of the total disorder in a system. The greater the number of possible states, the greater the entropy. For pressure and chemical potential we find the following relationships: For constant $N$ and $E$ we arrive at

$$\left(\frac{\partial S}{\partial V}\right)_{N,E} = \frac{P}{T}, \qquad \text{or} \qquad P = kT\,\frac{\partial}{\partial V}\ln\Omega = kT\eta. \tag{1.2.24}$$

For constant $V$ and $E$ we obtain

$$\left(\frac{\partial S}{\partial N}\right)_{V,E} = -\frac{\mu}{T}, \qquad \text{or} \qquad \mu = -kT\,\frac{\partial}{\partial N}\ln\Omega = -kT\zeta. \tag{1.2.25}$$

For completeness we repeat:

$$\left(\frac{\partial S}{\partial E}\right)_{N,V} = \frac{1}{T}, \qquad \text{or} \qquad \frac{1}{T} = k\,\frac{\partial}{\partial E}\ln\Omega = k\beta. \tag{1.2.26}$$

Through Eqs. (1.2.24)-(1.2.26) the internal thermodynamic parameters familiar to our everyday experience (temperature, pressure, and chemical potential) are related to the microscopic world of $N$, $V$, and $\Omega$. The key to this translation is the formula $S = k\ln\Omega$. As Pathria puts it, this formula provides a bridge between the microscopic and the macroscopic [3]. After introducing such powerful theory it is compulsory that we work out some example problems in the following sections. But I recommend that readers tackling this subject matter for the first time should pause to appreciate what they have learned so far. By asserting that entropy (the most mysterious property to arise in thermodynamics) is simply proportional to the log of the number of accessible microstates, we have derived direct relationships between the microscopic and the macroscopic worlds.

Before moving on to the application problems I should point out one more thing about the number $\Omega$, that is its name. The quantity $\Omega$ is commonly referred to as the microcanonical partition function, a partition function being a statistically weighted sum over the possible states of a system. Since $\Omega$ is a non-biased enumeration of the microstates, we refer to it as microcanonical. Similarly, another name for the NVE ensemble is the microcanonical ensemble. Later we will meet the canonical (NVT) and grand canonical ($\mu$VT) ensembles.
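As a quick numerical illustration of $S = k\ln\Omega$ (using a toy system that does not appear in the notes), consider $N$ two-state spins with $n$ pointing up, for which $\Omega(N, n) = \binom{N}{n}$. The most disordered macrostate corresponds to the largest number of microstates and thus the largest entropy.

```python
from math import comb, log

# Entropy S/k = ln(Omega) for N two-state spins with n spins "up";
# Omega(N, n) = C(N, n). A toy illustration of S = k ln(Omega).
N = 100
S_over_k = [log(comb(N, n)) for n in range(N + 1)]
n_star = max(range(N + 1), key=lambda n: S_over_k[n])
print(n_star, S_over_k[n_star])   # the half-up macrostate maximizes entropy
```

The fully ordered macrostates ($n = 0$ or $n = N$) have a single microstate and zero entropy, while the half-up macrostate is the most probable and the most disordered.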

1.2.3 Formalism for Classical Systems

The microcanonical partition function for a classical system is proportional to the volume of phase space accessible by the system. For a system of $N$ particles the phase space is a $6N$-dimensional space encompassing the $3N$ variables $q_i$ and the $3N$ variables $p_i$, and the partition function is proportional to the integral:

$$\Omega(N, V, E) \propto \int\!\!\cdots\!\!\int \delta\big(H(q_1, \ldots, q_{3N}; p_1, \ldots, p_{3N}) - E\big)\,dq_1\cdots dq_{3N}\,dp_1\cdots dp_{3N}, \tag{1.2.27}$$

or using vector shorthand

$$\Omega(N, V, E) \propto \int \delta\big(H(\mathbf{q}, \mathbf{p}) - E\big)\,d^{3N}q\,d^{3N}p, \tag{1.2.28}$$

where the notation $d^{3N}q\,d^{3N}p$ reminds us that the integration is over $6N$-dimensional space. In Eqs. (1.2.27)-(1.2.28) the delta function restricts the integration to the constant energy hypersurface defined by $H = E = $ constant. [In general we won't be integrating this difficult-looking delta function directly. Just think of it as a mathematical shorthand for restricting the phase space to a constant-energy subspace.]

We notice that Equation (1.2.28) lacks a constant of proportionality that allows us to replace the proportionality symbol with the equality symbol and compute $\Omega$. This constant comes from relating a given volume of the classical phase space to a discrete number of quantum microstates. It turns out that this constant of proportionality is $1/(N!\,h^{3N})$, where $h$ is Planck's constant. Thus

$$\Omega(N, V, E) = \frac{1}{N!\,h^{3N}}\int d^{3N}q\,d^{3N}p, \tag{1.2.29}$$

where the integration is over the subspace defined by $H(\mathbf{q}, \mathbf{p}) = E$.

From where does the constant $h^{3N}$ come? We know from quantum mechanics that to specify the position of a particle, we have to allow its momentum to lose coherence. Similarly, when we specify the momentum with increasing certainty, the position loses coherence. If we consider $\Delta q$ and $\Delta p$ to be the fundamental uncertainties in position and momenta, then Planck's constant tells us how these uncertainties depend upon one another:

$$\Delta q\,\Delta p \approx h. \tag{1.2.30}$$

Thus the minimal discrete volume element of phase space is approximately $h$ for a single particle in one dimension, or $h^{3N}$ when there are $3N$ degrees of freedom. This explains (heuristically at least) the factor of $h^{3N}$. From where does the $N!$ come? We shall see when we enumerate the quantum states of the ideal gas, the indistinguishability of the particles further reduces the partition function by a factor of $N!$, which fixes $\Omega$ as the number of distinguishable microstates.

1.3 Example Problem: Classical Ideal Gas

A system of noninteracting monatomic particles is referred to as the ideal gas. For such a system the kinetic energy is the only contribution to the Hamiltonian, and

$$H = \sum_{i=1}^{3N} \frac{p_i^2}{2m}. \tag{1.3.31}$$

Therefore

$$\Omega(N, V, E) = \frac{1}{N!\,h^{3N}}\int d^{3N}q \int_{H=E} d^{3N}p = \frac{V^N}{N!\,h^{3N}}\int_{\sum p_i^2 = 2mE} d^{3N}p, \tag{1.3.32}$$

where $\int d^{3N}q = V^N$ represents integration over the volume of the container. [The integral can be split into $\mathbf{q}$ and $\mathbf{p}$ components because $H$ does not depend on particle positions in any way.]

It turns out that knowing $\Omega \propto V^N$ is enough to derive the ideal gas law:

$$\frac{P}{T} = k\,\frac{\partial}{\partial V}\ln\Omega = \frac{kN}{V}, \tag{1.3.33}$$

or

$$PV = NkT = nRT, \tag{1.3.34}$$

where $R = kN_A$ is the gas constant, $n = N/N_A$ is the number of particles in moles, and $N_A$ is Avogadro's number.

For other properties (like energy and entropy) we need to do something with the integral in Equation (1.3.32). We approach this integral by first noticing that the constant energy surface $\sum_{i=1}^{3N} p_i^2 = 2mE$ defines a sphere of radius $(2mE)^{1/2}$ in $3N$-dimensional space. We can find the volume and surface area of such a sphere from a handbook of mathematical functions. In $n$ dimensions the volume and surface area of a sphere of radius $a$ are given by:

$$V_n(a) = \frac{\pi^{n/2}}{(n/2)!}\,a^n, \qquad S_n(a) = \frac{2\pi^{n/2}}{(n/2-1)!}\,a^{n-1}. \tag{1.3.35}$$

[One may wonder what to do in the case where $n/2$ is non-integral. Specifically, how would one define the $(n/2)!$ and $(n/2-1)!$ factorials? We could use gamma functions, where $\Gamma(x)$ is a generalization of the factorial function: $\Gamma(x+1) = x\Gamma(x)$ and $\Gamma(x) = \int_0^{\infty} t^{x-1}e^{-t}\,dt$. It turns out that $\Gamma(x) = (x-1)!$ for integer $x$. So in the above equations for surface area and volume we are using the generalized factorial function $x! = \Gamma(x+1)$, which is the same as the regular factorial function for non-negative integer arguments.]

Returning to the task at hand: we wish to evaluate the integral $\int d^{3N}p$ over the constant energy surface in $3N$ dimensions defined by $\sum_{i=1}^{3N} p_i^2 = 2mE$. One way to do this is to take

$$\int d^{3N}p = S_{3N}\big((2mE)^{1/2}\big), \tag{1.3.36}$$

which gives us

$$\Omega(N, V, E) = \frac{V^N}{N!\,h^{3N}}\cdot\frac{2\pi^{3N/2}}{(3N/2-1)!}\,(2mE)^{(3N-1)/2}. \tag{1.3.37}$$

Taking the $\ln$ of this function, we will employ Stirling's approximation, that $\ln M! \approx M\ln M - M$ for large $M$. Thus

$$\ln\Omega(N, V, E) \approx N\ln V + \frac{3N}{2}\ln\!\left(\frac{2\pi m E}{h^2}\right) - N\ln N + N - \frac{3N}{2}\ln\frac{3N}{2} + \frac{3N}{2}, \tag{1.3.38}$$

where terms that do not grow at least as fast as $N$ have been dropped.



In the limit of large $N$, combining the terms of Equation (1.3.38) and keeping only terms of order $N\ln N$ and $N$ results in

$$\ln\Omega(N, V, E) = N\left[\ln\!\left(\frac{V}{N}\left(\frac{4\pi m E}{3Nh^2}\right)^{3/2}\right) + \frac{5}{2}\right], \tag{1.3.39}$$

or the entropy

$$S(N, V, E) = k\ln\Omega = Nk\left[\ln\!\left(\frac{V}{N}\left(\frac{4\pi m E}{3Nh^2}\right)^{3/2}\right) + \frac{5}{2}\right]. \tag{1.3.40}$$

Using our thermodynamic definition of temperature,

$$\frac{1}{T} = \left(\frac{\partial S}{\partial E}\right)_{N,V} = \frac{3Nk}{2E}, \qquad \text{or} \qquad E = \frac{3}{2}NkT, \tag{1.3.41}$$

and

$$\frac{E}{N} = \frac{3}{2}kT. \tag{1.3.42}$$

As you can see, the internal energy is proportional to the temperature and, as expected, to the number of particles. Inserting $E = \frac{3}{2}NkT$ into Equation (1.3.40), we get:

$$S = Nk\left[\ln\!\left(\frac{V}{N}\left(\frac{2\pi m k T}{h^2}\right)^{3/2}\right) + \frac{5}{2}\right], \tag{1.3.43}$$

which is the Sackur-Tetrode equation for entropy.

We should note that if, instead of taking the integral $\int d^{3N}p$ to be the surface area of the constant-energy sphere, we had allowed the energy to vary within some small range, we would have arrived at the same results. In fact we shall see that, for the quantum mechanical ideal gas, that is precisely what we will have to do.
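Equations (1.3.40)-(1.3.43) are easy to evaluate numerically. The sketch below (the choice of gas and conditions is illustrative, not from the notes) also confirms that the expression is extensive: doubling $N$, $V$, and $E$ exactly doubles $S$.

```python
from math import log, pi

k = 1.380649e-23      # Boltzmann constant, J/K
h = 6.62607015e-34    # Planck constant, J s

def entropy(N, V, E, m):
    # S = N k [ ln( (V/N) (4 pi m E / (3 N h^2))^(3/2) ) + 5/2 ], Eq. (1.3.40)
    return N * k * (log((V / N) * (4 * pi * m * E / (3 * N * h * h)) ** 1.5) + 2.5)

N = 6.022e23          # one mole of atoms
m = 6.64e-27          # mass of a helium atom, kg (illustrative monatomic gas)
V = 0.0224            # m^3, roughly the molar volume at standard conditions
E = 1.5 * N * k * 300.0          # E = (3/2) N k T at T = 300 K, Eq. (1.3.41)

S1 = entropy(N, V, E, m)
S2 = entropy(2 * N, 2 * V, 2 * E, m)
print(S1)             # on the order of 1e2 J/K for these conditions
print(S2 / S1)        # 2: S is extensive
```

The extensivity check is exactly the property that fails for the uncorrected quantum-gas entropy derived in the next section, which is what motivates the $1/N!$ factor.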

1.4 Example Problem: Quantum Ideal Gas

As we saw for the classical ideal gas, analysis of the quantum mechanical ideal gas will hinge on the enumeration of the partition function, and not on the analysis of the underlying equations of motion. Nevertheless, it is necessary to introduce some quantum mechanical ideas to understand the ideal gas from the perspective of quantum mechanics. It will be worthwhile to go through this exercise to appreciate how statistical mechanics naturally applies to the discrete states observed in quantum systems. First we must find the quantum mechanical state, or wave function $\psi(x, y, z)$, of a single particle living in an otherwise empty box. The equation describing the shape of the constant-energy wave function for a single particle in the presence of no potential field is

$$-\frac{\hbar^2}{2m}\left(\frac{\partial^2\psi}{\partial x^2} + \frac{\partial^2\psi}{\partial y^2} + \frac{\partial^2\psi}{\partial z^2}\right) = E_1\,\psi(x, y, z). \tag{1.4.44}$$

We solve Equation (1.4.44), a form of Schrödinger's equation, with the condition that $\psi = 0$ on the walls of the container. The constant energy $E_1$ (the subscript 1 reminds us that this is the energy of a single particle) is an eigenvalue of the Hamiltonian operator on the left hand side of Equation (1.4.44). Under these conditions the single-particle wave function has the form:

$$\psi(x, y, z) = \left(\frac{2}{L}\right)^{3/2}\sin\frac{n_x\pi x}{L}\,\sin\frac{n_y\pi y}{L}\,\sin\frac{n_z\pi z}{L}, \tag{1.4.45}$$

where the $n_x$, $n_y$, and $n_z$ can be any of the positive integers (1, 2, 3, . . . ). Here the box is assumed to be a cube with sides of length $L$. The energy $E_1$ is related to these numbers via

$$E_1 = \frac{h^2}{8mL^2}\left(n_x^2 + n_y^2 + n_z^2\right). \tag{1.4.46}$$

If energy is fixed then the number of possible quantum states is equal to the number of sets $\{n_x, n_y, n_z\}$ for which

$$n_x^2 + n_y^2 + n_z^2 = \frac{8mL^2E_1}{h^2} = E_1^*, \tag{1.4.47}$$

where $E_1^* = 8mL^2E_1/h^2$. For a system of $N$ noninteracting particles, we have $N$ such sets of three integers, and the energy is the sum of the energies from each particle:

$$\sum_{i=1}^{3N} n_i^2 = \frac{8mL^2E}{h^2} = E^*, \tag{1.4.48}$$

where $E$ now represents the total energy of the system, and $E^*$ is a nondimensionalization of energy.

The similarities between the classical ideal gas and Equation (1.4.48) are striking. As in the classical system, the constant energy condition limits the quantum phase space to the surface of a sphere in $3N$-dimensional space. The important difference is that for the quantum mechanical system the phase space is discrete because the $n_i$ are integers. This discrete nature of the phase space means that $\Omega(N, V, E)$ can be more difficult to pin down than it was for the classical case. To see this imagine the regularly spaced lattice in $3N$-dimensional space which is defined by the set of positive integers $\{n_i\}$. The number $\Omega$ is equal to the number of lattice points which fall on the surface of the sphere defined by Equation (1.4.48); this number is an irregular function of $E^*$. As an illustration, return to the single particle case. There is one possible quantum state for $E_1^* = 3$, and three possible states for $E_1^* = 6$. Yet there are no possible states for energies falling between these two energies. Thus the distinct microstates can be difficult to enumerate. We shall see that as $N$ and $E$ become large, the discrete spectrum becomes more regular and smooth and easier to handle.

Consider the number $\Sigma(N, V, E)$, which we define to be the number of microstates with energy less than or equal to $E$. In the limit of large $E$ and large $N$, $\Sigma(N, V, E)$ is equal to the volume of the positive compartment of a $3N$-dimensional sphere of radius $\sqrt{E^*}$. Recalling Equation (1.3.35) gives

$$\Sigma(N, V, E) \approx \left(\frac{1}{2}\right)^{3N}\frac{\pi^{3N/2}}{(3N/2)!}\,(E^*)^{3N/2}. \tag{1.4.49}$$

[The factor $(1/2)^{3N}$ comes from limiting $\Sigma(N, V, E)$ to the volume spanned by the positive values of $\{n_i\}$.] Plugging in $E^* = 8mL^2E/h^2$ results in

$$\Sigma(N, V, E) \approx \frac{V^N}{h^{3N}}\,\frac{(2\pi m E)^{3N/2}}{(3N/2)!}. \tag{1.4.50}$$
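The irregularity of $\Omega$ for the single-particle box can be seen by brute-force counting of the lattice points satisfying Eq. (1.4.47). This enumeration is a quick numerical check, not part of the notes.

```python
from itertools import product

# Count single-particle states with nx^2 + ny^2 + nz^2 == e_star, where
# nx, ny, nz are positive integers (Eq. 1.4.47). nmax = 20 is ample for
# the small energies examined here.
def states(e_star, nmax=20):
    return sum(1 for n in product(range(1, nmax + 1), repeat=3)
               if n[0] ** 2 + n[1] ** 2 + n[2] ** 2 == e_star)

for e_star in range(3, 15):
    print(e_star, states(e_star))
# E* = 3 has one state, (1,1,1); E* = 6 has three; E* = 4 and 5 have none.
# Omega is an irregular function of the nondimensional energy E*.
```
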



Next we calculate $\Omega(N, V, E)$ from $\Sigma(N, V, E)$ by assuming that the energy varies over some small range $\Delta E$, where $\Delta E \ll E$. The enumeration of microstates within this energy range can be calculated as

$$\Omega(N, V, E) \approx \frac{\partial\Sigma(N, V, E)}{\partial E}\,\Delta E, \tag{1.4.51}$$

which is valid for small $\Delta E$ (relative to $E$). From Equation (1.4.50), we have

$$\frac{\partial\Sigma}{\partial E} = \frac{3N}{2E}\,\Sigma(N, V, E), \tag{1.4.52}$$

and thus

$$\ln\Omega(N, V, E) \approx \ln\Sigma(N, V, E) + \ln\frac{3N}{2} + \ln\frac{\Delta E}{E}. \tag{1.4.53}$$

As for the classical ideal gas, we keep the terms of order $N\ln N$ and order $N$, which grow much faster than $\ln N$ and the constant terms. Thus

$$\ln\Omega(N, V, E) \approx \ln\Sigma(N, V, E), \tag{1.4.54}$$

and

$$\ln\Omega(N, V, E) \approx N\left[\ln\!\left(V\left(\frac{4\pi m E}{3Nh^2}\right)^{3/2}\right) + \frac{3}{2}\right]. \tag{1.4.55}$$

From Equation (1.4.55) we could derive the thermodynamics of the system, just as we did for the classical ideal gas. However we notice that the entropy, which is given by

$$S = k\ln\Omega(N, V, E) = Nk\left[\ln\!\left(V\left(\frac{4\pi m E}{3Nh^2}\right)^{3/2}\right) + \frac{3}{2}\right], \tag{1.4.56}$$

is not equivalent to the Sackur-Tetrode expression, Equation (1.3.43). [The difference is a factor of $1/N!$ in the partition function, which is precisely the factor that we added to the classical partition function, Equation (1.2.29), with no solid justification.] In fact, one might notice that the entropy, according to Equation (1.4.56), is not an extensive measure! If we increase the volume, energy, and number of particles by some fixed proportion, then the entropy will not increase by the same proportion. What have we done wrong? How can we recover the missing factor of $1/N!$?

To justify this extra factor, we need to consider that the particles making up the ideal gas system are not only identical, they are also indistinguishable. We label the possible states that a given particle can be in as state 1, state 2, etc., and denote the number that exist in each state at a given instant as $n_1$, $n_2$, etc. Thus there are $n_1$ particles in state 1, and $n_2$ particles in state 2, and so on. Since the particles are indistinguishable, we can rearrange the particles of the system (by switching the states of the individual particles) in any way as long as the numbers $\{n_i\}$ remain unchanged, and the microstate of the system is unchanged. The number of ways the particles can be rearranged is given by

$$\frac{N!}{n_1!\,n_2!\,\cdots}.$$


Introducing another assumption: if the temperature is high enough that the number of possible microstates of a single particle is so fantastically large that each possible single particle state is represented by, at most, one particle, then $N!/(n_1!\,n_2!\,\cdots) = N!$ (because each $n_i$ is either 1 or 0). Thus we need to correct the partition function by a factor of $1/N!$, and as a result Equation (1.4.55) reduces to Equation (1.3.39).
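The dilute-limit counting argument can be spelled out with a tiny example (the occupancy list below is made up for illustration): when every $n_i$ is 0 or 1, the multinomial factor collapses to $N!$.

```python
from math import factorial, prod

# Rearrangement count N! / (n1! n2! ...) for a toy occupancy list in which
# each single-particle state holds at most one particle (the dilute limit).
occupancies = [1, 0, 1, 1, 0, 1, 1]      # illustrative n_i values
N = sum(occupancies)                      # 5 particles
ways = factorial(N) // prod(factorial(n) for n in occupancies)
print(ways, factorial(N))   # equal, since every n_i! is 1
```
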


Problems

1. (Warm-up Problem) Invert Equation (1.3.40) to produce an equation for $E(N, V, S)$. Using this equation and our basic thermodynamic definitions, derive the pressure-volume law (equation of state). How does this compare with Equation (1.3.34)?

2. (Particle in a box) Verify that Equation (1.4.45) is a solution to Equation (1.4.44). Evaluate the integral $\int\!\int\!\int \psi^2\,dx\,dy\,dz$ for the one-particle system. What are the 6 lowest possible energies of this system? For each of the 6 lowest energies count the number of corresponding quantum states. Are the energy levels equally spaced? Does the number of quantum states increase monotonically with $E$?

3. (Gas of finite size particles) Consider a gas of particles that do not interact in any way except that each particle occupies a finite volume which cannot be overlapped by other particles. What consequences does this imply for the ideal gas law? [Hint: return to the relationship $P/T = k\,\partial\ln\Omega/\partial V$. You might try assuming that each particle is a solid sphere.] Plot $P$ vs. $V$ for both the ideal gas law and the $P$-$V$ relationship for the finite-volume particles. (Use $T = 300$ K and $n = 1$ mole.) Discuss the following questions: Where do the curves differ? Where are they the same? Why?

4. (Phase space of simple harmonic oscillator) Consider a system made up of a single particle of mass $m$ attached to a linear spring, with spring constant $\gamma$. One end of the spring is attached to the particle, the other is fixed in space, and the particle is free to move in one dimension, $x$. What is the Hamiltonian for this system? Plot the phase space for $H(x, p) = E$. Find an expression for the entropy of this system. You can assume that energy varies over some small range: $E$, $E + \Delta E$. Using $1/T = \partial S/\partial E$, derive an expression for the temperature of this system. We saw that the ideal gas has an internal energy of $\frac{3}{2}kT$ per particle. How does the energy as a function of temperature for the simple harmonic oscillator compare to that for the ideal gas? Does it make sense to calculate the temperature of a one-particle system? Why or why not?

Chapter 2 Canonical Ensemble and Equipartition


In Chapter 1 we studied the statistical properties of a large number of particles interacting within the microcanonical ensemble: a closed system with fixed number of particles, volume, and internal energy. While the microcanonical ensemble theory is sound and useful, the canonical ensemble (which fixes the number of particles, volume, and temperature while allowing the energy to vary) proves more convenient than the microcanonical for numerous applications. For example, consider a solution of macromolecules stored in a test tube. We may wish to understand the conformations adopted by the individual molecules. However each molecule exchanges energy with its environment, as undoubtedly does the entire system of the solution and its container. If we focus our attention on a smaller subsystem (say one molecule) we adopt a canonical treatment in which variations in energy and other properties are governed by the statistics of an ensemble at a fixed thermodynamic temperature.

2.1 The Canonical Distribution

2.1.1 A Derivation

Our study of the canonical ensemble begins by treating a large heat reservoir thermally coupled to a smaller system using the microcanonical approach. The energy of the heat reservoir is denoted $E_r$ and the energy of the smaller subsystem, $E$. The system is assumed closed and the total energy is fixed: $E_r + E = E_0 = $ constant. This system is illustrated in Fig. (2.1). For a given energy $E$ of the subsystem, the reservoir can obtain $\Omega_r(E_0 - E)$ microstates, where $\Omega_r$ is the microcanonical partition function. According to our standard assumption that the probability of a state is proportional to the number of microstates available:

$$P(E) \propto \Omega_r(E_0 - E). \tag{2.1.1}$$

We take the $\ln$ of the microcanonical partition function and expand about $\ln\Omega_r(E_0)$:

$$\ln\Omega_r(E_0 - E) = \ln\Omega_r(E_0) - E\,\frac{\partial\ln\Omega_r}{\partial E_r}\bigg|_{E_r = E_0} + \mathcal{O}(E^2). \tag{2.1.2}$$


[Figure 2.1: A system with energy $E$ thermally coupled to a large heat reservoir with energy $E_r$.]

For large reservoirs ($E_r \gg E$) the higher order terms in Equation (2.1.2) vanish and we have

$$\ln\Omega_r(E_0 - E) = \text{constant} - \frac{E}{kT}, \tag{2.1.3}$$

where we have used the microcanonical definition of thermodynamic temperature, $\partial\ln\Omega_r/\partial E_r = 1/kT$. Thus

$$P(E) \propto e^{-E/kT} = e^{-\beta E}, \tag{2.1.4}$$

where $\beta = 1/kT$ has been defined previously. Equation (2.1.4) is the central result in canonical ensemble theory. It tells us how the probability of a given state of a system depends on its energy.

2.1.2 Another Derivation

A second approach to the canonical distribution, found in Feynman's lecture notes on statistical mechanics [1], is also based on the central idea from microcanonical ensemble theory that the probability of a microstate is proportional to $\Omega$. Thus

$$P(E) \propto \Omega(E_0 - E), \tag{2.1.5}$$

where again $E_0$ is the total energy of the system and the heat reservoir to which the system is coupled, $E$ is a possible energy of the system, and $\Omega$ is the microcanonical partition function for the reservoir. (The subscript $r$ has been dropped.) Next Feynman makes use of the fact that energy is defined only up to an additive constant. In other words, there is no absolute energy value, and we can always add a constant, say $\Delta$, so long as we add the same $\Delta$ to all relevant values of energy. Without changing its physical meaning, Equation (2.1.5) can be modified:

$$P(E) \propto \Omega(E_0 + \Delta - E). \tag{2.1.6}$$

Since Eqs. (2.1.5) and (2.1.6) describe the same probabilities, their right-hand sides can differ only by a factor that is independent of $E$, which we write as a function $f(\Delta)$. Equating the right-hand sides of Eqs. (2.1.5) and (2.1.6) up to this factor results in

$$\Omega(E_0 + \Delta - E) = f(\Delta)\,\Omega(E_0 - E) \tag{2.1.7}$$

for every $E$ and $\Delta$, or, writing $x = E_0 - E$,

$$\Omega(x + \Delta) = f(\Delta)\,\Omega(x). \tag{2.1.8}$$

Equation (2.1.8) is uniquely solved by:

$$\Omega(x) = C e^{\beta x}, \qquad f(\Delta) = e^{\beta \Delta}, \tag{2.1.9}$$

where $\beta$ is some constant. Therefore the probability of a given energy is proportional to $e^{-\beta E}$, which is the result from Section 2.1.1. To take the analysis one step further we can normalize the probability:

$$P_i = \frac{e^{-\beta E_i}}{Q}, \tag{2.1.10}$$

where

$$Q = \sum_i e^{-\beta E_i} \tag{2.1.11}$$

is the canonical partition function and Equation (2.1.10) defines the canonical distribution function. [Feynman doesn't go on to say why $\beta = 1/kT$; we will see why later.] Summation in Equation (2.1.11) is over all possible microstates. Equation (2.1.11) is equation #1 on the first page of Feynman's notes on statistical mechanics [1]. Feynman calls Equation (2.1.10) the "summit of statistical mechanics," and the entire subject is either "the slide-down from this summit...or the climb-up." The climb took us a little bit longer than it takes Feynman, but we got here just the same.
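The normalized distribution of Equations (2.1.10) and (2.1.11) is easy to evaluate numerically. A minimal sketch for a hypothetical three-level system (the energy values are arbitrary, chosen only for illustration):

```python
import numpy as np

# Hypothetical three-level system; energies in units of kT (so beta = 1).
energies = np.array([0.0, 1.0, 2.0])
beta = 1.0

weights = np.exp(-beta * energies)   # Boltzmann factors e^{-beta E_i}
Q = weights.sum()                    # canonical partition function, Eq. (2.1.11)
probs = weights / Q                  # canonical distribution, Eq. (2.1.10)
```

As expected, the probabilities sum to one and decrease monotonically with energy.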

2.1.3 One More Derivation

Since the canonical distribution function is the summit, it may be instructive to scale the peak once more from a different route. In particular we seek a derivation that stands on its own and does not rely on the microcanonical theory introduced earlier. Consider a collection of $M$ identical systems which are thermally coupled and thus share energy at a constant temperature. If we label the possible microstates of a system $i = 1, 2, \ldots$ and denote the energies of these obtainable microstates as $E_i$, then the total number of systems is equal to the summation

$$M = \sum_i n_i, \tag{2.1.12}$$

where the $n_i$ are the numbers of systems which correspond to microstate $i$. The total energy of the ensemble can be computed as

$$MU = \sum_i n_i E_i, \tag{2.1.13}$$

where $U$ is the average internal energy of the systems in the ensemble. Eqs. (2.1.12) and (2.1.13) represent constraints on the ways microstates can be distributed amongst the members of the ensemble. Analogous to our study of microcanonical statistics, here we assume that the probability of obtaining a given set $\{n_i\}$ of numbers of systems in each microstate is proportional to the number of ways this set can be obtained. Imagine the numbers $n_i$ to represent bins that count the number of systems at a given state. Since the systems are identical, they can be shuffled about the bins as long as the numbers $n_i$ remain fixed. The number of possible ways to shuffle the states about the bins is given by:

$$W(\{n_i\}) = \frac{M!}{\prod_i n_i!}. \tag{2.1.14}$$

One way to arrive at the canonical distribution is via maximizing the number $W$ under the constraints imposed by Eqs. (2.1.12) and (2.1.13). At the maximum value,

$$\nabla W \cdot \hat{n} = 0, \tag{2.1.15}$$

where the gradient operator is $\nabla = (\partial/\partial n_1, \partial/\partial n_2, \ldots)$, and $\hat{n}$ is a vector which represents a direction allowed by the constraints.

[The occasional mathematician will point out the hazards of taking the derivative of a discontinuous function with respect to a discontinuous variable. Easy-going types will be satisfied with the explanation that for astronomically large numbers of possible states, the function $W$ and the variables $n_i$ are effectively continuous. Sticklers for mathematical rigor will have to find satisfaction elsewhere.]

We can maximize the number $W$ by using the method of Lagrange multipliers. Again, it is convenient to work with the logarithm of the number $W$, which allows us to apply Stirling's approximation:

$$\ln W = \ln(M!) - \sum_i \ln(n_i!) \approx M\ln M - M - \sum_i \left( n_i \ln n_i - n_i \right). \tag{2.1.16}$$

This equation is maximized by setting

$$\nabla \left[ \ln W - \alpha \sum_i n_i - \beta \sum_i n_i E_i \right] = 0, \tag{2.1.17}$$

where $\alpha$ and $\beta$ are the unknown Lagrange multipliers. The second two terms in this equation are the gradients of the constraint functions. Evaluating Equation (2.1.17) results in:

$$-\ln n_i - \alpha - \beta E_i = 0, \tag{2.1.18}$$

in which the entries of the gradient in Equation (2.1.17) are entirely uncoupled. Thus Equation (2.1.18) gives us a straightforward expression for the optimal $n_i$:

$$n_i = e^{-\alpha - \beta E_i}, \tag{2.1.19}$$

where the unknown constants $\alpha$ and $\beta$ can be obtained by returning to the constraints. The probability of a given state follows from the first constraint (2.1.12):

$$P_i = \frac{n_i}{M} = \frac{e^{-\beta E_i}}{\sum_j e^{-\beta E_j}}, \tag{2.1.20}$$

which is by now familiar as the canonical distribution function. As you might guess, the parameter $\beta$ will once again turn out to be $1/kT$ when we examine the thermodynamics of the canonical ensemble. [Note that the above derivation assumed that the numbers of states $\{n_i\}$ assume the most probable distribution, i.e., maximize $W$. For a more rigorous approach which directly evaluates the expected values of $n_i$ see Section 3.2 of Pathria [3].]


2.2 More Thermodynamics

With the canonical distribution function defined according to Equation (2.1.20), we can calculate the expected value of a property of a canonical system:

$$\langle X \rangle = \sum_i P_i X_i, \tag{2.2.21}$$

where $\langle X \rangle$ is the expected value of some observable property $X$, and $X_i$ is the value of $X$ corresponding to the $i^{\text{th}}$ state. For example, the internal energy of a system in the canonical ensemble is defined as the expected, or average, value of $E$:

$$U = \langle E \rangle = \frac{\sum_i E_i\, e^{-\beta E_i}}{\sum_i e^{-\beta E_i}} = -\frac{\partial \ln Q}{\partial \beta}. \tag{2.2.22}$$

The Helmholtz free energy $A$ is defined as $A = U - TS$, and incremental changes in $A$ can be related to changes in internal energy, temperature, and entropy by $dA = dU - T\,dS - S\,dT$. Substituting our basic thermodynamic accounting for the internal energy, $dU = T\,dS - p\,dV$, results in:

$$dA = -p\,dV - S\,dT. \tag{2.2.23}$$

Thus, the internal energy $U = A + TS$ can be expressed as:

$$U = A - T\left(\frac{\partial A}{\partial T}\right)_V = -T^2 \left( \frac{\partial (A/T)}{\partial T} \right)_V. \tag{2.2.24}$$

We can equate Eqs. (2.2.22) and (2.2.24) by setting $\beta = 1/kT$. [So far we have still not shown that $k$ is the same constant (Boltzmann's constant) that we introduced in Chapter 1; here $k$ is assumed to be some undetermined constant.] The Helmholtz free energy can be calculated directly from the canonical partition function:

$$A = -kT \ln Q. \tag{2.2.25}$$

How do we equate the constant $k$ of this chapter to Boltzmann's constant of the previous chapter? We know that the probability of a given state $i$ in the canonical ensemble is given by:

$$P_i = \frac{e^{-\beta E_i}}{Q}. \tag{2.2.26}$$
Next we take the expected value of the logarithm of this quantity:

$$\langle \ln P_i \rangle = \langle -\beta E_i - \ln Q \rangle = -\beta U - \ln Q = \beta\,(A - U). \tag{2.2.27}$$

[You might think that in the study of statistical mechanics, we are terribly eager to take logarithms of every last quantity that we derive, perhaps with no a priori justification. Of course, the justification is sound in hindsight. So when in doubt in statistical mechanics, try taking a logarithm. Maybe something useful will appear!]

A useful relationship follows from Equation (2.2.27). Since $A - U = -TS$, the entropy can be written

$$S = -k\,\langle \ln P_i \rangle = -k \sum_i P_i \ln P_i. \tag{2.2.28}$$

From this equation, we can make a connection to the microcanonical ensemble, and the $k$ from Chapter 1. In a microcanonical ensemble, each state is equally likely: $P_i = 1/\Omega$. Therefore Equation (2.2.28) becomes

$$S = -k \sum_{i=1}^{\Omega} \frac{1}{\Omega} \ln \frac{1}{\Omega} = k \ln \Omega, \tag{2.2.29}$$

which should look familiar. Thus the $k$ of Chapter 2 is identical to the $k$ of Chapter 1, Boltzmann's constant.
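The identity $U = -\partial \ln Q/\partial \beta$ from Equation (2.2.22) holds for any discrete spectrum and is easy to check numerically. A sketch using an arbitrary four-level spectrum (illustrative values) and a central finite difference:

```python
import numpy as np

# Arbitrary four-level spectrum, for illustration only.
energies = np.array([0.0, 0.5, 1.3, 2.0])

def lnQ(beta):
    # Logarithm of the canonical partition function, Eq. (2.1.11)
    return np.log(np.sum(np.exp(-beta * energies)))

def U_exact(beta):
    # Boltzmann-weighted mean energy, the middle expression in Eq. (2.2.22)
    w = np.exp(-beta * energies)
    return np.sum(energies * w) / np.sum(w)

beta, h = 1.7, 1e-6
# Central finite difference for -d(lnQ)/d(beta)
U_fd = -(lnQ(beta + h) - lnQ(beta - h)) / (2 * h)
```

The finite-difference estimate agrees with the direct Boltzmann average to within the discretization error.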

2.3 Formalism for Classical Systems

As in the construction of the classical microcanonical partition function, in defining the canonical partition function for classical systems we make use of the correction factor described in Chapter 1 which relates the volume of classical phase space to a distinct number of microstates. An elementary volume $dp\,dq$ of classical phase space is assumed to correspond to $dp\,dq / (N!\,h^{3N})$ distinguishable microstates. The partition function becomes:

$$Q = \frac{1}{N!\,h^{3N}} \int e^{-\beta H(p,q)}\, dp\, dq, \tag{2.3.30}$$

and mean values of a physical property $X$ are expressed as:

$$\langle X \rangle = \frac{\int X(p,q)\, e^{-\beta H(p,q)}\, dp\, dq}{\int e^{-\beta H(p,q)}\, dp\, dq}. \tag{2.3.31}$$

2.4 Equipartition

The study of molecular systems often makes use of the equipartition theorem, which describes the correlation structure of the variables of a Hamiltonian system in the canonical ensemble. Recall that the classical Hamiltonian of a system is a function of $6N$ independent momentum and position coordinates. We denote these coordinates by $x_1, \ldots, x_{6N}$ and seek to evaluate the ensemble average:

$$\left\langle x_i \frac{\partial H}{\partial x_j} \right\rangle = \frac{\int x_i\, \frac{\partial H}{\partial x_j}\, e^{-\beta H}\, d^{6N}x}{\int e^{-\beta H}\, d^{6N}x}, \tag{2.4.32}$$

where the integration is over all possible values of the $6N$ coordinates. The Hamiltonian depends on the internal coordinates, although the dependence is not explicitly stated in Equation (2.4.32).

Using integration by parts in the numerator to carry out the integration over the coordinate $x_j$ produces:

$$\left\langle x_i \frac{\partial H}{\partial x_j} \right\rangle = \frac{\int \left[ -\frac{x_i}{\beta}\, e^{-\beta H} \Big|_{x_j^{\min}}^{x_j^{\max}} + \frac{1}{\beta} \int \frac{\partial x_i}{\partial x_j}\, e^{-\beta H}\, dx_j \right] d^{6N-1}x}{\int e^{-\beta H}\, d^{6N}x}, \tag{2.4.33}$$

where the integration over $d^{6N-1}x$ indicates integration over all coordinates excluding $x_j$, and the notation $x_j^{\min}$, $x_j^{\max}$ indicates the extreme values accessible to the coordinate $x_j$. Thus for a momentum coordinate these extreme values would be $\pm\infty$, while for a position coordinate the extreme values would come from the boundaries of the container. In either case, the first term of the numerator in Equation (2.4.33) vanishes because the Hamiltonian is expected to become infinite at the extreme values of the coordinates. Equation (2.4.33) can be further simplified by noting that since the coordinates are independent, $\partial x_i / \partial x_j = \delta_{ij}$, where $\delta_{ij}$ is the usual Kronecker delta function. [$\delta_{ij} = 1$ for $i = j$; $\delta_{ij} = 0$ for $i \neq j$.] After simplification we are left with

$$\left\langle x_i \frac{\partial H}{\partial x_j} \right\rangle = kT\, \delta_{ij}, \tag{2.4.34}$$
which is the general form of the equipartition theorem for classical systems. It should be noted that this theorem is only valid when all coordinates of the system can be freely and independently excited, which may not always be the case for certain systems at low temperatures. So we should keep in mind that the equipartition theorem is rigorously true only in the limit of high temperature.

Equipartition tells us that $\langle x_i\, \partial H / \partial x_i \rangle = kT$ for any coordinate $x_i$. Applying this theorem to a momentum coordinate, $p_i$, we find,

$$\left\langle p_i \frac{\partial H}{\partial p_i} \right\rangle = \langle p_i\, \dot{q}_i \rangle = kT. \tag{2.4.35}$$

[Remember the basic formulation of Hamiltonian mechanics.] Similarly,

$$\left\langle q_i \frac{\partial H}{\partial q_i} \right\rangle = -\langle q_i\, \dot{p}_i \rangle = kT. \tag{2.4.36}$$

From Equation (2.4.35), we see that the average kinetic energy associated with the $i^{\text{th}}$ coordinate is $\langle p_i^2 / 2m \rangle = kT/2$. For a three-dimensional system, the average kinetic energy of each particle is $3kT/2$. If the potential energy of the Hamiltonian is a quadratic function of the coordinates, then each degree of freedom will contribute $kT/2$, on average, to the internal energy of the system.
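Equation (2.4.35) can be illustrated by direct sampling: in the canonical ensemble each momentum coordinate is Gaussian-distributed with variance $mkT$, so the sampled mean kinetic energy per coordinate should approach $kT/2$. A minimal sketch, in units where $kT = m = 1$:

```python
import numpy as np

rng = np.random.default_rng(0)
kT, m, n = 1.0, 1.0, 200_000   # units with kT = m = 1 (illustrative)

# Canonical (Maxwell-Boltzmann) momenta: Gaussian with variance m*kT.
p = rng.normal(0.0, np.sqrt(m * kT), size=n)

avg_ke = np.mean(p**2 / (2 * m))   # should approach kT/2, per Eq. (2.4.35)
```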

2.5 Example Problem: Harmonic Oscillators and Blackbody Radiation

A classic problem in statistical mechanics is that of blackbody radiation. What is the equilibrium energy spectrum associated with a cavity of a given volume and temperature?

2.5.1 Classical Oscillator

The vibrational modes of a simple material can be approximated by modeling the material as a collection of $N$ simple harmonic oscillators, with Hamiltonian (for the case of classical mechanics):

$$H = \sum_{i=1}^{N} \left( \frac{p_i^2}{2m} + \frac{m\omega^2 q_i^2}{2} \right), \tag{2.5.37}$$

where each of the $N$ identical oscillators vibrates with one degree of freedom. The natural frequency of the oscillators is denoted by $\omega$. The partition function for such a system is expressed as:

$$Q = \frac{1}{N!\,h^N} \int e^{-\beta \sum_i \left( p_i^2/2m \,+\, m\omega^2 q_i^2/2 \right)}\, dp\, dq, \tag{2.5.38}$$

which is a product of single-particle integrals:

$$Q = \frac{1}{N!} \left[ \frac{1}{h} \int_{-\infty}^{\infty} e^{-\beta p^2/2m}\, dp \int_{-\infty}^{\infty} e^{-\beta m\omega^2 q^2/2}\, dq \right]^N. \tag{2.5.39}$$

Using the identity for Gaussian integrals,

$$\int_{-\infty}^{\infty} e^{-a x^2}\, dx = \sqrt{\frac{\pi}{a}}, \tag{2.5.40}$$

Equation (2.5.39) is reduced to

$$Q = \frac{1}{N!} \left( \frac{kT}{\hbar\omega} \right)^N. \tag{2.5.41}$$

Remember that the factor $1/N!$ corrects for the fact that the particles in the system are indistinguishable. If the particles in the system are distinguishable, then the partition function is given by:

$$Q = \left( \frac{kT}{\hbar\omega} \right)^N, \tag{2.5.42}$$

which is the single-particle partition function raised to the $N^{\text{th}}$ power.

2.5.2 Quantum Oscillator

The one-dimensional Schrödinger wave equation for a particle in a harmonic potential is:

$$-\frac{\hbar^2}{2m} \frac{d^2\psi}{dx^2} + \frac{m\omega^2 x^2}{2}\, \psi = E\,\psi, \tag{2.5.43}$$

where the constant $\hbar$ is equal to $h/2\pi$, $\omega$ is the angular frequency associated with the classical oscillator, and $E$ is the energy eigenvalue of the Schrödinger operator. This equation has, for quantum numbers $n = 0, 1, 2, \ldots$, energy eigenvalues $E_n = \left( n + \frac{1}{2} \right)\hbar\omega$. The so-called Planck oscillator excludes the zero-point eigenvalue $\hbar\omega/2$, so that $E_n = n\hbar\omega$. [For a complete analysis and associated wave functions, see any introductory quantum physics text, such as French and Taylor [2].]

Thus the single-particle partition function is given by (for the Schrödinger oscillator):

$$Q_1 = \sum_{n=0}^{\infty} e^{-\beta \left( n + \frac{1}{2} \right)\hbar\omega}, \tag{2.5.44}$$

which can be simplified:

$$Q_1 = \frac{e^{-\beta\hbar\omega/2}}{1 - e^{-\beta\hbar\omega}}. \tag{2.5.45}$$

For $N$ distinguishable oscillators the partition function becomes

$$Q = Q_1^N. \tag{2.5.46}$$

From our thermodynamic analysis, $U = -\partial \ln Q / \partial \beta$, we calculate the internal energy of the $N$-particle system as

$$U = N \left[ \frac{\hbar\omega}{2} + \frac{\hbar\omega}{e^{\beta\hbar\omega} - 1} \right]. \tag{2.5.47}$$

The Planck analysis of this system (excluding the zero-point energy eigenvalue) results in a mean energy per oscillator of:

$$\langle E \rangle = \frac{\hbar\omega}{e^{\beta\hbar\omega} - 1}. \tag{2.5.48}$$
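The Planck result can be compared with the classical one numerically. A short sketch, in units where $\hbar\omega = 1$, showing that the Planck mean energy (2.5.48) approaches the classical equipartition value $kT$ at high temperature and is exponentially suppressed at low temperature:

```python
import numpy as np

hbar_omega = 1.0  # energy scale: units where hbar*omega = 1

def planck_energy(kT):
    # Mean energy of a Planck oscillator, Eq. (2.5.48)
    return hbar_omega / (np.exp(hbar_omega / kT) - 1.0)

kT_hot = 100.0    # high T: mean energy approaches the classical value kT
kT_cold = 0.1     # low T: mean energy is exponentially suppressed
```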

2.5.3 Blackbody Radiation

Consider a large box or cavity with length dimensions $L_x$, $L_y$, and $L_z$, in which radiation is reflected off the six internal walls. [It is assumed that radiation is absorbed and emitted by the container, resulting in thermal equilibrium of the photons.] In this cavity, a given frequency $\nu$ corresponds to wavenumber $\kappa = \nu/c$, where $c$ is the speed of light and $\kappa$ is wavenumber measured in units of inverse length. Wavenumbers obtainable in the rectangular cavity are specified by the Cartesian components $\kappa_x = n_x/2L_x$, $\kappa_y = n_y/2L_y$, and $\kappa_z = n_z/2L_z$, where $n_x$, $n_y$, and $n_z$ are positive integers and $\kappa^2 = \kappa_x^2 + \kappa_y^2 + \kappa_z^2$. Angular frequency, expressed in terms of the integers $n_x$, $n_y$, and $n_z$, is:

$$\omega = 2\pi\nu = \pi c \left[ \left( \frac{n_x}{L_x} \right)^2 + \left( \frac{n_y}{L_y} \right)^2 + \left( \frac{n_z}{L_z} \right)^2 \right]^{1/2}. \tag{2.5.49}$$

The total number of modes corresponding to a given frequency range can be calculated from the integral (in the continuous limit):

$$N = \int dn_x \int dn_y \int dn_z. \tag{2.5.50}$$

Changing integration variables to $f_x = \pi c\, n_x / L_x$, $f_y = \pi c\, n_y / L_y$, $f_z = \pi c\, n_z / L_z$ yields

$$N = \frac{L_x L_y L_z}{(\pi c)^3} \int df_x \int df_y \int df_z. \tag{2.5.51}$$
Evaluating this integral over the octant $f_x, f_y, f_z \ge 0$ of the sphere $f_x^2 + f_y^2 + f_z^2 \le \omega^2$ gives the number of modes with frequency of less than $\omega$:

$$N(\omega) = \frac{1}{8} \cdot \frac{4\pi}{3} \cdot \frac{V\omega^3}{(\pi c)^3} = \frac{V\omega^3}{6\pi^2 c^3}, \tag{2.5.52}$$

and the number with frequencies between $\omega$ and $\omega + d\omega$ is

$$dN = \frac{V\omega^2}{2\pi^2 c^3}\, d\omega, \tag{2.5.53}$$

where $V = L_x L_y L_z$ is the volume of the cavity. Multiplying by a factor of 2, for the two possible opposite polarizations of a given mode, we obtain:

$$dN = \frac{V\omega^2}{\pi^2 c^3}\, d\omega \tag{2.5.54}$$

for the number of obtainable states for a photon of frequency between $\omega$ and $\omega + d\omega$. Multiplying by the Planck expression for the mean energy per oscillator, we get

$$dE = \frac{V\omega^2}{\pi^2 c^3} \cdot \frac{\hbar\omega}{e^{\beta\hbar\omega} - 1}\, d\omega, \tag{2.5.55}$$

the radiation energy (sum total of energy of the photons) in the frequency range.
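In nondimensional form, with $x = \hbar\omega/kT$, the spectral energy density in Equation (2.5.55) is proportional to $x^3/(e^x - 1)$. A quick numerical sketch locating its maximum (the Wien displacement result, $x \approx 2.82$, which is also the subject of Problem 2b):

```python
import numpy as np

# Nondimensional spectral energy density u(x) ~ x^3 / (e^x - 1),
# with x = hbar*omega / kT, following Eq. (2.5.55).
x = np.linspace(0.01, 15.0, 100_000)
u = x**3 / (np.exp(x) - 1.0)

x_peak = x[np.argmax(u)]   # Wien displacement: maximum near x ≈ 2.82
```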

2.6 Example Application: Poisson-Boltzmann Theory

As an example application of canonical ensemble theory to biomolecular systems, we next consider the distribution of ions around a solvated charged macromolecule. If the electric field can be expressed as the gradient of a potential, $\mathbf{E} = -\nabla\phi$, then Gauss' law can be expressed

$$\nabla \cdot \left[ \epsilon(\mathbf{x})\, \nabla\phi(\mathbf{x}) \right] = -\rho(\mathbf{x}), \tag{2.6.56}$$

where $\epsilon(\mathbf{x})$ is the position-dependent permittivity, and $\rho(\mathbf{x})$ is the charge density. This electrostatic approximation is valid if the length scale of the system is much smaller than the wavelengths of the electromagnetic radiation.

We can split the charge density $\rho$ into two contributions:

$$\rho(\mathbf{x}) = \rho_m(\mathbf{x}) + \rho_s(\mathbf{x}), \tag{2.6.57}$$

where $\rho_m$ is the charge density associated with the ionized residues on the macromolecule, and $\rho_s$ is the charge density of the salt ions surrounding the molecule. For a mono-monovalent salt the mobile ions, distributed according to Boltzmann statistics (the thermal equilibrium canonical distribution), have a mean-field charge density of

$$\rho_s(\mathbf{x}) = e N_A c_\infty \left[ e^{-e\phi(\mathbf{x})/kT} - e^{+e\phi(\mathbf{x})/kT} \right], \tag{2.6.58}$$

where $c_\infty$ is the bulk concentration of the salt, $N_A$ is Avogadro's number, and $e$ is the elementary charge. This distribution assumes that ions interact with one another only through the electrostatic field, and thus is strictly valid only in the limit of dilute solutions.

The two terms on the right-hand side of Equation (2.6.58) correspond to concentrations of positive and negative valence ions. Substitution of Equation (2.6.58) into Gauss' law leads to the Poisson-Boltzmann equation:

$$\nabla \cdot \left[ \epsilon(\mathbf{x})\, \nabla\phi(\mathbf{x}) \right] = 2 e N_A c_\infty \sinh\!\left( \frac{e\phi(\mathbf{x})}{kT} \right) - \rho_m(\mathbf{x}), \tag{2.6.59}$$

a nonlinear partial differential equation for the electrostatic potential surrounding a macromolecule. Once the electrostatic potential is calculated, the ion concentration field is straightforwardly provided by Equation (2.6.58).
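For small potentials the Poisson-Boltzmann equation linearizes to the Debye-Hückel equation $\psi'' = \kappa^2\psi$ (see Problem 3 below), whose decaying one-dimensional solution is $\psi(x) = \psi_0\, e^{-\kappa x}$. A sketch verifying this solution by finite differences; the values of $\kappa$ and $\psi_0$ are purely illustrative:

```python
import numpy as np

kappa, psi0 = 2.0, 1.0            # inverse Debye length, surface potential (illustrative)
x = np.linspace(0.0, 5.0, 2001)
psi = psi0 * np.exp(-kappa * x)   # decaying Debye-Hueckel solution

# Finite-difference check that psi'' = kappa^2 * psi holds in the interior.
dx = x[1] - x[0]
d2psi = (psi[2:] - 2.0 * psi[1:-1] + psi[:-2]) / dx**2
residual = np.max(np.abs(d2psi - kappa**2 * psi[1:-1]))
```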

2.7 Brief Introduction to the Grand Canonical Ensemble

Grand canonical ensemble theory is the statistical treatment of a system which exchanges not only energy, but also particles, with its environment in thermal equilibrium. Derivation of the basic probability distribution for the grand canonical ensemble is similar to that of the canonical ensemble, except that both $E$ and $N$ are treated as statistically-varying quantities. The resulting probability distribution is of the form:

$$P_{N,i} = \frac{e^{-\beta(E_{N,i} - \mu N)}}{\Xi}, \tag{2.7.60}$$

where each state $i$ is specified by a number of particles $N$ and an energy $E_{N,i}$, and $\mu$ is the chemical potential. The grand partition function $\Xi$ is defined by summation over all $N$ and all states $i$:

$$\Xi(\mu, V, T) = \sum_N \sum_i e^{-\beta(E_{N,i} - \mu N)}, \tag{2.7.61}$$

which is often written in a form like:

$$\Xi(\mu, V, T) = \sum_N z^N Q_N, \tag{2.7.62}$$

where $z = e^{\beta\mu}$ is called the fugacity (the tendency to be unstable or "fly away," from the Latin fugere meaning "to flee," according to the Oxford English Dictionary [5]).
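As a minimal numerical illustration of Eqs. (2.7.60)-(2.7.61), consider a single adsorption site that is either empty ($N = 0$, $E = 0$) or occupied ($N = 1$, $E = \epsilon$); this is a hypothetical example, not a system discussed above. Its grand partition function has only two terms, and the mean occupancy follows directly:

```python
import numpy as np

beta, eps, mu = 1.0, -1.0, 0.0             # illustrative parameters
z = np.exp(beta * mu)                      # fugacity, z = e^{beta mu}

Xi = 1.0 + z * np.exp(-beta * eps)         # two-term grand partition function
occupancy = z * np.exp(-beta * eps) / Xi   # mean occupancy <N>
```

With a favorable binding energy ($\epsilon < 0$) and zero chemical potential, the site is occupied most of the time.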


Problems

1. (Derivation of canonical ensemble) Show that Equation (2.1.8) is uniquely solved by Equation (2.1.9).

2. (Simple harmonic oscillators and blackbody radiation) Compare the classical oscillator with the Schrödinger and Planck oscillators. (a) What is the energy per oscillator in the canonical ensemble for the classical case? Which oscillators (if any) obey equipartition? For those that do not, is there a limiting case in which equipartition is valid? [Hints: plot $\langle E \rangle / kT$ as a function of temperature. Perhaps Taylor expansions of these expressions will be helpful.] (b) From Equation (2.5.55) obtain a nondimensional expression for energy per unit frequency spectrum, and plot the nondimensional energy distribution of blackbody radiation versus nondimensional frequency $\hbar\omega/kT$. At what frequency does the spectral energy distribution obtain a maximum?

3. (Electrical double layer) Consider a one-dimensional model of a metal electrode/electrolyte solution interface. The potential in the solution is governed by the Poisson-Boltzmann equation:

$$\epsilon \frac{d^2\phi}{dx^2} = 2 e N_A c_\infty \sinh\!\left( \frac{e\phi}{kT} \right).$$

(a) Rewrite the above equation in terms of the dimensionless potential $\psi = e\phi/kT$. Show that for small potentials this equation can be linearized as

$$\frac{d^2\psi}{dx^2} = \kappa^2 \psi.$$

(b) Evaluate the Debye length ($1/\kappa$) for the case of 0.1 M and 0.0001 M solutions of NaCl. (c) Using the boundary condition

$$\epsilon \left. \frac{d\phi}{dx} \right|_{x=0} = -\sigma$$

(where $\sigma$ is the charge density on the surface of the electrode), find $\phi(x)$. Plot the concentrations of Na$^+$ and Cl$^-$ as functions of $x$ (assume a positively-charged electrode).

4. (Donnan equilibrium) Consider a gel which carries a certain concentration of immobile charges and is immersed in an aqueous solution. The bulk solution carries mono-monovalent mobile ions of concentrations $c_+(\mathbf{x})$ and $c_-(\mathbf{x})$. Away from the gel, the concentration of the salt ions achieves the bulk concentration, denoted $c_\infty$. What is the difference in electrical potential between the bulk solution and the interior of the gel? [Hint: assume that inside the gel, the overall concentration of negative salt ions balances the immobile gel charge concentration.]

Chapter 3 Brownian Motion, Fokker-Planck Equations, and the Fluctuation-Dissipation Theorem


Armed with our understanding of the basic principles of microscopic thermodynamics, we are finally ready to examine the motions of microscopic particles. In particular, we will study these motions from the perspective of stochastic equations, in which random processes are used to approximate thermal interactions between the particles and their environment.

3.1 One-Dimensional Langevin Equation and Fluctuation-Dissipation Theorem

Consider the following Langevin equation for the one-dimensional motion of a particle:

$$m\,\dot{v}(t) + \gamma\, v(t) = f(x(t)) + F(t), \tag{3.1.1}$$

where $m$ is the mass of the particle, $\gamma$ is the coefficient of friction, $f$ is the systematic (deterministic) force acting on the particle, and $F$ is a random process used to induce thermal fluctuations in the energy of the particle. Equation (3.1.1) can be thought of as Newton's second law with three forces acting on the particle: viscous damping, random thermal noise, and a systematic force. Equation (3.1.1) can be factored

$$\frac{d}{dt}\left[ e^{\gamma t/m}\, v(t) \right] = \frac{1}{m}\, e^{\gamma t/m} \left[ f(x(t)) + F(t) \right], \tag{3.1.2}$$

and has the general solution:

$$v(t) = v(0)\, e^{-\gamma t/m} + \frac{1}{m} \int_0^t e^{-\gamma(t-s)/m} \left[ f(x(s)) + F(s) \right] ds. \tag{3.1.3}$$

Using angled brackets to denote averaging over trajectories, we can calculate the covariance



in particle velocity: 6

28

w&F n I w&F n I 7 $ 9 ( 9 0 v w  F I 3 w F I 9 ( y v 6 kFyF D & ~ I 3 k F D EI 7 mED 3 w&F I 9 ( y v 6 kFyF D 3 k F D I 7 mED 3 ~ h  I y y v m Dr  8  ~ 9 ~ 9 (e$ ( 0 6 k F D I k F D hI 7 mtD E 3  ~ ~

(3.1.4)

assuming that the deterministic forces have zero averages. Equation (3.1.4) can be further simplied: v v y y v e$ 9 ( 9 0 6 9 9 (e$ ( 0 6 7 D D 7 tD ED e$ 9 ( 9 0

w&F n I w&F n hI 7w  F I

k F I k F hI m m 

(3.1.5) And if we assume that the random force is a white noise process, then its correlation can be described by: 6 D D 7 D D C (3.1.6) D Integrating Equation (3.1.5) over we obtain: v 0 v y v e $ ( 6 9 7 D ED 9 9 e$ 9 ( 9 0 (  (3.1.7) C where is the step function dened by

k F I k F I u F q hIp w&F n I w&F n h 3  ~   I u'w  F I FnI n F n I b 88 u @ n n . Finally, integration of Equation (3.1.7) yields: v v 6 w&F n w&F n 7w  F C e$ 9 ( 9 0 3 @  I 9 9 q I hI n n n: To obtain the mean kinetic energy, we take v v 6 w  F n 77w  F q  C   9 3 I 8 9 I @
which approaches

F n q hI m  

(3.1.8)

$ 9

( 9 0

v 

(3.1.9)

(3.1.10)

w  F n I 7 @ C 6 w  72 uw at equilibrium. From equipartition we have . So 6 k F D k F D 7 @ I I t F D q D hIp


6

(3.1.11)

(3.1.12)

which is a statement of the uctuation-dissipation theorem. To obtain thermal equilibrium, the strength of the random (thermal) noise must proportional to the frictional damping constant, as prescribed by Equation (3.1.12).
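The fluctuation-dissipation relation (3.1.12) can be tested by simulating Equation (3.1.1) for a free particle ($f = 0$) with a simple Euler-Maruyama scheme: with noise strength $\Gamma = 2\gamma kT$, the stationary value of $\langle v^2 \rangle$ should approach $kT/m$. A sketch; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
m, gamma, kT = 1.0, 1.0, 1.0            # illustrative parameters
dt, nsteps, ntraj = 0.01, 5_000, 2_000  # time step, steps, independent trajectories

# Euler-Maruyama update of m dv = -gamma v dt + sqrt(2 gamma kT dt) * xi,
# i.e. Eq. (3.1.1) with f = 0 and noise strength set by Eq. (3.1.12).
v = np.zeros(ntraj)
noise_amp = np.sqrt(2.0 * gamma * kT * dt) / m
for _ in range(nsteps):
    v += (-gamma * v / m) * dt + noise_amp * rng.standard_normal(ntraj)

v2 = np.mean(v**2)   # should approach kT/m by equipartition
```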


3.2 Fokker-Planck Equation

Imagine integrating a stochastic differential equation such as Equation (3.1.1) a number of times so that the noisy trajectories converge into a probability density of states. Considering an $N$-dimensional problem, we denote the probability distribution over the state space as $P(\mathbf{x}, t)$ and introduce $P(\mathbf{x}, t+\tau \mid \mathbf{x}', t)$ as the probability of transition from state $\mathbf{x}'$ at time $t$ to state $\mathbf{x}$ at time $t + \tau$. From the transition probability it follows:

$$P(\mathbf{x}, t+\tau) = \int P(\mathbf{x}, t+\tau \mid \mathbf{x}', t)\, P(\mathbf{x}', t)\, d\mathbf{x}'. \tag{3.2.13}$$

The transition probability can be written as an average over trajectories that start at $\mathbf{x}'$:

$$P(\mathbf{x}, t+\tau \mid \mathbf{x}', t) = \big\langle \delta\big( \mathbf{x} - \mathbf{x}(t+\tau) \big) \big\rangle \Big|_{\mathbf{x}(t) = \mathbf{x}'}. \tag{3.2.14}$$

Expanding the delta function as a power series in the increment $\Delta x_i = x_i(t+\tau) - x_i(t)$, we obtain

$$\delta\big( \mathbf{x} - \mathbf{x}(t+\tau) \big) = \sum_{n=0}^{\infty} \frac{(-1)^n}{n!}\, \Delta x_{i_1} \cdots \Delta x_{i_n}\, \frac{\partial^n}{\partial x_{i_1} \cdots \partial x_{i_n}}\, \delta\big( \mathbf{x} - \mathbf{x}(t) \big), \tag{3.2.15}$$

where the convention of summation over the repeated indices $i_1, \ldots, i_n$ is implied. Averaging the expansion and substituting into Equation (3.2.13), we obtain

$$P(\mathbf{x}, t+\tau) = \int \sum_{n=0}^{\infty} \frac{(-1)^n}{n!}\, \big\langle \Delta x_{i_1} \cdots \Delta x_{i_n} \big\rangle_{\mathbf{x}', t}\, \frac{\partial^n}{\partial x_{i_1} \cdots \partial x_{i_n}}\, \delta(\mathbf{x} - \mathbf{x}')\, P(\mathbf{x}', t)\, d\mathbf{x}', \tag{3.2.16}$$

where $\langle \cdots \rangle_{\mathbf{x}', t}$ denotes the moments of the increment conditioned on $\mathbf{x}(t) = \mathbf{x}'$. Moving the derivatives (which act only on the $\mathbf{x}$ dependence of the delta function) outside the integral by successive integration by parts gives

$$P(\mathbf{x}, t+\tau) = \sum_{n=0}^{\infty} \frac{(-1)^n}{n!}\, \frac{\partial^n}{\partial x_{i_1} \cdots \partial x_{i_n}} \int \big\langle \Delta x_{i_1} \cdots \Delta x_{i_n} \big\rangle_{\mathbf{x}', t}\, \delta(\mathbf{x} - \mathbf{x}')\, P(\mathbf{x}', t)\, d\mathbf{x}'. \tag{3.2.17}$$

Equation (3.2.17) integrates to

$$P(\mathbf{x}, t+\tau) = \sum_{n=0}^{\infty} \frac{(-1)^n}{n!}\, \frac{\partial^n}{\partial x_{i_1} \cdots \partial x_{i_n}} \left[ \big\langle \Delta x_{i_1} \cdots \Delta x_{i_n} \big\rangle_{\mathbf{x}, t}\, P(\mathbf{x}, t) \right]. \tag{3.2.18}$$

The $n = 0$ term of Equation (3.2.18) is simply $P(\mathbf{x}, t)$, so that

$$P(\mathbf{x}, t+\tau) - P(\mathbf{x}, t) = \sum_{n=1}^{\infty} \frac{(-1)^n}{n!}\, \frac{\partial^n}{\partial x_{i_1} \cdots \partial x_{i_n}} \left[ \big\langle \Delta x_{i_1} \cdots \Delta x_{i_n} \big\rangle_{\mathbf{x}, t}\, P(\mathbf{x}, t) \right], \tag{3.2.19}$$

or, dividing by $\tau$ and taking the limit $\tau \to 0$,

$$\frac{\partial P(\mathbf{x}, t)}{\partial t} = \sum_{n=1}^{\infty} (-1)^n\, \frac{\partial^n}{\partial x_{i_1} \cdots \partial x_{i_n}} \left[ \lim_{\tau \to 0} \frac{1}{n!\,\tau}\, \big\langle \Delta x_{i_1} \cdots \Delta x_{i_n} \big\rangle\, P(\mathbf{x}, t) \right]. \tag{3.2.20}$$

Defining the Kramers-Moyal coefficients [4] as

$$D^{(n)}_{i_1 \cdots i_n}(\mathbf{x}, t) = \frac{1}{n!} \lim_{\tau \to 0} \frac{1}{\tau}\, \big\langle [x_{i_1}(t+\tau) - x_{i_1}(t)] \cdots [x_{i_n}(t+\tau) - x_{i_n}(t)] \big\rangle, \tag{3.2.21}$$

we obtain the Fokker-Planck equation for $P(\mathbf{x}, t)$:

$$\frac{\partial P(\mathbf{x}, t)}{\partial t} = \sum_{n=1}^{\infty} (-1)^n\, \frac{\partial^n}{\partial x_{i_1} \cdots \partial x_{i_n}} \left[ D^{(n)}_{i_1 \cdots i_n}(\mathbf{x}, t)\, P(\mathbf{x}, t) \right]. \tag{3.2.22}$$

In the following section we evaluate the Kramers-Moyal expansion coefficients for a nonlinear $N$-dimensional stochastic differential equation.
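The Kramers-Moyal coefficients (3.2.21) can also be estimated from simulated trajectories via short-time moments of the increments. A sketch for the one-dimensional Ornstein-Uhlenbeck process $dx = -x\,dt + \sqrt{2D}\,dW$, a hypothetical test case chosen so that the diffusion coefficient $D^{(2)} = D$ is known exactly:

```python
import numpy as np

rng = np.random.default_rng(2)
D_true, dt, nsteps = 0.5, 1e-3, 200_000

# Euler-Maruyama simulation of the Ornstein-Uhlenbeck process
# dx = -x dt + sqrt(2 D) dW.
noise = np.sqrt(2.0 * D_true * dt) * rng.standard_normal(nsteps - 1)
x = np.zeros(nsteps)
for i in range(nsteps - 1):
    x[i + 1] = x[i] * (1.0 - dt) + noise[i]

# Second Kramers-Moyal coefficient from Eq. (3.2.21):
# D^(2) = (1/2) <dx^2> / dt; the drift only contributes at O(dt^2).
dx = np.diff(x)
D2_est = 0.5 * np.mean(dx**2) / dt
```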

3.3 Brownian Motion of Several Particles

Consider the general nonlinear Langevin equation for several variables $x_i(t)$:

$$\dot{x}_i = h_i(\mathbf{x}(t), t) + g_{ij}(\mathbf{x}(t), t)\, \Gamma_j(t), \tag{3.3.23}$$

where the $\Gamma_j(t)$ are uncorrelated white noise processes distributed according to:

$$\langle \Gamma_i(t) \rangle = 0, \qquad \langle \Gamma_i(t)\, \Gamma_j(t') \rangle = 2\, \delta_{ij}\, \delta(t - t'). \tag{3.3.24}$$

In Equation (3.3.23) we use the Einstein convention of summation over repeated indices. Thus the matrix $g_{ij}$ describes the covariance structure of the random forces acting on $\mathbf{x}(t)$. Equation (3.3.23) has the general solution:

$$x_i(t+\tau) - x_i(t) = \int_t^{t+\tau} \left[ h_i(\mathbf{x}(t'), t') + g_{ij}(\mathbf{x}(t'), t')\, \Gamma_j(t') \right] dt'. \tag{3.3.25}$$
We can expand this integral by expanding the functions $h_i$ and $g_{ij}$:

$$h_i(\mathbf{x}(t'), t') = h_i(\mathbf{x}(t), t') + \left[ x_k(t') - x_k(t) \right] \frac{\partial h_i}{\partial x_k} + \cdots, \tag{3.3.26}$$

$$g_{ij}(\mathbf{x}(t'), t') = g_{ij}(\mathbf{x}(t), t') + \left[ x_k(t') - x_k(t) \right] \frac{\partial g_{ij}}{\partial x_k} + \cdots, \tag{3.3.27}$$

and inserting these expansions into Equation (3.3.25). We can expand the $[x_k(t') - x_k(t)]$ terms in the above equations by recursively using Equation (3.3.25):

$$x_i(t+\tau) - x_i(t) = \int_t^{t+\tau} h_i\, dt' + \int_t^{t+\tau}\!\!\int_t^{t'} \frac{\partial h_i}{\partial x_k} \left[ h_k + g_{kj}\, \Gamma_j(t'') \right] dt''\, dt' + \int_t^{t+\tau} g_{ij}\, \Gamma_j(t')\, dt' + \int_t^{t+\tau}\!\!\int_t^{t'} \frac{\partial g_{ij}}{\partial x_k} \left[ h_k + g_{kl}\, \Gamma_l(t'') \right] \Gamma_j(t')\, dt''\, dt' + \cdots, \tag{3.3.28}$$

where the functions $h$ and $g$ and their derivatives are evaluated at $(\mathbf{x}(t), t)$. To compute $\langle x_i(t+\tau) - x_i(t) \rangle$ in the limit $\tau \to 0$, we insert Equation (3.3.28), average over trajectories, and retain terms of first order in $\tau$:

$$\langle x_i(t+\tau) - x_i(t) \rangle = h_i\, \tau + \int_t^{t+\tau}\!\!\int_t^{t'} \frac{\partial g_{ij}}{\partial x_k}\, g_{kl}\; 2\, \delta_{lj}\, \delta(t'' - t')\, dt''\, dt' + O(\tau^2). \tag{3.3.29}$$

To evaluate the integral in Equation (3.3.29), we use the identity:

$$\int_t^{t'} 2\, \delta(t'' - t')\, dt'' = 1, \tag{3.3.30}$$

[the delta function sits at the upper limit of integration and contributes half of its weight], and we get:

$$D_i = h_i(\mathbf{x}(t), t) + g_{kj}(\mathbf{x}(t), t)\, \frac{\partial g_{ij}(\mathbf{x}(t), t)}{\partial x_k} \tag{3.3.31}$$

for the drift coefficients. Evaluating $\langle [x_i(t+\tau) - x_i(t)]\,[x_j(t+\tau) - x_j(t)] \rangle$, the only term that survives averaging and the limit $\tau \to 0$ is

$$\langle [x_i(t+\tau) - x_i(t)]\,[x_j(t+\tau) - x_j(t)] \rangle = \int_t^{t+\tau}\!\!\int_t^{t+\tau} g_{ik}\, g_{jl}\; 2\, \delta_{kl}\, \delta(t'' - t')\, dt''\, dt' = 2\, g_{ik}\, g_{jk}\, \tau, \tag{3.3.32}$$

and

$$D_{ij} = g_{ik}(\mathbf{x}(t), t)\, g_{jk}(\mathbf{x}(t), t). \tag{3.3.33}$$
All higher-order coefficients are zero:

$$D^{(n)}_{i_1 \cdots i_n} = \frac{1}{n!} \lim_{\tau \to 0} \frac{1}{\tau}\, \big\langle [x_{i_1}(t+\tau) - x_{i_1}(t)] \cdots [x_{i_n}(t+\tau) - x_{i_n}(t)] \big\rangle = 0, \qquad n \ge 3. \tag{3.3.34}$$

The Fokker-Planck equation corresponding to Equation (3.3.23) is:

$$\frac{\partial P(\mathbf{x}, t)}{\partial t} = \left[ -\frac{\partial}{\partial x_i}\, D_i(\mathbf{x}, t) + \frac{\partial^2}{\partial x_i\, \partial x_j}\, D_{ij}(\mathbf{x}, t) \right] P(\mathbf{x}, t). \tag{3.3.35}$$

3.4 Fluctuation-Dissipation and Brownian Dynamics

To examine how the fluctuation-dissipation theorem arises for the general nonlinear Langevin equation, we first examine the simple one-dimensional problem described by:

$$\dot{x}(t) = h(x) + g\, \Gamma(t), \qquad \langle \Gamma(t)\, \Gamma(t') \rangle = 2\, \delta(t - t'). \tag{3.4.36}$$

If the systematic term is proportional to the gradient of a potential, $h = -\frac{1}{\gamma} \frac{\partial U}{\partial x}$, then the Fokker-Planck equation is expressed as:

$$\frac{\partial P(x, t)}{\partial t} = \frac{\partial}{\partial x} \left[ \frac{1}{\gamma} \frac{\partial U}{\partial x}\, P \right] + g^2\, \frac{\partial^2 P}{\partial x^2}. \tag{3.4.37}$$

If we further assume that $P \propto e^{-U/kT}$ at equilibrium, then

$$\frac{dP}{dx} = -\frac{1}{kT} \frac{dU}{dx}\, P. \tag{3.4.38}$$

It is straightforward to show that this condition is satisfied if $g^2 = kT/\gamma$, which is the fluctuation-dissipation theorem for this one-dimensional Brownian motion. This relationship generalizes to the following $N$-dimensional case:

$$\dot{x}_i = -\zeta^{-1}_{ij}\, \frac{\partial U}{\partial x_j} + g_{ij}\, \Gamma_j(t), \tag{3.4.39}$$

$$\langle \Gamma_i(t)\, \Gamma_j(t') \rangle = 2\, \delta_{ij}\, \delta(t - t'), \tag{3.4.40}$$

$$g_{ik}\, g_{jk} = kT\, \zeta^{-1}_{ij} \equiv D_{ij}. \tag{3.4.41}$$

In the above equations, $U(\mathbf{x})$ is the potential energy function, and $\zeta_{ij}$ is the frictional interaction matrix which determines the hydrodynamic interactions between the particles in the system.

Thus the covariance structure of the random forcing is proportional to the hydrodynamic interaction matrix. Given a diffusion matrix $D_{ij}$, generation of the random force vector requires the calculation of $g_{ij}$, the factorization of $D_{ij}$:

$$g\, g^T = D. \tag{3.4.42}$$

In fact, for Brownian dynamics simulations, this factorization represents the computational bottleneck that demands the majority of CPU resources.


Problems

1. (Brownian motion) Consider the coupled Langevin equations:

$$M \dot{\mathbf{v}} = -\zeta\, \mathbf{v} - \nabla U + B\, \boldsymbol{\Gamma}(t),$$

where $M$ is a diagonal matrix of particle masses, $\zeta$ is the non-diagonal frictional interaction matrix, and the noise term is correlated according to:

$$\langle \boldsymbol{\Gamma}(t)\, \boldsymbol{\Gamma}^T(t') \rangle = 2\, \mathbb{1}\, \delta(t - t'),$$

where $\mathbb{1}$ is the identity matrix. The fluctuation-dissipation theorem tells us that:

$$B\, B^T = kT\, \zeta.$$

(a) Brownian Motion. If we ignore inertia in the above system we get

$$\dot{\mathbf{x}} = -\zeta^{-1}\, \nabla U + B'\, \boldsymbol{\Gamma}(t),$$

where $D = kT\, \zeta^{-1}$ is the diffusion matrix. Show that $B'\, B'^T = D$. Given a diffusion matrix $D$, how would you calculate its factor $B'$?

(b) Brownian Dynamics Algorithm. Under what circumstances do you think the inertial system will reduce to the over-damped (non-inertial) system? Assuming that the random and systematic forces remain constant over a time step of length $\Delta t$, find an expression for $\mathbf{v}(t + \Delta t)$ in terms of $\mathbf{v}(t)$. [Hint: The equations can be decoupled by considering the eigen decomposition of $M^{-1}\zeta$. Proceed by finding a matrix-vector equivalent to Equation (3.1.3).] Show that under a certain highly-damped limit, this expression reduces to the Brownian equation

$$\Delta \mathbf{x} = \zeta^{-1} \left[ \text{systematic force} + \text{random force} \right] \Delta t.$$

2. (Numerical methods) Devise a numerical propagation scheme for the Brownian equation.

Bibliography

[1] R. P. Feynman. Statistical Mechanics: A Set of Lectures. Addison-Wesley, Reading, MA, 1998.

[2] A. P. French and E. F. Taylor. An Introduction to Quantum Physics. W. W. Norton and Co., New York, NY, 1978.

[3] R. K. Pathria. Statistical Mechanics. Butterworth-Heinemann, Oxford, UK, second edition, 1996.

[4] H. Risken. The Fokker-Planck Equation: Methods of Solution and Applications. Springer-Verlag, Berlin, second edition, 1989.

[5] J. A. Simpson and E. S. C. Weiner, editors. Oxford English Dictionary Online, http://oed.com. Oxford University Press, Oxford, UK, second edition, 2001.
