Sie sind auf Seite 1von 141

Problems for the course

Statistical Physics
(FYS3130)

Prepared by Yuri Galperin

Spring 2004
2
Contents

1 General Comments 5

2 Introduction to Thermodynamics 7
2.1 Additional Problems: Fluctuations . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.2 Mini-tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.2.1 A . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.2.2 B . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

3 The Thermodynamics of Phase Transitions 33


3.1 Mini-tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.1.1 A . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.1.2 B . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

4 Elementary Probability Theory . . . 49

5 Stochastic dynamics ... 59

6 The Foundations of Statistical Mechanics 77

7 Equilibrium Statistical Mechanics 79

8 Tests and training 103

A Additional information 107


A.1 Thermodynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
A.1.1 Thermodynamic potentials . . . . . . . . . . . . . . . . . . . . . . . . . 107
A.1.2 Variable transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
A.1.3 Derivatives from the equation of state . . . . . . . . . . . . . . . . . . . 107
A.2 Main distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
A.3 The Dirac delta-function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
A.4 Fourier Series and Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . 111

B Maple Printouts 115

3
4 CONTENTS
Chapter 1

General Comments

This year the course will be delivered along the lines of the book [1]. The problems will also be
selected from this book. It is crucially important for students to solve problems independently.
The problems will be placed on the course homepage. The same page will contain solutions
of the problems. So if a student is not able to solve the problem without assistance, then she/he
should come through the solution. In any case, student have to able to present solutions, obtained
independently, or with help of the course homepage.
FYS 3130 (former FYS 203) is a complicated course, which requires basic knowledge of
classical and quantum mechanics, electrodynamics, as well as basics of mathematics.
Using a simple test below please check if your knowledge is sufficient. Answers can be found
either in the Maple file.

A simple test in mathematics


Elementary functions

Problem 1.1: Function is given by the definition

f (x) = x4 + ax3 + bx2 .

(a) Show that by a proper rescaling it can be expressed as

f (x) = a4 Fη (ξ) where Fη (ξ) = ξ4 + ξ3 + ηξ2 ,


ξ ≡ x/a, η ≡ b/a2 . Function Fη (ξ) is very important in theory of phase transitions.

(b) Investigate Fη (ξ).

– How many extrema it has?


– When it has only 1 minimum? When it has 2 minima? At what value of η it has an
inflection point?

5
6 CHAPTER 1. GENERAL COMMENTS

– Plot Fη (ξ) for this value of η.

Problem 1.2: Logarithmic functions are very important in statistical physics. Check you mem-
ory by the following exercises:
• Plot function
1−x
f1 (x) = ln for |x| ≤ 1 .
1+x
Discuss properties of this function at |x| > 1.
• Plot function
f2 (x) = ln(tan πx) .
• Simplify the expression
e4 ln x − (x2 + 1)2 + 2x2 + 1 .

Problem 1.3: Do you remember trigonometry? Test it!


• Simplify
sin−2 x − tan−2 x − 1 .
• Calculate infinite sums

C(α, β) = ∑ e−αn cos βn , α > 0.
n=0

S(α, β) = ∑ e−αn sin βn , α > 0.
n=0
¡ ¢ ¡ ¢
Hint: take into account that cos x = Re eix , sin x = Im eix .

Basic integrals

Problem 1.4: Calculate interglars:


• Z ∞
In (γ) = xn e−γx dx , γ > 0.
0
• Z Z
dx dx
, .
x ± a2
2 x(1 − x)
• Z ∞
2 /2
G(γ) = e−γx dx , γ > 0.
0
Chapter 2

Introduction to Thermodynamics

Quick access: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22

Problem 2.1: Test the following differentials for exactness. For those cases in which the dif-
ferential is exact, find the function u(x, y).
−y dx
(a) dua = x2 +y2
+ x2x+y
dy
2 .

(b) dub = (y − x2 ) dx + (x + y2 ) dy .

(c) duc = (2y2 − 3x) dx − 4xy dy .

Solution 2.1:

(a) The differential is exact, ua (x, u) = − arctan(x/y).


Here one point worth discussion. The function ua (x, u) = − arctan(x/y) has a singularity
at x = 0, y = 0. As a result, any close path embedding this point contributes 2π to the vari-
ation of the quantity u(x, y). Consequently, this function cannot serve as a thermodynamic
potential if both positve and negative values of x and y have physical meaning.

(b) The differential is exact, ub (x, y) = yx + (y3 − x3 )/3.

(c) The differential is not exact.

The function u(x, y) is reconstructed in the following way. For an exact differential, du = ux dx +
uy dy,
∂u(x, y) ∂u(x, y)
ux = , uy = .
∂x ∂y
If we introduce Z x
u1 (x, y) = ux (ξ, y) dξ ,
0

7
8 CHAPTER 2. INTRODUCTION TO THERMODYNAMICS

then the difference f ≡ u(x, y) − u1 (x, y) is a function only of y. Consequently,


Z x
df ∂u(x, y) ∂u1 (x, y) ∂ux (x, y)
= − = uy (x, y) − dx .
dy ∂y ∂y 0 ∂y

As a result,
Z y Z yZ x
∂ux (ξ, η)
f (y) = dη uy (x, η) − dξ dη .
0 0 0 ∂η
Finally,
Z x Z y Z yZ x
∂ux (ξ, η)
u(x, y) = dξ ux (ξ, y) + dη uy (x, η) − dξ dη .
0 0 0 0 ∂η
For calculation see the Maple file it1.mws.

Problem 2.2: Consider the two differentials

1. du1 = (2xy + x2 ) dx + x2 dy, and

2. du2 = y(x − 2y) dx − x2 dy.

For both differentials, find the change u(x, y) between two points, (a, b) and (x, y). Compute the
change in two different ways:

(a) Integrate along the path (a, b) → (x, b) → (x, y),

(b) Integrate along the path (a, b) → (a, y) → (x, y).

Discuss the meaning of your results.

Solution 2.2: The calculations are shown in the Maple file. In the case (b) the results are
different because the differential is not exact.

Problem 2.3: Electromagnetic radiation in an evacuated vessel of volume V at equilibrium


with the walls at temperature T (black body radiation) behaves like a gas of photons having
internal energy U = aV T 4 and pressure P = (1/3)aT 4 , where a is Stefan’s constant.

(a) Plot the closed curve in the P −V plane for a Carnot cycle using black body radiation.

(b) Derive explicitly the efficiency of Carnot engine which uses black body radiation as its
working substance.
9

1 2
T
h

P re s s u re
T c
4 3

V o lu m e

Figure 2.1: On the Carnot cycle with black-body radiation.

Solution 2.3: We will follow example shown in Exercise 2.2. Let us start with isotherms. At
the isotherms the pressure is V -independent, thus isotherms are horizontal, see Fig. 2.1 Along
the first isothermal path,
· ¸
4 4 V2
∆Q1→2 = ∆U + P∆V = (4/3)aTh (V2 −V1 ) = aTh V1
4
−1 . (2.1)
3 V1

In a similar way, · ¸
4 4 V3
∆Q3→4 = aTc V4 1 − . (2.2)
3 V4
Now let us consider an adiabatic path. Along an adiabatic path,

dQ = 0 = dU + P dV = 4aV T 3 dT + aT 4 dV + (1/3)aT 4 dV = 4aV T 3 dT + (4/3)aT 4 dV .

Consequently,
dT 1 dV
=− → V T 3 = const, PV 4/3 = const .
T 3 V
Let us start form the point 2 characterized by the values P1 ,V2 and adiabatically expand the
gas to the point 3 characterized by the volume V3 . We have V2 Th3 = V3 Tc3 . In a similar way,
V4 Tc3 = V1 Th3 . Combining these equalities, we get:
µ ¶3
V2 V1 Tc
V2 Th3 = V3 Tc3 , V4 Tc3 = V1 Th3 → = = (2.3)
V3 V4 Th

Combining Eqs. (2.1), (2.2) and (2.3), we find


µ ¶µ ¶
4 4 V3 Tc
∆W = ∆Q1→2 + ∆Q3→4 = aTh V1 −1 1− .
3 V4 Th
10 CHAPTER 2. INTRODUCTION TO THERMODYNAMICS

Remember: along a closed path ∆U = 0 and the total heat consumption is equal to mechanical
work.
On the µ ¶
4 4 V2
∆Q1→2 = aTh V1 −1 .
3 V1
As a result,
∆W Th − Tc
η= =
∆Q1→2 Th
as it should be.

Problem 2.4: A Carnot engine uses a paramagnetic substance as its working substance. The
equation of state is
nDH
M=
T
where M is magnetization, H is the magnetic field, n is the number of moles, D is a constant
determined by the type of substance, and T is is the temperature.
(a) show that the internal energy U, and therefore the heat capacity CM , can only depend on
the temperature and not the magnetization.

(b) Let us assume that CM = C = constant. Sketch a typical Carnot cycle in the M − H plane.

(c) Compute the total heat absorbed and the total work done by the Carnot engine.

(d) Compute the efficiency of the Carnot engine.

Solution 2.4:
(a) By definition [see Eq. [1](2.23)],

dU = T dS + H dM .

Thus at M = const the internal energy is independent of the magnetization.

(b) Since C = const, U0 = NcM T , where cM is the specific heat per one particle while N is the
number of particles. Introducing molar quantities we get U0 = ncT .
A Carnot cycle is shown in Fig. 2.2. We have:

(c) For an isothermal process at T = Tc ,


Z H2
Tc
Q1→2 = − H(M) dM = (M 2 − M22 ) .
H1 2nD 1
In a similar way,
Th
Q3→4 = (M32 − M42 ) .
2nD
11

M 3
T
h
4

2
T c
1

H
Figure 2.2: Sketch of the Carnot cycle.

The total work is then W = Q1→2 + Q3→4 , and the efficiency is

W Tc M12 − M22
η= = 1+ .
Q3→4 Th M32 − M42

Now let us discuss adiabatic paths. We have at each path,


MT
0 = dQ = dU − H dM = nc dT − dM .
nD
Immediately we get
dT 1
= 2 M dM .
T n cD
Integrating this equality from point 2 to point 3 we obtain,
Th
2n2 cD ln = M33 − M22 .
Tc
In a similar way, integrating from 4 to 1 we obtain
Tc
2n2 cD ln = M13 − M42 .
Th
As a result,
M32 − M22 = M42 − M12 → M32 − M42 = −(M1 − M22 ) .

(d) Using this expression we obtain the efficiency


Q1→2 + Q3→4 Th − Tc
η= = .
Q3→4 Th
12 CHAPTER 2. INTRODUCTION TO THERMODYNAMICS

Coming back to the item (c) we find

M32 − M42 Th − Tc
W = Q3→4 η = .
2nD Th2

Problem 2.5: Find the efficiency of the engine shown in Fig. 2.3 ([1] -Fig.2.18). Assume that

P
1

a d ia b a tic
4

2
3
V

Figure 2.3: Sketch of the cycle.

the operating substance is an ideal monoatomic gas. Express your answer in terms of V1 and V2 .
(The processes 1 → 2 and 3 → 4 are adiabatic. The processes 4 → 1 and 2 → 3 occur at constant
volume).
v

Solution 2.5: Let us start with the processes at constant volume. The mechanical work during
theses processes does not take place. Consequently,

Q2→3 = (3/2)nR(T3 − T2 ) ,
Q4→1 = (3/2)nR(T1 − T4 ) ,
W = (3/2)nR(T1 + T3 − T2 − T4 ) .

The efficiency is given by the expression


T1 + T3 − T2 − T4 T2 − T3
η= = 1− . (2.4)
T1 − T4 T1 − T4

Now let us consider the adiabatic processes where TV 2/3 = const (monoatomic ideal gas!). Thus,
µ ¶2/3
T1 T4 V2
= = ≡ α.
T2 T3 V1
13

Substituting this expression into Eq. (2.4) we obtain:


µ ¶2/3
1 V1
η = 1− = 1− .
α V2

Problem 2.6: One kilogram of water is compressed isothermally at 20◦ C from 1 atm to 20
atm.

(a) How much work is required?

(b) How much heat is rejected?

Assume that the average isothermal compressibility of water during this process is κT = 0.5 ×
10−4 (atm)−1 and the average thermal expansivity of water during this process is αP = 2 × 10−4
(◦ C)−1 .

Solution 2.6: Since for an isothermal process dQ = T dS we have


Z P2 µ ¶
∂S
Q=T dP .
P1 ∂P T

Using the Maxwell relation for the Gibbs free [see Eq. [1]-(2.112)] energy we obtain
µ ¶ µ ¶
∂S ∂V
=− = −V αP .
∂P T ∂T P

Thus Z P2
Q = −T V (P)αT (P) dP
P1
Now let us assume that αT it P-independent, and

V (P) = V0 [1 − κT (P − P0 )] = (M/ρ) [1 − κT (P − P0 )] .

As a result,
Q = −(M/ρ)T αT (P2 − P1 ) [1 − κT (P − P0 )/2] .
We see that since the compressibility of water is very low one can neglect the correction due to
change in the volume and assume V ≈ M/ρ. The mechanical work is
Z P2 Z P2 µ ¶
∂V
W =− P dV = − P dP = (M/2ρ)κT (P22 − P12 ) .
P1 P1 ∂P T
14 CHAPTER 2. INTRODUCTION TO THERMODYNAMICS

2 J 0 a

J 0
b c

L 0 2 L 0

Figure 2.4:

Problem 2.7: Compute the efficiency of the heat engine shown in Fig. 2.4 (Fig. [1]-2.19). The
engine uses a rubber band whose equation of state is

J = αLT , (2.5)

where α is a constant, J is the tension, L is the length per unit mass, and T is the temperature in
Kelvins. The specific heat (heat capacity per unit mass) is a constant, cL = c.

Solution 2.7: From Fig. 2.4 we see that the path a → b is isothermal. Indeed, since J ∝ L, it
follow from Eq. ( 2.5) that T = const. Then, from the same equation we get,

Ta = Tb = J0 /αL0 , Tc = J0 /2αL0 = Ta /2 .

As a result,
Z 2L0
Qb→a = MαTa L dL = (3/2)MαL02 Ta = (3/2)MJ0 L0 ,
L0
Qa→c = Mc(Tc − Ta ) = −(1/2)McTa ,
Qc→b = Mc(Tb − Tc ) − MJ0 L0 = (1/2)McTa − MJ0 L0 .

Hence, the total work is

W = Qb→a + Qa→c + Qc→b = (1/2)MJ0 L0

and
W 1
η= = .
Qb→a 3
This result is also clear from geometrical point of view.
15

Problem 2.8: Experimentally one finds that for a rubber band


µ ¶ " µ ¶3 #
∂J aT L0
= 1+2 ,
∂L T L0 L
µ ¶ " µ ¶3 #
∂J aL L0
= 1− ,
∂T L L0 L

where J is the tension, a = 1.0 × 103 dyne/K, and L0 = 0.5 m is the length of the band when no
tension is applied. The mass of the rubber band is held fixed.

(a) Compute (∂L/∂T )J and discuss its physical meaning.

(b) Find the equation of state and show that dJ is an exact differential.

(c) Assume that heat capacity at constant length is CL = 1.0 J/K. Find the work necessary to
stretch the band reversibly and adiabatically to a length 1 m. Assume that when no tension
is applied, the temperature of the band is T = 290 K. What is the change in temperature?

Solution 2.8:

(a) We have µ ¶ µ ¶ µ ¶
∂L ∂T ∂J
· · = −1 .
∂T J ∂J L ∂L T
Consequently,
³ ´ ³ ´3
µ ¶ ·µ ¶ µ ¶ ¸−1 ∂J
1 − L0
∂L ∂T ∂J ∂TL L
=− · = −³ ´ = −
L
³ ´3 .
∂T ∂J ∂L ∂J T
J L T
∂L T 1 + 2 LL0

Physical meaning - µ ¶
1 ∂L
αJ =
L ∂T J
is the thermal expansion coefficient at given tension.

(b) The equation of state has the form


" µ ¶3 #
aT L L0
J= 1− .
L0 L

The proof of the exactness is straightforward.


16 CHAPTER 2. INTRODUCTION TO THERMODYNAMICS

(c) Consider adiabatic expansion of the band,


0 = dQ = CL dT + J(L, T ) dL .
Consequently, " µ ¶3 #
dT J(L, T ) aT L L0
=− =− 1− .
dL CL CL L0 L
Measuring length in units of L0 as L = ` · L0 and introducing β ≡ aL0 /CL we obtain the
following differential equation
dT /T = −β`(1 − `−3 ) d` .
Its solution is µ ¶2 " µ ¶3 #
Tf β Lf L0
ln = − 1+2 .
T0 2 L0 Lf
Here L f and T f are final values of the length and temperature, respectively. This is the
equation for adiabatic process which provides the change in the temperature. The mechan-
ical work is then
W = CL (T f − T0 ) .

Problem 2.9: Blackbody radiation in a box of volume V and at temperature T has internal
energy U = aV T 4 and pressure P = (1/3)aT 4 , where a is the Stefan-Boltzmann constant.
(a) What is the fundamental equation for the blackbody radiation (the entropy)?
(b) Compute the chemical potential.

Solution 2.9:
³ ´
∂A
(a) Let us first find the Helmholtz free energy. From the relation P = − ∂V T we get

1 1
A = −PV = − aV T 4 = − U .
3 3
Consequently, µ ¶
∂A 4 4U
S=− = aV T 3 = .
∂T V 3 3T
As a result,
µ ¶
∂S ∂(S,V ) ∂(S,V )/∂(T,V ) (∂S/∂T )V 1
= = = = .
∂U V ∂(U,V ) ∂(U,V )/∂(T,V ) (∂U/∂T )V T

(b) Since A is N-independent, µ = 0.


17

Problem 2.10: Two vessels, insulated from the outside world, one of volume V1 and the other
of volume V2 , contain equal numbers N of the same ideal gas. The gas in each vessel is originally
at temperature Ti . The vessels are then connected and allowed to reach equilibrium in such a way
that the combined vessel is also insulated from the outside world. The final volume is V = V1 +V2 .
What is the maximum work, δW f ree , that can be obtained by connecting these insulated vessels?
Express your answer in terms of Ti , V1 , V2 , and N.

Solution 2.10: The Gibbs free energy of an ideal gas is given by the equation

G = NkT ln P + Nχ(T )

where χ(T ) is some function of the gas excitation spectrum.1


Consequently, µ ¶
∂G
S=− = −Nk ln P − Nχ0 (T ) .
∂T P
Before the vessels are connected,

Si = −Nk ln(P1 P2 ) − 2Nχ0 (T ) .

After the vessels are connected the temperature remains the same, as it follows from the conser-
vation law, the entropy being
S f = −2Nk ln P − 2Nχ0 (T ) .
Consequently, ∆S = −Nk ln(P2 /P1 P2 ). On the other hand,

1 V1 +V2 1 Vi P2 4V1V2
= , = → = .
P 2NkTi Pi NkTi P1 P2 (V1 +V2 )2
1 For an ideal gas, the energy of the particle can be written as a sum of the kinetic energy, εp = p2 /2m, and the
energy of internal excitations, εα (characterized by some quantum numbers α),

εpα = εp + εα .

As we will see later, the free energy of the ideal gas can be constructed as
à !N " µ ¶ #
kT eV mkT 3/2
A = − ln ∑ e −εpα /kT
≈ −NkT ln ∑e−εα /kT
N! pα N 2πh̄2 α
= −NT ln(eV /N) + N f (T ) ,
"µ ¶ #
mkT 3/2
f (T ) = −kT ln
2πh̄2
∑e −εα /kT
.
α

The Gibbs free energy is then

G = A + PV = NkT ln P + Nχ(T ) , χ(T ) ≡ f (T ) − kT ln kT .


18 CHAPTER 2. INTRODUCTION TO THERMODYNAMICS

As a result, the maximum work is

(V1 +V2 )2
∆W f ree = Ti ∆S = NkTi ln .
4V1V2

Problem 2.11: For a low-density gas the virial expansion can be terminated at first order in the
density and the equation of state is
· ¸
NkT N
P= 1 + B2 (T ) ,
V V

where B2 (T ) is the second virial coefficent. The heat capacity will have corrections to its ideal
gas value. We can write it in the form

3 N 2k
CV,N = Nk − f (T ) .
2 V
(a) Find the form that f (T ) must have in order for the two equations to be thermodynamically
consistent.

(b) Find CP,N .

(c) Find the entropy and internal energy.

Solution 2.11:

(c) The equation of state under consideration can be obtained from the Helmholtz free energy2

N2
A = Aideal + kT B2 (T ) .
V

Then µ ¶
∂A N2 £ ¤
S=− = Sideal + δS , δS ≡ −k B2 (T ) + T B02 (T ) .
∂T V V
Since we know both entropy and Helmholtz free energy, we find the internal energy as

N2 N2 £ ¤
U = A + T S = Uideal + kT B2 (T ) − kT B2 (T ) + T B02 (T )
V V
N 2
= Uideal + δU , δU ≡ −kT 2 B02 (T ) .
V
³ ´
2 Remember ∂A
that P = − ∂V .
T
19

(a) As a result,
µ ¶ µ ¶
∂S ∂U N2 £ 0 ¤
CV = T = = CVideal + δCV , δCV ≡ −kT 2B2 (T ) + T B002 (T ) .
∂T V ∂T V V

We find in this way,


f (T ) = 2T B02 (T ) + T 2 B002 (T ) .

(b) Let us express the equation of state as V (P, T ). Since the density is assumed to be small in
the correction one can use equation for the ideal gas to find the the volume. We have,

NkT
V= + NB2 (T ) .
P
Consequently, the entropy can be expressed as

B2 + T B02
S = Sideal + δS1 , δS1 ≡ −kN .
kT /P + B2

Now
µ ¶
∂δS1
CP = CPideal + T
∂T P
(B2 + T B02 )0 (kT /P + B2 ) − (k/P + B02 )(B2 + T B02 )
= −kN
(kT /P + B2 )2
N2 £ ¤
= CPideal − k T (2B02 + T B002 ) − (B2 + T B02 )
V
N2
= CPideal + δCV + k (B2 + T B02 ) .
V
Here in all corrections we used equation of state for an ideal gas. As a result,

N2
CP −CV = (CP −CV )ideal + k (B2 + T B02 ) .
V

Problem 2.12: Prove that


µ ¶ µ ¶ µ ¶
∂H ∂H ∂X
CY,N = and =T −X .
∂T Y,N ∂Y T,N ∂T Y,N
20 CHAPTER 2. INTRODUCTION TO THERMODYNAMICS

Solution 2.12: Let us first recall definitions for X, Eq. ([1]-2.66), U = ST +Y X + ∑J jµ0j dN j .
The enthalpy is defined as
H = U − XY = ST + ∑ µ0j dN j .
J
Since dU = T dS +Y dX we get
dH = T dS − X dY .
Consequently, µ ¶ µ ¶
∂S ∂H
CY,N = T = .
∂T Y,N ∂T Y,N
To prove the second relation we do the following
µ ¶ µ ¶ µ ¶ µ ¶
∂H ∂H ∂H ∂S
= + .
∂Y T,N ∂Y S,N ∂S Y,N ∂Y T

Now, µ ¶ µ ¶
∂H ∂H
= −X , =T.
∂Y S,N ∂S Y,N
Now we have to use the Maxwell relation, which emerges for the Gibbs free energy G = H − T S.
From
dG = −S dT − X dY
we get µ ¶ µ ¶
∂S ∂X
= .
∂Y T ∂T Y,N
Thus we obtain the desired result.

Problem 2.13: Compute the entropy, enthalpy, Helmholtz free energy, and Gibbs free energy
for a paramagnetic substance and write them explicitly in terms of their natural variables if pos-
sible. Assume that mechanical equation of state is m = (D/T )H and the the molar heat capacity
at constant magnetization is cm = c, where m is the molar magnetization, H is the magnetic field,
D is a constant, c is a constant, and T is the temperature.

Solution 2.13: Let us startR


with the internal energy u(T, m) per one mole. We have the mag-
netic contribution umag = 0m H(m) dm. Since H(m) = (T /D)m we get umag = (T /2D)m2 . The
“thermal” contribution is cT . As a result,
u
u(T, m) = T (c + m2 /2D) , T (u, m) = .
c + m2 /2D
The molar entropy s is then derived from the definition
µ ¶
∂s 1 c + m2 /2D
= = → s(u, m) = (c + m2 /2D) ln(u/u0 ) .
∂u m T u
21

Here u0 is a constant. As a result, in “natural” variables


µ ¶
s
u(s, m) = u0 exp .
c + m2 /2D

To find other thermodynamic potentials we need s(T, m). We can rewrite the above expression
for the entropy as
T (c + m2 /2D)
s(T, m) = (c + m2 /2D) ln .
u0
In particular, the Helmholtz free energy is
· ¸
2 T (c + m2 /2D)
a(T, m) = u − T s = T (c + m /2D) 1 − ln .
u0

To get enthalpy we have to subtract from u the quantity Hm = (D/T )H 2 and to express m through
H as m = (D/T )H. As a result, we obtain:

h(T, H) = u − Hm = T (c + DH 2 /2T 2 ) − (D/T )H 2 = T (c − DH 2 /2T 2 ) .

Finally, Gibbs free energy is g = a − Hm, which has to be expressed through T and H. We get
· ¸
2 2 T (c + DH 2 /2T 2 )
g = a − Hm = T (c + DH /2T ) 1 − ln − (D/T )H 2
u0
T (c + DH 2 /2T 2 )
= T (c − DH 2 /2T 2 ) − T (c + DH 2 /2T 2 ) ln .
u0

Problem 2.14: Compute the Helmholtz free energy for a van der Waals gas. The equation of
state is µ ¶
n2
P + α 2 (V − nb) = nRT ,
V
where α and b are constants which depend on the type of gas and n is the number of moles.
Assume that heat capacity is CV,n = (3/2)nR.
Is this a reasonable choice for the heat capacity? Should it depend on volume?

Solution 2.14: Let us express pressure through the volume,

nRT αn2
P= − 2 .
V − nb V
Since P = −∂A/∂V we obtain
Z
A=− P(V ) dV = −nRT ln(V − nb) − (αn2 /V ) + A (T ) .
22 CHAPTER 2. INTRODUCTION TO THERMODYNAMICS

Here A (T ) is the integration constant, which can be found from the given specific heat. Indeed,
Z T
CV,n dT 0
S= = (3/2)nR ln(T /T0 ) .
T0
Consequently, Z T
A (T ) = − S(T 0 ) dT 0 = (3/2)nRT [1 − ln(T /T0 )] .
Here we omit temperature-independent constant.
The suggestion regarding specific heat is OK since the difference between the entropies of
van der Waals gas and the ideal gas is temperature independent. (Check!)

Problem 2.15: Prove that


(a) κT (CP −CV ) = TV α2P
(b) CP /CV = κT /κS .

Solution 2.15: We use the method of Jacobians:


(a)
∂(S,V )/∂(T, P)
CV = T (∂S/∂T )V = T ∂(S,V )/∂(T,V ) =
∂(T,V )/∂(T, P)
(∂S/∂T )P (∂V /∂P)T − (∂S/∂P)T (∂V /∂T )P
= T
(∂V /∂P)T
(∂S/∂P)T (∂V /∂T )P
= CP − T .
(∂V /∂P)T
Now, from the Maxwell relations (∂S/∂P)T = − (∂V /∂T )P . Thus,

[(∂V /∂T )P ]2 α2
CP −CV = −T = TV P .
(∂V /∂P)T κT
The first relation follows from this in a straightforward way from definitions.
(b) Let us first calculate the adiabatic compressibility (∂V /∂P)S as
µ ¶ µ ¶
∂V ∂(V, S) ∂(V, S)/∂(V, T ) ∂(V, T ) (∂S/∂T )V ∂V
= = · = · .
∂P S ∂(P, S) ∂(P, S)/∂(P, T ) ∂(P, T ) (∂S/∂T )P ∂P T
Consequently,
CP (∂V /∂P)T κT
= = .
CV (∂V /∂P)S κS
23

Problem 2.16: Show that

T ds = cx (∂T /∂Y )x dY + cY (∂T /∂x)Y dx ,

where x = X/n is the amount of extensive variable, X, per mole, cx is the heat capacity per mole
at constant x, and cY is the heat capacity per mole at constant Y .

Solution 2.16: Let us substitute the definitions

cx = T (∂S/∂T )x , cY = T (∂S/∂T )Y .

Now the combination cx (∂T /∂Y )x dY + cY (∂T /∂x)Y dx can be rewritten as

T (∂S/∂T )x (∂T /∂Y )x dY + T (∂S/∂T )Y (∂T /∂x)Y dx


= T (∂S/∂Y )x dY + T (∂S/∂x)Y dx = T ds .

Problem 2.17: Compute the molar heat capacity cP , the compressibilities, κT and κS , and
the thermal expansivity αP of a monoatomic van der Waals gas. Start from the fact that the
mechanical equation of state is
RT α
P= − 2.
v−b v
and the molar heat capacity is cv = 3R/2, where v = V /n is the molar volume.

Solution 2.17: Let us start with ther specific heat. Using the method similar to the Problem
2.15 we can derive the relation
[(∂P/∂T )v ]2 R
cP − cv = −T = .
(∂P/∂v)T 1 − 2α(v − b)2 /RT v3

Now let us compute the compressibility


µ ¶ µ ¶−1
1 ∂v 1 ∂P (v − b)2 1
κT = − =− = .
v ∂P T v ∂v T vRT 1 − 2α(v − b)2 /RT v3

Given cv , other quantities can be calculated using results of the Problem 2.15.

Problem 2.18: Compute the heat capacity at constant magnetic field CH,n , the susceptibilities
χT,n and χS,n , and the thermal expansivity αH,n for a magnetic system, given that the mechanical
equation of state is M = nDH/T and the heat capacity CM,n = nc, where M is the magnetization,
H is the magnetic field, n is the number of moles, c is the molar heat capacity, and T is the
temperature.
24 CHAPTER 2. INTRODUCTION TO THERMODYNAMICS

Solution 2.18: Let us start with susceptibilities. By definition,


µ ¶
∂M nD
χT,n = = .
∂H T,n T

Now, µ ¶
∂M ∂(M, S) ∂(M, S)/∂(M, T ) ∂(M, T )) CM,n
= = · = χT,n .
∂H S ∂(H, S) ∂(H, S)/∂(H, T ) ∂(H, T ) CH,n
Thus we have found one relation between susceptibilities and heat capacities,

χS,n CM,n
= .
χT,n CH,n

Now let us find CH,n . At constant H, M becomes dependent only on temperature. Then,

dM nDH
dM = dT = − 2 dT .
dT T
Consequently, the contribution to the internal energy is

nDH 2
dU = −H dM = dT .
T2
As a result,
nDH 2
CH,n −CM,n = .
T2
Given CH,n = nc we easily compute CH,n and χS,n .
According to Eq. (R2.149), αH is defined as
µ ¶
∂M nDH
αH = =− .
∂T H T2

Problem 2.19: A material is found to have a thermal expansivity αP = v−1 (R/P + a/RT 2 ) and
an isothermal compressibility κT = v−1 [T f (P) + b/P]. Here v = V /n is the molar volume.

(a) Find f (P).

(b) Find the equation of state.

(c) Under what condition this materials is stable?


25

Solution 2.19:

(a) By definition, we have

∂v R a
= + .
∂T P RT 2
∂v b
= −T f (P) − .
∂P P
To make dv an exact differential we need:
R
− = − f (p) .
P2
Thus
f (p) = R/P2 .

(b) We can reconstruct the equation of state as:


Z P Z P · ¸
∂v RT b
v = dP = dP − 2 −
∂P P P
RT
= − b ln P + g(T ) .
P
Here g(T ) is some function of the temperature. Now,

∂v R R a
= + g0 (T ) ≡ + .
∂T P P RT 2
Thus g(T ) = −a/RT + const. As a result, we can express the equation of state as
RT P0 a
v − v0 = + b ln − .
P P RT

(c) Since the compressibility must be positive, we have the stability condition
b TR b
T f (P) + >0 → + > 0.
P P2 P
Consequently, the stability condition is

P/T < R/b .

Problem 2.20: Compute the efficiency of the reversible two heat engines in Fig. 2.5 (R2.20).
Which engine is the most effective? (Note that these are not Carnot cycles. The efficiency of a
heat engineis η = ∆Wtotal /∆Qabsorbed .
26 CHAPTER 2. INTRODUCTION TO THERMODYNAMICS

T T
a b a
T T
2 2
(a ) (b )
T T
1 c 1
c b
S 1 S 2 S S 1 S 2 S

Figure 2.5:

Solution 2.20: Since dQ = T dS, we immediately get for any closed path in the T − S plane:
I
∆Wtotal = T dS .

This is just the area of the triangle,

∆Wtotal = (1/2)(T2 − T1 )(S2 − S1 ) .

The heat absorbed in the case (a) is

∆Qabsorbed = T2 (S2 − S1 ) .

Thus,
T2 − T1
ηa = .
2T2
In the case (b), it easy to show that

∆Qabsorbed = (1/2)(T2 + T1 )(S2 − S1 ) .

Thus
T2 − T1
ηb = > ηa .
T2 + T1

Problem 2.21: It is found for a gas that κT = T v f (P) and αP = Rv/P + Av/T 2 , where T is
the temperature, v is the molar volume, P is the pressure, A is a constant, and f (P) is unknown
function.

(a) What is f (P)?

(b) Find v = v(P, T ).


27

Solution 2.21: The solution is similar to the problem 2.19. We have:


µ ¶
∂v 2 R A
= v + ,
∂T P T2
∂v
= −v2 T f (P) .
∂P
Let us introduce γ(P, T ) ≡ [v(P, T )]−1 . We get,
∂γ R A
= − − 2,
∂T P T
∂γ
= T f (P) .
∂P
Again, from the Maxwell relation we get R
P2
= f (p). Then we can express γ as
Z P Z P
∂γ dP RT
γ= dP = T R =− + g(T ) .
∂P P2 P
Then, g0 (T ) = −A/T 2 , or g(T ) = A/T + const. As a result,
A RT
γ = γ0 + − .
T P
Consequently,
1
v(P, T ) = .
γ0 + A/T − RT /P

Problem 2.22: A monomolecular liquid at volume VL and pressure PL is separated from a gas
of the same substance by a rigid wall which is permeable to the molecules, but does not allow
liquid to pass.The volume of the gas is held fixed at VG , but the volume of the liquid cam be
varied by moving a piston. If the pressure of the liquid is increased by pushing in on the piston,
by how much does the pressure of the gas change? [Assume the liquid in incompressible (its
molar volume is independent of the pressure) and describe the gas by the ideal gas equation of
state. The entire process occurs at fixed temperature T ].

Solution 2.22: Let us consider the part of the system, which contains both liquid and gas
particles. In this part the chemical potentials must be equal, µL = µG . On the other hand, the
chemical potentials and pressures of gas in both parts must be equal. Thus we arrive at the
equation,
µL (PL , T ) = µG (PG , T ) .
If one changes the pressure of liquid by δPL , then
µ ¶ µ ¶
∂µL ∂µG
δPL = δPG .
∂PL T ∂PG T
28 CHAPTER 2. INTRODUCTION TO THERMODYNAMICS

For an ideal gas, (∂µ/∂P)T = kT /PG = VG /NG ≡ ṽL . the quantity ṽL has a physical meaning of
the volume per particle. For a liquid, the relation is the same,
µ ¶ µ ¶ µ ¶ µ ¶
∂µ ∂ ∂G ∂ ∂G ∂V
= = = = ṽL .
∂P T ∂P ∂N P,T ∂N ∂P N,T ∂N T
The last relation is a consequence of incompressible character of the liquid. As a result,
δPG ṽL
= .
δPL ṽG

2.1 Additional Problems: Fluctuations


Quick access: 23 24 25 26 27

Problem 2.23: Find the mean square fluctuation of the internal energy (using V and T as inde-
pendent variables). What is the mean square fluctuation of the internal energy for a monoatomic
ideal gas?

Solution 2.23: We have


µ ¶ µ ¶ · µ ¶ ¸
∂U ∂U ∂P
∆U = ∆V + ∆T = T − P ∆V +CV ∆T .
∂V T ∂T V ∂T V
Here we use Maxwell relation which can be obtained from Helmholtz free energy. Squaring and
averaging we obtain (note that h∆V ∆T i = 0)
· µ ¶ ¸2
2 ∂P
h(∆U) i = T − P h(∆V )2 i +CV2 h(∆T )2 i .
∂T V
Now, µ ¶
2 2 2 ∂V
h(∆T ) i = kT /CV , h(∆V ) i = −kT .
∂P T
Thus · µ ¶ ¸2 µ ¶
2 ∂P ∂V
h(∆U) i = −kT T −P +CV kT 2 .
∂T V ∂P T
For the ideal gas,
P = NkT /V , CV = (3/2)Nk .
Thus,
h(∆U)2 i = (3/2)N(kT )2 .
2.1. ADDITIONAL PROBLEMS: FLUCTUATIONS 29

Problem 2.24: Find h∆T ∆Pi (with variables V and T ) in general case and for a monoatomic
ideal gas.

Solution 2.24: We have µ ¶ µ ¶


∂P ∂P
∆P = ∆V + ∆T .
∂V T ∂T V
Squaring and averaging we obtain (note that h∆V ∆T i = 0)
µ ¶ µ 2¶ µ ¶
∂P kT ∂P
h∆T ∆Pi = 2
h(∆T ) i = .
∂T V CV ∂T V
For an ideal gas,
2 kT 2
h∆T ∆Pi = .
3 V

Problem 2.25: Find h∆V ∆Pi with variable (V and T ).

Solution 2.25: Using results of the previous problem we get


µ ¶
∂P
h∆V ∆Pi = h(∆V )2 i = −kT .
∂V T

Problem 2.26: Using the same method show that


µ ¶
∂V
h∆S ∆V i = kT , h∆S ∆T i = kT .
∂T P

Solution 2.26: Starightforward.

Problem 2.27: Find a mean square fluctuation deviation of a simple pendulum with the length
` suspended vertically.

Solution 2.27: Let m be the pendulum mass, and φ is the angle of deviation from the vertical.
The minimal work is just the mechanical work done against the gravity force. For small φ,

Wmin = mg · ` (1 − cos φ) ≈ mg`φ2 /2 .

Thus,
hφ2 i = kT /mg` .
30 CHAPTER 2. INTRODUCTION TO THERMODYNAMICS

2.2 Mini-tests
2.2.1 A
The Helmholtz free energy of the gas is given by the expression

A = Nε0 − NkT ln(eV /N) − NcT ln T − NζT .

Here e = 2.718 . . . is the base of natural logarithms, N is the number of particles, V is the volume,
T is the temperature in the energy units, while ε0 , c and ζ are constants.

(a) Find the entropy as function of V and T .

(b) Find internal energy U as a function of the temperature T and number of particles N.

(c) Show that c is the heat capacity per particle at given volume, cv .

(d) Find equation of state.

(e) Find Gibbs free energy, enthalpy. Find the entropy as a function of P and T .

(f) Using these expression find the heat capacity at constant pressure, cP .

(g) Show that for an adiabatic process

T γ P1−γ = constant, where γ = cP /cv .

Solution
By definition, µ ¶
∂A eV
S=− = N ln + Nc (1 + ln T ) + NζT .
∂T V,N N
Now,

U = A + T S = Nε0 + NcT ,
µ ¶
∂A NT
P = − = ,
∂V T,N V
V
G = A + PV = A + NT = Nε0 − NT ln − NcT lT + NζT
N
= Nε0 + NT ln P − NT (1 + c) ln T − NζT ,
W = U + PV = U + NT = Nε0 + N(c + 1)T ,
S = −N ln P + N(1 + c) ln T + N(ζ + 1 + c) ,
µ ¶
∂S
cP = T = 1+c.
∂T P,N
2.2. MINI-TESTS 31

Immediately, to keep entropy constant we get

−N ln P + NcP ln T = const → T cP /P = const .

Since cP − cV = cP − c = 1, we obtain

T cP PcP −cV = const , → T γ P1−γ = const .

2.2.2 B
Problem 1
Discuss entropy variation for
(a) adiabatic process,
(b) isothermic process,
(c) isochoric process,
(d) isobaric process.

Problem 2
(a) Discuss the difference between Gibbs and Helmholtz free energy.
(b) Prove the relation µ ¶
2 ∂ A
U = −T .
∂T T
(c) A body with constant specific heat CV is heated under constant volume from T1 to T2 . How
much entropy it gains?
(d) Discuss the heating if the same body is in contact with a thermostat at T2 . In the last case the
heating is irreversible. Show that the total entropy change is positive.
(e) Two similar bodies with temperatures T1 and T2 brought into contact. Find the final tempera-
ture and the change in entropy.

Solution
Problem 1
(a) Constant
(b)
S = Q/T .
(c)
Z T2
CV (T ) T2
S= dT = CV ln .
T1 T T1
32 CHAPTER 2. INTRODUCTION TO THERMODYNAMICS

(d) Z T2
CP (T ) T2
S= dT = CP ln .
T1 T T1

Problem 2
Mechanical work under isothermic process is given by

dW = dU − dQ = dU − T dS = d(U − T S) .

The function of state


A = U −TS
is called Helmholtz free energy. We have

dA = dU − T dS − S dT = (T dS − P dV ) − T dS − S dT = −S dT − P dV.

Thus, µ ¶ µ ¶
∂A ∂A
S=− , P=− .
∂T V ∂V T
Thus, A is the thermodynamic potential with respect to V and T .
The thermodynamic potential with respect to P and T is called Gibbs free energy. We have

G = U − T S − PV → dG = −S dT +V dP .

(b) Substituting µ ¶
∂A
S=−
∂T V
into definition of A we get the result.
(c) We have Z µ ¶
T2 CV T2
S= dT = CV ln .
T1 T T1
(d) Use 2nd law of thermodynamics
(e) Energy conservation law yields

CV (T2 − TB ) = CV (TB − T1 ) → TB = (T1 + T2 )/2 .

for the entropy change we have


µ ¶
TB TB T1 + T2
δS = CV ln +CV ln = 2CV ln √ ≥ 0.
T1 T2 2 T1 T2
Chapter 3

The Thermodynamics of Phase Transitions

Quick access: 1 2 3 4 5 6 7 8 9 10 11 12 13

Problem 3.1: A condensible vapor has a molar entropy


· ³ ¸
a ´5/2
s = s0 + R ln C(v − b) u + ,
v
where c and s0 are constants.
(a) Compute equation of state.

(b) Compute the molar heat capacities, cc and cP .

(c) Compute the latent heat between liquid and vapor phases temperature T in terms of the
temperature T , the gas constant R and gas molar volumes vl and vg . How can you find
explicit values of vl and vg if you need to?

Solution 3.1:
a) Let us first find the temperature as function of u and s. We have
µ ¶
1 ∂s 5R ³ a ´−1
= = u+ .
T ∂u v 2 v

Thus,
h i
s = s0 + R ln C(5RT /2)5/2 (v − b) ,
5 a
u = RT − ,
2 v
5 a n h io
5/2
a = u − T s = RT − − T s0 + R ln C(v − b)(5RT /2) .
2 v

33
34 CHAPTER 3. THE THERMODYNAMICS OF PHASE TRANSITIONS

As a result, µ ¶
∂a RT a
P=− = − 2.
∂v T v−b v
This is the van der Waals equation.

(b) We have, µ ¶
∂s 5
cv = T = RT .
∂T v 2
The result for cP can be obtained in the same way as in the problem 2.17.

(c) The latent heat q is just


vg − b
q = T (sg − sl ) = RT ln .
vl − b
Since the pressure should be constant along the equilibrium line, Pl = PG , we have
RT a RT a
− 2= − 2.
vg − b vg vl − b vl

Another equation is the equality of chemical potentials, µl = µg . We know that µ =


(∂a/∂n)T,V , so everything could be done. Another way is to plot the isotherm and find
the volumes using the Maxwell rule.

Problem 3.2: Find the coefficient of thermal expansion, αcoex = v−1 (∂v/∂T )coex , for a gas
maintained in equilibrium with its liquid phase. Find an approximate explicit expression for
αcoex , using the ideal gas equation of state. Discuss its behavior.

Solution 3.2: It is implicitly assumed that the total volume of the system is kept constant. We
have, µ ¶ µ ¶
∂v ∂v ∂v dP
= + .
∂T ∂T P ∂P T dT
For an ideal gas, v = RT /P. Thus
µ ¶ µ ¶
∂v ∂v
= R/P , = −RT /P2 .
∂T P ∂P T

Consequently, µ ¶ · µ ¶ ¸
1 ∂v 1 T ∂P
αcoex = = 1− .
v ∂T coex T P ∂T coex
According to the Clapeyron-Clausius formula,
µ ¶
∂P q q
= ≈ ,
∂T coex T (vg − vl ) T vg
35

where q = (∆h)lg , we can rewrite this expression as

1³ q ´
αcoex ≈ 1− .
T RT
The coefficient of thermal expansion is less because with the increase of the temperature under
given pressure the heat is extracted from the system.

Problem 3.3: Prove that the slope of the sublimation curve of a pure substance at the triple
point must be greater than that of the vaporization curve at the triple point.

Solution 3.3: The triple point is defined by the following equations for the 2 phases:

P1 = P2 = P3 , T1 = T2 = T3 , µ1 = µ2 = µ3 .

The definitions of the sublimation and vaporization curves are given in Fig. 3.4 of the text-
book [1]. Consequently, the slopes of the vaporization and sublimation curves are given by the
relations µ ¶ µ ¶
∂P sg − sl sg − sl ∂P sg − ss sg − ss
= ≈ , = ≈ .
∂T lg vg − vl vg ∂T sg vg − vs vg
Since solid state is more ordered, ss < sl , and
µ ¶ µ ¶
∂P ∂P ss − sl
− = < 0.
∂T lg ∂T sg vg

Problem 3.4: Consider a monoatomic fluid along its fluid-gas coexistence curve. Compute the
rate of change of chemical potential along the coexistence curve, (∂µ/∂T )coex , where µ is the
chemical potential and T is the temperature. Express your answer in terms of sl , vl and sg , vg
which are the molar entropy and molar volume of the liquid and gas, respectively.

Solution 3.4: We have µ(P, T ) = µl (P, T ) = µg (P, T ). Thus,


µ ¶ µ ¶
dµ ∂µg ∂µg dP
= + .
dT ∂T P ∂P T dT

Consequently, µ ¶
dµ sg − sl sg vl − vg sl
= −sg + vg = .
dT coex vg − vl vg − vl
36 CHAPTER 3. THE THERMODYNAMICS OF PHASE TRANSITIONS

Problem 3.5: A system in its solid phase has a Helmholtz free energy per mole, as = B/T v3 ,
and in its liquid phase it has a Helmholtz free energy per mole, al = A/T v2 , where A and B are
constants, v is the volume per mole, and T is the temperature.

(a) Compute the Gibbs free energy density of the liquid and solid phases.

(b) How are the molar volumes, v, of the liquid and solid related at the liquid-solid phase
transition?

(c) What is the slope of the coexistence curve in the P − T plane?

Solution 3.5:

(a) By definition, g = a + Pv, and P = − (∂a/∂v)T . Thus,

g = a − v (∂a/∂v)T , gs = 4B/T v3s , gl = 3A/T v2l .

(b) Since P = − (∂a/∂v)T we have,

Ps = 3B/T v4s , Pl = 2A/T v3l .

Since at the phase transition Ps = Pl = P, we obtain

v3l 2A
4
= .
vs 3B

(c) Now we can express the Gibbs free energies per mole in terms of P,
4 3
gs = (3B)1/4 P3/4 T −1/4 , gl = (2A)1/3 P2/3 T −1/3 .
3 2
Since at the transition point gl = gs , we have
à !12
P 37/4 A1/3 321 A4
= = .
T 28/3 B1/4 224 B3

Problem 3.6: Deduce the Maxwell construction using stability properties of the Helmholtz
free energy rather than the Gibbs free energy.

Solution 3.6: Since the system in the equilibrium, the maximum work extracted during the
process of the phase transition is ∆A − P0 ∆V . Let us look at Fig. 3.1.
37

b c
P0
a

v l v g V

Figure 3.1:
We see that P0 ∆V = P0 (Vg −Vl ) is just the area of the dashed rectangle. To find ∆A let is follow
isotherm. Since T = const, Z c
∆A = P dV ,
a
which is just the area below the isotherm a − b − c. Subtracting areas we reconstruct the Maxwell
rule.

Problem 3.7: For a van der Waals gas, plot the isotherms in the in the P̄ − V̄ plane (P̄ and V̄
are the reduced pressure and volume) for reduced temperatures T̄ = 0.5, T̄ = 1.0, and T̄ = 1.5.
For T̄ = 0.5, is P̄ = 0.1 the equilibrium pressure of the liquid gas coexistence region?

Solution 3.7: We use Maple to plot the curves. The graphs have the form We see that for
T̄ = 0.5 there is no stavle region at all. To illustrate the situation we plot the curves in the
vicinity of T̄ = 1.

Problem 3.8: Consider a binary mixture composed of two types of particles, A and B. For this
system the fundamental equation fog the Gibbs energy is
G = nA µA + nB µB , (3.1)
the combined first and second laws are
dG = −S dT +V dP + µA dnA + µB dnB (3.2)
(S is the total entropy and V is the total volume of the system), and the chqemical potentials µA
and µB are intensive so that
µA = µA (P, T, xA ) and µB (P, T, xB ) .
38 CHAPTER 3. THE THERMODYNAMICS OF PHASE TRANSITIONS

12
10
8
6
4
2

0 0.6 0.7 0.8 0.9 1 1.1 1.2


–2 v

–4

Figure 3.2: The upper curve corresponds T̄ = 1.5, the lower one - to T̄ = 0.5.

2
1.8
1.6
1.4
1.2
P 1
0.8
0.6
0.4
0.2

0 1 2 3 4 5
v

Figure 3.3: The upper curve corresponds T̄ = 1.05, the lower one - to T̄ = 0.85.

Use these facts to derive the relations


s dT − v dP + ∑ xα µα = 0 (3.3)
α=A,B

and
∑ xα (dµα + sα dT − vα dP) = 0 , (3.4)
α=A,B
where s = S/n, n = nA + nB , sα = (∂S/∂nα )P,T,nβ6=α , and vα = (∂V /∂nα )P,T,nβ6=α with α = A, B and
β = A, B.

Solution 3.8: Let us express nα as nα = nxα and divide Eq. (3.1) by n. We get
µ ¶
dg = d ∑ xα µα = ∑ (xα dµα + µα dxα ) (3.5)
α α
39

The let us divide Eq. (3.2) by n to obtain

dg = −s dT + v dP + ∑ µα dxα . (3.6)
α

Equating the right hand sides of these equation we prove Eq. (3.3). Further, the entropy is an
extensive variable. Consequently,is should be a homogeneous function of nα .

S = ∑ (∂S/∂nα )P,T,nβ6=α nα = n ∑ sα xα .
α α

Substituting these expressions into Eq. (3.3) we prove Eq. (3.4).

Problem 3.9: Consider liquid mixture (l) of particles A and B coexisting in equilibrium with
vapor mixture (g) of particles A and B. Show that the generalization of of the Clausius-Clapeyron
equation for the coexistence curve between the liquid and vapor phases when the mole fraction
of of A in the liquid phase is fixed is given by
µ ¶ g g g g
∂P xA (sA − slA ) + xB (sB − slB )
= g g , (3.7)
∂T xl g g
xA (vA − vlA ) + xB (vB − vlB )
A

where sα = (∂S/∂nα )P,T,nβ6=α , and vα = (∂V /∂nα )P,T,nβ6=α with α = A, B and β = A, B. [Hint:
Equation (b) of the Problem (3.8) is useful.]

Solution 3.9: Let us apply Eq. (3.4) to the gas phase. We get
¡ ¢
∑ xαg dµgα + sgα dT − vgα dPg = 0 .
α

Dividing this equation by dT we obtain


µ g ¶
g dµα g dPg
∑ xα dT + sα − vα dT = 0 .
g
α

g
Now, let us take into account that at the coexistence curve µα = µlα . Thus
g
dµα dµlα dPl
= = −slα + vlα .
dT dT dT
Since at the equilibrium Pg = Pl , we prove Eq. (3.7).

Problem 3.10: A PV T system has a line of continuous phase transitions (a lambda line) sepa-
rating two phases, I and II, of the system. The molar heat capacity cP and the thermal expansivity
αP are different in the two phases. Compute the slope (dP/dT )coex of the λ line in terms of the
temperature T , the molar volume v, ∆cP = cIP − cII
P , and ∆αP = αP − αP .
I II
40 CHAPTER 3. THE THERMODYNAMICS OF PHASE TRANSITIONS

Solution 3.10: At a continuous phase transition the entropy is continuous, ∆s = 0. At the same
time, s = s(P, T ) and along the coexistence curve P is a function of the temperature. For each
phase, we get
ds ∂s ∂s dP
= + .
dT ∂T ∂P dT
We know that (∂S/∂T )P = cP /T and from the Maxwell relation

(∂s/∂P)T = − (∂v/∂T )P .

Since ∆s = 0,
(∆cP )/T = (dP/dT ) ∆ (∂v/∂T )P = (dP/dT ) v (∆αP ) .
The answer is µ ¶
dP ∆cP
= .
dT coex T ∆αP

Problem 3.11: Water has a latent heat of vaporization, ∆h = 540 cal/gr. One mole of steam is
kept at its condensation point under pressure at T1 = 373 K. The temperature is then lowered to
T2 = 336 K. What fraction of the steam condenses into water? (Treat the steam as an ideal gas
and neglect the volume of the water.)

Solution 3.11: Let us use the Clausius-Clapeyron equation for the case of liquid-vapor mixture.
Since vg ¿ vl we get µ ¶
dP q
= g.
dT coex T v
Here q is the latent heat per mole. Considering vapor as an ideal gas we get
dP qP
= → P = P0 e−q/RT .
dT RT 2
As a consequence, · ¸
P1 q(T1 − T2 )
= exp .
P2 RT1 T2
Since the volume is kept constant,
g · ¸ · ¸
n1 P1 q(T1 − T2 ) 18 ∆h (T1 − T2 )
g = = exp = exp .
n2 P2 RT1 T2 RT1 T2
Here we have taken into account that the olecular weight of H2 O is 18. The relative number of
condensed gas is then · ¸
n1 − n2 18 ∆h (T1 − T2 )
= 1 − exp − .
n1 RT1 T2
41

Problem 3.12: A liquid crystal is composed of molecules which are elongated (and often have
flat segments). I behaves like a liquid because the locations of the center of mass of the molecules
have no long-range order. It behaves like crystal because the orientation of the molecules does
have long range order. The order parameter of the liquid crystal is given by the diatic
S = η [nn − (1/3)I] ,
where n is a unit vector (called the director), which gives the average direction of alignment of
the molecules. The free energy of the liquid crystal can be written as
1 1 1
Φ = Φ0 + A Si j Si j − B Si j S jk Ski + C Si j S jk Skl Skl , (3.8)
2 3 4
where A = A0 (T − T ∗ ), A0 , B and C are constants, I is the unit tensor so
x̂i · I · x̂ j = δi j , Si j = x̂i · S · x̂ j ,
and the summation is over repeated indices. The quantities x̂i are the unit vectors
x̂1 = x , x̂2 = y , x̂3 = z .
(a) Perform the summations in the expression for Φ and write Φ in terms of η, A, B, C.
(b) Compute the critical temperature Tc , at which the transition from isotropic liquid to liquid
crystal takes place.
(c) Compute the difference between the entropies between the isotropic liquid (η = 0) and the
liquid crystal at the critical temperature.

Solution 3.12:
(a) First, for brevity, let us express the matrix S as η (ŝ − I/3), where
 2 
n1 n1 n2 n1 n3
ŝ = nn =  n2 n1 n22 n2 n3  .
n3 n1 n3 n2 n23
The Eq. (3.8) can be expressed as
1 1 1
Φ = Φ0 + Aa2 η2 − Ba3 η3 + Ca4 η4 ,
2 3 4
µ ¶2 µ ¶ " µ ¶2 #2
1 1 3 1
a2 = Tr ŝ − I , a3 = Tr ŝ − I , a4 = Tr ŝ − I . (3.9)
3 3 3
¡ ¢m
It is easy to calculate bm = Tr ŝ − 13 I . Indeed,
µ ¶
1 m m
m! (−1)k ³ k m−k ´
Tr ŝ − I =∑ Tr ŝ I .
k=0 k!(m − k)! 3
3 k
42 CHAPTER 3. THE THERMODYNAMICS OF PHASE TRANSITIONS

It is straightforward that for m > 1 and k 6= 0 we get


³ ´ ³ ´ ³ ´
Tr ŝk Im−k = Tr ŝk I = Tr ŝk = 1 , Tr Im = Tr I = 3 .

Here we have used the properties of trace and of ŝ operators. Since trace is independent of
the presentation, let us direct the axis 1 along the vector n. Then
 
1 0 0 ³ ´
ŝ =  0 0 0  → Tr ŝk = Tr ŝ = 1 .
0 0 0

Consequently,
µ ¶ µ ¶m
1 m −1 2 2 4
bm = 1 − +2 , a2 = b2 = , a3 = b3 = , a4 = b22 = .
3 3 3 9 9

Finally, we get
1 2 1
Φ = Φ0 + Aη2 − Bη3 + Cη4 . (3.10)
3 27 9
(b) At the transition point the dependence Φ(η) has two equal minima. To find this point, let
us shift the origin of η by some η0 in order to kill odd in η − η0 items. It we put η = η0 + ξ
we have
1 2 1
Φ − Φ0 = A(ξ + η0 )2 − B(ξ + η0 )3 + C(ξ + η0 )4 . (3.11)
3 27 9
In order to kill the term proportional to ξ3 one has to put (see the Maple file) η0 = B/6C.
Substituting this value into the expression for the coefficient ξ,

(2/3)Aη0 − (2/9)Bη20 + (4/9)Cη30 ,

and equating the result to 0 we get A = B2 /27C. Since A = A0 (T − T ∗ ) we have

B2
Tc = T ∗ + .
27A0C

(c) Since S = −∂Φ/∂T = −(1/3)A0 η2 we get

∆S = −(1/3)A0 (η+ − η− )(η+ + η− ) = −(2/3)A0 η0 (ξ+ − ξ− ) ,

where η± are the roots of the equation ∂Φ/∂η = 0 while ξ± are the roots of the equation
∂Φ/∂ξ = 0. Note that the potential Φ(ξ) is symmetric, thus ξ+ + ξ− = 0. The important
part Φ(ξ) at the critical point can be obtained substituting η0 and A into Eq. (3.11). It has
the form, µ ¶
C 4 B2 2
δΦ(ξ) = ξ − ξ .
9 18C2
43

Thus,
B B
ξ± = ± , ξ+ − ξ− = .
6C 3C
As a result,
1 A0 B2
∆S = − .
27 C2

Problem 3.13: The equation of state of a gas is given by the Berthelot equation
³ a ´
P + 2 (v − b) = RT .
Tv
(a) Find values of the critical temperature Tc , the critical molar volume vc , and the critical
pressure, Pc in terms of a, b, and R.

(b) Does the Berthelot equation satisfy the law of corresponding states.?

• Find the critical exponents β, δ, and γ from the Berthelot equation.

Solution 3.13:

(a) Let us express the pressure as a function of volume,


RT a
P= − 2.
v−b Tv
For the critical temperature we have
µ ¶
∂P RTc 2a
= − + = 0,
∂V T (vc − b)2 Tc v3c
µ 2 ¶
∂ P 2RTc 6a
= − = 0.
∂V 2 T (vc − b)3 Tc v4c

From this we obtain:


r
8a RTc
vc = 3b , Tc = , Pc = .
27Rb 8b

(b) We put
P = Pc · P̄ , T = Tc · T̄ , v = vc · v̄
to get: µ ¶
3
P̄ + 2 (3v̄ − 1) = 8T̄ .
T̄ v̄
44 CHAPTER 3. THE THERMODYNAMICS OF PHASE TRANSITIONS

(c) First let us simplify the equation of state near the critical point. Putting

P̄ = 1 + p , v̄ = 1 + ν , T̄ = 1 + τ

and expanding in p, ν, and τ up to lowest non-trivial order (see Maple file), we get

3
p = − ν3 − 12ντ + 7τ . (3.12)
2
The derivative (∂p/∂ν)τ is
µ ¶
∂p 9
= − ν2 − 12τ . (3.13)
∂ν τ 2
Since this function is even in ν the Maxwell construction
Z νg Z νg µ ¶
∂p
νdp = ν dν = 0
νl νl ∂ν τ

requires −νl = νg . Since along the coexistence curve p(νg , τ) = p(−νg , τ) we get

νg = −8τ → β = 1/2 .

Substituting this value of νg into Eq. (3.13) we observe that


µ ¶
∂p
= 24τ → γ = 1.
∂ν τ

To obtain the critical exponent δ let us introduce ρ = 1/v̄, the equation of state being

8T̄ ρ 3ρ2
P̄(ρ, T̄ ) = − .
3−ρ T̄

Expanding near the critical point P̄ = T̄ = ρ = 1 we get

P̄ = 1 + (3/2)(ρ − 1)3 → δ = 3.

3.1 Mini-tests
3.1.1 A
Properties of Van der Waals (VdW) liquid
(i) Find the critical temperature Tc at which the Van der Waals isotherm has an inflection
point. Determine the pressure Pc and volume, Vc , for a system of N particles at T = Tc .
3.1. MINI-TESTS 45

(ii) Express the VdW equation in units of Tc , Pc , and Vc . Show that it has the form
µ ¶
3
P + 02 (3V 0 − 1) = 8T 0
0
(3.14)
V

where P0 ≡ P/Pc , V 0 ≡ V /Vc , and T 0 ≡ T /Tc

(iii) Analyze the equation of state (3.14) near the critical point. Assume that

P0 = 1 + p, T0 = 1+τ V0 = 1−n

and show that for small p, τ and n the equation of state has the approximate form

p = 4τ + 6τn + (3/2)n3 . (3.15)

(iv) Plot p(n, τ) versus n for τ = ±0.05 and discuss the plots.

(v) Using the above equation find the stability region. Show this region in the plot.

(vi) Show that the Maxwell relation can be expressed as


Z nr
n (∂p/∂n)τ dn = 0 (3.16)
nl

along the equilibrium liquid-gas line. Using this relation and the equation of state find nl
and nr .

(vii) Discuss why the stability condition and Maxwell relation lead to different stability criteria?
Illustrate discussion using the plot.

Solution
(i) Writing the VdW equation as
NkT N 2a
P= − 2
V − Nb V
and requiring µ ¶ µ ¶
∂P ∂2 P
= 0, =0
∂V Tc ∂V 2 Tc
we obtain
8 a 1 a
Tc = , Vc = 3Nb, Pc = .
27 bk 27 b2
(ii) The result is straightforward in one substitutes P = Pc P0 , V = VcV 0 and T = Tc T 0 .

(iii) Straightforward.

(iv) Straightforward.
46 CHAPTER 3. THE THERMODYNAMICS OF PHASE TRANSITIONS

(v) The stability region is determined by the equation (∂p/∂n)τ = 0. We have

(∂p/∂n)τ = 6τ + (9/2)n2 .

Consequently p
nst = ±2 −τ/3 .

(vi) Since in the main approximation (∂p/∂n)τ is an odd function of n, nl = −nr . Using
Eq. (3.15) we get (see the Maple file)

nr = −4τ .

3.1.2 B
Curie-Weiss theory of a magnet
The simplest equation for a non-ideal magnet has the form

m = tanh[β(Jm + h)] (3.17)

where β = 1/kT , J is the effective interaction constant, while h is the magnetic field measured in
proper units.

(i) Consider the case h = 0 and analyze graphically possible solutions of this equation.
Hint: rewrite equation in terms of an auxiliary dimensional variable m̃ ≡ βJm. Show that
a spontaneous magnetization appears at T < Tc = J/k.

(ii) Simplify Eq. (3.17) at h = 0 near the critical temperature and analyze the spontaneous
magnetization as a function of temperature.
Hint: Put T = Tc (1 + τ) and consider solutions for small m̃ and τ.

(iii) Analyze the magnetization curve m(h) near Tc . Hint: express Eq. (3.17) in terms of the
variables m̃ and h̃ ≡ βh ≈ h/J and plot h̃ as a function of m̃.

(iv) Plot and analyze the magnetization curves for T /Tc = 0.6 and T /Tc = 1.6.

Solution
(i) At h = 0 the equation (3.17) has the form:

(βJ)−1 m̃ = tanh m̃ .

From the graph of the function F(m̃) ≡ (βJ)−1 m̃ − tanh m̃ (see the Maple file) one observes
that only trivial solution m̃ = 0 exists at βJ < 11, or at T ≥ Tc = J/k. At T < Tc there exists
2 non-trivial solutions corresponding spontaneous magnetization. Thus (βJ)−1 = T /Tc
3.1. MINI-TESTS 47

(ii) The dimensionless function F(m̃) ≡ (T /Tc )m̃ − tanh m̃, near Tc acquires the form F(m̃) =
τm̃ + m̃3 /3. Thus the solutions have the from

m̃ = 0, ± −3τ .

Since near Tc the ratio Tc /T can be substituted by 1, the same result is true for m.

(iii) In general, Eq. (3.17) can be rewritten as

(T /Tc )m̃ = tanh(m̃ + h̃) → m̃ + h̃ = artanh [(1 + τ)m̃] .

Consequently,
h̃ = artanh [(1 + τ)m̃] − m̃

To plot the magnetization curve one has to come back to the initial variables.
48 CHAPTER 3. THE THERMODYNAMICS OF PHASE TRANSITIONS
Chapter 4

Elementary Probability Theory and Limit


Theorems

Quick access: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Problem 4.1: A bus has 9 seats facing forward and 8 seats facing backward. In how many
ways can 7 passengers be seated if 2 refuse to ride facing forward and 3 refuse to ride facing
backward?

Solution 4.1: Three people refuse ride facing backward, and they should be definitely placed
to 9 seats facing forward. The number of ways to do that is 9 · 8 · 7. In a similar way, 2 people can
occupy 8 seats facing backward in 8 · 7 ways. Now 17 − 5 = 12 seats left. They can be occupied
by 7 − 3 − 2 = 2 nice people in 12 · 11 ways. As a result, we get

9 · 8 · 7 × 8 · 7 × 12 · 11 = 3725568

ways.

Problem 4.2: Find the number of ways in which 8 persons can be assigned to 2 rooms (A and
B) if each room must have at least 3 persons in it.

Solution 4.2: The number of persons in the room is between 3 and 5. Let us start with the
situation where room A has 3 persons. The number of ways to do that is 8 · 7 · 6. The total
number of ways is then

8 · 7 · 6 + 8 · 7 · 6 · 5 + 8 · 7 · 6 · 5 · 4 = 8736 .

49
50 CHAPTER 4. ELEMENTARY PROBABILITY THEORY . . .

Problem 4.3: Find the number of permutations of the letters in the word MONOTONOUS. In
how many ways are 4 O’s together? In how many ways are (only) 3 O’s together?

Solution 4.3: The total number of the letters is 10, we have 4 O’s, 2 N’s, and other letters do not
repeat. Thus the total number of permutations is 10!/(4!2!) = 75600. To get the number of ways
to have 4 O’s together we have to use model word MONTNUS which has 7 letters with 2 repeated
N’s. Thus the number of ways is 7!/2! = 2520. To have three O’s together we have 6 places to
insert a separate O to the word MONTNUS, thus the number of ways is 6 × 2520 = 15120.

Problem 4.4: In how many ways can 5 red balls, 4 blue balls, and 4 white balls be placed in a
row so that the balls at the ends of the row are of the same color?

Solution 4.4: Let us start with 5 red balls and choose 2 of them to be at the ends. The rest
5 + 4 + 4 − 2 = 11 balls are in the middle. They have to be permuted, but permutation of the balls
of the color must be excluded. Among them the have 3 rest red balls, 4 blue balls and 4 white
balls. Thus we have 11!/(3!4!4!) = 11550 ways. Now let us do the same with white balls. Now
we have 2 rest white balls, but 5 blue balls to permute. We get 11!/(2!5!4!) = 6930 ways. The
same is true for blue balls. The total number of ways is 138600 + 2 × 83160 = 25410 ways.

Problem 4.5: Three coins are tossed.

(a) Find the probability of getting no heads.

(b) Find the probability of getting at least one head.

(c) Show that the event “heads on the first coin” and the events “tails on the last coin” are
independent.

(d) Show that the event “only two coins heads” and the event “three coin heads” are dependent
and mutually exclusive.

Solution 4.5:

(a) The answer is (1/2)3 = 1/8.

(b) The sum of the probabilities is 1. Since the probability to get no heads is 1/8, the answer
is 1 − 1/8 = 7/8.

(c) We assume that result of coin tossing is fully random.

(d) The event “3 heads” contradicts the requirement “only 2”.


51

Problem 4.6: Various 6 digit numbers can be formed by permuting the digits 666655. All
arrangements are equivalently likely. Given that the number is even, what is the probability that
two fives are together?
Hint: You must find a conditional probability.

Solution 4.6: Since the number must be even, the last number is 6, and there are 4 ways to
choose this number. There are 5 digits left, and the total number of their permutations give 5!.
We have to divide it by 3! of permutations of 6 and 2! permutations between 5. We end at the
number of different numbers is 4 · 5!/3!2! = 40. If we want to have two fives together, it is the
same as permute 4 units, the number of permutations 4 · 4!/3! = 16. Thus the probability is
P = 16/40 = 2/5 .

Problem 4.7: Fifteen boys go hiking. 5 get lost, 8 get sunburned, and 6 return home without
problems.
(a) What is the probability that a sunburned boy got lost?
(b) What is the probability that a lost boy got sunburned?

Solution 4.7: See. pp. 176-177 of the book [1].


Let us define the event of getting sunburned as A, and the event of getting lost as B. The probability
of getting sunburned is P(A) = 8/15, while the probability of getting lost is P(B) = 5/15 = 1/3.
We know that 15 − 6 = 9 boys had problems, so the probability of being either sunburned or lost is
P(A ∪ B) = 9/15. Thus the probability of being both sunburned and lost is
P(A ∩ B) = P(A) + P(B) − P(A ∪ B) = (8 + 5 − 9)/15 = 4/15 .
Consequently, the conditional probabilities are
P(B|A) = P(A ∩ B)/P(A) = 1/2 (a), P(A|B) = P(A ∩ B)/P(B) = 4/5 (b) ,
respectively.

Problem 4.8: A stochastic variable X can have values x = 1 and x = 3. A stochastic variable
Y can have values y = 2 and y = 4. Denote the joint probability density
PX,Y (x, y) = ∑ ∑ pi, j δ(x − i) δ(y − j) .
i=1,3 j=2,4

Compute the covariance of X and Y for the following 2 cases:


(a) p1,2 = p1,4 = p3,2 = p3,4 = 1/4;
(b) p1,2 = p3,4 = 0 and p1,4 = p3,2 = 1/2.
For each case decide if X and Y are independent.

Solution 4.8:

(a) First let us calculate

⟨XY⟩ = (1 · 2 + 1 · 4 + 3 · 2 + 3 · 4)/4 = 6 .

At the same time ⟨X⟩ = 2 and ⟨Y⟩ = 3, so the covariance is Cov(X, Y) = ⟨XY⟩ − ⟨X⟩⟨Y⟩ = 0.
Moreover, each p_{i,j} = 1/4 factorizes into the product of the marginal probabilities (1/2)·(1/2),
so X and Y are independent.

(b) We have
⟨XY⟩ = (1 · 4 + 3 · 2)/2 = 5 ,
while ⟨X⟩⟨Y⟩ = 6, so the covariance is Cov(X, Y) = 5 − 6 = −1. Since it is nonzero, the variables
are dependent.
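
A minimal numerical check of both cases (an added Python illustration; the course otherwise uses Maple):

```python
def covariance(p):
    """p maps (x, y) -> probability; returns Cov(X, Y)."""
    ex  = sum(x * q for (x, y), q in p.items())
    ey  = sum(y * q for (x, y), q in p.items())
    exy = sum(x * y * q for (x, y), q in p.items())
    return exy - ex * ey

case_a = {(1, 2): .25, (1, 4): .25, (3, 2): .25, (3, 4): .25}
case_b = {(1, 2): 0.0, (3, 4): 0.0, (1, 4): 0.5, (3, 2): 0.5}
print(covariance(case_a), covariance(case_b))   # 0.0  -1.0
```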

Problem 4.9: The stochastic variables X and Y are independent and Gaussian distributed with
first moment hxi = hyi = 0 and standard deviation σX = σY = 1. Find the characteristic function
for the random variable Z = X 2 +Y 2 , and compute the moments hzi, hz2 i and hz3 i. Find the first
3 cumulants.

Solution 4.9: First let us find the distribution function for Z. Let us do the problem in two
ways.
Simple way:
Let us transform the problem to polar coordinates, x, y → ρ, φ. We have

P(x)P(y) = (1/2π) e^{−(x²+y²)/2} = (1/2π) e^{−ρ²/2} , dx dy = ρ dρ dφ .

Thus,
P_Z(z) = (1/2π) ∫ ρ dρ dφ e^{−ρ²/2} δ(z − ρ²) = (1/2) e^{−z/2} .

General way:

P_Z(z) = ∫ dx P_X(x) ∫ dy P_Y(y) δ(x² + y² − z)
       = ∫_{−√z}^{√z} dx/(2√(z − x²)) P_X(x) ∑_± P_Y(±√(z − x²))
       = (1/π) ∫_0^{√z} dx/√(z − x²) exp[−x²/2 − (z − x²)/2] = (1/2) e^{−z/2} .

Now, the characteristic function is


f_Z(k) = ⟨e^{ikz}⟩ = (1/2) ∫_0^∞ dz e^{(ik−1/2)z} = 1/(1 − 2ik) .

Then,
⟨zⁿ⟩ = (−i)ⁿ dⁿf_Z(k)/dkⁿ |_{k→0} .
Performing the calculations (see the Maple file), we obtain
⟨z⟩ = 2 , ⟨z²⟩ = 8 , ⟨z³⟩ = 48 .
Thus,
C₁(Z) = ⟨z⟩ = 2 , C₂(Z) = ⟨z²⟩ − ⟨z⟩² = 4 ,
C₃(Z) = ⟨z³⟩ − 3⟨z⟩⟨z²⟩ + 2⟨z⟩³ = 48 − 3 · 2 · 8 + 2 · 2³ = 16 .
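
The moments can also be checked by a Monte Carlo sample. The Python snippet below is an added
illustration (it plays the role of the Maple verification mentioned above):

```python
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.standard_normal((2, 10**6))
z = x**2 + y**2                       # exponentially distributed with mean 2

for n, exact in [(1, 2), (2, 8), (3, 48)]:
    print(n, np.mean(z**n), exact)    # sample moments vs. 2, 8, 48
```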

Problem 4.10: A die is loaded so that even numbers occur 3 times as often as odd numbers.
(a) If the die is thrown N = 12 times, what is the probability that odd numbers occur 3 times?
If it is thrown N = 120 times, what is the probability that odd numbers occur 30 times?
Use the binomial distribution.
(b) Compute the same quantities as in part (a), but use the Gaussian distribution
Note: For parts (a) and (b) compute your answers to four places.
(c) Compare answers (a) and (b). Plot the binomial and Gaussian distributions for the case
N = 12.

Solution 4.10:
(a) For this random process the probability of an odd number in a single throw is p = 1/4 and of an
even number q = 1 − p = 3/4. The probability of having k odd numbers, according to the binomial
distribution, is
P^b_N(k) = [N!/(k!(N − k)!)] p^k q^{N−k} .
Consequently (see Maple file),
P^b_12(3) = [12!/(3! 9!)] · 3⁹/4¹² ≈ 0.2581 .
In a similar way, P^b_120(30) ≈ 0.0839.
(b) The mean number of odd throws is ⟨k⟩ = N/4 and the variance is σ² = Npq = 3N/16. Thus
P^G_N(k) = 1/√(3Nπ/8) exp[ −(k − N/4)²/(3N/8) ] .
As a result,
P^G_12(3) ≈ 0.2660 , P^G_120(30) ≈ 0.0841 .
(c) Plots are shown in the Maple file.
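
For readers without the Maple worksheet, the comparison of parts (a) and (b) can be reproduced with
the following illustrative Python snippet (an addition, not part of the original solution):

```python
from math import comb, exp, pi, sqrt

def binom(N, k, p=0.25):
    return comb(N, k) * p**k * (1 - p)**(N - k)

def gauss(N, k, p=0.25):
    var = N * p * (1 - p)                       # Npq = 3N/16
    return exp(-(k - N * p)**2 / (2 * var)) / sqrt(2 * pi * var)

print(binom(12, 3), gauss(12, 3))      # ~0.258 vs ~0.266
print(binom(120, 30), gauss(120, 30))  # ~0.084 vs ~0.084: the agreement improves with N
```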

Problem 4.11: A book with 700 misprints contains 1400 pages.

(a) What is the probability that one page contains no mistakes?

(b) What is the probability that one page contains 2 mistakes?

Solution 4.11: Assume that all ways of distributing the misprints over the pages are equally likely.
Then n misprints can be distributed over g pages in

w_g(n) = (g + n − 1)!/[(g − 1)! n!]

ways. If a given page contains no misprints, the n misprints are distributed over the remaining g − 1
pages. The probability of that event is

P(0) = w_{g−1}(n)/w_g(n) = [(g + n − 2)!/((g − 2)! n!)] · [(g − 1)! n!/(g + n − 1)!] = (g − 1)/(g + n − 1) .

In our case g = 1400 and n = 700, thus P(0) = 1399/2099 ≈ 0.67. In a similar way, if a given page
contains exactly two misprints, then the other n − 2 misprints are distributed over the remaining g − 1
pages, which can be done in w_{g−1}(n − 2) ways. Consequently,

P(2) = w_{g−1}(n − 2)/w_g(n) = (g − 1) n (n − 1)/[(g + n − 1)(g + n − 2)(g + n − 3)] .

We get P(2) ≈ 0.07.
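
The two probabilities follow directly from the formulas above; the short Python evaluation below is an
added illustration (the names g, n correspond to the pages and misprints of this problem):

```python
from fractions import Fraction

g, n = 1400, 700
P0 = Fraction(g - 1, g + n - 1)
P2 = Fraction((g - 1) * n * (n - 1),
              (g + n - 1) * (g + n - 2) * (g + n - 3))
print(float(P0), float(P2))            # ~0.666 and ~0.074
```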

Problem 4.12: Three old batteries and a resistor, R, are used to construct a circuit. Each battery
has a probability p to generate voltage V = v₀ and a probability 1 − p to generate voltage
V = 0. Neglect the internal resistance of the batteries.
Find the average power, ⟨V²⟩/R, dissipated in the resistor if

(a) the batteries are connected in series, and

(b) the batteries are connected in parallel.

In cases (a) and (b), compare with the power that would be dissipated if all batteries were
certain to generate voltage v₀.

(c) How would you realize the conditions and results of this problem in the laboratory?

Solution 4.12: Let us start with series connection and find the probability to find a given value
of the voltage. We have 4 possible values of the voltage. In units v0 they are 0, 1, 2, and 3. We
have,

P(0) = (1 − p)3 , P(1) = 3p(1 − p)2 , P(2) = 3p2 (1 − p) , P(3) = p3 .

Consequently,
⟨W⟩ = ⟨V²⟩/R = (v₀²/R) [3p(1 − p)² + 12p²(1 − p) + 9p³] = (3v₀²/R) p (1 + 2p) .

Now let us turn to the parallel connection. Let us for a while introduce the conductances Gᵢ = 1/Rᵢ of
the batteries, as well as the conductance G = 1/R of the resistor. The voltage at the resistor is given
by the expression
V = ∑_{i=1,2,3} Eᵢ Gᵢ / (G + ∑_{i=1,2,3} Gᵢ) .
Here the Eᵢ are the electromotive forces of the batteries. Now the crucial point is to assume something
about the internal resistances of the batteries. Let us for simplicity assume that they are equal and
that Gᵢ ≫ G. (Actually, this is a very bad assumption for old batteries.) Under this assumption
V = (E₁ + E₂ + E₃)/3, and the average power is then 9 times smaller than for the series connection.
To check this kind of phenomenon in the lab one has to find many old batteries at a junk-yard, first
check the distribution of their voltages and then assemble the circuits.
The problem is badly formulated.

Problem 4.13: Consider a random walk in one dimension. In a single step the probability of
displacement between x and x + dx is given by
P(x) dx = 1/√(2πσ²) exp[ −(x − a)²/(2σ²) ] dx . (4.1)

After N steps the displacement of the walker is S = X₁ + X₂ + · · · + X_N, where Xᵢ is the displacement
after the i-th step. After N steps,
(a) what is the probability density for the displacement, S, of the walker, and
(b) what is his standard deviation?

Solution 4.13:
(a) By definition,
Z Z Z
PS (s) = dx1 P(x1 ) dx2 P(x2 ) . . . dxN P(xN ) δ (s − x1 − x2 . . . − xN ) .

To calculate this integral we expand the δ-function into a Fourier integral,

δ(x) = ∫_{−∞}^{∞} (dk/2π) e^{ikx} .

In this way we get

P_S(s) = ∫_{−∞}^{∞} (dk/2π) e^{iks} ∫ dx₁ e^{−ikx₁} P(x₁) ∫ dx₂ e^{−ikx₂} P(x₂) · · · ∫ dx_N e^{−ikx_N} P(x_N)
       = ∫_{−∞}^{∞} (dk/2π) e^{iks} [ ∫_{−∞}^{∞} dx e^{−ikx} P(x) ]^N = ∫_{−∞}^{∞} (dk/2π) e^{iks} [P(k)]^N . (4.2)

Here
P(k) = ∫_{−∞}^{∞} dx e^{−ikx} P(x)
is the Fourier transform of the single-step probability. From Eq. (4.1) we obtain
P(k) = exp(−ika − k²σ²/2) → [P(k)]^N = exp(−iNka − k²Nσ²/2) .
Now,
P_S(s) = ∫_{−∞}^{∞} (dk/2π) exp[ ik(s − Na) − k²Nσ²/2 ] = 1/√(2πNσ²) exp[ −(s − Na)²/(2Nσ²) ] .

(b) We have (see Maple file),

⟨S⟩ = ∫ ds s P_S(s) = Na , ⟨S²⟩ − ⟨S⟩² = Nσ² .
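
A direct simulation reproduces both results. The parameter values in the Python sketch below are
arbitrary illustrative choices (the snippet itself is an addition to the original solution):

```python
import numpy as np

rng = np.random.default_rng(0)
N, a, sigma = 50, 0.3, 1.2
steps = rng.normal(a, sigma, size=(10**5, N))   # 10^5 independent walks of N Gaussian steps
S = steps.sum(axis=1)
print(S.mean(), N * a)                          # ~15.0 vs 15.0
print(S.var(), N * sigma**2)                    # ~72.0 vs 72.0
```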

Problem 4.14: Consider a random walk in one dimension for which the walker at each step
is equally likely to take a step with displacement anywhere in the interval d − a ≤ x ≤ d + a,
where a < d. Each step is independent of the others. After N steps the displacement of the walker is
S = X₁ + X₂ + · · · + X_N, where Xᵢ is the displacement after the i-th step. After N steps,

(a) what is the average displacement, hSi, of the walker, and

(b) what is his standard deviation?

Solution 4.14: Starting from the definition, we have

⟨S⟩ = ∫ s P_S(s) ds = ∫ s ds ∫ dx₁ P(x₁) · · · ∫ dx_N P(x_N) δ(s − x₁ − · · · − x_N)
    = ∫ dx₁ P(x₁) · · · ∫ dx_N P(x_N) (x₁ + x₂ + · · · + x_N) = N⟨X⟩ = Nd .

In a similar way,

⟨S²⟩ = ∫ dx₁ P(x₁) · · · ∫ dx_N P(x_N) (x₁ + x₂ + · · · + x_N)² .

We know that
(∑ᵢ xᵢ)² = ∑ᵢ xᵢ² + ∑_{i≠j} xᵢ xⱼ .

The number of ordered pairs with i ≠ j is N(N − 1). After averaging we obtain

⟨S²⟩ = N⟨X²⟩ + N(N − 1)⟨X⟩² .

To calculate ⟨X²⟩ let us specify the normalized probability as

P(x) = 1/(2a) for d − a ≤ x ≤ d + a , and P(x) = 0 otherwise.

Then,
⟨X²⟩ = (1/2a) ∫_{d−a}^{d+a} x² dx = d² + a²/3 .
Summarizing, we get (see Maple file)

⟨S²⟩ − ⟨S⟩² = N(d² + a²/3) + N(N − 1)d² − (Nd)² = Na²/3 ,

so the standard deviation is a√(N/3).

Problem 4.15: Consider a random walk in one dimension. In a single step the probability of
displacement between x and x + dx is given by
P(x) dx = (1/π) a/(x² + a²) dx . (4.3)

Find the probability density for the displacement of the walker after N steps. Does it satisfy the
Central Limit Theorem? Should it?

Solution 4.15: Using the method of Problem 4.13 we get

P(k) = e^{−a|k|} → P_S(s) = 2 ∫_0^∞ (dk/2π) e^{−Nak} cos ks = (1/π) Na/(N²a² + s²) .

It does not satisfy the Central Limit Theorem, and it should not, because ⟨X²⟩ for a single step is
divergent.
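
The failure of the Central Limit Theorem is easy to see numerically: the width of the distribution of S
grows like N, not like √N. The Python sketch below is an added illustration with arbitrary parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
a, N = 1.0, 100
S = a * rng.standard_cauchy((10**5, N)).sum(axis=1)   # S is Cauchy with scale Na
# for a Cauchy(0, Na) variable the median of |S| equals Na
print(np.percentile(np.abs(S), 50), N * a)            # both close to 100
```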
Chapter 5

Stochastic Dynamics and Brownian Motion

Quick access: 1 2 3 4 5 6 7 8 9 10 11 Test 1

Problem 5.1: Urn A has initially 1 red and 1 white marble, and urn B initially has 1 white
and 3 red marbles. The marbles are repeatedly interchanged. In each step of the process one
marble is selected from each urn at random and the two marbles selected are interchanged. Let
the stochastic variable Y denote the “configuration of the urns”. Three configurations are possible.
They are shown in Fig. 5.1.

Figure 5.1: To the problem 5.1.

We denote these 3 realizations as y(n) where n = 1, 2 and 3.


(a) Compute the transition matrix Q and the conditional probability matrix P(s0 |s).
(b) Compute the probability vector, hP(s)|, at time s, given the initial condition stated above.
What is the probability that there are 2 red marbles in urn A after 2 steps? After many
steps?
(c) Assume that the realization, y(n), equals to n2 . Compute the moment, hy(s)i, and the
autocorrelation function, hy(0)y(s)i, for the same initial conditions as in part (b).


Solution 5.1: The configurations are shown in Fig. 5.1. By inspection of the figure we get

Q₁₁ = 0 , Q₁₂ = 1 , Q₁₃ = 0 ,
Q₂₁ = (1/2)·(1/4) = 1/8 , Q₂₂ = (1/2)·(1/4) + (1/2)·(3/4) = 1/2 , Q₂₃ = (1/2)·(3/4) = 3/8 ,
Q₃₁ = 0 , Q₃₂ = 1/2 , Q₃₃ = 1/2 .

Thus the transition matrix Q is given by

Q = ( 0    1    0
      1/8  1/2  3/8
      0    1/2  1/2 ) ,

while the initial condition corresponds to the probability vector

⟨p(0)| = (0, 1, 0) .

Below we proceed as in exercise 5.1 of the book [1].


The following procedure is clear from the Maple printout. First we find the eigenvalues of Q,
which are 1 and ±1/4. The corresponding “right” eigenvectors are

[1, 1, {[1, 1, 1]}], [−1/4, 1, {[6, −3/2, 1]}], [1/4, 1, {[4, 1, −2]}] .

Thus we denote λ₁ = 1, λ₂ = −1/4, λ₃ = 1/4, and

|1⟩ = (1, 1, 1) , |2⟩ = (6, −3/2, 1) , |3⟩ = (4, 1, −2) .

The “left” eigenvectors are just the right eigenvectors of the transposed matrix Qᵀ. Maple gives

[1/4, 1, {[1, 2, −3]}], [−1/4, 1, {[1, −2, 1]}], [1, 1, {[1, 8, 6]}] .

Thus,
⟨1| = (1, 8, 6) , ⟨2| = (1, −2, 1) , ⟨3| = (1, 2, −3) .
It is easy to check that

⟨i|k⟩ = nᵢ δ_{ik} , where n₁ = 15, n₂ = 10, n₃ = 12 .

Now we construct the matrices Pᵢ = |i⟩⟨i| (rows separated by semicolons):

P₁ = ( 1 8 6 ; 1 8 6 ; 1 8 6 ) ,
P₂ = ( 6 −12 6 ; −3/2 3 −3/2 ; 1 −2 1 ) ,
P₃ = ( 4 8 −12 ; 1 2 −3 ; −2 −4 6 ) .

Now, P(s₀|s) = P(s − s₀) with

P(s) = ∑ᵢ (λᵢˢ/nᵢ) Pᵢ ,

which gives, element by element,

P(s)₁₁ = 1/15 + (3/5)(−1/4)ˢ + (1/3)(1/4)ˢ ,  P(s)₁₂ = 8/15 − (6/5)(−1/4)ˢ + (2/3)(1/4)ˢ ,  P(s)₁₃ = 2/5 + (3/5)(−1/4)ˢ − (1/4)ˢ ,
P(s)₂₁ = 1/15 − (3/20)(−1/4)ˢ + (1/12)(1/4)ˢ , P(s)₂₂ = 8/15 + (3/10)(−1/4)ˢ + (1/6)(1/4)ˢ ,  P(s)₂₃ = 2/5 − (3/20)(−1/4)ˢ − (1/4)(1/4)ˢ ,
P(s)₃₁ = 1/15 + (1/10)(−1/4)ˢ − (1/6)(1/4)ˢ ,  P(s)₃₂ = 8/15 − (1/5)(−1/4)ˢ − (1/3)(1/4)ˢ ,   P(s)₃₃ = 2/5 + (1/10)(−1/4)ˢ + (1/2)(1/4)ˢ .

As it should be, P(0) = I. In this way we solve part (a).
Now we come to point (b). A straightforward calculation gives

⟨P(s)| = ⟨p(0)| P(s)
       = [ 1/15 − (3/20)(−1/4)ˢ + (1/12)(1/4)ˢ , 8/15 + (3/10)(−1/4)ˢ + (1/6)(1/4)ˢ , 2/5 − (3/20)(−1/4)ˢ − (1/4)(1/4)ˢ ] .

We are asked to find the probability of the configuration 3. After 2 steps it is 3/8, and at s → ∞
we get 2/5.
Part (c) is calculated in a straightforward way. We obtain

⟨y(s)⟩ = 29/5 − (3/10)(−1/4)ˢ − (3/2)(1/4)ˢ ,

⟨y(0)y(s)⟩ = (79/50)(−1)^{1+s} 4^{−s} − (33/4)·16^{−s} + (81/100)(−1)^{1+2s}·16^{−s}
           + (11/2)(−1)^{1+s}·16^{−s} − (3/2)·4^{−s} + 841/25 .
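
Parts (a) and (b) can be verified without the spectral decomposition by simply powering the matrix.
The Python sketch below is an added numerical check (the Maple printout remains the primary reference):

```python
import numpy as np

Q = np.array([[0, 1, 0],
              [1/8, 1/2, 3/8],
              [0, 1/2, 1/2]])
p0 = np.array([0.0, 1.0, 0.0])                 # start in configuration 2

p2 = p0 @ np.linalg.matrix_power(Q, 2)
print(p2[2])                                   # 0.375 = 3/8

# stationary distribution = left eigenvector of Q with eigenvalue 1
w, vl = np.linalg.eig(Q.T)
pi = np.real(vl[:, np.argmin(abs(w - 1))])
pi /= pi.sum()
print(pi)                                      # [1/15, 8/15, 2/5]
```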

Problem 5.2: Three boys, A, B, and C, stand in a circle and play catch (B stands to the right of
A). Before throwing the ball, each boy flips a coin to decide whether to throw to the boy on his
right or left. If “heads” comes up, the boy throws to his right. If “tails” comes up, the boy throws
to his left.
The coin of boy A is “fair” (50% heads and 50% tails), the coin of boy B has heads on both
sides, and the coin of boy C is weighted (75% heads and 25% tails).

(a) Compute the transition matrix, its eigenvalues, and its left and right eigenvectors.

(b) If the ball is thrown at regular intervals, approximately what fraction of the time does each boy
have the ball (assuming they throw the ball many times)?

(c) If boy A has the ball to begin with, what is the chance that he has it after 2 throws? What is the
chance he will have it after s throws?

Solution 5.2: The problem is similar to the previous one. Let us map the notations

A → 1, B → 2, C → 3,

and note that “right” means counterclockwise. Then

Q12 = Q13 = 1/2 , Q23 = 1 , Q21 = 0 , Q31 = 3/4 , Q32 = 1/4 .

At the same time Q₁₁ = Q₂₂ = Q₃₃ = 0. As a result, the transition matrix has the form
 
0 1/2 1/2
Q= 0 0 1 .
3/4 1/4 0

The following solution is similar to the previous problem (see the Maple printout). Note that two of
the eigenvalues and eigenvectors are complex conjugates of each other; as a consequence the probabilities
are real but oscillate in time.
The answers are: after many throws the chance for the boy A to hold the ball is 6/19; starting
from the given initial conditions his chance to hold the ball after 2 throws is 3/8.

Problem 5.3: A trained mouse lives in the house shown in Fig. 5.2.

Figure 5.2: The mouse's house, with rooms A (1), B (2) and C (3).

A bell rings at regular intervals (short compared with the mouse's lifetime). Each time it rings,
the mouse changes rooms. When he changes rooms, he is equally likely to pass through any of
the doors of the room he is in.
Let the stochastic variable Y denote “mouse in a particular room”. There are 3 realizations of
Y:
1. “Mouse in room A”

2. “Mouse in room B”

3. “Mouse in room C”

which we denote as y(1), y(2), and y(3), respectively.

(a) compute the transition matrix, Q, and the conditional probability matrix, P(s0 |s).

(b) . Compute the probability vector, hP(s)|, at time s, given that the mouse starts in room C.
Approximately, what fraction of his life does he spend in each room?

(c) Assume that the realization, y(n) equals to n. Compute the moment, hy(s)i, and the auto-
correlation function, hy(0)y(s)i, for the same initial condition as in part (b).

Solution 5.3: The problem is very much similar to the previous ones. We have

Q12 = Q13 = 1/2 , Q21 = 1/3 , Q23 = Q32 = 2/3 , Q31 = 1/3 , Qii = 0 .

Thus  
0 1/2 1/2
Q =  1/3 0 2/3  .
1/3 2/3 0
Now we do the same as in previous problems (see Maple printout). The answers are obvious
from the printout.

Problem 5.4: The doors in the mouse's house shown in Fig. 5.2 now get periodically larger
and smaller. This causes the mouse's transition rates between the rooms to become time periodic. Let
the stochastic variable Y have the same meaning as in Problem 5.3. The transition matrix is now
given by

Q₁₁(s) = Q₂₂(s) = Q₃₃(s) = 0 , Q₁₂(s) = cos²(πs/2) , Q₁₃(s) = sin²(πs/2) ,
Q₂₁(s) = 1/4 + (1/2) sin²(πs/2) , Q₂₃(s) = 1/4 + (1/2) cos²(πs/2) ,
Q₃₁(s) = (1/2) cos²(πs/2) , Q₃₂(s) = 1/2 + (1/2) sin²(πs/2) .

(a) If initially the mouse is in room A, what is the probability to find it in room A after 2s
room changes?

(b) If initially the mouse is in room B, what is the probability to find it in room A after 2s room
changes? In room B?

Solution 5.4: The solution is rather tedious, but obvious from the Maple file.

Problem 5.5: Consider a discrete random walk on a one-dimensional periodic lattice with
2N + 1 sites (label the sites from −N to N). Assume that the walker is equally likely to move one
lattice site to the left or right at each step. Treat this problem as a Markov chain.
(a) Compute the transition matrix, Q, and the conditional probability, P(s₀|s).
(b) Compute the probability P₁(n, s) at time s, given that the walker starts at site n = 0.
(c) If the lattice has 5 sites (N = 2), compute the probability to find the walker on each site
after s = 2 steps and after s = ∞ steps. Assume that the walker starts at site n = 0.
Hint: Let us understand the word periodic as periodically extended, i.e. possessing the property

P[n + (2N + 1), t] = P(n, t) .

Solution 5.5: The transition probabilities are given by the expressions

wn−1,n = wn,n−1 = wn+1,n = wn,n+1 = 1/2 .

The Master equation has the form

dPn /ds = wn−1,n Pn−1 + wn+1,n Pn+1 − Pn (wn,n−1 + wn,n+1 )


= (1/2) (Pn−1 + Pn+1 − 2Pn ) .

Let us look for the solution in the form

Pn (s) ∝ eλs+iκn .

Then we get
λ = −[1 − cos(κ)] = −2 sin2 (κ/2) .
Now we have to find κ. If we apply cyclic boundary conditions (assuming that the word periodic
in the problem formulation means that) we have to require: P(n + 2N + 1) = P(n) at any s. Thus

eiκ(2N+1) = 1 → κ(2N + 1) = 2πk ,

where k is an integer ranging from −N to N. Thus the general solution is

P(n, s) = ∑_{k=−N}^{N} P_k exp[ −in·2πk/(2N + 1) − 2s sin²(πk/(2N + 1)) ] .

The quantities Pk should be found from the initial conditions. At s = 0 we can rewrite the l. h. s.
of the requirement P(n, 0) = δn,0 in the form
P(n, 0) = ∑_{k=−N}^{N} P_k e^{−in·2πk/(2N+1)} .

On the other hand,

δ_{n,0} = (1/(2N + 1)) ∑_{k=−N}^{N} e^{−in·2πk/(2N+1)} .

From that we immediately get P_k = const(k). The proper constant 1/N is the normalization
coefficient found from the condition
∑_{n=−N}^{N} P(n, s) = 1 .

Finally,

P(n, s) = (1/N) ∑_{k=−N}^{N} exp[ −in·2πk/(2N + 1) − 2s sin²(πk/(2N + 1)) ] ,

where

N = ∑_{n=−N}^{N} ∑_{k=−N}^{N} exp[ −in·2πk/(2N + 1) − 2s sin²(πk/(2N + 1)) ] = 2N + 1 .
Now let us find the conditional probability P(n, s|m, 0). We have to apply the initial condition
P(n, 0) = δ_{n,m}, i.e.

P(n, 0) = ∑_{k=−N}^{N} P_k e^{−in·2πk/(2N+1)} = δ_{n,m} .

Thus,
P_k ∝ e^{im·2πk/(2N+1)} .
Consequently,

P(s − s₀|0)_{mn} = P(n, s|m, 0) = (1/(2N + 1)) ∑_{k=−N}^{N} exp[ −i(n − m)·2πk/(2N + 1) − 2s sin²(πk/(2N + 1)) ] .

As s → ∞ the probability tends to 1/(2N + 1). The calculation for N = 2 is straightforward, see the
Maple file. Finally, the matrix is Q_{mn} = P(1|0)_{mn}.

Problem 5.6: At time t, a radioactive sample contains n identical undecayed nuclei, each with
a probability per unit time, λ, of decaying; the probability of decay during the interval t →
t + ∆t is λ ∆t. Assume that at time t = 0 there are n₀ undecayed nuclei present.

(a) Write down and solve the master equation of the process [find P1 (n,t)].

(b) Compute the mean number of undecayed nuclei and the variance as functions of time.

(c) What is the half-life of this decay process?



Solution 5.6: The master equation has the form


dP(n, t)/dt = −λ n P(n, t) + λ(n + 1) P(n + 1, t) .
This is a linear master equation, and we solve it by the method of the generating function, p. 262
of the book [1]. We introduce

F(z, t) = ∑_n zⁿ P₁(n, t) .
Then we multiply the master equation by zn and sum over n. We get

∂F ∂F
= −λ(z − 1) . (5.1)
∂t ∂z
Indeed,
∑_n zⁿ (n + 1) P(n + 1, t) = (d/dz) ∑_n z^{n+1} P(n + 1, t) = ∂F/∂z .
In a similar way,
∑_n zⁿ n P(n, t) = z ∑_n z^{n−1} n P(n, t) = z ∂F/∂z .
Introducing τ = λt we rewrite Eq. 5.1 in the form

∂F ∂F
+ (z − 1) = 0. (5.2)
∂τ ∂z
The characteristics of this equation meet the equation
dz
dτ = → ln(z − 1) − τ = const .
z−1
Consequently, ³ ´ £ ¤
F(z,t) = Φ e−τ+ln(z−1) = Φ (z − 1)e−τ .

Since P(n, 0) = δn,n0 we get

Φ(z − 1) = zn0 → Φ(u) = (u + 1)n0 .

As result, £ ¤n
F(z,t) = (z − 1)e−τ + 1 0 .
Now, µ ¶
∂F
hn(t)i = = n0 e−τ .
∂z z=1
In a similar way, ¡ ¢
hn2 (t)i − hn(t)i2 = n0 e−τ 1 − e−τ .

To find the probabilities let us expand

[(z − 1)e^{−τ} + 1]^{n₀} = e^{−n₀τ} [z + (e^{τ} − 1)]^{n₀} = e^{−n₀τ} ∑_{n=0}^{n₀} [n₀!/(n!(n₀ − n)!)] zⁿ (e^{τ} − 1)^{n₀−n} .

Thus,

P₁(n, τ) = [n₀!/(n!(n₀ − n)!)] e^{−nτ} (1 − e^{−τ})^{n₀−n} ,

i.e. the number of undecayed nuclei is binomially distributed with single-nucleus survival probability
e^{−τ} = e^{−λt}.
(c) The half-life is the time at which the mean number of undecayed nuclei drops to n₀/2: from
⟨n(t)⟩ = n₀ e^{−λt} we get t_{1/2} = ln 2/λ.
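
Since each nucleus survives independently with probability e^{−λt}, the mean and variance found above
follow at once from the binomial distribution. The Python sketch below is an added numerical check with
arbitrary illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n0, lam, t = 200, 0.1, 5.0
p_survive = np.exp(-lam * t)

n = rng.binomial(n0, p_survive, size=10**5)              # n(t) is binomial(n0, e^{-lambda t})
print(n.mean(), n0 * p_survive)                          # ~121.3
print(n.var(), n0 * p_survive * (1 - p_survive))         # ~47.7
```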

Problem 5.7: Consider a random walk on the lattice shown in Fig. 5.3. The transition rates are

Figure 5.3: A sketch to the problem 5.7.

w12 = w13 = 1/2 , w21 = w23 = w24 = 1/3, w31 = w32 = w34 = 1/3,
w42 = w43 = 1/2, w14 = w41 = 0 .
(a) Write the transition matrix, W, and show that this system obeys detailed balance.
(b) Compute the symmetric matrix, V, and find its eigenvalues and eigenvectors.
(c) Write P1 (n,t) for the case P(n, 0) = δn,1 . What is P(4,t)?

Solution 5.7: The solution is similar to the Exercise 5.2 in p. 245 of the book [1]. The answers
are obvious from the Maple file.

Problem 5.8: Consider a random walk on the lattice shown in Fig. 5.4. The site P absorbs the
walker. The transition rates are
w12 = w13 = 1/2 w21 = w23 = w2P = 1/3,
w31 = w32 = w3P = 1/3, wP1 = wP2 = 1/2 .
(a) Write the transition matrix, M, and compute its eigenvalues and eigenvectors.
(b) If the walker starts at site n = 1 at time t = 0, compute the mean first passage time.


Figure 5.4: A sketch to the problem 5.8.

Solution 5.8: The solution is similar to the Exercise 5.4 of the book [1]. The answers are clear
from the Maple file.

Problem 5.9: Let us consider an RL electric circuit with resistance, R, and inductance, L,
connected in series. Even though no average electromotive force (EMF) exists across the resistor,
the discrete character of the electrons in the circuit and their random motion produce a fluctuating
EMF, ξ(t), whose strength is determined by the temperature, T. This, in turn, induces a fluctuating
current, I(t), in the circuit. The Langevin equation for the current is
d I(t)
L + R I(t) = ξ(t) . (5.3)
dt
If the EMF is delta-correlated,
hξ(t2 )ξ(t1 )i = g δ(t2 − t1 )
and hξ(t)iξ = 0, compute g and the current correlation function hhI(t2 )I(t1 )iξ iT . Assume that
the average magnetic energy LhI02 iT /2 = kT /2 and hI0 iT = 0 where I(0) = I0 .

Solution 5.9: Let us divide Eq. (5.3) by R, denote L/R ≡ τ, and measure time in units of τ and
the current in units of I₁ ≡ √(g/RL). Namely, define θ = t/τ and j(θ) = I(t)/I₁. Then the equation
for the current takes the form
dj/dθ + j = η(θ) , ⟨η(θ₂)η(θ₁)⟩ = δ(θ₂ − θ₁) .
Its solution is
j(θ) = j₀ e^{−θ} + ∫_0^θ ds e^{s−θ} η(s) .
Multiplying the solutions for j(θ₂) and for j(θ₁) and averaging over the realizations of η we get

⟨j(θ₂)j(θ₁)⟩_η = j₀² e^{−(θ₂+θ₁)} + ∫_0^{θ₂} ds₂ ∫_0^{θ₁} ds₁ δ(s₂ − s₁) e^{s₁+s₂−θ₁−θ₂}
             = (j₀² − 1/2) e^{−θ₁−θ₂} + (1/2) e^{−|θ₂−θ₁|} .

The first term must vanish, since in equilibrium the process must be stationary, i.e. it must
depend only on the difference θ₂ − θ₁. Thus,

⟨j₀²⟩_T = 1/2 → ⟨I₀²⟩_T = g/(2RL) .

On the other hand,

L⟨I₀²⟩_T = kT → g = 2RkT .

Consequently,
⟨⟨j(θ₂)j(θ₁)⟩_η⟩_T = (1/2) e^{−|θ₂−θ₁|} .
In dimensional variables,

⟨⟨I(t₂)I(t₁)⟩_ξ⟩_T = I₁² ⟨⟨j(θ₂)j(θ₁)⟩_η⟩_T = (kT/L) e^{−|t₂−t₁|/τ} .
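
In dimensionless units the current is an Ornstein-Uhlenbeck process, and the exponential correlation can
be checked by a direct simulation. The Python sketch below is an added illustration (Euler-Maruyama
integration with arbitrary step and ensemble sizes):

```python
import numpy as np

rng = np.random.default_rng(0)
dt, nsteps, ntraj = 1e-3, 4000, 2000
j = np.zeros(ntraj)
traj = np.empty((nsteps, ntraj))
for i in range(nsteps):
    # dj = -j dtheta + dW with unit-strength white noise
    j += -j * dt + np.sqrt(dt) * rng.standard_normal(ntraj)
    traj[i] = j

lag = 500                                    # theta2 - theta1 = 0.5
c = np.mean(traj[2000] * traj[2000 + lag])
print(c, 0.5 * np.exp(-lag * dt))            # both ~0.30
```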

Problem 5.10: Due to the random motion and discrete nature of electrons, an LRC series
circuit experience a random electromotive force (EMF), ξ(t). This, in turn, induces a random
varying charge, Q(t), on the capacitor plates and a random current, I(t) = dQ(t)/dt, through the
resistor and inductor. The random charge, Q(t), satisfies the Langevin equation

d2Q dQ Q
L + R + = ξ(t) .
dt 2 dt C

Assume that EMF is delta-correlated,

hξ(t2 )ξ(t1 )i = g δ(t2 − t1 )

and hξ(t)iξ = 0. Assume that the circuit is at the temperature T and the average magnetic energy
in the inductor and average electric energy in the capacitor satisfy the equipartition theorem,

1 1
LhI02 iT = hQ2 iT ,
2 2C 0

where Q(0) = Q0 , I(0) = I0 . Assume that hI0 iT = 0, hQ0 iT = 0, hQ0 I0 iT = 0.

(a) Compute the current correlation function hhI(t2 )I(t1 )iξ iT .

(b) Compute the variance of the charge distribution, hh(Q(t) − Q0 )2 iξ iT .



Solution 5.10: The problem is exactly the same as Exercise 5.5 of the book [1]. After the mapping
Q → x, I → v, L → m, 1/LC → ω₀², R → γ
we obtain the same equation. Thus,
⟨⟨I(t + τ)I(t)⟩_ξ⟩_T = (kT/L) e^{−Γ|τ|} C(|τ|) ,
where
C(τ) = cosh δτ − (Γ/δ) sinh δτ , Γ = R/2L , δ = √(Γ² − ω₀²) .
The variance of the charge distribution can be calculated by integrating the formal solution

I(t) = I₀ e^{−Γt} − Q₀ (ω₀²/δ) e^{−Γt} sinh δt + (1/L) ∫_0^t dt′ ξ(t′) e^{−Γ(t−t′)} C(t − t′)

over time:

Q(t) = Q₀ + I₀ (1 − e^{−Γt})/Γ − Q₀ (ω₀²/2δ) [ (1 − e^{−(Γ−δ)t})/(Γ − δ) − (1 − e^{−(Γ+δ)t})/(Γ + δ) ]
       + (1/L) ∫_0^t dt″ ∫_0^{t″} dt′ ξ(t′) e^{−Γ(t″−t′)} C(t″ − t′) .

Now we average over ξ and T (the linear terms vanish) to obtain

⟨⟨Q²(t)⟩_ξ⟩_T = ⟨Q₀²⟩_T [ 1 − (ω₀²/2δ)( (1 − e^{−(Γ−δ)t})/(Γ − δ) − (1 − e^{−(Γ+δ)t})/(Γ + δ) ) ]²
              + ⟨I₀²⟩_T [ (1 − e^{−Γt})/Γ ]²
              + (g/L²) ∫_0^t dt₁ ∫_0^t dt₂ ∫_0^t dt₃ ∫_0^t dt₄ e^{−Γ(t₁−t₂+t₃−t₄)} C(t₁ − t₂) C(t₃ − t₄) δ(t₂ − t₄) .

Now we substitute ⟨Q₀²⟩_T = CkT, ⟨I₀²⟩_T = kT/L, and g = 2RkT, and then use the δ-function. The
last term becomes

(2RkT/L²) ∫_0^t dt₁ ∫_0^t dt₂ ∫_0^t dt₃ e^{−Γ(t₁−2t₂+t₃)} C(t₁ − t₂) C(t₃ − t₂) .

The remaining integrations must be done explicitly, which results in lengthy expressions.

Problem 5.11: Consider a Brownian particle of mass m in one dimension in the presence of
a constant force f₀, in a fluid with friction constant γ, and in the presence of a delta-correlated
random force ξ(t) such that ⟨ξ(t₁)ξ(t₂)⟩_ξ = g δ(t₁ − t₂) and ⟨ξ(t)⟩_ξ = 0. Assume that the velocity
and displacement of the Brownian particle at t = 0 are v₀ and x₀, respectively.
(a) Compute the velocity correlation function hv(t1 )v(t2 )iξ .

(b) Compute the displacement variance h(x(t) − x0 )2 i.



Solution 5.11: The equation of motion for the particle is

dv(t)/dt + (γ/m) v(t) − f₀/m = (1/m) ξ(t) , dx/dt = v(t) .

Its solution is

v(t) = v₀ e^{−(γ/m)t} + (f₀/γ)(1 − e^{−(γ/m)t}) + (1/m) ∫_0^t dt′ e^{(γ/m)(t′−t)} ξ(t′) .

The quantity
v_d(t) = (f₀/γ)(1 − e^{−(γ/m)t})
is just the drift velocity in the force f₀. Using the results of Sec. 5.E.1 of the book [1] we obtain

⟨⟨[v(t + τ) − v_d(t + τ)][v(t) − v_d(t)]⟩_ξ⟩_T = (kT/m) e^{−(γ/m)|τ|} .

Since ⟨v(t)⟩_T = v_d(t), we obtain

⟨⟨v(t + τ)v(t)⟩_ξ⟩_T = v_d(t + τ) v_d(t) + (kT/m) e^{−(γ/m)|τ|} .

In a similar way, we can introduce the drift displacement

x_d(t) = ∫_0^t dt′ (f₀/γ)(1 − e^{−(γ/m)t′}) = (f₀/γ) [ t − (m/γ)(1 − e^{−(γ/m)t}) ] .

Finally we obtain

⟨⟨[x(t) − x₀ − x_d(t)]²⟩_ξ⟩_T = (2kT/γ) [ t − (m/γ)(1 − e^{−(γ/m)t}) ] .
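
The displacement variance can be checked by integrating the Langevin equation numerically. The Python
sketch below assumes the thermal noise strength g = 2γkT (as in Sec. 5.E.1 of the book); the parameter
values are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
m, gamma, kT, f0 = 1.0, 2.0, 1.0, 0.5
dt, nsteps, ntraj = 1e-3, 3000, 4000
g = 2 * gamma * kT                               # assumed noise strength, <xi xi> = g delta

v = rng.normal(0.0, np.sqrt(kT / m), ntraj)      # thermal initial velocities
x = np.zeros(ntraj)
for _ in range(nsteps):
    v += (-(gamma / m) * v + f0 / m) * dt + np.sqrt(g * dt) / m * rng.standard_normal(ntraj)
    x += v * dt

t = nsteps * dt
xd = (f0 / gamma) * (t - (m / gamma) * (1 - np.exp(-gamma * t / m)))
msd_theory = (2 * kT / gamma) * (t - (m / gamma) * (1 - np.exp(-gamma * t / m)))
print(np.mean((x - xd) ** 2), msd_theory)        # both ~2.5
```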

Tests for training

Problem 5.12: Stochastic processes


Consider an electron (spin S = 1/2) bound to an impurity center and assume that the temperature
T is much less than the energy difference between the levels of orbital motion. Consequently,
only the spin degrees of freedom are excited.
Let the system be placed in an external magnetic field B; the additional energy in the magnetic field
is ±βB, where ± corresponds to the values ±1/2 of the spin component along the field. Here
β = |e|ħ/2mc is the Bohr magneton and c is the velocity of light (the Gaussian system of units is used).
The magnetic moment of the states with S_z = ±1/2 is ∓β.

(a) Find the stationary probabilities for the states with Sz = ±1/2, P± .
- Find average magnetic moment of the system as function of temperature T and magnetic field
B.

(b) Find mean square fluctuation of magnetic moment, h(∆M)2 i and the ratio h(∆M)2 i/hMi2 .
- Show that
β2
h(∆M 2 )i = . (5.4)
cosh2 (βB/T )

- Discuss limiting cases of large and low temperatures. Explain these results qualitatively.

(c) Write down the Master Equation for the probability Pi (t) to find the system in the state i = ±
at time t.
- Using the result for the stationary case find the relation between the transition probabilities
W+− and W−+ for the transition from the state “+” to the state “-” and for the reverse transition,
respectively. Express the probabilities W₊₋ and W₋₊ through the quantity W = W₊₋ + W₋₊.
- Derive the Master Equation for the population difference, P(t) ≡ w₊(t) − w₋(t).

(d) Write down the Master Equation for the conditional probability, P( f ,t|i,t0 ), to find the system
in the state f at time t under the conditions that at time t0 it was in the state i.
- Specify initial conditions to this equation.
- Show that solution of this equation with proper initial condition is

P( f ,t|i,t0 ) = w f + (δi f − w f )e−W |t−t0 | (5.5)

where w f is the stationary probability to find the system in the state f .


Hint: It is convenient to express the set of Master Equations for the quantities P( f ,t|i,t0 ) at
different i and f as an equation for the 2×2 matrix P̂ with matrix elements Pf i (t|t0 ) ≡ P( f ,t|i,t0 ).
The solution of the matrix Master Equation ∂P̂/∂T = Ŵ P̂ at t ≥ t0 can then be searched in in the
matrix as P̂ ∝ ∑k Ĉ(k) e−λk (t−t0 ) . Here λk are eigenvalues of the matrix W , while Ĉ(k) are time-
independent matrices. They are determined from the intial conditions and from the fact that at
t − t0 → ∞ the conditional probabilities tend to the stationary ones.

(e) Using Eq. (5.5) show that the correlation function h∆M(t)∆M(0)i has the form

h∆M(t)∆M(0)i = h(∆M)2 i e−W |t| (5.6)

where h(∆M)2 i is given by Eq. (5.4) while W = W+− +W−+ .


- Find the fluctuation spectrum.

Solution 5.12: (a) Denote w± the probabilities of the states Sz = ±1/2. We have

M = −β (w+ − w− ) . (5.7)

According to the Gibbs distribution,

w+ /w− = e−2βB/T . (5.8)



Since only 2 spin levels are involved,

w+ + w− = 1 . (5.9)

Combinig Eqs. (5.8) and (5.9) we obtain,


w₊ = 1/(e^{2βB/T} + 1) , w₋ = e^{2βB/T}/(e^{2βB/T} + 1) , w₋ − w₊ = tanh(βB/T) . (5.10)
As a result,
M = β tanh(βB/T) . (5.11)

(b) We get:

⟨M²⟩ = β²(w₊ + w₋) = β² , ⟨M⟩² = β² tanh²(βB/T) . (5.12)

As a result,

⟨(ΔM)²⟩ = β²/cosh²(βB/T) , ⟨(ΔM)²⟩/⟨M⟩² = 1/sinh²(βB/T) . (5.13)
At low temperatures, T ≪ βB, fluctuations are exponentially suppressed because the spin is aligned
with the magnetic field. At high temperatures, T ≫ βB, ⟨M⟩ → 0 while ⟨(ΔM)²⟩ → β², so the relative
fluctuations grow without bound.

(c) The equation reads,


∂wi (t)
= − ∑ [Wi→s wi (t) −Ws→i ws (t)] (5.14)
∂t s=±

- In the stationary situation, using the detailed equilibrium principle,

Wi→s wi (t) = Ws→i ws (t)

we obtain
W+− w− W W
= = e2βB/T , , W−+ = , W+− = (5.15)
W−+ w+ 1 + e2βB/T 1 + e−2βB/T
- Coming back to Eq. (5.14) we get
∂w₊(t)/∂t = −W₊₋ w₊(t) + W₋₊ w₋(t) ,
∂w₋(t)/∂t = −W₋₊ w₋(t) + W₊₋ w₊(t) . (5.16)
Subtracting the equations and using w₊ + w₋ = 1 we arrive at

∂P(t)/∂t = −W [P(t) − P̄] , (5.17)

where P̄ = w̄₊ − w̄₋ is the stationary population difference; the deviation of P(t) from its stationary
value relaxes exponentially with the rate W.

(d) We consider a stationary random process, and P( f ,t|i,t0 ) is a function of t −t0 . Let us assume
t0 = 0, t ≥ 0 and denote Let us for brevity denote P( f ,t|i, 0) ≡ Pf i (t). The Master Equation for
Pf i (t) has the form
∂Pf i (t)
= − ∑[W f →s Pf i (t) −Ws→ f Psi (t)] . (5.18)
∂t s
The initial condition to this equation is

Pf i (t) = δi f at t → 0 . (5.19)

Let us introduce the matrix P̂(t) with matrix elements P_{fi}(t) = P(f, t|i, 0). Then Eq. (5.18) can
be rewritten as

∂P̂(t)/∂t = Ŵ P̂(t) , with Ŵ = ( −W_{f→s}  W_{s→f}
                                   W_{f→s} −W_{s→f} ) . (5.20)

It is natural to search for the solution of Eq. (5.18) in the form P̂(t) = ∑_k Ĉ^{(k)} e^{λ_k t}. Then
the λ_k are the eigenvalues of the matrix Ŵ, while the Ĉ^{(k)} are numerical matrices to be determined
from the initial conditions at t = 0 and from the fact that P_{fi}(t) → w_f at t → ∞. Here w_f is the
stationary probability of the state f.
Setting the determinant of Ŵ − λÎ to zero (Î being the unit matrix) we get

λ₁ = 0 , λ₂ = −(W_{f→s} + W_{s→f}) ≡ −W .

Then, in general,


P_{fi}(t) = C^{(1)}_{fi} + C^{(2)}_{fi} e^{−Wt} ,

where Ĉ(k) are time-independent matrices. From the initial conditions (5.19),
C^{(1)}_{fi} + C^{(2)}_{fi} = δ_{fi} .

On the other hand, at t → ∞ the conditional probability tends to the stationary one, w_f. Consequently,

C^{(1)}_{fi} = w_f .
Finally, because of the time reversibility, Pf i (t) depends only on |t|. As result,

Pf i (t) = w f + (δ f i − w f )e−W |t|

This expression coincides with Eq. (5.5).

(e) By definition,

h∆M(t)∆M(0)i = hM(t)M(0)i − hMi2 = ∑ Mi M f [P( f ,t|i, 0) − w f ]wi .


if

Using Eq. (5.5), one can rewrite the above equation as

h∆M(t)∆M(0)i = e−W |t| ∑ Mi M f (δi f − w f )wi = h(∆M)2 ie−W |t|


if

This expresssion coincides with Eq. (5.6).


The fluctuation spectrum is

(ΔM)_ω = ∫_{−∞}^{∞} dt e^{−iωt} ⟨ΔM(t)ΔM(0)⟩ = ⟨(ΔM)²⟩ ∫_{−∞}^{∞} dt e^{−iωt−W|t|} .

The result reads

(ΔM)_ω = 2⟨(ΔM)²⟩ W/(ω² + W²) .
Chapter 6

The Foundations of Statistical Mechanics

This chapter will be skipped this year

Chapter 7

Equilibrium Statistical Mechanics

Quick access: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24

Problem 7.1: Compute the structure function for N noninteracting harmonic oscillators, each
with frequency ω and mass m. Assume the system has total energy E. Using this structure
function and the microcanonical ensemble, compute the entropy and the heat capacity of the
system.

Solution 7.1: One can use either the classical or the quantum Hamiltonian. Let us do the problem
classically. For brevity let us assume that the oscillators are one-dimensional. Then the Hamiltonian is

H = ∑_{i=1}^{N} ( pᵢ²/2m + mω²xᵢ²/2 ) .

The total number of states with energy below E is

Ω(E) = ∫ dx₁ dx₂ · · · dx_N ∫ dp₁ dp₂ · · · dp_N Θ[ E − ∑_{i=1}^{N} ( pᵢ²/2m + mω²xᵢ²/2 ) ] .

Here
Θ(x) = 1 for x > 0 , Θ(x) = 0 for x < 0
is the Heaviside unit step function. We have
dΘ(x)/dx = δ(x) ,
thus the structure function Σ(E) = ∂Ω(E)/∂E is given by

Σ(E) = ∫ dx₁ dx₂ · · · dx_N ∫ dp₁ dp₂ · · · dp_N δ[ E − ∑_{i=1}^{N} ( pᵢ²/2m + mω²xᵢ²/2 ) ] .

Let us use the following trick. Denote qᵢ = xᵢ√(mω²/2) and q_{i+N} = pᵢ/√(2m) for i = 1..N, and put
R² = E. Then

Ω = (2/ω)^N ∫ dq₁ · · · dq_{2N} Θ( R² − ∑_{j=1}^{2N} qⱼ² ) ≡ (2/ω)^N Ω̃(R) .

From dimensionality considerations,
Ω̃(R) = A_{2N} R^{2N} .
To find the coefficient A_{2N} let us consider the integral

∫_{−∞}^{∞} dq₁ · · · ∫_{−∞}^{∞} dq_{2N} exp( −∑_{j=1}^{2N} qⱼ² ) = ( ∫_{−∞}^{∞} dq e^{−q²} )^{2N} = π^N .

On the other hand, the same integral is

∫_0^∞ dR (∂Ω̃/∂R) e^{−R²} = 2N A_{2N} ∫_0^∞ dR R^{2N−1} e^{−R²} = N A_{2N} Γ(N) .

Thus
A_{2N} = π^N/(N Γ(N)) .
Finally,
Ω(E) = [1/(N Γ(N))] (2πE/ω)^N , Σ(E) = [1/Γ(N)] (2πE/ω)^N (1/E) .
Now, the entropy is

S/k = ln[ Ω(E)/(N! h^N) ] = ln[ (E/ħω)^N/(N!)² ] = N ln(E/ħω) − 2N ln N + 2N ,

where we used Stirling's formula. Now,
1/T = ∂S/∂E = Nk/E → E = NkT , C = kN .
This is the classical answer. The quantum calculation is demonstrated in Exercise 7.1 of the
book [1].

Problem 7.2: A system consists of N noninteracting, distinguishable two-level atoms. Each


atom can exist in one of two states, E0 = 0, and E1 = ε. The number of atoms in energy level E1
is n1 . The internal energy of the system is U = n0 E0 + n1 E1 .
(a) Compute the entropy of the system as a function of internal energy.

(b) Compute the heat capacity of a fixed number of atoms, N.



Solution 7.2: Since E₀ = 0, the energy U can be realized if n₁ ≡ U/ε particles are in state
1 and N − U/ε = N − n₁ particles are in state 0. The number of ways to do that is
W = N!/[n₁!(N − n₁)!] .
Consequently,
S/k = ln{ N!/[n₁!(N − n₁)!] } ≈ N ln N − N − n₁ ln n₁ + n₁ − (N − n₁) ln(N − n₁) + (N − n₁) .
Finally,
S/k ≈ N ln N − (U/ε) ln(U/ε) − (N − U/ε) ln(N − U/ε) .
To find the temperature we follow the usual procedure:
1/T = ∂S/∂U ≈ (k/ε) ln[(Nε − U)/U] .
As a result,
U = Nε/(e^{ε/kT} + 1) , C = Nk (ε/kT)² e^{ε/kT}/(e^{ε/kT} + 1)² .
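
The heat capacity obtained above has the familiar Schottky form, with a peak near kT ≈ 0.4 ε. A quick
numerical check of the formulas (an added Python illustration, with k = ε = N = 1 as arbitrary units):

```python
import numpy as np

eps, N, k = 1.0, 1.0, 1.0
T = np.linspace(0.05, 3.0, 400)
U = N * eps / (np.exp(eps / (k * T)) + 1)
C = N * k * (eps / (k * T))**2 * np.exp(eps / (k * T)) / (np.exp(eps / (k * T)) + 1)**2

C_num = np.gradient(U, T)             # numerical dU/dT
print(np.max(np.abs(C - C_num)))      # small: analytic C agrees with dU/dT
print(T[np.argmax(C)])                # Schottky peak near kT ~ 0.42 eps
```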

Problem 7.3: A lattice contains N normal lattice sites and N interstitial lattice sites. N identical
atoms sit on the lattice, M on the interstitial sites and N − M on the normal sites (N ≫ M ≫ 1).
If an atom occupies a normal sites, its energy E = 0. If an atom occupies an interstitial site, its
energy is E = ε. Compute the internal energy and heat capacity as a function of temperature for
this lattice.

Solution 7.3: The internal energy is U = Mε, so we have to express M as a function of temper-
ature. The number of ways in which one can place M atoms at N interstitial sites is
N!
Wi = .
M!(N − M)!
The rest N − M atoms can be placed at N normal states in the same number of ways. Thus
µ ¶2
N! S N!
W= → = 2 ln .
M!(N − M)! k M!(N − M)!
Using Stirling's formula we get
S/k ≈ 2 [N ln N − N − M ln M + M − (N − M) ln(N − M) + (N − M)]
    ≈ 2 [N ln N − M ln M − (N − M) ln(N − M)] ,
T⁻¹ = (2k/ε) ln[(N − M)/M] , M = U/ε .
Thus,
U = Nε/(e^{ε/2kT} + 1) ≈ Nε e^{−ε/2kT} , C ≈ 2kN(ε/2kT)² e^{−ε/2kT} .

Problem 7.4: Consider a lattice with N spin-1 atoms with magnetic moment µ. Each atom can
be in one of 3 spin states:, Sz = −1, 0, +1. Let n−1 , n0 and n1 denote the respective number of
atoms in each of those spin states. Find the total entropy and the configuration which maximizes
the total entropy. What is the maximum entropy? (Assume that no magnetic field is present,
so all atoms have the same energy. Also assume that atoms on different lattice sites cannot be
exchanged, so they are distinguishable).

Solution 7.4: We have


N!
W = ,
n−1 !n1 !(N − n−1 − n1 )!
lnW ≈ N ln N − n−1 ln n−1 − n1 ln n1 − (N − n−1 − n1 ) ln(N − n−1 − n1 ) .

Differentiating this function by n−1 and n1 and equating the derivatives to 0 we get:

n1 = (N − n1 − n−1 ) = n−1 → n1 = n0 = n−1 = N/3

The maximum entropy is


Smax = kN ln 3 .

Problem 7.5: A system has 3 distinguishable molecules at rest, each with a quantized magnetic
moment which can have z-components +µ/2 or −µ/2. Find an expression for the distribution
function, fi (i denotes the configuration), which maximizes entropy subject to the conditions

∑ fi = 1 , ∑ Mi,z fi = γµ ,
i i

where Mi,z is the magnetic moment of the system in the i-th configuration. For the case γ = 1/2
compute the entropy and compute fi .

Solution 7.5: Let us measure magnetic moment in units µ. We have 4 configurations with
magnetic moments in parentheses

↑↑↑ (3/2), ↑↑↓ (1/2), ↑↓↓ (−1/2), ↓↓↓ (−3/2) .

The configurations with magnetic moments ±1/2 are triply degenerate. Denoting the configurations
as 1, 2, 3 and 4, we maximize the functional

S/k + α ∑ᵢ gᵢ fᵢ + β ∑ᵢ gᵢ Mᵢ,z fᵢ = ∑ᵢ gᵢ fᵢ (α + βMᵢ,z − ln fᵢ) ,

where α and β are Lagrange multipliers. From this expression we get

α + βMᵢ,z − ln fᵢ − 1 = 0 → fᵢ = exp(α − 1 + βMᵢ,z) .



Here gᵢ is the degeneracy factor: g₁,₄ = 1, g₂,₃ = 3. We have 2 equations for α and β:

∑ᵢ gᵢ exp(α − 1 + βMᵢ,z) = 1 ,
∑ᵢ gᵢ Mᵢ,z exp(α − 1 + βMᵢ,z) = γ ,

or

2e^{α−1} [ cosh(3β/2) + 3 cosh(β/2) ] = 1 ,
2e^{α−1} [ (3/2) sinh(3β/2) + (3/2) sinh(β/2) ] = γ .

For γ = 1/2 this yields

cosh(3β/2) + 3 cosh(β/2) = 3 sinh(3β/2) + 3 sinh(β/2) .

Denoting x = e^{β/2} we can rewrite this equation as

x⁶ − 3x² − 2 = 0 ,

the meaningful solution being x = √2, or β = ln 2.
Consequently,

α = 1 + ln(2√2/27) .
As a result, we have

fᵢ = N 2^{Mᵢ,z} , N = 2^{3/2}/27 .

The entropy is

S/k = −∑ᵢ gᵢ fᵢ ln fᵢ = −∑ᵢ gᵢ fᵢ (ln N + Mᵢ,z ln 2) = −ln N − (1/2) ln 2 = ln(27/4) .

Problem 7.6: A fluid in equilibrium is contained in the insulated box of volume V . The fluid is
divided (conceptually) into m cells. Compute the variance of the enthalpy fluctuations, h(∆Hi )2 i
in the i-th cell. (For simplicity assume the fluctuations occur at fixed particle number, Ni ).
Hint: Use P and S as independent variables.

Solution 7.6: We have


∆H = T ∆S +V ∆P .
Thus
h(∆H)2 i = T 2 h(∆S)2 i +V 2 h(∆P)2 i .
From the general theory the fluctuation probability is proportional to

w ∝ e^{−W_min/kT} , where W_min = ∆U − T ∆S + P ∆V (7.1)

is the minimum work needed to carry out reversibly the given change in the thermodynamic
quantities of the small part considered (relative to which the remainder of the body acts as a medium).
Expanding ∆U in the series, we obtain
· ¸
1 ∂2U ∂U ∂2U
Wmin = 2
(∆S) + 2 ∆S ∆V + 2 (∆V ) 2
2 ∂S2 ∂S∂V ∂V
· µ ¶ µ ¶ ¸
1 ∂U ∂U
= ∆S ∆ + ∆V ∆
2 ∂S V ∂V S
1
= (∆S ∆T − ∆P ∆V ) .
2
Thus µ ¶
∆P ∆V − ∆S ∆T
w ∝ exp . (7.2)
2kT
One can use this general formula and choose different couples as independent variables. In our
case the proper pair is S P. We get
µ ¶ µ ¶
∂V ∂V
∆V = ∆P + ∆S ,
∂P S ∂S P
µ ¶ µ ¶
∂T ∂T
∆T = ∆P + ∆S
∂P S ∂S P
µ ¶
∂T T
= ∆P + ∆S .
∂P S CP
The Maxwell relation following from dH = T dS + V dP is

(∂V/∂S)_P = (∂T/∂P)_S .

Thus
∆V = (∂V/∂P)_S ∆P + (∂T/∂P)_S ∆S ,
and the cross terms cancel when the above relations are substituted into Eq. (7.2). We obtain

w ∝ exp[ (1/2kT)(∂V/∂P)_S (∆P)² − (1/2kC_P)(∆S)² ] .

As a result,
h∆S ∆Pi = 0 , h(∆S)2 i = kCP , h(∆P)2 i = −kT (∂P/∂V )S .
Using these relations we get

h(∆H)2 i = kT 2CP −V 2 kT (∂P/∂V )S .

Problem 7.7: A fluid in equilibrium is contained in the insulated box of volume V . The fluid
is divided (conceptually) into m cells. Compute the variance of the internal energy fluctuations,
h(∆Ui )2 i in the i-th cell. (For simplicity assume the fluctuations occur at fixed particle number,
Ni ). What happens to the internal energy fluctuations near the critical point?

Solution 7.7: We have,


µ ¶ µ ¶ · µ ¶ ¸
∂U ∂U ∂P
∆U = ∆V + ∆T = T − P ∆V +CV ∆T .
∂V T ∂T V ∂T V

Now we can use Eq. (7.2) and choose V, T as independent variables. We have:

∆S = (∂S/∂T)_V ∆T + (∂S/∂V)_T ∆V = (C_V/T) ∆T + (∂P/∂T)_V ∆V ,
∆P = (∂P/∂T)_V ∆T + (∂P/∂V)_T ∆V .

Then we get

w ∝ exp[ −(C_V/2kT²)(∆T)² + (1/2kT)(∂P/∂V)_T (∆V)² ] ,
⟨∆T ∆V⟩ = 0 , ⟨(∆T)²⟩ = kT²/C_V , ⟨(∆V)²⟩ = −kT(∂V/∂P)_T .

Finally,
· µ ¶ ¸2 µ ¶
2 ∂P ∂V
h(∆U) i = −kT T −P + kT 2CV .
∂T V ∂P T
Near the critical point h(∆U)2 i → ∞.

Problem 7.8: What is the partition function for a van der Waals gas with N particles? Note
that the result is phenomenological and might involve some guessing. It is useful to compare it
to the partition function for an ideal gas. Remember that the particles are indistinguishable , so
when using a partition function one must insert a counting factor. Use this partition function to
compute the internal energy, U(N, T,V ), the pressure, P(N, T,V ), and the entropy, S(N, T,V ).

Solution 7.8: The partition function is (β ≡ (kT )−1 ),


Z Z
1
ZN = dr1 . . . drN dp1 . . . dpN e−βH ({r,p}) .
N!h3N
First we can integrate out the momenta,
Z
(k) 1 1
dp1 . . . dpN e−β ∑i pi /2m =
2
ZN = ,
N!h3N N!λ3N
T
√ (0) (k)
where λT = h/ 2πmkT . Since the partition function for the ideal gas is ZN = V N ZN Then we
are left with
Z Z h i
ZN 1 −βV ({ri }) 1 −βV ({ri })
(0)
= dr 1 . . . dr N e = 1 + dr 1 . . . dr N e − 1 ≡ 1 + zN .
Z VN VN
N

Let us try to consider rarefied gas where only pair interaction is important. Then we can choose
N(N − 1)/2 pair to get
Z h i
N2 −βV (r1 −r2 )
zn ≈ dr dr
1 2 e − 1 .
2V 2
Introducing the virial coefficient
Z ³ ´
1
B(T ) = dr 1 − e−βV (r) ,
2
where r = r1 − r2 , we obtain
zN = −N 2 B(T )/V .
We get
(0)
A = A0 − kT ln(1 + zn ) ≈ A0 − kT zN = A0 + kT N 2 B(T )/V , A0 ≡ −kT ln ZN .

In the spirit of the van der Waals model let us assume that the interaction is strong at r ≤ 2r₀ and
weak at r > 2r₀. Then

B(T) = 2π ∫_0^{2r₀} r² dr (1 − e^{−βV(r)}) + 2π ∫_{2r₀}^{∞} r² dr (1 − e^{−βV(r)})
      ≈ 16πr₀³/3 + 2πβ ∫_{2r₀}^{∞} r² V(r) dr ≡ b − a/kT ,

where b = 16πr₀³/3 and a = −2π ∫_{2r₀}^{∞} r² V(r) dr > 0 (the interaction is attractive at large r). Finally,

N2 ³ a ´
zn = − b− .
V kT
For an ideal gas,
µ ¶3/2
eV mkT
A0 = −NkT ln = N f (T ) − NkT ln(eV /N) .
N 2πh̄2

Thus,
A = N f(T) − NkT ln(e/N) − NkT ln(V − Nb) − N²a/V ,
and since ln(V − Nb) ≈ ln V − Nb/V we can also write
A ≈ A₀ − NkT ln(1 − Nb/V) − N²a/V .
From this we get

P = −∂A/∂V = NkT/(V − Nb) − N²a/V² , S = −∂A/∂T = S₀ + kN ln(1 − Nb/V) ,
U = U₀ − N²a/V = (3/2)NkT − N²a/V .
V 2 V

Problem 7.9: Consider a solid surface to be a two-dimensional lattice with Ns sites. Assume
that N_a atoms (N_a ≪ N_s) are adsorbed on the surface, so that each site has either 0 or 1 adsorbed
atom. An adsorbed atom has energy E = −ε, where ε > 0. Assume that atoms on the surface do
not interact with each other.

(a) If the surface is at temperature T , compute chemical potential of the adsorbed atoms as a
function of T , ε, and Na /Ns (use the canonical ensemble).

(b) If the surface is in equilibrium with an ideal gas of similar atoms at temperature T , compute
the ratio Na /Ns as a function of pressure, P of the gas. Assume the gas has number density
n. (Hint: equate chemical potentials of the adsorbed atoms and the gas).

Solution 7.9:

(a) The canonical partition function is

1 Ns −βEi Ns !
Z= ∑
Na ! i=1
e =
Na !(Ns − Na )!
eβNa ε .

Here β = 1/kT . Since Na ¿ Ns ,


NsNa βNa ε
Z≈ e .
Na !
Equating this expression to e−Na βµ we get

µ = −ε − kT ln(Ns /Na ) .

(b) For an ideal gas, µ ¶


Pλ3T
µ = kT ln .
kT
Equating chemical potentials, we obtain
µ 3¶
ε Ns PλT Na Pλ3T ε/kT
− − ln = ln → = e .
kT Na kT Ns kT
This expression is valid at high temperatures when

nλ3T ¿ e−ε/kT .

Problem 7.10: Consider a two-dimensional lattice in the x − y plane with sides of
length Lx and Ly which contains N atoms (N is very large) coupled by nearest-neighbor harmonic
forces.
(a) Compute the Debye frequency for this lattice.
(b) In the limit T → 0, what is the heat capacity?

Solution 7.10: We follow the section 7.E of the book [1]. Since N À 1 the boundary conditions
are not important, and we can apply zero boundary conditions. As in this exercise, we use
continual approximation and replace
Z
Lx Ly
∑... →
(2π)2
dkx dky . . . .
k

The the density of states is


Z
Lx Ly
g(ω) = ∑ δ (ω − ωk,α ) =
(2π)2 ∑
dk δ (ω − ωk,α ) .
k,α α

Here α denotes the vibrational mode. In two-dimensional lattice we have two modes, longitudi-
nal, l, and transverse, t. In the Debye approximation it is assumed that the dispersion law ωk,α is
linear and isotropic:
π
ωk,α = cα k , ki = si , si = 1, 2, . . . .
Li
where cα is the sound velocity for each mode. Since in two-dimensional case
Z Z Z
dk . . . = kdk dφ . . . = 2π kdk . . .

we obtain µ ¶
Lx Ly 1 1
g(ω) = + ω.
2πh̄2 c2l ct2

Since the total number of vibrations should be normalized to 2N, where 2 is the number of modes
and N is number of atoms, we should get
Z ωD
g(ω) dω = 2N .
0

Consequently µ ¶
Lx Ly 1 1
2N = + ω2D
4πh̄2 c2l ct2
and s µ ¶
1 Lx Ly 1 1
= + .
ωD 8Nπh̄2 c2l ct2
The density of states can be expressed as
4N
g(ω) = ω.
ω2D
As result,
Z TD /T ½
4kN x 3 ex 2kN, T →∞
C= dx x = 2
(h̄βωD )2 0 (e − 1)2 24ζ(3)kN(T /TD ) , T → 0 .
Here ζ(x) is the Riemann ζ-function, ζ(3) ≈ 1.20, TD = h̄ωD /k.

Problem 7.11: A cubic box (with infinitely hard walls) of volume V = L3 contains an ideal gas
of N rigid HCl molecules (assume the effective distance between the H atom and the Cl atom
d = 1.3 Å).
(a) If L = 1.0 cm, what is the spacing between the translation energy levels?
(b) Write the partition function for this system (include the translational and rotational contri-
butions). At what temperature do rotational degrees of freedom become important.
(c) Write expressions for the Helmholtz free energy, the entropy, and the heat capacity of
the system for temperatures where the rotational degrees of freedom make a significant
contributions.

Solution 7.11: Let us put the origin of the reference frame in the cube’s corner. Since at the
walls the wave function should vanish, for each component α of the momentum p we get:
sin(pα L/h̄) = 0 → kα = nα π/L , kα ≡ pα /h̄ .
Here nα is an integer number. Thus the energy levels of translational motion are given by the
expression
h̄2 π2
εn = ε0 (nx + ny + nz ) , εt =
2 2 2
.
2mL2

Thus the spacing between the levels is

εni +1 − εni = (2ni + 1)εt .

The rotational degrees of freedom are determined by the moment if inertia,


m1 m2 2
I= d ,
m1 + m2

the energy of rotation being

h̄2
εK = εr K(K + 1) , K = 0, 1, . . . , εr = .
2I
Thus the spacing between the levels is

εK+1 − εK = 2(K + 1)εr .

It is important that each rotational level is degenerate by the factor (2K + 1).
Thus Z = Ztr · Zrot where
à !3 ½

V /λ3T , βεt ¿ 1,
Ztr = ∑e −βεt n2x
=
exp(−βε t ), βεt À 1,
nx =1
∞ ½
kT /εr , βεr ¿ 1,
Zrot = ∑ (2K + 1) e−βεr K(K+1) =
K=0
1 + 3e −2βε r , βεr ¿ 1.

The following is straightforward.

A = −kT ln Ztr − kT ln Zrot ;


µ ¶ µ ¶ µ 2 ¶
∂A ∂S ∂ A
S = − , C=T = −T .
∂T V ∂T V ∂T 2 V

The calculations are obvious.

Problem 7.12: An ideal gas is composed of N “red” atoms of mass m , N “blue” atoms of mass
m, and N “green” atoms of mass m. Atoms of the same color are indistinguishable. Atoms of
different color are distinguishable.

(a) Use the canonical ensemble to compute the entropy of this gas.

(b) Compute the entropy of ideal gas of 3N “red” atoms of mass m. Does if differ from that of
the mixture. If so, by how much.

Solution 7.12: Since the atoms of different color distinguishable the partition function is just
the product of partition function. For each component
à !N µ ¶N
1 V
ZN = ∑ e
2
−βp /2m
= .
k N! λ3T
From this expression,
µ ¶3/2
eV mkT
AN = −NkT ln = N f (T ) − NkT ln(eV /N) .
N 2πh̄2
Here µ ¶3/2
mkT
f (T ) = .
2πh̄2
Thus, Z = Z_N³, and
A = 3N f(T) − 3NkT ln(eV/N) , S = 3Nk ln(eV/N) − 3N f′(T) .
In the case of a single-color gas of 3N particles we have
S₁ = 3Nk ln(eV/3N) − 3N f′(T) → S − S₁ = 3Nk ln 3 .

Problem 7.13: An ideal gas consists of a mixture of “green” and “red” spin-1/2 particles. All
particles have mass m. A magnetic field, B, is applied to the system. The “green” particles have
magnetic moment γG , and the “red” particles have magnetic moment γR , where γR < γG . Assume
that temperature is high enough that Fermi statistics can be neglected. The system will be in
equilibrium if chemical potentials the the “red” and the “green” gases are equal. Compute the
ratio NR /NG , where NR is the number of “red” atoms and NG is the number of “red” particles.
Use the canonical ensemble (no other ensembles will be accepted).

Solution 7.13: Let us consider one of the gases since they are noninteracting. We have Z =
Ztr · Zs where Ztr is the usual transport partition function while
³ ´N
Zs = e−βγB + eβγB = 2N coshN (βγB) .

Since µ ¶ µ ¶
kT 1 V cosh(βγB) N eV cosh(βγB)
µ = − ln = −kT ln .
N N! λT NλT
Equating chemical potentials for “red” and “green” gases we get
cosh(βγR B) cosh(βγG B) NR cosh(βγR B)
= → = .
NR NG NG cosh(βγG B)

Problem 7.14: Consider a one-dimensional lattice with N lattice sites and assume that i-th
lattice site has spin si = ±1. The Hamiltonian describing this lattice is
N
H = −ε ∑ si si+1 .
i=1

Assume periodic boundary condition, so sN+1 = s1 . Compute the correlation function, hs1 s2 i.
How does it behave at very high temperature and at very low temperature?

Solution 7.14: We have,


à !
ZN (T ) = ∑ ... ∑ exp βε ∑ si si+1 .
s1 =±1 sN =±1 i

As recommended in the book [1], we introduce the matrix

P̄ = ( e^{βε}   e^{−βε}
      e^{−βε}  e^{βε} ) = e^{βε} I + e^{−βε} σ₁ .

We have
hsi |P̄|si+1 i = eβsi si+1 .
Then
ZN (T ) = ∑ ... ∑ hs1 |P̄|s2 ihs2 |P̄|s3 i . . . hsN |P̄|s1 i = Tr P̄N .
s1 =±1 sN =±1

The matrix P̄ has eigenvalues

λ1 = 2 cosh βε , λ2 = 2 sinh βε .

Thus
ZN (β) = 2N (coshN βε + sinhN βε) .
We have

⟨sᵢ sᵢ₊₁⟩ = (1/Nε) ∂ln Z_N/∂β = [ cosh^{N−1}(βε) sinh(βε) + sinh^{N−1}(βε) cosh(βε) ] / [ cosh^N(βε) + sinh^N(βε) ] .

At low temperature, βε ≫ 1, cosh(βε) ≈ sinh(βε) and ⟨sᵢsᵢ₊₁⟩ → 1; we have ferromagnetic ordering.
At high temperature, cosh(βε) → 1 and sinh(βε) → 0, so ⟨sᵢsᵢ₊₁⟩ ≈ βε ≪ 1.
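
For a small ring the transfer-matrix formula can be compared with a brute-force sum over all spin
configurations. The Python sketch below is an added check (N = 8 and βε = 0.7 are arbitrary test values):

```python
import numpy as np
from itertools import product

def corr_exact(N, beta_eps):
    """<s1 s2> by direct summation over all 2^N states with periodic boundaries."""
    num = den = 0.0
    for s in product([-1, 1], repeat=N):
        w = np.exp(beta_eps * sum(s[i] * s[(i + 1) % N] for i in range(N)))
        num += w * s[0] * s[1]
        den += w
    return num / den

def corr_transfer(N, x):
    c, s = np.cosh(x), np.sinh(x)
    return (c**(N - 1) * s + s**(N - 1) * c) / (c**N + s**N)

print(corr_exact(8, 0.7), corr_transfer(8, 0.7))   # identical
```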



Problem 7.15: In the mean field approximation to the Ising lattice, the order parameter, hsi,
satisfied the equation µ ¶
Tc
hsi = tanh hsi ,
T
where Tc = ν ε/2k where ε is the strength of the coupling between lattice sites and ν is the number
of nearest neighbors.
(a) Show that hsi is the following temperature dependence
½
1 − e−2Tc /T , T → 0,
hsi ≈ p
3(1 − T /Tc ), T → Tc .

(b) Compute the jump in the heat capacity at T = Tc .

(c) Compute the magnetic susceptibility, χ(T, N)B=0 , in the neighborhood of Tc both for T >
Tc and T < Tc . What is the critical exponent for both cases?

Solution 7.15:
(a) Since tanh ξ ≈ 1 − 2 e−2ξ at large ξ, we get

hsi ≈ 1 − 2 e−2hsiTc /T → hsi ≈ 1 − 2 e−2Tc /T .

At small ξ, tanh ξ ≈ ξ − ξ3 /3, we obtain


µ ¶3
Tc hsi3 Tc
hsi ≈ hsi + .
T 3 T
Thus, p
hsi ≈ 3(1 − T /Tc ) .

(b) We will use notations of Sec. 7.F.2 of the book [1]. The the Hamiltonian is
N
νε
H = − ∑ E(ε, B) si , E(ε, B) = hsi + µB . (7.3)
i=1 2

Thus U = −NEhsi = −NEhsi, and


µ ¶ · ¸µ ¶
∂E(ε, B)hsi ∂E ∂hsi
C = −N = −N E + hsi .
∂T N,B ∂hsi ∂T N,B

At T → Tc − 0 and B = 0 we get
p 3 Nνε
hsi = 3(1 − T /Tc ) → C|Tc −0 = .
2 Tc
At T > Tc the average spin is zero, thus the jump in specific heat is given by C|Tc −0 .

(c) To get the magnetic susceptibility we have to differentiate the magnetization M = Nµ⟨s⟩
with respect to B:
χ = Nµ (∂⟨s⟩/∂B)_{N,B→0} .
Defining η = (∂⟨s⟩/∂B)_{N,B→0} we get the following equation at B → 0:

η = [1/cosh²(⟨s⟩Tc/T)] [ (βνε/2) η + βµ ] .

Its solution is
η = βµ / [ cosh²(⟨s⟩Tc/T) − Tc/T ] .
Finally,
χ = Nµ²β / [ cosh²(⟨s⟩Tc/T) − Tc/T ] = (2Nµ²/νε) (Tc/T) / [ cosh²(⟨s⟩Tc/T) − Tc/T ] .

Above Tc we have ⟨s⟩ = 0, and

χ = Nµ²/[k(T − Tc)] .

Below Tc we get

cosh²(⟨s⟩Tc/T) ≈ [ 1 + (⟨s⟩²/2)(Tc/T)² ]² ≈ 1 + 3(1 − T/Tc)(Tc/T)² ,

so that cosh²(⟨s⟩Tc/T) − Tc/T ≈ 2(Tc − T)/Tc, and below Tc

χ = Nµ²/[2k(Tc − T)] .

Thus χ diverges at T → Tc as χ ∝ |T − Tc|⁻¹ on both sides of the transition, i.e. the critical exponent
is γ = 1 (with a mean-field amplitude ratio of 2).
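
The self-consistency equation of part (a) and its two limiting forms are easy to check numerically. The
Python sketch below is an added illustration; it solves ⟨s⟩ = tanh(⟨s⟩/t) with t = T/Tc by simple
fixed-point iteration:

```python
import numpy as np

def order_parameter(t, n_iter=2000):
    """Solve <s> = tanh(<s>/t) for t = T/Tc < 1 by fixed-point iteration."""
    s = 1.0
    for _ in range(n_iter):
        s = np.tanh(s / t)
    return s

for t in [0.99, 0.9, 0.2]:
    s = order_parameter(t)
    # compare with the asymptotes sqrt(3(1 - t)) near Tc and 1 - 2 exp(-2/t) at low T
    print(t, s, np.sqrt(3 * (1 - t)), 1 - 2 * np.exp(-2 / t))
```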

Problem 7.16: The density of states of an ideal Bose-Einstein gas is


g(E) = αE² for E > 0 , g(E) = 0 for E < 0 ,
where α is a constant. Compute the temperature for Bose-Einstein condensation.

Solution 7.16: We have the equation for βc = 1/kTc :


Z ∞ Z ∞ 2
g(E) dE α x dx 2αζ(3)
N= = 3 = .
Emin e c − 1 βc
β E 0 ex − 1 β3c
Here ζ(x) is the Riemann zeta-function. Finally,
µ ¶1/3
N
kTc = .
2αζ(3)

Problem 7.17: An ideal Bose-Einstein gas consists of noninteracting bosons with mass m
which have an internal degree of freedom which can be described by assuming, that the bo-
son are two-level atoms. Bosons in the ground state have the energy E0 = p2 /2m, while bosons
in the excited state have the energy E1 = p2 /2m + ∆, where p is the momentum and ∆ is the
excitation energy. Assume that ∆ À kT . Compute the Bose-Einstein condensation temperature,
Tc , for this gas of two-level bosons. Does the existence of internal degree of the internal degree
of freedom raise or lower the condensation temperature?

Solution 7.17: We start with calculation of density of states.

Z µ ¶ Z µ ¶
p2 4π p2 √ m3/2
g0 (E) = dp δ E − = 3 p dpδ E −
2
=a E, a≡ √ .
2m h 2m 2π2 h̄3

In a similar way,

g1 (E) = a E − ∆ .

The equation for βc = 1/kTc is:

3/2 Z ∞√ Z ∞ p √ √
Nβc x dx x − βc ∆ dx ζ(3/2) π π −βc ∆
= + ≈ + e .
a 0 ex − 1 βc ∆ ex − 1 2 2

We can solve this equation by iterations since ∆ is large. Denoting

µ √ ¶2/3
aζ(3/2) π
β0 =
2N

we get
à √ !2/3 √
πa −β0 ∆ 2 πa −β0 ∆
βc = β0 1 + e ≈ β0 + p e .
2Nβ0
3/2
6N β0

Thus the internal degree of freedom decreases the condensation temperature.

Problem 7.18: Compute the Clausius-Clapeyron equation for an ideal Bose-Einstein gas and
sketch the coexistence curve. Show that the line of transition points in the P − v plane obeys the
equation
P v^{5/3} = (2πħ²/m) g_{5/2}(1)/[g_{3/2}(1)]^{5/3} .

Solution 7.18: Let us first recall the notations. The quantity z = eβµ is called the fugacity, the
distribution function is written as
1 z
hnl i = β(ε −µ) = βε .
e l −1 e l −z
The functions gi (z) are introduced as
Z ∞ ³ ´ ∞
4 zα
∑ 5/2 ,
2
g5/2 (z) = − √ x2 dx ln 1 − z−x = (7.4)
π 0 α=1 α

d zα
g3/2 (z) = z g5/2 (z) = ∑ 3/2 . (7.5)
dz α=1 α

Now, the grand canonical potential is


Z ∞ ¡ ¢
V (kT )5/2 m3/2 √ kT
Ω= √ dy y ln 1 − z e−y = −V 3 g5/2 (z) .
2π2 h̄3 0 λT
Thus the equation of state is
kT
P=
g5/2 (z) .
λ3T
In equilibrium with the condensate at T = Tc ,
kTc
Pc = g5/2 (1) .
λ3T
The critical particle density in the condensation point is
g3/2 (1)
hnc i = .
λ3T
q
Denoting λT = a/T 1/2 where a = 2πh̄2 /mk we get at the critical point:

3/2
à !2/3
N g3/2 (1) Tc Na3
= → Tc = .
V a3 V g3/2 (1)

Thus,
kTc g5/2 (1) g5/2 (1) 2πh̄2 g5/2 (1)
Pc = = ka2 = .
λ3T [vg3/2 (1)]5/3 m [vg3/2 (1)]5/3

Problem 7.19: Show that the pressure, P, of an ideal Bose-Einstein gas can be written in the
form P = αu, where “u” is the internal energy per unit volume and a is a constant.
(a) what is u?
(b) What is α?

Solution 7.19: Let us integrate by parts in the integral representation (7.4) of g_{5/2}(z) to get


Z ∞ ¡ ¢
2 √
g5/2 (z) = − √ dy y ln 1 − z e−y
π 0
Z
2 2 ∞ z
= √ · dy y3/2 y
π 3 0 e −z

Calculation the average energy


Z ∞
z
U= g(ε) ε dε
0 eβε − z

and recalling that g(ε) ∝ ε we immediately see that

2 2
PV = U → P= u.
3 3

Thus, α = 2/3.

Problem 7.20: Electrons in a piece of Cu metal can be assumed to behave as an ideal Fermi
gas. Cu metal in the solid state has mass density ρ = 9 gr/cm3 . Assume that each Cu atom
donates an electron to the Fermi gas. Assume the system is at T = 0 K.

(a) Compute the Fermi energy, εF , of the electron gas.

(b) Compute the Fermi temperature, TF = εF /k.

Solution 7.20: Let us start with the calculation of the density of states:

g(\varepsilon) = \frac{2V \cdot 4\pi}{h^3} \int p^2\, dp\, \delta\!\left( \varepsilon - \frac{p^2}{2m} \right)
 = \frac{\sqrt{2}\, V m^{3/2}}{\pi^2 \hbar^3}\, \sqrt{\varepsilon} \equiv V g_0 \sqrt{\varepsilon} .

Here g_0 = \sqrt{2}\, m^{3/2}/\pi^2\hbar^3; we have taken into account that the spin degeneracy for electrons is 2. Integrating the density of states from 0 to \varepsilon_F and equating the result to the number of electrons, we obtain

\frac{2}{3}\, g_0\, \varepsilon_F^{3/2} = n_e \quad\Rightarrow\quad \varepsilon_F = \left( \frac{3 n_e}{2 g_0} \right)^{2/3} .

Here n_e = N_e/V = \rho/m_{\rm Cu} is the electron density.
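A numerical estimate in Maple (a sketch; the constants are rounded SI values, and the molar mass of Cu, ≈ 63.5 g/mol, is an assumed standard value not given in the problem):

> kB := 1.381e-23: hbar := 1.055e-34: me := 9.109e-31: u := 1.661e-27:
> ne := 9.0e3/(63.5*u):                   # electron density in m^(-3)
> eF := hbar^2*(3*Pi^2*ne)^(2/3)/(2*me):  # equivalent to (3*ne/(2*g0))^(2/3)
> evalf(eF/1.602e-19); evalf(eF/kB);      # Fermi energy in eV, Fermi temperature in K

This gives εF ≈ 7 eV and TF ≈ 8 × 10^4 K.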



Problem 7.21: The density of states of an ideal Fermi-Dirac gas is


f(E) = \begin{cases} D, & \text{if } E > 0 , \\ 0, & \text{if } E < 0 , \end{cases}
where D is a constant.
(a) Compute the Fermi energy.
(b) Compute the specific heat at low temperature.

Solution 7.21: We have

N = D \int_0^{\infty} \frac{dE}{e^{\beta(E - \mu)} + 1} = \frac{D}{\beta} \int_{-\beta\mu}^{\infty} \frac{dx}{e^{x} + 1} = \frac{D}{\beta} \ln\!\left( 1 + e^{\beta\mu} \right) .

Thus

e^{\beta\mu} = e^{\beta N/D} - 1 \quad\Rightarrow\quad \mu = \frac{1}{\beta} \ln\!\left( e^{\beta N/D} - 1 \right) \approx \frac{N}{D} - kT\, e^{-N/DkT} .
The energy is then

U = D \int_0^{\infty} \frac{E\, dE}{e^{\beta(E - \mu)} + 1} .

Let us use the trick called the Sommerfeld expansion. Consider an auxiliary integral,

I = \int_0^{\infty} \frac{f(E)\, dE}{e^{(E - \mu)/kT} + 1}
  = kT \int_{-\mu/kT}^{\infty} \frac{f(\mu + kTz)\, dz}{e^{z} + 1}
  = kT \int_0^{\mu/kT} \frac{f(\mu - kTz)\, dz}{e^{-z} + 1} + kT \int_0^{\infty} \frac{f(\mu + kTz)\, dz}{e^{z} + 1}
  = \int_0^{\mu} f(E)\, dE + kT \int_0^{\infty} \frac{f(\mu + kTz) - f(\mu - kTz)}{e^{z} + 1}\, dz .

Here we have used the fact that (e^{-z} + 1)^{-1} = 1 - (e^{z} + 1)^{-1} and that e^{\mu/kT} \gg 1. Expanding the integrand, we obtain

I \approx \int_0^{\mu} f(E)\, dE + 2(kT)^2 f'(\mu) \int_0^{\infty} \frac{z\, dz}{e^{z} + 1} + \ldots
  = \int_0^{\mu} f(E)\, dE + \frac{\pi^2}{6} (kT)^2 f'(\mu) + \ldots .
In our case, f(E) = DE, f'(\mu) = D, and we get

\frac{U}{D} \approx \frac{\mu^2}{2} + \frac{\pi^2}{6} (kT)^2 .

The specific heat is then

\frac{C}{D} = \mu\, \frac{\partial \mu}{\partial T} + \frac{\pi^2}{3}\, k^2 T .

The first term is exponentially small at low temperatures, and

C \approx \frac{\pi^2}{3}\, D k^2 T = \frac{\pi^2}{3}\, N k\, \frac{kT}{\mu} ,

where we used N \approx D\mu.
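The dimensionless integral used in the Sommerfeld expansion can be checked with Maple (a sketch):

> int(z/(exp(z)+1), z = 0..infinity);

which should return π²/12 ≈ 0.822, giving the factor π²/6 after multiplication by 2.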

Problem 7.22: Compute the magnetization of an ideal gas of spin-1/2 fermions in the presence
of magnetic field. Assume that fermions each have magnetic moment µe . Find the expression for
magnetization in the limit of weak magnetic field and T → 0 K.

Solution 7.22: We will calculate here only the spin (Pauli) susceptibility. Since the spin degree of freedom leads to additional energy \pm\mu_e B we get

\Omega(\mu) = \frac{1}{2}\,\Omega_0(\mu - \mu_e B) + \frac{1}{2}\,\Omega_0(\mu + \mu_e B) ,

where \Omega_0 is the grand potential in the absence of magnetic field,

\Omega_0(\mu) = -\frac{2}{3}\, U = -\frac{2A}{3} \int_0^{\infty} \frac{E^{3/2}\, dE}{e^{(E - \mu)/kT} + 1} , \qquad A = \frac{\sqrt{2}\, V m^{3/2}}{\pi^2 \hbar^3} .

As a result,

-\frac{\Omega(\mu)}{A} = \frac{1}{3} \int_0^{\infty} \frac{E^{3/2}\, dE}{e^{(E - \mu - \mu_e B)/kT} + 1} + \frac{1}{3} \int_0^{\infty} \frac{E^{3/2}\, dE}{e^{(E - \mu + \mu_e B)/kT} + 1} .
In small magnetic fields or at large temperature,

\Omega(\mu) \approx \Omega_0(\mu) + \frac{(\mu_e B)^2}{2}\, \frac{\partial^2 \Omega_0(\mu)}{\partial \mu^2} .

Consequently,

M = -\left( \frac{\partial \Omega}{\partial B} \right)_{T,V,\mu} = -\mu_e^2 B\, \frac{\partial^2 \Omega_0(\mu)}{\partial \mu^2} .

Since \partial\Omega_0/\partial\mu = -N we get

M = \mu_e^2 B \left( \frac{\partial N}{\partial \mu} \right)_{T,V} .
This formula works in the wide region \mu_e B \ll \mu. In very strong magnetic fields the analysis is more complicated. First, the chemical potential is given by the expression

\frac{N}{A} = \frac{1}{3} \left[ (\mu + \mu_e B)^{3/2} + (\mu - \mu_e B)^{3/2} \right] .

In a similar way,

-\frac{\Omega(\mu)}{A} = \frac{2}{15} \left[ (\mu + \mu_e B)^{5/2} + (\mu - \mu_e B)^{5/2} \right] .

These equations are written assuming that \mu_e B \le \mu; otherwise only one term is present. Physically it means that only the spin-up states are occupied. Let us consider the case of a very strong magnetic field, \mu_e B \gg \mu. Then

\Omega(\mu) = -\frac{2A}{15} (\mu + \mu_e B)^{5/2} \quad\Rightarrow\quad M = \frac{A\mu_e}{3} (\mu + \mu_e B)^{3/2} = N\mu_e .
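For the weak-field limit at T → 0 asked for in the problem, one can evaluate (∂N/∂µ) from the zero-temperature relation N = (2A/3)µ^{3/2} (the B = 0 limit of the expression above); a sketch of this step:

\left( \frac{\partial N}{\partial \mu} \right)_{T=0} = A\,\mu^{1/2} = \frac{3N}{2\varepsilon_F}
\quad\Rightarrow\quad
M \approx \frac{3 N \mu_e^2}{2\varepsilon_F}\, B ,

i.e. the standard Pauli paramagnetic response with susceptibility χ = 3nµe²/2εF, where n = N/V and εF = µ(T = 0).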

Problem 7.23: Show that the entropy for an ideal Fermi-Dirac gas (neglecting spin) can be written in the form

S = -k \sum_l \left[ \langle n_l \rangle \ln \langle n_l \rangle + (1 - \langle n_l \rangle) \ln (1 - \langle n_l \rangle) \right] ,

where \langle n_l \rangle = \left( e^{\beta(\varepsilon_l - \mu)} + 1 \right)^{-1}.

Solution 7.23: The number of ways to distribute N_l identical particles among G_l states with not more than 1 particle per state is just

W_l = \frac{G_l!}{N_l! (G_l - N_l)!} .

Using Stirling's formula we obtain

S = k \sum_l \ln W_l \approx k \sum_l \left[ G_l \ln G_l - N_l \ln N_l - (G_l - N_l) \ln (G_l - N_l) \right] .

Using the mean occupation number \langle n_l \rangle = N_l/G_l we can rewrite this formula as

S = -k \sum_l G_l \left[ \langle n_l \rangle \ln \langle n_l \rangle + (1 - \langle n_l \rangle) \ln (1 - \langle n_l \rangle) \right] .

If we ignore spin, then Gl = 1.

Problem 7.24: To the lowest order in the density, find the difference in the pressure and the
isothermal compressibility between an ideal boson and ideal fermion gas. Assume that the
fermions and bosons have the same mass and both are spin-less. (Note: You are now consid-
ering fairly high temperature).

Solution 7.24: Assuming that the density of states is given by the expression

g(\varepsilon) = g_0 V \varepsilon^{1/2} ,

we obtain

\Omega = -\frac{2}{3}\, g_0 V \int_0^{\infty} \frac{\varepsilon^{3/2}\, d\varepsilon}{e^{\beta(\varepsilon - \mu)} \pm 1}
 \approx -\frac{2}{3}\, g_0 V \int_0^{\infty} \varepsilon^{3/2}\, d\varepsilon\, e^{\beta(\mu - \varepsilon)} \left( 1 \mp e^{\beta(\mu - \varepsilon)} \right) .

Introducing the fugacity z = e^{\beta\mu} we get

P = \frac{2}{3}\, g_0 (kT)^{5/2} z \int_0^{\infty} \xi^{3/2}\, d\xi\, e^{-\xi} \left( 1 \mp z e^{-\xi} \right)
  = g_0 (kT)^{5/2} z\, \frac{\sqrt{\pi}}{2} \left( 1 \mp \frac{\sqrt{2}}{8}\, z \right) .

The normalization condition in this case gives

N = g_0 V (kT)^{3/2} z \int_0^{\infty} \xi^{1/2}\, d\xi\, e^{-\xi} \left( 1 \mp z e^{-\xi} \right)
  = z g_0 V (kT)^{3/2}\, \frac{\sqrt{\pi}}{2} \left( 1 \mp \frac{\sqrt{2}}{4}\, z \right) .

As a result, to lowest order,

z_0 = \frac{2}{\sqrt{\pi}\, g_0 (kT)^{3/2}}\, \frac{N}{V} .
We obtain

P \approx \frac{NkT}{V}\, \frac{1 \mp (\sqrt{2}/8)\, z_0}{1 \mp (\sqrt{2}/4)\, z_0}
  \approx \frac{NkT}{V} \left( 1 \pm \frac{\sqrt{2}}{8}\, z_0 \right)
  \approx \frac{NkT}{V} \left( 1 \pm \frac{\sqrt{2}}{4\sqrt{\pi}\, g_0 (kT)^{3/2}}\, \frac{N}{V} \right) .

The sign “+” holds for Fermi statistics, the sign “−” holds for Bose statistics. The compressibility is calculated in a straightforward way since

V \approx \frac{NkT}{P} \left( 1 \pm \frac{\sqrt{2}\, P}{4\sqrt{\pi}\, g_0 (kT)^{5/2}} \right) .
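To make the last step explicit (a sketch, using κ_T = −(1/V)(∂V/∂P)_T and the expression for V above):

\kappa_T \approx \frac{1}{P} \mp \frac{\sqrt{2}}{4\sqrt{\pi}\, g_0 (kT)^{5/2}} ,

so to lowest order in the density the Fermi gas is slightly less compressible and the Bose gas slightly more compressible than the classical ideal gas; the difference is κ_T^{Bose} − κ_T^{Fermi} ≈ √2 / [2√π g_0 (kT)^{5/2}].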
Chapter 8

Tests and training

Here we put some exercises for self-testing.


Problem 8.1:

(a) Discuss the difference between Gibbs and Helmholtz free energy.

(b) Prove the relation

U = -T^2\, \frac{\partial}{\partial T}\!\left( \frac{A}{T} \right) ,

where U is the internal energy and A is the Helmholtz free energy.

(c) A body with constant specific heat C_V is heated under constant volume from T_1 to T_2. How much entropy does it gain?

(d) Discuss the heating if the same body is in contact with a thermostat at T2 . In the last case
the heating is irreversible. Show that the total entropy change is positive.

(e) Two similar bodies with temperatures T_1 and T_2 are brought into contact. Find the final temperature and the change in entropy.

Solution 8.1:

(a) Mechanical work under an isothermal process is given by

dW = dU - dQ = dU - T\,dS = d(U - TS) .

The function of state


F = U −TS
is called Helmholtz free energy. We have

dF = dU − T dS − S dT = (T dS − P dV ) − T dS − S dT = −S dT − P dV.

Thus,

S = -\left( \frac{\partial F}{\partial T} \right)_V , \qquad P = -\left( \frac{\partial F}{\partial V} \right)_T .

Thus, F is the thermodynamic potential with respect to V and T.
The thermodynamic potential with respect to P and T is called the Gibbs free energy. We have

G = U - TS + PV \quad\Rightarrow\quad dG = -S\,dT + V\,dP .

(b) Substituting S = -(\partial F/\partial T)_V into the definition F = U - TS gives U = F - T(\partial F/\partial T)_V = -T^2\, \partial(F/T)/\partial T, which is the required relation (with A \equiv F).

(c) We have

\Delta S = \int_{T_1}^{T_2} \frac{C_V}{T}\, dT = C_V \ln\frac{T_2}{T_1} .

(d) Use the 2nd law of thermodynamics; a sketch of the argument is given below, after item (e).

(e) The energy conservation law yields

C_V (T_2 - T_B) = C_V (T_B - T_1) \quad\Rightarrow\quad T_B = (T_1 + T_2)/2 .

For the entropy change we have

\delta S = C_V \ln\frac{T_B}{T_1} + C_V \ln\frac{T_B}{T_2} = 2 C_V \ln\frac{T_1 + T_2}{2\sqrt{T_1 T_2}} \ge 0 .
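A possible way to complete item (d) (a sketch; the body is heated from T_1 to T_2 by a thermostat held at the fixed temperature T_2):

\Delta S_{\rm total} = \Delta S_{\rm body} + \Delta S_{\rm thermostat}
 = C_V \ln\frac{T_2}{T_1} - \frac{C_V (T_2 - T_1)}{T_2}
 = C_V \left( \ln x - 1 + \frac{1}{x} \right) \ge 0 , \qquad x \equiv \frac{T_2}{T_1} ,

since ln x ≥ 1 − 1/x for any x > 0, with equality only at x = 1, i.e. only for reversible heating.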

Problem 8.2: Find the fluctuation of the energy per particle for the Boltzmann gas of spinless particles at a given temperature T. Assume that \varepsilon_p = p^2/2m.

Solution 8.2:

\langle (\delta\varepsilon)^2 \rangle = \langle (\varepsilon - \langle\varepsilon\rangle)^2 \rangle = \langle \varepsilon^2 \rangle - \langle \varepsilon \rangle^2 .

Now

\langle \varepsilon \rangle = \frac{1}{Z_1} \sum_i \varepsilon_i e^{-\beta\varepsilon_i} = -\frac{1}{Z_1}\, \frac{\partial Z_1(\beta)}{\partial \beta} , \qquad
\langle \varepsilon^2 \rangle = \frac{1}{Z_1} \sum_i \varepsilon_i^2 e^{-\beta\varepsilon_i} = \frac{1}{Z_1}\, \frac{\partial^2 Z_1(\beta)}{\partial \beta^2} .

Thus

\langle (\delta\varepsilon)^2 \rangle = \frac{1}{Z_1}\, \frac{\partial^2 Z_1(\beta)}{\partial \beta^2} - \left( \frac{1}{Z_1}\, \frac{\partial Z_1(\beta)}{\partial \beta} \right)^2 = \frac{\partial^2 \log Z_1}{\partial \beta^2} .
Now we have to specify Z_1 for \varepsilon_p = p^2/2m:

Z_1 = \sum_{\mathbf p} \exp(-\beta\varepsilon_p) = \frac{1}{(2\pi\hbar)^3} \int_0^{\infty} 4\pi p^2\, dp\, \exp(-\beta\varepsilon_p) .

It is convenient to introduce the density of states as the number of states per energy interval,

g(\varepsilon) = \sum_{\mathbf p} \delta(\varepsilon - \varepsilon_p) = \frac{4\pi}{(2\pi\hbar)^3} \int_0^{\infty} p^2\, dp\, \delta(\varepsilon - \varepsilon_p) = \frac{m^{3/2}}{\sqrt{2}\,\pi^2\hbar^3}\, \sqrt{\varepsilon} .

In this way,

Z_1 = \frac{m^{3/2}}{\sqrt{2}\,\pi^2\hbar^3} \int_0^{\infty} d\varepsilon\, \varepsilon^{1/2}\, e^{-\beta\varepsilon}
    = \frac{m^{3/2}}{\sqrt{2}\,\pi^2\hbar^3 \beta^{3/2}} \int_0^{\infty} dx\, x^{1/2}\, e^{-x} .

As

\int_0^{\infty} dx\, x^{1/2}\, e^{-x} = \Gamma(3/2) = \frac{\sqrt{\pi}}{2} ,

we get

Z_1 = \frac{a}{\beta^{3/2}} , \qquad a = \sqrt{\frac{\pi}{8}}\, \frac{m^{3/2}}{\pi^2\hbar^3}
\quad\Rightarrow\quad \log Z_1 = \log a - \frac{3}{2}\log\beta .
Thus

\langle (\delta\varepsilon)^2 \rangle = \frac{3}{2\beta^2} .
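A quick symbolic check of the last step in Maple (a sketch; a is treated as a constant independent of β):

> Z1 := a/beta^(3/2):
> simplify(diff(log(Z1), beta, beta));

which should return 3/(2 β²).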

Problem 8.3: Two-level systems with random energy spacings \varepsilon are distributed with a constant probability density p(\varepsilon) = P_0. Find the specific heat of the system.

Solution 8.3: As we know from the lecture,

\frac{n_+}{n_-} = e^{-\beta\varepsilon} , \qquad n_+ + n_- = 1 .

Thus

n_\pm = \frac{1}{\exp(\pm\beta\varepsilon) + 1} .

The average energy is (we count energies from the middle of the distance between the levels, so that the levels lie at \pm\varepsilon/2)

\langle \varepsilon \rangle_\varepsilon = \frac{\varepsilon}{2}\, (n_+ - n_-) = -\frac{\varepsilon}{2} \tanh\frac{\beta\varepsilon}{2} .

The contribution of one system to the specific heat is

c(\varepsilon) = \frac{\partial \langle\varepsilon\rangle_\varepsilon}{\partial T} = -k\beta^2\, \frac{\partial \langle\varepsilon\rangle_\varepsilon}{\partial \beta} = \frac{k\,(\beta\varepsilon)^2}{4\cosh^2(\beta\varepsilon/2)} .

The total specific heat is

C = \int_0^{\infty} p(\varepsilon)\, c(\varepsilon)\, d\varepsilon
  = \frac{P_0 k \beta^2}{4} \int_0^{\infty} \frac{\varepsilon^2\, d\varepsilon}{\cosh^2(\beta\varepsilon/2)}
  = \frac{2 P_0 k}{\beta} \int_0^{\infty} \frac{x^2\, dx}{\cosh^2 x}
  = \frac{\pi^2}{6}\, P_0 k^2 T .

Note that it is proportional to T.
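The dimensionless integral ∫_0^∞ x² dx / cosh²x = π²/12 used in the last step can be checked with Maple (a sketch):

> int(x^2/cosh(x)^2, x = 0..infinity);
> evalf(%);

which should evaluate to π²/12 ≈ 0.822.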
Appendix A

Additional information

A.1 Thermodynamics
A.1.1 Thermodynamic potentials

Thermodynamic potential Notation Independent variables Differential


Internal energy U S, V, N T dS − P dV + µ dN
Heat function (enthalpy) H S, P, N T dS +V dP + µ dN
Helmholtz free energy F T, V, N −S dT − P dV + µ dN
Gibbs free energy G T, P, N −S dT +V dP + µ dN
Landau free energy Ω T, V, µ −S dT − P dV − N dµ

Table A.1: Thermodynamic potentials (summary)

A.1.2 Variable transformation


Jacobian \partial(u,v)/\partial(x,y) is defined as the determinant

\frac{\partial(u,v)}{\partial(x,y)} = \begin{vmatrix} \partial u/\partial x & \partial u/\partial y \\ \partial v/\partial x & \partial v/\partial y \end{vmatrix} .   (A.1)

The following relations can be useful:

\frac{\partial(v,u)}{\partial(x,y)} = -\frac{\partial(u,v)}{\partial(x,y)} , \qquad
\frac{\partial(u,y)}{\partial(x,y)} = \left( \frac{\partial u}{\partial x} \right)_y , \qquad
\frac{\partial(u,v)}{\partial(x,y)} = \frac{\partial(u,v)}{\partial(t,s)} \cdot \frac{\partial(t,s)}{\partial(x,y)} .   (A.2)

A.1.3 Derivatives from the equation of state


Important relations arise from the properties of partial differentiations. Consider 3 quantities,
X, Y, Z, related by the equation of state K(X, Y, Z) = const. Now let us consider X, Y as independent variables, while Z = Z(X, Y). We have


\left( \frac{\partial Z}{\partial X} \right)_Y dX + \left( \frac{\partial Z}{\partial Y} \right)_X dY - dZ = 0 .   (A.3)

If Y, Z are taken as independent variables, then

-dX + \left( \frac{\partial X}{\partial Y} \right)_Z dY + \left( \frac{\partial X}{\partial Z} \right)_Y dZ = 0 .   (A.4)
Now we multiply Eq. (A.3) by (\partial X/\partial Y)_Z and Eq. (A.4) by (\partial Z/\partial Y)_X and subtract the results. We obtain

\left[ \left( \frac{\partial Z}{\partial X} \right)_Y \left( \frac{\partial X}{\partial Y} \right)_Z + \left( \frac{\partial Z}{\partial Y} \right)_X \right] dX
+ \left[ -\left( \frac{\partial X}{\partial Y} \right)_Z - \left( \frac{\partial X}{\partial Z} \right)_Y \left( \frac{\partial Z}{\partial Y} \right)_X \right] dZ = 0 .

Since dX and dZ are independent, this equation is compatible only if

\left( \frac{\partial Z}{\partial X} \right)_Y \left( \frac{\partial X}{\partial Y} \right)_Z + \left( \frac{\partial Z}{\partial Y} \right)_X = 0 ,
\qquad
\left( \frac{\partial X}{\partial Y} \right)_Z + \left( \frac{\partial X}{\partial Z} \right)_Y \left( \frac{\partial Z}{\partial Y} \right)_X = 0 ,

or

\left( \frac{\partial X}{\partial Y} \right)_Z \left( \frac{\partial Y}{\partial Z} \right)_X \left( \frac{\partial Z}{\partial X} \right)_Y = -1 ,   (A.5)

\left( \frac{\partial X}{\partial Y} \right)_Z = \left( \frac{\partial Y}{\partial X} \right)_Z^{-1} .   (A.6)

General scheme for transformation: Consider any quantity F(X, Y), the differential of which can be expressed as

dF = \left( \frac{\partial F}{\partial X} \right)_Y dX + \left( \frac{\partial F}{\partial Y} \right)_X dY .

Then we divide it by dZ and assume Y = const (dY = 0). We get

\left( \frac{\partial F}{\partial Z} \right)_Y = \left( \frac{\partial F}{\partial X} \right)_Y \left( \frac{\partial X}{\partial Z} \right)_Y .   (A.7)

Another important relation arises if one divides the expression for dF by dX at constant Z:

\left( \frac{\partial F}{\partial X} \right)_Z = \left( \frac{\partial F}{\partial X} \right)_Y + \left( \frac{\partial F}{\partial Y} \right)_X \left( \frac{\partial Y}{\partial X} \right)_Z .   (A.8)

Equations (A.7), (A.8) together with Eqs. (A.5), (A.6) and the Maxwell relations are usually
used for transformations and computation of derivatives from the equation of state.
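As a simple illustration of Eq. (A.5) (an example added for concreteness), take the ideal-gas equation of state PV = NkT:

\left( \frac{\partial P}{\partial V} \right)_T = -\frac{NkT}{V^2} , \qquad
\left( \frac{\partial V}{\partial T} \right)_P = \frac{Nk}{P} , \qquad
\left( \frac{\partial T}{\partial P} \right)_V = \frac{V}{Nk} ,

and the product of the three derivatives is −NkT/(PV) = −1, as required.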

A.2 Main distributions


Binomial distribution
N!
WN (m) = pm (1 − p)N−m . (A.9)
m!(N − m)!

Poisson distribution
m̄m −m̄
P(m) = e . (A.10)
m!

Gaussian distribution

G(x) = \frac{1}{\sqrt{2\pi\sigma}}\, e^{-(x - \bar{x})^2 / 2\sigma} .   (A.11)

Distribution of displacement for a 1D random walk with step l:

G(x) = \frac{1}{2l\sqrt{2\pi N p(1-p)}}\, e^{-[x - (p-q) N l]^2 / 8 N l^2 p(1-p)} .   (A.12)

Gaussian integrals

I_1(a) = \int_{-\infty}^{\infty} dz\, e^{-a z^2/2} = \sqrt{2\pi/a} .   (A.13)

\int_{-\infty}^{\infty} z^2\, dz\, e^{-z^2/2} = -2 \left[ \frac{dI_1(a)}{da} \right]_{a=1} = \sqrt{2\pi} .   (A.14)
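Both Gaussian integrals are easily checked in Maple (a sketch, assuming a > 0 as in (A.13)):

> assume(a > 0):
> int(exp(-a*z^2/2), z = -infinity..infinity);
> int(z^2*exp(-z^2/2), z = -infinity..infinity);

returning √(2π/a) and √(2π), in agreement with Eqs. (A.13), (A.14).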

Gaussian distribution for more than one variable. Define the entropy as S(x_1, \ldots, x_n). Then

S - S_0 = -\frac{1}{2} \sum_{i,k} \beta_{ik} x_i x_k , \qquad \beta_{ik} = \beta_{ki} .

For convenience, let us assume summation over repeated subscripts and rewrite the above equation as

S - S_0 = -\frac{1}{2}\, \beta_{ik} x_i x_k .   (A.15)

Consequently,

w = A\, e^{-\frac{1}{2} \beta_{ik} x_i x_k} .

Let us first calculate the normalization factor A from

\int w\, dx_1 \cdots dx_n = 1 .

To calculate the integral let us introduce the linear transform

x_i = a_{ik} y_k

to make the quadratic form (A.15) diagonal. In order that

\beta_{ik} x_i x_k = y_l y_l = y_l y_m \delta_{lm} ,

the relation

\beta_{ik}\, a_{il}\, a_{km} = \delta_{lm}   (A.16)
should be valid. Denoting the determinants of the matrices \hat\beta and \hat{a} as \beta and a, respectively, we get the relation \beta a^2 = 1. The Jacobian of the transformation x_i \to y_i is just a,

J = \left| \frac{\partial(x_1, \ldots, x_n)}{\partial(y_1, \ldots, y_n)} \right| = |\hat{a}| = a .

Consequently,

A\, a \left[ \int dy\, e^{-y^2/2} \right]^n = \frac{A (2\pi)^{n/2}}{\sqrt{\beta}} = 1 .

Thus

w = \frac{\sqrt{\beta}}{(2\pi)^{n/2}} \exp\!\left( -\frac{1}{2}\, \beta_{ik} x_i x_k \right) .   (A.17)
Let us define the generalized forces as

X_i = -\partial S/\partial x_i = \beta_{ik} x_k ,   (A.18)

and first calculate

\langle x_i X_k \rangle = \frac{\sqrt{\beta}}{(2\pi)^{n/2}} \int x_i\, \beta_{kl} x_l\, e^{-\beta_{lm} x_l x_m / 2}\, dx_1 \cdots dx_n .   (A.19)

The easiest way to calculate this integral is to assume for a while that \bar{x}_i \ne 0. Then

\bar{x}_i = \frac{\sqrt{\beta}}{(2\pi)^{n/2}} \int x_i\, e^{-\beta_{lm} (x_l - x_{l0})(x_m - x_{m0})/2}\, dx_1 \cdots dx_n = x_{i0} .

Then we can differentiate both sides with respect to x_{k0} and then put x_{i0} = x_{k0} = 0. The l.h.s. is just \langle x_i X_k \rangle while the r.h.s. is \delta_{ik}. Thus

\langle x_i X_k \rangle = \delta_{ik} , \quad \text{or} \quad \beta_{il} \langle x_l x_k \rangle = \delta_{ik} , \quad \langle x_i x_k \rangle = \left( \hat\beta^{-1} \right)_{ik} .   (A.20)

A.3 The Dirac delta-function


The main property is to single out one particular value x = x_0 of a variable x. It is defined by the characteristic properties

\delta(x - x_0) = \begin{cases} 0 & \text{for } x \ne x_0 \\ \infty & \text{for } x = x_0 \end{cases}   (A.21)

but in such a way that for any \varepsilon > 0

\int_{x_0 - \varepsilon}^{x_0 + \varepsilon} \delta(x - x_0)\, dx = 1 .   (A.22)

Since the delta-function has a very (infinitely) sharp peak at x = x_0 and a unit area,

\int_A^B f(x)\, \delta(x - x_0)\, dx = \begin{cases} f(x_0) & \text{if } A < x_0 < B \\ 0 & \text{otherwise} \end{cases} .   (A.23)

Representations for the Dirac δ-function


Let us introduce a positive parameter \gamma and at the final stage tend it to zero. Physically it means that \gamma is less than all other involved scales. The main representations are the following: rectangular,

\delta(x) = \begin{cases} \gamma^{-1} & \text{for } -\gamma/2 < x < \gamma/2 \\ 0 & \text{otherwise} \end{cases} ,   (A.24)

Lorentzian,

\delta(x) = \frac{1}{\pi}\, \frac{\gamma}{x^2 + \gamma^2} ,   (A.25)

Gaussian,

\delta(x) = \frac{1}{\gamma\sqrt{2\pi}}\, e^{-x^2/2\gamma^2} ,   (A.26)

and integral,

\delta(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{ikx}\, dk .   (A.27)

We will also need a representation for the Kronecker symbol,

\delta_{n,0} = \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{-in\phi}\, d\phi .   (A.28)
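The unit area of these representations can be checked in Maple (a sketch; the width is called c below because gamma is a protected name in Maple):

> assume(c > 0):
> int(c/(Pi*(x^2+c^2)), x = -infinity..infinity);
> int(exp(-x^2/(2*c^2))/(c*sqrt(2*Pi)), x = -infinity..infinity);

both of which should return 1.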

A.4 Fourier Series and Transforms


Fourier series

Generally the Fourier series is defined as an expansion of a real function in the series

f(x) = \frac{a_0}{2} + \sum_{k=1}^{\infty} \left( a_k \cos kx + b_k \sin kx \right) .   (A.29)

The sum of the series is periodic with the period 2\pi.


Another form is the complex Fourier series

f(x) = \sum_{k=-\infty}^{\infty} c_k e^{ikx} , \qquad c_{\pm k} = (a_k \mp i b_k)/2 , \quad c_0 = a_0/2 .   (A.30)

Since

\int_0^{2\pi} \sin mx\, \sin nx\, dx = \pi\,\delta_{mn} \ (m, n \ne 0) , \qquad
\int_0^{2\pi} \cos mx\, \cos nx\, dx = \begin{cases} \pi\,\delta_{mn} , & m, n \ne 0 \\ 2\pi , & m = n = 0 \end{cases} , \qquad
\int_0^{2\pi} \sin mx\, \cos nx\, dx = 0 \ \text{for integer } m, n ,

we have

a_k = \frac{1}{\pi} \int_0^{2\pi} dt\, f(t) \cos kt , \qquad
b_k = \frac{1}{\pi} \int_0^{2\pi} dt\, f(t) \sin kt , \qquad
c_k = \frac{1}{2\pi} \int_0^{2\pi} dt\, f(t)\, e^{-ikt} , \qquad k \ne 0 .   (A.31)

At k = 0,

a_0 = 2 c_0 = \frac{1}{\pi} \int_0^{2\pi} dt\, f(t) .
If the function has a discontinuity at x = x0 then the series gives
f(x_0) = \frac{1}{2} \left[ f(x_0 + 0) + f(x_0 - 0) \right] .
If one is willing to change the interval 2π → 2L, then k has to be replaced by πk/L and the normalization factor is 1/L rather than 1/π.
Advantages of the Fourier series: it can represent discontinuous functions. Let us take an example of a periodic function with the period 2\pi defined as

F(x) = \begin{cases} 1, & -\pi/2 < x < \pi/2 \\ 0, & -\pi < x < -\pi/2 , \ \ \pi/2 < x < \pi . \end{cases}

This function is even, thus

b_k = 0 , \qquad a_k = \frac{2}{\pi k} \sin\frac{\pi k}{2} .
Below we plot the sum of 5 and 50 Fourier harmonics. It is useful to know Parseval's identity

\frac{1}{\pi} \int_{-\pi}^{\pi} [f(x)]^2\, dx = \frac{a_0^2}{2} + \sum_{k=1}^{\infty} \left( a_k^2 + b_k^2 \right) .

Fourier transform

We assume L \to \infty and replace

\sum_{k=-\infty}^{\infty} (\ldots) \;\to\; \frac{L}{2\pi} \int_{-\infty}^{\infty} dk\, (\ldots) ,

g(k) = \frac{1}{2\pi} \int_{-\infty}^{\infty} f(t)\, e^{-ikt}\, dt .   (A.32)

Figure A.1: The sum of 5 and 50 Fourier harmonics for the square wave F(x).

The completeness condition:

f(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} dk\, e^{ikx} \int_{-\infty}^{\infty} dt\, f(t)\, e^{-ikt} .   (A.33)

The Dirac delta-function is represented as


\delta(x - t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} dk\, e^{ik(t-x)} .   (A.34)

We have several useful rules for the Fourier transform. If g(k) is the Fourier transform of f(x), then

f'(x) \equiv df/dx \;\to\; ik\, g(k) ;

f^{(n)}(x) \equiv d^n f/dx^n \;\to\; (ik)^n\, g(k) ;

\int_{-\infty}^{\infty} dy\, f_1(y)\, f_2(x - y) \;\to\; g_1(k) \cdot g_2(k) \quad \text{(convolution theorem)} ;

\int_{-\infty}^{\infty} f_1(t)\, f_2(t)\, dt \;\to\; \int_{-\infty}^{\infty} g_1(k)\, g_2(-k)\, dk ;

\int_{-\infty}^{\infty} f_1(t)\, f_2^*(t)\, dt \;\to\; \int_{-\infty}^{\infty} g_1(k)\, g_2^*(k)\, dk \quad \text{(Parseval's theorem)} .   (A.35)

These properties are very important for solving differential equations, etc.
f(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} F(\omega)\, e^{-i\omega x}\, d\omega , \qquad
F(\omega) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(x)\, e^{i\omega x}\, dx .

f(x) = \frac{1}{a^2 + x^2}, \ a > 0 \qquad \to \qquad F(\omega) = \sqrt{\frac{\pi}{2}}\, \frac{e^{-a|\omega|}}{a}

f(x) = \begin{cases} 1, & |x| < a \\ 0, & |x| > a \end{cases} \qquad \to \qquad F(\omega) = \sqrt{\frac{2}{\pi}}\, \frac{\sin(a\omega)}{\omega}

f(x) = e^{-a^2 x^2} \qquad \to \qquad F(\omega) = \frac{1}{a\sqrt{2}}\, e^{-\omega^2/4a^2}

Table A.2: Table of Fourier transform pairs

Examples

One-dimensional diffusion: The equation has the form

\frac{\partial n(x,t)}{\partial t} = D\, \frac{\partial^2 n(x,t)}{\partial x^2} .

Let us assume that we add a particle at time t = 0 at the point x = 0. Then the equation together with the initial condition can be written as

\frac{\partial n(x,t)}{\partial t} - D\, \frac{\partial^2 n(x,t)}{\partial x^2} = \delta(x)\, \delta(t) .

Writing n(x,t) = \int \frac{dk}{2\pi} \int \frac{d\omega}{2\pi}\, n(k,\omega)\, e^{i(kx - \omega t)}, the Fourier transform of this equation has the form

(Dk^2 - i\omega)\, n(k,\omega) = 1 \quad\Rightarrow\quad n(k,\omega) = \frac{1}{Dk^2 - i\omega} .

Now we can come back to the real space-time representation:

n(x,t) = \int_{-\infty}^{\infty} \frac{dk}{2\pi} \int_{-\infty}^{\infty} \frac{d\omega}{2\pi}\, \frac{e^{i(kx - \omega t)}}{Dk^2 - i\omega}
       = \int_{-\infty}^{\infty} \frac{dk}{2\pi}\, e^{ikx - Dk^2 t}
       = \frac{1}{2\sqrt{\pi D t}}\, e^{-x^2/4Dt} \qquad (t > 0) ,

and n(x,t) = 0 for t < 0.
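The result can be verified symbolically (a Maple sketch; the diffusion constant is written Dc because D is reserved for the differential operator):

> n := (x,t) -> exp(-x^2/(4*Dc*t))/(2*sqrt(Pi*Dc*t)):
> simplify(diff(n(x,t), t) - Dc*diff(n(x,t), x, x));

which should return 0, confirming that the Gaussian kernel solves the diffusion equation for t > 0.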
Appendix B

Maple Printouts

Quick access: 1, 2.1, 2.2, 2.17, 3.7, 3.12, 3.13, 4.9, 4.10, 4.13, 4.14, 5.1, 5.2, 5.3, 5.4, 5.5, 5.7,
5.8

Test in mathematics
> F:=(xi,eta)->xi^4+xi^3+eta*xi^2:
> F1:=(xi,eta)->subs(y=xi,diff(F(y,eta),y)): solve(F1(xi,eta)=0,xi);
3 1√ 3 1√
0, − + 9 − 32 η, − − 9 − 32 η
8 8 8 8

One extremum at eta >9/32, otherwise 3 extrema

> x1:=-3/8+1/8*sqrt(9-32*eta): x2:=-3/8-1/8*sqrt(9-32*eta):

> F2:=(xi,eta)->subs(y=xi,diff(F1(y,eta),y)):
> g0:=eta->simplify(F2(0,eta)): g1:=eta->simplify(F2(x1,eta)):
> g2:=eta->simplify(F2(x2,eta)):

> g0(eta); g1(eta); g2(eta);



9 3√
− 9 − 32 η − 4 η
8 8
9 3√
+ 9 − 32 η − 4 η
8 8
> plot({g0(eta),g1(eta),g2(eta)},eta=-0.1..9/32);


x0 -> maximum at eta<0, otherwise minimum; x1 -> minimum at eta<0, maximum at 9/32>eta>0; x2 -> minimum at 9/32>eta.

Two specific values: eta=0 and eta=9/32.

> plot(F(xi,0), xi=-1.1..0.45); plot(F(xi,9/32), xi=-0.7..0.25);



Problem 1.2
> f1:=x->ln((1-x)/(1+x)): plot(f1(x),x=-1..1);

> f2:=x->ln(abs(tan(Pi*x))): plot(f2(x),x=0..1,f2=-5..5);


> simplify(exp(4*ln(x))-(x^2+1)^2+2*x^2+1);
0

Problem 1.3
> simplify((sin(x))^(-2)-(tan(x))^(-2)-1,trig);
0
> assume(alpha>0): assume(beta>0):
> k:=alpha->sum(exp((I*beta-alpha)*n),n=0..infinity):

> a1:=alpha->evalc(Re(k(alpha))): a2:=alpha->evalc(Im(k(alpha))):


> a1(alpha); a2(alpha);
eα˜ (cos(β˜) − eα˜ )

(cos(β˜) − eα˜ )2 + sin(β˜)2
eα˜ sin(β˜)
(cos(β˜) − eα˜ )2 + sin(β˜)2
Problem 1.4
> assume(zeta>0): assume(n,natural):
> simplify(int(x^n*exp(-zeta*x),x=0..infinity));

ζ˜(−n˜−1) Γ(n˜ + 1)
> int((x^2+a^2)^(-1),x); int((x^2-a^2)^(-1),x);
x
arctan( )
a
a
1 ln(x − a) 1 ln(a + x)

2 a 2 a
> int(exp(-zeta*x^2/2),x=0..infinity);
√ √
1 2 π
p
2 ζ˜

Problem 2.1
(a)
> uax:=(x,y)->-y/(x^2+y^2): uay:=(x,y)->x/(x^2+y^2):
> simplify(diff(uax(x,y),y)-diff(uay(x,y),x));
0

The differential is exact


> ua:=(x,y)->int(uax(xi,y),xi=0..x): ua(x,y);
x
−arctan( )
y

(b)
> ubx:=(x,y)->(y-x^2): uby:=(x,y)->(x+y^2):
> simplify(diff(ubx(x,y),y)-diff(uby(x,y),x));
0

The differential is exact


> ub:=(x,y)->int(ubx(xi,y),xi=0..x)+int(uby(x,eta),eta=0..y)-int(int(di
> ff(ubx(xi,eta),eta),eta=0..y),xi=0..x);
Z x Z y Z xZ y

ub := (x, y) → ubx(ξ, y) dξ + uby(x, η) dη − ∂η ubx(ξ, η) dη dξ
0 0 0 0
> ub(x,y);
1 1
y x − x 3 + y3
3 3
(c)
> ucx:=(x,y)->(2*y^2-3*x): ucy:=(x,y)->-4*x*y:
> simplify(diff(ucx(x,y),y)-diff(ucy(x,y),x));
8y

The differential is not exact

Problem 2.2

General definitions
> ua:=(x,y)->int(ux(xi,b),xi=a..x)+int(uy(x,eta),eta=b..y);
Z x Z y
ua := (x, y) → ux(ξ, b) dξ + uy(x, η) dη
a b
> ub:=(x,y)->int(uy(a,eta),eta=b..y)+int(ux(xi,y),xi=a..x);
Z y Z x
ub := (x, y) → uy(a, η) dη + ux(ξ, y) dξ
b a
> ux1:=(x,y)->2*x*y+x^2: uy1:=(x,y)->x^2:
> ua1:=(x,y)->int(ux1(xi,b),xi=a..x)+int(uy1(x,eta),eta=b..y):
> ub1:=(x,y)->int(uy1(a,eta),eta=b..y)+int(ux1(xi,y),xi=a..x):
> ua1(x,y); ub1(x,y); simplify(ua1(x,y)-ub1(x,y));
1 1
b (x2 − a2 ) + x3 − a3 + x2 (y − b)
3 3
1 1
a2 (y − b) + y (x2 − a2 ) + x3 − a3
3 3
0
> ux2:=(x,y)->y*(2-2*y): uy2:=(x,y)->-x^2:

> ua2:=(x,y)->int(ux2(xi,b),xi=a..x)+int(uy2(x,eta),eta=b..y):
> ub2:=(x,y)->int(uy2(a,eta),eta=b..y)+int(ux2(xi,y),xi=a..x):
> ua2(x,y); ub2(x,y); simplify(ua2(x,y)-ub2(x,y));
b (2 − 2 b) (x − a) − x2 (y − b)
−a2 (y − b) + y (2 − 2 y) (x − a)
2 b x − 2 b a − 2 b2 x + 2 b2 a − x2 y + x2 b + a2 y − a2 b − 2 y x + 2 y a + 2 y2 x − 2 y2 a

Why?
> simplify(diff(ux2(x,y),y)-uy2(x,y),x);
2 − 4 y + x2

The differential is not exact.

Problem 2.17
> P:=(T,v)->R*T/(v-b)-alpha/v^2:
> num:=(diff(P(T,v),T))^2: den:=diff(P(T,v),v);
RT 2α
den := − +
(v − b)2 v3
> simplify(-T*num/den);
v3 R2 T

−R T v3 + 2 α v2 − 4 α v b + 2 α b2
> factor(2*alpha*v^2-4*alpha*v*b+2*alpha*b^2);
2 α (−v + b)2

Problem 3.7
> P:=(v,T)->(8*T/(3*v-1))-3/v^2;
T 3
P := (v, T ) → 8 − 2
3v−1 v
> plot({P(v,1.5),P(v,1),P(v,0.5)
> },v=0.5..1.2,thickness=3,color=black);

For T=0.5 there is no stable region at all. The phase transition can exist only near T=1.
> plot({P(v,1.05),P(v,1),P(v,0.85)
> },v=0.45..5.3,P=0..2,thickness=3,color=black);

> ?plot,color

Problem 3.12
> phi:=xi->(A/3)*(xi+eta)^2-(2*B/27)*(xi+eta)^3+(C/9)*(xi+eta)^4:
> series(phi(xi),xi=0,5);
1 2 1 2 2 4 1 2 2
( A η2 − B η3 + C η4 ) + ( A η − B η2 + C η3 ) ξ + ( A − B η + C η2 ) ξ2 +
3 27 9 3 9 9 3 9 3
2 4 1
(− B + C η) ξ3 + C ξ4
27 9 9
> c3:=eta->-2/27*B+4/9*C*eta: solve(c3(eta)=0,eta);
1B
6C
> c1:=(A,eta)->2/3*A*eta-2/9*B*eta^2+4/9*C*eta^3;
2 2 4
c1 := (A, η) → A η − B η2 + C η3
3 9 9
> solve(c1(A,B/(6*C)),A);

1 B2
27 C
> c2:=subs({eta=B/(6*C),A=B^2/(27*C)},1/3*A-2/9*B*eta+2/3*C*eta^2);
1 B2
c2 := −
162 C
> delta:=xi->(C/9)*xi^4+c2*xi^2:
> delta1:=xik->subs(y=xi,diff(delta(y),y)): delta1(xi);

4 3 1 B2 ξ
Cξ −
9 81 C
> solve(4/9*C*xi^2-1/81*B^2/C,xi);
1B 1B
− ,
6C 6C

Problem 3.13
> P:=(v,T)->8*T/(3*v-1)-3/(T*v^2);
T 3
P := (v, T ) → 8 − 2
3v−1 T v
> p1:=(nu,epsilon)=P(1+nu,1+epsilon)-1;
1+ε 3
p1 := (ν, ε) = 8 − −1
2 + 3 ν (1 + ε) (1 + ν)2
> aux:=series(8*(1+epsilon)/(2+3*nu)-3/(1+epsilon)/(1+nu)^2-1,nu=0,4):a
> ux;
3 6 1 1 27 27
(3 + 4 ε − ) + (−6 − 6 ε + ) ν + (−9 + 9 + 9 ε) ν2 + (12 − − ε)
1+ε 1+ε 1+ε 1+ε 2 2
ν + O(ν )
3 4

> aux1:=(epsilon)->3+4*epsilon-3/(1+epsilon)+(-6-6*epsilon+6/(1+epsilon
> ))*nu+(-9*1/(1+epsilon)+9+9*epsilon)*nu^2
> +(12*1/(1+epsilon)-27/2-27/2*epsilon)*nu^3;

3 6 1
aux1 := ε → 3 + 4 ε − + (−6 − 6 ε + ) ν + (−9 + 9 + 9 ε) ν2
1+ε 1+ε 1+ε
1 27 27
+ (12 − − ε) ν3
1+ε 2 2
> aux2:=series(aux1(epsilon),epsilon=0,2);
3 51
aux2 := − ν3 + (7 − ν3 + 18 ν2 − 12 ν) ε + O(ε2 )
2 2
Simplified equation of state
> p:=(nu,epsilon)->-3/2*nu^3+(7-12*nu)*epsilon;
3
p := (ν, ε) → − ν3 + (7 − 12 ν) ε
2
Critical point - epsilon=0;
> d1:=(nu,epsilon)->subs(y=nu,diff(p(y,epsilon),y)): d1(nu,epsilon);
9
− ν2 − 12 ε
2
The curve is symmetric. That leads to vg=-vl=v
> solve(p(v,epsilon)= p(-v,epsilon),v);;
√ √
0, 2 −2 ε, −2 −2 ε

Critical exponent beta=1/2

Along the coexistence curve, v^2=-8*epsilon. The exponent gamma=1.

Function of pressure
> P1:=(rho,T)->8*T*rho/(3-rho)-3*rho^2/T:

Near the critical point

> P1(1+xi,1);
1+ξ
8 − 3 (1 + ξ)2
2−ξ
> series(8*(1+xi)/(2-xi)-3*(1+xi)^2,xi=0);
3 3 3
1 + ξ3 + ξ4 + ξ5 + O(ξ6 )
2 4 8

the exponent delta=3

Problem 4.9
> assume(z>0): int(1/sqrt(z-x^2),x=0..sqrt(z));
1
π
2
> f:=k->1/(1-2*I*k): d1:=k->subs(y=k,diff(f(y),y)):
> d2:=k->subs(y=k,diff(d1(y),y)): d3:=k->subs(y=k,diff(d2(y),y)):
> d1(0); d2(0); d3(0);
2I
−8
−48 I

Problem 4.10
> P:=(N,k)->(N!/(k!*(N-k)!))*(1/4)^k*(3/4)^(N-k):
> evalf(P(12,3));
.2581036091
> evalf(P(120,30));
.08385171464
> G:=(N,k)->(1/sqrt(3*N*Pi/8))*exp(-(k-N/4)^2*8/(3*N));
(k−1/4 N)2
e(−8/3 N )
G := (N, k) → r
3

8
> evalf(G(12,3));
.2659615201
> evalf(G(120,30));
.08410441740
> plot({P(12,k),G(12,k)},k=0..12);

> plot({P(120,k),G(120,k)},k=0..120);

Problem 4.13
beta = sigmaˆ2
> assume(beta>0): assume(a,real): assume(N>0):
> P:=k->(2*Pi*beta)^(-1/2)*int(exp(-(x-a)^2/(2*beta)-I*k*x),x=-infinity.
> .infinity): simplify(P(k));

e(1/2 I k (−2 a˜+I k β˜))


> P1:=s->(2*Pi)^(-1)*int(exp(I*(s-N*a)*k-k^2*N*beta/2),k=-infinity..inf
> inity): P1(s);
(−s+N˜ a˜)2 √
(−1/2 N˜ β˜
)
1e 2
√ p
2 π N˜ β˜
> int(s*P1(s),s=-infinity..infinity);
N˜ a˜
> int(s^2*P1(s),s=-infinity..infinity)-N^2*a^2;
N˜ β˜

Problem 4.14
> assume(a>0): assume(d>a): assume(N>0):
> simplify((2*a)^(-1)*int(x^2,x=d-a..d+a));
1
d˜2 + a˜2
3
> simplify(N*(d^2+1/3*a^2)+N*(N-1)*d^2-N^2*d^2);
1
N˜ a˜2
3

Problem 5.1
> with(linalg):
> Q:=array([[0,1,0],[1/8,1/2,3/8],[0,1/2,1/2]]);
 
0 1 0
 1 1 3 
 
Q := 
 8 2 8 

 1 1 
0
2 2
> QT:=transpose(Q):
> eigenvectors(Q);
· ¸
−1 −3 1
[1, 1, {[1, 1, 1]}], [ , 1, { 6, , 1 }], [ , 1, {[4, 1, −2]}]
4 2 4
> eigenvectors(QT);
1 −1
[ , 1, {[1, 2, −3]}], [ , 1, {[1, −2, 1]}], [1, 1, {[1, 8, 6]}]
4 4
> r1:=vector([1, 1, 1]); r2:=vector([6, -3/2, 1]); r3:=vector([4, 1,
> -2]);
r1 := [1, 1, 1]
· ¸
−3
r2 := 6, ,1
2
r3 := [4, 1, −2]
> l1:=vector([1, 8, 6]); l2:=vector([1, -2, 1]); l3:=vector([1, 2,
> -3]);
l1 := [1, 8, 6]
l2 := [1, −2, 1]
l3 := [1, 2, −3]
> n1:=15: n2:=10: n3:=12:

> P1:=array([[l1[1]*r1[1], l1[2]*r1[1],l1[3]*r1[1]], [l1[1]*r1[2],


> l1[2]*r1[2],l1[3]*r1[2]],[l1[1]*r1[3], l1[2]*r1[3],l1[3]*r1[3]]]);
 
1 8 6
P1 :=  1 8 6 
1 8 6
> P2:=array([[l2[1]*r2[1], l2[2]*r2[1],l2[3]*r2[1]], [l2[1]*r2[2],
> l2[2]*r2[2],l2[3]*r2[2]],[l2[1]*r2[3], l2[2]*r2[3],l2[3]*r2[3]]]);
 
6 −12 6
 −3 −3 
P2 := 
 2
3
2 

1 −2 1
> P3:=array([[l3[1]*r3[1], l3[2]*r3[1],l3[3]*r3[1]], [l3[1]*r3[2],
> l3[2]*r3[2],l3[3]*r3[2]],[l3[1]*r3[3], l3[2]*r3[3],l3[3]*r3[3]]]);
 
4 8 −12
P3 :=  1 2 −3 
−2 −4 6
> P:=s->(1/n1)*P1+(1/n2)*(-1/4)^s*P2+(1/n3)*(1/4)^s*P3;
> Question (a)
−1 s 1 s
P1 ( 4 ) P2 ( 4 ) P3
P := s → + +
n1 n2 n3
> Pr:=s->evalm(P(s)): Pr(s); Pr(0);
 
1 3 −1 s 1 1 s 8 6 −1 2 1 2 3 −1 s 1
 15 + 5 ( 4 ) + 3 ( 4 ) − ( )s + ( )s + ( ) − ( )s 
 15 5 4 3 4 5 5 4 4 
 1 3 −1 1 1 8 3 −1 1 1 s 2 3 −1 s 1 1 s 
 s s s 
 15 − 20 ( 4 ) + 12 ( 4 ) 15 + 10 ( 4 ) + 6 ( 4 ) − ( ) − ( )
5 20 4 4 4 
 
 1 1 −1 1 1 8 1 −1 1 1 2 1 −1 1 1 
+ ( )s − ( )s − ( )s − ( )s + ( )s + ( )s
15 10 4 6 4 15 5 4 3 4 5 10 4 2 4
 
1 0 0
 0 1 0 
0 0 1
> p0:=vector([0,1,0]);
p0 := [0, 1, 0]
> PS:=s->evalm(p0&*P(s)): PS(s);

Question (b)
· ¸
1 3 −1 s 1 1 s 8 3 −1 s 1 1 s 2 3 −1 s 1 1 s
− ( ) + ( ), + ( ) + ( ), − ( ) − ( )
15 20 4 12 4 15 10 4 6 4 5 20 4 4 4
> PS(2); PS(infinity);
> Answer: after 2 steps 3/8, after many steps 2/5
· ¸
1 9 3
, ,
16 16 8
· ¸
1 8 2
, ,
15 15 5
> av:=s->1*PS(s)[1]+4*PS(s)[2]+9*PS(s)[3]: av(s);
> Moment
29 3 −1 3 1
− ( )s − ( )s
5 10 4 2 4
> av:=sum(m^2*PS(s)[m],m=1..3);

29 3 −1 3 1
− ( )s − ( )s
5 10 4 2 4
> corr:=s->simplify(sum(n^2*sum(m^2*PS(s)[m]*Pr(s)[m,n],
> m=1..3),n=1..3)): corr(s);

79 33 81 11 3
(−1)(1+s) 4(−s) − 16(−s) + (−1)(1+2 s) 16(−s) + (−1)(1+s) 16(−s) − 4(−s)
50 4 100 2 2
841
+
25
> corr(0);
16
> corr(infinity);
841
25

Problem 5.2
> with(linalg):
Warning, the protected names norm and trace have been redefined and
unprotected
> Q:=array([[0,1/2,1/2],[0,0,1],[3/4,1/4,0]]);
 
1 1
 0 2 2 
 
Q := 
 0 0 1 

 3 1 
0
4 4
> QT:=transpose(Q):
> r:=eigenvectors(Q):
> l:=eigenvectors(QT):
> r[1][1]; r[2][1]; r[3][1];
1
1 1 √
− + I 2
2 4
1 1 √
− − I 2
2 4
> l[1][1]; l[2][1]; l[3][1];
1
1 1 √
− + I 2
2 4
1 1 √
− − I 2
2 4
> r[1][3]; r[2][3]; r[3][3];
{[1, 1, 1]}
· ¸
1 1 √ 1 1 √
{ − − I 2, 1, − + I 2 }
6 3 2 4
· ¸
1 1 √ 1 1 √
{ − + I 2, 1, − − I 2 }
6 3 2 4
> l[1][3]; l[2][3]; l[3][3];
· ¸
6 8
{ , 1, }
5 5
· ¸
1 1 √ 2 1 √
{ 1, − − I 2, − + I 2 }
3 3 3 3
· ¸
1 1 √ 2 1 √
{ 1, − + I 2, − − I 2 }
3 3 3 3
> r1:=vector([1, 1, 1]): r2:=vector([-1/6-1/3*I*sqrt(2), 1,
> -1/2+1/4*I*sqrt(2)]): r3:=vector([-1/6+1/3*I*sqrt(2), 1,
> -1/2-1/4*I*sqrt(2)]): l1:=vector([6/5, 1, 8/5]): l2:=vector([1,
> -1/3-1/3*I*sqrt(2), -2/3+1/3*I*sqrt(2)]): l3:=vector([1,
> -1/3+1/3*I*sqrt(2), -2/3-1/3*I*sqrt(2)]):
> n1:=evalm(l1&*r1): n2:=evalc(evalm(l2&*r2)):
> n3:=evalc(evalm(l3&*r3)):
Probability matrices.
> P1:=array([[l1[1]*r1[1], l1[2]*r1[1],l1[3]*r1[1]], [l1[1]*r1[2],
> l1[2]*r1[2],l1[3]*r1[2]],[l1[1]*r1[3], l1[2]*r1[3],l1[3]*r1[3]]]);
 
6 8
 5 1 5 
 
 6 8 
P1 :=   5 1 
 5 
 6 8 
1
5 5
> P2:=array([[l2[1]*r2[1], l2[2]*r2[1],l2[3]*r2[1]], [l2[1]*r2[2],
> l2[2]*r2[2],l2[3]*r2[2]],[l2[1]*r2[3], l2[2]*r2[3],l2[3]*r2[3]]]);
 
1 1 √ 1 1 √ 1 1 √ 2 1 √ 1 1 √
 − 6 − 3 I 2 (− 3 − 3 I 2) (− 6 − 3 I 2) (− 3 + 3 I 2) (− 6 − 3 I 2) 
 
 1 1 √ 2 1 √ 
P2 :=  1 − − I 2 − + I 2 

 3 3 3 3 
 1 1 √ 1 1 √ 1 1 √ 2 1 √ 1 1 √ 
− + I 2 (− − I 2) (− + I 2) (− + I 2) (− + I 2)
2 4 3 3 2 4 3 3 2 4
> P3:=array([[l3[1]*r3[1], l3[2]*r3[1],l3[3]*r3[1]], [l3[1]*r3[2],
> l3[2]*r3[2],l3[3]*r3[2]],[l3[1]*r3[3], l3[2]*r3[3],l3[3]*r3[3]]]);
 
1 1 √ 1 1 √ 1 1 √ 2 1 √ 1 1 √

 6 3 + I 2 (− + I 2) (− + I 2) (− − I 2) (− + I 2) 
 3 3 6 3 3 3 6 3 
 1 1 √ 2 1 √ 
P3 :=  1 − + I 2 − − I 2 
3 3 3 3 
 
 1 1 √ 1 1 √ 1 1 √ 2 1 √ 1 1 √ 
− − I 2 (− + I 2) (− − I 2) (− − I 2) (− − I 2)
2 4 3 3 2 4 3 3 2 4

> P:=s->(1/n1)*P1+(1/n2)*(r[2][1])^s*P2+(1/n3)*(r[3][1])^s*P3;
> Question (a)
P1 r21 s P2 r3 1 s P3
P := s → + +
n1 n2 n3
> Pr:=s->evalm(P(s)):
> p0:=vector([1,0,0]);
p0 := [1, 0, 0]
> PS:=s->evalm(p0&*P(s)): PS(s)[1];
1 1 √ s 1 1 √ 1 1 √ s 1 1 √
6 (− + I 2) (− − I 2) (− − I 2) (− + I 2)
+ 2 4 6 3 + 2 4 6 3
19 1 √ 1 √
− −I 2 − +I 2
3 3

> evalc(PS(infinity)[1]); evalc(PS(2)[1]);


6
19
3
8

Problem 5.3
> with(linalg):
Warning, the protected names norm and trace have been redefined and
unprotected
> Q:=array([[0,1/2,1/2],[1/3,0,2/3],[1/3,2/3,0]]);
 
1 1
 0 2 2 
 
 1 2 
Q := 
 3 0 3 

 
 1 2 
0
3 3
> QT:=transpose(Q):
> r:=eigenvectors(Q):
> l:=eigenvectors(QT):
> r[1][1]; r[2][1]; r[3][1];
−1
3
−2
3
1
> l[1][1]; l[2][1]; l[3][1];
−1
3
1
−2
3
> r[3][3]; r[1][3]; r[2][3];
{[1, 1, 1]}
{[−3, 1, 1]}
{[0, −1, 1]}
> l[2][3]; l[1][3]; l[3][3];
· ¸
3 3
{ 1, , }
2 2
{[−2, 1, 1]}
{[0, −1, 1]}
> r1:=vector([1, 1, 1]): r2:=vector([-3, 1, 1]): r3:=vector([0, -1,
> 1]): l1:=vector([1, 3/2, 3/2]): l2:=vector([-2, 1, 1]): l3:=vector([0,
> -1, 1]):
> n1:=evalm(l1&*r1): n2:=evalc(evalm(l2&*r2)):
> n3:=evalc(evalm(l3&*r3)):

Probability matrices.
> P1:=array([[l1[1]*r1[1], l1[2]*r1[1],l1[3]*r1[1]], [l1[1]*r1[2],
> l1[2]*r1[2],l1[3]*r1[2]],[l1[1]*r1[3], l1[2]*r1[3],l1[3]*r1[3]]]);
 
3 3
 1 2 2 
 
 3 3 

P1 :=  1 
2 2 
 
 3 3 
1
2 2
> P2:=array([[l2[1]*r2[1], l2[2]*r2[1],l2[3]*r2[1]], [l2[1]*r2[2],
> l2[2]*r2[2],l2[3]*r2[2]],[l2[1]*r2[3], l2[2]*r2[3],l2[3]*r2[3]]]);
 
6 −3 −3
P2 :=  −2 1 1 
−2 1 1
> P3:=array([[l3[1]*r3[1], l3[2]*r3[1],l3[3]*r3[1]], [l3[1]*r3[2],
> l3[2]*r3[2],l3[3]*r3[2]],[l3[1]*r3[3], l3[2]*r3[3],l3[3]*r3[3]]]);
 
0 0 0
P3 :=  0 1 −1 
0 −1 1

> P:=s->(1/n1)*P1+(1/n2)*(r[3][1])^s*P2+(1/n3)*(r[2][1])^s*P3;
> Question (a)
P1 r3 1 s P2 r21 s P3
P := s → + +
n1 n2 n3
> Pr:=s->evalm(P(s)): Pr(s);
> Conditional probability
 
1 0 0
 1 1 −2 s 1 1 −2 s 
 0 + ( ) − ( ) 
 2 2 3 2 2 3 
 
 1 1 −2 s 1 1 −2 s 
0 − ( ) + ( )
2 2 3 2 2 3
> p0:=vector([0,0,1]):
> PS:=s->evalm(p0&*P(s)): PS(s);

Probability vector

· ¸
1 1 −2 s 1 1 −2 s 7 1 −2 s
− ( ) ,− + ( ) , + ( )
4 4 3 8 8 3 8 8 3

> PS(infinity); PS(0);


· ¸
1 1
0, ,
2 2
[0, 0, 1]
> moment:=1*PS(s)[1]+2*PS(s)[2]+3*PS(s)[3]: moment;
5 1 −2 s
+ ( )
2 2 3
> corr:=s->simplify(sum(n*sum(m*PS(s)[m]*Pr(s)[m,n], m=1..3),n=1..3)):
> corr(s);
25 5 (−2 s)
+ 3 (−1)(2 s) 2(2 s) + (−1)s 3(1−s) 2(−1+s)
4 4
> corr(0); corr(infinity);
9
25
4

Problem 5.4
> with(linalg):
Warning, the protected names norm and trace have been redefined and
unprotected
> Q0:=s->array([[0,(cos(Pi*s/2))^2,(sin(Pi*s/2))^2],[1/4
> +1/2*(sin(Pi*s/2))^2,0,1/4+1/2*(cos(Pi*s/2))^2],[1/2*(cos(Pi*s/2))^2,1
> /2+1/2*(sin(Pi*s/2))^2,0]]): Q0(s);
 
1 1
 0 cos( π s)2 sin( π s)2 
 2 2 
 1 1 
 + sin( 1 π s)2 0
1 1 1
+ cos( π s)2 
 4 2 2 4 2 2 
 
 1 1 1 1 1 
cos( π s)2 + sin( π s)2 0
2 2 2 2 2
> simplify(Q0(2*s));
 
0 cos(π s)2 1 − cos(π s)2
 3 1 1 1 
 − cos(π s)2 0 + cos(π s)2 
 4 2 4 2 
 
 1 1 
cos(π s)2 1 − cos(π s)2 0
2 2

Since [cos(Pi*s)]ˆ2=1, we get


> Q:=array([[0,1,0],[1/4,0,3/4],[1/2,1/2,0]]);

 
0 1 0
 1 3 
 0 
Q := 
 4 4 

 1 1 
0
2 2
> QT:=transpose(Q):
> r:=eigenvectors(Q):
> l:=eigenvectors(QT):
> r[1][1]; r[2][1]; r[3][1];
1
1 1 √
− + I 2
2 4
1 1 √
− − I 2
2 4
> l[1][1]; l[2][1]; l[3][1];
1 1 √
− + I 2
2 4
1 1 √
− − I 2
2 4
1
> r[3][3]; r[1][3]; r[2][3];
· ¸
1 1 √ 1 1 √
{ 1, − − I 2, − + I 2 }
2 4 6 3
{[1, 1, 1]}
· ¸
1 1 √ 1 1 √
{ 1, − + I 2, − − I 2 }
2 4 6 3
> l[3][3]; l[1][3]; l[2][3];
· ¸
8 6
{ 1, , }
5 5
· ¸
1 1 √ 2 1 √
{ − − I 2, − + I 2, 1 }
3 3 3 3
· ¸
1 1 √ 2 1 √
{ − + I 2, − − I 2, 1 }
3 3 3 3
> r1:=vector([1, 1, 1]): r2:=vector([1, -1/2+1/4*I*sqrt(2),
> -1/6-1/3*I*sqrt(2)]): r3:=vector([1, -1/2-1/4*I*sqrt(2),
> -1/6+1/3*I*sqrt(2)]): l1:=vector([1, 8/5, 6/5]):
> l2:=vector([-1/3-1/3*I*sqrt(2), -2/3+1/3*I*sqrt(2), 1]):
> l3:=vector([-1/3+1/3*I*sqrt(2), -2/3-1/3*I*sqrt(2), 1]):
> n1:=evalm(l1&*r1): n2:=evalc(evalm(l2&*r2)):
> n3:=evalc(evalm(l3&*r3)):

Probability matrices.
> P1:=array([[l1[1]*r1[1], l1[2]*r1[1],l1[3]*r1[1]], [l1[1]*r1[2],
> l1[2]*r1[2],l1[3]*r1[2]],[l1[1]*r1[3], l1[2]*r1[3],l1[3]*r1[3]]]);
 
8 6
 1 5 5 
 
 8 6 

P1 :=  1 
 5 5 
 8 6 
1
5 5
> P2:=array([[l2[1]*r2[1], l2[2]*r2[1],l2[3]*r2[1]], [l2[1]*r2[2],
> l2[2]*r2[2],l2[3]*r2[2]],[l2[1]*r2[3], l2[2]*r2[3],l2[3]*r2[3]]]);
 
1 1 √ 2 1 √
 − − I 2 − + I 2 1 
 3 3 3 3 
 1 1 √ 1 1 √ 2 1 √ 1 1 √ 1 1 √ 
P2 :=  (− − I 2) (− + I 2) (− + I 2) (− + I 2) − + I 2 


 3 3 2 4 3 3 2 4 2 4 
 1 1 √ 1 1 √ 2 1 √ 1 1 √ 1 1 √ 
(− − I 2) (− − I 2) (− + I 2) (− − I 2) − − I 2
3 3 6 3 3 3 6 3 6 3
> P3:=array([[l3[1]*r3[1], l3[2]*r3[1],l3[3]*r3[1]], [l3[1]*r3[2],
> l3[2]*r3[2],l3[3]*r3[2]],[l3[1]*r3[3], l3[2]*r3[3],l3[3]*r3[3]]]);
 
1 1 √ 2 1 √
 − + I 2 − − I 2 1 
 3 3 3 3 
 1 1 √ 1 1 √ 2 1 √ 1 1 √ 1 1 √ 
P3 := 
 (− + I 2) (− − I 2) (− − I 2) (− − I 2) − − I 2 

 3 3 2 4 3 3 2 4 2 4 
 1 1 √ 1 1 √ 2 1 √ 1 1 √ 1 1 √ 
(− + I 2) (− + I 2) (− − I 2) (− + I 2) − + I 2
3 3 6 3 3 3 6 3 6 3

> P:=s->(1/n1)*P1+(1/n2)*(r[1][1])^(2*s)*P2+(1/n3)*(r[2][1])^(2*s)*P3;
> Question (a)
P1 r11 (2 s) P2 r21 (2 s) P3
P := s → + +
n1 n2 n3
> Pr:=s->evalm(P(s)):
> Conditional probability
> p01:=vector([1,0,0]): p02:=vector([0,1,0]):
> PS1:=s->evalm(p01&*P(s)): PS2:=s->evalm(p02&*P(s)):
> PS1(s)[1];
1 1 √ 1 1 √ (2 s) 1 1 √
5 − − I 2 (− + I 2) (− + I 2)
+ 3 3 + 2 4 3 3
19 1 √ 1 √
− −I 2 − +I 2
3 3
> PS1(s)[2];
2 1 √ 1 1 √ (2 s) 2 1 √
8 − + I 2 (− + I 2) (− − I 2)
+ 3 3 + 2 4 3 3
19 1 √ 1 √
− −I 2 − +I 2
3 3
> PS2(s)[1];
1 1 √ 1 1 √ 1 1 √ (2 s) 1 1 √ 1 1 √
5 (− − I 2) (− + I 2) (− + I 2) (− + I 2) (− − I 2)
+ 3 3 2 4 + 2 4 3 3 2 4
19 1 √ 1 √
− −I 2 − +I 2
3 3
> PS2(s)[2];
2 1 √ 1 1 √ 1 1 √ (2 s) 2 1 √ 1 1 √
8 (− + I 2) (− + I 2) (− + I 2) (− − I 2) (− − I 2)
+ 3 3 2 4 + 2 4 3 3 2 4
19 1 √ 1 √
− −I 2 − +I 2
3 3
The results can be transformed to polar form by the command polar.

Problem 5.5
> assume(N,natural): assume(k,real): assume(s>0); assume(n,integer):
> P:=(n,s,N)->(1/(2*N+1))*(1+2*sum(cos(2*Pi*k*n/(2*N+1))*exp(-2*s*(sin(
> Pi*k/(2*N+1)))^2),k=1..N)):
> P(n,s,N);
à !

π k˜ n˜ (−2 s˜ sin( π k˜ )2 )
1 + 2 ∑ cos(2 )e 2 N˜+1

k˜=1 2 N˜ + 1
2 N˜ + 1
> Pm:=[seq(P(i,2,2),i=-2..2)]: evalf(Pm);
[.1220644066, .2223516301, .3111679263, .2223516301, .1220644066]

Problem 5.8
> with(linalg):
Warning, the protected names norm and trace have been redefined and
unprotected
> w:=matrix(3,3,[0,1/2,1/2,1/3,0,1/3,1/3,1/3,0]);
 
1 1
 0
 2 2  
 1 1 
w :=  0 
3 3 
 
 1 1 
0
3 3
> M:=matrix(3,3,[-1,1/2,1/2,1/3,-1,1/3,1/3,1/3,-1]);
 
1 1
 −1 2 2 
 
 1 1 
M :=  3 −1 

 3 
 1 1 
−1
3 3
> MT:=transpose(M):
> eigenvectors(M);
· ¸ · ¸
5 1√ 1 1√ 5 1√ 1 1√
[− + 13, 1, { − + 13, 1, 1 }], [− − 13, 1, { − − 13, 1, 1 }],
6 6 2 2 6 6 2 2
−4
[ , 1, {[0, −1, 1]}]
3
> eigenvectors(MT);
· ¸
−4 5 1√ 1 1√
[ , 1, {[0, −1, 1]}], [− + 13, 1, { − + 13, 1, 1 }],
3 6 6 3 3
· ¸
5 1√ 1 1√
[− − 13, 1, { − − 13, 1, 1 }]
6 6 3 3
> lambda:=vector([-5/6+1/6*sqrt(13),-5/6-1/6*sqrt(13),-4/3]):
> r1:=vector([-1/2+1/2*sqrt(13), 1, 1]); r2:=vector([-1/2-1/2*sqrt(13),
> 1, 1]); r3:=vector([0, -1, 1]);
· ¸
1 1√
r1 := − + 13, 1, 1
2 2
· ¸
1 1√
r2 := − − 13, 1, 1
2 2
r3 := [0, −1, 1]
> l1:=vector([-1/3+1/3*sqrt(13), 1, 1]); l2:=vector([-1/3-1/3*sqrt(13),
> 1, 1]); l3:=vector([0, -1, 1]);
· ¸
1 1√
l1 := − + 13, 1, 1
3 3
· ¸
1 1√
l2 := − − 13, 1, 1
3 3
l3 := [0, −1, 1]
1
> n1:=evalm(l1&*r1): n2:=evalm(l2&*r2): n3:=evalm(l3&*r3):

> Q1:=-sum((1/n1)*r1[1]*l1[k]/lambda[1],k=1..3):
> Q2:=-sum((1/n2)*r1[1]*l1[k]/lambda[2],k=1..3):
> Q3:=-sum((1/n3)*r1[1]*l1[k]/lambda[1],k=1..3):
> evalf(Q1+Q2+Q3);
> First passage time
13.64536110

> :
Bibliography

[1] L. E. Reichl, A Modern Course in Statistical Physics, 1998.

