1 Introduction
The purpose of these lectures is to provide important statistical mechanics background, especially for soft condensed matter physics. Of course, this raises the question of what exactly is important and what is less so. This is inevitably
a subjective matter, and the choices made here are somewhat personal, and
with an eye toward the Amsterdam (VU, UvA, AMOLF) context. These notes
have taken shape over the past approximately ten years, beginning with a joint
course developed by Daan Frenkel (then, at AMOLF/UvA) and Fred MacK-
intosh (VU), as part of the combined UvA-VU masters program. Thus, these
lecture notes have evolved over time, and are likely to continue to do so.
In preparing these lectures, we chose to focus on both the essential techniques
of equilibrium statistical physics, as well as a few special topics in classical sta-
tistical physics that are broadly useful, but which all too frequently fall outside
the scope of most elementary courses on the subject, i.e., what you are likely
to have seen in earlier courses. Most of the topics covered in these lectures can
be found in standard texts, such as the book by Chandler [1]. Another very
useful book is that by Barrat and Hansen [2]. Finally, a very useful, though
more advanced reference is the book by Chaikin and Lubensky [3].
In thermodynamics, we encounter concepts such as energy, entropy and tem-
perature. I have no doubt that the reader has, at one time or another, been
introduced to all these concepts. Thermodynamics is often perceived as very
abstract and it is easy to forget that it is firmly rooted in experiment. I shall
therefore briefly sketch how to derive thermodynamics from a small number of
experimental observations.
Statistical mechanics provides a mechanical interpretation (classical or quan-
tum) of rather abstract thermodynamic quantities such as entropy. Again, the
reader is probably familiar with the basis of statistical mechanics. But, for the
purpose of the course, I shall re-derive some of the essential results. The aim
of this derivation is to show that there is nothing mysterious about concepts
such as phase space, temperature and entropy and many of the other statistical
mechanical objects that will appear time and again in the remainder of this
course.
1.1 Thermodynamics
Thermodynamics is a remarkable discipline. It provides us with relations be-
tween measurable quantities. Relations such as
dU = q + w    (1.1)

or

dU = T dS − P dV + μ dN    (1.2)

or

(∂S/∂V)_T = (∂P/∂T)_V    (1.3)
These relations are valid for any substance. But - precisely for this reason
- thermodynamics contains no information whatsoever about the underlying
microscopic structure of a substance. Thermodynamics preceded statistical me-
chanics and quantum mechanics, yet not a single thermodynamic relation had
to be modified in view of these later developments. The reason is simple: ther-
modynamics is a phenomenological science. It is based on the properties of
matter as we observe it, not upon any theoretical ideas that we may have about
matter.
Thermodynamics is difficult because it seems so abstract. However, we
should always bear in mind that thermodynamics is based on experimental
observations. For instance, the First Law of thermodynamics expresses the em-
pirical observation that energy is conserved, even though it can be converted in
various forms. The internal energy of a system can be changed by performing
work w on the system or by transferring an amount of heat q. It is meaning-
less to speak about the total amount of heat in a system, or the total amount
of work. This is not mysterious: it is just as meaningless to speak about the
number of train-travelers and the number of pedestrians in a train-station: peo-
ple enter the station as pedestrians, and exit as train-travelers or vice versa.
However, if we add the changes in the number of train-travelers and pedestrians, then we obtain the change in the number of people in the station.
And this quantity is well defined. Similarly, the sum of q and w is equal to the
change in the internal energy U of the system
dU = q + w (1.4)
This is the First Law of Thermodynamics. The Second Law seems more ab-
stract, but it is not. The Second Law is based on the experimental observation
that it is impossible to make an engine that works by converting heat from a
single heat bath (i.e. a large reservoir in equilibrium) into work. This observa-
tion is equivalent to another - equally empirical - observation, namely that heat
can never flow spontaneously (i.e. without performing work) from a cold reservoir to a warmer reservoir. This statement is actually a bit more subtle than
it seems because, before we have defined temperature, we can only distinguish
hotter and colder by looking at the direction of heat flow. What the Second
Law says is that it is never possible to make heat flow spontaneously in the
wrong direction. How do we get from such a seemingly trivial statement to
something as abstract as entropy? This is most easily achieved by introducing
the concept of a reversible heat engine.
A reversible engine is, as the word suggests, an engine that can be operated
in reverse. During one cycle (a sequence of steps that is completed when the
engine is returned into its original state) this engine takes in an amount of heat
q1 from a hot reservoir converts part of it into work w and delivers a remaining
amount of heat q2 to a cold reservoir. The reverse process is that, by performing
an amount of work w, we can take an amount of heat q2 from the cold reservoir
and deliver an amount of heat q1 to the hot reservoir. Reversible engines are
an idealization because in any real engine there will be additional heat losses.
However, the ideal reversible engine can be approximated arbitrarily closely by
a real engine if, at every stage, the real engine is sufficiently close to equilibrium.
As the engine is returned to its original state at the end of one cycle, its internal
energy U has not changed. Hence, the First Law tells us that
dU = q1 − (w + q2) = 0    (1.5)
or
q1 = w + q2 (1.6)
Now consider the efficiency of the engine, η ≡ w/q1, i.e. the amount of work delivered per amount of heat taken in. At first, one might think that η depends on the precise design of our reversible engine. However, this is not true: η is the same for all reversible engines operating between the same two reservoirs. To demonstrate this, we show that if different engines could have different values for η, then we would contradict the Second Law in its form "heat can never spontaneously flow from a cold to a hot reservoir". Suppose therefore that we have another reversible engine that takes in an amount of heat q1′ from the hot reservoir, delivers the same amount of work w, and then delivers an amount of heat q2′ to the cold reservoir. Let us denote the efficiency of this engine by η′.
Now we use the work generated by the engine with the highest efficiency (say η) to drive the second engine in reverse. The amount of heat delivered to the hot reservoir by the second engine is

q1′ = w/η′ = q1 (η/η′)    (1.7)
Historical footnote: Daniel Gabriel Fahrenheit's parents both died from eating poisonous mushrooms. The city council put the four younger Fahrenheit children in foster homes, but apprenticed Daniel to a merchant, who taught him bookkeeping and took him off to Amsterdam. He settled in Amsterdam in 1701 looking for a trade. He became interested in the manufacture of scientific instruments, specifically meteorological apparatus. The first thermometers were constructed in around 1600 by Galileo. These were gas thermometers, in which the expansion or contraction of air raised or lowered a column of water, but fluctuations in air pressure caused inaccuracies. The Florentine thermometer showed up as a trade item in Amsterdam, and it caught young Fahrenheit's fancy. So he skipped out on his apprenticeship and borrowed against his inheritance to take up thermometer making. When the city fathers of Gdansk found out, they arranged to have the 20-year-old Fahrenheit arrested and shipped off to the East India Company. So he had to dodge the Dutch police until he became a legal adult at the age of 24. At first he had simply been on the run, but he kept traveling through Denmark, Germany, Holland, Sweden and Poland, meeting scientists and other instrument makers, before returning to Amsterdam to begin trading in 1717. In 1695 Guillaume Amontons had improved the gas thermometer by using mercury in a closed column, and in 1701 Olaus Roemer devised the alcohol thermometer and developed a scale with boiling water at 60 and an ice/salt mixture at 0. Fahrenheit met with Roemer and took up his idea, constructing the first mercury-in-glass thermometer in 1714. This was more accurate because mercury possesses a more constant rate of expansion than alcohol and could be used over a wider range of temperatures. To exploit this greater sensitivity, Fahrenheit expanded Roemer's scale, using body temperature (90F) and an ice/salt mixture (0F) as fixed reference points, with the freezing point of water awarded a value of 30; he later revised the scale so that water froze at 32 degrees and body temperature was 96 degrees. Why the funny numbers? Fahrenheit originally used a twelve-point scale with zero, four and twelve for those three benchmarks; he then put eight gradations in each large division. Body temperature on the Fahrenheit scale is thus 96 (actually, it is a bit higher): simply eight times twelve. Using his thermometer, Fahrenheit was able to determine the boiling points of liquids and show that liquids other than water also had fixed boiling points that varied with atmospheric pressure. He also researched the phenomenon of supercooling of water: the cooling of water below freezing without solidification. He published his method for making thermometers in the Philosophical Transactions in 1724 and was admitted to the Royal Society the same year.
As, by assumption, η > η′, we find that q1′ > q1: in one complete cycle of the combined engines no net work is performed, yet an amount of heat q1′ − q1 > 0 has been transferred from the cold to the hot reservoir. This contradicts the Second Law. Hence all reversible engines operating between the same two reservoirs have the same efficiency, and the ratio q2/q1 ≡ R(t2, t1) can depend only on the (empirical) temperatures t1 and t2 of the two reservoirs. If we now operate two reversible engines in series, one between t3 and t2 and one between t2 and t1, and note that the combination is itself a reversible engine and hence equally efficient, it follows that

R(t3, t1) = R(t3, t2) R(t2, t1)    (1.8)

This can only be true in general if R(t2, t1) is of the form

R(t2, t1) = f(t2)/f(t1)    (1.9)
where f (t) is an, as yet unknown function of our measured temperature. What
we do now is to introduce an absolute or thermodynamic temperature T given
by
T = f (t) (1.10)
Then it immediately follows that
q2/q1 = R(t2, t1) = T2/T1    (1.11)
Note that the thermodynamic temperature could just as well have been defined
as c f (t). In practice, c has been fixed such that, around room temperature, 1
degree in the absolute (Kelvin) scale is equal to 1 degree Celsius. But that choice
is of course purely historical and - as it will turn out later - a bit unfortunate.
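A step the text leaves implicit: combining the First Law, Eqn. 1.6, with Eqn. 1.11 gives the efficiency of any reversible engine directly in terms of the two reservoir temperatures:

```latex
\eta \equiv \frac{w}{q_1} = \frac{q_1 - q_2}{q_1}
  = 1 - \frac{q_2}{q_1}
  = 1 - \frac{T_2}{T_1}.
```

This is the familiar Carnot result: the efficiency depends only on the ratio T2/T1, so the multiplicative constant c in the definition of the absolute temperature drops out of all physical predictions.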
Now, why do we need all this? We need it to introduce entropy, this most
mysterious of all thermodynamic quantities. To do so, note that Eqn.1.11 can
be written as
q1/T1 = q2/T2    (1.12)
where q1 is the heat that flows in reversibly at the high temperature T1 , and
q2 is the heat that flows out reversibly at the low temperature T2 . We see
therefore that, during a complete cycle, the difference between q1 /T1 and q2 /T2
is zero. Recall that, at the end of a cycle, the internal energy of the system
has not changed. Now Eqn.1.12 tells us that there is also another quantity
that we call entropy and that we denote by S that is unchanged when we
restore the system to its original state. In the language of thermodynamics, we
call S a state function. We do not know what S is, but we do know how to
compute its change. In the above example, the change in S was given by ΔS = (q1/T1) − (q2/T2) = 0. In general, the change in entropy of a system due to the
reversible addition of an infinitesimal amount of heat qrev from a reservoir at
temperature T is
dS = qrev/T    (1.13)
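To make Eqn. 1.13 concrete, here is a small added illustration (not part of the original notes): for a body with a temperature-independent heat capacity C that is heated reversibly from T1 to T2, integrating dS = C dT/T gives ΔS = C ln(T2/T1). The numbers below, roughly a kilogram of water heated from 20 °C to 80 °C, are purely illustrative:

```python
import math

def entropy_change(C, T1, T2, steps=100_000):
    """Integrate dS = dq_rev/T = C dT/T from T1 to T2 (midpoint rule)."""
    dT = (T2 - T1) / steps
    S = 0.0
    for i in range(steps):
        T = T1 + (i + 0.5) * dT
        S += C * dT / T
    return S

C = 4184.0                 # J/K: roughly 1 kg of water (illustrative value)
T1, T2 = 293.15, 353.15    # 20 C to 80 C
numeric = entropy_change(C, T1, T2)
exact = C * math.log(T2 / T1)
print(numeric, exact)      # the two agree: about 779 J/K
```

The numerical integral of dq_rev/T reproduces the closed form, which is the sense in which entropy changes are computed from reversible heat flows.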
We also note that S is extensive. That means that the total entropy of two
non-interacting systems, is equal to the sum of the entropy of the individual
systems. Consider a system with a fixed number of particles N and a fixed
volume V. If we transfer an infinitesimal amount of heat q to this system, then the change in the internal energy of the system, dU, is equal to q. Hence,

(∂S/∂U)_{V,N} = 1/T    (1.14)
The most famous (though not most intuitively obvious) statement of the Second
Law of Thermodynamics is that Any spontaneous change in a closed system
(i.e. a system that exchanges neither heat nor particles with its environment)
can never lead to a decrease of the entropy. Hence, in equilibrium, the entropy
of a closed system is at a maximum. Can we understand this? Well, let us first
consider a system with an energy U , volume V and number of particles N that
is in equilibrium. Let us denote the entropy of this system by S0 (U, V, N ). In
equilibrium, all spontaneous changes that can happen, have happened. Now
suppose that we want to change something in this system - for instance, we
increase the density in one half and decrease it in the other. As the system was
in equilibrium, this change does not occur spontaneously. Hence, in order to
realize this change, we must perform a certain amount of work, w (for instance, by placing a piston in the system and moving it). Let us perform this work
reversibly in such a way that E, the total energy of the system stays constant
(and also V and N ). The First Law tells us that we can only keep E constant if,
while we do the work, we allow an amount of heat q, flow out of the system, such
that q = w. Eqn.1.13 tells us that when an amount of heat q flows out of the
system, the entropy S of the system must decrease. Let us denote the entropy
of this constrained state by S1 (E, V, N ) < S0 (E, V, N ). Having completed the
change in the system, we insulate the system thermally from the rest of the
world, and we remove the constraint that kept the system in its special state
(taking the example of the piston: we make an opening in the piston). Now the
system goes back spontaneously (and irreversibly) to equilibrium. However,
no work is done and no heat is transferred. Hence the final energy E is equal
to the original energy (and V and N ) are also constant. This means that the
system is now back in its original equilibrium state and its entropy is once
more equal to S0 (E, V, N ). The entropy change during this spontaneous change
is equal to ΔS = S0 − S1. But, as S1 < S0, it follows that ΔS > 0. As this
argument is quite general, we have indeed shown that any spontaneous change
in a closed system leads to an increase in the entropy. Hence, in equilibrium,
the entropy of a closed system is at a maximum.
From this point on, we can derive all of thermodynamics, except one law
- the so-called Third Law of Thermodynamics. The Third law states that,
at T = 0, the entropy of the equilibrium state of a pure substance equals zero.
Actually, the Third Law is not nearly as basic as the First and the Second.
And, anyway, we shall soon get a more direct interpretation of its meaning.
all calculations and it is therefore most fortunate that it is indeed justified in
many cases of practical interest (although the fraction of non-trivial models that
can be solved exactly still remains depressingly small). It is therefore somewhat
surprising that, in order to derive the basic laws of statistical mechanics, it is easier to use the language of quantum mechanics. I will follow this route of least
resistance. In fact, for our derivation, we need only little quantum mechanics.
Specifically, we need the fact that a quantum mechanical system can be found
in different states. For the time being, I limit myself to quantum states that are
eigenvectors of the Hamiltonian H of the system (i.e. energy eigenstates). For
any such state |i >, we have that H|i > = Ei |i >, where Ei is the energy of
state |i >. Most examples discussed in quantum-mechanics textbooks concern
systems with only few degrees of freedom (e.g. the one-dimensional harmonic
oscillator or a particle in a box). For such systems, the degeneracy of energy
levels will be small. However, for the systems that are of interest to statistical
mechanics (i.e. systems with O(10^23) particles), the degeneracy of energy levels
is astronomically large - actually, the word astronomical is misplaced - the
numbers involved are so large that, by comparison the total number of particles
in the universe is utterly negligible.
In what follows, we denote by Ω(U, V, N) the number of energy levels with energy U of a system of N particles in a volume V. We now express the basic assumption of statistical thermodynamics as follows: a system with fixed N, V and U is equally likely to be found in any of its Ω(U) energy levels. Much
of statistical thermodynamics follows from this simple (but highly non-trivial)
assumption.
To see this, let us first consider a system with total energy U that consists of two weakly interacting sub-systems. In this context, weakly interacting means
that the sub-systems can exchange energy but that we can write the total energy
of the system as the sum of the energies U1 and U2 of the sub-systems. There
are many ways in which we can distribute the total energy over the two sub-
systems, such that U1 + U2 = U. For a given choice of U1, the total number of degenerate states of the system is Ω1(U1) Ω2(U2). Note that the total number
of states is not the sum but the product of the number of states in the individual
systems. In what follows, it is convenient to have a measure of the degeneracy
of the sub-systems that is extensive (i.e. additive). A logical choice is to take
the (natural) logarithm of the degeneracy. Hence:
ln Ω(U1, U − U1) = ln Ω1(U1) + ln Ω2(U − U1)    (1.15)

Because the numbers of states involved are astronomically large, the distribution of U1 is sharply peaked around its most probable value, i.e. the value of U1 that maximizes ln Ω(U1, U − U1). The condition for this maximum is that

(∂ ln Ω(U1, U − U1)/∂U1)_{N,V,U} = 0    (1.16)

or, in other words,

∂ ln Ω1(U1)/∂U1 = ∂ ln Ω2(U2)/∂U2    (1.17)

If we define

β(U, V, N) ≡ (∂ ln Ω(U, V, N)/∂U)_{N,V}    (1.18)

then the condition for the most probable partitioning of the energy between the two sub-systems becomes

β(U1, V1, N1) = β(U2, V2, N2)    (1.19)
Clearly, if initially we put all energy in system 1 (say), there will be energy
transfer from system 1 to system 2 until Eqn. 1.19 is satisfied. From that
moment on, there is no net energy flow from one sub-system to the other, and
we say that the two sub-systems are in thermal equilibrium. This implies that
the condition β(U1, V1, N1) = β(U2, V2, N2) must be equivalent to the statement that the two sub-systems have the same temperature. ln Ω is a state function (of U, V and N), just like S. Moreover, when thermal equilibrium is reached, ln Ω of the total system is at a maximum, again just like S. This suggests that ln Ω is closely related to S. We note that both S and ln Ω are extensive. This suggests that S is simply proportional to ln Ω:

S ≡ kB ln Ω    (1.20)
where kB is Boltzmann's constant which, in S.I. units, has the value 1.3806503 × 10^−23 J/K. This constant of proportionality cannot be derived - as we shall see
later, it follows from the comparison with experiment. With this identification,
we see that our assumption that all degenerate energy levels of a quantum system
are equally likely immediately implies that, in thermal equilibrium, the entropy
of a composite system is at a maximum. Hence, in the statistical picture, the
Second Law of thermodynamics is not at all mysterious, it simply states that,
in thermal equilibrium, the system is most likely to be found in the state that
has the largest number of degenerate energy levels. The next thing to note is
that thermal equilibrium between sub-systems 1 and 2 implies that β1 = β2.
In thermodynamics, we have another way to express the same thing: we say
that two bodies that are brought in thermal contact are in equilibrium if their
temperatures are the same. This suggests that must be related to the absolute
temperature. The thermodynamic definition of temperature is
1/T = (∂S/∂U)_{V,N}    (1.21)
If we use the same definition here, we find that
β = 1/(kB T)    (1.22)
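The maximization of ln Ω1 + ln Ω2 can be made concrete with a toy model that is not part of the original notes: two Einstein solids, for which there are Ω(q, N) = (q + N − 1)!/(q!(N − 1)!) ways of distributing q energy quanta over N oscillators, sharing a fixed total number of quanta. The sizes below are arbitrary:

```python
from math import comb, log

def ln_omega(q, N):
    """ln of the number of ways to distribute q quanta over N oscillators."""
    return log(comb(q + N - 1, q))

NA, NB = 300, 200        # sizes of the two Einstein solids (arbitrary)
q_total = 100            # total number of energy quanta they share

# ln[Omega_1(qA) * Omega_2(q_total - qA)] for every possible split
ln_total = [ln_omega(qA, NA) + ln_omega(q_total - qA, NB)
            for qA in range(q_total + 1)]
qA_star = max(range(q_total + 1), key=lambda qA: ln_total[qA])
print(qA_star)           # 60: the quanta divide so that qA/NA = qB/NB

# the discrete analogue of beta = d ln(Omega)/dU is equal for both solids
betaA = ln_omega(qA_star + 1, NA) - ln_omega(qA_star, NA)
betaB = ln_omega(q_total - qA_star + 1, NB) - ln_omega(q_total - qA_star, NB)
print(betaA, betaB)      # nearly equal: the two sub-systems are in equilibrium
```

The most probable split equalizes the energy per oscillator, and at that split the finite-difference estimates of β for the two sub-systems coincide, which is the content of Eqn. 1.19.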
We can therefore compute the thermal average of A as

⟨A⟩ = Σ_i exp(−Ei/kB T) ⟨i|A|i⟩ / Σ_i exp(−Ei/kB T)    (1.24)
where [K, U] is the commutator of the kinetic and potential energy operators, while O([K, U]) is meant to denote all terms containing commutators and higher-order commutators of K and U. It is easy to verify that the commutator [K, U] is of order ℏ. Hence, in the limit ℏ → 0, we may ignore the terms of order O([K, U]). In that case, we can write
If we use the notation |ri for eigenvectors of the position operator and |ki for
eigenvectors of the momentum operator, we can express Eqn. 1.26 as
Tr exp(−βH) = Σ_{r,k} ⟨r|e^{−βU}|r⟩ ⟨r|k⟩ ⟨k|e^{−βK}|k⟩ ⟨k|r⟩    (1.27)
where U (rN ) on the right hand side is no longer an operator, but a function of
the coordinates of all N particles. Similarly,
⟨k| exp(−βK) |k⟩ = exp(−β Σ_{i=1}^{N} p_i²/(2m_i)),

⟨r|k⟩⟨k|r⟩ = 1/V^N
where V is the volume of the system and N the number of particles. Finally,
we can replace the sum over states by an integration over all coordinates and
momenta. The final result is
Tr exp(−βH) ≈ (1/(h^{dN} N!)) ∫ dp^N dr^N exp[−β(Σ_i p_i²/(2m_i) + U(r^N))]    (1.28)
where d is the dimensionality of the system. The factor 1/N ! is a direct result
of the indistinguishability of identical particles. In almost the same way, we
can derive the classical limit for Tr[exp(−βH) A], and finally we can write the classical expression for the thermal average of the observable A as

⟨A⟩ = ∫ dp^N dr^N exp[−βH(p^N, r^N)] A(p^N, r^N) / ∫ dp^N dr^N exp[−βH(p^N, r^N)]    (1.29)

Equations 1.28 and 1.29 are the starting point for virtually all classical simulations of many-body systems.
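As a minimal illustration of how Eqn. 1.29 is used in practice, the sketch below estimates a thermal average by Metropolis Monte Carlo sampling of the Boltzmann distribution, for a single particle in a harmonic well. All parameters are hypothetical, with kB T and the spring constant set to 1, so the exact result is ⟨x²⟩ = kB T/k = 1:

```python
import math, random

random.seed(42)
beta, k = 1.0, 1.0                 # kB*T = 1 and spring constant = 1 (arbitrary)

def U(x):
    return 0.5 * k * x * x         # harmonic potential energy

x = 0.0
sum_x2 = 0.0
nsamp = 400_000
for step in range(nsamp):
    x_trial = x + random.uniform(-1.0, 1.0)            # symmetric trial move
    # Metropolis rule: accept with probability min(1, exp(-beta * dU))
    if random.random() < math.exp(-beta * (U(x_trial) - U(x))):
        x = x_trial
    sum_x2 += x * x                                    # accumulate <x^2>

x2_avg = sum_x2 / nsamp
print(x2_avg)   # close to the exact value kB*T/k = 1
```

The momenta never appear: for observables that depend only on the coordinates, the Gaussian momentum integrals in Eqn. 1.29 cancel between numerator and denominator, which is why configurational Monte Carlo suffices.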
bath is given by Ω_B(U − ε_i). Clearly, the degeneracy of the bath determines the probability P(ε_i) to find system S in state i:

P(ε_i) = Ω_B(U − ε_i) / Σ_j Ω_B(U − ε_j)    (1.30)
F = −kB T ln Q    (1.37)
Further links with thermodynamics
Let us go back to the formulation of the second law of thermodynamics that
states that, in a closed system at equilibrium, the entropy of the system is at a
maximum. As argued above, we have a simple understanding of this law: the
system is most likely to be found in the state that has the largest degeneracy.
Now consider again a thermal reservoir in contact with a (macroscopic) system,
under the condition that system plus reservoir are isolated. Then we know that
the entropy of this combined system should be a maximum at equilibrium. As
before, we can write the total degeneracy of system plus bath as

Ω_total(US) = Ω_system(US) × Ω_bath(Utotal − US)    (1.38)

where US is the internal energy of the system, and Utotal the total internal energy of system plus bath. The condition for equilibrium is that the derivative with respect to US of ln Ω_total vanishes. As in Eqn. 1.32, we expand ln Ω_bath(Utotal − US) up to linear order in US and we find

ln Ω_total(US) = ln Ω_bath(Utotal) + ln Ω_system(US) − US/(kB T)    (1.39)
Note that ln Ω_bath(Utotal) does not depend on US. Hence, to find the maximum of ln Ω_total, we have to locate the maximum of ln Ω_system(US) − βUS. Hence, we arrive at the statement that, for a system in contact with a heat bath, the condition for equilibrium is that ln Ω_system(US) − βUS is at a maximum or, what amounts to the same thing, that US − kB T ln Ω_system(US) is at a minimum. Now we make use of the fact that we have identified kB ln Ω_system with the entropy S
of the system. Hence, we conclude that, at constant temperature, the condition
for equilibrium is
(US − T S) is at a minimum    (1.40)

But from thermodynamics we know that U − T S is nothing else than the
Helmholtz free energy F . Hence, we immediately recover the well-known state-
ment that - at constant temperature and volume - F is at a minimum in equi-
librium.
However, from thermodynamics we know that
dS = dU/T + (P/T) dV − (μ/T) dN    (1.43)
and hence
(∂S/∂V)_{U,N} = P/T    (1.44)
This expresses the fact that, if a system and a bath can exchange both energy
and volume, then the conditions for equilibrium are
Tsys = Tbath (1.45)
and
Psys = Pbath (1.46)
In practice, it is often more convenient to use the relation between the change
in volume and the change in Helmholtz free energy F
dF = −P dV − S dT + μ dN    (1.47)
and the corresponding expression for the pressure
P = −(∂F/∂V)_{T,N}    (1.48)
to obtain
P = kB T (∂ ln Q/∂V)_{T,N}    (1.49)
Later on, we shall use this expression to compute the pressure of gases from our
knowledge of the partition function.
In order to obtain an explicit expression for the pressure, it is convenient to
write the partition function Q in a slightly different way. We assume that the
system is contained in a cubic box with diameter L = V 1/3 . We now define
scaled coordinates sN by
r_i = L s_i for i = 1, 2, …, N    (1.50)
If we now insert these scaled coordinates in the expression for the partition
function, we obtain
Q(N, V, T) = (V^N/(Λ^{3N} N!)) ∫₀¹ ··· ∫₀¹ ds^N exp[−βU(s^N; L)]    (1.51)
P = kB T (∂ ln Q/∂V)_{N,T} = N kB T/V − ⟨∂U(s^N; L)/∂V⟩    (1.52)
The first part on the right-hand side simply yields the ideal-gas pressure N kB T /V .
To evaluate the second term on the right-hand side, we note that the only quan-
tity that depends on V is the potential energy U (sN ; L). We use the chain rule
to compute the derivative
∂U/∂V = Σ_{i=1}^{N} (∂L/∂V) (∂r_i/∂L) · (∂U/∂r_i)
      = Σ_{i=1}^{N} (1/(3L²)) s_i · (∂U/∂r_i)
      = Σ_{i=1}^{N} (1/(3V)) r_i · (∂U/∂r_i)    (1.53)
We can now perform the differentiation in the second term on the right-hand
side of Eqn.1.52 to obtain
P = N kB T/V − ⟨ Σ_{i=1}^{N} (1/(3V)) r_i · (∂U/∂r_i) ⟩
  = N kB T/V + ⟨ Σ_{i=1}^{N} (1/(3V)) r_i · f_i ⟩    (1.54)
where fi denotes the force on particle i due to all other particles. In the special
case that the intermolecular forces are pairwise additive, we can write
f_i = Σ_{j≠i} f_ij

where f_ij is the force on particle i due to particle j. We can now write

P = N kB T/V + ⟨ (1/(3V)) Σ_{i=1}^{N} Σ_{j≠i} r_i · f_ij ⟩    (1.55)

Using Newton's third law, f_ij = −f_ji, this sum can be written in the symmetrized form

P = N kB T/V + ⟨ (1/(6V)) Σ_{i=1}^{N} Σ_{j≠i} r_ij · f_ij ⟩    (1.57)
where r_ij ≡ r_i − r_j. Eqn. 1.57 shows how we can compute the pressure of the
system from knowledge of the particle positions and the intermolecular forces.
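The virial route to the pressure is easy to check numerically. The sketch below, an added example with arbitrary parameters, Lennard-Jones pair forces and no periodic boundaries, evaluates both the total-force form of Eqn. 1.54 and the symmetrized pair form, and verifies that they give identical results:

```python
import random

random.seed(1)
N, V, kT = 10, 1000.0, 1.0       # arbitrary state point; box of side 10 (no PBC)
pos = [[random.uniform(0.0, 10.0) for _ in range(3)] for _ in range(N)]

def lj_force(ri, rj, eps=1.0, sigma=1.0):
    """Force on particle i due to particle j (Lennard-Jones pair potential)."""
    d = [a - b for a, b in zip(ri, rj)]                  # r_ij = r_i - r_j
    r2 = sum(x * x for x in d)
    sr6 = (sigma * sigma / r2) ** 3
    coef = 24.0 * eps * (2.0 * sr6 * sr6 - sr6) / r2
    return [coef * x for x in d]

# total force f_i on each particle (pairwise additive forces)
forces = [[0.0, 0.0, 0.0] for _ in range(N)]
for i in range(N):
    for j in range(N):
        if j != i:
            fij = lj_force(pos[i], pos[j])
            for a in range(3):
                forces[i][a] += fij[a]

# Eqn 1.54: ideal-gas part plus the virial of the total forces
vir_total = sum(pos[i][a] * forces[i][a] for i in range(N) for a in range(3))
P_total = N * kT / V + vir_total / (3.0 * V)

# symmetrized pair form: (1/2) sum over i != j of r_ij . f_ij
vir_pairs = 0.0
for i in range(N):
    for j in range(N):
        if j != i:
            fij = lj_force(pos[i], pos[j])
            rij = [pos[i][a] - pos[j][a] for a in range(3)]
            vir_pairs += 0.5 * sum(r * f for r, f in zip(rij, fij))
P_pairs = N * kT / V + vir_pairs / (3.0 * V)

print(P_total, P_pairs)   # identical up to rounding
```

In a real simulation with periodic boundaries, the pair form with the minimum-image r_ij is the one that must be used, since absolute positions r_i are then not well defined.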
1.3.2 Other ensembles
In the previous section, we considered a system that could exchange energy
with a large thermal bath. This allowed us to derive the relevant statistical
mechanical expressions that describe the behavior of a system of N particles
in a volume V at a temperature T. As in the case of a system at constant N, V and U, a system at constant N V T can be found in any one of
a very large number of quantum states. Such a collection of states is usually
referred to as an ensemble. The probability to find the system in any one of these
states depends on the external conditions (constant N V U or constant N V T ).
The choice of ensemble depends on the experimental conditions that one aims
to model: an isolated system will be described with the N V U -ensemble (which,
for historical reasons, is often called the micro-canonical ensemble). A system
at fixed volume and temperature will be described by the N V T (or canonical)
ensemble. But often it is convenient to consider systems at constant pressure, or
systems that can exchange particles with a reservoir. For every condition, one
can introduce the appropriate ensemble. The procedure is very similar to the
derivation of the canonical ensemble described above. For instance, consider
the situation that a system of N particles can exchange energy and volume
with a large reservoir. The probability to find the system in a given state with
energy Ei and volume Vi is, as before, determined by the number of realizations
Ω_B(Utot − Ei, Vtot − Vi) of the bath. As before, we expand ln Ω_B to first order
in Ei , and now also in Vi . Using
(∂S/∂U)_{V,N} = 1/T

and

(∂S/∂V)_{U,N} = P/T
we obtain
ln Ω_B(Utot − Ei, Vtot − Vi) = ln Ω_B(Utot, Vtot) − (Ei + P Vi)/(kB T)
And the probability to find the system with volume V is given by
P(V) = Q_{NVT} exp(−βP V) / ∫₀^∞ dV′ Q_{NV′T} exp(−βP V′)    (1.58)
and following the same procedure as before, we find that the probability to find
N particles in the system in volume V at temperature T , is given by
P(N) = Q_{NVT} exp(βμN) / Σ_{N′=0}^{∞} Q_{N′VT} exp(βμN′)    (1.59)
P V = kB T ln Ξ
The number of ways in which N atoms can be distributed over M lattice sites,
is
M!
(N ) =
N !(M N )!
The canonical partition function of this system is therefore
M!
Q(N, M, T ) = exp(N ) (1.60)
N !(M N )!
The chemical potential is

μ = −kB T (∂ ln Q/∂N)
  = −ε + kB T (ln N − ln(M − N))    (1.61)
The average number of particles in the system is

⟨N⟩ = ∂ ln Ξ/∂(βμ)
    = M exp(β(μ + ε)) / (1 + exp(β(μ + ε)))    (1.66)

It then immediately follows that the average density ⟨ρ⟩ = ⟨N⟩/M is equal to

⟨ρ⟩ = 1/(1 + exp(−β(μ + ε)))    (1.67)
The answer is the same as the one obtained in Eqn.1.64, but the effort involved
was less. In more complex cases, a judicious choice of ensemble becomes even
more important.
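The claim that the grand-canonical route is easier can also be checked by brute force. The snippet below, an added check with arbitrary parameter values, evaluates the grand-canonical sum over all N for a small lattice exactly and compares the resulting density with Eqn. 1.67:

```python
from math import comb, exp

M = 50                           # number of lattice sites (small enough to enumerate)
beta, mu, eps = 1.0, -2.0, 1.5   # arbitrary inverse temperature, mu and epsilon

# grand-canonical partition function: Xi = sum_N C(M,N) exp(beta*(mu+eps)*N)
weights = [comb(M, N) * exp(beta * (mu + eps) * N) for N in range(M + 1)]
Xi = sum(weights)
rho_exact = sum(N * w for N, w in enumerate(weights)) / (Xi * M)

rho_langmuir = 1.0 / (1.0 + exp(-beta * (mu + eps)))     # Eqn 1.67
print(rho_exact, rho_langmuir)   # agree to machine precision
```

The agreement is exact because the sum over N is a binomial expansion, Ξ = (1 + e^{β(μ+ε)})^M, so each site fills independently, which is the Langmuir adsorption picture.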
1.4 Fluctuations
We started our discussion of statistical mechanical systems by considering a
system of N particles in a volume V with total energy E. We noted that
such a system can be found in a very large number, Ω(N, V, E), of eigenstates.
Subsequently, we considered a system of N particles in a volume V that could
exchange energy with a large thermal bath. In that case, the probability to
find the system in any of the Ω(N, V, E) states with energy E was given by Eqn. 1.33

P(E) = Ω(N, V, E) exp(−E/kB T) / Σ_i exp(−Ei/kB T)
Using Eqn. 1.21, we can express Ω in terms of the entropy S and we find

P(E) ∝ exp[S(N, V, E)/kB − E/(kB T)]

The most likely energy of the system, E*, is the one for which

(∂S/∂E)_{E=E*} = 1/T
Expanding S(E) to second order around E* and inserting the result in the expression for P(E), we obtain
P(E) = (2πkB CV T²)^{−1/2} exp[−(ΔE)²/(2kB CV T²)]    (1.70)
From Eqn. 1.70 we immediately see that the mean-square fluctuation in the energy of a system at constant N, V, T is directly related to the heat capacity CV:

⟨(ΔE)²⟩ = kB T² CV    (1.71)
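Eqn. 1.71 can be verified exactly for the simplest non-trivial example, a two-level system. This is an added illustration with kB = 1 and an arbitrary gap: the energy fluctuation computed from the Boltzmann distribution coincides with T²CV, where CV is obtained by differentiating ⟨E⟩ numerically:

```python
from math import exp

delta = 1.0                  # energy gap of the two-level system (kB = 1)
T = 0.7                      # arbitrary temperature

def avg_E(T):
    w = exp(-delta / T)      # Boltzmann weight of the excited state
    return delta * w / (1.0 + w)

# exact mean-square energy fluctuation <E^2> - <E>^2
w = exp(-delta / T)
E_avg = avg_E(T)
E2_avg = delta * delta * w / (1.0 + w)
fluct = E2_avg - E_avg * E_avg

# heat capacity from a centered finite difference of <E>(T)
h = 1e-5
CV = (avg_E(T + h) - avg_E(T - h)) / (2 * h)

print(fluct, T * T * CV)     # Eqn 1.71 with kB = 1: the two sides agree
```

This is the same fluctuation-response logic that molecular simulations use to extract CV from energy fluctuations sampled at constant N, V, T.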
Using Eqn. 1.70 we can relate any average in the N V E ensemble to the corre-
sponding average in the N V T ensemble by Taylor expansion:
⟨A⟩_{NVT} = ⟨A⟩_{NVE} + ⟨ΔE⟩ (∂A/∂E) + (1/2) ⟨(ΔE)²⟩ (∂²A/∂E²) + O(⟨(ΔE)³⟩)    (1.72)
A well-known application of this conversion is the expression derived by Lebowitz,
Percus and Verlet for the relation between kinetic energy fluctuations in the
N V E and N V T ensembles [4]:
⟨(ΔK)²⟩_{NVE} = (3N kB² T²/2) (1 − 3N kB/(2CV))    (1.73)
Of course, one can use a similar approach to relate averages in other ensembles.
For instance, it is possible to derive an expression for the finite-size correction
on the excess chemical potential by comparing the averages in the canonical
(N V T ) and grand-canonical (V T ) ensembles [5]:
μ_ex(N) = μ_ex(∞) + (kB T/2N) { 1 − kB T (∂ρ/∂P) − ρ kB T (∂²P/∂ρ²)/(∂P/∂ρ)² }    (1.74)
i.e. when the thermodynamic pressure of the system is equal to the applied
pressure. In fact, if we could measure the histogram P(V ), we could directly
measure the variation of the Helmholtz free energy with volume. The higher the
free energy of a given volume fluctuation, the less likely we are to observe this
fluctuation. In principle, we could determine the complete equation of state of
the system from knowledge of P(V ). To see this, consider
∂ ln P(V)/∂V = β (P(N, V, T) − P0)    (1.76)
One amusing consequence of this expression is the relation between two-phase
coexistence and van der Waals loops in the equation of state [6]. Suppose that
a system at a given pressure P0 undergoes a first-order phase transition from a
state with volume V1 to a state with volume V2 . At coexistence, the system is
equally likely to be found in either state, but it is unlikely to be in a state with
intermediate density. Hence, the histogram of volumes, P(V ) will be double-
peaked. Eqn. 1.76 then immediately shows that the pressure as a function of
volume should exhibit an oscillation around P0 .
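For an ideal gas, Eqn. 1.76 can be checked in closed form, since Q_NVT ∝ V^N gives ln P(V) = N ln V − βP0 V + constant. This is an added sketch; N, T and the applied pressure P0 below are arbitrary:

```python
from math import log

N, kT, P0 = 100, 1.0, 1.2    # arbitrary system size, temperature, applied pressure

def ln_P(V):
    # unnormalized: ln P(V) = N ln V - P0 V / kT + const, since Q_NVT ~ V^N
    return N * log(V) - P0 * V / kT

V = 80.0
h = 1e-5
slope = (ln_P(V + h) - ln_P(V - h)) / (2 * h)   # d ln P / dV
P_thermo = N * kT / V                           # ideal-gas pressure at this V
print(slope, (P_thermo - P0) / kT)              # Eqn 1.76: the two agree
```

The slope of the log-histogram vanishes exactly where P(N, V, T) = P0, i.e. at the most probable volume, and measuring the slope away from that point traces out the equation of state.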
The relation between probabilities of fluctuations and free energies is, in fact,
quite general. To see this, let us consider a system with N particles in volume
V in contact with a reservoir at constant T . Now let us assume that we are not
interested in the fluctuations of the energy of the system, but in fluctuations of
some other observable property, e.g. the total magnetic moment M . We wish to
know the probability that the system is found in a state with magnetic moment
M . To do so, we should sum the probabilities of all states i that satisfy the
constraint Mi = M:

P(M) = Σ_i exp(−βEi) δ_{Mi,M} / Σ_i exp(−βEi)
where the Kronecker delta constrains the sum to those terms that have the
required magnetic moment. We see that the restricted sum in the numerator
has the form of a partition function and we denote it by Q(N, V, T, M ). We also
define an associated free energy
F(N, V, T, M) ≡ −kB T ln Q(N, V, T, M)    (1.77)
We refer to F (N, V, T, M ) as the Landau free energy associated with the variable
M . Clearly, there is a close connection between (Landau) free energies and
constraints. We define a subset of all possible states of the system by the
constraint Mi = M. The Landau free energy then determines the probability that
the system will spontaneously be found in a state that satisfies this constraint:
P(M ) = c exp(−βF (N, V, T, M )) (1.78)
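The connection expressed by Eqn. 1.78 is easy to illustrate numerically: sample an observable M whose distribution is known, histogram it, and check that −kB T ln P(M ) reproduces the Landau free energy. In this sketch the input free energy is, by construction, F (M ) = aM² (so P(M ) is Gaussian); the coefficient and sample sizes are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
kT, a = 1.0, 2.0

# Sample M from P(M) ∝ exp(-a*M**2/kT): a Gaussian with variance kT/(2a).
M = rng.normal(0.0, np.sqrt(kT / (2 * a)), size=200_000)

# Recover the Landau free energy from the histogram (Eqn. 1.78):
hist, edges = np.histogram(M, bins=60, density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
mask = np.abs(mid) < 0.4              # stay where the statistics are good
F = -kT * np.log(hist[mask])
F -= F.min()                          # fix the arbitrary constant

# Compare with the input free energy a*M**2:
err = np.max(np.abs(F - a * mid[mask]**2))
print(err < 0.1)                      # → True (within histogram noise)
```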
the properties of individual atoms or molecules, but not in a trivial way. In
fact, knowledge of the properties of individual atoms and molecules is only the
starting point on the way towards understanding matter. In this sense, Dirac's
famous quote that, once the laws of Quantum Mechanics are understood, the
rest is chemistry, places a heavy burden on chemistry. Dirac's chemistry
includes virtually all of condensed matter physics (not to mention biology).
The number of distinct phases of matter is far greater than the three listed
above: in addition to crystalline solid phases (of which there are very many), there
exist dozens of distinct liquid-crystalline phases with order between that of
a crystalline solid and of an isotropic liquid. Then there are materials with
magnetic or ferro-electric ordering, superconductors, superfluids, and the list is
still far from complete.
One of the aims of statistical mechanics is to provide an explanation for the
existence of this bewildering variety of phases, and to explain how, and under
what conditions, a transition from one phase to another takes place. In this
course, I shall treat only a small number of very simple examples. However,
before I begin, it may be useful to point out that one of the phases that we
know best, viz. the liquid phase, is actually one of the least obvious. We are
so used to the occurrence of phenomena such as boiling and freezing that we
rarely pause to ask ourselves if things could have been different. Yet the fact
that liquids must exist is not obvious a priori. This point is eloquently made in
an essay by V. F. Weisskopf [7]:
...The existence and general properties of solids and gases are relatively easy
to understand once it is realized that atoms or molecules have certain typical
properties and interactions that follow from quantum mechanics. Liquids are
harder to understand. Assume that a group of intelligent theoretical physicists
had lived in closed buildings from birth such that they never had occasion to see
any natural structures. Let us forget that it may be impossible to prevent them
to see their own bodies and their inputs and outputs. What would they be able
to predict from a fundamental knowledge of quantum mechanics? They probably
would predict the existence of atoms, of molecules, of solid crystals, both metals
and insulators, of gases, but most likely not the existence of liquids.
Weisskopf's statement may seem a bit bold. Surely, the liquid-vapor tran-
sition could have been predicted a priori. This is a hypothetical question that
can never be answered. In his 1873 thesis, van der Waals gave the correct ex-
planation for a well known, yet puzzling feature of liquids and gases, namely
that there is no essential distinction between the two: above a critical temper-
ature Tc , a vapor can be compressed continuously all the way to the freezing
point. Yet below Tc , a first-order phase transition separates the dilute fluid
(vapor) from the dense fluid (liquid). In the Van der Waals picture, the liquid-
vapor transition is due to the competition between short-ranged repulsion and
longer-ranged attraction. We shall discuss the Van der Waals theory in more
detail in section 1.6. Here we focus on the question: why does the liquid phase
exist at all? To discuss this question, we must go beyond the original Van der
Waals model because it does not allow for the possibility of crystallization. Half
a century after Van der Waals' death, Longuet-Higgins and Widom [8] showed
that the van der Waals model 2 is richer than expected: it exhibits not only the
liquid-vapor transition but also crystallization. The liquid-vapor transition is
possible between the critical point and the triple point. In the Van der Waals
model, the temperature of the critical point is about a factor two higher than
that of the triple point. There is, however, no fundamental reason why this
transition should occur in every atomic or molecular substance, nor is there any
rule that forbids the existence of more than one fluid-fluid transition.
It turns out that the possibility of the existence of a liquid phase depends
sensitively on the range of attraction of the intermolecular potential: as this
range is decreased, the critical temperature approaches the triple-point temper-
ature. When Tc drops below the latter, only a single stable fluid phase remains.
For instance, in mixtures of spherical colloidal particles and non-adsorbing poly-
mer, the range of the attractive part of the effective colloid-colloid interaction
can be varied by changing the size of the polymers. Experiment, theory and
simulation all suggest that when the width of the attractive well becomes less
than approximately one third of the diameter of the colloidal spheres, the col-
loidal liquid phase disappears. Fortunately for Van der Waals and Kamerlingh
Onnes, the range of attraction of all noble gases and other simple molecules is
longer than this critical value. That is why liquids exist in everyday life.
But let us consider another (for the time being, hypothetical) limit. We take
a model system of spherical particles and decrease the range of the attractive
forces. As the range of attraction decreases, the liquid-vapor curve moves into
the meta-stable regime. But that is not the end of the story. For very short-
ranged attraction (less than 5% of the hard-core diameter), a first-order iso-
structural solid-solid transition appears in the solid phase [10]. This transition
is similar to the liquid-vapor transition, as it takes place between two phases
that have the same structure and only differ in density. Such iso-structural
transitions have, thus far, not been observed in colloidal systems. Nor had they
been predicted before the simulations appeared. This suggests that Weisskopf
may have been right.
in volume V_II = V − V_I and energy E_II = E − E_I . Note that we ignore any
contributions of the interface between I and II to the extensive properties N, V
and E. This is allowed in the thermodynamic limit.
The second law of thermodynamics states that, at equilibrium, the total
entropy (S = S_I + S_II) is at a maximum. Hence:

dS = dS_I + dS_II = 0

But

dS = (1/T ) dE + (P/T ) dV − (μ/T ) dN

Hence:

dS = (1/T_I) dE_I + (1/T_II) dE_II + (P_I/T_I) dV_I + (P_II/T_II) dV_II
     − (μ_I/T_I) dN_I − (μ_II/T_II) dN_II

As dE_I + dE_II = 0, dV_I + dV_II = 0 and dN_I + dN_II = 0, we can write

0 = (1/T_I − 1/T_II) dE_I + (P_I/T_I − P_II/T_II) dV_I − (μ_I/T_I − μ_II/T_II) dN_I

As dE_I, dV_I and dN_I are independent variations, all three terms should vanish
individually. The first term yields:

1/T_I = 1/T_II

or T_I = T_II. Let us call this temperature T . Then the vanishing of the second
term implies that:

P_I/T = P_II/T

or P_I = P_II. Finally, from the fact that the third term is also zero, we get:

μ_I/T = μ_II/T

or μ_I = μ_II.
Stability
We can use thermodynamic arguments to find conditions for the stability of
a single phase. To this end, we first consider a homogeneous phase in a closed
system (constant N, V, E) that is divided into two (equal) parts. In equilibrium,
the entropy of the total system is at a maximum. Let us now consider what
happens if we transfer a small amount of energy ΔE from one half of the system
to the other. In equilibrium, the temperature in the two halves of the system is
the same, and hence to linear order in ΔE, the entropy does not change. When
we consider the second-order variation we get:
ΔS = (1/2) (∂(1/T )/∂E) (ΔE)² + (1/2) (∂(1/T )/∂E) (ΔE)²
As S is a maximum, ΔS ≤ 0. Hence:

∂(1/T )/∂E ≤ 0

or

∂E/∂(1/T ) ≤ 0

which implies that:

∂E/∂(1/T ) = −T ² (∂E/∂T ) = −T ² CV ≤ 0
and hence the heat capacity CV ≥ 0. In other words, thermodynamic stability
implies that the heat capacity is never negative.
A similar argument can be used to show that the compressibility of a system
is non-negative. To this end, we consider the condition for equilibrium at con-
stant N, V and T . Under these conditions, the Helmholtz free energy F must be
a minimum. As before, we divide the system into two equal parts (both containing
N/2 particles in a volume V /2 at temperature T ). We now vary the volume
of one half of the system by an amount ΔV (and that of the other half by an
amount −ΔV ). To first order in ΔV , the variation of the free energy of the
system vanishes (because the pressures in the two halves are equal). To second
order we have:
ΔF = (∂²F/∂V ²) (ΔV )²

but this can be written as

ΔF = −(∂P/∂V ) (ΔV )² = (ΔV )²/(κV )

where κ is the isothermal compressibility of the system. As F is a minimum in
equilibrium, this implies that κ ≥ 0.
1.5.2 Coexistence
How do we compute the point where two phases coexist? Let us consider the
case where we have an analytical approximation of the Helmholtz free energy
of both phases. For simplicity, we assume that we are dealing with a one-
component system. The question is then: given F (N, V, T ) for both phases,
how do we find the point where the two phases coexist? It turns out that there
is a nice graphical method to do this (see fig. 1). For a given temperature, we
plot F1 (N, V, T ) and F2 (N, V, T ) as a function of V (while keeping N and T
constant). We now have to check if it is possible to draw a line that is tangent
to both curves. If this common tangent exists, then this line touches F1 at a
volume V1 and it touches F2 at a volume V2 . As the curves F1 and F2 have a
common tangent, the derivatives of F1 and F2 at V1 and V2 respectively are the
same. However as
∂F/∂V = −P
[Figure 1: free-energy curves F1 and F2 with their common tangent, touching
F1 at V1 and F2 at V2 ; at these points μ1 = μ2 .]
This implies that the pressure of phase 1 at V1 is the same as that of phase
2 at V2 . But as the two curves have a common tangent, the tangents have the
same intercepts at V = 0. The intercept is

F1 − V1 (∂F1 /∂V ) = F1 + P V1 = N μ1

and this is equal to

F2 − V2 (∂F2 /∂V ) = F2 + P V2 = N μ2
Hence, the volumes V1 and V2 are the volumes of the coexisting phases.
A completely analogous analysis can be performed if we plot F/V versus
N/V . But then the slope is related to the chemical potential and the intercept
is related to the pressure.
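The common-tangent construction is easy to carry out numerically. The sketch below uses two toy parabolic free-energy curves (hypothetical forms, chosen only for illustration). Since a tangent of slope s fixes the tangent points V1(s) and V2(s), the double-tangency condition reduces to a single equation in s, solved here by bisection; the slope gives −P at coexistence and the coexisting volumes follow:

```python
import numpy as np

# Toy free-energy curves for two phases at fixed N, T (hypothetical forms).
F1 = lambda V: (V - 1.0)**2 + 0.5     # "phase 1", minimum near V = 1
F2 = lambda V: (V - 3.0)**2           # "phase 2", minimum near V = 3

# For a tangent of slope s, the tangent points satisfy F1' = F2' = s:
V1 = lambda s: 1.0 + s / 2.0
V2 = lambda s: 3.0 + s / 2.0

def mismatch(s):
    # The tangent at (V1, F1(V1)) must also pass through (V2, F2(V2)).
    return F1(V1(s)) + s * (V2(s) - V1(s)) - F2(V2(s))

lo, hi = -2.0, 2.0                    # bisection on the slope s
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if mismatch(lo) * mismatch(mid) <= 0:
        hi = mid
    else:
        lo = mid
s = 0.5 * (lo + hi)

# Slope of the common tangent is -P_coex; its intercept at V = 0 is N*mu.
print(round(-s, 4), round(V1(s), 4), round(V2(s), 4))   # → 0.25 0.875 2.875
```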
In the previous section it was shown that the free energy must be a convex
function of V . Hence the non-convex part does not correspond to an equilibrium
situation. This is sketched in figure 2. Between volumes V1 and a, and also
between b and V2 , the system is meta-stable, i.e. it is stable with respect to
small fluctuations but will eventually phase separate into the thermodynamically
stable phases 1 and 2. Between a and b, the free energy curve corresponds to a
situation that is absolutely unstable (negative compressibility). The system will
phase separate spontaneously under the influence of infinitesimal fluctuations.
The boundary between the meta-stable and the unstable region is called the
spinodal. It should be stressed that the spinodal, although a useful qualitative
concept, is not well defined. It appears naturally when we use approximate
expressions for the free energy. However, an exact expression for the equilibrium
free energy is necessarily convex and has therefore no spinodal.
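Within an approximate (non-convex) free energy, the spinodal points a and b are simply the inflection points where ∂²F/∂V² changes sign. A minimal numerical sketch, using a hypothetical quartic free-energy curve with coexisting minima at V = 1 and V = 3:

```python
import numpy as np

# Hypothetical non-convex free-energy curve, as in Figure 2:
# minima (the coexisting phases) at V = 1 and V = 3.
F = lambda V: ((V - 2.0)**2 - 1.0)**2

V = np.linspace(0.8, 3.2, 4801)
d2F = np.gradient(np.gradient(F(V), V), V)     # numerical F''(V)

# Spinodal points a, b: where F'' changes sign (negative compressibility
# lies in between).
flips = np.where(np.diff(np.sign(d2F)) != 0)[0]
a, b = V[flips[0]], V[flips[1]]
print(round(a, 2), round(b, 2))                # → 1.42 2.58  (= 2 ∓ 1/sqrt(3))
```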
Figure 2: Between V1 and V2 , the free-energy curve does not correspond to the
lowest free energy of the system. Between V1 and a, and between b and V2 , the
free-energy curve describes a meta-stable phase. Between a and b, the curve
corresponds to a state that is absolutely unstable (negative compressibility).
that can be measured in an experiment at constant N , V and T is

(∂F/∂V )_{N T} = −P (1.81)

and

(∂(F/T )/∂(1/T ))_V = E. (1.82)
As the pressure P and the energy E are mechanical quantities, they can be
measured. In order to compute the free energy of a system at given temperature
and density, we should find a reversible path in the V −T plane that links the
state under consideration to a state of known free energy. The change in F
along that path can then simply be evaluated by integration of Eqns. 1.81 and
1.82. There are only very few thermodynamic states where the free energy of
a substance is known. One state is the ideal gas phase, another may be the
perfectly ordered ground state at T = 0K.
In order to compute the free energy of a dense liquid, one may construct
a reversible path to the very dilute gas phase. It is not really necessary to
go all the way to the ideal gas. But at least one should reach a state that is
sufficiently dilute that the free energy can be computed accurately, either from
knowledge of the first few terms in the virial expansion of the compressibility
factor P V /(N kT ), or because the chemical potential can be computed by other
means. The obvious reference state for solids is the harmonic lattice. Computing
the absolute free energy of a harmonic solid is relatively straightforward, at least
for atomic and simple molecular solids. However, not all solid phases can be
reached by a reversible route from a harmonic reference state. For instance,
in molecular systems it is quite common to find a strongly anharmonic plastic
phase just below the melting line. This plastic phase is not (meta-) stable at
low temperatures.
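The idea of integrating along a reversible path from the dilute gas can be made concrete in a few lines. In this sketch we use the Carnahan-Starling expression for the hard-sphere compressibility factor Z = P V /(N kT ) as a stand-in for a "measured" equation of state (η is the packing fraction); the excess free energy follows from βF_ex/N = ∫₀^η (Z(x) − 1)/x dx, and for this particular model the integral happens to be known in closed form, which provides a check:

```python
import numpy as np

# Carnahan-Starling compressibility factor for hard spheres (eta = packing
# fraction), used here as the "measured" equation of state.
Z = lambda eta: (1 + eta + eta**2 - eta**3) / (1 - eta)**3

# Thermodynamic integration from the dilute-gas limit:
#   beta*F_ex/N = integral_0^eta (Z(x) - 1)/x dx
eta = 0.40
x = np.linspace(1e-8, eta, 200_001)
y = (Z(x) - 1) / x
beta_f_ex = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))   # trapezoid

# Closed-form result for the Carnahan-Starling model, as a check:
exact = (4 * eta - 3 * eta**2) / (1 - eta)**2
print(round(beta_f_ex, 4), round(exact, 4))    # → 3.1111 3.1111
```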
Artificial paths
Fortunately, in theoretical and numerical studies we do not have to rely on the
presence of a natural reversible path between the phase under study and a
reference state of known free energy. If such a path does not exist, we can
construct an artificial path. This is in fact a standard trick in statistical me-
chanics (see e.g. [11]). It works as follows: Consider a case where we need to
know the free energy F (V, T ) of a system with a potential energy function U1 ,
where U1 is such that no natural reversible path exists to a state of known free
energy. Suppose now that we can find another model system with a potential
energy function U0 for which the free energy can be computed exactly. We
define a generalized potential energy function U (λ), such that U (λ = 0) = U0
and U (λ = 1) = U1 . The free energy of a system with this generalized poten-
tial is denoted by F (λ). Although F (λ) itself cannot be measured directly in a
simulation, we can measure its derivative with respect to λ:
(∂F (λ)/∂λ)_{N V T} = ⟨∂U (λ)/∂λ⟩_{N V T} (1.83)
This expression (first derived by Kirkwood) allows us to put bounds on the free
energy F1 . Consider the second derivative ∂²F/∂λ² |_{N V T} .
For the linear parametrization considered here, it is straightforward to show
that
∂²F/∂λ² |_{N V T} = −β ( ⟨(U1 − U0 )²⟩_{N V T} − ⟨U1 − U0 ⟩²_{N V T} ) (1.86)
The important thing to note is that the second derivative of F with respect to
λ is always negative (or zero). This implies that

(∂F/∂λ)_{N V T} ≤ (∂F/∂λ)_{N V T, λ=0}

and hence

F1 ≤ F0 + ⟨U1 − U0 ⟩_{N V T, λ=0} . (1.87)
This variational principle for the free energy is known as the Gibbs-Bogoliubov
inequality.

[Figure: F (λ) as a function of λ, a concave curve running from F0 at λ = 0
to F1 at λ = 1.]

It implies that we can compute an upper bound to the free energy of
the system of interest, from knowledge of the average of U1 U0 evaluated for
the reference system. Note that the present derivation was based on classical
statistical mechanics. However, the same inequality also holds in quantum sta-
tistical mechanics [12] (a quick-and-dirty derivation is given in appendix A). Of
course, the usefulness of Eqn. 1.87 depends crucially on the quality of the choice
of reference system. A good reference system is not necessarily one that is close
in free energy to the system of interest, but one for which the fluctuations in the
potential energy difference U1 U0 are small. Thermodynamic perturbation the-
ory for simple liquids has been very successful, precisely because the structure
of the reference system (hard-sphere fluid) and the liquid under consideration
(e.g. Lennard-Jones) is very similar. As a result, ⟨U1 − U0 ⟩ hardly depends on
λ and, as a consequence, its derivative (Eqn. 1.86) is very small.
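The inequality is easily verified numerically for a toy problem. In the sketch below (β = 1 throughout, a single degree of freedom), the system of interest has U1 = x⁴ and the reference is a harmonic well U0 = ½kx², with all free energies and averages computed by quadrature; the bound of Eqn. 1.87 holds for every choice of the spring constant k:

```python
import numpy as np

x = np.linspace(-8.0, 8.0, 40_001)
dx = x[1] - x[0]

def free_energy(U):
    """F = -ln Z for beta = 1, with Z evaluated by quadrature."""
    return -np.log(np.sum(np.exp(-U)) * dx)

U1 = x**4                               # system of interest
F1 = free_energy(U1)

for k in (0.5, 1.0, 2.0, 4.0):
    U0 = 0.5 * k * x**2                 # harmonic reference
    F0 = free_energy(U0)
    w = np.exp(-U0)
    w /= w.sum()                        # Boltzmann weights of the reference
    bound = F0 + np.sum(w * (U1 - U0))  # Eqn. 1.87
    print(k, bool(bound >= F1))         # → True for every k
```

Note that the tightness of the bound varies with k, in line with the remark above: the best reference is the one with the smallest fluctuations of U1 − U0.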
where b is the second virial coefficient of hard spheres. If we insert Eqn. 1.90
in Eqn. 1.89, the well-known van der Waals equation results 4 . If, on the other
hand, we use the exact equation of state of hard spheres (as deduced from
computer simulations), then we can compute the exact equation of state of the
van der Waals model (i.e. the equation of state that van der Waals would have
given an arm and a leg for). Using this approach, Longuet-Higgins and Widom [8]
were the first to compute the true phase diagram of the van der Waals model.
As an illustration (Figure 4), I have recomputed the Longuet-Higgins-Widom
phase diagram, using more complete data about the hard-sphere equation of
state than were available to Longuet-Higgins and Widom.
Figure 4: Phase diagram for the van der Waals model. This phase diagram is
computed by using the hard-sphere system as reference system and adding a
weak, long-ranged attractive interaction. Density is in units of σ −3 , where σ is
the diameter of the hard spheres. The temperature τ is defined in terms of the
second virial coefficient: B2 /B2HS ≡ 1 − 1/(4τ ), where B2HS is the second virial
coefficient of the hard-sphere reference system.
etrable, spherical objects that exert attractive forces on each other. It should be stressed that
Van der Waals proposed his theory some 40 years before the atomic hypothesis was generally
accepted. The short-ranged repulsion between atoms was only explained in the 1920s on
the basis of the Pauli exclusion principle. And the theory for the attractive (Van der Waals) forces
between atoms was only formulated by London in 1930.
energy of a many-body system. For the sake of simplicity, we consider a many-
body Hamiltonian of interacting spins (Ising model)

U1 = −(J/2) Σ_{i=1}^{N} Σ_{j∈nn(i)} s_i s_j .
The Ising model is not just relevant for magnetic systems. In appendix C we
show that a lattice-gas model that can be used to describe the liquid-vapor
transition is, in fact, equivalent to the Ising model.
We wish to approximate this model system, using a reference system with
a much simpler Hamiltonian, namely one that consists of a sum of one-particle
contributions, e.g.

U0 = −Σ_{i=1}^{N} h s_i
where h denotes the effective field that replaces the interaction with the other
particles. The free energy per particle of the reference system is given by

f0 (h) = −kB T ln ∫ ds exp(βhs)

If we only have two spin states (+1 and −1), this becomes f0 (h) = −kB T ln[2 cosh(βh)].
The average spin in the reference system follows from

⟨s⟩0 = −∂f0 (h)/∂h (1.91)

In the case of spins ±1:

⟨s⟩0 = tanh(βh) (1.92)
Now, we consider the Gibbs-Bogoliubov inequality (Eqn. 1.87):

f1 ≤ f0 + ⟨u1 − u0 ⟩0
   = f0 + ⟨ −(J/2) Σ_{j∈nn(i)} s_i s_j + h s_i ⟩0
   = f0 − (J/2) z⟨s⟩0² + h⟨s⟩0 (1.93)
where, in the last line, we have introduced z, the coordination number of particle
i. Moreover, we have used the fact that, in the reference system, different spins
are uncorrelated. We now look for the optimum value of h, i.e. the one that
minimizes our estimate of f1 . Carrying out the differentiation with respect to
h, we find

0 = ∂[ f0 − (J/2) z⟨s⟩0² + h⟨s⟩0 ]/∂h
  = −⟨s⟩0 − (Jz⟨s⟩0 − h) ∂⟨s⟩0 /∂h + ⟨s⟩0
  = −(Jz⟨s⟩0 − h) ∂⟨s⟩0 /∂h (1.94)
And hence,
h = Jz⟨s⟩0 (1.95)
If we insert this expression for h in Eqn. 1.91, we obtain an implicit equation
for ⟨s⟩0 , that can be solved to yield ⟨s⟩0 as a function of T . For spins equal to
±1,

⟨s⟩0 = tanh(βJz⟨s⟩0 ) (1.96)
The free energy estimate that we obtain when inserting this value of ⟨s⟩0 in
Eqn. 1.93 is

fMF = f0 + (J/2) z⟨s⟩0² (1.97)
The subscript MF in this expression stands for the mean-field approximation.
It is very important to note that the free energy that results from the mean-field
approximation is not simply the free energy of the reference system with the
effective field.
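The self-consistency condition (Eqn. 1.96) is easily solved by fixed-point iteration. A minimal sketch (illustrative parameters J = 1, z = 4 and kB = 1, so that the mean-field critical temperature is Tc = zJ = 4):

```python
import math

# Fixed-point solution of Eqn. 1.96, <s>0 = tanh(beta*J*z*<s>0).
# Illustrative parameters: J = 1, z = 4, kB = 1, so mean-field Tc = z*J = 4.
def magnetization(T, J=1.0, z=4):
    m = 0.9                              # ordered initial guess
    for _ in range(10_000):
        m = math.tanh(J * z * m / T)
    return m

print(round(magnetization(3.0), 2))      # below Tc: spontaneous order → 0.78
print(magnetization(5.0) < 1e-9)         # above Tc: m -> 0, i.e. → True
```

Starting from an ordered guess selects the non-trivial root when it exists; above Tc the iteration collapses onto the only solution, m = 0.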
answer is yes (for the purists: mostly, yes). And, even more surprisingly, the
answer can be given without much detailed knowledge of the intermolecular
interactions. What is crucial, though, is the symmetry of the system. As Lan-
dau already pointed out in his first paper [15], a symmetry is either there or not
there. Unlike the density, the symmetry of a system cannot change continuously.
In this respect, Landau says, a so-called continuous phase transformation that
involves a symmetry change is less continuous than the first-order transforma-
tion from liquid to vapor, that can also be carried out continuously by following
a path around a critical point.
To formulate the Landau theory of phase transitions, we need to quantify
the symmetry properties of the free energy of the system. To be more precise,
we must define a quantity that provides a measure for the degree of ordering of
a system. In order to formulate the theory, we need to introduce a new concept:
the order parameter. When a phase transition involves a change in symmetry of
the system (e.g. the transition from an isotropic liquid to a crystalline solid, or
the transition from a para-magnetic to a ferromagnetic phase) then it is useful
to use an order parameter that is zero in the disordered phase, and becomes non-
zero in the ordered phase. In the case of magnetic systems, the magnetization is
such an order parameter. In section 1.4.1, we showed that it is possible to relate
a free energy to the probability to observe a particular spontaneous fluctuation
in the value of an observable quantity (see Eqn. 1.78). For an observable M , we
have:
F (N, V, T, M = x) = −kB T ln P(M = x) + constant (1.98)
where P(M = x) denotes the probability that, due to a spontaneous fluctuation,
the observable M equals x. Clearly, the most likely situation corresponds to the
maximum of P, and hence to the minimum of F .
We now consider the situation that M is the order parameter that distin-
guishes the ordered phase (usually at low temperatures) from the disordered
phase (usually at high temperatures). In the disordered phase, the most likely
value of M is zero. In the ordered phase, the minimum of F corresponds to a
non-zero value of M . We now make the assumption that we can expand the
free energy of the disordered phase in powers of the order parameter M :

F (N, V, T, M ) ≈ F (N, V, T, 0) + aM + bM ² + cM ³ + dM ⁴ + · · · (1.99)
As this is an expansion of the free energy of the disordered phase, every term
must be invariant under all symmetry operations of the disordered phase. This
implies that the term linear in M must vanish (because if M itself had the
full symmetry of the disordered phase, it would not be a good measure for the
change in symmetry at the phase transition). In general the quadratic term in
Eqn. 1.99 will be non-zero. The cubic term may or may not vanish, depending
on the nature of the order parameter. To give an example: if we consider sponta-
neous magnetization, then the fluctuations with magnetization +M and −M
are equally likely, and hence the Landau free energy (Eqn. 1.99) must be an
even function of M . But in other cases (e.g. the fluid-solid transition or the
isotropic-nematic transition), the cubic term in the expansion is not symmetry
forbidden. This is easy to see in the case of the isotropic-nematic transition.
In the isotropic phase, the distribution of molecular orientations has spherical
symmetry. In the nematic phase, the distribution has cylindrical symmetry. A
positive nematic order parameter corresponds to the case that the molecules
tend to align parallel to the symmetry axis: the distribution function of molec-
ular orientations resembles a prolate ellipsoid. If the nematic order parameter
is negative, then the molecules have a propensity to align perpendicular to the
symmetry axis. In that case, the orientational distribution function is more like
an oblate ellipsoid. These two distributions are physically distinct, and hence
there is no symmetry between states with positive and negative order parameter.
We assume that the coefficients b, c, d etc. in Eqn. 1.99 are all continuous
functions of temperature. For the study of phase transitions, we focus on the
temperature regime where b changes sign at a temperature Tc . Close to Tc we
can ignore the temperature dependence of the other coefficients. Writing
b ≈ α(T − Tc ) close to Tc , we can write

F (N, V, T, M ) ≈ F (N, V, T, 0) + α(T − Tc )M ² + cM ³ + dM ⁴ + · · · (1.100)
Now we can distinguish several situations. First, we consider the case that the
Landau free energy is an even function of M and that the coefficient d is positive:
F (N, V, T, M ) ≈ F (N, V, T, 0) + α(T − Tc )M ² + dM ⁴ (1.101)
Above Tc , the minimum of the free energy corresponds to the state M = 0.
In other words, above Tc , the disordered phase is stable. Below Tc , the point
M = 0 changes from a minimum in the free energy to a local maximum. The
minimum of the free energy is easily found to be
M = ±√( α(Tc − T )/(2d) )
The free energy below Tc is

F (N, V, T, M ) ≈ F (N, V, T, 0) − α²(T − Tc )²/(4d) (1.102)
Note that, in this case, ordering sets in at Tc and that the free energy and its
first derivative are continuous at the transition. However, the heat capacity
exhibits a jump at the phase transition. This is characteristic for a second-
order phase transition in the Ehrenfest classification. An interesting alternative
situation is that the coefficient d is negative. Then we have to consider the next
non-vanishing term in the series. Let us assume that the coefficient f of M ⁶
is positive. Then it is easy to show that at high temperatures, the absolute
minimum of the free energy corresponds to the point M = 0. However, when

T − Tc = d²/(4αf )

the point M = 0 is still a local minimum, but no longer the global minimum.
The global minimum of the free energy jumps to the point M = ±√( |d|/(2f ) ). Of
course, the free energy itself is continuous at the transition. But now there
is a discontinuity in its first derivative: in the Ehrenfest language, we have a
first-order phase-transition.
Finally, let us consider the very important case that F contains odd powers
of M :
F (N, V, T, M ) ≈ F (N, V, T, 0) + α(T − Tc )M ² + cM ³ + dM ⁴ + · · · (1.103)
We assume that d is positive. At high temperatures, the global minimum of the
free energy is at M = 0. But at the point

T − Tc = c²/(4αd)

the minimum jumps to M = −c/(2d) and a first-order phase transition takes
place. Hence, according to Landau theory, both the freezing transition and the
isotropic-nematic transition must be first order.
The Landau theory is a very useful way to analyze phase transitions. But it
is not rigorous. In particular, it makes the assumption that the state of a system
is determined by the minimum of the Landau free energy. Fluctuations around that
minimum are ignored. In practice, fluctuations may have a large effect on the
behavior of a system near a phase transition.
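The distinction between the continuous and the first-order case is easy to see numerically. The sketch below minimizes the Landau free energy with a cubic term (Eqn. 1.103) on a grid, using illustrative coefficients (α = 1, c = −1, d = 1, so the transition is predicted at T − Tc = c²/(4αd) = 0.25):

```python
import numpy as np

# Brute-force minimization of the Landau free energy with a cubic term
# (Eqn. 1.103), with illustrative coefficients alpha = 1, c = -1, d = 1.
# The first-order transition is then predicted at T - Tc = c**2/(4*alpha*d)
# = 0.25, where the minimum jumps from M = 0 to M = -c/(2d) = 0.5.
alpha, c, d = 1.0, -1.0, 1.0

def M_min(dT):
    M = np.linspace(-1.0, 1.5, 100_001)
    F = alpha * dT * M**2 + c * M**3 + d * M**4
    return M[np.argmin(F)]

print(abs(M_min(0.26)) < 1e-3)      # just above the transition: M = 0 → True
print(round(M_min(0.24), 3))        # just below: a finite jump → 0.519
```

The order parameter changes discontinuously across T − Tc = 0.25, exactly as the Landau argument predicts for a free energy with a symmetry-allowed cubic term.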
where we have also made the change of variable τ = su.

Tr e^{−(1−τ )βH0} H1 e^{−τ βH0} H1 = Tr A A† (A.4)

with A = e^{−τ βH0 /2} H1 e^{−(1−τ )βH0 /2} . The trick is to apply the Cauchy-
Schwartz inequality, |Tr A B † |² ≤ (Tr A A† )(Tr B B † ), by introducing B = e^{−βH0 /2} :

Tr e^{−(1−τ )βH0} H1 e^{−τ βH0} H1 ≥ [Tr e^{−βH0} H1 ]² / Tr e^{−βH0}
                                   = ⟨H1 ⟩²_{H0} Tr e^{−βH0} (A.5)

Thus a lower bound for the double integral in (A.3) is:

∫₀¹ ds ∫₀ˢ dτ [ Tr e^{−(1−τ )βH0} H1 e^{−τ βH0} H1 ] / Tr e^{−βH0}
    ≥ ⟨H1 ⟩²_{H0} ∫₀¹ ds ∫₀ˢ dτ = ⟨H1 ⟩²_{H0} /2 (A.6)

which proves that F ′′(λ = 0) ≤ 0, and hence the Gibbs-Bogoliubov inequality.
This elegant derivation is due to Dr Benjamin Rotenberg.
B Virial pressure
Clausius proposed the following, very general, method to derive an expression
for the pressure of a many-body system. Consider a system of N particles in a
volume V at total energy E. As the volume and the total energy of the system
are finite, the following quantity

𝒱 ≡ Σ_{i=1}^{N} r_i · p_i

must be bounded. This means that, for a system in equilibrium, the time average
of d𝒱/dt must vanish:

⟨d𝒱/dt⟩ = 0

where ⟨· · ·⟩ denotes the time average. It then follows that

⟨ Σ_{i=1}^{N} ṙ_i · p_i + Σ_{i=1}^{N} r_i · ṗ_i ⟩ = 0
Using

ṙ_i = p_i /m_i

and

ṗ_i = f_i ,

where f_i is the force on particle i, we can write

⟨ Σ_{i=1}^{N} p_i ²/m_i ⟩ + ⟨ Σ_{i=1}^{N} r_i · f_i ⟩ = 0
The first term on the left is simply twice the kinetic energy of the system. Using
the equipartition theorem, we can write this as 3N kB T . We then get
3N kB T + ⟨ Σ_{i=1}^{N} r_i · f_i ⟩ = 0
Consider the contribution to this sum due to the forces exerted by an element
dS of the container wall. The particles on which these forces act are all located
(essentially) at the position r_S of that wall element, and hence

⟨ Σ_{i=1}^{N} r_i · f_i ⟩^{(dS)} = ⟨ Σ_{i=1}^{N} r_S · f_i ⟩^{(dS)}
                         = r_S · ⟨ Σ_{i=1}^{N} f_i ⟩^{(dS)}
                         = r_S · F_tot^{(dS)}
                         = r_S · (−P ) dS , (B.1)
where the superscript (dS) indicates that we only consider forces due to the
element dS of the container wall. In the last line of the above equation, we have
used the fact that the average of the total force exerted by the particles on an area dS equals
P dS, and hence the force exerted by that area on the particles is −P dS. We
can now integrate over the entire surface of the container to obtain
⟨ Σ_{i=1}^{N} r_i · f_i ⟩_wall = −P ∮_S r_S · dS (B.2)
In 3D, ∇ · r = 3, and hence

P ∮_S r · dS = 3P V

or

P V = N kB T + (1/3) ⟨ Σ_{i=1}^{N} r_i · f_i^{inter} ⟩
which should be compared to the result that we obtained in Eqn. 1.54:

P = N kB T /V + (1/(3V )) ⟨ Σ_{i=1}^{N} r_i · f_i ⟩ (B.3)
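Eqn. B.3 translates directly into the estimator used in simulations. The sketch below implements it for a pair potential (Lennard-Jones is used as an example; minimum-image convention, no tail corrections — a toy illustration, not production code):

```python
import numpy as np

# Instantaneous virial-pressure estimator of Eqn. B.3 for a pair potential.
def virial_pressure(pos, box, kT, eps=1.0, sig=1.0):
    N = len(pos)
    virial = 0.0
    for i in range(N):
        for j in range(i + 1, N):
            r = pos[i] - pos[j]
            r -= box * np.round(r / box)                  # minimum image
            r2 = float(np.dot(r, r))
            sr6 = (sig**2 / r2)**3
            virial += 24.0 * eps * (2.0 * sr6**2 - sr6)   # r . f for an LJ pair
    V = box**3
    return N * kT / V + virial / (3.0 * V)

# Check: two particles at the LJ minimum r = 2**(1/6); the pair force
# vanishes there, so P reduces to the ideal-gas value N*kT/V.
box, kT = 10.0, 1.0
pos = np.array([[0.0, 0.0, 0.0], [2**(1/6), 0.0, 0.0]])
P = virial_pressure(pos, box, kT)
print(abs(P - 2 * kT / box**3) < 1e-12)                   # → True
```

In a real simulation the estimator would be averaged over many configurations, as the angular brackets in Eqn. B.3 demand.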
where β = 1/kT and {s} denotes all possible sets of spin values, i.e. (s1 = ±1),
(s2 = ±1), . . . , (sN = ±1).
Now consider a gas on a lattice with M sites. There can be at most one
particle per lattice site. Neighboring particles have an interaction energy −ε.
The Hamiltonian is

H_LG = −(ε/2) Σ_{i=1}^{N} Σ_{j∈nn(i)} n_i n_j .
Now we can change the order of the summation to convert the sum into an
unrestricted sum over n1 , n2 , · · · , nN :

Ξ = Σ_{{n}} exp( β( (ε/2) Σ_{i=1}^{N} Σ_{j∈nn(i)} n_i n_j + μ Σ_i n_i ) )
References
[1] D. Chandler, Introduction to Modern Statistical Mechanics (Oxford Uni-
versity Press, New York, 1987).
[2] J.-L. Barrat and J.-P. Hansen, Basic Concepts for Simple and Complex
Liquids (Cambridge U.P., Cambridge, 2003).
[3] P. M. Chaikin and T. C. Lubensky, Principles of Condensed Matter Physics
(Cambridge U.P., Cambridge, 2000).
[4] J. L. Lebowitz, J. K. Percus, and L. Verlet, Phys. Rev. 153, 250 (1967).
[5] J. I. Siepmann, I. R. McDonald, and D. Frenkel, Journal of Physics Con-
densed Matter 4, 679 (1992).
[6] W. W. Wood, J. Chem. Phys. 48, 415 (1968).
[7] V. F. Weisskopf, Trans NY Acad. Sci. Ser. II 38, 202 (1977).
[8] H.C. Longuet-Higgins and B. Widom, Mol. Phys. 8, 549 (1964).
[9] P.C. Hemmer and J.L. Lebowitz, in Critical Phenomena and Phase Tran-
sitions 5b, edited by C. Domb and M. Green (Academic Press, New York,
1976).
[10] P. Bolhuis and D. Frenkel, Phys. Rev. Lett. 72, 2211 (1994).
[11] J. P. Hansen and I. R. McDonald, Theory of Simple Liquids (Academic
Press, (2nd edition), London, 1986).
[12] R. P. Feynman, Statistical Mechanics (Benjamin, Reading (Mass.), 1972).
[13] J. S. Rowlinson, in Studies in Statistical Mechanics, Vol. XIV, edited by
J. L. Lebowitz (North Holland, Amsterdam, 1988).
[14] P. Ehrenfest, Proc. Amsterdam Acad. 36, 153 (1933).