Personal introduction.
Finding me:
Jim Bernard
Meyer Hall 447 (moving to Timberline #2
Room 17 during the semester)
Email:
jbernard@mines.edu
Office Phone:
303-384-2180
Office hours:
TR 11–12 and 3–4; W 3–4; or by appointment
If you can't make it to one of my office hours, please let me know, and I'll be
happy to set up an alternative time to meet. Also, please feel free to email me
at any time.
Office:
Our textbook:
15% each
45%
25%
Due date for the assignment specified at closure (usually one week
after closure).
Summary of problems in each assignment will be sent by email shortly
after closure.
Hint: You don't have to wait for closure or the summary to start
work on the problems. Get started early!
Help each other, but it's crucial to develop and write your own solutions
if you are to learn. You cannot learn thermo by observation; you must
do the work yourself.
Verbal explanations are a critical component of a written solution, so I
will allocate approximately 20% of the homework credit to the extent and
quality of explanations. See the syllabus for details.
Brush up on math.
Outline, scope
Overview of system, equilibrium, measurement, variables.
Entropy and the tendency toward equilibrium.
Thermodynamic states, processes, and laws.
Thermodynamic relations.
Phase transitions, phase diagrams.
Multicomponent systems and solutions.
Statistics of open and closed systems.
Chapter 1
Introduction
1.1
1.2
We will nearly always direct our attention to some macroscopic subset of the entire
universe and refer to it as the system of interest, with everything surrounding
it being the environment. The thermodynamic treatment of the system of interest
will naturally be much more detailed than that of the environment. In making the
distinction, we must establish a real or imagined boundary between the two and specify
some constraints, sometimes called boundary conditions, on the system that are
generally enforced at the boundary. The environment, including the physical boundary
if any, serves to implement the external constraints, and these will frequently, but
not necessarily, correspond to actual experimental conditions. For example, chemical
reactions may be performed with the system in a fixed-temperature environment
and/or a fixed-pressure environment, or they may be performed with fixed energy
and fixed volume. We define specific terminology for a few of the most commonly
useful constraints:
Isolated system: One that is unable to exchange energy, volume, or particles with
its environment. This corresponds to having the system in a rigid, closed box that
is well insulated to prevent heat flow. The total energy is then fixed.
Closed system: One that can exchange thermal energy (heat) with its environment,
but that is otherwise isolated. This permits the energy to vary, but not the volume,
and particles cannot flow in or out.
Open system: One that can exchange both energy and particles with its environment.
It will often be useful to identify more than one system of interest, generally
referred to as subsystems of a combined supersystem. The constraints applied at
the boundaries between the subsystems may then be specified independently of
each other and of the boundary conditions between the combined system and its
environment.
1.3
Equilibrium
We will confine our attention to macroscopic systems that are in states of equilibrium. For an isolated system, the equilibrium state has two key characteristics:
its macroscopic properties are (1) independent of time and (2) independent of the
history, the initial conditions, of the system. For example, if we initialize a sealed,
thermally insulated container of gas with the gas confined to half the container by
a partition, after removing the partition we must wait long enough for the density
to become uniform and mass currents to damp out before we can make meaningful
measurements of the temperature and pressure. Furthermore, no matter how we
initialize the system, so long as its total energy, total volume, and the amount and
composition of the gas are the same, the measured temperature and pressure in
equilibrium will be the same.
Note that there are plenty of examples of materials that appear to be temporally
stable, yet whose properties have a significant dependence on preparation history.
Ordinary glass is perhaps the most familiar example: its constituents are stable
in the crystalline state, but the microscopic structure of glass is amorphous and
its macroscopic properties depend, for example, on the cooling rate when it was
formed. Though glass is not in an equilibrium state, the mobility of its atoms is so
tiny at room temperature that the time required to reach the true equilibrium state
is effectively infinite.
It's useful, even at this early stage, to contemplate the reason why isolated
macroscopic systems tend toward internal equilibrium. Consider the isolated container
of gas again, with the partition initially confining the contents to half the full volume.
If we open a small hole in the partition, we expect the gas to flow from the initially
filled side to the initially empty side until the density becomes uniform throughout.
But why does that occur? It is, quite simply, driven by probability, for the molecules
undergo rapid, random motion, and soon after the hole is opened, the fact that there
are larger numbers of molecules on the filled side than on the empty side implies
that there is a higher probability for molecules to pass through the hole from the
filled side than from the empty side, producing a net flow that tends to equalize the
concentrations of molecules on the two sides. The initial imbalance between the flows
in the two directions would lead to overshoot, and thus to oscillation, but friction
inevitably damps out the oscillations, leading eventually to a macroscopically static
uniform distribution of molecules. There are, of course, ongoing random fluctuations
in the local molecular densities, but these are quite small and rapidly corrected
in a large system. We are led to suspect then that the equilibrium state is the
most-probable macroscopic state.
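As an illustration (not part of the text), the hole-in-the-partition argument can be caricatured by the Ehrenfest urn model: at each step a single, randomly chosen molecule passes through the hole. A minimal Python sketch, with all names hypothetical:

```python
import random

def ehrenfest(n_molecules=1000, n_steps=20000, seed=0):
    """Toy model of gas equilibration: all molecules start on the left;
    each step, one uniformly chosen molecule hops to the other side.
    While the left side holds more molecules, left-to-right hops are
    proportionally more likely, so the imbalance decays."""
    rng = random.Random(seed)
    n_left = n_molecules  # all molecules start on the filled side
    history = [n_left]
    for _ in range(n_steps):
        # a uniformly random molecule sits on the left with
        # probability n_left / n_molecules
        if rng.random() < n_left / n_molecules:
            n_left -= 1   # it hops left -> right
        else:
            n_left += 1   # it hops right -> left
        history.append(n_left)
    return history

history = ehrenfest()
print(history[0], history[-1])  # starts at 1000, relaxes to about 500
```

Because hops out of the fuller side are proportionally more likely, the count relaxes toward half the total, with only small residual fluctuations, just as argued above.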
1.4
Measurement
1.5
Some macroscopic properties, such as the volume V and the number of particles
N , are obviously proportional to the size, or extent, of the system; indeed, they
define it. These are examples of extensive variables. Less obvious, perhaps, are
the entropy S, the internal energy E, and the various thermodynamic potentials,
such as the Helmholtz free energy F , the enthalpy H, and the Gibbs free energy G.
In contrast, intensive variables do not scale with the size of the system, becoming
asymptotically constant instead. These include things like temperature T , pressure
P , and chemical potential μ.
Since macroscopic size is an important requisite for the success of ordinary
thermodynamics and statistical mechanics, it is sometimes important to define it
carefully through the concept of the thermodynamic limit. This is the limit as
the extensive variables characterizing the size of a system are scaled up consistently.
For example, if a closed isolated system has N_0 particles in volume V_0, we scale both
of them via a linear factor s, so that scaled versions of the system have
\[
N = s^3 N_0 \qquad \text{and} \qquad V = s^3 V_0 .
\tag{1.1}
\]
For example,
\[
\left( \frac{\partial E}{\partial S} \right)_{V,N} = T , \qquad
\left( \frac{\partial E}{\partial V} \right)_{S,N} = -P , \qquad \text{and} \qquad
\left( \frac{\partial P}{\partial T} \right)_{V,N} = \frac{\alpha_P}{\kappa_T} ,
\tag{1.2}
\]
where $\alpha_P$ is the constant-pressure volume-thermal-expansion coefficient and $\kappa_T$ is
the isothermal compressibility. In contrast, the heat capacity at constant pressure
\[
C_P = \left( \frac{\partial E}{\partial T} \right)_{P,N}
\tag{1.3}
\]
is an extensive quantity, since E is extensive and T is intensive.
While we naturally think of intensive variables as having uniform values throughout a system, under certain circumstances they can acquire position dependence. This
happens, for example, in a gas in an external gravitational field, which confers height
dependence on both the density and pressure; in a two-phase mixture, such as ice and
water at 0 °C, in which the particle density may be different in one phase than in the
other; and in a solid having a spatially inhomogeneous magnetic susceptibility, which
leads to spatial dependence in the magnetic-moment density in an external magnetic
field. It also features in nonequilibrium extensions of thermodynamics, where one
1.6
Strategy
Chapter 2
Introduction
2.2
Probability
that
\[
0 \le P(x) \le 1 ,
\tag{2.1}
\]
with zero indicating certain falsehood, and one indicating certain truth. Thus, one can
view reasoning with probabilities as a generalization of the usual logic of propositions
that can only be either true or false. Probabilities satisfy two properties that will
prove to be especially useful in manipulations:
Sum rule for mutually exclusive events: If two events are mutually exclusive,
so that at most one of them can occur, then the probability of either occurring is
the sum of the probabilities of each. An example is the roll of a die: exactly one
number results, each being mutually exclusive of the others. The probability of
rolling a two or a five is the sum P (2) + P (5).
Product rule for independent events: If two events are independent, that is,
the probability of either is unaffected by whether the other occurs or not, the
probability of both occurring together, their joint probability, is the product of the
probabilities of each. An example is the flipping of two coins consecutively; neither
result influences the other. The probability of obtaining heads on both is the
product P (head on first) × P (head on second).
Both of these properties are trivially extended to multiple events.
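As a concrete check (a sketch, not from the text), both rules can be verified by direct enumeration with exact rational arithmetic:

```python
from fractions import Fraction
from itertools import product

# Elementary events for one fair die: each face has probability 1/6.
die = {face: Fraction(1, 6) for face in range(1, 7)}

# Sum rule: P(2 or 5) = P(2) + P(5), since the faces are mutually exclusive.
p_2_or_5 = die[2] + die[5]
assert p_2_or_5 == Fraction(1, 3)

# Product rule: two independent fair coins.
coin = {"H": Fraction(1, 2), "T": Fraction(1, 2)}
p_both_heads = coin["H"] * coin["H"]
assert p_both_heads == Fraction(1, 4)

# Cross-check by enumerating the joint sample space of the two coins.
joint = {(a, b): coin[a] * coin[b] for a, b in product(coin, repeat=2)}
assert sum(joint.values()) == 1          # normalization
assert joint[("H", "H")] == p_both_heads
print(p_2_or_5, p_both_heads)  # 1/3 1/4
```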
When dealing with probabilities of the results of experiments, such as the occurrence of a particular measured value x of some property, it is often convenient
to interpret probability in terms of frequency of occurrence. Imagine performing a
long sequence of measurements of some physical quantity, always under the same
conditions. For each possible result x, determine the ratio
\[
\frac{\text{Number of occurrences of } x}{\text{Total number of measurements}} ,
\tag{2.2}
\]
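The frequency interpretation behind ratio (2.2) can be illustrated with a quick simulated experiment (an illustrative sketch; the seed and trial count are arbitrary choices of mine):

```python
import random

rng = random.Random(42)
n_trials = 100_000
# Simulate repeated measurements: rolls of a fair six-sided die.
rolls = [rng.randint(1, 6) for _ in range(n_trials)]

# The ratio (2.2) for the particular result x = 3.
freq_3 = rolls.count(3) / n_trials
print(freq_3)  # close to 1/6 ≈ 0.1667 for large n_trials
```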
2.2.1
Probability distributions
In what follows, we also assume the set of possible values is finite. While it may seem that the
allowed energies, for example, constitute an infinite set, for a confined system, they are quantized.
As well, extremely large values can be ruled out (e.g., greater than the total energy of the universe),
so any realistic enumeration of the set is finite.
\[
\frac{1}{6} + \frac{1}{6} + \frac{1}{6} = \frac{3}{6} = \frac{1}{2} .
\tag{2.3}
\]
Equivalently, we could simply count the number of elementary events satisfying the
proposition (3) and divide by the total number of elementary events (6) to get the
same result.
This sort of direct enumeration is easy to do for very small systems like a few dice
or coins, but for large systems that is often impractical, if not outright impossible.
However, combinatorial methods can often provide the counts we need, regardless
of the number of constituents or the number of elementary possibilities. We will
review several handy combinatorial results here. As usual, the important part of
your solution is the logic you use to answer the question, so be sure to explain your
reasoning clearly and completely in each case.
(a) Find the number of distinct sequences (order does matter) of n integers in the
range 1, . . . , N . Repetition of values is permitted, and N has no particular relation
to n, but both are to be considered to have fixed values. Pictorially, we can imagine
n boxes, distinguishable by labels bi assigned to them, into which we place numbers
selected from a pool of arbitrarily many copies of the first N integers:
[Figure: a sequence of n labeled boxes b₁, b₂, b₃, …, bₙ, each filled with a number
drawn from a pool containing arbitrarily many copies of the integers 1, 2, 3, …, N.]
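For small n and N, a conjectured count (here N^n, from n independent choices among N values) can be checked by brute-force enumeration. A Python sketch, not part of the assignment:

```python
from itertools import product

# Part (a) for small values: sequences of n integers drawn from 1..N,
# repetition allowed. Each of the n boxes can be filled N ways.
n, N = 3, 4
sequences = list(product(range(1, N + 1), repeat=n))
assert len(sequences) == N**n == 64
print(len(sequences))  # 64
```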
(b) Find the number of distinct sequences of n integers in the range 1, . . . , N , but
don't allow repetition within a sequence: all elements of each sequence are distinct
(thus, we must assume N ≥ n). In this case, we can imagine selecting the numbers
from a pool consisting of just a single copy of each of the first N integers:
[Figure: a sequence of n labeled boxes b₁, b₂, b₃, …, bₙ, each filled with a number
drawn from a pool containing a single copy of each of the integers 1, 2, 3, …, N.]
\[
\sum_{n=0}^{N} \frac{N!}{n!\,(N-n)!} .
\]
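Similarly, the no-repetition count N!/(N−n)! and the binomial-coefficient sum above can be checked numerically for small values (another illustrative sketch):

```python
from itertools import permutations
from math import factorial

# Part (b): sequences without repetition number N!/(N-n)!.
n, N = 3, 5
no_repeat = list(permutations(range(1, N + 1), n))
assert len(no_repeat) == factorial(N) // factorial(N - n) == 60

# The binomial coefficients N!/(n!(N-n)!) summed over n give 2**N,
# the total number of subsets of an N-element set.
N = 5
total = sum(factorial(N) // (factorial(k) * factorial(N - k))
            for k in range(N + 1))
assert total == 2**N == 32
print(len(no_repeat), total)  # 60 32
```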
2.3
The two specialized rules of probability we cited above allow us to break down
the probabilities of macroscopically characterized states in terms of those of more fundamental states at the microscopic, or atomic, level, ultimately reducing their
determination to a counting problem, that of counting the microscopic states that
have a given set of macroscopic properties. To do that, we must first develop a
scheme for identifying and characterizing all of the microscopic states of a system.
In general, microscopic many-particle states can be quite complicated, not just
because there are of order 10²³ particles, but also because they tend to interact with
each other in ways that preclude exact analytic treatment in either classical or quantum mechanics, even when there are as few as three particles. We will sidestep that
issue, homing in on the statistical aspects of the problem by making the simplifying
assumption that the particles are noninteracting. For a gas of atoms or molecules,
this would define the ideal gas, but we will not restrict ourselves to any specific type
of system yet. While models based on this simplifying assumption can give only
approximate predictions of the macroscopic behavior of real systems, our primary
goal at this point is to elucidate the statistical foundations of thermodynamics,
which is most efficiently accomplished without the extra burden of dealing with
interparticle interactions. In addition, these simplified models can serve as starting
points for improvements that include interactions in an approximate way.
2.3.1
In a simple model system in which the constituents do not interact with each other,
the states of the constituents are independenteach one of them can occupy any
one of its possible states without regard for the states of other constituents. This
means the state of the system as a whole, viewed on a microscopic scale, the single-constituent level, changes if even one of the constituents changes its state. That is,
the microscopic state, or microstate, of the complete system depends on the states
of all of its constituents, so that each microstate is characterized, or defined, by the
complete set of single-particle states of its constituents.
To distinguish the different states of a constituent and the different constituents
themselves (if they are distinguishable, which we assume here for simplicity), we use
a constituent-state identifier consisting of a state label, together with a constituent
label as a subscript:
\[
s_c = \text{state}_{\text{constituent}} .
\tag{2.4}
\]
Here s might stand for some quantum number or set of quantum numbers characterizing the quantum state of a constituent, such as the quantum numbers n, l, and m
for a hydrogen atom, and c just indicates to which constituent the identifier applies.
Then a microstate can be characterized by the complete set of its constituent-state
identifiers:
\[
\text{microstate} = \{\, i_1 , j_2 , k_3 , \dots \} .
\tag{2.5}
\]
We could also express this more concisely as an ordered sequence of constituent-state
labels, just the s parts of (2.4):
\[
\text{microstate} = ( i, j, k, \dots ) ,
\tag{2.6}
\]
where the nth entry gives the state of the nth constituent. In this representation
the constituent specification is implicit in the order of the state identifiers, whereas
in the set representation (2.5), the order of the entries is insignificant.
Example 1. As a simple example, consider a collection of numbered coins: the
state label s for any particular coin is either H (heads) or T (tails), and the
constituent label c is the number on the coin. Thus,
\[
s_c = \mathrm{H}_{31}
\tag{2.7}
\]
2 A normal mode is a wavelike state of vibration of a crystal, in which all atoms vibrate with the
same frequency. In a perfectly harmonic crystal, these behave like independent quantum harmonic
oscillators, whose quantized excitations, called phonons, act like noninteracting particles.
indicates that coin 31 is heads up. If there were three coins altogether, there would
be eight microstates
\[
\begin{aligned}
&\{H_1, H_2, H_3\} , \quad \{H_1, H_2, T_3\} , \quad \{H_1, T_2, H_3\} , \quad \{H_1, T_2, T_3\} , \\
&\{T_1, H_2, H_3\} , \quad \{T_1, H_2, T_3\} , \quad \{T_1, T_2, H_3\} , \quad \text{and } \{T_1, T_2, T_3\} ,
\end{aligned}
\tag{2.8}
\]
in each of which the constituent-state identifiers could be listed in any order. In the
more concise sequential notation (2.6), these microstates would be written as
\[
\begin{aligned}
&(\mathrm{H, H, H}) , \quad (\mathrm{H, H, T}) , \quad (\mathrm{H, T, H}) , \quad (\mathrm{H, T, T}) , \\
&(\mathrm{T, H, H}) , \quad (\mathrm{T, H, T}) , \quad (\mathrm{T, T, H}) , \quad \text{and } (\mathrm{T, T, T}) ,
\end{aligned}
\tag{2.9}
\]
2.3.2
Macrostates
2.3.3
A small example
        1₆      2₆      3₆      4₆      5₆      6₆
1₄   (1, 1)  (1, 2)  (1, 3)  (1, 4)  (1, 5)  (1, 6)
2₄   (2, 1)  (2, 2)  (2, 3)  (2, 4)  (2, 5)  (2, 6)
3₄   (3, 1)  (3, 2)  (3, 3)  (3, 4)  (3, 5)  (3, 6)
4₄   (4, 1)  (4, 2)  (4, 3)  (4, 4)  (4, 5)  (4, 6)
Table 2.1: The 24 elements of the microstate space (sample space) of a system
consisting of one tetrahedral and one cubic die. Each cell of the table contains the
pair representation (i, j) (2.10) of the microstate whose constituent states are i₄ in
the row header and j₆ in the column header. These microstates constitute a complete
set of mutually exclusive elementary events for this system: exactly one of them
must be the outcome of any roll of the dice.
There are 4 × 6 = 24 of these states in the full microstate space, which is shown in
Table 2.1.
We can also identify compound events, as subsets of the microstate space that
satisfy a less-selective criterion or proposition than specification of both constituent
states. For example, the compound event defined by the assertion i > j corresponds
to a six-element subset at the lower left of the microstate-space table, as shown in
Table 2.2.
        1₆      2₆      3₆      4₆      5₆      6₆
1₄      ·       ·       ·       ·       ·       ·
2₄   (2, 1)     ·       ·       ·       ·       ·
3₄   (3, 1)  (3, 2)     ·       ·       ·       ·
4₄   (4, 1)  (4, 2)  (4, 3)     ·       ·       ·
Table 2.2: The microstates corresponding to the compound event defined by the
requirement i > j constitute a six-element subset in the lower-left corner of the
microstate-space table.
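Both Table 2.1 and Table 2.2 can be reproduced by enumerating the microstate space directly (an illustrative sketch):

```python
from itertools import product

# Microstate space of the tetrahedral-plus-cubic-die system (Table 2.1):
# pairs (i, j) with i = 1..4 and j = 1..6.
space = [(i, j) for i, j in product(range(1, 5), range(1, 7))]
assert len(space) == 4 * 6 == 24

# Compound event of Table 2.2: microstates satisfying i > j.
event = [(i, j) for (i, j) in space if i > j]
assert len(event) == 6
print(sorted(event))
# [(2, 1), (3, 1), (3, 2), (4, 1), (4, 2), (4, 3)]
```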
A macroscopic property is a variable whose value can be determined for every
microstate of the system. Each possible value of the variable defines a compound
event or macrostate, the subset of microstates that share that same value. Thus,
the possible values of the variable partition the microstate space into subsets, each
of which is a single macrostate of the system. For the two-die system any function
f (i, j) constitutes such a property, and as an example, we use the sum f (i, j) = i + j
[Table 2.3], which is an extensive macroscopic variable.
While there are 24 distinct microstates of this system, there are only 9 different
macrostates, 9 different values of the macroscopic property i + j, and most of the
macrostates have multiplicities greater than one. For example, Ω = 2 for the
macrostate having the sum 3, since there are two microstates, {2₄, 1₆} and {1₄, 2₆}
with that sum. The distribution of the macrostate multiplicities is shown in Table
2.4.
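The multiplicities of Table 2.4 can be tallied by direct enumeration (a sketch, not from the text):

```python
from collections import Counter
from itertools import product

# Multiplicities of the macrostates defined by the sum i + j for the
# tetrahedral (1..4) and cubic (1..6) dice.
multiplicity = Counter(i + j for i, j in product(range(1, 5), range(1, 7)))
assert multiplicity[3] == 2            # {2_4, 1_6} and {1_4, 2_6}
assert sum(multiplicity.values()) == 24
print(dict(sorted(multiplicity.items())))
# {2: 1, 3: 2, 4: 3, 5: 4, 6: 4, 7: 4, 8: 3, 9: 2, 10: 1}
```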
Now, the macrostates of the two-die system constitute a complete set of mutually
exclusive compound events (exactly one value of the sum i + j occurs on every roll),
and we would like to obtain their probability distribution. Since each macrostate
comprises one or more microstates, and those microstates are themselves mutually
exclusive of each other, the probability of any macrostate i + j is the sum of the
probabilities of the microstates belonging to it.
       1₆  2₆  3₆  4₆  5₆  6₆
1₄      2   3   4   5   6   7
2₄      3   4   5   6   7   8
3₄      4   5   6   7   8   9
4₄      5   6   7   8   9  10
Table 2.3: Values of the macroscopic property i+j for the microstates of the two-die
system. All microstates having the same sum belong to the macrostate characterized
by that value. The sums for the two microstates belonging to the macrostate with
sum 3 are highlighted.
Sum:   2   3   4   5   6   7   8   9  10
Ω:     1   2   3   4   4   4   3   2   1

Table 2.4: The multiplicities Ω of the nine macrostates (sums i + j) of the two-die
system.

In terms of probabilities,
\[
P(\text{sum}) = \sum_{i+j=\text{sum}} P(\{ i_4 , j_6 \}) = \sum_{i+j=\text{sum}} P(i_4)\, P(j_6) ,
\tag{2.11}
\]
where I've taken the liberty of denoting all of the probabilities by the same symbol P ,
even though they belong to four completely different probability distributions; the
arguments should suffice to avoid ambiguity. Thus, we've reduced the calculation of
the macrostate probabilities to the calculation of the microstate probabilities, which
can be determined if we know the constituent-state probabilities.
For precise work on real dice, one might determine the constituent-state probabilities via the frequency interpretation by making a large number of measurements.
We will simply assume the dice to be fair, so that the constituent states are assigned
equal probabilities. These are 1/4 and 1/6 for the four-sided and six-sided dice,
respectively. Since each die has a uniform (all probabilities equal) constituent-state
probability distribution, the microstate (joint) probability distribution for the pair
is also uniform:
\[
P(\{ i_4 , j_6 \}) = P(i_4)\, P(j_6) = \frac{1}{4} \cdot \frac{1}{6} = \frac{1}{24} .
\tag{2.12}
\]
We will universally assume that to be true of the microstate probabilities of statistically large isolated systems, though we do attempt to motivate the assumption
later.
Since the macrostate probabilities are just the sums of their uniform microstate
probabilities, all we need to do is count the microstates belonging to each macrostate.
We have succeeded in reducing the determination of the macrostate probability
distribution to a counting problem. Specifically, the probability of each macrostate is
simply its multiplicity, the number of microstates belonging to it, divided by the
total number of microstates. That is, the probability distribution of the macrostates
is just the normalized analog of the distribution of multiplicities.
For example, the probability of occurrence of the macrostate of the dice having
the sum of 8 is
\[
P(8) = \frac{\Omega(8)}{24} = \frac{3}{24} = \frac{1}{8} .
\tag{2.13}
\]
Proceeding in the same way for the other macrostates, we find the complete probability
distribution, shown in Table 2.5. You should verify that the probability distribution
is properly normalized.
Sum:    2     3     4     5     6     7     8     9    10
Ω:      1     2     3     4     4     4     3     2     1
P:    1/24  1/12   1/8   1/6   1/6   1/6   1/8  1/12  1/24

Table 2.5: The probability distribution of the macrostates of the two-die system:
each probability is the multiplicity Ω divided by the total number of microstates, 24.
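The whole reduction to counting can be condensed into a few lines that reproduce Table 2.5 (an illustrative sketch in Python, with exact rational arithmetic):

```python
from collections import Counter
from fractions import Fraction
from itertools import product

# Each of the 24 microstates has probability (1/4)(1/6) = 1/24, so each
# macrostate probability is its multiplicity divided by 24.
multiplicity = Counter(i + j for i, j in product(range(1, 5), range(1, 7)))
P = {s: Fraction(m, 24) for s, m in multiplicity.items()}

assert P[8] == Fraction(1, 8)   # reproduces (2.13)
assert sum(P.values()) == 1     # the distribution is normalized
print(P[2], P[5], P[8])  # 1/24 1/6 1/8
```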