Lecture 13
Fluctuations.
Fluctuations of macroscopic variables.
Correlation functions.
Response and Fluctuation.
Density correlation function.
Theory of random processes.
Spectral analysis of fluctuations: the Wiener-Khintchine theorem.
The Nyquist theorem.
Applications of the Nyquist theorem.
So far we have considered systems in equilibrium, for which we computed statistical averages of various physical quantities. Nevertheless, deviations from, or fluctuations about, these mean values do occur. Though they are generally small, a study of these fluctuations is of great physical interest for several reasons.
1. It enables us to develop a mathematical scheme with the help of which the magnitude of the relevant fluctuations, under a variety of physical situations, can be estimated. We find that while in a single-phase system the fluctuations are thermodynamically negligible, they can assume considerable importance in multi-phase systems, especially in the neighborhood of critical points. In the latter case we obtain a rather high degree of spatial correlation among the molecules of the system, which in turn gives rise to phenomena such as critical opalescence.
2. It provides a natural framework for understanding a class of physical phenomena which come under the common heading of Brownian motion; these phenomena relate properties such as the mobility of a fluid system, its coefficient of diffusion, etc., with temperature through the so-called Einstein relations. The mechanism of Brownian motion is vital in formulating, and in a certain sense solving, problems as to how a given physical system, which is not in a state of equilibrium, finally approaches a state of equilibrium, while a physical system already in equilibrium persists in that state.
3. The study of fluctuations as a function of time leads to the concept of correlation functions, which play an important role in relating the dissipative properties of a system, such as the viscous resistance of a fluid or the electrical resistance of a conductor, to the microscopic properties of the system in a state of equilibrium. This relationship (between irreversible processes on one hand and equilibrium properties on the other) manifests itself in the so-called fluctuation-dissipation theorem.
At the same time, a study of the frequency spectrum of fluctuations, which is related to the time-dependent correlation function through the fundamental theorem of Wiener and Khintchine, is of considerable value in assessing the noise encountered in electrical circuits as well as in the transmission of electromagnetic signals.
Fluctuations

The deviation $\delta x$ of a quantity $x$ from its average value $\overline{x}$ is defined as

$$\delta x = x - \overline{x} \qquad (13.1)$$

We note that

$$\overline{\delta x} = \overline{x} - \overline{x} = 0 \qquad (13.2)$$

We look to the mean square deviation for the first rough measure of the fluctuation:

$$\overline{(\delta x)^2} = \overline{(x - \overline{x})^2} = \overline{x^2} - 2\,\overline{x}\,\overline{x} + \overline{x}^2 = \overline{x^2} - \overline{x}^2 \qquad (13.3)$$
Consider the distribution g(x) dx which gives the number of systems in dx at x. In principle the distribution g(x) can be determined from a knowledge of all its moments, but in practice this connection is not always of help. The theorem is usually proved as follows: we take the Fourier transform of the distribution,

$$u(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} g(x)\, e^{ixt}\, dx \qquad (13.4)$$

Now it is obvious on differentiating u(t) that

$$\overline{x^n} = 2\pi\, i^{-n}\left.\frac{d^n u(t)}{dt^n}\right|_{t=0} \qquad (13.5)$$

One refers to $\overline{x^n}$ as the n-th moment of the distribution. We usually work with the mean square deviation, although it is sometimes necessary to consider also the mean fourth deviation. This occurs, for example, in considering nuclear resonance line shapes in liquids.
Thus if u(t) is an analytic function, the moments give us all the information needed to obtain the Taylor series expansion of u(t), and the inverse Fourier transform of u(t) gives g(x), as required. However, the higher moments are really needed to use this theorem, and they are sometimes hard to calculate. The function u(t) is called the characteristic function of the distribution.
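As an illustration, here is a minimal sketch (assuming SymPy is available; not part of the lecture) that recovers the moments of a Gaussian distribution from derivatives of its characteristic function, following Eqs. (13.4) and (13.5):

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
sigma = sp.symbols('sigma', positive=True)

# Normalized Gaussian distribution g(x)
g = sp.exp(-x**2 / (2 * sigma**2)) / (sigma * sp.sqrt(2 * sp.pi))

# Characteristic function, Eq. (13.4)
u = sp.integrate(g * sp.exp(sp.I * x * t), (x, -sp.oo, sp.oo)) / (2 * sp.pi)

# Moments from Eq. (13.5): n-th moment = 2*pi * i^(-n) * d^n u/dt^n at t = 0
for n in (1, 2, 3, 4):
    moment = sp.simplify(2 * sp.pi * sp.I**(-n) * sp.diff(u, t, n).subs(t, 0))
    print(n, moment)   # expect 0, sigma**2, 0, 3*sigma**4
```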
Energy Fluctuations in a Canonical Ensemble

When a system is in thermal equilibrium with a reservoir, the temperature $\tau_s$ of the system is defined to be equal to the temperature $\tau_r$ of the reservoir, and it has strictly no meaning to ask questions about the temperature fluctuation. The energy of the system will, however, fluctuate as energy is exchanged with the reservoir. For a canonical ensemble we have

$$\overline{E^2} - \overline{E}^2 = \frac{\sum_n E_n^2\, e^{\beta E_n}}{\sum_n e^{\beta E_n}} - \left(\frac{\sum_n E_n\, e^{\beta E_n}}{\sum_n e^{\beta E_n}}\right)^2 \qquad (13.6)$$

where $\beta \equiv -1/\tau = -1/kT$. Now
$$Z = \sum_n e^{\beta E_n} \qquad (13.7)$$

so that

$$\overline{E^2} = \frac{1}{Z}\,\frac{\partial^2 Z}{\partial\beta^2} \qquad (13.8)$$

Further

$$\overline{E} = \frac{1}{Z}\,\frac{\partial Z}{\partial\beta} \qquad (13.9)$$

and

$$\frac{\partial}{\partial\beta}\,\overline{E} = \frac{\partial}{\partial\beta}\left(\frac{1}{Z}\,\frac{\partial Z}{\partial\beta}\right) = \frac{1}{Z}\,\frac{\partial^2 Z}{\partial\beta^2} - \frac{1}{Z^2}\left(\frac{\partial Z}{\partial\beta}\right)^2 \qquad (13.10)$$

thus

$$\frac{\partial \overline{E}}{\partial\beta} = \overline{E^2} - \overline{E}^2 = \overline{(\delta E)^2} \qquad (13.11)$$

Now the heat capacity at constant values of the external parameters is given by
$$C_V = \frac{\partial\overline{E}}{\partial T} = \frac{\partial\overline{E}}{\partial\beta}\,\frac{d\beta}{dT} = \frac{1}{kT^2}\,\frac{\partial\overline{E}}{\partial\beta} \qquad (13.12)$$

thus

$$\overline{(\delta E)^2} = kT^2\, C_V \qquad (13.13)$$

Here $C_V$ refers to the heat capacity at the actual volume of the system. The fractional fluctuation in energy is defined by

$$F \equiv \left[\frac{\overline{(\delta E)^2}}{\overline{E}^2}\right]^{1/2} = \frac{\left(kT^2 C_V\right)^{1/2}}{\overline{E}} \qquad (13.14)$$

We note then that the act of defining the temperature of a system by bringing it into contact with a heat reservoir leads to an uncertainty in the value of the energy. A system in thermal equilibrium with a heat reservoir does not have an energy which is precisely constant. Ordinary thermodynamics is useful only so long as the fractional fluctuation in energy is small.
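Equation (13.13) is easy to check numerically for any system whose partition function can be summed explicitly. The sketch below (assuming NumPy; a hypothetical two-level system in units where k = 1) compares the canonical energy variance with $T^2\, d\overline{E}/dT$:

```python
import numpy as np

eps, T, dT = 1.0, 0.7, 1e-5   # level splitting and temperature (k = 1)

def canonical(T):
    """Mean energy and energy variance of the two-level system at T."""
    E = np.array([0.0, eps])
    w = np.exp(-E / T)
    w /= w.sum()                       # canonical probabilities
    E_mean = (w * E).sum()
    E_var = (w * E**2).sum() - E_mean**2
    return E_mean, E_var

E_mean, E_var = canonical(T)
C_V = (canonical(T + dT)[0] - canonical(T - dT)[0]) / (2 * dT)  # dE/dT
print(E_var, T**2 * C_V)               # the two numbers agree
```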
For a perfect gas, for example, we have

$$\overline{E} \approx NkT, \qquad C_V \approx Nk$$

thus

$$F \approx \frac{1}{\sqrt{N}} \qquad (13.15)$$

For $N = 10^{22}$, $F \sim 10^{-11}$, which is negligibly small.

Consider now a solid at low temperatures. According to the Debye law, the heat capacity of a dielectric solid for $T \ll \Theta_D$ is

$$C_V \approx Nk\,(T/\Theta_D)^3 \qquad (13.16)$$

and also

$$\overline{E} \approx NkT\,(T/\Theta_D)^3 \qquad (13.17)$$

so that

$$F \approx \frac{1}{\sqrt{N}}\left(\frac{\Theta_D}{T}\right)^{3/2} \qquad (13.18)$$

Suppose that $T = 10^{-2}$ K, $\Theta_D = 200$ K, and $N \approx 10^{16}$ for a particle 0.01 cm on a side. Then

$$F \approx 0.03 \qquad (13.19)$$

which is not inappreciable. At very low temperatures thermodynamics fails for a fine particle, in the sense that we cannot know E and T simultaneously to reasonable accuracy. At $10^{-5}$ K the fractional fluctuation in energy is of the order of unity even for a dielectric particle of volume 1 cm³.
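These estimates are easy to reproduce; the following short sketch simply evaluates Eqs. (13.15) and (13.18) with the numbers quoted above:

```python
import math

# Perfect gas, Eq. (13.15)
N_gas = 1e22
print(1 / math.sqrt(N_gas))                    # F ~ 1e-11

# Debye dielectric particle, Eq. (13.18)
N, T, Theta_D = 1e16, 1e-2, 200.0
print((Theta_D / T) ** 1.5 / math.sqrt(N))     # F ~ 0.03
```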
Concentration Fluctuations in a Grand Canonical Ensemble

We have the grand partition function

$$\mathcal{Z} = \sum_{N,i} e^{(N\mu - E_{N,i})/\tau} \qquad (13.20)$$

from which we may calculate

$$\overline{N} = \tau\,\frac{\partial \ln \mathcal{Z}}{\partial\mu} = \frac{\tau}{\mathcal{Z}}\,\frac{\partial \mathcal{Z}}{\partial\mu} \qquad (13.21)$$

and

$$\overline{N^2} = \frac{\sum_{N,i} N^2\, e^{(N\mu - E_{N,i})/\tau}}{\sum_{N,i} e^{(N\mu - E_{N,i})/\tau}} = \frac{\tau^2}{\mathcal{Z}}\,\frac{\partial^2 \mathcal{Z}}{\partial\mu^2} \qquad (13.22)$$

Thus

$$\overline{(\delta N)^2} = \overline{N^2} - \overline{N}^2 = \tau^2\left[\frac{1}{\mathcal{Z}}\,\frac{\partial^2 \mathcal{Z}}{\partial\mu^2} - \frac{1}{\mathcal{Z}^2}\left(\frac{\partial \mathcal{Z}}{\partial\mu}\right)^2\right] = \tau\,\frac{\partial\overline{N}}{\partial\mu} \qquad (13.23)$$

Perfect Classical Gas

From an earlier result,

$$\overline{N} = e^{\mu/\tau}\, V/\lambda^3 \qquad (13.24)$$

where $\lambda$ is the thermal de Broglie wavelength, thus

$$\partial\overline{N}/\partial\mu = \overline{N}/\tau \qquad (13.25)$$

and using (13.23)

$$\overline{(\delta N)^2} = \overline{N} \qquad (13.26)$$

The fractional fluctuation is given by

$$F = \left[\,\overline{(\delta N)^2}/\overline{N}^2\right]^{1/2} = \frac{1}{\overline{N}^{1/2}} \qquad (13.27)$$
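Equation (13.26) says that the particle number in an open region is Poisson-like: its variance equals its mean. A minimal simulation sketch (assuming NumPy; the numbers are purely illustrative) of independent particles occupying a small subvolume shows this directly:

```python
import numpy as np

rng = np.random.default_rng(0)
n_total, p, trials = 100_000, 0.01, 2000   # p = subvolume fraction

# Each particle independently sits in the subvolume with probability p;
# count the occupants many times over.
counts = rng.binomial(n_total, p, size=trials)
print(counts.mean(), counts.var())          # both approximately 1000
```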
Random Process

A stochastic or random variable is a quantity with a definite range of values, each one of which, depending on chance, can be attained with a definite probability. A stochastic variable is defined

1. if the set of possible values is given, and
2. if the probability of attaining each value is also given.

Thus the number of points on a die that is tossed is a stochastic variable with six values, each having the probability 1/6.
The sum of a large number of independent stochastic variables is itself a stochastic variable. There exists a very important theorem, known as the central limit theorem, which says that under very general conditions the distribution of the sum tends toward a normal (Gaussian) distribution law as the number of terms is increased. The theorem may be stated rigorously as follows.

Let $x_1, x_2, \ldots, x_n$ be independent stochastic variables with their means equal to 0, possessing absolute moments $\mu_{2+\delta}^{(i)}$ of order $2+\delta$, where $\delta$ is some number > 0. If, denoting by $B_n$ the mean square fluctuation of the sum $x_1 + x_2 + \cdots + x_n$, the quotient

$$w_n = \frac{\sum_{i=1}^{n} \mu_{2+\delta}^{(i)}}{B_n^{1+\delta/2}} \qquad (13.28)$$

tends to zero as $n \to \infty$, then the probability of the inequality

$$\frac{x_1 + x_2 + \cdots + x_n}{\sqrt{B_n}} < t$$

tends uniformly to the limit

$$\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{t} e^{-u^2/2}\, du \qquad (13.29)$$

For a distribution $f(x_i)$, the absolute moment of order $\alpha$ is defined as

$$\mu_\alpha^{(i)} = \int_{-\infty}^{\infty} |x_i|^\alpha f(x_i)\, dx_i \qquad (13.30)$$

Almost all the probability distributions f(x) of stochastic variables x of interest to us in physical problems will satisfy the requirements of the central limit theorem. Let us consider several examples.
Example 13a

The variable x is distributed uniformly between ±1, so that f(x) = 1/2 for −1 ≤ x ≤ 1 and f(x) = 0 otherwise. The absolute moment of order 3 exists:

$$\mu_3 = \frac{1}{2}\int_{-1}^{1} |x|^3\, dx = \frac{1}{4} \qquad (13.32)$$

The mean square fluctuation is

$$\overline{(\delta x)^2} = \overline{x^2} - \overline{x}^2 \qquad (13.33)$$

but $\overline{x} = 0$. We have

$$\overline{(\delta x)^2} = \overline{x^2} = \frac{1}{2}\int_{-1}^{1} x^2\, dx = \frac{1}{3} \qquad (13.34)$$

If there are n independent variables $x_i$, it is easy to see that the mean square fluctuation $B_n$ of their sum (under the same distribution) is

$$B_n = n/3 \qquad (13.35)$$

Thus (for $\delta = 1$) we have for (13.28) the result

$$w_n = \frac{n/4}{(n/3)^{3/2}} \qquad (13.36)$$

which does tend to zero as $n \to \infty$. Therefore the central limit theorem holds for this example.
Example 13b

The variable x is a normal variable with standard deviation $\sigma$; that is, it is distributed according to the Gaussian distribution

$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-x^2/2\sigma^2} \qquad (13.37)$$

where $\sigma^2$ is the mean square deviation and $\sigma$ is called the standard deviation. The absolute moment of order 3 exists:

$$\mu_3 = \frac{2}{\sigma\sqrt{2\pi}}\int_0^\infty x^3\, e^{-x^2/2\sigma^2}\, dx = \frac{2\sqrt{2}\,\sigma^3}{\sqrt{\pi}} \qquad (13.38)$$

The mean square fluctuation is

$$\overline{(\delta x)^2} = \overline{x^2} = \frac{2}{\sigma\sqrt{2\pi}}\int_0^\infty x^2\, e^{-x^2/2\sigma^2}\, dx = \sigma^2 \qquad (13.39)$$

If there are n independent variables $x_i$, then

$$B_n = n\sigma^2 \qquad (13.40)$$

For $\delta = 1$,

$$w_n = \frac{2\sqrt{2}\, n\,\sigma^3}{\sqrt{\pi}\,(n\sigma^2)^{3/2}} \qquad (13.41)$$

which approaches 0 as n approaches infinity. Therefore the central limit theorem applies to this example. A Gaussian random process is one for which all the basic distribution functions $f(x_i)$ are Gaussian distributions.
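The convergence asserted by the theorem is easy to see numerically. The sketch below (assuming NumPy; parameters are illustrative) sums uniform variables as in Example 13a, rescales by $\sqrt{B_n}$, and compares with the Gaussian limit (13.29):

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 300, 20_000

x = rng.uniform(-1.0, 1.0, size=(trials, n))
s = x.sum(axis=1) / np.sqrt(n / 3.0)   # rescale by sqrt(B_n), Eq. (13.35)

print(s.mean(), s.var())               # ~0 and ~1
print((s < 1.0).mean())                # ~0.8413, the Gaussian value (13.29)
```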
Example 13c

The variable x has a Lorentzian distribution:

$$f(x) \propto \frac{1}{1 + x^2} \qquad (13.42)$$

The absolute moment of order $\alpha$ is proportional to

$$\int_0^\infty \frac{x^\alpha}{1 + x^2}\, dx \qquad (13.43)$$

But this integral does not converge for $\alpha > 1$, and thus not for $\alpha = 2 + \delta$, $\delta > 0$. We see that the central limit theorem does not apply to a Lorentzian distribution.
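The failure is visible in simulation as well: the mean of n Lorentzian (Cauchy) samples is just as broadly distributed as a single sample, no matter how large n is. A minimal sketch (assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(2)
for n in (1, 100, 10_000):
    means = rng.standard_cauchy(size=(1000, n)).mean(axis=1)
    q25, q75 = np.percentile(means, [25, 75])
    print(n, q75 - q25)   # interquartile range stays ~2, independent of n
```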
Random Process or Stochastic Process

By a random process or stochastic process x(t) we mean a process in which the variable x does not depend in a completely definite way on the independent variable t, which may denote the time. In observations on the different systems of a representative ensemble we find different functions x(t). All we can do is to study certain probability distributions; we cannot obtain the functions x(t) themselves for the members of the ensemble. In Figure 13.1 one can see a sketch of a possible x(t) for one system.

Figure 13.1 Sketch of a random process x(t).
The plot might, for example, be an oscillogram of the thermal noise current $x(t) \equiv I(t)$ obtained from the output of a filter when a thermal noise voltage is applied to the input.

We can determine, for example,

$p_1(x, t)\,dx$ = probability of finding x in the range (x, x + dx) at time t; (13.44)

$p_2(x_1, t_1;\, x_2, t_2)\,dx_1\,dx_2$ = probability of finding x in the range $(x_1, x_1 + dx_1)$ at time $t_1$, and in the range $(x_2, x_2 + dx_2)$ at time $t_2$. (13.45)

If we had an actual oscillogram record covering a long period of time, we might construct an ensemble by cutting the record up into strips of equal length T and mounting them one over the other, as in Figure 13.2.
Figure 13.2 Recordings of x(t) versus t for three systems of an ensemble, as simulated by taking three intervals of duration T from a single long recording. Time averages are taken in a horizontal direction in such a display; ensemble averages are taken in a vertical direction.

The probabilities $p_1$ and $p_2$ will be found from the ensemble. Proceeding similarly we can form $p_3$, $p_4$, .... The whole set of probability distributions $p_n$ (n = 1, 2, ...) may be necessary to describe the random process completely.
In many important cases $p_2$ contains all the information we need. When this is true the random process is called a Markoff process. A stationary random process is one for which the joint probability distributions $p_n$ are invariant under a displacement of the origin of time. We assume in all our further discussion that we are dealing with stationary Markoff processes.

It is useful to introduce the conditional probability $P_2(x_1, 0 \mid x_2, t)\,dx_2$ for the probability that, given $x_1$, one finds x in $dx_2$ at $x_2$ a time t later. Then it is obvious that

$$p_2(x_1, 0;\, x_2, t) = p_1(x_1, 0)\, P_2(x_1, 0 \mid x_2, t) \qquad (13.46)$$
(13.46)
23
Wiener-Khintchine Theorem
The Wiener-Khintchine theorem states a relationship between two
important characteristics of a random process: the power spectrum
of the process and the correlation function of the process.
Suppose we develop one of the records in Fig.13.2 of x(t) for 0<t<T
in a Fourier series:
( ) x t a f t b f t
n n n n
n
( ) cos sin = +
=

2 2
1
t t
(13.47)
where f
n
=n/T. We assume that <x(t)>=0, where the angular
parentheses <> denote time average; because the average is assumed
zero there is no constant term in the Fourier series.
The Fourier coefficients are highly variable from one record of duration
T to another. For many type of noise the a
n
, b
n
have Gaussian
distributions. When this is true the process (13.47) is said to be a
Gaussian random process.
Let us now imagine that x(t) is an electric current flowing through unit resistance. The instantaneous power dissipation is x²(t). Each Fourier component will contribute to the total power dissipation. The power in the n-th component is

$$\mathcal{P}_n = \left(a_n \cos 2\pi f_n t + b_n \sin 2\pi f_n t\right)^2 \qquad (13.48)$$

We do not consider cross-product terms in the power, of the form

$$\left(a_n \cos 2\pi f_n t + b_n \sin 2\pi f_n t\right)\left(a_m \cos 2\pi f_m t + b_m \sin 2\pi f_m t\right) \qquad (13.49)$$

because for $n \neq m$ the time average of such terms is zero. The time average of $\mathcal{P}_n$ is

$$\langle \mathcal{P}_n \rangle = \frac{a_n^2 + b_n^2}{2} \qquad (13.50)$$

because

$$\langle \cos^2 2\pi f_n t \rangle = \langle \sin^2 2\pi f_n t \rangle = \tfrac{1}{2}; \qquad \langle \cos 2\pi f_n t\, \sin 2\pi f_n t \rangle = 0 \qquad (13.51)$$
We now turn to ensemble averages, denoted here by a bar over the quantity. As mentioned above, every record in Fig. 13.2 runs in time from 0 to T, and we consider an ensemble average to be an average over a large set of independent records. For a random process we have

$$\overline{a_n} = \overline{b_n} = \overline{a_n b_m} = 0 \qquad (13.52)$$

$$\overline{a_n a_m} = \overline{b_n b_m} = \sigma_n^2\, \delta_{nm} \qquad (13.53)$$

where for a Gaussian random process $\sigma_n$ is just the standard deviation, as in Example 13b:

$$f(x) = \frac{1}{\sigma_n\sqrt{2\pi}}\, e^{-x^2/2\sigma_n^2}$$

Thus

$$\overline{\left(a_n \cos 2\pi f_n t + b_n \sin 2\pi f_n t\right)^2} = \sigma_n^2\left(\cos^2 2\pi f_n t + \sin^2 2\pi f_n t\right) = \sigma_n^2 \qquad (13.54)$$
Thus, from (13.50) and (13.53), the ensemble average of the time-average power dissipation associated with the n-th component of x(t) is

$$\overline{\langle \mathcal{P}_n \rangle} = \sigma_n^2 \qquad (13.55)$$

Power Spectrum

We define the power spectrum or spectral density G(f) of the random process as the ensemble average of the time average of the power dissipation in unit resistance, per unit frequency bandwidth. If $\Delta f_n$ is equal to the separation between two adjacent frequencies,

$$\Delta f_n = f_{n+1} - f_n = \frac{n+1}{T} - \frac{n}{T} = \frac{1}{T} \qquad (13.56)$$

we have

$$G(f_n)\,\Delta f_n = \overline{\langle \mathcal{P}_n \rangle} = \sigma_n^2 \qquad (13.57)$$

Now by (13.51), (13.52) and (13.53)

$$\overline{\langle x^2(t) \rangle} = \sum_n \sigma_n^2 \qquad (13.58)$$

Using (13.56),

$$\overline{\langle x^2(t) \rangle} = \sum_n G(f_n)\,\Delta f_n = \int_0^\infty G(f)\, df \qquad (13.59)$$

The integral of the power spectrum over all frequencies gives the ensemble average total power.
Correlation Function

Let us consider now the correlation function

$$C(\tau) = \langle x(t)\, x(t+\tau)\rangle \qquad (13.60)$$

where the average is over the time t. This is the autocorrelation function. Without changing the result we may also take an ensemble average of the time average $\langle x(t)\,x(t+\tau)\rangle$, so that

$$\begin{aligned}
C(\tau) &= \overline{\langle x(t)\, x(t+\tau)\rangle} \\
&= \sum_{n,m}\overline{\left\langle\left(a_n\cos 2\pi f_n t + b_n\sin 2\pi f_n t\right)\left(a_m\cos 2\pi f_m (t+\tau) + b_m\sin 2\pi f_m (t+\tau)\right)\right\rangle} \\
&= \sum_n \tfrac{1}{2}\left(\overline{a_n^2} + \overline{b_n^2}\right)\cos 2\pi f_n\tau = \sum_n \sigma_n^2 \cos 2\pi f_n \tau
\end{aligned} \qquad (13.61)$$

Using (13.57),

$$C(\tau) = \int_0^\infty G(f)\, \cos 2\pi f\tau\, df \qquad (13.62)$$

Thus the correlation function is the Fourier cosine transform of the power spectrum. Using the inverse Fourier transform we can write

$$G(f) = 4\int_0^\infty C(\tau)\, \cos 2\pi f\tau\, d\tau \qquad (13.63)$$

This, together with (13.62), is the Wiener-Khintchine theorem. It has an obvious physical content: the correlation function tells us essentially how rapidly the random process is changing.
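The theorem can be checked numerically. The sketch below (assuming NumPy; not from the lecture) builds a synthetic process with exponentially decaying correlation, applies a discrete form of Eq. (13.63) to the measured C(τ), and compares with a direct spectral estimate:

```python
import numpy as np

rng = np.random.default_rng(4)
n, a = 2**18, 0.95                       # a plays the role of exp(-1/tau_c)
x = np.empty(n)
x[0] = rng.standard_normal()
for i in range(1, n):                    # exponentially correlated sequence
    x[i] = a * x[i - 1] + np.sqrt(1 - a**2) * rng.standard_normal()

# Measure C(tau) at small lags, then apply the discrete form of (13.63)
lags = np.arange(200)
C = np.array([np.mean(x[: n - 200] * x[lag : n - 200 + lag]) for lag in lags])
f = 0.01                                 # test frequency, cycles per step
G_wk = 4 * (0.5 * C[0] + np.sum(C[1:] * np.cos(2 * np.pi * f * lags[1:])))

# Direct one-sided spectral estimate, averaged over a narrow band at f
psd = 2 * np.abs(np.fft.rfft(x)) ** 2 / n
freqs = np.fft.rfftfreq(n)
G_direct = psd[np.abs(freqs - f) < 1e-3].mean()
print(G_wk, G_direct)                    # both ~31 for these parameters
```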
Example 13d

If

$$C(\tau) = e^{-\tau/\tau_c} \qquad (13.64)$$

we may say that $\tau_c$ is a measure of the time during which the system persists without changing its state, as measured by x(t), by more than $e^{-1}$; $\tau_c$ in this case has the meaning of a correlation time. We then expect physically that frequencies much higher than $1/\tau_c$ will not be represented in an important way in the power spectrum. Now if $C(\tau)$ is given by (13.64), the Wiener-Khintchine theorem tells us that

$$G(f) = 4\int_0^\infty e^{-\tau/\tau_c}\cos 2\pi f\tau\, d\tau = \frac{4\tau_c}{1 + (2\pi f\tau_c)^2} \qquad (13.65)$$

Thus, as shown in Fig. 13.3, the power spectrum is flat (on a logarithmic frequency scale) out to $2\pi f \approx 1/\tau_c$, and then decreases as $1/f^2$ at high frequencies. The noise spectrum is white out to the cutoff $f_c \approx 1/2\pi\tau_c$.
Figure 13.3 Plot of spectral density versus $\log_{10} 2\pi f$ for an exponential correlation function with $\tau_c = 10^{-4}$ s.
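The shape of Fig. 13.3 follows directly from Eq. (13.65); a short sketch evaluating it at the $\tau_c$ used in the figure:

```python
import math

tau_c = 1e-4                                  # correlation time, s

def G(f):
    return 4 * tau_c / (1 + (2 * math.pi * f * tau_c) ** 2)   # Eq. (13.65)

f_c = 1 / (2 * math.pi * tau_c)               # cutoff frequency, ~1.6 kHz
for f in (0.01 * f_c, f_c, 100 * f_c):
    print(f, G(f) / G(0))                     # ~1, 0.5, ~1e-4
```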
The Nyquist Theorem

The Nyquist theorem is of great importance in experimental physics and in electronics. The theorem gives a quantitative expression for the thermal noise generated by a system in thermal equilibrium, and is therefore needed in any estimate of the limiting signal-to-noise ratio of an experimental set-up. In its original form the Nyquist theorem states that the mean square voltage across a resistor of resistance R in thermal equilibrium at temperature T is given by

$$\overline{V^2} = 4RkT\,\Delta f \qquad (13.66)$$

where $\Delta f$ is the frequency bandwidth within which the voltage fluctuations are measured; all Fourier components outside the given range are ignored. Recalling the definition of the spectral density G(f), we may write the Nyquist result as

$$G(f) = 4RkT \qquad (13.67)$$

This is not strictly the power density, which would be G(f)/R.

Figure 13.4 The noise generator produces a power spectrum G(f) = 4RkT. If the filter passes unit frequency range, the resistance R′ will absorb power kT. R′ is matched to R.

The maximum thermal noise power per unit frequency range delivered by a resistor to a matched load will be G(f)/4R = kT; the factor of 4 enters where it does because the power delivered to the load R′ is

$$I^2 R' = \overline{V^2}\, R' / (R + R')^2 \qquad (13.68)$$

which at match (R′ = R) is $\overline{V^2}/4R$ (Figure 13.4).
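As a practical illustration of Eq. (13.66), the sketch below computes the rms Johnson noise voltage for an assumed, purely illustrative resistor and bandwidth:

```python
import math

k = 1.380649e-23             # Boltzmann constant, J/K
R, T, df = 1e6, 300.0, 1e4   # illustrative: 1 Mohm, room T, 10 kHz band

v_rms = math.sqrt(4 * R * k * T * df)   # Eq. (13.66)
print(v_rms)                 # ~1.3e-5 V, i.e. about 13 microvolts
```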
We will derive the Nyquist theorem in two ways: first, following the
original transmission line derivation, and, second, using a microscopic
argument.
Transmission line derivation

Figure 13.5 Transmission line of length l with matched terminations, $Z_c = R$.

Consider, as in Figure 13.5, a lossless transmission line of length l and characteristic impedance $Z_c = R$, terminated at each end by a resistance R. The line is therefore matched at each end, in the sense that all energy traveling down the line will be absorbed without reflection in the appropriate resistance. The entire circuit is maintained at temperature T. In analogy to the argument on black-body radiation (Lecture 8), the transmission line supports two electromagnetic modes (one propagating in each direction) in each frequency interval

$$\delta f = c'/l \qquad (13.69)$$

where c′ is the propagation velocity on the line. Each mode has equilibrium energy

$$\frac{\hbar\omega}{e^{\hbar\omega/kT} - 1} \qquad (13.70)$$

We are usually concerned here with the classical limit $\hbar\omega \ll kT$, so that the thermal energy on the line in the frequency range $\Delta f$, per direction of propagation, is

$$kT\, l\,\Delta f / c' \qquad (13.71)$$

The rate at which energy comes off the line in one direction is
$$kT\,\Delta f \qquad (13.72)$$

Because the terminal impedance is matched to the line, the power coming off the line at one end is absorbed in the terminal impedance R at that end. The load emits energy at the same rate. The power input to the load is

$$I^2 R = kT\,\Delta f \qquad (13.73)$$

But V = I(2R), so that

$$\overline{V^2}/4R = kT\,\Delta f \qquad (13.74)$$

which is the Nyquist theorem.
Microscopic Derivation

We consider a resistance R with N electrons per unit volume, length l, cross-sectional area A, and carrier relaxation time $\tau_c$. We treat the electrons as Maxwellian, but it can be shown that the noise voltage is independent of such details, involving only the value of the resistance, regardless of the details of the mechanisms contributing to the resistance.
First note that

$$V = IR = RAj = RANe\,\overline{u} \qquad (13.75)$$

where V is the voltage, I the current, j the current density, and $\overline{u}$ the average (or drift) velocity component of the electrons down the resistor. Observing that NAl is the total number of electrons in the specimen,

$$NAl\,\overline{u} = \sum_i u_i \qquad (13.76)$$

summed over all electrons. Thus

$$V = (Re/l)\sum_i u_i = \sum_i V_i \qquad (13.77)$$

where the $u_i$, and hence the $V_i = (Re/l)\,u_i$, are random variables. The spectral density G(f) of one $V_i$ has the property that in the range $\Delta f$

$$\overline{V_i^2} = G(f)\,\Delta f \qquad (13.78)$$
We suppose that the correlation function may be written as

$$C(\tau) = \langle V_i(t)\, V_i(t+\tau)\rangle = \overline{V_i^2}\, e^{-\tau/\tau_c} \qquad (13.79)$$

Then, from the Wiener-Khintchine theorem, we have

$$G(f) = 4\,(Re/l)^2\,\overline{u^2} \int_0^\infty e^{-\tau/\tau_c}\cos 2\pi f\tau\, d\tau = \frac{4\,(Re/l)^2\,\overline{u^2}\,\tau_c}{1 + (2\pi f\tau_c)^2} \qquad (13.80)$$

Usually in metals at room temperature $\tau_c < 10^{-13}$ s, so from dc through the microwave range $2\pi f\tau_c \ll 1$, and the $(2\pi f\tau_c)^2$ term may be neglected. We recall that

$$\tfrac{1}{2}\, m\, \overline{u^2} = \tfrac{1}{2}\, kT \qquad (13.81)$$

(m is the mass of the electron, u the velocity component of an electron along the resistor), so that

$$\overline{u^2} = kT/m \qquad (13.82)$$
Thus in the frequency range $\Delta f$

$$\overline{V^2} = NAl\,\overline{V_i^2} = NAl\, G(f)\,\Delta f = NAl\left(\frac{Re}{l}\right)^2 \left(\frac{kT}{m}\right) 4\tau_c\,\Delta f \qquad (13.83)$$

or

$$\overline{V^2} = 4RkT\,\Delta f \qquad (13.84)$$

Here we have used the relation

$$\sigma = Ne^2\tau_c/m \qquad (13.85)$$

from the theory of conductivity, and also the elementary relation

$$R = l/\sigma A \qquad (13.86)$$

where $\sigma$ is the electrical conductivity.
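The chain (13.83)-(13.86) can be verified numerically; in the sketch below all material parameters are illustrative assumptions (roughly copper-like), and the microscopic expression reproduces $4RkT\,\Delta f$, as the algebra requires:

```python
import math

k = 1.380649e-23       # J/K
e = 1.602176634e-19    # C
m = 9.1093837e-31      # kg
N = 8.5e28             # conduction electrons per m^3 (illustrative)
tau_c = 2.5e-14        # relaxation time, s (illustrative)
l, A, T, df = 1e-2, 1e-8, 300.0, 1.0

sigma = N * e**2 * tau_c / m                 # Eq. (13.85)
R = l / (sigma * A)                          # Eq. (13.86)

V2_micro = N * A * l * (R * e / l)**2 * (k * T / m) * 4 * tau_c * df  # (13.83)
V2_nyquist = 4 * R * k * T * df                                       # (13.84)
print(V2_micro, V2_nyquist)                  # identical, as expected
```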
The simplest way to establish (13.85) in a plausible way is to solve the drift velocity equation

$$m\left(\frac{d}{dt} + \frac{1}{\tau_c}\right)\overline{u} = eE \qquad (13.87)$$

so that in the steady state (or for $\omega\tau_c \ll 1$) we have

$$\overline{u} = eE\tau_c/m \qquad (13.88)$$

giving for the mobility (drift velocity per unit electric field)

$$\mu = \overline{u}/E = e\tau_c/m \qquad (13.89)$$

Then we have for the electrical conductivity

$$\sigma = j/E = Ne\,\overline{u}/E = Ne^2\tau_c/m \qquad (13.90)$$