
Lecture 13

Fluctuations.
Fluctuations of macroscopic variables.
Correlation functions.
Response and Fluctuation.
Density correlation function.
Theory of random processes.
Spectral analysis of fluctuations: the Wiener-Khintchine
theorem.
The Nyquist theorem.
Applications of the Nyquist theorem.

We have so far considered systems in equilibrium, for which we computed statistical averages of various physical quantities. Nevertheless, deviations from, or fluctuations about, these mean values do occur. Though they are generally small, a study of these fluctuations is of great physical interest for several reasons.

1. It enables us to develop a mathematical scheme with the help of which the magnitude of the relevant fluctuations, under a variety of physical situations, can be estimated. We find that while in a single-phase system the fluctuations are thermodynamically negligible, they can assume considerable importance in multi-phase systems, especially in the neighborhood of critical points. In the latter case we obtain a rather high degree of spatial correlation among the molecules of the system, which in turn gives rise to phenomena such as critical opalescence.

2. It provides a natural framework for understanding a class of physical phenomena which come under the common heading of Brownian motion; these phenomena relate properties such as the mobility of a fluid system, its coefficient of diffusion, etc., with temperature through the so-called Einstein relations. The mechanism of Brownian motion is vital in formulating, and in a certain sense solving, problems as to how a given physical system which is not in a state of equilibrium finally approaches equilibrium, while a physical system which is already in a state of equilibrium persists in that state.

3. The study of fluctuations as a function of time leads to the concept of correlation functions, which play an important role in relating the dissipative properties of a system, such as the viscous resistance of a fluid or the electrical resistance of a conductor, with the microscopic properties of the system in a state of equilibrium. This relationship (between irreversible processes on the one hand and equilibrium properties on the other) manifests itself in the so-called fluctuation-dissipation theorem.

At the same time, a study of the frequency spectrum of fluctuations, which is related to the time-dependent correlation function through the fundamental theorem of Wiener and Khintchine, is of considerable value in assessing the noise met with in electrical circuits as well as in the transmission of electromagnetic signals.
Fluctuations

The deviation \Delta x of a quantity x from its average value \langle x \rangle is defined as

\Delta x = x - \langle x \rangle    (13.1)

We note that

\langle \Delta x \rangle = \langle x \rangle - \langle x \rangle = 0    (13.2)

We look to the mean square deviation for the first rough measure of the fluctuation:

\langle (\Delta x)^2 \rangle = \langle (x - \langle x \rangle)^2 \rangle = \langle x^2 \rangle - 2\langle x \rangle \langle x \rangle + \langle x \rangle^2 = \langle x^2 \rangle - \langle x \rangle^2    (13.3)
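The identity (13.3) is easy to verify numerically. A minimal sketch (the uniform sample below is a hypothetical illustration, not part of the lecture):

```python
import random

random.seed(0)
# Hypothetical sample: 10^5 draws from a uniform distribution on [-1, 1].
xs = [random.uniform(-1.0, 1.0) for _ in range(100_000)]

mean = sum(xs) / len(xs)
mean_sq = sum(x * x for x in xs) / len(xs)

# Mean square deviation computed two ways, per Eq. (13.3).
msd_direct = sum((x - mean) ** 2 for x in xs) / len(xs)
msd_identity = mean_sq - mean ** 2

assert abs(msd_direct - msd_identity) < 1e-9
# For a uniform distribution on [-1, 1] the variance is 1/3.
print(msd_direct)
```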

We usually work with the mean square deviation, although it is sometimes necessary to consider also the mean fourth deviation. This occurs, for example, in considering nuclear resonance line shapes in liquids. One refers to \langle x^n \rangle as the n-th moment of the distribution.

Consider the distribution g(x)dx which gives the number of systems in dx at x. In principle the distribution g(x) can be determined from a knowledge of all the moments, but in practice this connection is not always of help. The theorem is usually proved as follows: we take the Fourier transform of the distribution:

u(t) = \frac{1}{2\pi} \int g(x)\, e^{ixt}\, dx    (13.4)

Now it is obvious on differentiating u(t) that

\langle x^n \rangle = 2\pi\, i^{-n} \left[ \frac{d^n u(t)}{dt^n} \right]_{t=0}    (13.5)

Thus if u(t) is an analytic function, the moments give us all the information needed to obtain the Taylor series expansion of u(t), and the inverse Fourier transform of u(t) then gives g(x) as required. However, the higher moments are really needed to use this theorem, and they are sometimes hard to calculate. The function u(t) is sometimes called the characteristic function of the distribution.

Energy Fluctuations in a Canonical Ensemble

When a system is in thermal equilibrium with a reservoir, the temperature of the system is defined to be equal to the temperature of the reservoir, and it has strictly no meaning to ask questions about the temperature fluctuation. The energy of the system will, however, fluctuate as energy is exchanged with the reservoir. For a canonical ensemble we have

\langle E^2 \rangle = \sum_n E_n^2\, e^{-E_n/\tau} \Big/ \sum_n e^{-E_n/\tau}    (13.6)

where we write \beta = -1/\tau. Now

Z = \sum_n e^{\beta E_n}    (13.7)

so that

\langle E^2 \rangle = \frac{1}{Z} \frac{\partial^2 Z}{\partial \beta^2}    (13.8)

Further,

\langle E \rangle = \frac{1}{Z} \frac{\partial Z}{\partial \beta}    (13.9)

and

\frac{\partial \langle E \rangle}{\partial \beta} = \frac{1}{Z} \frac{\partial^2 Z}{\partial \beta^2} - \left( \frac{1}{Z} \frac{\partial Z}{\partial \beta} \right)^2    (13.10)

thus

\langle (\Delta E)^2 \rangle = \langle E^2 \rangle - \langle E \rangle^2 = \frac{\partial \langle E \rangle}{\partial \beta}    (13.11)

Now the heat capacity at constant values of the external parameters is given by

C_V = \frac{d\langle E \rangle}{dT} = \frac{\partial \langle E \rangle}{\partial \beta} \frac{d\beta}{dT} = \frac{1}{kT^2} \frac{\partial \langle E \rangle}{\partial \beta}    (13.12)

thus

\langle (\Delta E)^2 \rangle = kT^2 C_V    (13.13)

Here C_V refers to the heat capacity at the actual volume of the system. The fractional fluctuation in energy is defined by

F^2 = \frac{\langle (\Delta E)^2 \rangle}{\langle E \rangle^2} = \frac{kT^2 C_V}{\langle E \rangle^2}    (13.14)
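A minimal numerical check of (13.13): for a hypothetical two-level system with level energies 0 and ε (in units where k = 1; the system and parameters are assumptions for illustration only), the energy variance computed directly from the Boltzmann weights must equal kT²C_V, with C_V obtained by numerical differentiation of ⟨E⟩:

```python
import math

def canonical_stats(levels, kT):
    """Return <E> and <E^2> for the given energy levels at temperature kT."""
    weights = [math.exp(-E / kT) for E in levels]
    Z = sum(weights)
    E_avg = sum(E * w for E, w in zip(levels, weights)) / Z
    E2_avg = sum(E * E * w for E, w in zip(levels, weights)) / Z
    return E_avg, E2_avg

levels = [0.0, 1.0]   # hypothetical two-level system, epsilon = 1 (k = 1 units)
kT = 0.7

E_avg, E2_avg = canonical_stats(levels, kT)
var_direct = E2_avg - E_avg ** 2

# C_V = d<E>/dT by central finite difference (with k = 1, T = kT).
h = 1e-6
E_plus, _ = canonical_stats(levels, kT + h)
E_minus, _ = canonical_stats(levels, kT - h)
C_V = (E_plus - E_minus) / (2 * h)

# Eq. (13.13): <(dE)^2> = k T^2 C_V
assert abs(var_direct - kT ** 2 * C_V) < 1e-6
```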

We note then that the act of defining the temperature of a system by bringing it into contact with a heat reservoir leads to an uncertainty in the value of the energy. A system in thermal equilibrium with a heat reservoir does not have an energy which is precisely constant. Ordinary thermodynamics is useful only so long as the fractional fluctuation in energy is small.

For a perfect gas, for example, we have

C_V \sim Nk; \qquad \langle E \rangle \sim NkT

thus

F \sim \frac{1}{\sqrt{N}}    (13.15)

For N = 10^{22}, F \approx 10^{-11}, which is negligibly small.

Consider also a solid at low temperatures. According to the Debye law, the heat capacity of a dielectric solid for T \ll \Theta_D is

C_V \sim Nk (T/\Theta_D)^3    (13.16)

and

\langle E \rangle \sim NkT (T/\Theta_D)^3    (13.17)

so that

F \sim \frac{1}{\sqrt{N}} \left( \frac{\Theta_D}{T} \right)^{3/2}    (13.18)

Suppose that T = 10^{-2} K; \Theta_D = 200 K; N \approx 10^{16} for a particle 0.01 cm on a side. Then

F \approx 0.03    (13.19)

which is not inappreciable. At very low temperatures thermodynamics fails for a fine particle, in the sense that we cannot know E and T simultaneously to reasonable accuracy. At 10^{-5} K the fractional fluctuation in energy is of the order of unity for a dielectric particle of volume 1 cm^3.
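The quoted estimate F ≈ 0.03 follows directly from (13.18) with the given numbers; a one-line check:

```python
# Fractional energy fluctuation for a Debye solid, Eq. (13.18):
# F = N^(-1/2) * (Theta_D / T)^(3/2)
N = 1e16          # atoms in a particle 0.01 cm on a side
T = 1e-2          # temperature, K
Theta_D = 200.0   # Debye temperature, K

F = N ** -0.5 * (Theta_D / T) ** 1.5
print(round(F, 3))  # ~ 0.028, i.e. the quoted F ~ 0.03
```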
Concentration Fluctuations in a Grand Canonical Ensemble
We have the grand partition function

\mathcal{Z} = \sum_{N,i} e^{(N\mu - E_{N,i})/\tau}    (13.20)

from which we may calculate

\langle N \rangle = \frac{\tau}{\mathcal{Z}} \frac{\partial \mathcal{Z}}{\partial \mu} = \tau \frac{\partial \ln \mathcal{Z}}{\partial \mu}    (13.21)

and

\langle N^2 \rangle = \frac{1}{\mathcal{Z}} \sum_{N,i} N^2\, e^{(N\mu - E_{N,i})/\tau} = \frac{\tau^2}{\mathcal{Z}} \frac{\partial^2 \mathcal{Z}}{\partial \mu^2}    (13.22)

Thus

\langle (\Delta N)^2 \rangle = \langle N^2 \rangle - \langle N \rangle^2 = \tau^2 \left[ \frac{1}{\mathcal{Z}} \frac{\partial^2 \mathcal{Z}}{\partial \mu^2} - \left( \frac{1}{\mathcal{Z}} \frac{\partial \mathcal{Z}}{\partial \mu} \right)^2 \right]    (13.23)

Perfect Classical Gas

From an earlier result

\langle N \rangle = e^{\mu/\tau}\, (V/\lambda^3)    (13.24)

where \lambda is the thermal de Broglie wavelength. Thus

\partial \langle N \rangle / \partial \mu = \langle N \rangle / \tau    (13.25)

and using (13.23),

\langle (\Delta N)^2 \rangle = \langle N \rangle    (13.26)

The fractional fluctuation is given by

F = \left[ \langle (\Delta N)^2 \rangle \right]^{1/2} / \langle N \rangle = \langle N \rangle^{-1/2}    (13.27)
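The result ⟨(ΔN)²⟩ = ⟨N⟩ is the Poisson signature of ideal-gas number fluctuations; a short simulation (the particle numbers and subvolume fraction below are ad hoc choices for illustration) reproduces it:

```python
import random

random.seed(1)

def subvolume_counts(n_total, p, trials):
    """Count how many of n_total independent gas particles land in a
    subvolume occupying a fraction p of the box (ideal-gas picture)."""
    return [sum(1 for _ in range(n_total) if random.random() < p)
            for _ in range(trials)]

counts = subvolume_counts(n_total=5_000, p=0.02, trials=1_000)
mean_N = sum(counts) / len(counts)
var_N = sum((c - mean_N) ** 2 for c in counts) / len(counts)

# With <N> = 100 the variance should be close to <N>, per Eq. (13.26)
# (binomial counts are nearly Poissonian for small p).
assert abs(var_N / mean_N - 1.0) < 0.2
```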

Random Process

A stochastic or random variable is a quantity with a definite range of values, each one of which, depending on chance, can be attained with a definite probability. A stochastic variable is defined
1. if the set of possible values is given, and
2. if the probability of attaining each value is also given.
Thus the number of points on a die that is tossed is a stochastic variable with six values, each having the probability 1/6.

The sum of a large number of independent stochastic variables is itself a stochastic variable. There exists a very important theorem known as the central limit theorem, which says that under very general conditions the distribution of the sum tends toward a normal (Gaussian) distribution law as the number of terms is increased. The theorem may be stated rigorously as follows:

Let x_1, x_2, ..., x_n be independent stochastic variables with their means equal to 0, possessing absolute moments m_{2+\delta}^{(i)} of order 2+\delta, where \delta is some number > 0. If, denoting by B_n the mean square fluctuation of the sum x_1 + x_2 + ... + x_n, the quotient

w_n = \sum_{i=1}^{n} m_{2+\delta}^{(i)} \Big/ B_n^{1+\delta/2}    (13.28)

tends to zero as n \to \infty, then the probability of the inequality

\frac{x_1 + x_2 + \cdots + x_n}{\sqrt{B_n}} < t

tends uniformly to the limit

\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{t} e^{-u^2/2}\, du    (13.29)

For a distribution f(x_i), the absolute moment of order \nu is defined as

m_\nu^{(i)} = \int_{-\infty}^{\infty} |x|^\nu f(x_i)\, dx_i    (13.30)

Almost all the probability distributions f(x) of stochastic variables x of interest to us in physical problems will satisfy the requirements of the central limit theorem. Let us consider several examples.
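The theorem can be watched at work numerically: sums of uniform variables on (−1, 1) (the case treated in Example 13a), standardized by √B_n, populate the unit Gaussian. A minimal sketch with arbitrary sample sizes:

```python
import math
import random

random.seed(2)

n = 200          # number of terms in each sum
trials = 20_000  # number of independent sums

B_n = n / 3.0    # mean square fluctuation of a sum of n uniforms on [-1, 1]

# Standardized sums s = (x1 + ... + xn) / sqrt(B_n)
sums = [sum(random.uniform(-1, 1) for _ in range(n)) / math.sqrt(B_n)
        for _ in range(trials)]

# Fraction with |s| < 1 should approach erf(1/sqrt(2)) ~ 0.683,
# the Gaussian one-sigma probability.
frac = sum(1 for s in sums if abs(s) < 1) / trials
print(round(frac, 3))
```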

Example 13a

The variable x is distributed uniformly between \pm 1. Then f(x) = 1/2 for -1 \le x \le 1, and f(x) = 0 otherwise. The absolute moment of order 3 exists:

m_3 = \frac{1}{2} \int_{-1}^{1} |x|^3\, dx = \frac{1}{4}    (13.32)

The mean square fluctuation is

\langle (\Delta x)^2 \rangle = \langle x^2 \rangle - \langle x \rangle^2    (13.33)

but \langle x \rangle = 0, so we have

\langle (\Delta x)^2 \rangle = \langle x^2 \rangle = \frac{1}{2} \int_{-1}^{1} x^2\, dx = \frac{1}{3}    (13.34)

If there are n independent variables x_i, it is easy to see that the mean square fluctuation B_n of their sum (under the same distribution) is

B_n = n/3    (13.35)

Thus for \delta = 1 we have for (13.28) the result

w_n = \frac{n/4}{(n/3)^{3/2}}    (13.36)

which does tend to zero as n \to \infty. Therefore the central limit theorem holds for this example.
Example 13b

The variable x is a normal variable with standard deviation \sigma; that is, it is distributed according to the Gaussian distribution

f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-x^2/2\sigma^2}    (13.37)

where \sigma^2 is the mean square deviation; \sigma is called the standard deviation. The absolute moment of order 3 exists:

m_3 = \frac{2}{\sqrt{2\pi\sigma^2}} \int_0^{\infty} x^3\, e^{-x^2/2\sigma^2}\, dx = \frac{4\sigma^3}{\sqrt{2\pi}}    (13.38)

The mean square fluctuation is

\langle x^2 \rangle - \langle x \rangle^2 = \frac{2}{\sqrt{2\pi\sigma^2}} \int_0^{\infty} x^2\, e^{-x^2/2\sigma^2}\, dx = \sigma^2    (13.39)

If there are n independent variables x_i, then

B_n = n\sigma^2    (13.40)

For \delta = 1,

w_n = \frac{4n\sigma^3/\sqrt{2\pi}}{(n\sigma^2)^{3/2}}    (13.41)

which approaches 0 as n approaches infinity. Therefore the central limit theorem applies to this example. A Gaussian random process is one for which all the basic distribution functions f(x_i) are Gaussian distributions.

Example 13c

The variable x has a Lorentzian distribution:

f(x) = \frac{1}{\pi} \frac{1}{1+x^2}    (13.42)

The absolute moment of order \nu is proportional to

\int_{-\infty}^{\infty} \frac{|x|^\nu}{1+x^2}\, dx    (13.43)

But this integral does not converge for \nu \ge 1, and thus not for \nu = 2+\delta, \delta > 0. We see that the central limit theorem does not apply to a Lorentzian distribution.
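This breakdown is easy to observe numerically: the sample mean of Lorentzian (Cauchy) variates does not settle down as n grows, while the mean of a well-behaved variable does. A small illustration (the batch sizes and the threshold 1 are ad hoc choices):

```python
import math
import random

random.seed(3)

def lorentzian():
    # Standard Lorentzian variate via inverse CDF: tan(pi * (u - 1/2))
    return math.tan(math.pi * (random.random() - 0.5))

n, batches = 1000, 2000

# Sample means of n Lorentzian variates are themselves Lorentzian:
# their spread never shrinks, so |mean| > 1 keeps happening.
wild = sum(1 for _ in range(batches)
           if abs(sum(lorentzian() for _ in range(n)) / n) > 1) / batches

# Sample means of n uniforms on [-1, 1] shrink like 1/sqrt(n);
# |mean| > 1 is in fact impossible here since every |x_i| < 1.
tame = sum(1 for _ in range(batches)
           if abs(sum(random.uniform(-1, 1) for _ in range(n)) / n) > 1) / batches

print(wild, tame)  # wild stays near 0.5; tame is 0.0
```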

Random Process or Stochastic Process

By a random process or stochastic process x(t) we mean a process in which the variable x does not depend in a completely definite way on the independent variable t, which may denote the time. In observations on the different systems of a representative ensemble we find different functions x(t). All we can do is to study certain probability distributions; we cannot obtain the functions x(t) themselves for the members of the ensemble. In Figure 13.1 one can see a sketch of a possible x(t) for one system.

Figure 13.1 Sketch of a random process x(t)

The plot might, for example, be an oscillogram of the thermal noise current x(t) = I(t) obtained from the output of a filter when a thermal noise voltage is applied to the input.

We can determine, for example,

p_1(x,t)\,dx = probability of finding x in the range (x, x+dx) at time t    (13.44)

p_2(x_1,t_1;\, x_2,t_2)\,dx_1 dx_2 = probability of finding x in (x_1, x_1+dx_1) at time t_1, and in the range (x_2, x_2+dx_2) at time t_2    (13.45)

If we had an actual oscillogram record covering a long period of time, we might construct an ensemble by cutting the record up into strips of equal length T and mounting them one over the other, as in Figure 13.2.

The probabilities p_1 and p_2 will be found from the ensemble. Proceeding similarly we can form the probability distributions p_3, p_4, .... The whole set of p_n (n = 1, 2, ...) may be necessary to describe the random process completely.

Figure 13.2 Recordings of x(t) versus t for three systems of an ensemble, as simulated by taking three intervals of duration T from a single long recording. Time averages are taken in a horizontal direction in such a display; ensemble averages are taken in a vertical direction.

In many important cases p_2 contains all the information we need. When this is true the random process is called a Markoff process. A stationary random process is one for which the joint probability distributions p_n are invariant under a displacement of the origin of time. We assume in all our further discussion that we are dealing with stationary Markoff processes.

It is useful to introduce the conditional probability P_2(x_1,0\,|\,x_2,t)\,dx_2 for the probability that, given x_1, one finds x in dx_2 at x_2 a time t later. Then it is obvious that

p_2(x_1,0;\, x_2,t) = p_1(x_1,0)\, P_2(x_1,0\,|\,x_2,t)    (13.46)

Wiener-Khintchine Theorem

The Wiener-Khintchine theorem states a relationship between two important characteristics of a random process: the power spectrum of the process and the correlation function of the process. Suppose we develop one of the records in Fig. 13.2 of x(t) for 0 < t < T in a Fourier series:

x(t) = \sum_{n=1}^{\infty} \left( a_n \cos 2\pi f_n t + b_n \sin 2\pi f_n t \right)    (13.47)

where f_n = n/T. We assume that \langle x(t) \rangle = 0, where the angular brackets denote the time average; because the average is assumed zero, there is no constant term in the Fourier series. The Fourier coefficients are highly variable from one record of duration T to another. For many types of noise the a_n, b_n have Gaussian distributions. When this is true the process (13.47) is said to be a Gaussian random process.

Let us now imagine that x(t) is an electric current flowing through unit resistance. The instantaneous power dissipation is x^2(t). Each Fourier component will contribute to the total power dissipation. The power in the n-th component is

P_n = \left( a_n \cos 2\pi f_n t + b_n \sin 2\pi f_n t \right)^2    (13.48)

We do not consider cross-product terms in the power of the form

\left( a_n \cos 2\pi f_n t + b_n \sin 2\pi f_n t \right) \left( a_m \cos 2\pi f_m t + b_m \sin 2\pi f_m t \right)    (13.49)

because for n \ne m the time average of such terms will be zero. The time average of P_n is

\langle P_n \rangle = \left( a_n^2 + b_n^2 \right)/2    (13.50)

because

\langle \cos^2 2\pi f_n t \rangle = \tfrac{1}{2}; \quad \langle \sin^2 2\pi f_n t \rangle = \tfrac{1}{2}; \quad \langle \cos 2\pi f_n t\, \sin 2\pi f_n t \rangle = 0    (13.51)

We now turn to ensemble averages, denoted here by a bar over the quantity. As mentioned above, every record in Fig. 13.2 runs in time from 0 to T, and we consider an ensemble average to be an average over a large set of independent records. For a random process we will have

\overline{a_n} = 0; \quad \overline{b_n} = 0; \quad \overline{a_n b_n} = 0    (13.52)

\overline{a_n a_m} = \overline{b_n b_m} = \sigma_n^2\, \delta_{nm}    (13.53)

where for a Gaussian random process \sigma_n is just the standard deviation, as in Example 13b. Thus

\overline{\left( a_n \cos 2\pi f_n t + b_n \sin 2\pi f_n t \right)^2} = \sigma_n^2 \left( \cos^2 2\pi f_n t + \sin^2 2\pi f_n t \right) = \sigma_n^2    (13.54)

Thus from (13.48) the ensemble average of the time-average power dissipation associated with the n-th component of x(t) is

\overline{\langle P_n \rangle} = \sigma_n^2    (13.55)

Power Spectrum

We define the power spectrum or spectral density G(f) of the random process as the ensemble average of the time average of the power dissipation in unit resistance per unit frequency bandwidth. If \Delta f_n is the separation between two adjacent frequencies,

\Delta f_n = f_{n+1} - f_n = \frac{n+1}{T} - \frac{n}{T} = \frac{1}{T}    (13.56)

we have

G(f_n)\, \Delta f_n = \overline{\langle P_n \rangle} = \sigma_n^2    (13.57)

Now by (13.51), (13.52) and (13.53),

\overline{\langle x^2(t) \rangle} = \sum_n \sigma_n^2    (13.58)

Using (13.57),

\overline{\langle x^2(t) \rangle} = \sum_n G(f_n)\, \Delta f_n \to \int_0^{\infty} G(f)\, df    (13.59)

The integral of the power spectrum over all frequencies thus gives the ensemble average total power.

Correlation Function

Let us consider now the correlation function

C(\tau) = \langle x(t)\, x(t+\tau) \rangle    (13.60)

where the average is over the time t. This is the autocorrelation function. Without changing the result we may also take an ensemble average of the time average \langle x(t)\, x(t+\tau) \rangle, so that

C(\tau) = \overline{\langle x(t)\, x(t+\tau) \rangle}
= \overline{\Big\langle \sum_{n,m} \left( a_n \cos 2\pi f_n t + b_n \sin 2\pi f_n t \right) \left( a_m \cos 2\pi f_m (t+\tau) + b_m \sin 2\pi f_m (t+\tau) \right) \Big\rangle}
= \sum_n \tfrac{1}{2}\, \overline{\left( a_n^2 + b_n^2 \right)} \cos 2\pi f_n \tau
= \sum_n \sigma_n^2 \cos 2\pi f_n \tau    (13.61)

Using (13.57),

C(\tau) = \int_0^{\infty} G(f) \cos 2\pi f \tau\, df    (13.62)

Thus the correlation function is the Fourier cosine transform of the power spectrum. Using the inverse Fourier transform we can write

G(f) = 4 \int_0^{\infty} C(\tau) \cos 2\pi f \tau\, d\tau    (13.63)

This, together with (13.62), is the Wiener-Khintchine theorem. It has an obvious physical content: the correlation function tells us essentially how rapidly the random process is changing.

Example 13d

If

C(\tau) = e^{-|\tau|/\tau_c}    (13.64)

we may say that \tau_c is a measure of the time over which the system persists without changing its state, as measured by x(t), by more than e^{-1}; \tau_c in this case has the meaning of a correlation time. We then expect physically that frequencies much higher than 1/\tau_c will not be represented in an important way in the power spectrum. Now if C(\tau) is given by (13.64), the Wiener-Khintchine theorem tells us that

G(f) = 4 \int_0^{\infty} e^{-\tau/\tau_c} \cos 2\pi f \tau\, d\tau = \frac{4\tau_c}{1 + (2\pi f \tau_c)^2}    (13.65)

Thus, as shown in Fig. 13.3, the power spectrum is flat (on a logarithmic frequency scale) out to 2\pi f \approx 1/\tau_c, and then decreases as 1/f^2 at high frequencies. The noise spectrum for this correlation function is white out to a cutoff f_c \approx 1/2\pi\tau_c.
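The pair (13.64)-(13.65) can be checked by direct numerical quadrature (an assumed τ_c and a plain trapezoidal rule; all parameters are illustrative only):

```python
import math

tau_c = 1.0e-4   # assumed correlation time, s

def G_numeric(f, t_max=2e-3, steps=200_000):
    """G(f) = 4 * integral_0^inf exp(-t/tau_c) cos(2 pi f t) dt,
    truncated at t_max = 20 tau_c and evaluated by the trapezoid rule."""
    dt = t_max / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * dt
        w = 0.5 if i in (0, steps) else 1.0
        total += w * math.exp(-t / tau_c) * math.cos(2 * math.pi * f * t)
    return 4 * total * dt

def G_exact(f):
    # Closed form from Eq. (13.65): 4 tau_c / (1 + (2 pi f tau_c)^2)
    return 4 * tau_c / (1 + (2 * math.pi * f * tau_c) ** 2)

for f in (0.0, 1e3, 1e4):
    assert abs(G_numeric(f) - G_exact(f)) < 1e-9
```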

Figure 13.3 Plot of spectral density versus \log_{10} 2\pi f \tau_c for an exponential correlation function with \tau_c = 10^{-4} s.

The Nyquist Theorem

The Nyquist theorem is of great importance in experimental physics and in electronics. The theorem gives a quantitative expression for the thermal noise generated by a system in thermal equilibrium and is therefore needed in any estimate of the limiting signal-to-noise ratio of an experimental set-up. In its original form the Nyquist theorem states that the mean square voltage across a resistor of resistance R in thermal equilibrium at temperature T is given by

\langle V^2 \rangle = 4RkT\, \Delta f    (13.66)

where \Delta f is the frequency bandwidth within which the voltage fluctuations are measured; all Fourier components outside the given range are ignored. Recalling the definition of the spectral density G(f), we may write the Nyquist result as

G(f) = 4RkT    (13.67)

This is not strictly the power density, which would be G(f)/R.
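As a worked example of (13.66) (with illustrative numbers not taken from the lecture), the open-circuit RMS Johnson noise across a 1 MΩ resistor at 300 K in a 10 kHz bandwidth is:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K

def johnson_noise_vrms(R, T, bandwidth):
    """RMS thermal noise voltage from Nyquist's theorem, <V^2> = 4 R k T df."""
    return math.sqrt(4 * R * k_B * T * bandwidth)

v = johnson_noise_vrms(R=1e6, T=300.0, bandwidth=1e4)
print(f"{v * 1e6:.1f} microvolts")  # ~ 12.9 microvolts
```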

Figure 13.4 The noise generator produces a power spectrum G(f) = 4RkT. If the filter passes unit frequency range, the load R' absorbs power kT when R' is matched to R.

The maximum thermal noise power per unit frequency range delivered by a resistor to a matched load is G(f)/4R = kT; the factor of 4 enters where it does because the power delivered to the load R' is

I^2 R' = \frac{\langle V^2 \rangle R'}{(R + R')^2}    (13.68)

which at match (R' = R) is \langle V^2 \rangle / 4R (Figure 13.4).
We will derive the Nyquist theorem in two ways: first, following the original transmission line derivation, and second, using a microscopic argument.

Transmission line derivation

Figure 13.5 Transmission line of length l with matched terminations, Z_c = R.

Consider, as in Figure 13.5, a lossless transmission line of length l and characteristic impedance Z_c = R, terminated at each end by a resistance R. The line is therefore matched at each end, in the sense that all energy traveling down the line will be absorbed without reflection in the appropriate resistance.

The entire circuit is maintained at temperature T. In analogy to the argument on black-body radiation (Lecture 8), the transmission line has two electromagnetic modes (one propagating in each direction) in the frequency range

\delta f = \frac{c'}{l}    (13.69)

where c' is the propagation velocity on the line. Each mode has the energy

\frac{\hbar\omega}{e^{\hbar\omega/kT} - 1}    (13.70)

in equilibrium. We are usually concerned here with the classical limit \hbar\omega \ll kT, so that the thermal energy on the line in the frequency range \Delta f is

\frac{kT\, l\, \Delta f}{c'}    (13.71)

The rate at which energy comes off the line in one direction is

kT\, \Delta f    (13.72)

Because the terminal impedance is matched to the line, the power coming off the line at one end is absorbed in the terminal impedance R at that end. The load emits energy at the same rate. The power input to the load is

\langle I^2 \rangle R = kT\, \Delta f    (13.73)

But V = I(2R), so that

\langle V^2 \rangle = 4RkT\, \Delta f    (13.74)

which is the Nyquist theorem.
Microscopic Derivation

We consider a resistance R with N electrons per unit volume, length l, cross-sectional area A, and carrier relaxation time \tau_c. We treat the electrons as Maxwellian, but it can be shown that the noise voltage is independent of such details, involving only the value of the resistance, regardless of the details of the mechanisms contributing to the resistance.

First note that

V = IR = RAj = RANe\bar{u}    (13.75)

Here V is the voltage, I the current, j the current density, and \bar{u} is the average (or drift) velocity component of the electrons down the resistor. Observing that NAl is the total number of electrons in the specimen,

NAl\, \bar{u} = \sum_i u_i    (13.76)

summed over all electrons. Thus

V = (Re/l) \sum_i u_i = \sum_i V_i    (13.77)

where the u_i and V_i are random variables. The spectral density G(f) of a single V_i has the property that in the range \Delta f

\overline{V_i^2} = G(f)\, \Delta f    (13.78)

We suppose that the correlation function may be written as

C(\tau) = \overline{V_i(t)\, V_i(t+\tau)} = \overline{V_i^2}\, e^{-|\tau|/\tau_c}    (13.79)

Then, from the Wiener-Khintchine theorem we have

G(f) = 4 (Re/l)^2\, \overline{u^2} \int_0^{\infty} e^{-\tau/\tau_c} \cos 2\pi f \tau\, d\tau    (13.80)

= 4 (Re/l)^2\, \overline{u^2}\, \frac{\tau_c}{1 + (2\pi f \tau_c)^2}    (13.81)

Usually in metals at room temperature \tau_c < 10^{-13} s, so from dc through the microwave range 2\pi f \tau_c \ll 1 and the (2\pi f \tau_c)^2 term may be neglected. We recall that

\tfrac{1}{2} m \overline{u^2} = \tfrac{1}{2} kT

(m is the mass of the electron, u the velocity component of an electron along the resistor), so that

\overline{u^2} = kT/m    (13.82)

Thus in the frequency range 2\pi f \tau_c \ll 1,

\overline{V^2} = NAl\, \overline{V_i^2} = NAl\, G(f)\, \Delta f = NAl\, \frac{4kT}{m} \left( \frac{Re}{l} \right)^2 \tau_c\, \Delta f    (13.83)

or

\overline{V^2} = 4RkT\, \Delta f    (13.84)

Here we have used the relation

\sigma = Ne^2 \tau_c / m    (13.85)

from the theory of conductivity, and also the elementary relation

R = l / \sigma A    (13.86)

where \sigma is the electrical conductivity.

The simplest way to establish (13.85) in a plausible way is to solve the drift velocity equation

m \left( \frac{d}{dt} + \frac{1}{\tau_c} \right) \bar{u} = eE    (13.87)

so that in the steady state (or for \omega \tau_c \ll 1) we have

\bar{u} = e \tau_c E / m    (13.88)

giving for the mobility (drift velocity per unit electric field)

\mu = \bar{u}/E = e \tau_c / m    (13.89)

Then we have for the electrical conductivity

\sigma = j/E = Ne\bar{u}/E = Ne^2 \tau_c / m    (13.90)
