
Contents

Chapter 1. Introduction
    1.1 Classical Thermodynamics
        1.1.1 Introduction
        1.1.2 History
        1.1.3 Branches of description
        1.1.4 Thermodynamic equilibrium
        1.1.5 Non-equilibrium thermodynamics
        1.1.6 Laws of thermodynamics
        1.1.7 System models
        1.1.8 States and processes
        1.1.9 Instrumentation
        1.1.10 Conjugate variables
        1.1.11 Potentials
        1.1.12 Axiomatics
        1.1.13 Scope of thermodynamics
        1.1.14 Applied fields
        1.1.15 See also
        1.1.16 References
        1.1.17 Cited bibliography
        1.1.18 Further reading
        1.1.19 External links
    1.2 Statistical Thermodynamics
        1.2.1 Principles: mechanics and ensembles
        1.2.2 Statistical thermodynamics
        1.2.3 Non-equilibrium statistical mechanics
        1.2.4 Applications outside thermodynamics
        1.2.5 History
        1.2.6 See also
        1.2.7 Notes
        1.2.8 References
        1.2.9 External links
    1.3 Chemical Thermodynamics
        1.3.1 History
        1.3.2 Overview
        1.3.3 Chemical energy
        1.3.4 Chemical reactions
        1.3.5 Non-equilibrium
        1.3.6 See also
        1.3.7 References
        1.3.8 Further reading
        1.3.9 External links
    1.4 Equilibrium Thermodynamics
        1.4.1 See also
        1.4.2 References
    1.5 Non-equilibrium Thermodynamics
        1.5.1 Scope of non-equilibrium thermodynamics
        1.5.2 Overview
        1.5.3 Basic concepts
        1.5.4 Stationary states, fluctuations, and stability
        1.5.5 Local thermodynamic equilibrium
        1.5.6 Entropy in evolving systems
        1.5.7 Flows and forces
        1.5.8 The Onsager relations
        1.5.9 Speculated extremal principles for non-equilibrium processes
        1.5.10 Applications of non-equilibrium thermodynamics
        1.5.11 See also
        1.5.12 References
        1.5.13 Further reading
        1.5.14 External links

Chapter 2. Laws of Thermodynamics
    2.1 Zeroth law of Thermodynamics
        2.1.1 Zeroth law as equivalence relation
        2.1.2 Foundation of temperature
        2.1.3 Physical meaning of the usual statement of the zeroth law
        2.1.4 History
        2.1.5 References
        2.1.6 Further reading
    2.2 First law of Thermodynamics
        2.2.1 History
        2.2.2 Conceptually revised statement, according to the mechanical approach
        2.2.3 Description
        2.2.4 Various statements of the law for closed systems
        2.2.5 Evidence for the first law of thermodynamics for closed systems
        2.2.6 State functional formulation for infinitesimal processes
        2.2.7 Spatially inhomogeneous systems
        2.2.8 First law of thermodynamics for open systems
        2.2.9 See also
        2.2.10 References
        2.2.11 Further reading
        2.2.12 External links
    2.3 Second law of Thermodynamics
        2.3.1 Introduction
        2.3.2 Various statements of the law
        2.3.3 Corollaries
        2.3.4 History
        2.3.5 Statistical mechanics
        2.3.6 Derivation from statistical mechanics
        2.3.7 Living organisms
        2.3.8 Gravitational systems
        2.3.9 Non-equilibrium states
        2.3.10 Arrow of time
        2.3.11 Irreversibility
        2.3.12 Quotations
        2.3.13 See also
        2.3.14 References
        2.3.15 Further reading
        2.3.16 External links
    2.4 Third law of Thermodynamics
        2.4.1 History
        2.4.2 Explanation
        2.4.3 Mathematical formulation
        2.4.4 Consequences of the third law
        2.4.5 See also
        2.4.6 References
        2.4.7 Further reading

Chapter 3. History
    3.1 History of thermodynamics
        3.1.1 History
        3.1.2 Branches of thermodynamics
        3.1.3 Entropy and the second law
        3.1.4 Heat transfer
        3.1.5 Cryogenics
        3.1.6 See also
        3.1.7 References
        3.1.8 Further reading
        3.1.9 External links
    3.2 An Experimental Enquiry Concerning the Source of the Heat which is Excited by Friction
        3.2.1 Background
        3.2.2 Experiments
        3.2.3 Reception
        3.2.4 Notes
        3.2.5 Bibliography

Chapter 4. System State
    4.1 Control volume
        4.1.1 Overview
        4.1.2 Substantive derivative
        4.1.3 See also
        4.1.4 References
        4.1.5 External links
    4.2 Ideal gas
        4.2.1 Types of ideal gas
        4.2.2 Classical thermodynamic ideal gas
        4.2.3 Heat capacity
        4.2.4 Entropy
        4.2.5 Thermodynamic potentials
        4.2.6 Speed of sound
        4.2.7 Table of ideal gas equations
        4.2.8 Ideal quantum gases
        4.2.9 See also
        4.2.10 References
    4.3 Real gas
        4.3.1 Models
        4.3.2 See also
        4.3.3 References
        4.3.4 External links

Chapter 5. System Processes
    5.1 Thermodynamic process
        5.1.1 Kinds of process
        5.1.2 A cycle of quasi-static processes
        5.1.3 Conjugate variable processes
        5.1.4 Thermodynamic potentials
        5.1.5 Polytropic processes
        5.1.6 Processes classified by the second law of thermodynamics
        5.1.7 See also
        5.1.8 References
        5.1.9 Further reading
    5.2 Isobaric process
        5.2.1 Specific heat capacity
        5.2.2 Sign convention for work
        5.2.3 Defining enthalpy
        5.2.4 Variable density viewpoint
        5.2.5 See also
        5.2.6 References
    5.3 Isochoric process
        5.3.1 Formalism
        5.3.2 Ideal Otto cycle
        5.3.3 Etymology
        5.3.4 See also
        5.3.5 References
        5.3.6 External links
    5.4 Isothermal process
        5.4.1 Examples
        5.4.2 Details for an ideal gas
        5.4.3 Calculation of work
        5.4.4 Entropy changes
        5.4.5 See also
        5.4.6 References
    5.5 Adiabatic process
        5.5.1 Description
        5.5.2 Adiabatic heating and cooling
        5.5.3 Ideal gas (reversible process)
        5.5.4 Graphing adiabats
        5.5.5 Etymology
        5.5.6 Conceptual significance in thermodynamic theory
        5.5.7 Divergent usages of the word adiabatic
        5.5.8 See also
        5.5.9 References
        5.5.10 External links
    5.6 Isenthalpic process
        5.6.1 See also
        5.6.2 References
    5.7 Isentropic process
        5.7.1 Background
        5.7.2 Isentropic processes in thermodynamic systems
        5.7.3 Isentropic flow
        5.7.4 See also
        5.7.5 Notes
        5.7.6 References
    5.8 Polytropic process
        5.8.1 Derivation
        5.8.2 Applicability
        5.8.3 Polytropic Specific Heat Capacity
        5.8.4 Relationship to ideal processes
        5.8.5 Notation
        5.8.6 Other
        5.8.7 See also
        5.8.8 References

Chapter 6. System Properties
    6.1 Introduction to entropy
        6.1.1 Explanation
        6.1.2 Example of increasing entropy
        6.1.3 Origins and uses
        6.1.4 Heat and entropy
        6.1.5 Introductory descriptions of entropy
        6.1.6 See also
        6.1.7 References
        6.1.8 Further reading
    6.2 Entropy
        6.2.1 History
        6.2.2 Definitions and descriptions
        6.2.3 Second law of thermodynamics
        6.2.4 Applications
        6.2.5 Entropy change formulas for simple processes
        6.2.6 Approaches to understanding entropy
        6.2.7 Interdisciplinary applications of entropy
        6.2.8 See also
        6.2.9 Notes
        6.2.10 References
        6.2.11 Further reading
        6.2.12 External links
    6.3 Pressure
        6.3.1 Definition
        6.3.2 Types
        6.3.3 See also
        6.3.4 Notes
        6.3.5 References
        6.3.6 External links
    6.4 Thermodynamic temperature
        6.4.1 Overview
        6.4.2 The relationship of temperature, motions, conduction, and thermal energy
        6.4.3 Practical applications for thermodynamic temperature
        6.4.4 Definition of thermodynamic temperature
        6.4.5 History
        6.4.6 See also
        6.4.7 Notes
        6.4.8 External links
    6.5 Volume
        6.5.1 Overview
        6.5.2 Heat and work
        6.5.3 Specific volume
        6.5.4 Gas volume
        6.5.5 See also
        6.5.6 References

Chapter 7
    7.1 Thermodynamic system
        7.1.1 Overview
        7.1.2 History
        7.1.3 Systems in equilibrium
        7.1.4 Walls
        7.1.5 Surroundings
        7.1.6 Closed system
        7.1.7 Isolated system
        7.1.8 Selective transfer of matter
        7.1.9 Open system
        7.1.10 See also
        7.1.11 References

Chapter 8. Material Properties
    8.1 Heat capacity
        8.1.1 History
        8.1.2 Units
        8.1.3 Measurement of heat capacity
        8.1.4 Theory of heat capacity
        8.1.5 Table of specific heat capacities
        8.1.6 Mass heat capacity of building materials
        8.1.7 Further reading
        8.1.8 See also
        8.1.9 Notes
        8.1.10 References
        8.1.11 External links
    8.2 Compressibility
        8.2.1 Definition
        8.2.2 Thermodynamics
        8.2.3 Earth science
        8.2.4 Fluid dynamics
        8.2.5 Negative compressibility
        8.2.6 See also
        8.2.7 References
    8.3 Thermal expansion
        8.3.1 Overview
        8.3.2 Coefficient of thermal expansion
        8.3.3 Expansion in solids
        8.3.4 Isobaric expansion in gases
        8.3.5 Expansion in liquids
        8.3.6 Expansion in mixtures and alloys
        8.3.7 Apparent and absolute expansion
        8.3.8 Examples and applications
        8.3.9 Thermal expansion coefficients for various materials
        8.3.10 See also
        8.3.11 References
        8.3.12 External links

Chapter 9. Potentials
    9.1 Thermodynamic potential
        9.1.1 Description and interpretation
        9.1.2 Natural variables
        9.1.3 The fundamental equations
        9.1.4 The equations of state
        9.1.5 The Maxwell relations
        9.1.6 Euler integrals
        9.1.7 The Gibbs–Duhem relation
        9.1.8 Chemical reactions
        9.1.9 See also
        9.1.10 Notes
        9.1.11 References
        9.1.12 Further reading
        9.1.13 External links
    9.2 Enthalpy
        9.2.1 Origins
        9.2.2 Formal definition
        9.2.3 Other expressions
        9.2.4 Physical interpretation
        9.2.5 Relationship to heat
        9.2.6 Applications
        9.2.7 Diagrams
        9.2.8 See also
        9.2.9 Notes
        9.2.10 References
        9.2.11 Bibliography
        9.2.12 External links
    9.3 Internal energy
        9.3.1 Introduction
        9.3.2 Description and definition
        9.3.3 Internal energy of the ideal gas
        9.3.4 Internal energy of a closed thermodynamic system
        9.3.5 Internal energy of multi-component systems
        9.3.6 Internal energy in an elastic medium
        9.3.7 History
        9.3.8 Notes
        9.3.9 See also
        9.3.10 References
        9.3.11 Bibliography

Chapter 10. Equations
    10.1 Ideal gas law
        10.1.1 Equation
        10.1.2 Applications to thermodynamic processes
        10.1.3 Deviations from ideal behavior of real gases
        10.1.4 Derivations
        10.1.5 See also
        10.1.6 References
        10.1.7 Further reading
        10.1.8 External links

Chapter 11. Fundamentals
    11.1 Fundamental thermodynamic relation
        11.1.1 Derivation from the first and second laws of thermodynamics
        11.1.2 Derivation from statistical mechanical principles
        11.1.3 References
        11.1.4 External links
    11.2 Heat engine
        11.2.1 Overview
        11.2.2 Everyday examples
        11.2.3 Examples of heat engines
        11.2.4 Efficiency
        11.2.5 History
        11.2.6 Heat engine enhancements
        11.2.7 Heat engine processes
        11.2.8 See also
        11.2.9 References
    11.3 Thermodynamic cycle
        11.3.1 Heat and work
        11.3.2 Modelling real systems
        11.3.3 Well-known thermodynamic cycles
        11.3.4 State functions and entropy
        11.3.5 See also
        11.3.6 References
        11.3.7 Further reading
        11.3.8 External links

Chapter 12. Text and image sources, contributors, and licenses
    12.1 Text
    12.2 Images
    12.3 Content license

Chapter 1. Introduction

1.1 Classical Thermodynamics

Thermodynamics is a branch of physics concerned with heat and temperature and their relation to energy and work. It defines macroscopic variables, such as internal energy, entropy, and pressure, that partly describe a body of matter or radiation. It states that the behavior of those variables is subject to general constraints that are common to all materials, beyond the peculiar properties of particular materials. These general constraints are expressed in the four laws of thermodynamics. Thermodynamics describes the bulk behavior of the body, not the microscopic behaviors of the very large numbers of its microscopic constituents, such as molecules. The basic results of thermodynamics rely on the existence of idealized states of thermodynamic equilibrium. Its laws are explained by statistical mechanics, in terms of the microscopic constituents.

[Figure: Annotated color version of the original 1824 Carnot heat engine, showing the hot body (boiler), working body (system, steam), and cold body (water); the letter labels indicate the stopping points in the Carnot cycle.]

Thermodynamics applies to a wide variety of topics in science and engineering, especially physical chemistry, chemical engineering and mechanical engineering.

Historically, the distinction between heat and temperature was studied in the 1750s by Joseph Black. Characteristically thermodynamic thinking began in the work of Carnot (1824), who believed that the efficiency of heat engines was the key that could help France win the Napoleonic Wars.[1] The Irish-born British physicist Lord Kelvin was the first to formulate a concise definition of thermodynamics, in 1854:[2]

    Thermo-dynamics is the subject of the relation of heat to forces acting between contiguous parts of bodies, and the relation of heat to electrical agency.

Initially, thermodynamics, as applied to heat engines, was concerned with the thermal properties of their 'working materials', such as steam, in an effort to increase the efficiency and power output of engines. Thermodynamics was later expanded to the study of energy transfers in chemical processes, such as the investigation, published in 1840, of the heats of chemical reactions[3] by Germain Hess, which was not originally explicitly concerned with the relation between energy exchanges by heat and work. From this evolved the study of chemical thermodynamics and the role of entropy in chemical reactions.[4][5][6][7][8][9][10][11]

1.1.1 Introduction

Historically, thermodynamics arose from the study of two distinct kinds of transfer of energy, as heat and as work, and the relation of those to the system's macroscopic variables of volume, pressure and temperature.[12][13] As it developed, thermodynamics began also to study transfers of matter.

The plain term 'thermodynamics' refers to a macroscopic description of bodies and processes.[14] Reference to atomic constitution is foreign to classical thermodynamics.[15] Usually the plain term 'thermodynamics' refers by default to equilibrium as opposed to non-equilibrium thermodynamics. The qualified term 'statistical thermodynamics' refers to descriptions of bodies and processes in terms of the atomic or other microscopic constitution of matter, using statistical and probabilistic reasoning.

Thermodynamic equilibrium is one of the most important concepts for thermodynamics.[16][17] The temperature of a thermodynamic system is well defined, and is perhaps the most characteristic quantity of thermodynamics. As the systems and processes of interest are taken further from thermodynamic equilibrium, their exact thermodynamical study becomes more difficult. Relatively simple approximate calculations, however, using the variables of equilibrium thermodynamics, are of much practical value. Many important practical engineering cases, as in heat engines or refrigerators, can be approximated as systems consisting of many subsystems at different temperatures and pressures. If a physical process is too fast, the equilibrium thermodynamic variables, for example temperature, may not be well enough defined to provide a useful approximation.

Central to thermodynamic analysis are the definitions of the system, which is of interest, and of its surroundings.[8][18] The surroundings of a thermodynamic system consist of physical devices and of other thermodynamic systems that can interact with it. An example of a thermodynamic surrounding is a heat bath, which is held at a prescribed temperature, regardless of how much heat might be drawn from it.

There are four fundamental kinds of physical entities in thermodynamics:

• states of a system, and the states of its surrounding systems
• walls of a system,[19][20][21][22][23]
• thermodynamic processes of a system, and
• thermodynamic operations.

This allows two fundamental approaches to thermodynamic reasoning, that in terms of states of a system, and that in terms of cyclic processes of a system.

A thermodynamic system can be defined in terms of its states.[17] In this way, a thermodynamic system is a macroscopic physical object, explicitly specified in terms of macroscopic physical and chemical variables that describe its macroscopic properties. The macroscopic state variables of thermodynamics have been recognized in the course of empirical work in physics and chemistry.[9] Always associated with the material that constitutes a system, its working substance, are the walls that delimit the system, and connect it with its surroundings. The state variables chosen for the system should be appropriate for the natures of the walls and surroundings.[24]

A thermodynamic operation is an artificial physical manipulation that changes the definition of a system or its surroundings. Usually it is a change of the permeability of a wall of the system,[25] that allows energy (as heat or work) or matter (mass) to be exchanged with the environment. For example, the partition between two thermodynamic systems can be removed so as to produce a single system. A thermodynamic operation that increases the range of possible transfers usually leads to a thermodynamic process of transfer of mass or energy that changes the state of the system, and the transfer occurs in natural accord with the laws of thermodynamics. But if the operation simply reduces the possible range of transfers, in general it does not initiate a process. The states of the system's surrounding systems are assumed to be unchanging in time except when they are changed by a thermodynamic operation, whereupon a thermodynamic process can be initiated.

A thermodynamic system can also be defined in terms of the cyclic processes that it can undergo.[26] A cyclic process is a cyclic sequence of thermodynamic operations and processes that can be repeated indefinitely often without changing the final state of the system.

For thermodynamics and statistical thermodynamics to apply to a physical system, it is necessary that its internal atomic mechanisms fall into one of two classes:

• those so rapid that, in the time frame of the process of interest, the atomic states rapidly bring the system to its own state of internal thermodynamic equilibrium; and
• those so slow that, in the time frame of the process of interest, they leave the system unchanged.[27][28]

The rapid atomic mechanisms account for the internal energy of the system. They mediate the macroscopic changes that are of interest for thermodynamics and statistical thermodynamics, because they quickly bring the system near enough to thermodynamic equilibrium. When intermediate rates are present, thermodynamics and statistical mechanics cannot be applied.[27] Such intermediate rate atomic processes do not bring the system near enough to thermodynamic equilibrium in the time frame of the macroscopic process of interest. This separation of time scales of atomic processes is a theme that recurs throughout the subject.

For example, classical thermodynamics is characterized by its study of materials that have equations of state or characteristic equations. These express equilibrium relations between macroscopic mechanical variables and temperature and internal energy, and they express the constitutive peculiarities of the material of the system. A classical material can usually be described by a function that makes pressure dependent on volume and temperature, the resulting pressure being established much more rapidly than any imposed change of volume or temperature.[29][30][31][32]
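As a concrete illustration (the simplest textbook example, not one singled out in the text above), the ideal-gas equation of state is such a characteristic equation, giving the pressure as a function of volume and temperature:

\[
p(V, T) \;=\; \frac{nRT}{V}, \qquad \text{equivalently} \quad pV = nRT ,
\]

where \(n\) is the amount of substance and \(R\) is the gas constant. Real materials are described by more elaborate equations of state, such as the van der Waals equation, but the role of the characteristic equation is the same: it encodes the constitutive peculiarities of the material.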
The present article takes a gradual approach to the subject, starting with a focus on cyclic processes and thermodynamic equilibrium, and then gradually beginning to further consider non-equilibrium systems.

Thermodynamic facts can often be explained by viewing macroscopic objects as assemblies of very many microscopic or atomic objects that obey Hamiltonian dynamics.[8][33][34] The microscopic or atomic objects exist in species, the objects of each species being all alike. Because of this likeness, statistical methods can be used to account for the macroscopic properties of the thermodynamic system in terms of the properties of the microscopic species. Such explanation is called statistical thermodynamics; also often it is referred to by the term 'statistical mechanics', though this term can have a wider meaning, referring to 'microscopic' objects, such as economic quantities, that do not obey Hamiltonian dynamics.[33]

1.1.2 History

The history of thermodynamics as a scientific discipline generally begins with Otto von Guericke who, in about 1650,[35] built and designed the world's first vacuum pump and demonstrated a vacuum using his Magdeburg hemispheres. Guericke was driven to make a vacuum in order to disprove Aristotle's long-held supposition that 'nature abhors a vacuum'. Shortly after Guericke, the physicist and chemist Robert Boyle had learned of Guericke's designs and, in 1656, in coordination with the scientist Robert Hooke, built an air pump.[36] Using this pump, Boyle and Hooke noticed a correlation between pressure, temperature, and volume. In time, they formulated Boyle's Law, which states that for a gas at constant temperature, its pressure and volume are inversely proportional. In 1679, based on these concepts, an associate of Boyle's named Denis Papin built a steam digester, which was a closed vessel with a tightly fitting lid that confined steam until a high pressure was generated. Later versions of this design implemented a steam release valve that kept the machine from exploding. By watching the valve rhythmically move up and down, Papin conceived of the idea of a piston and cylinder engine. He did not, however, follow through with his design. Nevertheless, in 1697, based on Papin's designs, the engineer Thomas Savery built the first engine, followed by Thomas Newcomen in 1712. Although these early engines were crude and inefficient, they attracted the attention of the leading scientists of the time.

[Figure: The thermodynamicists representative of the original eight founding schools of thermodynamics. The schools with the most lasting effect in founding the modern versions of thermodynamics are the Berlin school, particularly as established in Rudolf Clausius's 1865 textbook The Mechanical Theory of Heat; the Vienna school, with the statistical mechanics of Ludwig Boltzmann; and the Gibbsian school at Yale University, whose American engineer Willard Gibbs, with his 1876 On the Equilibrium of Heterogeneous Substances, launched chemical thermodynamics.]

The concepts of heat capacity and latent heat, which were necessary for the development of thermodynamics, were developed by Professor Joseph Black at the University of Glasgow, where James Watt worked as an instrument maker. Watt consulted with Black on tests of his steam engine, but it was Watt who conceived the idea of the external condenser, greatly raising the steam engine's efficiency.[37] All the previous work led Sadi Carnot, the father of thermodynamics, to publish Reflections on the Motive Power of Fire (1824), a discourse on heat, power, energy, and engine efficiency. The paper outlined the basic energetic relations between the Carnot engine, the Carnot cycle, and motive power. It marked the start of thermodynamics as a modern science.[11]

The first thermodynamic textbook was written in 1859 by William Rankine, originally trained as a physicist and a professor of civil and mechanical engineering at the University of Glasgow.[38] The first and second laws of thermodynamics emerged simultaneously in the 1850s, primarily out of the works of William Rankine, Rudolf Clausius, and William Thomson (Lord Kelvin).

The foundations of statistical thermodynamics were set out by physicists such as James Clerk Maxwell, Ludwig Boltzmann, Max Planck, Rudolf Clausius and J. Willard Gibbs.

From 1873 to '76, the American mathematical physicist Josiah Willard Gibbs published a series of three papers, the most famous being "On the equilibrium of heterogeneous substances".[4] Gibbs showed how thermodynamic processes, including chemical reactions, could be graphically analyzed. By studying the energy, entropy, volume, chemical potential, temperature and pressure of the thermodynamic system, one can determine whether a process would occur spontaneously.[39] Chemical thermodynamics was further developed by Pierre Duhem,[5] Gilbert N. Lewis, Merle Randall,[6] and E. A. Guggenheim,[7][8] who applied the mathematical methods of Gibbs.

[Figure: The lifetimes of some of the most important contributors to thermodynamics.]

Etymology

The etymology of thermodynamics has an intricate history. It was first spelled in a hyphenated form as an adjective (thermo-dynamic) in 1849, and from 1854 to 1859 as the hyphenated noun thermo-dynamics to represent the science of heat and motive power, and thereafter as thermodynamics.

The components of the word thermo-dynamic are derived from the Greek words therme, meaning heat, and dynamis, meaning power (Haynie claims that the word was coined around 1840).[40][41]

The term thermo-dynamic was first used in January 1849 by William Thomson, later Lord Kelvin, in the phrase a perfect thermo-dynamic engine to describe Sadi Carnot's heat engine.[42]:545 In April 1849, Thomson added an appendix to his paper and used the term thermodynamic in the phrase the object of a thermodynamic engine.[42]:569

Pierre Perrot claims that the term thermodynamics was coined by James Joule in 1858 to designate the science of relations between heat and power.[11] Joule, however, never used that term, but did use the term perfect thermo-dynamic engine in reference to Thomson's 1849 phraseology,[42]:545 and Thomson's note on Joule's 1851 paper On the Air-Engine.

In 1854, thermo-dynamics, as a functional term to denote the general study of the action of heat, was first used by William Thomson in his paper On the Dynamical Theory of Heat.[2]

In 1859, the closed compound form thermodynamics was first used by William Rankine in A Manual of the Steam Engine, in a chapter on the Principles of Thermodynamics.[43]

1.1.3 Branches of description

Thermodynamic systems are theoretical constructions used to model physical systems that exchange matter and energy in terms of the laws of thermodynamics. The study of thermodynamical systems has developed into several related branches, each using a different fundamental model as a theoretical or experimental basis, or applying the principles to varying types of systems.

Classical thermodynamics

Classical thermodynamics accounts for the adventures of a thermodynamic system in terms either of its time-invariant equilibrium states or else of its continually repeated cyclic processes, but, formally, not both in the same account. It uses only time-invariant, or equilibrium, macroscopic quantities measurable in the laboratory, counting as time-invariant a long-term time-average of a quantity, such as a flow, generated by a continually repetitive process.[44][45] In classical thermodynamics, rates of change are not admitted as variables of interest. An equilibrium state stands endlessly without change over time, while a continually repeated cyclic process runs endlessly without a net change in the system over time.

In the account in terms of equilibrium states of a system, a state of thermodynamic equilibrium in a simple system is spatially homogeneous.

In the classical account solely in terms of a cyclic process, the spatial interior of the 'working body' of that process is not considered; the 'working body' thus does not have a defined internal thermodynamic state of its own, because no assumption is made that it should be in thermodynamic equilibrium; only its inputs and outputs of energy as heat and work are considered.[46] It is common to describe a cycle theoretically as composed of a sequence of very many thermodynamic operations and processes. This creates a link to the description in terms of equilibrium states. The cycle is then theoretically described as a continuous progression of equilibrium states.



Classical thermodynamics was originally concerned with the transformation of energy in a cyclic process, and the exchange of energy between closed systems defined only by their equilibrium states. The distinction between transfers of energy as heat and as work was central.

As classical thermodynamics developed, the distinction between heat and work became less central. This was because there was more interest in open systems, for which the distinction between heat and work is not simple, and is beyond the scope of the present article. Alongside the amount of heat transferred as a fundamental quantity, entropy was gradually found to be a more generally applicable concept, especially when considering chemical reactions. Massieu in 1869 considered entropy as the basic dependent thermodynamic variable, with energy potentials and the reciprocal of the thermodynamic temperature as fundamental independent variables. Massieu functions can be useful in present-day non-equilibrium thermodynamics. In 1875, in the work of Josiah Willard Gibbs, entropy was considered a fundamental independent variable, while internal energy was a dependent variable.[47]

All actual physical processes are to some degree irreversible. Classical thermodynamics can consider irreversible processes, but its account in exact terms is restricted to variables that refer only to initial and final states of thermodynamic equilibrium, or to rates of input and output that do not change with time. For example, classical thermodynamics can consider time-average rates of flows generated by continually repeated irreversible cyclic processes. Also it can consider irreversible changes between equilibrium states of systems consisting of several phases (as defined below in this article), or with removable or replaceable partitions. But for systems that are described in terms of equilibrium states, it considers neither flows, nor spatial inhomogeneities in simple systems with no externally imposed force fields such as gravity. In the account in terms of equilibrium states of a system, descriptions of irreversible processes refer only to initial and final static equilibrium states; the time it takes to change thermodynamic state is not considered.[48][49]

Local equilibrium thermodynamics

Local equilibrium thermodynamics is concerned with the time courses and rates of progress of irreversible processes in systems that are smoothly spatially inhomogeneous. It admits time as a fundamental quantity, but only in a restricted way. Rather than considering time-invariant flows as long-term-average rates of cyclic processes, local equilibrium thermodynamics considers time-varying flows in systems that are described by states of local thermodynamic equilibrium, as follows.

For processes that involve only suitably small and smooth spatial inhomogeneities and suitably small changes with time, a good approximation can be found through the assumption of local thermodynamic equilibrium. Within the large or global region of a process, for a suitably small local region, this approximation assumes that a quantity known as the entropy of the small local region can be defined in a particular way. That particular way of definition of entropy is largely beyond the scope of the present article, but here it may be said that it is entirely derived from the concepts of classical thermodynamics; in particular, neither flow rates nor changes over time are admitted into the definition of the entropy of the small local region. It is assumed without proof that the instantaneous global entropy of a non-equilibrium system can be found by adding up the simultaneous instantaneous entropies of its constituent small local regions. Local equilibrium thermodynamics considers processes that involve the time-dependent production of entropy by dissipative processes, in which kinetic energy of bulk flow and chemical potential energy are converted into internal energy at time-rates that are explicitly accounted for. Time-varying bulk flows and specific diffusional flows are considered, but they are required to be dependent variables, derived only from material properties described only by static macroscopic equilibrium states of small local regions. The independent state variables of a small local region are only those of classical thermodynamics.

Generalized or extended thermodynamics

Like local equilibrium thermodynamics, generalized or extended thermodynamics also is concerned with the time courses and rates of progress of irreversible processes in systems that are smoothly spatially inhomogeneous. It describes time-varying flows in terms of states of suitably small local regions within a global region that is smoothly spatially inhomogeneous, rather than considering flows as time-invariant long-term-average rates of cyclic processes. In its accounts of processes, generalized or extended thermodynamics admits time as a fundamental quantity in a more far-reaching way than does local equilibrium thermodynamics. The states of small local regions are defined by macroscopic quantities that are explicitly allowed to vary with time, including time-varying flows. Generalized thermodynamics might tackle such problems as ultrasound or shock waves, in which there are strong spatial inhomogeneities and changes in time fast enough to outpace a tendency towards local thermodynamic equilibrium. Generalized or extended thermodynamics is a diverse and developing project, rather than a more or less completed subject such as is classical thermodynamics.[50][51]

For generalized or extended thermodynamics, the definition of the quantity known as the entropy of a small local region is in terms beyond those of classical thermodynamics; in particular, flow rates are admitted into the definition of the entropy of a small local region. The independent state variables of a small local region include flow rates, which are not admitted as independent variables for the small local regions of local equilibrium thermodynamics.


Outside the range of classical thermodynamics, the definition of the entropy of a small local region is no simple matter. For a thermodynamic account of a process in terms of the entropies of small local regions, the definition of entropy should be such as to ensure that the second law of thermodynamics applies in each small local region. It is often assumed without proof that the instantaneous global entropy of a non-equilibrium system can be found by adding up the simultaneous instantaneous entropies of its constituent small local regions. For a given physical process, the selection of suitable independent local non-equilibrium macroscopic state variables for the construction of a thermodynamic description calls for qualitative physical understanding, rather than being a simply mathematical problem concerned with a uniquely determined thermodynamic description. A suitable definition of the entropy of a small local region depends on the physically insightful and judicious selection of the independent local non-equilibrium macroscopic state variables, and different selections provide different generalized or extended thermodynamical accounts of one and the same given physical process. This is one of the several good reasons for considering entropy as an epistemic physical variable, rather than as a simply material quantity. According to a respected author: "There is no compelling reason to believe that the classical thermodynamic entropy is a measurable property of nonequilibrium phenomena, ..."[52]

Statistical thermodynamics

Statistical thermodynamics, also called statistical mechanics, emerged with the development of atomic and molecular theories in the second half of the 19th century and early 20th century. It provides an explanation of classical thermodynamics. It considers the microscopic interactions between individual particles and their collective motions, in terms of classical or of quantum mechanics. Its explanation is in terms of statistics that rest on the fact that the system is composed of several species of particles or collective motions, the members of each species respectively being in some sense all alike.

1.1.4 Thermodynamic equilibrium

Equilibrium thermodynamics studies transformations of matter and energy in systems at or near thermodynamic equilibrium. In thermodynamic equilibrium, a system's properties are, by definition, unchanging in time. In thermodynamic equilibrium no macroscopic change is occurring or can be triggered; within the system, every microscopic process is balanced by its opposite; this is called the principle of detailed balance. A central aim in equilibrium thermodynamics is: given a system in a well-defined initial state, subject to specified constraints, to calculate what the equilibrium state of the system is.[53]

In theoretical studies, it is often convenient to consider the simplest kind of thermodynamic system. This is defined variously by different authors.[48][54][55][56][57][58] For the present article, the following definition is convenient, as abstracted from the definitions of various authors. A region of material with all intensive properties continuous in space and time is called a phase. A simple system is for the present article defined as one that consists of a single phase of a pure chemical substance, with no interior partitions. Within a simple isolated thermodynamic system in thermodynamic equilibrium, in the absence of externally imposed force fields, all properties of the material of the system are spatially homogeneous.[59] Much of the basic theory of thermodynamics is concerned with homogeneous systems in thermodynamic equilibrium.[4][60]

Most systems found in nature or considered in engineering are not in thermodynamic equilibrium, exactly considered. They are changing or can be triggered to change over time, and are continuously and discontinuously subject to flux of matter and energy to and from other systems.[22] For example, according to Callen, in absolute thermodynamic equilibrium all radioactive materials would have decayed completely and nuclear reactions would have transmuted all nuclei to the most stable isotopes. Such processes, which would take cosmic times to complete, generally can be ignored.[22] Such processes being ignored, many systems in nature are close enough to thermodynamic equilibrium that for many purposes their behaviour can be well approximated by equilibrium calculations.

Quasi-static transfers between simple systems are nearly in thermodynamic equilibrium and are reversible

It very much eases and simplifies theoretical thermodynamical studies to imagine transfers of energy and matter between two simple systems that proceed so slowly that at all times each simple system considered separately is near enough to thermodynamic equilibrium. Such processes are sometimes called quasi-static and are near enough to being reversible.[61][62]


Natural processes are partly described by tendency towards thermodynamic equilibrium and are irreversible

If not initially in thermodynamic equilibrium, simple isolated thermodynamic systems, as time passes, tend to evolve naturally towards thermodynamic equilibrium. In the absence of externally imposed force fields, they become homogeneous in all their local properties. Such homogeneity is an important characteristic of a system in thermodynamic equilibrium in the absence of externally imposed force fields.

Many thermodynamic processes can be modeled by compound or composite systems, consisting of several or many contiguous component simple systems, initially not in thermodynamic equilibrium, but allowed to transfer mass and energy between them. Natural thermodynamic processes are described in terms of a tendency towards thermodynamic equilibrium within simple systems and in transfers between contiguous simple systems. Such natural processes are irreversible.[63]

1.1.5 Non-equilibrium thermodynamics

Non-equilibrium thermodynamics[64] is a branch of thermodynamics that deals with systems that are not in thermodynamic equilibrium; it is also called thermodynamics of irreversible processes.

1.1.6 Laws of thermodynamics

Main article: Laws of thermodynamics

Thermodynamics states a set of four laws that are valid for all systems that fall within the constraints implied by each. In the various theoretical descriptions of thermodynamics these laws may be expressed in seemingly differing forms, but the most prominent formulations are the following:

Zeroth law of thermodynamics: If two systems are each in thermal equilibrium with a third, they are also in thermal equilibrium with each other.

This statement implies that thermal equilibrium is an equivalence relation on the set of thermodynamic systems under consideration. Systems are said to be in thermal equilibrium with each other if spontaneous molecular thermal energy exchanges between them do not lead to a net exchange of energy. This law is tacitly assumed in every measurement of temperature. For two bodies known to be at the same temperature, deciding if they are in thermal equilibrium when put into thermal contact does not require actually bringing them into contact and measuring any changes of their observable properties in time.[65] In traditional statements, the law provides an empirical definition of temperature and justification for the construction of practical thermometers. In contrast to absolute thermodynamic temperatures, empirical temperatures are measured just by the mechanical properties of bodies, such as their volumes, without reliance on the concepts of energy, entropy or the first, second, or third laws of thermodynamics.[56][66] Empirical temperatures lead to calorimetry for heat transfer in terms of the mechanical properties of bodies, without reliance on mechanical concepts of energy.

The physical content of the zeroth law has long been recognized. For example, Rankine in 1853 defined temperature as follows: "Two portions of matter are said to have equal temperatures when neither tends to communicate heat to the other."[67] Maxwell in 1872 stated a "Law of Equal Temperatures".[68] He also stated: "All Heat is of the same kind."[69] Planck explicitly assumed and stated it in its customary present-day wording in his formulation of the first two laws.[70] By the time the desire arose to number it as a law, the other three had already been assigned numbers, and so it was designated the zeroth law.

First law of thermodynamics: The increase in internal energy of a closed system is equal to the difference of the heat supplied to the system and the work done by the system: ΔU = Q − W.[71][72][73][74][75][76][77][78][79][80] (Note that due to the ambiguity of what constitutes positive work, some sources state that ΔU = Q + W, in which case work done on the system is positive.)

The first law of thermodynamics asserts the existence of a state variable for a system, the internal energy, and tells how it changes in thermodynamic processes. The law allows a given internal energy of a system to be reached by any combination of heat and work. It is important that internal energy is a variable of state of the system (see Thermodynamic state) whereas heat and work are variables that describe processes or changes of the state of systems.

The first law observes that the internal energy of an isolated system obeys the principle of conservation of energy, which states that energy can be transformed (changed from one form to another), but cannot be created or destroyed.[81][82][83][84][85]
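As a small numerical illustration of these sign conventions (a minimal sketch; the quantities of heat and work are invented for the example and are not taken from the text):

    # First-law bookkeeping, Delta U = Q - W, with W counted as work done BY the system.
    def delta_u(q_supplied, w_by_system):
        """Change in internal energy of a closed system, in joules."""
        return q_supplied - w_by_system

    q = 500.0   # heat supplied to the system, J
    w = 200.0   # work done by the system on its surroundings, J
    print(delta_u(q, w))    # 300.0 J
    # Under the alternative convention, W' = -w is work done ON the system,
    # and Delta U = Q + W' gives the same 300.0 J.
    print(q + (-w))

Whichever convention a source uses, the state function U changes by the same amount; only the bookkeeping of the process variables Q and W differs.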
Second law of thermodynamics: Heat cannot spontaneously flow from a colder location to a hotter location.

The second law of thermodynamics is an expression of the universal principle of dissipation of kinetic and potential energy observable in nature. The second law is an observation of the fact that over time, differences in temperature, pressure, and chemical potential tend to even out in a physical system that is isolated from the outside world.


Entropy is a measure of how much this process has progressed. The entropy of an isolated system that is not in equilibrium tends to increase over time, approaching a maximum value at equilibrium.

In classical thermodynamics, the second law is a basic postulate applicable to any system involving heat energy transfer; in statistical thermodynamics, the second law is a consequence of the assumed randomness of molecular chaos. There are many versions of the second law, but they all have the same effect, which is to explain the phenomenon of irreversibility in nature.

Third law of thermodynamics: As a system approaches absolute zero the entropy of the system approaches a minimum value.

The third law of thermodynamics is a statistical law of nature regarding entropy and the impossibility of reaching absolute zero of temperature. This law provides an absolute reference point for the determination of entropy. The entropy determined relative to this point is the absolute entropy. Alternate definitions of the third law are "the entropy of all systems and of all states of a system is smallest at absolute zero," or equivalently "it is impossible to reach the absolute zero of temperature by any finite number of processes."

Absolute zero is −273.15 °C (degrees Celsius), −459.67 °F (degrees Fahrenheit), 0 K (kelvin), or 0 °R (degrees Rankine).

1.1.7 System models

[Figure: A diagram of a generic thermodynamic system, showing the system, its boundary, and the surroundings]

The thermodynamic system is an important concept of thermodynamics. It is a precisely defined region of the universe under study. Everything in the universe except the system is known as the surroundings. A system is separated from the remainder of the universe by a boundary, which may be actual, or merely notional and fictive, but by convention delimits a finite volume. Transfers of work, heat, or matter between the system and the surroundings take place across this boundary, which may or may not have properties that restrict what can be transferred across it. A system may have several distinct boundary sectors or partitions separating it from the surroundings, each characterized by how it restricts transfers, and being permeable to its characteristic transferred quantities.

The volume can be the region surrounding a single atom resonating energy, as Max Planck defined in 1900; it can be a body of steam or air in a steam engine, such as Sadi Carnot defined in 1824; it can be the body of a tropical cyclone, as Kerry Emanuel theorized in 1986 in the field of atmospheric thermodynamics; it could also be just one nuclide (i.e. a system of quarks) as hypothesized in quantum thermodynamics.

Anything that passes across the boundary needs to be accounted for in a proper transfer balance equation. Thermodynamics is largely about such transfers.

Boundary sectors are of various characters: rigid, flexible, fixed, moveable, actually restrictive, and fictive or not actually restrictive. For example, in an engine, a fixed boundary sector means the piston is locked at its position; then no pressure-volume work is done across it. In that same engine, a moveable boundary allows the piston to move in and out, permitting pressure-volume work. There is no restrictive boundary sector for the whole earth including its atmosphere, and so roughly speaking, no pressure-volume work is done on or by the whole earth system. Such a system is sometimes said to be diabatically heated or cooled by radiation.[86][87]

Thermodynamics distinguishes classes of systems by their boundary sectors.

An open system has a boundary sector that is permeable to matter; such a sector is usually permeable also to energy, but the energy that passes cannot in general be uniquely sorted into heat and work components. Open system boundaries may be either actually restrictive, or else non-restrictive.

A closed system has no boundary sector that is permeable to matter, but in general its boundary is permeable to energy. For closed systems, boundaries are totally prohibitive of matter transfer.

An adiabatically isolated system has only adiabatic boundary sectors. Energy can be transferred as work, but transfers of matter and of energy as heat are prohibited.


A purely diathermically isolated system has only boundary sectors permeable only to heat; it is sometimes said to be adynamically isolated and closed to matter transfer. A process in which no work is transferred is sometimes called adynamic.[88]

An isolated system has only isolating boundary sectors. Nothing can be transferred into or out of it.

Engineering and natural processes are often described as composites of many different component simple systems, sometimes with unchanging or changing partitions between them. A change of partition is an example of a thermodynamic operation.

1.1.8 States and processes

There are four fundamental kinds of entity in thermodynamics: states of a system, walls between systems, thermodynamic processes, and thermodynamic operations. This allows three fundamental approaches to thermodynamic reasoning: that in terms of states of thermodynamic equilibrium of a system, that in terms of time-invariant processes of a system, and that in terms of cyclic processes of a system.

The approach through states of thermodynamic equilibrium of a system requires a full account of the state of the system as well as a notion of process from one state to another of a system, but may require only an idealized or partial account of the state of the surroundings of the system or of other systems.

The method of description in terms of states of thermodynamic equilibrium has limitations. For example, processes in a region of turbulent flow, or in a burning gas mixture, or in a Knudsen gas may be beyond the province of thermodynamics.[89][90][91] This problem can sometimes be circumvented through the method of description in terms of cyclic or of time-invariant flow processes. This is part of the reason why the founders of thermodynamics often preferred the cyclic process description.

Approaches through processes of time-invariant flow of a system are used for some studies. Some processes, for example Joule-Thomson expansion, are studied through steady-flow experiments, but can be accounted for by distinguishing the steady bulk flow kinetic energy from the internal energy, and thus can be regarded as within the scope of classical thermodynamics defined in terms of equilibrium states or of cyclic processes.[44][92] Other flow processes, for example thermoelectric effects, are essentially defined by the presence of differential flows or diffusion, so that they cannot be adequately accounted for in terms of equilibrium states or classical cyclic processes.[93][94]

The notion of a cyclic process does not require a full account of the state of the system, but does require a full account of how the process occasions transfers of matter and energy between the principal system (which is often called the working body) and its surroundings, which must include at least two heat reservoirs at different known and fixed temperatures, one hotter than the principal system and the other colder than it, as well as a reservoir that can receive energy from the system as work and can do work on the system. The reservoirs can alternatively be regarded as auxiliary idealized component systems, alongside the principal system. Thus an account in terms of cyclic processes requires at least four contributory component systems. The independent variables of this account are the amounts of energy that enter and leave the idealized auxiliary systems. In this kind of account, the working body is often regarded as a "black box",[95] and its own state is not specified. In this approach, the notion of a properly numerical scale of empirical temperature is a presupposition of thermodynamics, not a notion constructed by or derived from it.

Account in terms of states of thermodynamic equilibrium

When a system is at thermodynamic equilibrium under a given set of conditions of its surroundings, it is said to be in a definite thermodynamic state, which is fully described by its state variables.

If a system is simple as defined above, and is in thermodynamic equilibrium, and is not subject to an externally imposed force field, such as gravity, electricity, or magnetism, then it is homogeneous, that is to say, spatially uniform in all respects.[96]

In a sense, a homogeneous system can be regarded as spatially zero-dimensional, because it has no spatial variation. If a system in thermodynamic equilibrium is homogeneous, then its state can be described by a few physical variables, which are mostly classifiable as intensive variables and extensive variables.[8][33][97][98][99]

An intensive variable is one that is unchanged with the thermodynamic operation of scaling of a system.

An extensive variable is one that simply scales with the scaling of a system, without the further requirement, used just below here, of additivity even when there is inhomogeneity of the added systems.

Examples of extensive thermodynamic variables are total mass and total volume. Under the above definition, entropy is also regarded as an extensive variable. Examples of intensive thermodynamic variables are temperature, pressure, and chemical concentration; intensive thermodynamic variables are defined at each spatial point and each instant of time in a system. Physical macroscopic variables can be mechanical, material, or thermal.[33] Temperature is a thermal variable; according to Guggenheim, "the most important conception in thermodynamics is temperature."[8]
Intensive variables have the property that if any number of systems, each in its own separate homogeneous thermodynamic equilibrium state, all with the same respective values of all of their intensive variables, regardless of the values of their extensive variables, are laid contiguously with no partition between them, so as to form a new system, then the values of the intensive variables of the new system are the same as those of the separate constituent systems. Such a composite system is in a homogeneous thermodynamic equilibrium. Examples of intensive variables are temperature, chemical concentration, pressure, density of mass, density of internal energy, and, when it can be properly defined, density of entropy.[100] In other words, intensive variables are not altered by the thermodynamic operation of scaling.
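A minimal sketch of this scaling behaviour, for a hypothetical homogeneous sample (the material values are purely illustrative, and the internal energy is measured from an arbitrary reference):

    # Laying k identical homogeneous systems side by side: extensive variables add,
    # intensive variables are unchanged. Illustrative values for a block of copper.
    def combine(state, k):
        return {
            "mass_kg": k * state["mass_kg"],                      # extensive
            "volume_m3": k * state["volume_m3"],                  # extensive
            "internal_energy_J": k * state["internal_energy_J"],  # extensive
            "temperature_K": state["temperature_K"],              # intensive
            "pressure_Pa": state["pressure_Pa"],                  # intensive
        }

    block = {"mass_kg": 1.0, "volume_m3": 1.12e-4, "internal_energy_J": 3.85e4,
             "temperature_K": 300.0, "pressure_Pa": 1.0e5}
    print(combine(block, 2))   # doubled mass, volume and energy; same T and p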
For the immediately present account just below, an alternative definition of extensive variables is considered, that requires that if any number of systems, regardless of their possible separate thermodynamic equilibrium or non-equilibrium states or intensive variables, are laid side by side with no partition between them so as to form a new system, then the values of the extensive variables of the new system are the sums of the values of the respective extensive variables of the individual separate constituent systems. Obviously, there is no reason to expect such a composite system to be in a homogeneous thermodynamic equilibrium. Examples of extensive variables in this alternative definition are mass, volume, and internal energy. They depend on the total quantity of mass in the system.[101] In other words, although extensive variables scale with the system under the thermodynamic operation of scaling, nevertheless the present alternative definition of an extensive variable requires more than this: it requires also its additivity regardless of the inhomogeneity (or equality or inequality of the values of the intensive variables) of the component systems.

Though, when it can be properly defined, density of entropy is an intensive variable, for inhomogeneous systems, entropy itself does not fit into this alternative classification of state variables.[102][103] The reason is that entropy is a property of a system as a whole, and not necessarily related simply to its constituents separately. It is true that for any number of systems each in its own separate homogeneous thermodynamic equilibrium, all with the same values of intensive variables, removal of the partitions between the separate systems results in a composite homogeneous system in thermodynamic equilibrium, with all the values of its intensive variables the same as those of the constituent systems, and it is reservedly or conditionally true that the entropy of such a restrictively defined composite system is the sum of the entropies of the constituent systems. But if the constituent systems do not satisfy these restrictive conditions, the entropy of a composite system cannot be expected to be the sum of the entropies of the constituent systems, because the entropy is a property of the composite system as a whole. Therefore, though under these restrictive reservations, entropy satisfies some requirements for extensivity defined just above, entropy in general does not fit the immediately present definition of an extensive variable.

Being neither an intensive variable nor an extensive variable according to the immediately present definition, entropy is thus a stand-out variable, because it is a state variable of a system as a whole.[102] A non-equilibrium system can have a very inhomogeneous dynamical structure. This is one reason for distinguishing the study of equilibrium thermodynamics from the study of non-equilibrium thermodynamics.

The physical reason for the existence of extensive variables is the time-invariance of volume in a given inertial reference frame, and the strictly local conservation of mass, momentum, angular momentum, and energy. As noted by Gibbs, entropy is unlike energy and mass, because it is not locally conserved.[102] The stand-out quantity entropy is never conserved in real physical processes; all real physical processes are irreversible.[104] The motion of planets seems reversible on a short time scale (millions of years), but their motion, according to Newton's laws, is mathematically an example of deterministic chaos. Eventually a planet suffers an unpredictable collision with an object from its surroundings, outer space in this case, and consequently its future course is radically unpredictable. Theoretically this can be expressed by saying that every natural process dissipates some information from the predictable part of its activity into the unpredictable part. The predictable part is expressed in the generalized mechanical variables, and the unpredictable part in heat.

Other state variables can be regarded as conditionally 'extensive' subject to reservation as above, but not extensive as defined above. Examples are the Gibbs free energy, the Helmholtz free energy, and the enthalpy. Consequently, just because for some systems under particular conditions of their surroundings such state variables are conditionally conjugate to intensive variables, such conjugacy does not make such state variables extensive as defined above. This is another reason for distinguishing the study of equilibrium thermodynamics from the study of non-equilibrium thermodynamics. In another way of thinking, this explains why heat is to be regarded as a quantity that refers to a process and not to a state of a system.



A system with no internal partitions, and in thermodynamic
equilibrium, can be inhomogeneous in the following respect: it can consist of several so-called 'phases, each homogeneous in itself, in immediate contiguity with other
phases of the system, but distinguishable by their having
various respectively dierent physical characters, with discontinuity of intensive variables at the boundaries between
the phases; a mixture of dierent chemical species is considered homogeneous for this purpose if it is physically
homogeneous.[105] For example, a vessel can contain a system consisting of water vapour overlying liquid water; then
there is a vapour phase and a liquid phase, each homogeneous in itself, but still in thermodynamic equilibrium
with the other phase. For the immediately present account,
systems with multiple phases are not considered, though
for many thermodynamic questions, multiphase systems are
important.

Equation of state

The macroscopic variables of a thermodynamic system in thermodynamic equilibrium, in which temperature is well defined, can be related to one another through equations of state or characteristic equations.[29][30][31][32] They express the constitutive peculiarities of the material of the system. The equation of state must comply with some thermodynamic constraints, but cannot be derived from the general principles of thermodynamics alone.
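The ideal gas law pV = nRT is the simplest example of such a characteristic equation; a brief sketch (illustrative values, and an ideal gas assumed rather than any particular real material):

    # Ideal gas equation of state, p V = n R T, used to recover one variable from the others.
    R = 8.314  # J/(mol K)

    def pressure(n, T, V):
        """Pressure in Pa of n moles at temperature T (K) in volume V (m^3)."""
        return n * R * T / V

    def volume(n, T, p):
        """Volume in m^3 occupied at pressure p (Pa)."""
        return n * R * T / p

    p = pressure(n=1.0, T=273.15, V=0.0224)   # about 1.01e5 Pa, near atmospheric
    print(p, volume(n=1.0, T=273.15, p=p))    # round-trips back to 0.0224 m^3

A real substance needs a more elaborate constitutive relation (for example a van der Waals equation), which is exactly the kind of constitutive peculiarity referred to above.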
Thermodynamic processes between states of thermodynamic equilibrium

A thermodynamic process is defined by changes of state internal to the system of interest, combined with transfers of matter and energy to and from the surroundings of the system or to and from other systems. A system is demarcated from its surroundings or from other systems by partitions that more or less separate them, and may move as a piston to change the volume of the system and thus transfer work.

Dependent and independent variables for a process

A process is described by changes in values of state variables of systems or by quantities of exchange of matter and energy between systems and surroundings. The change must be specified in terms of prescribed variables. The choice of which variables are to be used is made in advance of consideration of the course of the process, and cannot be changed. Certain of the variables chosen in advance are called the independent variables.[106] From changes in independent variables may be derived changes in other variables called dependent variables. For example, a process may occur at constant pressure with pressure prescribed as an independent variable, and temperature changed as another independent variable, and then changes in volume are considered as dependent. Careful attention to this principle is necessary in thermodynamics.[107][108]

Changes of state of a system

In the approach through equilibrium states of the system, a process can be described in two main ways.

In one way, the system is considered to be connected to the surroundings by some kind of more or less separating partition, and allowed to reach equilibrium with the surroundings with that partition in place. Then, while the separative character of the partition is kept unchanged, the conditions of the surroundings are changed, and exert their influence on the system again through the separating partition, or the partition is moved so as to change the volume of the system; and a new equilibrium is reached. For example, a system is allowed to reach equilibrium with a heat bath at one temperature; then the temperature of the heat bath is changed and the system is allowed to reach a new equilibrium; if the partition allows conduction of heat, the new equilibrium is different from the old equilibrium.

In the other way, several systems are connected to one another by various kinds of more or less separating partitions, and allowed to reach equilibrium with each other, with those partitions in place. In this way, one may speak of a 'compound system'. Then one or more partitions is removed or changed in its separative properties or moved, and a new equilibrium is reached. The Joule-Thomson experiment is an example of this; a tube of gas is separated from another tube by a porous partition; the volume available in each of the tubes is determined by respective pistons; equilibrium is established with an initial set of volumes; the volumes are changed and a new equilibrium is established.[109][110][111][112][113] Another example is in separation and mixing of gases, with use of chemically semi-permeable membranes.[114]

Commonly considered thermodynamic processes

It is often convenient to study a thermodynamic process in which a single variable, such as temperature, pressure, or volume, etc., is held fixed. Furthermore, it is useful to group these processes into pairs, in which each variable held constant is one member of a conjugate pair.

Several commonly studied thermodynamic processes are:

Isobaric process: occurs at constant pressure

Isochoric process: occurs at constant volume (also called isometric/isovolumetric)

Isothermal process: occurs at a constant temperature



Adiabatic process: occurs without loss or gain of energy as heat

Isentropic process: a reversible adiabatic process occurs at a constant entropy, but is a fictional idealization. Conceptually it is possible to actually physically conduct a process that keeps the entropy of the system constant, allowing systematically controlled removal of heat, by conduction to a cooler body, to compensate for entropy produced within the system by irreversible work done on the system. Such isentropic conduct of a process seems called for when the entropy of the system is considered as an independent variable, as for example when the internal energy is considered as a function of the entropy and volume of the system, the natural variables of the internal energy as studied by Gibbs.

Isenthalpic process: occurs at a constant enthalpy

Isolated process: no matter or energy (neither as work nor as heat) is transferred into or out of the system
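As an illustration of how the choice of held-constant variable changes the energy bookkeeping, a sketch comparing the work done by the gas in an isobaric and an isothermal expansion between the same two volumes (ideal gas assumed; the numbers are invented for the example):

    import math

    R = 8.314                         # J/(mol K)
    n, V1, V2 = 1.0, 0.010, 0.020     # one mole expanding from 10 L to 20 L

    # Isobaric expansion at p = 100 kPa: W = p * (V2 - V1)
    w_isobaric = 1.0e5 * (V2 - V1)                    # 1000 J

    # Isothermal expansion at T = 300 K: W = n R T ln(V2 / V1)
    w_isothermal = n * R * 300.0 * math.log(V2 / V1)  # about 1729 J

    print(w_isobaric, w_isothermal)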
It is sometimes of interest to study a process in which several variables are controlled, subject to some specified constraint. In a system in which a chemical reaction can occur, for example, in which the pressure and temperature can affect the equilibrium composition, a process might occur in which temperature is held constant but pressure is slowly altered, just so that chemical equilibrium is maintained all the way. There is a corresponding process at constant temperature in which the final pressure is the same but is reached by a rapid jump. Then it can be shown that the volume change resulting from the rapid jump process is smaller than that from the slow equilibrium process.[115] The work transferred differs between the two processes.

Account in terms of cyclic processes

A cyclic process[26] is a process that can be repeated indefinitely often without changing the final state of the system in which the process occurs. The only traces of the effects of a cyclic process are to be found in the surroundings of the system or in other systems. This is the kind of process that concerned early thermodynamicists such as Sadi Carnot, and in terms of which Kelvin defined absolute temperature,[116] before the use of the quantity of entropy by Rankine and its clear identification by Clausius.[117] For some systems, for example with some plastic working substances, cyclic processes are practically nearly unfeasible because the working substance undergoes practically irreversible changes.[118] This is why mechanical devices are lubricated with oil and one of the reasons why electrical devices are often useful.

A cyclic process of a system requires in its surroundings at least two heat reservoirs at different temperatures, one at a higher temperature that supplies heat to the system, the other at a lower temperature that accepts heat from the system. The early work on thermodynamics tended to use the cyclic process approach, because it was interested in machines that converted some of the heat from the surroundings into mechanical power delivered to the surroundings, without too much concern about the internal workings of the machine. Such a machine, while receiving an amount of heat from a higher temperature reservoir, always needs a lower temperature reservoir that accepts some lesser amount of heat. The difference in amounts of heat is equal to the amount of heat converted to work.[83] Later, the internal workings of a system became of interest, and they are described by the states of the system. Nowadays, instead of arguing in terms of cyclic processes, some writers are inclined to derive the concept of absolute temperature from the concept of entropy, a variable of state.
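A compact sketch of that reservoir bookkeeping for an idealized cyclic engine (the figures are illustrative; the Carnot expression gives the upper bound attained only by a reversible cycle):

    # Per-cycle bookkeeping for an idealized heat engine between two reservoirs.
    T_hot, T_cold = 600.0, 300.0     # reservoir temperatures, K
    Q_hot, Q_cold = 1000.0, 650.0    # heat taken in and heat rejected per cycle, J

    W = Q_hot - Q_cold                    # work delivered per cycle: 350 J
    efficiency = W / Q_hot                # 0.35
    carnot_limit = 1.0 - T_cold / T_hot   # 0.5, the reversible upper bound

    print(W, efficiency, carnot_limit)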

1.1.9 Instrumentation

There are two types of thermodynamic instruments, the meter and the reservoir. A thermodynamic meter is any device that measures any parameter of a thermodynamic system. In some cases, the thermodynamic parameter is actually defined in terms of an idealized measuring instrument. For example, the zeroth law states that if two bodies are in thermal equilibrium with a third body, they are also in thermal equilibrium with each other. This principle, as noted by James Maxwell in 1872, asserts that it is possible to measure temperature. An idealized thermometer is a sample of an ideal gas at constant pressure. From the ideal gas law pV = nRT, the volume of such a sample can be used as an indicator of temperature; in this manner it defines temperature. Although pressure is defined mechanically, a pressure-measuring device, called a barometer, may also be constructed from a sample of an ideal gas held at a constant temperature. A calorimeter is a device that measures and defines the internal energy of a system.
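A small sketch of the constant-pressure gas-thermometer idea just described (illustrative amounts of gas and pressure; real gas thermometry also needs corrections for non-ideality):

    # Constant-pressure ideal-gas thermometer: with n and p fixed, T is proportional to V,
    # so a measured volume indicates the temperature through T = p V / (n R).
    R = 8.314  # J/(mol K)

    def temperature_from_volume(V, n=0.01, p=1.0e5):
        """Temperature (K) indicated by a 0.01 mol sample held at 100 kPa."""
        return p * V / (n * R)

    V_ice = 0.01 * R * 273.15 / 1.0e5             # volume at the ice point, ~2.27e-4 m^3
    print(temperature_from_volume(V_ice))          # ~273.15 K
    print(temperature_from_volume(1.10 * V_ice))   # a 10% larger volume reads ~300.5 K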
A thermodynamic reservoir is a system so large that it does not appreciably alter its state parameters when brought into contact with the test system. It is used to impose a particular value of a state parameter upon the system. For example, a pressure reservoir is a system at a particular pressure, which imposes that pressure upon any test system that it is mechanically connected to. The Earth's atmosphere is often used as a pressure reservoir.


1.1.10 Conjugate variables

Main article: Conjugate variables

A central concept of thermodynamics is that of energy. By the First Law, the total energy of a system and its surroundings is conserved. Energy may be transferred into a system by heating, compression, or addition of matter, and extracted from a system by cooling, expansion, or extraction of matter. In mechanics, for example, energy transfer equals the product of the force applied to a body and the resulting displacement.

Conjugate variables are pairs of thermodynamic concepts, with the first being akin to a force applied to some thermodynamic system, the second being akin to the resulting displacement, and the product of the two equalling the amount of energy transferred. The common conjugate variables are:

Pressure-volume (the mechanical parameters);

Temperature-entropy (thermal parameters);

Chemical potential-particle number (material parameters).

1.1.11 Potentials

Thermodynamic potentials are different quantitative measures of the stored energy in a system. Potentials are used to measure energy changes in systems as they evolve from an initial state to a final state. The potential used depends on the constraints of the system, such as constant temperature or pressure. For example, the Helmholtz and Gibbs energies are the energies available in a system to do useful work when the temperature and volume or the pressure and temperature are fixed, respectively.

The five most well known potentials are the internal energy U, the Helmholtz free energy F = U − TS, the enthalpy H = U + pV, the Gibbs free energy G = U + pV − TS, and the Landau potential (grand potential) Ω = U − TS − Σi μiNi, where T is the temperature, S the entropy, p the pressure, V the volume, μ the chemical potential, N the number of particles in the system, and i is the count of particle types in the system.

Thermodynamic potentials can be derived from the energy balance equation applied to a thermodynamic system. Other thermodynamic potentials can also be obtained through Legendre transformation.

1.1.12 Axiomatics

Most accounts of thermodynamics presuppose the law of conservation of mass, sometimes with,[119] and sometimes without,[120][121] explicit mention. Particular attention is paid to the law in accounts of non-equilibrium thermodynamics.[122][123] One statement of this law is "The total mass of a closed system remains constant."[9] Another statement of it is "In a chemical reaction, matter is neither created nor destroyed."[124] Implied in this is that matter and energy are not considered to be interconverted in such accounts. The full generality of the law of conservation of energy is thus not used in such accounts.

In 1909, Constantin Carathéodory presented[56] a purely mathematical axiomatic formulation, a description often referred to as geometrical thermodynamics, and sometimes said to take the "mechanical approach"[79] to thermodynamics. The Carathéodory formulation is restricted to equilibrium thermodynamics and does not attempt to deal with non-equilibrium thermodynamics, forces that act at a distance on the system, or surface tension effects.[125] Moreover, Carathéodory's formulation does not deal with materials like water near 4 °C, which have a density extremum as a function of temperature at constant pressure.[126][127] Carathéodory used the law of conservation of energy as an axiom from which, along with the contents of the zeroth law, and some other assumptions including his own version of the second law, he derived the first law of thermodynamics.[128] Consequently, one might also describe Carathéodory's work as lying in the field of energetics,[129] which is broader than thermodynamics. Carathéodory presupposed the law of conservation of mass without explicit mention of it.

Since the time of Carathéodory, other influential axiomatic formulations of thermodynamics have appeared, which, like Carathéodory's, use their own respective axioms, different from the usual statements of the four laws, to derive the four usually stated laws.[130][131][132]

Many axiomatic developments assume the existence of states of thermodynamic equilibrium and of states of thermal equilibrium. States of thermodynamic equilibrium of compound systems allow their component simple systems to exchange heat and matter and to do work on each other on their way to overall joint equilibrium. Thermal equilibrium allows them only to exchange heat. The physical properties of glass depend on its history of being heated and cooled and, strictly speaking, glass is not in thermodynamic equilibrium.[133]

According to Herbert Callen's widely cited 1985 text on thermodynamics: "An essential prerequisite for the measurability of energy is the existence of walls that do not permit transfer of energy in the form of heat."[134] According to Werner Heisenberg's mature and careful examination of the basic concepts of physics, the theory of heat has a self-standing place.[135]

From the viewpoint of the axiomatist, there are several different ways of thinking about heat, temperature, and the second law of thermodynamics.
The Clausius way rests on the empirical fact that heat is conducted always down, never up, a temperature gradient. The Kelvin way is to assert the empirical fact that conversion of heat into work by cyclic processes is never perfectly efficient. A more mathematical way is to assert the existence of a function of state called the entropy that tells whether a hypothesized process occurs spontaneously in nature. A more abstract way is that of Carathéodory, which in effect asserts the irreversibility of some adiabatic processes. For these different ways, there are respective corresponding different ways of viewing heat and temperature.

The Clausius-Kelvin-Planck way

This way prefers ideas close to the empirical origins of thermodynamics. It presupposes transfer of energy as heat, and empirical temperature as a scalar function of state. According to Gislason and Craig (2005): "Most thermodynamic data come from calorimetry..."[136] According to Kondepudi (2008): "Calorimetry is widely used in present day laboratories."[137] In this approach, what is often currently called the zeroth law of thermodynamics is deduced as a simple consequence of the presupposition of the nature of heat and empirical temperature, but it is not named as a numbered law of thermodynamics. Planck attributed this point of view to Clausius, Kelvin, and Maxwell. Planck wrote (on page 90 of the seventh edition, dated 1922, of his treatise) that he thought that no proof of the second law of thermodynamics could ever work that was not based on the impossibility of a perpetual motion machine of the second kind. In that treatise, Planck makes no mention of the 1909 Carathéodory way, which was well known by 1922. Planck for himself chose a version of what is just above called the Kelvin way.[138] The development by Truesdell and Bharatha (1977) is so constructed that it can deal naturally with cases like that of water near 4 °C.[131]
The way that assumes the existence of entropy as a function of state

This way also presupposes transfer of energy as heat, and it presupposes the usually stated form of the zeroth law of thermodynamics, and from these two it deduces the existence of empirical temperature. Then from the existence of entropy it deduces the existence of absolute thermodynamic temperature.[8][130]

The Carathéodory way

This way presupposes that the state of a simple one-phase system is fully specifiable by just one more state variable than the known exhaustive list of mechanical variables of state. It does not explicitly name empirical temperature, but speaks of the one-dimensional non-deformation coordinate. This satisfies the definition of an empirical temperature, that lies on a one-dimensional manifold. The Carathéodory way needs to assume moreover that the one-dimensional manifold has a definite sense, which determines the direction of irreversible adiabatic process, which is effectively assuming that heat is conducted from hot to cold. This way presupposes the often currently stated version of the zeroth law, but does not actually name it as one of its axioms.[125] According to one author, Carathéodory's principle, which is his version of the second law of thermodynamics, does not imply the increase of entropy when work is done under adiabatic conditions (as was noted by Planck[139]). Thus Carathéodory's way leaves unstated a further empirical fact that is needed for a full expression of the second law of thermodynamics.[140]

1.1.13 Scope of thermodynamics

Originally thermodynamics concerned material and radiative phenomena that are experimentally reproducible. For example, a state of thermodynamic equilibrium is a steady state reached after a system has aged so that it no longer changes with the passage of time. But more than that, for thermodynamics, a system, defined by its being prepared in a certain way, must, consequent on every particular occasion of preparation, upon aging, reach one and the same eventual state of thermodynamic equilibrium, entirely determined by the way of preparation. Such reproducibility is because the systems consist of so many molecules that the molecular variations between particular occasions of preparation have negligible or scarcely discernible effects on the macroscopic variables that are used in thermodynamic descriptions. This led to Boltzmann's discovery that entropy had a statistical or probabilistic nature. Probabilistic and statistical explanations arise from the experimental reproducibility of the phenomena.[141]
Gradually, the laws of thermodynamics came to be used to explain phenomena that occur outside the experimental laboratory. For example, phenomena on the scale of the earth's atmosphere cannot be reproduced in a laboratory experiment. But processes in the atmosphere can be modeled by use of thermodynamic ideas, extended well beyond the scope of laboratory equilibrium thermodynamics.[142][143][144] A parcel of air can, near enough for many studies, be considered as a closed thermodynamic system, one that is allowed to move over significant distances. The pressure exerted by the surrounding air on the lower face of a parcel of air may differ from that on its upper face. If this results in rising of the parcel of air, it can be considered to have gained potential energy as a result of work being done on it by the combined surrounding air below and above it. As it rises, such a parcel usually expands because the pressure is lower at the higher altitudes that it reaches. In that way, the rising parcel also does work on the surrounding atmosphere. For many studies, such a parcel can be considered nearly to neither gain nor lose energy by heat conduction to its surrounding atmosphere, and its rise is rapid enough to leave negligible time for it to gain or lose heat by radiation; consequently the rising of the parcel is near enough adiabatic. Thus the adiabatic gas law accounts for its internal state variables, provided that there is no precipitation into water droplets, no evaporation of water droplets, and no sublimation in the process. More precisely, the rising of the parcel is likely to occasion friction and turbulence, so that some potential and some kinetic energy of bulk converts into internal energy of air considered as effectively stationary. Friction and turbulence thus oppose the rising of the parcel.[145][146]
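A rough numerical sketch of that near-adiabatic rise, treating the parcel as dry air obeying the ideal-gas Poisson relation T2 = T1 (p2/p1)^(R/cp) (the pressures chosen are illustrative, and moisture effects are ignored):

    # Dry adiabatic cooling of a rising air parcel modelled as an ideal gas.
    R_dry = 287.0    # J/(kg K), specific gas constant of dry air
    cp_dry = 1004.0  # J/(kg K), specific heat at constant pressure

    def adiabatic_temperature(T1, p1, p2):
        """Parcel temperature after an adiabatic pressure change from p1 to p2 (Pa)."""
        return T1 * (p2 / p1) ** (R_dry / cp_dry)

    # A parcel at 300 K near the surface (1000 hPa) lifted to the 850 hPa level:
    print(adiabatic_temperature(300.0, 1.0e5, 8.5e4))   # about 286 K, roughly 14 K cooler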
1.1.14 Applied fields

Atmospheric thermodynamics
Biological thermodynamics
Black hole thermodynamics
Chemical thermodynamics
Equilibrium thermodynamics
Geology
Industrial ecology (re: Exergy)
Maximum entropy thermodynamics
Non-equilibrium thermodynamics
Philosophy of thermal and statistical physics
Psychrometrics
Quantum thermodynamics
Statistical thermodynamics
Thermoeconomics

1.1.15 See also

Entropy production

Lists and timelines

List of important publications in thermodynamics
List of textbooks in statistical mechanics
List of thermal conductivities
List of thermodynamic properties
Table of thermodynamic equations
Timeline of thermodynamics

Wikibooks

Engineering Thermodynamics
Entropy for Beginners

1.1.16 References

[1] Clausius, Rudolf (1850). On the Motive Power of Heat, and on the Laws which can be deduced from it for the Theory of Heat. Poggendorff's Annalen der Physik, LXXIX (Dover Reprint). ISBN 0-486-59065-8.

[2] Thomson, W. (1854). On the Dynamical Theory of


Heat. Part V. Thermo-electric Currents. Transactions of the Royal Society of Edinburgh 21 (part I): 123.
doi:10.1017/s0080456800032014. reprinted in Sir William
Thomson, LL.D. D.C.L., F.R.S. (1882). Mathematical and
Physical Papers 1. London, Cambridge: C.J. Clay, M.A. &
Son, Cambridge University Press. p. 232. Hence Thermodynamics falls naturally into two divisions, of which the subjects are respectively, the relation of heat to the forces acting
between contiguous parts of bodies, and the relation of heat
to electrical agency.
[3] Hess, H. (1840). Thermochemische Untersuchungen, Annalen der Physik und Chemie (Poggendorff, Leipzig) 126(6): 385–404.

[4] Gibbs, Willard, J. (1876). Transactions of the Connecticut Academy, III, pp. 108–248, Oct. 1875 – May 1876, and pp. 343–524, May 1877 – July 1878.
[5] Duhem, P.M.M. (1886). Le Potential Thermodynamique et
ses Applications, Hermann, Paris.
[6] Lewis, Gilbert N.; Randall, Merle (1923). Thermodynamics
and the Free Energy of Chemical Substances. McGraw-Hill
Book Co. Inc.
[7] Guggenheim, E.A. (1933). Modern Thermodynamics by the
Methods of J.W. Gibbs, Methuen, London.
[8] Guggenheim, E.A. (1949/1967)
[9] Ilya Prigogine, I. & Defay, R., translated by D.H. Everett
(1954). Chemical Thermodynamics. Longmans, Green &
Co., London. Includes classical non-equilibrium thermodynamics.
[10] Enrico Fermi (1956). Thermodynamics. Courier Dover
Publications. p. ix. ISBN 0-486-60361-X. OCLC
230763036.
[11] Perrot, Pierre (1998). A to Z of Thermodynamics. Oxford University Press. ISBN 0-19-856552-6. OCLC
123283342.

Table of thermodynamic equations

[12] Bridgman, P.W. (1943). The Nature of Thermodynamics,


Harvard University Press, Cambridge MA, p. 48.

Timeline of thermodynamics

[13] Partington, J.R. (1949), page 118.

[14] Reif, F. (1965). Fundamentals of Statistical and Thermal Physics, McGraw-Hill Book Company, New York, page 122.
[15] Fowler, R., Guggenheim, E.A. (1939), p. 3.
[16] Tisza, L. (1966), p. 18.
[17] Marsland, R. III, Brown, H.R., Valente, G. (2015).
[18] Adkins, C.J. (1968/1983), p. 4.
[19] Born, M. (1949), p. 44.
[20] Guggenheim, E.A. (1949/1967), pp. 7–8.
[21] Tisza, L. (1966), pp. 109, 112.
[22] Callen, p. 15.
[23] Bailyn, M. (1994), p. 21.
[24] Callen, H.B. (1960/1985), p. 427.
[25] Tisza, L. (1966), pp. 41, 109, 121, originally published as 'The thermodynamics of phase equilibrium', Annals of Physics, 13: 1–92.
[26] Serrin, J. (1986). Chapter 1, 'An Outline of Thermodynamical Structure', pp. 3–32, especially p. 8, in Serrin, J. (1986).
[27] Fowler, R., Guggenheim, E.A. (1939), p. 13.
[28] Tisza, L. (1966), pp. 79–80.
[29] Planck, M. (1923/1926), page 5.
[30] Partington, p. 121.
[31] Adkins, pp. 19–20.
[32] Haase, R. (1971), pages 11–16.
[33] Balescu, R. (1975). Equilibrium and Nonequilibrium Statistical Mechanics, Wiley-Interscience, New York, ISBN 0-471-04600-0.
[34] Schrödinger, E. (1946/1967). Statistical Thermodynamics. A Course of Seminar Lectures, Cambridge University Press, Cambridge UK.
[35] Partington, J.R. (1949), p. 551.
[36] Partington, J.R. (1989). A Short History of Chemistry. Dover. OCLC 19353301.
[37] The Newcomen engine was improved from 1711 until Watt's work, making the efficiency comparison subject to qualification, but the increase from the Newcomen 1765 version was on the order of 100%.
[38] Cengel, Yunus A.; Boles, Michael A. (2005). Thermodynamics – an Engineering Approach. McGraw-Hill. ISBN 0-07-310768-9.
[39] Gibbs, Willard (1993). The Scientific Papers of J. Willard Gibbs, Volume One: Thermodynamics. Ox Bow Press. ISBN 0-918024-77-3. OCLC 27974820.
[40] Oxford English Dictionary, Oxford University Press, Oxford UK.
[41] Donald T. Haynie (2001). Biological Thermodynamics (2nd ed.). Cambridge University Press. p. 22.
[42] Thomson, W. (1849). An Account of Carnot's Theory of the Motive Power of Heat; with Numerical Results deduced from Regnault's Experiments on Steam. Transactions of the Royal Society of Edinburgh 16 (part V): 541–574. doi:10.1017/s0080456800022481.
[43] Rankine, William (1859). Chapter 3: Principles of Thermodynamics. A Manual of the Steam Engine and other Prime Movers. London: Charles Griffin and Co. pp. 299–448.
[44] Pippard, A.B. (1957), p. 70.
[45] Partington, J.R. (1949), pp. 615–621.
[46] Serrin, J. (1986). An outline of thermodynamical structure, Chapter 1, pp. 3–32 in Serrin, J. (1986).
[47] Callen, H.B. (1960/1985), Chapter 6, pages 131–152.
[48] Callen, H.B. (1960/1985), p. 13.
[49] Landsberg, P.T. (1978). Thermodynamics and Statistical Mechanics, Oxford University Press, Oxford UK, ISBN 0-19-851142-6, p. 1.
[50] Eu, B.C. (2002).
[51] Lebon, G., Jou, D., Casas-Vázquez, J. (2008).
[52] Grandy, W.T., Jr (2008), passim and p. 123.
[53] Callen, H.B. (1985), p. 26.
[54] Gibbs, J.W. (1875), pp. 115–116.
[55] Bryan, G.H. (1907), p. 5.
[56] C. Carathéodory (1909). Untersuchungen über die Grundlagen der Thermodynamik. Mathematische Annalen 67: 355–386. A partly reliable translation is to be found at Kestin, J. (1976). The Second Law of Thermodynamics, Dowden, Hutchinson & Ross, Stroudsburg PA. doi:10.1007/BF01450409.
[57] Haase, R. (1971), p. 13.
[58] Bailyn, M. (1994), p. 145.
[59] Bailyn, M. (1994), Section 6.11.
[60] Planck, M. (1897/1903), passim.
[61] Partington, J.R. (1949), p. 129.
[62] Callen, H.B. (1960/1985), Section 4-2.

[63] Guggenheim, E.A. (1949/1967), Section 1.12.
[64] de Groot, S.R., Mazur, P. (1969). Non-equilibrium Thermodynamics, North-Holland Publishing Company, Amsterdam-London.
[65] Moran, Michael J. and Howard N. Shapiro, 2008. Fundamentals of Engineering Thermodynamics. 6th ed. Wiley and Sons: 16.
[66] Planck, M. (1897/1903), p. 1.
[67] Rankine, W.J.M. (1953). Proc. Roy. Soc. (Edin.), 20(4).
[68] Maxwell, J.C. (1872), page 32.
[69] Maxwell, J.C. (1872), page 57.
[70] Planck, M. (1897/1903), pp. 1–2.
[71] Clausius, R. (1850). Ueber die bewegende Kraft der Wärme und die Gesetze, welche sich daraus für die Wärmelehre selbst ableiten lassen, Annalen der Physik und Chemie, 155 (3): 368–394.
[72] Rankine, W.J.M. (1850). On the mechanical action of heat, especially in gases and vapours. Trans. Roy. Soc. Edinburgh, 20: 147–190.
[73] Helmholtz, H. von (1897/1903). Vorlesungen über Theorie der Wärme, edited by F. Richarz, Press of Johann Ambrosius Barth, Leipzig, Section 46, pp. 176–182, in German.
[74] Planck, M. (1897/1903), p. 43.
[75] Guggenheim, E.A. (1949/1967), p. 10.
[76] Sommerfeld, A. (1952/1956), Section 4 A, pp. 13–16.
[77] Ilya Prigogine, I. & Defay, R., translated by D.H. Everett (1954). Chemical Thermodynamics. Longmans, Green & Co., London, p. 21.
[78] Lewis, G.N., Randall, M. (1961). Thermodynamics, second edition revised by K.S. Pitzer and L. Brewer, McGraw-Hill, New York, p. 35.
[79] Bailyn, M. (1994), page 79.
[80] Khanna, F.C., Malbouisson, A.P.C., Malbouisson, J.M.C., Santana, A.E. (2009). Thermal Quantum Field Theory. Algebraic Aspects and Applications, World Scientific, Singapore, ISBN 978-981-281-887-4, p. 6.
[81] Helmholtz, H. von (1847). Ueber die Erhaltung der Kraft, G. Reimer, Berlin.
[82] Joule, J.P. (1847). On matter, living force, and heat, Manchester Courier, 5 and 12 May 1847.
[83] Truesdell, C.A. (1980).
[84] Partington, J.R. (1949), page 150.
[85] Kondepudi & Prigogine (1998), pp. 31–32.
[86] Goody, R.M., Yung, Y.L. (1989). Atmospheric Radiation. Theoretical Basis, second edition, Oxford University Press, Oxford UK, ISBN 0-19-505134-3, p. 5.
[87] Wallace, J.M., Hobbs, P.V. (2006). Atmospheric Science. An Introductory Survey, second edition, Elsevier, Amsterdam, ISBN 978-0-12-732951-2, p. 292.
[88] Partington, J.R. (1913). A Text-book of Thermodynamics, Van Nostrand, New York, page 37.
[89] Glansdorff, P., Prigogine, I. (1971). Thermodynamic Theory of Structure, Stability and Fluctuations, Wiley-Interscience, London, ISBN 0-471-30280-5, page 15.
[90] Haase, R. (1971), page 16.
[91] Eu, B.C. (2002), p. 13.
[92] Adkins, C.J. (1968/1975), pp. 46–49.
[93] Adkins, C.J. (1968/1975), p. 172.
[94] Lebon, G., Jou, D., Casas-Vázquez, J. (2008), pp. 37–38.
[95] Buchdahl, H.A. (1966). The Concepts of Classical Thermodynamics, Cambridge University Press, London, pp. 117–118.
[96] Guggenheim, E.A. (1949/1967), p. 6.
[97] Balescu, R. (1975). Equilibrium and Non-equilibrium Statistical Mechanics, Wiley-Interscience, New York, ISBN 0-471-04600-0, Section 3.2, pp. 64–72.
[98] Ilya Prigogine, I. & Defay, R., translated by D.H. Everett (1954). Chemical Thermodynamics. Longmans, Green & Co., London, pp. 1–6.
[99] Lavenda, B.H. (1978). Thermodynamics of Irreversible Processes, Macmillan, London, ISBN 0-333-21616-4, p. 12.
[100] Guggenheim, E.A. (1949/1967), p. 19.
[101] Guggenheim, E.A. (1949/1967), pp. 18–19.
[102] Grandy, W.T., Jr (2008), Chapter 5, pp. 59–68.
[103] Kondepudi & Prigogine (1998), pp. 116–118.
[104] Guggenheim, E.A. (1949/1967), Section 1.12, pp. 12–13.
[105] Planck, M. (1897/1903), p. 65.
[106] Planck, M. (1923/1926), Section 152A, pp. 121–123.
[107] Prigogine, I., Defay, R. (1950/1954). Chemical Thermodynamics, Longmans, Green & Co., London, p. 1.
[108] Adkins, pp. 43–46.
[109] Planck, M. (1897/1903), Section 70, pp. 48–50.
[110] Guggenheim, E.A. (1949/1967), Section 3.11, p. 92.
[111] Sommerfeld, A. (1952/1956), Section 1.5 C, pp. 23–25.

[112] Callen, H.B. (1960/1985), Section 6.3.
[113] Adkins, pp. 164–168.
[114] Planck, M. (1897/1903), Section 236, pp. 211–212.
[115] Ilya Prigogine, I. & Defay, R., translated by D.H. Everett (1954). Chemical Thermodynamics. Longmans, Green & Co., London, Chapters 18–19.
[116] Truesdell, C.A. (1980), Section 11B, pp. 306–310.
[117] Truesdell, C.A. (1980), Sections 8G, 8H, 9A, pp. 207–224.
[118] Ziegler, H. (1983). An Introduction to Thermomechanics, North-Holland, Amsterdam, ISBN 0-444-86503-9.
[119] Ziegler, H. (1977). An Introduction to Thermomechanics, North-Holland, Amsterdam, ISBN 0-7204-0432-0.
[120] Planck, M. (1922/1927).
[121] Guggenheim, E.A. (1949/1967).
[122] de Groot, S.R., Mazur, P. (1962). Non-equilibrium Thermodynamics, North Holland, Amsterdam.
[123] Gyarmati, I. (1970). Non-equilibrium Thermodynamics, translated into English by E. Gyarmati and W.F. Heinz, Springer, New York.
[124] Tro, N.J. (2008). Chemistry. A Molecular Approach, Pearson Prentice-Hall, Upper Saddle River NJ, ISBN 0-13-100065-9.
[125] Turner, L.A. (1962). Simplification of Carathéodory's treatment of thermodynamics, Am. J. Phys. 30: 781–786.
[126] Turner, L.A. (1962). Further remarks on the zeroth law, Am. J. Phys. 30: 804–806.
[127] Thomsen, J.S., Hartka, T.J. (1962). Strange Carnot cycles; thermodynamics of a system with a density maximum, Am. J. Phys. 30: 26–33, 30: 388–389.
[128] C. Carathéodory (1909). Untersuchungen über die Grundlagen der Thermodynamik. Mathematische Annalen 67: 363. doi:10.1007/bf01450409. Axiom II: In jeder beliebigen Umgebung eines willkürlich vorgeschriebenen Anfangszustandes gibt es Zustände, die durch adiabatische Zustandsänderungen nicht beliebig approximiert werden können. (Axiom II: In every neighbourhood of any arbitrarily prescribed initial state there are states that cannot be approximated arbitrarily closely by adiabatic changes of state.)
[129] Duhem, P. (1911). Traité d'Énergétique, Gauthier-Villars, Paris.
[130] Callen, H.B. (1960/1985).
[131] Truesdell, C., Bharatha, S. (1977). The Concepts and Logic of Classical Thermodynamics as a Theory of Heat Engines, Rigorously Constructed upon the Foundation Laid by S. Carnot and F. Reech, Springer, New York, ISBN 0-387-07971-8.
[132] Wright, P.G. (1980). Conceptually distinct types of thermodynamics, Eur. J. Phys. 1: 81–84.
[133] Callen, H.B. (1960/1985), p. 14.
[134] Callen, H.B. (1960/1985), p. 16.
[135] Heisenberg, W. (1958). Physics and Philosophy, Harper & Row, New York, pp. 98–99.
[136] Gislason, E.A., Craig, N.C. (2005). Cementing the foundations of thermodynamics: comparison of system-based and surroundings-based definitions of work and heat, J. Chem. Thermodynamics 37: 954–966.
[137] Kondepudi, D. (2008). Introduction to Modern Thermodynamics, Wiley, Chichester, ISBN 978-0-470-01598-8, p. 63.
[138] Planck, M. (1922/1927).
[139] Planck, M. (1926). Über die Begründung des zweiten Hauptsatzes der Thermodynamik, Sitzungsberichte der Preußischen Akademie der Wissenschaften, physikalisch-mathematischen Klasse, pp. 453–463.
[140] Münster, A. (1970). Classical Thermodynamics, translated by E.S. Halberstadt, Wiley-Interscience, London, ISBN 0-471-62430-6, p. 41.
[141] Grandy, W.T., Jr (2008). Entropy and the Time Evolution of Macroscopic Systems, Oxford University Press, Oxford UK, ISBN 978-0-19-954617-6, p. 49.
[142] Iribarne, J.V., Godson, W.L. (1973/1989). Atmospheric Thermodynamics, second edition, reprinted 1989, Kluwer Academic Publishers, Dordrecht, ISBN 90-277-1296-4.
[143] Peixoto, J.P., Oort, A.H. (1992). Physics of Climate, American Institute of Physics, New York, ISBN 0-88318-712-4.
[144] North, G.R., Erukhimova, T.L. (2009). Atmospheric Thermodynamics. Elementary Physics and Chemistry, Cambridge University Press, Cambridge UK, ISBN 978-0-521-89963-5.
[145] Holton, J.R. (2004). An Introduction to Dynamic Meteorology, fourth edition, Elsevier, Amsterdam, ISBN 978-0-12-354015-7.
[146] Mak, M. (2011). Atmospheric Dynamics, Cambridge University Press, Cambridge UK, ISBN 978-0-521-19573-7.

1.1.17 Cited bibliography

Adkins, C.J. (1968/1975). Equilibrium Thermodynamics, second edition, McGraw-Hill, London, ISBN 0-07-084057-1.
Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York, ISBN 0-88318-797-3.

Born, M. (1949). Natural Philosophy of Cause and Chance, Oxford University Press, London.
Bryan, G.H. (1907). Thermodynamics. An Introductory Treatise dealing mainly with First Principles and their Direct Applications, B.G. Teubner, Leipzig.
Callen, H.B. (1960/1985). Thermodynamics and an Introduction to Thermostatistics, (1st edition 1960) 2nd edition 1985, Wiley, New York, ISBN 0-471-86256-8.
Eu, B.C. (2002). Generalized Thermodynamics. The Thermodynamics of Irreversible Processes and Generalized Hydrodynamics, Kluwer Academic Publishers, Dordrecht, ISBN 1-4020-0788-4.
Fowler, R., Guggenheim, E.A. (1939). Statistical Thermodynamics, Cambridge University Press, Cambridge UK.
Gibbs, J.W. (1875). On the equilibrium of heterogeneous substances, Transactions of the Connecticut Academy of Arts and Sciences, 3: 108–248.
Grandy, W.T., Jr (2008). Entropy and the Time Evolution of Macroscopic Systems, Oxford University Press, Oxford, ISBN 978-0-19-954617-6.
Guggenheim, E.A. (1949/1967). Thermodynamics. An Advanced Treatment for Chemists and Physicists, (1st edition 1949) 5th edition 1967, North-Holland, Amsterdam.
Haase, R. (1971). Survey of Fundamental Laws, chapter 1 of Thermodynamics, pages 1–97 of volume 1, ed. W. Jost, of Physical Chemistry. An Advanced Treatise, ed. H. Eyring, D. Henderson, W. Jost, Academic Press, New York, lcn 73-117081.
Kondepudi, D., Prigogine, I. (1998). Modern Thermodynamics. From Heat Engines to Dissipative Structures, John Wiley & Sons, ISBN 0-471-97393-9.
Lebon, G., Jou, D., Casas-Vázquez, J. (2008). Understanding Non-equilibrium Thermodynamics, Springer, Berlin, ISBN 978-3-540-74251-7.
Marsland, R. III, Brown, H.R., Valente, G. (2015). Time and irreversibility in axiomatic thermodynamics, Am. J. Phys., 83(7): 628–634.
Partington, J.R. (1949). An Advanced Treatise on Physical Chemistry, volume 1, Fundamental Principles. The Properties of Gases, Longmans, Green and Co., London.
Pippard, A.B. (1957). The Elements of Classical Thermodynamics, Cambridge University Press.
Planck, M. (1897/1903). Treatise on Thermodynamics, translated by A. Ogg, Longmans, Green & Co., London.
Planck, M. (1923/1926). Treatise on Thermodynamics, third English edition translated by A. Ogg from the seventh German edition, Longmans, Green & Co., London.
Serrin, J. (1986). New Perspectives in Thermodynamics, edited by J. Serrin, Springer, Berlin, ISBN 3-540-15931-2.
Sommerfeld, A. (1952/1956). Thermodynamics and Statistical Mechanics, Academic Press, New York.
Tschoegl, N.W. (2000). Fundamentals of Equilibrium and Steady-State Thermodynamics, Elsevier, Amsterdam, ISBN 0-444-50426-5.
Tisza, L. (1966). Generalized Thermodynamics, M.I.T Press, Cambridge MA.
Truesdell, C.A. (1980). The Tragicomical History of Thermodynamics, 1822–1854, Springer, New York, ISBN 0-387-90403-4.

1.1.18 Further reading

Goldstein, Martin, and Inge F. (1993). The Refrigerator and the Universe. Harvard University Press. ISBN 0-674-75325-9. OCLC 32826343. A nontechnical introduction, good on historical and interpretive matters.
Kazakov, Andrei (July–August 2008). Web Thermo Tables – an On-Line Version of the TRC Thermodynamic Tables (PDF). Journal of Research of the National Institute of Standards and Technology 113 (4): 209–220. doi:10.6028/jres.113.016.

The following titles are more technical:

Cengel, Yunus A., & Boles, Michael A. (2002). Thermodynamics – an Engineering Approach. McGraw Hill. ISBN 0-07-238332-1. OCLC 45791449.
Fermi, E. (1956). Thermodynamics, Dover, New York.
Kittel, Charles & Kroemer, Herbert (1980). Thermal Physics. W. H. Freeman Company. ISBN 0-7167-1088-9. OCLC 32932988.

1.1.19 External links

Thermodynamics Data & Property Calculation Websites
Thermodynamics OpenCourseWare from the University of Notre Dame. Archived March 4, 2011, at the Wayback Machine.
Thermodynamics at ScienceWorld
Biochemistry Thermodynamics
Engineering Thermodynamics – A Graphical Approach

1.2 Statistical Thermodynamics

Statistical mechanics is a branch of theoretical physics that studies, using probability theory, the average behaviour of a mechanical system made up of a large number of equivalent components where the microscopic realization of the system is uncertain or undefined.[1][2][3][note 1]

A common use of statistical mechanics is in explaining the thermodynamic behaviour of large systems. This branch of statistical mechanics, which treats and extends classical thermodynamics, is known as statistical thermodynamics or equilibrium statistical mechanics. Microscopic mechanical laws do not contain concepts such as temperature, heat, or entropy; however, statistical mechanics shows how these concepts arise from the natural uncertainty about the state of a system when that system is prepared in practice. The benefit of using statistical mechanics is that it provides exact methods to connect thermodynamic quantities (such as heat capacity) to microscopic behaviour, whereas in classical thermodynamics the only available option would be to just measure and tabulate such quantities for various materials. Statistical mechanics also makes it possible to extend the laws of thermodynamics to cases which are not considered in classical thermodynamics, such as microscopic systems and other mechanical systems with few degrees of freedom.[1]

Statistical mechanics also finds use outside equilibrium. An important subbranch known as non-equilibrium statistical mechanics deals with the issue of microscopically modelling the speed of irreversible processes that are driven by imbalances. Examples of such processes include chemical reactions or flows of particles and heat. Unlike with equilibrium, there is no exact formalism that applies to non-equilibrium statistical mechanics in general, and so this branch of statistical mechanics remains an active area of theoretical research.

1.2.1 Principles: mechanics and ensembles

Main articles: Mechanics and Statistical ensemble

In physics there are two types of mechanics usually examined: classical mechanics and quantum mechanics. For both types of mechanics, the standard mathematical approach is to consider two ingredients:

1. The complete state of the mechanical system at a given time, mathematically encoded as a phase point (classical mechanics) or a pure quantum state vector (quantum mechanics).
2. An equation of motion which carries the state forward in time: Hamilton's equations (classical mechanics) or the time-dependent Schrödinger equation (quantum mechanics).

Using these two ingredients, the state at any other time, past or future, can in principle be calculated. There is however a disconnection between these laws and everyday life experiences, as we do not find it necessary (nor even theoretically possible) to know exactly at a microscopic level the simultaneous positions and velocities of each molecule while carrying out processes at the human scale (for example, when performing a chemical reaction). Statistical mechanics fills this disconnection between the laws of mechanics and the practical experience of incomplete knowledge, by adding some uncertainty about which state the system is in.

Whereas ordinary mechanics only considers the behaviour of a single state, statistical mechanics introduces the statistical ensemble, which is a large collection of virtual, independent copies of the system in various states. The statistical ensemble is a probability distribution over all possible states of the system. In classical statistical mechanics, the ensemble is a probability distribution over phase points (as opposed to a single phase point in ordinary mechanics), usually represented as a distribution in a phase space with canonical coordinates. In quantum statistical mechanics, the ensemble is a probability distribution over pure states,[note 2] and can be compactly summarized as a density matrix.
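To make the notion of an ensemble concrete, the following minimal Python sketch (an illustration added here, not part of the original article; the three microstates and their energies are invented) represents a classical ensemble over a finite set of microstates as a probability vector and computes an ensemble average. In the quantum case, the same probabilities would appear on the diagonal of a density matrix.

# A minimal sketch: an ensemble over a finite set of microstates is just a
# probability distribution; ensemble averages are probability-weighted sums.
# The three microstates and their energies below are hypothetical.
microstate_energies = [0.0, 1.0, 1.0]   # energy of each microstate (arbitrary units)
probabilities = [0.5, 0.25, 0.25]       # the ensemble: one probability per microstate

assert abs(sum(probabilities) - 1.0) < 1e-12  # a valid ensemble is normalized

# Ensemble average of any observable O: <O> = sum over states of p_s * O_s
average_energy = sum(p * e for p, e in zip(probabilities, microstate_energies))
print("ensemble average energy:", average_energy)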

As is usual for probabilities, the ensemble can be interpreted in different ways:[1]

an ensemble can be taken to represent the various possible states that a single system could be in (epistemic probability, a form of knowledge), or
the members of the ensemble can be understood as the states of the systems in experiments repeated on independent systems which have been prepared in a similar but imperfectly controlled manner (empirical probability), in the limit of an infinite number of trials.

These two meanings are equivalent for many purposes, and will be used interchangeably in this article.

However the probability is interpreted, each state in the ensemble evolves over time according to the equation of motion. Thus, the ensemble itself (the probability distribution over states) also evolves, as the virtual systems in the ensemble continually leave one state and enter another. The ensemble evolution is given by the Liouville equation (classical mechanics) or the von Neumann equation (quantum mechanics). These equations are simply derived by the application of the mechanical equation of motion separately to each virtual system contained in the ensemble, with the probability of the virtual system being conserved over time as it evolves from state to state.

One special class of ensemble is those ensembles that do not evolve over time. These ensembles are known as equilibrium ensembles and their condition is known as statistical equilibrium. Statistical equilibrium occurs if, for each state in the ensemble, the ensemble also contains all of its future and past states with probabilities equal to the probability of being in that state.[note 3] The study of equilibrium ensembles of isolated systems is the focus of statistical thermodynamics. Non-equilibrium statistical mechanics addresses the more general case of ensembles that change over time, and/or ensembles of non-isolated systems.

1.2.2 Statistical thermodynamics

The primary goal of statistical thermodynamics (also known as equilibrium statistical mechanics) is to derive the classical thermodynamics of materials in terms of the properties of their constituent particles and the interactions between them. In other words, statistical thermodynamics provides a connection between the macroscopic properties of materials in thermodynamic equilibrium, and the microscopic behaviours and motions occurring inside the material.

Whereas statistical mechanics proper involves dynamics, here the attention is focussed on statistical equilibrium (steady state). Statistical equilibrium does not mean that the particles have stopped moving (mechanical equilibrium), rather, only that the ensemble is not evolving.

Fundamental postulate

A sufficient (but not necessary) condition for statistical equilibrium with an isolated system is that the probability distribution is a function only of conserved properties (total energy, total particle numbers, etc.).[1] There are many different equilibrium ensembles that can be considered, and only some of them correspond to thermodynamics.[1] Additional postulates are necessary to motivate why the ensemble for a given system should have one form or another.

A common approach found in many textbooks is to take the equal a priori probability postulate.[2] This postulate states that:

For an isolated system with an exactly known energy and exactly known composition, the system can be found with equal probability in any microstate consistent with that knowledge.

The equal a priori probability postulate therefore provides a motivation for the microcanonical ensemble described below. There are various arguments in favour of the equal a priori probability postulate:

Ergodic hypothesis: An ergodic state is one that evolves over time to explore all accessible states: all those with the same energy and composition. In an ergodic system, the microcanonical ensemble is the only possible equilibrium ensemble with fixed energy. This approach has limited applicability, since most systems are not ergodic.
Principle of indifference: In the absence of any further information, we can only assign equal probabilities to each compatible situation.
Maximum information entropy: A more elaborate version of the principle of indifference states that the correct ensemble is the ensemble that is compatible with the known information and that has the largest Gibbs entropy (information entropy).[4]

Other fundamental postulates for statistical mechanics have also been proposed.[5]
Three thermodynamic ensembles

Main articles: Microcanonical ensemble, Canonical ensemble and Grand canonical ensemble

There are three equilibrium ensembles with a simple form that can be defined for any isolated system bounded inside a finite volume.[1] These are the most often discussed ensembles in statistical thermodynamics. In the macroscopic limit (defined below) they all correspond to classical thermodynamics.

Microcanonical ensemble describes a system with a precisely given energy and fixed composition (precise number of particles). The microcanonical ensemble contains with equal probability each possible state that is consistent with that energy and composition.
Canonical ensemble describes a system of fixed composition that is in thermal equilibrium[note 4] with a heat bath of a precise temperature. The canonical ensemble contains states of varying energy but identical composition; the different states in the ensemble are accorded different probabilities depending on their total energy.
Grand canonical ensemble describes a system with non-fixed composition (uncertain particle numbers) that is in thermal and chemical equilibrium with a thermodynamic reservoir. The reservoir has a precise temperature, and precise chemical potentials for various types of particle. The grand canonical ensemble contains states of varying energy and varying numbers of particles; the different states in the ensemble are accorded different probabilities depending on their total energy and total particle numbers.

For systems containing many particles (the thermodynamic limit), all three of the ensembles listed above tend to give identical behaviour. It is then simply a matter of mathematical convenience which ensemble is used.[6]
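For the canonical ensemble, state probabilities are Boltzmann-weighted by energy. The following minimal Python sketch assumes a hypothetical two-level system with energy gap eps in contact with a heat bath at temperature T (the model and numbers are illustrative additions, not taken from the article; units chosen so that k_B = 1):

import math

# Hypothetical two-level system: energies 0 and eps, in units where k_B = 1.
eps = 1.0
T = 0.5
beta = 1.0 / T

energies = [0.0, eps]

# Canonical ensemble: p_i = exp(-beta*E_i) / Z, with Z the partition function.
weights = [math.exp(-beta * E) for E in energies]
Z = sum(weights)
probs = [w / Z for w in weights]

# Thermodynamic quantities follow as ensemble averages.
mean_energy = sum(p * E for p, E in zip(probs, energies))
print("partition function Z =", Z)
print("occupation probabilities:", probs)
print("mean energy <E> =", mean_energy)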
Important cases where the thermodynamic ensembles do not give identical results include:

Microscopic systems.
Large systems at a phase transition.
Large systems with long-range interactions.

In these cases the correct thermodynamic ensemble must be chosen as there are observable differences between these ensembles not just in the size of fluctuations, but also in average quantities such as the distribution of particles. The correct ensemble is that which corresponds to the way the system has been prepared and characterized; in other words, the ensemble that reflects the knowledge about that system.[2]

Calculation methods

Once the characteristic state function for an ensemble has been calculated for a given system, that system is 'solved' (macroscopic observables can be extracted from the characteristic state function). Calculating the characteristic state function of a thermodynamic ensemble is not necessarily a simple task, however, since it involves considering every possible state of the system. While some hypothetical systems have been exactly solved, the most general (and realistic) case is too complex for exact solution. Various approaches exist to approximate the true ensemble and allow calculation of average quantities.

Exact

There are some cases which allow exact solutions.

For very small microscopic systems, the ensembles can be directly computed by simply enumerating over all possible states of the system (using exact diagonalization in quantum mechanics, or integral over all phase space in classical mechanics).
Some large systems consist of many separable microscopic systems, and each of the subsystems can be analysed independently. Notably, idealized gases of non-interacting particles have this property, allowing exact derivations of Maxwell–Boltzmann statistics, Fermi–Dirac statistics, and Bose–Einstein statistics.[2]
A few large systems with interaction have been solved. By the use of subtle mathematical techniques, exact solutions have been found for a few toy models.[7] Some examples include the Bethe ansatz, the square-lattice Ising model in zero field, and the hard hexagon model.

Monte Carlo

Main article: Monte Carlo method

One approximate approach that is particularly well suited to computers is the Monte Carlo method, which examines just a few of the possible states of the system, with the states chosen randomly (with a fair weight). As long as these states form a representative sample of the whole set of states of the system, the approximate characteristic function is obtained. As more and more random samples are included, the errors are reduced to an arbitrarily low level.

The Metropolis–Hastings algorithm is a classic Monte Carlo method which was initially used to sample the canonical ensemble.
Path integral Monte Carlo, also used to sample the canonical ensemble.
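As an illustration of the Metropolis idea, here is a minimal Python sketch that samples the canonical ensemble of a small one-dimensional Ising chain (periodic boundary, k_B = 1). The model, chain length, temperature and step counts are chosen only for illustration and are assumptions of this sketch, not details taken from the original text.

import math
import random

# Hypothetical example: 1D Ising chain, H = -J * sum_i s_i * s_{i+1}, periodic boundary.
J, T, N = 1.0, 2.0, 20
beta = 1.0 / T
random.seed(0)

spins = [random.choice((-1, 1)) for _ in range(N)]

def delta_energy(i):
    # Energy change if spin i is flipped (only its two neighbours matter).
    left, right = spins[(i - 1) % N], spins[(i + 1) % N]
    return 2.0 * J * spins[i] * (left + right)

energy_samples = []
for step in range(50000):
    i = random.randrange(N)
    dE = delta_energy(i)
    # Metropolis rule: always accept moves that lower the energy,
    # accept uphill moves with probability exp(-beta*dE).
    if dE <= 0 or random.random() < math.exp(-beta * dE):
        spins[i] = -spins[i]
    if step > 10000:  # crude equilibration period before measuring
        E = -J * sum(spins[k] * spins[(k + 1) % N] for k in range(N))
        energy_samples.append(E)

print("estimated <E> per spin:", sum(energy_samples) / len(energy_samples) / N)

After the equilibration period, the randomly proposed and accepted flips leave the chain distributed according to the Boltzmann weights of the canonical ensemble, which is exactly the property the text attributes to the Metropolis–Hastings algorithm.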

Other

For rarefied non-ideal gases, approaches such as the cluster expansion use perturbation theory to include the effect of weak interactions, leading to a virial expansion.[3]
For dense fluids, another approximate approach is based on reduced distribution functions, in particular the radial distribution function.[3]
Molecular dynamics computer simulations can be used to calculate microcanonical ensemble averages, in ergodic systems. With the inclusion of a connection to a stochastic heat bath, they can also model canonical and grand canonical conditions.
Mixed methods involving non-equilibrium statistical mechanical results (see below) may be useful.

1.2.3 Non-equilibrium statistical mechanics

See also: Non-equilibrium thermodynamics

There are many physical phenomena of interest that involve quasi-thermodynamic processes out of equilibrium, for example:

heat transport by the internal motions in a material, driven by a temperature imbalance,
electric currents carried by the motion of charges in a conductor, driven by a voltage imbalance,
spontaneous chemical reactions driven by a decrease in free energy,
friction, dissipation, quantum decoherence,
systems being pumped by external forces (optical pumping, etc.),
and irreversible processes in general.

All of these processes occur over time with characteristic rates, and these rates are of importance for engineering. The field of non-equilibrium statistical mechanics is concerned with understanding these non-equilibrium processes at the microscopic level. (Statistical thermodynamics can only be used to calculate the final result, after the external imbalances have been removed and the ensemble has settled back down to equilibrium.)

In principle, non-equilibrium statistical mechanics could be mathematically exact: ensembles for an isolated system evolve over time according to deterministic equations such as Liouville's equation or its quantum equivalent, the von Neumann equation. These equations are the result of applying the mechanical equations of motion independently to each state in the ensemble. Unfortunately, these ensemble evolution equations inherit much of the complexity of the underlying mechanical motion, and so exact solutions are very difficult to obtain. Moreover, the ensemble evolution equations are fully reversible and do not destroy information (the ensemble's Gibbs entropy is preserved). In order to make headway in modelling irreversible processes, it is necessary to add additional ingredients besides probability and reversible mechanics.

Non-equilibrium mechanics is therefore an active area of theoretical research as the range of validity of these additional assumptions continues to be explored. A few approaches are described in the following subsections.

Stochastic methods

One approach to non-equilibrium statistical mechanics is to incorporate stochastic (random) behaviour into the system. Stochastic behaviour destroys information contained in the ensemble. While this is technically inaccurate (aside from hypothetical situations involving black holes, a system cannot in itself cause loss of information), the randomness is added to reflect that information of interest becomes converted over time into subtle correlations within the system, or to correlations between the system and environment. These correlations appear as chaotic or pseudorandom influences on the variables of interest. By replacing these correlations with randomness proper, the calculations can be made much easier.

Boltzmann transport equation: An early form of stochastic mechanics appeared even before the term "statistical mechanics" had been coined, in studies of kinetic theory. James Clerk Maxwell had demonstrated that molecular collisions would lead to apparently chaotic motion inside a gas. Ludwig Boltzmann subsequently showed that, by taking this molecular chaos for granted as a complete randomization, the motions of particles in a gas would follow a simple Boltzmann transport equation that would rapidly restore a gas to an equilibrium state (see H-theorem).
The Boltzmann transport equation and related approaches are important tools in non-equilibrium statistical mechanics due to their extreme simplicity. These approximations work well in systems where the "interesting" information is immediately (after just one collision) scrambled up into subtle correlations, which essentially restricts them to rarefied gases. The Boltzmann transport equation has been found to be very useful in simulations of electron transport in lightly doped semiconductors (in transistors), where the electrons are indeed analogous to a rarefied gas.
A quantum technique related in theme is the random phase approximation.
BBGKY hierarchy: In liquids and dense gases, it is not valid to immediately discard the correlations
between particles after one collision. The BBGKY hierarchy (Bogoliubov–Born–Green–Kirkwood–Yvon hierarchy) gives a method for deriving Boltzmann-type equations but also extending them beyond the dilute gas case, to include correlations after a few collisions.
Keldysh formalism (a.k.a. NEGF, non-equilibrium Green functions): A quantum approach to including stochastic dynamics is found in the Keldysh formalism. This approach is often used in electronic quantum transport calculations.

Near-equilibrium methods

Another important class of non-equilibrium statistical mechanical models deals with systems that are only very slightly perturbed from equilibrium. With very small perturbations, the response can be analysed in linear response theory. A remarkable result, as formalized by the fluctuation-dissipation theorem, is that the response of a system when near equilibrium is precisely related to the fluctuations that occur when the system is in total equilibrium. Essentially, a system that is slightly away from equilibrium, whether put there by external forces or by fluctuations, relaxes towards equilibrium in the same way, since the system cannot tell the difference or "know" how it came to be away from equilibrium.[3]:664

This provides an indirect avenue for obtaining numbers such as ohmic conductivity and thermal conductivity by extracting results from equilibrium statistical mechanics. Since equilibrium statistical mechanics is mathematically well defined and (in some cases) more amenable for calculations, the fluctuation-dissipation connection can be a convenient shortcut for calculations in near-equilibrium statistical mechanics, as sketched after the list below.

A few of the theoretical tools used to make this connection include:

Fluctuation–dissipation theorem
Onsager reciprocal relations
Green–Kubo relations
Landauer–Büttiker formalism
Mori–Zwanzig formalism
published in his 1896 Lectures on Gas Theory.[11] Boltz MoriZwanzig formalism
manns original papers on the statistical interpretation of
thermodynamics, the H-theorem, transport theory, thermal
equilibrium, the equation of state of gases, and similar subHybrid methods
jects, occupy about 2,000 pages in the proceedings of the
An advanced approach uses a combination of stochastic Vienna Academy and other societies. Boltzmann intromethods and linear response theory. As an example, one duced the concept of an equilibrium statistical ensemble
approach to compute quantum coherence eects (weak lo- and also investigated for the rst time non-equilibrium stacalization, conductance uctuations) in the conductance of tistical mechanics, with his H-theorem.

The term "statistical mechanics" was coined by the American mathematical physicist J. Willard Gibbs in 1884.[12][note 5] "Probabilistic mechanics" might today seem a more appropriate term, but "statistical mechanics" is firmly entrenched.[13] Shortly before his death, Gibbs published in 1902 Elementary Principles in Statistical Mechanics, a book which formalized statistical mechanics as a fully general approach to address all mechanical systems, macroscopic or microscopic, gaseous or non-gaseous.[1] Gibbs' methods were initially derived in the framework of classical mechanics, however they were of such generality that they were found to adapt easily to the later quantum mechanics, and still form the foundation of statistical mechanics to this day.[2]

1.2.6 See also

Thermodynamics: non-equilibrium, chemical
Mechanics: classical, quantum
Probability, statistical ensemble
Numerical methods: Monte Carlo method, molecular dynamics
Statistical physics
Quantum statistical mechanics
List of notable textbooks in statistical mechanics
List of important publications in statistical mechanics
Fundamentals of Statistical Mechanics Wikipedia book

1.2.7 Notes

[1] The term statistical mechanics is sometimes used to refer to only statistical thermodynamics. This article takes the broader view. By some definitions, statistical physics is an even broader term which statistically studies any type of physical system, but is often taken to be synonymous with statistical mechanics.
[2] The probabilities in quantum statistical mechanics should not be confused with quantum superposition. While a quantum ensemble can contain states with quantum superpositions, a single quantum state cannot be used to represent an ensemble.
[3] Statistical equilibrium should not be confused with mechanical equilibrium. The latter occurs when a mechanical system has completely ceased to evolve even on a microscopic scale, due to being in a state with a perfect balancing of forces. Statistical equilibrium generally involves states that are very far from mechanical equilibrium.
[4] The transitive thermal equilibrium (as in, "X is thermal equilibrium with Y") used here means that the ensemble for the first system is not perturbed when the system is allowed to weakly interact with the second system.
[5] According to Gibbs, the term "statistical", in the context of mechanics, i.e. statistical mechanics, was first used by the Scottish physicist James Clerk Maxwell in 1871. From: J. Clerk Maxwell, Theory of Heat (London, England: Longmans, Green, and Co., 1871), p. 309: "In dealing with masses of matter, while we do not perceive the individual molecules, we are compelled to adopt what I have described as the statistical method of calculation, and to abandon the strict dynamical method, in which we follow every motion by the calculus."

1.2.8 References

[1] Gibbs, Josiah Willard (1902). Elementary Principles in Statistical Mechanics. New York: Charles Scribner's Sons.
[2] Tolman, R. C. (1938). The Principles of Statistical Mechanics. Dover Publications. ISBN 9780486638966.
[3] Balescu, Radu (1975). Equilibrium and Non-Equilibrium Statistical Mechanics. John Wiley & Sons. ISBN 9780471046004.
[4] Jaynes, E. (1957). Information Theory and Statistical Mechanics. Physical Review 106 (4): 620. doi:10.1103/PhysRev.106.620.
[5] J. Uffink, "Compendium of the foundations of classical statistical physics." (2006)
[6] Reif, F. (1965). Fundamentals of Statistical and Thermal Physics. McGraw-Hill. p. 227. ISBN 9780070518001.
[7] Baxter, Rodney J. (1982). Exactly solved models in statistical mechanics. Academic Press Inc. ISBN 9780120831807.
[8] Altshuler, B. L.; Aronov, A. G.; Khmelnitsky, D. E. (1982). Effects of electron-electron collisions with small energy transfers on quantum localisation. Journal of Physics C: Solid State Physics 15 (36): 7367. doi:10.1088/0022-3719/15/36/018.
[9] Aleiner, I.; Blanter, Y. (2002). Inelastic scattering time for conductance fluctuations. Physical Review B 65 (11). doi:10.1103/PhysRevB.65.115317.
[10] Mahon, Basil (2003). The Man Who Changed Everything – the Life of James Clerk Maxwell. Hoboken, NJ: Wiley. ISBN 0-470-86171-1. OCLC 52358254.
[11] Ebeling, Werner; Sokolov, Igor M. (2005). Statistical Thermodynamics and Stochastic Theory of Nonequilibrium Systems. World Scientific Publishing Co. Pte. Ltd. pp. 3–12. ISBN 978-90-277-1674-3. (section 1.2)
[12] J. W. Gibbs, "On the Fundamental Formula of Statistical Mechanics, with Applications to Astronomy and Thermodynamics." Proceedings of the American Association for the Advancement of Science, 33, 57–58 (1884). Reproduced in The Scientific Papers of J. Willard Gibbs, Vol II (1906), pp. 16.
[13] Mayants, Lazar (1984). The enigma of probability and physics. Springer. p. 174. ISBN 978-90-277-1674-3.

1.2.9 External links

Philosophy of Statistical Mechanics article by Lawrence Sklar for the Stanford Encyclopedia of Philosophy.
Sklogwiki - Thermodynamics, statistical mechanics, and the computer simulation of materials. SklogWiki is particularly orientated towards liquids and soft condensed matter.
Statistical Thermodynamics - Historical Timeline
Thermodynamics and Statistical Mechanics by Richard Fitzpatrick
Lecture Notes in Statistical Mechanics and Mesoscopics by Doron Cohen
Videos of lecture series in statistical mechanics on YouTube taught by Leonard Susskind.
Vu-Quoc, L., Configuration integral (statistical mechanics), 2008. This wiki site is down; see this article in the web archive on 2012 April 28.

1.3 Chemical Thermodynamics

Chemical thermodynamics is the study of the interrelation of heat and work with chemical reactions or with physical changes of state within the confines of the laws of thermodynamics. Chemical thermodynamics involves not only laboratory measurements of various thermodynamic properties, but also the application of mathematical methods to the study of chemical questions and the spontaneity of processes.

The structure of chemical thermodynamics is based on the first two laws of thermodynamics. Starting from the first and second laws of thermodynamics, four equations called the "fundamental equations of Gibbs" can be derived. From these four, a multitude of equations, relating the thermodynamic properties of the thermodynamic system, can be derived using relatively simple mathematics. This outlines the mathematical framework of chemical thermodynamics.[1]

1.3.1 History

[Figure: J. Willard Gibbs, founder of chemical thermodynamics]

In 1865, the German physicist Rudolf Clausius, in his Mechanical Theory of Heat, suggested that the principles of thermochemistry, e.g. the heat evolved in combustion reactions, could be applied to the principles of thermodynamics.[2] Building on the work of Clausius, between the years 1873-76 the American mathematical physicist Willard Gibbs published a series of three papers, the most famous one being the paper On the Equilibrium of Heterogeneous Substances. In these papers, Gibbs showed how the first two laws of thermodynamics could be measured graphically and mathematically to determine both the thermodynamic equilibrium of chemical reactions as well as their tendencies to occur or proceed. Gibbs' collection of papers provided the first unified body of thermodynamic theorems from the principles developed by others, such as Clausius and Sadi Carnot.

During the early 20th century, two major publications successfully applied the principles developed by Gibbs to chemical processes, and thus established the foundation of the science of chemical thermodynamics. The first was the 1923 textbook Thermodynamics and the Free Energy of Chemical Substances by Gilbert N. Lewis and Merle Randall. This book was responsible for supplanting the term "chemical affinity" with the term "free energy" in the English-speaking world. The second was the 1933 book Modern Thermodynamics by the methods of Willard Gibbs written
by E. A. Guggenheim. In this manner, Lewis, Randall, and Guggenheim are considered as the founders of modern chemical thermodynamics because of the major contribution of these two books in unifying the application of thermodynamics to chemistry.[1]

1.3.2 Overview

The primary objective of chemical thermodynamics is the establishment of a criterion for the determination of the feasibility or spontaneity of a given transformation.[3] In this manner, chemical thermodynamics is typically used to predict the energy exchanges that occur in the following processes:

1. Chemical reactions
2. Phase changes
3. The formation of solutions

The following state functions are of primary concern in chemical thermodynamics:

Internal energy (U)
Enthalpy (H)
Entropy (S)
Gibbs free energy (G)

Most identities in chemical thermodynamics arise from application of the first and second laws of thermodynamics, particularly the law of conservation of energy, to these state functions.

The three laws of thermodynamics:

1. The energy of the universe is constant.
2. In any spontaneous process, there is always an increase in entropy of the universe.
3. The entropy of a perfect crystal (well ordered) at 0 Kelvin is zero.

1.3.3 Chemical energy

Main article: Chemical energy

Chemical energy is the potential of a chemical substance to undergo a transformation through a chemical reaction or to transform other chemical substances. Breaking or making of chemical bonds involves energy or heat, which may be either absorbed or evolved from a chemical system.

Energy that can be released (or absorbed) because of a reaction between a set of chemical substances is equal to the difference between the energy content of the products and the reactants. This change in energy is called the change in internal energy of a chemical reaction,

\Delta U = U_f(\text{products}) - U_f(\text{reactants}) ,

where U_f(reactants) is the internal energy of formation of the reactant molecules, which can be calculated from the bond energies of the various chemical bonds of the molecules under consideration, and U_f(products) is the internal energy of formation of the product molecules. The change in internal energy is equal to the heat change if it is measured under conditions of constant volume, as in a closed rigid container such as a bomb calorimeter. However, under conditions of constant pressure, as in reactions in vessels open to the atmosphere, the measured heat change is not always equal to the internal energy change, because pressure-volume work also releases or absorbs energy. (The heat change at constant pressure is called the enthalpy change; in this case, the enthalpy of formation.)
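In symbols (a standard textbook relation, added here for clarity rather than quoted from the original text), the enthalpy H = U + PV links the two measured heat changes: at constant pressure the heat absorbed equals the enthalpy change, which differs from the internal-energy change by the pressure-volume work,

\Delta H = \Delta U + P\,\Delta V \qquad (\text{constant pressure}) .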

Another useful term is the heat of combustion, which is the energy released due to a combustion reaction and often applied in the study of fuels. Food is similar to hydrocarbon fuel and carbohydrate fuels, and when it is oxidized, its caloric content is similar (though not assessed in the same way as a hydrocarbon fuel; see food energy).

In chemical thermodynamics the term used for the chemical potential energy is chemical potential, and for chemical transformation an equation most often used is the Gibbs–Duhem equation.

1.3.4 Chemical reactions

Main article: Chemical reaction

In most cases of interest in chemical thermodynamics there are internal degrees of freedom and processes, such as chemical reactions and phase transitions, which always create entropy unless they are at equilibrium, or are maintained at a "running equilibrium" through "quasi-static" changes by being coupled to constraining devices, such as pistons or electrodes, to deliver and receive external work. Even for homogeneous "bulk" materials, the free energy functions depend on the composition, as do all the extensive thermodynamic potentials, including the internal energy. If the quantities { Ni }, the number of chemical species, are omitted from the formulae, it is impossible to describe compositional changes.
Gibbs function or Gibbs Energy

For a bulk (unstructured) system they are the last remaining extensive variables. For an unstructured, homogeneous "bulk" system, there are still various extensive compositional variables { Ni } that G depends on, which specify the composition: the amounts of each chemical substance, expressed as the numbers of molecules present or (dividing by Avogadro's number, 6.022 x 10^23) the numbers of moles,

G = G(T, P, \{N_i\}) .

For the case where only PV work is possible,

dG = -S dT + V dP + \sum_i \mu_i dN_i ,

in which \mu_i is the chemical potential for the i-th component in the system,

\mu_i = (\partial G / \partial N_i)_{T, P, N_{j \neq i}, \text{etc.}} .

The expression for dG is especially useful at constant T and P, conditions which are easy to achieve experimentally and which approximate the condition in living creatures:

(dG)_{T,P} = \sum_i \mu_i dN_i .

Chemical affinity

Main article: Chemical affinity

While this formulation is mathematically defensible, it is not particularly transparent since one does not simply add or remove molecules from a system. There is always a process involved in changing the composition; e.g., a chemical reaction (or many), or movement of molecules from one phase (liquid) to another (gas or solid). We should find a notation which does not seem to imply that the amounts of the components { Ni } can be changed independently. All real processes obey conservation of mass, and in addition, conservation of the numbers of atoms of each kind. Whatever molecules are transferred to or from should be considered part of the system.

Consequently, we introduce an explicit variable to represent the degree of advancement of a process, a progress variable \xi for the extent of reaction (Prigogine & Defay, p. 18; Prigogine, pp. 4-7; Guggenheim, p. 37, 62), and make use of the partial derivative \partial G / \partial \xi (in place of the widely used "\Delta G", since the quantity at issue is not a finite change). The result is an understandable expression for the dependence of dG on chemical reactions (or other processes). If there is just one reaction,

(dG)_{T,P} = (\partial G / \partial \xi)_{T,P} \, d\xi .

If we introduce the stoichiometric coefficient for the i-th component in the reaction,

\nu_i = \partial N_i / \partial \xi ,

which tells how many molecules of i are produced or consumed, we obtain an algebraic expression for the partial derivative,

(\partial G / \partial \xi)_{T,P} = \sum_i \mu_i \nu_i = -A ,

where (De Donder; Prigogine & Defay, p. 69; Guggenheim, pp. 37, 240) we introduce a concise and historical name for this quantity, the "affinity", symbolized by A, as introduced by Théophile de Donder in 1923. The minus sign comes from the fact that the affinity was defined to represent the rule that spontaneous changes will ensue only when the change in the Gibbs free energy of the process is negative, meaning that the chemical species have a positive affinity for each other. The differential for G takes on a simple form which displays its dependence on compositional change:

(dG)_{T,P} = -A \, d\xi .
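A minimal numeric sketch of these last relations, assuming a single hypothetical reaction with signed stoichiometric coefficients nu_i and invented chemical potential values mu_i (none of the numbers come from the text); it evaluates the affinity A = -sum_i nu_i mu_i and the resulting change -A dxi in G for a small advancement dxi:

# Hypothetical reaction A + 2 B -> 3 C written with signed stoichiometric
# coefficients (negative for consumed species, positive for produced ones).
nu = {"A": -1, "B": -2, "C": +3}

# Invented chemical potentials (J/mol) at the given T and P; illustrative only.
mu = {"A": -50000.0, "B": -30000.0, "C": -40000.0}

# Affinity: A = -sum_i nu_i * mu_i ; the reaction tends to advance while A > 0.
affinity = -sum(nu[s] * mu[s] for s in nu)

d_xi = 1e-3               # a small advancement of the extent of reaction (mol)
dG_TP = -affinity * d_xi  # (dG)_{T,P} = -A dxi, negative for a spontaneous step

print("affinity A =", affinity, "J/mol")
print("(dG)_{T,P} for d_xi =", d_xi, "mol:", dG_TP, "J")

With these invented values the affinity is positive and the corresponding Gibbs-energy change is negative, illustrating the sign convention described above.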
If there are a number of chemical reactions going on simultaneously, as is usually the case,

(dG)_{T,P} = -\sum_k A_k \, d\xi_k ,

with a set of reaction coordinates { \xi_j }, avoiding the notion that the amounts of the components { N_i } can be changed independently. The expressions above are equal to zero at thermodynamic equilibrium, while in the general case for real systems, they are negative because all chemical reactions proceeding at a finite rate produce entropy. This can be made even more explicit by introducing the reaction rates d\xi_j/dt. For each and every physically independent process (Prigogine & Defay, p. 38; Prigogine, p. 24),
1.3. CHEMICAL THERMODYNAMICS

A 0 .
This is a remarkable result since the chemical potentials
are intensive system variables, depending only on the local
molecular milieu. They cannot know whether the temperature and pressure (or any other system variables) are going
to be held constant over time. It is a purely local criterion
and must hold regardless of any such constraints. Of course,
it could have been obtained by taking partial derivatives of
any of the other fundamental state functions, but nonetheless is a general criterion for (T times) the entropy production from that spontaneous process; or at least any part
of it that is not captured as external work. (See Constraints
below.)
We now relax the requirement of a homogeneous bulk
system by letting the chemical potentials and the anity
apply to any locality in which a chemical reaction (or any
other process) is occurring. By accounting for the entropy
production due to irreversible processes, the inequality for
dG is now replaced by an equality

dG = SdT + V dP

Ak dk + W

or

dGT,P =

Ak dk + W .

Any decrease in the Gibbs function of a system is the upper


limit for any isothermal, isobaric work that can be captured
in the surroundings, or it may simply be dissipated, appearing as T times a corresponding increase in the entropy of
the system and/or its surrounding. Or it may go partly toward doing external work and partly toward creating entropy. The important point is that the extent of reaction for
a chemical reaction may be coupled to the displacement of
some external mechanical or electrical quantity in such a
way that one can advance only if the other one also does.
The coupling may occasionally be rigid, but it is often exible and variable.

29
assertion that all spontaneous reactions have a negative G
is merely a restatement of the fundamental thermodynamic
relation, giving it the physical dimensions of energy and
somewhat obscuring its signicance in terms of entropy.
When there is no useful work being done, it would be less
misleading to use the Legendre transforms of the entropy
appropriate for constant T, or for constant T and P, the
Massieu functions F/T and G/T respectively.

1.3.5

Non equilibrium

Main article: non-equilibrium thermodynamics


Generally the systems treated with conventional chemical thermodynamics are either at equilibrium or near equilibrium. Ilya Prigogine developed the thermodynamic treatment of open systems that are far from equilibrium. In doing so he discovered phenomena and structures of completely new and completely unexpected types. His generalized, nonlinear and irreversible thermodynamics has found surprising applications in a wide variety of fields.

Non-equilibrium thermodynamics has been applied to explain how ordered structures, e.g. biological systems, can develop from disorder. Even if Onsager's relations are utilized, the classical principles of equilibrium thermodynamics still show that linear systems close to equilibrium always develop into states of disorder which are stable to perturbations and cannot explain the occurrence of ordered structures.

Prigogine called these systems dissipative systems, because they are formed and maintained by the dissipative processes which take place because of the exchange of energy between the system and its environment, and because they disappear if that exchange ceases. They may be said to live in symbiosis with their environment.

The method which Prigogine used to study the stability of dissipative structures to perturbations is of very great general interest. It makes it possible to study the most varied problems, such as city traffic problems, the stability of insect communities, the development of ordered biological structures and the growth of cancer cells, to mention but a few examples.

System constraints

In this regard, it is crucial to understand the role of walls and other constraints, and the distinction between independent processes and coupling. Contrary to the clear implications of many reference sources, the previous analysis is not restricted to homogeneous, isotropic bulk systems which can deliver only PdV work to the outside world, but applies even to the most structured systems. There are complex systems with many chemical reactions going on at the same time, some of which are really only parts of the same, overall process. An independent process is one that could proceed even if all others were unaccountably stopped in their tracks. Understanding this is perhaps a thought experiment in chemical kinetics, but actual examples exist.

A gas reaction which results in an increase in the number of molecules will lead to an increase in volume at constant external pressure. If it occurs inside a cylinder closed with a piston, the equilibrated reaction can proceed only by doing work against an external force on the piston. The extent variable for the reaction can increase only if the piston moves, and conversely, if the piston is pushed inward, the reaction is driven backwards.

Similarly, a redox reaction might occur in an electrochemical cell with the passage of current in wires connecting the electrodes. The half-cell reactions at the electrodes are constrained if no current is allowed to flow. The current might be dissipated as joule heating, or it might in turn run an electrical device like a motor doing mechanical work. An automobile lead-acid battery can be recharged, driving the chemical reaction backwards. In this case as well, the reaction is not an independent process. Some, perhaps most, of the Gibbs free energy of reaction may be delivered as external work.

The hydrolysis of ATP to ADP and phosphate can drive the force times distance work delivered by living muscles, and synthesis of ATP is in turn driven by a redox chain in mitochondria and chloroplasts, which involves the transport of ions across the membranes of these cellular organelles. The coupling of processes here, and in the previous examples, is often not complete. Gas can leak slowly past a piston, just as it can slowly leak out of a rubber balloon. Some reaction may occur in a battery even if no external current is flowing. There is usually a coupling coefficient, which may depend on relative rates, which determines what percentage of the driving free energy is turned into external work, or captured as "chemical work"; a misnomer for the free energy of another chemical process.

1.3.6 See also

Thermodynamic databases for pure substances

1.3.7 References

[1] Ott, Bevan J.; Boerio-Goates, Juliana (2000). Chemical Thermodynamics: Principles and Applications. Academic Press. ISBN 0-12-530990-2.

[2] Clausius, R. (1865). The Mechanical Theory of Heat, with its Applications to the Steam Engine and to Physical Properties of Bodies. London: John van Voorst, 1 Paternoster Row. MDCCCLXVII.

[3] Klotz, I. (1950). Chemical Thermodynamics. New York: Prentice-Hall, Inc.

1.3.8 Further reading

Herbert B. Callen (1960). Thermodynamics. Wiley & Sons. The clearest account of the logical foundations of the subject. ISBN 0-471-13035-4. Library of Congress Catalog No. 60-5597.

Ilya Prigogine & R. Defay, translated by D.H. Everett; Chapter IV (1954). Chemical Thermodynamics. Longmans, Green & Co. Exceptionally clear on the logical foundations as applied to chemistry; includes non-equilibrium thermodynamics.

Ilya Prigogine (1967). Thermodynamics of Irreversible Processes, 3rd ed. Interscience: John Wiley & Sons. A simple, concise monograph explaining all the basic ideas. Library of Congress Catalog No. 67-29540.

E.A. Guggenheim (1967). Thermodynamics: An Advanced Treatment for Chemists and Physicists, 5th ed. North Holland; John Wiley & Sons (Interscience). A remarkably astute treatise. Library of Congress Catalog No. 67-20003.

Th. De Donder (1922). Bull. Ac. Roy. Belg. (Cl. Sc.) (5) 7: 197, 205.

1.3.9 External links

Chemical Thermodynamics - University of North Carolina

Chemical energetics (Introduction to thermodynamics and the First Law)

Thermodynamics of chemical equilibrium (Entropy, Second Law and free energy)

1.4 Equilibrium Thermodynamics

Equilibrium Thermodynamics is the systematic study of transformations of matter and energy in systems in terms of a concept called thermodynamic equilibrium. The word equilibrium implies a state of balance. Equilibrium thermodynamics, in origins, derives from analysis of the Carnot cycle. Here, typically a system, as a cylinder of gas, initially in its own state of internal thermodynamic equilibrium, is set out of balance via heat input from a combustion reaction. Then, through a series of steps, as the system settles into its final equilibrium state, work is extracted.

In an equilibrium state the potentials, or driving forces, within the system, are in exact balance. A central aim in equilibrium thermodynamics is: given a system in a well-defined initial state of thermodynamic equilibrium, subject to accurately specified constraints, to calculate, when the constraints are changed by an externally imposed intervention, what the state of the system will be once it has reached a new equilibrium. An equilibrium state is mathematically ascertained by seeking the extrema of a thermodynamic potential function, whose nature depends on the constraints imposed on the system. For example, a chemical reaction at constant temperature and pressure will reach equilibrium at a minimum of its components' Gibbs free energy and a maximum of their entropy.

Equilibrium thermodynamics differs from non-equilibrium thermodynamics in that, with the latter, the state of the system under investigation will typically not be uniform but will vary locally in such quantities as energy, entropy, and temperature distributions, as gradients are imposed by dissipative thermodynamic fluxes. In equilibrium thermodynamics, by contrast, the state of the system will be considered uniform throughout, defined macroscopically by such quantities as temperature, pressure, or volume. Systems are studied in terms of change from one equilibrium state to another; such a change is called a thermodynamic process.
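To make the extremum statement concrete, here is a standard worked instance (added for illustration; it is not in the source text): for a chemical reaction at constant temperature and pressure, with extent of reaction ξ and stoichiometric coefficients ν_i, the minimum of the Gibbs free energy gives

\left( \frac{\partial G}{\partial \xi} \right)_{T,P} = \sum_i \nu_i \mu_i = 0 ,

i.e. the affinity A = -(\partial G/\partial \xi)_{T,P} vanishes, which is the equilibrium condition already met in the chemical thermodynamics section.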
Ruppeiner geometry is a type of information geometry
used to study thermodynamics. It claims that thermodynamic systems can be represented by Riemannian geometry, and that statistical properties can be derived from the
model. This geometrical model is based on the idea that
there exist equilibrium states which can be represented by
points on a two-dimensional surface and the distance between these equilibrium states is related to the fluctuation between
them.

1.4.1

See also

Non-equilibrium thermodynamics

Thermodynamics

1.4.2 References

Adkins, C.J. (1983). Equilibrium Thermodynamics, 3rd Ed. Cambridge: Cambridge University Press.

Cengel, Y. & Boles, M. (2002). Thermodynamics: an Engineering Approach, 4th Ed. (textbook). New York: McGraw Hill.

Kondepudi, D. & Prigogine, I. (2004). Modern Thermodynamics: From Heat Engines to Dissipative Structures (textbook). New York: John Wiley & Sons.

Perrot, P. (1998). A to Z of Thermodynamics (dictionary). New York: Oxford University Press.

1.5 Non-equilibrium Thermodynamics

Non-equilibrium thermodynamics is a branch of thermodynamics that deals with physical systems that are not in thermodynamic equilibrium but can be adequately described in terms of variables (non-equilibrium state variables) that represent an extrapolation of the variables used to specify the system in thermodynamic equilibrium. Non-equilibrium thermodynamics is concerned with transport processes and with the rates of chemical reactions. It relies on what may be thought of as more or less nearness to thermodynamic equilibrium. Non-equilibrium thermodynamics is a work in progress, not an established edifice. This article will try to sketch some approaches to it and some concepts important for it.

Almost all systems found in nature are not in thermodynamic equilibrium, for they are changing or can be triggered to change over time, and are continuously and discontinuously subject to flux of matter and energy to and from other systems and to chemical reactions. Some systems and processes are, however, in a useful sense, near enough to thermodynamic equilibrium to allow description with useful accuracy by currently known non-equilibrium thermodynamics. Nevertheless, many natural systems and processes will always remain far beyond the scope of non-equilibrium thermodynamic methods. This is because of the very small size of atoms, as compared with macroscopic systems.

The thermodynamic study of non-equilibrium systems requires more general concepts than are dealt with by equilibrium thermodynamics. One fundamental difference between equilibrium thermodynamics and non-equilibrium thermodynamics lies in the behaviour of inhomogeneous systems, which require for their study knowledge of rates of reaction which are not considered in equilibrium thermodynamics of homogeneous systems. This is discussed below. Another fundamental and very important difference is the difficulty or impossibility of defining entropy at an instant of time in macroscopic terms for systems not in thermodynamic equilibrium.[1][2]

1.5.1 Scope of non-equilibrium thermodynamics

Difference between equilibrium and non-equilibrium thermodynamics
A profound difference separates equilibrium from non-equilibrium thermodynamics. Equilibrium thermodynamics ignores the time-courses of physical processes. In contrast, non-equilibrium thermodynamics attempts to describe their time-courses in continuous detail.

Equilibrium thermodynamics restricts its considerations to processes that have initial and final states of thermodynamic equilibrium; the time-courses of processes are deliberately ignored. Consequently, equilibrium thermodynamics allows processes that pass through states far from thermodynamic equilibrium, that cannot be described even by the variables admitted for non-equilibrium thermodynamics,[3] such as time rates of change of temperature and pressure.[4] For example, in equilibrium thermodynamics, a process is allowed to include even a violent explosion that cannot be described by non-equilibrium thermodynamics.[3] Equilibrium thermodynamics does, however, for theoretical development, use the idealized concept of the quasi-static process. A quasi-static process is a conceptual (timeless and physically impossible) smooth mathematical passage along a continuous path of states of thermodynamic equilibrium.[5] It is an exercise in differential geometry rather than a process that could occur in actuality.

Non-equilibrium thermodynamics, on the other hand, attempting to describe continuous time-courses, needs its state variables to have a very close connection with those of equilibrium thermodynamics.[6] This profoundly restricts the scope of non-equilibrium thermodynamics, and places heavy demands on its conceptual framework.

Non-equilibrium state variables


The suitable relationship that defines non-equilibrium thermodynamic state variables is as follows. On occasions when the system happens to be in states that are sufficiently close to thermodynamic equilibrium, non-equilibrium state variables are such that they can be measured locally with sufficient accuracy by the same techniques as are used to measure thermodynamic state variables, or by corresponding time and space derivatives, including fluxes of matter and energy. In general, non-equilibrium thermodynamic systems are spatially and temporally non-uniform, but their non-uniformity still has a sufficient degree of smoothness to support the existence of suitable time and space derivatives of non-equilibrium state variables. Because of the spatial non-uniformity, non-equilibrium state variables that correspond to extensive thermodynamic state variables have to be defined as spatial densities of the corresponding extensive equilibrium state variables. On occasions when the system is sufficiently close to thermodynamic equilibrium, intensive non-equilibrium state variables, for example temperature and pressure, correspond closely with equilibrium state variables. It is necessary that measuring probes be small enough, and rapidly enough responding, to capture relevant non-uniformity. Further, the non-equilibrium state variables are required to be mathematically functionally related to one another in ways that suitably resemble corresponding relations between equilibrium thermodynamic state variables.[7] In reality, these requirements are very demanding, and it may be difficult or practically, or even theoretically, impossible to satisfy them. This is part of why non-equilibrium thermodynamics is a work in progress.

1.5.2

Overview

Non-equilibrium thermodynamics is a work in progress, not an established edifice. This article will try to sketch some approaches to it and some concepts important for it.

Some concepts of particular importance for non-equilibrium thermodynamics include time rate of dissipation of energy (Rayleigh 1873,[8] Onsager 1931,[9] also[7][10]), time rate of entropy production (Onsager 1931),[9] thermodynamic fields,[11][12][13] dissipative structure,[14] and non-linear dynamical structure.[10]

One problem of interest is the thermodynamic study of non-equilibrium steady states, in which entropy production and some flows are non-zero, but there is no time variation of physical variables.

One initial approach to non-equilibrium thermodynamics is sometimes called 'classical irreversible thermodynamics'.[2] There are other approaches to non-equilibrium thermodynamics, for example extended irreversible thermodynamics,[2][15] and generalized thermodynamics,[16] but they are hardly touched on in the present article.

Quasi-radiationless non-equilibrium thermodynamics of matter in laboratory conditions

According to Wildt[17] (see also Essex[18][19][20]), current versions of non-equilibrium thermodynamics ignore radiant heat; they can do so because they refer to laboratory quantities of matter under laboratory conditions with temperatures well below those of stars. At laboratory temperatures, in laboratory quantities of matter, thermal radiation is weak and can be practically nearly ignored. But, for example, atmospheric physics is concerned with large amounts of matter, occupying cubic kilometers, that, taken as a whole, are not within the range of laboratory quantities; then thermal radiation cannot be ignored.

Local equilibrium thermodynamics


The terms 'classical irreversible thermodynamics'[2] and 'local equilibrium thermodynamics' are sometimes used to refer to a version of non-equilibrium thermodynamics that demands certain simplifying assumptions, as follows. The assumptions have the effect of making each very small volume element of the system effectively homogeneous, or well-mixed, or without an effective spatial structure, and without kinetic energy of bulk flow or of diffusive flux. Even within the thought-frame of classical irreversible thermodynamics, care[10] is needed in choosing the independent variables[21] for systems. In some writings, it is assumed that the intensive variables of equilibrium thermodynamics are sufficient as the independent variables for the task (such variables are considered to have no 'memory', and do not show hysteresis); in particular, local flow intensive variables are not admitted as independent variables; local flows are considered as dependent on quasi-static local intensive variables.

Also it is assumed that the local entropy density is the same function of the other local intensive variables as in equilibrium; this is called the local thermodynamic equilibrium assumption[7][10][14][15][22][23][24][25] (see also Keizer (1987)[26]). Radiation is ignored because it is transfer of energy between regions, which can be remote from one another. In the classical irreversible thermodynamic approach, there is allowed very small spatial variation, from very small volume element to adjacent very small volume element, but it is assumed that the global entropy of the system can be found by simple spatial integration of the local entropy density; this means that spatial structure cannot contribute as it properly should to the global entropy assessment for the system. This approach assumes spatial and temporal continuity and even differentiability of locally defined intensive variables such as temperature and internal energy density. All of these are very stringent demands. Consequently, this approach can deal with only a very limited range of phenomena. This approach is nevertheless valuable because it can deal well with some macroscopically observable phenomena.

In other writings, local flow variables are considered; these might be considered as classical by analogy with the time-invariant long-term time-averages of flows produced by endlessly repeated cyclic processes; examples with flows are in the thermoelectric phenomena known as the Seebeck and the Peltier effects, considered by Kelvin in the nineteenth century and by Onsager in the twentieth.[22][27] These effects occur at metal junctions, which were originally effectively treated as two-dimensional surfaces, with no spatial volume, and no spatial variation.

Local equilibrium thermodynamics with materials with memory

A further extension of local equilibrium thermodynamics is to allow that materials may have memory, so that their constitutive equations depend not only on present values but also on past values of local equilibrium variables. Thus time comes into the picture more deeply than for time-dependent local equilibrium thermodynamics with memoryless materials, but fluxes are not independent variables of state.[28]

Extended irreversible thermodynamics

Extended irreversible thermodynamics is a branch of non-equilibrium thermodynamics that goes outside the restriction to the local equilibrium hypothesis. The space of state variables is enlarged by including the fluxes of mass, momentum and energy and eventually higher order fluxes. The formalism is well-suited for describing high-frequency processes and small-length scales materials.

1.5.3 Basic concepts

There are many examples of stationary non-equilibrium systems, some very simple, like a system confined between two thermostats at different temperatures or the ordinary Couette flow, a fluid enclosed between two flat walls moving in opposite directions and defining non-equilibrium conditions at the walls. Laser action is also a non-equilibrium process, but it depends on departure from local thermodynamic equilibrium and is thus beyond the scope of classical irreversible thermodynamics; here a strong temperature difference is maintained between two molecular degrees of freedom (with molecular laser, vibrational and rotational molecular motion), the requirement for two component 'temperatures' in the one small region of space, precluding local thermodynamic equilibrium, which demands that only one temperature be needed. Damping of acoustic perturbations or shock waves are non-stationary non-equilibrium processes. Driven complex fluids, turbulent systems and glasses are other examples of non-equilibrium systems.

The mechanics of macroscopic systems depends on a number of extensive quantities. It should be stressed that all systems are permanently interacting with their surroundings, thereby causing unavoidable fluctuations of extensive quantities. Equilibrium conditions of thermodynamic systems are related to the maximum property of the entropy. If the only extensive quantity that is allowed to fluctuate is the internal energy, all the other ones being kept strictly constant, the temperature of the system is measurable and meaningful. The system's properties are then most conveniently described using the thermodynamic potential Helmholtz free energy (A = U - TS), a Legendre transformation of the energy. If, next to fluctuations of the energy, the macroscopic dimensions (volume) of the system are left fluctuating, we use the Gibbs free energy (G = U + PV - TS), where the system's properties are determined both by the temperature and by the pressure.

Non-equilibrium systems are much more complex and they may undergo fluctuations of more extensive quantities. The boundary conditions impose on them particular intensive variables, like temperature gradients or distorted collective motions (shear motions, vortices, etc.), often called thermodynamic forces. If free energies are very useful in equilibrium thermodynamics, it must be stressed that there is no general law defining stationary non-equilibrium properties of the energy as is the second law of thermodynamics for the entropy in equilibrium thermodynamics. That is why in such cases a more generalized Legendre transformation should be considered. This is the extended Massieu potential. By definition, the entropy (S) is a function of the collection of extensive quantities E_i. Each extensive quantity has a conjugate intensive variable I_i (a restricted definition of intensive variable is used here by comparison to the definition given in this link), so that

I_i = \frac{\partial S}{\partial E_i} .

We then define the extended Massieu function as follows:

k_{\mathrm B} M = S - \sum_i I_i E_i ,

where k_{\mathrm B} is Boltzmann's constant, whence

k_{\mathrm B}\, dM = - \sum_i E_i\, dI_i .

The independent variables are the intensities. Intensities are global values, valid for the system as a whole. When boundaries impose on the system different local conditions (e.g. temperature differences), there are intensive variables representing the average value and others representing gradients or higher moments. The latter are the thermodynamic forces driving fluxes of extensive properties through the system.

It may be shown that the Legendre transformation changes the maximum condition of the entropy (valid at equilibrium) into a minimum condition of the extended Massieu function for stationary states, no matter whether at equilibrium or not.
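A quick consistency check, added here as a worked special case (standard thermodynamics, not part of the source text): if the internal energy U is the only extensive quantity allowed to fluctuate, its conjugate intensity is I = \partial S/\partial U = 1/T, and the extended Massieu function reduces to the classical Massieu function,

k_{\mathrm B} M = S - \frac{U}{T} = -\frac{F}{T}, \qquad F = U - TS ,

which is exactly the -F/T Legendre transform of the entropy mentioned earlier in the chemical thermodynamics section.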

1.5.4 Stationary states, fluctuations, and stability

In thermodynamics one is often interested in a stationary state of a process, allowing that the stationary state includes the occurrence of unpredictable and experimentally unreproducible fluctuations in the state of the system. The fluctuations are due to the system's internal sub-processes and to exchange of matter or energy with the system's surroundings that create the constraints that define the process.

If the stationary state of the process is stable, then the unreproducible fluctuations involve local transient decreases of entropy. The reproducible response of the system is then to increase the entropy back to its maximum by irreversible processes: the fluctuation cannot be reproduced with a significant level of probability. Fluctuations about stable stationary states are extremely small except near critical points (Kondepudi and Prigogine 1998, page 323).[29] The stable stationary state has a local maximum of entropy and is locally the most reproducible state of the system. There are theorems about the irreversible dissipation of fluctuations. Here 'local' means local with respect to the abstract space of thermodynamic coordinates of state of the system.

If the stationary state is unstable, then any fluctuation will almost surely trigger the virtually explosive departure of the system from the unstable stationary state. This can be accompanied by increased export of entropy.

1.5.5 Local thermodynamic equilibrium

The scope of present-day non-equilibrium thermodynamics does not cover all physical processes. A condition for the validity of many studies in non-equilibrium thermodynamics of matter is that they deal with what is known as local thermodynamic equilibrium.

Local thermodynamic equilibrium of ponderable matter

Local thermodynamic equilibrium of matter[7][14][23][24][25] (see also Keizer (1987)[26]) means that conceptually, for study and analysis, the system can be spatially and temporally divided into 'cells' or 'micro-phases' of small (infinitesimal) size, in which classical thermodynamical equilibrium conditions for matter are fulfilled to good approximation. These conditions are unfulfilled, for example, in very rarefied gases, in which molecular collisions are infrequent; and in the boundary layers of a star, where radiation is passing energy to space; and for interacting fermions at very low temperature, where dissipative processes become ineffective. When these 'cells' are defined, one admits that matter and energy may pass freely between contiguous 'cells', slowly enough to leave the 'cells' in their respective individual local thermodynamic equilibria with respect to intensive variables.
One can think here of two 'relaxation times separated by
order of magnitude.[30] The longer relaxation time is of the
order of magnitude of times taken for the macroscopic dynamical structure of the system to change. The shorter
is of the order of magnitude of times taken for a single
'cell' to reach local thermodynamic equilibrium. If these
two relaxation times are not well separated, then the classical non-equilibrium thermodynamical concept of local thermodynamic equilibrium loses its meaning[30] and other approaches have to be proposed, see for instance Extended
irreversible thermodynamics. For example, in the atmosphere, the speed of sound is much greater than the wind
speed; this favours the idea of local thermodynamic equilibrium of matter for atmospheric heat transfer studies at
altitudes below about 60 km where sound propagates, but
not above 100 km, where, because of the paucity of intermolecular collisions, sound does not propagate.

Milne's 1928 definition of local thermodynamic equilibrium in terms of radiative equilibrium

Milne (1928),[31] thinking about stars, gave a definition of 'local thermodynamic equilibrium' in terms of the thermal radiation of the matter in each small local 'cell'. He defined 'local thermodynamic equilibrium' in a 'cell' by requiring that it macroscopically absorb and spontaneously emit radiation as if it were in radiative equilibrium in a cavity at the temperature of the matter of the 'cell'. Then it strictly obeys Kirchhoff's law of equality of radiative emissivity and absorptivity, with a black body source function. The key to local thermodynamic equilibrium here is that the rate of collisions of ponderable matter particles such as molecules should far exceed the rates of creation and annihilation of photons.

1.5.6 Entropy in evolving systems

It is pointed out[32][33][34][35] by W.T. Grandy Jr that entropy, though it may be defined for a non-equilibrium system, is, when strictly considered, only a macroscopic quantity that refers to the whole system, and is not a dynamical variable and in general does not act as a local potential that describes local physical forces. Under special circumstances, however, one can metaphorically think as if the thermal variables behaved like local physical forces. The approximation that constitutes classical irreversible thermodynamics is built on this metaphoric thinking. This point of view shares many points in common with the concept and the use of entropy in continuum thermomechanics,[36][37][38][39] which evolved completely independently of statistical mechanics and maximum-entropy principles.

1.5.7 Flows and forces

The fundamental relation of classical equilibrium thermodynamics[40]

dS = \frac{1}{T}\, dU + \frac{p}{T}\, dV - \sum_{i=1}^{s} \frac{\mu_i}{T}\, dN_i

expresses the change in entropy dS of a system as a function of the intensive quantities temperature T, pressure p and i-th chemical potential \mu_i, and of the differentials of the extensive quantities energy U, volume V and i-th particle number N_i.

Following Onsager (1931, I),[9] let us extend our considerations to thermodynamically non-equilibrium systems. As a basis, we need locally defined versions of the extensive macroscopic quantities U, V and N_i and of the intensive macroscopic quantities T, p and \mu_i.

For classical non-equilibrium studies, we will consider some new locally defined intensive macroscopic variables. We can, under suitable conditions, derive these new variables by locally defining the gradients and flux densities of the basic locally defined macroscopic quantities.

Such locally defined gradients of intensive macroscopic variables are called 'thermodynamic forces'. They 'drive' flux densities, perhaps misleadingly often called 'fluxes', which are dual to the forces. These quantities are defined in the article on Onsager reciprocal relations.

Establishing the relation between such forces and flux densities is a problem in statistical mechanics. Flux densities (J_i) may be coupled. The article on Onsager reciprocal relations considers the stable near-steady thermodynamically non-equilibrium regime, which has dynamics linear in the forces and flux densities.

In stationary conditions, such forces and associated flux densities are by definition time invariant, as also are the system's locally defined entropy and rate of entropy production. Notably, according to Ilya Prigogine and others, when an open system is in conditions that allow it to reach a stable stationary thermodynamically non-equilibrium state, it organizes itself so as to minimize total entropy production defined locally. This is considered further below.

One wants to take the analysis to the further stage of describing the behaviour of surface and volume integrals of non-stationary local quantities; these integrals are macroscopic fluxes and production rates. In general the dynamics of these integrals are not adequately described by linear equations, though in special cases they can be so described.

1.5.8 The Onsager relations

Main article: Onsager reciprocal relations

Following Section III of Rayleigh (1873),[8] Onsager (1931, I)[9] showed that in the regime where both the flows (J_i) are small and the thermodynamic forces (F_i) vary slowly, the rate of creation of entropy (\sigma) is linearly related to the flows:

\sigma = \sum_i J_i \frac{\partial F_i}{\partial x_i}

and the flows are related to the gradient of the forces, parametrized by a matrix of coefficients conventionally denoted L:

J_i = \sum_j L_{ij} \frac{\partial F_j}{\partial x_j}

from which it follows that:

\sigma = \sum_{i,j} L_{ij} \frac{\partial F_i}{\partial x_i} \frac{\partial F_j}{\partial x_j}

The second law of thermodynamics requires that the matrix L be positive definite. Statistical mechanics considerations involving microscopic reversibility of dynamics imply that the matrix L is symmetric. This fact is called the Onsager reciprocal relations.
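To make the linear flux-force structure above concrete, the following short Python sketch builds a symmetric, positive-definite coefficient matrix and evaluates the fluxes and the entropy production as a quadratic form. It is only an illustration: the 2x2 matrix and the force values are invented, and the simplified algebraic form J = L F, sigma = F^T L F is used in place of the gradient notation of the equations above.

import numpy as np

# Illustrative (made-up) Onsager coefficient matrix for two coupled processes,
# e.g. heat and charge transport; symmetric and positive definite by construction.
L = np.array([[2.0, 0.5],
              [0.5, 1.0]])

F = np.array([0.3, -0.8])   # made-up thermodynamic forces

J = L @ F                   # linear flux-force relation, J_i = sum_j L_ij F_j
sigma = F @ L @ F           # entropy production as a quadratic form in the forces

print("fluxes J =", J)
print("entropy production sigma =", sigma)

# Second law: sigma must be non-negative for every F, guaranteed here because
# L is positive definite; Onsager reciprocity is the symmetry L == L.T.
assert sigma >= 0.0
assert np.allclose(L, L.T)
assert np.all(np.linalg.eigvalsh(L) > 0.0)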

1.5.9 Speculated extremal principles for non-equilibrium processes

Main article: Extremal principles in non-equilibrium thermodynamics

Until recently, prospects for useful extremal principles in this area have seemed clouded. C. Nicolis (1999)[41] concludes that one model of atmospheric dynamics has an attractor which is not a regime of maximum or minimum dissipation; she says this seems to rule out the existence of a global organizing principle, and comments that this is to some extent disappointing; she also points to the difficulty of finding a thermodynamically consistent form of entropy production. Another top expert offers an extensive discussion of the possibilities for principles of extrema of entropy production and of dissipation of energy: Chapter 12 of Grandy (2008)[1] is very cautious, and finds difficulty in defining the 'rate of internal entropy production' in many cases, and finds that sometimes for the prediction of the course of a process, an extremum of the quantity called the rate of dissipation of energy may be more useful than that of the rate of entropy production; this quantity appeared in Onsager's 1931[9] origination of this subject. Other writers have also felt that prospects for general global extremal principles are clouded. Such writers include Glansdorff and Prigogine (1971), Lebon, Jou and Casas-Vázquez (2008), and Šilhavý (1997). A recent proposal may perhaps by-pass those clouded prospects.[42][43]

1.5.10 Applications of non-equilibrium thermodynamics

Non-equilibrium thermodynamics has been successfully applied to describe biological processes such as protein folding/unfolding and transport through membranes. Also, ideas from non-equilibrium thermodynamics and the informatic theory of entropy have been adapted to describe general economic systems.[44][45]

1.5.11 See also

Dissipative system
Entropy production
Autocatalytic reactions and order creation
Bogoliubov-Born-Green-Kirkwood-Yvon hierarchy of equations
Extremal principles in non-equilibrium thermodynamics
Self-organization
Self-organizing criticality
Boltzmann equation
Vlasov equation
Maxwell's demon
Information entropy
Constructal theory
Spontaneous symmetry breaking

1.5.12

References

[1] Grandy, W.T., Jr (2008).


[2] Lebon, G., Jou, D., Casas-Vzquez, J. (2008). Understanding Non-equilibrium Thermodynamics: Foundations, Applications, Frontiers, Springer-Verlag, Berlin, e-ISBN 978-3540-74252-4.
[3] Lieb, E.H., Yngvason, J. (1999), p. 5.
[4] Gyarmati, I. (1967/1970), pp. 812.
[5] Callen, H.B. (1960/1985), 42.
[6] Glansdor, P., Prigogine, I. (1971), Ch. II, 2.
[7] Gyarmati, I. (1967/1970).
[8] Strutt, J. W. (1871). Some General Theorems relating to
Vibrations. Proceedings of the London Mathematical Society s14: 357368. doi:10.1112/plms/s1-4.1.357.
[9] Onsager, L. (1931).
Reciprocal relations in irreversible processes, I.
Physical Review 37
(4):
405426.
Bibcode:1931PhRv...37..405O.
doi:10.1103/PhysRev.37.405.
[10] Lavenda, B.H. (1978). Thermodynamics of Irreversible Processes, Macmillan, London, ISBN 0-333-21616-4.
[11] Gyarmati, I. (1967/1970), pages 4-14.
[12] Ziegler, H., (1983). An Introduction to Thermomechanics,
North-Holland, Amsterdam, ISBN 0-444-86503-9.
[13] Balescu, R. (1975). Equilibrium and Non-equilibrium Statistical Mechanics, Wiley-Interscience, New York, ISBN 0471-04600-0, Section 3.2, pages 64-72.
[14] Glansdor, P., Prigogine, I. (1971). Thermodynamic
Theory of Structure, Stability, and Fluctuations, WileyInterscience, London, 1971, ISBN 0-471-30280-5.
[15] Jou, D., Casas-Vzquez, J., Lebon, G. (1993). Extended Irreversible Thermodynamics, Springer, Berlin, ISBN 3-54055874-8, ISBN 0-387-55874-8.
[16] Eu, B.C. (2002).


[20] Essex, C. (1984c). Radiation and the violation of bilinearity in the irreversible thermodynamics of irreversible
processes. Planetary and Space Science 32 (8): 1035
1043. Bibcode:1984P&SS...32.1035E. doi:10.1016/00320633(84)90060-6
[21] Prigogine, I., Defay, R. (1950/1954). Chemical Thermodynamics, Longmans, Green & Co, London, page 1.
[22] De Groot, S.R., Mazur, P. (1962). Non-equilibrium Thermodynamics, North-Holland, Amsterdam.
[23] Balescu, R. (1975). Equilibrium and Non-equilibrium Statistical Mechanics, John Wiley & Sons, New York, ISBN 0471-04600-0.
[24] Mihalas, D., Weibel-Mihalas, B. (1984). Foundations of
Radiation Hydrodynamics, Oxford University Press, New
York, ISBN 0-19-503437-6.
[25] Schloegl, F. (1989). Probability and Heat: Fundamentals
of Thermostatistics, Freidr. Vieweg & Sohn, Brausnchweig,
ISBN 3-528-06343-2.
[26] Keizer, J. (1987). Statistical Thermodynamics of Nonequilibrium Processes, Springer-Verlag, New York, ISBN 0-38796501-7.
[27] Kondepudi, D. (2008). Introduction to Modern Thermodynamics, Wiley, Chichester UK, ISBN 978-0-470-01598-8,
pages 333-338.
[28] Coleman, B.D., Noll, W. (1963). The thermodynamics of
elastic materials with heat conduction and viscosity, Arch.
Ration. Mach. Analysis, 13: 167178.
[29] Kondepudi, D., Prigogine, I, (1998). Modern Thermodynamics. From Heat Engines to Dissipative Structures, Wiley,
Chichester, 1998, ISBN 0-471-97394-7.
[30] Zubarev D. N.,(1974). Nonequilibrium Statistical Thermodynamics, translated from the Russian by P.J. Shepherd, New
York, Consultants Bureau. ISBN 0-306-10895-X; ISBN
978-0-306-10895-2.

[17] Wildt, R. (1972). Thermodynamics of the gray atmosphere. IV. Entropy transfer and production. Astrophysical Journal 174: 69-77. Bibcode:1972ApJ...174...69W. doi:10.1086/151469.

[18] Essex, C. (1984a). Radiation and the irreversible thermodynamics of climate. Journal of the Atmospheric Sciences 41 (12): 1985-1991. Bibcode:1984JAtS...41.1985E. doi:10.1175/1520-0469(1984)041<1985:RATITO>2.0.CO;2.

[19] Essex, C. (1984b). Minimum entropy production in the steady state and radiative transfer. Astrophysical Journal 285: 279-293. Bibcode:1984ApJ...285..279E. doi:10.1086/162504.

[31] Milne, E.A. (1928). The effect of collisions on monochromatic radiative equilibrium. Monthly Notices of the Royal Astronomical Society 88: 493-502. Bibcode:1928MNRAS..88..493M. doi:10.1093/mnras/88.6.493.

[32] Grandy, W.T., Jr. (2004). Time Evolution in Macroscopic Systems. I. Equations of Motion. Foundations of Physics 34: 1. arXiv:cond-mat/0303290. Bibcode:2004FoPh...34....1G. doi:10.1023/B:FOOP.0000012007.06843.ed.

[33] Grandy, W.T., Jr. (2004). Time Evolution in Macroscopic Systems. II. The Entropy. Foundations of Physics 34: 21. arXiv:cond-mat/0303291. Bibcode:2004FoPh...34...21G. doi:10.1023/B:FOOP.0000012008.36856.c1.

[34] Grandy, W. T., Jr (2004). Time Evolution in Macroscopic Systems. III: Selected Applications. Foundations
of Physics 34 (5): 771. Bibcode:2004FoPh...34..771G.
doi:10.1023/B:FOOP.0000022187.45866.81.
[35] Grandy 2004 see also .
[36] Truesdell, Cliord (1984). Rational Thermodynamics (2
ed.). Springer.
[37] Maugin, Grard A. (2002). Continuum Thermomechanics.
Kluwer.
[38] Gurtin, Morton E. (2010). The Mechanics and Thermodynamics of Continua. Cambridge University Press.
[39] Amendola, Giovambattista (2012). Thermodynamics of
Materials with Memory: Theory and Applications. Springer.
[40] W. Greiner, L. Neise, and H. Stcker (1997), Thermodynamics and Statistical Mechanics (Classical Theoretical Physics) ,Springer-Verlag, New York, P85, 91,
101,108,116, ISBN 0-387-94299-8.
[41] Nicolis, C. (1999). Entropy production and dynamical
complexity in a low-order atmospheric model. Quarterly Journal of the Royal Meteorological Society 125
(557): 18591878. Bibcode:1999QJRMS.125.1859N.
doi:10.1002/qj.49712555718.
[42] Attard, P. (2012).
Optimising Principle for NonEquilibrium Phase Transitions and Pattern Formation with
Results for Heat Convection. arXiv:1208.5105.
[43] Attard, P. (2012). Non-Equilibrium Thermodynamics and
Statistical Mechanics: Foundations and Applications, Oxford
University Press, Oxford UK, ISBN 978-0-19-966276-0.
[44] Pokrovskii, Vladimir (2011).
Econodynamics.
The
Theory of Social Production. http://www.springer.com/
physics/complexity/book/978-94-007-2095-4: Springer,
Dordrecht-Heidelberg-London-New York.
[45] Chen, Jing (2015). The Unity of Science and Economics: A
New Foundation of Economic Theory. http://www.springer.
com/us/book/9781493934645: Springer.

Bibliography of cited references


Callen, H.B. (1960/1985). Thermodynamics and an
Introduction to Thermostatistics, (1st edition 1960) 2nd
edition 1985, Wiley, New York, ISBN 0-471-862568.
Eu, B.C. (2002). Generalized Thermodynamics. The
Thermodynamics of Irreversible Processes and Generalized Hydrodynamics, Kluwer Academic Publishers,
Dordrecht, ISBN 1-4020-0788-4.
Glansdor, P., Prigogine, I. (1971). Thermodynamic
Theory of Structure, Stability, and Fluctuations, WileyInterscience, London, 1971, ISBN 0-471-30280-5.

CHAPTER 1. CHAPTER 1. INTRODUCTION


Grandy, W.T., Jr (2008). Entropy and the Time Evolution of Macroscopic Systems. Oxford University Press.
ISBN 978-0-19-954617-6.
Gyarmati, I. (1967/1970). Non-equilibrium Thermodynamics. Field Theory and Variational Principles,
translated from the Hungarian (1967) by E. Gyarmati
and W.F. Heinz, Springer, Berlin.
Lieb, E.H., Yngvason, J. (1999). 'The physics and
mathematics of the second law of thermodynamics,
Physics Reports, 310: 196. See also this.

1.5.13

Further reading

Ziegler, Hans (1977): An introduction to Thermomechanics. North Holland, Amsterdam. ISBN 0-44411080-1. Second edition (1983) ISBN 0-444-865039.
Kleidon, A., Lorenz, R.D., editors (2005). Nonequilibrium Thermodynamics and the Production of
Entropy, Springer, Berlin. ISBN 3-540-22495-5.
Prigogine, I. (1955/1961/1967). Introduction to Thermodynamics of Irreversible Processes. 3rd edition, Wiley Interscience, New York.
Zubarev D. N. (1974): Nonequilibrium Statistical
Thermodynamics. New York, Consultants Bureau.
ISBN 0-306-10895-X; ISBN 978-0-306-10895-2.
Keizer, J. (1987).
Statistical Thermodynamics
of Nonequilibrium Processes, Springer-Verlag, New
York, ISBN 0-387-96501-7.
Zubarev D. N., Morozov V., Ropke G. (1996): Statistical Mechanics of Nonequilibrium Processes: Basic
Concepts, Kinetic Theory. John Wiley & Sons. ISBN
3-05-501708-0.
Zubarev D. N., Morozov V., Ropke G. (1997): Statistical Mechanics of Nonequilibrium Processes: Relaxation and Hydrodynamic Processes. John Wiley &
Sons. ISBN 3-527-40084-2.
Tuck, Adrian F. (2008). Atmospheric turbulence : a
molecular dynamics perspective. Oxford University
Press. ISBN 978-0-19-923653-4.
Grandy, W.T., Jr (2008). Entropy and the Time Evolution of Macroscopic Systems. Oxford University Press.
ISBN 978-0-19-954617-6.
Kondepudi, D., Prigogine, I. (1998). Modern Thermodynamics: From Heat Engines to Dissipative Structures.
John Wiley & Sons, Chichester. ISBN 0-471-973939.



de Groot S.R., Mazur P. (1984). Non-Equilibrium
Thermodynamics (Dover). ISBN 0-486-64741-2

1.5.14

External links

Stephan Herminghaus Dynamics of Complex Fluids


Department at the Max Planck Institute for Dynamics
and Self Organization
Non-equilibrium Statistical Thermodynamics applied
to Fluid Dynamics and Laser Physics - 1992- book by
Xavier de Hemptinne.
Nonequilibrium Thermodynamics of Small Systems PhysicsToday.org
Into the Cool - 2005 book by Dorion Sagan and Eric
D. Schneider, on nonequilibrium thermodynamics and
evolutionary theory.
Thermodynamics beyond local equilibrium


Chapter 2. Laws of Thermodynamics


2.1 Zeroth law of Thermodynamics
The zeroth law of thermodynamics states that if two
thermodynamic systems are each in thermal equilibrium
with a third, then they are in thermal equilibrium with each
other.

Two systems are said to be in the relation of thermal equilibrium if they are linked by a wall permeable only to heat and they do not change over time.[1] As a convenience of language, systems are sometimes also said to be in a relation of thermal equilibrium if they are not linked so as to be able to transfer heat to each other, but would not do so if they were connected by a wall permeable only to heat. Thermal equilibrium between two systems is a transitive relation.

The physical meaning of the law was expressed by Maxwell in the words: "All heat is of the same kind."[2] For this reason, another statement of the law is "All diathermal walls are equivalent."[3]

The law is important for the mathematical formulation of thermodynamics, which needs the assertion that the relation of thermal equilibrium is an equivalence relation. This information is needed for a mathematical definition of temperature that will agree with the physical existence of valid thermometers.[4]

2.1.1 Zeroth law as equivalence relation

A thermodynamic system is by definition in its own state of internal thermodynamic equilibrium, that is to say, there is no change in its observable state (i.e. macrostate) over time and no flows occur in it. One precise statement of the zeroth law is that the relation of thermal equilibrium is an equivalence relation on pairs of thermodynamic systems.[5] In other words, the set of all systems each in its own state of internal thermodynamic equilibrium may be divided into subsets in which every system belongs to one and only one subset, and is in thermal equilibrium with every other member of that subset, and is not in thermal equilibrium with a member of any other subset. This means that a unique tag can be assigned to every system, and if the tags of two systems are the same, they are in thermal equilibrium with each other, and if different, they are not. This property is used to justify the use of empirical temperature as a tagging system. Empirical temperature provides further relations of thermally equilibrated systems, such as order and continuity with regard to hotness or coldness, but these are not implied by the standard statement of the zeroth law.

If it is defined that a thermodynamic system is in thermal equilibrium with itself (i.e., thermal equilibrium is reflexive),[6] then the zeroth law may be stated as follows:

    If a body A, be in thermal equilibrium with two other bodies, B and C, then B and C are in thermal equilibrium with one another.

This statement asserts that thermal equilibrium is a left-Euclidean relation between thermodynamic systems. If we also define that every thermodynamic system is in thermal equilibrium with itself, then thermal equilibrium is also a reflexive relation. Binary relations that are both reflexive and Euclidean are equivalence relations. Thus, again implicitly assuming reflexivity, the zeroth law is therefore often expressed as a right-Euclidean statement:[7]

    If two systems are in thermal equilibrium with a third system, then they are in thermal equilibrium with each other.

One consequence of an equivalence relationship is that the


equilibrium relationship is symmetric: If A is in thermal
equilibrium with B, then B is in thermal equilibrium with
A. Thus we may say that two systems are in thermal equilibrium with each other, or that they are in mutual equilibrium. Another consequence of equivalence is that thermal
equilibrium is a transitive relationship and is occasionally
expressed as such:[4][8]

    If A is in thermal equilibrium with B and if B is in thermal equilibrium with C, then A is in thermal equilibrium with C.
A reflexive, transitive relationship does not guarantee an equivalence relationship. In order for the above statement to be true, both reflexivity and symmetry must be implicitly assumed.

It is the Euclidean relationships which apply directly to thermometry. An ideal thermometer is a thermometer which does not measurably change the state of the system it is measuring. Assuming that the unchanging reading of an ideal thermometer is a valid tagging system for the equivalence classes of a set of equilibrated thermodynamic systems, then if a thermometer gives the same reading for two systems, those two systems are in thermal equilibrium, and if we thermally connect the two systems, there will be no subsequent change in the state of either one. If the readings are different, then thermally connecting the two systems will cause a change in the states of both systems and when the change is complete, they will both yield the same thermometer reading. The zeroth law provides no information regarding this final reading.
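As a small illustrative aside (the system names and readings below are invented, and this is only a toy model of the tagging idea, not anything from the source text), the following Python sketch groups systems into the disjoint classes of mutual thermal equilibrium by using an ideal-thermometer reading as the tag:

from collections import defaultdict

# Hypothetical ideal-thermometer readings for five systems.
readings = {"A": 298.0, "B": 298.0, "C": 310.5, "D": 310.5, "E": 273.2}

def in_thermal_equilibrium(x, y):
    # Two systems belong to the same class exactly when the ideal thermometer
    # gives the same unchanging reading for both (the zeroth-law tagging criterion).
    return readings[x] == readings[y]

# Partition the systems into disjoint equivalence classes by their tag.
classes = defaultdict(list)
for system, tag in readings.items():
    classes[tag].append(system)

print(dict(classes))   # {298.0: ['A', 'B'], 310.5: ['C', 'D'], 273.2: ['E']}
assert in_thermal_equilibrium("A", "B")        # same tag: mutual equilibrium
assert not in_thermal_equilibrium("A", "C")    # different tags: not in equilibrium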

2.1.2

Foundation of temperature

The zeroth law establishes thermal equilibrium as an equivalence relationship. An equivalence relationship on a set
(such as the set of all systems each in its own state of internal thermodynamic equilibrium) divides that set into a
collection of distinct subsets (disjoint subsets) where any
member of the set is a member of one and only one such
subset. In the case of the zeroth law, these subsets consist of systems which are in mutual equilibrium. This partitioning allows any member of the subset to be uniquely
tagged with a label identifying the subset to which it belongs. Although the labeling may be quite arbitrary,[9] temperature is just such a labeling process which uses the real
number system for tagging. The zeroth law justifies the use of suitable thermodynamic systems as thermometers to provide such a labeling, which yield any number of possible empirical temperature scales, and justifies the use of
the second law of thermodynamics to provide an absolute,
or thermodynamic temperature scale. Such temperature
scales bring additional continuity and ordering (i.e., hot
and cold) properties to the concept of temperature.[7]
In the space of thermodynamic parameters, zones of constant temperature form a surface, that provides a natural
order of nearby surfaces. One may therefore construct a
global temperature function that provides a continuous ordering of states. The dimensionality of a surface of constant
temperature is one less than the number of thermodynamic
parameters, thus, for an ideal gas described with three thermodynamic parameters P, V and N, it is a two-dimensional
surface.
For example, if two systems of ideal gases are in equilibrium, then P 1 V 1 /N 1 = P 2 V 2 /N 2 where Pi is the pressure
in the ith system, Vi is the volume, and Ni is the amount (in
moles, or simply the number of atoms) of gas.
The surface PV/N = const defines surfaces of equal thermodynamic temperature, and one may label them by defining T so that PV/N = RT, where R is some constant. These systems can
now be used as a thermometer to calibrate other systems.
Such systems are known as ideal gas thermometers.
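A minimal numerical sketch of the ideal gas thermometer just described (the pressures, volumes and amounts below are invented example values; R is the molar gas constant):

R = 8.314  # J / (mol K)

def ideal_gas_temperature(p_pascal, v_cubic_metres, n_moles):
    # Empirical temperature label T defined through PV/N = RT.
    return p_pascal * v_cubic_metres / (n_moles * R)

# Two gas systems with equal PV/N carry the same temperature tag, so by the
# zeroth law they would be in mutual thermal equilibrium.
t1 = ideal_gas_temperature(101325.0, 0.0245, 1.0)
t2 = ideal_gas_temperature(202650.0, 0.0245, 2.0)
print(round(t1, 1), round(t2, 1))  # both approximately 298.6 K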
In a sense, focused on in the zeroth law, there is only one kind of diathermal wall or one kind of heat, as expressed by Maxwell's dictum that "All heat is of the same kind".[2] But in another sense, heat is transferred in different ranks, as expressed by Sommerfeld's dictum "Thermodynamics investigates the conditions that govern the transformation of heat into work. It teaches us to recognize temperature as the measure of the work-value of heat. Heat of higher temperature is richer, is capable of doing more work. Work may be regarded as heat of an infinitely high temperature, as unconditionally available heat."[10] This is why temperature is the particular variable indicated by the zeroth law's statement of equivalence.

2.1.3

Physical meaning of the usual statement of the zeroth law

The present article states the zeroth law as it is often summarized in textbooks. Nevertheless, this usual statement
perhaps does not explicitly convey the full physical meaning that underlies it. The underlying physical meaning was
perhaps first clarified by Maxwell in his 1871 textbook.[2] In Carathéodory's (1909) theory, it is postulated that there exist walls permeable only to heat, though heat is not explicitly defined in that paper. This postulate is a physical postulate of existence. It does not, however, as worded just previously, say that there is only one kind of heat. This paper of Carathéodory states as proviso 4 of its account of such walls: "Whenever each of the systems S1 and S2 is made to reach equilibrium with a third system S3 under identical conditions, systems S1 and S2 are in mutual equilibrium."[11] It is the function of this statement in the paper, not there labeled as the zeroth law, to provide not only for the existence of transfer of energy other than by work or transfer of matter, but further to provide that such transfer is unique in the sense that there is only one kind of such wall, and one kind of such transfer. This is signaled in the postulate of this paper of Carathéodory that precisely one non-deformation variable is needed to complete the specification of a thermodynamic state, beyond the necessary deformation variables, which are not restricted in number. It is therefore not exactly clear what Carathéodory means when in the introduction of this paper he writes "It is possible to develop the whole theory without assuming the existence of heat, that is of a quantity that is of a different nature from the normal mechanical quantities."

Maxwell (1871) discusses at some length ideas which he summarizes by the words "All heat is of the same kind".[2] Modern theorists sometimes express this idea by postulating the existence of a unique one-dimensional hotness manifold, into which every proper temperature scale has a monotonic mapping.[12] This may be expressed by the statement that there is only one kind of temperature, regardless of the variety of scales in which it is expressed. Another modern expression of this idea is that "All diathermal walls are equivalent".[13] This might also be expressed by saying that there is precisely one kind of non-mechanical, non-matter-transferring contact equilibrium between thermodynamic systems.

These ideas may be regarded as helping to clarify the physical meaning of the usual statement of the zeroth law of thermodynamics. It is the opinion of Lieb and Yngvason (1999) that the derivation from statistical mechanics of the law of entropy increase is a goal that has so far eluded the deepest thinkers.[14] Thus the idea remains open to consideration that the existence of heat and temperature are needed as coherent primitive concepts for thermodynamics, as expressed, for example, by Maxwell and Planck. On the other hand, Planck in 1926 clarified how the second law can be stated without reference to heat or temperature, by referring to the irreversible and universal nature of friction in natural thermodynamic processes.[15]

2.1.4 History

According to Arnold Sommerfeld, Ralph H. Fowler invented the title 'the zeroth law of thermodynamics' when he was discussing the 1935 text of Saha and Srivastava. They write on page 1 that every physical quantity must be measurable in numerical terms. They presume that temperature is a physical quantity and then deduce the statement "If a body A is in temperature equilibrium with two bodies B and C, then B and C themselves will be in temperature equilibrium with each other." They then in a self-standing paragraph italicize as if to state their basic postulate: "Any of the physical properties of A which change with the application of heat may be observed and utilised for the measurement of temperature." They do not themselves here use the term 'zeroth law of thermodynamics'.[16][17] There are very many statements of these physical ideas in the physics literature long before this text, in very similar language. What was new here was just the label 'zeroth law of thermodynamics'. Fowler, with co-author Edward A. Guggenheim, wrote of the zeroth law as follows:

    ...we introduce the postulate: If two assemblies are each in thermal equilibrium with a third assembly, they are in thermal equilibrium with each other.

They then proposed that it may be shown to follow that the condition for thermal equilibrium between several assemblies is the equality of a certain single-valued function of the thermodynamic states of the assemblies, which may be called the temperature t, any one of the assemblies being used as a "thermometer" reading the temperature t on a suitable scale. This postulate of the "Existence of temperature" could with advantage be known as the zeroth law of thermodynamics. The first sentence of this present article is a version of this statement.[18] It is not explicitly evident in the existence statement of Fowler and Guggenheim that temperature refers to a unique attribute of a state of a system, such as is expressed in the idea of the hotness manifold. Also their statement refers explicitly to statistical mechanical assemblies, not explicitly to macroscopic thermodynamically defined systems.

2.1.5 References

Citations

[1] Carathéodory, C. (1909).
[2] Maxwell, J.C. (1871), p. 57.
[3] Bailyn, M. (1994), pp. 24, 144.
[4] Lieb, E.H., Yngvason, J. (1999), p. 56.
[5] Lieb, E.H., Yngvason, J. (1999), p. 52.
[6] Planck. M. (1914), p. 2.
[7] Buchdahl, H.A. (1966), p. 73.
[8] Kondepudi, D. (2008), p. 7.
[9] Dugdale, J.S. (1996), p. 35.
[10] Sommerfeld, A. (1923), p. 36.
[11] Carathéodory, C. (1909), Section 6.
[12] Serrin, J. (1986), p. 6.
[13] Bailyn, M. (1994), p. 23.
[14] Lieb, E.H., Yngvason, J. (1999), p. 5.
[15] Planck, M. (1926).
[16] Sommerfeld, A. (1951/1955), p. 1.
[17] Saha, M.N., Srivastava, B.N. (1935), p. 1.
[18] Fowler, R., Guggenheim, E.A. (1939/1965), p. 56.

Works cited

Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York, ISBN 978-0-88318-797-5.

Buchdahl, H.A. (1966). The Concepts of Classical Thermodynamics, Cambridge University Press, Cambridge UK.

Carathéodory, C. (1909). "Untersuchungen über die Grundlagen der Thermodynamik", Mathematische Annalen (in German) 67: 355–386. doi:10.1007/BF01450409. A partly reliable translation is to be found at Kestin, J. (1976). The Second Law of Thermodynamics, Dowden, Hutchinson & Ross, Stroudsburg PA.

Dugdale, J.S. (1996). Entropy and its Physical Interpretation, Taylor & Francis, ISBN 0-7484-0569-0.

Fowler, R., Guggenheim, E.A. (1939/1965). Statistical Thermodynamics. A version of Statistical Mechanics for Students of Physics and Chemistry, first printing 1939, reprinted with corrections 1965, Cambridge University Press, Cambridge UK.

Kondepudi, D. (2008). Introduction to Modern Thermodynamics, Wiley, ISBN 978-0470-01598-8.

Lieb, E.H., Yngvason, J. (1999). "The physics and mathematics of the second law of thermodynamics", Physics Reports, 310: 1–96.

Maxwell, J.C. (1871). Theory of Heat, Longmans, Green, and Co., London.

Planck, M. (1914). The Theory of Heat Radiation, a translation by Masius, M. of the second German edition, P. Blakiston's Son & Co., Philadelphia.

Planck, M. (1926). "Über die Begründung des zweiten Hauptsatzes der Thermodynamik", S.B. Preuß. Akad. Wiss., phys. math. Kl.: 453–463.

Saha, M.N., Srivastava, B.N. (1935). A Treatise on Heat (Including Kinetic Theory of Gases, Thermodynamics and Recent Advances in Statistical Thermodynamics), the second and revised edition of A Text Book of Heat, The Indian Press, Allahabad and Calcutta.

Serrin, J. (1986). Chapter 1, 'An Outline of Thermodynamical Structure', pages 3–32, in New Perspectives in Thermodynamics, edited by J. Serrin, Springer, Berlin, ISBN 3-540-15931-2.

Sommerfeld, A. (1923). Atomic Structure and Spectral Lines, translated from the third German edition by H.L. Brose, Methuen, London.

Sommerfeld, A. (1951/1955). Thermodynamics and Statistical Mechanics, vol. 5 of Lectures on Theoretical Physics, edited by F. Bopp, J. Meixner, translated by J. Kestin, Academic Press, New York.

2.1.6 Further reading

Atkins, Peter (2007). Four Laws That Drive the Universe, Oxford University Press, New York, ISBN 978-0-19-923236-9.

2.2 First law of Thermodynamics

The first law of thermodynamics is a version of the law of conservation of energy, adapted for thermodynamic systems. The law of conservation of energy states that the total energy of an isolated system is constant; energy can be transformed from one form to another, but cannot be created or destroyed. The first law is often formulated by stating that the change in the internal energy of a closed system is equal to the amount of heat supplied to the system, minus the amount of work done by the system on its surroundings. Equivalently, perpetual motion machines of the first kind are impossible.

2.2.1 History

Investigations into the nature of heat and work and their relationship began with the invention of the first engines used to extract water from mines. Improvements to such engines so as to increase their efficiency and power output came first from mechanics who tinkered with such machines, but only slowly advanced the art. Deeper investigations that placed those on a mathematical and physical basis came later.

The process of development of the first law of thermodynamics was by way of much investigative trial and error over a period of about half a century. The first full statements of the law came in 1850 from Rudolf Clausius and from William Rankine; Rankine's statement was perhaps not quite as clear and distinct as was Clausius's.[1] A main aspect of the struggle was to deal with the previously proposed caloric theory of heat.

Germain Hess in 1840 stated a conservation law for the so-called 'heat of reaction' for chemical reactions.[2] His law was later recognized as a consequence of the first law of thermodynamics, but Hess's statement was not explicitly concerned with the relation between energy exchanges by heat and work.

According to Truesdell (1980), Julius Robert von Mayer in 1841 made a statement that meant that "in a process at constant pressure, the heat used to produce expansion is universally interconvertible with work", but this is not a general statement of the first law.[3][4]

Original statements: the thermodynamic approach

The original nineteenth century statements of the first law of thermodynamics appeared in a conceptual framework in which transfer of energy as heat was taken as a primitive notion, not defined or constructed by the theoretical development of the framework, but rather presupposed as prior to it and already accepted. The primitive notion of heat was taken as empirically established, especially through calorimetry regarded as a subject in its own right, prior to thermodynamics. Jointly primitive with this notion of heat were the notions of empirical temperature and thermal equilibrium. This framework also took as primitive the notion of transfer of energy as work. This framework did not presume a concept of energy in general, but regarded it as derived or synthesized from the prior notions of heat and work. By one author, this framework has been called the thermodynamic approach.[5]

The first explicit statement of the first law of thermodynamics, by Rudolf Clausius in 1850, referred to cyclic thermodynamic processes.

    In all cases in which work is produced by the agency of heat, a quantity of heat is consumed which is proportional to the work done; and conversely, by the expenditure of an equal quantity of work an equal quantity of heat is produced.[6]

Clausius also stated the law in another form, referring to the existence of a function of state of the system, the internal energy, and expressed it in terms of a differential equation for the increments of a thermodynamic process.[7] This equation may be described as follows:

    In a thermodynamic process involving a closed system, the increment in the internal energy is equal to the difference between the heat accumulated by the system and the work done by it.

Because of its definition in terms of increments, the value of the internal energy of a system is not uniquely defined. It is defined only up to an arbitrary additive constant of integration, which can be adjusted to give arbitrary reference zero levels. This non-uniqueness is in keeping with the abstract mathematical nature of the internal energy. The internal energy is customarily stated relative to a conventionally chosen standard reference state of the system.

The concept of internal energy is considered by Bailyn to be of "enormous interest". Its quantity cannot be immediately measured, but can only be inferred, by differencing actual immediate measurements. Bailyn likens it to the energy states of an atom, that were revealed by Bohr's energy relation hν = E_n'' − E_n'. In each case, an unmeasurable quantity (the internal energy, the atomic energy level) is revealed by considering the difference of measured quantities (increments of internal energy, quantities of emitted or absorbed radiative energy).[8]

Conceptual revision: the mechanical approach

In 1907, George H. Bryan wrote about systems between which there is no transfer of matter (closed systems): "Definition. When energy flows from one system or part of a system to another otherwise than by the performance of mechanical work, the energy so transferred is called heat."[9] This definition may be regarded as expressing a conceptual revision, as follows. This was systematically expounded in 1909 by Constantin Carathéodory, whose attention had been drawn to it by Max Born. Largely through Born's[10] influence, this revised conceptual approach to the definition of heat came to be preferred by many twentieth-century writers. It might be called the mechanical approach.[11]

Energy can also be transferred from one thermodynamic system to another in association with transfer of matter. Born points out that in general such energy transfer is not resolvable uniquely into work and heat moieties. In general, when there is transfer of energy associated with matter transfer, work and heat transfers can be distinguished only when they pass through walls physically separate from those for matter transfer.

The mechanical approach postulates the law of conservation of energy. It also postulates that energy can be transferred from one thermodynamic system to another adiabatically as work, and that energy can be held as the internal energy of a thermodynamic system. It also postulates that energy can be transferred from one thermodynamic system to another by a path that is non-adiabatic and is unaccompanied by matter transfer. Initially, it cleverly (according to Bailyn) refrains from labelling as 'heat' such non-adiabatic, unaccompanied transfer of energy. It rests on the primitive notion of walls, especially adiabatic walls and non-adiabatic walls, defined as follows. Temporarily, only for purpose of this definition, one can prohibit transfer of energy as work across a wall of interest. Then walls of interest fall into two classes, (a) those such that arbitrary systems separated by them remain independently in their own previously established respective states of internal thermodynamic equilibrium; they are defined as adiabatic; and (b) those without such independence; they are defined as non-adiabatic.[12]

This approach derives the notions of transfer of energy as heat, and of temperature, as theoretical developments, not taking them as primitives. It regards calorimetry as a derived theory. It has an early origin in the nineteenth century, for example in the work of Helmholtz,[13] but also in the work of many others.[5]

2.2.2 Conceptually revised statement, according to the mechanical approach

The revised statement of the first law postulates that a change in the internal energy of a system due to any arbitrary process, that takes the system from a given initial thermodynamic state to a given final equilibrium thermodynamic state, can be determined through the physical existence, for those given states, of a reference process that occurs purely through stages of adiabatic work.

The revised statement is then:

    For a closed system, in any arbitrary process of interest that takes it from an initial to a final state of internal thermodynamic equilibrium, the change of internal energy is the same as that for a reference adiabatic work process that links those two states. This is so regardless of the path of the process of interest, and regardless of whether it is an adiabatic or a non-adiabatic process. The reference adiabatic work process may be chosen arbitrarily from amongst the class of all such processes.

This statement is much less close to the empirical basis than are the original statements,[14] but is often regarded as conceptually parsimonious in that it rests only on the concepts of adiabatic work and of non-adiabatic processes, not on the concepts of transfer of energy as heat and of empirical temperature that are presupposed by the original statements. Largely through the influence of Max Born, it is often regarded as theoretically preferable because of this conceptual parsimony. Born particularly observes that the revised approach avoids thinking in terms of what he calls the "imported engineering" concept of heat engines.[10]

Basing his thinking on the mechanical approach, Born in 1921, and again in 1949, proposed to revise the definition of heat.[10][15] In particular, he referred to the work of Constantin Carathéodory, who had in 1909 stated the first law without defining quantity of heat.[16] Born's definition was specifically for transfers of energy without transfer of matter, and it has been widely followed in textbooks (examples:[17][18][19]). Born observes that a transfer of matter between two systems is accompanied by a transfer of internal energy that cannot be resolved into heat and work components. There can be pathways to other systems, spatially separate from that of the matter transfer, that allow heat and work transfer independent of and simultaneous with the matter transfer. Energy is conserved in such transfers.

2.2.3 Description

The first law of thermodynamics for a closed system was expressed in two ways by Clausius. One way referred to cyclic processes and the inputs and outputs of the system, but did not refer to increments in the internal state of the system. The other way referred to an incremental change in the internal state of the system, and did not expect the process to be cyclic.

A cyclic process is one that can be repeated indefinitely often, returning the system to its initial state. Of particular interest for a single cycle of a cyclic process are the net work done, and the net heat taken in (or 'consumed', in Clausius' statement), by the system.

In a cyclic process in which the system does net work on its surroundings, it is observed to be physically necessary not only that heat be taken into the system, but also, importantly, that some heat leave the system. The difference is the heat converted by the cycle into work. In each repetition of a cyclic process, the net work done by the system, measured in mechanical units, is proportional to the heat consumed, measured in calorimetric units. The constant of proportionality is universal and independent of the system, and in 1845 and 1847 was measured by James Joule, who described it as the mechanical equivalent of heat.

In a non-cyclic process, the change in the internal energy of a system is equal to the net energy added as heat to the system minus the net work done by the system, both being measured in mechanical units. Taking ΔU as a change in internal energy, one writes

    ΔU = Q − W    (sign convention of Clausius and generally in this article),

where Q denotes the net quantity of heat supplied to the system by its surroundings and W denotes the net work done by the system. This sign convention is implicit in Clausius' statement of the law given above. It originated with the study of heat engines that produce useful work by consumption of heat.

Often nowadays, however, writers use the IUPAC convention by which the first law is formulated with work done on the system by its surroundings having a positive sign.

With this now often used sign convention for work, the first law for a closed system may be written:

    ΔU = Q + W    (sign convention of IUPAC).[20]

This convention follows physicists such as Max Planck,[21] and considers all net energy transfers to the system as positive and all net energy transfers from the system as negative, irrespective of any use for the system as an engine or other device.

When a system expands in a fictive quasistatic process, the work done by the system on the environment is the product, P dV, of pressure, P, and volume change, dV, whereas the work done on the system is −P dV. Using either sign convention for work, the change in internal energy of the system is:

    dU = δQ − P dV    (quasi-static process),

where δQ denotes the infinitesimal increment of heat supplied to the system from its surroundings.

Work and heat are expressions of actual physical processes of supply or removal of energy, while the internal energy U is a mathematical abstraction that keeps account of the exchanges of energy that befall the system. Thus the term heat for Q means that amount of energy added or removed by conduction of heat or by thermal radiation, rather than referring to a form of energy within the system. Likewise, the term work energy for W means "that amount of energy gained or lost as the result of work". Internal energy is a property of the system, whereas work done and heat supplied are not. A significant result of this distinction is that a given internal energy change ΔU can be achieved by, in principle, many combinations of heat and work.
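
The two sign conventions above, and the fact that many different combinations of heat and work can produce the same change of internal energy, can be written out in a few lines of Python. This is only an illustrative sketch; the numerical values are invented and are not taken from any cited source.

    # Minimal numerical sketch of the two sign conventions discussed above.
    # The numbers are illustrative only; they are not data from the text.

    def delta_U_clausius(Q, W_by_system):
        """First law with the Clausius convention: dU = Q - W,
        where W is the work done BY the system on its surroundings."""
        return Q - W_by_system

    def delta_U_iupac(Q, W_on_system):
        """First law with the IUPAC convention: dU = Q + W,
        where W is the work done ON the system by its surroundings."""
        return Q + W_on_system

    # One and the same physical process, expressed in both conventions:
    Q = 150.0            # joules of heat supplied to the system
    W_by = 100.0         # joules of work done by the system
    W_on = -W_by         # the same work, seen from the surroundings' side

    assert delta_U_clausius(Q, W_by) == delta_U_iupac(Q, W_on) == 50.0

    # Different combinations of heat and work can produce the same change of
    # internal energy, because U is a state function while Q and W are not:
    for Q, W_by in [(50.0, 0.0), (150.0, 100.0), (0.0, -50.0)]:
        print(Q, W_by, delta_U_clausius(Q, W_by))   # 50.0 J in each case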

2.2.4 Various statements of the law for closed systems

The law is of great importance and generality and is consequently thought of from several points of view. Most careful textbook statements of the law express it for closed systems. It is stated in several ways, sometimes even by the same author.[5][22]

For the thermodynamics of closed systems, the distinction between transfers of energy as work and as heat is central and is within the scope of the present article. For the thermodynamics of open systems, such a distinction is beyond the scope of the present article, but some limited comments are made on it in the section below headed 'First law of thermodynamics for open systems'.

There are two main ways of stating a law of thermodynamics, physically or mathematically. They should be logically coherent and consistent with one another.[23]

An example of a physical statement is that of Planck (1897/1903):

    It is in no way possible, either by mechanical, thermal, chemical, or other devices, to obtain perpetual motion, i.e. it is impossible to construct an engine which will work in a cycle and produce continuous work, or kinetic energy, from nothing.[24]

This physical statement is restricted neither to closed systems nor to systems with states that are strictly defined only for thermodynamic equilibrium; it has meaning also for open systems and for systems with states that are not in thermodynamic equilibrium.

An example of a mathematical statement is that of Crawford (1963):

    For a given system we let E_kin = large-scale mechanical energy, E_pot = large-scale potential energy, and E_tot = total energy. The first two quantities are specifiable in terms of appropriate mechanical variables, and by definition

        E_tot = E_kin + E_pot + U.

    For any finite process, whether reversible or irreversible,

        ΔE_tot = ΔE_kin + ΔE_pot + ΔU.

    The first law in a form that involves the principle of conservation of energy more generally is

        ΔE_tot = Q + W.

    Here Q and W are heat and work added, with no restrictions as to whether the process is reversible, quasistatic, or irreversible. [Warner, Am. J. Phys., 29, 124 (1961)][25]

This statement by Crawford, for W, uses the sign convention of IUPAC, not that of Clausius. Though it does not explicitly say so, this statement refers to closed systems, and to internal energy U defined for bodies in states of thermodynamic equilibrium, which possess well-defined temperatures.
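
Crawford's form of the law is a piece of energy bookkeeping, and can be illustrated numerically. The following sketch uses the IUPAC sign convention of his statement; all the energy values are invented purely for illustration.

    # A toy bookkeeping of Crawford's statement, E_tot = E_kin + E_pot + U,
    # with dE_tot = Q + W in the IUPAC sign convention. Values are invented.

    def total_energy(E_kin, E_pot, U):
        # By definition, the total energy of the (closed) body.
        return E_kin + E_pot + U

    # Initial state of some closed body (joules):
    E_kin1, E_pot1, U1 = 10.0, 200.0, 5000.0

    # A finite process: heat and work added to the body (IUPAC signs):
    Q, W = 120.0, -20.0      # 120 J of heat in, 20 J of work done by the body

    # Suppose the large-scale mechanical energies change as follows:
    E_kin2, E_pot2 = 30.0, 180.0

    # The first law then fixes the change in internal energy:
    dE_tot = Q + W
    U2 = U1 + dE_tot - (E_kin2 - E_kin1) - (E_pot2 - E_pot1)

    assert abs(total_energy(E_kin2, E_pot2, U2)
               - total_energy(E_kin1, E_pot1, U1) - (Q + W)) < 1e-9
    print(U2)   # 5100.0 J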
The history of statements of the law for closed systems has two main periods, before and after the work of Bryan (1907),[26] of Carathéodory (1909),[16] and the approval of Carathéodory's work given by Born (1921).[15] The earlier traditional versions of the law for closed systems are nowadays often considered to be out of date.

Carathéodory's celebrated presentation of equilibrium thermodynamics[16] refers to closed systems, which are allowed to contain several phases connected by internal walls of various kinds of impermeability and permeability (explicitly including walls that are permeable only to heat). Carathéodory's 1909 version of the first law of thermodynamics was stated in an axiom which refrained from defining or mentioning temperature or quantity of heat transferred. That axiom stated that the internal energy of a phase in equilibrium is a function of state, that the sum of the internal energies of the phases is the total internal energy of the system, and that the value of the total internal energy of the system is changed by the amount of work done adiabatically on it, considering work as a form of energy. That article considered this statement to be an expression of the law of conservation of energy for such systems. This version is nowadays widely accepted as authoritative, but is stated in slightly varied ways by different authors.

Such statements of the first law for closed systems assert the existence of internal energy as a function of state defined in terms of adiabatic work. Thus heat is not defined calorimetrically or as due to temperature difference. It is defined as a residual difference between change of internal energy and work done on the system, when that work does not account for the whole of the change of internal energy and the system is not adiabatically isolated.[17][18][19]

The 1909 Carathéodory statement of the law in axiomatic form does not mention heat or temperature, but the equilibrium states to which it refers are explicitly defined by variable sets that necessarily include non-deformation variables, such as pressures, which, within reasonable restrictions, can be rightly interpreted as empirical temperatures,[27] and the walls connecting the phases of the system are explicitly defined as possibly impermeable to heat or permeable only to heat.

According to Münster (1970), "A somewhat unsatisfactory aspect of Carathéodory's theory is that a consequence of the Second Law must be considered at this point [in the statement of the first law], i.e. that it is not always possible to reach any state 2 from any other state 1 by means of an adiabatic process." Münster instances that no adiabatic process can reduce the internal energy of a system at constant volume.[17] Carathéodory's paper asserts that its statement of the first law corresponds exactly to Joule's experimental arrangement, regarded as an instance of adiabatic work. It does not point out that Joule's experimental arrangement performed essentially irreversible work, through friction of paddles in a liquid, or passage of electric current through a resistance inside the system, driven by motion of a coil and inductive heating, or by an external current source, which can access the system only by the passage of electrons, and so is not strictly adiabatic, because electrons are a form of matter, which cannot penetrate adiabatic walls. The paper goes on to base its main argument on the possibility of quasi-static adiabatic work, which is essentially reversible. The paper asserts that it will avoid reference to Carnot cycles, and then proceeds to base its argument on cycles of forward and backward quasi-static adiabatic stages, with isothermal stages of zero magnitude.

Sometimes the concept of internal energy is not made explicit in the statement.[28][29][30]

Sometimes the existence of the internal energy is made explicit but work is not explicitly mentioned in the statement of the first postulate of thermodynamics. Heat supplied is then defined as the residual change in internal energy after work has been taken into account, in a non-adiabatic process.[31]

A respected modern author states the first law of thermodynamics as "Heat is a form of energy", which explicitly mentions neither internal energy nor adiabatic work. Heat is defined as energy transferred by thermal contact with a reservoir, which has a temperature, and is generally so large that addition and removal of heat do not alter its temperature.[32]

A current student text on chemistry defines heat thus: "heat is the exchange of thermal energy between a system and its surroundings caused by a temperature difference." The author then explains how heat is defined or measured by calorimetry, in terms of heat capacity, specific heat capacity, molar heat capacity, and temperature.[33]

A respected text disregards Carathéodory's exclusion of mention of heat from the statement of the first law for closed systems, and admits heat calorimetrically defined along with work and internal energy.[34] Another respected text defines heat exchange as determined by temperature difference, but also mentions that the Born (1921) version is "completely rigorous".[35] These versions follow the traditional approach that is now considered out of date, exemplified by that of Planck (1897/1903).[36]

2.2.5 Evidence for the first law of thermodynamics for closed systems

The first law of thermodynamics for closed systems was originally induced from empirically observed evidence, including calorimetric evidence. It is nowadays, however, taken to provide the definition of heat via the law of conservation of energy and the definition of work in terms of changes in the external parameters of a system. The original discovery of the law was gradual over a period of perhaps half a century or more, and some early studies were in terms of cyclic processes.[1]

The following is an account in terms of changes of state of a closed system through compound processes that are not necessarily cyclic. This account first considers processes for which the first law is easily verified because of their simplicity, namely adiabatic processes (in which there is no transfer as heat) and adynamic processes (in which there is no transfer as work).

Adiabatic processes

In an adiabatic process, there is transfer of energy as work but not as heat. For all adiabatic processes that take a system from a given initial state to a given final state, irrespective of how the work is done, the respective eventual total quantities of energy transferred as work are one and the same, determined just by the given initial and final states. The work done on the system is defined and measured by changes in mechanical or quasi-mechanical variables external to the system. Physically, adiabatic transfer of energy as work requires the existence of adiabatic enclosures.

For instance, in Joule's experiment, the initial system is a tank of water with a paddle wheel inside. If we isolate the tank thermally, and move the paddle wheel with a pulley and a weight, we can relate the increase in temperature with the distance descended by the mass. Next, the system is returned to its initial state, isolated again, and the same amount of work is done on the tank using different devices (an electric motor, a chemical battery, a spring, ...). In every case, the amount of work can be measured independently. The return to the initial state is not conducted by doing adiabatic work on the system. The evidence shows that the final state of the water (in particular, its temperature and volume) is the same in every case. It is irrelevant whether the work is electrical, mechanical, chemical, ..., or whether it is done suddenly or slowly, as long as it is performed in an adiabatic way, that is to say, without heat transfer into or out of the system.

Evidence of this kind shows that to increase the temperature of the water in the tank, the qualitative kind of adiabatically performed work does not matter. No qualitative kind of adiabatic work has ever been observed to decrease the temperature of the water in the tank.

A change from one state to another, for example an increase of both temperature and volume, may be conducted in several stages, for example by externally supplied electrical work on a resistor in the body, and adiabatic expansion allowing the body to do work on the surroundings. It needs to be shown that the time order of the stages, and their relative magnitudes, does not affect the amount of adiabatic work that needs to be done for the change of state. According to one respected scholar: "Unfortunately, it does not seem that experiments of this kind have ever been carried out carefully. ... We must therefore admit that the statement which we have enunciated here, and which is equivalent to the first law of thermodynamics, is not well founded on direct experimental evidence."[14] Another expression of this view is "... no systematic precise experiments to verify this generalization directly have ever been attempted."[37]

This kind of evidence, of independence of sequence of stages, combined with the above-mentioned evidence, of independence of qualitative kind of work, would show the existence of an important state variable that corresponds with adiabatic work, but not that such a state variable represented a conserved quantity. For the latter, another step of evidence is needed, which may be related to the concept of reversibility, as mentioned below.
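
The Joule-type evidence described above can be put into rough numbers. The sketch below compares the adiabatic work supplied by a falling weight with the same quantity of work supplied electrically; the masses, height, current, resistance and the specific heat capacity of water are assumed values chosen only for illustration, and the point is merely that equal adiabatic work of any qualitative kind corresponds to the same change of state.

    # Rough numerical sketch of the Joule-type evidence described above.
    # All numbers are assumptions for illustration, not measurements.

    g = 9.81           # m/s^2, gravitational acceleration
    c_water = 4186.0   # J/(kg*K), specific heat capacity of water (approx.)

    m_water = 1.0      # kg of water in the thermally isolated tank
    m_weight = 10.0    # kg weight driving the paddle wheel
    height = 50.0      # m through which the weight descends

    # Adiabatic work done on the water by the paddle wheel:
    W_paddle = m_weight * g * height            # about 4905 J

    # The same quantity of work done instead by an electric heater
    # (current I through resistance R for time t):
    I, R = 2.0, 10.0                            # A, ohm (assumed)
    t = W_paddle / (I**2 * R)                   # time chosen to match the work
    W_electric = I**2 * R * t

    # Equal adiabatic work, whatever its qualitative kind, produces the same
    # change of state, read off here as a temperature rise:
    dT_paddle = W_paddle / (m_water * c_water)
    dT_electric = W_electric / (m_water * c_water)
    print(round(dT_paddle, 3), round(dT_electric, 3))   # both about 1.172 K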

That important state variable was first recognized and denoted U by Clausius in 1850, but he did not then name it, and he defined it in terms not only of work but also of heat transfer in the same process. It was also independently recognized in 1850 by Rankine, who also denoted it U; and in 1851 by Kelvin, who then called it "mechanical energy", and later "intrinsic energy". In 1865, after some hesitation, Clausius began calling his state function U "energy". In 1882 it was named as the internal energy by Helmholtz.[38]

If only adiabatic processes were of interest, and heat could be ignored, the concept of internal energy would hardly arise or be needed. The relevant physics would be largely covered by the concept of potential energy, as was intended in the 1847 paper of Helmholtz on the principle of conservation of energy, though that did not deal with forces that cannot be described by a potential, and thus did not fully justify the principle. Moreover, that paper was critical of the early work of Joule that had by then been performed.[39]

A great merit of the internal energy concept is that it frees thermodynamics from a restriction to cyclic processes, and allows a treatment in terms of thermodynamic states.

In an adiabatic process, adiabatic work takes the system either from a reference state O with internal energy U(O) to an arbitrary one A with internal energy U(A), or from the state A to the state O:

    U(A) = U(O) − W_{OA}^{adiabatic}   or   U(O) = U(A) − W_{AO}^{adiabatic}.

Except under the special, and strictly speaking fictional, condition of reversibility, only one of the processes, adiabatic O → A or adiabatic A → O, is empirically feasible by a simple application of externally supplied work. The reason for this is given as the second law of thermodynamics and is not considered in the present article.

The fact of such irreversibility may be dealt with in two main ways, according to different points of view:

Since the work of Bryan (1907), the most accepted way to deal with it nowadays, followed by Carathéodory,[16][19][40] is to rely on the previously established concept of quasi-static processes,[41][42][43] as follows. Actual physical processes of transfer of energy as work are always at least to some degree irreversible. The irreversibility is often due to mechanisms known as dissipative, that transform bulk kinetic energy into internal energy. Examples are friction and viscosity. If the process is performed more slowly, the frictional or viscous dissipation is less. In the limit of infinitely slow performance, the dissipation tends to zero and then the limiting process, though fictional rather than actual, is notionally reversible, and is called quasi-static. Throughout the course of the fictional limiting quasi-static process, the internal intensive variables of the system are equal to the external intensive variables, those that describe the reactive forces exerted by the surroundings.[44] This can be taken to justify the formula

    (1)   W_{AO}^{quasi-static adiabatic} = − W_{OA}^{quasi-static adiabatic}.

Another way to deal with it is to allow that experiments with processes of heat transfer to or from the system may be used to justify the formula (1) above. Moreover, it deals to some extent with the problem of lack of direct experimental evidence that the time order of stages of a process does not matter in the determination of internal energy. This way does not provide theoretical purity in terms of adiabatic work processes, but is empirically feasible, and is in accord with experiments actually done, such as the Joule experiments mentioned just above, and with older traditions.

The formula (1) above allows that, to go by processes of quasi-static adiabatic work from the state A to the state B, we can take a path that goes through the reference state O, since the quasi-static adiabatic work is independent of the path:

    W_{AB}^{adiabatic, quasi-static} = W_{AO}^{adiabatic, quasi-static} + W_{OB}^{adiabatic, quasi-static} = − W_{OA}^{adiabatic, quasi-static} + W_{OB}^{adiabatic, quasi-static}.

This kind of empirical evidence, coupled with theory of this kind, largely justifies the following statement:

    For all adiabatic processes between two specified states of a closed system of any nature, the net work done is the same regardless of the details of the process, and determines a state function called internal energy, U.
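
Read as a recipe, the adiabatic-work definition of U sketched above amounts to a small piece of bookkeeping: pick a reference state O, assign it an arbitrary value U(O), and assign every other state the reference value minus the adiabatic work to reach it from O. The following sketch does exactly that; the work values are invented for illustration only.

    # Sketch of assigning internal energies from adiabatic work, as in the
    # formulas above. Work values are invented; W[(X, Y)] is the work done
    # BY the system in an adiabatic process X -> Y.

    U_ref = 0.0                      # arbitrary additive constant: U(O) = 0
    W_adiabatic = {                  # hypothetical measured adiabatic works (J)
        ("O", "A"): -300.0,          # 300 J of work done ON the system, O -> A
        ("O", "B"): -500.0,
    }

    def U(state):
        # U(X) = U(O) - W_OX^adiabatic, as in the text.
        if state == "O":
            return U_ref
        return U_ref - W_adiabatic[("O", state)]

    # Formula (1): for quasi-static adiabatic processes W_AO = -W_OA, so the
    # work A -> B through the reference state O is fixed by the state values:
    W_AB = -W_adiabatic[("O", "A")] + W_adiabatic[("O", "B")]   # W_AO + W_OB
    assert U("B") - U("A") == -W_AB
    print(U("A"), U("B"), W_AB)      # 300.0 500.0 -200.0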

Adynamic processes

See also: Thermodynamic processes

A complementary observable aspect of the first law is about heat transfer. Adynamic transfer of energy as heat can be measured empirically by changes in the surroundings of the system of interest by calorimetry. This again requires the existence of adiabatic enclosure of the entire process, system and surroundings, though the separating wall between the surroundings and the system is thermally conductive or radiatively permeable, not adiabatic. A calorimeter can rely on measurement of sensible heat, which requires the existence of thermometers and measurement of temperature change in bodies of known sensible heat capacity under specified conditions; or it can rely on the measurement of latent heat, through measurement of masses of material that change phase, at temperatures fixed by the occurrence of phase changes under specified conditions in bodies of known latent heat of phase change. The calorimeter can be calibrated by adiabatically doing externally determined work on it. The most accurate method is by passing an electric current from outside through a resistance inside the calorimeter. The calibration allows comparison of calorimetric measurement of quantity of heat transferred with quantity of energy transferred as work. According to one textbook, "The most common device for measuring ΔU is an adiabatic bomb calorimeter."[45] According to another textbook, "Calorimetry is widely used in present day laboratories."[46] According to one opinion, "Most thermodynamic data come from calorimetry..."[47] According to another opinion, "The most common method of measuring heat is with a calorimeter."[48]
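
The electrical calibration just described can be sketched numerically. In the toy calculation below, the calorimeter's effective heat capacity is first fixed by a known quantity of electrical work, and an unknown quantity of heat is then inferred from a later temperature change; the current, resistance, time and temperature figures are invented for illustration.

    # Toy sketch of calibrating a calorimeter with electrical work and then
    # using it to measure heat. All numbers are invented for illustration.

    # Calibration run: electrical work W = I^2 * R * t done inside the calorimeter.
    I = 1.5          # A
    R = 20.0         # ohm
    t = 600.0        # s
    W_cal = I**2 * R * t          # 27,000 J of work, done adiabatically

    dT_cal = 3.0                  # K, observed temperature rise during calibration
    C_effective = W_cal / dT_cal  # J/K, effective heat capacity of the calorimeter

    # Measurement run: some process releases an unknown quantity of heat Q
    # into the same calorimeter, and a temperature rise is observed.
    dT_meas = 1.2                 # K
    Q = C_effective * dT_meas
    print(C_effective, Q)         # 9000.0 J/K, 10800.0 J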

When the system evolves with transfer of energy as heat, without energy being transferred as work, in an adynamic process,[49] the heat transferred to the system is equal to the increase in its internal energy:

    Q_{AB}^{adynamic} = ΔU.

General case for reversible processes

Heat transfer is practically reversible when it is driven by practically negligibly small temperature gradients. Work transfer is practically reversible when it occurs so slowly that there are no frictional effects within the system; frictional effects outside the system should also be zero if the process is to be globally reversible. For a particular reversible process in general, the work done reversibly on the system, W_{AB}^{path P_0, reversible}, and the heat transferred reversibly to the system, Q_{AB}^{path P_0, reversible}, are not required to occur respectively adiabatically or adynamically, but they must belong to the same particular process defined by its particular reversible path, P_0, through the space of thermodynamic states. Then the work and heat transfers can occur and be calculated simultaneously.

Putting the two complementary aspects together, the first law for a particular reversible process can be written

    W_{AB}^{path P_0, reversible} + Q_{AB}^{path P_0, reversible} = ΔU.

This combined statement is the expression of the first law of thermodynamics for reversible processes for closed systems.

In particular, if no work is done on a thermally isolated closed system, we have

    ΔU = 0.

This is one aspect of the law of conservation of energy and can be stated:

    The internal energy of an isolated system remains constant.

General case for irreversible processes

If, in a process of change of state of a closed system, the energy transfer is not under a practically zero temperature gradient and practically frictionless, then the process is irreversible. Then the heat and work transfers may be difficult to calculate, and irreversible thermodynamics is called for. Nevertheless, the first law still holds and provides a check on the measurements and calculations of the work done irreversibly on the system, W_{AB}^{path P_1, irreversible}, and the heat transferred irreversibly to the system, Q_{AB}^{path P_1, irreversible}, which belong to the same particular process defined by its particular irreversible path, P_1, through the space of thermodynamic states.

    W_{AB}^{path P_1, irreversible} + Q_{AB}^{path P_1, irreversible} = ΔU.

This means that the internal energy U is a function of state and that the internal energy change ΔU between two states is a function only of the two states.

Overview of the weight of evidence for the law

The first law of thermodynamics is so general that its predictions cannot all be directly tested. In many properly conducted experiments it has been precisely supported, and never violated. Indeed, within its scope of applicability, the law is so reliably established that, nowadays, rather than experiment being considered as testing the accuracy of the law, it is more practical and realistic to think of the law as testing the accuracy of experiment. An experimental result that seems to violate the law may be assumed to be inaccurate or wrongly conceived, for example due to failure to account for an important physical factor. Thus, some may regard it as a principle more abstract than a law.

2.2.6 State functional formulation for infinitesimal processes

When the heat and work transfers in the equations above are infinitesimal in magnitude, they are often denoted by δ, rather than exact differentials denoted by d, as a reminder that heat and work do not describe the state of any system. The integral of an inexact differential depends upon the particular path taken through the space of thermodynamic parameters, while the integral of an exact differential depends only upon the initial and final states. If the initial and final states are the same, then the integral of an inexact differential may or may not be zero, but the integral of an exact differential is always zero. The path taken by a thermodynamic system through a chemical or physical change is known as a thermodynamic process.
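
The path dependence of the inexact differentials δQ and δW, against the path independence of dU, can be checked numerically. The sketch below assumes a monatomic ideal gas purely for the sake of the example and compares two different quasi-static paths between the same two states: the work differs, the heat differs, but the change of internal energy, computed from the state variables alone, is the same.

    # Numerical check that work and heat are path functions while U is a
    # state function. A monatomic ideal gas is assumed only for illustration.
    import math

    n, R = 1.0, 8.314           # mol, J/(mol*K)
    Cv = 1.5 * R                # monatomic ideal gas

    def U_of(T):                # internal energy relative to an arbitrary zero
        return n * Cv * T

    # Two states of the same amount of gas: temperature (K) and volume (m^3)
    T1, V1 = 300.0, 0.02
    T2, V2 = 450.0, 0.03

    # Path A: heat at constant volume V1 to T2, then expand isothermally to V2.
    W_A = n * R * T2 * math.log(V2 / V1)          # work done BY the gas
    # Path B: expand isothermally at T1 to V2, then heat at constant volume V2.
    W_B = n * R * T1 * math.log(V2 / V1)

    dU = U_of(T2) - U_of(T1)                      # state function: path-free
    Q_A = dU + W_A                                # first law, Clausius signs
    Q_B = dU + W_B

    print(round(W_A, 1), round(W_B, 1))           # different work on each path
    print(round(Q_A, 1), round(Q_B, 1))           # different heat on each path
    print(round(dU, 1))                           # same change of U either way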
The first law for a closed homogeneous system may be stated in terms that include concepts that are established in the second law. The internal energy U may then be expressed as a function of the system's defining state variables S, entropy, and V, volume: U = U(S, V). In these terms, T, the system's temperature, and P, its pressure, are partial derivatives of U with respect to S and V. These variables are important throughout thermodynamics, though not necessary for the statement of the first law. Rigorously, they are defined only when the system is in its own state of internal thermodynamic equilibrium. For some purposes, the concepts provide good approximations for scenarios sufficiently near to the system's internal thermodynamic equilibrium.

The first law requires that:

    dU = δQ − δW    (closed system, general process, quasi-static or irreversible).

Then, for the fictive case of a reversible process, dU can be written in terms of exact differentials. One may imagine reversible changes, such that there is at each instant negligible departure from thermodynamic equilibrium within the system. This excludes isochoric work. Then, mechanical work is given by δW = P dV and the quantity of heat added can be expressed as δQ = T dS. For these conditions

    dU = T dS − P dV    (closed system, reversible process).

While this has been shown here for reversible changes, it is valid in general, as U can be considered as a thermodynamic state function of the defining state variables S and V:

    (2)   dU = T dS − P dV    (closed system, general process, quasi-static or irreversible).

Equation (2) is known as the fundamental thermodynamic relation for a closed system in the energy representation, for which the defining state variables are S and V, with respect to which T and P are partial derivatives of U.[50][51][52] It is only in the fictive reversible case, when isochoric work is excluded, that the work done and heat transferred are given by P dV and T dS.

In the case of a closed system in which the particles of the system are of different types and, because chemical reactions may occur, their respective numbers are not necessarily constant, the fundamental thermodynamic relation for dU becomes:

    dU = T dS − P dV + Σ_i μ_i dN_i,

where dN_i is the (small) increase in amount of type-i particles in the reaction, and μ_i is known as the chemical potential of the type-i particles in the system. If dN_i is expressed in mol then μ_i is expressed in J/mol. If the system has more external mechanical variables than just the volume that can change, the fundamental thermodynamic relation further generalizes to:

    dU = T dS − Σ_i X_i dx_i + Σ_j μ_j dN_j.

Here the X_i are the generalized forces corresponding to the external variables x_i. The parameters X_i are independent of the size of the system and are called intensive parameters, and the x_i are proportional to the size and are called extensive parameters.

For an open system, there can be transfers of particles as well as energy into or out of the system during a process. For this case, the first law of thermodynamics still holds, in the form that the internal energy is a function of state and the change of internal energy in a process is a function only of its initial and final states, as noted in the section below headed 'First law of thermodynamics for open systems'.

A useful idea from mechanics is that the energy gained by a particle is equal to the force applied to the particle multiplied by the displacement of the particle while that force is applied. Now consider the first law without the heating term: dU = −P dV. The pressure P can be viewed as a force (and in fact has units of force per unit area) while dV is the displacement (with units of distance times area). We may say, with respect to this work term, that a pressure difference forces a transfer of volume, and that the product of the two (work) is the amount of energy transferred out of the system as a result of the process. If one were to make this term negative then this would be the work done on the system.

It is useful to view the T dS term in the same light: here the temperature is known as a "generalized" force (rather than

an actual mechanical force) and the entropy is a generalized displacement.

Similarly, a difference in chemical potential between groups of particles in the system drives a chemical reaction that changes the numbers of particles, and the corresponding product is the amount of chemical potential energy transformed in the process. For example, consider a system consisting of two phases: liquid water and water vapor. There is a generalized force of evaporation that drives water molecules out of the liquid. There is a generalized force of condensation that drives vapor molecules out of the vapor. Only when these two forces (or chemical potentials) are equal is there equilibrium, and the net rate of transfer zero.

The two thermodynamic parameters that form a generalized force-displacement pair are called conjugate variables. The two most familiar pairs are, of course, pressure-volume, and temperature-entropy.

2.2.7 Spatially inhomogeneous systems

Classical thermodynamics is initially focused on closed homogeneous systems (e.g. Planck 1897/1903[36]), which might be regarded as 'zero-dimensional' in the sense that they have no spatial variation. But it is desired to study also systems with distinct internal motion and spatial inhomogeneity. For such systems, the principle of conservation of energy is expressed in terms not only of internal energy as defined for homogeneous systems, but also in terms of kinetic energy and potential energies of parts of the inhomogeneous system with respect to each other and with respect to long-range external forces.[53] How the total energy of a system is allocated between these three more specific kinds of energy varies according to the purposes of different writers; this is because these components of energy are to some extent mathematical artefacts rather than actually measured physical quantities. For any closed homogeneous component of an inhomogeneous closed system, if E denotes the total energy of that component system, one may write

    E = E^kin + E^pot + U,

where E^kin and E^pot denote respectively the total kinetic energy and the total potential energy of the component closed homogeneous system, and U denotes its internal energy.[25][54]

Potential energy can be exchanged with the surroundings of the system when the surroundings impose a force field, such as gravitational or electromagnetic, on the system.

A compound system consisting of two interacting closed homogeneous component subsystems has a potential energy of interaction E_{12}^pot between the subsystems. Thus, in an obvious notation, one may write

    E = E_1^kin + E_1^pot + U_1 + E_2^kin + E_2^pot + U_2 + E_{12}^pot.

The quantity E_{12}^pot in general lacks an assignment to either subsystem in a way that is not arbitrary, and this stands in the way of a general non-arbitrary definition of transfer of energy as work. On occasions, authors make their various respective arbitrary assignments.[55]

The distinction between internal and kinetic energy is hard to make in the presence of turbulent motion within the system, as friction gradually dissipates macroscopic kinetic energy of localised bulk flow into molecular random motion of molecules that is classified as internal energy.[56] The rate of dissipation by friction of kinetic energy of localised bulk flow into internal energy,[57][58][59] whether in turbulent or in streamlined flow, is an important quantity in non-equilibrium thermodynamics. This is a serious difficulty for attempts to define entropy for time-varying spatially inhomogeneous systems.

2.2.8 First law of thermodynamics for open systems

For the first law of thermodynamics, there is no trivial passage of physical conception from the closed system view to an open system view.[60][61] For closed systems, the concepts of an adiabatic enclosure and of an adiabatic wall are fundamental. Matter and internal energy cannot permeate or penetrate such a wall. For an open system, there is a wall that allows penetration by matter. In general, matter in diffusive motion carries with it some internal energy, and some microscopic potential energy changes accompany the motion. An open system is not adiabatically enclosed.

There are some cases in which a process for an open system can, for particular purposes, be considered as if it were for a closed system. In an open system, by definition hypothetically or potentially, matter can pass between the system and its surroundings. But when, in a particular case, the process of interest involves only hypothetical or potential but no actual passage of matter, the process can be considered as if it were for a closed system.

Internal energy for an open system

Since the revised and more rigorous definition of the internal energy of a closed system rests upon the possibility of processes by which adiabatic work takes the system from one state to another, this leaves a problem for the definition of internal energy for an open system, for which adiabatic work is not in general possible.
According to Max Born, the transfer of matter and energy across an open connection "cannot be reduced to mechanics".[62] In contrast to the case of closed systems, for open systems, in the presence of diffusion, there is no unconstrained and unconditional physical distinction between convective transfer of internal energy by bulk flow of matter, the transfer of internal energy without transfer of matter (usually called heat conduction and work transfer), and change of various potential energies.[63][64][65] The older traditional way and the conceptually revised (Carathéodory) way agree that there is no physically unique definition of heat and work transfer processes between open systems.[66][67][68][69][70][71]

In particular, between two otherwise isolated open systems an adiabatic wall is by definition impossible.[72] This problem is solved by recourse to the principle of conservation of energy. This principle allows a composite isolated system to be derived from two other component non-interacting isolated systems, in such a way that the total energy of the composite isolated system is equal to the sum of the total energies of the two component isolated systems. Two previously isolated systems can be subjected to the thermodynamic operation of placement between them of a wall permeable to matter and energy, followed by a time for establishment of a new thermodynamic state of internal equilibrium in the new single unpartitioned system.[73] The internal energies of the initial two systems and of the final new system, considered respectively as closed systems as above, can be measured.[60] Then the law of conservation of energy requires that

    ΔU_s + ΔU_o = 0,[74][75]

where ΔU_s and ΔU_o denote the changes in internal energy of the system and of its surroundings respectively. This is a statement of the first law of thermodynamics for a transfer between two otherwise isolated open systems,[76] that fits well with the conceptually revised and rigorous statement of the law stated above.

For the thermodynamic operation of adding two systems with internal energies U_1 and U_2, to produce a new system with internal energy U, one may write U = U_1 + U_2; the reference states for U, U_1 and U_2 should be specified accordingly, maintaining also that the internal energy of a system be proportional to its mass, so that the internal energies are extensive variables.[60][77]

There is a sense in which this kind of additivity expresses a fundamental postulate that goes beyond the simplest ideas of classical closed system thermodynamics; the extensivity of some variables is not obvious, and needs explicit expression; indeed one author goes so far as to say that it could be recognized as a fourth law of thermodynamics, though this is not repeated by other authors.[78][79]
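
The additivity and conservation statements above amount to simple bookkeeping, which the following sketch illustrates with invented internal energy values for two systems joined by a wall permeable to matter and energy.

    # Bookkeeping sketch for the additivity and conservation statements above.
    # Internal energy values are invented for illustration.

    # Two previously isolated systems, measured as closed systems (joules):
    U1_initial, U2_initial = 800.0, 1200.0

    # They are joined by a wall permeable to matter and energy, equilibrate,
    # and are then measured again as closed systems:
    U1_final, U2_final = 950.0, 1050.0

    dU_s = U1_final - U1_initial      # change for "the system"
    dU_o = U2_final - U2_initial      # change for "its surroundings"

    # First law for a transfer between two otherwise isolated open systems:
    assert dU_s + dU_o == 0.0

    # Additivity: the internal energy of the combined system is the sum,
    # provided the reference states are chosen consistently.
    U_combined = U1_final + U2_final
    assert U_combined == U1_initial + U2_initial
    print(dU_s, dU_o, U_combined)     # 150.0 -150.0 2000.0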

Also of course

    ΔN_s + ΔN_o = 0,[74][75]

where ΔN_s and ΔN_o denote the changes in mole number of a component substance of the system and of its surroundings respectively. This is a statement of the law of conservation of mass.

Process of transfer of matter between an open system and its surroundings

A system connected to its surroundings only through contact by a single permeable wall, but otherwise isolated, is an open system. If it is initially in a state of contact equilibrium with a surrounding subsystem, a thermodynamic process of transfer of matter can be made to occur between them if the surrounding subsystem is subjected to some thermodynamic operation, for example, removal of a partition between it and some further surrounding subsystem. The removal of the partition in the surroundings initiates a process of exchange between the system and its contiguous surrounding subsystem.

An example is evaporation. One may consider an open system consisting of a collection of liquid, enclosed except where it is allowed to evaporate into or to receive condensate from its vapor above it, which may be considered as its contiguous surrounding subsystem, and subject to control of its volume and temperature.

A thermodynamic process might be initiated by a thermodynamic operation in the surroundings that mechanically increases the controlled volume of the vapor. Some mechanical work will be done within the surroundings by the vapor, but also some of the parent liquid will evaporate and enter the vapor collection, which is the contiguous surrounding subsystem. Some internal energy will accompany the vapor that leaves the system, but it will not make sense to try to uniquely identify part of that internal energy as heat and part of it as work. Consequently, the energy transfer that accompanies the transfer of matter between the system and its surrounding subsystem cannot be uniquely split into heat and work transfers to or from the open system. The component of total energy transfer that accompanies the transfer of vapor into the surrounding subsystem is customarily called 'latent heat of evaporation', but this use of the word heat is a quirk of customary historical language, not in strict compliance with the thermodynamic definition of transfer of energy as heat. In this example, kinetic energy of bulk flow and potential energy with respect to long-range external forces such as gravity are both considered to be zero. The first law of thermodynamics refers to the change of internal energy of the open system, between its initial and final states of internal equilibrium.

54

CHAPTER 2. CHAPTER 2. LAWS OF THERMODYNAMICS

Open system with multiple contacts


An open system can be in contact equilibrium with several
other systems at once.[16][80][81][82][83][84][85][86]
This includes cases in which there is contact equilibrium
between the system, and several subsystems in its surroundings, including separate connections with subsystems
through walls that are permeable to the transfer of matter
and internal energy as heat and allowing friction of passage
of the transferred matter, but immovable, and separate connections through adiabatic walls with others, and separate
connections through diathermic walls impermeable to matter with yet others. Because there are physically separate
connections that are permeable to energy but impermeable
to matter, between the system and its surroundings, energy
transfers between them can occur with denite heat and
work characters. Conceptually essential here is that the internal energy transferred with the transfer of matter is measured by a variable that is mathematically independent of
the variables that measure heat and work.[87]
With such independence of variables, the total increase of
internal energy in the process is then determined as the
sum of the internal energy transferred from the surroundings with the transfer of matter through the walls that are
permeable to it, and of the internal energy transferred to
the system as heat through the diathermic walls, and of the
energy transferred to the system as work through the adiabatic walls, including the energy transferred to the system by long-range forces. These simultaneously transferred
quantities of energy are dened by events in the surroundings of the system. Because the internal energy transferred with matter is not in general uniquely resolvable
into heat and work components, the total energy transfer
cannot in general be uniquely resolved into heat and work
components.[88] Under these conditions, the following formula can describe the process in terms of externally dened
thermodynamic variables, as a statement of the rst law of
thermodynamics:

ΔU₀ = Q − W − Σᵢ₌₁ᵐ ΔUᵢ    (general processes, quasi-static or irreversible)    (3)

where ΔU₀ denotes the change of internal energy of the system, ΔUᵢ denotes the change of internal energy of the iᵗʰ of the m surrounding subsystems that are in open contact with the system, due to transfer between the system and that iᵗʰ surrounding subsystem, Q denotes the internal energy transferred as heat from the heat reservoir of the surroundings to the system, and W denotes the energy transferred from the system to the surrounding subsystems that are in adiabatic connection with it. The case of a wall that is permeable to matter and can move so as to allow transfer of energy as work is not considered here.

Combination of first and second laws  If the system is described by the energetic fundamental equation, U₀ = U₀(S, V, Nⱼ), and if the process can be described in the quasi-static formalism, in terms of the internal state variables of the system, then the process can also be described by a combination of the first and second laws of thermodynamics, by the formula

dU₀ = T dS − P dV + Σⱼ₌₁ⁿ μⱼ dNⱼ    (4)

where there are n chemical constituents of the system and permeably connected surrounding subsystems, and where T, S, P, V, Nⱼ, and μⱼ are defined as above.[89]

For a general natural process, there is no immediate term-wise correspondence between equations (3) and (4), because they describe the process in different conceptual frames.

Nevertheless, a conditional correspondence exists. There are three relevant kinds of wall here: purely diathermal, adiabatic, and permeable to matter. If two of those kinds of wall are sealed off, leaving only one that permits transfers of energy, as work, as heat, or with matter, then the remaining permitted terms correspond precisely. If two of the kinds of wall are left unsealed, then energy transfer can be shared between them, so that the two remaining permitted terms do not correspond precisely.

For the special fictive case of quasi-static transfers, there is a simple correspondence.[90] For this, it is supposed that the system has multiple areas of contact with its surroundings. There are pistons that allow adiabatic work, purely diathermal walls, and open connections with surrounding subsystems of completely controllable chemical potential (or equivalent controls for charged species). Then, for a suitable fictive quasi-static transfer, one can write

δQ = T dS and δW = P dV    (suitably defined surrounding subsystems, quasi-static transfers of energy)

For fictive quasi-static transfers for which the chemical potentials in the connected surrounding subsystems are suitably controlled, these can be put into equation (4) to yield

dU₀ = δQ − δW + Σⱼ₌₁ⁿ μⱼ dNⱼ    (suitably defined surrounding subsystems, quasi-static transfers)    (5)

The reference[90] does not actually write equation (5), but what it does write is fully compatible with it. Another helpful account is given by Tschoegl.[91]

There are several other accounts of this, in apparent mutual conflict.[69][92][93]
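As an illustration not present in the source, the bookkeeping of equation (5) can be checked numerically for a single fictive quasi-static step; all numerical values below are arbitrary.

    # Illustrative numeric check of equation (5), dU0 = delta_Q - delta_W + sum_j mu_j dN_j,
    # for one fictive quasi-static step. All numbers are made up for illustration.
    T = 300.0                  # K, common temperature of system and diathermal reservoir
    P = 1.0e5                  # Pa, pressure at the adiabatic work piston
    mu = [-10.0e3, -12.5e3]    # J/mol, controlled chemical potentials of two constituents

    dS = 0.002                 # J/K, entropy increment of the system
    dV = 1.0e-6                # m^3, volume increment
    dN = [1.5e-5, -0.5e-5]     # mol, increments of the two constituents

    delta_Q = T * dS                       # quasi-static heat through the diathermal wall
    delta_W = P * dV                       # quasi-static work through the adiabatic piston
    chemical = sum(m * dn for m, dn in zip(mu, dN))

    dU0 = delta_Q - delta_W + chemical     # equation (5)
    print(f"dU0 = {dU0:.4f} J")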


Non-equilibrium transfers

The transfer of energy between an open system and a single contiguous subsystem of its surroundings is considered also in non-equilibrium thermodynamics. The problem of definition arises also in this case. It may be allowed that the wall between the system and the subsystem is not only permeable to matter and to internal energy, but also may be movable so as to allow work to be done when the two systems have different pressures. In this case, the transfer of energy as heat is not defined.

Methods for study of non-equilibrium processes mostly deal with spatially continuous flow systems. In this case, the open connection between system and surroundings is usually taken to fully surround the system, so that there are no separate connections impermeable to matter but permeable to heat. Except for the special case mentioned above when there is no actual transfer of matter, which can be treated as if for a closed system, in strictly defined thermodynamic terms, it follows that transfer of energy as heat is not defined. In this sense, there is no such thing as 'heat flow' for a continuous-flow open system. Properly, for closed systems, one speaks of transfer of internal energy as heat, but in general, for open systems, one can speak safely only of transfer of internal energy. A factor here is that there are often cross-effects between distinct transfers, for example that transfer of one substance may cause transfer of another even when the latter has zero chemical potential gradient.

Usually transfer between a system and its surroundings applies to transfer of a state variable, and obeys a balance law, that the amount lost by the donor system is equal to the amount gained by the receptor system. Heat is not a state variable. For his 1947 definition of heat transfer for discrete open systems, the author Prigogine carefully explains at some length that his definition of it does not obey a balance law. He describes this as paradoxical.[94]

The situation is clarified by Gyarmati, who shows that his definition of heat transfer, for continuous-flow systems, really refers not specifically to heat, but rather to transfer of internal energy, as follows. He considers a conceptual small cell in a situation of continuous flow as a system defined in the so-called Lagrangian way, moving with the local center of mass. The flow of matter across the boundary is zero when considered as a flow of total mass. Nevertheless, if the material constitution is of several chemically distinct components that can diffuse with respect to one another, the system is considered to be open, the diffusive flows of the components being defined with respect to the center of mass of the system, and balancing one another as to mass transfer. Still there can be a distinction between bulk flow of internal energy and diffusive flow of internal energy in this case, because the internal energy density does not have to be constant per unit mass of material, and allowing for non-conservation of internal energy because of local conversion of kinetic energy of bulk flow to internal energy by viscosity.

Gyarmati shows that his definition of the heat flow vector is strictly speaking a definition of flow of internal energy, not specifically of heat, and so it turns out that his use here of the word heat is contrary to the strict thermodynamic definition of heat, though it is more or less compatible with historical custom, that often enough did not clearly distinguish between heat and internal energy; he writes that this relation must be considered to be the exact definition of the concept of heat flow, fairly loosely used in experimental physics and heat technics.[95] Apparently in a different frame of thinking from that of the above-mentioned paradoxical usage in the earlier sections of the historic 1947 work by Prigogine, about discrete systems, this usage of Gyarmati is consistent with the later sections of the same 1947 work by Prigogine, about continuous-flow systems, which use the term heat flux in just this way. This usage is also followed by Glansdorff and Prigogine in their 1971 text about continuous-flow systems. They write: "Again the flow of internal energy may be split into a convection flow ρuv and a conduction flow. This conduction flow is by definition the heat flow W. Therefore: j[U] = ρuv + W where u denotes the [internal] energy per unit mass. [These authors actually use the symbols E and e to denote internal energy but their notation has been changed here to accord with the notation of the present article. These authors actually use the symbol U to refer to total energy, including kinetic energy of bulk flow.]"[96] This usage is followed also by other writers on non-equilibrium thermodynamics such as Lebon, Jou, and Casas-Vàzquez,[97] and de Groot and Mazur.[98] This usage is described by Bailyn as stating the non-convective flow of internal energy, and is listed as his definition number 1, according to the first law of thermodynamics.[70] This usage is also followed by workers in the kinetic theory of gases.[99][100][101] This is not the ad hoc definition of "reduced heat flux" of Haase.[102]

In the case of a flowing system of only one chemical constituent, in the Lagrangian representation, there is no distinction between bulk flow and diffusion of matter. Moreover, the flow of matter is zero into or out of the cell that moves with the local center of mass. In effect, in this description, one is dealing with a system effectively closed to the transfer of matter. But still one can validly talk of a distinction between bulk flow and diffusive flow of internal energy, the latter driven by a temperature gradient within the flowing material, and being defined with respect to the local center of mass of the bulk flow. In this case of a virtually closed system, because of the zero matter transfer, as noted above, one can safely distinguish between transfer of energy as work, and transfer of internal energy as heat.[103]

2.2.9 See also

Laws of thermodynamics

Perpetual motion

Microstate (statistical mechanics), which includes microscopic definitions of internal energy, heat and work

Entropy production

Relativistic heat conduction

2.2.10 References

[16] Carathéodory, C. (1909).

[17] Münster, A. (1970), pp. 23–24.

[18] Reif, F. (1965), p. 122.

[19] Haase, R. (1971), pp. 24–25.

[20] Quantities, Units and Symbols in Physical Chemistry (IUPAC Green Book). See Sec. 2.11 Chemical Thermodynamics.

[21] Planck, M. (1897/1903). Treatise on Thermodynamics, translated by A. Ogg, Longmans, Green & Co., London, p. 43.

[22] Münster, A. (1970).

[1] Truesdell, C. A. (1980).

[23] Kirkwood, J. G., Oppenheim, I. (1961), pp. 3133.

[2] Hess, H. (1840). Thermochemische Untersuchungen. Annalen der Physik und Chemie 126 (6): 385–404. Bibcode:1840AnP...126..385H. doi:10.1002/andp.18401260620.

[24] Planck, M. (1897/1903), p. 86.

[3] Truesdell, C. A. (1980), pp. 157158.

[25] Crawford, F. H. (1963), pp. 106107.


[26] Bryan, G. H. (1907), p. 47.
[27] Buchdahl, H. A. (1966), p. 34.

[4] Mayer, Robert (1841). Paper: 'Remarks on the Forces of


Nature"; as quoted in: Lehninger, A. (1971). Bioenergetics the Molecular Basis of Biological Energy Transformations, 2nd. Ed. London: The Benjamin/Cummings Publishing Company.

[28] Pippard, A. B. (1957/1966), p. 14.

[5] Bailyn, M. (1994), p. 79.

[31] Callen, H. B. (1960/1985), pp. 13, 17.

[6] Clausius, R. (1850), page 373, translation here taken from


Truesdell, C. A. (1980), pp. 188189.

[32] Kittel, C. Kroemer, H. (1980). Thermal Physics, (rst edition by Kittel alone 1969), second edition, W. H. Freeman,
San Francisco, ISBN 0-7167-1088-9, pp. 49, 227.

[7] Clausius, R. (1850), page 384, equation (IIa.).


[8] Bailyn, M. (1994), p. 80.
[9] Bryan, G. H. (1907), p.47. Also Bryan had written about
this in the Enzyklopdie der Mathematischen Wissenschaften,
volume 3, p. 81. Also in 1906 Jean Baptiste Perrin wrote
about it in Bull. de la socit franais de philosophie, volume
6, p. 81.
[10] Born, M. (1949), Lecture V, pp. 3145.
[11] Bailyn, M. (1994), pp. 65, 79.
[12] Bailyn, (1994), p. 82.
[13] Helmholtz, H. (1847).
[14] Pippard, A. B. (1957/1966), p. 15. According to Herbert
Callen, in his most widely cited text, Pippards text gives
a scholarly and rigorous treatment"; see Callen, H. B.
(1960/1985), p. 485. It is also recommended by Mnster,
A. (1970), p. 376.
[15] Born, M. (1921). Kritische Betrachtungen zur traditionellen Darstellung der Thermodynamik. Physik. Zeitschr
22: 218224.

[29] Reif, F. (1965), p. 82.


[30] Adkins, C. J. (1968/1983), p. 31.

[33] Tro, N. J. (2008). Chemistry. A Molecular Approach,


Pearson/Prentice Hall, Upper Saddle River NJ, ISBN 0-13100065-9, p. 246.
[34] Kirkwood, J. G., Oppenheim, I. (1961), pp. 1718. Kirkwood & Oppenheim 1961 is recommended by Mnster,
A. (1970), p. 376. It is also cited by Eu, B. C. (2002),
Generalized Thermodynamics, the Thermodynamics of Irreversible Processes and Generalized Hydrodynamics, Kluwer
Academic Publishers, Dordrecht, ISBN 1-4020-0788-4, pp.
18, 29, 66.
[35] Guggenheim, E. A. (1949/1967). Thermodynamics. An Advanced Treatment for Chemists and Physicists, (rst edition
1949), fth edition 1967, North-Holland, Amsterdam, pp.
910. Guggenheim 1949/1965 is recommended by Buchdahl, H. A. (1966), p. 218. It is also recommended by Mnster, A. (1970), p. 376.
[36] Planck, M. (1897/1903).
[37] Kestin, J. (1966), p. 156.
[38] Cropper, W. H. (1986).
Rudolf Clausius and the
road to entropy. Am. J. Phys. 54: 10681074.
Bibcode:1986AmJPh..54.1068C. doi:10.1119/1.14740.


[39] Truesdell, C. A. (1980), pp. 161162.

[60] Mnster A. (1970), Sections 14, 15, pp. 4551.

[40] Buchdahl, H. A. (1966), p. 43.

[61] Landsberg, P. T. (1978), p. 78.

[41] Maxwell, J. C. (1871). Theory of Heat, Longmans, Green,


and Co., London, p. 150.

[62] Born, M. (1949), p. 44.

[42] Planck, M. (1897/1903), Section 71, p. 52.


[43] Bailyn, M. (1994), p. 95.
[44] Adkins, C. J. (1968/1983), p. 35.
[45] Atkins, P., de Paula, J. (1978/2010). Physical Chemistry,
(rst edition 1978), ninth edition 2010, Oxford University
Press, Oxford UK, ISBN 978-0-19-954337-3, p. 54.
[46] Kondepudi, D. (2008). Introduction to Modern Thermodynamics, Wiley, Chichester, ISBN 978-0-470-01598-8, p.
63.
[47] Gislason, E. A.; Craig, N. C. (2005).
Cementing the foundations of thermodynamics:comparison of
system-based and surroundings-based denitions of work
and heat. J. Chem. Thermodynamics 37: 954966.
doi:10.1016/j.jct.2004.12.012.
[48] Rosenberg, R. M. (2010). From Joule to Caratheodory
and Born: A conceptual evolution of the rst law of
thermodynamics. J. Chem. Edu. 87: 691693.
Bibcode:2010JChEd..87..691R. doi:10.1021/ed1001976.

[63] Denbigh, K. G. (1951), p. 56. Denbigh states in a footnote


that he is indebted to correspondence with E. A. Guggenheim and with N. K. Adam. From this, Denbigh concludes
It seems, however, that when a system is able to exchange
both heat and matter with its environment, it is impossible
to make an unambiguous distinction between energy transported as heat and by the migration of matter, without already assuming the existence of the 'heat of transport'.
[64] Fitts, D. D. (1962), p. 28.
[65] Denbigh, K. (1954/1971), pp. 8182.
[66] Mnster, A. (1970), p. 50.
[67] Haase, R. (1963/1969), p. 15.
[68] Haase, R. (1971), p. 20.
[69] Smith, D. A. (1980). Denition of heat in open systems,
Aust. J. Phys., 33: 95105.
[70] Bailyn, M. (1994), p. 308.
[71] Balian, R. (1991/2007), p. 217

[49] Partington, J.R. (1949), p. 183: "Rankine calls the curves


representing changes without performance of work, adynamics.

[72] Mnster, A. (1970), p. 46.

[50] Denbigh, K. (1954/1981), p. 45.

[74] Callen H. B. (1960/1985), p. 54.

[51] Adkins, C. J. (1968/1983), p. 75.

[75] Tisza, L. (1966), p. 110.

[52] Callen, H. B. (1960/1985), pp. 36, 41, 63.

[76] Tisza, L. (1966), p. 111.

[53] Bailyn, M. (1994), 254256.

[77] Prigogine, I., (1955/1967), p. 12.

[54] Glansdor, P., Prigogine, I. (1971), page 8.

[78] Landsberg, P. T. (1961), pp. 142, 387.

[55] Tisza, L. (1966), p. 91.

[79] Landsberg, P. T. (1978), pp. 79,102.

[56] Denbigh, K. G. (1951), p. 50.

[80] Prigogine, I. (1947), p. 48.

[57] Thomson, W. (1852 a). "On a Universal Tendency in Nature


to the Dissipation of Mechanical Energy" Proceedings of the
Royal Society of Edinburgh for April 19, 1852 [This version
from Mathematical and Physical Papers, vol. i, art. 59, pp.
511.]

[73] Tisza, L. (1966), p. 41.

[81] Born, M. (1949), Appendix 8, pp. 146149.


[82] Aston, J. G., Fritz, J. J. (1959), Chapter 9.
[83] Kestin, J. (1961).

[58] Thomson, W. (1852 b). On a universal tendency in nature


to the dissipation of mechanical energy, Philosophical Magazine 4: 304306.

[84] Landsberg, P. T. (1961), pp. 128142.

[59] Helmholtz, H. (1869/1871). Zur Theorie der stationren


Strme in reibenden Flssigkeiten, Verhandlungen des
naturhistorisch-medizinischen Vereins zu Heidelberg, Band
V: 17.
Reprinted in Helmholtz, H. (1882), Wissenschaftliche Abhandlungen, volume 1, Johann Ambrosius
Barth, Leipzig, pages 223230

[86] Tschoegl, N. W. (2000), p. 201.

[85] Tisza, L. (1966), p. 108.

[87] Born, M. (1949), pp. 146147.


[88] Haase, R. (1971), p. 35.
[89] Callen, H. B., (1960/1985), p. 35.


[90] Aston, J. G., Fritz, J. J. (1959), Chapter 9. This is an unusually explicit account of some of the physical meaning of the
Gibbs formalism.
[91] Tschoegl, N. W. (2000), pp. 1214.
[92] Buchdahl, H. A. (1966), Section 66, pp. 121125.
[93] Callen, J. B. (1960/1985), Section 2-1, pp. 3537.
[94] Prigogine, I., (1947), pp. 4849.
[95] Gyarmati, I. (1970), p. 68.
[96] Glansdor, P, Prigogine, I, (1971), p. 9.
[97] Lebon, G., Jou, D., Casas-Vzquez, J. (2008), p. 45.
[98] de Groot, S. R., Mazur, P. (1962), p. 18.
[99] de Groot, S. R., Mazur, P. (1962), p. 169.
[100] Truesdell, C., Muncaster, R. G. (1980), p. 3.
[101] Balescu, R. (1997), p. 9.
[102] Haase, R. (1963/1969), p. 18.
[103] Eckart, C. (1940).

Cited sources
Adkins, C. J. (1968/1983). Equilibrium Thermodynamics, (rst edition 1968), third edition 1983, Cambridge University Press, ISBN 0-521-25445-0.
Aston, J. G., Fritz, J. J. (1959). Thermodynamics and
Statistical Thermodynamics, John Wiley & Sons, New
York.
Balian, R. (1991/2007).
From Microphysics to
Macrophysics: Methods and Applications of Statistical Physics, volume 1, translated by D. ter Haar, J.F.
Gregg, Springer, Berlin, ISBN 978-3-540-45469-4.
Bailyn, M. (1994). A Survey of Thermodynamics,
American Institute of Physics Press, New York, ISBN
0-88318-797-3.
Born, M. (1949). Natural Philosophy of Cause and
Chance, Oxford University Press, London.
Bryan, G. H. (1907). Thermodynamics. An Introductory Treatise dealing mainly with First Principles and
their Direct Applications, B. G. Teubner, Leipzig.
Balescu, R. (1997). Statistical Dynamics; Matter out
of Equilibrium, Imperial College Press, London, ISBN
978-1-86094-045-3.
Buchdahl, H. A. (1966), The Concepts of Classical
Thermodynamics, Cambridge University Press, London.



Callen, H. B. (1960/1985), Thermodynamics and an
Introduction to Thermostatistics, (rst edition 1960),
second edition 1985, John Wiley & Sons, New York,
ISBN 0-471-86256-8.
Carathéodory, C. (1909). Untersuchungen über die Grundlagen der Thermodynamik. Mathematische Annalen 67: 355–386. doi:10.1007/BF01450409. A translation may be found here. Also a mostly reliable translation is to be found at Kestin, J. (1976). The Second Law of Thermodynamics, Dowden, Hutchinson & Ross, Stroudsburg PA.
Clausius, R. (1850). Part I and Part II (http://gallica.bnf.fr/ark:/12148/bpt6k15164w/f518.table), Annalen der Physik 79: 368–397, 500–524. Bibcode:1850AnP...155..500C. doi:10.1002/andp.18501550403. See English translation: On the Moving Force of Heat, and the Laws regarding the Nature of Heat itself which are deducible therefrom. Phil. Mag. (1851), series 4, 2, 1–21, 102–119. Also available on Google Books.
Crawford, F. H. (1963). Heat, Thermodynamics, and
Statistical Physics, Rupert Hart-Davis, London, Harcourt, Brace & World, Inc.
de Groot, S. R., Mazur, P. (1962).
Nonequilibrium Thermodynamics, North-Holland, Amsterdam. Reprinted (1984), Dover Publications Inc.,
New York, ISBN 0486647412.
Denbigh, K. G. (1951). The Thermodynamics of the
Steady State, Methuen, London, Wiley, New York.
Denbigh, K. (1954/1981). The Principles of Chemical Equilibrium. With Applications in Chemistry and
Chemical Engineering, fourth edition, Cambridge University Press, Cambridge UK, ISBN 0-521-23682-7.
Eckart, C. (1940). The thermodynamics of irreversible processes. I. The simple uid, Phys. Rev. 58:
267269.
Fitts, D. D. (1962). Nonequilibrium Thermodynamics.
Phenomenological Theory of Irreversible Processes in
Fluid Systems, McGraw-Hill, New York.
Glansdor, P., Prigogine, I., (1971). Thermodynamic
Theory of Structure, Stability and Fluctuations, Wiley,
London, ISBN 0-471-30280-5.
Gyarmati, I. (1967/1970). Non-equilibrium Thermodynamics. Field Theory and Variational Principles,
translated from the 1967 Hungarian by E. Gyarmati
and W. F. Heinz, Springer-Verlag, New York.



Haase, R. (1963/1969). Thermodynamics of Irreversible Processes, English translation, AddisonWesley Publishing, Reading MA.
Haase, R. (1971). Survey of Fundamental Laws,
chapter 1 of Thermodynamics, pages 197 of volume
1, ed. W. Jost, of Physical Chemistry. An Advanced
Treatise, ed. H. Eyring, D. Henderson, W. Jost, Academic Press, New York, lcn 73117081.
Helmholtz, H. (1847). Ueber die Erhaltung der
Kraft. Eine physikalische Abhandlung, G. Reimer
(publisher), Berlin, read on 23 July in a session of the
Physikalischen Gesellschaft zu Berlin. Reprinted in
Helmholtz, H. von (1882), Wissenschaftliche Abhandlungen, Band 1, J. A. Barth, Leipzig. Translated and
edited by J. Tyndall, in Scientic Memoirs, Selected
from the Transactions of Foreign Academies of Science and from Foreign Journals. Natural Philosophy
(1853), volume 7, edited by J. Tyndall, W. Francis,
published by Taylor and Francis, London, pp. 114
162, reprinted as volume 7 of Series 7, The Sources of
Science, edited by H. Woolf, (1966), Johnson Reprint
Corporation, New York, and again in Brush, S. G., The
Kinetic Theory of Gases. An Anthology of Classic Papers with Historical Commentary, volume 1 of History
of Modern Physical Sciences, edited by N. S. Hall, Imperial College Press, London, ISBN 1-86094-347-0,
pp. 89110.
Kestin, J. (1961). On intersecting isentropics. Am. J. Phys. 29: 329–331. Bibcode:1961AmJPh..29..329K. doi:10.1119/1.1937763.

Partington, J.R. (1949). An Advanced Treatise on
Physical Chemistry, volume 1, Fundamental Principles. The Properties of Gases, Longmans, Green and
Co., London.
Pippard, A. B. (1957/1966). Elements of Classical
Thermodynamics for Advanced Students of Physics,
original publication 1957, reprint 1966, Cambridge
University Press, Cambridge UK.
Planck, M.(1897/1903). Treatise on Thermodynamics, translated by A. Ogg, Longmans, Green & Co.,
London.
Prigogine, I. (1947). Étude Thermodynamique des Phénomènes irréversibles, Dunod, Paris, and Desoers, Liège.
Prigogine, I., (1955/1967). Introduction to Thermodynamics of Irreversible Processes, third edition, Interscience Publishers, New York.
Reif, F. (1965). Fundamentals of Statistical and
Thermal Physics, McGraw-Hill Book Company, New
York.
Tisza, L. (1966). Generalized Thermodynamics,
M.I.T. Press, Cambridge MA.
Truesdell, C. A. (1980). The Tragicomical History of
Thermodynamics, 18221854, Springer, New York,
ISBN 0-387-90403-4.

Kestin, J. (1966). A Course in Thermodynamics, Blaisdell Publishing Company, Waltham MA.

Truesdell, C. A., Muncaster, R. G. (1980). Fundamentals of Maxwells Kinetic Theory of a Simple


Monatomic Gas, Treated as a branch of Rational Mechanics, Academic Press, New York, ISBN 0-12701350-4.

Kirkwood, J. G., Oppenheim, I. (1961). Chemical


Thermodynamics, McGraw-Hill Book Company, New
York.

Tschoegl, N. W. (2000). Fundamentals of Equilibrium


and Steady-State Thermodynamics, Elsevier, Amsterdam, ISBN 0-444-50426-5.

Landsberg, P. T. (1961). Thermodynamics with Quantum Statistical Illustrations, Interscience, New York.
Landsberg, P. T. (1978). Thermodynamics and Statistical Mechanics, Oxford University Press, Oxford UK,
ISBN 0-19-851142-6.
Lebon, G., Jou, D., Casas-Vzquez, J. (2008). Understanding Non-equilibrium Thermodynamics, Springer,
Berlin, ISBN 978-3-540-74251-7.
Mnster, A. (1970), Classical Thermodynamics, translated by E. S. Halberstadt, WileyInterscience, London, ISBN 0-471-62430-6.

2.2.11 Further reading

Goldstein, Martin, and Inge F. (1993). The Refrigerator and the Universe. Harvard University Press. ISBN
0-674-75325-9. OCLC 32826343. Chpts. 2 and 3
contain a nontechnical treatment of the rst law.
Çengel, Y. A. and Boles, M. (2007). Thermodynamics: an engineering approach. McGraw-Hill Higher Education. ISBN 0-07-125771-3. Chapter 2.
Atkins P. (2007). Four Laws that drive the Universe.
OUP Oxford. ISBN 0-19-923236-9.

2.2.12 External links

MISN-0-158, The First Law of Thermodynamics (PDF file) by Jerzy Borysowicz for Project PHYSNET.

First law of thermodynamics in the MIT Course Unified Thermodynamics and Propulsion from Prof. Z. S. Spakovszky.

2.3 Second Law of Thermodynamics

The second law of thermodynamics states that for a thermodynamically defined process to actually occur, the sum of the entropies of the participating bodies must increase. In an idealized limiting case, that of a reversible process, this sum remains unchanged. A simplified version of the law states that the flow of heat is from a hotter to a colder body.

A thermodynamically defined process consists of transfers of matter and energy between bodies of matter and radiation, each participating body being initially in its own state of internal thermodynamic equilibrium. The bodies are initially separated from one another by walls that obstruct the passage of matter and energy between them. The transfers are initiated by a thermodynamic operation: some external agency intervenes[1] to make one or more of the walls less obstructive.[2] This establishes new equilibrium states in the bodies. If, instead of making the walls less obstructive, the thermodynamic operation makes them more obstructive, no transfers are occasioned, and there is no effect on an established thermodynamic equilibrium.

The law expresses the irreversibility of the process. The transfers invariably bring about spread,[3][4][5] dispersal, or dissipation[6] of matter or energy, or both, amongst the bodies. They occur because more kinds of transfer through the walls have become possible.[7] Irreversibility in thermodynamic processes is a consequence of the asymmetric character of thermodynamic operations, and not of any internally irreversible microscopic properties of the bodies.

The second law is an empirical finding that has been accepted as an axiom of thermodynamic theory. When its presuppositions may be only approximately fulfilled, often enough, the law can give a very useful approximation to the observed facts. Statistical thermodynamics, classical or quantum, explains the microscopic origin of the law. The second law has been expressed in many ways. Its first formulation is credited to the French scientist Sadi Carnot in 1824 (see Timeline of thermodynamics). Carnot showed that there is an upper limit to the efficiency of conversion of heat to work in a cyclic heat engine operating between two given temperatures.

2.3.1 Introduction

Intuitive meaning of the law

The second law is about thermodynamic systems or bodies of matter and radiation, initially each in its own state of internal thermodynamic equilibrium, and separated from one another by walls that partly or wholly allow or prevent the passage of matter and energy between them, or make them mutually inaccessible for their constituents.[8][9][10][11][12][13]

The law envisages that the walls are changed by some external agency, making them less restrictive or constraining and more permeable in various ways, and increasing the accessibility, to parts of the overall system, of matter and energy.[14][15][16][17] Thereby a process is defined, establishing new equilibrium states.

The process invariably spreads,[18][19][20][21] disperses,[22] and dissipates[6][23] matter or energy, or both, amongst the bodies. Some energy, inside or outside the system, is degraded in its ability to do work.[24] This is quantitatively described by increase of entropy. It is the consequence of decrease of constraint by a wall, with a corresponding increase in the accessibility, to the parts of the system, of matter and energy. An increase of constraint by a wall has no effect on an established thermodynamic equilibrium.

For an example of the spreading of matter due to increase of accessibility, one may consider a gas initially confined by an impermeable wall to one of two compartments of an isolated system. The wall is then removed. The gas spreads throughout both compartments.[17] The sum of the entropies of the two compartments increases. Reinsertion of the impermeable wall does not change the spread of the gas between the compartments. For an example of the spreading of energy due to increase of accessibility, one may consider a wall impermeable to matter and energy initially separating two otherwise isolated bodies at different temperatures. A thermodynamic operation makes the wall become permeable only to heat, which then passes from the hotter to the colder body, until their temperatures become equal. The sum of the entropies of the two bodies increases. Restoration of the complete impermeability of the wall does not change the equality of the temperatures. The spreading is a change from heterogeneity towards homogeneity.
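The two examples above can be made quantitative with standard textbook formulas; the following sketch (not part of the source, with arbitrary numbers) evaluates the entropy increase for an ideal-gas free expansion and for heat equalization between two identical bodies of constant heat capacity.

    import math

    R = 8.314  # J/(mol K), gas constant

    # 1. Spreading of matter: ideal gas expands freely into double the volume.
    n = 1.0                        # mol
    dS_gas = n * R * math.log(2)   # entropy gain when V doubles at fixed internal energy

    # 2. Spreading of energy: two identical bodies of heat capacity C reach a common temperature.
    C = 100.0                      # J/K, heat capacity of each body (assumed constant)
    T1, T2 = 400.0, 200.0          # K, initial temperatures
    Tf = (T1 + T2) / 2             # K, final common temperature
    dS_bodies = C * math.log(Tf / T1) + C * math.log(Tf / T2)

    print(f"Free expansion: dS = {dS_gas:.2f} J/K (> 0)")
    print(f"Heat equalization: dS = {dS_bodies:.2f} J/K (> 0)")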
It is the unconstraining of the initial equilibrium that causes the increase of entropy and the change towards homogeneity.[15] The following reasoning offers intuitive understanding of this fact. One may imagine that the freshly unconstrained system, still relatively heterogeneous, immediately after the intervention that increased the wall permeability, in its transient condition, arose by spontaneous evolution from an unconstrained previous transient condition of the system. One can then ask, what is the probable such imagined previous condition. The answer is that, overwhelmingly probably, it is just the very same kind of homogeneous condition as that to which the relatively heterogeneous condition will overwhelmingly probably evolve. Obviously, this is possible only in the imagined absence of the constraint that was actually present until its removal. In this light, the reversibility of the dynamics of the evolution of the unconstrained system is evident, in accord with the ordinary laws of microscopic dynamics. It is the removal of the constraint that is effective in causing the change towards homogeneity, not some imagined or apparent irreversibility of the laws of spontaneous evolution.[25] This reasoning is of intuitive interest, but is essentially about microstates, and therefore does not belong to macroscopic equilibrium thermodynamics, which studiously ignores consideration of microstates, and non-equilibrium considerations of this kind. It does, however, forestall futile puzzling about some famous proposed paradoxes, imagining of a derivation of an arrow of time from the second law,[26] and meaningless speculation about an imagined low entropy state of the early universe.[27]

Though it is more or less intuitive to imagine 'spreading', such loose intuition is, for many thermodynamic processes, too vague or imprecise to be usefully quantitatively informative, because competing possibilities of spreading can coexist, for example due to an increase of some constraint combined with decrease of another. The second law justifies the concept of entropy, which makes the notion of 'spreading' suitably precise, allowing quantitative predictions of just how spreading will occur in particular circumstances. It is characteristic of the physical quantity entropy that it refers to states of thermodynamic equilibrium.[28][29][30]

General significance of the law

The first law of thermodynamics provides the basic definition of thermodynamic energy, also called internal energy, associated with all thermodynamic systems, but unknown in classical mechanics, and states the rule of conservation of energy in nature.[31][32]

The concept of energy in the first law does not, however, account for the observation that natural processes have a preferred direction of progress. The first law is symmetrical with respect to the initial and final states of an evolving system. But the second law asserts that a natural process runs only in one sense, and is not reversible. For example, heat always flows spontaneously from hotter to colder bodies, and never the reverse, unless external work is performed on the system. The key concept for the explanation of this phenomenon through the second law of thermodynamics is the definition of a new physical quantity, the entropy.[33][34]

For mathematical analysis of processes, entropy is introduced as follows. In a fictive reversible process, an infinitesimal increment in the entropy (dS) of a system results from an infinitesimal transfer of heat (δQ) to a closed system divided by the common temperature (T) of the system and the surroundings which supply the heat:[35]

dS = δQ / T    (closed system, idealized fictive reversible process)

For an actually possible infinitesimal process without exchange of matter with the surroundings, the second law requires that the increment in system entropy be greater than that:

dS > δQ / T    (closed system, actually possible irreversible process)

This is because a general process for this case may include work being done on the system by its surroundings, which must have frictional or viscous effects inside the system, and because heat transfer actually occurs only irreversibly, driven by a finite temperature difference.[36][37]

The zeroth law of thermodynamics in its usual short statement allows recognition that two bodies in a relation of thermal equilibrium have the same temperature, especially that a test body has the same temperature as a reference thermometric body.[38] For a body in thermal equilibrium with another, there are indefinitely many empirical temperature scales, in general respectively depending on the properties of a particular reference thermometric body. The second law allows a distinguished temperature scale, which defines an absolute, thermodynamic temperature, independent of the properties of any particular reference thermometric body.[39][40]
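As a reading aid not present in the source, the reversible and irreversible increments above can be combined by splitting dS into an exchange term and a non-negative production term, a form that reappears in the rate statements later in this section.

\[
\mathrm{d}S = \frac{\delta Q}{T} + \delta S_{i}, \qquad \delta S_{i} \ge 0,
\]

with \(\delta S_{i} = 0\) in the idealized reversible limit and \(\delta S_{i} > 0\) for an actually possible irreversible process in a closed system.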


2.3.2 Various statements of the law

The second law of thermodynamics may be expressed in many specific ways,[41] the most prominent classical statements[42] being the statement by Rudolf Clausius (1854), the statement by Lord Kelvin (1851), and the statement in axiomatic thermodynamics by Constantin Carathéodory (1909). These statements cast the law in general physical terms citing the impossibility of certain processes. The Clausius and the Kelvin statements have been shown to be equivalent.[43]

Carnot's principle

The historical origin of the second law of thermodynamics was in Carnot's principle. It refers to a cycle of a Carnot heat engine, fictively operated in the limiting mode of extreme slowness known as quasi-static, so that the heat and work transfers are between subsystems that are always in their own internal states of thermodynamic equilibrium. The Carnot engine is an idealized device of special interest to engineers who are concerned with the efficiency of heat engines. Carnot's principle was recognized by Carnot at a time when the caloric theory of heat was seriously considered, before the recognition of the first law of thermodynamics, and before the mathematical expression of the concept of entropy. Interpreted in the light of the first law, it is physically equivalent to the second law of thermodynamics, and remains valid today. It states

The efficiency of a quasi-static or reversible Carnot cycle depends only on the temperatures of the two heat reservoirs, and is the same, whatever the working substance. A Carnot engine operated in this way is the most efficient possible heat engine using those two temperatures.[44][45][46][47][48][49][50]

Clausius statement

The German scientist Rudolf Clausius laid the foundation for the second law of thermodynamics in 1850 by examining the relation between heat transfer and work.[51] His formulation of the second law, which was published in German in 1854, is known as the Clausius statement:

Heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time.[52]

The statement by Clausius uses the concept of 'passage of heat'. As is usual in thermodynamic discussions, this means 'net transfer of energy as heat', and does not refer to contributory transfers one way and the other.

Heat cannot spontaneously flow from cold regions to hot regions without external work being performed on the system, which is evident from ordinary experience of refrigeration, for example. In a refrigerator, heat flows from cold to hot, but only when forced by an external agent, the refrigeration system.

Kelvin statement

Lord Kelvin expressed the second law as

It is impossible, by means of inanimate material agency, to derive mechanical effect from any portion of matter by cooling it below the temperature of the coldest of the surrounding objects.[53]

Equivalence of the Clausius and the Kelvin statements

[Figure: deriving the Kelvin statement from the Clausius statement, by coupling an imagined Kelvin-violating engine to a reversed Carnot engine.]

Suppose there is an engine violating the Kelvin statement: i.e., one that drains heat and converts it completely into work in a cyclic fashion without any other result. Now pair it with a reversed Carnot engine as shown by the figure. The net and sole effect of this newly created engine consisting of the two engines mentioned is transferring heat ΔQ = Q(1/η − 1) from the cooler reservoir to the hotter one (here Q is the heat converted into work by the Kelvin-violating engine and η is the efficiency of the Carnot engine), which violates the Clausius statement. Thus a violation of the Kelvin statement implies a violation of the Clausius statement, i.e. the Clausius statement implies the Kelvin statement. We can prove in a similar manner that the Kelvin statement implies the Clausius statement, and hence the two are equivalent.
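As an illustration not present in the source, the bookkeeping of the composite device in the equivalence argument above can be checked numerically; all numbers are arbitrary.

    # A hypothetical Kelvin-violating engine converts heat Q from the hot reservoir
    # entirely into work; that work drives a reversed Carnot engine of efficiency eta.
    Q = 100.0      # J, heat drawn from the hot reservoir by the Kelvin violator
    eta = 0.4      # efficiency of the Carnot engine being run in reverse

    W = Q                            # work output of the Kelvin violator (complete conversion)
    Q_hot_pumped = W / eta           # heat delivered to the hot reservoir by the reversed Carnot engine
    Q_cold_drawn = Q_hot_pumped - W  # heat drawn from the cold reservoir

    net_to_hot = Q_hot_pumped - Q    # = Q*(1/eta - 1)
    print(net_to_hot, Q_cold_drawn)  # equal: heat moved from cold to hot with no other effect,
                                     # which contradicts the Clausius statement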


Planck's proposition

Planck offered the following proposition as derived directly from experience. This is sometimes regarded as his statement of the second law, but he regarded it as a starting point for the derivation of the second law.

It is impossible to construct an engine which will work in a complete cycle, and produce no effect except the raising of a weight and cooling of a heat reservoir.[54][55]

Relation between Kelvin's statement and Planck's proposition

It is almost customary in textbooks to speak of the Kelvin-Planck statement of the law, as for example in the text by ter Haar and Wergeland.[56] One text gives a statement very like Planck's proposition, but attributes it to Kelvin without mention of Planck.[57] One monograph quotes Planck's proposition as the Kelvin-Planck formulation, the text naming Kelvin as its author, though it correctly cites Planck in its references.[58] The reader may compare the two statements quoted just above here.

Planck's statement

Planck stated the second law as follows.

Every process occurring in nature proceeds in the sense in which the sum of the entropies of all bodies taking part in the process is increased. In the limit, i.e. for reversible processes, the sum of the entropies remains unchanged.[59][60][61]

Rather like Planck's statement is that of Uhlenbeck and Ford for irreversible phenomena.

... in an irreversible or spontaneous change from one equilibrium state to another (as for example the equalization of temperature of two bodies A and B, when brought in contact) the entropy always increases.[62]

Principle of Carathéodory

Constantin Carathéodory formulated thermodynamics on a purely mathematical axiomatic foundation. His statement of the second law is known as the Principle of Carathéodory, which may be formulated as follows:[63]

In every neighborhood of any state S of an adiabatically enclosed system there are states inaccessible from S.[64]

With this formulation, he described the concept of adiabatic accessibility for the first time and provided the foundation for a new subfield of classical thermodynamics, often called geometrical thermodynamics. It follows from Carathéodory's principle that quantity of energy quasi-statically transferred as heat is a holonomic process function, in other words, δQ = T dS.[65]

Though it is almost customary in textbooks to say that Carathéodory's principle expresses the second law and to treat it as equivalent to the Clausius or to the Kelvin-Planck statements, such is not the case. To get all the content of the second law, Carathéodory's principle needs to be supplemented by Planck's principle, that isochoric work always increases the internal energy of a closed system that was initially in its own internal thermodynamic equilibrium.[37][66][67][68]

Planck's Principle

In 1926, Max Planck wrote an important paper on the basics of thermodynamics.[67][69] He indicated the principle

The internal energy of a closed system is increased by an adiabatic process, throughout the duration of which, the volume of the system remains constant.[37][66]

This formulation does not mention heat and does not mention temperature, nor even entropy, and does not necessarily implicitly rely on those concepts, but it implies the content of the second law. A closely related statement is that "Frictional pressure never does positive work."[70] Using a now-obsolete form of words, Planck himself wrote: "The production of heat by friction is irreversible."[71][72]

Not mentioning entropy, this principle of Planck is stated in physical terms. It is very closely related to the Kelvin statement given just above.[73] It is relevant that for a system at constant volume and mole numbers, the entropy is a monotonic function of the internal energy. Nevertheless, this principle of Planck is not actually Planck's preferred statement of the second law, which is quoted above, in a previous sub-section of the present section of this present article, and relies on the concept of entropy.

A statement that in a sense is complementary to Planck's principle is made by Borgnakke and Sonntag. They do not offer it as a full statement of the second law:

... there is only one way in which the entropy of a [closed] system can be decreased, and that is to transfer heat from the system.[74]

Differing from Planck's just foregoing principle, this one is explicitly in terms of entropy change. Of course, removal of matter from a system can also decrease its entropy.
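As a reading aid not present in the source, the holonomy remark δQ = T dS in the Principle of Carathéodory subsection above can be unpacked for a simple closed P–V system, where 1/T acts as an integrating factor.

\[
\delta Q = \mathrm{d}U + P\,\mathrm{d}V, \qquad
\frac{\delta Q}{T} = \frac{\mathrm{d}U + P\,\mathrm{d}V}{T} = \mathrm{d}S ,
\]

i.e. the inexact differential \(\delta Q\) becomes the exact differential \(\mathrm{d}S\) after division by the absolute temperature, which is what the statement that quasi-statically transferred heat is a holonomic process function expresses.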


Statement for a system that has a known expression of its internal energy as a function of its extensive state variables

The second law has been shown to be equivalent to the internal energy U being a weakly convex function, when written as a function of extensive properties (mass, volume, entropy, ...).[75][76]

2.3.3 Corollaries

Perpetual motion of the second kind

Main article: Perpetual motion

Before the establishment of the Second Law, many people who were interested in inventing a perpetual motion machine had tried to circumvent the restrictions of the first law of thermodynamics by extracting the massive internal energy of the environment as the power of the machine. Such a machine is called a perpetual motion machine of the second kind. The second law declared the impossibility of such machines.

Carnot theorem

Carnot's theorem (1824) is a principle that limits the maximum efficiency for any possible engine. The efficiency solely depends on the temperature difference between the hot and cold thermal reservoirs. Carnot's theorem states:

All irreversible heat engines between two heat reservoirs are less efficient than a Carnot engine operating between the same reservoirs.

All reversible heat engines between two heat reservoirs are equally efficient with a Carnot engine operating between the same reservoirs.

In his ideal model, the heat of caloric converted into work could be reinstated by reversing the motion of the cycle, a concept subsequently known as thermodynamic reversibility. Carnot, however, further postulated that some caloric is lost, not being converted to mechanical work. Hence, no real heat engine could realise the Carnot cycle's reversibility and was condemned to be less efficient.

Though formulated in terms of caloric (see the obsolete caloric theory), rather than entropy, this was an early insight into the second law.

Clausius Inequality

The Clausius theorem (1854) states that in a cyclic process

∮ δQ/T ≤ 0.

The equality holds in the reversible case[77] and the '<' is in the irreversible case. The reversible case is used to introduce the state function entropy. This is because in cyclic processes the variation of a state function is zero from state functionality.

Thermodynamic temperature

Main article: Thermodynamic temperature

For an arbitrary heat engine, the efficiency is:

η = Wn/qH = (qH − qC)/qH = 1 − qC/qH    (1)

where Wn is the net work done per cycle. Thus the efficiency depends only on qC/qH.

Carnot's theorem states that all reversible engines operating between the same heat reservoirs are equally efficient. Thus, any reversible heat engine operating between temperatures T1 and T2 must have the same efficiency, that is to say, the efficiency is a function of the temperatures only:

qC/qH = f(TH, TC)    (2)

In addition, a reversible heat engine operating between temperatures T1 and T3 must have the same efficiency as one consisting of two cycles, one between T1 and another (intermediate) temperature T2, and the second between T2 and T3. This can only be the case if

f(T1, T3) = q3/q1 = (q2 q3)/(q1 q2) = f(T1, T2) f(T2, T3).

Now consider the case where T1 is a fixed reference temperature: the temperature of the triple point of water. Then for any T2 and T3,

f(T2, T3) = f(T1, T3)/f(T1, T2) = 273.16 · f(T1, T3) / (273.16 · f(T1, T2)).

Therefore, if thermodynamic temperature is defined by

T = 273.16 · f(T1, T)

then the function f, viewed as a function of thermodynamic temperature, is simply

f(T2, T3) = T3/T2,

and the reference temperature T1 will have the value 273.16. (Of course any reference temperature and any positive numerical value could be used; the choice here corresponds to the Kelvin scale.)
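As a reading aid not present in the source, the following sketch checks, with arbitrary temperatures, the composition property used above and the resulting Carnot efficiency on the thermodynamic scale.

    # Composition property f(T1,T3) = f(T1,T2) * f(T2,T3) for f(Ta,Tb) = Tb/Ta,
    # and the Carnot efficiency 1 - qC/qH = 1 - TC/TH that follows from it.
    def f(Ta, Tb):
        # ratio of heats exchanged by a reversible engine between Ta and Tb,
        # expressed on the thermodynamic temperature scale
        return Tb / Ta

    T1, T2, T3 = 500.0, 400.0, 300.0   # K, arbitrary
    assert abs(f(T1, T3) - f(T1, T2) * f(T2, T3)) < 1e-12

    TH, TC = 500.0, 300.0
    eta = 1.0 - f(TH, TC)              # Carnot efficiency 1 - TC/TH
    print(f"Carnot efficiency between {TH} K and {TC} K: {eta:.2%}")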


Entropy

Main article: entropy (classical thermodynamics)

According to the Clausius equality, for a reversible process

∮ δQ/T = 0.

That means the line integral ∫L δQ/T is path independent.

So we can define a state function S called entropy, which satisfies

dS = δQ/T.

With this we can only obtain the difference of entropy by integrating the above formula. To obtain the absolute value, we need the Third Law of Thermodynamics, which states that S = 0 at absolute zero for perfect crystals.

For any irreversible process, since entropy is a state function, we can always connect the initial and terminal states with an imaginary reversible process and integrate on that path to calculate the difference in entropy.

Now reverse the reversible process and combine it with the said irreversible process. Applying the Clausius inequality on this loop,

−ΔS + ∫ δQ/T < 0.

Thus,

ΔS ≥ ∫ δQ/T

where the equality holds if the transformation is reversible. Notice that if the process is an adiabatic process, then δQ = 0, so ΔS ≥ 0.

Energy, available useful work

See also: Exergy

An important and revealing idealized special case is to consider applying the Second Law to the scenario of an isolated system (called the total system or universe), made up of two parts: a sub-system of interest, and the sub-system's surroundings. These surroundings are imagined to be so large that they can be considered as an unlimited heat reservoir at temperature TR and pressure PR, so that no matter how much heat is transferred to (or from) the sub-system, the temperature of the surroundings will remain TR; and no matter how much the volume of the sub-system expands (or contracts), the pressure of the surroundings will remain PR.

Whatever changes to dS and dSR occur in the entropies of the sub-system and the surroundings individually, according to the Second Law the entropy Stot of the isolated total system must not decrease:

dStot = dS + dSR ≥ 0

According to the First Law of Thermodynamics, the change dU in the internal energy of the sub-system is the sum of the heat δq added to the sub-system, less any work δw done by the sub-system, plus any net chemical energy entering the sub-system d(Σ μiR Ni), so that:

dU = δq − δw + d(Σ μiR Ni)

where μiR are the chemical potentials of the chemical species in the external surroundings.

Now the heat leaving the reservoir and entering the sub-system is

δq = TR (−dSR) ≤ TR dS

where we have first used the definition of entropy in classical thermodynamics (alternatively, in statistical thermodynamics, the relation between entropy change, temperature and absorbed heat can be derived); and then the Second Law inequality from above.

It therefore follows that any net work δw done by the sub-system must obey

δw ≤ −dU + TR dS + Σ μiR dNi

It is useful to separate the work δw done by the sub-system into the useful work δwu that can be done by the sub-system, over and beyond the work pR dV done merely by the sub-system expanding against the surrounding external pressure, giving the following relation for the useful work (exergy) that can be done:

δwu ≤ −d(U − TR S + pR V − Σ μiR Ni)

It is convenient to define the right-hand side as the exact derivative of a thermodynamic potential, called the availability or exergy E of the sub-system,

E = U − TR S + pR V − Σ μiR Ni

The Second Law therefore implies that for any process which can be considered as divided simply into a subsystem, and an unlimited temperature and pressure reservoir with which it is in contact,

dE + δwu ≤ 0

i.e. the change in the subsystem's exergy plus the useful work done by the subsystem (or, the change in the subsystem's exergy less any work, additional to that done by the pressure reservoir, done on the system) must be less than or equal to zero.

In sum, if a proper infinite-reservoir-like reference state is chosen as the system surroundings in the real world, then the Second Law predicts a decrease in E for an irreversible process and no change for a reversible process.

dStot ≥ 0 is equivalent to dE + δwu ≤ 0

This expression together with the associated reference state permits a design engineer working at the macroscopic scale (above the thermodynamic limit) to utilize the Second Law without directly measuring or considering entropy change in a total isolated system (also, see process engineer). Those changes have already been considered by the assumption that the system under consideration can reach equilibrium with the reference state without altering the reference state. An efficiency for a process or collection of processes that compares it to the reversible ideal may also be found (see second law efficiency).

This approach to the Second Law is widely utilized in engineering practice, environmental accounting, systems ecology, and other disciplines.

[Figure: Nicolas Léonard Sadi Carnot in the traditional uniform of a student of the École Polytechnique.]

2.3.4 History

See also: History of entropy

The first theory of the conversion of heat into mechanical work is due to Nicolas Léonard Sadi Carnot in 1824. He was the first to realize correctly that the efficiency of this conversion depends on the difference of temperature between an engine and its environment.

Recognizing the significance of James Prescott Joule's work on the conservation of energy, Rudolf Clausius was the first to formulate the second law during 1850, in this form: heat does not flow spontaneously from cold to hot bodies. While common knowledge now, this was contrary to the caloric theory of heat popular at the time, which considered heat as a fluid. From there he was able to infer the principle of Sadi Carnot and the definition of entropy (1865).

Established during the 19th century, the Kelvin-Planck statement of the Second Law says, "It is impossible for any device that operates on a cycle to receive heat from a single reservoir and produce a net amount of work." This was shown to be equivalent to the statement of Clausius.

The ergodic hypothesis is also important for the Boltzmann approach. It says that, over long periods of time, the time spent in some region of the phase space of microstates with the same energy is proportional to the volume of this region, i.e. that all accessible microstates are equally probable over a long period of time. Equivalently, it says that time average and average over the statistical ensemble are the same.

There is a traditional doctrine, starting with Clausius, that entropy can be understood in terms of molecular 'disorder' within a macroscopic system. This doctrine is obsolescent.[78][79][80]

Account given by Clausius

[Figure: Rudolf Clausius.]

In 1856, the German physicist Rudolf Clausius stated what he called the "second fundamental theorem in the mechanical theory of heat" in the following form:[81]

∫ δQ/T = −N

where Q is heat, T is temperature and N is the equivalence-value of all uncompensated transformations involved in a cyclical process. Later, in 1865, Clausius would come to define equivalence-value as entropy. On the heels of this definition, that same year, the most famous version of the second law was read in a presentation at the Philosophical Society of Zurich on April 24, in which, in the end of his presentation, Clausius concludes:

The entropy of the universe tends to a maximum.

This statement is the best-known phrasing of the second law. Because of the looseness of its language, e.g. universe, as well as lack of specific conditions, e.g. open, closed, or isolated, many people take this simple statement to mean that the second law of thermodynamics applies virtually to every subject imaginable. This, of course, is not true; this statement is only a simplified version of a more extended and precise description.

In terms of time variation, the mathematical statement of the second law for an isolated system undergoing an arbitrary transformation is:

dS/dt ≥ 0

where S is the entropy of the system and t is time.

The equality sign applies after equilibration. An alternative way of formulating the second law for isolated systems is:

dS/dt = Ṡi with Ṡi ≥ 0

with Ṡi the sum of the rate of entropy production by all processes inside the system. The advantage of this formulation is that it shows the effect of the entropy production. The rate of entropy production is a very important concept since it determines (limits) the efficiency of thermal machines. Multiplied with ambient temperature Ta it gives the so-called dissipated energy Pdiss = Ta Ṡi.

The expression of the second law for closed systems (so, allowing heat exchange and moving boundaries, but not exchange of matter) is:

dS/dt = Q̇/T + Ṡi with Ṡi ≥ 0

Here Q̇ is the heat flow into the system and T is the temperature at the point where the heat enters the system.

The equality sign holds in the case that only reversible processes take place inside the system. If irreversible processes take place (which is the case in real systems in operation) the >-sign holds. If heat is supplied to the system at several places we have to take the algebraic sum of the corresponding terms.

For open systems (also allowing exchange of matter):
For open systems (also allowing exchange of matter):


Q
T

+ S + S i with S i 0

which states that the entropy of a thermally isolated system


can only increase, is a trivial consequence of the equal prior
probability postulate, if we restrict the notion of the entropy
Here S is the ow of entropy into the system associated
to systems in thermal equilibrium. The entropy of an isowith the ow of matter entering the system. It should not be
lated system in thermal equilibrium containing an amount
confused with the time derivative of the entropy. If matter
of energy of E is:
is supplied at several places we have to take the algebraic
sum of these contributions.
dS
dt

2.3.5

Statistical mechanics

Statistical mechanics gives an explanation for the second


law by postulating that a material is composed of atoms
and molecules which are in constant motion. A particular
set of positions and velocities for each particle in the system is called a microstate of the system and because of the
constant motion, the system is constantly changing its microstate. Statistical mechanics postulates that, in equilibrium, each microstate that the system might be in is equally
likely to occur, and when this assumption is made, it leads
directly to the conclusion that the second law must hold in a
statistical sense. That is, the second law will hold on average, with a statistical variation on the order of 1/N where
N is the number of particles in the system. For everyday
(macroscopic) situations, the probability that the second law
will be violated is practically zero. However, for systems
with a small number of particles, thermodynamic parameters, including the entropy, may show signicant statistical
deviations from that predicted by the second law. Classical
thermodynamic theory does not deal with these statistical
variations.

2.3.6

Derivation from statistical mechanics

Further information: H-theorem


Due to Loschmidts paradox, derivations of the Second Law
have to make an assumption regarding the past, namely that
the system is uncorrelated at some time in the past; this allows for simple probabilistic treatment. This assumption is
usually thought as a boundary condition, and thus the second Law is ultimately a consequence of the initial conditions somewhere in the past, probably at the beginning of
the universe (the Big Bang), though other scenarios have
also been suggested.[82][83][84]
Given these assumptions, in statistical mechanics, the Second Law is not a postulate, rather it is a consequence of the
fundamental postulate, also known as the equal prior probability postulate, so long as one is clear that simple probability arguments are applied only to the future, while for the
past there are auxiliary sources of information which tell us
that it was low entropy. The rst part of the second law,

S = kB ln [ (E)]
where (E) is the number of quantum states in a small
interval between E and E + E . Here E is a macroscopically small energy interval that is kept xed. Strictly speaking this means that the entropy depends on the choice of E
. However, in the thermodynamic limit (i.e. in the limit of
innitely large system size), the specic entropy (entropy
per unit volume or per unit mass) does not depend on E .
Suppose we have an isolated system whose macroscopic state is specified by a number of variables. These macroscopic variables can, e.g., refer to the total volume, the positions of pistons in the system, etc. Then Ω will depend on the values of these variables. If a variable is not fixed (e.g. we do not clamp a piston in a certain position), then, because all the accessible states are equally likely in equilibrium, the free variable in equilibrium will be such that Ω is maximized, as that is the most probable situation in equilibrium.

If the variable was initially fixed to some value then, upon release and when the new equilibrium has been reached, the fact that the variable will adjust itself so that Ω is maximized implies that the entropy will have increased, or it will have stayed the same (if the value at which the variable was fixed happened to be the equilibrium value). Suppose we start from an equilibrium situation and we suddenly remove a constraint on a variable. Then right after we do this, there are a number Ω of accessible microstates, but equilibrium has not yet been reached, so the actual probabilities of the system being in some accessible state are not yet equal to the prior probability of 1/Ω. We have already seen that in the final equilibrium state, the entropy will have increased or have stayed the same relative to the previous equilibrium state. Boltzmann's H-theorem, however, proves that the quantity H decreases monotonically as a function of time during the intermediate out-of-equilibrium state; since −H plays the role of a statistical entropy, this corresponds to a monotonic approach toward equilibrium.
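
The statement that removing a constraint can only increase Ω can be checked in a toy model. In the sketch below the two subsystems are Einstein solids, and the oscillator and energy numbers are arbitrary choices made only for this illustration.

from math import comb, log

def omega(N, q):
    # Multiplicity of an Einstein solid: N oscillators sharing q energy quanta.
    return comb(q + N - 1, q)

N_A, N_B = 30, 50          # oscillators in each subsystem (arbitrary)
q_total = 80               # total number of energy quanta (arbitrary)
q_A_fixed = 10             # quanta initially clamped in subsystem A

# Omega with the internal constraint in place, and after it is released.
omega_constrained = omega(N_A, q_A_fixed) * omega(N_B, q_total - q_A_fixed)
omega_free = sum(omega(N_A, qA) * omega(N_B, q_total - qA)
                 for qA in range(q_total + 1))

print("ln Omega, constrained:        ", log(omega_constrained))
print("ln Omega, constraint removed: ", log(omega_free))
# The second value is never smaller than the first, because the constrained
# microstates are a subset of the states accessible after the release; hence
# S = kB ln Omega cannot decrease when a constraint is removed.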

Derivation of the entropy change for reversible processes


The second part of the Second Law states that the entropy
change of a system undergoing a reversible process is given
by:

dS = δQ/T

where the temperature is defined as:

1/(kB T) = d ln[Ω(E)]/dE

See here for the justification for this definition. Suppose that the system has some external parameter, x, that can be changed. In general, the energy eigenstates of the system will depend on x. According to the adiabatic theorem of quantum mechanics, in the limit of an infinitely slow change of the system's Hamiltonian, the system will stay in the same energy eigenstate and thus change its energy according to the change in energy of the energy eigenstate it is in.

The generalized force, X, corresponding to the external variable x is defined such that X dx is the work performed by the system if x is increased by an amount dx. E.g., if x is the volume, then X is the pressure. The generalized force for a system known to be in energy eigenstate Er is given by:

X = −dEr/dx

Since the system can be in any energy eigenstate within an interval of δE, we define the generalized force for the system as the expectation value of the above expression:

X = −⟨dEr/dx⟩

To evaluate the average, we partition the Ω(E) energy eigenstates by counting how many of them have a value for dEr/dx within a range between Y and Y + δY. Calling this number ΩY(E), we have:

Ω(E) = ΣY ΩY(E)

The average defining the generalized force can now be written:

X = −(1/Ω(E)) ΣY Y ΩY(E)

We can relate this to the derivative of the entropy with respect to x at constant energy E as follows. Suppose we change x to x + dx. Then Ω(E) will change because the energy eigenstates depend on x, causing energy eigenstates to move into or out of the range between E and E + δE. Let's focus again on the energy eigenstates for which dEr/dx lies within the range between Y and Y + δY. Since these energy eigenstates increase in energy by Y dx, all such energy eigenstates that are in the interval ranging from E − Y dx to E move from below E to above E. There are

NY(E) = (ΩY(E)/δE) Y dx

such energy eigenstates. If Y dx ≤ δE, all these energy eigenstates will move into the range between E and E + δE and contribute to an increase in Ω. The number of energy eigenstates that move from below E + δE to above E + δE is, of course, given by NY(E + δE). The difference

NY(E) − NY(E + δE)

is thus the net contribution to the increase in Ω. Note that if Y dx is larger than δE there will be energy eigenstates that move from below E to above E + δE. They are counted in both NY(E) and NY(E + δE), therefore the above expression is also valid in that case.

Expressing the above expression as a derivative with respect to E and summing over Y yields the expression:

(∂Ω/∂x)E = −ΣY Y (∂ΩY/∂E)x = (∂(ΩX)/∂E)x

The logarithmic derivative of Ω with respect to x is thus given by:

(∂ ln Ω/∂x)E = X/(kB T) + (∂X/∂E)x

The first term is intensive, i.e. it does not scale with system size. In contrast, the last term scales as the inverse system size and will thus vanish in the thermodynamic limit. We have thus found that:

(∂S/∂x)E = X/T

Combining this with

(∂S/∂E)x = 1/T

gives:

dS = (∂S/∂E)x dE + (∂S/∂x)E dx = dE/T + (X/T) dx = δQ/T

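The temperature definition 1/(kB T) = d ln[Ω(E)]/dE used above can be checked numerically on a simple model; the Einstein solid below is an assumed stand-in chosen only for this illustration, with kB and the energy quantum set to 1 so that E = q.

from math import lgamma, exp

def ln_omega(N, q):
    # ln of the multiplicity C(q + N - 1, q), via log-gamma for numerical safety.
    return lgamma(q + N) - lgamma(q + 1) - lgamma(N)

N, q = 500, 300
beta = ln_omega(N, q + 1) - ln_omega(N, q)   # finite-difference d ln(Omega)/dE
T = 1.0 / beta

print("temperature from counting states:", T)
print("occupation q/N:                  ", q / N)
print("Planck-type 1/(exp(1/T) - 1):    ", 1.0 / (exp(1.0 / T) - 1.0))
# The last two numbers nearly coincide: the statistically defined temperature
# reproduces the expected equilibrium occupation of the oscillators.
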
Derivation for systems described by the canonical ensemble

If a system is in thermal contact with a heat bath at some temperature T then, in equilibrium, the probability distribution over the energy eigenvalues is given by the canonical ensemble:

Pj = exp(−Ej/(kB T))/Z

Here Z is a factor that normalizes the sum of all the probabilities to 1; this function is known as the partition function. We now consider an infinitesimal reversible change in the temperature and in the external parameters on which the energy levels depend. It follows from the general formula for the entropy:

S = −kB Σj Pj ln(Pj)

that

dS = −kB Σj ln(Pj) dPj

Inserting the formula for Pj for the canonical ensemble in here gives:

dS = (1/T) Σj Ej dPj = (1/T) Σj d(Ej Pj) − (1/T) Σj Pj dEj = (dE + δW)/T = δQ/T

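As a numerical cross-check of this result, the sketch below evaluates S = −kB Σj Pj ln Pj for an arbitrarily chosen four-level spectrum, keeps the levels fixed (so no work is done and δQ = dE), and verifies that a small temperature change gives dS ≈ δQ/T; the spectrum, temperature and step size are assumptions made only for the illustration (units with kB = 1).

import numpy as np

levels = np.array([0.0, 1.0, 2.0, 3.0])   # assumed toy spectrum

def canonical(T):
    w = np.exp(-levels / T)
    P = w / w.sum()                        # canonical probabilities
    E = (P * levels).sum()                 # internal energy
    S = -(P * np.log(P)).sum()             # canonical entropy, kB = 1
    return E, S

T, dT = 1.5, 1e-5
E1, S1 = canonical(T)
E2, S2 = canonical(T + dT)

print("dS        =", S2 - S1)
print("delta_Q/T =", (E2 - E1) / T)        # delta_Q = dE, since the dE_j are zero
# The two values agree up to terms of order dT**2, i.e. dS = delta_Q / T.
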
2.3.7 Living organisms

There are two principal ways of formulating thermodynamics, (a) through passages from one state of thermodynamic equilibrium to another, and (b) through cyclic processes, by which the system is left unchanged, while the total entropy of the surroundings is increased. These two ways help to understand the processes of life. This topic is mostly beyond the scope of this present article, but has been considered by several authors, such as Erwin Schrödinger, Léon Brillouin[85] and Isaac Asimov. It is also the topic of current research.

To a fair approximation, living organisms may be considered as examples of (b). Approximately, an animal's physical state cycles by the day, leaving the animal nearly unchanged. Animals take in food, water, and oxygen, and, as a result of metabolism, give out breakdown products and heat. Plants take in radiative energy from the sun, which may be regarded as heat, and carbon dioxide and water. They give out oxygen. In this way they grow. Eventually they die, and their remains rot. This can be regarded as a cyclic process. Overall, the sunlight is from a high-temperature source, the sun, and its energy is passed to a lower-temperature sink, the soil. This is an increase of entropy of the surroundings of the plant. Thus animals and plants obey the second law of thermodynamics, considered in terms of cyclic processes. Simple concepts of efficiency of heat engines are hardly applicable to this problem because they assume closed systems.

From the thermodynamic viewpoint that considers (a), passages from one equilibrium state to another, only a roughly approximate picture appears, because living organisms are never in states of thermodynamic equilibrium. Living organisms must often be considered as open systems, because they take in nutrients and give out waste products. Thermodynamics of open systems is currently often considered in terms of passages from one state of thermodynamic equilibrium to another, or in terms of flows in the approximation of local thermodynamic equilibrium. The problem for living organisms may be further simplified by the approximation of assuming a steady state with unchanging flows. General principles of entropy production for such approximations are subject to unsettled current debate or research. Nevertheless, ideas derived from this viewpoint on the second law of thermodynamics are enlightening about living creatures.

2.3.8 Gravitational systems

In systems that do not require for their description the general theory of relativity, bodies always have positive heat capacity, meaning that the temperature rises with energy. Therefore, when energy flows from a high-temperature object to a low-temperature object, the source temperature is decreased while the sink temperature is increased; hence temperature differences tend to diminish over time. This is not always the case for systems in which the gravitational force is important and the general theory of relativity is required. Such systems can spontaneously change towards uneven spread of mass and energy. This applies to the universe on a large scale, and consequently it may be difficult or impossible to apply the second law to it.[27] Beyond this, the thermodynamics of systems described by the general theory of relativity is beyond the scope of the present article.
2.3.9 Non-equilibrium states

Main article: Non-equilibrium thermodynamics

The theory of classical or equilibrium thermodynamics is idealized. A main postulate or assumption, often not even explicitly stated, is the existence of systems in their own internal states of thermodynamic equilibrium. In general, a region of space containing a physical system at a given time, that may be found in nature, is not in thermodynamic equilibrium, read in the most stringent terms. In looser terms, nothing in the entire universe is or has ever been truly in exact thermodynamic equilibrium.[27][86]

For purposes of physical analysis, it is often convenient enough to make an assumption of thermodynamic equilibrium. Such an assumption may rely on trial and error for its justification. If the assumption is justified, it can often be very valuable and useful because it makes available the theory of thermodynamics. Elements of the equilibrium assumption are that a system is observed to be unchanging over an indefinitely long time, and that there are so many particles in a system that its particulate nature can be entirely ignored. Under such an equilibrium assumption, in general, there are no macroscopically detectable fluctuations. There is an exception, the case of critical states, which exhibit to the naked eye the phenomenon of critical opalescence. For laboratory studies of critical states, exceptionally long observation times are needed.

In all cases, the assumption of thermodynamic equilibrium, once made, implies as a consequence that no putative candidate "fluctuation" alters the entropy of the system.

It can easily happen that a physical system exhibits internal macroscopic changes that are fast enough to invalidate the assumption of the constancy of the entropy, or that a physical system has so few particles that the particulate nature is manifest in observable fluctuations. Then the assumption of thermodynamic equilibrium is to be abandoned. There is no unqualified general definition of entropy for non-equilibrium states.[87]

There are intermediate cases, in which the assumption of local thermodynamic equilibrium is a very good approximation,[88][89][90][91] but strictly speaking it is still an approximation, not theoretically ideal. For non-equilibrium situations in general, it may be useful to consider statistical mechanical definitions of other quantities that may be conveniently called 'entropy', but they should not be confused or conflated with thermodynamic entropy properly defined for the second law. These other quantities indeed belong to statistical mechanics, not to thermodynamics, the primary realm of the second law. The physics of macroscopically observable fluctuations is beyond the scope of this article.

2.3.10 Arrow of time

Further information: Entropy (arrow of time)
See also: Arrow of time

The second law of thermodynamics is a physical law that is not symmetric to reversal of the time direction. The second law has been proposed to supply an explanation of the difference between moving forward and backwards in time, such as why the cause precedes the effect (the causal arrow of time).[92]

2.3.11 Irreversibility

Irreversibility in thermodynamic processes is a consequence of the asymmetric character of thermodynamic operations, and not of any internally irreversible microscopic properties of the bodies. Thermodynamic operations are macroscopic external interventions imposed on the participating bodies, not derived from their internal properties. There are reputed paradoxes that arise from failure to recognize this.

Loschmidt's paradox

Main article: Loschmidt's paradox

Loschmidt's paradox, also known as the reversibility paradox, is the objection that it should not be possible to deduce an irreversible process from the time-symmetric dynamics that describe the microscopic evolution of a macroscopic system.

In the opinion of Schrödinger, "It is now quite obvious in what manner you have to reformulate the law of entropy, or for that matter, all other irreversible statements, so that they be capable of being derived from reversible models. You must not speak of one isolated system but at least of two, which you may for the moment consider isolated from the rest of the world, but not always from each other."[93]

The two systems are isolated from each other by the wall, until it is removed by the thermodynamic operation, as envisaged by the law. The thermodynamic operation is externally imposed, not subject to the reversible microscopic dynamical laws that govern the constituents of the systems. It is the cause of the irreversibility. The statement of the law in this present article complies with Schrödinger's advice. The cause–effect relation is logically prior to the second law, not derived from it.

Poincaré recurrence theorem

Main article: Poincaré recurrence theorem

The Poincaré recurrence theorem considers a theoretical microscopic description of an isolated physical system. This may be considered as a model of a thermodynamic system after a thermodynamic operation has removed an internal wall. The system will, after a sufficiently long time, return to a microscopically defined state very close to the initial one. The Poincaré recurrence time is the length of time elapsed until the return. It is exceedingly long, likely longer than the life of the universe, and depends sensitively on the geometry of the wall that was removed by the thermodynamic operation. The recurrence theorem may be perceived as apparently contradicting the second law of thermodynamics. More obviously, however, it is simply a microscopic model of thermodynamic equilibrium in an isolated system formed by removal of a wall between two systems. For a typical thermodynamical system, the recurrence time is so large (many, many times longer than the lifetime of the universe) that, for all practical purposes, one cannot observe the recurrence. One might wish, nevertheless, to imagine that one could wait for the Poincaré recurrence, and then re-insert the wall that was removed by the thermodynamic operation. It is then evident that the appearance of irreversibility is due to the utter unpredictability of the Poincaré recurrence, given only that the initial state was one of thermodynamic equilibrium, as is the case in macroscopic thermodynamics. Even if one could wait for it, one has no practical possibility of picking the right instant at which to re-insert the wall. The Poincaré recurrence theorem provides a solution to Loschmidt's paradox. If an isolated thermodynamic system could be monitored over increasingly many multiples of the average Poincaré recurrence time, the thermodynamic behavior of the system would become invariant under time reversal.
Maxwell's demon

Main article: Maxwell's demon

James Clerk Maxwell imagined one container divided into two parts, A and B. Both parts are filled with the same gas at equal temperatures and placed next to each other, separated by a wall. Observing the molecules on both sides, an imaginary demon guards a microscopic trapdoor in the wall. When a faster-than-average molecule from A flies towards the trapdoor, the demon opens it, and the molecule will fly from A to B. The average speed of the molecules in B will have increased, while in A they will have slowed down on average. Since average molecular speed corresponds to temperature, the temperature decreases in A and increases

in B, contrary to the second law of thermodynamics.

One response to this question was suggested in 1929 by Leó Szilárd and later by Léon Brillouin. Szilárd pointed out that a real-life Maxwell's demon would need to have some means of measuring molecular speed, and that the act of acquiring information would require an expenditure of energy.

Maxwell's demon repeatedly alters the permeability of the wall between A and B. It is therefore performing thermodynamic operations on a microscopic scale, not just observing ordinary spontaneous or natural macroscopic thermodynamic processes.

2.3.12 Quotations

"The law that entropy always increases holds, I think, the supreme position among the laws of Nature. If someone points out to you that your pet theory of the universe is in disagreement with Maxwell's equations, then so much the worse for Maxwell's equations. If it is found to be contradicted by observation, well, these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation."
Sir Arthur Stanley Eddington, The Nature of the Physical World (1927)

"There have been nearly as many formulations of the second law as there have been discussions of it."
Philosopher/physicist P.W. Bridgman (1941)

"Clausius is the author of the sibyllic utterance, 'The energy of the universe is constant; the entropy of the universe tends to a maximum.' The objectives of continuum thermomechanics stop far short of explaining the 'universe', but within that theory we may easily derive an explicit statement in some ways reminiscent of Clausius, but referring only to a modest object: an isolated body of finite size."
Truesdell, C., Muncaster, R.G. (1980). Fundamentals of Maxwell's Kinetic Theory of a Simple Monatomic Gas, Treated as a Branch of Rational Mechanics, Academic Press, New York, ISBN 0-12-701350-4, p. 17.

2.3.13 See also

Clausius–Duhem inequality
Fluctuation theorem
History of thermodynamics
Jarzynski equality
Laws of thermodynamics
Maximum entropy thermodynamics
Reflections on the Motive Power of Fire
Thermal diode
Relativistic heat conduction

2.3.14 References

[1] Guggenheim, E.A. (1949), p. 454: "It is usually when a system is tampered with that changes take place."

[2] Pippard, A.B. (1957/1966), p. 96: "In the first two examples the changes treated involved the transition from one equilibrium state to another, and were effected by altering the constraints imposed upon the systems, in the first by removal of an adiabatic wall and in the second by altering the volume to which the gas was confined."

[3] Guggenheim, E.A. (1949).

[4] Denbigh, K. (1954/1981), p. 75.


[5] Atkins, P.W., de Paula, J. (2006), p. 78: The opposite
change, the spreading of the objects energy into the surroundings as thermal motion, is natural. It may seem very
puzzling that the spreading out of energy and matter, the
collapse into disorder, can lead to the formation of such ordered structures as crystals or proteins. Nevertheless, in due
course, we shall see that dispersal of energy and matter accounts for change in all its forms.
[6] W. Thomson (1852).
[7] Pippard, A.B. (1957/1966), p. 97: it is the act of removing
the wall and not the subsequent ow of heat which increases
the entropy.
[8] Adkins, C.J. (1968/1983), p. 4: contain the system within
walls of some special kind that allow or prevent various sorts
of interaction between the system and its surroundings"; p.
5: a section of wall through which one or more of the components may pass while others are contained; such a wall
is called semipermeable; p. 49: Take the walls to be thermally insulating"; p. 144: If isotropic radiation is trapped
in a vessel with perfectly reecting walls ...
[9] Callen, H.B. (1960/1985), p. 16: In general, a wall that
constrains an extensive parameter of a system is said to be
restrictive with respect to that parameter.
[10] Baierlein, R. (1999), p. 22: no energy passes through any
walls"; p. 118: cubical cavity with perfectly reecting metal
walls.
[11] Pippard, A.B. (1957/1966), p. 106: In this example we
have considered the constraint to take the form of a mechanical barrier"; p. 108: We shall consider the equilibrium between a liquid and its vapour, under two dierent
constraints: rst, with the vessel immersed in a constanttemperature bath, and open to a constant external pressure;
and secondly, with the vessel closed and thermally isolated.
[12] Buchdahl, H.A. (1966), p. 72: It should be noted that there
need be no restriction on heat exchange within the system.
[13] Blundell, S.J., Blundell, K.M. (2006), p. 16: we are applying a constraint to the system, either constraining the
volume of the gas to be xed, or constraining the pressure
of the gas to be xed"; p. 108: The rst constraint is that
of keeping the volume constant.
[14] Guggenheim, E.A. (1949), p.454: It is usually when a system is tampered with that changes take place.


[15] Pippard, A.B. (1957/1966), p. 96: In the rst two examples


the changes treated involved the transition from one equilibrium state to another, and were eected by altering the constraints imposed upon the systems, in the rst by removal of
an adiabatic wall and in the second by altering the volume
to which the gas was conned"; "It is not possible to vary the
constraints of an isolated system in such a way as to decrease
the entropy"; p. 97: It will be seen then that our second
statement of the entropy law has much to recommend it in
that it concentrates upon the essential feature of a thermodynamic change, the variation of the constraints to which
a system is subjected"; In the same way when two bodies
at dierent temperatures are placed in thermal contact by
removal of an adiabatic wall, it is the act of removing the
wall and not the subsequent ow of heat which increases the
entropy.


called dissipative work"; p. 321: But in any real case the


motion of the piston will cause viscous damping within the
systems and the kinetic energy of the piston will eventually
be dissipated.
[24] Adkins, C.J. (1968/1983), p. 76.
[25] Phil Attard (2012). Non-equilibrium Thermodynamics and
Statistical Mechanics: Foundations and Applications. Oxford
University Press. p. 24. ISBN 0-19-163977-X.: Physically, the result means that for every uctuation from equilibrium, there is a reverse uctuation back to equilibrium,
where equilibrium means the most likely macrostate.
[26] Uffink, J. (2001).
[27] Grandy, W.T. (Jr) (2008), p. 151.
Tisza, L. (1966), p. 40: This conforms to the notion that entropy, a state function, is associated with
equilibrium states.

[16] Kittel, C., Kroemer, H. (1969/1980), p. 46: the entropy of


a closed system tends to remain constant or to increase when
a constraint internal to the system is removed.

[28]

[17] Uffink, J. (2003), p. 133: A process is then conceived of


as being triggered by the cancellation of one or more of
these constraints (e.g. mixing or expansion of gases after
the removal of a partition, loosening a previously xed piston, etc.).

[29] Callen, H.B. (1960/1985), p. 27: It must be stressed that


we postulate the existence of the entropy only for equilibrium states and that our postulate makes no reference whatsoever to nonequilibrium states.

[18] Bridgman, P.W. (1943), p. 153: The entropy increase


arising from this process, that is, exchange of radiation between bodies at dierent temperatures, is not to be sought
in the initial act of absorption, which may be non-entropyincreasing, but is to be found after the initial absorption in
the spreading out of the spectrum of the absorbed energy
from the distribution characteristic of the higher temperature of its source to the distribution characteristic of the
lower temperature of the sink.
[19] Guggenheim, E.A. (1949), p. 452: To the question what in
one word does entropy really mean, the author would have
no hesitation in replying 'Accessibility' or 'Spread'.

[30] Unk, J. (2001), p. 306: A common and preliminary description of the Second Law is that it guarantees that all physical systems in thermal equilibrium can be characterised by
a quantity called entropy"; p. 308: In thermodynamics, entropy is not dened for arbitrary states out of equilibrium.
[31] Planck, M. (1897/1903), pp. 4041.
[32] Munster A. (1970), pp. 89, 5051.
[33] Planck, M. (1897/1903), pp. 79107.
[34] Bailyn, M. (1994), Section 71, pp. 113154.
[35] Bailyn, M. (1994), p. 120.

[20] Denbigh, K. (1954/1981), p. 75.

[36] Adkins, C.J. (1968/1983), p. 75.

[21] Atkins, P.W., de Paula, J. (2006), p. 78: The opposite


change, the spreading of the objects energy into the surroundings as thermal motion, is natural. It may seem very
puzzling that the spreading out of energy and matter, the
collapse into disorder, can lead to the formation of such ordered structures as crystals or proteins. Nevertheless, in due
course, we shall see that dispersal of energy and matter accounts for change in all its forms.

[37] Münster, A. (1970), p. 45.

[22] Tisza, L., Quay, P.M. (1963), 'The statistical thermodynamics of equilibrium', Annals of Physics, 26: 4890; p. 65;
p. 88: The dispersal of the d.f.[distribution function] over
these states is their entropy.
[23] Callen, H.B. (1960/1985). P. 63: In order for the change to
be quasi-static, this compression must be dissipated throughout the entire volume of gas before the next appreciable compression occurs"; p. 69: The excess work done in an irreversible process over that done in a reversible process, is

[38] J. S. Dugdale (1996). Entropy and its Physical Meaning.


Taylor & Francis. p. 13. ISBN 0-7484-0569-0. This law is
the basis of temperature.
[39] Zemansky, M.W. (1968), pp. 207209.
[40] Quinn, T.J. (1983), p. 8.
[41] Concept and Statements of the Second Law. web.mit.edu.
Retrieved 2010-10-07.
[42] Lieb & Yngvason (1999).
[43] Rao (2004), p. 213.
[44] Carnot, S. (1824/1986).
[45] Truesdell, C. (1980), Chapter 5.


[46] Adkins, C.J. (1968/1983), pp. 5658.


[47] Münster, A. (1970), p. 11.
[48] Kondepudi, D., Prigogine, I. (1998), pp.6775.
[49] Lebon, G., Jou, D., Casas-Vázquez, J. (2008), p. 10.
[50] Eu, B.C. (2002), pp. 3235.
[51] Clausius (1850).


[75] van Gool, W.; Bruggink, J.J.C. (Eds) (1985). Energy and
time in the economic and physical sciences. North-Holland.
pp. 4156. ISBN 0-444-87748-7.
[76] Grubbström, Robert W. (2007). "An Attempt to Introduce Dynamics Into Generalised Exergy Considerations". Applied Energy 84: 701–718. doi:10.1016/j.apenergy.2007.01.003.
[77] Clausius theorem at Wolfram Research

[52] Clausius (1854), p. 86.


[53] Thomson (1851).
[54] Planck, M. (1897/1903), p. 86.
[55] Roberts, J.K., Miller, A.R. (1928/1960), p. 319.
[56] ter Haar, D., Wergeland, H. (1966), p. 17.
[57] Pippard, A.B. (1957/1966), p. 30.
[58] Čápek, V., Sheehan, D.P. (2005), p. 3

[78] Denbigh, K.G., Denbigh, J.S. (1985). Entropy in Relation to


Incomplete Knowledge, Cambridge University Press, Cambridge UK, ISBN 0-521-25677-1, pp. 4344.
[79] Grandy, W.T., Jr (2008). Entropy and the Time Evolution
of Macroscopic Systems, Oxford University Press, Oxford,
ISBN 978-0-19-954617-6, pp. 5558.
[80] Entropy Sites A Guide Content selected by Frank L.
Lambert

[59] Planck, M. (1897/1903), p. 100.

[81] Clausius (1867).

[60] Planck, M. (1926), p. 463, translation by Uffink, J. (2003), p. 131.

[82] Hawking,
SW (1985).
Arrow of time
in cosmology.
Phys.
Rev.
D 32 (10):
24892495.
Bibcode:1985PhRvD..32.2489H.
doi:10.1103/PhysRevD.32.2489.
Retrieved 2013-0215.

[61] Roberts, J.K., Miller, A.R. (1928/1960), p. 382. This


source is partly verbatim from Plancks statement, but does
not cite Planck. This source calls the statement the principle
of the increase of entropy.
[62] Uhlenbeck, G.E., Ford, G.W. (1963), p. 16.
[63] Carathéodory, C. (1909).
[64] Buchdahl, H.A. (1966), p. 68.
[65] Sychev, V. V. (1991). The Dierential Equations of Thermodynamics. Taylor & Francis. ISBN 978-1-56032-121-7.
Retrieved 2012-11-26.

[83] Greene, Brian (2004). The Fabric of the Cosmos. Alfred A.


Knopf. p. 171. ISBN 0-375-41288-3.
[84] Lebowitz, Joel L. (September 1993). Boltzmanns Entropy
and Times Arrow (PDF). Physics Today 46 (9): 3238.
Bibcode:1993PhT....46i..32L. doi:10.1063/1.881363. Retrieved 2013-02-22.
[85] Léon Brillouin, Science and Information Theory (Academic Press, 1962) (Dover, 2004)

[66] Lieb & Yngvason (1999), p. 49.


[67] Planck, M. (1926).

[86] Callen, H.B. (1960/1985), p. 15.

[68] Buchdahl, H.A. (1966), p. 69.

[87] Lieb, E.H., Yngvason, J. (2003), p. 190.

[69] Uffink, J. (2003), pp. 129–132.

[88] Gyarmati, I. (1967/1970), pp. 4-14.

[70] Truesdell, C., Muncaster, R.G. (1980). Fundamentals


of Maxwells Kinetic Theory of a Simple Monatomic Gas,
Treated as a Branch of Rational Mechanics, Academic Press,
New York, ISBN 0-12-701350-4, p. 15.

[89] Glansdor, P., Prigogine, I. (1971).


[90] Müller, I. (1985).

[71] Planck, M. (1897/1903), p. 81.

[91] Müller, I. (2003).

[72] Planck, M. (1926), p. 457, Wikipedia editors translation.


[73] Lieb, E.H., Yngvason, J. (2003), p. 149.

[92] Halliwell, J.J.; et al. (1994). Physical Origins of Time Asymmetry. Cambridge. ISBN 0-521-56837-4. chapter 6

[74] Borgnakke, C., Sonntag., R.E. (2009), p. 304.

[93] Schrödinger, E. (1950), p. 192.

Bibliography of citations
Adkins, C.J. (1968/1983). Equilibrium Thermodynamics, (1st edition 1968), third edition 1983, Cambridge University Press, Cambridge UK, ISBN 0-52125445-0.
Atkins, P.W., de Paula, J. (2006). Atkins Physical
Chemistry, eighth edition, W.H. Freeman, New York,
ISBN 978-0-7167-8759-4.
Attard, P. (2012). Non-equilibrium Thermodynamics and Statistical Mechanics: Foundations and Applications, Oxford University Press, Oxford UK, ISBN
978-0-19-966276-0.
Baierlein, R. (1999). Thermal Physics, Cambridge
University Press, Cambridge UK, ISBN 0-521-590825.
Bailyn, M. (1994). A Survey of Thermodynamics,
American Institute of Physics, New York, ISBN 088318-797-3.
Blundell, S.J., Blundell, K.M. (2006). Concepts in
Thermal Physics, Oxford University Press, Oxford
UK, ISBN 978-0-19-856769-1.
Boltzmann, L. (1896/1964). Lectures on Gas Theory, translated by S.G. Brush, University of California
Press, Berkeley.
Borgnakke, C., Sonntag., R.E. (2009). Fundamentals of Thermodynamics, seventh edition, Wiley, ISBN
978-0-470-04192-5.
Buchdahl, H.A. (1966). The Concepts of Classical
Thermodynamics, Cambridge University Press, Cambridge UK.
Bridgman, P.W. (1943). The Nature of Thermodynamics, Harvard University Press, Cambridge MA.
Callen, H.B. (1960/1985). Thermodynamics and an
Introduction to Thermostatistics, (1st edition 1960) 2nd
edition 1985, Wiley, New York, ISBN 0-471-862568.
Čápek, V., Sheehan, D.P. (2005). Challenges to the Second Law of Thermodynamics: Theory and Experiment, Springer, Dordrecht, ISBN 1-4020-3015-0.
C. Carathéodory (1909). "Untersuchungen über die Grundlagen der Thermodynamik". Mathematische Annalen 67: 355–386. doi:10.1007/bf01450409. "Axiom II: In jeder beliebigen Umgebung eines willkürlich vorgeschriebenen Anfangszustandes gibt es Zustände, die durch adiabatische Zustandsänderungen nicht beliebig approximiert werden können."



(p.363). A translation may be found here. Also a
mostly reliable translation is to be found at Kestin, J.
(1976). The Second Law of Thermodynamics, Dowden, Hutchinson & Ross, Stroudsburg PA.
Carnot, S. (1824/1986). Reections on the motive
power of re, Manchester University Press, Manchester UK, ISBN 0-7190-1741-6. Also here.
Chapman, S., Cowling, T.G. (1939/1970). The Mathematical Theory of Non-uniform gases. An Account of
the Kinetic Theory of Viscosity, Thermal Conduction
and Diusion in Gases, third edition 1970, Cambridge
University Press, London.
Clausius, R. (1850). "Ueber die bewegende Kraft der Wärme und die Gesetze, welche sich daraus für die Wärmelehre selbst ableiten lassen". Annalen der Physik 79: 368–397, 500–524.
Bibcode:1850AnP...155..500C.
doi:10.1002/andp.18501550403.
Retrieved 26
June 2012. Translated into English: Clausius, R.
(July 1851). On the Moving Force of Heat, and the
Laws regarding the Nature of Heat itself which are
deducible therefrom. London, Edinburgh and Dublin
Philosophical Magazine and Journal of Science. 4th 2
(VIII): 121; 102119. Retrieved 26 June 2012.
Clausius, R. (1854). "Über eine veränderte Form des zweiten Hauptsatzes der mechanischen Wärmetheorie" (PDF). Annalen der Physik (Poggendorff).
xciii: 481506.
Bibcode:1854AnP...169..481C.
doi:10.1002/andp.18541691202. Retrieved 24 March
2014. Translated into English: Clausius, R. (July
1856). On a Modied Form of the Second Fundamental Theorem in the Mechanical Theory of Heat.
London, Edinburgh and Dublin Philosophical Magazine and Journal of Science. 4th 2: 86. Retrieved 24
March 2014. Reprinted in: Clausius, R. (1867). The
Mechanical Theory of Heat with its Applications to
the Steam Engine and to Physical Properties of Bodies.
London: John van Voorst. Retrieved 19 June 2012.
Denbigh, K. (1954/1981). The Principles of Chemical Equilibrium. With Applications in Chemistry and
Chemical Engineering, fourth edition, Cambridge University Press, Cambridge UK, ISBN 0-521-23682-7.
Eu, B.C. (2002). Generalized Thermodynamics. The
Thermodynamics of Irreversible Processes and Generalized Hydrodynamics, Kluwer Academic Publishers,
Dordrecht, ISBN 1-4020-0788-4.
Gibbs, J.W. (1876/1878). On the equilibrium of heterogeneous substances, Trans. Conn. Acad., 3: 108248, 343-524, reprinted in The Collected Works of J.
Willard Gibbs, Ph.D, LL. D., edited by W.R. Longley,

R.G. Van Name, Longmans, Green & Co., New York,


1928, volume 1, pp. 55353.

Maxwell, J.C. (1867). On the dynamical theory of


gases. Phil. Trans. Roy. Soc. London 157: 4988.

Griem, H.R. (2005). Principles of Plasma Spectroscopy (Cambridge Monographs on Plasma Physics),
Cambridge University Press, New York ISBN 0-52161941-6.

Müller, I. (1985). Thermodynamics, Pitman, London, ISBN 0-273-08577-8.

Glansdor, P., Prigogine, I. (1971). Thermodynamic


Theory of Structure, Stability, and Fluctuations, WileyInterscience, London, 1971, ISBN 0-471-30280-5.
Grandy, W.T., Jr (2008). Entropy and the Time Evolution of Macroscopic Systems. Oxford University Press.
ISBN 978-0-19-954617-6.
Greven, A., Keller, G., Warnecke (editors) (2003).
Entropy, Princeton University Press, Princeton NJ,
ISBN 0-691-11338-6.
Guggenheim, E.A. (1949). 'Statistical basis of thermodynamics, Research, 2: 450454.
Guggenheim, E.A. (1967). Thermodynamics. An Advanced Treatment for Chemists and Physicists, fth revised edition, North Holland, Amsterdam.
Gyarmati, I. (1967/1970) Non-equilibrium Thermodynamics. Field Theory and Variational Principles,
translated by E. Gyarmati and W.F. Heinz, Springer,
New York.
Kittel, C., Kroemer, H. (1969/1980). Thermal
Physics, second edition, Freeman, San Francisco CA,
ISBN 0-7167-1088-9.
Kondepudi, D., Prigogine, I. (1998). Modern Thermodynamics: From Heat Engines to Dissipative Structures,
John Wiley & Sons, Chichester, ISBN 0-471-973939.
Lebon, G., Jou, D., Casas-Vázquez, J. (2008). Understanding Non-equilibrium Thermodynamics: Foundations, Applications, Frontiers, Springer-Verlag, Berlin,
ISBN 978-3-540-74252-4.
Lieb, E. H.; Yngvason, J. (1999). "The Physics and Mathematics of the Second Law of Thermodynamics" (PDF). Physics Reports 310: 1–96. arXiv:cond-mat/9708200. Bibcode:1999PhR...310....1L. doi:10.1016/S0370-1573(98)00082-9. Retrieved 24 March 2014.
Lieb, E.H., Yngvason, J. (2003). The Entropy of Classical Thermodynamics, pp. 147195, Chapter 8 of
Entropy, Greven, A., Keller, G., Warnecke (editors)
(2003).

Müller, I. (2003). Entropy in Nonequilibrium, pp. 79–109, Chapter 5 of Entropy, Greven, A., Keller, G., Warnecke (editors) (2003).
Münster, A. (1970), Classical Thermodynamics, translated by E.S. Halberstadt, Wiley–Interscience, London, ISBN 0-471-62430-6.
Pippard, A.B. (1957/1966). Elements of Classical
Thermodynamics for Advanced Students of Physics,
original publication 1957, reprint 1966, Cambridge
University Press, Cambridge UK.
Planck, M. (1897/1903). Treatise on Thermodynamics, translated by A. Ogg, Longmans Green, London,
p. 100.
Planck. M. (1914). The Theory of Heat Radiation, a
translation by Masius, M. of the second German edition, P. Blakistons Son & Co., Philadelphia.
Planck, M. (1926). "Über die Begründung des zweiten Hauptsatzes der Thermodynamik", Sitzungsberichte der Preussischen Akademie der Wissenschaften: Physikalisch-mathematische Klasse: 453–463.
Quinn, T.J. (1983). Temperature, Academic Press,
London, ISBN 0-12-569680-9.
Rao, Y.V.C. (2004). An Introduction to thermodynamics. Universities Press. p. 213. ISBN 978-81-7371461-0.
Roberts, J.K., Miller, A.R. (1928/1960). Heat and
Thermodynamics, (rst edition 1928), fth edition,
Blackie & Son Limited, Glasgow.
Schrödinger, E. (1950). Irreversibility, Proc. Roy. Irish Acad., A53: 189–195.
ter Haar, D., Wergeland, H. (1966). Elements of
Thermodynamics, Addison-Wesley Publishing, Reading MA.
Thomson, W. (1851). On the Dynamical Theory of
Heat, with numerical results deduced from Mr Joules
equivalent of a Thermal Unit, and M. Regnaults Observations on Steam. Transactions of the Royal Society of Edinburgh XX (part II): 261268; 289298.
Also published in Thomson, W. (December 1852).
On the Dynamical Theory of Heat, with numerical
results deduced from Mr Joules equivalent of a Thermal Unit, and M. Regnaults Observations on Steam.
Philos. Mag. 4 IV (22): 13. Retrieved 25 June 2012.


Thomson, W. (1852). On the universal tendency in nature to the dissipation of mechanical energy. Philosophical Magazine, Ser. 4, p. 304.

Tisza, L. (1966). Generalized Thermodynamics, M.I.T. Press, Cambridge MA.

Truesdell, C. (1980). The Tragicomical History of Thermodynamics 1822–1854, Springer, New York, ISBN 0-387-90403-4.

Uffink, J. (2001). Bluff your way in the second law of thermodynamics, Stud. Hist. Phil. Mod. Phys., 32(3): 305–394.

Uffink, J. (2003). Irreversibility and the Second Law of Thermodynamics, Chapter 7 of Entropy, Greven, A., Keller, G., Warnecke (editors) (2003), Princeton University Press, Princeton NJ, ISBN 0-691-11338-6.

Uhlenbeck, G.E., Ford, G.W. (1963). Lectures in Statistical Mechanics, American Mathematical Society, Providence RI.

Zemansky, M.W. (1968). Heat and Thermodynamics. An Intermediate Textbook, fifth edition, McGraw-Hill Book Company, New York.

2.3.15 Further reading

Goldstein, Martin, and Inge F., 1993. The Refrigerator and the Universe. Harvard Univ. Press. Chpts. 4–9 contain an introduction to the Second Law, one a bit less technical than this entry. ISBN 978-0-674-75324-2

Leff, Harvey S., and Rex, Andrew F. (eds.) 2003. Maxwell's Demon 2: Entropy, classical and quantum information, computing. Bristol UK; Philadelphia PA: Institute of Physics. ISBN 978-0-585-49237-7

Halliwell, J.J. (1994). Physical Origins of Time Asymmetry. Cambridge. ISBN 0-521-56837-4. (technical).

Carnot, Sadi; Thurston, Robert Henry (editor and translator) (1890). Reflections on the Motive Power of Heat and on Machines Fitted to Develop That Power. New York: J. Wiley & Sons. (full text of 1897 ed.) (html)

Stephen Jay Kline (1999). The Low-Down on Entropy and Interpretive Thermodynamics, La Cañada, CA: DCW Industries. ISBN 1-928729-01-0.

Kostic, M (2011). Revisiting The Second Law of Energy Degradation and Entropy Generation: From Sadi Carnot's Ingenious Reasoning to Holistic Generalization. AIP Conf. Proc. 1411: 327–350. Bibcode:2011AIPC.1411..327K. doi:10.1063/1.3665247. ISBN 978-0-7354-0985-9.

2.3.16 External links

Stanford Encyclopedia of Philosophy: "Philosophy of Statistical Mechanics" by Lawrence Sklar.
Second law of thermodynamics in the MIT Course Unified Thermodynamics and Propulsion from Prof. Z. S. Spakovszky
E.T. Jaynes, 1988, "The evolution of Carnot's principle," in G. J. Erickson and C. R. Smith (eds.), Maximum-Entropy and Bayesian Methods in Science and Engineering, Vol 1: p. 267.
Caratheodory, C., Examination of the foundations of thermodynamics, trans. by D. H. Delphenich

2.4 Third law of Thermodynamics

The third law of thermodynamics is sometimes stated as follows, regarding the properties of systems in equilibrium at absolute zero temperature:

The entropy of a perfect crystal at absolute zero is exactly equal to zero.

At absolute zero (zero kelvin), the system must be in a state with the minimum possible energy, and the above statement of the third law holds true provided that the perfect crystal has only one minimum energy state. Entropy is related to the number of accessible microstates, and for a system consisting of many particles, quantum mechanics indicates that there is only one unique state (called the ground state) with minimum energy.[1] If the system does not have a well-defined order (if its order is glassy, for example), then in practice there will remain some finite entropy as the system is brought to very low temperatures, as the system becomes locked into a configuration with non-minimal energy. The constant value is called the residual entropy of the system.[2]
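
A standard concrete example of residual entropy, mentioned again further below, is the proton disorder of ordinary ice: Pauling's classic estimate allows roughly W = 3/2 configurations per molecule, giving a molar residual entropy of about R ln(3/2) ≈ 3.4 J K⁻¹ mol⁻¹, in line with calorimetric determinations. The short sketch below merely evaluates that estimate; the numbers are illustrative and are not taken from this article.

from math import log

R = 8.314                     # molar gas constant, J/(K mol)
W_per_molecule = 1.5          # Pauling's estimate of allowed proton configurations per H2O
S_residual = R * log(W_per_molecule)
print(f"Estimated residual entropy of ice: {S_residual:.2f} J/(K mol)")   # about 3.4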

The Nernst–Simon statement of the third law of thermodynamics concerns thermodynamic processes at a fixed, low temperature:

The entropy change associated with any condensed system undergoing a reversible isothermal process approaches zero as the temperature at which it is performed approaches 0 K.

Here a condensed system refers to liquids and solids. A classical formulation by Nernst (actually a consequence of the Third Law) is:

It is impossible for any process, no matter how idealized, to reduce the entropy of a system to its absolute-zero value in a finite number of operations.

Physically, the Nernst–Simon statement implies that it is impossible for any procedure to bring a system to the absolute zero of temperature in a finite number of steps.[3]

2.4.1 History

The third law was developed by the chemist Walther Nernst during the years 1906–12, and is therefore often referred to as Nernst's theorem or Nernst's postulate. The third law of thermodynamics states that the entropy of a system at absolute zero is a well-defined constant. This is because a system at zero temperature exists in its ground state, so that its entropy is determined only by the degeneracy of the ground state.

In 1912 Nernst stated the law thus: "It is impossible for any procedure to lead to the isotherm T = 0 in a finite number of steps."[4]

An alternative version of the third law of thermodynamics was stated by Gilbert N. Lewis and Merle Randall in 1923:

If the entropy of each element in some (perfect) crystalline state be taken as zero at the absolute zero of temperature, every substance has a finite positive entropy; but at the absolute zero of temperature the entropy may become zero, and does so become in the case of perfect crystalline substances.

This version states not only that ΔS will reach zero at 0 K, but that S itself will also reach zero, as long as the crystal has a ground state with only one configuration. Some crystals form defects which cause a residual entropy. This residual entropy disappears when the kinetic barriers to transitioning to one ground state are overcome.[5]

With the development of statistical mechanics, the third law of thermodynamics (like the other laws) changed from a fundamental law (justified by experiments) to a derived law (derived from even more basic laws). The basic law from which it is primarily derived is the statistical-mechanics definition of entropy for a large system:

S − S0 = kB ln Ω

where S is entropy, kB is the Boltzmann constant, and Ω is the number of microstates consistent with the macroscopic configuration. The counting of states is from the reference state of absolute zero, which corresponds to the entropy of S0.

2.4.2 Explanation

In simple terms, the third law states that the entropy of a perfect crystal of a pure substance approaches zero as the temperature approaches zero. The alignment of a perfect crystal leaves no ambiguity as to the location and orientation of each part of the crystal. As the energy of the crystal is reduced, the vibrations of the individual atoms are reduced to nothing, and the crystal becomes the same everywhere.

The third law provides an absolute reference point for the determination of entropy at any other temperature. The entropy of a system, determined relative to this zero point, is then the absolute entropy of that system. Mathematically, the absolute entropy of any system at zero temperature is the natural log of the number of ground states times Boltzmann's constant kB = 1.38×10⁻²³ J K⁻¹.

The entropy of a perfect crystal lattice as defined by Nernst's theorem is zero provided that its ground state is unique, because ln(1) = 0. If the system is composed of one billion atoms, all alike, and lying within the matrix of a perfect crystal, the number of permutations of one billion identical things taken one billion at a time is Ω = 1. Hence:

S − S0 = kB ln Ω = kB ln 1 = 0

The difference is zero; hence the initial entropy S0 can be any selected value, so long as all other such calculations include that as the initial entropy. As a result, the initial entropy value of zero is selected: S0 = 0 is used for convenience.

S − S0 = S − 0 = 0

S = 0

By way of example, suppose a system consists of 1 cm³ of matter with a mass of 1 g and 20 g/mol. The system consists of 3×10²² identical atoms at 0 K. If one atom should absorb a photon of wavelength of 1 cm, that atom is then unique, and the number of permutations of one unique atom among the 3×10²² is Ω = 3×10²². The entropy, energy, and temperature of the system rise and can be calculated. The entropy change is:

ΔS = S − S0 = kB ln Ω

From the second law of thermodynamics:

ΔS = S − S0 = Q/T

Hence:

ΔS = S − S0 = kB ln Ω = Q/T

Calculating the entropy change:

S − S0 = kB ln Ω = 1.38×10⁻²³ × ln(3×10²²) = 70×10⁻²³ J/K

The energy change of the system as a result of absorbing the single photon whose energy is ε:

ε = hc/λ = (6.62×10⁻³⁴ J s × 3×10⁸ m s⁻¹)/(0.01 m) = 2×10⁻²³ J

The temperature of the system rises by:

ΔT = ε/ΔS = (2×10⁻²³ J)/(70×10⁻²³ J K⁻¹) = 1/35 K

This can be interpreted as the average temperature of the system over the range from 0 < S < 70×10⁻²³ J/K.[6] A single atom was assumed to absorb the photon, but the temperature and entropy change characterize the entire system.

An example of a system which does not have a unique ground state is one whose net spin is a half-integer, for which time-reversal symmetry gives two degenerate ground states. For such systems, the entropy at zero temperature is at least kB ln(2) (which is negligible on a macroscopic scale). Some crystalline systems exhibit geometrical frustration, where the structure of the crystal lattice prevents the emergence of a unique ground state. Ground-state helium (unless under pressure) remains liquid.

In addition, glasses and solid solutions retain large entropy at 0 K, because they are large collections of nearly degenerate states, in which they become trapped out of equilibrium. Another example of a solid with many nearly degenerate ground states, trapped out of equilibrium, is ice Ih, which has proton disorder.

For the entropy at absolute zero to be zero, the magnetic moments of a perfectly ordered crystal must themselves be perfectly ordered; from an entropic perspective, this can be considered to be part of the definition of a perfect crystal. Only ferromagnetic, antiferromagnetic, and diamagnetic materials can satisfy this condition. However, ferromagnetic materials do not, in fact, have zero entropy at zero temperature, because the spins of the unpaired electrons are all aligned and this gives a ground-state spin degeneracy. Materials that remain paramagnetic at 0 K, by contrast, may have many nearly degenerate ground states (for example, in a spin glass), or may retain dynamic disorder (a quantum spin liquid).
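
The single-photon example given earlier in this section can be reproduced in a few lines; the sketch below simply re-does that arithmetic with the same numbers.

from math import log

kB = 1.38e-23        # Boltzmann constant, J/K
h = 6.62e-34         # Planck constant, J s
c = 3e8              # speed of light, m/s

N_atoms = 3e22       # atoms in the 1 g, 20 g/mol, 1 cm^3 sample
wavelength = 0.01    # m, wavelength of the absorbed photon

delta_S = kB * log(N_atoms)      # S - S0 = kB ln(Omega) with Omega = 3e22
energy = h * c / wavelength      # photon energy epsilon = h c / lambda
delta_T = energy / delta_S       # average temperature over 0 < S < delta_S

print(f"delta S = {delta_S:.2e} J/K")   # about 7.1e-22 J/K (the 70e-23 J/K of the text)
print(f"epsilon = {energy:.2e} J")      # about 2e-23 J
print(f"delta T = {delta_T:.3f} K")     # about 0.028 K, i.e. roughly 1/35 K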

2.4.3 Mathematical formulation

Consider a closed system in internal equilibrium. As the system is in equilibrium, there are no irreversible processes, so the entropy production is zero. During slow heating, small temperature gradients are generated in the material, but the associated entropy production can be kept arbitrarily low if the heat is supplied slowly enough. The increase in entropy due to the added heat δQ is then given by the second part of the second law of thermodynamics, which states that the entropy change of a system is given by

dS = δQ/T.   (1)

The temperature rise dT due to the heat δQ is determined by the heat capacity C(T,X) according to

δQ = C(T,X) dT.   (2)

The parameter X is a symbolic notation for all parameters (such as pressure, magnetic field, liquid/solid fraction, etc.) which are kept constant during the heat supply. E.g. if the volume is constant, we get the heat capacity at constant volume CV. In the case of a phase transition from liquid to solid, or from gas to liquid, the parameter X can be one of the two components. Combining relations (1) and (2) gives

dS = C(T,X) dT/T.   (3)

Integration of Eq.(3) from a reference temperature T0 to an arbitrary temperature T gives the entropy at temperature T:

S(T,X) = S(T0,X) + ∫ from T0 to T of C(T′,X)/T′ dT′.   (4)

We now come to the mathematical formulation of the third law. There are three steps:

1: in the limit T0 → 0 the integral in Eq.(4) is finite, so that we may take T0 = 0 and write

S(T,X) = S(0,X) + ∫ from 0 to T of C(T′,X)/T′ dT′.   (5)

2: the value of S(0,X) is independent of X. In mathematical form,

S(0,X1) = S(0,X2).   (6)

So Eq.(5) can be further simplified to

S(T,X) = S(0) + ∫ from 0 to T of C(T′,X)/T′ dT′.   (7)

Equation (6) can also be formulated as

lim (T → 0) [S(T,X2) − S(T,X1)] = 0.   (8)

In words: at absolute zero all isothermal processes are isentropic. Eq.(8) is the mathematical formulation of the third law.

3: Classically, one is free to choose the zero of the entropy, and it is convenient to take

S(0) = 0,   (9)

so that Eq.(7) reduces to the final form

S(T,X) = ∫ from 0 to T of C(T′,X)/T′ dT′.   (10)

However, reinterpreting Eq.(9) in view of the quantized nature of the lowest-lying energy states, the physical meaning of Eq.(9) goes deeper than just a convenient selection of the zero of the entropy. It is due to the perfect order at zero kelvin, as explained above.

2.4.4 Consequences of the third law

Fig.1 Left side: Absolute zero can be reached in a finite number of steps if S(0,X1) ≠ S(0,X2). Right: An infinite number of steps is needed, since S(0,X1) = S(0,X2).

Absolute zero

The third law is equivalent to the statement that

It is impossible by any procedure, no matter how idealized, to reduce the temperature of any system to zero temperature in a finite number of finite operations.[7]

The reason that T = 0 cannot be reached according to the third law is explained as follows: Suppose that the temperature of a substance can be reduced in an isentropic process by changing the parameter X from X2 to X1. One can think of a multistage nuclear demagnetization setup where a magnetic field is switched on and off in a controlled way.[8] If there were an entropy difference at absolute zero, T = 0 could be reached in a finite number of steps. However, at T = 0 there is no entropy difference, so an infinite number of steps would be needed. The process is illustrated in Fig.1.

Specific heat

A non-quantitative description of his third law that Nernst gave at the very beginning was simply that the specific heat can always be made zero by cooling the material down far enough.[9] A modern, quantitative analysis follows.

Suppose that the heat capacity of a sample in the low-temperature region has the form of a power law, C(T,X) = C0·T^α, asymptotically as T → 0, and we wish to find which values of α are compatible with the third law. We have

∫ from T0 to T of C(T′,X)/T′ dT′ = (C0/α)(T^α − T0^α).   (11)

By the discussion of the third law above, this integral must be bounded as T0 → 0, which is only possible if α > 0. So the heat capacity must go to zero at absolute zero,

lim (T → 0) C(T,X) = 0,   (12)

if it has the form of a power law. The same argument shows that it cannot be bounded below by a positive constant, even if we drop the power-law assumption.

On the other hand, the molar specific heat at constant volume of a monatomic classical ideal gas, such as helium at room temperature, is given by CV = (3/2)R, with R the molar ideal gas constant. But clearly a constant heat capacity does not satisfy Eq.(12). That is, a gas with a constant heat capacity all the way to absolute zero violates the third law of thermodynamics. We can verify this more fundamentally by substituting CV in Eq.(4), which yields

S(T) = S(T0) + (3/2) R ln(T/T0).   (13)

In the limit T0 → 0 this expression diverges, again contradicting the third law of thermodynamics.

The conflict is resolved as follows: At a certain temperature the quantum nature of matter starts to dominate the behavior. Fermi particles follow Fermi–Dirac statistics and Bose particles follow Bose–Einstein statistics. In both cases the heat capacity at low temperatures is no longer temperature independent, even for ideal gases. For Fermi gases the low-temperature heat capacity is proportional to T/TF (Eq.(14)), with the Fermi temperature TF given by Eq.(15) in terms of NA (Avogadro's number), the molar volume V, and the molar mass M. For Bose gases it is proportional to (T/TB)^(3/2) (Eq.(16)), with TB given by Eq.(17). The specific heats given by Eq.(14) and (16) both satisfy Eq.(12). Indeed, they are power laws with α = 1 and α = 3/2 respectively.
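
The role of the exponent α can also be seen numerically. The sketch below (C0, the exponents, the temperatures and the cutoffs are arbitrary illustrative choices) evaluates S(T) = ∫ C(T′)/T′ dT′ for a power-law heat capacity with a lower cutoff Tmin that is pushed towards zero: for α > 0 the value settles to the finite limit C0·T^α/α, while for a constant heat capacity (α = 0) it grows like ln(1/Tmin) without bound.

import numpy as np

def entropy(alpha, T=1.0, C0=1.0, T_min=1e-8, n=4000):
    # S(T) = integral of C(T')/T' dT' from T_min to T for C(T') = C0 * T'**alpha,
    # approximated by the trapezoid rule on a log-spaced grid.
    Tp = np.geomspace(T_min, T, n)
    f = C0 * Tp**alpha / Tp
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(Tp)))

for alpha in (1.0, 1.5, 0.0):
    values = [entropy(alpha, T_min=cut) for cut in (1e-2, 1e-4, 1e-8)]
    print(f"alpha = {alpha}:", [round(v, 3) for v in values])
# alpha > 0: the values converge to the finite limit C0 * T**alpha / alpha.
# alpha = 0 (constant heat capacity): the values keep growing like ln(T/T_min),
# so the entropy integral diverges, which is what rules out a constant C near T = 0.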

Vapor pressure

The only liquids near absolute zero are ³He and ⁴He. Their heat of evaporation has a limiting value given by

L = L0 + C·T,

with L0 and C constant. If we consider a container partly filled with liquid and partly with gas, the entropy of the liquid–gas mixture is

S(T,x) = Sl(T) + x·L(T)/T,

where Sl(T) is the entropy of the liquid and x is the gas fraction. Clearly the entropy change during the liquid–gas transition (x from 0 to 1) diverges in the limit of T → 0. This violates Eq.(8). Nature solves this paradox as follows: at temperatures below about 50 mK the vapor pressure is so low that the gas density is lower than the best vacuum in the universe. In other words: below 50 mK there is simply no gas above the liquid.

Latent heat of melting

The melting curves of ³He and ⁴He both extend down to absolute zero at finite pressure. At the melting pressure, liquid and solid are in equilibrium. The third law demands that the entropies of the solid and liquid are equal at T = 0. As a result, the latent heat of melting is zero, and the slope of the melting curve extrapolates to zero as a result of the Clausius–Clapeyron equation.

Thermal expansion coefficient

The thermal expansion coefficient is defined as

αV = (1/Vm)(∂Vm/∂T)p.

With the Maxwell relation

(∂Vm/∂T)p = −(∂S/∂p)T

and Eq.(8) with X = p, it is shown that

lim (T → 0) αV = 0.

So the thermal expansion coefficient of all materials must go to zero at zero kelvin.

2.4.5 See also

Adiabatic process
Ground state
Laws of thermodynamics
Quantum thermodynamics
Residual entropy
Thermodynamic entropy
Timeline of thermodynamics, statistical mechanics, and random processes
Quantum refrigerators

2.4.6 References

[1] J. Wilks, The Third Law of Thermodynamics, Oxford University Press (1961).

[2] Kittel and Kroemer, Thermal Physics (2nd ed.), page 49.

[3] Wilks, J. (1971). The Third Law of Thermodynamics, Chapter 6 in Thermodynamics, volume 1, ed. W. Jost, of H. Eyring, D. Henderson, W. Jost, Physical Chemistry. An Advanced Treatise, Academic Press, New York, page 477.

[4] Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics, New York, ISBN 0-88318-797-3, page 342.

[5] Kozliak, Evguenii; Lambert, Frank L. (2008). "Residual Entropy, the Third Law and Latent Heat". Entropy 10 (3): 274–84. Bibcode:2008Entrp..10..274K. doi:10.3390/e10030274.

[6] Reynolds and Perkins (1977). Engineering Thermodynamics. McGraw-Hill. p. 438. ISBN 0-07-052046-1.

[7] Guggenheim, E.A. (1967). Thermodynamics. An Advanced Treatment for Chemists and Physicists, fifth revised edition, North-Holland Publishing Company, Amsterdam, page 157.

[8] F. Pobell, Matter and Methods at Low Temperatures, (Springer-Verlag, Berlin, 2007).

[9] Einstein and the Quantum, A. Douglas Stone, Princeton University Press, 2013.

J. Wilks, The Third Law of Thermodynamics, Oxford University Press (1961), p. 83.

2.4.7 Further reading

Goldstein, Martin & Inge F. (1993) The Refrigerator


and the Universe. Cambridge MA: Harvard University
Press. ISBN 0-674-75324-0. Chpt. 14 is a nontechnical discussion of the Third Law, one including the
requisite elementary quantum mechanics.
Braun, S.; Ronzheimer, J. P.; Schreiber, M.; Hodgman, S. S.; Rom, T.; Bloch, I.; Schneider, U.
(2013). Negative Absolute Temperature for Motional Degrees of Freedom. Science 339 (6115): 52
5. arXiv:1211.0545. Bibcode:2013Sci...339...52B.
doi:10.1126/science.1227831. PMID 23288533. Lay
summary New Scientist (3 January 2013).
Levy, A.; Alicki, R.; Kosloff, R. (2012). Quantum refrigerators and the third law of thermodynamics.
Phys.
Rev.
E 85: 061126.
arXiv:1205.1347. Bibcode:2012PhRvE..85f1126L.
doi:10.1103/PhysRevE.85.061126.

Chapter 3

Chapter 3. History
3.1 History of thermodynamics

The 1698 Savery Engine – the world's first commercially useful steam engine, built by Thomas Savery

The history of thermodynamics is a fundamental strand in the history of physics, the history of chemistry, and the history of science in general. Owing to the relevance of thermodynamics in much of science and technology, its history is finely woven with the developments of classical mechanics, quantum mechanics, magnetism, and chemical kinetics, to more distant applied fields such as meteorology, information theory, and biology (physiology), and to technological developments such as the steam engine, internal combustion engine, cryogenics and electricity generation. The development of thermodynamics both drove and was driven by atomic theory. It also, albeit in a subtle manner, motivated new directions in probability and statistics; see, for example, the timeline of thermodynamics.

3.1.1 History

See also: Timeline of thermodynamics

Contributions from ancient and medieval times

See also: History of heat and Vacuum

The ancients viewed heat as that related to fire. In 3000 BC, the ancient Egyptians viewed heat as related to origin mythologies.[1] In the Western philosophical tradition, after much debate about the primal element among earlier pre-Socratic philosophers, Empedocles proposed a four-element theory, in which all substances derive from earth, water, air, and fire. The Empedoclean element of fire is perhaps the principal ancestor of later concepts such as phlogiston and caloric. Around 500 BC, the Greek philosopher Heraclitus became famous as the "flux and fire" philosopher for his proverbial utterance: "All things are flowing." Heraclitus argued that the three principal elements in nature were fire, earth, and water.

Atomism is a central part of today's relationship between thermodynamics and statistical mechanics. Ancient thinkers such as Leucippus and Democritus, and later the Epicureans, by advancing atomism, laid the foundations for the later atomic theory. Until experimental proof of atoms was later provided in the 20th century, the atomic theory was driven largely by philosophical considerations and scientific intuition.

The 5th-century BC Greek philosopher Parmenides, in his only known work, a poem conventionally titled On Nature,
83

84

CHAPTER 3. CHAPTER 3. HISTORY


To prove this theory, he lled a long glass tube (sealed at one
end) with mercury and upended it into a dish also containing mercury. Only a portion of the tube emptied (as shown
adjacent); ~30 inches of the liquid remained. As the mercury emptied, and a partial vacuum was created at the top
of the tube. The gravitational force on the heavy element
Mercury prevented it from lling the vacuum.
Transition from chemistry to thermochemistry
See also: History of chemistry
The theory of phlogiston arose in the 17th century, late

Heating a body, such as a segment of protein alpha helix (above),


tends to cause its atoms to vibrate more, and to expand or change
phase, if heating is continued; an axiom of nature noted by Herman
Boerhaave in the 1700s.

uses verbal reasoning to postulate that a void, essentially


what is now known as a vacuum, in nature could not occur.
This view was supported by the arguments of Aristotle, but
was criticized by Leucippus and Hero of Alexandria. From
antiquity to the Middle Ages various arguments were put
forward to prove or disapprove the existence of a vacuum
and several attempts were made to construct a vacuum but
all proved unsuccessful.
The European scientists Cornelius Drebbel, Robert Fludd,
Galileo Galilei and Santorio Santorio in the 16th and 17th
centuries were able to gauge the relative "coldness" or
"hotness" of air, using a rudimentary air thermometer (or
thermoscope). This may have been inuenced by an earlier
device which could expand and contract the air constructed
by Philo of Byzantium and Hero of Alexandria.
Around 1600, the English philosopher and scientist Francis
Bacon surmised: Heat itself, its essence and quiddity is
motion and nothing else. In 1643, Galileo Galilei, while
generally accepting the 'sucking' explanation of horror
vacui proposed by Aristotle, believed that natures vacuumabhorrence is limited. Pumps operating in mines had already proven that nature would only ll a vacuum with water up to a height of ~30 feet. Knowing this curious fact,
Galileo encouraged his former pupil Evangelista Torricelli
to investigate these supposed limitations. Torricelli did not
believe that vacuum-abhorrence (Horror vacui) in the sense
of Aristotles 'sucking' perspective, was responsible for raising the water. Rather, he reasoned, it was the result of the
pressure exerted on the liquid by the surrounding air.

The worlds rst ice-calorimeter, used in the winter of 1782-83,


by Antoine Lavoisier and Pierre-Simon Laplace, to determine the
heat evolved in various chemical changes; calculations which were
based on Joseph Black's prior discovery of latent heat. These experiments mark the foundation of thermochemistry.

in the period of alchemy. Its replacement by caloric theory in the 18th century is one of the historical markers of
the transition from alchemy to chemistry. Phlogiston was
a hypothetical substance that was presumed to be liberated

3.1. HISTORY OF THERMODYNAMICS

85

from combustible substances during burning, and from metals during the process of rusting. Caloric, like phlogiston,
was also presumed to be the substance of heat that would
ow from a hotter body to a cooler body, thus warming it.
The rst substantial experimental challenges to caloric theory arose in Rumford's 1798 work, when he showed that
boring cast iron cannons produced great amounts of heat
which he ascribed to friction, and his work was among the
rst to undermine the caloric theory. The development of
the steam engine also focused attention on calorimetry and
the amount of heat produced from dierent types of coal.
The rst quantitative research on the heat changes during
chemical reactions was initiated by Lavoisier using an ice
calorimeter following research by Joseph Black on the latent
heat of water.
More quantitative studies by James Prescott Joule in 1843
onwards provided soundly reproducible phenomena, and
helped to place the subject of thermodynamics on a solid
footing. William Thomson, for example, was still trying to
explain Joules observations within a caloric framework as
late as 1850. The utility and explanatory power of kinetic
theory, however, soon started to displace caloric and it was
largely obsolete by the end of the 19th century. Joseph
Black and Lavoisier made important contributions in the Robert Boyle. 1627-1691
precise measurement of heat changes using the calorimeter,
a subject which became known as thermochemistry.
Phenomenological thermodynamics

Boyle's law (1662)
Charles's law was first published by Joseph Louis Gay-Lussac in 1802, but he referenced unpublished work by Jacques Charles from around 1787. The relationship had been anticipated by the work of Guillaume Amontons in 1702.
Gay-Lussac's law (1802)

Birth of thermodynamics as science

Robert Boyle, 1627-1691

At its origins, thermodynamics was the study of engines. A precursor of the engine was designed by the German scientist Otto von Guericke who, in 1650, designed and built the world's first vacuum pump and created the world's first ever vacuum, known as the Magdeburg hemispheres. He was driven to make a vacuum in order to disprove Aristotle's long-held supposition that 'Nature abhors a vacuum'.

Shortly thereafter, Irish physicist and chemist Robert Boyle had learned of Guericke's designs and in 1656, in coordination with English scientist Robert Hooke, built an air pump. Using this pump, Boyle and Hooke noticed the pressure-volume correlation: P·V = constant. In that time, air was assumed to be a system of motionless particles, and not interpreted as a system of moving molecules. The concept of thermal motion came two centuries later. Therefore Boyle's publication in 1660 speaks about a mechanical concept: the air spring.[2] Later, after the invention of the thermometer, the property temperature could be quantified. This tool gave Gay-Lussac the opportunity to derive his law, which led shortly later to the ideal gas law. But, already before the establishment of the ideal gas law, an associate of Boyle's named Denis Papin built in 1679 a bone digester, which is a closed vessel with a tightly fitting lid that confines steam until a high pressure is generated.

Later designs implemented a steam release valve to keep the machine from exploding. By watching the valve rhythmically move up and down, Papin conceived of the idea of a piston and cylinder engine. He did not, however, follow through with his design. Nevertheless, in 1697, based on Papin's designs, engineer Thomas Savery built the first engine. Although these early engines were crude and inefficient, they attracted the attention of the leading scientists of the time. One such scientist was Sadi Carnot, the "father of thermodynamics", who in 1824 published Reflections on the Motive Power of Fire, a discourse on heat, power, and engine efficiency. This marks the start of thermodynamics as a modern science.

A Watt steam engine, the steam engine that propelled the Industrial Revolution in Britain and the world

Hence, prior to 1698 and the invention of the Savery Engine, horses were used to power pulleys, attached to buckets, which lifted water out of flooded salt mines in England. In the years to follow, more variations of steam engines were built, such as the Newcomen Engine, and later the Watt Engine. In time, these early engines would eventually be utilized in place of horses. Thus, each engine began to be associated with a certain amount of "horse power" depending upon how many horses it had replaced. The main problem with these first engines was that they were slow and clumsy, converting less than 2% of the input fuel into useful work. In other words, large quantities of coal (or wood) had to be burned to yield only a small fraction of work output. Hence the need for a new science of engine dynamics was born.

Sadi Carnot (1796-1832): the "father of thermodynamics"

Most cite Sadi Carnot's 1824 book Reflections on the Motive Power of Fire as the starting point for thermodynamics as a modern science. Carnot defined "motive power" to be "the expression of the useful effect that a motor is capable of producing". Herein, Carnot introduced us to the first modern day definition of "work": weight lifted through a height. The desire to understand, via formulation, this useful effect in relation to "work" is at the core of all modern day thermodynamics.

In 1843, James Joule experimentally found the mechanical equivalent of heat. In 1845, Joule reported his best-known experiment, involving the use of a falling weight to spin a paddle-wheel in a barrel of water, which allowed him to estimate a mechanical equivalent of heat of 819 ft·lbf/Btu (4.41 J/cal). This led to the theory of conservation of energy and explained why heat can do work.

In 1850, the famed mathematical physicist Rudolf Clausius defined the term entropy S to be the heat lost or turned into waste, stemming from the Greek word entrepein meaning to turn.

The name "thermodynamics", however, did not arrive until 1854, when the British mathematician and physicist William Thomson (Lord Kelvin) coined the term thermodynamics in his paper On the Dynamical Theory of Heat.[3]

In association with Clausius, in 1871, the Scottish mathematician and physicist James Clerk Maxwell formulated a new branch of thermodynamics called Statistical Thermodynamics, which functions to analyze large numbers of particles at equilibrium, i.e., systems where no changes are occurring, such that only their average properties as temperature T, pressure P, and volume V become important.

Soon thereafter, in 1875, the Austrian physicist Ludwig Boltzmann formulated a precise connection between entropy S and molecular motion:

  S = k log W

being defined in terms of the number of possible states W such motion could occupy, where k is the Boltzmann constant.

The following year, 1876, was a seminal point in the development of human thought. During this essential period, chemical engineer Willard Gibbs, the first person in America to be awarded a PhD in engineering (Yale), published an obscure 300-page paper titled On the Equilibrium of Heterogeneous Substances, wherein he formulated one grand equality, the Gibbs free energy equation, which suggested a measure of the amount of useful work attainable in reacting systems. Gibbs also originated the concept we now know as enthalpy H, calling it "a heat function for constant pressure".[4] The modern word enthalpy would be coined many years later by Heike Kamerlingh Onnes,[5] who based it on the Greek word enthalpein meaning to warm.

Building on these foundations, scientists such as Lars Onsager, Erwin Schrödinger, Ilya Prigogine, and others brought these engine "concepts" into the thoroughfare of almost every modern-day branch of science.

Kinetic theory

Main article: Kinetic theory of gases

The idea that heat is a form of motion is perhaps an ancient one and is certainly discussed by Francis Bacon in 1620 in his Novum Organum. The first written scientific reflection on the microscopic nature of heat is probably to be found in a work by Mikhail Lomonosov, in which he wrote:

"(..) movement should not be denied based on the fact it is not seen. Who would deny that the leaves of trees move when rustled by a wind, despite it being unobservable from large distances? Just as in this case motion remains hidden due to perspective, it remains hidden in warm bodies due to the extremely small sizes of the moving particles. In both cases, the viewing angle is so small that neither the object nor their movement can be seen."

During the same years, Daniel Bernoulli published his book Hydrodynamics (1738), in which he derived an equation for the pressure of a gas considering the collisions of its atoms with the walls of a container. He proves that this pressure is two thirds the average kinetic energy of the gas in a unit volume. Bernoulli's ideas, however, made little impact on the dominant caloric culture. Bernoulli made a connection with Gottfried Leibniz's vis viva principle, an early formulation of the principle of conservation of energy, and the two theories became intimately entwined throughout their history. Though Benjamin Thompson suggested that heat was a form of motion as a result of his experiments in 1798, no attempt was made to reconcile theoretical and experimental approaches, and it is unlikely that he was thinking of the vis viva principle.

John Herapath later independently formulated a kinetic theory in 1820, but mistakenly associated temperature with momentum rather than vis viva or kinetic energy. His work ultimately failed peer review and was neglected. John James Waterston in 1843 provided a largely accurate account, again independently, but his work received the same reception, failing peer review even from someone as well-disposed to the kinetic principle as Davy.

Further progress in kinetic theory started only in the middle of the 19th century, with the works of Rudolf Clausius, James Clerk Maxwell, and Ludwig Boltzmann. In his 1857 work On the nature of the motion called heat, Clausius for the first time clearly states that heat is the average kinetic energy of molecules. This interested Maxwell, who in 1859 derived the momentum distribution later named after him. Boltzmann subsequently generalized his distribution for the case of gases in external fields.

Boltzmann is perhaps the most significant contributor to kinetic theory, as he introduced many of the fundamental concepts in the theory. Besides the Maxwell–Boltzmann distribution mentioned above, he also associated the kinetic energy of particles with their degrees of freedom. The Boltzmann equation for the distribution function of a gas in non-equilibrium states is still the most effective equation for studying transport phenomena in gases and metals. By introducing the concept of thermodynamic probability as the number of microstates corresponding to the current macrostate, he showed that its logarithm is proportional to entropy.

3.1.2 Branches of thermodynamics

The following list gives a rough outline as to when the major branches of thermodynamics came into inception:

Thermochemistry - 1780s
Classical thermodynamics - 1824
Chemical thermodynamics - 1876
Statistical mechanics - c. 1880s
Equilibrium thermodynamics
Engineering thermodynamics
Chemical engineering thermodynamics - c. 1940s
Non-equilibrium thermodynamics - 1941
Small systems thermodynamics - 1960s
Biological thermodynamics - 1957
Ecosystem thermodynamics - 1959
Relativistic thermodynamics - 1965
Quantum thermodynamics - 1968
Black hole thermodynamics - c. 1970s
Geological thermodynamics - c. 1970s
Biological evolution thermodynamics - 1978
Geochemical thermodynamics - c. 1980s
Atmospheric thermodynamics - c. 1980s
Natural systems thermodynamics - 1990s
Supramolecular thermodynamics - 1990s
Earthquake thermodynamics - 2000
Pharmaceutical systems thermodynamics - 2002

Ideas from thermodynamics have also been applied in other fields, for example:

Thermoeconomics - c. 1970s
Drug-receptor thermodynamics - 2001

3.1.3 Entropy and the second law

Main article: History of entropy

Even though he was working with the caloric theory, Sadi Carnot in 1824 suggested that some of the caloric available for generating useful work is lost in any real process. In March 1851, while grappling to come to terms with the work of James Prescott Joule, Lord Kelvin started to speculate that there was an inevitable loss of useful heat in all processes. The idea was framed even more dramatically by Hermann von Helmholtz in 1854, giving birth to the spectre of the heat death of the universe.

In 1854, William John Macquorn Rankine started to make use in calculation of what he called his thermodynamic function. This has subsequently been shown to be identical to the concept of entropy formulated by Rudolf Clausius in 1865. Clausius used the concept to develop his classic statement of the second law of thermodynamics the same year.

3.1.4 Heat transfer

Main article: Heat transfer

The phenomenon of heat conduction is immediately grasped in everyday life. In 1701, Sir Isaac Newton published his law of cooling. However, in the 17th century, it came to be believed that all materials had an identical conductivity and that differences in sensation arose from their different heat capacities.

Suggestions that this might not be the case came from the new science of electricity, in which it was easily apparent that some materials were good electrical conductors while others were effective insulators. Jan Ingen-Housz in 1785-9 made some of the earliest measurements, as did Benjamin Thompson during the same period.

The fact that warm air rises and the importance of the phenomenon to meteorology was first realised by Edmund Halley in 1686. Sir John Leslie observed that the cooling effect of a stream of air increased with its speed, in 1804.

Carl Wilhelm Scheele distinguished heat transfer by thermal radiation (radiant heat) from that by convection and conduction in 1777. In 1791, Pierre Prévost showed that all bodies radiate heat, no matter how hot or cold they are. In 1804, Leslie observed that a matte black surface radiates heat more effectively than a polished surface, suggesting the importance of black body radiation. Though it had become to be suspected even from Scheele's work, in 1831 Macedonio Melloni demonstrated that black body radiation could be reflected, refracted and polarised in the same way as light.

James Clerk Maxwell's 1862 insight that both light and radiant heat were forms of electromagnetic wave led to the start of the quantitative analysis of thermal radiation. In 1879, Jožef Stefan observed that the total radiant flux from a blackbody is proportional to the fourth power of its temperature and stated the Stefan–Boltzmann law. The law was derived theoretically by Ludwig Boltzmann in 1884.

3.1.5 Cryogenics

In 1702 Guillaume Amontons introduced the concept of absolute zero based on observations of gases. In 1810, Sir John Leslie froze water to ice artificially. The idea of absolute zero was generalised in 1848 by Lord Kelvin. In 1906, Walther Nernst stated the third law of thermodynamics.

3.1.6 See also

Conservation of energy: Historical development
History of Chemistry
History of Physics
Maxwell's thermodynamic surface
Timeline of thermodynamics, statistical mechanics, and random processes
Thermodynamics
Timeline of heat engine technology
Timeline of low-temperature technology

3.1.7 References

[1] J. Gwyn Griffiths (1955). "The Orders of Gods in Greece and Egypt (According to Herodotus)". The Journal of Hellenic Studies 75: 21-23. doi:10.2307/629164. JSTOR 629164.
[2] New Experiments physico-mechanicall, Touching the Spring of the Air and its Effects (1660).
[3] Thomson, W. (1854). "On the Dynamical Theory of Heat". Transactions of the Royal Society of Edinburgh 21 (part I): 123. doi:10.1017/s0080456800032014. Reprinted in Sir William Thomson, LL.D. D.C.L., F.R.S. (1882). Mathematical and Physical Papers 1. London, Cambridge: C.J. Clay, M.A. & Son, Cambridge University Press. p. 232: "Hence Thermo-dynamics falls naturally into two Divisions, of which the subjects are respectively, the relation of heat to the forces acting between contiguous parts of bodies, and the relation of heat to electrical agency."
[4] Laidler, Keith (1995). The World of Physical Chemistry. Oxford University Press. p. 110.
[5] Howard, Irmgard (2002). "H Is for Enthalpy, Thanks to Heike Kamerlingh Onnes and Alfred W. Porter". Journal of Chemical Education (ACS Publications) 79 (6): 697. Bibcode:2002JChEd..79..697H. doi:10.1021/ed079p697.

3.1.8 Further reading

Cardwell, D.S.L. (1971). From Watt to Clausius: The Rise of Thermodynamics in the Early Industrial Age. London: Heinemann. ISBN 0-435-54150-1.
Leff, H.S. & Rex, A.F. (eds) (1990). Maxwell's Demon: Entropy, Information and Computing. Bristol: Adam Hilger. ISBN 0-7503-0057-4.

3.1.9 External links

History of Statistical Mechanics and Thermodynamics - Timeline (1575 to 1980) @ Hyperjeff.net
History of Thermodynamics - University of Waterloo
Thermodynamic History Notes - WolframScience.com
History of Thermodynamics - ThermodynamicStudy.net
Historical Background of Thermodynamics - Carnegie-Mellon University
Brief History of Thermodynamics - Berkeley [PDF]
History of Thermodynamics - In Pictures

3.2 An Experimental Enquiry Concerning the Source of the Heat which is Excited by Friction

Benjamin Thompson

An Experimental Enquiry Concerning the Source of the Heat which is Excited by Friction (1798), which was published in the Philosophical Transactions of the Royal Society, is a scientific paper by Benjamin Thompson, Count Rumford, that provided a substantial challenge to established theories of heat and began the 19th century revolution in thermodynamics.

3.2.1 Background

Main article: Caloric theory

Rumford was an opponent of the caloric theory of heat, which held that heat was a fluid that could be neither created nor destroyed. He had further developed the view that all gases and liquids were absolute non-conductors of heat. His views were out of step with the accepted science of the time, and the latter theory had particularly been attacked by John Dalton and John Leslie.

Rumford was heavily influenced by the argument from design and it is likely that he wished to grant water a privileged and providential status in the regulation of human life.

Though Rumford was to come to associate heat with motion, there is no evidence that he was committed to the kinetic theory or the principle of vis viva.

In his 1798 paper, Rumford acknowledged that he had predecessors in the notion that heat was a form of motion. Those predecessors included Francis Bacon, Robert Boyle, Robert Hooke, John Locke, and Henry Cavendish.

3.2.2 Experiments

Rumford had observed the frictional heat generated by boring cannon at the arsenal in Munich. Rumford immersed a cannon barrel in water and arranged for a specially blunted boring tool. He showed that the water could be boiled within roughly two and a half hours and that the supply of frictional heat was seemingly inexhaustible. Rumford confirmed that no physical change had taken place in the material of the cannon by showing that the specific heats of the material machined away and of that remaining were the same.

Rumford argued that the seemingly indefinite generation of heat was incompatible with the caloric theory. He contended that the only thing communicated to the barrel was motion.

Rumford made no attempt to further quantify the heat generated or to measure the mechanical equivalent of heat.

3.2.3 Reception

Most established scientists, such as William Henry, as well as Thomas Thomson, believed that there was enough uncertainty in the caloric theory to allow its adaptation to account for the new results. It had certainly proved robust and adaptable up to that time. Furthermore, Thomson, Jöns Jakob Berzelius, and Antoine César Becquerel observed that electricity could be indefinitely generated by friction. No educated scientist of the time was willing to hold that electricity was not a fluid.

Ultimately, Rumford's claim of the inexhaustible supply of heat was a reckless extrapolation from the study. Charles Haldat made some penetrating criticisms of the reproducibility of Rumford's results, and it is possible to see the whole experiment as somewhat tendentious.

Joule's apparatus for measuring the mechanical equivalent of heat.

However, the experiment inspired the work of James Prescott Joule in the 1840s. Joule's more exact measurements were pivotal in establishing the kinetic theory at the expense of caloric.

3.2.4 Notes

1. ^ Benjamin Count of Rumford (1798) "An inquiry concerning the source of the heat which is excited by friction," Philosophical Transactions of the Royal Society of London, 88: 80-102. doi:10.1098/rstl.1798.0006
2. ^ Cardwell (1971) p. 99
3. ^ Leslie, J. (1804). An Experimental Enquiry into the Nature and Propagation of Heat. London.
4. ^ Rumford (1804) "An enquiry concerning the nature of heat and the mode of its communication," Philosophical Transactions of the Royal Society p. 77
5. ^ Cardwell (1971) pp. 99-100
6. ^ From p. 100 of Rumford's paper of 1798: "Before I finish this paper, I would beg leave to observe, that although, in treating the subject I have endeavoured to investigate, I have made no mention of the names of those who have gone over the same ground before me, nor of the success of their labours; this omission has not been owing to any want of respect for my predecessors, but was merely to avoid prolixity, and to be more at liberty to pursue, without interruption, the natural train of my own ideas."
7. ^ In his Novum Organum (1620), Francis Bacon concludes that heat is the motion of the particles composing matter. In Francis Bacon, Novum Organum (London, England: William Pickering, 1850), from page 164: "... Heat appears to be Motion." From p. 165: "... the very essence of Heat, or the Substantial self of Heat, is motion and nothing else, ..." From p. 168: "... Heat is not a uniform Expansive Motion of the whole, but of the small particles of the body; ..."
8. ^ "Of the mechanical origin of heat and cold" in: Robert Boyle, Experiments, Notes, &c. About the Mechanical Origine or Production of Divers Particular Qualities (London, England: E. Flesher (printer), 1675). At the conclusion of Experiment VI, Boyle notes that if a nail is driven completely into a piece of wood, then further blows with the hammer cause it to become hot as the hammer's force is transformed into random motion of the nail's atoms. From pp. 61-62: "... the impulse given by the stroke, being unable either to drive the nail further on, or destroy its intireness [i.e., entireness, integrity], must be spent in making various vehement and intestine commotion of the parts among themselves, and in such an one we formerly observed the nature of heat to consist."
9. ^ "Lectures of Light" (May 1681) in: Robert Hooke with R. Waller, ed., The Posthumous Works of Robert Hooke (London, England: Samuel Smith and Benjamin Walford, 1705). From page 116: "Now Heat, as I shall afterward prove, is nothing but the internal Motion of the Particles of [a] Body; and the hotter a Body is, the more violently are the Particles moved, ..."
10. ^ Sometime during the period 1698-1704, John Locke wrote his book Elements of Natural Philosophy, which was first published in 1720: John Locke with Pierre Des Maizeaux, ed., A Collection of Several Pieces of Mr. John Locke, Never Before Printed, Or Not Extant in His Works (London, England: R. Francklin, 1720). From p. 224: "Heat, is a very brisk agitation of the insensible parts of the object, which produces in us that sensation, from whence we denominate the object hot: so what in our sensation is heat, in the object is nothing but motion. This appears by the way, whereby heat is produc'd: for we see that the rubbing of a brass-nail upon a board, will make it very hot; and the axle-trees of carts and coaches are often hot, and sometimes to a degree, that it sets them on fire, by rubbing of the nave of the wheel upon it."
11. ^ Henry Cavendish (1783) "Observations on Mr. Hutchins's experiments for determining the degree of cold at which quicksilver freezes," Philosophical Transactions of the Royal Society of London, 73: 303-328. From the footnote continued on p. 313: "... I think Sir Isaac Newton's opinion, that heat consists in the internal motion of the particles of bodies, much the most probable ..."
12. ^ Henry, W. (1802) "A review of some experiments which have been supposed to disprove the materiality of heat," Manchester Memoirs v, p. 603
13. ^ Thomson, T. "Caloric", Supplement on Chemistry, Encyclopædia Britannica, 3rd ed.
14. ^ Haldat, C.N.A. (1810) "Inquiries concerning the heat produced by friction," Journal de Physique lxv, p. 213
15. ^ Cardwell (1971) p. 102

3.2.5 Bibliography

Cardwell, D.S.L. (1971). From Watt to Clausius: The Rise of Thermodynamics in the Early Industrial Age. Heinemann: London. ISBN 0-435-54150-1.

Chapter 4

Chapter 4. System State


4.1 Control volume

In continuum mechanics and thermodynamics, a control volume is a mathematical abstraction employed in the process of creating mathematical models of physical processes. In an inertial frame of reference, it is a volume fixed in space or moving with constant flow velocity through which the continuum (gas, liquid or solid) flows. The surface enclosing the control volume is referred to as the control surface.[1]

At steady state, a control volume can be thought of as an arbitrary volume in which the mass of the continuum remains constant. As a continuum moves through the control volume, the mass entering the control volume is equal to the mass leaving the control volume. At steady state, and in the absence of work and heat transfer, the energy within the control volume remains constant. It is analogous to the classical mechanics concept of the free body diagram.

4.1.1 Overview

Typically, to understand how a given physical law applies to the system under consideration, one first begins by considering how it applies to a small, control volume, or "representative volume". There is nothing special about a particular control volume, it simply represents a small part of the system to which physical laws can be easily applied. This gives rise to what is termed a volumetric, or volume-wise formulation of the mathematical model.

One can then argue that since the physical laws behave in a certain way on a particular control volume, they behave the same way on all such volumes, since that particular control volume was not special in any way. In this way, the corresponding point-wise formulation of the mathematical model can be developed so it can describe the physical behaviour of an entire (and maybe more complex) system.

In continuum mechanics the conservation equations (for instance, the Navier-Stokes equations) are in integral form. They therefore apply on volumes. Finding forms of the equation that are independent of the control volumes allows simplification of the integral signs.

4.1.2 Substantive derivative

Main article: Material derivative

Computations in continuum mechanics often require that the regular time derivative operator d/dt is replaced by the substantive derivative operator D/Dt. This can be seen as follows.

Consider a bug that is moving through a volume where there is some scalar, e.g. pressure, that varies with time and position: p = p(t, x, y, z).

If the bug during the time interval from t to t + dt moves from (x, y, z) to (x + dx, y + dy, z + dz), then the bug experiences a change dp in the scalar value,

  dp = (∂p/∂t) dt + (∂p/∂x) dx + (∂p/∂y) dy + (∂p/∂z) dz

(the total differential). If the bug is moving with a velocity v = (vx, vy, vz), the change in particle position is v dt = (vx dt, vy dt, vz dt), and we may write

  dp = (∂p/∂t) dt + (∂p/∂x) vx dt + (∂p/∂y) vy dt + (∂p/∂z) vz dt
     = ( ∂p/∂t + vx ∂p/∂x + vy ∂p/∂y + vz ∂p/∂z ) dt
     = ( ∂p/∂t + v·∇p ) dt

where ∇p is the gradient of the scalar field p. So:

  dp/dt = ∂p/∂t + v·∇p

If the bug is just moving with the flow, the same formula applies, but now the velocity vector, v, is that of the flow, u. The last parenthesized expression is the substantive derivative of the scalar pressure. Since the pressure p in this computation is an arbitrary scalar field, we may abstract it and write the substantive derivative operator as

  D/Dt = ∂/∂t + u·∇
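To make the operator concrete, the following minimal Python sketch (an illustrative addition, not part of the original article; the field p(t, x) and the flow speed c are arbitrary choices) evaluates Dp/Dt = ∂p/∂t + u ∂p/∂x for a one-dimensional advected field using finite differences. For an observer moving with the flow the field looks steady, so the material derivative vanishes.

```python
import numpy as np

# Illustrative 1-D scalar field p(t, x) = sin(x - c*t), advected at speed c.
c = 2.0                      # flow speed (arbitrary choice)
dx, dt = 1e-4, 1e-4          # finite-difference steps

def p(t, x):
    return np.sin(x - c * t)

def material_derivative(t, x, u):
    """D p / D t = dp/dt + u * dp/dx, via central differences."""
    dp_dt = (p(t + dt, x) - p(t - dt, x)) / (2 * dt)
    dp_dx = (p(t, x + dx) - p(t, x - dx)) / (2 * dx)
    return dp_dt + u * dp_dx

t0, x0 = 0.3, 1.2
print(material_derivative(t0, x0, u=c))    # ~0: steady for a co-moving observer
print(material_derivative(t0, x0, u=0.0))  # equals the local time derivative dp/dt
```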

4.1.3 See also

Continuum mechanics
Cauchy momentum equation
Special relativity
Substantive derivative

4.1.4 References

James R. Welty, Charles E. Wicks, Robert E. Wilson & Gregory Rorrer, Fundamentals of Momentum, Heat, and Mass Transfer. ISBN 0-471-38149-7

Notes

[1] G.J. Van Wylen and R.E. Sonntag (1985), Fundamentals of Classical Thermodynamics, Section 2.1 (3rd edition), John Wiley & Sons, Inc., New York. ISBN 0-471-82933-1

4.1.5 External links

Integral Approach to the Control Volume analysis of Fluid Flow

4.2 Ideal gas

An ideal gas is a theoretical gas composed of many randomly moving point particles that do not interact except when they collide elastically. The ideal gas concept is useful because it obeys the ideal gas law, a simplified equation of state, and is amenable to analysis under statistical mechanics. One mole of an ideal gas has a volume of 22.7 L at STP as defined by IUPAC.

At normal conditions such as standard temperature and pressure, most real gases behave qualitatively like an ideal gas. Many gases such as nitrogen, oxygen, hydrogen, noble gases, and some heavier gases like carbon dioxide can be treated like ideal gases within reasonable tolerances.[1] Generally, a gas behaves more like an ideal gas at higher temperature and lower pressure,[1] as the potential energy due to intermolecular forces becomes less significant compared with the particles' kinetic energy, and the size of the molecules becomes less significant compared to the empty space between them.

The ideal gas model tends to fail at lower temperatures or higher pressures, when intermolecular forces and molecular size become important. It also fails for most heavy gases, such as many refrigerants,[1] and for gases with strong intermolecular forces, notably water vapor. At high pressures, the volume of a real gas is often considerably greater than that of an ideal gas. At low temperatures, the pressure of a real gas is often considerably less than that of an ideal gas. At some point of low temperature and high pressure, real gases undergo a phase transition, such as to a liquid or a solid. The model of an ideal gas, however, does not describe or allow phase transitions. These must be modeled by more complex equations of state. The deviation from the ideal gas behaviour can be described by a dimensionless quantity, the compressibility factor, Z.

The ideal gas model has been explored in both the Newtonian dynamics (as in "kinetic theory") and in quantum mechanics (as a "gas in a box"). The ideal gas model has also been used to model the behavior of electrons in a metal (in the Drude model and the free electron model), and it is one of the most important models in statistical mechanics.

4.2.1 Types of ideal gas

There are three basic classes of ideal gas:

the classical or Maxwell–Boltzmann ideal gas,
the ideal quantum Bose gas, composed of bosons, and
the ideal quantum Fermi gas, composed of fermions.

The classical ideal gas can be separated into two types: the classical thermodynamic ideal gas and the ideal quantum Boltzmann gas. Both are essentially the same, except that the classical thermodynamic ideal gas is based on classical statistical mechanics, and certain thermodynamic parameters such as the entropy are only specified to within an undetermined additive constant. The ideal quantum Boltzmann gas overcomes this limitation by taking the limit of the quantum Bose gas and quantum Fermi gas in the limit of high temperature to specify these additive constants. The behavior of a quantum Boltzmann gas is the same as that of a classical ideal gas except for the specification of these constants. The results of the quantum Boltzmann gas are used in a number of cases including the Sackur–Tetrode equation for the entropy of an ideal gas and the Saha ionization equation for a weakly ionized plasma.
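The three classes differ only in the statistics that govern how particles occupy single-particle energy levels. As a hedged illustration (not from the original article; the temperature, chemical potential and level energies are arbitrary placeholders), the sketch below evaluates the Maxwell–Boltzmann, Bose–Einstein and Fermi–Dirac mean occupation numbers and shows that all three nearly coincide when exp((ε − μ)/kT) is large, i.e., in the classical (Boltzmann) limit referred to above.

```python
import numpy as np

k_B = 1.380649e-23          # Boltzmann constant, J/K

def occupation(eps, mu, T, kind):
    """Mean occupation number of a level with energy eps (J)."""
    x = np.exp((eps - mu) / (k_B * T))
    if kind == "MB":        # Maxwell-Boltzmann (classical / Boltzmann gas)
        return 1.0 / x
    if kind == "BE":        # Bose-Einstein (ideal Bose gas)
        return 1.0 / (x - 1.0)
    if kind == "FD":        # Fermi-Dirac (ideal Fermi gas)
        return 1.0 / (x + 1.0)
    raise ValueError(kind)

T = 300.0                                   # arbitrary temperature, K
mu = -0.3 * 1.602e-19                       # arbitrary chemical potential, J
for eps_eV in (0.0, 0.05, 0.5):             # sample level energies in eV
    eps = eps_eV * 1.602e-19
    print(eps_eV, [occupation(eps, mu, T, s) for s in ("MB", "BE", "FD")])
```

With the chosen (negative) chemical potential every occupation is much less than one and the three statistics agree to many digits, which is exactly the regime in which the quantum Boltzmann gas reduces to the classical ideal gas.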

4.2.2 Classical thermodynamic ideal gas

Macroscopic account

The ideal gas law is an extension of experimentally discovered gas laws. Real fluids at low density and high temperature approximate the behavior of a classical ideal gas. However, at lower temperatures or a higher density, a real fluid deviates strongly from the behavior of an ideal gas, particularly as it condenses from a gas into a liquid or as it deposits from a gas into a solid. This deviation is expressed as a compressibility factor.

The classical thermodynamic properties of an ideal gas can be described by two equations of state.[2][3]

One of them is the well known ideal gas law

  P V = n R T

where

  P is the pressure
  V is the volume
  n is the amount of substance of the gas (in moles)
  R is the gas constant (8.314 J·K⁻¹·mol⁻¹)
  T is the absolute temperature.

This equation is derived from Boyle's law: V = k/P (at constant T and n); Charles's law: V = bT (at constant P and n); and Avogadro's law: V = an (at constant T and P); where

  k is a constant used in Boyle's law
  b is a proportionality constant, equal to V/T
  a is a proportionality constant, equal to V/n.

By combining the three laws, it would demonstrate that V³ = kba(Tn/P), which would mean that V = ∛(kba)·∛(Tn/P). Under ideal conditions, V = R(Tn/P); that is, P V = n R T.

The other equation of state of an ideal gas must express Joule's law, that the internal energy of a fixed mass of ideal gas is a function only of its temperature. For the present purposes it is convenient to postulate an exemplary version of this law by writing:

  U = c_V n R T

where

  U is the internal energy
  c_V is the dimensionless specific heat capacity at constant volume, 3/2 for a monatomic gas, 5/2 for a diatomic gas and 3 for more complex molecules.

Microscopic model

In order to switch from macroscopic quantities (left hand side of the following equation) to microscopic ones (right hand side), we use

  n R = N k_B

where

  N is the number of gas particles
  k_B is the Boltzmann constant (1.381×10⁻²³ J·K⁻¹).

The probability distribution of particles by velocity or energy is given by the Maxwell speed distribution.

The ideal gas model depends on the following assumptions:

  The molecules of the gas are indistinguishable, small, hard spheres
  All collisions are elastic and all motion is frictionless (no energy loss in motion or collision)
  Newton's laws apply
  The average distance between molecules is much larger than the size of the molecules
  The molecules are constantly moving in random directions with a distribution of speeds
  There are no attractive or repulsive forces between the molecules apart from those that determine their point-like collisions
  The only forces between the gas molecules and the surroundings are those that determine the point-like collisions of the molecules with the walls
  In the simplest case, there are no long-range forces between the molecules of the gas and the surroundings.

The assumption of spherical particles is necessary so that there are no rotational modes allowed, unlike in a diatomic gas. The following three assumptions are very related: molecules are hard, collisions are elastic, and there are no inter-molecular forces. The assumption that the space between particles is much larger than the particles themselves is of paramount importance, and explains why the ideal gas approximation fails at high pressures.

4.2.3 Heat capacity

The dimensionless heat capacity at constant volume of any gas, including an ideal gas, is:

  c_V = (1/nR) T (∂S/∂T)_V = (1/nR) (∂U/∂T)_V

where S is the entropy. This is the dimensionless heat capacity at constant volume, which is generally a function of temperature due to intermolecular forces. For moderate temperatures, the constant for a monatomic gas is c_V = 3/2 while for a diatomic gas it is c_V = 5/2. It is seen that macroscopic measurements on heat capacity provide information on the microscopic structure of the molecules.

The heat capacity at constant pressure of 1/R mole of ideal gas is:

  c_P = (1/nR) T (∂S/∂T)_P = (1/nR) (∂H/∂T)_P = c_V + 1

where H = U + PV is the enthalpy of the gas.

Sometimes, a distinction is made between an ideal gas, where c_V and c_P could vary with temperature, and a perfect gas, for which this is not the case.

The ratio of the constant volume and constant pressure heat capacity is

  γ = c_P / c_V

For air, which is a mixture of gases, this ratio is 1.4.

4.2.4 Entropy

Using the results of thermodynamics only, we can go a long way in determining the expression for the entropy of an ideal gas. This is an important step since, according to the theory of thermodynamic potentials, if we can express the entropy as a function of U (U is a thermodynamic potential), volume V and the number of particles N, then we will have a complete statement of the thermodynamic behavior of the ideal gas. We will be able to derive both the ideal gas law and the expression for internal energy from it.

Since the entropy is an exact differential, using the chain rule, the change in entropy when going from a reference state 0 to some other state with entropy S may be written as ΔS where:

  ΔS = ∫ dS = ∫ from T₀ to T (∂S/∂T)_V dT + ∫ from V₀ to V (∂S/∂V)_T dV

where the reference variables may be functions of the number of particles N. Using the definition of the heat capacity at constant volume for the first differential and the appropriate Maxwell relation for the second we have:

  ΔS = ∫ from T₀ to T (C_V/T) dT + ∫ from V₀ to V (∂P/∂T)_V dV.

Expressing C_V in terms of c_V as developed in the above section, differentiating the ideal gas equation of state, and integrating yields:

  ΔS = c_V N k ln(T/T₀) + N k ln(V/V₀)

which implies that the entropy may be expressed as:

  S = N k ln( V T^(c_V) / f(N) )

where all constants have been incorporated into the logarithm as f(N), which is some function of the particle number N having the same dimensions as V T^(c_V) in order that the argument of the logarithm be dimensionless. We now impose the constraint that the entropy be extensive. This will mean that when the extensive parameters (V and N) are multiplied by a constant, the entropy will be multiplied by the same constant. Mathematically:

  S(T, aV, aN) = a S(T, V, N).

From this we find an equation for the function f(N):

  a f(N) = f(aN).

Differentiating this with respect to a, setting a equal to unity, and then solving the differential equation yields f(N):

  f(N) = Φ N

where Φ may vary for different gases, but will be independent of the thermodynamic state of the gas. It will have the dimensions of V T^(c_V)/N. Substituting into the equation for the entropy:

  S/(N k) = ln( V T^(c_V) / (N Φ) )

and using the expression for the internal energy of an ideal gas, the entropy may be written:

  S/(N k) = ln[ (V/N) (U/(c_V k N))^(c_V) (1/Φ) ]

Since this is an expression for entropy in terms of U, V, and N, it is a fundamental equation from which all other properties of the ideal gas may be derived.

This is about as far as we can go using thermodynamics alone. Note that the above equation is flawed: as the temperature approaches zero, the entropy approaches negative infinity, in contradiction to the third law of thermodynamics. In the above "ideal" development, there is a critical point, not at absolute zero, at which the argument of the logarithm becomes unity, and the entropy becomes zero. This is unphysical. The above equation is a good approximation only when the argument of the logarithm is much larger than unity: the concept of an ideal gas breaks down at low values of V/N. Nevertheless, there will be a "best" value of the constant in the sense that the predicted entropy is as close as possible to the actual entropy, given the flawed assumption of ideality. A quantum-mechanical derivation of this constant is developed in the derivation of the Sackur–Tetrode equation which expresses the entropy of a monatomic (c_V = 3/2) ideal gas. In the Sackur–Tetrode theory the constant depends only upon the mass of the gas particle. The Sackur–Tetrode equation also suffers from a divergent entropy at absolute zero, but is a good approximation for the entropy of a monatomic ideal gas for high enough temperatures.

4.2.5 Thermodynamic potentials

Main article: Thermodynamic potential

Expressing the entropy as a function of T, V, and N:

  S/(N k) = ln( V T^(c_V) / (N Φ) )

The chemical potential of the ideal gas is calculated from the corresponding equation of state (see thermodynamic potential):

  μ = (∂G/∂N)_(T,P)

where G is the Gibbs free energy and is equal to U + PV − TS, so that:

  μ(T, V, N) = kT ( c_P − ln( V T^(c_V) / (N Φ) ) )

The thermodynamic potentials for an ideal gas can now be written as functions of T, V, and N as:

  U = c_V N k T
  A = U − TS
  H = U + PV = c_P N k T
  G = U + PV − TS = μ N

where, as before, c_P = c_V + 1. The most informative way of writing the potentials is in terms of their natural variables, since each of these equations can be used to derive all of the other thermodynamic variables of the system. In terms of their natural variables, the thermodynamic potentials of a single-species ideal gas are:

  U(S, V, N) = c_V N k ( (N Φ / V) e^(S/(N k)) )^(1/c_V)
  A(T, V, N) = N k T ( c_V − ln( V T^(c_V) / (N Φ) ) )
  H(S, P, N) = c_P N k ( (P Φ / k) e^(S/(N k)) )^(1/c_P)
  G(T, P, N) = N k T ( c_P − ln( k T^(c_P) / (P Φ) ) )

In statistical mechanics, the relationship between the Helmholtz free energy and the partition function is fundamental, and is used to calculate the thermodynamic properties of matter; see configuration integral for more details.
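The extensivity constraint used above can be checked numerically. The short sketch below (an illustrative addition, not from the original article; the constant Φ and the state point are arbitrary placeholders) evaluates S = N k ln(V T^(c_V)/(N Φ)), verifies that scaling V and N by the same factor scales S by that factor, and shows the logarithm turning negative at very low temperature, which is the unphysical regime discussed in the entropy section.

```python
import math

k = 1.380649e-23      # Boltzmann constant, J/K

def entropy(T, V, N, c_V=1.5, Phi=1e-26):
    """S = N k ln(V * T**c_V / (N * Phi)); Phi is an arbitrary placeholder constant."""
    return N * k * math.log(V * T**c_V / (N * Phi))

T, V, N = 300.0, 1e-3, 1e22          # arbitrary state point (SI units)
a = 3.0                              # common scale factor for the extensive variables
print(entropy(T, V, N))              # reference value
print(entropy(T, a * V, a * N))      # equals a * entropy(T, V, N): extensivity holds
print(a * entropy(T, V, N))
print(entropy(1e-6, V, N))           # very low T: argument < 1, S < 0 (unphysical regime)
```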

4.2.6 Speed of sound

Main article: Speed of sound

The speed of sound in an ideal gas is given by

  c_sound = √( (∂P/∂ρ)_s ) = √( γ P / ρ ) = √( γ R T / M )

where

  γ is the adiabatic index (c_P/c_V)
  s is the entropy per particle of the gas
  ρ is the mass density of the gas
  P is the pressure of the gas
  R is the universal gas constant
  T is the temperature
  M is the molar mass of the gas.

4.2.7 Table of ideal gas equations

See Table of thermodynamic equations: Ideal gas.

4.2.8 Ideal quantum gases

In the above-mentioned Sackur–Tetrode equation, the best choice of the entropy constant was found to be proportional to the quantum thermal wavelength of a particle, and the point at which the argument of the logarithm becomes zero is roughly equal to the point at which the average distance between particles becomes equal to the thermal wavelength. In fact, quantum theory itself predicts the same thing. Any gas behaves as an ideal gas at high enough temperature and low enough density, but at the point where the Sackur–Tetrode equation begins to break down, the gas will begin to behave as a quantum gas, composed of either bosons or fermions. (See the gas in a box article for a derivation of the ideal quantum gases, including the ideal Boltzmann gas.)

Gases tend to behave as an ideal gas over a wider range of pressures when the temperature reaches the Boyle temperature.

Ideal Boltzmann gas

The ideal Boltzmann gas yields the same results as the classical thermodynamic gas, but makes the following identification for the undetermined constant Φ:

  Φ = T^(3/2) Λ³ / g

where Λ is the thermal de Broglie wavelength of the gas and g is the degeneracy of states.

Ideal Bose and Fermi gases

An ideal gas of bosons (e.g. a photon gas) will be governed by Bose–Einstein statistics and the distribution of energy will be in the form of a Bose–Einstein distribution. An ideal gas of fermions will be governed by Fermi–Dirac statistics and the distribution of energy will be in the form of a Fermi–Dirac distribution.
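The classical/quantum boundary described above can be estimated directly: a gas behaves classically while the mean interparticle distance (V/N)^(1/3) is much larger than the thermal de Broglie wavelength Λ = h/√(2π m k T). The sketch below (an illustrative addition, not from the original article; helium at 1 bar is an arbitrary example, and the factor of 10 in the comparison is a rough rule of thumb) makes that comparison at two temperatures.

```python
import math

h   = 6.62607015e-34    # Planck constant, J*s
k_B = 1.380649e-23      # Boltzmann constant, J/K

def thermal_wavelength(m, T):
    """Thermal de Broglie wavelength, h / sqrt(2*pi*m*k_B*T)."""
    return h / math.sqrt(2.0 * math.pi * m * k_B * T)

def interparticle_distance(P, T):
    """(V/N)**(1/3) for an ideal gas at pressure P and temperature T."""
    n_density = P / (k_B * T)            # N/V from P V = N k_B T
    return n_density ** (-1.0 / 3.0)

m_He = 4.0026 * 1.66054e-27              # mass of a helium atom, kg
for T in (300.0, 1.0):                   # room temperature vs. an (illustrative) 1 K state
    lam = thermal_wavelength(m_He, T)
    d = interparticle_distance(1.0e5, T) # 1 bar, arbitrary choice
    print(f"T={T} K: lambda={lam:.2e} m, spacing={d:.2e} m, classical={d > 10 * lam}")
```

At room temperature the spacing exceeds the thermal wavelength by orders of magnitude (classical regime); at the low-temperature point the two become comparable, which is where the quantum Bose or Fermi description takes over.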

4.2.9 See also

Compressibility factor
Dynamical billiards - billiard balls as a model of an ideal gas
Table of thermodynamic equations
Scale-free ideal gas

4.2.10 References

[1] Cengel, Yunus A.; Boles, Michael A. Thermodynamics: An Engineering Approach (Fourth ed.). p. 89. ISBN 0-07-238332-1.
[2] Adkins, C.J. (1968/1983). Equilibrium Thermodynamics, (1st edition 1968), third edition 1983, Cambridge University Press, Cambridge UK, ISBN 0-521-25445-0, pp. 116-120.
[3] Tschoegl, N.W. (2000). Fundamentals of Equilibrium and Steady-State Thermodynamics, Elsevier, Amsterdam, ISBN 0-444-50426-5, p. 88.

4.3 Real gas

Real gases are non-hypothetical gases whose molecules occupy space and have interactions; consequently, they do not adhere to the ideal gas law. To understand the behaviour of real gases, the following must be taken into account:

compressibility effects;
variable specific heat capacity;
van der Waals forces;
non-equilibrium thermodynamic effects;
issues with molecular dissociation and elementary reactions with variable composition

For most applications, such a detailed analysis is unnecessary, and the ideal gas approximation can be used with reasonable accuracy. On the other hand, real-gas models have to be used near the condensation point of gases, near critical points, at very high pressures, to explain the Joule–Thomson effect and in other less usual cases. The deviation from ideality can be described by the compressibility factor Z.

4.3.1 Models

Main article: Equation of state

Isotherms of real gas.
Dark blue curves: isotherms below the critical temperature. Green sections: metastable states.
The section to the left of point F: normal liquid.
Point F: boiling point.
Line FG: equilibrium of liquid and gaseous phases.
Section FA: superheated liquid.
Section F′A: stretched liquid (p<0).
Section AC: analytic continuation of isotherm, physically impossible.
Section CG: supercooled vapor.
Point G: dew point.
The plot to the right of point G: normal gas.
Areas FAB and GCB are equal.
Red curve: critical isotherm.
Point K: critical point.
Light blue curves: supercritical isotherms.

van der Waals model

Main article: van der Waals equation

Real gases are often modeled by taking into account their molar weight and molar volume

  R T = ( P + a/V_m² ) (V_m − b)

where P is the pressure, T is the temperature, R the ideal gas constant, and V_m the molar volume. a and b are parameters that are determined empirically for each gas, but are sometimes estimated from their critical temperature (T_c) and critical pressure (P_c) using these relations:

  a = 27 R² T_c² / (64 P_c)
  b = R T_c / (8 P_c)

Redlich–Kwong model

The Redlich–Kwong equation is another two-parameter equation that is used to model real gases. It is almost always more accurate than the van der Waals equation, and often more accurate than some equations with more than two parameters. The equation is

  R T = ( P + a / (√T · V_m (V_m + b)) ) (V_m − b)

where a and b are two empirical parameters that are not the same parameters as in the van der Waals equation. These parameters can be determined:

  a = 0.42748 R² T_c^(5/2) / P_c
  b = 0.08664 R T_c / P_c
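As a hedged illustration of how these two-parameter models are used (not part of the original article; the CO2 critical constants are approximate literature values and the chosen state is arbitrary), the sketch below estimates the van der Waals a and b for carbon dioxide from T_c and P_c, then compares the predicted pressure and the compressibility factor Z = P V_m/(R T) with the ideal-gas value.

```python
R = 8.314                     # gas constant, J/(mol*K)

def vdw_constants(Tc, Pc):
    """van der Waals a and b from the critical temperature and pressure."""
    return 27 * R**2 * Tc**2 / (64 * Pc), R * Tc / (8 * Pc)

def vdw_pressure(T, Vm, a, b):
    """P from (P + a/Vm**2) * (Vm - b) = R * T."""
    return R * T / (Vm - b) - a / Vm**2

Tc, Pc = 304.13, 7.3773e6     # CO2 critical point (approximate values)
a, b = vdw_constants(Tc, Pc)

T, Vm = 300.0, 5.0e-4         # 300 K, 0.5 L/mol: a fairly dense gas state
P_vdw = vdw_pressure(T, Vm, a, b)
P_ideal = R * T / Vm
print(a, b)                       # ~0.366 Pa*m^6/mol^2 and ~4.3e-5 m^3/mol
print(P_vdw / 1e5, P_ideal / 1e5) # ~40 bar vs ~50 bar
print(P_vdw * Vm / (R * T))       # compressibility factor Z ~ 0.8 (< 1) at this state
```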

Berthelot and modified Berthelot model

The Berthelot equation (named after D. Berthelot)[1] is very rarely used,

  P = R T / (V_m − b) − a / (T V_m²)

but the modified version is somewhat more accurate

  P = (R T / V_m) [ 1 + (9 P/P_c)/(128 T/T_c) ( 1 − 6/(T/T_c)² ) ]

Dieterici model

This model (named after C. Dieterici[2]) fell out of usage in recent years

  P = R T exp( −a/(V_m R T) ) / (V_m − b)

Clausius model

The Clausius equation (named after Rudolf Clausius) is a very simple three-parameter equation used to model gases.

  R T = ( P + a/(T (V_m + c)²) ) (V_m − b)

where

  a = 27 R² T_c³ / (64 P_c)
  b = V_c − R T_c / (4 P_c)
  c = 3 R T_c / (8 P_c) − V_c

where V_c is the critical volume.

Virial model

The Virial equation derives from a perturbative treatment of statistical mechanics.

  P V_m = R T ( 1 + B(T)/V_m + C(T)/V_m² + D(T)/V_m³ + ... )

or alternatively

  P V_m = R T ( 1 + B′(T) P + C′(T) P² + D′(T) P³ + ... )

where A, B, C, A′, B′, and C′ are temperature dependent constants.

Peng–Robinson model

The Peng–Robinson equation of state (named after D.-Y. Peng and D. B. Robinson[3]) has the interesting property of being useful in modeling some liquids as well as real gases.

  P = R T / (V_m − b) − a(T) / ( V_m (V_m + b) + b (V_m − b) )

Wohl model

The Wohl equation (named after A. Wohl[4]) is formulated in terms of critical values, making it useful when real gas constants are not available.

  R T = ( P + a/(T V_m (V_m − b)) − c/(T² V_m³) ) (V_m − b)

where

  a = 6 P_c T_c V_c²
  b = V_c / 4
  c = 4 P_c T_c² V_c³

Beattie–Bridgeman model

This equation is based on five experimentally determined constants.[5] It is expressed as

  P = (R T / v²) ( 1 − c/(v T³) ) (v + B) − A/v²

where

  A = A₀ ( 1 − a/v )
  B = B₀ ( 1 − b/v )

This equation is known to be reasonably accurate for densities up to about 0.8 ρ_c, where ρ_c is the density of the substance at its critical point. The constants appearing in the above equation are available in the following table when P is in kPa, v is in m³/kmol, T is in K and R = 8.314 kPa·m³/(kmol·K).[6]

Benedict–Webb–Rubin model

Main article: Benedict–Webb–Rubin equation

The BWR equation, sometimes referred to as the BWRS equation,

  P = R T d + d² ( R T (B + b d) − (A + a d − a α d⁴) − (1/T²) [ C − c d (1 + γ d²) exp(−γ d²) ] )

where d is the molar density and where a, b, c, A, B, C, α, and γ are empirical constants. Note that the γ constant is a derivative of constant α and therefore almost identical to 1.
4.3.2 See also

Gas laws
Ideal gas law by Boyle & Gay-Lussac
Compressibility factor
Equation of state

4.3.3 References

[1] D. Berthelot in Travaux et Mémoires du Bureau international des Poids et Mesures, Tome XIII (Paris: Gauthier-Villars, 1907)
[2] C. Dieterici, Ann. Phys. Chem. Wiedemanns Ann. 69, 685 (1899)
[3] Peng, D. Y., and Robinson, D. B. (1976). "A New Two-Constant Equation of State". Industrial and Engineering Chemistry: Fundamentals 15: 59-64. doi:10.1021/i160057a011.
[4] A. Wohl, "Investigation of the condition equation", Zeitschrift für Physikalische Chemie (Leipzig) 87, pp. 1-39 (1914)
[5] Yunus A. Cengel and Michael A. Boles, Thermodynamics: An Engineering Approach, 7th Edition, McGraw-Hill, 2010, ISBN 007-352932-X
[6] Gordan J. Van Wylen and Richard E. Sonntage, Fundamental of Classical Thermodynamics, 3rd ed, New York, John Wiley & Sons, 1986, p. 46, table 3.3

Dilip Kondepudi, Ilya Prigogine, Modern Thermodynamics, John Wiley & Sons, 1998, ISBN 0-471-97393-9
Hsieh, Jui Sheng, Engineering Thermodynamics, Prentice-Hall Inc., Englewood Cliffs, New Jersey 07632, 1993. ISBN 0-13-275702-8
Stanley M. Walas, Phase Equilibria in Chemical Engineering, Butterworth Publishers, 1985. ISBN 0-409-95162-5
M. Aznar and A. Silva Telles, "A Data Bank of Parameters for the Attractive Coefficient of the Peng-Robinson Equation of State", Braz. J. Chem. Eng. vol. 14 no. 1, São Paulo, Mar. 1997, ISSN 0104-6632
An introduction to thermodynamics by Y. V. C. Rao
The corresponding-states principle and its practice: thermodynamic, transport and surface properties of fluids by Hong Wei Xiang

4.3.4 External links

http://www.ccl.net/cca/documents/dyoung/topics-orig/eq_state.html

Chapter 5

Chapter 5. System Processes


5.1 Thermodynamic process

Classical thermodynamics considers three main kinds of thermodynamic process: change in a system, cycles in a system, and flow processes.

Defined by change in a system, a thermodynamic process is a passage of a thermodynamic system from an initial to a final state of thermodynamic equilibrium. The initial and final states are the defining elements of the process. The actual course of the process is not the primary concern, and often is ignored. This is the customary default meaning of the term 'thermodynamic process'. In general, during the actual course of a thermodynamic process, the system passes through physical states which are not describable as thermodynamic states, because they are far from internal thermodynamic equilibrium. Such processes are useful for thermodynamic theory.

Defined by a cycle of transfers into and out of a system, a cyclic process is described by the quantities transferred in the several stages of the cycle, which recur unchangingly. The descriptions of the staged states of the system are not the primary concern. Cyclic processes were important conceptual devices in the early days of thermodynamical investigation, while the concept of the thermodynamic state variable was being developed.

Defined by flows through a system, a flow process is a steady state of flows into and out of a vessel with definite wall properties. The internal state of the vessel contents is not the primary concern. The quantities of primary concern describe the states of the inflow and the outflow materials, and, on the side, the transfers of heat, work, and kinetic and potential energies for the vessel. Flow processes are of interest in engineering.

5.1.1 Kinds of process

Thermodynamic process

Defined by change in a system, a thermodynamic process is a passage of a thermodynamic system from an initial to a final state of thermodynamic equilibrium. The initial and final states are the defining elements of the process. The actual course of the process is not the primary concern, and often is ignored. A state of thermodynamic equilibrium endures unchangingly unless it is interrupted by a thermodynamic operation that initiates a thermodynamic process. The equilibrium states are each respectively fully specified by a suitable set of thermodynamic state variables, that depend only on the current state of the system, not the path taken by the processes that produce that state. In general, during the actual course of a thermodynamic process, the system passes through physical states which are not describable as thermodynamic states, because they are far from internal thermodynamic equilibrium. Such a process may therefore be admitted for equilibrium thermodynamics, but not be admitted for non-equilibrium thermodynamics, which primarily aims to describe the continuous passage along the path, at definite rates of progress.

Though not so in general, it is, however, possible, that a process may take place slowly or smoothly enough to allow its description to be usefully approximated by a continuous path of equilibrium thermodynamic states. Then it may be approximately described by a process function that does depend on the path. Such a process may be idealized as a 'quasi-static' process, which is infinitely slow, and which is really a theoretical exercise in differential geometry, as opposed to an actually possible physical process; in this idealized case, the calculation may be exact, though the process does not actually occur in nature. Such idealized processes are useful in the theory of thermodynamics.

Cyclic process

Defined by a cycle of transfers into and out of a system, a cyclic process is described by the quantities transferred in the several stages of the cycle. The descriptions of the staged states of the system may be of little or even no interest. A cycle is a sequence of a small number of thermodynamic processes that indefinitely often repeatedly returns the system to its original state. For this, the staged states themselves are not necessarily described, because it is the transfers that are of interest. It is reasoned that if the cycle can be repeated indefinitely often, then it can be assumed that the states are recurrently unchanged. The condition of the system during the several staged processes may be of even less interest than is the precise nature of the recurrent states. If, however, the several staged processes are idealized and quasi-static, then the cycle is described by a path through a continuous progression of equilibrium states.

Flow process
Dened by ows through a system, a ow process is a steady
state of ow into and out of a vessel with denite wall properties. The internal state of the vessel contents is not the primary concern. The quantities of primary concern describe
the states of the inow and the outow materials, and, on
the side, the transfers of heat, work, and kinetic and potential energies for the vessel. The states of the inow and
outow materials consist of their internal states, and of their
kinetic and potential energies as whole bodies. Very often,
the quantities that describe the internal states of the input
and output materials are estimated on the assumption that
they are bodies in their own states of internal thermodynamic equilibrium. Because rapid reactions are permitted,
the thermodynamic treatment may be approximate, not exact.

5.1.2 A cycle of quasi-static processes

Main article: Stirling cycle

A quasi-static thermodynamic process can be visualized by graphically plotting the path of idealized changes to the system's state variables. In the example, a cycle consisting of four quasi-static processes is shown. Each process has a well-defined start and end point in the pressure-volume state space. In this particular example, processes 1 and 3 are isothermal, whereas processes 2 and 4 are isochoric. The PV diagram is a particularly useful visualization of a quasi-static process, because the area under the curve of a process is the amount of work done by the system during that process. Thus work is considered to be a process variable, as its exact value depends on the particular path taken between the start and end points of the process. Similarly, heat may be transferred during a process, and it too is a process variable.

[Figure: An example of a cycle of idealized thermodynamic processes which make up the Stirling cycle.]
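A minimal numerical sketch of the work bookkeeping for this cycle, assuming an ideal working gas (an assumption made here for illustration): the two isochoric legs do no pressure-volume work, so the net work is the sum of the two isothermal contributions. The temperatures and volumes below are illustrative, not taken from the figure.

# Sketch: net work for an idealized Stirling-type cycle of two isotherms
# and two isochores, treating the working fluid as an ideal gas.
import math

R = 8.314  # J/(mol K)

def isothermal_work(n, T, V_start, V_end):
    # W = n R T ln(V_end / V_start), work done by the gas
    return n * R * T * math.log(V_end / V_start)

n, T_hot, T_cold = 1.0, 400.0, 300.0   # illustrative values
V_small, V_large = 1e-3, 2e-3          # m^3

w_expand = isothermal_work(n, T_hot, V_small, V_large)     # isothermal leg 1
w_compress = isothermal_work(n, T_cold, V_large, V_small)  # isothermal leg 3
net_work = w_expand + w_compress       # isochoric legs contribute zero work
print(net_work)  # equals n R (T_hot - T_cold) ln(V_large / V_small)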

5.1.3 Conjugate variable processes

It is often useful to group processes into pairs, in which each variable held constant is one member of a conjugate pair.

Pressure - volume

The pressure-volume conjugate pair is concerned with the transfer of mechanical or dynamic energy as the result of work.

An isobaric process occurs at constant pressure. An example would be to have a movable piston in a cylinder, so that the pressure inside the cylinder is always at atmospheric pressure, although it is separated from the atmosphere. In other words, the system is dynamically connected, by a movable boundary, to a constant-pressure reservoir.

An isochoric process is one in which the volume is held constant, with the result that the mechanical PV work done by the system will be zero. On the other hand, work can be done isochorically on the system, for example by a shaft that drives a rotary paddle located inside the system. It follows that, for the simple system of one deformation variable, any heat energy transferred to the system externally will be absorbed as internal energy. An isochoric process is also known as an isometric process or an isovolumetric process. An example would be to place a closed tin can of material into a fire. To a first approximation, the can will not expand, and the only change will be that the contents gain internal energy, evidenced by increase in temperature and pressure. Mathematically, δQ = dU. The system is dynamically insulated, by a rigid boundary, from the environment.

Temperature - entropy

The temperature-entropy conjugate pair is concerned with the transfer of energy, especially for a closed system.

An isothermal process occurs at a constant temperature. An example would be a closed system immersed in and thermally connected with a large constant-temperature bath. Energy gained by the system, through work done on it, is lost to the bath, so that its temperature remains constant.

An adiabatic process is a process in which there is no matter or heat transfer, because a thermally insulating wall separates the system from its surroundings. For the process to be natural, either (a) work must be done on the system at a finite rate, so that the internal energy of the system increases; the entropy of the system increases even though it is thermally insulated; or (b) the system must do work on the surroundings, which then suffer increase of entropy, as well as gaining energy from the system.

An isentropic process is customarily defined as an idealized quasi-static reversible adiabatic process, of transfer of energy as work. Otherwise, for a constant-entropy process, if work is done irreversibly, heat transfer is necessary, so that the process is not adiabatic, and an accurate artificial control mechanism is necessary; such is therefore not an ordinary natural thermodynamic process.

Chemical potential - particle number

The processes just above have assumed that the boundaries are also impermeable to particles. Otherwise, we may assume boundaries that are rigid, but are permeable to one or more types of particle. Similar considerations then hold for the chemical potential-particle number conjugate pair, which is concerned with the transfer of energy via this transfer of particles.

In a constant chemical potential process the system is particle-transfer connected, by a particle-permeable boundary, to a constant-μ reservoir.

The conjugate here is a constant particle number process. These are the processes outlined just above. There is no energy added or subtracted from the system by particle transfer. The system is particle-transfer-insulated from its environment by a boundary that is impermeable to particles, but permissive of transfers of energy as work or heat. These processes are the ones by which thermodynamic work and heat are defined, and for them, the system is said to be closed.

5.1.4 Thermodynamic potentials

Any of the thermodynamic potentials may be held constant during a process. For example:

An isenthalpic process introduces no change in enthalpy in the system.

5.1.5 Polytropic processes

Main article: Polytropic process

A polytropic process is a thermodynamic process that obeys the relation:

P V^n = C,

where P is the pressure, V is volume, n is any real number (the polytropic index), and C is a constant. This equation can be used to accurately characterize processes of certain systems, notably the compression or expansion of a gas, but in some cases, liquids and solids.
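A short sketch of how the relation can be used in practice, with the constant C fixed by an assumed initial state (the numbers are illustrative, not from the text):

# Sketch: pressure along a polytropic path P V^n = C, with C fixed by
# the initial state.
def polytropic_pressure(P1, V1, V, n):
    """Return P such that P * V**n equals the constant P1 * V1**n."""
    C = P1 * V1**n
    return C / V**n

# n = 1 reproduces an isothermal path for an ideal gas; n = 1.4 is a
# common adiabatic index for diatomic gases; n = 0 gives constant pressure.
print(polytropic_pressure(P1=1.0e5, V1=1.0e-3, V=0.5e-3, n=1.4))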

5.1.6 Processes classified by the second law of thermodynamics

According to Planck, one may think of three main classes of thermodynamic process: natural, fictively reversible, and impossible or unnatural.[1][2]

Natural process

Only natural processes occur in nature. For thermodynamics, a natural process is a transfer between systems that increases the sum of their entropies, and is irreversible.[1] Natural processes may occur spontaneously, or may be triggered in a metastable or unstable system, as for example in the condensation of a supersaturated vapour.[3]


Fictively reversible process

To describe the geometry of graphical surfaces that illustrate equilibrium relations between thermodynamic functions of state, one can fictively think of so-called reversible processes. They are convenient theoretical objects that trace paths across graphical surfaces. They are called processes but do not describe naturally occurring processes, which are always irreversible. Because the points on the paths are points of thermodynamic equilibrium, it is customary to think of the processes described by the paths as fictively reversible.[1]

Unnatural process

Unnatural processes are logically conceivable but do not occur in nature. They would decrease the sum of the entropies if they occurred.[1]

Quasistatic process

Main article: Quasistatic process

A quasistatic process is an idealized or fictive model of a thermodynamic process considered in theoretical studies. It does not occur in physical reality. It may be imagined as happening infinitely slowly so that the system passes through a continuum of states that are infinitesimally close to equilibrium. The fictive process can be regarded as reversible.

5.1.7 See also

Flow process
Heat
Kalina cycle
Phase transition
Work (thermodynamics)

5.1.8 References

[1] Guggenheim, E.A. (1949/1967). Thermodynamics. An Advanced Treatment for Chemists and Physicists, fifth revised edition, North-Holland, Amsterdam, p. 12.
[2] Tisza, L. (1966). Generalized Thermodynamics, M.I.T. Press, Cambridge MA, p. 32.
[3] Planck, M. (1897/1903). Treatise on Thermodynamics, translated by A. Ogg, Longmans, Green & Co., London, p. 82.

5.1.9 Further reading

Physics for Scientists and Engineers - with Modern Physics (6th Edition), P. A. Tipler, G. Mosca, Freeman, 2008, ISBN 0-7167-8964-7
Encyclopaedia of Physics (2nd Edition), R.G. Lerner, G.L. Trigg, VHC publishers, 1991, ISBN 3-527-26954-1 (Verlagsgesellschaft), ISBN 0-89573-752-3 (VHC Inc.)
McGraw Hill Encyclopaedia of Physics (2nd Edition), C.B. Parker, 1994, ISBN 0-07-051400-3
Physics with Modern Applications, L.H. Greenberg, Holt-Saunders International W.B. Saunders and Co, 1978, ISBN 0-7216-4247-0
Essential Principles of Physics, P.M. Whelan, M.J. Hodgeson, 2nd Edition, 1978, John Murray, ISBN 0-7195-3382-1
Thermodynamics, From Concepts to Applications (2nd Edition), A. Shavit, C. Gutfinger, CRC Press (Taylor and Francis Group, USA), 2009, ISBN 9781420073683
Chemical Thermodynamics, D.J.G. Ives, University Chemistry, Macdonald Technical and Scientific, 1971, ISBN 0-356-03736-3
Elements of Statistical Thermodynamics (2nd Edition), L.K. Nash, Principles of Chemistry, Addison-Wesley, 1974, ISBN 0-201-05229-6
Statistical Physics (2nd Edition), F. Mandl, Manchester Physics, John Wiley & Sons, 2008, ISBN 9780471915331

5.2 Isobaric process

An isobaric process is a thermodynamic process in which the pressure stays constant: ΔP = 0. The term derives from the Greek iso- (equal) and baros (weight). The heat transferred to the system does work, but also changes the internal energy of the system:

Q = ΔU + W

According to the first law of thermodynamics, W is work done by the system, U is internal energy, and Q is heat.[1] Pressure-volume work by the closed system is defined as:

W = ∫ p dV

[Figure: The yellow area represents the work done.]

where Δ means change over the whole process, whereas d denotes a differential. Since pressure is constant, this means that

W = p ΔV

Applying the ideal gas law, this becomes

W = n R ΔT

assuming that the quantity of gas stays constant, e.g., there is no phase transition during a chemical reaction. According to the equipartition theorem, the change in internal energy is related to the temperature of the system by

ΔU = n cV ΔT

where cV is specific heat at a constant volume.

Substituting the last two equations into the first equation produces:

Q = n cV ΔT + n R ΔT = n (cV + R) ΔT = n cP ΔT

where cP is specific heat at a constant pressure.

5.2.1 Specific heat capacity

To find the molar specific heat capacity of the gas involved, the following equations apply for any general gas that is calorically perfect. The property γ is either called the adiabatic index or the heat capacity ratio. Some published sources might use k instead of γ.

Molar isochoric specific heat: cV = R/(γ − 1)

Molar isobaric specific heat: cP = γR/(γ − 1)

The values for γ are γ = 1.4 for diatomic gases like air and its major components, and γ = 5/3 for monatomic gases like the noble gases. The formulas for specific heats reduce in these special cases:

Monatomic: cV = 3R/2 and cP = 5R/2

Diatomic: cV = 5R/2 and cP = 7R/2

An isobaric process is shown on a P-V diagram as a straight horizontal line, connecting the initial and final thermostatic states. If the process moves towards the right, then it is an expansion. If the process moves towards the left, then it is a compression.

5.2.2 Sign convention for work

The motivation for the specific sign conventions of thermodynamics comes from early development of heat engines. When designing a heat engine, the goal is to have the system produce and deliver work output. The source of energy in a heat engine is a heat input.

If the volume compresses (ΔV = final volume − initial volume < 0), then W < 0. That is, during isobaric compression the gas does negative work, or the environment does positive work. Restated, the environment does positive work on the gas.

If the volume expands (ΔV = final volume − initial volume > 0), then W > 0. That is, during isobaric expansion the gas does positive work, or equivalently, the environment does negative work. Restated, the gas does positive work on the environment.

If heat is added to the system, then Q > 0. That is, during isobaric expansion/heating, positive heat is added to the gas, or equivalently, the environment receives negative heat. Restated, the gas receives positive heat from the environment.

If the system rejects heat, then Q < 0. That is, during isobaric compression/cooling, negative heat is added to the gas, or equivalently, the environment receives positive heat. Restated, the environment receives positive heat from the gas.

5.2.3 Defining enthalpy

An isochoric process is described by the equation Q = ΔU. It would be convenient to have a similar equation for isobaric processes. Substituting the second equation into the first yields

Q = ΔU + Δ(p V) = Δ(U + p V)

The quantity U + p V is a state function so that it can be given a name. It is called enthalpy, and is denoted as H. Therefore an isobaric process can be more succinctly described as

Q = ΔH

Enthalpy and isochoric specific heat capacity are very useful mathematical constructs, since when analyzing a process in an open system, the situation of zero work occurs when the fluid flows at constant pressure. In an open system, enthalpy is the quantity which is useful to use to keep track of energy content of the fluid.
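The relations above lend themselves to a small worked check. The sketch below is an illustration only (not part of the original text): it takes a diatomic ideal gas, so cV = 5R/2, and verifies that Q = ΔU + W for an isobaric temperature rise.

# Sketch: isobaric heating of an ideal gas, using W = n R dT,
# dU = n cV dT and Q = n cP dT = dH. A diatomic gas is assumed for
# illustration, so cV = 5R/2 and cP = 7R/2.
R = 8.314  # J/(mol K)

def isobaric_energy_budget(n, dT, cV=2.5 * R):
    cP = cV + R
    work_by_gas = n * R * dT          # W = p dV = n R dT at constant p
    dU = n * cV * dT                  # internal energy change
    Q = n * cP * dT                   # heat added, equal to dH
    return work_by_gas, dU, Q

W, dU, Q = isobaric_energy_budget(n=1.0, dT=100.0)
print(W, dU, Q, abs(Q - (dU + W)) < 1e-9)  # first law: Q = dU + W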

5.2.4 Variable density viewpoint

A given quantity (mass m) of gas in a changing volume produces a change in density ρ. In this context the ideal gas law is written

R T ρ = M P

where T is thermodynamic temperature and M is molar mass. When R and M are taken as constant, then pressure P can stay constant as the density-temperature quadrant (ρ, T) undergoes a squeeze mapping.[2]

5.2.5 See also

Adiabatic process
Cyclic process
Isochoric process
Isothermal process
Polytropic process
Isenthalpic process

5.2.6 References

[1] "First Law of Thermodynamics". Hyperphysics.
[2] Peter Olver (1999), Classical Invariant Theory, p. 217

5.3 Isochoric process

An isochoric process, also called a constant-volume process, an isovolumetric process, or an isometric process, is a thermodynamic process during which the volume of the closed system undergoing such a process remains constant. An isochoric process is exemplified by the heating or the cooling of the contents of a sealed, inelastic container: the thermodynamic process is the addition or removal of heat; the isolation of the contents of the container establishes the closed system; and the inability of the container to deform imposes the constant-volume condition. The isochoric process here should be a quasi-static process.

5.3.1 Formalism

An isochoric thermodynamic process is characterized by constant volume, i.e., ΔV = 0. The process does no pressure-volume work, since such work is defined by

W = P ΔV

where P is pressure. The sign convention is such that positive work is performed by the system on the environment.

If the process is not quasi-static, work can perhaps be done in a volume-constant thermodynamic process.[1]

For a reversible process, the first law of thermodynamics gives the change in the system's internal energy:

dU = dQ − dW

Replacing work with a change in volume gives

dU = dQ − P dV

Since the process is isochoric, dV = 0, the previous equation now gives

dU = dQ

Using the definition of specific heat capacity at constant volume, cv = dU/(m dT),

dQ = m cv dT

Integrating both sides yields

Q = m ∫_{T1}^{T2} cv dT

where cv is the specific heat capacity at constant volume, T1 is the initial temperature and T2 is the final temperature. We conclude with:

Q = m cv ΔT

[Figure: Isochoric process in the pressure-volume diagram. In this diagram, pressure increases, but volume remains constant.]

On a pressure-volume diagram, an isochoric process appears as a straight vertical line. Its thermodynamic conjugate, an isobaric process, would appear as a straight horizontal line.
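A minimal sketch of the result Q = m cv ΔT for a gas heated in a rigid container, with the pressure rise estimated from the ideal gas law; the specific heat and starting pressure are illustrative values, not from the text.

# Sketch: constant-volume heating. Q = m * cv * dT follows from the
# derivation above; for an ideal gas in a rigid container the pressure
# scales with absolute temperature (P2/P1 = T2/T1).
def isochoric_heat(mass_kg, cv_J_per_kgK, T1, T2):
    return mass_kg * cv_J_per_kgK * (T2 - T1)

T1, T2 = 300.0, 450.0
Q = isochoric_heat(mass_kg=1.0, cv_J_per_kgK=718.0, T1=T1, T2=T2)
P1 = 100e3                      # Pa, initial pressure (assumed)
P2 = P1 * T2 / T1               # rigid container, ideal gas
print(Q, P2)                    # no PV work is done, so dU = Q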
Ideal gas

If an ideal gas is used in an isochoric process, and the quantity of gas stays constant, then the increase in energy is proportional to an increase in temperature and pressure. Take for example a gas heated in a rigid container: the pressure and temperature of the gas will increase, but the volume will remain the same.

5.3.2 Ideal Otto cycle

The ideal Otto cycle is an example of an isochoric process when it is assumed that the burning of the gasoline-air mixture in an internal combustion engine car is instantaneous. There is an increase in the temperature and the pressure of the gas inside the cylinder while the volume remains the same.

5.3.3 Etymology

The noun isochor and the adjective isochoric are derived from the Greek words ἴσος (isos) meaning "equal", and χῶρος (choros) meaning "space".

5.3.4 See also

Isobaric process
Adiabatic process
Cyclic process
Isothermal process
Polytropic process

5.3.5 References

[1] https://www.physicsforums.com/threads/if-gas-volume-remains-constant-it-can-does-work-to-others.765131/

5.3.6 External links

http://lorien.ncl.ac.uk/ming/webnotes/Therm1/revers/isocho.htm

5.4 Isothermal process

An isothermal process is a change of a system, in which the temperature remains constant: ΔT = 0. This typically occurs when a system is in contact with an outside thermal reservoir (heat bath), and the change occurs slowly enough to allow the system to continually adjust to the temperature of the reservoir through heat exchange. In contrast, an adiabatic process is where a system exchanges no heat with its surroundings (Q = 0). In other words, in an isothermal process, the value ΔT = 0 and therefore ΔU = 0 (only for an ideal gas) but Q ≠ 0, while in an adiabatic process, ΔT ≠ 0 but Q = 0.

5.4.1 Examples

Isothermal processes can occur in any kind of system that has some means of regulating the temperature, including highly structured machines, and even living cells. Some parts of the cycles of some heat engines are carried out isothermally (for example, in the Carnot cycle).[1] In the thermodynamic analysis of chemical reactions, it is usual to first analyze what happens under isothermal conditions and then consider the effect of temperature.[2] Phase changes, such as melting or evaporation, are also isothermal processes when, as is usually the case, they occur at constant pressure.[3] Isothermal processes are often used as a starting point in analyzing more complex, non-isothermal processes.

Isothermal processes are of special interest for ideal gases. This is a consequence of Joule's second law, which states that the internal energy of a fixed amount of an ideal gas depends only on its temperature.[4] Thus, in an isothermal process the internal energy of an ideal gas is constant. This is a result of the fact that in an ideal gas there are no intermolecular forces.[5] Note that this is true only for ideal gases; the internal energy depends on pressure as well as on temperature for liquids, solids, and real gases.[6]

In the isothermal compression of a gas there is work done on the system to decrease the volume and increase the pressure.[7] Doing work on the gas increases the internal energy and will tend to increase the temperature. To maintain the constant temperature, energy must leave the system as heat and enter the environment. If the gas is ideal, the amount of energy entering the environment is equal to the work done on the gas, because internal energy does not change. For details of the calculations, see calculation of work.

For an adiabatic process, in which no heat flows into or out of the gas because its container is well insulated, Q = 0. If there is also no work done, i.e. a free expansion, there is no change in internal energy. For an ideal gas, this means that the process is also isothermal.[7] Thus, specifying that a process is isothermal is not sufficient to specify a unique process.

5.4.2 Details for an ideal gas

[Figure: Several isotherms of an ideal gas on a p-V diagram.]

For the special case of a gas to which Boyle's law[7] applies, the product pV is a constant if the gas is kept at isothermal conditions. The value of the constant is nRT, where n is the number of moles of gas present and R is the ideal gas constant. In other words, the ideal gas law pV = nRT applies.[8] This means that

p = nRT/V = constant/V

holds. The family of curves generated by this equation is shown in the graph presented at the bottom right-hand of the page. Each curve is called an isotherm. Such graphs are termed indicator diagrams and were first used by James Watt and others to monitor the efficiency of engines. The temperature corresponding to each curve in the figure increases from the lower left to the upper right.

5.4.3 Calculation of work

In thermodynamics, the reversible work involved when a gas changes from state A to state B is[9]

W_{A→B} = −∫_{VA}^{VB} p dV

For an isothermal, reversible process, this integral equals the area under the relevant pressure-volume isotherm, and is indicated in purple in the figure (at the bottom right-hand of the page) for an ideal gas. Again, p = nRT/V applies and, with T being constant (as this is an isothermal process), the expression for work becomes:

W_{A→B} = −∫_{VA}^{VB} p dV = −∫_{VA}^{VB} (nRT/V) dV = −nRT ∫_{VA}^{VB} (1/V) dV = −nRT ln(VB/VA)

[Figure: The purple area represents work for this isothermal change.]

By convention, work is defined as the work on the system by its surroundings. If, for example, the system is compressed, then the work is positive and the internal energy of the system increases. Conversely, if the system expands, it does work on the surroundings and the internal energy of the system decreases.

It is also worth noting that for ideal gases, if the temperature is held constant, the internal energy of the system also is constant, and so ΔU = 0. Since the First Law of Thermodynamics states that ΔU = Q + W (IUPAC convention), it follows that Q = −W for the isothermal compression or expansion of ideal gases.

5.4.4 Entropy changes

Isothermal processes are especially convenient for calculating changes in entropy since, in this case, the formula for the entropy change, ΔS, is simply

ΔS = Qrev / T

where Qrev is the heat transferred reversibly to the system and T is absolute temperature.[10] This formula is valid only for a hypothetical reversible process; that is, a process in which equilibrium is maintained at all times.

A simple example is an equilibrium phase transition (such as melting or evaporation) taking place at constant temperature and pressure. For a phase transition at constant pressure, the heat transferred to the system is equal to the enthalpy of transformation, ΔHtr, thus Q = ΔHtr.[11] At any given pressure, there will be a transition temperature, Ttr, for which the two phases are in equilibrium (for example, the normal boiling point for vaporization of a liquid at one atmosphere pressure). If the transition takes place under such equilibrium conditions, the formula above may be used to directly calculate the entropy change[12]

ΔStr = ΔHtr / Ttr

Another example is the reversible isothermal expansion (or compression) of an ideal gas from an initial volume VA and pressure PA to a final volume VB and pressure PB. As shown in Calculation of work, the heat transferred to the gas is

Q = −W = nRT ln(VB/VA)

This result is for a reversible process, so it may be substituted in the formula for the entropy change to obtain[13]

ΔS = nR ln(VB/VA)

Since an ideal gas obeys Boyle's Law, this can be rewritten, if desired, as

ΔS = nR ln(PA/PB)

Once obtained, these formulas can be applied to an irreversible process, such as the free expansion of an ideal gas. Such an expansion is also isothermal and may have the same initial and final states as in the reversible expansion. Since entropy is a state function, the change in entropy of the system is the same as in the reversible process and is given by the formulas above. Note that the result Q = 0 for the free expansion can not be used in the formula for the entropy change since the process is not reversible.

The difference between the reversible and free expansions is found in the entropy of the surroundings. In both cases, the surroundings are at a constant temperature, T, so that ΔSsur = −Q/T; the minus sign is used since the heat transferred to the surroundings is equal in magnitude and opposite in sign to the heat, Q, transferred to the system. In the reversible case, the change in entropy of the surroundings is equal and opposite to the change in the system, so the change in entropy of the universe is zero. In the free expansion, Q = 0, so the entropy of the surroundings does not change and the change in entropy of the universe is equal to ΔS for the system.
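A small sketch applying the formulas above to a reversible isothermal expansion of an ideal gas; the quantities below are illustrative.

# Sketch: reversible isothermal expansion of an ideal gas, using
# W = -nRT ln(VB/VA) (work on the gas, IUPAC sign convention),
# Q = -W, and dS = nR ln(VB/VA).
import math

R = 8.314  # J/(mol K)

def isothermal_expansion(n, T, VA, VB):
    W_on_gas = -n * R * T * math.log(VB / VA)
    Q_into_gas = -W_on_gas                 # dU = 0 for an ideal gas
    dS_system = n * R * math.log(VB / VA)  # equals Q_into_gas / T
    return W_on_gas, Q_into_gas, dS_system

W, Q, dS = isothermal_expansion(n=1.0, T=300.0, VA=1e-3, VB=2e-3)
print(W, Q, dS, abs(dS - Q / 300.0) < 1e-9)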

5.4.5 See also

Joule–Thomson effect
Joule expansion (also called free expansion)
Adiabatic process
Cyclic process
Isobaric process
Isochoric process
Polytropic process

5.4.6 References

[1] Keenan, J.H. (1970). Thermodynamics, Chapter 12: Heat-engine cycles. M.I.T. Press, Cambridge, Massachusetts.
[2] Rock, P. A. (1983). Chemical Thermodynamics, Chapter 11: Thermodynamics of chemical reactions. University Science Books, Mill Valley, CA. ISBN 0-935702-12-1
[3] Petrucci, R. H., W.S. Harwood, F.G. Herring, J.D. Madura (2007). General Chemistry, Chapter 12. Pearson, Upper Saddle River, New Jersey. ISBN 0-13-149330-2
[4] Klotz, I.M. and R. M. Rosenberg (1991). Chemical Thermodynamics, Chapter 6, Application of the first law to gases. Benjamin, Menlo Park, California.
[5] Ibid.
[6] Adkins, C. J. (1983). Equilibrium Thermodynamics. Cambridge University Press.
[7] Klotz and Rosenberg, op. cit.
[8] Ibid.
[9] Atkins, Peter (1997). Physical Chemistry (6th ed.). Chapter 2, "The first law: the concepts". New York: W.H. Freeman and Co. ISBN 0-7167-2871-0.
[10] Atkins, Peter (1997). Physical Chemistry (6th ed.). Chapter 4, "The second law: the concepts". New York: W.H. Freeman and Co. ISBN 0-7167-2871-0.
[11] Petrucci, op. cit.
[12] Atkins, Chapter 4, op. cit.
[13] Ibid.

5.5 Adiabatic process

An adiabatic process is one that occurs without transfer of heat or matter between a thermodynamic system and its surroundings. In an adiabatic process, energy is transferred only as work.[1][2] The adiabatic process provides a rigorous conceptual basis for the theory used to expound the first law of thermodynamics, and as such it is a key concept in thermodynamics.

Some chemical and physical processes occur so rapidly that they may be conveniently described by the "adiabatic approximation", meaning that there is not enough time for the transfer of energy as heat to take place to or from the system.[3]

By way of example, the adiabatic flame temperature is an idealization that uses the adiabatic approximation so as to provide an upper limit calculation of temperatures produced by combustion of a fuel. The adiabatic flame temperature is the temperature that would be achieved by a flame if the process of combustion took place in the absence of heat loss to the surroundings.

5.5.1 Description

A process that does not involve the transfer of heat or matter into or out of a system, so that Q = 0, is called an adiabatic process, and such a system is said to be adiabatically isolated.[4][5] The assumption that a process is adiabatic is a frequently made simplifying assumption. For example, the compression of a gas within a cylinder of an engine is assumed to occur so rapidly that on the time scale of the compression process, little of the system's energy can be transferred out as heat. Even though the cylinders are not insulated and are quite conductive, that process is idealized to be adiabatic. The same can be said to be true for the expansion process of such a system.

The assumption of adiabatic isolation of a system is a useful one, and is often combined with others so as to make the calculation of the system's behaviour possible. Such assumptions are idealizations. The behaviour of actual machines deviates from these idealizations, but the assumption of such "perfect" behaviour provides a useful first approximation of how the real world works. According to Laplace, when sound travels in a gas, there is no loss of heat in the medium and the propagation of sound is adiabatic. For this adiabatic process, the modulus of elasticity E = γP, where γ is the ratio of specific heats at constant pressure and at constant volume (γ = Cp/Cv) and P is the pressure of the gas.
Various applications of the adiabatic assumption

For a closed system, one may write the first law of thermodynamics thus: ΔU = Q + W, where ΔU denotes the change of the system's internal energy, Q the quantity of energy added to it as heat, and W the work done on it by its surroundings.

If the system has rigid walls such that work cannot be transferred in or out (W = 0), and the walls of the system are not adiabatic and energy is added in the form of heat (Q > 0), and there is no phase change, the temperature of the system will rise.

If the system has rigid walls such that pressure-volume work cannot be done, and the system walls are adiabatic (Q = 0), but energy is added as isochoric work in the form of friction or the stirring of a viscous fluid within the system (W > 0), and there is no phase change, the temperature of the system will rise.

If the system walls are adiabatic (Q = 0), but not rigid (W ≠ 0), and, in a fictive idealized process, energy is added to the system in the form of frictionless, non-viscous pressure-volume work, and there is no phase change, the temperature of the system will rise. Such a process is called an isentropic process and is said to be "reversible". Fictively, if the process is reversed, the energy added as work can be recovered entirely as work done by the system. If the system contains a compressible gas and is reduced in volume, the uncertainty of the position of the gas is reduced, and seemingly would reduce the entropy of the system, but the temperature of the system will rise as the process is isentropic (ΔS = 0). Should the work be added in such a way that friction or viscous forces are operating within the system, then the process is not isentropic, and if there is no phase change, then the temperature of the system will rise, the process is said to be "irreversible", and the work added to the system is not entirely recoverable in the form of work.

If the walls of a system are not adiabatic, and energy is transferred in as heat, entropy is transferred into the system with the heat. Such a process is neither adiabatic nor isentropic, having Q > 0, and ΔS > 0 according to the second law of thermodynamics.

Naturally occurring adiabatic processes are irreversible (entropy is produced).

The transfer of energy as work into an adiabatically isolated system can be imagined as being of two idealized extreme kinds. In one such kind, there is no entropy produced within the system (no friction, viscous dissipation, etc.), and the work is only pressure-volume work (denoted by P dV). In nature, this ideal kind occurs only approximately, because it demands an infinitely slow process and no sources of dissipation.

The other extreme kind of work is isochoric work (dV = 0), for which energy is added as work solely through friction or viscous dissipation within the system. A stirrer that transfers energy to a viscous fluid of an adiabatically isolated system with rigid walls, without phase change, will cause a rise in temperature of the fluid, but that work is not recoverable. Isochoric work is irreversible.[6] The second law of thermodynamics observes that a natural process, of transfer of energy as work, always consists at least of isochoric work and often both of these extreme kinds of work. Every natural process, adiabatic or not, is irreversible, with ΔS > 0, as friction or viscosity are always present to some extent.

5.5.2 Adiabatic heating and cooling

The adiabatic compression of a gas causes a rise in temperature of the gas. Adiabatic expansion against pressure, or a spring, causes a drop in temperature. In contrast, free expansion is an isothermal process for an ideal gas.

Adiabatic heating occurs when the pressure of a gas is increased from work done on it by its surroundings, e.g., a piston compressing a gas contained within an adiabatic cylinder. This finds practical application in diesel engines, which rely on the lack of quick heat dissipation during their compression stroke to elevate the fuel vapor temperature sufficiently to ignite it.

Adiabatic heating occurs in the Earth's atmosphere when an air mass descends, for example, in a katabatic wind, Foehn wind, or chinook wind flowing downhill over a mountain range. When a parcel of air descends, the pressure on the parcel increases. Due to this increase in pressure, the parcel's volume decreases and its temperature increases as work is done on the parcel of air, thus increasing its internal energy, which manifests itself by a rise in the temperature of that mass of air. The parcel of air can only slowly dissipate the energy by conduction or radiation (heat), and to a first approximation it can be considered adiabatically isolated and the process an adiabatic process.

Adiabatic cooling occurs when the pressure on an adiabatically isolated system is decreased, allowing it to expand, thus causing it to do work on its surroundings. When the pressure applied on a parcel of air is reduced, the air in the parcel is allowed to expand; as the volume increases, the

temperature falls as its internal energy decreases. Adiabatic cooling occurs in the Earth's atmosphere with orographic lifting and lee waves, and this can form pileus or lenticular clouds.

Adiabatic cooling does not have to involve a fluid. One technique used to reach very low temperatures (thousandths and even millionths of a degree above absolute zero) is via adiabatic demagnetisation, where the change in magnetic field on a magnetic material is used to provide adiabatic cooling. Also, the contents of an expanding universe can be described (to first order) as an adiabatically cooling fluid. (See heat death of the universe.)

Rising magma also undergoes adiabatic cooling before eruption, particularly significant in the case of magmas that rise quickly from great depths such as kimberlites.[7]

Such temperature changes can be quantified using the ideal gas law, or the hydrostatic equation for atmospheric processes.

In practice, no process is truly adiabatic. Many processes rely on a large difference in time scales of the process of interest and the rate of heat dissipation across a system boundary, and thus are approximated by using an adiabatic assumption. There is always some heat loss, as no perfect insulators exist.

5.5.3 Ideal gas (reversible process)

Main article: Reversible adiabatic process

[Figure: For a simple substance, during an adiabatic process in which the volume increases, the internal energy of the working substance must decrease.]

The mathematical equation for an ideal gas undergoing a reversible (i.e., no entropy generation) adiabatic process can be represented by the polytropic process equation

P V^γ = constant [3]

where P is pressure, V is volume, and for this case n = γ, where

γ = CP/CV = (f + 2)/f,

CP being the specific heat for constant pressure, CV being the specific heat for constant volume, γ the adiabatic index, and f the number of degrees of freedom (3 for monatomic gas, 5 for diatomic gas and collinear molecules, e.g. carbon dioxide).

For a monatomic ideal gas, γ = 5/3, and for a diatomic gas (such as nitrogen and oxygen, the main components of air) γ = 7/5.[8] Note that the above formula is only applicable to classical ideal gases and not Bose–Einstein or Fermi gases.

For reversible adiabatic processes, it is also true that

P^{1−γ} T^{γ} = constant [3]
V T^{f/2} = constant

where T is an absolute temperature. This can also be written as

T V^{γ−1} = constant [3]

Example of adiabatic compression

The compression stroke in a gasoline engine can be used as an example of adiabatic compression. The simplifying assumptions are: the uncompressed volume of the cylinder is 1000 cm3 (one litre); the gas within is nearly pure nitrogen (thus a diatomic gas with five degrees of freedom and so γ = 7/5); the compression ratio of the engine is 10:1 (that is, the 1000 cm3 volume of uncompressed gas will be reduced to 100 cm3 by the piston); and that the uncompressed gas is at approximately room temperature and pressure (a warm room temperature of ~27 °C or 300 K, and a pressure of 1 bar ≈ 100 kPa, or about 14.7 PSI, i.e. typical sea-level atmospheric pressure).

P V^γ = constant = 100,000 Pa × (1000)^{7/5} = 100 × 10^3 × 15.8 × 10^3 = 1.58 × 10^9

so our adiabatic constant for this example is about 1.58 billion.

The gas is now compressed to a 100 cm3 volume (we will assume this happens quickly enough that no heat can enter or leave the gas). The new volume is 100 cm3, but the constant for this experiment is still 1.58 billion:

P V^γ = constant = 1.58 × 10^9 = P × (100)^{7/5}

so solving for P:

P = 1.58 × 10^9 / (100)^{7/5} = 1.58 × 10^9 / 630.9 = 2.50 × 10^6 Pa

or about 362 PSI or 24.5 atm. Note that this pressure increase is more than a simple 10:1 compression ratio would indicate; this is because the gas is not only compressed, but the work done to compress the gas also increases its internal energy, which manifests itself by a rise in the gas's temperature and an additional rise in pressure above what would result from a simplistic calculation of 10 times the original pressure.

We can solve for the temperature of the compressed gas in the engine cylinder as well, using the ideal gas law, PV = RT (R being the specific gas constant for that gas). Our initial conditions are 100,000 Pa of pressure, 1000 cm3 volume, and 300 K of temperature, so our experimental constant is:

P V / T = constant = (10^5 × 10^3) / 300 = 3.33 × 10^5

We know the compressed gas has V = 100 cm3 and P = 2.50 × 10^6 Pa, so we can solve for temperature by simple algebra:

P V / constant = T = (2.50 × 10^6 × 100) / (3.33 × 10^5) = 751

That is a final temperature of 751 K, or 477 °C, or 892 °F, well above the ignition point of many fuels. This is why a high-compression engine requires fuels specially formulated to not self-ignite (which would cause engine knocking when operated under these conditions of temperature and pressure), or that a supercharger with an intercooler to provide a pressure boost but with a lower temperature rise would be advantageous. A diesel engine operates under even more extreme conditions, with compression ratios of 20:1 or more being typical, in order to provide a very high gas temperature which ensures immediate ignition of the injected fuel.
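The numbers in this example can be checked with a few lines of arithmetic; the sketch below simply re-applies P V^γ = constant and the ideal gas law, with the values as assumed in the example.

# Sketch reproducing the worked example above: adiabatic compression of
# a diatomic ideal gas (gamma = 7/5) from 1000 cm^3 at 100 kPa and 300 K
# down to 100 cm^3, using P V^gamma = constant and P V / T = constant.
gamma = 7.0 / 5.0
P1, V1, T1 = 100e3, 1000.0, 300.0   # Pa, cm^3, K
V2 = 100.0                           # cm^3 after the 10:1 compression

P2 = P1 * (V1 / V2) ** gamma         # ~2.5e6 Pa, as in the text
T2 = T1 * (P2 * V2) / (P1 * V1)      # ideal gas law, ~750 K
print(P2, T2)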
Adiabatic free expansion of a gas

See also: Free expansion

For an adiabatic free expansion of an ideal gas, the gas is contained in an insulated container and then allowed to expand in a vacuum. Because there is no external pressure for the gas to expand against, the work done by or on the system is zero. Since this process does not involve any heat transfer or work, the First Law of Thermodynamics then implies that the net internal energy change of the system is zero. For an ideal gas, the temperature remains constant because the internal energy only depends on temperature in that case. Since at constant temperature, the entropy is proportional to the volume, the entropy increases in this case; therefore this process is irreversible.

Derivation of P-V relation for adiabatic heating and cooling

The definition of an adiabatic process is that heat transfer to the system is zero, δQ = 0. Then, according to the first law of thermodynamics,

(1)    dU + δW = δQ = 0,

where dU is the change in the internal energy of the system and δW is work done by the system. Any work (δW) done must be done at the expense of internal energy U, since no heat δQ is being supplied from the surroundings. Pressure-volume work δW done by the system is defined as

(2)    δW = P dV.

However, P does not remain constant during an adiabatic process but instead changes along with V.

It is desired to know how the values of dP and dV relate to each other as the adiabatic process proceeds. For an ideal gas the internal energy is given by

(3)    U = α n R T,

where α is the number of degrees of freedom divided by two, R is the universal gas constant and n is the number of moles in the system (a constant).

Differentiating equation (3) and use of the ideal gas law, P V = n R T, yields

(4)    dU = α n R dT = α d(P V) = α (P dV + V dP).

Equation (4) is often expressed as dU = n CV dT because CV = α R.

Now substitute equations (2) and (4) into equation (1) to obtain

−P dV = α P dV + α V dP,

factorize −P dV:

−(α + 1) P dV = α V dP,
and divide both sides by P V:

−(α + 1) dV/V = α dP/P.

After integrating the left and right sides from V0 to V and from P0 to P and changing the sides respectively,

ln(P/P0) = −((α + 1)/α) ln(V/V0).

Exponentiate both sides, and substitute (α + 1)/α with γ, the heat capacity ratio,

(P/P0) = (V/V0)^{−γ},

and eliminate the negative sign to obtain

(P/P0) = (V0/V)^{γ}.

Therefore,

(P/P0) (V/V0)^{γ} = 1

and

P0 V0^{γ} = P V^{γ} = constant.

Derivation of P-T relation for adiabatic heating and cooling

Substituting the ideal gas law into the above, we obtain

P (nRT/P)^{γ} = constant,

which simplifies to

P^{1−γ} T^{γ} = constant.

Derivation of discrete formula

The change in internal energy of a system, measured from state 1 to state 2, is equal to

(1)    ΔU = α R n T2 − α R n T1 = α R n ΔT

At the same time, the work done by the pressure-volume changes as a result of this process is equal to

(2)    W = ∫_{V1}^{V2} P dV

Since we require the process to be adiabatic, the following equation needs to be true

(3)    ΔU + W = 0

By the previous derivation,

(4)    P V^{γ} = constant = P1 V1^{γ}

Rearranging (4) gives

P = P1 (V1/V)^{γ}

Substituting this into (2) gives

W = ∫_{V1}^{V2} P1 (V1/V)^{γ} dV

Integrating,

W = P1 V1^{γ} (V2^{1−γ} − V1^{1−γ}) / (1 − γ)

Substituting γ = (α + 1)/α,

W = −α P1 V1^{γ} (V2^{1−γ} − V1^{1−γ})

Rearranging,

W = −α P1 V1 ((V2/V1)^{1−γ} − 1)

Using the ideal gas law and assuming a constant molar quantity (as often happens in practical cases),
W = −α n R T1 ((V2/V1)^{1−γ} − 1)

By the continuous formula,

P2/P1 = (V2/V1)^{−γ}

or

(P2/P1)^{−1/γ} = V2/V1

Substituting into the previous expression for W,

W = −α n R T1 ((P2/P1)^{(γ−1)/γ} − 1)

Substituting this expression and (1) in (3) gives

α n R (T2 − T1) = α n R T1 ((P2/P1)^{(γ−1)/γ} − 1)

Simplifying,

T2 − T1 = T1 ((P2/P1)^{(γ−1)/γ} − 1)

T2/T1 − 1 = (P2/P1)^{(γ−1)/γ} − 1

T2 = T1 (P2/P1)^{(γ−1)/γ}
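As a cross-check (illustrative, not from the text), the discrete formula just obtained and the earlier relation T V^{γ−1} = constant give the same final temperature for the engine example used above.

# Sketch: the discrete relation T2 = T1 (P2/P1)^((gamma-1)/gamma),
# applied to the earlier example (gamma = 7/5, 10:1 compression,
# T1 = 300 K), compared with T2 = T1 (V1/V2)^(gamma-1).
gamma = 7.0 / 5.0
T1 = 300.0
V1_over_V2 = 10.0

P2_over_P1 = V1_over_V2 ** gamma                      # from P V^gamma = const
T2_from_P = T1 * P2_over_P1 ** ((gamma - 1.0) / gamma)
T2_from_V = T1 * V1_over_V2 ** (gamma - 1.0)          # from T V^(gamma-1) = const
print(T2_from_P, T2_from_V)   # both ~754 K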

5.5.4 Graphing adiabats

An adiabat is a curve of constant entropy on the P-V diagram. Some properties of adiabats on a P-V diagram are indicated below. These properties may be read from the classical behaviour of ideal gases, except in the region where PV becomes small (low temperature), where quantum effects become important.

1. Every adiabat asymptotically approaches both the V axis and the P axis (just like isotherms).

2. Each adiabat intersects each isotherm exactly once.

3. An adiabat looks similar to an isotherm, except that during an expansion, an adiabat loses more pressure than an isotherm, so it has a steeper inclination (more vertical).

4. If isotherms are concave towards the north-east direction (45°), then adiabats are concave towards the east north-east (31°).

5. If adiabats and isotherms are graphed at regular intervals of entropy and temperature, respectively (like altitude on a contour map), then as the eye moves towards the axes (towards the south-west), it sees the density of isotherms stay constant, but it sees the density of adiabats grow. The exception is very near absolute zero, where the density of adiabats drops sharply and they become rare (see Nernst's theorem).

The following diagram is a P-V diagram with a superposition of adiabats and isotherms:

[Figure: A P-V diagram with a superposition of adiabats and isotherms. The isotherms are the red curves and the adiabats are the black curves. The adiabats are isentropic. Volume is the horizontal axis and pressure is the vertical axis.]

5.5.5 Etymology

The term adiabatic (/ˌædiəˈbætɪk/) literally means 'not to be passed through'. It is formed from the ancient Greek privative ἀ- ("not") + διαβατός ("able to be passed through"), in turn deriving from διά ("through") and βαίνειν ("to walk, go, come"), thus ἀδιάβατος.[9] According to Maxwell,[10] and to Partington,[11] the term was introduced by Rankine.[12]

The etymological origin corresponds here to an impossibility of transfer of energy as heat and of transfer of matter across the wall.

5.5.6 Conceptual significance in thermodynamic theory

The adiabatic process has been important for thermodynamics since its early days. It was important in the work of Joule, because it provided a way of nearly directly relating quantities of heat and work.

For a thermodynamic system that is enclosed by walls that do not pass matter, energy can pass in and out only as heat or work. Thus a quantity of work can be related almost directly to an equivalent quantity of heat in a cycle of two limbs. The first is an isochoric adiabatic work process that adds to the system's internal energy. Then an isochoric and workless heat transfer returns the system to its original state. The first limb adds a definite amount of energy and the second removes it. Accordingly, Rankine measured quantity of heat in units of work, rather than as a calorimetric quantity.[13] In 1854, Rankine used a quantity that he called 'the thermodynamic function' that later was called entropy, and at that time he wrote also of 'the curve of no transmission of heat',[14] which he later called an adiabatic curve.[12] Besides its two isothermal limbs, Carnot's cycle has two adiabatic limbs.

For the foundations of thermodynamics, the conceptual importance of this was emphasized by Bryan,[15] by Carathéodory,[1] and by Born.[16] The reason is that calorimetry presupposes temperature as already defined before the statement of the first law of thermodynamics. But it is better not to make such a presupposition. Rather, the definition of absolute thermodynamic temperature is best left till the second law is available as a conceptual basis.[17]

In the eighteenth century, the law of conservation of energy was yet to be fully formulated or established, and the nature of heat was debated. One approach to these problems was to regard heat, measured by calorimetry, as a primary substance that is conserved in quantity. By the middle of the nineteenth century, it was recognized as a form of energy, and the law of conservation of energy was thereby also recognized. The view that eventually established itself, and is currently regarded as right, is that the law of conservation of energy is a primary axiom, and that heat is to be analyzed as consequential. In this light, heat cannot be a component of the total energy of a single body because it is not a state variable, but, rather, is a variable that describes a process of transfer between two bodies. The adiabatic process is important because it is a logical ingredient of this current view.[17]

5.5.7 Divergent usages of the word adiabatic

This present article is written from the viewpoint of macroscopic thermodynamics, and the word adiabatic is used in this article in the traditional way of thermodynamics, introduced by Rankine. It is pointed out in the present article that, for example, if a compression of a gas is rapid, then there is little time for heat transfer to occur, even when the gas is not adiabatically isolated by a definite wall. In this sense, a rapid compression of a gas is sometimes approximately or loosely said to be adiabatic, though often far from isentropic, even when the gas is not adiabatically isolated by a definite wall.

Quantum mechanics and quantum statistical mechanics, however, use the word adiabatic in a very different sense, one that can at times seem almost opposite to the classical thermodynamic sense. In quantum theory, the word adiabatic can mean something perhaps near isentropic, or perhaps near quasi-static, but the usage of the word is very different between the two disciplines.

On one hand in quantum theory, if a perturbative element of compressive work is done almost infinitely slowly (that is to say quasi-statically), it is said to have been done adiabatically. The idea is that the shapes of the eigenfunctions change slowly and continuously, so that no quantum jump is triggered, and the change is virtually reversible. While the occupation numbers are unchanged, nevertheless there is change in the energy levels of one-to-one corresponding, pre- and post-compression, eigenstates. Thus a perturbative element of work has been done without heat transfer and without introduction of random change within the system. For example, Max Born writes "Actually, it is usually the 'adiabatic' case with which we have to do: i.e. the limiting case where the external force (or the reaction of the parts of the system on each other) acts very slowly. In this case,

to a very high approximation

c1^2 = 1, c2^2 = 0, c3^2 = 0, ...,

that is, there is no probability for a transition, and the system is in the initial state after cessation of the perturbation. Such a slow perturbation is therefore reversible, as it is classically."[18]

On the other hand, in quantum theory, if a perturbative element of compressive work is done rapidly, it randomly changes the occupation numbers of the eigenstates, as well as changing their shapes. In that theory, such a rapid change is said not to be adiabatic, and the contrary word diabatic is applied to it. One might guess that perhaps Clausius, if he were confronted with this, in the now-obsolete language he used in his day, would have said that 'internal work' was done and that 'heat was generated though not transferred'.

In classical thermodynamics, such a rapid change would still be called adiabatic because the system is adiabatically isolated, and there is no transfer of energy as heat. The strong irreversibility of the change, due to viscosity or other entropy production, does not impinge on this classical usage.

Thus for a mass of gas, in macroscopic thermodynamics, words are so used that a compression is sometimes loosely or approximately said to be adiabatic if it is rapid enough to avoid heat transfer, even if the system is not adiabatically isolated. But in quantum statistical theory, a compression is not called adiabatic if it is rapid, even if the system is adiabatically isolated in the classical thermodynamic sense of the term. The words are used differently in the two disciplines, as stated just above.

5.5.8 See also

Cyclic process
First law of thermodynamics
Heat burst
Isobaric process
Isenthalpic process
Isentropic process
Isochoric process
Isothermal process
Polytropic process
Entropy (classical thermodynamics)
Quasistatic process
Total air temperature
Magnetic refrigeration

5.5.9 References

[1] Carathéodory, C. (1909). "Untersuchungen über die Grundlagen der Thermodynamik", Mathematische Annalen, 67: 355–386, doi:10.1007/BF01450409. A translation may be found here. Also a mostly reliable translation is to be found at Kestin, J. (1976). The Second Law of Thermodynamics, Dowden, Hutchinson & Ross, Stroudsburg PA.
[2] Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York, ISBN 0-88318-797-3, p. 21.
[3] Bailyn, M. (1994), pp. 52–53.
[4] Tisza, L. (1966). Generalized Thermodynamics, M.I.T. Press, Cambridge MA: "(adiabatic partitions inhibit the transfer of heat and mass)", p. 48.
[5] Münster, A. (1970), p. 48: "mass is an adiabatically inhibited variable."
[6] Münster, A. (1970), Classical Thermodynamics, translated by E.S. Halberstadt, Wiley-Interscience, London, ISBN 0-471-62430-6, p. 45.
[7] Kavanagh, J.L.; Sparks, R.S.J. (2009). "Temperature changes in ascending kimberlite magmas". Earth and Planetary Science Letters (Elsevier) 286 (3-4): 404–413. Bibcode:2009E&PSL.286..404K. doi:10.1016/j.epsl.2009.07.011. Retrieved 18 February 2012.
[8] Adiabatic Processes
[9] Liddell, H.G., Scott, R. (1940). A Greek-English Lexicon, Clarendon Press, Oxford UK.
[10] Maxwell, J.C. (1871), Theory of Heat (first ed.), London: Longmans, Green and Co., p. 129
[11] Partington, J.R. (1949), An Advanced Treatise on Physical Chemistry, volume 1, Fundamental Principles. The Properties of Gases, London: Longmans, Green and Co., p. 122.
[12] Rankine, W.J.McQ. (1866). "On the theory of explosive gas engines", The Engineer, July 27, 1866; at page 467 of the reprint in Miscellaneous Scientific Papers, edited by W.J. Millar, 1881, Charles Griffin, London.
[13] Rankine, W.J.M. (1854). "On the geometrical representation of the expansive action of heat, and theory of thermodynamic engines", Proc. Roy. Soc., 144: 115–175, Miscellaneous Scientific Papers p. 339

[14] Rankine, W.J.M. (1854). "On the geometrical representation of the expansive action of heat, and theory of thermodynamic engines", Proc. Roy. Soc., 144: 115–175, Miscellaneous Scientific Papers p. 341.
[15] Bryan, G.H. (1907). Thermodynamics. An Introductory Treatise dealing mainly with First Principles and their Direct Applications, B.G. Teubner, Leipzig.
[16] Born, M. (1949). Natural Philosophy of Cause and Chance, Oxford University Press, London.
[17] Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics, New York, ISBN 0-88318-797-3, Chapter 3.
[18] Born, M. (1927). "Physical aspects of quantum mechanics", Nature, 119: 354–357. (Translation by Robert Oppenheimer.)

Silbey, Robert J.; et al. (2004). Physical Chemistry. Hoboken: Wiley. p. 55. ISBN 978-0-471-21504-2.
Broholm, Collin. "Adiabatic free expansion". Physics & Astronomy @ Johns Hopkins University. N.p., 26 Nov. 1997. Web. 14 Apr.
Nave, Carl Rod. "Adiabatic Processes". HyperPhysics. N.p., n.d. Web. 14 Apr. 2011.
Thorngren, Dr. Jane R. "Adiabatic Processes". Daphne, A Palomar College Web Server. N.p., 21 July 1995. Web. 14 Apr. 2011.

5.5.10 External links

Article in HyperPhysics Encyclopaedia

5.6 Isenthalpic process

An isenthalpic process or isoenthalpic process is a process that proceeds without any change in enthalpy, H, or specific enthalpy, h.[1]

In a steady-state, steady-flow process, significant changes in pressure and temperature can occur to the fluid and yet the process will be isenthalpic if there is no transfer of heat to or from the surroundings, no work done on or by the surroundings, and no change in the kinetic energy of the fluid.[2] (If a steady-state, steady-flow process is analysed using a control volume, everything outside the control volume is considered to be the surroundings.[3])

The throttling process is a good example of an isenthalpic process. Consider the lifting of a relief valve or safety valve on a pressure vessel. The specific enthalpy of the fluid inside the pressure vessel is the same as the specific enthalpy of the fluid as it escapes from the valve.[2] With a knowledge of the specific enthalpy of the fluid, and the pressure outside the pressure vessel, it is possible to determine the temperature and speed of the escaping fluid.

In an isenthalpic process:

h1 = h2
dh = 0

Isenthalpic processes on an ideal gas follow isotherms, since dh = 0 = cp dT.

[18] Born, M. (1927). Physical aspects of quantum mechanics, Nature, 119: 354357. (Translation by Robert Oppenheimer.)

5.6.1

Silbey, Robert J.; et al. (2004). Physical chemistry.


Hoboken: Wiley. p. 55. ISBN 978-0-471-21504-2.
Broholm, Collin. Adiabatic free expansion. Physics
& Astronomy @ Johns Hopkins University. N.p., 26
Nov. 1997. Web. 14 Apr. *Nave, Carl Rod. Adiabatic Processes. HyperPhysics. N.p., n.d. Web. 14
Apr. 2011. .

See also

Isentropic process
Adiabatic process

5.6.2

References

G.J. Van Wylen and R.E. Sonntag (1985), Fundamentals of


Classical Thermodynamics, John Wiley & Sons, Inc., New
York ISBN 0-471-82933-1

Thorngren, Dr. Jane R.. Adiabatic Processes.


Daphne A Palomar College Web Server. N.p., 21 Notes
July 1995. Web. 14 Apr. 2011. .
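To make the ideal-gas statement above concrete, here is a minimal Python sketch (not taken from the article; the function names and the value of cp for air are illustrative assumptions). Because the specific enthalpy of an ideal gas depends on temperature alone, enforcing h1 = h2 across a throttle simply returns the inlet temperature.

def ideal_gas_enthalpy(T, cp=1005.0):
    """Specific enthalpy of an ideal gas in J/kg, taking h = cp*T (cp in J/(kg K), air assumed)."""
    return cp * T

def throttle_exit_temperature(T_in, cp=1005.0):
    """Solve h(T_out) = h(T_in) for an ideal gas; trivially T_out = T_in."""
    h_in = ideal_gas_enthalpy(T_in, cp)
    return h_in / cp

T1 = 300.0                         # K, fluid upstream of the valve (assumed value)
T2 = throttle_exit_temperature(T1)
print(ideal_gas_enthalpy(T1), T2)  # 301500.0 J/kg and 300.0 K: dh = 0 gives dT = 0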

5.6.1 See also

Isentropic process
Adiabatic process

5.6.2 References

G.J. Van Wylen and R.E. Sonntag (1985), Fundamentals of Classical Thermodynamics, John Wiley & Sons, Inc., New York, ISBN 0-471-82933-1

Notes

[1] Atkins, Peter; Julio de Paula (2006). Atkins' Physical Chemistry. Oxford: Oxford University Press. p. 64. ISBN 978-0-19-870072-2.
[2] G.J. Van Wylen and R.E. Sonntag, Fundamentals of Classical Thermodynamics, Section 5.13 (3rd edition)
[3] G.J. Van Wylen and R.E. Sonntag, Fundamentals of Classical Thermodynamics, Section 2.1 (3rd edition)

5.7 Isentropic process

In thermodynamics, an isentropic process is an idealized thermodynamic process that is adiabatic and in which the work transfers of the system are frictionless; there is no transfer of heat or of matter and the process is reversible.[1][2][3][4][5][6] Such an idealized process is useful in engineering as a model of and basis of comparison for real processes.[7]
The word 'isentropic' is occasionally, though not customarily, interpreted in another way, reading it as if its meaning were deducible from its etymology. This is contrary to its original and customarily used definition. In this occasional reading, it means a process in which the entropy of the system remains unchanged, for example because work done on the system includes friction internal to the system, and heat is withdrawn from the system, in just the right amount to compensate for the internal friction, so as to leave the entropy unchanged.[8]

5.7.1 Background

The second law of thermodynamics states that

T dS ≥ δQ

where δQ is the amount of energy the system gains by heating, T is the temperature of the system, and dS is the change in entropy. The equal sign refers to a reversible process, which is an imagined idealized theoretical limit, never actually occurring in physical reality.[9][10] For an isentropic process, which by definition is reversible, there is no transfer of energy as heat because the process is adiabatic. In an irreversible process of transfer of energy as work, entropy is produced within the system; consequently, in order to maintain constant entropy within the system, energy must be removed from the system as heat during the process.
For reversible processes, an isentropic transformation is carried out by thermally insulating the system from its surroundings. Temperature is the thermodynamic conjugate variable to entropy, thus the conjugate process would be an isothermal process in which the system is thermally connected to a constant-temperature heat bath.

T-s (Entropy vs. Temperature) diagram of an isentropic process, which is a vertical line segment.

5.7.2 Isentropic processes in thermodynamic systems

The entropy of a given mass does not change during a process that is internally reversible and adiabatic. A process during which the entropy remains constant is called an isentropic process, written Δs = 0 or s1 = s2.[11] Some isentropic thermodynamic devices include: pumps, gas compressors, turbines, nozzles, and diffusers.

Isentropic efficiencies of steady-flow devices in thermodynamic systems

Most steady-flow devices operate under adiabatic conditions, and the ideal process for these devices is the isentropic process. The parameter that describes how efficiently a device approximates a corresponding isentropic device is called isentropic or adiabatic efficiency.[12]

Isentropic efficiency of turbines:

ηT = (actual turbine work)/(isentropic turbine work) = Wa/Ws ≈ (h1 − h2a)/(h1 − h2s)

Isentropic efficiency of compressors:

ηC = (isentropic compressor work)/(actual compressor work) = Ws/Wa ≈ (h2s − h1)/(h2a − h1)

Isentropic efficiency of nozzles:

ηN = (actual KE at nozzle exit)/(isentropic KE at nozzle exit) = V2a²/V2s² ≈ (h1 − h2a)/(h1 − h2s)

For all the above equations:

h1 is the enthalpy at the entrance state
h2a is the enthalpy at the exit state for the actual process
h2s is the enthalpy at the exit state for the isentropic process

Isentropic devices in thermodynamic cycles

Ideal Rankine Cycle 1->2 Isentropic compression in a pump
Ideal Rankine Cycle 3->4 Isentropic expansion in a turbine
Ideal Carnot Cycle 2->3 Isentropic expansion
Ideal Carnot Cycle 4->1 Isentropic compression


Ideal Otto Cycle 1->2 Isentropic compression


Ideal Otto Cycle 3->4 Isentropic expansion
Ideal Diesel Cycle 1->2 Isentropic compression
Ideal Diesel Cycle 3->4 Isentropic expansion
Ideal Brayton Cycle 1->2 Isentropic compression in a
compressor
Ideal Brayton Cycle 3->4 Isentropic expansion in a turbine
Ideal Vapor-compression refrigeration Cycle 1->2 Isentropic compression in a compressor
NOTE: The isentropic assumptions are only applicable with
ideal cycles. Real world cycles have inherent losses due to
inecient compressors and turbines. The real world system
are not truly isentropic but are rather idealized as isentropic
for calculation purposes.
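As a numerical illustration of the isentropic efficiency definitions given above, the following minimal Python sketch (the enthalpy values are assumed for illustration, not taken from the text) evaluates ηT = (h1 − h2a)/(h1 − h2s) for a turbine.

def turbine_isentropic_efficiency(h1, h2a, h2s):
    """Actual enthalpy drop divided by the isentropic enthalpy drop across a turbine."""
    return (h1 - h2a) / (h1 - h2s)

# Illustrative (assumed) enthalpies in kJ/kg for inlet, actual exit and isentropic exit:
h1, h2a, h2s = 3230.9, 2500.0, 2335.7
print(round(turbine_isentropic_efficiency(h1, h2a, h2s), 3))   # prints 0.816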

5.7.3 Isentropic flow

In fluid dynamics, an isentropic flow is a fluid flow that is both adiabatic and reversible. That is, no heat is added to the flow, and no energy transformations occur due to friction or dissipative effects. For an isentropic flow of a perfect gas, several relations can be derived to define the pressure, density and temperature along a streamline.
Note that energy can be exchanged with the flow in an isentropic transformation, as long as it doesn't happen as heat exchange. An example of such an exchange would be an isentropic expansion or compression that entails work done on or by the flow.
For an isentropic flow, entropy density can vary between different streamlines. If the entropy density is the same everywhere, then the flow is said to be homentropic.

Derivation of the isentropic relations

For a closed system, the total change in energy of a system is the sum of the work done and the heat added:

dU = δW + δQ

The reversible work done on a system by changing the volume is

δW = −p dV

where p is the pressure and V is the volume. The change in enthalpy (H = U + pV) is given by

dH = dU + p dV + V dp

Then for a process that is both reversible and adiabatic (i.e. no heat transfer occurs), δQrev = 0, and so dS = δQrev/T = 0. All reversible adiabatic processes are isentropic. This leads to two important observations:

dU = δW + δQ = −p dV + 0
dH = δW + δQ + p dV + V dp = −p dV + 0 + p dV + V dp = V dp

Next, a great deal can be computed for isentropic processes of an ideal gas. For any transformation of an ideal gas, it is always true that

dU = n Cv dT, and dH = n Cp dT.

Using the general results derived above for dU and dH, then

dU = n Cv dT = −p dV
dH = n Cp dT = V dp

So for an ideal gas, the heat capacity ratio can be written as

γ = Cp/CV = −(dp/p)/(dV/V)

For an ideal gas γ is constant. Hence on integrating the above equation, assuming a perfect gas, we get

p V^γ = constant

i.e.

p2/p1 = (V1/V2)^γ

Using the equation of state for an ideal gas, p V = n R T,

T V^(γ−1) = constant

and

p^(1−γ) T^γ = constant

also, for constant Cp = Cv + R (per mole),

V/T = n R/p  and  p = n R T/V

S2 − S1 = n Cp ln(T2/T1) − n R ln(p2/p1)

and

(S2 − S1)/n = Cp ln(T2/T1) − R ln(T2 V1/(T1 V2)) = Cv ln(T2/T1) + R ln(V2/V1)

Thus for isentropic processes with an ideal gas,

T2 = T1 (V1/V2)^(R/Cv)  or  V2 = V1 (T1/T2)^(Cv/R)
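The isentropic relations derived above are easy to check numerically. The sketch below is a minimal Python example with assumed values (γ = 1.4 for air and an arbitrary initial state); it applies p V^γ = constant and T V^(γ−1) = constant to an eightfold compression.

gamma = 1.4                          # ratio of specific heats Cp/Cv for air (assumed)
p1, V1, T1 = 100e3, 1.0e-3, 300.0    # Pa, m^3, K (illustrative initial state)
V2 = V1 / 8.0                        # compress to one eighth of the volume

p2 = p1 * (V1 / V2) ** gamma         # from p1*V1**gamma = p2*V2**gamma
T2 = T1 * (V1 / V2) ** (gamma - 1)   # from T1*V1**(gamma-1) = T2*V2**(gamma-1)

print(f"p2 = {p2/1e3:.0f} kPa, T2 = {T2:.0f} K")   # roughly 1838 kPa and 689 K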

Table of isentropic relations for an ideal gas

Derived from:

p V^γ = constant
p V = m Rs T
p = ρ Rs T

Where:

p = pressure
V = volume
γ = ratio of specific heats = Cp/Cv
T = temperature
m = mass
Rs = gas constant for the specific gas = R/M
R = universal gas constant
M = molecular weight of the specific gas
ρ = density
Cp = specific heat at constant pressure
Cv = specific heat at constant volume

5.7.4 See also

Gas laws
Adiabatic process
Isenthalpic process
Isentropic analysis
Polytropic process

5.7.5 Notes

[1] Partington, J.R. (1949), An Advanced Treatise on Physical Chemistry, volume 1, Fundamental Principles. The Properties of Gases, London: Longmans, Green and Co., p. 122.
[2] Kestin, J. (1966). A Course in Thermodynamics, Blaisdell Publishing Company, Waltham MA, p. 196.
[3] Münster, A. (1970). Classical Thermodynamics, translated by E.S. Halberstadt, Wiley-Interscience, London, ISBN 0-471-62430-6, p. 13.
[4] Haase, R. (1971). Survey of Fundamental Laws, chapter 1 of Thermodynamics, pages 1–97 of volume 1, ed. W. Jost, of Physical Chemistry. An Advanced Treatise, ed. H. Eyring, D. Henderson, W. Jost, Academic Press, New York, lcn 73-117081, p. 71.
[5] Borgnakke, C., Sonntag, R.E. (2009). Fundamentals of Thermodynamics, seventh edition, Wiley, ISBN 978-0-470-04192-5, p. 310.
[6] Massey, B.S. (1970), Mechanics of Fluids, Section 12.2 (2nd edition), Van Nostrand Reinhold Company, London. Library of Congress Catalog Card Number: 67-25005, p. 19.
[7] Çengel, Y.A., Boles, M.A. (2015). Thermodynamics: An Engineering Approach, 8th edition, McGraw-Hill, New York, ISBN 978-0-07-339817-4, p. 340.
[8] Çengel, Y.A., Boles, M.A. (2015). Thermodynamics: An Engineering Approach, 8th edition, McGraw-Hill, New York, ISBN 978-0-07-339817-4, pp. 340–341.
[9] Guggenheim, E.A. (1985). Thermodynamics. An Advanced Treatment for Chemists and Physicists, seventh edition, North Holland, Amsterdam, ISBN 0444869514, p. 12: "As a limiting case between natural and unnatural processes we have reversible processes, which consist of the passage in either direction through a continuous series of equilibrium states. Reversible processes do not actually occur ..."
[10] Kestin, J. (1966). A Course in Thermodynamics, Blaisdell Publishing Company, Waltham MA, p. 127: "However, by a stretch of imagination, it was accepted that a process, compression or expansion, as desired, could be performed infinitely slowly or as is sometimes said, quasistatically." P. 130: "It is clear that all natural processes are irreversible and that reversible processes constitute convenient idealizations only."
[11] Cengel, Yunus A., and Michael A. Boles. Thermodynamics: An Engineering Approach. 7th Edition. New York: McGraw-Hill, 2012. Print.
[12] Cengel, Yunus A., and Michael A. Boles. Thermodynamics: An Engineering Approach. 7th Edition. New York: McGraw-Hill, 2012. Print.

5.7.6 References

Van Wylen, G.J. and Sonntag, R.E. (1965), Fundamentals of Classical Thermodynamics, John Wiley &
Sons, Inc., New York. Library of Congress Catalog
Card Number: 65-19470


5.8 Polytropic process


Pantropic redirects here. For the term used in distributions, see pantropical.
A polytropic process is a thermodynamic process that obeys the relation:

p v^n = C

where p is the pressure, v is specific volume, n is the polytropic index (any real number), and C is a constant. All processes that can be expressed as a pressure and volume product are polytropic processes. Some of those processes (n = 0, 1, γ, ∞) are unique. This equation can accurately characterize a very wide range of thermodynamic processes, ranging from n = 0 to n = ∞, which covers n = 0 (isobaric), n = 1 (isothermal), n = γ (isentropic) and n = ∞ (isochoric) processes and all values of n in between. Hence the equation is polytropic in the sense that it describes many lines or many processes. In addition to the behavior of gases, it can in some cases represent some liquids and solids.

Polytropic processes behave differently with various polytropic indices. A polytropic process can generate other basic thermodynamic processes.

The polytropic process equation is particularly useful for characterizing expansion and compression processes which include heat transfer. The one restriction is that the process should display a constant energy transfer ratio K during that process:

K = δQ/δW = constant

If it deviates from that restriction it suggests the exponent is not a constant.
For a particular exponent, other points along the curve that describes that thermodynamic process can be calculated:

p1 v1^n = p2 v2^n = ... = C

5.8.1 Derivation

The following derivation is taken from Christians.[1] Consider a gas in a closed system undergoing an internally reversible process with negligible changes in kinetic and potential energy. The First Law of Thermodynamics states that the energy added to a system as heat, minus the energy that leaves the system as work, is equal to the change in the internal energy of the system:

δq − δw = du    (Eq. 1)

Energy entering the system increases the energy of the system, and energy leaving the system decreases the energy of the system. The sign convention is that heat transfer into the system is positive. Work done by the system is also positive. With this sign convention, the heat transfer term is added to du, and the work term is subtracted from du.
Define the energy transfer ratio,

K = δq/δw

so that δq = K δw.
For an internally reversible process the only type of work interaction is moving boundary work, given by δw = p dv.
By substituting the above expressions for δw and δq into the First Law, it can then be written

(K − 1) p dv = cv dT

Consider the ideal gas equation of state with the well-known compressibility factor, Z: pv = ZRT. Assume the compressibility factor is constant for the process. Assume the gas constant is also fixed (i.e. no chemical reactions are occurring, hence R is constant). The pv = ZRT equation of state can be differentiated to give

p dv + v dp = ZR dT


Based on the well-known specific heat relationship arising from the definition of enthalpy, the term ZR can be replaced by cp − cv. With these observations the First Law (Eq. 1) becomes

−(v dp)/(p dv) = (1 − γ)K + γ

where γ is the ratio of specific heats cp/cv. This equation will be important for understanding the basis of the polytropic process equation. Now consider the polytropic process equation itself:

p v^n = C

Taking the natural log of both sides (recognizing that the exponent n is constant for a polytropic process) gives

ln p + n ln v = C

which can be differentiated and re-arranged to give

n = −(v dp)/(p dv)

By comparing this result to the result obtained from the First Law, it is concluded that when the energy transfer ratio is constant for the process, the polytropic exponent is a constant and therefore the process is polytropic. In fact the polytropic exponent can be expressed in terms of the energy transfer ratio:

n = (1 − γ)K + γ

where the term (1 − γ) is negative for an ideal gas.
This derivation can be expanded to include polytropic processes in open systems, including instances where the kinetic energy (i.e. Mach Number) is significant. It can also be expanded to include irreversible polytropic processes (see Ref [1]).

5.8.2 Applicability

The polytropic process equation is usually applicable for reversible or irreversible processes of ideal or near-ideal gases involving heat transfer and/or work interactions when the energy transfer ratio q/w is constant for the process. The equation may not be applicable for processes in an open system if the kinetic energy (i.e. Mach Number) is significant. The polytropic process equation may also be applicable in some cases to processes involving liquids, or even solids.

5.8.3 Polytropic Specific Heat Capacity

It is denoted by cn and it is equal to

cn = cv (γ − n)/(1 − n)

5.8.4 Relationship to ideal processes

For certain values of the polytropic index, the process will be synonymous with other common processes. Some examples of the effects of varying index values are given in the table.
When the index n is between any two of the former values (0, 1, γ, or ∞), it means that the polytropic curve will be bounded by the curves of the two corresponding indices.
Note that 1 < γ < 2, since γ = cp/cv = (cv + R)/cv = 1 + R/cv = cp/(cp − R).

5.8.5 Notation

In the case of an isentropic ideal gas, γ is the ratio of specific heats, known as the adiabatic index or as adiabatic exponent.
An isothermal ideal gas is also a polytropic gas. Here, the polytropic index is equal to one, and differs from the adiabatic index γ.
In order to discriminate between the two gammas, the polytropic gamma is sometimes capitalized, Γ.
To confuse matters further, some authors refer to Γ as the polytropic index, rather than n. Note that

n = 1/(Γ − 1).

5.8.6 Other

A solution to the Lane-Emden equation using a polytropic fluid is known as a polytrope.

5.8.7 See also

Polytrope
Adiabatic process
Isentropic process
Isobaric process
Isochoric process
Isothermal process
Vapor compression refrigeration
Gas compressor
Internal combustion engine
Quasistatic equilibrium
Thermodynamics
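To illustrate the polytropic relation p v^n = C and the special index values discussed in this section, here is a minimal Python sketch; the state values and the helper names are assumptions for illustration, not part of the article.

def p_along_polytrope(p1, v1, v2, n):
    """Pressure at specific volume v2 on the polytrope through (p1, v1)."""
    return p1 * (v1 / v2) ** n

def process_name(n, gamma=1.4):
    """Name the ideal-gas process corresponding to a polytropic index n."""
    if n == 0:
        return "isobaric"
    if n == 1:
        return "isothermal"
    if abs(n - gamma) < 1e-9:
        return "isentropic"
    if n == float("inf"):
        return "isochoric"
    return "general polytropic"

p1, v1 = 100e3, 0.8             # Pa, m^3/kg (illustrative state 1)
for n in (0, 1, 1.3, 1.4):
    p2 = p_along_polytrope(p1, v1, v1 / 2, n)   # halve the specific volume
    print(f"n = {n}: {process_name(n)}, p2 = {p2/1e3:.0f} kPa")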

5.8.8 References

[1] Christians, Joseph, Approach for Teaching Polytropic Processes Based on the Energy Transfer Ratio, International
Journal of Mechanical Engineering Education, Volume 40,
Number 1 (January 2012), Manchester University Press
[2] G. P. Horedt Polytropes: Applications In Astrophysics And
Related Fields, Springer, 10/08/2004, pp.24.
[3] GPSA book section 13

Chapter 6

Chapter 6. System Properties


6.1 Introduction to entropy

This article is a non-technical introduction to the subject. For the main encyclopedia article, see Entropy.

The idea of "irreversibility" is central to the understanding of entropy. Everyone has an intuitive understanding of irreversibility (a dissipative process) - if one watches a movie of everyday life running forward and in reverse, it is easy to distinguish between the two. The movie running in reverse shows impossible things happening - water jumping out of a glass into a pitcher above it, smoke going down a chimney, water in a glass freezing to form ice cubes, crashed cars reassembling themselves, and so on. The intuitive meaning of expressions such as "you can't unscramble an egg", "don't cry over spilled milk" or "you can't take the cream out of the coffee" is that these are irreversible processes. There is a direction in time by which spilled milk does not go back into the glass.
In thermodynamics, one says that the "forward" processes - pouring water from a pitcher, smoke going up a chimney, etc. - are "irreversible": they cannot happen in reverse, even though, on a microscopic level, no laws of physics would be violated if they did. All real physical processes involving systems in everyday life, with many atoms or molecules, are irreversible. For an irreversible process in an isolated system, the thermodynamic state variable known as entropy is always increasing. The reason that the movie in reverse is so easily recognized is because it shows processes for which entropy is decreasing, which is physically impossible. In everyday life, there may be processes in which the increase of entropy is practically unobservable, almost zero. In these cases, a movie of the process run in reverse will not seem unlikely. For example, in a 1-second video of the collision of two billiard balls, it will be hard to distinguish the forward and the backward case, because the increase of entropy during that time is relatively small. In thermodynamics, one says that this process is practically reversible, with an entropy increase that is practically zero. The statement of the fact that the entropy of the Universe never decreases is found in the second law of thermodynamics.
In a physical system, entropy provides a measure of the amount of thermal energy that cannot be used to do work. In some other definitions of entropy, it is a measure of how evenly energy (or some analogous property) is distributed in a system. Work and heat are determined by a process that a system undergoes, and only occur at the boundary of a system. Entropy is a function of the state of a system, and has a value determined by the state variables of the system.
The concept of entropy is central to the second law of thermodynamics. The second law determines which physical processes can occur. For example, it predicts that the flow of heat from a region of high temperature to a region of low temperature is a spontaneous process - it can proceed along by itself without needing any extra external energy. When this process occurs, the hot region becomes cooler and the cold region becomes warmer. Heat is distributed more evenly throughout the system and the system's ability to do work has decreased because the temperature difference between the hot region and the cold region has decreased. Referring back to our definition of entropy, we can see that the entropy of this system has increased. Thus, the second law of thermodynamics can be stated to say that the entropy of an isolated system always increases, and such processes which increase entropy can occur spontaneously. The entropy of a system increases as its components have the range of their momentum and/or position increased.
The term entropy was coined in 1865 by the German physicist Rudolf Clausius, from the Greek words en-, "in", and trope, "a turning", in analogy with energy.[1]

6.1.1 Explanation

The concept of thermodynamic entropy arises from the second law of thermodynamics. By this law of entropy increase it quantifies the reduction in the capacity of a system for change, for example heat always flows from a region of higher temperature to one with lower temperature until temperature becomes uniform, or determines whether a thermodynamic process may occur.
Entropy is calculated in two ways, the first is the entropy change (ΔS) to a system containing a sub-system which undergoes heat transfer to its surroundings (inside the system of interest). It is based on the macroscopic relationship between heat flow into the sub-system and the temperature at which it occurs, summed over the boundary of that sub-system. The second calculates the absolute entropy (S) of a system based on the microscopic behaviour of its individual particles. This is based on the natural logarithm of the number of microstates possible in a particular macrostate (W or Ω), called the thermodynamic probability. Roughly, it gives the probability of the system's being in that state. In this sense it effectively defines entropy independently from its effects due to changes which may involve heat, mechanical, electrical, chemical energies etc. but also includes logical states such as information.
The concept of energy is related to the first law of thermodynamics, which deals with the conservation of energy and under which the loss in heat will result in a decrease in the internal energy of the thermodynamic system. Thermodynamic entropy provides a comparative measure of the amount of this decrease in internal energy of the system and the corresponding increase in internal energy of the surroundings at a given temperature. A simple and more concrete visualization of the second law is that energy of all types changes from being localized to becoming dispersed or spread out, if it is not hindered from doing so. Entropy change is the quantitative measure of that kind of a spontaneous process: how much energy has flowed or how widely it has become spread out at a specific temperature.
Following the formalism of Clausius, the first calculation can be mathematically stated as:[2]

ΔS = q/T.

Where ΔS is the increase or decrease in entropy, q is the heat added to the system or subtracted from it, and T is temperature. The equal sign indicates that the change is reversible. If the temperature is allowed to vary, the equation must be integrated over the temperature path. This calculation of entropy change does not allow the determination of absolute value, only differences. In this context, the Second Law of Thermodynamics may be stated that for heat transferred over any valid process for any system, whether isolated or not,

ΔS ≥ q/T.

The second calculation defines entropy in absolute terms and comes from statistical mechanics. The entropy of a particular macrostate is defined to be Boltzmann's constant times the natural logarithm of the number of microstates corresponding to that macrostate, or mathematically

S = kB ln Ω,

Where S is the entropy, kB is Boltzmann's constant, and Ω is the number of microstates.

Ice melting provides an example of entropy increasing

The macrostate of a system is what we know about the system, for example the temperature, pressure, and volume of a gas in a box. For each set of values of temperature, pressure, and volume there are many arrangements of molecules which result in those values. The number of arrangements of molecules which could result in the same values for temperature, pressure and volume is the number of microstates.
The concept of entropy has been developed to describe any of several phenomena, depending on the field and the context in which it is being used. Information entropy takes the mathematical concepts of statistical thermodynamics into areas of probability theory unconnected with heat and energy.
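As a toy illustration of the statistical formula S = kB ln Ω quoted above, the following Python sketch (an assumed model, not from the article) counts microstates for N molecules that may each occupy the left or right half of a box and evaluates the Boltzmann entropy of a few macrostates.

import math

k_B = 1.380649e-23   # Boltzmann constant, J/K

def entropy_of_macrostate(N, n_left):
    """Boltzmann entropy of the macrostate with n_left of N molecules on the left."""
    omega = math.comb(N, n_left)      # number of microstates for this macrostate
    return k_B * math.log(omega)

N = 100
for n in (0, 25, 50):
    print(f"n_left = {n:3d}: S = {entropy_of_macrostate(N, n):.3e} J/K")
# The evenly spread macrostate (n_left = 50) has by far the most microstates,
# and therefore the largest entropy.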


6.1.2 Example of increasing entropy

Main article: Disgregation


Ice melting provides an example in which entropy increases in a small system, a thermodynamic system consisting of the surroundings (the warm room) and the entity of glass container, ice, water which has been allowed to reach thermodynamic equilibrium at the melting temperature of ice. In this system, some heat (δQ) from the warmer surroundings at 298 K (77 °F, 25 °C) transfers to the cooler system of ice and water at its constant temperature (T) of 273 K (32 °F, 0 °C), the melting temperature of ice. The entropy of the system, which is δQ/T, increases by δQ/273 K. The heat δQ for this process is the energy required to change water from the solid state to the liquid state, and is called the enthalpy of fusion, i.e. ΔH for ice fusion.
It is important to realize that the entropy of the surrounding room decreases less than the entropy of the ice and water increases: the room temperature of 298 K is larger than 273 K and therefore the ratio (entropy change) of δQ/298 K for the surroundings is smaller than the ratio (entropy change) of δQ/273 K for the ice and water system. This is always true in spontaneous events in a thermodynamic system and it shows the predictive importance of entropy: the final net entropy after such an event is always greater than was the initial entropy.
As the temperature of the cool water rises to that of the room and the room further cools imperceptibly, the sum of the δQ/T over the continuous range, at many increments, in the initially cool to finally warm water can be found by calculus. The entire miniature universe, i.e. this thermodynamic system, has increased in entropy. Energy has spontaneously become more dispersed and spread out in that universe than when the glass of ice and water was introduced and became a 'system' within it.
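The entropy bookkeeping in this example can be written out directly. The short Python sketch below uses the approximate enthalpy of fusion per mole of ice (about 6008 J at 273 K, a figure quoted later in this chapter) together with the 298 K room temperature from the example above; the net entropy change of the 'miniature universe' comes out positive.

Q = 6008.0                      # J, enthalpy of fusion per mole of ice (approximate)
T_ice, T_room = 273.0, 298.0    # K

dS_system = +Q / T_ice          # ice and water gain entropy
dS_room   = -Q / T_room         # the warmer surroundings lose less entropy
dS_total  = dS_system + dS_room

print(f"ice/water: {dS_system:+.2f} J/K, room: {dS_room:+.2f} J/K, net: {dS_total:+.2f} J/K")
# The net change is positive, as the second law requires for a spontaneous process.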

6.1.3 Origins and uses

Originally, entropy was named to describe the waste heat,


or more accurately, energy loss, from heat engines and other
mechanical devices which could never run with 100% efciency in converting energy into work. Later, the term
came to acquire several additional descriptions, as more was
understood about the behavior of molecules on the microscopic level. In the late 19th century, the word disorder
was used by Ludwig Boltzmann in developing statistical
views of entropy using probability theory to describe the
increased molecular movement on the microscopic level.
That was before quantum behavior came to be better understood by Werner Heisenberg and those who followed.

Descriptions of thermodynamic (heat) entropy on the microscopic level are found in statistical thermodynamics and
statistical mechanics.
For most of the 20th century, textbooks tended to describe entropy as disorder, following Boltzmanns early
conceptualisation of the motional (i.e. kinetic) energy of
molecules. More recently, there has been a trend in chemistry and physics textbooks to describe entropy as energy
dispersal.[3] Entropy can also involve the dispersal of particles, which are themselves energetic. Thus there are instances where both particles and energy disperse at dierent rates when substances are mixed together.
The mathematics developed in statistical thermodynamics were found to be applicable in other disciplines. In
particular, information sciences developed the concept of
information entropy where a constant replaces the temperature which is inherent in thermodynamic entropy.

6.1.4 Heat and entropy

At a microscopic level, kinetic energy of molecules is responsible for the temperature of a substance or a system.
Heat is the kinetic energy of molecules being transferred:
when motional energy is transferred from hotter surroundings to a cooler system, faster-moving molecules in the
surroundings collide with the walls of the system which
transfers some of their energy to the molecules of the system and makes them move faster.
Molecules in a gas like nitrogen at room temperature
at any instant are moving at an average speed of nearly
500 miles per hour (210 m/s), repeatedly colliding
and therefore exchanging energy so that their individual speeds are always changing. Assuming an idealgas model, average kinetic energy increases linearly
with temperature, so the average speed increases as
the square root of temperature.
Thus motional molecular energy (heat energy)
from hotter surroundings, like faster-moving
molecules in a ame or violently vibrating iron
atoms in a hot plate, will melt or boil a substance
(the system) at the temperature of its melting or
boiling point. That amount of motional energy
from the surroundings that is required for melting or boiling is called the phase-change energy,
specically the enthalpy of fusion or of vaporization, respectively. This phase-change energy
breaks bonds between the molecules in the system (not chemical bonds inside the molecules
that hold the atoms together) rather than contributing to the motional energy and making
the molecules move any faster so it does not



raise the temperature, but instead enables the
molecules to break free to move as a liquid or
as a vapor.
In terms of energy, when a solid becomes a liquid or a vapor, motional energy coming from the
surroundings is changed to potential energy in
the substance (phase change energy, which is released back to the surroundings when the surroundings become cooler than the substances
boiling or melting temperature, respectively).
Phase-change energy increases the entropy of
a substance or system because it is energy that
must be spread out in the system from the surroundings so that the substance can exist as a liquid or vapor at a temperature above its melting
or boiling point. When this process occurs in a
'universe' that consists of the surroundings plus
the system, the total energy of the 'universe' becomes more dispersed or spread out as part of
the greater energy that was only in the hotter surroundings transfers so that some is in the cooler
system. This energy dispersal increases the entropy of the 'universe'.

The important overall principle is that Energy of all types


changes from being localized to becoming dispersed or
spread out, if not hindered from doing so. Entropy (or better, entropy change) is the quantitative measure of that kind
of a spontaneous process: how much energy has been transferred/T or how widely it has become spread out at a specic
temperature.
Classical calculation of entropy
When entropy was first defined and used in 1865 the very existence of atoms was still controversial and there was no concept that temperature was due to the motional energy of molecules or that heat was actually the transferring of that motional molecular energy from one place to another. Entropy change, ΔS, was described in macroscopic terms that could be directly measured, such as volume, temperature, or pressure. However, today the classical equation of entropy, ΔS = qrev/T, can be explained, part by part, in modern terms describing how molecules are responsible for what is happening:

ΔS is the change in entropy of a system (some physical substance of interest) after some motional energy (heat) has been transferred to it by fast-moving molecules. So, ΔS = S_final − S_initial.

Then, ΔS = S_final − S_initial = qrev/T, the quotient of the motional energy (heat) q that is transferred "reversibly" (rev) to the system from the surroundings (or from another system in contact with the first system) divided by T, the absolute temperature at which the transfer occurs.
Reversible or reversibly (rev) simply means
that T, the temperature of the system, has to stay
(almost) exactly the same while any energy is being transferred to or from it. Thats easy in the
case of phase changes, where the system absolutely must stay in the solid or liquid form until enough energy is given to it to break bonds
between the molecules before it can change to
a liquid or a gas. For example in the melting
of ice at 273.15 K, no matter what temperature
the surroundings are from 273.20 K to 500 K
or even higher, the temperature of the ice will
stay at 273.15 K until the last molecules in the
ice are changed to liquid water, i.e., until all the
hydrogen bonds between the water molecules in
ice are broken and new, less-exactly fixed hydrogen bonds between liquid water molecules are formed. This amount of energy necessary for ice melting per mole has been found to be 6008 joules at 273 K. Therefore, the entropy change per mole is qrev/T = 6008 J / 273 K, or 22 J/K.
When the temperature isn't at the melting or
boiling point of a substance no intermolecular
bond-breaking is possible, and so any motional
molecular energy (heat) from the surroundings
transferred to a system raises its temperature,
making its molecules move faster and faster. As
the temperature is constantly rising, there is no
longer a particular value of T at which energy
is transferred. However, a reversible energy
transfer can be measured at a very small temperature increase, and a cumulative total can be
found by adding each of many small temperature
intervals or increments. For example, to find the entropy change qrev/T from 300 K to 310 K, measure the amount of energy transferred at dozens
or hundreds of temperature increments, say from
300.00 K to 300.01 K and then 300.01 to 300.02
and so on, dividing the q by each T, and nally
adding them all.
Calculus can be used to make this calculation easier if the effect of energy input to the system is linearly dependent on the temperature change, as in simple heating of a system at moderate to relatively high temperatures. Thus, the energy being transferred per incremental change in temperature (the heat capacity, Cp), multiplied by the integral of dT/T from T_initial to T_final, is directly given by ΔS = Cp ln(T_final/T_initial).
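The closed form ΔS = Cp ln(T_final/T_initial) can be checked against the incremental sum described above. The following minimal Python sketch assumes a constant heat capacity (the value used for liquid water is only illustrative).

import math

Cp = 75.3           # J/(mol K), roughly liquid water (assumed illustrative value)
T_initial, T_final, steps = 300.0, 310.0, 1000

dT = (T_final - T_initial) / steps
numeric = sum(Cp * dT / (T_initial + (i + 0.5) * dT) for i in range(steps))
closed_form = Cp * math.log(T_final / T_initial)

print(f"sum of Cp*dT/T = {numeric:.4f} J/(mol K)")
print(f"Cp*ln(Tf/Ti)   = {closed_form:.4f} J/(mol K)")   # the two agree closely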

6.1.5 Introductory descriptions of entropy

Traditionally, 20th century textbooks have introduced


entropy as order and disorder so that it provides a measurement of the disorder or randomness of a system. It
has been argued that ambiguities in the terms used (such
as disorder and chaos) contribute to widespread confusion and can hinder comprehension of entropy for most
students. A more recent formulation associated with Frank L. Lambert describes entropy as energy dispersal.[3]

6.1.6 See also

Entropy (energy dispersal)
Second law of thermodynamics
Statistical mechanics
Thermodynamics

6.1.7 References

[1] "etymonline.com: entropy". Retrieved 2009-06-15.
[2] I. Klotz, R. Rosenberg, Chemical Thermodynamics - Basic Concepts and Methods, 7th ed., Wiley (2008), p. 125
[3] Entropy Sites - A Guide. Content selected by Frank L. Lambert

6.1.8 Further reading

Goldstein, Martin and Inge F. (1993). The Refrigerator and the Universe: Understanding the Laws of Energy. Harvard Univ. Press. ISBN 9780674753259. Chapters 4-12 touch on entropy.

6.2 Entropy

This article is about entropy in thermodynamics. For other uses, see Entropy (disambiguation).
Not to be confused with Enthalpy.
For a more accessible and less technical introduction to this topic, see Introduction to entropy.

In thermodynamics, entropy (usual symbol S) is a measure of the number of specific realizations or microstates that may realize a thermodynamic system in a defined state specified by macroscopic variables. Most understand entropy as a measure of molecular disorder within a macroscopic system. The second law of thermodynamics states that an isolated system's entropy never decreases. Such a system spontaneously evolves towards thermodynamic equilibrium, the state with maximum entropy. Non-isolated systems may lose entropy, provided they increase their environment's entropy by that increment. Since entropy is a state function, the change in entropy of a system is constant for any process with known initial and final states. This applies whether the process is reversible or irreversible. However, irreversible processes increase the combined entropy of the system and its environment.
The change in entropy (ΔS) of a system was originally defined for a thermodynamically reversible process as

ΔS = ∫ δQrev / T

where T is the absolute temperature of the system, dividing an incremental reversible transfer of heat into that system (δQ). (If heat is transferred out the sign would be reversed giving a decrease in entropy of the system.) The above definition is sometimes called the macroscopic definition of entropy because it can be used without regard to any microscopic description of the contents of a system. The concept of entropy has been found to be generally useful and has several other formulations. Entropy was discovered when it was noticed to be a quantity that behaves as a function of state, as a consequence of the second law of thermodynamics.
Entropy is an extensive property. It has the dimension of energy divided by temperature, which has a unit of joules per kelvin (J K−1) in the International System of Units (or kg m2 s−2 K−1 in terms of base units). But the entropy of a pure substance is usually given as an intensive property - either entropy per unit mass (SI unit: J K−1 kg−1) or entropy per unit amount of substance (SI unit: J K−1 mol−1).
The absolute entropy (S rather than ΔS) was defined later, using either statistical mechanics or the third law of thermodynamics.
In the modern microscopic interpretation of entropy in statistical mechanics, entropy is the amount of additional information needed to specify the exact physical state of a system, given its thermodynamic specification. Understanding the role of thermodynamic entropy in various processes requires an understanding of how and why that information changes as the system evolves from its initial to its final condition. It is often said that entropy is an expression of the disorder, or randomness of a system, or of our lack of information about it. The second law is now often seen as an expression of the fundamental postulate of statistical mechanics through the modern definition of entropy.
6.2.1 History

Main article: History of entropy

Rudolf Clausius (1822-1888), originator of the concept of entropy

The French mathematician Lazare Carnot proposed in his 1803 paper Fundamental Principles of Equilibrium and Movement that in any machine the accelerations and shocks of the moving parts represent losses of moment of activity. In other words, in any natural process there exists an inherent tendency towards the dissipation of useful energy. Building on this work, in 1824 Lazare's son Sadi Carnot published Reflections on the Motive Power of Fire which posited that in all heat-engines, whenever "caloric" (what is now known as heat) falls through a temperature difference, work or motive power can be produced from the actions of its fall from a hot to cold body. He made the analogy with that of how water falls in a water wheel. This was an early insight into the second law of thermodynamics.[1] Carnot based his views of heat partially on the early 18th century Newtonian hypothesis that both heat and light were types of indestructible forms of matter, which are attracted and repelled by other matter, and partially on the contemporary views of Count Rumford who showed (1789) that heat could be created by friction as when cannon bores are machined.[2] Carnot reasoned that if the body of the working substance, such as a body of steam, is returned to its original state at the end of a complete engine cycle, that no change occurs in the condition of the working body.
The first law of thermodynamics, deduced from the heat-friction experiments of James Joule in 1843, expresses the concept of energy, and its conservation in all processes; the first law, however, is unable to quantify the effects of friction and dissipation.
In the 1850s and 1860s, German physicist Rudolf Clausius objected to the supposition that no change occurs in the working body, and gave this change a mathematical interpretation by questioning the nature of the inherent loss of usable heat when work is done, e.g. heat produced by friction.[3] Clausius described entropy as the transformation-content, i.e. dissipative energy use, of a thermodynamic system or working body of chemical species during a change of state.[3] This was in contrast to earlier views, based on the theories of Isaac Newton, that heat was an indestructible particle that had mass.

Later, scientists such as Ludwig Boltzmann, Josiah Willard


Gibbs, and James Clerk Maxwell gave entropy a statistical
basis. In 1877 Boltzmann visualized a probabilistic way to
measure the entropy of an ensemble of ideal gas particles,
in which he dened entropy to be proportional to the logarithm of the number of microstates such a gas could occupy. Henceforth, the essential problem in statistical thermodynamics, i.e. according to Erwin Schrdinger, has been
to determine the distribution of a given amount of energy
E over N identical systems. Carathodory linked entropy
with a mathematical denition of irreversibility, in terms
of trajectories and integrability.

6.2.2 Definitions and descriptions

Any method involving the notion of entropy, the very existence of which depends on the second law of thermodynamics, will doubtless seem to many far-fetched, and may
repel beginners as obscure and difficult of comprehension.
Willard Gibbs, Graphical Methods in the Thermodynamics
of Fluids[4]
There are two related denitions of entropy: the
thermodynamic denition and the statistical mechanics definition. Historically, the classical thermodynamics denition developed rst. In the classical thermodynamics viewpoint, the system is composed of very large numbers of
constituents (atoms, molecules) and the state of the system
is described by the average thermodynamic properties of
those constituents; the details of the systems constituents
are not directly considered, but their behavior is described
by macroscopically averaged properties, e.g. temperature,
pressure, entropy, heat capacity. The early classical definition of the properties of the system assumed equilibrium. The classical thermodynamic denition of entropy
has more recently been extended into the area of non-

equilibrium thermodynamics. Later, the thermodynamic
properties, including entropy, were given an alternative definition in terms of the statistics of the motions of the microscopic constituents of a system modeled at rst classically, e.g. Newtonian particles constituting a gas, and later
quantum-mechanically (photons, phonons, spins, etc.). The
statistical mechanics description of the behavior of a system
is necessary as the denition of the properties of a system
using classical thermodynamics become an increasingly unreliable method of predicting the nal state of a system that
is subject to some process.

Function of state

There are many thermodynamic properties that are functions of state. This means that at a particular thermodynamic state (which should not be confused with the microscopic state of a system), these properties have a certain value. Often, if two properties of the system are determined, then the state is determined and the other properties' values can also be determined. For instance, a quantity of gas at a particular temperature and pressure has its state fixed by those values, and has a particular volume that is determined by those values. As another instance, a system composed of a pure substance of a single phase at a particular uniform temperature and pressure is determined (and is thus a particular state) and is at not only a particular volume but also at a particular entropy.[5] The fact that entropy is a function of state is one reason it is useful. In the Carnot cycle, the working fluid returns to the same state it had at the start of the cycle, hence the line integral of any state function, such as entropy, over the cycle is zero.

Reversible process

Entropy is defined for a reversible process and for a system that, at all times, can be treated as being at a uniform state and thus at a uniform temperature. Reversibility is an ideal that some real processes approximate and that is often presented in study exercises. For a reversible process, entropy behaves as a conserved quantity and no change occurs in total entropy. More specifically, total entropy is conserved in a reversible process and not conserved in an irreversible process.[6] One has to be careful about system boundaries. For example, in the Carnot cycle, while the heat flow from the hot reservoir to the cold reservoir represents an increase in entropy, the work output, if reversibly and perfectly stored in some energy storage mechanism, represents a decrease in entropy that could be used to operate the heat engine in reverse and return to the previous state, thus the total entropy change is still zero at all times if the entire process is reversible. Any process that does not meet the requirements of a reversible process must be treated as an irreversible process, which is usually a complex task. An irreversible process increases entropy.[7]
Heat transfer situations require two or more non-isolated systems in thermal contact. In irreversible heat transfer, heat energy is irreversibly transferred from the higher temperature system to the lower temperature system, and the combined entropy of the systems increases. Each system, by definition, must have its own absolute temperature applicable within all areas in each respective system in order to calculate the entropy transfer. Thus, when a system at higher temperature TH transfers heat dQ to a system of lower temperature TC, the former loses entropy dQ/TH and the latter gains entropy dQ/TC. Since TH > TC, it follows that dQ/TH < dQ/TC, whence there is a net gain in the combined entropy.

Carnot cycle

The concept of entropy arose from Rudolf Clausius's study of the Carnot cycle.[8] In a Carnot cycle, heat QH is absorbed at temperature TH from a 'hot' reservoir (an isothermal process), and given up as heat QC to a 'cold' reservoir (isothermal process) at TC. According to Carnot's principle, work can only be produced by the system when there is a temperature difference, and the work should be some function of the difference in temperature and the heat absorbed (QH). Carnot did not distinguish between QH and QC, since he was using the incorrect hypothesis that caloric theory was valid, and hence heat was conserved (the incorrect assumption that QH and QC were equal) when, in fact, QH is greater than QC.[9] Through the efforts of Clausius and Kelvin, it is now known that the maximum work that a

system can produce is the product of the Carnot eciency


and the heat absorbed from the hot reservoir:

In order to derive the Carnot eciency, 1-(TC/TH) (a number less than one), Kelvin had to evaluate the ratio of the
work output to the heat absorbed during the isothermal expansion with the help of the Carnot-Clapeyron equation
which contained an unknown function, known as the Carnot
function. The possibility that the Carnot function could
be the temperature as measured from a zero temperature,
was suggested by Joule in a letter to Kelvin. This allowed
Kelvin to establish his absolute temperature scale.[10] It is
also known that the work produced by the system is the difference between the heat absorbed from the hot reservoir
and the heat given up to the cold reservoir:

Since the latter is valid over the entire cycle, this gave Clausius the hint that at each stage of the cycle, work and heat

would not be equal, but rather their dierence would be a
state function that would vanish upon completion of the cycle. The state function was called the internal energy and it
became the rst law of thermodynamics.[11]
Now equating (1) and (2) gives

QH/TH − QC/TC = 0

or

QH/TH = QC/TC


irreversible process prevents the cycle from outputting the
maximum amount of work as predicted by the Carnot equation.
The Carnot cycle and eciency are useful because they dene the upper bound of the possible work output and the
eciency of any classical thermodynamic system. Other
cycles, such as the Otto cycle, Diesel cycle and Brayton
cycle, can be analyzed from the standpoint of the Carnot
cycle. Any machine or process that is claimed to produce
an eciency greater than the Carnot eciency is not viable because it violates the second law of thermodynamics.
For very small numbers of particles in the system, statistical thermodynamics must be used. The eciency of devices such as photovoltaic cells require an analysis from the
standpoint of quantum mechanics.

This implies that there is a function of state which is conserved over a complete cycle of the Carnot cycle. Clausius
called this state function entropy. One can see that entropy Classical thermodynamics
was discovered through mathematics rather than through
laboratory results. It is a mathematical construct and has Main article: Entropy (classical thermodynamics)
no easy physical analogy. This makes the concept somewhat obscure or abstract, akin to how the concept of energy
The thermodynamic denition of entropy was developed
arose.
in the early 1850s by Rudolf Clausius and essentially deClausius then asked what would happen if there should be scribes how to measure the entropy of an isolated system in
less work produced by the system than that predicted by thermodynamic equilibrium with its parts. Clausius created
Carnots principle. The right-hand side of the rst equation the term entropy as an extensive thermodynamic variable
would be the upper bound of the work output by the system, that was shown to be useful in characterizing the Carnot
which would now be converted into an inequality
cycle. Heat transfer along the isotherm steps of the Carnot
cycle was found to be proportional to the temperature of a
system (known as its absolute temperature). This relation(
)
TC
ship was expressed in increments of entropy equal to the
W < 1
QH
TH
ratio of incremental heat transfer divided by temperature,
which was found to vary in the thermodynamic cycle but
When the second equation is used to express the work as a eventually return to the same value at the end of every cydierence in heats, we get
cle. Thus it was found to be a function of state, specically
a
thermodynamic state of the system. Clausius wrote that
(
)
C
he intentionally formed the word Entropy as similar as posQH QC < 1 TTH
QH
sible to the word Energy, basing the term on the Greek
or
trop, transformation.[12][note 1]
C
QH
QC > TTH
While Clausius based his denition on a reversible process,
there are also irreversible processes that change entropy.
So more heat is given up to the cold reservoir than in the Following the second law of thermodynamics, entropy of an
Carnot cycle. If we denote the entropies by S=Q/T for isolated system always increases. The dierence between
the two states, then the above inequality can be written as a an isolated system and closed system is that heat may not
decrease in the entropy
ow to and from an isolated system, but heat ow to and
from a closed system is possible. Nevertheless, for both
SH SC < 0
closed and isolated systems, and indeed, also in open systems, irreversible thermodynamics processes may occur.
or
According
SH < SC
H to the Clausius equality, for a reversible
cyclic
process: QTrev = 0. This means the line integral L QTrev
In other words, the entropy that leaves the system is greater is path-independent.
than the entropy that enters the system, implying that some So we can dene a state function S called entropy, which

satisfies dS = δQrev/T.

To nd the entropy dierence between any two states of a


system, the integral must be evaluated for some reversible
path between the initial and nal states.[13] Since entropy
is a state function, the entropy change of the system for an
irreversible path will be the same as for a reversible path between the same two states.[14] However, the entropy change
of the surroundings will be dierent.

sure of disorder (the higher the entropy, the higher the
disorder).[15][16][17] This denition describes the entropy as
being proportional to the natural logarithm of the number of
possible microscopic congurations of the individual atoms
and molecules of the system (microstates) which could give
rise to the observed macroscopic state (macrostate) of the
system. The constant of proportionality is the Boltzmann
constant.

We can only obtain the change of entropy by integrating the Specically, entropy is a logarithmic measure of the number
above formula. To obtain the absolute value of the entropy, of states with signicant probability of being occupied:
we need the third law of thermodynamics, which states that
S = 0 at absolute zero for perfect crystals.

S = −kB Σi pi ln pi ,
From a macroscopic perspective, in classical thermodyi
namics the entropy is interpreted as a state function of a
thermodynamic system: that is, a property depending only where kB is the Boltzmann constant, equal to
on the current state of the system, independent of how that 1.380651023 J/K. The summation is over all the
state came to be achieved. In any process where the system possible microstates of the system, and pi is the probability
gives up energy E, and its entropy falls by S, a quantity that the system is in the i-th microstate.[18] This denition
at least TR S of that energy must be given up to the sys- assumes that the basis set of states has been picked so
tems surroundings as unusable heat (TR is the temperature that there is no information on their relative phases. In a
of the systems external surroundings). Otherwise the pro- dierent basis set, the more general expression is
cess will not go forward. In classical thermodynamics, the
entropy of a system is dened only if it is in thermodynamic
equilibrium.
S = −kB Tr(ρ̂ ln ρ̂),

Statistical mechanics
The statistical denition was developed by Ludwig Boltzmann in the 1870s by analyzing the statistical behavior of
the microscopic components of the system. Boltzmann
showed that this denition of entropy was equivalent to the
thermodynamic entropy to within a constant number which
has since been known as Boltzmanns constant. In summary, the thermodynamic denition of entropy provides
the experimental denition of entropy, while the statistical denition of entropy extends the concept, providing an
explanation and a deeper understanding of its nature.

where b is the density matrix, Tr is trace (linear algebra)


and ln is the matrix logarithm. This density matrix formulation is not needed in cases of thermal equilibrium so long
as the basis states are chosen to be energy eigenstates. For
most practical purposes, this can be taken as the fundamental denition of entropy since all other formulas for S can
be mathematically derived from it, but not vice versa.

In what has been called the fundamental assumption of statistical thermodynamics or the fundamental postulate in statistical mechanics, the occupation of any microstate is assumed to be equally probable (i.e. Pi = 1/, where is the
number of microstates); this assumption is usually justied
for an isolated system in equilibrium.[19] Then the previous
The interpretation of entropy in statistical mechanics is the
equation reduces to
measure of uncertainty, or mixedupness in the phrase of
Gibbs, which remains about a system after its observable
macroscopic properties, such as temperature, pressure and
volume, have been taken into account. For a given set of S = kB ln .
macroscopic variables, the entropy measures the degree to
which the probability of the system is spread out over dif- In thermodynamics, such a system is one in which the volferent possible microstates. In contrast to the macrostate, ume, number of molecules, and internal energy are xed
which characterizes plainly observable average quantities, (the microcanonical ensemble).
a microstate species all molecular details about the sys- The most general interpretation of entropy is as a measure
tem including the position and velocity of every molecule. of our uncertainty about a system. The equilibrium state
The more such states available to the system with apprecia- of a system maximizes the entropy because we have lost all
ble probability, the greater the entropy. In statistical me- information about the initial conditions except for the conchanics, entropy is a measure of the number of ways in served variables; maximizing the entropy maximizes our igwhich a system may be arranged, often taken to be a mea- norance about the details of the system.[20] This uncertainty
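As a minimal numerical sketch of the formula above (with illustrative probabilities assumed), the Gibbs sum reduces to k_B ln Ω for equally probable microstates, and any non-uniform distribution over the same states gives a smaller entropy:

# Sketch: Gibbs entropy S = -k_B * sum_i p_i ln p_i for a discrete set of
# microstate probabilities; for equal probabilities it equals k_B ln(Omega).
import math

K_B = 1.380649e-23   # J/K, SI value of the Boltzmann constant

def gibbs_entropy(probs):
    """Entropy of a normalized probability distribution over microstates."""
    return -K_B * sum(p * math.log(p) for p in probs if p > 0.0)

omega = 8
uniform = [1.0 / omega] * omega                          # fundamental postulate: equal p_i
skewed  = [0.5, 0.2, 0.1, 0.1, 0.05, 0.03, 0.01, 0.01]   # same states, unequal probabilities

print(gibbs_entropy(uniform), K_B * math.log(omega))     # identical
print(gibbs_entropy(skewed))                             # strictly smaller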

[Figure: A temperature–entropy diagram for steam. The vertical axis represents uniform temperature (°R) and the horizontal axis represents specific entropy (Btu/lbm·R). Each dark line on the graph represents constant pressure (from 0.01 psi to 100,000 psi), and these form a mesh with light gray lines of constant volume (from 0.02 to 100,000 ft³/lbm); the saturated, liquid, vapor and supercritical regions and lines of constant quality from 0% to 100% are marked. Dark blue is liquid water, light blue is liquid–steam mixture, and faint blue is steam; grey-blue represents supercritical liquid water.]

The interpretative model has a central role in determining entropy. The qualifier "for a given set of macroscopic variables" above has deep implications: if two observers use different sets of macroscopic variables, they will observe different entropies. For example, if observer A uses the variables U, V and W, and observer B uses U, V, W, X, then, by changing X, observer B can cause an effect that looks like a violation of the second law of thermodynamics to observer A. In other words: the set of macroscopic variables one chooses must include everything that may change in the experiment; otherwise one might see decreasing entropy![21]

Entropy can be defined for any Markov process with reversible dynamics and the detailed balance property.

In Boltzmann's 1896 Lectures on Gas Theory, he showed that this expression gives a measure of entropy for systems of atoms and molecules in the gas phase, thus providing a measure for the entropy of classical thermodynamics.

Entropy of a system

[Figure: A thermodynamic system, showing the system, its boundary, and the surroundings.]

Entropy is the above-mentioned unexpected and, to some, obscure integral that arises directly from the Carnot cycle. It is reversible heat divided by temperature. It is also, remarkably, a fundamental and very useful function of state.

In a thermodynamic system, pressure, density, and temperature tend to become uniform over time because the equilibrium state has higher probability (more possible combinations of microstates) than any other; see statistical mechanics. As an example, for a glass of ice water in air at room temperature, the difference in temperature between a warm room (the surroundings) and cold glass of ice and water (the system and not part of the room) begins to be equalized as portions of the thermal energy from the warm surroundings spread to the cooler system of ice and water. Over time the temperature of the glass and its contents and the temperature of the room become equal. The entropy of the room has decreased as some of its energy has been dispersed to the ice and water. However, as calculated in the example, the entropy of the system of ice and water has increased more than the entropy of the surrounding room has decreased. In an isolated system such as the room and ice water taken together, the dispersal of energy from warmer to cooler always results in a net increase in entropy. Thus, when the "universe" of the room and ice water system has reached a temperature equilibrium, the entropy change from the initial state is at a maximum. The entropy of the thermodynamic system is a measure of how far the equalization has progressed.

Thermodynamic entropy is a non-conserved state function that is of great importance in the sciences of physics and chemistry.[15][22] Historically, the concept of entropy evolved in order to explain why some processes (permitted by conservation laws) occur spontaneously while their time reversals (also permitted by conservation laws) do not; systems tend to progress in the direction of increasing entropy.[23][24] For isolated systems, entropy never decreases.[22] This fact has several important consequences in science: first, it prohibits "perpetual motion" machines; and second, it implies the arrow of entropy has the same direction as the arrow of time. Increases in entropy correspond to irreversible changes in a system, because some energy is expended as waste heat, limiting the amount of work a system can do.[15][16][25][26]
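A rough numerical sketch of this example (the amount of heat and the temperatures are assumed, and each temperature is treated as constant for the small parcel of heat considered) shows the net increase explicitly:

# Sketch: net entropy change when heat Q flows from a warm room to a glass of
# ice water (temperatures and Q are assumed, illustrative values only).
Q       = 1000.0   # J of heat transferred from the room to the ice water
T_room  = 293.15   # K, about 20 degrees C
T_ice   = 273.15   # K, ice-water mixture

dS_room   = -Q / T_room      # entropy lost by the surroundings
dS_system = +Q / T_ice       # entropy gained by the ice water
dS_total  = dS_room + dS_system

print(f"room: {dS_room:+.3f} J/K, ice water: {dS_system:+.3f} J/K, total: {dS_total:+.3f} J/K")
# the total is positive: dispersal of energy from warmer to cooler increases entropy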
Unlike many other functions of state, entropy cannot be directly observed but must be calculated. Entropy can be calculated for a substance as the standard molar entropy from absolute zero (also known as absolute entropy) or as a difference in entropy from some other reference state defined as zero entropy. Entropy has the dimension of energy divided by temperature, which has a unit of joules per kelvin (J/K) in the International System of Units. While these are the same units as heat capacity, the two concepts are distinct.[27] Entropy is not a conserved quantity: for example, in an isolated system with non-uniform temperature, heat might irreversibly flow and the temperature become more uniform, such that entropy increases. The second law of thermodynamics states that a closed system has entropy that may increase or otherwise remain constant. Chemical reactions cause changes in entropy, and entropy plays an important role in determining in which direction a chemical reaction spontaneously proceeds.
6.2.3 Second law of thermodynamics

Main article: Second law of thermodynamics

The second law of thermodynamics requires that, in general, the total entropy of any system will not decrease other than by increasing the entropy of some other system. Hence, in a system isolated from its environment, the entropy of that system will tend not to decrease. It follows that heat will not flow from a colder body to a hotter body without the application of work (the imposition of order) to the colder body. Secondly, it is impossible for any device operating on a cycle to produce net work from a single temperature reservoir; the production of net work requires flow of heat from a hotter reservoir to a colder reservoir, or a single expanding reservoir undergoing adiabatic cooling, which performs adiabatic work. As a result, there is no possibility of a perpetual motion system. It follows that a reduction in the increase of entropy in a specified process, such as a chemical reaction, means that it is energetically more efficient.

It follows from the second law of thermodynamics that the entropy of a system that is not isolated may decrease. An air conditioner, for example, may cool the air in a room, thus reducing the entropy of the air of that system. The heat expelled from the room (the system), which the air conditioner transports and discharges to the outside air, always makes a bigger contribution to the entropy of the environment than the decrease of the entropy of the air of that system. Thus, the total of the entropy of the room plus the entropy of the environment increases, in agreement with the second law of thermodynamics.

In mechanics, the second law in conjunction with the fundamental thermodynamic relation places limits on a system's ability to do useful work.[29] The entropy change of a system at temperature T absorbing an infinitesimal amount of heat δq in a reversible way is given by δq/T. More explicitly, an energy T_R ΔS is not available to do useful work, where T_R is the temperature of the coldest accessible reservoir or heat sink external to the system. For further discussion, see Exergy.

Statistical mechanics demonstrates that entropy is governed by probability, thus allowing for a decrease in disorder even in an isolated system. Although this is possible, such an event has a small probability of occurring, making it unlikely.[30]

One dictionary definition of entropy is that it is "a measure of thermal energy per unit temperature that is not available for useful work". For instance, a substance at uniform temperature is at maximum entropy and cannot drive a heat engine. A substance at non-uniform temperature is at a lower entropy (than if the heat distribution is allowed to even out) and some of the thermal energy can drive a heat engine.

A special case of entropy increase, the entropy of mixing, occurs when two or more different substances are mixed. If the substances are at the same temperature and pressure, there will be no net exchange of heat or work; the entropy change will be entirely due to the mixing of the different substances. At a statistical mechanical level, this results from the change in available volume per particle with mixing.[28]

6.2.4 Applications

The fundamental thermodynamic relation

Main article: Fundamental thermodynamic relation

The entropy of a system depends on its internal energy and its external parameters, such as its volume. In the thermodynamic limit, this fact leads to an equation relating the change in the internal energy U to changes in the entropy and the external parameters. This relation is known as the fundamental thermodynamic relation. If external pressure P bears on the volume V as the only external parameter, this relation is:

dU = T dS − P dV

Since both internal energy and entropy are monotonic functions of temperature T, implying that the internal energy is fixed when one specifies the entropy and the volume, this relation is valid even if the change from one state of thermal equilibrium to another with infinitesimally larger entropy and volume happens in a non-quasistatic way (so during this change the system may be very far out of thermal equilibrium, and then the entropy, pressure and temperature may not exist).

The fundamental thermodynamic relation implies many thermodynamic identities that are valid in general, independent of the microscopic details of the system. Important examples are the Maxwell relations and the relations between heat capacities.

Entropy in chemical thermodynamics

Thermodynamic entropy is central in chemical thermodynamics, enabling changes to be quantified and the outcome of reactions predicted. The second law of thermodynamics states that entropy in an isolated system, the combination of a subsystem under study and its surroundings, increases during all spontaneous chemical and physical processes. The Clausius equation of δq_rev/T = ΔS introduces the measurement of entropy change, ΔS. Entropy change describes the direction and quantifies the magnitude of simple changes such as heat transfer between systems, always from hotter to cooler spontaneously.

The thermodynamic entropy therefore has the dimension of energy divided by temperature, and the unit joule per kelvin (J/K) in the International System of Units (SI).

Thermodynamic entropy is an extensive property, meaning that it scales with the size or extent of a system. In many processes it is useful to specify the entropy as an intensive property independent of the size, as a specific entropy characteristic of the type of system studied. Specific entropy may be expressed relative to a unit of mass, typically the kilogram (unit: J kg⁻¹ K⁻¹). Alternatively, in chemistry, it is also referred to one mole of substance, in which case it is called the molar entropy, with a unit of J mol⁻¹ K⁻¹.

Thus, when one mole of substance at about 0 K is warmed by its surroundings to 298 K, the sum of the incremental values of q_rev/T constitutes each element's or compound's standard molar entropy, an indicator of the amount of energy stored by a substance at 298 K.[31][32] Entropy change also measures the mixing of substances as a summation of their relative quantities in the final mixture.[33]

Entropy is equally essential in predicting the extent and direction of complex chemical reactions. For such applications, ΔS must be incorporated in an expression that includes both the system and its surroundings, ΔS_universe = ΔS_surroundings + ΔS_system. This expression becomes, via some steps, the Gibbs free energy equation for reactants and products in the system: ΔG [the Gibbs free energy change of the system] = ΔH [the enthalpy change] − T ΔS [the entropy change].[31]
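As a numerical illustration of this expression (the ΔH and ΔS values below are assumed round numbers, not data from the text), a reaction with negative ΔH and negative ΔS is spontaneous only below the crossover temperature T = ΔH/ΔS:

# Sketch: using dG = dH - T*dS to judge spontaneity (illustrative numbers).
dH = -92.0e3    # J/mol, assumed exothermic enthalpy change
dS = -199.0     # J/(mol K), assumed entropy decrease

def gibbs_change(T):
    """Gibbs free energy change of the system at temperature T (in K)."""
    return dH - T * dS

for T in (298.15, 400.0, 500.0):
    dG = gibbs_change(T)
    print(f"T = {T:6.1f} K  dG = {dG/1000:8.2f} kJ/mol  "
          f"{'spontaneous' if dG < 0 else 'non-spontaneous'}")

print(f"crossover near T = {dH/dS:.0f} K")   # dG = 0 when T = dH/dS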

Entropy balance equation for open systems

[Figure: During steady-state continuous operation, an entropy balance applied to an open system accounts for system entropy changes related to heat flow and mass flow across the system boundary; heat Q̇ is added, shaft work Ẇ_shaft is performed external to the boundary, and enthalpy flows H_in and H_out cross the open system boundary.]

In chemical engineering, the principles of thermodynamics are commonly applied to "open systems", i.e. those in which heat, work, and mass flow across the system boundary. Flows of both heat (Q̇) and work, i.e. Ẇ_S (shaft work) and P(dV/dt) (pressure-volume work), across the system boundaries in general cause changes in the entropy of the system. Transfer as heat entails entropy transfer Q̇/T, where T is the absolute thermodynamic temperature of the system at the point of the heat flow. If there are mass flows across the system boundaries, they also influence the total entropy of the system. This account, in terms of heat and work, is valid only for cases in which the work and heat transfers are by paths physically distinct from the paths of entry and exit of matter from the system.[34][35]

To derive a generalized entropy balance equation, we start with the general balance equation for the change in any extensive quantity Θ in a thermodynamic system, a quantity that may be either conserved, such as energy, or non-conserved, such as entropy. The basic generic balance expression states that dΘ/dt, i.e. the rate of change of Θ in the system, equals the rate at which Θ enters the system at the boundaries, minus the rate at which Θ leaves the system across the system boundaries, plus the rate at which Θ is generated within the system. For an open thermodynamic system in which heat and work are transferred by paths separate from the paths for transfer of matter, using this generic balance equation, with respect to the rate of change with time t of the extensive quantity entropy S, the entropy balance equation is:[36][note 2]

dS/dt = Σ_{k=1}^{K} Ṁ_k Ŝ_k + Q̇/T + Ṡ_gen

where

Σ_{k=1}^{K} Ṁ_k Ŝ_k is the net rate of entropy flow due to the flows of mass into and out of the system (where Ŝ is entropy per unit mass),

Q̇/T is the rate of entropy flow due to the flow of heat across the system boundary, and

Ṡ_gen is the rate of entropy production within the system. This entropy production arises from processes within the system, including chemical reactions, internal matter diffusion, internal heat transfer, and frictional effects such as viscosity occurring within the system from mechanical work transfer to or from the system.

Note, also, that if there are multiple heat flows, the term Q̇/T is replaced by Σ_j Q̇_j/T_j, where Q̇_j is the heat flow and T_j is the temperature at the j-th heat flow port into the system.
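A minimal sketch of this balance (with invented stream and heat-flow values) applies it at steady state, where dS/dt = 0, to back out the entropy generation rate:

# Sketch of the open-system entropy balance
#   dS/dt = sum_k Mdot_k * S_k + Qdot/T + Sdot_gen
# at steady state (dS/dt = 0), solved for the entropy generation rate.
# Stream and heat-flow values are assumed, purely illustrative.

streams = [
    # (mass flow in kg/s, specific entropy in kJ/(kg K)); inflow positive, outflow negative
    (+2.0, 1.50),
    (-2.0, 1.65),
]
Q_dot = -50.0        # kJ/s of heat removed from the system
T_boundary = 310.0   # K, temperature where the heat crosses the boundary

entropy_flow_mass = sum(m_dot * s for m_dot, s in streams)   # kJ/(s K)
entropy_flow_heat = Q_dot / T_boundary                       # kJ/(s K)

S_gen = 0.0 - entropy_flow_mass - entropy_flow_heat          # from dS/dt = 0
print(f"entropy generation rate: {S_gen:.4f} kJ/(s K)")      # must be >= 0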

6.2.5 Entropy change formulas for simple processes

For certain simple transformations in systems of constant composition, the entropy changes are given by simple formulas.[37]

Isothermal expansion or compression of an ideal gas

For the expansion (or compression) of an ideal gas from an initial volume V0 and pressure P0 to a final volume V and pressure P at any constant temperature, the change in entropy is given by:

ΔS = nR ln(V/V0) = −nR ln(P/P0).

Here n is the number of moles of gas and R is the ideal gas constant. These equations also apply for expansion into a finite vacuum or a throttling process, where the temperature, internal energy and enthalpy for an ideal gas remain constant.

Cooling and heating

For heating or cooling of any system (gas, liquid or solid) at constant pressure from an initial temperature T0 to a final temperature T, the entropy change is

ΔS = n C_P ln(T/T0),

provided that the constant-pressure molar heat capacity (or specific heat) C_P is constant and that no phase transition occurs in this temperature interval.

Similarly at constant volume, the entropy change is

ΔS = n C_v ln(T/T0),

where the constant-volume heat capacity C_v is constant and there is no phase change.

At low temperatures near absolute zero, heat capacities of solids quickly drop off to near zero, so the assumption of constant heat capacity does not apply.[38]

Since entropy is a state function, the entropy change of any process in which temperature and volume both vary is the same as for a path divided into two steps: heating at constant volume and expansion at constant temperature. For an ideal gas, the total entropy change is[39]

ΔS = n C_v ln(T/T0) + nR ln(V/V0).

Similarly, if the temperature and pressure of an ideal gas both vary,

ΔS = n C_P ln(T/T0) − nR ln(P/P0).

Phase transitions

Reversible phase transitions occur at constant temperature and pressure. The reversible heat is the enthalpy change for the transition, and the entropy change is the enthalpy change divided by the thermodynamic temperature. For fusion (melting) of a solid to a liquid at the melting point T_m, the entropy of fusion is

ΔS_fus = ΔH_fus / T_m.

Similarly, for vaporization of a liquid to a gas at the boiling point T_b, the entropy of vaporization is

ΔS_vap = ΔH_vap / T_b.
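These formulas are straightforward to evaluate; the following sketch (with assumed substance data) computes an isothermal expansion, a constant-pressure heating, and the entropy of fusion of ice:

# Sketch: evaluating the simple-process entropy formulas (assumed data).
import math

R = 8.314  # J/(mol K), ideal gas constant

# Isothermal expansion of 1 mol of ideal gas to twice its volume:
dS_isothermal = 1.0 * R * math.log(2.0)                      # = nR ln(V/V0)

# Heating 1 mol of liquid water from 25 C to 75 C at constant pressure
# (Cp of about 75.3 J/(mol K) assumed constant, no phase change):
dS_heating = 1.0 * 75.3 * math.log(348.15 / 298.15)          # = nCp ln(T/T0)

# Fusion of 1 mol of ice at its melting point (dH_fus of about 6010 J/mol assumed):
dS_fusion = 6010.0 / 273.15                                  # = dH_fus / T_m

print(f"isothermal doubling: {dS_isothermal:.2f} J/K")
print(f"heating 25->75 C:    {dS_heating:.2f} J/K")
print(f"fusion of ice:       {dS_fusion:.2f} J/K")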

6.2.6 Approaches to understanding entropy

As a fundamental aspect of thermodynamics and physics, several different approaches to entropy beyond that of Clausius and Boltzmann are valid.

Standard textbook definitions

The following is a list of additional definitions of entropy from a collection of textbooks:

a measure of energy dispersal at a specific temperature.

a measure of disorder in the universe or of the availability of the energy in a system to do work.[40]

a measure of a system's thermal energy per unit temperature that is unavailable for doing useful work.[41]

In Boltzmann's definition, entropy is a measure of the number of possible microscopic states (or microstates) of a system in thermodynamic equilibrium. Consistent with the Boltzmann definition, the second law of thermodynamics needs to be re-worded as such that entropy increases over time, though the underlying principle remains the same.

Order and disorder

Main article: Entropy (order and disorder)

Entropy has often been loosely associated with the amount of order or disorder, or of chaos, in a thermodynamic system. The traditional qualitative description of entropy is that it refers to changes in the status quo of the system and is a measure of "molecular disorder" and the amount of wasted energy in a dynamical energy transformation from one state or form to another. In this direction, several recent authors have derived exact entropy formulas to account for and measure disorder and order in atomic and molecular assemblies.[42][43][44] One of the simpler entropy order/disorder formulas is that derived in 1984 by thermodynamic physicist Peter Landsberg, based on a combination of thermodynamics and information theory arguments. He argues that when constraints operate on a system, such that it is prevented from entering one or more of its possible or permitted states, as contrasted with its forbidden states, the measure of the total amount of disorder in the system is given by:[43][44]

Disorder = C_D / C_I.

Similarly, the total amount of order in the system is given by:

Order = 1 − C_O / C_I.

In which C_D is the "disorder" capacity of the system, which is the entropy of the parts contained in the permitted ensemble, C_I is the "information" capacity of the system, an expression similar to Shannon's channel capacity, and C_O is the "order" capacity of the system.[42]

Energy dispersal

Main article: Entropy (energy dispersal)

The concept of entropy can be described qualitatively as a measure of energy dispersal at a specific temperature.[45] Similar terms have been in use from early in the history of classical thermodynamics, and with the development of statistical thermodynamics and quantum theory, entropy changes have been described in terms of the mixing or "spreading" of the total energy of each constituent of a system over its particular quantized energy levels.

Ambiguities in the terms disorder and chaos, which usually have meanings directly opposed to equilibrium, contribute to widespread confusion and hamper comprehension of entropy for most students.[46] As the second law of thermodynamics shows, in an isolated system internal portions at different temperatures will tend to adjust to a single uniform temperature and thus produce equilibrium. A recently developed educational approach avoids ambiguous terms and describes such spreading out of energy as dispersal, which leads to loss of the differentials required for work even though the total energy remains constant in accordance with the first law of thermodynamics[47] (compare discussion in next section). Physical chemist Peter Atkins, for example, who previously wrote of dispersal leading to a disordered state, now writes that "spontaneous changes are always accompanied by a dispersal of energy".[48]

Relating entropy to energy usefulness

Following on from the above, it is possible (in a thermal context) to regard entropy as an indicator or measure of the effectiveness or usefulness of a particular quantity of energy.[49] This is because energy supplied at a high temperature (i.e. with low entropy) tends to be more useful than the same amount of energy available at room temperature. Mixing a hot parcel of a fluid with a cold one produces a parcel of intermediate temperature, in which the overall increase in entropy represents a loss which can never be replaced.
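This loss can be made quantitative with a small sketch (equal masses of hot and cold water and a constant specific heat are assumed): the total entropy change on mixing is positive even though no energy is lost, and roughly T0·ΔS of work-extraction potential is forgone:

# Sketch: entropy increase on mixing equal masses of hot and cold water
# (constant specific heat assumed; all numbers illustrative).
import math

m, c = 1.0, 4186.0                        # kg and J/(kg K) for liquid water (assumed constant)
T_hot, T_cold = 353.15, 293.15            # K
T_final = (T_hot + T_cold) / 2.0          # equal masses, same heat capacity

dS_hot  = m * c * math.log(T_final / T_hot)    # negative
dS_cold = m * c * math.log(T_final / T_cold)   # positive and larger in magnitude
dS_total = dS_hot + dS_cold

T_surroundings = 293.15
print(f"total entropy change: {dS_total:.1f} J/K")
print(f"work potential lost (about T0*dS): {T_surroundings * dS_total:.0f} J")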

Thus, the fact that the entropy of the universe is steadily increasing means that its total energy is becoming less useful: eventually, this will lead to the "heat death of the Universe".

Entropy and adiabatic accessibility

A definition of entropy based entirely on the relation of adiabatic accessibility between equilibrium states was given by E. H. Lieb and J. Yngvason in 1999.[50] This approach has several predecessors, including the pioneering work of Constantin Carathéodory from 1909[51] and the monograph by R. Giles from 1964.[52] In the setting of Lieb and Yngvason one starts by picking, for a unit amount of the substance under consideration, two reference states X0 and X1 such that the latter is adiabatically accessible from the former but not vice versa. Defining the entropies of the reference states to be 0 and 1 respectively, the entropy of a state X is defined as the largest number λ such that X is adiabatically accessible from a composite state consisting of an amount λ in the state X1 and a complementary amount, (1 − λ), in the state X0. A simple but important result within this setting is that entropy is uniquely determined, apart from a choice of unit and an additive constant for each chemical element, by the following properties: it is monotonic with respect to the relation of adiabatic accessibility, additive on composite systems, and extensive under scaling.

Entropy in quantum mechanics

Main article: von Neumann entropy

In quantum statistical mechanics, the concept of entropy was developed by John von Neumann and is generally referred to as "von Neumann entropy",

S = −k_B Tr(ρ log ρ),

where ρ is the density matrix and Tr is the trace operator. This upholds the correspondence principle, because in the classical limit, when the phases between the basis states used for the classical probabilities are purely random, this expression is equivalent to the familiar classical definition of entropy,

S = −k_B Σ_i p_i log p_i,

i.e. in such a basis the density matrix is diagonal.

Von Neumann established a rigorous mathematical framework for quantum mechanics with his work Mathematische Grundlagen der Quantenmechanik. He provided in this work a theory of measurement, where the usual notion of wave function collapse is described as an irreversible process (the so-called von Neumann or projective measurement). Using this concept, in conjunction with the density matrix, he extended the classical concept of entropy into the quantum domain.

Information theory

"I thought of calling it 'information', but the word was overly used, so I decided to call it 'uncertainty'. [...] Von Neumann told me, 'You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, nobody knows what entropy really is, so in a debate you will always have the advantage.'"

Conversation between Claude Shannon and John von Neumann regarding what name to give to the attenuation in phone-line signals[53]

Main articles: Entropy (information theory), Entropy in thermodynamics and information theory, and Entropic uncertainty

When viewed in terms of information theory, the entropy state function is simply the amount of information (in the Shannon sense) that would be needed to specify the full microstate of the system. This is left unspecified by the macroscopic description.

In information theory, entropy is the measure of the amount of information that is missing before reception and is sometimes referred to as Shannon entropy.[54] Shannon entropy is a broad and general concept which finds applications in information theory as well as thermodynamics. It was originally devised by Claude Shannon in 1948 to study the amount of information in a transmitted message. The definition of the information entropy is, however, quite general, and is expressed in terms of a discrete set of probabilities p_i so that

H(X) = −Σ_{i=1}^{n} p(x_i) log p(x_i).
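The two formulas can be compared directly in a short sketch (natural-log units, with Boltzmann's constant set to 1): for a density matrix that is diagonal in the chosen basis, −Tr(ρ ln ρ) reduces to the Shannon-type sum over the diagonal probabilities:

# Sketch: Shannon entropy of a distribution vs. von Neumann entropy of a
# density matrix (natural-log units; Boltzmann's constant set to 1).
import numpy as np

def shannon(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def von_neumann(rho):
    # S = -Tr(rho ln rho), computed from the eigenvalues of rho
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log(evals)))

p = [0.5, 0.25, 0.25]
rho_diag = np.diag(p)                       # density matrix diagonal in this basis
print(shannon(p), von_neumann(rho_diag))    # identical

# A pure state has zero entropy regardless of basis:
psi = np.array([[1.0], [1.0]]) / np.sqrt(2)
rho_pure = psi @ psi.T
print(von_neumann(rho_pure))                # essentially zero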

In the case of transmitted messages, these probabilities were the probabilities that a particular message was actually transmitted, and the entropy of the message system was a measure of the average amount of information in a message. For the case of equal probabilities (i.e. each message is equally probable), the Shannon entropy (in bits) is just the number of yes/no questions needed to determine the content of the message.[18]

The question of the link between information entropy and thermodynamic entropy is a debated topic. While most authors argue that there is a link between the two,[55][56][57][58][59] a few argue that they have nothing to do with each other.[18]

The expressions for the two entropies are similar. If W is the number of microstates that can yield a given macrostate, and each microstate has the same a priori probability, then that probability is p = 1/W. The Shannon entropy (in nats) will be

H = −Σ_{i=1}^{W} p log p = log W,

and if entropy is measured in units of k per nat, then the entropy is given[60] by

H = k log W,

which is the famous Boltzmann entropy formula when k is Boltzmann's constant, which may be interpreted as the thermodynamic entropy per nat. There are many ways of demonstrating the equivalence of information entropy and physics entropy, that is, the equivalence of Shannon entropy and Boltzmann entropy. Nevertheless, some authors argue for dropping the word entropy for the H function of information theory and using Shannon's other term "uncertainty" instead.[61]

6.2.7 Interdisciplinary applications of entropy

Although the concept of entropy was originally a thermodynamic construct, it has been adapted in other fields of study, including information theory, psychodynamics, thermoeconomics/ecological economics, and evolution.[42][62][63][64][65] For instance, an entropic argument has recently been proposed for explaining the preference of cave spiders in choosing a suitable area for laying their eggs.[66]

Thermodynamic and statistical mechanics concepts

Entropy unit – a non-SI unit of thermodynamic entropy, usually denoted "e.u." and equal to one calorie per kelvin per mole, or 4.184 joules per kelvin per mole.[67]

Gibbs entropy – the usual statistical mechanical entropy of a thermodynamic system.

Boltzmann entropy – a type of Gibbs entropy, which neglects internal statistical correlations in the overall particle distribution.

Tsallis entropy – a generalization of the standard Boltzmann–Gibbs entropy.

Standard molar entropy – the entropy content of one mole of substance, under conditions of standard temperature and pressure.

Residual entropy – the entropy present after a substance is cooled arbitrarily close to absolute zero.

Entropy of mixing – the change in the entropy when two different chemical substances or components are mixed.

Loop entropy – the entropy lost upon bringing together two residues of a polymer within a prescribed distance.

Conformational entropy – the entropy associated with the physical arrangement of a polymer chain that assumes a compact or globular state in solution.

Entropic force – a microscopic force or reaction tendency related to system organization changes, molecular frictional considerations, and statistical variations.

Free entropy – an entropic thermodynamic potential analogous to the free energy.

Entropic explosion – an explosion in which the reactants undergo a large change in volume without releasing a large amount of heat.

Entropy change – a change in entropy dS between two equilibrium states is given by the heat transferred dQ_rev divided by the absolute temperature T of the system in this interval.

Sackur–Tetrode entropy – the entropy of a monatomic classical ideal gas determined via quantum considerations.

The arrow of time

Main article: Entropy (arrow of time)

Entropy is the only quantity in the physical sciences that seems to imply a particular direction of progress, sometimes called an arrow of time. As time progresses, the second law of thermodynamics states that the entropy of an isolated system never decreases. Hence, from this perspective, entropy measurement is thought of as a kind of clock.

Cosmology

Main article: Heat death of the universe

Since a finite universe is an isolated system, the second law of thermodynamics states that its total entropy is constantly increasing. It has been speculated, since the 19th century, that the universe is fated to a heat death in which all the energy ends up as a homogeneous distribution of thermal energy, so that no more work can be extracted from any source.

If the universe can be considered to have generally increasing entropy, then, as Sir Roger Penrose has pointed out, gravity plays an important role in the increase because gravity causes dispersed matter to accumulate into stars, which collapse eventually into black holes. The entropy of a black hole is proportional to the surface area of the black hole's event horizon.[68] Jacob Bekenstein and Stephen Hawking have shown that black holes have the maximum possible entropy of any object of equal size. This makes them likely end points of all entropy-increasing processes, if they are totally effective matter and energy traps. However, the escape of energy from black holes might be possible due to quantum activity; see Hawking radiation. Hawking has recently changed his stance on some details, in a paper which largely redefined the event horizons of black holes.[69]

The role of entropy in cosmology remains a controversial subject since the time of Ludwig Boltzmann. Recent work has cast some doubt on the heat death hypothesis and the applicability of any simple thermodynamic model to the universe in general. Although entropy does increase in the model of an expanding universe, the maximum possible entropy rises much more rapidly, moving the universe further from the heat death with time, not closer.[70][71][72] This results in an "entropy gap" pushing the system further away from the posited heat death equilibrium.[73] Other complicating factors, such as the energy density of the vacuum and macroscopic quantum effects, are difficult to reconcile with thermodynamical models, making any predictions of large-scale thermodynamics extremely difficult.[74] The entropy gap is widely believed to have been originally opened up by the early rapid exponential expansion of the universe.

Economics

See also: Nicholas Georgescu-Roegen – The relevance of thermodynamics to economics, and Ecological economics – Methodology

Romanian-American economist Nicholas Georgescu-Roegen, a progenitor in economics and a paradigm founder of ecological economics, made extensive use of the entropy concept in his magnum opus on The Entropy Law and the Economic Process.[75] Due to Georgescu-Roegen's work, the laws of thermodynamics now form an integral part of the ecological economics school.[76]:204f [77]:29-35 Although his work was blemished somewhat by mistakes, a full chapter on the economics of Georgescu-Roegen has approvingly been included in one elementary physics textbook on the historical development of thermodynamics.[78]:95-112

In economics, Georgescu-Roegen's work has generated the term 'entropy pessimism'.[79]:116 Since the 1990s, leading ecological economist and steady-state theorist Herman Daly, a student of Georgescu-Roegen, has been the economics profession's most influential proponent of the entropy pessimism position.[80]:545f

6.2.8 See also

Autocatalytic reactions and order creation
Brownian ratchet
Clausius–Duhem inequality
Configuration entropy
Departure function

Enthalpy
Entropic force
Entropy (information theory)
Entropy (computing)
Entropy and life
Entropy (order and disorder)
Entropy rate
Geometrical frustration
Laws of thermodynamics


Multiplicity function
Negentropy (negative entropy)
Orders of magnitude (entropy)
Stirlings formula
Thermodynamic databases for pure substances
Thermodynamic potential
Wavelet entropy

[12] Clausius, Rudolf (1865). Ueber verschiedene fr die Anwendung bequeme Formen der Hauptgleichungen der mechanischen Wrmetheorie: vorgetragen in der naturforsch.
Gesellschaft den 24. April 1865. p. 46.
[13] Atkins, Peter; Julio De Paula (2006). Physical Chemistry,
8th ed. Oxford University Press. p. 79. ISBN 0-19-8700725.
[14] Engel, Thomas; Philip Reid (2006). Physical Chemistry.
Pearson Benjamin Cummings. p. 86. ISBN 0-8053-3842X.
[15] McGraw-Hill Concise Encyclopedia of Chemistry, 2004

6.2.9

Notes

[1] A machine in this context includes engineered devices as


well as biological organisms.
[2] The overdots represent derivatives of the quantities with respect to time.

6.2.10

References

[1] Carnot, Sadi (17961832)". Wolfram Research. 2007.


Retrieved 2010-02-24.
[2] McCulloch, Richard, S. (1876). Treatise on the Mechanical
Theory of Heat and its Applications to the Steam-Engine, etc.
D. Van Nostrand.
[3] Clausius, Rudolf (1850). On the Motive Power of Heat, and
on the Laws which can be deduced from it for the Theory of
Heat. Poggendors Annalen der Physick, LXXIX (Dover
Reprint). ISBN 0-486-59065-8.

[16] Sethna, J. Statistical Mechanics Oxford University Press


2006 p. 78
[17] Barnes & Nobles Essential Dictionary of Science, 2004
[18] Frigg, R. and Werndl, C. Entropy A Guide for the Perplexed. In Probabilities in Physics; Beisbart C. and Hartmann, S. Eds; Oxford University Press, Oxford, 2010
[19] Schroeder, Daniel V. An Introduction to Thermal Physics.
Addison Wesley Longman, 1999, p. 57
[20] EntropyOrderParametersComplexity.pdf www.physics.
cornell.edu" (PDF). Retrieved 2012-08-17.
[21] Jaynes, E. T., The Gibbs Paradox, In Maximum Entropy
and Bayesian Methods; Smith, C. R; Erickson, G. J; Neudorfer, P. O., Eds; Kluwer Academic: Dordrecht, 1992, pp.
122 (PDF). Retrieved 2012-08-17.
[22] Sandler S. I., Chemical and Engineering Thermodynamics,
3rd Ed. Wiley, New York, 1999 p. 91

[4] The scientic papers of J. Willard Gibbs in Two Volumes 1.


Longmans, Green, and Co. 1906. p. 11. Retrieved 201102-26.

[23] McQuarrie D. A., Simon J. D., Physical Chemistry: A Molecular Approach, University Science Books, Sausalito 1997 p.
817

[5] J. A. McGovern, 2.5 Entropy at the Wayback Machine


(archived September 23, 2012)

[24] Haynie, Donald, T. (2001). Biological Thermodynamics.


Cambridge University Press. ISBN 0-521-79165-0.

[6] Irreversibility, Entropy Changes, and Lost Work Thermodynamics and Propulsion, Z. S. Spakovszky, 2002

[25] Oxford Dictionary of Science, 2005

[7] What is entropy? Thermodynamics of Chemical Equilibrium by S. Lower, 2007

[26] de Rosnay, Joel (1979). The Macroscope a New World


View (written by an M.I.T.-trained biochemist). Harper &
Row, Publishers. ISBN 0-06-011029-5.

[8] B. H. Lavenda, A New Perspective on Thermodynamics


Springer, 2009, Sec. 2.3.4,

[27] J. A. McGovern, Heat Capacities at the Wayback Machine


(archived August 19, 2012)

[9] S. Carnot, Reexions on the Motive Power of Fire, translated and annotated by R. Fox, Manchester University Press,
1986, p. 26; C. Truesdell, The Tragicomical History of
Thermodynamics, Springer, 1980, pp. 7885

[28] Ben-Naim, Arieh, On the So-Called Gibbs Paradox, and on


the Real Paradox, Entropy, 9, pp. 132136, 2007 Link

[10] J. Clerk-Maxwell, Theory of Heat, 10th ed. Longmans,


Green and Co., 1891, pp. 155158.
[11] R. Clausius, The Mechanical Theory of Heat, translated
by T. Archer Hirst, van Voorst, 1867, p. 28

[29] Daintith, John (2005). Oxford Dictionary of Physics. Oxford


University Press. ISBN 0-19-280628-9.
[30] ""Entropy production theorems and some consequences,
Physical Review E; Saha, Arnab; Lahiri, Sourabh; Jayannavar, A. M; The American Physical Society: 14 July 2009,
pp. 110. Link.aps.org. Retrieved 2012-08-17.


[31] Moore, J. W.; C. L. Stanistski; P. C. Jurs (2005). Chemistry,


The Molecular Science. Brooks Cole. ISBN 0-534-42201-2.
[32] Jungermann, A.H. (2006). Entropy and the Shelf Model:
A Quantum Physical Approach to a Physical Property.
Journal of Chemical Education 83 (11): 16861694.
Bibcode:2006JChEd..83.1686J. doi:10.1021/ed083p1686.
[33] Levine, I. N. (2002). Physical Chemistry, 5th ed. McGrawHill. ISBN 0-07-231808-2.
[34] Born, M. (1949). Natural Philosophy of Cause and Chance,
Oxford University Press, London, pp. 44, 146147.
[35] Haase, R. (1971). Survey of Fundamental Laws, chapter 1 of Thermodynamics, pages 197 of volume 1, ISBN
0122456017, ed. W. Jost, of Physical Chemistry. An Advanced Treatise, ed. H. Eyring, D. Henderson, W. Jost, Academic Press, New York, p. 35.


[49] Sandra Saary (Head of Science, Latifa Girls School, Dubai)


(23 February 1993). Book Review of A Science Miscellany"". Khaleej Times (Galadari Press, UAE): XI.
[50] Elliott H. Lieb, Jakob Yngvason: The Physics and Mathematics of the Second Law of Thermodynamics, Phys. Rep.
310, pp. 196 (1999)
[51] Constantin Carathodory: Untersuchungen ber die Grundlagen der Thermodynamik, Math. Ann., 67, pp. 355386,
1909
[52] Robin Giles: Mathematical Foundations of Thermodynamics, Pergamon, Oxford 1964
[53] M. Tribus, E.C. McIrvine, Energy and information, Scientic American, 224 (September 1971), pp. 178184

[36] Sandler, Stanley, I. (1989). Chemical and Engineering Thermodynamics. John Wiley & Sons. ISBN 0-471-83050-X.

[54] Balian, Roger (2004). Entropy, a Protean concept. In Dalibard, Jean. Poincar Seminar 2003: Bose-Einstein condensation - entropy. Basel: Birkhuser. pp. 119144. ISBN
9783764371166.

[37] GRC.nasa.gov. GRC.nasa.gov. 2000-03-27. Retrieved


2012-08-17.

[55] Brillouin, Leon (1956). Science and Information Theory.


ISBN 0-486-43918-6.

[38] The Third Law Chemistry 433, Stefan Franzen, ncsu.edu

[56] Georgescu-Roegen, Nicholas (1971). The Entropy Law and


the Economic Process. Harvard University Press. ISBN 0674-25781-2.

[39] GRC.nasa.gov. GRC.nasa.gov. 2008-07-11. Retrieved


2012-08-17.
[40] Gribbins Q Is for Quantum: An Encyclopedia of Particle
Physics, Free Press ISBN 0-684-85578-X, 2000
[41] Entropy Encyclopdia Britannica
[42] Brooks, Daniel, R.; Wiley, E.O. (1988). Evolution as
Entropy Towards a Unied Theory of Biology. University
of Chicago Press. ISBN 0-226-07574-5.
[43] Landsberg, P.T. (1984). Is Equilibrium always an Entropy Maximum?". J. Stat. Physics 35: 159169.
Bibcode:1984JSP....35..159L. doi:10.1007/bf01017372.
[44] Landsberg, P.T. (1984).
Can Entropy and Order Increase Together?". Physics Letters 102A: 171
173. Bibcode:1984PhLA..102..171L. doi:10.1016/03759601(84)90934-4.
[45] Frank L. Lambert, A Students Approach to the Second Law
and Entropy
[46] Carson, E. M. and J. R. Watson (Department of Educational and Professional Studies, Kings College, London), Undergraduate students understandings of entropy and
Gibbs Free energy, University Chemistry Education 2002
Papers, Royal Society of Chemistry
[47] Frank L. Lambert, JCE 2002 (79) 187 [Feb] Disorder A
Cracked Crutch for Supporting Entropy Discussions
[48] Atkins, Peter (1984). The Second Law. Scientic American
Library. ISBN 0-7167-5004-X.

[57] Chen, Jing (2005). The Physical Foundation of Economics


an Analytical Thermodynamic Theory. World Scientic.
ISBN 981-256-323-7.
[58] Kalinin, M.I.; Kononogov, S.A. (2005). Boltzmanns
constant.
Measurement Techniques 48: 632636.
doi:10.1007/s11018-005-0195-9.
[59] Ben-Naim A. (2008), Entropy Demystied (World Scientic).
[60] Edwin T. Jaynes Bibliography. Bayes.wustl.edu. 199803-02. Retrieved 2009-12-06.
[61] Schneider, Tom, DELILA system (Deoxyribonucleic acid
Library Language), (Information Theory Analysis of binding sites), Laboratory of Mathematical Biology, National
Cancer Institute, FCRDC Bldg. 469. Rm 144, P.O. Box.
B Frederick, MD 21702-1201, USA
[62] Avery, John (2003). Information Theory and Evolution.
World Scientic. ISBN 981-238-399-9.
[63] Yockey, Hubert, P. (2005). Information Theory, Evolution,
and the Origin of Life. Cambridge University Press. ISBN
0-521-80293-8.
[64] Chiavazzo, Eliodoro; Fasano, Matteo; Asinari, Pietro.
Inference of analytical thermodynamic models for biological networks. Physica A: Statistical Mechanics and its Applications 392: 11221132. Bibcode:2013PhyA..392.1122C.
doi:10.1016/j.physa.2012.11.030.


[65] Chen, Jing (2015). The Unity of Science and Economics: A


New Foundation of Economic Theory. http://www.springer.
com/us/book/9781493934645: Springer.
[66] Chiavazzo, Eliodoro; Isaia, Marco; Mammola, Stefano;
Lepore, Emiliano; Ventola, Luigi; Asinari, Pietro; Pugno,
Nicola Maria. Cave spiders choose optimal environmental factors with respect to the generated entropy
when laying their cocoon. Scientic Reports 5: 7611.
Bibcode:2015NatSR...5E7611C. doi:10.1038/srep07611.
[67] IUPAC, Compendium of Chemical Terminology, 2nd ed.
(the Gold Book) (1997). Online corrected version:
(2006) "Entropy unit".
[68] von Baeyer, Christian, H. (2003).
Informationthe
New Language of Science. Harvard University Press.
ISBN 0-674-01387-5.Srednicki M (August 1993). Entropy and area. Phys. Rev. Lett. 71 (5): 666669.
arXiv:hep-th/9303048.
Bibcode:1993PhRvL..71..666S.
doi:10.1103/PhysRevLett.71.666.
PMID
10055336.Callaway DJE (April 1996). Surface tension, hydrophobicity, and black holes: The entropic
connection. Phys. Rev. E 53 (4): 37383744. arXiv:condmat/9601111.
Bibcode:1996PhRvE..53.3738C.
doi:10.1103/PhysRevE.53.3738. PMID 9964684.
[69] Buchan, Lizzy. Black holes do not exist, says Stephen
Hawking. Cambridge News. Retrieved 27 January 2014.
[70] Layzer, David (1988). Growth of Order in the Universe.
MIT Press.
[71] Chaisson, Eric J. (2001). Cosmic Evolution: The Rise of
Complexity in Nature. Harvard University Press. ISBN 0674-00342-X.
[72] Lineweaver, Charles H.; Davies, Paul C. W.; Ruse, Michael,
eds. (2013). Complexity and the Arrow of Time. Cambridge
University Press. ISBN 978-1-107-02725-1.
[73] Stenger, Victor J. (2007). God: The Failed Hypothesis.
Prometheus Books. ISBN 1-59102-481-1.
[74] Benjamin Gal-Or (1981, 1983, 1987). Cosmology, Physics
and Philosophy. Springer Verlag. ISBN 0-387-96526-2.
Check date values in: |date= (help)
[75] Georgescu-Roegen, Nicholas (1971). The Entropy Law and
the Economic Process. (PDF contains only the introductory
chapter of the book). Cambridge, Massachusetts: Harvard
University Press. ISBN 0674257804.
[76] Cleveland, Cutler J.; Ruth, Matthias (1997). When,
where, and by how much do biophysical limits constrain
the economic process? A survey of Nicholas GeorgescuRoegens contribution to ecological economics (PDF).
Ecological Economics (Amsterdam: Elsevier) 22 (3): 203
223. doi:10.1016/s0921-8009(97)00079-7.
[77] Daly, Herman E.; Farley, Joshua (2011). Ecological
Economics. Principles and Applications. (PDF contains
full book) (2nd ed.). Washington: Island Press. ISBN
9781597266819.


[78] Schmitz, John E.J. (2007). The Second Law of Life: Energy,
Technology, and the Future of Earth As We Know It. (Link to
the authors science blog, based on his textbook). Norwich:
William Andrew Publishing. ISBN 0815515375.
[79] Ayres, Robert U. (2007). On the practical limits to substitution (PDF). Ecological Economics (Amsterdam: Elsevier)
61: 115128. doi:10.1016/j.ecolecon.2006.02.011.
[80] Kerschner, Christian (2010).
Economic de-growth
vs. steady-state economy (PDF). Journal of Cleaner
Production (Amsterdam:
Elsevier) 18:
544551.
doi:10.1016/j.jclepro.2009.10.019.

6.2.11

Further reading

Atkins, Peter; Julio De Paula (2006). Physical Chemistry, 8th ed. Oxford University Press. ISBN 0-19870072-5.
Baierlein, Ralph (2003). Thermal Physics. Cambridge
University Press. ISBN 0-521-65838-1.
Ben-Naim, Arieh (2007). Entropy Demystied. World
Scientic. ISBN 981-270-055-2.
Callen, Herbert, B (2001). Thermodynamics and an
Introduction to Thermostatistics, 2nd Ed. John Wiley
and Sons. ISBN 0-471-86256-8.
Chang, Raymond (1998). Chemistry, 6th Ed. New
York: McGraw Hill. ISBN 0-07-115221-0.
Cutnell, John, D.; Johnson, Kenneth, J. (1998).
Physics, 4th ed. John Wiley and Sons, Inc. ISBN 0471-19113-2. Cite uses deprecated parameter |coauthor= (help)
Dugdale, J. S. (1996). Entropy and its Physical Meaning (2nd ed.). Taylor and Francis (UK); CRC (US).
ISBN 0-7484-0569-0.
Fermi, Enrico (1937). Thermodynamics. Prentice
Hall. ISBN 0-486-60361-X.
Goldstein, Martin; Inge, F (1993). The Refrigerator
and the Universe. Harvard University Press. ISBN 0674-75325-9.
Gyftopoulos, E.P.; G.P. Beretta (1991, 2005, 2010).
Thermodynamics. Foundations and Applications.
Dover. ISBN 0-486-43932-1. Check date values in:
|date= (help)
Haddad, Wassim M.; Chellaboina, VijaySekhar;
Nersesov, Sergey G. (2005). Thermodynamics A
Dynamical Systems Approach. Princeton University
Press. ISBN 0-691-12327-6.


Kroemer, Herbert; Kittel, Charles (1980). Thermal Physics (2nd ed.). W. H. Freeman Company. ISBN 0-7167-1088-9.

Penrose, Roger (2005). The Road to Reality: A Complete Guide to the Laws of the Universe. New York: A. A. Knopf. ISBN 0-679-45443-8.

Reif, F. (1965). Fundamentals of Statistical and Thermal Physics. McGraw-Hill. ISBN 0-07-051800-9.

Schroeder, Daniel V. (2000). Introduction to Thermal Physics. New York: Addison Wesley Longman. ISBN 0-201-38027-7.

Serway, Raymond A. (1992). Physics for Scientists and Engineers. Saunders Golden Sunburst Series. ISBN 0-03-096026-6.

Spirax-Sarco Limited, Entropy – A Basic Understanding. A primer on entropy tables for steam engineering.

von Baeyer, Hans Christian (1998). Maxwell's Demon: Why Warmth Disperses and Time Passes. Random House. ISBN 0-679-43342-2.

Entropy for Beginners – a wikibook

An Intuitive Guide to the Concept of Entropy Arising in Various Sectors of Science – a wikibook

6.2.12 External links

Entropy and the Second Law of Thermodynamics – an A-level physics lecture with a detailed derivation of entropy based on the Carnot cycle

Khan Academy: entropy lectures, part of the Chemistry playlist

The Discovery of Entropy, by Adam Shulman. Hour-long video, January 2013.

Moriarty, Philip; Merrifield, Michael (2009). S Entropy. Sixty Symbols. Brady Haran for the University of Nottingham.

Lambert, Frank L.; entropysite.oxy.edu

Entropy – Scholarpedia

Thermodynamic Entropy Definition Clarification

Reconciling Thermodynamic and State Definitions of Entropy

Entropy Intuition

More on Entropy

The Second Law of Thermodynamics and Entropy – Yale OYC lecture, part of Fundamentals of Physics I (PHYS 200)

Entropy and the Clausius inequality – MIT OCW lecture, part of 5.60 Thermodynamics & Kinetics, Spring 2008

Proof: S (or Entropy) is a valid state variable

6.3 Pressure

This article is about pressure in the physical sciences. For other uses, see Pressure (disambiguation).

[Figure: Pressure as exerted by particle collisions inside a closed container.]

Pressure (symbol: p or P) is the force applied perpendicular to the surface of an object per unit area over which that force is distributed. Gauge pressure (also spelled gage pressure)[lower-alpha 1] is the pressure relative to the ambient pressure.

Various units are used to express pressure. Some of these derive from a unit of force divided by a unit of area; the SI unit of pressure, the pascal (Pa), for example, is one newton per square metre; similarly, the pound-force per square inch (psi) is the traditional unit of pressure in the imperial and US customary systems. Pressure may also be expressed in terms of standard atmospheric pressure; the atmosphere (atm) is equal to this pressure and the torr is defined as 1/760 of this. Manometric units such as the centimetre of water, millimetre of mercury, and inch of mercury are used to express pressures in terms of the height of column of a particular fluid in a manometer.

6.3.1 Definition

Pressure is the amount of force acting per unit area. The symbol for pressure is p or P.[1] The IUPAC recommendation for pressure is a lower-case p.[2] However, upper-case P is widely used. The usage of P vs p depends on the field in which one is working, on the nearby presence of other symbols for quantities such as power and momentum, and on writing style.

Formula

Mathematically:

p = F / A

where p is the pressure, F is the normal force, and A is the area of the surface on contact.

Pressure is a scalar quantity. It relates the vector surface element (a vector normal to the surface) with the normal force acting on it. The pressure is the scalar proportionality constant that relates the two normal vectors:

dF_n = −p dA = −p n dA.

The minus sign comes from the fact that the force is considered towards the surface element, while the normal vector points outward. The equation has meaning in that, for any surface S in contact with the fluid, the total force exerted by the fluid on that surface is the surface integral over S of the right-hand side of the above equation.

It is incorrect (although rather usual) to say "the pressure is directed in such or such direction". The pressure, as a scalar, has no direction. The force given by the previous relationship to the quantity has a direction, but the pressure does not. If we change the orientation of the surface element, the direction of the normal force changes accordingly, but the pressure remains the same.

Pressure is transmitted to solid boundaries or across arbitrary sections of fluid normal to these boundaries or sections at every point. It is a fundamental parameter in thermodynamics, and it is conjugate to volume.

Units

[Figure: A mercury column.]

The SI unit for pressure is the pascal (Pa), equal to one newton per square metre (N/m² or kg·m⁻¹·s⁻²). This name for the unit was added in 1971;[3] before that, pressure in SI was expressed simply in newtons per square metre. Other units of pressure, such as pounds per square inch and
bar, are also in common use. The CGS unit of pressure is the barye (Ba), equal to 1 dyn·cm⁻² or 0.1 Pa. Pressure is sometimes expressed in grams-force or kilograms-force per square centimetre (g/cm² or kg/cm²) and the like without properly identifying the force units. But using the names kilogram, gram, kilogram-force, or gram-force (or their symbols) as units of force is expressly forbidden in SI. The technical atmosphere (symbol: at) is 1 kgf/cm² (98.0665 kPa or 14.223 psi).

Since a system under pressure has the potential to perform work on its surroundings, pressure is a measure of potential energy stored per unit volume. It is therefore related to energy density and may be expressed in units such as joules per cubic metre (J/m³, which is equal to Pa).

Some meteorologists prefer the hectopascal (hPa) for atmospheric air pressure, which is equivalent to the older unit millibar (mbar). Similar pressures are given in kilopascals (kPa) in most other fields, where the hecto- prefix is rarely used. The inch of mercury is still used in the United States. Oceanographers usually measure underwater pressure in decibars (dbar) because pressure in the ocean increases by approximately one decibar per metre depth.

The standard atmosphere (atm) is an established constant. It is approximately equal to typical air pressure at Earth mean sea level and is defined as 101325 Pa.

Because pressure is commonly measured by its ability to displace a column of liquid in a manometer, pressures are often expressed as a depth of a particular fluid (e.g., centimetres of water, millimetres of mercury or inches of mercury). The most common choices are mercury (Hg) and water; water is nontoxic and readily available, while mercury's high density allows a shorter column (and so a smaller manometer) to be used to measure a given pressure. The pressure exerted by a column of liquid of height h and density ρ is given by the hydrostatic pressure equation p = ρgh, where g is the gravitational acceleration. Fluid density and local gravity can vary from one reading to another depending on local factors, so the height of a fluid column does not define pressure precisely. When millimetres of mercury or inches of mercury are quoted today, these units are not based on a physical column of mercury; rather, they have been given precise definitions that can be expressed in terms of SI units. One millimetre of mercury is approximately equal to one torr. The water-based units still depend on the density of water, a measured, rather than defined, quantity. These manometric units are still encountered in many fields. Blood pressure is measured in millimetres of mercury in most of the world, and lung pressures in centimetres of water are still common.
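The hydrostatic relation p = ρgh is easy to evaluate; the following sketch (with nominal densities) shows why a mercury column is so much shorter than a water column for the same pressure:

# Sketch: column height needed to balance one standard atmosphere,
# p = rho * g * h  =>  h = p / (rho * g).  Densities are nominal values.
P_ATM = 101325.0     # Pa
G     = 9.80665      # m/s^2, standard gravity

fluids = {
    "water":   1000.0,    # kg/m^3 (nominal)
    "mercury": 13595.1,   # kg/m^3 (nominal, near 0 C)
}

for name, rho in fluids.items():
    h = P_ATM / (rho * G)
    print(f"{name:8s}: {h:7.3f} m of column per atmosphere")
# mercury: about 0.760 m; water: about 10.3 m; hence mercury's use in compact manometers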
Underwater divers use the metre sea water (msw or MSW)
and foot sea water (fsw or FSW) units of pressure, and these
are the standard units for pressure gauges used to measure

pressure exposure in diving chambers and personal decompression computers. A msw is dened as 0.1 bar, and is not
the same as a linear metre of depth, and 33.066 fsw = 1
atm.[4] Note that the pressure conversion from msw to fsw
is dierent from the length conversion: 10 msw = 32.6336
fsw, while 10 m = 32.8083 ft
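As a quick numerical check of the diving units just quoted, the sketch below simply applies the definitions given in the text (1 msw = 0.1 bar and 33.066 fsw = 1 atm); it is an added illustration, not part of the original article.

```python
# Verify the msw/fsw figures quoted above from their definitions.
PA_PER_MSW = 10_000.0             # 1 msw is defined as 0.1 bar
PA_PER_FSW = 101_325.0 / 33.066   # from 33.066 fsw = 1 atm

print(10 * PA_PER_MSW / PA_PER_FSW)  # ~32.63 fsw for 10 msw (text: 32.6336)
print(10 / 0.3048)                   # ~32.81 ft in 10 m; the length ratio differs
```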
Gauge pressure is often given in units with 'g' appended,
e.g. 'kPag', 'barg' or 'psig', and units for measurements of
absolute pressure are sometimes given a sux of 'a', to
avoid confusion, for example 'kPaa', 'psia'. However, the
US National Institute of Standards and Technology recommends that, to avoid confusion, any modiers be instead
applied to the quantity being measured rather than the unit
of measure.[5] For example, "p = 100 psi" rather than "p = 100 psig".
Dierential pressure is expressed in units with 'd' appended;
this type of measurement is useful when considering sealing
performance or whether a valve will open or close.
Presently or formerly popular pressure units include the following:
atmosphere (atm)
manometric units:
centimetre, inch, millimetre (torr) and micrometre (mTorr, micron) of mercury
Height of equivalent column of water, including millimetre (mm H2O), centimetre (cm H2O), metre, inch, and foot of water
imperial and customary units:
kip, short ton-force, long ton-force, pound-force,
ounce-force, and poundal per square inch
short ton-force and long ton-force per square
inch
fsw (feet sea water) used in underwater diving,
particularly in connection with diving pressure
exposure and decompression
non-SI metric units:
bar, decibar, millibar
msw (metres sea water), used in underwater
diving, particularly in connection with diving pressure exposure and decompression
kilogram-force, or kilopond, per square centimetre (technical atmosphere)
gram-force and tonne-force (metric ton-force)
per square centimetre
barye (dyne per square centimetre)


kilogram-force and tonne-force per square metre
sthene per square metre (pieze)

Examples
For gases, pressure is sometimes measured not as an absolute pressure, but relative to atmospheric pressure; such
measurements are called gauge pressure. An example of
this is the air pressure in an automobile tire, which might be
said to be "220 kPa (32 psi)", but is actually 220 kPa (32
psi) above atmospheric pressure. Since atmospheric pressure at sea level is about 100 kPa (14.7 psi), the absolute
pressure in the tire is therefore about 320 kPa (46.7 psi).
In technical work, this is written "a gauge pressure of 220 kPa (32 psi)". Where space is limited, such as on pressure gauges, name plates, graph labels, and table headings, the use of a modifier in parentheses, such as "kPa (gauge)" or "kPa (absolute)", is permitted. In non-SI technical work, a gauge pressure of 32 psi is sometimes written as "32 psig" and an absolute pressure as "32 psia", though the other
methods explained above that avoid attaching characters to
the unit of pressure are preferred.[6]
Gauge pressure is the relevant measure of pressure wherever one is interested in the stress on storage vessels and
the plumbing components of uidics systems. However,
whenever equation-of-state properties, such as densities or
changes in densities, must be calculated, pressures must be
expressed in terms of their absolute values. For instance, if
the atmospheric pressure is 100 kPa, a gas (such as helium)
at 200 kPa (gauge) (300 kPa [absolute]) is 50% denser than
the same gas at 100 kPa (gauge) (200 kPa [absolute]). Focusing on gauge values, one might erroneously conclude the
rst sample had twice the density of the second one.
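The pitfall described above is easy to demonstrate numerically. The short sketch below is an added illustration (not from the original article); it assumes the same round atmospheric pressure of 100 kPa and an ideal gas at a fixed temperature, for which density is proportional to absolute pressure.

```python
# Compare gas densities using absolute rather than gauge pressures.
P_ATM = 100_000.0  # Pa, the round figure used in the example above

def absolute(p_gauge_pa):
    """Absolute pressure corresponding to a gauge reading."""
    return p_gauge_pa + P_ATM

sample_1 = absolute(200_000.0)  # 200 kPa (gauge) -> 300 kPa (absolute)
sample_2 = absolute(100_000.0)  # 100 kPa (gauge) -> 200 kPa (absolute)

print(sample_1 / sample_2)      # 1.5: the first sample is 50% denser
print(200_000.0 / 100_000.0)    # 2.0: the misleading ratio of gauge values
```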

The effects of an external pressure of 700 bar on an aluminum cylinder with 5 mm wall thickness

As an example of varying pressures, a finger can be pressed against a wall without making any lasting impression; however, the same finger pushing a thumbtack can easily damage the wall. Although the force applied to the surface is the same, the thumbtack applies more pressure because the point concentrates that force into a smaller area. Pressure is transmitted to solid boundaries or across arbitrary sections of fluid normal to these boundaries or sections at every point. Unlike stress, pressure is defined as a scalar quantity. The negative gradient of pressure is called the force density.

Another example is a common knife. If we try to cut a fruit with the flat side, it obviously will not cut. But if we use the thin edge, it cuts smoothly. The reason is that the flat side has a greater surface area (less pressure) and so it does not cut the fruit; when we use the thin edge, the surface area is reduced, so it cuts the fruit easily and quickly. This is one example of a practical application of pressure.

Scalar nature
In a static gas, the gas as a whole does not appear to move.


The individual molecules of the gas, however, are in constant random motion. Because we are dealing with an extremely large number of molecules and because the motion
of the individual molecules is random in every direction, we
do not detect any motion. If we enclose the gas within a container, we detect a pressure in the gas from the molecules
colliding with the walls of our container. We can put the
walls of our container anywhere inside the gas, and the force
per unit area (the pressure) is the same. We can shrink the
size of our container down to a very small point (becoming less true as we approach the atomic scale), and the pressure will still have a single value at that point. Therefore,
pressure is a scalar quantity, not a vector quantity. It has

magnitude but no direction sense associated with it. Pressure acts in all directions at a point inside a gas. At the
surface of a gas, the pressure force acts perpendicular (at
right angle) to the surface.
A closely related quantity is the stress tensor σ, which relates the vector force F to the vector area A via the linear relation F = σA.

This tensor may be expressed as the sum of the viscous stress tensor minus the hydrostatic pressure. The negative of the stress tensor is sometimes called the pressure tensor, but in the following, the term pressure will refer only to the scalar pressure.

According to the theory of general relativity, pressure increases the strength of a gravitational field (see stress-energy tensor) and so adds to the mass-energy cause of gravity. This effect is unnoticeable at everyday pressures but is significant in neutron stars, although it has not been experimentally tested.[7]

6.3.2 Types

Fluid pressure

Fluid pressure is the pressure at some point within a fluid, such as water or air (for more information specifically about liquid pressure, see the section below).

Fluid pressure occurs in one of two situations:

1. an open condition, called open channel flow, e.g. the ocean, a swimming pool, or the atmosphere.

2. a closed condition, called a closed conduit, e.g. a water line or gas line.

Pressure in open conditions usually can be approximated as the pressure in static or non-moving conditions (even in the ocean where there are waves and currents), because the motions create only negligible changes in the pressure. Such conditions conform with principles of fluid statics. The pressure at any given point of a non-moving (static) fluid is called the hydrostatic pressure.

Closed bodies of fluid are either static, when the fluid is not moving, or dynamic, when the fluid can move as in either a pipe or by compressing an air gap in a closed container. The pressure in closed conditions conforms with the principles of fluid dynamics.

The concepts of fluid pressure are predominantly attributed to the discoveries of Blaise Pascal and Daniel Bernoulli. Bernoulli's equation can be used in almost any situation to determine the pressure at any point in a fluid. The equation makes some assumptions about the fluid, such as the fluid being ideal[8] and incompressible.[8] An ideal fluid is a fluid in which there is no friction; it is inviscid[8] (zero viscosity).[8] The equation for all points of a system filled with a constant-density fluid is

p/γ + v²/(2g) + z = const   [9]

where:

p = pressure of the fluid
γ = ρg = density × acceleration of gravity = specific weight of the fluid[8]
v = velocity of the fluid
g = acceleration of gravity
z = elevation
p/γ = pressure head
v²/(2g) = velocity head

Applications

Hydraulic brakes
Artesian well
Blood pressure
Hydraulic head
Plant cell turgidity
Pythagorean cup

Explosion or deflagration pressures

Explosion or deflagration pressures are the result of the ignition of explosive gases, mists, dust/air suspensions, in unconfined and confined spaces.

Negative pressures

While pressures are, in general, positive, there are several situations in which negative pressures may be encountered:

When dealing in relative (gauge) pressures. For instance, an absolute pressure of 80 kPa may be described as a gauge pressure of -21 kPa (i.e., 21 kPa below an atmospheric pressure of 101 kPa).

When attractive intermolecular forces (e.g., van der Waals forces or hydrogen bonds) between the particles of a fluid exceed repulsive forces due to thermal motion. These forces explain the ascent of sap in tall plants. An apparent negative pressure must act on water molecules at the top of any tree taller than 10 m, which is the pressure head of water that balances the atmospheric pressure. Intermolecular forces maintain cohesion of columns of sap that run continuously in xylem from the roots to the top leaves.[10]

The Casimir effect can create a small attractive force due to interactions with vacuum energy; this force is sometimes termed vacuum pressure (not to be confused with the negative gauge pressure of a vacuum).

For non-isotropic stresses in rigid bodies, depending on how the orientation of a surface is chosen, the same distribution of forces may have a component of positive pressure along one surface normal, with a component of negative pressure acting along another surface normal. The stresses in an electromagnetic field are generally non-isotropic, with the pressure normal to one surface element (the normal stress) being negative, and positive for surface elements perpendicular to this.

In the cosmological constant.

A low pressure chamber in Bundesleistungszentrum Kienbaum, Germany

Stagnation pressure

Stagnation pressure is the pressure a fluid exerts when it is forced to stop moving. Consequently, although a fluid moving at higher speed will have a lower static pressure, it may have a higher stagnation pressure when forced to a standstill. Static pressure and stagnation pressure are related by:

p0 = (1/2)ρv² + p

where

p0 is the stagnation pressure
ρ is the density
v is the flow velocity
p is the static pressure.

The pressure of a moving fluid can be measured using a Pitot tube, or one of its variations such as a Kiel probe or Cobra probe, connected to a manometer. Depending on where the inlet holes are located on the probe, it can measure static pressures or stagnation pressures.

Surface pressure and surface tension

There is a two-dimensional analog of pressure: the lateral force per unit length applied on a line perpendicular to the force.

Surface pressure is denoted by π and shares many similar properties with three-dimensional pressure. Properties of surface chemicals can be investigated by measuring pressure/area isotherms, as the two-dimensional analog of Boyle's law, πA = k, at constant temperature.

π = F/l

Surface tension is another example of surface pressure, but with a reversed sign, because tension is the opposite of pressure.

Pressure of an ideal gas

Main article: Ideal gas law

In an ideal gas, molecules have no volume and do not interact. According to the ideal gas law, pressure varies linearly with temperature and quantity, and inversely with volume:

p = nRT/V

where:

p is the absolute pressure of the gas
n is the amount of substance
T is the absolute temperature
V is the volume
R is the ideal gas constant.

Real gases exhibit a more complex dependence on the variables of state.[11]

Vapor pressure

Main article: Vapor pressure

Vapor pressure is the pressure of a vapor in thermodynamic equilibrium with its condensed phases in a closed system. All liquids and solids have a tendency to evaporate into a gaseous form, and all gases have a tendency to condense back to their liquid or solid form.

The atmospheric pressure boiling point of a liquid (also known as the normal boiling point) is the temperature at which the vapor pressure equals the ambient atmospheric pressure. With any incremental increase in that temperature, the vapor pressure becomes sufficient to overcome atmospheric pressure and lift the liquid to form vapor bubbles inside the bulk of the substance. Bubble formation deeper in the liquid requires a higher pressure, and therefore higher temperature, because the fluid pressure increases above the atmospheric pressure as the depth increases.

The vapor pressure that a single component in a mixture contributes to the total pressure in the system is called partial vapor pressure.

Liquid pressure

See also: Fluid statics § Pressure in fluids at rest

When a person swims under the water, water pressure is felt acting on the person's eardrums. The deeper that person swims, the greater the pressure. The pressure felt is due to the weight of the water above the person. As someone swims deeper, there is more water above the person and therefore greater pressure. The pressure a liquid exerts depends on its depth.

Liquid pressure also depends on the density of the liquid. If someone was submerged in a liquid more dense than water, the pressure would be correspondingly greater. The pressure due to a liquid in liquid columns of constant density or at a depth within a substance is represented by the following formula:

p = ρgh

where:

p is liquid pressure
g is gravity at the surface of the overlaying material
ρ is the density of the liquid
h is the height of the liquid column or the depth within a substance

Another way of saying this same formula is the following:

pressure = weight density × depth

The pressure a liquid exerts against the sides and bottom of a container depends on the density and the depth of the liquid. If atmospheric pressure is neglected, liquid pressure against the bottom is twice as great at twice the depth; at three times the depth, the liquid pressure is threefold; etc. Or, if the liquid is two or three times as dense, the liquid pressure is correspondingly two or three times as great for any given depth. Liquids are practically incompressible; that is, their volume can hardly be changed by pressure (water volume decreases by only 50 millionths of its original volume for each atmospheric increase in pressure). Thus, except for small changes produced by temperature, the density of a particular liquid is practically the same at all depths.

Atmospheric pressure pressing on the surface of a liquid must be taken into account when trying to discover the total pressure acting on a liquid. The total pressure of a liquid, then, is ρgh plus the pressure of the atmosphere. When this distinction is important, the term total pressure is used. Otherwise, discussions of liquid pressure refer to pressure without regard to the normally ever-present atmospheric pressure.
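A minimal numerical sketch of p = ρgh follows; it is an added illustration that assumes fresh water (ρ ≈ 1000 kg/m³) and g ≈ 9.81 m/s², values not taken from the original text.

```python
# Hydrostatic (liquid) pressure p = rho * g * h, neglecting the atmosphere.
RHO_WATER = 1000.0   # kg/m^3, assumed density of fresh water
G = 9.81             # m/s^2, assumed gravitational acceleration

def liquid_pressure(depth_m, rho=RHO_WATER, g=G):
    """Gauge pressure in Pa at a given depth below the liquid surface."""
    return rho * g * depth_m

for depth in (1.0, 3.0, 6.0):
    print(depth, liquid_pressure(depth))   # pressure grows linearly with depth

# Total pressure adds the atmosphere pressing on the liquid surface:
print(liquid_pressure(6.0) + 101_325.0)
```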

It is important to recognize that the pressure does not depend on the amount of liquid present. Volume is not the important factor depth is. The average water pressure acting
against a dam depends on the average depth of the water and
not on the volume of water held back. For example, a wide
but shallow lake with a depth of 3 m (10 ft) exerts only half
the average pressure that a small 6 m (20 ft) deep pond does
(note that the total force applied to the longer dam will be
greater, due to the greater total surface area for the pressure
to act upon, but for a given 5 foot section of each dam, the
10ft deep water will apply half the force of 20ft deep water). A person will feel the same pressure whether his/her
head is dunked a metre beneath the surface of the water in
a small pool or to the same depth in the middle of a large
lake. If four vases contain dierent amounts of water but
are all lled to equal depths, then a sh with its head dunked
a few centimetres under the surface will be acted on by water pressure that is the same in any of the vases. If the sh

swims a few centimetres deeper, the pressure on the sh will
increase with depth and be the same no matter which vase
the sh is in. If the sh swims to the bottom, the pressure
will be greater, but it makes no dierence what vase it is in.
All vases are lled to equal depths, so the water pressure is
the same at the bottom of each vase, regardless of its shape
or volume. If water pressure at the bottom of a vase were
greater than water pressure at the bottom of a neighboring
vase, the greater pressure would force water sideways and
then up the narrower vase to a higher level until the pressures at the bottom were equalized. Pressure is depth dependent, not volume dependent, so there is a reason that
water seeks its own level.
Restating this as energy equation, the energy per unit volume in an ideal, incompressible liquid is constant throughout its vessel. At the surface, gravitational potential energy
is large but liquid pressure energy is low. At the bottom
of the vessel, all the gravitational potential energy is converted to pressure energy. The sum of pressure energy and
gravitational potential energy per unit volume is constant
throughout the volume of the uid and the two energy components change linearly with the depth.[12] Mathematically,
it is described by Bernoullis equation where velocity head
is zero and comparisons per unit volume in the vessel are:
p/γ + z = const

Terms have the same meaning as in section Fluid pressure.


Direction of liquid pressure
An experimentally determined fact about liquid pressure is
that it is exerted equally in all directions.[13] If someone is
submerged in water, no matter which way that person tilts
his/her head, the person will feel the same amount of water pressure on his/her ears. Because a liquid can ow, this
pressure isn't only downward. Pressure is seen acting sideways when water spurts sideways from a leak in the side of
an upright can. Pressure also acts upward, as demonstrated
when someone tries to push a beach ball beneath the surface of the water. The bottom of a boat is pushed upward
by water pressure (buoyancy).
When a liquid presses against a surface, there is a net force
that is perpendicular to the surface. Although pressure
doesn't have a specic direction, force does. A submerged
triangular block has water forced against each point from
many directions, but components of the force that are not
perpendicular to the surface cancel each other out, leaving
only a net perpendicular force.[13] This is why water spurting from a hole in a bucket initially exits the bucket in a direction at right angles to the surface of the bucket in which



the hole is located. Then it curves downward due to gravity. If there are three holes in a bucket (top, bottom, and
middle), then the force vectors perpendicular to the inner
container surface will increase with increasing depth that
is, a greater pressure at the bottom makes it so that the bottom hole will shoot water out the farthest. The force exerted
by a uid on a smooth surface is always at right angles to the

surface. The speed of liquid out of the hole is √(2gh), where h is the depth below the free surface.[13] Interestingly, this is the same speed the water (or anything else) would have if freely falling the same vertical distance h.
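The efflux speed √(2gh) and its free-fall equivalence can be checked with a few lines of Python; this is an added illustration assuming g = 9.81 m/s².

```python
import math

G = 9.81  # m/s^2, assumed gravitational acceleration

def efflux_speed(depth_m):
    """Speed of liquid leaving a hole a depth h below the free surface."""
    return math.sqrt(2 * G * depth_m)

def free_fall_speed(height_m):
    """Speed of an object after falling freely through the same height."""
    return math.sqrt(2 * G * height_m)

h = 0.5  # metres below the free surface
print(efflux_speed(h), free_fall_speed(h))  # identical values, as noted above
```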
Kinematic pressure
P = p/ρ0

is the kinematic pressure, where p is the pressure and ρ0 the constant mass density. The SI unit of P is m²/s². Kinematic pressure is used in the same manner as kinematic viscosity ν in order to compute the Navier-Stokes equation without explicitly showing the density ρ0.

Navier-Stokes equation with kinematic quantities:

∂u/∂t + (u · ∇)u = -∇P + ν∇²u

6.3.3 See also

Atmospheric pressure
Blood pressure
Boyles Law
Combined gas law
Conversion of units
Critical point (thermodynamics)
Dynamic pressure
Hydraulics
Internal pressure
Kinetic theory
Microphone
Orders of magnitude (pressure)
Partial pressure
Pressure measurement
Pressure sensor
Sound pressure

Spouting can
Timeline of temperature and pressure measurement technology
Units conversion by factor-label
Vacuum
Vacuum pump
Vertical pressure variation

6.3.4 Notes

[1] The preferred spelling varies by country and even by industry. Further, both spellings are often used within a particular industry or country. Industries in British English-speaking countries typically use the gauge spelling.

6.3.5 References

[1] Giancoli, Douglas G. (2004). Physics: Principles with Applications. Upper Saddle River, N.J.: Pearson Education. ISBN 0-13-060620-0.

[2] McNaught, A. D.; Wilkinson, A.; Nic, M.; Jirat, J.; Kosata, B.; Jenkins, A. (2014). IUPAC. Compendium of Chemical Terminology, 2nd ed. (the "Gold Book"). 2.3.3. Oxford: Blackwell Scientific Publications. doi:10.1351/goldbook.P04819. ISBN 0-9678550-9-8.

[3] 14th Conference of the International Bureau of Weights and Measures. Bipm.fr. Retrieved 2012-03-27.

[4] US Navy (2006). US Navy Diving Manual, 6th revision. United States: US Naval Sea Systems Command. pp. 2-32. Retrieved 2008-06-15.

[5] Rules and Style Conventions for Expressing Values of Quantities. NIST. Retrieved 2009-07-07.

[6] NIST, Rules and Style Conventions for Expressing Values of Quantities, Sect. 7.4.

[7] Einstein's gravity under pressure. Springerlink.com. Retrieved 2012-03-27.

[8] Finnemore, John E. and Joseph B. Franzini (2002). Fluid Mechanics: With Engineering Applications. New York: McGraw Hill, Inc. pp. 14-29. ISBN 978-0-07-243202-2.

[9] NCEES (2011). Fundamentals of Engineering: Supplied Reference Handbook. Clemson, South Carolina: NCEES. p. 64. ISBN 978-1-932613-59-9.

[10] Karen Wright (March 2003). The Physics of Negative Pressure. Discover. Retrieved 31 January 2015.

[11] P. Atkins, J. de Paula (2006). Elements of Physical Chemistry, 4th Ed. W.H. Freeman. ISBN 0-7167-7329-5.

[12] Streeter, V. L. (1966). Fluid Mechanics, Example 3.5. New York: McGraw-Hill Inc.

[13] Hewitt 251 (2006)

6.3.6 External links

Introduction to Fluid Statics and Dynamics on Project PHYSNET
Pressure being a scalar quantity

6.4 Thermodynamic temperature

Thermodynamic temperature is the absolute measure of temperature and is one of the principal parameters of thermodynamics.

Thermodynamic temperature is defined by the third law of thermodynamics in which the theoretically lowest temperature is the null or zero point. At this point, absolute zero, the particle constituents of matter have minimal motion and can become no colder.[1][2] In the quantum-mechanical description, matter at absolute zero is in its ground state, which is its state of lowest energy. Thermodynamic temperature is often also called absolute temperature, for two reasons: one, proposed by Kelvin, that it does not depend on the properties of a particular material; two, that it refers to an absolute zero according to the properties of the ideal gas.

The International System of Units specifies a particular scale for thermodynamic temperature. It uses the Kelvin scale for measurement and selects the triple point of water at 273.16 K as the fundamental fixing point. Other scales have been in use historically. The Rankine scale, using the degree Fahrenheit as its unit interval, is still in use as part of the English Engineering Units in the United States in some engineering fields. ITS-90 gives a practical means of estimating the thermodynamic temperature to a very high degree of accuracy.

Roughly, the temperature of a body at rest is a measure of the mean of the energy of the translational, vibrational and rotational motions of matter's particle constituents, such as molecules, atoms, and subatomic particles. The full variety of these kinetic motions, along with potential energies of particles, and also occasionally certain other types of particle energy in equilibrium with these, make up the total internal energy of a substance. Internal energy is loosely called the heat energy or thermal energy in conditions when no work is done upon the substance by its surroundings, or by the substance upon the surroundings. Internal energy may be stored in a number of ways within a substance, each way constituting a degree of freedom. At equilibrium,


each degree of freedom will have on average the same energy: kB T /2 where kB is the Boltzmann constant, unless
that degree of freedom is in the quantum regime. The internal degrees of freedom (rotation, vibration, etc.) may be
in the quantum regime at room temperature, but the translational degrees of freedom will be in the classical regime
except at extremely low temperatures (fractions of kelvins)
and it may be said that, for most situations, the thermodynamic temperature is specied by the average translational
kinetic energy of the particles.

Temperatures expressed in kelvins are converted to degrees


Rankine simply by multiplying by 1.8 as follows: TR =
1.8TK, where TK and TR are temperatures in kelvin and
degrees Rankine respectively. Temperatures expressed in
degrees Rankine are converted to kelvins by dividing by 1.8
as follows: TK = TR/1.8.
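The two conversions just given translate directly into a pair of helper functions; this sketch is an added illustration, not part of the original text.

```python
def kelvin_to_rankine(t_k):
    """Degrees Rankine corresponding to a thermodynamic temperature in kelvins."""
    return 1.8 * t_k

def rankine_to_kelvin(t_r):
    """Kelvins corresponding to a temperature in degrees Rankine."""
    return t_r / 1.8

print(kelvin_to_rankine(273.16))   # triple point of water: 491.688 degrees Rankine
print(rankine_to_kelvin(491.688))  # back to 273.16 K
```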

6.4.1 Overview

Temperature is a measure of the random submicroscopic


motions and vibrations of the particle constituents of
matter. These motions comprise the internal energy of a
substance. More specically, the thermodynamic temperature of any bulk quantity of matter is the measure of the
average kinetic energy per classical (i.e., non-quantum) degree of freedom of its constituent particles. Translational
motions are almost always in the classical regime. Translational motions are ordinary, whole-body movements in
three-dimensional space in which particles move about and
exchange energy in collisions. Figure 1 below shows translational motion in gases; Figure 4 below shows translational
motion in solids. Thermodynamic temperatures null point,
absolute zero, is the temperature at which the particle constituents of matter are as close as possible to complete rest;
that is, they have minimal motion, retaining only quantum
mechanical motion.[3] Zero kinetic energy remains in a substance at absolute zero (see Thermal energy at absolute zero,
below).
Throughout the scientic world where measurements are
made in SI units, thermodynamic temperature is measured
in kelvins (symbol: K). Many engineering elds in the U.S.
however, measure thermodynamic temperature using the
Rankine scale.
By international agreement, the unit kelvin and its scale are
dened by two points: absolute zero, and the triple point of
Vienna Standard Mean Ocean Water (water with a specied
blend of hydrogen and oxygen isotopes). Absolute zero, the
lowest possible temperature, is defined as being precisely 0 K and -273.15 °C. The triple point of water is defined as being precisely 273.16 K and 0.01 °C. This definition does
three things:
1. It xes the magnitude of the kelvin unit as being precisely 1 part in 273.16 parts the dierence between
absolute zero and the triple point of water;
2. It establishes that one kelvin has precisely the same
magnitude as a one-degree increment on the Celsius
scale; and

3. It establishes the dierence between the two scales


null points as being precisely 273.15 kelvins (0 K = -273.15 °C and 273.16 K = 0.01 °C).

Practical realization

Main article: ITS-90


Although the Kelvin and Celsius scales are dened using absolute zero (0 K) and the triple point of water (273.16 K and
0.01 C), it is impractical to use this denition at temperatures that are very dierent from the triple point of water.
ITS-90 is then designed to represent the thermodynamic
temperature as closely as possible throughout its range.
Many dierent thermometer designs are required to cover
the entire range. These include helium vapor pressure thermometers, helium gas thermometers, standard platinum resistance thermometers (known as SPRTs, PRTs or Platinum RTDs) and monochromatic radiation thermometers.
For some types of thermometer the relationship between
the property observed (e.g., length of a mercury column)
and temperature, is close to linear, so for most purposes a
linear scale is sucient, without point-by-point calibration.
For others a calibration curve or equation is required. The
mercury thermometer, invented before the thermodynamic
temperature was understood, originally dened the temperature scale; its linearity made readings correlate well with
true temperature, i.e. the mercury temperature scale was
a close t to the true scale.

6.4.2 The relationship of temperature, motions, conduction, and thermal energy

The nature of kinetic energy, translational motion, and temperature
The thermodynamic temperature is a measure of the average energy of the translational, vibrational, and rotational
motions of matter's particle constituents (molecules, atoms,
and subatomic particles). The full variety of these kinetic
motions, along with potential energies of particles, and
also occasionally certain other types of particle energy in
equilibrium with these, contribute the total internal energy
(loosely, the thermal energy) of a substance. Thus, internal energy may be stored in a number of ways (degrees of

freedom) within a substance. When the degrees of freedom are in the classical regime (unfrozen) the temperature is very simply related to the average energy of those degrees of freedom at equilibrium. The three translational degrees of freedom are unfrozen except for the very lowest temperatures, and their kinetic energy is simply related to the thermodynamic temperature over the widest range. The heat capacity, which relates heat input and temperature change, is discussed below.

Fig. 1  The translational motion of fundamental particles of nature such as atoms and molecules is directly related to temperature. Here, the size of helium atoms relative to their spacing is shown to scale under 1950 atmospheres of pressure. These room-temperature atoms have a certain average speed (slowed down here two trillion-fold). At any given instant however, a particular helium atom may be moving much faster than average while another may be nearly motionless. Five atoms are colored red to facilitate following their motions.

The relationship of kinetic energy, mass, and velocity is given by the formula Ek = (1/2)mv².[4] Accordingly, particles with one unit of mass moving at one unit of velocity have precisely the same kinetic energy, and precisely the same temperature, as those with four times the mass but half the velocity.

Except in the quantum regime at extremely low temperatures, the thermodynamic temperature of any bulk quantity of a substance (a statistically significant quantity of particles) is directly proportional to the mean average kinetic energy of a specific kind of particle motion known as translational motion. These simple movements in the three x, y, and z axis dimensions of space mean the particles move in the three spatial degrees of freedom. The temperature derived from this translational kinetic energy is sometimes referred to as kinetic temperature and is equal to the thermodynamic temperature over a very wide range of temperatures. Since there are three translational degrees of freedom (e.g., motion along the x, y, and z axes), the translational kinetic energy is related to the kinetic temperature by:

Ē = (3/2) kB Tk

where:

Ē is the mean kinetic energy in joules (J) and is pronounced "E bar"
kB = 1.3806504(24)×10⁻²³ J/K is the Boltzmann constant and is pronounced "Kay sub bee"
Tk is the kinetic temperature in kelvins (K) and is pronounced "Tee sub kay"

Fig. 2  The translational motions of helium atoms occur across a range of speeds. Compare the shape of this curve to that of a Planck curve in Fig. 5 below.
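The relation Ē = (3/2)·kB·Tk can be exercised directly; the following sketch is an added illustration using the Boltzmann constant quoted above.

```python
K_B = 1.3806504e-23  # J/K, Boltzmann constant as quoted above

def mean_translational_energy(t_kelvin):
    """Mean translational kinetic energy per particle, E = (3/2) kB T."""
    return 1.5 * K_B * t_kelvin

def kinetic_temperature(energy_joules):
    """Kinetic temperature implied by a mean translational energy."""
    return energy_joules / (1.5 * K_B)

e_room = mean_translational_energy(296.0)  # room temperature, as used later
print(e_room)                       # ~6.1e-21 J per particle
print(kinetic_temperature(e_room))  # recovers 296 K
```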

While the Boltzmann constant is useful for nding the mean


kinetic energy of a particle, its important to note that even
when a substance is isolated and in thermodynamic equilibrium (all parts are at a uniform temperature and no heat
is going into or out of it), the translational motions of individual atoms and molecules occur across a wide range of
speeds (see animation in Figure 1 above). At any one instant, the proportion of particles moving at a given speed
within this range is determined by probability as described
by the MaxwellBoltzmann distribution. The graph shown


here in Fig. 2 shows the speed distribution of 5500 K


helium atoms. They have a most probable speed of 4.780
km/s. However, a certain proportion of atoms at any given
instant are moving faster while others are moving relatively
slowly; some are momentarily at a virtual standstill (o the
xaxis to the right). This graph uses inverse speed for its
xaxis so the shape of the curve can easily be compared to
the curves in Figure 5 below. In both graphs, zero on the
xaxis represents innite temperature. Additionally, the x
and yaxis on both graphs are scaled proportionally.
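The most probable speed quoted for 5500 K helium can be checked against the Maxwell-Boltzmann distribution, for which vp = √(2kBT/m). The sketch below is an added illustration; the helium-4 atomic mass is an assumed standard value, not a figure from the text.

```python
import math

K_B = 1.3806504e-23                # J/K, Boltzmann constant
M_HELIUM = 4.0026 * 1.6605389e-27  # kg, assumed mass of a helium-4 atom

def most_probable_speed(t_kelvin, mass_kg=M_HELIUM):
    """Most probable speed of a Maxwell-Boltzmann distribution, sqrt(2 kB T / m)."""
    return math.sqrt(2 * K_B * t_kelvin / mass_kg)

print(most_probable_speed(5500.0))  # ~4.78e3 m/s, matching the 4.780 km/s above
```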
The high speeds of translational motion
Although very specialized laboratory equipment is required
to directly detect translational motions, the resultant collisions by atoms or molecules with small particles suspended in a uid produces Brownian motion that can be
seen with an ordinary microscope. The translational motions of elementary particles are very fast[5] and temperatures close to absolute zero are required to directly observe
them. For instance, when scientists at the NIST achieved a
record-setting cold temperature of 700 nK (billionths of a
kelvin) in 1994, they used optical lattice laser equipment to
adiabatically cool caesium atoms. They then turned o the
entrapment lasers and directly measured atom velocities of
7 mm per second in order to calculate their temperature.[6]
Formulas for calculating the velocity and speed of translational motion are given in the following footnote.[7]

Fig. 3 Because of their internal structure and exibility, molecules


can store kinetic energy in internal degrees of freedom which contribute to the heat capacity.

The internal motions of molecules and specific heat

There are other forms of internal energy besides the kinetic energy of translational motion. As can be seen in the animation at right, molecules are complex objects; they are a population of atoms and thermal agitation can strain their internal chemical bonds in three different ways: via rotation, bond length, and bond angle movements. These are all types of internal degrees of freedom. This makes molecules distinct from monatomic substances (consisting of individual atoms) like the noble gases helium and argon, which have only the three translational degrees of freedom. Kinetic energy is stored in molecules' internal degrees of freedom, which gives them an internal temperature. Even though these motions are called internal, the external portions of molecules still move, rather like the jiggling of a stationary water balloon. This permits the two-way exchange of kinetic energy between internal motions and translational motions with each molecular collision. Accordingly, as energy is removed from molecules, both their kinetic temperature (the temperature derived from the kinetic energy of translational motion) and their internal temperature simultaneously diminish in equal proportions. This phenomenon is described by the equipartition theorem, which states that for any bulk quantity of a substance in equilibrium, the kinetic energy of particle motion is evenly distributed among all the active (i.e. unfrozen) degrees of freedom available to the particles. Since the internal temperature of molecules is usually equal to their kinetic temperature, the distinction is usually of interest only in the detailed study of non-local thermodynamic equilibrium (LTE) phenomena such as combustion, the sublimation of solids, and the diffusion of hot gases in a partial vacuum.

The kinetic energy stored internally in molecules causes substances to contain more internal energy at any given temperature and to absorb additional internal energy for a given temperature increase. This is because any kinetic energy that is, at a given instant, bound in internal motions is not at that same instant contributing to the molecules' translational motions.[8] This extra thermal energy simply increases the amount of energy a substance absorbs for a given temperature rise. This property is known as a substance's specific heat capacity.

Different molecules absorb different amounts of thermal energy for each incremental increase in temperature; that is, they have different specific heat capacities. High specific heat capacity arises, in part, because certain substances' molecules possess more internal degrees of freedom than others do. For instance, nitrogen, which is a diatomic molecule, has five active degrees of freedom at room temperature: the three comprising translational motion plus two rotational degrees of freedom internally. Since the two internal degrees of freedom are essentially unfrozen, in accordance with the equipartition theorem, nitrogen has five-thirds the specific heat capacity per mole (a specific number of molecules) as do the monatomic gases.[9] Another example is gasoline (see table showing its specific heat capacity).
Gasoline can absorb a large amount of thermal energy per
mole with only a modest temperature change because each
molecule comprises an average of 21 atoms and therefore
has many internal degrees of freedom. Even larger, more
complex molecules can have dozens of internal degrees of
freedom.
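The equipartition argument can be made concrete: with f active degrees of freedom per molecule, the constant-volume molar heat capacity is roughly (f/2)·R, so diatomic nitrogen (f = 5) stores five-thirds as much heat per mole per kelvin as a monatomic gas (f = 3). The sketch below is an added, simplified illustration of that ratio, not a formula from the original text.

```python
R = 8.314  # J/(mol K), ideal gas constant

def molar_heat_capacity_cv(active_degrees_of_freedom):
    """Rough constant-volume molar heat capacity, Cv = (f/2) R."""
    return active_degrees_of_freedom / 2 * R

monatomic = molar_heat_capacity_cv(3)  # helium or argon: translation only
nitrogen = molar_heat_capacity_cv(5)   # translation plus two rotational modes

print(monatomic, nitrogen)   # ~12.5 and ~20.8 J/(mol K)
print(nitrogen / monatomic)  # 5/3, the "five-thirds" ratio cited above
```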

The diffusion of thermal energy: Entropy, phonons, and mobile conduction electrons

Heat conduction is the diffusion of thermal energy from hot parts of a system to cold. A system can be either a single bulk entity or a plurality of discrete bulk entities. The term bulk in this context means a statistically significant quantity of particles (which can be a microscopic amount). Whenever thermal energy diffuses within an isolated system, temperature differences within the system decrease (and entropy increases).

One particular heat conduction mechanism occurs when translational motion, the particle motion underlying temperature, transfers momentum from particle to particle in collisions. In gases, these translational motions are of the nature shown above in Fig. 1. As can be seen in that animation, not only does momentum (heat) diffuse throughout the volume of the gas through serial collisions, but entire molecules or atoms can move forward into new territory, bringing their kinetic energy with them. Consequently, temperature differences equalize throughout gases very quickly, especially for light atoms or molecules; convection speeds this process even more.[10]

Translational motion in solids, however, takes the form of phonons (see Fig. 4 at right). Phonons are constrained, quantized wave packets that travel at a given substance's speed of sound. The manner in which phonons interact within a solid determines a variety of its properties, including its thermal conductivity. In electrically insulating solids, phonon-based heat conduction is usually inefficient[11] and such solids are considered thermal insulators (such as glass, plastic, rubber, ceramic, and rock). This is because in solids, atoms and molecules are locked into place relative to their neighbors and are not free to roam.

Fig. 4  The temperature-induced translational motion of particles in solids takes the form of phonons. Shown here are phonons with identical amplitudes but with wavelengths ranging from 2 to 12 molecules.

Metals however, are not restricted to only phonon-based heat conduction. Thermal energy conducts through metals extraordinarily quickly because instead of direct molecule-to-molecule collisions, the vast majority of thermal energy is mediated via very light, mobile conduction electrons. This is why there is a near-perfect correlation between metals' thermal conductivity and their electrical conductivity.[12] Conduction electrons imbue metals with their extraordinary conductivity because they are delocalized (i.e., not tied to a specific atom) and behave rather like a sort of quantum gas due to the effects of zero-point energy (for more on ZPE, see Note 1 below). Furthermore, electrons are relatively light with a rest mass only 1/1836th that of a proton. This is about the same ratio as a .22 Short bullet (29 grains or 1.88 g) compared to the rifle that shoots it. As Isaac Newton wrote with his third law of motion,

Law #3: All forces occur in pairs, and these two forces are equal in magnitude and opposite in direction.

However, a bullet accelerates faster than a rifle given an equal force. Since kinetic energy increases as the square of velocity, nearly all the kinetic energy goes into the bullet, not the rifle, even though both experience the same force from the expanding propellant gases. In the same manner, because they are much less massive, thermal energy is readily borne by mobile conduction electrons. Additionally, because they're delocalized and very fast, kinetic thermal energy conducts extremely quickly through metals with abundant conduction electrons.



Table of thermodynamic temperatures

The full range of the thermodynamic temperature scale, from absolute zero to absolute hot, and some notable points between them are shown in the table below.

Notes to the table:
The 2500 K value is approximate.
For a true blackbody (which tungsten filaments are not). Tungsten filaments' emissivity is greater at shorter wavelengths, which makes them appear whiter.
C  Effective photosphere temperature.
D  For a true blackbody (which the plasma was not). The Z machine's dominant emission originated from 40 MK electrons (soft x-ray emissions) within the plasma.

Fig. 5  The spectrum of black-body radiation has the form of a Planck curve. A 5500 K black-body has a peak emittance wavelength of 527 nm. Compare the shape of this curve to that of a Maxwell distribution in Fig. 2 above.

The diffusion of thermal energy: Black-body radiation

Thermal radiation is a byproduct of the collisions arising from various vibrational motions of atoms. These collisions

cause the electrons of the atoms to emit thermal photons


(known as black-body radiation). Photons are emitted anytime an electric charge is accelerated (as happens when
electron clouds of two atoms collide). Even individual
molecules with internal temperatures greater than absolute
zero also emit black-body radiation from their atoms. In
any bulk quantity of a substance at equilibrium, black-body
photons are emitted across a range of wavelengths in a spectrum that has a bell curve-like shape called a Planck curve
(see graph in Fig. 5 at right). The top of a Planck curve (the
peak emittance wavelength) is located in a particular part
of the electromagnetic spectrum depending on the temperature of the black-body. Substances at extreme cryogenic
temperatures emit at long radio wavelengths whereas extremely hot temperatures produce short gamma rays (see
Table of common temperatures).

Fig. 6 Ice and water: two phases of the same substance

Black-body radiation diuses thermal energy throughout


a substance as the photons are absorbed by neighboring
atoms, transferring momentum in the process. Black-body
photons also easily escape from a substance and can be absorbed by the ambient environment; kinetic energy is lost
in the process.

The heat of phase changes

The kinetic energy of particle motion is just one contributor to the total thermal energy
in a substance; another is phase transitions, which are the
potential energy of molecular bonds that can form in a substance as it cools (such as during condensing and freezing).
The thermal energy required for a phase transition is called
latent heat. This phenomenon may more easily be grasped
by considering it in the reverse direction: latent heat is the
energy required to break chemical bonds (such as during
evaporation and melting). Almost everyone is familiar with
the eects of phase transitions; for instance, steam at 100
C can cause severe burns much faster than the 100 C air
from a hair dryer. This occurs because a large amount of
latent heat is liberated as steam condenses into liquid water
on the skin.

As established by the StefanBoltzmann law, the intensity


of black-body radiation increases as the fourth power of absolute temperature. Thus, a black-body at 824 K (just short
of glowing dull red) emits 60 times the radiant power as it
does at 296 K (room temperature). This is why one can so
easily feel the radiant heat from hot objects at a distance. At
higher temperatures, such as those found in an incandescent
lamp, black-body radiation can be the principal mechanism
by which thermal energy escapes a system.
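The fourth-power scaling invoked above is easy to verify numerically; the sketch below is an added illustration assuming an ideal black-body (emissivity 1).

```python
SIGMA = 5.670e-8  # W/(m^2 K^4), Stefan-Boltzmann constant

def radiant_exitance(t_kelvin):
    """Power radiated per unit area by an ideal black-body, sigma * T**4."""
    return SIGMA * t_kelvin ** 4

print(radiant_exitance(824.0) / radiant_exitance(296.0))  # ~60, as stated above
print((824.0 / 296.0) ** 4)                               # the same ratio directly
```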

Even though thermal energy is liberated or absorbed during phase transitions, pure chemical elements, compounds,
and eutectic alloys exhibit no temperature change whatsoever while they undergo them (see Fig. 7, below right).
Consider one particular type of phase transition: melting.
When a solid is melting, crystal lattice chemical bonds are
being broken apart; the substance is transitioning from what
is known as a more ordered state to a less ordered state. In
Fig. 7, the melting of ice is shown within the lower left box



heading from blue to green.

Fig. 7  Water's temperature does not change during phase transitions as heat flows into or out of it. The total heat capacity of a mole of water in its liquid phase (the green line) is 7.5507 kJ.

At one specific thermodynamic point, the melting point (which is 0 °C across a wide pressure range in the case of water), all the atoms or molecules are, on average, at the maximum energy threshold their chemical bonds can withstand without breaking away from the lattice. Chemical bonds are all-or-nothing forces: they either hold fast, or break; there is no in-between state. Consequently, when a substance is at its melting point, every joule of added thermal energy only breaks the bonds of a specific quantity of its atoms or molecules,[25] converting them into a liquid of precisely the same temperature; no kinetic energy is added to translational motion (which is what gives substances their temperature). The effect is rather like popcorn: at a certain temperature, additional thermal energy can't make the kernels any hotter until the transition (popping) is complete. If the process is reversed (as in the freezing of a liquid), thermal energy must be removed from a substance.

As stated above, the thermal energy required for a phase transition is called latent heat. In the specific cases of melting and freezing, it's called enthalpy of fusion or heat of fusion. If the molecular bonds in a crystal lattice are strong, the heat of fusion can be relatively great, typically in the range of 6 to 30 kJ per mole for water and most of the metallic elements.[26] If the substance is one of the monatomic gases (which have little tendency to form molecular bonds), the heat of fusion is more modest, ranging from 0.021 to 2.3 kJ per mole.[27] Relatively speaking, phase transitions can be truly energetic events. To completely melt ice at 0 °C into water at 0 °C, one must add roughly 80 times the thermal energy as is required to increase the temperature of the same mass of liquid water by one degree Celsius. The metals' ratios are even greater, typically in the range of 400 to 1200 times.[28] And the phase transition of boiling is much more energetic than freezing. For instance, the energy required to completely boil or vaporize water (what is known as enthalpy of vaporization) is roughly 540 times that required for a one-degree increase.[29]

Water's sizable enthalpy of vaporization is why one's skin can be burned so quickly as steam condenses on it (heading from red to green in Fig. 7 above). In the opposite direction, this is why one's skin feels cool as liquid water on it evaporates (a process that occurs at a sub-ambient wet-bulb temperature that is dependent on relative humidity). Water's highly energetic enthalpy of vaporization is also an important factor underlying why solar pool covers (floating, insulated blankets that cover swimming pools when not in use) are so effective at reducing heating costs: they prevent evaporation. For instance, the evaporation of just 20 mm of water from a 1.29-meter-deep pool chills its water 8.4 degrees Celsius (15.1 °F).

Internal energy

The total energy of all particle motion, translational and internal, including that of conduction electrons, plus the potential energy of phase changes, plus zero-point energy,[3] comprise the internal energy of a substance.

Fig. 8  When many of the chemical elements, such as the noble gases and platinum-group metals, freeze to a solid (the most ordered state of matter), their crystal structures have a closest-packed arrangement. This yields the greatest possible packing density and the lowest energy state.

Internal energy at absolute zero

As a substance cools, different forms of internal energy and their related effects simultaneously decrease in magnitude: the latent heat
of available phase transitions is liberated as a substance
changes from a less ordered state to a more ordered state;
the translational motions of atoms and molecules dimin-


ish (their kinetic temperature decreases); the internal motions of molecules diminish (their internal temperature decreases); conduction electrons (if the substance is an electrical conductor) travel somewhat slower;[30] and black-body
radiations peak emittance wavelength increases (the photons energy decreases). When the particles of a substance
are as close as possible to complete rest and retain only
ZPE-induced quantum mechanical motion, the substance
is at the temperature of absolute zero (T=0).
Note that whereas absolute zero is the point of zero thermodynamic temperature and is also the point at which the particle constituents of matter have minimal motion, absolute
zero is not necessarily the point at which a substance contains zero thermal energy; one must be very precise with
what one means by internal energy. Often, all the phase
changes that can occur in a substance, will have occurred
by the time it reaches absolute zero. However, this is not
always the case. Notably, T=0 helium remains liquid at
room pressure and must be under a pressure of at least 25
bar (2.5 MPa) to crystallize. This is because heliums heat
of fusion (the energy required to melt helium ice) is so low
(only 21 joules per mole) that the motion-inducing eect
of zero-point energy is sucient to prevent it from freezing
at lower pressures. Only if under at least 25 bar (2.5 MPa)
of pressure will this latent thermal energy be liberated as
helium freezes while approaching absolute zero. A further
complication is that many solids change their crystal structure to more compact arrangements at extremely high pressures (up to millions of bars, or hundreds of gigapascals).
These are known as solid-solid phase transitions wherein latent heat is liberated as a crystal lattice changes to a more
thermodynamically favorable, compact one.
The above complexities make for rather cumbersome
blanket statements regarding the internal energy in T=0
substances. Regardless of pressure though, what can be
said is that at absolute zero, all solids with a lowest-energy
crystal lattice, such as those with a closest-packed arrangement
(see Fig. 8, above left) contain minimal internal energy,
retaining only that due to the ever-present background
of zero-point energy.[3] [31] One can also say that for a
given substance at constant pressure, absolute zero is the
point of lowest enthalpy (a measure of work potential
that takes internal energy, pressure, and volume into
consideration).[32] Lastly, it is always true to say that all
T=0 substances contain zero kinetic thermal energy.[3] [7]

6.4.3 Practical applications for thermodynamic temperature

Helium-4 is a superfluid at or below 2.17 kelvins (2.17 Celsius degrees above absolute zero)

Thermodynamic temperature is useful not only for scientists; it can also be useful for lay-people in many disciplines involving gases. By expressing variables in absolute terms and applying Gay-Lussac's law of temperature/pressure proportionality, solutions to everyday problems are straightforward; for instance, calculating how a temperature change affects the pressure inside an automobile tire. If the tire has a relatively cold pressure of 200 kPa-gage, then in absolute terms (relative to a vacuum), its pressure is 300 kPa-absolute.[33][34][35] Room temperature ("cold" in tire terms) is 296 K. If the tire pressure is 20 °C hotter (20 kelvins), the solution is calculated as 316 K/296 K = 6.8% greater thermodynamic temperature and absolute pressure; that is, a pressure of 320 kPa-absolute, which is 220 kPa-gage.
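The tire calculation above can be reproduced in a few lines; this sketch is an added illustration that uses the same round numbers as the text and assumes the tire's volume stays fixed while it warms.

```python
P_ATM = 100_000.0  # Pa, atmospheric pressure assumed in the example above

def warmed_gauge_pressure(p_gauge_pa, t_cold_k, t_hot_k):
    """Gauge pressure of a fixed volume of gas after heating (Gay-Lussac's law)."""
    p_abs_cold = p_gauge_pa + P_ATM
    p_abs_hot = p_abs_cold * t_hot_k / t_cold_k  # absolute pressure scales with T
    return p_abs_hot - P_ATM

print(warmed_gauge_pressure(200_000.0, 296.0, 316.0))  # ~220 kPa-gauge, as above
```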

6.4.4 Definition of thermodynamic temperature

The thermodynamic temperature is defined by the second law of thermodynamics and its consequences. The thermodynamic temperature can be shown to have special properties, and in particular can be seen to be uniquely defined (up to some constant multiplicative factor) by considering the efficiency of idealized heat engines. Thus the ratio T2/T1 of two temperatures T1 and T2 is the same in all absolute scales.

Strictly speaking, the temperature of a system is well-defined only if it is at thermal equilibrium. From a microscopic viewpoint, a material is at thermal equilibrium if the quantity of heat between its individual particles cancels out. There are many possible scales of temperature, derived from a variety of observations of physical phenomena.

Loosely stated, temperature differences dictate the direction of heat between two systems such that their combined energy is maximally distributed among their lowest possible states. We call this distribution "entropy". To better understand the relationship between temperature and entropy,


consider the relationship between heat, work and temperature illustrated in the Carnot heat engine. The engine converts heat into work by directing a temperature gradient between a higher temperature heat source, TH, and a lower
temperature heat sink, TC, through a gas-filled piston. The
work done per cycle is equal to the dierence between the
heat supplied to the engine by TH, qH, and the heat supplied to TC by the engine, qC. The eciency of the engine
is the work divided by the heat put into the system or

Efficiency = wcy/qH = (qH - qC)/qH = 1 - qC/qH    (1)

where wcy is the work done per cycle. Thus the efficiency depends only on qC/qH.

Carnot's theorem states that all reversible engines operating between the same heat reservoirs are equally efficient. Thus, any reversible heat engine operating between temperatures T1 and T2 must have the same efficiency, that is to say, the efficiency is a function of only the temperatures:

qC/qH = f(TH, TC)    (2)

In addition, a reversible heat engine operating between temperatures T1 and T3 must have the same efficiency as one consisting of two cycles, one between T1 and another (intermediate) temperature T2, and the second between T2 and T3. If this were not the case, then energy (in the form of Q) will be wasted or gained, resulting in different overall efficiencies every time a cycle is split into component cycles; clearly a cycle can be composed of any number of smaller cycles.

With this understanding of Q1, Q2 and Q3, we note also that mathematically,

f(T1, T3) = q3/q1 = (q2/q1)(q3/q2) = f(T1, T2) f(T2, T3).

But the first function is NOT a function of T2, therefore the product of the final two functions MUST result in the removal of T2 as a variable. The only way is therefore to define the function f as follows:

f(T1, T2) = g(T2)/g(T1)

and

f(T2, T3) = g(T3)/g(T2)

so that

f(T1, T3) = g(T3)/g(T1) = q3/q1.

i.e. the ratio of heat exchanged is a function of the respective temperatures at which they occur. We can choose any monotonic function for our g(T); it is a matter of convenience and convention that we choose g(T) = T. Choosing then one fixed reference temperature (i.e. the triple point of water), we establish the thermodynamic temperature scale.

It is to be noted that such a definition coincides with that of the ideal gas derivation; also it is this definition of the thermodynamic temperature that enables us to represent the Carnot efficiency in terms of TH and TC, and hence derive that the (complete) Carnot cycle is isentropic:

qC/qH = f(TH, TC) = TC/TH    (3)

Substituting this back into our first formula for efficiency yields a relationship in terms of temperature:

Efficiency = 1 - qC/qH = 1 - TC/TH    (4)

Notice that for TC = 0 the efficiency is 100% and that the efficiency becomes greater than 100% for TC < 0, which cases are unrealistic. Subtracting the right hand side of Equation 4 from the middle portion and rearranging gives

qH/TH - qC/TC = 0,

where the negative sign indicates heat ejected from the system. The generalization of this equation is Clausius theorem, which suggests the existence of a state function S (i.e., a function which depends only on the state of the system, not on how it reached that state) defined (up to an additive constant) by

dS = dqrev/T    (5)

where the subscript indicates heat transfer in a reversible


process. The function S corresponds to the entropy of the
system, mentioned previously, and the change of S around
any cycle is zero (as is necessary for any state function).
Equation 5 can be rearranged to get an alternative denition for temperature in terms of entropy and heat (to avoid
logic loop, we should rst dene entropy through statistical
mechanics):

162

T =

CHAPTER 6. CHAPTER 6. SYSTEM PROPERTIES

dqrev
.
dS

For a system in which the entropy S is a function S(E) of


its energy E, the thermodynamic temperature T is therefore
given by

1
dS
=
,
T
dE
so that the reciprocal of the thermodynamic temperature is
the rate of increase of entropy with energy.
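The derivation above is easy to check numerically. The short Python sketch below is illustrative only (the reservoir temperatures and heat input are made-up values, and the monatomic ideal-gas entropy form S(E) = (3/2)nR ln E + const is assumed as standard background, not taken from this section); it verifies equations (1)-(4), the Clausius relation, and the reading of equation (5) as 1/T = dS/dE.

```python
# Verify that for a reversible engine the efficiency computed from heats (eq. 1)
# equals the efficiency computed from temperatures (eq. 4), and that 1/T = dS/dE.
T_H = 500.0   # hot reservoir, kelvins (illustrative)
T_C = 300.0   # cold reservoir, kelvins (illustrative)

q_H = 1000.0               # heat drawn from the hot reservoir per cycle, J
q_C = q_H * (T_C / T_H)    # heat rejected per cycle, eq. (3)
w_cy = q_H - q_C           # work per cycle, eq. (1)

eff_from_heat = 1 - q_C / q_H     # eq. (1)
eff_from_temp = 1 - T_C / T_H     # eq. (4)
assert abs(eff_from_heat - eff_from_temp) < 1e-12

# Clausius relation for the reversible cycle: q_H/T_H - q_C/T_C = 0
assert abs(q_H / T_H - q_C / T_C) < 1e-9

# Equation (5) read the other way: 1/T = dS/dE.  For a monatomic ideal gas
# S(E) = (3/2) n R ln(E) + const, so dS/dE = (3/2) n R / E.
n, R = 1.0, 8.314                  # mol, J/(mol K)
E = 1.5 * n * R * T_H              # internal energy at T_H
dS_dE = 1.5 * n * R / E            # derivative of S(E)
print(1.0 / dS_dE)                 # recovers T_H = 500 K
```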

6.4.5 History
Ca. 485 BC: Parmenides in his treatise On Nature postulated the existence of primum frigidum, a hypothetical elementary substance that was the source of all cooling or cold in the world.[36]

1702–1703: Guillaume Amontons (1663–1705) published two papers that may be used to credit him as being the first researcher to deduce the existence of a fundamental (thermodynamic) temperature scale featuring an absolute zero. He made the discovery while endeavoring to improve upon the air thermometers in use at the time. His J-tube thermometers comprised a mercury column that was supported by a fixed mass of air entrapped within the sensing portion of the thermometer. In thermodynamic terms, his thermometers relied upon the volume / temperature relationship of gas under constant pressure. His measurements of the boiling point of water and the melting point of ice showed that regardless of the mass of air trapped inside his thermometers or the weight of mercury the air was supporting, the reduction in air volume at the ice point was always the same ratio. This observation led him to posit that a sufficient reduction in temperature would reduce the air volume to zero. In fact, his calculations projected that absolute zero was equivalent to −240 °C, only 33.15 degrees short of the true value of −273.15 °C.

1742: Anders Celsius (1701–1744) created a "backwards" version of the modern Celsius temperature scale. In Celsius's original scale, zero represented the boiling point of water and 100 represented the melting point of ice. In his paper Observations of two persistent degrees on a thermometer, he recounted his experiments showing that ice's melting point was effectively unaffected by pressure. He also determined with remarkable precision how water's boiling point varied as a function of atmospheric pressure. He proposed that zero on his temperature scale (water's boiling point) would be calibrated at the mean barometric pressure at mean sea level.


1744: Coincident with the death of Anders Celsius, the famous botanist Carl Linnaeus (1707–1778) effectively reversed[37] Celsius's scale upon receipt of his first thermometer featuring a scale where zero represented the melting point of ice and 100 represented water's boiling point. The custom-made "linnaeus-thermometer", for use in his greenhouses, was made by Daniel Ekström, Sweden's leading maker of scientific instruments at the time. For the next 204 years, the scientific and thermometry communities worldwide referred to this scale as the "centigrade scale". Temperatures on the centigrade scale were often reported simply as "degrees" or, when greater specificity was desired, "degrees centigrade". The symbol for temperature values on this scale was °C (in several formats over the years). Because the term "centigrade" was also the French-language name for a unit of angular measurement (one-hundredth of a right angle) and had a similar connotation in other languages, the term "centesimal degree" was used when very precise, unambiguous language was required by international standards bodies such as the International Bureau of Weights and Measures (Bureau international des poids et mesures, BIPM). The 9th CGPM (General Conference on Weights and Measures, Conférence générale des poids et mesures) and the CIPM (International Committee for Weights and Measures, Comité international des poids et mesures) formally adopted[38] "degree Celsius" (symbol: °C) in 1948.[39]

1777: In his book Pyrometrie (Berlin: Haude & Spener, 1779), completed four months before his death, Johann Heinrich Lambert (1728–1777), sometimes incorrectly referred to as Joseph Lambert, proposed an absolute temperature scale based on the pressure/temperature relationship of a fixed volume of gas. This is distinct from the volume/temperature relationship of gas under constant pressure that Guillaume Amontons discovered 75 years earlier. Lambert stated that absolute zero was the point where a simple straight-line extrapolation reached zero gas pressure and was equal to −270 °C.

Circa 1787: Notwithstanding the work of Guillaume Amontons 85 years earlier, Jacques Alexandre César Charles (1746–1823) is often credited with discovering, but not publishing, that the volume of a gas under constant pressure is proportional to its absolute temperature. The formula he created was V1/T1 = V2/T2.

1802: Joseph Louis Gay-Lussac (1778–1850) published work (acknowledging the unpublished lab notes of Jacques Charles fifteen years earlier) describing how the volume of gas under constant pressure changes linearly with its absolute (thermodynamic) temperature. This behavior is called Charles's Law and is one of the gas laws. His are the first known formulas to use the number 273 for the expansion coefficient of gas relative to the melting point of ice (indicating that absolute zero was equivalent to −273 °C).

1848: William Thomson (1824–1907), also known as Lord Kelvin, wrote in his paper On an Absolute Thermometric Scale of the need for a scale whereby "infinite cold" (absolute zero) was the scale's null point, and which used the degree Celsius for its unit increment. Like Gay-Lussac, Thomson calculated that absolute zero was equivalent to −273 °C on the air thermometers of the time. This absolute scale is known today as the Kelvin thermodynamic temperature scale. It's noteworthy that Thomson's value of −273 was actually derived from 0.00366, which was the accepted expansion coefficient of gas per degree Celsius relative to the ice point. The inverse of 0.00366 expressed to five significant digits is −273.22 °C, which is remarkably close to the true value of −273.15 °C.

1859: William John Macquorn Rankine (1820–1872) proposed a thermodynamic temperature scale similar to William Thomson's but which used the degree Fahrenheit for its unit increment. This absolute scale is known today as the Rankine thermodynamic temperature scale.


1877–1884: Ludwig Boltzmann (1844–1906) made major contributions to thermodynamics through an understanding of the role that particle kinetics and black body radiation played. His name is now attached to several of the formulas used today in thermodynamics.

Circa 1930s: Gas thermometry experiments carefully calibrated to the melting point of ice and boiling point of water showed that absolute zero was equivalent to −273.15 °C.

1948: Resolution 3 of the 9th CGPM (Conférence Générale des Poids et Mesures, also known as the General Conference on Weights and Measures) fixed the triple point of water at precisely 0.01 °C. At this time, the triple point still had no formal definition for its equivalent kelvin value, which the resolution declared "will be fixed at a later date". The implication is that if the value of absolute zero measured in the 1930s was truly −273.15 °C, then the triple point of water (0.01 °C) was equivalent to 273.16 K. Additionally, both the CIPM (Comité international des poids et mesures, also known as the International Committee for Weights and Measures) and the CGPM formally adopted the name Celsius for the degree Celsius and the Celsius temperature scale.[39]

1954: Resolution 3 of the 10th CGPM gave the Kelvin scale its modern definition by choosing the triple point of water as its second defining point and assigned it a temperature of precisely 273.16 kelvin (what was actually written 273.16 degrees Kelvin at the time). This, in combination with Resolution 3 of the 9th CGPM, had the effect of defining absolute zero as being precisely zero kelvin and −273.15 °C.

1967/1968: Resolution 3 of the 13th CGPM renamed the unit increment of thermodynamic temperature "kelvin", symbol K, replacing "degree absolute", symbol °K. Further, feeling it useful to more explicitly define the magnitude of the unit increment, the 13th CGPM also decided in Resolution 4 that "The kelvin, unit of thermodynamic temperature, is the fraction 1/273.16 of the thermodynamic temperature of the triple point of water".

2005: The CIPM (Comité International des Poids et Mesures, also known as the International Committee for Weights and Measures) affirmed that for the purposes of delineating the temperature of the triple point of water, the definition of the Kelvin thermodynamic temperature scale would refer to water having an isotopic composition defined as being precisely equal to the nominal specification of Vienna Standard Mean Ocean Water.

6.4.6 See also

Absolute hot
Absolute zero
Adiabatic process
Black-body
Boiling
Boltzmann constant
Brownian motion
Carnot heat engine
Chemical bond
Condensation
Convection
Degrees of freedom
Delocalized electron
Diffusion
Elastic collision
Electron
Energy
Energy conversion efficiency
Enthalpy
Entropy
Equipartition theorem
Evaporation
Fahrenheit
First law of thermodynamics
Freezing
Gas laws
Heat
Heat conduction
Heat engine
Internal energy
International System of Quantities
ITS-90
Ideal gas law
Joule
Kelvin
Kinetic energy
Latent heat
Laws of thermodynamics
Maxwell–Boltzmann distribution
Melting
Mole
Molecule
Orders of magnitude (temperature)
Phase transition
Phonon
Planck's law of black-body radiation
Potential energy
Quantum mechanics:
Introduction to quantum mechanics
Quantum mechanics (main article)
Rankine scale
Specific heat capacity
Standard enthalpy change of fusion
Standard enthalpy change of vaporization
Stefan–Boltzmann law
Sublimation
Temperature
Temperature conversion formulas
Thermal conductivity
Thermal radiation
Thermodynamic beta
Thermodynamic equations
Thermodynamic equilibrium
Thermodynamics
Thermodynamics Category (list of articles)
Timeline of heat engine technology
Timeline of temperature and pressure measurement technology
Triple point
Universal gas constant
Vienna Standard Mean Ocean Water (VSMOW)
Wien's displacement law
Work (Mechanical)
Work (thermodynamics)
Zero-point energy

6.4.7 Notes

In the following notes, wherever numeric equalities are shown in concise form, such as 1.854 87(14)×10⁴³, the two digits between the parentheses denote the uncertainty at 1-σ (1 standard deviation, 68% confidence level) in the two least significant digits of the significand.

[1] Rankine, W. J. M., A manual of the steam engine and other prime movers, Richard Griffin and Co., London (1859), p. 306–307.

[2] William Thomson, 1st Baron Kelvin, Heat, Adam and Charles Black, Edinburgh (1880), p. 39.

[3] Absolute zero's relationship to zero-point energy:
While scientists are achieving temperatures ever closer to
absolute zero, they can not fully achieve a state of zero temperature. However, even if scientists could remove all kinetic thermal energy from matter, quantum mechanical zeropoint energy (ZPE) causes particle motion that can never be
eliminated. Encyclopdia Britannica Online denes zeropoint energy as the vibrational energy that molecules retain even at the absolute zero of temperature. ZPE is the
result of all-pervasive energy elds in the vacuum between
the fundamental particles of nature; it is responsible for the
Casimir eect and other phenomena. See Zero Point Energy
and Zero Point Field. See also Solid Helium by the University of Albertas Department of Physics to learn more about
ZPEs eect on BoseEinstein condensates of helium. Although absolute zero (T=0) is not a state of zero molecular
motion, it is the point of zero temperature and, in accordance with the Boltzmann constant, is also the point of zero
particle kinetic energy and zero kinetic velocity. To understand how atoms can have zero kinetic velocity and simultaneously be vibrating due to ZPE, consider the following


thought experiment: two T=0 helium atoms in zero gravity are carefully positioned and observed to have an average
separation of 620 pm between them (a gap of ten atomic diameters). Its an average separation because ZPE causes
them to jostle about their xed positions. Then one atom
is given a kinetic kick of precisely 83 yoctokelvins (1 yK = 1×10⁻²⁴ K). This is done in a way that directs this atom's
velocity vector at the other atom. With 83 yK of kinetic
energy between them, the 620 pm gap through their common barycenter would close at a rate of 719 pm/s and they
would collide after 0.862 second. This is the same speed as
shown in the Fig. 1 animation above. Before being given
the kinetic kick, both T=0 atoms had zero kinetic energy
and zero kinetic velocity because they could persist indenitely in that state and relative orientation even though both
were being jostled by ZPE. At T=0, no kinetic energy is
available for transfer to other systems. The Boltzmann constant and its related formulas describe the realm of particle
kinetics and velocity vectors whereas ZPE is an energy eld
that jostles particles in ways described by the mathematics of
quantum mechanics. In atomic and molecular collisions in
gases, ZPE introduces a degree of chaos, i.e., unpredictability, to rebound kinetics; it is as likely that there will be less
ZPE-induced particle motion after a given collision as more.
This random nature of ZPE is why it has no net eect upon
either the pressure or volume of any bulk quantity (a statistically signicant quantity of particles) of T>0 K gases. However, in T=0 condensed matter; e.g., solids and liquids, ZPE
causes inter-atomic jostling where atoms would otherwise
be perfectly stationary. Inasmuch as the real-world eects
that ZPE has on substances can vary as one alters a thermodynamic system (for example, due to ZPE, helium won't
freeze unless under a pressure of at least 25 bar or 2.5 MPa),
ZPE is very much a form of thermal energy and may properly be included when tallying a substances internal energy.
Note too that absolute zero serves as the baseline atop which
thermodynamics and its equations are founded because they
deal with the exchange of thermal energy between systems
(a plurality of particles and elds modeled as an average).
Accordingly, one may examine ZPE-induced particle motion within a system that is at absolute zero but there can
never be a net outow of thermal energy from such a system. Also, the peak emittance wavelength of black-body
radiation shifts to innity at absolute zero; indeed, a peak
no longer exists and black-body photons can no longer escape. Because of ZPE, however, virtual photons are still
emitted at T=0. Such photons are called virtual because
they can't be intercepted and observed. Furthermore, this
zero-point radiation has a unique zero-point spectrum. However, even though a T=0 system emits zero-point radiation,
no net heat ow Q out of such a system can occur because if
the surrounding environment is at a temperature greater than
T=0, heat will ow inward, and if the surrounding environment is at T=0, there will be an equal ux of ZP radiation
both inward and outward. A similar Q equilibrium exists at
T=0 with the ZPE-induced spontaneous emission of photons
(which is more properly called a stimulated emission in this
context). The graph at upper right illustrates the relationship
of absolute zero to zero-point energy. The graph also helps

in the understanding of how zero-point energy got its name:


it is the vibrational energy matter retains at the zero kelvin
point. Derivation of the classical electromagnetic zero-point
radiation spectrum via a classical thermodynamic operation
involving van der Waals forces, Daniel C. Cole, Physical Review A, 42 (1990) 1847.
[4] At non-relativistic temperatures of less than about 30 GK, classical mechanics are sufficient to calculate the velocity of particles. At 30 GK, individual neutrons (the constituent of neutron stars and one of the few materials in the universe with temperatures in this range) have a Lorentz factor γ of 1.0042. Thus, the classic Newtonian formula for kinetic energy is in error less than half a percent for temperatures less than 30 GK.
[5] Even room-temperature air has an average molecular translational speed (not vector-isolated velocity) of 1822 km/hour. This is relatively fast for something the size of a molecule considering there are roughly 2.42×10¹⁶ of them crowded into a single cubic millimeter. Assumptions: average molecular weight of wet air = 28.838 g/mol and T = 296.15 K. Assumptions' primary variables: an altitude of 194 meters above mean sea level (the worldwide median altitude of human habitation), an indoor temperature of 23 °C, a dewpoint of 9 °C (40.85% relative humidity), and 760 mmHg (101.325 kPa) sea level-corrected barometric pressure.
[6] Adiabatic Cooling of Cesium to 700 nK in an Optical Lattice, A. Kastberg et al., Physical Review Letters 74 (1995) 1542, doi:10.1103/PhysRevLett.74.1542. It's noteworthy that a record cold temperature of 450 pK in a Bose–Einstein condensate of sodium atoms (achieved by A. E. Leanhardt et al. of MIT) equates to an average vector-isolated atom velocity of 0.4 mm/s and an average atom speed of 0.7 mm/s.
[7] The rate of translational motion of atoms and molecules is calculated based on thermodynamic temperature as follows:

$$\tilde{v} = \sqrt{\frac{k_B T}{m}}$$

where:

ṽ is the vector-isolated mean velocity of translational particle motion in m/s

kB is the Boltzmann constant = 1.380 6504(24)×10⁻²³ J/K

T is the thermodynamic temperature in kelvins

m is the molecular mass of substance in kilograms

In the above formula, molecular mass, m, in kilograms per particle is the quotient of a substance's molar mass (also known as atomic weight, atomic mass, relative atomic mass, and unified atomic mass units) in g/mol or daltons divided by 6.022 141 79(30)×10²⁶ (which is the Avogadro constant times one thousand). For diatomic molecules such as H2, N2, and O2, multiply atomic weight by two before plugging it into the above formula. The mean speed (not vector-isolated velocity) of an atom or molecule along any arbitrary path is calculated as follows:

$$\tilde{s} = \tilde{v}\,\sqrt{3}$$

where:

s̃ is the mean speed of translational particle motion in m/s

Note that the mean energy of the translational motions of a substance's constituent particles correlates to their mean speed, not velocity. Thus, substituting s̃ for ṽ in the classic formula for kinetic energy, Ek = ½mv², produces precisely the same value as does Emean = 3/2 kB T (as shown in the section titled The nature of kinetic energy, translational motion, and temperature). Note too that the Boltzmann constant and its related formulas establish that absolute zero is the point of both zero kinetic energy of particle motion and zero kinetic velocity (see also Note 1 above).
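The two formulas in this note are easy to evaluate. The sketch below (the helper name is ours; the constants are those quoted in this note and the air parameters are those quoted in note [5]) reproduces the 1822 km/h figure cited in note [5]:

```python
import math

k_B = 1.380_650_4e-23        # Boltzmann constant, J/K (value quoted in note [7])
N_A = 6.022_141_79e23        # Avogadro constant, 1/mol

def mean_speed(molar_mass_g_per_mol: float, T: float) -> float:
    """Mean translational speed in m/s: s = v * sqrt(3), with v = sqrt(kB T / m)."""
    m = molar_mass_g_per_mol / 1000.0 / N_A   # mass of one particle, kg
    v_tilde = math.sqrt(k_B * T / m)          # vector-isolated mean velocity
    return v_tilde * math.sqrt(3)             # mean speed along any path

# Note [5]'s assumptions: average molecular weight of wet air = 28.838 g/mol, T = 296.15 K.
s = mean_speed(28.838, 296.15)
print(round(s, 1), "m/s  =", round(s * 3.6), "km/h")   # about 506 m/s, i.e. about 1822 km/h
```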
[8] The internal degrees of freedom of molecules cause their external surfaces to vibrate and can also produce overall spinning motions (what can be likened to the jiggling and spinning of an otherwise stationary water balloon). If one examines a single molecule as it impacts a containers wall, some
of the kinetic energy borne in the molecules internal degrees of freedom can constructively add to its translational
motion during the instant of the collision and extra kinetic
energy will be transferred into the containers wall. This
would induce an extra, localized, impulse-like contribution
to the average pressure on the container. However, since
the internal motions of molecules are random, they have
an equal probability of destructively interfering with translational motion during a collision with a containers walls or
another molecule. Averaged across any bulk quantity of a
gas, the internal thermal motions of molecules have zero net
eect upon the temperature, pressure, or volume of a gas.
Molecules internal degrees of freedom simply provide additional locations where internal energy is stored. This is
precisely why molecular-based gases have greater specic
heat capacity than monatomic gases (where additional thermal energy must be added to achieve a given temperature
rise).
[9] When measured at constant-volume since dierent amounts
of work must be performed if measured at constant-pressure.
Nitrogen's CvH (100 kPa, 20 °C) equals 20.8 J·mol⁻¹·K⁻¹ vs. the monatomic gases, which equal 12.4717 J·mol⁻¹·K⁻¹. Citations: W. H. Freeman's Physical Chemistry, Part 3: Change
(422 kB PDF, here), Exercise 21.20b, p. 787. Also Georgia
State Universitys Molar Specic Heats of Gases.
[10] The speed at which thermal energy equalizes throughout the
volume of a gas is very rapid. However, since gases have
extremely low density relative to solids, the heat ux (the
thermal power passing per area) through gases is comparatively low. This is why the dead-air spaces in multi-pane
windows have insulating qualities.
[11] Diamond is a notable exception. Highly quantized modes of
phonon vibration occur in its rigid crystal lattice. Therefore,

169

not only does diamond have exceptionally poor specic heat


capacity, it also has exceptionally high thermal conductivity.
[12] Correlation is 752 (W·m⁻¹·K⁻¹)/(MS·cm), σ = 81, through
a 7:1 range in conductivity. Value and standard deviation
based on data for Ag, Cu, Au, Al, Ca, Be, Mg, Rh, Ir, Zn,
Co, Ni, Os, Fe, Pa, Pt, and Sn. Citation: Data from CRC
Handbook of Chemistry and Physics, 1st Student Edition and
this link to Web Elements home page.
[13] The cited emission wavelengths are for true black bodies in
equilibrium. In this table, only the sun so qualifies. CODATA 2006 recommended value of 2.897 7685(51)×10⁻³ m·K used for the Wien displacement law constant b.
[14] A record cold temperature of 450 ± 80 pK in a Bose–
Einstein condensate (BEC) of sodium atoms was achieved in
2003 by researchers at MIT. Citation: Cooling BoseEinstein
Condensates Below 500 Picokelvin, A. E. Leanhardt et al.,
Science 301, 12 Sept. 2003, Pg. 1515. Its noteworthy that
this records peak emittance black-body wavelength of 6,400
kilometers is roughly the radius of Earth.
[15] The peak emittance wavelength of 2.897 77 m corresponds to a frequency of 103.456 MHz.
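A one-line check of this note (the value of the speed of light is standard background, not something stated here):

```python
# frequency = c / wavelength for the 2.897 77 m peak quoted in note [15]
c = 299_792_458.0          # speed of light, m/s
wavelength = 2.897_77      # metres
print(c / wavelength / 1e6)   # about 103.456 MHz
```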
[16] Measurement was made in 2002 and has an uncertainty of ±3 kelvins. A 1989 measurement produced a value of 5777 ± 2.5 K. Citation: Overview of the Sun (Chapter 1
lecture notes on Solar Physics by Division of Theoretical
Physics, Dept. of Physical Sciences, University of Helsinki).
Download paper (252 kB PDF)
[17] The 350 MK value is the maximum peak fusion fuel temperature in a thermonuclear weapon of the TellerUlam conguration (commonly known as a hydrogen bomb). Peak
temperatures in Gadget-style ssion bomb cores (commonly
known as an atomic bomb) are in the range of 50 to 100
MK. Citation: Nuclear Weapons Frequently Asked Questions, 3.2.5 Matter At High Temperatures. Link to relevant
Web page. All referenced data was compiled from publicly
available sources.
[18] Peak temperature for a bulk quantity of matter was achieved
by a pulsed-power machine used in fusion physics experiments. The term bulk quantity draws a distinction from
collisions in particle accelerators wherein high temperature applies only to the debris from two subatomic particles or nuclei at any given instant. The >2 GK temperature
was achieved over a period of about ten nanoseconds during shot Z1137. In fact, the iron and manganese ions in
the plasma averaged 3.58 ± 0.41 GK (309 ± 35 keV) for 3
ns (ns 112 through 115). Citation: Ion Viscous Heating in
a Magnetohydrodynamically Unstable Z Pinch at Over 2×10⁹ Kelvin, M. G. Haines et al., Physical Review Letters 96,
Issue 7, id. 075003. Link to Sandias news release.
[19] Core temperature of a high-mass (>8–11 solar masses) star
after it leaves the main sequence on the HertzsprungRussell
diagram and begins the alpha process (which lasts one day)
of fusing silicon28 into heavier elements in the following

170

CHAPTER 6. CHAPTER 6. SYSTEM PROPERTIES

steps: sulfur-32 → argon-36 → calcium-40 → titanium-44 → chromium-48 → iron-52 → nickel-56. Within minutes of finishing the sequence, the star explodes as a Type II
supernova. Citation: Stellar Evolution: The Life and Death
of Our Luminous Neighbors (by Arthur Holland and Mark
Williams of the University of Michigan). Link to Web site.
More informative links can be found here, and here, and a
concise treatise on stars by NASA is here. Archived July 20,
2015, at the Wayback Machine.
[20] Based on a computer model that predicted a peak internal
temperature of 30 MeV (350 GK) during the merger of a
binary neutron star system (which produces a gammaray
burst). The neutron stars in the model were 1.2 and 1.6
solar masses respectively, were roughly 20 km in diameter,
and were orbiting around their barycenter (common center
of mass) at about 390 Hz during the last several milliseconds
before they completely merged. The 350 GK portion was a
small volume located at the pairs developing common core
and varied from roughly 1 to 7 km across over a time span
of around 5 ms. Imagine two city-sized objects of unimaginable density orbiting each other at the same frequency as
the G4 musical note (the 28th white key on a piano). Its
also noteworthy that at 350 GK, the average neutron has a
vibrational speed of 30% the speed of light and a relativistic
mass (m) 5% greater than its rest mass (m0 ). Citation: Torus
Formation in Neutron Star Mergers and Well-Localized Short
Gamma-Ray Bursts, R. Oechslin et al. of Max Planck Institute for Astrophysics., arXiv:astro-ph/0507099 v2, 22 Feb.
2006. Download paper (725 kB PDF) (from Cornell University Librarys arXiv.org server). To view a browser-based
summary of the research, click here.
[21] NewScientist: Eight extremes: The hottest thing in the universe, 07 March 2011, which stated While the details of
this process are currently unknown, it must involve a reball
of relativistic particles heated to something in the region of
a trillion kelvin
[22] Results of research by Stefan Bathe using the PHENIX detector on the Relativistic Heavy Ion Collider at Brookhaven
National Laboratory in Upton, New York, U.S.A. Bathe has
studied gold-gold, deuteron-gold, and proton-proton collisions to test the theory of quantum chromodynamics, the
theory of the strong force that holds atomic nuclei together.
Link to news release.
[23] Citation: How do physicists study particles? by CERN.
[24] The Planck frequency equals 1.854 87(14)×10⁴³ Hz (which is the reciprocal of one Planck time). Photons at the Planck frequency have a wavelength of one Planck length. The Planck temperature of 1.416 79(11)×10³² K equates to a calculated λmax = b/T wavelength of 2.045 31(16)×10⁻²⁶ nm. However, the actual peak emittance wavelength quantizes to the Planck length of 1.616 24(12)×10⁻²⁶ nm.
[25] Water's enthalpy of fusion (0 °C, 101.325 kPa) equates to 0.062284 eV per molecule, so adding one joule of thermal energy to 0 °C water ice causes 1.0021×10²⁰ water molecules to break away from the crystal lattice and become liquid.
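A quick verification of the arithmetic in this note (the electronvolt-to-joule conversion factor is standard background, not from the text):

```python
# How many molecules does 1 J of heat melt if the enthalpy of fusion is 0.062284 eV per molecule?
eV = 1.602_176_53e-19              # joules per electronvolt
per_molecule = 0.062284 * eV       # energy needed to free one molecule from the lattice, J
print(1.0 / per_molecule)          # about 1.0021e20 molecules per joule, as stated
```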
[26] Water's enthalpy of fusion is 6.0095 kJ/mol (0 °C,
101.325 kPa). Citation: Water Structure and Science, Water Properties, Enthalpy of fusion, (0 C, 101.325 kPa) (by
London South Bank University). Link to Web site. The only
metals with enthalpies of fusion not in the range of 630 J
mol1 K1 are (on the high side): Ta, W, and Re; and (on the
low side) most of the group 1 (alkaline) metals plus Ga, In,
Hg, Tl, Pb, and Np. Citation: This link to Web Elements
home page.
[27] Xenon value citation: This link to WebElements xenon data
(available values range from 2.3 to 3.1 kJ/mol). It is also
noteworthy that heliums heat of fusion of only 0.021 kJ/mol
is so weak of a bonding force that zero-point energy prevents
helium from freezing unless it is under a pressure of at least
25 atmospheres.
[28] CRC Handbook of Chemistry and Physics, 1st Student Edition and Web Elements.
[29] H₂O specific heat capacity, Cp = 0.075327 kJ·mol⁻¹·K⁻¹ (25
C); Enthalpy of fusion = 6.0095 kJ/mol (0 C, 101.325
kPa); Enthalpy of vaporization (liquid) = 40.657 kJ/mol
(100 C). Citation: Water Structure and Science, Water Properties (by London South Bank University). Link to Web site.
[30] Mobile conduction electrons are delocalized, i.e. not tied to
a specic atom, and behave rather like a sort of quantum
gas due to the eects of zero-point energy. Consequently,
even at absolute zero, conduction electrons still move between atoms at the Fermi velocity of about 1.6106 m/s.
Kinetic thermal energy adds to this speed and also causes
delocalized electrons to travel farther away from the nuclei.
[31] No other crystal structure can exceed the 74.048% packing density of a closest-packed arrangement. The two regular crystal lattices found in nature that have this density
are hexagonal close packed (HCP) and face-centered cubic
(FCC). These regular lattices are at the lowest possible energy state. Diamond is a closest-packed structure with an
FCC crystal lattice. Note too that suitable crystalline chemical compounds, although usually composed of atoms of different sizes, can be considered as closest-packed structures
when considered at the molecular level. One such compound is the common mineral known as magnesium aluminum spinel (MgAl2 O4 ). It has a face-centered cubic crystal lattice and no change in pressure can produce a lattice
with a lower energy state.
[32] Nearly half of the 92 naturally occurring chemical elements
that can freeze under a vacuum also have a closest-packed
crystal lattice. This set includes beryllium, osmium, neon,
and iridium (but excludes helium), and therefore have zero
latent heat of phase transitions to contribute to internal energy (symbol: U). In the calculation of enthalpy (formula:
H = U + pV), internal energy may exclude dierent sources
of thermal energy (particularly ZPE) depending on the nature of the analysis. Accordingly, all T=0 closest-packed


matter under a perfect vacuum has either minimal or zero


enthalpy, depending on the nature of the analysis. Use Of
Legendre Transforms In Chemical Thermodynamics, Robert
A. Alberty, Pure Appl.Chem., 73 (2001) 1349.

[33] Pressure also must be in absolute terms. The air still in a tire at 0 kPa-gage expands too as it gets hotter. It's not uncommon for engineers to overlook that one must work in terms of absolute pressure when compensating for temperature. For instance, a dominant manufacturer of aircraft tires published a document on temperature-compensating tire pressure which used gage pressure in the formula. However, the high gage pressures involved (180 psi; 12.4 bar; 1.24 MPa) mean the error would be quite small. With low-pressure automobile tires, where gage pressures are typically around 2 bar (200 kPa), failing to adjust to absolute pressure results in a significant error. Referenced document: Aircraft Tire Ratings (155 kB PDF, here).

[34] Regarding the spelling "gage" vs. "gauge" in the context of pressures measured relative to atmospheric pressure, the preferred spelling varies by country and even by industry. Further, both spellings are often used within a particular industry or country. Industries in British English-speaking countries typically use the spelling "gauge pressure" to distinguish it from the pressure-measuring instrument, which in the U.K. is spelled pressure gage. For the same reason, many of the largest American manufacturers of pressure transducers and instrumentation use the spelling gage pressure (the convention used here) in their formal documentation to distinguish it from the instrument, which is spelled pressure gauge. (See Honeywell-Sensotec's FAQ page and Fluke Corporation's product search page.)

[35] A difference of 100 kPa is used here instead of the 101.325 kPa value of one standard atmosphere. In 1982, the International Union of Pure and Applied Chemistry (IUPAC) recommended that for the purposes of specifying the physical properties of substances, the standard pressure (atmospheric pressure) should be defined as precisely 100 kPa (750.062 Torr). Besides being a round number, this had a very practical effect: relatively few people live and work at precisely sea level; 100 kPa equates to the mean pressure at an altitude of about 112 meters, which is closer to the 194-meter worldwide median altitude of human habitation. For especially low-pressure or high-accuracy work, true atmospheric pressure must be measured. Citation: IUPAC.org, Gold Book, Standard Pressure.

[36] Absolute Zero and the Conquest of Cold, Shachtman, Tom., Mariner Books, 1999.

[37] A Brief History of Temperature Measurement and; Uppsala University (Sweden), Linnaeus' thermometer.

[38] bipm.org

[39] According to The Oxford English Dictionary (OED), the term "Celsius's thermometer" had been used at least as early as 1797. Further, the term "The Celsius or Centigrade thermometer" was again used in reference to a particular type of thermometer at least as early as 1850. The OED also cites this 1928 reporting of a temperature: "My altitude was about 5,800 metres, the temperature was −28° Celsius". However, dictionaries seek to find the earliest use of a word or term and are not a useful resource as regards the terminology used throughout the history of science. According to several writings of Dr. Terry Quinn CBE FRS, Director of the BIPM (1988–2004), including Temperature Scales from the early days of thermometry to the 21st century (148 kB PDF, here) as well as Temperature (2nd Edition / 1990 / Academic Press / 0125696817), the term Celsius in connection with the centigrade scale was not used whatsoever by the scientific or thermometry communities until after the CIPM and CGPM adopted the term in 1948. The BIPM wasn't even aware that degree Celsius was in sporadic, non-scientific use before that time. It's also noteworthy that the twelve-volume, 1933 edition of the OED did not even have a listing for the word Celsius (but did have listings for both centigrade and centesimal in the context of temperature measurement). The 1948 adoption of Celsius accomplished three objectives:

(a) All common temperature scales would have their units named after someone closely associated with them; namely, Kelvin, Celsius, Fahrenheit, Réaumur and Rankine.

(b) Notwithstanding the important contribution of Linnaeus, who gave the Celsius scale its modern form, Celsius's name was the obvious choice because it began with the letter C. Thus, the symbol °C that for centuries had been used in association with the name centigrade could continue to be used and would simultaneously inherit an intuitive association with the new name.

(c) The new name eliminated the ambiguity of the term "centigrade", freeing it to refer exclusively to the French-language name for the unit of angular measurement.

6.4.8 External links

Kinetic Molecular Theory of Gases. An explanation (with interactive animations) of the kinetic motion of molecules and how it affects matter. By David N. Blauch, Department of Chemistry, Davidson College.

Zero Point Energy and Zero Point Field. A Web site with in-depth explanations of a variety of quantum effects. By Bernard Haisch, of Calphysics Institute.

6.5 Volume

For the general geometric concept, see volume.

In thermodynamics, the volume of a system is an important extensive parameter for describing its thermodynamic state.


The specific volume, an intensive property, is the system's volume per unit of mass. Volume is a function of state and is interdependent with other thermodynamic properties such as pressure and temperature. For example, volume is related to the pressure and temperature of an ideal gas by the ideal gas law.

The physical volume of a system may or may not coincide with a control volume used to analyze the system.

6.5.1 Overview

The volume of a thermodynamic system typically refers to the volume of the working fluid, such as, for example, the fluid within a piston. Changes to this volume may be made through an application of work, or may be used to produce work. An isochoric process, however, operates at a constant volume, thus no work can be produced. Many other thermodynamic processes will result in a change in volume. A polytropic process, in particular, causes changes to the system so that the quantity pVⁿ is constant (where p is pressure, V is volume, and n is the polytropic index, a constant). Note that for specific polytropic indexes, a polytropic process will be equivalent to a constant-property process. For instance, for very large values of n approaching infinity, the process becomes constant-volume.

Gases are compressible, thus their volumes (and specific volumes) may be subject to change during thermodynamic processes. Liquids, however, are nearly incompressible, thus their volumes can often be taken as constant. In general, compressibility is defined as the relative volume change of a fluid or solid as a response to a pressure, and may be determined for substances in any phase. Similarly, thermal expansion is the tendency of matter to change in volume in response to a change in temperature.

Many thermodynamic cycles are made up of varying processes, some which maintain a constant volume and some which do not. A vapor-compression refrigeration cycle, for example, follows a sequence where the refrigerant fluid transitions between the liquid and vapor states of matter.

Typical units for volume are m³ (cubic meters), l (liters), and ft³ (cubic feet).

6.5.2 Heat and work

Mechanical work performed on a working fluid causes a change in the mechanical constraints of the system; in other words, for work to occur, the volume must be altered. Hence volume is an important parameter in characterizing many thermodynamic processes where an exchange of energy in the form of work is involved.

Volume is one of a pair of conjugate variables, the other being pressure. As with all conjugate pairs, the product is a form of energy. The product pV is the energy lost to a system due to mechanical work. This product is one term which makes up enthalpy H:

$$H = U + pV,$$

where U is the internal energy of the system.

The second law of thermodynamics describes constraints on the amount of useful work which can be extracted from a thermodynamic system. In thermodynamic systems where the temperature and volume are held constant, the measure of useful work attainable is the Helmholtz free energy; and in systems where the volume is not held constant, the measure of useful work attainable is the Gibbs free energy.

Similarly, the appropriate value of heat capacity to use in a given process depends on whether the process produces a change in volume. The heat capacity is a function of the amount of heat added to a system. In the case of a constant-volume process, all the heat affects the internal energy of the system (i.e., there is no pV-work, and all the heat affects the temperature). However, in a process without a constant volume, the heat addition affects both the internal energy and the work (i.e., the enthalpy); thus the temperature changes by a different amount than in the constant-volume case and a different heat capacity value is required.

6.5.3 Specific volume

See also: Specific volume

Specific volume (ν) is the volume occupied by a unit of mass of a material.[1] In many cases the specific volume is a useful quantity to determine because, as an intensive property, it can be used to determine the complete state of a system in conjunction with another independent intensive variable. The specific volume also allows systems to be studied without reference to an exact operating volume, which may not be known (nor significant) at some stages of analysis.

The specific volume of a substance is equal to the reciprocal of its mass density. Specific volume may be expressed in m³/kg, ft³/lbm, ft³/slug, or mL/g:

$$\nu = \frac{V}{m} = \frac{1}{\rho}$$

where V is the volume, m is the mass and ρ is the density of the material.
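A minimal sketch of the relations just given, using illustrative numbers that are not from the text:

```python
# Specific volume: nu = V/m = 1/rho; enthalpy as defined above: H = U + pV.
V = 2.0          # volume of the sample, m^3 (illustrative)
m = 2.5          # mass of the sample, kg (illustrative)

nu = V / m       # specific volume, m^3/kg
rho = m / V      # mass density, kg/m^3
assert abs(nu - 1.0 / rho) < 1e-12

U = 5.0e5        # internal energy, J (illustrative)
p = 101_325.0    # pressure, Pa
H = U + p * V    # enthalpy, J
print(nu, H)
```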
For an ideal gas,

$$\nu = \frac{RT}{P}$$

where R is the specific gas constant, T is the temperature and P is the pressure of the gas.

Specific volume may also refer to molar volume.

6.5.4 Gas volume

Dependence on pressure and temperature

The volume of gas increases proportionally to absolute temperature and decreases inversely proportionally to pressure, approximately according to the ideal gas law:

$$V = \frac{nRT}{p}$$

where:

p is the pressure

V is the volume

n is the amount of substance of gas (moles)

R is the gas constant, 8.314 J·K⁻¹·mol⁻¹

T is the absolute temperature

To simplify, a volume of gas may be expressed as the volume it would have in standard conditions for temperature and pressure, which are 0 °C and 100 kPa.[2]

Humidity exclusion

In contrast to other gas components, water content in air, or humidity, to a higher degree depends on vaporization and condensation from or into water, which, in turn, mainly depends on temperature. Therefore, when applying more pressure to a gas saturated with water, all components will initially decrease in volume approximately according to the ideal gas law. However, some of the water will condense until returning to almost the same humidity as before, making the resulting total volume deviate from what the ideal gas law predicted. Conversely, decreasing temperature would also make some water condense, again making the final volume deviate from what the ideal gas law predicted.

Therefore, gas volume may alternatively be expressed excluding the humidity content: V (volume dry). This fraction more accurately follows the ideal gas law. By contrast, V (volume saturated) is the volume a gas mixture would have if humidity was added to it until saturation (or 100% relative humidity).

General conversion

To compare gas volume between two conditions of different temperature or pressure (1 and 2), assuming nR are the same, the following equation uses humidity exclusion in addition to the ideal gas law:

$$V_2 = V_1 \cdot \frac{T_2}{T_1} \cdot \frac{p_1 - p_{w,1}}{p_2 - p_{w,2}}$$

where, in addition to terms used in the ideal gas law:

pw is the partial pressure of gaseous water during condition 1 and 2, respectively

For example, calculating how much 1 liter of air (a) at 0 °C, 100 kPa, pw = 0 kPa (known as STPD, see below) would fill when breathed into the lungs where it is mixed with water vapor (l), where it quickly becomes 37 °C, 100 kPa, pw = 6.2 kPa (BTPS):

$$V_l = 1\ \text{l} \cdot \frac{310\ \text{K}}{273\ \text{K}} \cdot \frac{100\ \text{kPa} - 0\ \text{kPa}}{100\ \text{kPa} - 6.2\ \text{kPa}} = 1.21\ \text{l}$$

Common conditions

Some common expressions of gas volume with defined or variable temperature, pressure and humidity inclusion are:

ATPS: Ambient temperature (variable) and pressure (variable), saturated (humidity depends on temperature)

ATPD: Ambient temperature (variable) and pressure (variable), dry (no humidity)

BTPS: Body temperature (37 °C or 310 K) and pressure (generally same as ambient), saturated (47 mmHg or 6.2 kPa)

STPD: Standard temperature (0 °C or 273 K) and pressure (760 mmHg (101.33 kPa) or 100 kPa (750.06 mmHg)), dry (no humidity)

Conversion factors

The following conversion factors can be used to convert between expressions for volume of a gas:[3]

Partial volume

See also: Partial pressure

The partial volume of a particular gas is the volume which the gas would have if it alone occupied the volume, with unchanged pressure and temperature, and is useful in gas


mixtures, e.g. air, to focus on one particular gas component, e.g. oxygen.

It can be approximated both from partial pressure and molar fraction:[4]

$$V_X = V_{tot} \cdot \frac{P_X}{P_{tot}} = V_{tot} \cdot \frac{n_X}{n_{tot}}$$

where:

VX is the partial volume of any individual gas component (X)

Vtot is the total volume in gas mixture

PX is the partial pressure of gas X

Ptot is the total pressure in gas mixture

nX is the amount of substance of a gas (X)

ntot is the total amount of substance in gas mixture
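The general-conversion and partial-volume formulas above can be exercised with the article's own STPD-to-BTPS example; the helper name below is ours, and the oxygen fraction in the last lines is an assumed round figure used purely for illustration:

```python
def convert_volume(V1, T1, T2, p1, p2, pw1=0.0, pw2=0.0):
    """General conversion between two conditions, excluding humidity:
    V2 = V1 * (T2/T1) * (p1 - pw1) / (p2 - pw2)."""
    return V1 * (T2 / T1) * (p1 - pw1) / (p2 - pw2)

# The worked example: 1 l of dry air at 0 degC / 100 kPa (STPD) breathed into
# the lungs at 37 degC / 100 kPa with 6.2 kPa of water vapour (BTPS).
V_lungs = convert_volume(V1=1.0, T1=273.0, T2=310.0, p1=100.0, p2=100.0, pw1=0.0, pw2=6.2)
print(round(V_lungs, 2))    # about 1.21 l, matching the example

# Partial volume of oxygen in air: V_X = V_tot * n_X / n_tot.
V_tot = 1.0                 # litres of air
x_O2 = 0.209                # assumed molar fraction of oxygen (illustrative)
print(V_tot * x_O2)         # about 0.21 l "partial volume" of O2
```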

6.5.5 See also

Volumetric flow rate

6.5.6 References

[1] Cengel, Yunus A.; Boles, Michael A. (2002). Thermodynamics: an engineering approach. Boston: McGraw-Hill. p. 11. ISBN 0-07-238332-1.

[2] A. D. McNaught, A. Wilkinson (1997). Compendium of Chemical Terminology, The Gold Book (2nd ed.). Blackwell Science. ISBN 0-86542-684-8.

[3] Brown, Stanley; Miller, Wayne; Eason, M. (2006). Exercise Physiology: Basis of Human Movement in Health and Disease. Lippincott Williams & Wilkins. p. 113. ISBN 0-7817-3592-0. Retrieved 13 February 2014.

[4] Page 200 in: Medical biophysics. Flemming Cornelius. 6th Edition, 2008.

Chapter 7

7.1 Thermodynamic system

A thermodynamic system is the material and radiative content of a macroscopic volume in space that can be adequately described by thermodynamic state variables such as temperature, entropy, internal energy and pressure. Usually, by default, a thermodynamic system is taken to be in its own internal state of thermodynamic equilibrium, as opposed to a non-equilibrium state. The thermodynamic system is always enclosed by walls that separate it from its surroundings; these constrain the system. A thermodynamic system is subject to external interventions called thermodynamic operations; these alter the system's walls or its surroundings; as a result, the system undergoes thermodynamic processes according to the principles of thermodynamics. (This account mainly refers to the simplest kind of thermodynamic system; compositions of simple systems may also be considered.)

The thermodynamic state of a thermodynamic system is its internal state as specified by its state variables. In addition to the state variables, a thermodynamic account also requires a special kind of quantity called a state function, which is a function of the defining state variables. For example, if the state variables are internal energy, volume and mole amounts, that special function is the entropy. These quantities are inter-related by one or more functional relationships called equations of state, and by the system's characteristic equation. Thermodynamics imposes restrictions on the possible equations of state and on the characteristic equation. The restrictions are imposed by the laws of thermodynamics.

According to the permeabilities of the walls of a system, transfers of energy and matter occur between it and its surroundings, which are assumed to be unchanging over time, until a state of thermodynamic equilibrium is attained. The only states considered in equilibrium thermodynamics are equilibrium states. Classical thermodynamics includes equilibrium thermodynamics. It also considers: (a) systems considered in terms of cyclic sequences of processes rather than of states of the system; such were historically important in the conceptual development of the subject; and (b) systems considered in terms of processes described by steady flows; such are important in engineering.

In 1824 Sadi Carnot described a thermodynamic system as the working substance (such as the volume of steam) of any heat engine under study. The very existence of such thermodynamic systems may be considered a fundamental postulate of equilibrium thermodynamics, though it is only rarely cited as a numbered law.[1][2][3] According to Bailyn, the commonly rehearsed statement of the zeroth law of thermodynamics is a consequence of this fundamental postulate.[4]

In equilibrium thermodynamics the state variables do not include fluxes, because in a state of thermodynamic equilibrium all fluxes have zero values by postulation. Equilibrium thermodynamic processes may of course involve fluxes, but these must have ceased by the time a thermodynamic process or operation is complete, bringing a system to its eventual thermodynamic state. Non-equilibrium thermodynamics allows its state variables to include non-zero fluxes, which describe transfers of matter or energy or entropy between a system and its surroundings.[5]

7.1.1 Overview

Thermodynamic equilibrium is characterized by absence of flow of matter or energy. Equilibrium thermodynamics, as a subject in physics, considers macroscopic bodies of matter and energy in states of internal thermodynamic equilibrium. It uses the concept of thermodynamic processes, by which bodies pass from one equilibrium state to another by transfer of matter and energy between them. The term 'thermodynamic system' is used to refer to bodies of matter and energy in the special context of thermodynamics. The possible equilibria between bodies are determined by the physical properties of the walls that separate the bodies. Equilibrium thermodynamics in general does not measure time. Equilibrium thermodynamics is a relatively simple

and well settled subject. One reason for this is the existence of a well defined physical quantity called 'the entropy of a body'.

Non-equilibrium thermodynamics, as a subject in physics, considers bodies of matter and energy that are not in states of internal thermodynamic equilibrium, but are usually participating in processes of transfer that are slow enough to allow description in terms of quantities that are closely related to thermodynamic state variables. It is characterized by presence of flows of matter and energy. For this topic, very often the bodies considered have smooth spatial inhomogeneities, so that spatial gradients, for example a temperature gradient, are well enough defined. Thus the description of non-equilibrium thermodynamic systems is a field theory, more complicated than the theory of equilibrium thermodynamics. Non-equilibrium thermodynamics is a growing subject, not an established edifice. In general, it is not possible to find an exactly defined entropy for non-equilibrium problems. For many non-equilibrium thermodynamical problems, an approximately defined quantity called 'time rate of entropy production' is very useful. Non-equilibrium thermodynamics is mostly beyond the scope of the present article.

Another kind of thermodynamic system is considered in engineering. It takes part in a flow process. The account is in terms that approximate, well enough in practice in many cases, equilibrium thermodynamical concepts. This is mostly beyond the scope of the present article, and is set out in other articles, for example the article Flow process.

7.1.2 History

The first to create the concept of a thermodynamic system was the French physicist Sadi Carnot, whose 1824 Reflections on the Motive Power of Fire studied what he called the working substance, e.g., typically a body of water vapor, in steam engines, in regards to the system's ability to do work when heat is applied to it. The working substance could be put in contact with either a heat reservoir (a boiler), a cold reservoir (a stream of cold water), or a piston (to which the working body could do work by pushing on it). In 1850, the German physicist Rudolf Clausius generalized this picture to include the concept of the surroundings, and began referring to the system as a "working body". In his 1850 manuscript On the Motive Power of Fire, Clausius wrote:

The article Carnot heat engine shows the original piston-and-cylinder diagram used by Carnot in discussing his ideal engine; below, we see the Carnot engine as is typically modeled in current use:

Carnot engine diagram (modern) - where heat flows from a high temperature TH furnace through the fluid of the "working body" (working substance) and into the cold sink TC, thus forcing the working substance to do mechanical work W on the surroundings, via cycles of contractions and expansions.

In the diagram shown, the working body (system), a term introduced by Clausius in 1850, can be any fluid or vapor body through which heat Q can be introduced or transmitted through to produce work. In 1824, Sadi Carnot, in his famous paper Reflections on the Motive Power of Fire, had postulated that the fluid body could be any substance capable of expansion, such as vapor of water, vapor of alcohol, vapor of mercury, a permanent gas, or air, etc. Though, in these early years, engines came in a number of configurations, typically QH was supplied by a boiler, wherein water boiled over a furnace; QC was typically a stream of cold flowing water in the form of a condenser located on a separate part of the engine. The output work W was the movement of the piston as it turned a crank-arm, which typically turned a pulley to lift water out of flooded salt mines. Carnot defined work as "weight lifted through a height".

7.1.3 Systems in equilibrium

At thermodynamic equilibrium, a system's properties are, by definition, unchanging in time. Systems in equilibrium are much simpler and easier to understand than systems not in equilibrium. In some cases, when analyzing a thermodynamic process, one can assume that each intermediate state in the process is at equilibrium. This considerably simplifies the analysis.

In isolated systems it is consistently observed that as time goes on internal rearrangements diminish and stable conditions are approached. Pressures and temperatures tend to equalize, and matter arranges itself into one or a few relatively homogeneous phases. A system in which all processes of change have gone practically to completion is considered in a state of thermodynamic equilibrium. The thermodynamic properties of a system in equilibrium are unchanging in time. Equilibrium system states are much easier to describe in a deterministic manner than non-equilibrium states.

For a process to be reversible, each step in the process must be reversible. For a step in a process to be reversible, the system must be in equilibrium throughout the step. That ideal cannot be accomplished in practice because no step can be taken without perturbing the system from equilibrium, but the ideal can be approached by making changes slowly.

7.1.4 Walls

A system is enclosed by walls that bound it and connect it to its surroundings.[6][7][8][9][10][11] Often a wall restricts passage across it by some form of matter or energy, making the connection indirect. Sometimes a wall is no more than an imaginary two-dimensional closed surface through which the connection to the surroundings is direct.

A wall can be fixed (e.g. a constant volume reactor) or moveable (e.g. a piston). For example, in a reciprocating engine, a fixed wall means the piston is locked at its position; then, a constant volume process may occur. In that same engine, a piston may be unlocked and allowed to move in and out. Ideally, a wall may be declared adiabatic, diathermal, impermeable, permeable, or semi-permeable. Actual physical materials that provide walls with such idealized properties are not always readily available.

The system is delimited by walls or boundaries, either actual or notional, across which conserved (such as matter and energy) or unconserved (such as entropy) quantities can pass into and out of the system. The space outside the thermodynamic system is known as the surroundings, a reservoir, or the environment. The properties of the walls determine what transfers can occur. A wall that allows transfer of a quantity is said to be permeable to it, and a thermodynamic system is classified by the permeabilities of its several walls. A transfer between system and surroundings can arise by contact, such as conduction of heat, or by long-range forces such as an electric field in the surroundings.

A system with walls that prevent all transfers is said to be isolated. This is an idealized conception, because in practice some transfer is always possible, for example by gravitational forces. It is an axiom of thermodynamics that an isolated system eventually reaches internal thermodynamic equilibrium, when its state no longer changes with time.

The walls of a closed system allow transfer of energy as heat and as work, but not of matter, between it and its surroundings. The walls of an open system allow transfer both of matter and of energy.[12][13][14][15][16][17][18] This scheme of definition of terms is not uniformly used, though it is convenient for some purposes. In particular, some writers use 'closed system' where 'isolated system' is here used.[19][20]

Anything that passes across the boundary and effects a change in the contents of the system must be accounted for in an appropriate balance equation. The volume can be the region surrounding a single atom resonating energy, such as Max Planck defined in 1900; it can be a body of steam or air in a steam engine, such as Sadi Carnot defined in 1824. It could also be just one nuclide (i.e. a system of quarks) as hypothesized in quantum thermodynamics.

7.1.5 Surroundings

See also: Environment (systems)

The system is the part of the universe being studied, while the surroundings is the remainder of the universe that lies outside the boundaries of the system. It is also known as the environment, and the reservoir. Depending on the type of system, it may interact with the system by exchanging mass, energy (including heat and work), momentum, electric charge, or other conserved properties. The environment is ignored in analysis of the system, except in regards to these interactions.

7.1.6 Closed system

Main article: Closed system

In a closed system, no mass may be transferred in or out of the system boundaries. The system always contains the same amount of matter, but heat and work can be exchanged across the boundary of the system. Whether a system can exchange heat, work, or both is dependent on the property of its boundary.

Adiabatic boundary - not allowing any heat exchange: A thermally isolated system

Rigid boundary, not allowing exchange of work: a mechanically isolated system

One example is fluid being compressed by a piston in a cylinder. Another example of a closed system is a bomb calorimeter, a type of constant-volume calorimeter used in measuring the heat of combustion of a particular reaction. Electrical energy travels across the boundary to produce a spark between the electrodes and initiates combustion. Heat transfer occurs across the boundary after combustion, but no mass transfer takes place either way.

Beginning with the first law of thermodynamics for an open system, this is expressed as:

ΔU = Q − W + mi(h + ½v² + gz)i − me(h + ½v² + gz)e

where U is internal energy, Q is the heat added to the system, W is the work done by the system, and the subscripted terms account for energy carried by mass flowing in (i) and out (e). Since no mass is transferred in or out of a closed system, both expressions involving mass flow are zero and the first law of thermodynamics for a closed system is obtained. The first law of thermodynamics for a closed system states that the increase of internal energy of the system equals the amount of heat added to the system minus the work done by the system. For infinitesimal changes the first law for closed systems is stated by:

dU = δQ − δW.

If the work is due to a volume expansion by dV at a pressure P then:

δW = P dV.

For a homogeneous system undergoing a reversible process, the second law of thermodynamics reads:

δQ = T dS

where T is the absolute temperature and S is the entropy of the system. With these relations the fundamental thermodynamic relation, used to compute changes in internal energy, is expressed as:

dU = T dS − P dV.

For a simple system, with only one type of particle (atom or molecule), a closed system amounts to a constant number of particles. However, for systems undergoing a chemical reaction, there may be all sorts of molecules being generated and destroyed by the reaction process. In this case, the fact that the system is closed is expressed by stating that the total number of each elemental atom is conserved, no matter what kind of molecule it may be a part of. Mathematically:

∑j=1..m aij Nj = b0i

where Nj is the number of j-type molecules, aij is the number of atoms of element i in molecule j, and b0i is the total number of atoms of element i in the system, which remains constant, since the system is closed. There is one such equation for each element in the system.
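As a concrete illustration of this closedness condition, the following Python sketch (the molecule inventories and stoichiometric matrix are made-up illustration values, not taken from the text) checks that the elemental totals b0i stay constant when the molecular counts Nj change through a reaction such as 2 H2 + O2 → 2 H2O.

```python
import numpy as np

# Rows: elements (H, O); columns: molecule types (H2, O2, H2O).
# a[i][j] = number of atoms of element i in one molecule of type j.
a = np.array([
    [2, 0, 2],   # hydrogen atoms in H2, O2, H2O
    [0, 2, 1],   # oxygen atoms in H2, O2, H2O
])

def elemental_totals(N):
    """Return b_i = sum_j a_ij * N_j for a molecular inventory N."""
    return a @ np.asarray(N)

# Inventory before and after the reaction 2 H2 + O2 -> 2 H2O,
# starting from 4 H2 and 3 O2 molecules and letting all the H2 react.
N_before = [4, 3, 0]
N_after  = [0, 1, 4]

print(elemental_totals(N_before))  # [8 6]
print(elemental_totals(N_after))   # [8 6] -> closed system: b_i unchanged
```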

7.1.7 Isolated system

Main article: Isolated system

An isolated system is more restrictive than a closed system as it does not interact with its surroundings in any way. Mass and energy remain constant within the system, and no energy or mass transfer takes place across the boundary. As time passes in an isolated system, internal differences in the system tend to even out; pressures and temperatures tend to equalize, as do density differences. A system in which all equalizing processes have gone practically to completion is in a state of thermodynamic equilibrium.

Truly isolated physical systems do not exist in reality (except perhaps for the universe as a whole), because, for example, there is always gravity between a system with mass and masses elsewhere.[21][22][23][24][25] However, real systems may behave nearly as an isolated system for finite (possibly very long) times. The concept of an isolated system can serve as a useful model approximating many real-world situations. It is an acceptable idealization used in constructing mathematical models of certain natural phenomena.

In the attempt to justify the postulate of entropy increase in the second law of thermodynamics, Boltzmann's H-theorem used equations which assumed that a system (for example, a gas) was isolated: all the mechanical degrees of freedom could be specified, treating the walls simply as mirror boundary conditions. This inevitably led to Loschmidt's paradox. However, if the stochastic behavior of the molecules in actual walls is considered, along with the randomizing effect of the ambient, background thermal radiation, Boltzmann's assumption of molecular chaos can be justified.

The second law of thermodynamics for isolated systems states that the entropy of an isolated system not in equilibrium tends to increase over time, approaching a maximum value at equilibrium. Overall, in an isolated system, the internal energy is constant and the entropy can never decrease. A closed system's entropy, by contrast, can decrease, e.g. when heat is extracted from the system.

It is important to note that isolated systems are not equivalent to closed systems. Closed systems cannot exchange matter with the surroundings, but can exchange energy. Isolated systems can exchange neither matter nor energy with their surroundings, and as such are only theoretical and do not exist in reality (except, possibly, the entire universe).

It is worth noting that 'closed system' is often used in thermodynamics discussions when 'isolated system' would be correct, i.e. there is an assumption that energy does not enter or leave the system.

7.1.8 Selective transfer of matter

For a thermodynamic process, the precise physical properties of the walls and surroundings of the system are important, because they determine the possible processes.

An open system has one or several walls that allow transfer of matter. To account for the internal energy of the open system, this requires energy transfer terms in addition to those for heat and work. It also leads to the idea of the chemical potential.

A wall selectively permeable only to a pure substance can put the system in diffusive contact with a reservoir of that pure substance in the surroundings. Then a process is possible in which that pure substance is transferred between system and surroundings. Also, across that wall a contact equilibrium with respect to that substance is possible. By suitable thermodynamic operations, the pure substance reservoir can be dealt with as a closed system. Its internal energy and its entropy can be determined as functions of its temperature, pressure, and mole number.

A thermodynamic operation can render impermeable to matter all system walls other than the contact equilibrium wall for that substance. This allows the definition of an intensive state variable, with respect to a reference state of the surroundings, for that substance. The intensive variable is called the chemical potential; for component substance i it is usually denoted μi. The corresponding extensive variable can be the number of moles Ni of the component substance in the system.

For a contact equilibrium across a wall permeable to a substance, the chemical potentials of the substance must be the same on either side of the wall. This is part of the nature of thermodynamic equilibrium, and may be regarded as related to the zeroth law of thermodynamics.[26]

7.1.9 Open system

In an open system, matter may pass in and out of some segments of the system boundaries. There may be other segments of the system boundaries that pass heat or work but not matter. Respective account is kept of the transfers of energy across those and any other several boundary segments. In thermodynamic equilibrium, all flows have vanished.

7.1.10 See also

Physical system

7.1.11 References

[1] Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York, ISBN 0-88318-797-3, p. 20.
[2] Tisza, L. (1966). Generalized Thermodynamics, M.I.T Press, Cambridge MA, p. 119.
[3] Marsland, R. III, Brown, H.R., Valente, G. (2015). Time and irreversibility in axiomatic thermodynamics, Am. J. Phys., 83(7): 628–634.
[4] Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York, ISBN 0-88318-797-3, p. 22.
[5] Eu, B.C. (2002). Generalized Thermodynamics. The Thermodynamics of Irreversible Processes and Generalized Hydrodynamics, Kluwer Academic Publishers, Dordrecht, ISBN 1-4020-0788-4.
[6] Born, M. (1949). Natural Philosophy of Cause and Chance, Oxford University Press, London, p. 44.
[7] Tisza, L. (1966), pp. 109, 112.
[8] Haase, R. (1971), p. 7.
[9] Adkins, C.J. (1968/1975), p. 4.
[10] Callen, H.B. (1960/1985), pp. 15, 17.
[11] Tschoegl, N.W. (2000), p. 5.
[12] Prigogine, I., Defay, R. (1950/1954). Chemical Thermodynamics, Longmans, Green & Co, London, p. 66.
[13] Tisza, L. (1966). Generalized Thermodynamics, M.I.T Press, Cambridge MA, pp. 112–113.
[14] Guggenheim, E.A. (1949/1967). Thermodynamics. An Advanced Treatment for Chemists and Physicists, (1st edition 1949) 5th edition 1967, North-Holland, Amsterdam, p. 14.
[15] Münster, A. (1970). Classical Thermodynamics, translated by E.S. Halberstadt, Wiley–Interscience, London, pp. 6–7.


[16] Haase, R. (1971). Survey of Fundamental Laws, chapter 1 of Thermodynamics, pages 1–97 of volume 1, ed. W. Jost, of Physical Chemistry. An Advanced Treatise, ed. H. Eyring, D. Henderson, W. Jost, Academic Press, New York, lcn 73-117081, p. 3.
[17] Tschoegl, N.W. (2000). Fundamentals of Equilibrium and Steady-State Thermodynamics, Elsevier, Amsterdam, ISBN 0-444-50426-5, p. 5.
[18] Silbey, R.J., Alberty, R.A., Bawendi, M.G. (1955/2005). Physical Chemistry, fourth edition, Wiley, Hoboken NJ, p. 4.
[19] Callen, H.B. (1960/1985). Thermodynamics and an Introduction to Thermostatistics, (1st edition 1960) 2nd edition 1985, Wiley, New York, ISBN 0-471-86256-8, p. 17.
[20] ter Haar, D., Wergeland, H. (1966). Elements of Thermodynamics, Addison-Wesley Publishing, Reading MA, p. 43.
[21] I.M. Kolesnikov; V.A. Vinokurov; S.I. Kolesnikov (2001). Thermodynamics of Spontaneous and Non-Spontaneous Processes. Nova Science Publishers. p. 136. ISBN 1-56072-904-X.
[22] "A System and Its Surroundings". ChemWiki. University of California - Davis. Retrieved May 2012.
[23] "Hyperphysics". The Department of Physics and Astronomy of Georgia State University. Retrieved May 2012.
[24] Bryan Sanctuary. "Open, Closed and Isolated Systems in Physical Chemistry". Foundations of Quantum Mechanics and Physical Chemistry. McGill University (Montreal). Retrieved May 2012.
[25] "Material and Energy Balances for Engineers and Environmentalists" (PDF). Imperial College Press. p. 7. Retrieved May 2012.
[26] Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York, ISBN 0-88318-797-3, pp. 19–23.

Abbott, M.M.; van Hess, H.G. (1989). Thermodynamics with Chemical Applications (2nd ed.). McGraw-Hill.
Callen, H.B. (1960/1985). Thermodynamics and an Introduction to Thermostatistics, (1st edition 1960) 2nd edition 1985, Wiley, New York, ISBN 0-471-86256-8.
Halliday, David; Resnick, Robert; Walker, Jearl (2008). Fundamentals of Physics (8th ed.). Wiley.
Moran, Michael J.; Shapiro, Howard N. (2008). Fundamentals of Engineering Thermodynamics (6th ed.). Wiley.


Chapter 8

Chapter 8. Material Properties


8.1 Heat capacity

Heat capacity or thermal capacity is a measurable physical quantity equal to the ratio of the heat added to (or removed from) an object to the resulting temperature change.[1] The SI unit of heat capacity is the joule per kelvin (J/K) and the dimensional form is L²MT⁻²Θ⁻¹. Specific heat is the amount of heat needed to raise the temperature of one gram of mass by 1 degree Celsius.

Heat capacity is an extensive property of matter, meaning it is proportional to the size of the system. When expressing the same phenomenon as an intensive property, the heat capacity is divided by the amount of substance, mass, or volume, so that the quantity is independent of the size or extent of the sample. The molar heat capacity is the heat capacity per unit amount (SI unit: mole) of a pure substance, and the specific heat capacity, often simply called specific heat, is the heat capacity per unit mass of a material. Occasionally, in engineering contexts, the volumetric heat capacity is used.

Temperature reflects the average randomized kinetic energy of constituent particles of matter (e.g. atoms or molecules) relative to the centre of mass of the system, while heat is the transfer of energy across a system boundary into the body other than by work or matter transfer. Translation, rotation, and vibration of atoms represent the degrees of freedom of motion which classically contribute to the heat capacity of gases, while only vibrations are needed to describe the heat capacities of most solids,[2] as shown by the Dulong–Petit law. Other contributions can come from magnetic[3] and electronic[4] degrees of freedom in solids, but these rarely make substantial contributions.

For quantum mechanical reasons, at any given temperature, some of these degrees of freedom may be unavailable, or only partially available, to store thermal energy. In such cases, the specific heat capacity is a fraction of the maximum. As the temperature approaches absolute zero, the specific heat capacity of a system approaches zero, due to loss of available degrees of freedom. Quantum theory can be used to quantitatively predict the specific heat capacity of simple systems.

8.1.1 History

Main article: History of heat

In a previous theory of heat common in the early modern period, heat was thought to be a measurement of an invisible fluid, known as the caloric. Bodies were capable of holding a certain amount of this fluid, hence the term heat capacity, named and first investigated by Scottish chemist Joseph Black in the 1750s.[5]

Since the development of thermodynamics during the 18th and 19th centuries, scientists have abandoned the idea of a physical caloric, and instead understand heat as changes in a system's internal energy. That is, heat is no longer considered a fluid; rather, heat is a transfer of disordered energy. Nevertheless, at least in English, the term "heat capacity" survives. In some other languages, the term thermal capacity is preferred, and it is also sometimes used in English.

8.1.2 Units

Extensive properties

In the International System of Units, heat capacity has the unit joules per kelvin. An object's heat capacity (symbol C) is defined as the ratio of the amount of heat energy transferred to an object and the resulting increase in temperature of the object,

C = Q / ΔT,

assuming that the temperature range is sufficiently small so that the heat capacity is constant. More generally, because heat capacity does depend upon temperature, it should be written as

C(T) = δQ / dT,

where the symbol δ is used to imply that heat is a path function. Heat capacity is an extensive property, meaning it depends on the extent or size of the physical system in question. A sample containing twice the amount of substance as another sample requires the transfer of twice the amount of heat (Q) to achieve the same change in temperature (ΔT).
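As a rough numerical illustration of C = Q/ΔT and of the extensivity just described (the heat and temperature values below are made-up measurement numbers, not data from the text):

```python
def heat_capacity(Q_joules, dT_kelvin):
    """Estimate C = Q / dT for a suitably small temperature change."""
    return Q_joules / dT_kelvin

# A made-up sample: 500 J of heat raises its temperature by 1.2 K.
C_sample = heat_capacity(500.0, 1.2)        # ~417 J/K

# Doubling the amount of substance doubles the heat needed for the
# same temperature rise, so the heat capacity doubles as well.
C_double = heat_capacity(2 * 500.0, 1.2)    # ~833 J/K
print(C_sample, C_double)
```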
Intensive properties

For many experimental and theoretical purposes it is more convenient to report heat capacity as an intensive property: an intrinsic characteristic of a particular substance. This is most often accomplished by expressing the property in relation to a unit of mass. In science and engineering, such properties are often prefixed with the term specific.[6] International standards now recommend that specific heat capacity always refer to division by mass.[7] The units for the specific heat capacity are [c] = J/(kg·K).

In chemistry, heat capacity is often specified relative to one mole, the unit of amount of substance, and is called the molar heat capacity. It has the unit [Cmol] = J/(mol·K).

For some considerations it is useful to specify the volume-specific heat capacity, commonly called volumetric heat capacity, which is the heat capacity per unit volume and has SI units [s] = J/(m³·K). This is used almost exclusively for liquids and solids, since for gases it may be confused with specific heat capacity at constant volume.
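A small sketch of how the three intensive forms relate for liquid water; the numerical values are approximate handbook-style figures assumed for illustration (c ≈ 4186 J/(kg·K), molar mass ≈ 0.018 kg/mol, density ≈ 997 kg/m³), not values quoted by this text:

```python
# Intensive forms of heat capacity for liquid water (approximate values).
c_specific = 4186.0      # J/(kg*K), specific heat capacity
M = 0.018015             # kg/mol, molar mass of water
rho = 997.0              # kg/m^3, density near room temperature

c_molar = c_specific * M          # J/(mol*K), roughly 75.4
c_volumetric = c_specific * rho   # J/(m^3*K), roughly 4.17e6

print(f"molar:      {c_molar:.1f} J/(mol*K)")
print(f"volumetric: {c_volumetric:.3e} J/(m^3*K)")
```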
Alternative unit systems

While SI units are the most widely used, some countries and industries also use other systems of measurement. One older unit of heat is the kilogram-calorie (Cal), originally defined as the energy required to raise the temperature of one kilogram of water by one degree Celsius, typically from 14.5 to 15.5 °C. The specific average heat capacity of water on this scale would therefore be exactly 1 Cal/(°C·kg). However, due to the temperature-dependence of the specific heat, a large number of different definitions of the calorie came into being. Whilst once it was very prevalent, especially its smaller cgs variant the gram-calorie (cal), defined so that the specific heat of water would be 1 cal/(°C·g), in most fields the use of the calorie is now archaic.

In the United States other units of measure for heat capacity may be quoted in disciplines such as construction, civil engineering, and chemical engineering. A still common system is the English Engineering Units, in which the mass reference is pound mass and the temperature is specified in degrees Fahrenheit or Rankine. One (rare) unit of heat is the pound calorie (lb-cal), defined as the amount of heat required to raise the temperature of one pound of water by one degree Celsius. On this scale the specific heat of water would be 1 lb-cal/(°C·lbm). More common is the British thermal unit, the standard unit of heat in the U.S. construction industry. This is defined such that the specific heat of water is 1 BTU/(°F·lb).

8.1.3 Measurement of heat capacity

It may appear that the way to measure heat capacity is to add a known amount of heat to an object, and measure the change in temperature. This works reasonably well for many solids. However, for precise measurements, and especially for gases, other aspects of measurement become critical.

The heat capacity can be affected by many of the state variables that describe the thermodynamic system under study. These include the starting and ending temperature, as well as the pressure and the volume of the system before and after heat is added. So rather than a single way to measure heat capacity, there are actually several slightly different measurements of heat capacity. The most commonly used methods for measurement are to hold the object either at constant pressure (CP) or at constant volume (CV). Gases and liquids are typically also measured at constant volume. Measurements under constant pressure produce larger values than those at constant volume because the constant pressure values also include heat energy that is used to do work to expand the substance against the constant pressure as its temperature increases. This difference is particularly notable in gases, where values under constant pressure are typically 30% to 66.7% greater than those at constant volume. Hence the heat capacity ratio of gases is typically between 1.3 and 1.67.[8]

The specific heat capacities of substances comprising molecules (as distinct from monatomic gases) are not fixed constants and vary somewhat depending on temperature. Accordingly, the temperature at which the measurement is made is usually also specified. Examples of two common ways to cite the specific heat of a substance are as follows:[9]

Water (liquid): CP = 4185.5 J/(kg·K) (15 °C, 101.325 kPa)
Water (liquid): CV,m = 74.539 J/(mol·K) (25 °C)

For liquids and gases, it is important to know the pressure to which given heat capacity data refer. Most published data are given for standard pressure. However, quite different standard conditions for temperature and pressure have been defined by different organizations. The International Union of Pure and Applied Chemistry (IUPAC) changed its recommendation from one atmosphere to the round value 100 kPa (750.062 Torr).[notes 1]

Calculation from first principles

The path integral Monte Carlo method is a numerical approach for determining the values of heat capacity, based on quantum dynamical principles. However, good approximations can be made for gases in many states using simpler methods outlined below. For many solids composed of relatively heavy atoms (atomic number > iron), at non-cryogenic temperatures, the heat capacity at room temperature approaches 3R = 24.94 joules per kelvin per mole of atoms (Dulong–Petit law, R is the gas constant). Low temperature approximations for both gases and solids at temperatures less than their characteristic Einstein temperatures or Debye temperatures can be made by the methods of Einstein and Debye discussed below.

Thermodynamic relations and definition of heat capacity

The internal energy of a closed system changes either by adding heat to the system or by the system performing work. Written mathematically we have

ΔEsystem = Ein − Eout

or

dU = δQ − δW.

For work as a result of an increase of the system volume we may write

dU = δQ − P dV.

If the heat is added at constant volume, then the second term of this relation vanishes and one readily obtains

(∂U/∂T)V = (∂Q/∂T)V = CV.

This defines the heat capacity at constant volume, CV, which is also related to changes in internal energy. Another useful quantity is the heat capacity at constant pressure, CP. This quantity refers to the change in the enthalpy of the system, which is given by

H = U + PV.

A small change in the enthalpy can be expressed as

dH = δQ + V dP,

and therefore, at constant pressure, we have

(∂H/∂T)P = (∂Q/∂T)P = CP.

These two equations:

(∂U/∂T)V = (∂Q/∂T)V = CV
(∂H/∂T)P = (∂Q/∂T)P = CP

are property relations and are therefore independent of the type of process. In other words, they are valid for any substance going through any process. Both the internal energy and the enthalpy of a substance can change with the transfer of energy in many forms, i.e., heat.[10]

Relation between heat capacities

Main article: Relations between heat capacities

Measuring the heat capacity, sometimes referred to as specific heat, at constant volume can be prohibitively difficult for liquids and solids. That is, small temperature changes typically require large pressures to maintain a liquid or solid at constant volume, implying the containing vessel must be nearly rigid or at least very strong (see coefficient of thermal expansion and compressibility). Instead it is easier to measure the heat capacity at constant pressure (allowing the material to expand or contract freely) and solve for the heat capacity at constant volume using mathematical relationships derived from the basic thermodynamic laws. Starting from the fundamental thermodynamic relation one can show

CP − CV = T (∂P/∂T)V,n (∂V/∂T)P,n

where the partial derivatives are taken at constant volume and constant number of particles, and at constant pressure and constant number of particles, respectively.


This can also be rewritten as

CP − CV = V T α² / βT

where

α is the coefficient of thermal expansion,
βT is the isothermal compressibility.

The heat capacity ratio, or adiabatic index, is the ratio of the heat capacity at constant pressure to the heat capacity at constant volume. It is sometimes also known as the isentropic expansion factor.

Ideal gas[11]

For an ideal gas, evaluating the partial derivatives above according to the equation of state, where R is the gas constant for an ideal gas,

P V = n R T

CP − CV = T (∂P/∂T)V,n (∂V/∂T)P,n

P = nRT/V  gives  (∂P/∂T)V,n = nR/V

V = nRT/P  gives  (∂V/∂T)P,n = nR/P

Substituting,

T (∂P/∂T)V,n (∂V/∂T)P,n = T (nR/V)(nR/P) = nR · (nRT/(PV)) = nR,

this equation reduces simply to Mayer's relation:

CP,m − CV,m = R.
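A quick symbolic check of the ideal-gas result above, using the general relation CP − CV = T (∂P/∂T)V (∂V/∂T)P derived earlier; this is a sketch using SymPy, with arbitrary symbol names:

```python
import sympy as sp

n, R, T, V, P = sp.symbols('n R T V P', positive=True)

# Ideal-gas equation of state solved for P(V, T) and for V(P, T).
P_of_VT = n * R * T / V
V_of_PT = n * R * T / P

# General relation: Cp - Cv = T * (dP/dT)_V * (dV/dT)_P
Cp_minus_Cv = T * sp.diff(P_of_VT, T) * sp.diff(V_of_PT, T)

# Impose the equation of state (P V = n R T) and simplify.
print(sp.simplify(Cp_minus_Cv.subs(P, n * R * T / V)))   # -> R*n
```

Dividing the result nR by the amount of substance n gives Mayer's relation per mole, as stated above.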
Specific heat capacity

The specific heat capacity of a material on a per mass basis is

c = C / m,

which in the absence of phase transitions is equivalent to

c = Em = C / m = C / (ρV),

where

C is the heat capacity of a body made of the material in question,
m is the mass of the body,
V is the volume of the body, and
ρ = m/V is the density of the material.

For gases, and also for other materials under high pressures, there is need to distinguish between different boundary conditions for the processes under consideration (since values differ significantly between different conditions). Typical processes for which a heat capacity may be defined include isobaric (constant pressure, dP = 0) or isochoric (constant volume, dV = 0) processes. The corresponding specific heat capacities are expressed as

cP = (C/m)P,
cV = (C/m)V.

From the results of the previous section, dividing through by the mass gives the relation

cP − cV = α² T / (ρ βT).

A related parameter to c is C/V, the volumetric heat capacity. In engineering practice, cV for solids or liquids often signifies a volumetric heat capacity, rather than a constant-volume one. In such cases, the mass-specific heat capacity (specific heat) is often explicitly written with the subscript m, as cm. Of course, from the above relationships, for solids one writes

cm = C / m = cvolumetric / ρ.

For pure homogeneous chemical compounds with an established molecular or molar mass, or for which a molar quantity is established, heat capacity as an intensive property can be expressed on a per mole basis instead of a per mass basis by the following equations analogous to the per mass equations:

CP,m = (C/n)P,
CV,m = (C/n)V,

where n is the number of moles in the body or thermodynamic system. One may refer to such a per mole quantity as molar heat capacity to distinguish it from specific heat capacity on a per mass basis.

Polytropic heat capacity

The polytropic heat capacity is calculated at processes in which all the thermodynamic properties (pressure, volume, temperature) change:

Ci,m = C/n.

The most important polytropic processes run between the adiabatic and the isotherm functions; the polytropic index is between 1 and the adiabatic exponent (γ or κ).

Dimensionless heat capacity

The dimensionless heat capacity of a material is

C* = C/(nR) = C/(Nk)

where

C is the heat capacity of a body made of the material in question (J/K),
n is the amount of substance in the body (mol),
R is the gas constant (J/(K·mol)),
N is the number of molecules in the body (dimensionless),
k is Boltzmann's constant (J/(K·molecule)).

In the ideal gas article, the dimensionless heat capacity C* is expressed as ĉ and is related there directly to half the number of degrees of freedom per particle. This holds true for quadratic degrees of freedom, a consequence of the equipartition theorem. More generally, the dimensionless heat capacity relates the logarithmic increase in temperature to the increase in the dimensionless entropy per particle S* = S/(Nk), measured in nats:

C* = dS* / d ln T.

Alternatively, using base 2 logarithms, C* relates the base-2 logarithmic increase in temperature to the increase in the dimensionless entropy measured in bits.[12]

Heat capacity at absolute zero

From the definition of entropy

T dS = δQ,

the absolute entropy can be calculated by integrating from zero kelvin up to the final temperature Tf:

S(Tf) = ∫0^Tf δQ/T = ∫0^Tf (δQ/dT) (dT/T) = ∫0^Tf C(T) dT/T.

The heat capacity must be zero at zero temperature in order for the above integral not to yield an infinite absolute entropy, which would violate the third law of thermodynamics. One of the strengths of the Debye model is that (unlike the preceding Einstein model) it predicts the proper mathematical form of the approach of heat capacity toward zero, as absolute zero temperature is approached.
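A small numerical sketch of the entropy integral above, using a toy low-temperature heat capacity C(T) = a·T³ (a Debye-like form; the coefficient a and the final temperature are arbitrary illustration values). Because C → 0 as T → 0, the integrand C/T = a·T² stays finite and the integral converges:

```python
import numpy as np

a = 1.0e-3                       # arbitrary coefficient, J/K^4
def C(T):                        # toy Debye-like heat capacity, C -> 0 as T -> 0
    return a * T**3

Tf = 300.0
T = np.linspace(1e-6, Tf, 100_000)
S = np.trapz(C(T) / T, T)        # S(Tf) = integral of C(T)/T dT from 0 to Tf

print(S, a * Tf**3 / 3)          # numerical result vs. the exact value a*Tf^3/3
```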
Negative heat capacity (stars)

Most physical systems exhibit a positive heat capacity. However, even though it can seem paradoxical at first,[13][14] there are some systems for which the heat capacity is negative. These are inhomogeneous systems which do not meet the strict definition of thermodynamic equilibrium. They include gravitating objects such as stars and galaxies, and also sometimes some nano-scale clusters of a few tens of atoms close to a phase transition.[15] A negative heat capacity can result in a negative temperature.

According to the virial theorem, for a self-gravitating body like a star or an interstellar gas cloud, the average potential energy UPot and the average kinetic energy UKin are locked together in the relation

UPot = −2 UKin.

The total energy U (= UPot + UKin) therefore obeys


U = −UKin.

If the system loses energy, for example by radiating energy away into space, the average kinetic energy actually increases. If a temperature is defined by the average kinetic energy, then the system therefore can be said to have a negative heat capacity.[16]
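A minimal sketch of why this gives a negative heat capacity: taking UKin = (3/2)NkT as the definition of temperature and U = −UKin from the virial theorem, dU/dT is negative. The particle count below is an arbitrary toy value.

```python
k_B = 1.380649e-23   # J/K, Boltzmann's constant
N = 1.0e57           # arbitrary particle count for a toy self-gravitating body

def total_energy(T):
    U_kin = 1.5 * N * k_B * T    # temperature defined via mean kinetic energy
    return -U_kin                # virial theorem: U = U_pot + U_kin = -U_kin

T1, T2 = 1.0e7, 1.0e7 + 1.0
C = (total_energy(T2) - total_energy(T1)) / (T2 - T1)
print(C)   # negative: losing energy (radiating) raises the temperature
```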
A more extreme version of this occurs with black holes. According to black hole thermodynamics, the more mass and energy a black hole absorbs, the colder it becomes. In contrast, if it is a net emitter of energy, through Hawking radiation, it will become hotter and hotter until it boils away.

8.1.4 Theory of heat capacity

Factors that affect specific heat capacity

For any given substance, the heat capacity of a body is directly proportional to the amount of substance it contains (measured in terms of mass or moles or volume). Doubling the amount of substance in a body doubles its heat capacity, etc.

However, when this effect has been corrected for, by dividing the heat capacity by the quantity of substance in a body, the resulting specific heat capacity is a function of the structure of the substance itself. In particular, it depends on the number of degrees of freedom that are available to the particles in the substance; each independent degree of freedom allows the particles to store thermal energy. The translational kinetic energy of substance particles is only one of the many possible degrees of freedom which manifests as temperature change, and thus the larger the number of degrees of freedom available to the particles of a substance other than translational kinetic energy, the larger will be the specific heat capacity for the substance. For example, rotational kinetic energy of gas molecules stores heat energy in a way that increases heat capacity, since this energy does not contribute to temperature.

In addition, quantum effects require that whenever energy be stored in any mechanism associated with a bound system which confers a degree of freedom, it must be stored in certain minimal-sized deposits (quanta) of energy, or else not stored at all. Such effects limit the full ability of some degrees of freedom to store energy when their lowest energy storage quantum amount is not easily supplied at the average energy of particles at a given temperature. In general, for this reason, specific heat capacities tend to fall at lower temperatures where the average thermal energy available to each particle degree of freedom is smaller, and thermal energy storage begins to be limited by these quantum effects. Due to this process, as temperature falls toward absolute zero, so also does heat capacity.

Degrees of freedom

Main article: Degrees of freedom (physics and chemistry)

Molecules undergo many characteristic internal vibrations. Potential energy stored in these internal degrees of freedom contributes to a sample's energy content,[17][18] but not to its temperature. More internal degrees of freedom tend to increase a substance's specific heat capacity, so long as temperatures are high enough to overcome quantum effects.

Molecules are quite different from the monatomic gases like helium and argon. With monatomic gases, thermal energy comprises only translational motions. Translational motions are ordinary, whole-body movements in 3D space whereby particles move about and exchange energy in collisions, like rubber balls in a vigorously shaken container (see animation here[19]). These simple movements in the three dimensions of space mean individual atoms have three translational degrees of freedom. A degree of freedom is any form of energy in which heat transferred into an object can be stored. This can be in translational kinetic energy, rotational kinetic energy, or other forms such as potential energy in vibrational modes. Only three translational degrees of freedom (corresponding to the three independent directions in space) are available for any individual atom, whether it is free, as a monatomic molecule, or bound into a polyatomic molecule.

As to rotation about an atom's axis (again, whether the atom is bound or free), its energy of rotation is proportional to the moment of inertia for the atom, which is extremely small
compared to moments of inertia of collections of atoms. This is because almost all of the mass of a single atom is concentrated in its nucleus, which has a radius too small to give a significant moment of inertia. In contrast, the spacing of quantum energy levels for a rotating object is inversely proportional to its moment of inertia, and so this spacing becomes very large for objects with very small moments of inertia. For these reasons, the contribution from rotation of atoms on their axes is essentially zero in monatomic gases, because the energy spacing of the associated quantum levels is too large for significant thermal energy to be stored in rotation of systems with such small moments of inertia. For similar reasons, axial rotation around bonds joining atoms in diatomic gases (or along the linear axis in a linear molecule of any length) can also be neglected as a possible degree of freedom as well, since such rotation is similar to rotation of monatomic atoms, and so occurs about an axis with a moment of inertia too small to be able to store significant heat energy.

In polyatomic molecules, other rotational modes may become active, due to the much higher moments of inertia about certain axes which do not coincide with the linear axis of a linear molecule. These modes take the place of some translational degrees of freedom for individual atoms, since the atoms are moving in 3-D space as the molecule rotates. The narrowing of quantum mechanically determined energy spacing between rotational states results from situations where atoms are rotating around an axis that does not connect them, and thus form an assembly that has a large moment of inertia. This small difference between energy states allows the kinetic energy of this type of rotational motion to store heat energy at ambient temperatures. Furthermore, internal vibrational degrees of freedom also may become active (these are also a type of translation, as seen from the view of each atom). In summary, molecules are complex objects with a population of atoms that may move about within the molecule in a number of different ways (see animation at right), and each of these ways of moving is capable of storing energy if the temperature is sufficient.

The heat capacity of molecular substances (on a per-atom or atom-molar basis) does not exceed the heat capacity of monatomic gases, unless vibrational modes are brought into play. The reason for this is that vibrational modes allow energy to be stored as potential energy in intra-atomic bonds in a molecule, which are not available to atoms in monatomic gases. Up to about twice as much energy (on a per-atom basis) per unit of temperature increase can be stored in a solid as in a monatomic gas, by this mechanism of storing energy in the potentials of interatomic bonds. This gives many solids about twice the atom-molar heat capacity at room temperature of monatomic gases.

However, quantum effects heavily affect the actual ratio at lower temperatures (i.e., much lower than the melting temperature of the solid), especially in solids with light and tightly bound atoms (e.g., beryllium metal or diamond).

Polyatomic gases store intermediate amounts of energy, giving them a per-atom heat capacity that is between that of monatomic gases (3/2 R per mole of atoms, where R is the ideal gas constant) and the maximum of fully excited warmer solids (3 R per mole of atoms). For gases, heat capacity never falls below the minimum of 3/2 R per mole (of molecules), since the kinetic energy of gas molecules is always available to store at least this much thermal energy. However, at cryogenic temperatures in solids, heat capacity falls toward zero, as temperature approaches absolute zero.

Example of temperature-dependent specific heat capacity, in a diatomic gas

To illustrate the role of various degrees of freedom in storing heat, we may consider nitrogen, a diatomic molecule that has five active degrees of freedom at room temperature: the three comprising translational motion plus two rotational degrees of freedom internally. Although the constant-volume molar heat capacity of nitrogen at this temperature is five-thirds that of monatomic gases, on a per-mole-of-atoms basis it is five-sixths that of a monatomic gas. The reason for this is the loss of a degree of freedom due to the bond when it does not allow storage of thermal energy. Two separate nitrogen atoms would have a total of six degrees of freedom: the three translational degrees of freedom of each atom. When the atoms are bonded, the molecule will still only have three translational degrees of freedom, as the two atoms in the molecule move as one. However, the molecule cannot be treated as a point object, and the moment of inertia has increased sufficiently about two axes to allow two rotational degrees of freedom to be active at room temperature, to give five degrees of freedom. The moment of inertia about the third axis remains small, as this is the axis passing through the centres of the two atoms, and so is similar to the small moment of inertia for atoms of a monatomic gas. Thus, this degree of freedom does not act to store heat, and does not contribute to the heat capacity of nitrogen. The heat capacity per atom for nitrogen (5/2 R per mole of molecules = 5/4 R per mole of atoms) is therefore less than for a monatomic gas (3/2 R per mole of molecules or atoms), so long as the temperature remains low enough that no vibrational degrees of freedom are activated.[20]

At higher temperatures, however, nitrogen gas gains one more degree of internal freedom, as the molecule is excited into higher vibrational modes that store thermal energy. A vibrational degree of freedom contributes a heat capacity of 1/2 R each for kinetic and potential energy, for a total of R. Now the bond is contributing heat capacity, and (because of storage of energy in potential energy) is contributing more than if the atoms were not bonded. With full thermal excitation of bond vibration, the heat capacity per volume, or per mole of gas molecules, approaches
seven-thirds that of monatomic gases. Significantly, this is seven-sixths of the monatomic gas value on a mole-of-atoms basis, so this is now a higher heat capacity per atom than the monatomic figure, because the vibrational mode enabled in diatomic gases allows an extra degree of potential-energy freedom per pair of atoms, which monatomic gases cannot possess.[21][22] See thermodynamic temperature for more information on translational motions, kinetic (heat) energy, and their relationship to temperature.

However, even at these large temperatures where gaseous nitrogen is able to store 7/6ths of the energy per atom of a monatomic gas (making it more efficient at storing energy on an atomic basis), it still only stores 7/12ths of the maximal per-atom heat capacity of a solid, meaning it is not nearly as efficient at storing thermal energy on an atomic basis as solid substances can be. This is typical of gases, and results because many of the potential bonds which might be storing potential energy in gaseous nitrogen (as opposed to solid nitrogen) are lacking, because only one of the spatial dimensions for each nitrogen atom offers a bond into which potential energy can be stored without increasing the kinetic energy of the atom. In general, solids are most efficient, on an atomic basis, at storing thermal energy (that is, they have the highest per-atom or per-mole-of-atoms heat capacity).
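A tiny arithmetic sketch of the nitrogen bookkeeping described above, with degrees of freedom counted as in the text (values in J/(mol·K)):

```python
R = 8.314462618  # J/(mol*K), gas constant

def Cv_molar(f_trans=3, f_rot=0, n_vib_modes=0):
    """Constant-volume molar heat capacity from equipartition:
    R/2 per translational or rotational degree of freedom,
    R per fully excited vibrational mode (kinetic + potential)."""
    return (f_trans + f_rot) * R / 2 + n_vib_modes * R

# N2 at room temperature: 3 translational + 2 rotational, vibration frozen out.
Cv_room = Cv_molar(f_trans=3, f_rot=2)                 # 5/2 R per mole of N2
# N2 at high temperature: one vibrational mode fully excited.
Cv_hot = Cv_molar(f_trans=3, f_rot=2, n_vib_modes=1)   # 7/2 R per mole of N2

# Per mole of atoms (two atoms per molecule), compared with 3/2 R for a
# monatomic gas: 5/4 R at room temperature, 7/4 R = 7/6 of 3/2 R when hot.
print(Cv_room, Cv_room / 2, 1.5 * R)
print(Cv_hot, Cv_hot / 2)
```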
Per mole of different units

Per mole of molecules: When the specific heat capacity, c, of a material is measured (lowercase c means the unit quantity is in terms of mass), different values arise because different substances have different molar masses (essentially, the weight of the individual atoms or molecules). In solids, thermal energy arises due to the number of atoms that are vibrating. "Molar" heat capacities per mole of molecules, for both gases and solids, offer figures which are arbitrarily large, since molecules may be arbitrarily large. Such heat capacities are thus not intensive quantities for this reason, since the quantity of mass being considered can be increased without limit.

Per mole of atoms: Conversely, for molecular-based substances (which also absorb heat into their internal degrees of freedom), massive, complex molecules with high atomic count, like octane, can store a great deal of energy per mole and yet are quite unremarkable on a mass basis, or on a per-atom basis. This is because, in fully excited systems, heat is stored independently by each atom in a substance, not primarily by the bulk motion of molecules.

Thus, it is the heat capacity per-mole-of-atoms, not per-mole-of-molecules, which is the intensive quantity, and which comes closest to being a constant for all substances at high temperatures. This relationship was noticed empirically in 1819, and is called the Dulong–Petit law, after its two discoverers.[23] Historically, the fact that specific heat capacities are approximately equal when corrected by the presumed weight of the atoms of solids was an important piece of data in favor of the atomic theory of matter.

Because of the connection of heat capacity to the number of atoms, some care should be taken to specify a mole-of-molecules basis vs. a mole-of-atoms basis when comparing specific heat capacities of molecular solids and gases. Ideal gases have the same numbers of molecules per volume, so increasing molecular complexity adds heat capacity on a per-volume and per-mole-of-molecules basis, but may lower or raise heat capacity on a per-atom basis, depending on whether the temperature is sufficient to store energy as atomic vibration.

In solids, the quantitative limit of heat capacity in general is about 3 R per mole of atoms, where R is the ideal gas constant. This 3 R value is about 24.9 J/(mol·K). Six degrees of freedom (three kinetic and three potential) are available to each atom. Each of these six contributes 1/2 R specific heat capacity per mole of atoms.[24] This limit of 3 R per mole specific heat capacity is approached at room temperature for most solids, with significant departures at this temperature only for solids composed of the lightest atoms which are bound very strongly, such as beryllium (where the value is only 66% of 3 R), or diamond (where it is only 24% of 3 R). These large departures are due to quantum effects which prevent full distribution of heat into all vibrational modes, when the energy difference between vibrational quantum states is very large compared to the average energy available to each atom from the ambient temperature.

For monatomic gases, the specific heat is only half of 3 R per mole, i.e. 3/2 R per mole, due to loss of all potential energy degrees of freedom in these gases. For polyatomic gases, the heat capacity will be intermediate between these values on a per-mole-of-atoms basis, and (for heat-stable molecules) would approach the limit of 3 R per mole of atoms for gases composed of complex molecules, at higher temperatures at which all vibrational modes accept excitational energy. This is because very large and complex gas molecules may be thought of as relatively large blocks of solid matter which have lost only a relatively small fraction of degrees of freedom, as compared to a fully integrated solid.

For a list of heat capacities per atom-mole of various substances, in terms of R, see the last column of the table of heat capacities below.

Corollaries of these considerations for solids (volume-specific heat capacity): Since the bulk density of a solid
chemical element is strongly related to its molar mass (usually about 3 R per mole, as noted above), there exists a noticeable inverse correlation between a solid's density and its specific heat capacity on a per-mass basis. This is due to a very approximate tendency of atoms of most elements to be about the same size, and constancy of mole-specific heat capacity) result in a good correlation between the volume of any given solid chemical element and its total heat capacity. Another way of stating this is that the volume-specific heat capacity (volumetric heat capacity) of solid elements is roughly a constant. The molar volume of solid elements is very roughly constant, and (even more reliably) so also is the molar heat capacity for most solid substances. These two factors determine the volumetric heat capacity, which as a bulk property may be striking in consistency. For example, the element uranium is a metal which has a density almost 36 times that of the metal lithium, but uranium's specific heat capacity on a volumetric basis (i.e. per given volume of metal) is only 18% larger than lithium's.

Since the volume-specific corollary of the Dulong–Petit specific heat capacity relationship requires that atoms of all elements take up (on average) the same volume in solids, there are many departures from it, with most of these due to variations in atomic size. For instance, arsenic, which is only 14.5% less dense than antimony, has nearly 59% more specific heat capacity on a mass basis. In other words, even though an ingot of arsenic is only about 17% larger than an antimony one of the same mass, it absorbs about 59% more heat for a given temperature rise. The heat capacity ratios of the two substances closely follows the ratios of their molar volumes (the ratios of numbers of atoms in the same volume of each substance); the departure from the correlation to simple volumes in this case is due to lighter arsenic atoms being significantly more closely packed than antimony atoms, instead of similar size. In other words, similar-sized atoms would cause a mole of arsenic to be 63% larger than a mole of antimony, with a correspondingly lower density, allowing its volume to more closely mirror its heat capacity behavior.

Other factors

Hydrogen bonds: Hydrogen-containing polar molecules like ethanol, ammonia, and water have powerful, intermolecular hydrogen bonds when in their liquid phase. These bonds provide another place where heat may be stored as potential energy of vibration, even at comparatively low temperatures. Hydrogen bonds account for the fact that liquid water stores nearly the theoretical limit of 3 R per mole of atoms, even at relatively low temperatures (i.e. near the freezing point of water).

Impurities: In the case of alloys, there are several conditions in which small impurity concentrations can greatly affect the specific heat. Alloys may exhibit marked difference in behaviour even in the case of small amounts of impurities being one element of the alloy; for example impurities in semiconducting ferromagnetic alloys may lead to quite different specific heat properties.[25]

The simple case of the monatomic gas

In the case of a monatomic gas such as helium under constant volume, if it is assumed that no electronic or nuclear quantum excitations occur, each atom in the gas has only 3 degrees of freedom, all of a translational type. No energy dependence is associated with the degrees of freedom which define the position of the atoms, while the degrees of freedom corresponding to the momenta of the atoms are quadratic, and thus contribute to the heat capacity. There are N atoms, each of which has 3 components of momentum, which leads to 3N total degrees of freedom. This gives:

CV = (∂U/∂T)V = (3/2) N kB = (3/2) n R

CV,m = CV / n = (3/2) R

where

CV is the heat capacity at constant volume of the gas,
CV,m is the molar heat capacity at constant volume of the gas,
N is the total number of atoms present in the container,
n is the number of moles of atoms present in the container (n is the ratio of N and Avogadro's number),
R is the ideal gas constant (8.3144621[75] J/(mol·K)); R is equal to the product of Boltzmann's constant kB and Avogadro's number.

The following table shows experimental molar constant-volume heat capacity measurements taken for each noble monatomic gas (at 1 atm and 25 °C). It is apparent from the table that the experimental heat capacities of the monatomic noble gases agree with this simple application of statistical mechanics to a very high degree.
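A short numerical sketch of the prediction above; the constant-pressure value anticipates the relation Cp,m = CV,m + R stated just below, and the ratio ties back to the 1.67 upper bound mentioned in the measurement section:

```python
R = 8.3144621  # J/(mol*K), ideal gas constant

Cv_m = 1.5 * R        # 3/2 R ~ 12.47 J/(mol*K), constant volume, monatomic gas
Cp_m = Cv_m + R       # 5/2 R ~ 20.79 J/(mol*K), constant pressure
gamma = Cp_m / Cv_m   # heat capacity ratio, 5/3 ~ 1.67 for a monatomic gas

print(Cv_m, Cp_m, gamma)
```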


The molar heat capacity of a monatomic gas at constant pressure is then

Cp,m = CV,m + R = (5/2) R

Diatomic gas

Constant volume specific heat capacity of a diatomic gas (idealised). As temperature increases, heat capacity goes from 3/2 R (translation contribution only), to 5/2 R (translation plus rotation), finally to a maximum of 7/2 R (translation + rotation + vibration).

In the somewhat more complex case of an ideal gas of diatomic molecules, the presence of internal degrees of freedom is apparent. In addition to the three translational degrees of freedom, there are rotational and vibrational degrees of freedom. In general, the number of degrees of freedom, f, in a molecule with na atoms is 3na:

f = 3na

Mathematically, there are a total of three rotational degrees of freedom, one corresponding to rotation about each of the axes of three-dimensional space. However, in practice only the existence of two degrees of rotational freedom for linear molecules will be considered. This approximation is valid because the moment of inertia about the internuclear axis is vanishingly small with respect to other moments of inertia in the molecule (this is due to the very small rotational moments of single atoms, due to the concentration of almost all their mass at their centers; compare also the extremely small radii of the atomic nuclei compared to the distance between them in a diatomic molecule). Quantum mechanically, it can be shown that the interval between successive rotational energy eigenstates is inversely proportional to the moment of inertia about that axis. Because the moment of inertia about the internuclear axis is vanishingly small relative to the other two rotational axes, the energy spacing can be considered so high that no excitations of the rotational state can occur unless the temperature is extremely high. It is easy to calculate the expected number of vibrational degrees of freedom (or vibrational modes). There are three degrees of translational freedom and two degrees of rotational freedom, therefore

fvib = f − ftrans − frot = 6 − 3 − 2 = 1

Each rotational and translational degree of freedom will contribute R/2 to the total molar heat capacity of the gas. Each vibrational mode will contribute R to the total molar heat capacity, however. This is because for each vibrational mode there is a potential and a kinetic energy component; both the potential and kinetic components contribute R/2 to the total molar heat capacity of the gas. Therefore, a diatomic molecule would be expected to have a molar constant-volume heat capacity of

CV,m = 3R/2 + R + R = 7R/2 = 3.5 R

where the terms originate from the translational, rotational, and vibrational degrees of freedom, respectively.

The following is a table of some molar constant-volume heat capacities of various diatomic gases at standard temperature (25 °C = 298 K).

From the above table, clearly there is a problem with the above theory. All of the diatomics examined have heat capacities that are lower than those predicted by the equipartition theorem, except Br2. However, as the atoms composing the molecules become heavier, the heat capacities move closer to their expected values. One of the reasons for this phenomenon is the quantization of vibrational, and to a lesser extent, rotational states. In fact, if it is assumed that the molecules remain in their lowest energy vibrational state because the inter-level energy spacings for vibrational energies are large, the predicted molar constant-volume heat capacity for a diatomic molecule becomes just that from the contributions of translation and rotation:

CV,m = 3R/2 + R = 5R/2 = 2.5 R

which is a fairly close approximation of the heat capacities of the lighter molecules in the above table. If the quantum harmonic oscillator approximation is made, it turns out that the quantum vibrational energy level spacings are actually inversely proportional to the square root of the reduced mass of the atoms composing the diatomic molecule.

8.1. HEAT CAPACITY

191
In addition, a molecule may have rotational motion. The
kinetic energy of rotational motion is generally expressed
as

E=

Constant volume specic heat capacity of diatomic gases (real


gases) between about 200 K and 2000 K. This temperature range
is not large enough to include both quantum transitions in all gases.
Instead, at 200 K, all but hydrogen are fully rotationally excited, so
all have at least 5/2 R heat capacity. (Hydrogen is already below
5/2, but it will require cryogenic conditions for even H2 to fall to 3/2
R). Further, only the heavier gases fully reach 7/2 R at the highest
temperature, due to the relatively small vibrational energy spacing
of these molecules. HCl and H2 begin to make the transition above
500 K, but have not achieved it by 1000 K, since their vibrational
energy level spacing is too wide to fully participate in heat capacity,
even at this temperature.

molecule. Therefore, in the case of the heavier diatomic


molecules such as chlorine or bromine, the quantum vibrational energy level spacings become ner, which allows
more excitations into higher vibrational levels at lower temperatures. This limit for storing heat capacity in vibrational
modes, as discussed above, becomes 7R/2 = 3.5 R per mole
of gas molecules, which is fairly consistent with the measured value for Br2 at room temperature. As temperatures
rise, all diatomic gases approach this value.
General gas phase
The specic heat of the gas is best conceptualized in terms
of the degrees of freedom of an individual molecule. The
dierent degrees of freedom correspond to the dierent
ways in which the molecule may store energy. The molecule
may store energy in its translational motion according to the
formula:

E=

)
1 ( 2
m vx + vy2 + vz2
2

)
1 (
I1 12 + I2 22 + I3 32
2

where I is the moment of inertia tensor of the molecule,


and [1 , 2 , 3 ] is the angular velocity pseudo-vector (in
a coordinate system aligned with the principal axes of the
molecule). In general, then, there will be three additional
degrees of freedom corresponding to the rotational motion
of the molecule, (For linear molecules one of the inertia
tensor terms vanishes and there are only two rotational degrees of freedom). The degrees of freedom corresponding
to translations and rotations are called the rigid degrees of
freedom, since they do not involve any deformation of the
molecule.
The motions of the atoms in a molecule which are not part of its gross translational motion or rotation may be classified as vibrational motions. It can be shown that if there are n atoms in the molecule, there will be as many as v = 3n − 3 − nr vibrational degrees of freedom, where nr is the number of rotational degrees of freedom. A vibrational degree of freedom corresponds to a specific way in which all the atoms of a molecule can vibrate. The actual number of possible vibrations may be less than this maximal one, due to various symmetries.
For example, triatomic nitrous oxide (N2O) will have only 2 degrees of rotational freedom (since it is a linear molecule) and contains n = 3 atoms: thus the number of possible vibrational degrees of freedom will be v = (3 × 3) − 3 − 2 = 4.
There are four ways or modes in which the three atoms
can vibrate, corresponding to 1) A mode in which an atom
at each end of the molecule moves away from, or towards,
the center atom at the same time, 2) a mode in which either
end atom moves asynchronously with regard to the other
two, and 3) and 4) two modes in which the molecule bends
out of line, from the center, in the two possible planar directions that are orthogonal to its axis. Each vibrational degree
of freedom confers TWO total degrees of freedom, since
vibrational energy mode partitions into 1 kinetic and 1 potential mode. This would give nitrous oxide 3 translational,
2 rotational, and 4 vibrational modes (but these last giving
8 vibrational degrees of freedom), for storing energy. This
is a total of f = 3 + 2 + 8 = 13 total energy-storing degrees
of freedom, for N2 O.

For a bent molecule like water (H2O), a similar calculation gives 9 − 3 − 3 = 3 modes of vibration, and 3 (translational) + 3 (rotational) + 6 (vibrational) = 12 energy-storing degrees of freedom.
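This mode counting can be written out as a short sketch (the function and the example molecules below are illustrative additions, not part of the original text):

# Count energy-storing degrees of freedom of an ideal-gas molecule,
# using v = 3n - 3 - nr and counting each vibrational mode twice
# (one kinetic and one potential contribution).
def energy_storing_dof(n_atoms, linear):
    n_trans = 3
    n_rot = 2 if linear else 3              # linear molecules lose one rotation
    n_vib_modes = 3 * n_atoms - 3 - n_rot   # v = 3n - 3 - nr
    f = n_trans + n_rot + 2 * n_vib_modes   # each vibration stores KE + PE
    return n_trans, n_rot, n_vib_modes, f

print(energy_storing_dof(3, linear=True))   # N2O: (3, 2, 4, 13)
print(energy_storing_dof(3, linear=False))  # H2O: (3, 3, 3, 12)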

The storage of energy into degrees of freedom
If the molecule could be entirely described using classical mechanics, then the theorem of equipartition of energy could be used to predict that each degree of freedom would have an average energy of (1/2)kT, where k is Boltzmann's constant and T is the temperature. Our calculation of the constant-volume heat capacity would then be straightforward. Each molecule would hold, on average, an energy of (f/2)kT, where f is the total number of degrees of freedom in the molecule. Note that Nk = R if N is Avogadro's number, which is the relevant case when considering the heat capacity of a mole of molecules. Thus, the total internal energy of the gas would be (f/2)NkT, where N is the total number of molecules. The heat capacity (at constant volume) would then be the constant (f/2)Nk, the mole-specific heat capacity would be (f/2)R, the molecule-specific heat capacity would be (f/2)k, and the dimensionless heat capacity would be just f/2. Here again, each vibrational mode contributes two degrees of freedom toward f. Thus, a mole of nitrous oxide would have a total constant-volume heat capacity (including vibration) of (13/2)R by this calculation.

In summary, the molar heat capacity (mole-specific heat capacity) of an ideal gas with f degrees of freedom is given by

C_{V,m} = \frac{f}{2} R

This equation applies to all polyatomic gases, if the degrees of freedom are known.[26]

The constant-pressure heat capacity for any gas would exceed this by an extra factor of R (see Mayer's relation, above). As an example, Cp would be a total of (15/2)R per mole for nitrous oxide.
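As a quick check of the numbers quoted above for nitrous oxide, the same relation can be evaluated directly (a minimal sketch; R is rounded):

# Equipartition estimate of molar heat capacities from the degree-of-freedom count f.
R = 8.314  # J mol^-1 K^-1

def molar_heat_capacities(f):
    cv = f / 2 * R          # constant volume: (f/2) R
    cp = cv + R             # Mayer's relation: Cp = Cv + R
    return cv, cp

cv, cp = molar_heat_capacities(13)   # nitrous oxide, f = 13
print(cv / R, cp / R)                # 6.5 and 7.5, i.e. (13/2)R and (15/2)R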
The effect of quantum energy levels in storing energy in degrees of freedom

The various degrees of freedom cannot generally be considered to obey classical mechanics, however. Classically, the energy residing in each degree of freedom is assumed to be continuous: it can take on any positive value, depending on the temperature. In reality, the amount of energy that may reside in a particular degree of freedom is quantized: it may only be increased and decreased in finite amounts. A good estimate of the size of this minimum amount is the energy of the first excited state of that degree of freedom above its ground state. For example, the first vibrational state of the hydrogen chloride (HCl) molecule has an energy of about 5.74 × 10⁻²⁰ joule. If this amount of energy were deposited in a classical degree of freedom, it would correspond to a temperature of about 4156 K.

If the temperature of the substance is so low that the equipartition energy of (1/2)kT is much smaller than this excitation energy, then there will be little or no energy in this degree of freedom. This degree of freedom is then said to be "frozen out". As mentioned above, the temperature corresponding to the first excited vibrational state of HCl is about 4156 K. For temperatures well below this value, the vibrational degrees of freedom of the HCl molecule will be frozen out. They will contain little energy and will not contribute to the thermal energy or the heat capacity of HCl gas.

Energy storage mode "freeze-out" temperatures

It can be seen that for each degree of freedom there is a critical temperature at which the degree of freedom "unfreezes" and begins to accept energy in a classical way. In the case of translational degrees of freedom, this temperature is that temperature at which the thermal wavelength of the molecules is roughly equal to the size of the container. For a container of macroscopic size (e.g. 10 cm) this temperature is extremely small and has no significance, since the gas will certainly liquefy or freeze before this low temperature is reached. For any real gas translational degrees of freedom may be considered to always be classical and to contain an average energy of (3/2)kT per molecule.

The rotational degrees of freedom are the next to "unfreeze". In a diatomic gas, for example, the critical temperature for this transition is usually a few tens of kelvins, although with a very light molecule such as hydrogen the rotational energy levels will be spaced so widely that rotational heat capacity may not completely "unfreeze" until considerably higher temperatures are reached. Finally, the vibrational degrees of freedom are generally the last to unfreeze. As an example, for diatomic gases, the critical temperature for the vibrational motion is usually a few thousands of kelvins, and thus for the nitrogen in our example at room temperature, no vibration modes would be excited, and the constant-volume heat capacity at room temperature is (5/2)R per mole, not (7/2)R per mole. As seen above, with some unusually heavy gases such as iodine gas (I2) or bromine gas (Br2), some vibrational heat capacity may be observed even at room temperatures.
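The conversion between an energy-level spacing and its characteristic temperature is simply ΔE/k; a one-line sketch using the HCl value quoted above:

# Characteristic "freeze-out" temperature of a quantized mode (sketch).
k_B = 1.380649e-23          # Boltzmann constant, J/K
delta_E = 5.74e-20          # first vibrational excitation of HCl, J (from the text)
print(delta_E / k_B)        # ~4157 K, matching the ~4156 K quoted above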

So far it has been assumed that atoms have no rotational or internal degrees of freedom. This is in fact untrue. For example, atomic electrons can exist in excited states, and even the atomic nucleus can have excited states as well. Each of these internal degrees of freedom is assumed to be frozen out due to its relatively high excitation energy. Nevertheless, for sufficiently high temperatures, these degrees of freedom cannot be ignored. In a few exceptional cases, such molecular electronic transitions are of sufficiently low energy that they contribute to heat capacity at room temperature, or even at cryogenic temperatures. One example of an electronic transition degree of freedom which contributes heat capacity at standard temperature is that of nitric oxide (NO), in which the single electron in an anti-bonding molecular orbital has energy transitions which contribute to the heat capacity of the gas even at room temperature.
An example of a nuclear magnetic transition degree of freedom which is of importance to heat capacity is the transition which converts the spin isomers of hydrogen gas (H2) into each other. At room temperature, the proton spins of hydrogen gas are aligned 75% of the time, resulting in orthohydrogen when they are. Thus, some thermal energy has been stored in the degree of freedom available when parahydrogen (in which spins are anti-aligned) absorbs energy and is converted to the higher-energy ortho form. However, at the temperature of liquid hydrogen, not enough heat energy is available to produce orthohydrogen (that is, the transition energy between forms is large enough to "freeze out" at this low temperature), and thus the parahydrogen form predominates. The heat capacity of the transition is sufficient to release enough heat, as orthohydrogen converts to the lower-energy parahydrogen, to boil the hydrogen liquid to gas again, if this evolved heat is not removed with a catalyst after the gas has been cooled and condensed. This example also illustrates the fact that some modes of storage of heat may not be in constant equilibrium with each other in substances, and heat absorbed or released from such phase changes may "catch up" with temperature changes of substances only after a certain time. In other words, the heat evolved and absorbed from the ortho-para isomeric transition contributes to the heat capacity of hydrogen on long time-scales, but not on short time-scales. These time scales may also depend on the presence of a catalyst.

Less exotic phase changes may contribute to the heat capacity of substances and systems as well, as (for example) when water is converted back and forth from solid to liquid or gas form. Phase changes store heat energy entirely in breaking the bonds of the potential energy interactions between molecules of a substance. As in the case of hydrogen, it is also possible for phase changes to be hindered as the temperature drops, so that they do not catch up and become apparent without a catalyst. For example, it is possible to supercool liquid water to below the freezing point and not observe the heat evolved when the water changes to ice, so long as the water remains liquid. This heat appears instantly when the water freezes.

Solid phase

Main articles: Einstein solid, Debye model, and Kinetic theory of solids

[Figure: The dimensionless heat capacity divided by three, as a function of temperature, as predicted by the Debye model and by Einstein's earlier model. The horizontal axis is the temperature divided by the Debye temperature. Note that, as expected, the dimensionless heat capacity is zero at absolute zero and rises to a value of three as the temperature becomes much larger than the Debye temperature. The red line corresponds to the classical limit of the Dulong–Petit law.]

For matter in a crystalline solid phase, the Dulong–Petit law, which was discovered empirically, states that the molar heat capacity assumes the value 3 R. Indeed, for solid metallic chemical elements at room temperature, molar heat capacities range from about 2.8 R to 3.4 R. Large exceptions at the lower end involve solids composed of relatively low-mass, tightly bonded atoms, such as beryllium at 2.0 R, and diamond at only 0.735 R. The latter conditions create larger quantum vibrational energy spacing, so that many vibrational modes have energies too high to be populated (and thus are frozen out) at room temperature. At the higher end of possible heat capacities, heat capacity may exceed 3 R by modest amounts, due to contributions from anharmonic vibrations in solids, and sometimes a modest contribution from conduction electrons in metals. These are not degrees of freedom treated in the Einstein or Debye theories.
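For reference, a numerical sketch of the Debye-model prediction mentioned above; the Debye temperature used here is an arbitrary illustrative value, and the integration is a simple midpoint rule:

# Debye model: C_V = 9 R (T/theta_D)^3 * integral_0^{theta_D/T} x^4 e^x/(e^x-1)^2 dx
import math

R = 8.314  # J mol^-1 K^-1

def debye_cv(T, theta_D, steps=2000):
    if T <= 0:
        return 0.0
    x_max = theta_D / T
    dx = x_max / steps
    integral = 0.0
    for i in range(steps):
        x = (i + 0.5) * dx                     # midpoint rule avoids x = 0
        integral += x**4 * math.exp(x) / (math.exp(x) - 1.0)**2 * dx
    return 9.0 * R * (T / theta_D)**3 * integral

theta = 400.0                                  # hypothetical Debye temperature, K
for T in (50.0, 200.0, 1000.0):
    print(T, debye_cv(T, theta) / R)           # tends toward 3 R at high T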
The theoretical maximum heat capacity for multi-atomic
gases at higher temperatures, as the molecules become
larger, also approaches the DulongPetit limit of 3 R, so
long as this is calculated per mole of atoms, not molecules.
The reason for this behavior is that, in theory, gases
with very large molecules have almost the same hightemperature heat capacity as solids, lacking only the (small)
heat capacity contribution that comes from potential energy
that cannot be stored between separate molecules in a gas.


The Dulong–Petit limit results from the equipartition theorem, and as such is only valid in the classical limit of a microstate continuum, which is a high-temperature limit. For light and non-metallic elements, as well as most of the common molecular solids based on carbon compounds at standard ambient temperature, quantum effects may also play an important role, as they do in multi-atomic gases. These effects usually combine to give heat capacities lower than 3 R per mole of atoms in the solid, although heat capacities calculated per mole of molecules in molecular solids may be more than 3 R. For example, the heat capacity of water ice at the melting point is about 4.6 R per mole of molecules, but only 1.5 R per mole of atoms. As noted, heat capacity values far lower than 3 R per atom (as is the case with diamond and beryllium) result from "freezing out" of possible vibration modes for light atoms at suitably low temperatures, just as happens in many low-mass-atom gases at room temperatures (where vibrational modes are all frozen out). Because of high crystal binding energies, the effects of vibrational mode freezing are observed in solids more often than in liquids: for example the heat capacity of liquid water is twice that of ice at near the same temperature, and is again close to the 3 R per mole of atoms of the Dulong–Petit theoretical maximum.
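The per-mole-of-molecules versus per-mole-of-atoms comparison used for ice above amounts to a simple division (a sketch; the values are the ones quoted in the text):

R = 8.314
c_per_mol_molecules = 4.6 * R   # water ice near the melting point
atoms_per_molecule = 3          # H2O
print(c_per_mol_molecules / atoms_per_molecule / R)   # ~1.5 R per mole of atoms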
Liquid phase

A general theory of the heat capacity of liquids has not yet been achieved, and is still an active area of research. It was long thought that phonon theory is not able to explain the heat capacity of liquids, since liquids only sustain longitudinal, but not transverse, phonons, which in solids are responsible for 2/3 of the heat capacity. However, Brillouin scattering experiments with neutrons and with X-rays, confirming an intuition of Yakov Frenkel,[27] have shown that transverse phonons do exist in liquids, albeit restricted to frequencies above a threshold called the Frenkel frequency. Since most energy is contained in these high-frequency modes, a simple modification of the Debye model is sufficient to yield a good approximation to experimental heat capacities of simple liquids.[28]

Amorphous materials can be considered a type of liquid. The specific heat of amorphous materials has characteristic discontinuities at the glass transition temperature. These discontinuities are frequently used to detect the glass transition temperature where a supercooled liquid transforms to a glass.[29]

8.1.5 Table of specific heat capacities

See also: List of thermal conductivities

Note that the especially high molar values, as for paraffin, gasoline, water and ammonia, result from calculating specific heats in terms of moles of molecules. If specific heat is expressed per mole of atoms for these substances, none of the constant-volume values exceed, to any large extent, the theoretical Dulong–Petit limit of 25 J·mol⁻¹·K⁻¹ = 3 R per mole of atoms (see the last column of this table). Paraffin, for example, has very large molecules and thus a high heat capacity per mole, but as a substance it does not have remarkable heat capacity in terms of volume, mass, or atom-mol (which is just 1.41 R per mole of atoms, or less than half of most solids, in terms of heat capacity per atom). In the last column, major departures of solids at standard temperatures from the Dulong–Petit law value of 3 R are usually due to low atomic weight plus high bond strength (as in diamond) causing some vibration modes to have too much energy to be available to store thermal energy at the measured temperature. For gases, departure from 3 R per mole of atoms in this table is generally due to two factors: (1) failure of the higher quantum-energy-spaced vibration modes in gas molecules to be excited at room temperature, and (2) loss of potential energy degree of freedom for small gas molecules, simply because most of their atoms are not bonded maximally in space to other atoms, as happens in many solids.

A Assuming an altitude of 194 metres above mean sea level (the worldwide median altitude of human habitation), an indoor temperature of 23 °C, a dewpoint of 9 °C (40.85% relative humidity), and 760 mmHg sea level–corrected barometric pressure (molar water vapor content = 1.16%).

*Derived data by calculation. This is for water-rich tissues such as brain. The whole-body average figure for mammals is approximately 2.9 J·cm⁻³·K⁻¹.[38]

8.1.6 Mass heat capacity of building materials

See also: Thermal mass

(Usually of interest to builders and solar designers)

8.1.7 Further reading

Encyclopædia Britannica, 2015, "Heat capacity" (alternate title: thermal capacity), accessed 14 February 2015.

Emmerich Wilhelm & Trevor M. Letcher, Eds., 2010, Heat Capacities: Liquids, Solutions and Vapours, Cambridge, U.K.: Royal Society of Chemistry, ISBN 0-85404-176-1, accessed 14 February 2014. A very recent outline of selected traditional aspects of the title subject, including a recent specialist introduction to its theory (Emmerich Wilhelm, "Heat Capacities: Introduction, Concepts, and Selected Applications", Chapter 1, pp. 1–27), chapters on traditional and more contemporary experimental methods such as photoacoustic methods (e.g., Jan Thoen & Christ Glorieux, "Photothermal Techniques for Heat Capacities"), and chapters on newer research interests, including the heat capacities of proteins and other polymeric systems (Chs. 16, 15), of liquid crystals (Ch. 17), etc.

8.1.8 See also

Quantum statistical mechanics
Heat capacity ratio
Statistical mechanics
Thermodynamic equations
Thermodynamic databases for pure substances
Heat equation
Heat transfer coefficient
Latent heat
Material properties (thermodynamics)
Joback method (estimation of heat capacities)
Specific melting heat
Specific heat of vaporization
Volumetric heat capacity
Thermal mass
R-value (insulation)
Storage heater

8.1.9 Notes

[1] IUPAC, Compendium of Chemical Terminology, 2nd ed. (the "Gold Book") (1997). Online corrected version: (2006) "Standard Pressure". Besides being a round number, this had a very practical effect: relatively few people live and work at precisely sea level; 100 kPa equates to the mean pressure at an altitude of about 112 metres (which is closer to the 194-metre, worldwide median altitude of human habitation).

8.1.10 References

[1] Halliday, David; Resnick, Robert (2013). Fundamentals of Physics. Wiley. p. 524.

[2] Kittel, Charles (2005). Introduction to Solid State Physics (8th ed.). Hoboken, New Jersey, USA: John Wiley & Sons. p. 141. ISBN 0-471-41526-X.

[3] Blundell, Stephen (2001). Magnetism in Condensed Matter. Oxford Master Series in Condensed Matter Physics (1st ed.). Oxford University Press. p. 27. ISBN 978-0-19-850591-4.

[4] Kittel, Charles (2005). Introduction to Solid State Physics (8th ed.). Hoboken, New Jersey, USA: John Wiley & Sons. p. 141. ISBN 0-471-41526-X.

[5] Laidler, Keith J. (1993). The World of Physical Chemistry. Oxford University Press. ISBN 0-19-855919-4.

[6] International Union of Pure and Applied Chemistry, Physical Chemistry Division. Quantities, Units and Symbols in Physical Chemistry (PDF). Blackwell Sciences. p. 7. "The adjective specific before the name of an extensive quantity is often used to mean divided by mass."

[7] International Bureau of Weights and Measures (2006), The International System of Units (SI) (PDF) (8th ed.), ISBN 92-822-2213-6.

[8] Lange's Handbook of Chemistry, 10th ed., page 1524.

[9] "Water – Thermal Properties". Engineeringtoolbox.com. Retrieved 2013-10-31.

[10] Thermodynamics: An Engineering Approach by Yunus A.


Cengal and Michael A. Boles
[11] Yunus A. Cengel and Michael A. Boles,Thermodynamics:
An Engineering Approach 7th Edition, , McGraw-Hill,
2010,ISBN 007-352932-X
[12] Fraundorf, P. (2003). Heat capacity in bits. American
Journal of Physics 71 (11): 1142. arXiv:cond-mat/9711074.
Bibcode:2003AmJPh..71.1142F. doi:10.1119/1.1593658.
[13] D. Lynden-Bell & R. M. Lynden-Bell (Nov
1977).
On the negative specic heat paradox.
Monthly Notices of the Royal Astronomical Society
181:
405419.
Bibcode:1977MNRAS.181..405L.
doi:10.1093/mnras/181.3.405.
[14] Lynden-Bell, D. (Dec 1998).
Negative Specic
Heat in Astronomy, Physics and Chemistry.
PhysarXiv:cond-mat/9812172v1.
ica A 263: 293304.
Bibcode:1999PhyA..263..293L.
doi:10.1016/S03784371(98)00518-4.
[15] Schmidt, Martin; Kusche, Robert; Hippler, Thomas;
Donges, Jrn; Kronmller, Werner; Issendor, von, Bernd;
Haberland, Hellmut (2001). Negative Heat Capacity
for a Cluster of 147 Sodium Atoms. Physical Review
Letters 86 (7): 11914. Bibcode:2001PhRvL..86.1191S.
doi:10.1103/PhysRevLett.86.1191. PMID 11178041.


[16] See e.g., Wallace, David (2010). Gravity, entropy, and cosmology: in search of clarity (preprint). British Journal for
the Philosophy of Science 61 (3): 513. arXiv:0907.0659.
Bibcode:2010BJPS...61..513W. doi:10.1093/bjps/axp048.
Section 4 and onwards.
[17] Reif, F. (1965). Fundamentals of statistical and thermal
physics. McGraw-Hill. pp. 253254.
[18] Charles Kittel; Herbert Kroemer (2000). Thermal physics.
Freeman. p. 78. ISBN 0-7167-1088-9.
[19] Media:Translational motion.gif
[20] Smith, C. G. (2008). Quantum Physics and the Physics of
large systems, Part 1A Physics. University of Cambridge.
[21] The comparison must be made under constant-volume
conditionsCvHso that no work is performed. Nitrogens CvH (100 kPa, 20 C) = 20.8 J mol1 K1 vs. the
monatomic gases which equal 12.4717 J mol1 K1 . Citations: Freemans, W. H. Physical Chemistry Part 3: Change
Exercise 21.20b, Pg. 787 (PDF).

[35] Heat Storage in Materials. The Engineering Toolbox.


[36] Crawford, R. J. Rotational molding of plastics. ISBN 159124-192-8.
[37] Gaur, Umesh; Wunderlich, Bernhard (1981). Heat
capacity and other thermodynamic properties of linear macromolecules.
II. Polyethylene (PDF). Journal of Physical and Chemical Reference Data 10: 119.
Bibcode:1981JPCRD..10..119G. doi:10.1063/1.555636.
[38] Faber, P.; Garby, L. (1995). Fat content aects heat capacity: a study in mice. Acta Physiologica Scandinavica
153 (2): 1857. doi:10.1111/j.1748-1716.1995.tb09850.x.
PMID 7778459.

[22] Georgia State University. "Molar Specific Heats of Gases".

[23] Petit A.-T., Dulong P.-L. (1819). "Recherches sur quelques points importants de la Théorie de la Chaleur". Annales de Chimie et de Physique 10: 395–413.

[24] "The Heat Capacity of a Solid" (PDF).

[25] Hogan, C. (1969). "Density of States of an Insulating Ferromagnetic Alloy". Physical Review 188 (2): 870. Bibcode:1969PhRv..188..870H. doi:10.1103/PhysRev.188.870.

[26] Young; Geller (2008). Young and Geller College Physics (8th ed.). Pearson Education. ISBN 0-8053-9218-1.

[27] In his textbook Kinetic Theory of Liquids (engl. 1947).

[28] Bolmatov, D.; Brazhkin, V. V.; Trachenko, K. (2012). "The phonon theory of liquid thermodynamics". Scientific Reports 2. doi:10.1038/srep00421. Lay summary.

[29] Ojovan, Michael I. (2008). "Viscosity and Glass Transition in Amorphous Oxides". Advances in Condensed Matter Physics 2008: 1. Bibcode:2008AdCMP2008....1O. doi:10.1155/2008/817829.

[30] Page 183 in: Cornelius, Flemming (2008). Medical Biophysics (6th ed.). ISBN 1-4020-7110-8. (Also giving a density of 1.06 kg/L.)

[31] "Table of Specific Heats".

[32] "Iron". National Institute of Standards and Technology.

[33] "Materials Properties Handbook, Material: Lithium" (PDF). Archived from the original (PDF) on September 5, 2006.

[34] "HCV (Molar Heat Capacity (cV)) Data for Methanol". Dortmund Data Bank Software and Separation Technology.

8.1.11 External links

8.2 Compressibility

"Incompressible" redirects here. For the property of vector fields, see Solenoidal vector field. For the topological property, see Incompressible surface.

In thermodynamics and fluid mechanics, compressibility is a measure of the relative volume change of a fluid or solid as a response to a pressure (or mean stress) change.

\beta = -\frac{1}{V}\frac{\partial V}{\partial p}

where V is volume and p is pressure.

8.2.1 Definition

The specification above is incomplete, because for any object or system the magnitude of the compressibility depends strongly on whether the process is adiabatic or isothermal. Accordingly, isothermal compressibility is defined:

\beta_T = -\frac{1}{V}\left(\frac{\partial V}{\partial p}\right)_T

where the subscript T indicates that the partial differential is to be taken at constant temperature.

Isentropic compressibility is defined:

\beta_S = -\frac{1}{V}\left(\frac{\partial V}{\partial p}\right)_S

where S is entropy. For a solid, the distinction between the two is usually negligible.
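As a sanity check of these definitions, the isothermal compressibility of an ideal gas can be evaluated numerically; for V = nRT/p one expects βT = 1/p exactly (an illustrative sketch, not part of the original article):

R = 8.314

def volume(p, T, n=1.0):
    return n * R * T / p          # ideal gas equation of state

def beta_T(p, T, dp=1e-3):
    V = volume(p, T)
    dVdp = (volume(p + dp, T) - volume(p - dp, T)) / (2 * dp)
    return -dVdp / V

p = 101325.0                       # Pa
print(beta_T(p, 300.0), 1.0 / p)   # both ~9.87e-6 1/Pa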

The minus sign makes the compressibility positive in the (usual) case that an increase in pressure induces a reduction in volume.

Relation to speed of sound

The speed of sound is defined in classical mechanics as:

c^2 = \left(\frac{\partial p}{\partial \rho}\right)_S

where ρ is the density of the material. It follows, by replacing partial derivatives, that the isentropic compressibility can be expressed as:

\beta_S = \frac{1}{\rho c^2}

Relation to bulk modulus

The inverse of the compressibility is called the bulk modulus, often denoted K (sometimes B). That page also contains some examples for different materials.

The compressibility equation relates the isothermal compressibility (and indirectly the pressure) to the structure of the liquid.

8.2.2 Thermodynamics

Main article: Compressibility factor

The term "compressibility" is also used in thermodynamics to describe the deviance of the thermodynamic properties of a real gas from those expected from an ideal gas. The compressibility factor is defined as

Z = \frac{pV}{RT}

where p is the pressure of the gas, T is its temperature, and V is its molar volume. In the case of an ideal gas, the compressibility factor Z is equal to unity, and the familiar ideal gas law is recovered:

p = \frac{RT}{V}

Z can, in general, be either greater or less than unity for a real gas.

The deviation from ideal gas behavior tends to become particularly significant (or, equivalently, the compressibility factor strays far from unity) near the critical point, or in the case of high pressure or low temperature. In these cases, a generalized compressibility chart or an alternative equation of state better suited to the problem must be utilized to produce accurate results.

A related situation occurs in hypersonic aerodynamics, where dissociation causes an increase in the "notational" molar volume, because a mole of oxygen, as O2, becomes 2 moles of monatomic oxygen, and N2 similarly dissociates to 2N. Since this occurs dynamically as air flows over the aerospace object, it is convenient to alter Z, defined for an initial 30 gram mole of air, rather than track the varying mean molecular weight, millisecond by millisecond. This pressure-dependent transition occurs for atmospheric oxygen in the 2500 K to 4000 K temperature range, and in the 5000 K to 10,000 K range for nitrogen.[1]

In transition regions, where this pressure-dependent dissociation is incomplete, both beta (the volume/pressure differential ratio) and the differential, constant-pressure heat capacity will greatly increase.

For moderate pressures, above 10,000 K the gas further dissociates into free electrons and ions. Z for the resulting plasma can similarly be computed for a mole of initial air, producing values between 2 and 4 for partially or singly ionized gas. Each dissociation absorbs a great deal of energy in a reversible process, and this greatly reduces the thermodynamic temperature of hypersonic gas decelerated near the aerospace object. Ions or free radicals transported to the object surface by diffusion may release this extra (nonthermal) energy if the surface catalyzes the slower recombination process.

The isothermal compressibility is related to the isentropic (or adiabatic) compressibility by the relation

\beta_S = \beta_T - \frac{\alpha^2 T}{\rho c_p}

via Maxwell's relations. More simply stated,

\frac{\beta_T}{\beta_S} = \gamma

where γ is the heat capacity ratio. See here for a derivation.
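The speed-of-sound relation above gives a quick way to estimate βS; the density and sound speed used below are round illustrative values for water near room temperature:

rho = 998.0     # kg/m^3, liquid water
c = 1482.0      # m/s, approximate speed of sound in water near 20 C
beta_S = 1.0 / (rho * c**2)
print(beta_S)   # ~4.6e-10 1/Pa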

8.2.3 Earth science

Compressibility is used in the Earth sciences to quantify the ability of a soil or rock to reduce in volume with applied pressure. This concept is important for specific storage, when estimating groundwater reserves in confined aquifers. Geologic materials are made up of two portions: solids and voids (or porosity). The void space can be full of liquid or gas. Geologic materials reduce in volume only when the void spaces are reduced, which expels the liquid or gas from the voids. This can happen over a period of time, resulting in settlement.

It is an important concept in geotechnical engineering in the design of certain structural foundations. For example, the construction of high-rise structures over underlying layers of highly compressible bay mud poses a considerable design constraint, and often leads to use of driven piles or other innovative techniques.

8.2.4 Fluid dynamics

Main articles: Navier–Stokes equations and Compressible flow of Newtonian fluids

The degree of compressibility of a fluid has strong implications for its dynamics. Most notably, the propagation of sound is dependent on the compressibility of the medium.

8.2.5 Aeronautical dynamics

Main article: Aerodynamics#Design_issues_with_increasing_speed

Compressibility is an important factor in aerodynamics. At low speeds, the compressibility of air is not significant in relation to aircraft design, but as the airflow nears and exceeds the speed of sound, a host of new aerodynamic effects become important in the design of aircraft. These effects, often several of them at a time, made it very difficult for World War II era aircraft to reach speeds much beyond 800 km/h (500 mph).

Many effects are often mentioned in conjunction with the term "compressibility", but regularly have little to do with the compressible nature of air. From a strictly aerodynamic point of view, the term should refer only to those side-effects arising as a result of the changes in airflow from an incompressible fluid (similar in effect to water) to a compressible fluid (acting as a gas) as the speed of sound is approached. There are two effects in particular, wave drag and critical Mach.

8.2.6 Negative compressibility

In general, the bulk compressibility (sum of the linear compressibilities on the three axes) is positive, i.e. an increase in pressure squeezes the material to a smaller volume. This condition is required for mechanical stability.[5] However, under very specific conditions the compressibility can be negative.[6]

8.2.7 See also

Poisson ratio
Mach number
Prandtl–Glauert singularity, associated with supersonic flight
Shear strength

References

[1] Regan, Frank J. Dynamics of Atmospheric Re-entry. p. 313. ISBN 1-56347-048-9.

[2] Domenico, P. A.; Mifflin, M. D. (1965). "Water from low permeability sediments and land subsidence". Water Resources Research 1 (4): 563–576. Bibcode:1965WRR.....1..563D. doi:10.1029/WR001i004p00563. OSTI 5917760.

[3] Hugh D. Young; Roger A. Freedman. University Physics with Modern Physics. Addison-Wesley; 2012. ISBN 978-0-321-69686-1. p. 356.

[4] Fine, Rana A.; Millero, F. J. (1973). Compressibility of water as a function of temperature and pressure. Journal of Chemical Physics 59 (10): 55295536.
Bibcode:1973JChPh..59.5529F. doi:10.1063/1.1679903.
[5] Munn, R. W. (1971).
Role of the elastic constants in negative thermal expansion of axial solids.
Journal of Physics C: Solid State Physics 5: 535
542. Bibcode:1972JPhC....5..535M. doi:10.1088/00223719/5/5/005.
[6] Lakes,
Rod;
Wojciechowski,
K. W. (2008).
Negative compressibility,
negative Poissons ratio, and stability.
Physica Status Solidi (b)
Bibcode:2008PSSBR.245..545L.
245 (3):
545.
doi:10.1002/pssb.200777708.
Gatt, Ruben; Grima, Joseph N. (2008). Negative compressibility. Physica status solidi (RRL) - Rapid Research
Bibcode:2008PSSRR...2..236G.
Letters 2 (5): 236.
doi:10.1002/pssr.200802101.
Kornblatt,
J. A. (1998).
Materials with
Negative
Compressibilities.
Science
281
Bibcode:1998Sci...281..143K.
(5374):
143a.

doi:10.1126/science.281.5374.143a.
Moore, B.; Jaglinski, T.; Stone, D. S.; Lakes, R. S. (2006). "Negative incremental bulk modulus in foams". Philosophical Magazine Letters 86 (10): 651. Bibcode:2006PMagL..86..651M. doi:10.1080/09500830600957340.

8.3 Thermal expansion

[Figure: Expansion joint in a road bridge used to avoid damage from thermal expansion.]

Thermal expansion is the tendency of matter to change in shape, area, and volume in response to a change in temperature,[1] through heat transfer.

Temperature is a monotonic function of the average molecular kinetic energy of a substance. When a substance is heated, the kinetic energy of its molecules increases. Thus, the molecules begin moving more and usually maintain a greater average separation. Materials which contract with increasing temperature are unusual; this effect is limited in size, and only occurs within limited temperature ranges (see examples below). The degree of expansion divided by the change in temperature is called the material's coefficient of thermal expansion and generally varies with temperature.

8.3.1 Overview

Predicting expansion

If an equation of state is available, it can be used to predict the values of the thermal expansion at all the required temperatures and pressures, along with many other state functions.

Contraction effects (negative thermal expansion)

A number of materials contract on heating within certain temperature ranges; this is usually called negative thermal expansion, rather than thermal contraction. For example, the coefficient of thermal expansion of water drops to zero as it is cooled to 3.983 °C and then becomes negative below this temperature; this means that water has a maximum density at this temperature, and this leads to bodies of water maintaining this temperature at their lower depths during extended periods of sub-zero weather. Also, fairly pure silicon has a negative coefficient of thermal expansion for temperatures between about 18 and 120 kelvin.[2]

Factors affecting thermal expansion

Unlike gases or liquids, solid materials tend to keep their shape when undergoing thermal expansion.

Thermal expansion generally decreases with increasing bond energy, which also has an effect on the melting point of solids, so high melting point materials are more likely to have lower thermal expansion. In general, liquids expand slightly more than solids. The thermal expansion of glasses is higher compared to that of crystals.[3] At the glass transition temperature, rearrangements that occur in an amorphous material lead to characteristic discontinuities of coefficient of thermal expansion and specific heat. These discontinuities allow detection of the glass transition temperature where a supercooled liquid transforms to a glass.[4]

Absorption or desorption of water (or other solvents) can change the size of many common materials; many organic materials change size much more due to this effect than due to thermal expansion. Common plastics exposed to water can, in the long term, expand by many percent.

8.3.2 Coefficient of thermal expansion

The coefficient of thermal expansion describes how the size of an object changes with a change in temperature.


Specifically, it measures the fractional change in size per degree change in temperature at a constant pressure. Several types of coefficients have been developed: volumetric, area, and linear. Which is used depends on the particular application and which dimensions are considered important. For solids, one might only be concerned with the change along a length, or over some area.

The volumetric thermal expansion coefficient is the most basic thermal expansion coefficient, and the most relevant for fluids. In general, substances expand or contract when their temperature changes, with expansion or contraction occurring in all directions. Substances that expand at the same rate in every direction are called isotropic. For isotropic materials, the area and volumetric thermal expansion coefficients are, respectively, approximately twice and three times larger than the linear thermal expansion coefficient.

Mathematical definitions of these coefficients are given below for solids, liquids, and gases.

General volumetric thermal expansion coefficient

In the general case of a gas, liquid, or solid, the volumetric coefficient of thermal expansion is given by

\alpha_V = \frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_p

The subscript p indicates that the pressure is held constant during the expansion, and the subscript V stresses that it is the volumetric (not linear) expansion that enters this general definition. In the case of a gas, the fact that the pressure is held constant is important, because the volume of a gas will vary appreciably with pressure as well as temperature. For a gas of low density this can be seen from the ideal gas law.

8.3.3 Expansion in solids

When calculating thermal expansion it is necessary to consider whether the body is free to expand or is constrained. If the body is free to expand, the expansion or strain resulting from an increase in temperature can be simply calculated by using the applicable coefficient of thermal expansion.

If the body is constrained so that it cannot expand, then internal stress will be caused (or changed) by a change in temperature. This stress can be calculated by considering the strain that would occur if the body were free to expand and the stress required to reduce that strain to zero, through the stress/strain relationship characterised by the elastic or Young's modulus. In the special case of solid materials, external ambient pressure does not usually appreciably affect the size of an object and so it is not usually necessary to consider the effect of pressure changes.

Common engineering solids usually have coefficients of thermal expansion that do not vary significantly over the range of temperatures where they are designed to be used, so where extremely high accuracy is not required, practical calculations can be based on a constant, average value of the coefficient of expansion.

Linear expansion

[Figure: Change in length of a rod due to thermal expansion.]

Linear expansion means change in one dimension (length) as opposed to change in volume (volumetric expansion). To a first approximation, the change in length measurements of an object due to thermal expansion is related to temperature change by a linear expansion coefficient. It is the fractional change in length per degree of temperature change. Assuming negligible effect of pressure, we may write:

\alpha_L = \frac{1}{L}\frac{dL}{dT}

where L is a particular length measurement and dL/dT is the rate of change of that linear dimension per unit change in temperature.

The change in the linear dimension can be estimated to be:

\frac{\Delta L}{L} = \alpha_L \, \Delta T

This equation works well as long as the linear-expansion coefficient does not change much over the change in temperature ΔT, and the fractional change in length is small, ΔL/L ≪ 1. If either of these conditions does not hold, the equation must be integrated.
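A minimal numerical example of this estimate (the expansion coefficient below is a typical textbook value for steel, chosen here only for illustration):

alpha_L = 12e-6      # 1/K, typical linear expansion coefficient of steel
L = 25.0             # m, length of a rail
dT = 40.0            # K, temperature rise
dL = alpha_L * L * dT
print(dL * 1000)     # expansion in millimetres, about 12 mm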
Effects on strain

For solid materials with a significant length, like rods or cables, an estimate of the amount of thermal expansion can be described by the material strain, given by ε_thermal and defined as:

\varepsilon_\text{thermal} = \frac{L_\text{final} - L_\text{initial}}{L_\text{initial}}

where L_initial is the length before the change of temperature and L_final is the length after the change of temperature.

For most solids, thermal expansion is proportional to the change in temperature:

\varepsilon_\text{thermal} \propto \Delta T

Thus, the change in either the strain or temperature can be estimated by:

\varepsilon_\text{thermal} = \alpha_L \, \Delta T

where

\Delta T = T_\text{final} - T_\text{initial}

is the difference of the temperature between the two recorded strains, measured in degrees Celsius or kelvin, and α_L is the linear coefficient of thermal expansion in "per degree Celsius" or "per kelvin", denoted by °C⁻¹ or K⁻¹, respectively. In the field of continuum mechanics, thermal expansion and its effects are treated as eigenstrain and eigenstress.

Area expansion

The area thermal expansion coefficient relates the change in a material's area dimensions to a change in temperature. It is the fractional change in area per degree of temperature change. Ignoring pressure, we may write:

\alpha_A = \frac{1}{A}\frac{dA}{dT}

where A is some area of interest on the object, and dA/dT is the rate of change of that area per unit change in temperature.

The change in the area can be estimated as:

\frac{\Delta A}{A} = \alpha_A \, \Delta T

This equation works well as long as the area expansion coefficient does not change much over the change in temperature ΔT, and the fractional change in area is small, ΔA/A ≪ 1. If either of these conditions does not hold, the equation must be integrated.

Volume expansion

For a solid, we can ignore the effects of pressure on the material, and the volumetric thermal expansion coefficient can be written:[5]

\alpha_V = \frac{1}{V}\frac{dV}{dT}

where V is the volume of the material, and dV/dT is the rate of change of that volume with temperature.

This means that the volume of a material changes by some fixed fractional amount. For example, a steel block with a volume of 1 cubic meter might expand to 1.002 cubic meters when the temperature is raised by 50 K. This is an expansion of 0.2%. If we had a block of steel with a volume of 2 cubic meters, then under the same conditions, it would expand to 2.004 cubic meters, again an expansion of 0.2%. The volumetric expansion coefficient would be 0.2% for 50 K, or 0.004% K⁻¹.

If we already know the expansion coefficient, then we can calculate the change in volume

\frac{\Delta V}{V} = \alpha_V \, \Delta T

where ΔV/V is the fractional change in volume (e.g., 0.002) and ΔT is the change in temperature (50 °C).

The above example assumes that the expansion coefficient did not change as the temperature changed and that the increase in volume is small compared to the original volume. This is not always true, but for small changes in temperature, it is a good approximation. If the volumetric expansion coefficient does change appreciably with temperature, or the increase in volume is significant, then the above equation will have to be integrated:

\ln\left(\frac{V + \Delta V}{V}\right) = \int_{T_i}^{T_f} \alpha_V(T)\, dT

\frac{\Delta V}{V} = \exp\left(\int_{T_i}^{T_f} \alpha_V(T)\, dT\right) - 1

where α_V(T) is the volumetric expansion coefficient as a function of temperature T, and Ti, Tf are the initial and final temperatures respectively.
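When the coefficient does vary, the integrated form can be evaluated numerically; the temperature dependence assumed below is purely hypothetical and only illustrates the procedure:

import math

def alpha_V(T):
    return 3.0e-5 + 2.0e-8 * (T - 293.0)   # hypothetical coefficient, 1/K

def fractional_volume_change(T_i, T_f, steps=1000):
    dT = (T_f - T_i) / steps
    integral = sum(alpha_V(T_i + (i + 0.5) * dT) * dT for i in range(steps))
    return math.exp(integral) - 1.0

print(fractional_volume_change(293.0, 393.0))   # ~0.0031, i.e. about 0.3 %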

Isotropic materials

For isotropic materials the volumetric thermal expansion coefficient is three times the linear coefficient:

\alpha_V = 3\alpha_L

This ratio arises because volume is composed of three mutually orthogonal directions. Thus, in an isotropic material, for small differential changes, one-third of the volumetric expansion is in a single axis. As an example, take a cube of steel that has sides of length L. The original volume will be V = L³ and the new volume, after a temperature increase, will be

V + \Delta V = (L + \Delta L)^3 = L^3 + 3L^2\Delta L + 3L\Delta L^2 + \Delta L^3 \approx L^3 + 3L^2\Delta L = V + 3V\frac{\Delta L}{L}

We can make the substitutions ΔV = α_V L³ ΔT and, for isotropic materials, ΔL = α_L L ΔT. We now have:

V + \Delta V = (L + \alpha_L L \Delta T)^3 = L^3 + 3L^3\alpha_L\Delta T + 3L^3\alpha_L^2\Delta T^2 + L^3\alpha_L^3\Delta T^3 \approx L^3 + 3L^3\alpha_L\Delta T

Since the volumetric and linear coefficients are defined only for extremely small temperature and dimensional changes (that is, when ΔT and ΔL are small), the last two terms can be ignored and we get the above relationship between the two coefficients. If we are trying to go back and forth between volumetric and linear coefficients using larger values of ΔT, then we will need to take into account the third term, and sometimes even the fourth term.

Similarly, the area thermal expansion coefficient is two times the linear coefficient:

\alpha_A = 2\alpha_L

This ratio can be found in a way similar to that in the linear example above, noting that the area of a face on the cube is just L². Also, the same considerations must be made when dealing with large values of ΔT.

Anisotropic materials

Materials with anisotropic structures, such as crystals (with less than cubic symmetry) and many composites, will generally have different linear expansion coefficients α_L in different directions. As a result, the total volumetric expansion is distributed unequally among the three axes. If the crystal symmetry is monoclinic or triclinic, even the angles between these axes are subject to thermal changes. In such cases it is necessary to treat the coefficient of thermal expansion as a tensor with up to six independent elements. A good way to determine the elements of the tensor is to study the expansion by powder diffraction.

8.3.4 Isobaric expansion in gases

For an ideal gas, the volumetric thermal expansion (i.e., relative change in volume due to temperature change) depends on the type of process in which temperature is changed. Two simple cases are isobaric change, where pressure is held constant, and adiabatic change, where no heat is exchanged with the environment.

The ideal gas law can be written as:

pv = T

where p is the pressure, v is the specific volume, and T is temperature measured in energy units. By taking the logarithm of this equation:

\ln(v) + \ln(p) = \ln(T)

Then by definition of the isobaric thermal expansion coefficient, with the above equation of state:

\alpha_p \equiv \frac{1}{v}\left(\frac{\partial v}{\partial T}\right)_p = \frac{d(\ln v)}{dT} = \frac{d(\ln T)}{dT} = \frac{1}{T}

The index p denotes an isobaric process.

8.3.5 Expansion in liquids

Theoretically, the coefficient of linear expansion can be found from the coefficient of volumetric expansion (α_V ≈ 3α_L). However, for liquids, α is calculated through the experimental determination of α_V.

8.3.6 Expansion in mixtures and alloys

The expansivity of the components of the mixture can cancel each other, as in invar.

The thermal expansivity of a mixture from the expansivities of the pure components and their excess expansivities follows from:

\frac{\partial V}{\partial T} = \sum_i \frac{\partial V_i}{\partial T} + \sum_i \frac{\partial V_i^E}{\partial T} = \sum_i \alpha_i V_i + \sum_i \alpha_i^E V_i^E

\frac{\partial V_i^E}{\partial T} = R\left(\frac{\partial \ln(\gamma_i)}{\partial P}\right)_T + RT\,\frac{\partial^2 \ln(\gamma_i)}{\partial T\,\partial P}

8.3.7

Apparent and absolute expansion

When measuring the expansion of a liquid, the measurement must account for the expansion of the container as well. For example, consider a flask with a long narrow stem, containing enough liquid that the stem itself is partially filled. When placed in a heat bath, the column of liquid in the stem will initially drop and then immediately rise until the flask–liquid–heat bath system has thermalized. The initial drop of the liquid column is not due to an initial contraction of the liquid but rather to the expansion of the flask, which contacts the heat bath first. Soon after, the liquid in the flask is heated by the flask itself and begins to expand. Since liquids typically expand more than solids, the expansion of the liquid in the flask eventually exceeds that of the flask, causing the column of liquid in the stem to rise. A direct measurement of the height of the liquid column is a measurement of the apparent expansion of the liquid. The absolute expansion of the liquid is the apparent expansion corrected for the expansion of the containing vessel.[6]

8.3.8

Examples and applications

shaft, and allowing it to cool after it has been pushed over
the shaft, thus achieving a 'shrink t'. Induction shrink tting is a common industrial method to pre-heat metal components between 150 C and 300 C thereby causing them
to expand and allow for the insertion or removal of another
component.
There exist some alloys with a very small linear expansion
coecient, used in applications that demand very small
changes in physical dimension over a range of temperatures. One of these is Invar 36, with approximately equal
to 0.6106 K1 . These alloys are useful in aerospace applications where wide temperature swings may occur.
Pullingers apparatus is used to determine the linear expansion of a metallic rod in the laboratory. The apparatus consists of a metal cylinder closed at both ends (called a steam
jacket). It is provided with an inlet and outlet for the steam.
The steam for heating the rod is supplied by a boiler which
is connected by a rubber tube to the inlet. The center of
the cylinder contains a hole to insert a thermometer. The
rod under investigation is enclosed in a steam jacket. One
of its ends is free, but the other end is pressed against a
xed screw. The position of the rod is determined by a
micrometer screw gauge or spherometer.

For applications using the thermal expansion property, see


bi-metal and mercury-in-glass thermometer.
The expansion and contraction of materials must be con-

Thermal expansion of long continuous sections of rail tracks is the


driving force for rail buckling. This phenomenon resulted in 190
train derailments during 19982002 in the US alone.[7]

sidered when designing large structures, when using tape or


chain to measure distances for land surveys, when designing molds for casting hot material, and in other engineering
applications when large changes in dimension due to temperature are expected.
Thermal expansion is also used in mechanical applications
to t parts over one another, e.g. a bushing can be tted over
a shaft by making its inner diameter slightly smaller than
the diameter of the shaft, then heating it until it ts over the

Drinking glass with fracture due to uneven thermal expansion after


pouring of hot liquid into the otherwise cool glass

The control of thermal expansion in brittle materials is a


key concern for a wide range of reasons. For example,
both glass and ceramics are brittle and uneven temperature
causes uneven expansion which again causes thermal stress
and this might lead to fracture. Ceramics need to be joined
or work in consort with a wide range of materials and therefore their expansion must be matched to the application.
Because glazes need to be rmly attached to the underlying porcelain (or other body type) their thermal expansion
must be tuned to 't' the body so that crazing or shivering
do not occur. Good example of products whose thermal
expansion is the key to their success are CorningWare and

the spark plug. The thermal expansion of ceramic bodies
can be controlled by ring to create crystalline species that
will inuence the overall expansion of the material in the
desired direction. In addition or instead the formulation of
the body can employ materials delivering particles of the
desired expansion to the matrix. The thermal expansion of
glazes is controlled by their chemical composition and the
ring schedule to which they were subjected. In most cases
there are complex issues involved in controlling body and
glaze expansion, adjusting for thermal expansion must be
done with an eye to other properties that will be aected,
generally trade-os are required.



which is constrained to ow in only one direction (along the
tube) due to changes in volume brought about by changes
in temperature. A bi-metal mechanical thermometer uses a
bimetallic strip and bends due to the diering thermal expansion of the two metals.
Metal pipes made of dierent materials are heated by passing steam through them. While each pipe is being tested,
one end is securely xed and the other rests on a rotating
shaft, the motion of which is indicated with a pointer. The
linear expansion of the dierent metals is compared qualitatively and the coecient of linear thermal expansion is
calculated.

Thermal expansion can have a noticeable eect in gasoline


stored in above ground storage tanks which can cause gasoline pumps to dispense gasoline which may be more com- 8.3.9 Thermal expansion coecients for
pressed than gasoline held in underground storage tanks in
various materials
the winter time or less compressed than gasoline held in underground storage tanks in the summer time.[8]
Main article: Thermal expansion coecients of the eleHeat-induced expansion has to be taken into account in ments (data page)
This section summarizes the coecients for some common
most areas of engineering. A few examples are:
Coecient de dilatation volumique isobare (modle Tait)

Metal framed windows need rubber spacers


Rubber tires

2 000
0 bar
125 bar
250 bar
375 bar
500 bar

1 800

1 600

Metal hot water heating pipes should not be used in


long straight lengths

1 400

1 200

Large structures such as railways and bridges need


expansion joints in the structures to avoid sun kink

1 000

800

One of the reasons for the poor performance of


cold car engines is that parts have ineciently large
spacings until the normal operating temperature is
achieved.

600

400

200
20

40

60

80

100

120

140

160

180

200

220

240

260

A gridiron pendulum uses an arrangement of dierent


metals to maintain a more temperature stable pendu- Volumetric thermal expansion coecient for a semicrystalline
lum length.
polypropylene.
A power line on a hot day is droopy, but on a cold day
it is tight. This is because the metals expand under materials.
heat.
For isotropic materials the coecients linear thermal ex Expansion joints that absorb the thermal expansion in pansion and volumetric thermal expansion V are related
by V = 3. For liquids usually the coecient of volua piping system.[9]
metric expansion is listed and linear expansion is calculated
Precision engineering nearly always requires the engi- here for comparison.
neer to pay attention to the thermal expansion of the For common materials like many metals and compounds,
product. For example, when using a scanning electron the thermal expansion coecient is inversely proportional
microscope even small changes in temperature such as to the melting point.[10] In particular for metals the relation
1 degree can cause a sample to change its position rel- is:
ative to the focus point.
Thermometers are another application of thermal expan0.020
sion most contain a liquid (usually mercury or alcohol) MP

8.3. THERMAL EXPANSION

205

Coecients de dilatation liniques


18
17.5
X2CrNi12 (1.4003, 403)
X20Cr13 (1.4021, 420)
C35E (1.1181, 1035)
X2CrNiMoN22-5-3 (1.4462, 2205)
X2CrNiMo17-12-2 (1.4404, 316L)

17
16.5
16
15.5

[2] Bullis, W. Murray (1990). Chapter 6. In O'Mara, William


C.; Herring, Robert B.; Hunt, Lee P. Handbook of semiconductor silicon technology. Park Ridge, New Jersey: Noyes
Publications. p. 431. ISBN 0-8155-1237-6. Retrieved
2010-07-11.

15

[3] Varshneya, A. K. (2006). Fundamentals of inorganic


glasses. Sheeld: Society of Glass Technology. ISBN 012-714970-8.

14.5
14
13.5
13
12.5

[4] Ojovan, M. I. (2008). Congurons: thermodynamic parameters and symmetry changes at glass transition. Entropy 10 (3): 334364. Bibcode:2008Entrp..10..334O.
doi:10.3390/e10030334.

12
11.5
11
10.5
10
100

150

200

250

300

350

400

450

500

550

600

Linear thermal expansion coecient for some steel grades.

for halides and oxides

0.038

7.0 106 K1
MP

[5] Turcotte, Donald L.; Schubert, Gerald (2002). Geodynamics


(2nd ed.). Cambridge. ISBN 0-521-66624-4.
[6] Ganot, A., Atkinson, E. (1883). Elementary treatise on
physics experimental and applied for the use of colleges and
schools, William and Wood & Co, New York, pp. 27273.
[7] Track Buckling Research. Volpe Center, U.S. Department
of Transportation
[8] Cost or savings of thermal expansion in above ground tanks.

Artofbeingcheap.com (2013-09-06). Retrieved 2014-01In the table below, the range for is from 107 K1 for
19.
hard solids to 103 K1 for organic liquids. The coecient varies with the temperature and some materials have
[9] Lateral, Angular and Combined Movements U.S. Bellows.
a very high variation ; see for example the variation vs.
temperature of the volumetric coecient for a semicrys- [10] MIT Lecture Sheer and Thermal Expansion Tensors Part 1
talline polypropylene (PP) at dierent pressure, and the
variation of the linear coecient vs. temperature for some [11] Thermal Expansion. Western Washington University.
Archived from the original on 2009-04-17.
steel grades (from bottom to top: ferritic stainless steel,
martensitic stainless steel, carbon steel, duplex stainless
[12] Ahmed, Ashraf; Tavakol, Behrouz; Das, Rony; Joven,
steel, austenitic steel).

(The formula V 3 is usually used for solids.)[11]

8.3.10

See also

Ronald; Roozbehjavan, Pooneh; Minaie, Bob (2012). Study


of Thermal Expansion in Carbon Fiber Reinforced Polymer
Composites. Proceedings of SAMPE International Symposium. Charleston, SC.

Negative thermal expansion

[13] Young; Geller. Young and Geller College Physics (8th ed.).
ISBN 0-8053-9218-1.

Mie-Gruneisen equation of state

[14] Technical Glasses Data Sheet (PDF). schott.com.

Autovent

[15] Raymond Serway; John Jewett (2005), Principles of Physics:


A Calculus-Based Text, Cengage Learning, p. 506, ISBN
0-534-49143-X

Grneisen parameter
Apparent molar property

8.3.11

References

[1] when the body is heated its dimension(size) increase.This increase in dimension is called thermal expansion . Paul A.,
Tipler; Gene Mosca (2008). Physics for Scientists and Engineers, Volume 1 (6th ed.). New York, NY: Worth Publishers.
pp. 666670. ISBN 1-4292-0132-0.

[16] DuPont Kapton


matweb.com.

200EN

Polyimide

Film.

[17] Macor data sheet (PDF). corning.com.


[18] Properties of Common Liquid Materials.
[19] WDSC 340. Class Notes on Thermal Properties of Wood.
forestry.caf.wvu.edu. Archived from the original on 200903-30.


[20] Richard C. Weatherwax; Alfred J. Stamm (1956). The coecients of thermal expansion of wood and wood products (PDF) (Technical report). Forest Products Laboratory,
United States Forest Service. 1487.
[21] Sapphire (PDF). kyocera.com.
[22] Basic Parameters of Silicon Carbide (SiC)". Ioe Institute.
[23] Becker, P.; Seyfried, P.; Siegert, H. (1982). The lattice
parameter of highly pure silicon single crystals. Zeitschrift
fr Physik B 48: 17. Bibcode:1982ZPhyB..48...17B.
doi:10.1007/BF02026423.
[24] Nave, Rod. Thermal Expansion Coecients at 20 C.
Georgia State University.
[25] Sitall CO-115M (Astrositall)". Star Instruments.
[26] Thermal Expansion table
[27] Salvador, James R.; Guo, Fu; Hogan, Tim; Kanatzidis,
Mercouri G. (2003). Zero thermal expansion in YbGaGe due to an electronic valence transition.
Nature 425 (6959): 7025. Bibcode:2003Natur.425..702S.
doi:10.1038/nature02011. PMID 14562099.
[28] Janssen, Y.; Change, S.; Cho, B.K.; Llobet, A.; Dennis, K.W.; McCallum, R.W.; Mc Queeney, R.J.; Canfeld, P.C. (2005). YbGaGe: normal thermal expansion. Journal of Alloys and Compounds 389: 1013.
doi:10.1016/j.jallcom.2004.08.012.

8.3.12

External links

Glass Thermal Expansion Thermal expansion measurement, denitions, thermal expansion calculation
from the glass composition
Water thermal expansion calculator
DoITPoMS Teaching and Learning Package on Thermal Expansion and the Bi-material Strip
Engineering Toolbox List of coecients of Linear
Expansion for some common materials
Article on how V is determined
MatWeb: Free database of engineering properties for
over 79,000 materials
USA NIST Website Temperature and Dimensional
Measurement workshop
Hyperphysics: Thermal expansion
Understanding Thermal Expansion in Ceramic Glazes

Chapter 9

Chapter 9. Potentials
9.1 Thermodynamic potential

A thermodynamic potential is a scalar quantity used to represent the thermodynamic state of a system. The concept of thermodynamic potentials was introduced by Pierre Duhem in 1886. Josiah Willard Gibbs in his papers used the term fundamental functions. One main thermodynamic potential that has a physical interpretation is the internal energy U. It is the energy of configuration of a given system of conservative forces (that is why it is a potential) and only has meaning with respect to a defined set of references (or data). Expressions for all other thermodynamic energy potentials are derivable via Legendre transforms from an expression for U. In thermodynamics, certain forces, such as gravity, are typically disregarded when formulating expressions for potentials. For example, while all the working fluid in a steam engine may have higher energy due to gravity while sitting on top of Mount Everest than it would at the bottom of the Mariana Trench, the gravitational potential energy term in the formula for the internal energy would usually be ignored because changes in gravitational potential within the engine during operation would be negligible.

9.1.1 Description and interpretation

Five common thermodynamic potentials are:[1]

Internal energy: U
Helmholtz free energy: F = U − TS
Enthalpy: H = U + pV
Gibbs energy: G = U + pV − TS
Landau potential (grand potential): Ω = U − TS − Σi μiNi

where T = temperature, S = entropy, p = pressure, V = volume. The Helmholtz free energy is often denoted by the symbol F, but the use of A is preferred by IUPAC,[2] ISO and IEC.[3] Ni is the number of particles of type i in the system and μi is the chemical potential for an i-type particle. For the sake of completeness, the set of all Ni are also included as natural variables, although they are sometimes ignored.

These five common potentials are all energy potentials, but there are also entropy potentials. The thermodynamic square can be used as a tool to recall and derive some of the potentials.
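These definitions can be turned into a small numerical sketch. The helper below simply evaluates F = U − TS, H = U + pV, G = U + pV − TS and Ω = U − TS − μN for given state values; the numbers are illustrative only and do not describe any particular substance:

```python
def energy_potentials(U, T, S, p, V, mu=0.0, N=0.0):
    """Evaluate the common energy potentials from their definitions."""
    F = U - T * S                 # Helmholtz free energy
    H = U + p * V                 # enthalpy
    G = U + p * V - T * S         # Gibbs energy
    Omega = U - T * S - mu * N    # Landau (grand) potential
    return F, H, G, Omega

# Illustrative SI values: J, K, J/K, Pa, m^3
print(energy_potentials(U=5.0e3, T=300.0, S=10.0, p=1.0e5, V=0.02))
# -> (2000.0, 7000.0, 4000.0, 2000.0)
```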
Just as in mechanics, where potential energy is defined as capacity to do work, similarly different potentials have different meanings:

Internal energy (U) is the capacity to do work plus the capacity to release heat.
Gibbs energy (G) is the capacity to do non-mechanical work.
Enthalpy (H) is the capacity to do non-mechanical work plus the capacity to release heat.
Helmholtz free energy (F) is the capacity to do mechanical work (useful work).

From these definitions we can say that ΔU is the energy added to the system, ΔF is the total work done on it, ΔG is the non-mechanical work done on it, and ΔH is the sum of non-mechanical work done on the system and the heat given to it. Thermodynamic potentials are very useful when calculating the equilibrium results of a chemical reaction, or when measuring the properties of materials in a chemical reaction. The chemical reactions usually take place under some simple constraints such as constant pressure and temperature, or constant entropy and volume, and when this is true, there is a corresponding thermodynamic potential that comes into play. Just as in mechanics, the system will tend towards lower values of potential and at equilibrium, under these constraints, the potential will take on an unchanging minimum value. The thermodynamic potentials can also be used to estimate the total amount of energy available from a thermodynamic system under the appropriate constraint. In particular (see principle of minimum energy for a derivation):[4]

When the entropy (S) and external parameters (e.g. volume) of a closed system are held constant, the internal energy (U) decreases and reaches a minimum value at equilibrium. This follows from the first and second laws of thermodynamics and is called the principle of minimum energy. The following three statements are directly derivable from this principle.

When the temperature (T) and external parameters of a closed system are held constant, the Helmholtz free energy (F) decreases and reaches a minimum value at equilibrium.

When the pressure (p) and external parameters of a closed system are held constant, the enthalpy (H) decreases and reaches a minimum value at equilibrium.

When the temperature (T), pressure (p) and external parameters of a closed system are held constant, the Gibbs free energy (G) decreases and reaches a minimum value at equilibrium.

9.1.2 Natural variables

The variables that are held constant in this process are termed the natural variables of that potential.[5] The natural variables are important not only for the above-mentioned reason, but also because if a thermodynamic potential can be determined as a function of its natural variables, all of the thermodynamic properties of the system can be found by taking partial derivatives of that potential with respect to its natural variables, and this is true for no other combination of variables. On the converse, if a thermodynamic potential is not given as a function of its natural variables, it will not, in general, yield all of the thermodynamic properties of the system.

Notice that the set of natural variables for the above four potentials are formed from every combination of the T-S and P-V variables, excluding any pairs of conjugate variables. There is no reason to ignore the Ni − μi conjugate pairs, and in fact we may define four additional potentials for each species.[6] Using IUPAC notation in which the brackets contain the natural variables (other than the main four), we have:

U[μj] = U − μjNj
F[μj] = U − TS − μjNj
H[μj] = U + pV − μjNj
G[μj] = U + pV − TS − μjNj

If there is only one species, then we are done. But, if there are, say, two species, then there will be additional potentials such as U[μ1, μ2] = U − μ1N1 − μ2N2 and so on. If there are D dimensions to the thermodynamic space, then there are 2^D unique thermodynamic potentials. For the most simple case, a single phase ideal gas, there will be three dimensions, yielding eight thermodynamic potentials.
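The 2^D counting can be made concrete with a short enumeration. The sketch below (an illustration, not part of the original text) builds every Legendre-transform combination for a single-component system with the three conjugate pairs (T, S), (p, V) and (μ, N), and confirms that eight potentials result:

```python
from itertools import product

# Each optional Legendre transform replaces an extensive natural variable by
# its conjugate intensive one and adds the corresponding term to U.
transforms = ["- T*S",   # S -> T   (U -> F)
              "+ p*V",   # V -> p   (U -> H)
              "- mu*N"]  # N -> mu  (U -> U[mu])

potentials = []
for choice in product([False, True], repeat=len(transforms)):
    expr = "U " + " ".join(t for picked, t in zip(choice, transforms) if picked)
    potentials.append(expr.strip())

for expr in potentials:
    print(expr)
print(len(potentials), "potentials for D = 3")   # prints 8
```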

9.1.3 The fundamental equations

Main article: Fundamental thermodynamic relation

The definitions of the thermodynamic potentials may be differentiated and, along with the first and second laws of thermodynamics, a set of differential equations known as the fundamental equations follow.[7] (Actually they are all expressions of the same fundamental thermodynamic relation, but are expressed in different variables.) By the first law of thermodynamics, any differential change in the internal energy U of a system can be written as the sum of heat flowing into the system and work done by the system on the environment, along with any change due to the addition of new particles to the system:

dU = δQ − δW + Σi μi dNi

where δQ is the infinitesimal heat flow into the system, δW is the infinitesimal work done by the system, μi is the chemical potential of particle type i and Ni is the number of type i particles. (Note that neither δQ nor δW are exact differentials. Small changes in these variables are, therefore, represented with δ rather than d.)

By the second law of thermodynamics, we can express the internal energy change in terms of state functions and their differentials. In case of reversible changes we have:

δQ = T dS
δW = p dV

where T is temperature, S is entropy, p is pressure, and V is volume, and the equality holds for reversible processes.

This leads to the standard differential form of the internal energy in case of a quasistatic reversible change:

dU = T dS − p dV + Σi μi dNi

Since U, S and V are thermodynamic functions of state, the above relation holds also for arbitrary non-reversible changes. If the system has more external variables than just the volume that can change, the fundamental thermodynamic relation generalizes to:

dU = T dS − Σi Xi dxi + Σj μj dNj

Here the Xi are the generalized forces corresponding to the external variables xi.

Applying Legendre transforms repeatedly, the following differential relations hold for the four potentials:

dU = T dS − p dV + Σi μi dNi
dF = −S dT − p dV + Σi μi dNi
dH = T dS + V dp + Σi μi dNi
dG = −S dT + V dp + Σi μi dNi

Note that the infinitesimals on the right-hand side of each of the above equations are of the natural variables of the potential on the left-hand side. Similar equations can be developed for all of the other thermodynamic potentials of the system. There will be one fundamental equation for each thermodynamic potential, resulting in a total of 2^D fundamental equations.

The differences between the four thermodynamic potentials can be summarized as follows:

d(pV) = dH − dU = dG − dF
d(TS) = dU − dF = dH − dG

9.1.4 The equations of state

We can use the above equations to derive some differential definitions of some thermodynamic parameters. If we define Φ to stand for any of the thermodynamic potentials, then the above equations are of the form:

dΦ = Σi xi dyi

where xi and yi are conjugate pairs, and the yi are the natural variables of the potential Φ. From the chain rule it follows that:

xj = (∂Φ/∂yj)_{yi≠j}

where {yi≠j} is the set of all natural variables of Φ except yj. This yields expressions for various thermodynamic parameters in terms of the derivatives of the potentials with respect to their natural variables. These equations are known as equations of state since they specify parameters of the thermodynamic state.[8] If we restrict ourselves to the potentials U, F, H and G, then we have:

+T = (∂U/∂S)_{V,{Ni}} = (∂H/∂S)_{p,{Ni}}
−p = (∂U/∂V)_{S,{Ni}} = (∂F/∂V)_{T,{Ni}}
+V = (∂H/∂p)_{S,{Ni}} = (∂G/∂p)_{T,{Ni}}
−S = (∂G/∂T)_{p,{Ni}} = (∂F/∂T)_{V,{Ni}}
μj = (∂Φ/∂Nj)_{X,Y,{Ni≠j}}

where, in the last equation, Φ is any of the thermodynamic potentials U, F, H, G and X, Y, {Ni≠j} are the set of natural variables for that potential, excluding Nj. If we use all potentials, then we will have more equations of state such as

Nj = −(∂U[μj]/∂μj)_{S,V,{Ni≠j}}

and so on. In all, there will be D equations for each potential, resulting in a total of D·2^D equations of state. If the D equations of state for a particular potential are known, then the fundamental equation for that potential can be determined. This means that all thermodynamic information about the system will be known, and that the fundamental equations for any other potential can be found, along with the corresponding equations of state.
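These relations can be checked symbolically. The sketch below assumes the closed-form internal energy of an ideal gas in its natural variables, U(S, V, N) = const · e^{S/(cN)} · V^{−R/c} · N^{(R+c)/c} (the same expression is quoted in the internal-energy section later in this chapter), and verifies that the equations of state T = (∂U/∂S)_{V,N} and p = −(∂U/∂V)_{S,N} reproduce U = cNT and pV = RNT:

```python
import sympy as sp

S, V, N, R, c, const = sp.symbols('S V N R c const', positive=True)

# Ideal-gas internal energy as a function of its natural variables S, V, N
U = const * sp.exp(S / (c * N)) * V**(-R / c) * N**((R + c) / c)

T = sp.diff(U, S)     # equation of state: +T = (dU/dS) at constant V, N
p = -sp.diff(U, V)    # equation of state: -p = (dU/dV) at constant S, N

print(sp.simplify(U - c * N * T))       # 0  ->  U = c N T
print(sp.simplify(p * V - R * N * T))   # 0  ->  p V = R N T
```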

9.1.5 The Maxwell relations

Main article: Maxwell relations

Again, define xi and yi to be conjugate pairs, and the yi to be the natural variables of some potential Φ. We may take the cross differentials of the state equations, which obey the following relationship:

(∂/∂yj (∂Φ/∂yk)_{yi≠k})_{yi≠j} = (∂/∂yk (∂Φ/∂yj)_{yi≠j})_{yi≠k}

From these we get the Maxwell relations.[1][9] There will be (D − 1)/2 of them for each potential, giving a total of D(D − 1)/2 equations in all. If we restrict ourselves to the potentials U, F, H and G:

(∂T/∂V)_{S,{Ni}} = −(∂p/∂S)_{V,{Ni}}
(∂T/∂p)_{S,{Ni}} = +(∂V/∂S)_{p,{Ni}}
(∂S/∂V)_{T,{Ni}} = +(∂p/∂T)_{V,{Ni}}
(∂S/∂p)_{T,{Ni}} = −(∂V/∂T)_{p,{Ni}}

Using the equations of state involving the chemical potential we get equations such as:

(∂T/∂Nj)_{V,S,{Ni≠j}} = (∂μj/∂S)_{V,{Ni}}

and using the other potentials we can get equations such as:

(∂Nj/∂V)_{S,μj,{Ni≠j}} = (∂p/∂μj)_{S,V,{Ni≠j}}
(∂Nj/∂Nk)_{S,V,μj,{Ni≠j,k}} = −(∂μk/∂μj)_{S,V,{Ni≠j}}
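A Maxwell relation can likewise be verified symbolically. The sketch below assumes the Helmholtz free energy of a monatomic ideal gas, F(T, V, N) = −NRT[ln(V/N) + (3/2)ln T + c0] (the additive constant c0 is irrelevant here), and checks that (∂S/∂V)_T = (∂p/∂T)_V:

```python
import sympy as sp

T, V, N, R, c0 = sp.symbols('T V N R c0', positive=True)

# Helmholtz free energy of a monatomic ideal gas (up to an additive constant)
F = -N * R * T * (sp.log(V / N) + sp.Rational(3, 2) * sp.log(T) + c0)

S = -sp.diff(F, T)   # -S = (dF/dT) at constant V, N
p = -sp.diff(F, V)   # -p = (dF/dV) at constant T, N

# Maxwell relation obtained from F: (dS/dV)_T = (dp/dT)_V
print(sp.simplify(sp.diff(S, V) - sp.diff(p, T)))   # 0
```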
9.1.6 Euler integrals

Again, define xi and yi to be conjugate pairs, and the yi to be the natural variables of the internal energy. Since all of the natural variables of the internal energy U are extensive quantities,

U({αyi}) = αU({yi})

it follows from Euler's homogeneous function theorem that the internal energy can be written as:

U({yi}) = Σj yj (∂U/∂yj)_{yi≠j}

From the equations of state, we then have:

U = TS − pV + Σi μiNi

Substituting into the expressions for the other main potentials we have:

F = −pV + Σi μiNi
H = TS + Σi μiNi
G = Σi μiNi

As in the above sections, this process can be carried out on all of the other thermodynamic potentials. Note that the Euler integrals are sometimes also referred to as fundamental equations.

9.1.7 The Gibbs–Duhem relation

Deriving the Gibbs–Duhem equation from basic thermodynamic state equations is straightforward.[7][10][11] Equating any thermodynamic potential definition with its Euler integral expression yields:

U = TS − pV + Σi μiNi

Differentiating, and using the second law:

dU = T dS − p dV + Σi μi dNi

yields:

0 = S dT − V dp + Σi Ni dμi

which is the Gibbs–Duhem relation. The Gibbs–Duhem is a relationship among the intensive parameters of the system. It follows that for a simple system with I components, there will be I + 1 independent parameters, or degrees of freedom. For example, a simple system with a single component will have two degrees of freedom, and may be specified by only two parameters, such as pressure and volume for example. The law is named after Josiah Willard Gibbs and Pierre Duhem.

9.1.8 Chemical reactions

Changes in these quantities are useful for assessing the degree to which a chemical reaction will proceed. The relevant quantity depends on the reaction conditions, as shown in the following table. Δ denotes the change in the potential and at equilibrium the change will be zero. (At constant S and V the relevant potential is U; at constant S and p it is H; at constant T and V it is F; at constant T and p it is G.)

Most commonly one considers reactions at constant p and T, so the Gibbs free energy is the most useful potential in studies of chemical reactions.

9.1.9 See also

Coomber's relationship

9.1.10 Notes

[1] Alberty (2001) p. 1353
[2] Alberty (2001) p. 1376
[3] ISO/IEC 80000-5:2007, item 5-20.4
[4] Callen (1985) p. 153
[5] Alberty (2001) p. 1352
[6] Alberty (2001) p. 1355
[7] Alberty (2001) p. 1354
[8] Callen (1985) p. 37
[9] Callen (1985) p. 181
[10] Moran & Shapiro, p. 538
[11] Callen (1985) p. 60

9.1.11 References

Alberty, R. A. (2001). "Use of Legendre transforms in chemical thermodynamics" (PDF). Pure Appl. Chem. 73 (8): 1349–1380. doi:10.1351/pac200173081349.
Callen, Herbert B. (1985). Thermodynamics and an Introduction to Thermostatistics (2nd ed.). New York: John Wiley & Sons. ISBN 0-471-86256-8.
Moran, Michael J.; Shapiro, Howard N. (1996). Fundamentals of Engineering Thermodynamics (3rd ed.). New York; Toronto: J. Wiley & Sons. ISBN 0-471-07681-3.

9.1.12 Further reading

McGraw Hill Encyclopaedia of Physics (2nd Edition), C. B. Parker, 1994, ISBN 0-07-051400-3
Thermodynamics, From Concepts to Applications (2nd Edition), A. Shavit, C. Gutfinger, CRC Press (Taylor and Francis Group, USA), 2009, ISBN 9781420073683
Chemical Thermodynamics, D. J. G. Ives, University Chemistry, Macdonald Technical and Scientific, 1971, ISBN 0-356-03736-3
Elements of Statistical Thermodynamics (2nd Edition), L. K. Nash, Principles of Chemistry, Addison-Wesley, 1974, ISBN 0-201-05229-6
Statistical Physics (2nd Edition), F. Mandl, Manchester Physics, John Wiley & Sons, 2008, ISBN 9780471566588

9.1.13 External links

Thermodynamic Potentials, Georgia State University
Chemical Potential Energy: The 'Characteristic' vs the Concentration-Dependent Kind

9.2 Enthalpy

Not to be confused with Entropy.

Enthalpy (/ˈɛnθəlpi/) is a measurement of energy in a thermodynamic system. It includes the internal energy, which is the energy required to create a system, and the amount of energy required to make room for it by displacing its environment and establishing its volume and pressure.[1]

Enthalpy is defined as a state function that depends only on the prevailing equilibrium state identified by the variables internal energy, pressure, and volume. It is an extensive quantity. The unit of measurement for enthalpy in the International System of Units (SI) is the joule, but other historical, conventional units are still in use, such as the British thermal unit and the calorie.

The enthalpy is the preferred expression of system energy changes in many chemical, biological, and physical measurements at constant pressure, because it simplifies the description of energy transfer. At constant pressure, the enthalpy change equals the energy transferred from the environment through heating or work other than expansion work.

The total enthalpy, H, of a system cannot be measured directly. The same situation exists in classical mechanics: only a change or difference in energy carries physical meaning. Enthalpy itself is a thermodynamic potential, so in order to measure the enthalpy of a system, we must refer to a defined reference point; therefore what we measure is the change in enthalpy, ΔH. The ΔH is a positive change in endothermic reactions, and negative in heat-releasing exothermic processes.

For processes under constant pressure, ΔH is equal to the change in the internal energy of the system, plus the pressure-volume work that the system has done on its surroundings.[2] This means that the change in enthalpy under such conditions is the heat absorbed (or released) by the material through a chemical reaction or by external heat transfer. Enthalpies for chemical substances at constant pressure assume standard state: most commonly 1 bar pressure. Standard state does not, strictly speaking, specify a temperature (see standard state), but expressions for enthalpy generally reference the standard heat of formation at 25 °C.

Enthalpy of ideal gases and incompressible solids and liquids does not depend on pressure, unlike entropy and Gibbs energy. Real materials at common temperatures and pressures usually closely approximate this behavior, which greatly simplifies enthalpy calculation and use in practical designs and analyses.

9.2.1 Origins

The word enthalpy is based on the Ancient Greek verb enthalpein (ἐνθάλπειν), which means "to warm in".[3] It comes from the Classical Greek prefix ἐν- en-, meaning "to put into", and the verb θάλπειν thalpein, meaning "to heat". The word enthalpy is often incorrectly attributed to Benoît Paul Émile Clapeyron and Rudolf Clausius through the 1850 publication of their Clausius–Clapeyron relation. This misconception was popularized by the 1927 publication of The Mollier Steam Tables and Diagrams. However, neither the concept, the word, nor the symbol for enthalpy existed until well after Clapeyron's death.

The earliest writings to contain the concept of enthalpy did not appear until 1875,[4] when Josiah Willard Gibbs introduced "a heat function for constant pressure". However, Gibbs did not use the word "enthalpy" in his writings.[note 1]

The actual word first appears in the scientific literature in a 1909 publication by J. P. Dalton. According to that publication, Heike Kamerlingh Onnes (1853–1926) actually coined the word.[5]

Over the years, many different symbols were used to denote enthalpy. It was not until 1922 that Alfred W. Porter proposed the symbol "H" as the accepted standard,[6] thus finalizing the terminology still in use today.

9.2.2 Formal definition

The enthalpy of a homogeneous system is defined as[7][8]

H = U + pV,

where

H is the enthalpy of the system,
U is the internal energy of the system,
p is the pressure of the system,
V is the volume of the system.

The enthalpy is an extensive property. This means that, for homogeneous systems, the enthalpy is proportional to the size of the system. It is convenient to introduce the specific enthalpy h = H/m, where m is the mass of the system, or the molar enthalpy Hm = H/n, where n is the number of moles (h and Hm are intensive properties). For inhomogeneous systems the enthalpy is the sum of the enthalpies of the composing subsystems:

H = Σk Hk,

where the label k refers to the various subsystems. In case of continuously varying p, T, and/or composition, the summation becomes an integral:

H = ∫ ρh dV,

where ρ is the density.

The enthalpy of homogeneous systems can be viewed as a function H(S, p) of the entropy S and the pressure p, and a differential relation for it can be derived as follows. We start from the first law of thermodynamics for closed systems for an infinitesimal process:

dU = δQ − δW.

Here, δQ is a small amount of heat added to the system, and δW a small amount of work performed by the system. In a homogeneous system only reversible processes can take place, so the second law of thermodynamics gives δQ = T dS, with T the absolute temperature of the system. Furthermore, if only pV work is done, δW = p dV. As a result,

dU = T dS − p dV.

Adding d(pV) to both sides of this expression gives

dU + d(pV) = T dS − p dV + d(pV),

or

d(U + pV) = T dS + V dp.

So

dH(S, p) = T dS + V dp.
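The step from dU = T dS − p dV to dH = T dS + V dp is just the product rule applied to d(pV); a short symbolic check (treating the differentials as formal symbols, purely as an illustration) makes this explicit:

```python
import sympy as sp

T, p, V = sp.symbols('T p V')
dS, dV, dp = sp.symbols('dS dV dp')   # differentials treated as formal symbols

dU = T * dS - p * dV      # first and second law, reversible pV work only
d_pV = p * dV + V * dp    # product rule: d(pV) = p dV + V dp

print(sp.expand(dU + d_pV))   # T*dS + V*dp, i.e. dH = T dS + V dp
```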

9.2.3 Other expressions

The above expression of dH in terms of entropy and pressure may be unfamiliar to some readers. However, there are expressions in terms of more familiar variables such as temperature and pressure:[9][7]:88

dH = Cp dT + V(1 − αT) dp.

Here Cp is the heat capacity at constant pressure and α is the coefficient of (cubic) thermal expansion:

α = (1/V)(∂V/∂T)_p.

With this expression one can, in principle, determine the enthalpy if Cp and V are known as functions of p and T.

Note that for an ideal gas, αT = 1,[note 2] so that

dH = Cp dT.

In a more general form, the first law describes the internal energy with additional terms involving the chemical potential and the number of particles of various types. The differential statement for dH then becomes

dH = T dS + V dp + Σi μi dNi,

where μi is the chemical potential per particle for an i-type particle, and Ni is the number of such particles. The last term can also be written as μi dni (with dni the number of moles of component i added to the system and, in this case, μi the molar chemical potential) or as μi dmi (with dmi the mass of component i added to the system and, in this case, μi the specific chemical potential).

9.2.4 Physical interpretation

The U term can be interpreted as the energy required to create the system, and the pV term as the energy that would be required to make room for the system if the pressure of the environment remained constant. When a system, for example, n moles of a gas of volume V at pressure p and temperature T, is created or brought to its present state from absolute zero, energy must be supplied equal to its internal energy U plus pV, where pV is the work done in pushing against the ambient (atmospheric) pressure.

In basic physics and statistical mechanics it may be more interesting to study the internal properties of the system and therefore the internal energy is used.[10][11] In basic chemistry, experiments are often conducted at constant atmospheric pressure, and the pressure-volume work represents an energy exchange with the atmosphere that cannot be accessed or controlled, so that ΔH is the expression chosen for the heat of reaction.

9.2.5 Relationship to heat

In order to discuss the relation between the enthalpy increase and heat supply, we return to the first law for closed systems: dU = δQ − δW. We apply it to the special case with a uniform pressure at the surface. In this case the work term can be split into two contributions, the so-called pV work, given by p dV (where here p is the pressure at the surface, dV is the increase of the volume of the system), and all other types of work δW′, such as by a shaft or by electromagnetic interaction. So we write δW = p dV + δW′. In this case the first law reads:

dU = δQ − p dV − δW′,

or

dH = δQ + V dp − δW′.

From this relation we see that the increase in enthalpy of a system is equal to the added heat:

dH = δQ,

provided that the system is under constant pressure (dp = 0) and that the only work done by the system is expansion work (δW′ = 0).[12]

9.2.6 Applications

In thermodynamics, one can calculate enthalpy by determining the requirements for creating a system from "nothingness"; the mechanical work required, pV, differs based upon the conditions that obtain during the creation of the thermodynamic system.

Energy must be supplied to remove particles from the surroundings to make space for the creation of the system, assuming that the pressure p remains constant; this is the pV term. The supplied energy must also provide the change in internal energy, U, which includes activation energies, ionization energies, mixing energies, vaporization energies, chemical bond energies, and so forth. Together, these constitute the change in the enthalpy U + pV. For systems at constant pressure, with no external work done other than the pV work, the change in enthalpy is the heat received by the system.

For a simple system, with a constant number of particles, the difference in enthalpy is the maximum amount of thermal energy derivable from a thermodynamic process in which the pressure is held constant.

Heat of reaction

Main article: Standard enthalpy of reaction

The total enthalpy of a system cannot be measured directly; the enthalpy change of a system is measured instead. Enthalpy change is defined by the following equation:

ΔH = Hf − Hi,

where

ΔH is the enthalpy change,
Hf is the final enthalpy of the system (in a chemical reaction, the enthalpy of the products),
Hi is the initial enthalpy of the system (in a chemical reaction, the enthalpy of the reactants).

For an exothermic reaction at constant pressure, the system's change in enthalpy equals the energy released in the reaction, including the energy retained in the system and lost through expansion against its surroundings. In a similar manner, for an endothermic reaction, the system's change in enthalpy is equal to the energy absorbed in the reaction, including the energy lost by the system and gained from compression from its surroundings. A relatively easy way to determine whether or not a reaction is exothermic or endothermic is to determine the sign of ΔH. If ΔH is positive, the reaction is endothermic, that is heat is absorbed by the system due to the products of the reaction having a greater enthalpy than the reactants. On the other hand, if ΔH is negative, the reaction is exothermic, that is the overall decrease in enthalpy is achieved by the generation of heat.

Specific enthalpy

The specific enthalpy of a uniform system is defined as h = H/m, where m is the mass of the system. The SI unit for specific enthalpy is joule per kilogram. It can be expressed in other specific quantities by h = u + pv, where u is the specific internal energy, p is the pressure, and v is specific volume, which is equal to 1/ρ, where ρ is the density.

Enthalpy changes

An enthalpy change describes the change in enthalpy observed in the constituents of a thermodynamic system when undergoing a transformation or chemical reaction. It is the difference between the enthalpy after the process has completed, i.e. the enthalpy of the products, and the initial enthalpy of the system, i.e. the reactants. These processes are reversible and the enthalpy for the reverse process is the negative value of the forward change.

A common standard enthalpy change is the enthalpy of formation, which has been determined for a large number of substances. Enthalpy changes are routinely measured and compiled in chemical and physical reference works, such as the CRC Handbook of Chemistry and Physics. The following is a selection of enthalpy changes commonly recognized in thermodynamics.

When used in these recognized terms the qualifier change is usually dropped and the property is simply termed enthalpy of process. Since these properties are often used as reference values it is very common to quote them for a standardized set of environmental parameters, or standard conditions, including:

A temperature of 25 °C or 298 K,
A pressure of one atmosphere (1 atm or 101.325 kPa),
A concentration of 1.0 M when the element or compound is present in solution,
Elements or compounds in their normal physical states, i.e. standard state.

For such standardized values the name of the enthalpy is commonly prefixed with the term standard, e.g. standard enthalpy of formation.

Chemical properties:

Enthalpy of reaction, defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of substance reacts completely.

Enthalpy of formation, defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of a compound is formed from its elementary antecedents.

Enthalpy of combustion, defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of a substance burns completely with oxygen.

Enthalpy of hydrogenation, defined as the enthalpy change observed in a constituent of a thermodynamic


system when one mole of an unsaturated compound energy of a system is equal to the amount of energy added
reacts completely with an excess of hydrogen to form to the system by matter owing in and by heating, minus the
a saturated compound.
amount lost by matter owing out and in the form of work
done by the system:
Enthalpy of atomization, dened as the enthalpy
change required to atomize one mole of compound
completely.
dU = Q + dUin dUout W,
Enthalpy of neutralization, dened as the enthalpy
where U is the average internal energy entering the system,
change observed in a constituent of a thermodynamic
and U is the average internal energy leaving the system.
system when one mole of water is formed when an acid
and a base react.
Heat added
Q

Standard Enthalpy of solution, dened as the enthalpy


change observed in a constituent of a thermodynamic
system when one mole of a solute is dissolved completely in an excess of solvent, so that the solution is
at innite dilution.
Standard enthalpy of Denaturation (biochemistry), dened as the enthalpy change required to denature one
mole of compound.

Work performed
external to boundary
Wshaft

Hout

Hin

System boundary (open)


Enthalpy of hydration, dened as the enthalpy change
observed when one mole of gaseous ions are completely dissolved in water forming one mole of aqueous
During steady, continuous operation, an energy balance applied to
ions.

Physical properties:

an open system equates shaft work performed by the system to heat


added plus net enthalpy added

The region of space enclosed by the boundaries if the open


Enthalpy of fusion, dened as the enthalpy change re- system is usually called a control volume, and it may or may
quired to completely change the state of one mole of not correspond to physical walls. If we choose the shape
substance between solid and liquid states.
of the control volume such that all ow in or out occurs
perpendicular to its surface, then the ow of matter into the
Enthalpy of vaporization, dened as the enthalpy
system performs work as if it were a piston of uid pushing
change required to completely change the state of one
mass into the system, and the system performs work on the
mole of substance between liquid and gaseous states.
ow of matter out as if it were driving a piston of uid.
Enthalpy of sublimation, dened as the enthalpy There are then two types of work performed: ow work
change required to completely change the state of one described above, which is performed on the uid (this is
also often called pV work), and shaft work, which may be
mole of substance between solid and gaseous states.
performed on some mechanical device.
Lattice enthalpy, dened as the energy required to sepThese two types of work are expressed in the equation
arate one mole of an ionic compound into separated
gaseous ions to an innite distance apart (meaning no
force of attraction).
W = d(pout Vout ) d(pin Vin ) + Wshaft .
Enthalpy of mixing, dened as the enthalpy change
upon mixing of two (non-reacting) chemical sub- Substitution into the equation above for the control volume
(cv) yields:
stances.
Open systems

dUcv = Q+dUin +d(pin Vin )dUout d(pout Vout )Wshaft .

In thermodynamic open systems, matter may ow in and The denition of enthalpy, H, permits us to use this
out of the system boundaries. The rst law of thermody- thermodynamic potential to account for both internal ennamics for open systems states: The increase in the internal ergy and pV work in uids for open systems:


dUcv = Q + dHin dHout Wshaft .


If we allow also the system boundary to move (e.g. due to
moving pistons), we get a rather general form of the rst law
for open systems.[13] In terms of time derivatives it reads:

dVk
dU
=
Q k +
H k
pk
P,
dt
dt
k

with sums over the various places k where heat is supplied,


matter ows into the system, and boundaries are moving.
The k terms represent enthalpy ows, which can be writ- Ts diagram of nitrogen.[14] The red curve at the left is the meltten as
ing curve. The red dome represents the two-phase region with the

H k = hk m
k = Hm n k ,
with k the mass ow and k the molar ow at position k
respectively. The term dVk/dt represents the rate of change
of the system volume at position k that results in pV power
done by the system. The parameter P represents all other
forms of power done by the system such as shaft power, but
it can also be e.g. electric power produced by an electrical
power plant.

low-entropy side the saturated liquid and the high-entropy side the
saturated gas. The black curves give the Ts relation along isobars.
The pressures are indicated in bar. The blue curves are isenthalps
(curves of constant enthalpy). The values are indicated in blue in
kJ/kg. The specic points a, b, etc., are treated in the main text.

of the most common diagrams is the temperaturespecic


entropy diagram (Ts-diagram). It gives the melting curve
and saturated liquid and vapor values together with isobars
and isenthalps. These diagrams are powerful tools in the
hands of the thermal engineer.

Note that the previous expression holds true only if the kinetic energy ow rate is conserved between system inlet and
outlet. Otherwise, it has to be included in the enthalpy bal- Some basic applications
ance. During steady-state operation of a device (see turbine,
pump, and engine), the average dU/dt may be set equal to The points a through h in the gure play a role in the diszero. This yields a useful expression for the average power cussion in this section.
generation for these devices in the absence of chemical reactions:
a: T = 300 K, p = 1 bar, s = 6.85 kJ/(kg K), h =
461 kJ/kg;
dVk
b: T = 380 K, p = 2 bar, s = 6.85 kJ/(kg K), h =
pk
H k
P =
Q k +
,
dt
530 kJ/kg;
k

where the angle brackets denote time averages. The technical importance of the enthalpy is directly related to its
presence in the rst law for open systems, as formulated
above.

9.2.7

Diagrams

Nowadays the enthalpy values of important substances can


be obtained using commercial software. Practically all relevant material properties can be obtained either in tabular
or in graphical form. There are many types of diagrams,
such as hT diagrams, which give the specic enthalpy as
function of temperature for various pressures, and hp diagrams, which give h as function of p for various T. One

c: T = 300 K, p = 200 bar, s = 5.16 kJ/(kg K), h


= 430 kJ/kg;
d: T = 270 K, p = 1 bar, s = 6.79 kJ/(kg K), h =
430 kJ/kg;
e: T = 108 K, p = 13 bar, s = 3.55 kJ/(kg K), h =
100 kJ/kg (saturated liquid at 13 bar);
f: T = 77.2 K, p = 1 bar, s = 3.75 kJ/(kg K), h =
100 kJ/kg;
g: T = 77.2 K, p = 1 bar, s = 2.83 kJ/(kg K), h =
28 kJ/kg (saturated liquid at 1 bar);
h: T = 77.2 K, p = 1 bar, s = 5.41 kJ/(kg K), h =
230 kJ/kg (saturated gas at 1 bar);


Throttling

and T = 108 K. Throttling from this point to a pressure of


1 bar ends in the two-phase region (point f). This means
Main article: JouleThomson eect
that a mixture of gas and liquid leaves the throttling valve.
One of the simple applications of the concept of enthalpy Since the enthalpy is an extensive parameter, the enthalpy
in f (h ) is equal to the enthalpy in g (h ) multiplied by the
liquid fraction in f (x ) plus the enthalpy in h (h ) multiplied
by the gas fraction in f (1 x ). So
hf = xf hg + (1 xf )hh .
With numbers: 100 = x 28 + (1 x ) 230, so x =
0.64. This means that the mass fraction of the liquid in the
liquidgas mixture that leaves the throttling valve is 64%.
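The liquid fraction quoted above follows from solving the lever rule for x_f; a short numerical check with the enthalpies read off the T–s diagram (in kJ/kg) reproduces it:

```python
# Lever rule for the throttled state f in the two-phase region:
# h_f = x_f * h_g + (1 - x_f) * h_h, solved for the liquid mass fraction x_f.
h_f = 100.0   # enthalpy after throttling (conserved from point e), kJ/kg
h_g = 28.0    # saturated liquid at 1 bar, kJ/kg
h_h = 230.0   # saturated gas at 1 bar, kJ/kg

x_f = (h_h - h_f) / (h_h - h_g)
print(f"liquid mass fraction x_f = {x_f:.2f}")   # ~0.64
```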
Schematic diagram of a throttling in the steady state. Fluid enters
the system (dotted rectangle) at point 1 and leaves it at point 2. The
mass ow is .

is the so-called throttling process, also known as JouleThomson expansion. It concerns a steady adiabatic ow of
a uid through a ow resistance (valve, porous plug, or any
other type of ow resistance) as shown in the gure. This
process is very important, since it is at the heart of domestic refrigerators, where it is responsible for the temperature
drop between ambient temperature and the interior of the
refrigerator. It is also the nal stage in many types of liqueers.
In the rst law for open systems (see above) applied to the
system, all terms are zero, except the terms for the enthalpy
ow. Hence

0 = mh
1 mh
2.

Compressors
Main article: Gas compressor
A power P is applied e.g. as electrical power. If the com-

Schematic diagram of a compressor in the steady state. Fluid enters


the system (dotted rectangle) at point 1 and leaves it at point 2. The
mass ow is . A power P is applied and a heat ow Q is released
to the surroundings at ambient temperature Ta.

Since the mass ow is constant, the specic enthalpies at the pression is adiabatic, the gas temperature goes up. In the
two sides of the ow resistance are the same:
reversible case it would be at constant entropy, which corresponds with a vertical line in the Ts diagram. For example, compressing nitrogen from 1 bar (point a) to 2 bar
h1 = h2 ,
(point b) would result in a temperature increase from 300
K to 380 K. In order to let the compressed gas exit at ambithat is, the enthalpy per unit mass does not change during ent temperature T, heat exchange, e.g. by cooling water, is
the throttling. The consequences of this relation can be necessary. In the ideal case the compression is isothermal.
demonstrated using the Ts diagram above. Point c is at The average heat ow to the surroundings is Q. Since the
200 bar and room temperature (300 K). A JouleThomson system is in the steady state the rst law gives
expansion from 200 bar to 1 bar follows a curve of constant
enthalpy of roughly 425 kJ/kg (not shown in the diagram)
lying between the 400 and 450 kJ/kg isenthalps and ends in 0 = Q + mh
1 mh
2 + P.
point d, which is at a temperature of about 270 K. Hence
the expansion from 200 bar to 1 bar cools nitrogen from The minimal power needed for the compression is realized
300 K to 270 K. In the valve, there is a lot of friction, and if the compression is reversible. In that case the second law
a lot of entropy is produced, but still the nal temperature of thermodynamics for open systems gives
is below the starting value!
Point e is chosen so that it is on the saturated liquid line
Q
0
=

+ ms
1 ms
2.
with h = 100 kJ/kg. It corresponds roughly with p = 13 bar
Ta


Eliminating Q gives for the minimal power

For example, compressing 1 kg of nitrogen from 1 bar to 200 bar costs at least (h2 − h1) − Ta(s2 − s1). With the data obtained with the T–s diagram, we find a value of (430 − 461) − 300 × (5.16 − 6.85) = 476 kJ/kg.
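The same number follows directly from evaluating (h2 − h1) − Ta(s2 − s1) with the diagram values for points a and c:

```python
# Minimum specific compression work for nitrogen from 1 bar, 300 K (point a)
# to 200 bar, 300 K (point c), using values read off the T-s diagram.
h1, s1 = 461.0, 6.85   # point a: kJ/kg, kJ/(kg K)
h2, s2 = 430.0, 5.16   # point c: kJ/kg, kJ/(kg K)
Ta = 300.0             # ambient temperature, K

w_min = (h2 - h1) - Ta * (s2 - s1)
print(f"minimum work: {w_min:.0f} kJ/kg")   # 476 kJ/kg
```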
The relation for the power can be further simplied by writing it as

(dh Ta ds).
1

9.2.8

[2] Van Wylen, G. J.; Sonntag, R. E. (1985). Section 5.5.


Fundamentals of Classical Thermodynamics (3rd ed.). New
York, NY: John Wiley & Sons. ISBN 0-471-82933-1.
[3] "". A GreekEnglish Lexicon.
[4] Henderson, Douglas; Eyring, Henry; Jost, Wilhelm (1967).
Physical Chemistry: An Advanced Treatise. Academic Press.
p. 29.
[5] Laidler, Keith (1995). The World of Physical Chemistry.
Oxford University Press. p. 110.

With dh = T ds + v dp, this results in the nal relation

Pmin
=
m

References

[1] Zemansky, Mark W. (1968). Chapter 11. Heat and Thermodynamics (5th ed.). New York, NY: McGraw-Hill. p.
275.

Pmin
= h2 h1 Ta (s2 s1 ).
m

Pmin
=
m

9.2.10

[6] Howard, Irmgard (2002). "H Is for Enthalpy, Thanks to


Heike Kamerlingh Onnes and Alfred W. Porter. Journal of Chemical Education (ACS Publications) 79 (6): 697.
Bibcode:2002JChEd..79..697H. doi:10.1021/ed079p697.
[7] Guggenheim, E. A. (1959). Thermodynamics. Amsterdam:
North-Holland Publishing Company.

v dp.
1

[8] Zumdahl, Steven S. (2008).


Thermochemistry.
Chemistry. Cengage Learning. p. 243. ISBN 978-0547-12532-9.

See also

Standard enthalpy change of formation (data table)


Calorimetry

[9] Moran, M. J.; Shapiro, H. N. (2006). Fundamentals of Engineering Thermodynamics (5th ed.). John Wiley & Sons. p.
511.
[10] Reif, F. (1967). Statistical Physics. London: McGraw-Hill.

Calorimeter

[11] Kittel, C.; Kroemer, H. (1980). Thermal Physics. London:


Freeman.

Departure function

[12] Ebbing, Darrel; Gammon, Steven (2010). General Chemistry. Cengage Learning. p. 231. ISBN 978-0-538-497527.

Hesss law
Isenthalpic process
Stagnation enthalpy

[13] Moran, M. J.; Shapiro, H. N. (2006). Fundamentals of Engineering Thermodynamics (5th ed.). John Wiley & Sons. p.
129.

Thermodynamic databases for pure substances

[14] Figure composed with data obtained with RefProp, NIST


Standard Reference Database 23.

Entropy

9.2.9

9.2.11

Notes

[1] The Collected Works of J. Willard Gibbs, Vol. I do not contain reference to the word enthalpy, but rather reference the
heat function for constant pressure.
[2] αT = (T/V)(∂(nRT/p)/∂T)_p = nRT/(pV) = 1

Bibliography

Dalton, J.P. (1909). Researches on the JouleKelvin


eect, especially at low temperatures. I. Calculations
for hydrogen (PDF). KNAW Proceedings 11: 863
873.
Haase, R. (1971). Jost, W., ed. Physical Chemistry:
An Advanced Treatise. New York, NY: Academic. p.
29.


Gibbs, J. W. The Collected Works of J. Willard Gibbs, chain of thermodynamic operations and thermodynamic
Vol. I (1948 ed.). New Haven, CT: Yale University processes by which the given state can be prepared, startPress. p. 88..
ing with a reference state which is customarily assigned a
reference value for its internal energy. Such a chain, or
Howard, I. K. (2002). "H Is for Enthalpy, Thanks to path, can be theoretically described by certain extensive
Heike Kamerlingh Onnes and Alfred W. Porter. J. state variables of the system, namely, its entropy, S, its volChem. Educ. 79: 697698. doi:10.1021/ed079p697. ume, V, and its mole numbers, {Nj}. The internal energy,
Laidler, K. (1995). The World of Physical Chemistry. U(S,V,{Nj}), is a function of those. Sometimes, to that
list are appended other extensive state variables, for examOxford: Oxford University Press. p. 110.
ple electric dipole moment. For practical considerations in
Kittel, C.; Kroemer, H. (1980). Thermal Physics. thermodynamics and engineering it is rarely necessary or
New York, NY: S. R. Furphy & Co. p. 246.
convenient to consider all energies belonging to the total
intrinsic energy of a system, such as the energy given by
DeHo, R. (2006). Thermodynamics in Materials Scithe equivalence of mass. Customarily, thermodynamic deence (2nd ed.). New York, NY: Taylor and Francis
scriptions include only items relevant to the processes unGroup.
der study. Thermodynamics is chiey concerned only with
changes in the internal energy, not with its absolute value.

9.2.12

External links

The internal energy is a state function of a system, because


its value depends only on the current state of the system
Enthalpy - Eric Weissteins World of Physics
and not on the path taken or processes undergone to prepare it. It is an extensive quantity. It is the one and only
Enthalpy - Georgia State University
cardinal thermodynamic potential.[4] Through it, by use of
Enthalpy example calculations - Texas A&M Univer- Legendre transforms, are mathematically constructed the
sity Chemistry Department
other thermodynamic potentials. These are functions of
variable lists in which some extensive variables are replaced
by their conjugate intensive variables. Legendre transformation is necessary because mere substitutive replacement
9.3 Internal energy
of extensive variables by intensive variables does not lead
In thermodynamics, the internal energy of a system is the to thermodynamic potentials. Mere substitution leads to a
energy contained within the system, excluding the kinetic less informative formula, an equation of state.
energy of motion of the system as a whole and the potential Though it is a macroscopic quantity, internal energy can
energy of the system as a whole due to external force elds. be explained in microscopic terms by two theoretical virIt keeps account of the gains and losses of energy of the tual components. One is the microscopic kinetic energy
system that are due to changes in its internal state.[1][2]
due to the microscopic motion of the systems particles
The internal energy of a system can be changed by transfers (translations, rotations, vibrations). The other is the potenof matter and by work and heat transfer.[3] When matter tial energy associated with the microscopic forces, includtransfer is prevented by impermeable containing walls, the ing the chemical bonds, between the particles; this is for
system is said to be closed. Then the rst law of thermo- ordinary physics and chemistry. If thermonuclear reactions
dynamics states that the increase in internal energy is equal are specied as a topic of concern, then the static rest mass
to the total heat added plus the work done on the system by energy of the constituents of matter is also counted. There
its surroundings. If the containing walls pass neither mat- is no simple universal relation between these quantities of
ter nor energy, the system is said to be isolated. Then its microscopic energy and the quantities of energy gained or
internal energy cannot change. The rst law of thermody- lost by the system in work, heat, or matter transfer.
namics may be regarded as establishing the existence of the The SI unit of energy is the joule (J). Sometimes it is conveinternal energy.
nient to use a corresponding density called specic internal
The internal energy is one of the two cardinal state functions energy which is internal energy per unit of mass (kilogram)
of the system in question. The SI unit of specic internal
of the state variables of a thermodynamic system.
energy is J/kg. If the specic internal energy is expressed
relative to units of amount of substance (mol), then it is referred to as molar internal energy and the unit is J/mol.
9.3.1 Introduction
From the standpoint of statistical mechanics, the internal
The internal energy of a given state of a system cannot be di- energy is equal to the ensemble average of the sum of the
rectly measured. It is determined through some convenient


microscopic kinetic and potential energies of the system.

energy needed to create the given state of the system from


the reference state.

From a non-relativistic microscopic point of view, it may


be divided into microscopic potential energy, U , and
The internal energy, U(S,V,{Nj}), expresses the thermody- microscopic kinetic energy, U , components:
namics of a system in the energy-language, or in the energy representation. Its arguments are exclusively extensive
variables of state. Alongside the internal energy, the other U = Umicro pot + Umicro kin
cardinal function of state of a thermodynamic system is its
entropy, as a function, S(U,V,{Nj}), of the same list of ex- The microscopic kinetic energy of a system arises as the
tensive variables of state, except that the entropy, S, is re- sum of the motions of all the systems particles with replaced in the list by the internal energy, U. It expresses the spect to the center-of-mass frame, whether it be the moentropy representation.[4][5][6]
tion of atoms, molecules, atomic nuclei, electrons, or other
Each cardinal function is a monotonic function of each particles. The microscopic potential energy algebraic sumof its natural or canonical variables. Each provides its mative components are those of the chemical and nuclear
characteristic or fundamental equation, for example U = particle bonds, and the physical force elds within the sysU(S,V,{Nj}), that by itself contains all thermodynamic in- tem, such as due to internal induced electric or magnetic
formation about the system. The fundamental equations for dipole moment, as well as the energy of deformation of
the two cardinal functions can in principle be interconverted solids (stress-strain). Usually, the split into microscopic kiby solving, for example, U = U(S,V,{Nj}) for S, to get S = netic and potential energies is outside the scope of macroscopic thermodynamics.
S(U,V,{Nj}).
Cardinal functions

In contrast, Legendre transforms are necessary to derive


fundamental equations for other thermodynamic potentials
and Massieu functions. The entropy as a function only of
extensive state variables is the one and only cardinal function of state for the generation of Massieu functions. It is not
itself customarily designated a 'Massieu function', though
rationally it might be thought of as such, corresponding to
the term 'thermodynamic potential', which includes the internal energy.[5][7][8]
For real and practical systems, explicit expressions of the
fundamental equations are almost always unavailable, but
the functional relations exist in principle. Formal, in principle, manipulations of them are valuable for the understanding of thermodynamics.

Internal energy does not include the energy due to motion


or location of a system as a whole. That is to say, it excludes any kinetic or potential energy the body may have
because of its motion or location in external gravitational,
electrostatic, or electromagnetic elds. It does, however, include the contribution of such a eld to the energy due to
the coupling of the internal degrees of freedom of the object with the eld. In such a case, the eld is included in the
thermodynamic description of the object in the form of an
additional external parameter.

For practical considerations in thermodynamics or engineering, it is rarely necessary, convenient, nor even possible, to consider all energies belonging to the total intrinsic
energy of a sample system, such as the energy given by the
equivalence of mass. Typically, descriptions only include
components relevant to the system under study. Indeed, in
most systems under consideration, especially through ther9.3.2 Description and denition
modynamics, it is impossible to calculate the total internal
The internal energy U of a given state of the system is de- energy.[9] Therefore, a convenient null reference point may
termined relative to that of a standard state of the system, be chosen for the internal energy.
by adding up the macroscopic transfers of energy that ac- The internal energy is an extensive property: it depends on
company a change of state from the reference state to the the size of the system, or on the amount of substance it congiven state:
tains.

U =

Ei

where U denotes the dierence between the internal energy of the given state and that of the reference state, and
the Ei are the various energies transferred to the system in
the steps from the reference state to the given state. It is the

At any temperature greater than absolute zero, microscopic


potential energy and kinetic energy are constantly converted
into one another, but the sum remains constant in an isolated
system (cf. table). In the classical picture of thermodynamics, kinetic energy vanishes at zero temperature and the
internal energy is purely potential energy. However, quantum mechanics has demonstrated that even at zero temperature particles maintain a residual energy of motion, the zero


point energy. A system at absolute zero is merely in its


quantum-mechanical ground state, the lowest energy state
available. At absolute zero a system of given composition
has attained its minimum attainable entropy.

A second mechanism of change of internal energy of a


closed system is the doing of work on the system, either
in mechanical form by changing pressure or volume, or by
other perturbations, such as directing an electric current
The microscopic kinetic energy portion of the internal en- through the system.
ergy gives rise to the temperature of the system. Statistical If the system is not closed, the third mechanism that can inmechanics relates the pseudo-random kinetic energy of in- crease the internal energy is transfer of matter into the sysdividual particles to the mean kinetic energy of the entire tem. This increase, U cannot be split into heat and
ensemble of particles comprising a system. Furthermore, it work components. If the system is so set up physically that
relates the mean microscopic kinetic energy to the macro- heat and work can be done on it by pathways separate from
scopically observed empirical property that is expressed as and independent of matter transfer, then the transfers of entemperature of the system. This energy is often referred to ergy add to change the internal energy:
as the thermal energy of a system,[10] relating this energy,
like the temperature, to the human experience of hot and
cold.
U = Q + Wpressurevolume + Wisochoric + Umatter
Statistical mechanics considers any system to be statistically
(separate pathway for matter transfer from heat and work transfer pathway
distributed across an ensemble of N microstates. Each microstate has an energy E and is associated with a probabil- If a system undergoes certain phase transformations while
ity p. The internal energy is the mean value of the systems being heated, such as melting and vaporization, it may
total energy, i.e., the sum of all microstate energies, each be observed that the temperature of the system does not
change until the entire sample has completed the transforweighted by their probability of occurrence:
mation. The energy introduced into the system while the
temperature did not change is called a latent energy, or latent
N

heat, in contrast to sensible heat, which is associated with


U=
p i Ei .
temperature change.
i=1

This is the statistical expression of the rst law of thermodynamics.


9.3.3 Internal energy changes

Thermodynamics is chiefly concerned with the changes ΔU in internal energy.

For a closed system, with matter transfer excluded, the changes in internal energy are due to heat transfer Q and due to work. The latter can be split into two kinds, pressure-volume work W_pressure-volume, and frictional and other kinds, such as electrical polarization, which do not alter the volume of the system and are called isochoric, W_isochoric. Accordingly, the internal energy change ΔU for a process may be written[3]

ΔU = Q + W_pressure-volume + W_isochoric    (closed system, no transfer of matter).[note 1]

When a closed system receives energy as heat, this energy increases the internal energy. It is distributed between microscopic kinetic and microscopic potential energies. In general, thermodynamics does not trace this distribution. In an ideal gas all of the extra energy results in a temperature increase, as it is stored solely as microscopic kinetic energy; such heating is said to be sensible.

If the system is not closed, the third mechanism that can increase the internal energy is transfer of matter into the system. This increase, ΔU_matter, cannot be split into heat and work components. If the system is so set up physically that heat and work can be done on it by pathways separate from and independent of matter transfer, then the transfers of energy add to change the internal energy:

ΔU = Q + W_pressure-volume + W_isochoric + ΔU_matter    (separate pathway for matter transfer from heat and work transfer pathways).

If a system undergoes certain phase transformations while being heated, such as melting and vaporization, it may be observed that the temperature of the system does not change until the entire sample has completed the transformation. The energy introduced into the system while the temperature does not change is called latent energy, or latent heat, in contrast to sensible heat, which is associated with temperature change.

Internal energy of the ideal gas

Thermodynamics often uses the concept of the ideal gas for teaching purposes, and as an approximation for working systems. The ideal gas is a gas of particles considered as point objects that interact only by elastic collisions and fill a volume such that their mean free path between collisions is much larger than their diameter. Such systems are approximated by the monatomic gases, helium and the other noble gases. Here the kinetic energy consists only of the translational energy of the individual atoms. Monatomic particles do not rotate or vibrate, and are not electronically excited to higher energies except at very high temperatures.

Therefore, internal energy changes in an ideal gas may be described solely by changes in its kinetic energy. Kinetic energy is simply the internal energy of the perfect gas and depends entirely on its pressure, volume and thermodynamic temperature.

The internal energy of an ideal gas is proportional to its mass (number of moles) N and to its temperature T,

U = c N T,

where c is the heat capacity (at constant volume) of the gas.
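A small worked example of sensible heating under U = cNT. The monatomic value c = 3R/2 (translation only) and the amounts and temperatures used below are assumptions for illustration, not figures from the text:

    R = 8.314  # universal gas constant, J/(mol*K)

    def ideal_gas_internal_energy(n_moles, T, c=1.5 * R):
        """U = c N T for an ideal gas; c = 3R/2 is the monatomic (translation-only) value."""
        return c * n_moles * T

    n, T1, T2 = 2.0, 300.0, 350.0  # 2 mol heated from 300 K to 350 K
    dU = ideal_gas_internal_energy(n, T2) - ideal_gas_internal_energy(n, T1)
    print(f"Sensible heating: dU = {dU:.1f} J")  # = (3/2) R n dT, about 1247 J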
The internal energy may be written as a function of the three extensive properties S, V, N (entropy, volume, mass) in the following way:[11][12]

U(S, V, N) = const · e^(S/(cN)) · V^(−R/c) · N^((R+c)/c),

where const is an arbitrary positive constant and where R is the universal gas constant. It is easily seen that U is a linearly homogeneous function of the three variables (that is, it is extensive in these variables), and that it is weakly convex. Knowing temperature and pressure to be the derivatives T = ∂U/∂S and p = −∂U/∂V, the ideal gas law pV = RNT immediately follows.
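The claim that the derivatives T = ∂U/∂S and p = −∂U/∂V recover the ideal gas law from this expression can be checked symbolically. The following is a sketch assuming SymPy is available; the symbol names are my own:

    import sympy as sp

    S, V, N, c, R, const = sp.symbols('S V N c R const', positive=True)
    U = const * sp.exp(S / (c * N)) * V**(-R / c) * N**((R + c) / c)

    T = sp.diff(U, S)       # temperature,  T = dU/dS
    p = -sp.diff(U, V)      # pressure,     p = -dU/dV

    print(sp.simplify(p * V - R * N * T))  # 0  ->  pV = RNT, the ideal gas law
    print(sp.simplify(U - c * N * T))      # 0  ->  U = cNT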

9.3.4 Internal energy of a closed thermodynamic system

The above summation of all components of change in internal energy assumes that a positive energy denotes heat added to the system or work done on the system, while a negative energy denotes work done by the system on the environment.

Typically this relationship is expressed in infinitesimal terms using the differentials of each term. Only the internal energy is an exact differential. For a system undergoing only thermodynamic processes, i.e. a closed system that can exchange only heat and work, the change in the internal energy is

dU = δQ + δW,

which constitutes the first law of thermodynamics.[note 1] It may be expressed in terms of other thermodynamic parameters. Each term is composed of an intensive variable (a generalized force) and its conjugate infinitesimal extensive variable (a generalized displacement).

For example, for a non-viscous fluid, the mechanical work done on the system may be related to the pressure p and volume V. The pressure is the intensive generalized force, while the volume is the extensive generalized displacement:

δW = −p dV.

This defines the direction of work, W, to be energy flow from the working system to the surroundings, indicated by a negative term.[note 1] Taking the direction of heat transfer δQ to be into the working fluid and assuming a reversible process, the heat is

δQ = T dS,

where T is temperature and S is entropy, and the change in internal energy becomes

dU = T dS − p dV.

Changes due to temperature and volume

The expression relating changes in internal energy to changes in temperature and volume is

dU = C_V dT + [ T (∂p/∂T)_V − p ] dV.    (1)

This is useful if the equation of state is known. In the case of an ideal gas, we can derive that dU = C_V dT, i.e. the internal energy of an ideal gas can be written as a function that depends only on the temperature.

Proof of pressure independence for an ideal gas

The expression relating changes in internal energy to changes in temperature and volume is

dU = C_V dT + [ T (∂p/∂T)_V − p ] dV.

The equation of state is the ideal gas law

p V = n R T.

Solve for the pressure:

p = n R T / V.

Substitute into the internal energy expression:

dU = C_V dT + [ T (∂p/∂T)_V − n R T / V ] dV.

Take the derivative of the pressure with respect to temperature:

(∂p/∂T)_V = n R / V.

Replace:

dU = C_V dT + [ n R T / V − n R T / V ] dV,

and simplify:

dU = C_V dT.
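The vanishing of the bracketed term in equation (1) for an ideal gas, which is the substance of the proof above, can also be verified symbolically. A sketch assuming SymPy, with symbol names of my own choosing:

    import sympy as sp

    n, R, T, V = sp.symbols('n R T V', positive=True)
    p = n * R * T / V                    # ideal gas equation of state solved for p

    correction = T * sp.diff(p, T) - p   # the bracketed term in equation (1)
    print(sp.simplify(correction))       # 0  ->  dU = C_V dT for an ideal gas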
Derivation of dU in terms of dT and dV

To express dU in terms of dT and dV, the term

dS = (∂S/∂T)_V dT + (∂S/∂V)_T dV

is substituted in the fundamental thermodynamic relation

dU = T dS − p dV.

This gives

dU = T (∂S/∂T)_V dT + [ T (∂S/∂V)_T − p ] dV.

The term T (∂S/∂T)_V is the heat capacity at constant volume, C_V.

The partial derivative of S with respect to V can be evaluated if the equation of state is known. From the fundamental thermodynamic relation, it follows that the differential of the Helmholtz free energy A is given by

dA = −S dT − p dV.

The symmetry of second derivatives of A with respect to T and V yields the Maxwell relation

(∂S/∂V)_T = (∂p/∂T)_V.

This gives the expression above.

Changes due to temperature and pressure

When dealing with fluids or solids, an expression in terms of the temperature and pressure is usually more useful:

dU = (C_p − α p V) dT + (β_T p − α T) V dp,

where it is assumed that the heat capacity at constant pressure is related to the heat capacity at constant volume according to

C_p = C_V + V T α² / β_T.

Derivation of dU in terms of dT and dP

The partial derivative of the pressure with respect to temperature at constant volume can be expressed in terms of the coefficient of thermal expansion

α ≡ (1/V) (∂V/∂T)_p

and the isothermal compressibility

β_T ≡ −(1/V) (∂V/∂p)_T

by writing

dV = (∂V/∂p)_T dp + (∂V/∂T)_p dT = V (α dT − β_T dp)    (2)

and equating dV to zero and solving for the ratio dp/dT. This gives

(∂p/∂T)_V = −(∂V/∂T)_p / (∂V/∂p)_T = α / β_T.    (3)

Substituting (2) and (3) in (1) gives the above expression.
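As a rough numerical illustration of the relation C_p = C_V + V T α²/β_T: the property values below are typical literature figures for liquid water near room temperature and are assumptions for illustration, not values taken from this text.

    # Rough illustrative property values for liquid water near 25 C (assumed):
    alpha  = 2.57e-4   # 1/K,  coefficient of thermal expansion
    beta_T = 4.5e-10   # 1/Pa, isothermal compressibility
    V_m    = 1.81e-5   # m^3/mol, molar volume
    Cp     = 75.3      # J/(mol*K), heat capacity at constant pressure
    T      = 298.15    # K

    Cv = Cp - V_m * T * alpha**2 / beta_T   # C_p - C_V = V T alpha^2 / beta_T
    print(f"C_V is roughly {Cv:.1f} J/(mol*K)")  # about 74.5 J/(mol*K)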

Changes due to volume at constant temperature

The internal pressure is defined as a partial derivative of the internal energy with respect to the volume at constant temperature:

π_T = (∂U/∂V)_T.

9.3.5 Internal energy of multi-component systems

In addition to including the entropy S and volume V terms in the internal energy, a system is often described also in terms of the number of particles or chemical species it contains:

U = U(S, V, N_1, ..., N_n),

where the N_j are the molar amounts of constituents of type j in the system. The internal energy is an extensive function of the extensive variables S, V, and the amounts N_j, so the internal energy may be written as a linearly homogeneous function of first degree:

U(αS, αV, αN_1, αN_2, ...) = α U(S, V, N_1, N_2, ...),

where α is a factor describing the growth of the system. The differential internal energy may be written as

dU = (∂U/∂S) dS + (∂U/∂V) dV + Σ_i (∂U/∂N_i) dN_i = T dS − p dV + Σ_i μ_i dN_i,

which shows (or defines) temperature T to be the partial derivative of U with respect to entropy S, and pressure p to be the negative of the similar derivative with respect to volume V,

T = ∂U/∂S,
p = −∂U/∂V,

and where the coefficients μ_i are the chemical potentials for the components of type i in the system. The chemical potentials are defined as the partial derivatives of the energy with respect to the variations in composition:

μ_i = (∂U/∂N_i)_{S, V, N_{j≠i}}.

As conjugate variables to the composition {N_j}, the chemical potentials are intensive properties, intrinsically characteristic of the qualitative nature of the system, and not proportional to its extent. Because of the extensive nature of U and its independent variables, using Euler's homogeneous function theorem, the differential dU may be integrated and yields an expression for the internal energy:

U = T S − p V + Σ_i μ_i N_i.

The sum over the composition of the system is the Gibbs free energy:

G = Σ_i μ_i N_i,

which arises from changing the composition of the system at constant temperature and pressure. For a single component system, the chemical potential equals the Gibbs energy per amount of substance, i.e. particles or moles according to the original definition of the unit for {N_j}.

9.3.6 Internal energy in an elastic medium

For an elastic medium the mechanical energy term of the internal energy must be replaced by the more general expression involving the stress σ_ij and strain ε_ij. The infinitesimal statement is:

dU = T dS + V σ_ij dε_ij,

where Einstein notation has been used for the tensors, in which there is a summation over all repeated indices in the product term. The Euler theorem yields for the internal energy:[13]

U = T S + (1/2) σ_ij ε_ij.

For a linearly elastic material, the stress is related to the strain by

σ_ij = C_ijkl ε_kl,

where the C_ijkl are the components of the fourth-rank elastic constant tensor of the medium.

9.3.7 History

James Joule studied the relationship between heat, work, and temperature. He observed that if he did mechanical work on a fluid, such as water, by agitating the fluid, its temperature increased. He proposed that the mechanical work he was doing on the system was converted to thermal energy. Specifically, he found that 4185.5 joules of energy were needed to raise the temperature of a kilogram of water by one degree Celsius.

9.3.8 Notes

[1] In this article we choose the sign convention of the mechanical work as typically defined in chemistry, which is different from the convention used in physics. In chemistry, work performed by the system against the environment, e.g., a system expansion, is negative, while in physics this is taken to be positive.

9.3.9 See also

Calorimetry
Enthalpy
Exergy
Thermodynamic equations
Thermodynamic potentials

9.3.10 References

[1] Crawford, F. H. (1963), pp. 106–107.
[2] Haase, R. (1971), pp. 24–28.
[3] Born, M. (1949), Appendix 8, pp. 146–149.
[4] Tschoegl, N.W. (2000), p. 17.
[5] Callen, H.B. (1960/1985), Chapter 5.
[6] Münster, A. (1970), p. 6.
[7] Münster, A. (1970), Chapter 3.
[8] Bailyn, M. (1994), pp. 206–209.
[9] I. Klotz, R. Rosenberg, Chemical Thermodynamics - Basic Concepts and Methods, 7th ed., Wiley (2008), p. 39.
[10] Thermal energy – Hyperphysics.
[11] van Gool, W.; Bruggink, J.J.C. (Eds) (1985). Energy and time in the economic and physical sciences. North-Holland. pp. 41–56. ISBN 0444877487.
[12] Grubbström, Robert W. (2007). "An Attempt to Introduce Dynamics Into Generalised Exergy Considerations". Applied Energy 84: 701–718. doi:10.1016/j.apenergy.2007.01.003.
[13] Landau & Lifshitz 1986.

Bibliography of cited references

Adkins, C.J. (1968/1975). Equilibrium Thermodynamics, second edition, McGraw-Hill, London, ISBN 0-07-084057-1.
Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York, ISBN 0-88318-797-3.
Born, M. (1949). Natural Philosophy of Cause and Chance, Oxford University Press, London.
Callen, H.B. (1960/1985). Thermodynamics and an Introduction to Thermostatistics, (first edition 1960), second edition 1985, John Wiley & Sons, New York, ISBN 0-471-86256-8.
Crawford, F. H. (1963). Heat, Thermodynamics, and Statistical Physics, Rupert Hart-Davis, London, Harcourt, Brace & World, Inc.
Haase, R. (1971). "Survey of Fundamental Laws", chapter 1 of Thermodynamics, pages 1–97 of volume 1, ed. W. Jost, of Physical Chemistry. An Advanced Treatise, ed. H. Eyring, D. Henderson, W. Jost, Academic Press, New York, lcn 73-117081.
Münster, A. (1970). Classical Thermodynamics, translated by E.S. Halberstadt, Wiley-Interscience, London, ISBN 0-471-62430-6.
Tschoegl, N.W. (2000). Fundamentals of Equilibrium and Steady-State Thermodynamics, Elsevier, Amsterdam, ISBN 0-444-50426-5.

9.3.11 Bibliography

Alberty, R. A. (2001). "Use of Legendre transforms in chemical thermodynamics" (PDF). Pure Appl. Chem. 73 (8): 1349–1380. doi:10.1351/pac200173081349.
Lewis, Gilbert Newton; Randall, Merle; revised by Pitzer, Kenneth S. & Brewer, Leo (1961). Thermodynamics (2nd ed.). New York, NY, USA: McGraw-Hill Book Co. ISBN 0-07-113809-9.
Landau, L. D.; Lifshitz, E. M. (1986). Theory of Elasticity (Course of Theoretical Physics Volume 7). (Translated from Russian by J.B. Sykes and W.H. Reid) (Third ed.). Boston, MA: Butterworth Heinemann. ISBN 0-7506-2633-X.

Chapter 10

Chapter 10. Equations


10.1 Ideal gas law

The ideal gas law is the equation of state of a hypothetical ideal gas. It is a good approximation to the behavior of many gases under many conditions, although it has several limitations. It was first stated by Émile Clapeyron in 1834 as a combination of Boyle's law, Charles's law and Avogadro's law.[1] The ideal gas law is often written as:

P V = n R T

where:

P is the pressure of the gas
V is the volume of the gas
n is the amount of substance of gas (in moles)
R is the ideal, or universal, gas constant, equal to the product of the Boltzmann constant and the Avogadro constant.
T is the temperature of the gas

It can also be derived microscopically from kinetic theory, as was achieved (apparently independently) by August Krönig in 1856[2] and Rudolf Clausius in 1857.[3]

Isotherms of an ideal gas. The curved lines represent the relationship between pressure (on the vertical, y-axis) and volume (on the horizontal, x-axis) for an ideal gas at different temperatures: lines which are farther away from the origin (that is, lines that are nearer to the top right-hand corner of the diagram) represent higher temperatures.

10.1.1 Equation

The state of an amount of gas is determined by its pressure, volume, and temperature. The modern form of the equation relates these simply in two main forms. The temperature used in the equation of state is an absolute temperature: in the SI system of units, the kelvin.[4]

Common form

The most frequently introduced form is

P V = n R T

where:

P is the pressure of the gas
V is the volume of the gas
n is the amount of substance of gas (also known as number of moles)
R is the ideal, or universal, gas constant, equal to the product of the Boltzmann constant and the Avogadro constant.
T is the temperature of the gas

In SI units, P is measured in pascals, V is measured in cubic metres, n is measured in moles, and T in kelvins (the Kelvin scale is a shifted Celsius scale, where 0.00 kelvin = −273.15 degrees Celsius, the lowest possible temperature). R has the value 8.314 J·K⁻¹·mol⁻¹, or 0.08206 L·atm·mol⁻¹·K⁻¹ (≈2 cal·K⁻¹·mol⁻¹) if using pressure in standard atmospheres (atm) instead of pascals and volume in litres instead of cubic metres.
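As a quick numerical illustration of PV = nRT in SI units (the room volume and the conditions below are made-up values, and the function name is my own):

    R = 8.314  # J/(mol*K)

    def moles(P, V, T):
        """n = PV/(RT) with P in Pa, V in m^3, T in K."""
        return P * V / (R * T)

    # Roughly how much air is in a 30 m^3 room at 101325 Pa and 298 K?
    print(f"{moles(101325, 30.0, 298):.0f} mol")  # about 1227 mol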

Molar form

How much gas is present could be specified by giving the mass instead of the chemical amount of gas. Therefore, an alternative form of the ideal gas law may be useful. The chemical amount (n) (in moles) is equal to the mass (m) (in grams) divided by the molar mass (M) (in grams per mole):

n = m / M

By replacing n with m/M and subsequently introducing the density ρ = m/V, we get:

P V = (m/M) R T

P = ρ (R/M) T

Defining the specific gas constant R_specific as the ratio R/M,

P = ρ R_specific T

This form of the ideal gas law is very useful because it links pressure, density, and temperature in a unique formula independent of the quantity of the considered gas. Alternatively, the law may be written in terms of the specific volume v, the reciprocal of density, as

P v = R_specific T

It is common, especially in engineering applications, to represent the specific gas constant by the symbol R. In such cases, the universal gas constant is usually given a different symbol such as R̄ to distinguish it. In any case, the context and/or units of the gas constant should make it clear as to whether the universal or specific gas constant is being referred to.[5]
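A short sketch of the specific-gas-constant form P = ρ R_specific T. The molar mass of dry air used here is an assumed round value, not something given in the text, and the function name is my own:

    R = 8.314          # J/(mol*K)
    M_air = 0.02897    # kg/mol, approximate molar mass of dry air (assumed)
    R_specific = R / M_air  # about 287 J/(kg*K)

    def density(P, T):
        """rho = P / (R_specific * T), with P in Pa and T in K."""
        return P / (R_specific * T)

    print(f"{density(101325, 288.15):.3f} kg/m^3")  # about 1.225 kg/m^3 at sea level, 15 C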

Statistical mechanics

In statistical mechanics the following molecular equation is derived from first principles:

P V = N k_B T

where P is the absolute pressure of the gas measured in pascals; N is the number of molecules in the given volume V. The number density is given by the ratio N/V; k_B is the Boltzmann constant relating temperature and energy; and T is the absolute temperature in kelvins.

The number density contrasts to the other formulation, which uses n, the number of moles, and V, the volume. This relation implies that R = N_A k_B, where N_A is Avogadro's constant, and the consistency of this result with experiment is a good check on the principles of statistical mechanics.

From this we can notice that for an average particle mass of μ times the atomic mass constant m_u (i.e., the mass is μ u),

N = m / (μ m_u),

and since ρ = m/V, we find that the ideal gas law can be rewritten as

P = (1/V) (m/(μ m_u)) k_B T = (k_B/(μ m_u)) ρ T.

In SI units, P is measured in pascals, V in cubic metres, μ is a dimensionless number, and T in kelvins; k_B has the value 1.38×10⁻²³ J·K⁻¹ in SI units.
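The molecular form PV = N k_B T gives the number density directly. A small check at standard conditions (the conditions chosen are illustrative, and the function name is my own):

    k_B = 1.380649e-23  # J/K

    def number_density(P, T):
        """N/V = P / (k_B T), molecules per cubic metre."""
        return P / (k_B * T)

    n_over_V = number_density(101325, 273.15)
    print(f"{n_over_V:.3e} m^-3")           # about 2.687e25 per cubic metre
    print(f"{n_over_V * 1e-6:.3e} cm^-3")   # about 2.7e19 per cubic centimetre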
10.1.2 Applications to thermodynamic processes

The table below essentially simplifies the ideal gas equation for particular processes, thus making this equation easier to solve using numerical methods.

A thermodynamic process is defined as a system that moves from state 1 to state 2, where the state number is denoted by subscript. As shown in the first column of the table, basic thermodynamic processes are defined such that one of the gas properties (P, V, T, or S) is constant throughout the process.

For a given thermodynamic process, in order to specify the extent of a particular process, one of the property ratios (which are listed under the column labeled "known ratio") must be specified (either directly or indirectly). Also, the property for which the ratio is known must be distinct from the property held constant in the previous column (otherwise the ratio would be unity, and not enough information would be available to simplify the gas law equation).

In the final three columns, the properties (P, V, or T) at state 2 can be calculated from the properties at state 1 using the equations listed.

a. In an isentropic process, system entropy (S) is constant. Under these conditions, P1 V1^γ = P2 V2^γ, where γ is defined as the heat capacity ratio, which is constant for a calorically perfect gas. The value used for γ is typically 1.4 for diatomic gases like nitrogen (N2) and oxygen (O2), (and air, which is 99% diatomic). Also γ is typically 1.6 for monatomic gases like the noble gases helium (He) and argon (Ar). In internal combustion engines γ varies between 1.35 and 1.15, depending on constitution gases and temperature.
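A sketch of the isentropic relation P1 V1^γ = P2 V2^γ noted in point a. The 10:1 compression ratio and γ = 1.4 for air are assumed illustrative values, and the function name is my own:

    def adiabatic_final_pressure(P1, V1, V2, gamma=1.4):
        """Isentropic ideal-gas process: P1 * V1**gamma = P2 * V2**gamma."""
        return P1 * (V1 / V2) ** gamma

    P1 = 100e3  # Pa
    P2 = adiabatic_final_pressure(P1, V1=1.0, V2=0.1, gamma=1.4)  # 10:1 compression of air
    print(f"P2 is roughly {P2/1e3:.0f} kPa")        # about 2512 kPa
    # The temperature ratio then follows from PV = nRT:
    print(f"T2/T1 is roughly {(P2*0.1)/(P1*1.0):.2f}")  # about 2.51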

10.1.3 Deviations from ideal behavior of real gases

The equation of state given here applies only to an ideal gas, or as an approximation to a real gas that behaves sufficiently like an ideal gas. There are in fact many different forms of the equation of state. Since the ideal gas law neglects both molecular size and intermolecular attractions, it is most accurate for monatomic gases at high temperatures and low pressures. The neglect of molecular size becomes less important for lower densities, i.e. for larger volumes at lower pressures, because the average distance between adjacent molecules becomes much larger than the molecular size. The relative importance of intermolecular attractions diminishes with increasing thermal kinetic energy, i.e., with increasing temperatures. More detailed equations of state, such as the van der Waals equation, account for deviations from ideality caused by molecular size and intermolecular forces.

A residual property is defined as the difference between a real gas property and an ideal gas property, both considered at the same pressure, temperature, and composition.

10.1.4 Derivations

Empirical

The ideal gas law can be derived from combining two empirical gas laws: the combined gas law and Avogadro's law. The combined gas law states that

P V / T = C,

where C is a constant which is directly proportional to the amount of gas, n (Avogadro's law). The proportionality factor is the universal gas constant, R, i.e. C = nR.

Hence the ideal gas law

P V = n R T.

Theoretical

Kinetic theory

Main article: Kinetic theory of gases

The ideal gas law can also be derived from first principles using the kinetic theory of gases, in which several simplifying assumptions are made, chief among which are that the molecules, or atoms, of the gas are point masses, possessing mass but no significant volume, and undergo only elastic collisions with each other and the sides of the container in which both linear momentum and kinetic energy are conserved.

Statistical mechanics

Main article: Statistical mechanics

Let q = (q_x, q_y, q_z) and p = (p_x, p_y, p_z) denote the position vector and momentum vector of a particle of an ideal gas, respectively. Let F denote the net force on that particle. Then the time-averaged potential energy of the particle is:

⟨q · F⟩ = ⟨q_x dp_x/dt⟩ + ⟨q_y dp_y/dt⟩ + ⟨q_z dp_z/dt⟩
        = −⟨q_x ∂H/∂q_x⟩ − ⟨q_y ∂H/∂q_y⟩ − ⟨q_z ∂H/∂q_z⟩ = −3 k_B T,

where the first equality is Newton's second law, and the second line uses Hamilton's equations and the equipartition theorem. Summing over a system of N particles yields

3 N k_B T = −⟨ Σ_{k=1}^{N} q_k · F_k ⟩.

By Newton's third law and the ideal gas assumption, the net force of the system is the force applied by the walls of the container, and this force is given by the pressure P of the gas. Hence

−⟨ Σ_{k=1}^{N} q_k · F_k ⟩ = P ∮_surface q · dS,

where dS is the infinitesimal area element along the walls of the container. Since the divergence of the position vector q is

∇ · q = ∂q_x/∂q_x + ∂q_y/∂q_y + ∂q_z/∂q_z = 3,

the divergence theorem implies that

P ∮_surface q · dS = P ∫_volume (∇ · q) dV = 3 P V,

where dV is an infinitesimal volume within the container and V is the total volume of the container.

Putting these equalities together yields

3 N k_B T = −⟨ Σ_{k=1}^{N} q_k · F_k ⟩ = 3 P V,

which immediately implies the ideal gas law for N particles:

P V = N k_B T = n R T,

where n = N/N_A is the number of moles of gas and R = N_A k_B is the gas constant.

10.1.5 See also

Van der Waals equation
Boltzmann constant
Configuration integral
Dynamic pressure
Internal energy

10.1.6 References

[1] Clapeyron, E. (1834). "Mémoire sur la puissance motrice de la chaleur". Journal de l'École Polytechnique (in French) XIV: 153–90. Facsimile at the Bibliothèque nationale de France (pp. 153–90).
[2] Krönig, A. (1856). "Grundzüge einer Theorie der Gase". Annalen der Physik und Chemie (in German) 99 (10): 315–22. Bibcode:1856AnP...175..315K. doi:10.1002/andp.18561751008. Facsimile at the Bibliothèque nationale de France (pp. 315–22).
[3] Clausius, R. (1857). "Ueber die Art der Bewegung, welche wir Wärme nennen". Annalen der Physik und Chemie (in German) 176 (3): 353–79. Bibcode:1857AnP...176..353C. doi:10.1002/andp.18571760302. Facsimile at the Bibliothèque nationale de France (pp. 353–79).
[4] "Equation of State".
[5] Moran and Shapiro, Fundamentals of Engineering Thermodynamics, Wiley, 4th Ed, 2000.

10.1.7 Further reading

Davis and Masten, Principles of Environmental Engineering and Science, McGraw-Hill Companies, Inc., New York (2002), ISBN 0-07-235053-9.
Website giving credit to Benoît Paul Émile Clapeyron (1799–1864) in 1834.

10.1.8 External links

Configuration integral (statistical mechanics), where an alternative statistical mechanics derivation of the ideal-gas law, using the relationship between the Helmholtz free energy and the partition function, but without using the equipartition theorem, is provided. Vu-Quoc, L., Configuration integral (statistical mechanics), 2008. This wiki site is down; see this article in the web archive on 2012 April 28.
Online Ideal Gas law Calculator

Chapter 11

Chapter 11. Fundamentals


11.1 Fundamental thermodynamic relation

In thermodynamics, the fundamental thermodynamic relation is generally expressed as an infinitesimal change in internal energy in terms of infinitesimal changes in entropy and volume for a closed system in thermal equilibrium in the following way:

dU = T dS − P dV

Here, U is internal energy, T is absolute temperature, S is entropy, P is pressure, and V is volume. This relation applies to a reversible change, or to a change in a closed system of uniform temperature and pressure at constant composition.[1]

This is only one expression of the fundamental thermodynamic relation. It may be expressed in other ways, using different variables (e.g. using thermodynamic potentials). For example, the fundamental relation may be expressed in terms of the enthalpy as

dH = T dS + V dP,

in terms of the Helmholtz free energy (F) as

dF = −S dT − P dV,

and in terms of the Gibbs free energy (G) as

dG = −S dT + V dP.

11.1.1 Derivation from the first and second laws of thermodynamics

The first law of thermodynamics states that:

dU = δQ − δW,

where δQ and δW are infinitesimal amounts of heat supplied to the system by its surroundings and work done by the system on its surroundings, respectively.

According to the second law of thermodynamics we have for a reversible process:

dS = δQ / T.

Hence:

δQ = T dS.

By substituting this into the first law, we have:

dU = T dS − δW.

Letting δW be reversible pressure-volume work done by the system on its surroundings,

δW = P dV,

we have:

dU = T dS − P dV.

This equation has been derived in the case of reversible changes. However, since U, S, and V are thermodynamic state functions, the above relation holds also for non-reversible changes in a system of uniform pressure and temperature at constant composition.[1]
If the composition, i.e. the amounts n_i of the chemical components, in a system of uniform temperature and pressure can also change, e.g. due to a chemical reaction, the fundamental thermodynamic relation generalizes to:

dU = T dS − P dV + Σ_i μ_i dn_i

The μ_j are the chemical potentials corresponding to particles of type j. The last term must be zero for a reversible process.

If the system has more external parameters than just the volume that can change, the fundamental thermodynamic relation generalizes to

dU = T dS − Σ_j X_j dx_j + Σ_i μ_i dn_i.

Here the X_j are the generalized forces corresponding to the external parameters x_j.

11.1.2 Derivation from statistical mechanical principles

The above derivation uses the first and second laws of thermodynamics. The first law of thermodynamics is essentially a definition of heat, i.e. heat is the change in the internal energy of a system that is not caused by a change of the external parameters of the system.

However, the second law of thermodynamics is not a defining relation for the entropy. The fundamental definition of entropy of an isolated system containing an amount of energy E is:

S = k log[Ω(E)],

where Ω(E) is the number of quantum states in a small interval between E and E + δE. Here δE is a macroscopically small energy interval that is kept fixed. Strictly speaking this means that the entropy depends on the choice of δE. However, in the thermodynamic limit (i.e. in the limit of infinitely large system size), the specific entropy (entropy per unit volume or per unit mass) does not depend on δE. The entropy is thus a measure of the uncertainty about exactly which quantum state the system is in, given that we know its energy to be in some interval of size δE.

Deriving the fundamental thermodynamic relation from first principles thus amounts to proving that the above definition of entropy implies that for reversible processes we have:

dS = δQ / T.

The fundamental assumption of statistical mechanics is that all the Ω(E) states are equally likely. This allows us to extract all the thermodynamical quantities of interest. The temperature is defined as:

β ≡ 1/(kT) ≡ d log[Ω(E)] / dE.

This definition can be derived from the microcanonical ensemble, which is a system of a constant number of particles, a constant volume and that does not exchange energy with its environment. Suppose that the system has some external parameter, x, that can be changed. In general, the energy eigenstates of the system will depend on x. According to the adiabatic theorem of quantum mechanics, in the limit of an infinitely slow change of the system's Hamiltonian, the system will stay in the same energy eigenstate and thus change its energy according to the change in energy of the energy eigenstate it is in.

The generalized force, X, corresponding to the external parameter x is defined such that X dx is the work performed by the system if x is increased by an amount dx. E.g., if x is the volume, then X is the pressure. The generalized force for a system known to be in energy eigenstate E_r is given by:

X = −dE_r / dx.

Since the system can be in any energy eigenstate within an interval of δE, we define the generalized force for the system as the expectation value of the above expression:

X = −⟨ dE_r / dx ⟩.

To evaluate the average, we partition the Ω(E) energy eigenstates by counting how many of them have a value for dE_r/dx within a range between Y and Y + δY. Calling this number Ω_Y(E), we have:

Ω(E) = Σ_Y Ω_Y(E).

The average defining the generalized force can now be written:

X = −(1/Ω(E)) Σ_Y Y Ω_Y(E).

We can relate this to the derivative of the entropy with respect to x at constant energy E as follows. Suppose we change x to x + dx. Then Ω(E) will change because the energy eigenstates depend on x, causing energy eigenstates to move into or out of the range between E and E + δE. Let's focus again on the energy eigenstates for which dE_r/dx lies within the range between Y and Y + δY. Since these energy eigenstates increase in energy by Y dx, all such energy eigenstates that are in the interval ranging from E − Y dx to E move from below E to above E. There are

N_Y(E) = (Ω_Y(E)/δE) Y dx

such energy eigenstates. If Y dx ≤ δE, all these energy eigenstates will move into the range between E and E + δE and contribute to an increase in Ω. The number of energy eigenstates that move from below E + δE to above E + δE is, of course, given by N_Y(E + δE). The difference

N_Y(E) − N_Y(E + δE)

is thus the net contribution to the increase in Ω. Note that if Y dx is larger than δE there will be energy eigenstates that move from below E to above E + δE. They are counted in both N_Y(E) and N_Y(E + δE), therefore the above expression is also valid in that case.

Expressing the above expression as a derivative with respect to E and summing over Y yields the expression:

(∂Ω/∂x)_E = −Σ_Y Y (∂Ω_Y/∂E) = (∂(ΩX)/∂E)_x.

The logarithmic derivative of Ω with respect to x is thus given by:

(∂ log(Ω)/∂x)_E = β X + (∂X/∂E)_x.

The first term is intensive, i.e. it does not scale with system size. In contrast, the last term scales as the inverse system size and thus vanishes in the thermodynamic limit. We have thus found that:

(∂S/∂x)_E = X / T.

Combining this with

(∂S/∂E)_x = 1 / T

gives:

dS = (∂S/∂E)_x dE + (∂S/∂x)_E dx = dE/T + (X/T) dx,

which we can write as:

dE = T dS − X dx.

11.1.3 References

[1] Schmidt-Rohr, K. (2014). "Expansion Work without the External Pressure, and Thermodynamics in Terms of Quasistatic Irreversible Processes". J. Chem. Educ. 91: 402–409. http://dx.doi.org/10.1021/ed3008704

11.1.4 External links

The Fundamental Thermodynamic Relation
Thermodynamics and Heat Transfer

11.2 Heat engine

See also: Thermodynamic cycle
T

In thermodynamics, a heat engine is a system that converts heat or thermal energyand chemical energy
to mechanical energy, which can then be used to do
mechanical work.[1][2] It does this by bringing a working
substance from a higher state temperature to a lower state
temperature. A heat source generates thermal energy that
brings the working substance to the high temperature state.
The working substance generates work in the "working
body" of the engine while transferring heat to the colder
"sink" until it reaches a low temperature state. During this
process some of the thermal energy is converted into work
by exploiting the properties of the working substance. The
working substance can be any system with a non-zero heat
capacity, but it usually is a gas or liquid. During this process, a lot of heat is lost to the surroundings, i.e. it cannot
be used.
In general an engine converts energy to mechanical work. Heat engines distinguish themselves from other types of engines by the fact that their efficiency is fundamentally limited by Carnot's theorem.[3] Although this efficiency limitation can be a drawback, an advantage of heat engines is that most forms of energy can be easily converted to heat by processes like exothermic reactions (such as combustion), absorption of light or energetic particles, friction, dissipation and resistance. Since the heat source that supplies thermal energy to the engine can thus be powered by virtually any kind of energy, heat engines are very versatile and have a wide range of applicability.

Heat engines are often confused with the cycles they attempt to implement. Typically, the term "engine" is used for a physical device and "cycle" for the model.

11.2.1 Overview

Figure 1: Heat engine diagram

In thermodynamics, heat engines are often modeled using a standard engineering model such as the Otto cycle. The theoretical model can be refined and augmented with actual data from an operating engine, using tools such as an indicator diagram. Since very few actual implementations of heat engines exactly match their underlying thermodynamic cycles, one could say that a thermodynamic cycle is an ideal case of a mechanical engine. In any case, fully understanding an engine and its efficiency requires gaining a good understanding of the (possibly simplified or idealized) theoretical model, the practical nuances of an actual mechanical engine, and the discrepancies between the two.

In general terms, the larger the difference in temperature between the hot source and the cold sink, the larger is the potential thermal efficiency of the cycle. On Earth, the cold side of any heat engine is limited to being close to the ambient temperature of the environment, or not much lower than 300 kelvin, so most efforts to improve the thermodynamic efficiencies of various heat engines focus on increasing the temperature of the source, within material limits. The maximum theoretical efficiency of a heat engine (which no engine ever attains) is equal to the temperature difference between the hot and cold ends divided by the temperature at the hot end, all expressed in absolute temperature or kelvins.

The efficiency of various heat engines proposed or used today has a large range:

3 percent[4] (97 percent waste heat using low quality heat) for the OTEC ocean power proposal.
25 percent for most automotive gasoline engines.[5]
49 percent for a supercritical coal-fired power station such as the Avedøre Power Station, and many others.
60 percent for a steam-cooled combined cycle gas turbine.[6]

All these processes gain their efficiency (or lack thereof) from the temperature drop across them. Significant energy may be used for auxiliary equipment, such as pumps, which effectively reduces efficiency.

Power

Heat engines can be characterized by their specific power, which is typically given in kilowatts per litre of engine displacement (in the U.S. also horsepower per cubic inch). The result offers an approximation of the peak power output of an engine. This is not to be confused with fuel efficiency, since high efficiency often requires a lean fuel-air ratio, and thus lower power density. A modern high-performance car engine makes in excess of 75 kW/l (1.65 hp/in³).

11.2.2 Everyday examples

Examples of everyday heat engines include the steam engine (for example, most of the world's power plants use steam turbines, a modern form of steam engine) and the internal combustion engine: the gasoline (petrol) engine or the diesel engine in an automobile or truck. A common toy that is also a heat engine is a drinking bird. The Stirling engine is also a heat engine. All of these familiar heat engines are powered by the expansion of heated gases. The general surroundings are the heat sink, which provides relatively cool gases that, when heated, expand rapidly to drive the mechanical motion of the engine.

11.2.3 Examples of heat engines

It is important to note that although some cycles have a typical combustion location (internal or external), they often
can be implemented with the other. For example, John Ericsson developed an external heated engine running on a
cycle very much like the earlier Diesel cycle. In addition,
externally heated engines can often be implemented in open
or closed cycles.
Earth's heat engine

Earth's atmosphere and hydrosphere (Earth's heat engine) are coupled processes that constantly even out solar heating imbalances through evaporation of surface water, convection, rainfall, winds, and ocean circulation, when distributing heat around the globe.[7]


The Hadley system provides an example of a heat engine. The Hadley circulation is identified with rising of warm and moist air in the equatorial region with descent of colder air in the subtropics corresponding to a thermally driven direct circulation, with consequent net production of kinetic energy.[8]

Phase-change cycles

In these cycles and engines, the working fluids are gases and liquids. The engine converts the working fluid from a gas to a liquid, from liquid to gas, or both, generating work from the fluid expansion or compression.

Rankine cycle (classical steam engine)
Regenerative cycle (steam engine more efficient than Rankine cycle)
Organic Rankine cycle (coolant changing phase in temperature ranges of ice and hot liquid water)
Vapor to liquid cycle (Drinking bird, Injector, Minto wheel)
Liquid to solid cycle (Frost heaving; water changing from ice to liquid and back again can lift rock up to 60 cm.)
Solid to gas cycle (Dry ice cannon; dry ice sublimes to gas.)

Gas-only cycles

In these cycles and engines the working fluid is always a gas (i.e., there is no phase change):

Carnot cycle (Carnot heat engine)
Ericsson cycle (Caloric Ship John Ericsson)
Stirling cycle (Stirling engine, thermoacoustic devices)
Internal combustion engine (ICE):
Otto cycle (e.g. Gasoline/Petrol engine)
Diesel cycle (e.g. Diesel engine)
Atkinson cycle (Atkinson engine)
Brayton cycle or Joule cycle, originally Ericsson cycle (gas turbine)
Lenoir cycle (e.g., pulse jet engine)
Miller cycle (Miller engine)

Liquid-only cycle

In these cycles and engines the working fluid is always a liquid:

Stirling cycle (Malone engine)
Heat Regenerative Cyclone[9]

Electron cycles

Johnson thermoelectric energy converter
Thermoelectric (Peltier-Seebeck effect)
Thermogalvanic cell
Thermionic emission
Thermotunnel cooling

Magnetic cycles

Thermo-magnetic motor (Tesla)

Cycles used for refrigeration

Main article: Refrigeration

A domestic refrigerator is an example of a heat pump: a heat engine in reverse. Work is used to create a heat differential. Many cycles can run in reverse to move heat from the cold side to the hot side, making the cold side cooler and the hot side hotter. Internal combustion engine versions of these cycles are, by their nature, not reversible.

Refrigeration cycles include:

Vapor-compression refrigeration
Stirling cryocoolers
Gas-absorption refrigerator
Air cycle machine
Vuilleumier refrigeration
Magnetic refrigeration

Evaporative heat engines

The Barton evaporation engine is a heat engine based on a cycle producing power and cooled moist air from the evaporation of water into hot dry air.

Mesoscopic heat engines

Mesoscopic heat engines are nanoscale devices that may serve the goal of processing heat fluxes and perform useful work at small scales. Potential applications include e.g. electric cooling devices. In such mesoscopic heat engines, work per cycle of operation fluctuates due to thermal noise. There is an exact equality that relates the average of exponents of work performed by any heat engine and the heat transfer from the hotter heat bath.[10] This relation transforms Carnot's inequality into an exact equality.

11.2.4 Efficiency

The efficiency of a heat engine relates how much useful work is output for a given amount of heat energy input.

From the laws of thermodynamics, after a completed cycle:

W = Q_c − (−Q_h)

where

W = −∮ P dV is the work extracted from the engine. (It is negative since work is done by the engine.)
Q_h = T_h ΔS_h is the heat energy taken from the high temperature system. (It is negative since heat is extracted from the source, hence (−Q_h) is positive.)
Q_c = T_c ΔS_c is the heat energy delivered to the cold temperature system. (It is positive since heat is added to the sink.)

In other words, a heat engine absorbs heat energy from the high temperature heat source, converting part of it to useful work and delivering the rest to the cold temperature heat sink.

In general, the efficiency of a given heat transfer process (whether it be a refrigerator, a heat pump or an engine) is defined informally by the ratio of "what you get out" to "what you put in". In the case of an engine, one desires to extract work and puts in a heat transfer:

η = W/Q_h = (Q_h − Q_c)/Q_h = 1 − Q_c/Q_h

The theoretical maximum efficiency of any heat engine depends only on the temperatures it operates between. This efficiency is usually derived using an ideal imaginary heat engine such as the Carnot heat engine, although other engines using different cycles can also attain maximum efficiency. Mathematically, this is because in reversible processes, the change in entropy of the cold reservoir is the negative of that of the hot reservoir (i.e., ΔS_c = −ΔS_h), keeping the overall change of entropy zero. Thus:

η_max = 1 − (T_c ΔS_c)/(T_h ΔS_h) = 1 − T_c/T_h

where T_h is the absolute temperature of the hot source and T_c that of the cold sink, usually measured in kelvins. Note that dS_c is positive while dS_h is negative; in any reversible work-extracting process, entropy is overall not increased, but rather is moved from a hot (high-entropy) system to a cold (low-entropy) one, decreasing the entropy of the heat source and increasing that of the heat sink.

The reasoning behind this being the maximal efficiency goes as follows. It is first assumed that if a more efficient heat engine than a Carnot engine is possible, then it could be driven in reverse as a heat pump. Mathematical analysis can be used to show that this assumed combination would result in a net decrease in entropy. Since, by the second law of thermodynamics, this is statistically improbable to the point of exclusion, the Carnot efficiency is a theoretical upper bound on the reliable efficiency of any process.

Empirically, no heat engine has ever been shown to run at a greater efficiency than a Carnot cycle heat engine.

Figure 2 and Figure 3 show variations on Carnot cycle efficiency. Figure 2 indicates how efficiency changes with an increase in the heat addition temperature for a constant compressor inlet temperature. Figure 3 indicates how the efficiency changes with an increase in the heat rejection temperature for a constant turbine inlet temperature.

Endoreversible heat engines

The main drawback of the Carnot efficiency as a criterion of heat engine performance is the fact that, by its nature, any maximally efficient Carnot cycle must operate at an infinitesimal temperature gradient. This is because any transfer of heat between two bodies at differing temperatures is irreversible, and therefore the Carnot efficiency expression only applies in the infinitesimal limit. The major problem with that is that the object of most heat engines is to output some sort of power, and infinitesimal power is usually not what is being sought.

A different measure of ideal heat engine efficiency is given by considerations of endoreversible thermodynamics, where the cycle is identical to the Carnot cycle except in that the two processes of heat transfer are not reversible (Callen 1985):

η = 1 − √(T_c/T_h)    (Note: Units K or °R)

This model does a better job of predicting how well real-world heat engines can do (Callen 1985, see also endoreversible thermodynamics). As shown, the endoreversible efficiency much more closely models the observed data.
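A small comparison of the two efficiency formulas above. The reservoir temperatures are illustrative values of the kind used in Callen's comparison for a coal-fired steam plant; they are assumptions, not figures taken from this text, and the function names are my own:

    def carnot_efficiency(Tc, Th):
        return 1.0 - Tc / Th

    def endoreversible_efficiency(Tc, Th):
        return 1.0 - (Tc / Th) ** 0.5

    Tc, Th = 298.0, 838.0   # K, assumed cold-sink and hot-source temperatures
    print(f"Carnot:         {carnot_efficiency(Tc, Th):.2f}")         # about 0.64
    print(f"Endoreversible: {endoreversible_efficiency(Tc, Th):.2f}") # about 0.40, much closer to typical observed values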

11.2.5 History

Main article: Timeline of heat engine technology
See also: History of the internal combustion engine and History of thermodynamics

Heat engines have been known since antiquity but were only made into useful devices at the time of the industrial revolution in the 18th century. They continue to be developed today.

11.2.6 Heat engine enhancements

Engineers have studied the various heat engine cycles extensively in an effort to improve the amount of usable work they could extract from a given power source. The Carnot cycle limit cannot be reached with any gas-based cycle, but engineers have worked out at least two ways to possibly go around that limit, and one way to get better efficiency without bending any rules.

1. Increase the temperature difference in the heat engine. The simplest way to do this is to increase the hot side temperature, which is the approach used in modern combined-cycle gas turbines. Unfortunately, physical limits (such as the melting point of the materials used to build the engine) and environmental concerns regarding NOx production restrict the maximum temperature on workable heat engines. Modern gas turbines run at temperatures as high as possible within the range of temperatures necessary to maintain acceptable NOx output. Another way of increasing efficiency is to lower the output temperature. One new method of doing so is to use mixed chemical working fluids, and then exploit the changing behavior of the mixtures. One of the most famous is the so-called Kalina cycle, which uses a 70/30 mix of ammonia and water as its working fluid. This mixture allows the cycle to generate useful power at considerably lower temperatures than most other processes.

2. Exploit the physical properties of the working fluid. The most common such exploitation is the use of water above the so-called critical point, or so-called supercritical steam. The behavior of fluids above their critical point changes radically, and with materials such as water and carbon dioxide it is possible to exploit those changes in behavior to extract greater thermodynamic efficiency from the heat engine, even if it is using a fairly conventional Brayton or Rankine cycle. A newer and very promising material for such applications is CO2. SO2 and xenon have also been considered for such applications, although SO2 is a little toxic for most.

3. Exploit the chemical properties of the working fluid. A fairly new and novel exploit is to use exotic working fluids with advantageous chemical properties. One such is nitrogen dioxide (NO2), a toxic component of smog, which has a natural dimer as di-nitrogen tetraoxide (N2O4). At low temperature, the N2O4 is compressed and then heated. The increasing temperature causes each N2O4 to break apart into two NO2 molecules. This lowers the molecular weight of the working fluid, which drastically increases the efficiency of the cycle. Once the NO2 has expanded through the turbine, it is cooled by the heat sink, which makes it recombine into N2O4. This is then fed back by the compressor for another cycle. Such species as aluminium bromide (Al2Br6), NOCl, and Ga2I6 have all been investigated for such uses. To date, their drawbacks have not warranted their use, despite the efficiency gains that can be realized.[12]

11.2.7 Heat engine processes

Each process is one of the following:



isothermal (at constant temperature, maintained with
heat added or removed from a heat source or sink)
isobaric (at constant pressure)
isometric/isochoric (at constant volume), also referred
to as iso-volumetric
adiabatic (no heat is added or removed from the system
during adiabatic process)
isentropic (reversible adiabatic process, no heat is
added or removed during isentropic process)

11.2.8

See also

Heat pump
Reciprocating engine for a general description of the
mechanics of piston engines
Thermosynthesis
Timeline of heat engine technology

11.2.9

References

[1] Fundamentals of Classical Thermodynamics, 3rd ed. p. 159,


(1985) by G. J. Van Wylen and R. E. Sonntag: A heat engine may be dened as a device that operates in a thermodynamic cycle and does a certain amount of net positive work
as a result of heat transfer from a high-temperature body
and to a low-temperature body. Often the term heat engine
is used in a broader sense to include all devices that produce work, either through heat transfer or combustion, even
though the device does not operate in a thermodynamic cycle. The internal-combustion engine and the gas turbine are
examples of such devices, and calling these heat engines is
an acceptable use of the term.
[2] Mechanical eciency of heat engines, p. 1 (2007) by James
R. Senf: Heat engines are made to provide mechanical energy from thermal energy.
[3] Thermal physics: entropy and free energies, by Joon Chang
Lee (2002), Appendix A, p. 183: A heat engine absorbs
energy from a heat source and then converts it into work for
us.... When the engine absorbs heat energy, the absorbed
heat energy comes with entropy. (heat energy Q = T S
), When the engine performs work, on the other hand, no
entropy leaves the engine. This is problematic. We would
like the engine to repeat the process again and again to provide us with a steady work source. ... to do so, the working
substance inside the engine must return to its initial thermodynamic condition after a cycle, which requires to remove
the remaining entropy. The engine can do this only in one
way. It must let part of the absorbed heat energy leave without converting it into work. Therefore the engine cannot
convert all of the input energy into work!"


[4] M. Emam, Experimental Investigations on a Standing-Wave


Thermoacoustic Engine, M.Sc. Thesis, Cairo University,
Egypt (2013).
[5] Where the Energy Goes: Gasoline Vehicles, US Dept of Energy
[6] Eciency by the Numbers by Lee S. Langston
[7] Lindsey, Rebecca (2009). Climate and Earths Energy
Budget. NASA Earth Observatory.
[8] Junling Huang and Michael B. McElroy (2014).
Contributions of the Hadley and Ferrel Circulations
to the Energetics of the Atmosphere over the Past
32 Years. Journal of Climate 27 (7): 26562666.
Bibcode:2014JCli...27.2656H.
doi:10.1175/jcli-d-1300538.1.
[9] Cyclone Power Technologies Website. Cyclonepower.com. Retrieved 2012-03-22.

[10] N. A. Sinitsyn (2011). Fluctuation Relation for Heat


Engines. J. Phys. A: Math. Theor. 44: 405001.
arXiv:1111.7014.
Bibcode:2011JPhA...44N5001S.
doi:10.1088/1751-8113/44/40/405001.
[11] F. L. Curzon, B. Ahlborn (1975). Eciency of a Carnot
Engine at Maximum Power Output. Am. J. Phys., Vol. 43,
pp. 24.
[12] Nuclear Reactors Concepts and Thermodynamic Cycles
(PDF). Retrieved 2012-03-22.

Kroemer, Herbert; Kittel, Charles (1980). Thermal


Physics (2nd ed.). W. H. Freeman Company. ISBN
0-7167-1088-9.
Callen, Herbert B. (1985). Thermodynamics and an
Introduction to Thermostatistics (2nd ed.). John Wiley
& Sons, Inc. ISBN 0-471-86256-8.

11.3

Thermodynamic cycle

A thermodynamic cycle consists of a linked sequence of


thermodynamic processes that involve transfer of heat and
work into and out of the system, while varying pressure,
temperature, and other state variables within the system,
and that eventually returns the system to its initial state.[1]
In the process of passing through a cycle, the working fluid (system) may convert heat from a warm source into useful work, and dispose of the remaining heat to a cold sink,
thereby acting as a heat engine. Conversely, the cycle may
be reversed and use work to move heat from a cold source
and transfer it to a warm sink thereby acting as a heat pump.
During a closed cycle, the system returns to its original
thermodynamic state of temperature and pressure. Process

238

CHAPTER 11. CHAPTER 11. FUNDAMENTALS

quantities (or path quantities), such as heat and work are


process dependent. For a cycle for which the system returns
to its initial state the rst law of thermodynamics applies:

E = Eout Ein = 0
The above states that there is no change of the energy of the
system over the cycle. E might be the work and heat input during the cycle and E would be the work and heat
output during the cycle. The rst law of thermodynamics also dictates that the net heat input is equal to the net
work output over a cycle (we account for heat, Q , as positive and Q as negative). The repeating nature of the process path allows for continuous operation, making the cycle
an important concept in thermodynamics. Thermodynamic
cycles are often represented mathematically as quasistatic
processes in the modeling of the workings of an actual device.

11.3.1

Heat and work

Two primary classes of thermodynamic cycles are power cycles and heat pump cycles. Power cycles are cycles which convert some heat input into a mechanical work output, while heat pump cycles transfer heat from low to high temperatures by using mechanical work as the input. Cycles composed entirely of quasistatic processes can operate as power or heat pump cycles by controlling the process direction. On a pressure-volume (PV) diagram or temperature-entropy diagram, the clockwise and counterclockwise directions indicate power and heat pump cycles, respectively.

The net work equals the area inside because it is (a) the Riemann
sum of work done on the substance due to expansion, minus (b) the
work done to re-compress.

Relationship to work

Because the net variation in state properties during a thermodynamic cycle is zero, it forms a closed loop on a PV diagram. A PV diagram's Y axis shows pressure (P) and X axis shows volume (V). The area enclosed by the loop is the work (W) done by the process:

(1)  W = ∮ P dV

This work is equal to the balance of heat (Q) transferred into the system:

(2)  W = Q = Q_in − Q_out

Equation (2) makes a cyclic process similar to an isothermal process: even though the internal energy changes during the course of the cyclic process, when the cyclic process finishes the system's energy is the same as the energy it had when the process began.

If the cyclic process moves clockwise around the loop, then W will be positive, and it represents a heat engine. If it moves counterclockwise, then W will be negative, and it represents a heat pump.

Each Point in the Cycle

Description of each point in the thermodynamic cycles.

Otto Cycle:

1-2: Isentropic Expansion: Constant entropy (s), Decrease in pressure (P), Increase in volume (v), Decrease in temperature (T)
2-3: Isochoric Cooling: Constant volume (v), Decrease in pressure (P), Decrease in entropy (S), Decrease in temperature (T)
3-4: Isentropic Compression: Constant entropy (s), Increase in pressure (P), Decrease in volume (v), Increase in temperature (T)
4-1: Isochoric Heating: Constant volume (v), Increase in pressure (P), Increase in entropy (S), Increase in temperature (T)

A List of Thermodynamic Processes:

Adiabatic: No energy transfer as heat (Q) during that part of the cycle, which would amount to δQ = 0. This does not exclude energy transfer as work.
Isothermal: The process is at a constant temperature during that part of the cycle (T = constant, δT = 0). This does not exclude energy transfer as heat or work.
Isobaric: Pressure in that part of the cycle will remain constant (P = constant, δP = 0). This does not exclude energy transfer as heat or work.
Isochoric: The process is constant volume (V = constant, δV = 0). This does not exclude energy transfer as heat or work.
Isentropic: The process is one of constant entropy (S = constant, δS = 0). This excludes the transfer of heat but not work.
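A sketch of equation (1), the net work as the cyclic integral ∮P dV, evaluated numerically for a simple rectangular cycle of two isobars and two isochores (all numbers below are made-up values, and the helper name is my own):

    import numpy as np

    # Rectangular clockwise cycle on the PV plane: expand at high pressure, compress at low pressure.
    Phi, Plo = 300e3, 100e3   # Pa
    V1, V2 = 0.001, 0.003     # m^3

    def leg(P_of_V, Va, Vb, n=1000):
        """Numerically integrate P dV along one leg of the cycle."""
        V = np.linspace(Va, Vb, n)
        return np.trapz(P_of_V(V), V)

    W_net = (leg(lambda V: np.full_like(V, Phi), V1, V2)    # expansion at high pressure (+)
             + leg(lambda V: np.full_like(V, Plo), V2, V1)) # compression at low pressure (-)
    # The two isochoric legs contribute no P dV work (dV = 0).
    print(W_net, (Phi - Plo) * (V2 - V1))   # both 400.0 J: the area enclosed by the loop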
Power cycles

Main article: Heat engine

The clockwise thermodynamic cycle indicated by the arrows shows that the cycle represents a heat engine. The cycle consists of four states (the points shown by crosses) and four thermodynamic processes (lines).

Heat engine diagram.

Thermodynamic power cycles are the basis for the operation of heat engines, which supply most of the world's electric power and run the vast majority of motor vehicles. Power cycles can be organized into two categories: real cycles and ideal cycles. Cycles encountered in real world devices (real cycles) are difficult to analyze because of the presence of complicating effects (friction), and the absence of sufficient time for the establishment of equilibrium conditions. For the purpose of analysis and design, idealized models (ideal cycles) are created; these ideal models allow engineers to study the effects of major parameters that dominate the cycle without having to spend significant time working out intricate details present in the real cycle model.

Power cycles can also be divided according to the type of heat engine they seek to model. The most common cycles used to model internal combustion engines are the Otto cycle, which models gasoline engines, and the Diesel cycle, which models diesel engines. Cycles that model external combustion engines include the Brayton cycle, which models gas turbines, the Rankine cycle, which models steam turbines, the Stirling cycle, which models hot air engines, and the Ericsson cycle, which also models hot air engines.

For example, the pressure-volume mechanical work output from the heat engine cycle (net work out), consisting of 4 thermodynamic processes, is:

(3)  W_net = W_1-2 + W_2-3 + W_3-4 + W_4-1

W_1-2 = ∫_{V1}^{V2} P dV    (work done on the system, negative)
W_2-3 = ∫_{V2}^{V3} P dV    (zero if V3 equals V2)
W_3-4 = ∫_{V3}^{V4} P dV    (work done by the system, positive)
W_4-1 = ∫_{V4}^{V1} P dV    (zero if V1 equals V4)

If no volume change happens in processes 4-1 and 2-3, equation (3) simplifies to:

(4)  W_net = W_1-2 + W_3-4

Heat pump cycles

Main article: Heat pump and refrigeration cycle

Thermodynamic heat pump cycles are the models for household heat pumps and refrigerators. There is no difference between the two except that the purpose of the refrigerator is to cool a very small space while the household heat pump is intended to warm a house. Both work by moving heat from a cold space to a warm space. The most common refrigeration cycle is the vapor compression cycle, which models systems using refrigerants that change phase. The absorption refrigeration cycle is an alternative that absorbs the refrigerant in a liquid solution rather than evaporating it. Gas refrigeration cycles include the reversed Brayton cycle and the Hampson-Linde cycle. Multiple compression and expansion cycles allow gas refrigeration systems to liquify gases.

11.3.3

Well-known thermodynamic cycles

In practice, simple idealized thermodynamic cycles are usually made out of four thermodynamic processes. Any thermodynamic processes may be used. However, when idealized cycles are modeled, often processes where one state
variable is kept constant are used, such as an isothermal
process (constant temperature), isobaric process (constant
pressure), isochoric process (constant volume), isentropic
process (constant entropy), or an isenthalpic process (constant enthalpy). Often adiabatic processes are also used,
where no heat is exchanged.

Thermodynamic heat pump cycles are the models for


household heat pumps and refrigerators. There is no difference between the two except the purpose of the refrigerator is to cool a very small space while the household heat
pump is intended to warm a house. Both work by moving
heat from a cold space to a warm space. The most common
refrigeration cycle is the vapor compression cycle, which Some example thermodynamic cycles and their constituent
models systems using refrigerants that change phase. The processes are as follows:
absorption refrigeration cycle is an alternative that absorbs
the refrigerant in a liquid solution rather than evaporating it.
Gas refrigeration cycles include the reversed Brayton cycle Ideal cycle
and the Hampson-Linde cycle. Multiple compression and
expansion cycles allow gas refrigeration systems to liquify
gases.
p

11.3.2 Modelling real systems

Thermodynamic cycles may be used to model real devices and systems, typically by making a series of assumptions.[2] Simplifying assumptions are often necessary to reduce the problem to a more manageable form.[2] For example, as shown in the figure, devices such as a gas turbine or jet engine can be modeled as a Brayton cycle. The actual device is made up of a series of stages, each of which is itself modeled as an idealized thermodynamic process. Although each stage which acts on the working fluid is a complex real device, they may be modelled as idealized processes which approximate their real behavior. If energy is added by means other than combustion, then a further assumption is that the exhaust gases would be passed from the exhaust to a heat exchanger that would sink the waste heat to the environment, and the working gas would be reused at the inlet stage.
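To make the staged modelling idea concrete, the sketch below treats a gas turbine as an ideal Brayton cycle: an isentropic compressor stage, an isobaric combustor, an isentropic turbine, and isobaric heat rejection. The inlet temperature, turbine-inlet temperature, pressure ratio, and the cold-air-standard gas properties are assumptions made only for this example, not figures from the text.

gamma = 1.4                     # ratio of specific heats for air (cold-air-standard assumption)
cp = 1005.0                     # J/(kg K), assumed constant

def brayton_temperatures(t_inlet, t_turbine_inlet, pressure_ratio):
    # Temperatures leaving each idealized stage of the cycle
    k = pressure_ratio ** ((gamma - 1.0) / gamma)
    t2 = t_inlet * k            # isentropic compression (compressor stage)
    t3 = t_turbine_inlet        # isobaric heat addition (combustor stage)
    t4 = t3 / k                 # isentropic expansion (turbine stage)
    return t2, t3, t4           # isobaric heat rejection returns the gas to t_inlet

t1, t3, r_p = 300.0, 1400.0, 10.0          # assumed operating point
t2, t3, t4 = brayton_temperatures(t1, t3, r_p)

q_in = cp * (t3 - t2)           # heat added per kg in the combustor
q_out = cp * (t4 - t1)          # heat rejected per kg to the surroundings
print((q_in - q_out) / q_in)    # thermal efficiency, about 0.48 under these assumptions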
The difference between an idealized cycle and actual performance may be significant.[2] For example, comparing the p-V diagram predicted by an ideal Stirling cycle with the diagram measured on a real Stirling engine shows how large the discrepancy can be: since the net work output of a cycle is represented by the interior of the loop, the work output predicted by the ideal cycle differs considerably from the actual work output of the real engine. It may also be observed that the real individual processes diverge from their idealized counterparts; for example, isochoric expansion (process 1-2) occurs with some actual volume change.

11.3.3 Well-known thermodynamic cycles

In practice, simple idealized thermodynamic cycles are usually made out of four thermodynamic processes. Any thermodynamic process may be used. However, when idealized cycles are modeled, processes in which one state variable is kept constant are often used, such as an isothermal process (constant temperature), isobaric process (constant pressure), isochoric process (constant volume), isentropic process (constant entropy), or an isenthalpic process (constant enthalpy). Adiabatic processes, in which no heat is exchanged, are also often used.

Some example thermodynamic cycles and their constituent processes are as follows:

Ideal cycle

An illustration of an ideal cycle heat engine (arrows clockwise): a p-V diagram with four states (1, 2, 3, 4) joined by four processes (A, B, C, D).

An ideal cycle is constructed out of:

1. TOP and BOTTOM of the loop: a pair of parallel isobaric processes
2. LEFT and RIGHT of the loop: a pair of parallel isochoric processes

Internal energy of a perfect gas undergoing different portions of a cycle, written as ΔU = Q − W:

Isothermal: ΔU = RT ln(V2/V1) − RT ln(V2/V1) = 0 (the internal energy change of an isothermal process is 0)
Isochoric: ΔU = Cv ΔT − 0 = Cv ΔT
Isobaric: ΔU = Cp ΔT − R ΔT (or P ΔV) = Cv ΔT
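These three cases can be checked with a few lines of Python. The sketch below uses molar quantities for one mole of a perfect gas; the choice of a monatomic gas (Cv = 3R/2) and the particular temperatures and volumes are assumptions made only for the example.

import math

R = 8.314                 # J/(mol K)
Cv = 1.5 * R              # monatomic perfect gas (assumption for this example)
Cp = Cv + R

def dU_isothermal(T, V1, V2):
    Q = R * T * math.log(V2 / V1)    # heat absorbed during the isothermal change
    W = R * T * math.log(V2 / V1)    # work done by the gas
    return Q - W                     # always 0: U of a perfect gas depends on T only

def dU_isochoric(dT):
    return Cv * dT - 0.0             # no P dV work at constant volume

def dU_isobaric(dT):
    return Cp * dT - R * dT          # W = P dV = R dT, so dU = Cv dT

print(dU_isothermal(300.0, 1.0e-3, 2.0e-3))   # 0.0
print(dU_isochoric(10.0))                     # Cv * 10
print(dU_isobaric(10.0))                      # also Cv * 10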
Carnot cycle

Main article: Carnot cycle

The Carnot cycle is a cycle composed of the totally reversible processes of isentropic compression and expansion and isothermal heat addition and rejection. The thermal efficiency of a Carnot cycle depends only on the absolute temperatures of the two reservoirs in which heat transfer takes place, and for a power cycle is

η = 1 − TL / TH

where TL is the lowest cycle temperature and TH the highest. For Carnot power cycles the coefficient of performance for a heat pump is

COP = 1 + TL / (TH − TL)

and for a refrigerator the coefficient of performance is

COP = TL / (TH − TL)

The second law of thermodynamics limits the efficiency and COP for all cyclic devices to levels at or below the Carnot efficiency. The Stirling cycle and Ericsson cycle are two other reversible cycles that use regeneration to obtain isothermal heat transfer.
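A quick numerical sketch of these limits; the reservoir temperatures are assumed values chosen only for illustration.

def carnot_efficiency(t_low, t_high):
    return 1.0 - t_low / t_high              # eta = 1 - TL/TH

def carnot_cop_heat_pump(t_low, t_high):
    return 1.0 + t_low / (t_high - t_low)    # COP = 1 + TL/(TH - TL) = TH/(TH - TL)

def carnot_cop_refrigerator(t_low, t_high):
    return t_low / (t_high - t_low)          # COP = TL/(TH - TL)

t_low, t_high = 300.0, 500.0                 # assumed absolute reservoir temperatures, K
print(carnot_efficiency(t_low, t_high))      # 0.4
print(carnot_cop_heat_pump(t_low, t_high))   # 2.5
print(carnot_cop_refrigerator(t_low, t_high))  # 1.5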

Stirling cycle

Main article: Stirling cycle

A Stirling cycle is like an Otto cycle, except that the adiabats are replaced by isotherms. It is also the same as an Ericsson cycle with the isobaric processes replaced by constant-volume processes. A Stirling cycle is constructed out of:

1. TOP and BOTTOM of the loop: a pair of quasi-parallel isothermal processes
2. LEFT and RIGHT sides of the loop: a pair of parallel isochoric processes

Heat flows into the loop through the top isotherm and the left isochore, and some of this heat flows back out through the bottom isotherm and the right isochore, but most of the heat flow is through the pair of isotherms. This makes sense since all the work done by the cycle is done by the pair of isothermal processes, which are described by Q = W. This suggests that all the net heat comes in through the top isotherm. In fact, all of the heat which comes in through the left isochore comes out through the right isochore: since the top isotherm is all at the same warmer temperature TH and the bottom isotherm is all at the same cooler temperature TC, and since the change in energy for an isochore is proportional to the change in temperature, all of the heat coming in through the left isochore is cancelled out exactly by the heat going out through the right isochore.
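The cancellation described above can be verified for one mole of an ideal gas traversing a Stirling cycle. The temperatures and the volume ratio below are assumptions for the example; the two isochore heats cancel exactly, and the net heat (equal to the net work) comes entirely from the two isotherms.

import math

R = 8.314
Cv = 1.5 * R                       # monatomic ideal gas, assumed for the example
T_hot, T_cold = 500.0, 300.0       # temperatures of the top and bottom isotherms, K
ratio = 2.0                        # volume ratio V_large / V_small

Q_top = R * T_hot * math.log(ratio)      # heat in along the hot isothermal expansion
Q_right = Cv * (T_cold - T_hot)          # heat out along the right isochore (cooling)
Q_bottom = -R * T_cold * math.log(ratio) # heat out along the cold isothermal compression
Q_left = Cv * (T_hot - T_cold)           # heat in along the left isochore (heating)

print(Q_left + Q_right)                  # 0.0: the isochore heats cancel exactly
print(Q_top + Q_bottom)                  # net heat = net work = R*(T_hot - T_cold)*ln(ratio)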

11.3.4 State functions and entropy

If Z is a state function then the balance of Z remains unchanged during a cyclic process:

∮ dZ = 0

Entropy is a state function: for a reversible transfer of heat Q at absolute temperature T the entropy change is

ΔS = Q / T

so that for any reversible cyclic process

∮ dS = ∮ dQ / T = 0

meaning that the net entropy change over a cycle is 0.
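The Carnot cycle sketched earlier gives a quick numerical check of this statement: the heats exchanged on the two isotherms, divided by their temperatures, sum to zero, while the two adiabats contribute nothing. The numbers below are again illustrative assumptions.

import math

R = 8.314                                 # one mole of ideal gas assumed
T_hot, T_cold = 500.0, 300.0
expansion_ratio = 2.0                     # V2/V1 on the hot isotherm; for a Carnot cycle the
                                          # cold isotherm is compressed by the same ratio

Q_hot = R * T_hot * math.log(expansion_ratio)      # heat absorbed at T_hot
Q_cold = -R * T_cold * math.log(expansion_ratio)   # heat rejected at T_cold

print(Q_hot / T_hot + Q_cold / T_cold)    # 0.0 (up to rounding): the cyclic sum of Q/T vanishes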

11.3.5 See also

Entropy
Economizer

11.3.6 References

[1] Çengel, Yunus A.; Boles, Michael A. (2002). Thermodynamics: An Engineering Approach. Boston: McGraw-Hill. p. 14. ISBN 0-07-238332-1.
[2] Çengel, Yunus A.; Boles, Michael A. (2002). Thermodynamics: An Engineering Approach. Boston: McGraw-Hill. p. 452. ISBN 0-07-238332-1.

11.3.7 Further reading

Halliday, Resnick & Walker. Fundamentals of Physics, 5th ed. John Wiley & Sons, 1997. Chapter 21, "Entropy and the Second Law of Thermodynamics".
Çengel, Yunus A., and Michael A. Boles. Thermodynamics: An Engineering Approach, 7th ed. New York: McGraw-Hill, 2011. Print.
Hill and Peterson. Mechanics and Thermodynamics of Propulsion, 2nd ed. Prentice Hall, 1991. 760 pp.

11.3.8 External links

Orthoepy, Liempt, DragonBot, Djr32, Excirial, SubstanceDx99, Joa po, Nigelleelee, Lartoven, Sun Creator, L1f07bscs0035, JamieS93, Razorame, Plasmic Physics, Versus22, SoxBot III, Uri2~enwiki, Rvoorhees, Antti29, XLinkBot, BodhisattvaBot, FactChecker1199, TZGreat, Gotta
catch 'em all yo, Gonfer, Fzxboy, WikiDao, Jpfru2, Addbot, AVand, Some jerk on the Internet, Vanished user kksudjekkdfjlrd, Betterusername, Sir cumalot, Sen Travers, Ronhjones, Fieldday-sunday, Adrian147, CanadianLinuxUser, Fluernutter, Morning277, Glane23, Favonian,
Jasper Deng, 84user, Tide rolls, Lightbot, Cesiumfrog, Ralf Roletschek, Superboy112233, HerculeBot, Snaily, Legobot, Luckas-bot, Yobot,
Ht686rg90, AnomieBOT, DemocraticLuntz, Daniele Pugliesi, Sfaefaol, Jim1138, AdjustShift, Rudolf.hellmuth, Kingpin13, Nyanhtoo, Flewis,
Bluerasberry, Materialscientist, Felyza, GB fan, Jemandwicca, Xqbot, Transity, .45Colt, Jerey Mall, Wyklety, Gap9551, Time501, GrouchoBot, Derintelligente, ChristopherKingChemist, Mathonius, Energybender, Shadowjams, Keo Ross Sangster, Aaron Kauppi, SD5, Imveracious, BoomerAB, GliderMaven, Pascaldulieu, FrescoBot, LucienBOT, Tlork Thunderhead, BenzolBot, Jamesooders, Haein45, Pinethicket,
HRoestBot, Calmer Waters, Hamtechperson, Jschnur, RedBot, Marcmarroquin, Pbsouthwood, Jujutacular, Bgpaulus, Jonkerz, Navidh.ahmed,
Vrenator, Darsie42, Jerd10, DARTH SIDIOUS 2, Onel5969, Mean as custard, DRAGON BOOSTER, Newty23125, William Shi, EmausBot,
Tommy2010, Wikipelli, Dcirovic, K6ka, Thecheesykid, JSquish, Shuipzv3, Empty Buer, Hazard-SJ, Quondum, Talyor Will, Morgankevinj, Perseus, Son of Zeus, Tls60, Orange Suede Sofa, RockMagnetist, DASHBotAV, 28bot, ClueBot NG, Jack Greenmaven, Mythicism, This
lousy T-shirt, Neeraj1997, Cj005257, Frietjes, Jessica-NJITWILL, Braincricket, Angelo Michael, Widr, Christ1013, Rectangle546, Becarlson,
Analwarrior, Wiki13, ElphiBot, Joydeep, Saurabhbaptista, Franz99, YVSREDDY, Cky2250, Matt Hayter, Shikhar1089,
, Kasamasa, Anujjjj, Mrt3366, Jack No1, Shyncat, Avengingbandit, Forcez, JYBot, Librscorp, Mysterious Whisper, Superduck463, Frosty, Sriharsh1234, The
Anonymouse, Reatlas, Resolution3.464, Paikrishnan, Masterbait123, Jasualcomni, DavidLeighEllis, Montyv, FizykLJF, Wyn.junior, Mahusha,
Trackteur, Johnnprince203, Bog snorkeller, Crystallizedcarbon, Jokeop, Alicemitchellweddingpressure, Esquivalience, Engmeas, KasparBot,
JJMC89, Christoerekman, Vishrut Malik, Sharasque, Harmon758, Aditi Tripathi09, Gulercetin, BlueUndigo13, Mizaan Shamaun and Anonymous: 766
Thermodynamic temperature Source: https://en.wikipedia.org/wiki/Thermodynamic_temperature?oldid=713032171 Contributors: AxelBoldt, The Anome, AdamW, Roadrunner, Baclan, Lumos3, Robbot, Romanm, Cutler, Giftlite, Lethe, Eequor, Jaan513, Rich Farmbrough,
Pjacobi, Xezbeth, RJHall, Evolauxia, Giraedata, Nk, Keenan Pepper, Ricky81682, PAR, Velella, Skatebiker, Gene Nygaard, Blaxthos,
Woohookitty, Rparson, Benbest, Pol098, Emerson7, DePiep, Nanite, Koavf, Erebus555, Gurch, Kri, Spacepotato, Loom91, CambridgeBayWeather, Trovatore, Dhollm, BOT-Superzerocool, Enormousdude, Pifvyubjwm, Smurrayinchester, Katieh5584, Teply, Sbyrnes321, SmackBot,
David Shear, Pedrose, Ephraim33, Chris the speller, Bluebot, Thumperward, Sadads, Sbharris, Henning Makholm, Mion, Sadi Carnot, Schnazola, Breno, JoseREMY, Mgiganteus1, Nonsuch, Collect, Frokor, JRSpriggs, Kylu, Rieman 82, Thijs!bot, LeBofSportif, Headbomb, Greg L,
Braindrain0000, JAnDbot, Poga, WikipedianProlic, Limtohhan, Ashishbhatnagar72, DinoBot, Laura1822, CommonsDelinker, Leyo, Mpk138,
ARTE, DorganBot, Skarnani, VolkovBot, Je G., Hqb, Geometry guy, Wiae, Kbrose, SieBot, Damorbel, VVVBot, Chromaticity, Lightmouse,
Anchor Link Bot, JL-Bot, ImageRemovalBot, ClueBot, ChandlerMapBot, 718 Bot, Estirabot, Sun Creator, Frostus, Addbot, MrOllie, Lightbot,
CountryBot, Yobot, AnomieBOT, Daniele Pugliesi, Materialscientist, YBG, Kithira, GrouchoBot, Shirik, Amaury, Vivekakulharia, Dave3457,
Chjoaygame, Jatosado, FrescoBot, Simuliid, CheesyBiscuit, Glider87, Pinethicket, , JokerXtreme, Aleitner, Bearycool,
EmausBot, Super48paul, KHamsun, Netheril96, Sibom, Dondervogel 2, BrokenAnchorBot, Donner60, Frangojohnson, ChuispastonBot, ClueBot NG, Gareth Grith-Jones, Matthiaspaul, Snotbot, Frietjes, Jeremy W Powell, BG19bot, Entton1, Bauka91 91, Kisokj, Cyberbot II, YFdyhbot, Ugog Nizdast, Wikifan2744, Bubba58, Johnny Cook12345678987 and Anonymous: 85
Volume (thermodynamics) Source: https://en.wikipedia.org/wiki/Volume_(thermodynamics)?oldid=690570181 Contributors: Gene Nygaard,
Physchim62, Dhollm, Gilliam, Dreadstar, Md2perpe, Cydebot, Mikael Hggstrm, Lightmouse, Ktr101, Clayt85, MystBot, Addbot, Lightbot, Yobot, Ptbotgourou, Daniele Pugliesi, Miracleworker5263, , Louperibot, Trappist the monk, EmausBot, ZroBot, Cobaltcigs,
ClueBot NG, Muon, BG19bot, Dbrawner, Cky2250, Acratta, Blackbombchu, TeaLover1996 and Anonymous: 15
Thermodynamic system Source: https://en.wikipedia.org/wiki/Thermodynamic_system?oldid=710216276 Contributors: Toby Bartels, Fxmastermind, Eric119, Stevenj, Smack, Filemon, Giftlite, Peruvianllama, Andycjp, Blazotron, Rdsmith4, Icairns, Jfraser, Helix84, Mdd, Alansohn,
Rw63phi, PAR, Pion, Jheald, Dan100, BD2412, Chobot, Wavesmikey, Gaius Cornelius, Dhollm, Jpbowen, Bota47, Light current, E Wing,
SmackBot, Bomac, MalafayaBot, Stepho-wrs, DinosaursLoveExistence, Sadi Carnot, 16@r, CmdrObot, Cydebot, Krauss, Sting, Headbomb,
Stannered, Akradecki, JAnDbot, Athkalani~enwiki, MSBOT, .anacondabot, VoABot II, Rich257, KConWiki, Dirac66, An1MuS, Ac44ck,
Pbroks13, Nev1, Trusilver, Maurice Carbonaro, Cmbankester, Usp, VolkovBot, Kbrose, PaddyLeahy, SieBot, Mercenario97, OKBot, ClueBot,
Auntof6, Excirial, PixelBot, Wdford, Mikaey, SchreiberBike, Addbot, CarsracBot, Redheylin, Glane23, Ht686rg90, Fraggle81, Becky Sayles,
AnomieBOT, Materialscientist, ArthurBot, Xqbot, DSisyphBot, GrouchoBot, Chjoaygame, FrescoBot, Pshmell, Pinethicket, RedBot, Thinking of England, Artem Korzhimanov, AznFiddl3r, EmausBot, Abpk62, Glockenklang1, ClueBot NG, Gokulchandola, Loopy48, BZTMPS,
BG19bot, Gryon5147, Tutelary, ChrisGualtieri, Upsidedowntophat, Adwaele, Frosty, PhoenixPub, Eclipsis Proteo, Zortwort, Klaus SchmidtRohr and Anonymous: 75
Heat capacity Source: https://en.wikipedia.org/wiki/Heat_capacity?oldid=715890769 Contributors: Heron, Edward, Patrick, Michael Hardy,
Ppareit, Looxix~enwiki, Ellywa, Julesd, Glenn, Samw, Tantalate, Krithin, Smallcog, Schusch, Romanm, Modulatum, Sverdrup, Giftlite, BenFrantzDale, Bensaccount, Jason Quinn, Bobblewik, ThePhantom, Karol Langner, Icairns, Gscshoyru, Tsemii, Edsanville, Vsmith, Xezbeth,
Nabla, Joanjoc~enwiki, Kwamikagami, RAM, Jung dalglish, Pearle, I-hunter, Yhr, Mc6809e, PAR, Jheald, Gene Nygaard, Ian Moody, Kelly
Martin, Pol098, Palica, Marudubshinki, Rjwilmsi, FlaBot, Margosbot~enwiki, Yrfeloran, Chobot, DVdm, YurikBot, Wavelength, RobotE,
Jimp, RussBot, Madkayaker, Gaius Cornelius, Grafen, Trovatore, Dhollm, Voidxor, E2mb0t~enwiki, Poppy, JPushkarH, Mumuwenwu, SDS,
GrinBot~enwiki, Bo Jacoby, Tom Morris, Hansonrstolaf, Edgar181, Skizzik, ThorinMuglindir, Chris the speller, Complexica, Sbharris, Sct72,
John, JorisvS, CaptainVindaloo, Spiel496, MTSbot~enwiki, Iridescent, V111P, The Letter J, Vaughan Pratt, CmdrObot, Shorespirit, Quarkboard, Myasuda, Cydebot, Christian75, Mikewax, Thijs!bot, Memty Bot, Andyjsmith, Marek69, Greg L, Vincent88~enwiki, Ste4k, JAnDbot,
BenB4, Magioladitis, Riceplaytexas, Engineman, Chemical Engineer, Dirac66, Mythealias, Anaxial, Alro, R'n'B, Leyo, Mausy5043, Thermbal,
Brien Clark, Notreallydavid, NewEnglandYankee, RayForma, Ojovan, Brvman, AlnoktaBOT, TheOtherJesse, 8thstar, Philip Trueman, Oshwah, Aymatth2, Meters, Demize, Kbrose, JDHeinzmann, Damorbel, BotMultichill, Cwkmail, Revent, Flyer22 Reborn, Allmightyduck, Anchor
Link Bot, Dolphin51, Denisarona, Elassint, ClueBot, Bbanerje, Auntof6, Dh78~enwiki, Djr32, KyuubiSeal, Rathemis, Peacheshead, Johnuniq,
TimothyRias, Forbes72, WikHead, NellieBly, Alberisch~enwiki, Gniemeyer, Addbot, Boomur, CanadianLinuxUser, Keds0, Snaily, Yobot,
AnomieBOT, DemocraticLuntz, Rubinbot, Daniele Pugliesi, Materialscientist, Citation bot, Eumolpo, Ulf Heinsohn, Chthonicdaemon, GrouchoBot, Ccmwiki~enwiki, Tufor, A. di M., Thehelpfulbot, Khakiandmauve, Chjoaygame, Banak, Italianice84, Bergdohle, Mfwitten, Cannolis,
Citation bot 1, Maggyero, Chenopodiaceous, Pinethicket, I dream of horses, Dheknesn, Mogren, Dtrx, Sbembenek18, Thi Nhi, Soeren.b.c, Minimac, J36miles, EmausBot, John of Reading, Ajraddatz, Tpudlik, Dewritech, Gowtham vmj, Onegumas, Wikipelli, K6ka, Hhhippo, Ronk01,
Osure, Quondum, Mmww123, AManWithNoPlan, Wayne Slam, Hpubliclibrary, Donner60, ChuispastonBot, RockMagnetist, 28bot, Pulsfordp, ClueBot NG, Cwmhiraeth, Ulund, School of Stone, Physics is all gnomes, The Master of Mayhem, O.Koslowski, Rezabot, Danim,
MerlIwBot, ImminentFate, Magneticmoment, Helpful Pixie Bot, Lolm8, Calabe1992, Bibcode Bot, ElZarco, BG19bot, Yafjj215, AvocatoBot,
Ushakaron, Tcep, Jschmalzel, Saiprasadrm, Zedshort, Physicsch, Martkat08, MathewTownsend, Anthonymcnug, BattyBot, David.moreno72,
VijayGargUA, Cyberbot II, Ytic nam, Heithm, LHcheM, Adwaele, JYBot, Webclient101, Yauran, Makecat-bot, Sarah george mesiha, Zmicier
P., Mgibby5, Reatlas, Joeinwiki, C5st4wr6ch, Epicgenius, Luke arnold16, Akiaterry, AresLiam, Kogge, Newestcastleman, JCMPC, Kernkkk,
Meumeul, Amortias, Baharmajorana, Mario Casteln Castro, Fleivium, TaeYunPark, LfSeoane, Thizzlehatter, Zppix, Cyrej, Scipsycho, Nickabernethy, Sweepy, TheOldOne1939, Clinton Kepler and Anonymous: 312
Compressibility Source: https://en.wikipedia.org/wiki/Compressibility?oldid=711876248 Contributors: Maury Markowitz, Michael Hardy,
Aarchiba, Moriori, Chris Roy, Mor~enwiki, Mintleaf~enwiki, BenFrantzDale, Leonard G., Pne, Sam Hocevar, HasharBot~enwiki, AMR, PAR,
Count Iblis, Gene Nygaard, GregorB, Rjwilmsi, Cryonic Mammoth, Deklund, RobotE, RussBot, Twin Bird, Dhollm, Valeriecoman, HeartofaDog, Commander Keane bot, Rpspeck, Powerfool, COMPFUNK2, Wiz9999, Mwtoews, John, Iepeulas, Lenoxus, Pacerlaser, Courcelles,
Covalent, Novous, TheTito, Basar, Thijs!bot, Headbomb, JustAGal, EarthPerson, JAnDbot, Tigga, Ibjt4ever, Magioladitis, Ehdr, Msd3k, Red
Sunset, R'n'B, Deans-nl, Zygimantus, Uncle Dick, KudzuVine, Sandman619, CWii, YuryKirienko, Wiae, Andy Dingley, Gerakibot, Ra'ike,
Algorithms, ClueBot, Binksternet, Tzm41, Crowsnest, Freireib, Addbot, DOI bot, TStein, Mpz, Ale66, Luckas-bot, Yobot, Daniele Pugliesi,
Citation bot 1, Pinethicket, Agrasa, RjwilmsiBot, Ankid, EmausBot, ZroBot, Redhanker, AManWithNoPlan, Stwalczyk, Whoop whoop pull
up, Mjbmrbot, ClueBot NG, Helpful Pixie Bot, Bibcode Bot, BG19bot, Mn-imhotep, Eio, Mogism, Anrnusna, Trackteur and Anonymous: 44
Thermal expansion Source: https://en.wikipedia.org/wiki/Thermal_expansion?oldid=708928126 Contributors: Fred Bauder, Delirium, Andrewman327, Cdang, Giftlite, BenFrantzDale, Alexf, Deewiant, Thorsten1, Grm wnr, ChrisRuvolo, Vsmith, Bender235, Quietly, Art LaPella,
Hooperbloob, Knucmo2, Zachlipton, Alansohn, PAR, Snowolf, TaintedMustard, Gene Nygaard, StuTheSheep, Linas, Mindmatrix, Aidanlister, Pol098, Firien, Knuckles, Prashanthns, Susato, Paxsimius, Mandarax, NCdave, Jclemens, Nanite, Rjwilmsi, Matt Deres, ACrush, Gurch,
Chobot, YurikBot, Charles Gaudette, Akamad, Alex Bakharev, ArcticFlame, Grafen, Dhollm, Moe Epsilon, DeadEyeArrow, CWenger, GrinBot~enwiki, That Guy, From That Show!, Luk, Yvwv, SmackBot, Slashme, Da2ce7, Eupedia, Gilliam, Reza1615, EndingPop, Mion, Harryboyles, ML5, Paladinwannabe2, Dan Gluck, Wizard191, Iridescent, Courcelles, Mcginnly, Ironmagma, Saintrain, Thijs!bot, Epbr123, Headbomb, Nick Number, Escarbot, Porqin, QuiteUnusual, RogueNinja, JAnDbot, Ibjt4ever, Jinxinzzi, Asplace, Bongwarrior, VoABot II, JamesBWatson, Christophe.Finot, Raggiante~enwiki, Cardamon, R'n'B, Zygimantus, Eybot~enwiki, J.delanoy, Trusilver, Dani setiawan, Mike.lifeguard,
Davidprior, Auegel, Jcwf, TomasBat, In Transit, STBotD, Ojovan, AntoniusJ~enwiki, Squids and Chips, WOSlinker, Hqb, Leaf of Silver,
Claidheamohmor, Gerakibot, Yintan, Mothmolevna, Chromaticity, Masgatotkaca, Csloomis, OKBot, AllHailZeppelin, Kanonkas, ClueBot,
Sealsrock!, The Thing That Should Not Be, Ken l lee, Mild Bill Hiccup, Harland1, Largedizkool, Adrian dakota, DragonBot, Awickert, CohesionBot, PixelBot, Leonard^Bloom, P1415926535, La Pianista, Ammm3478, 1ForTheMoney, Ngebbett, David.Boettcher, Addbot, Xp54321,
Otisjimmy1, Chzz, Jgrosay~enwiki, Quercus solaris, Tide rolls, Teles, Karthik3186, Yobot, Zaereth, AnomieBOT, Gtz, Piano non troppo,
Materialscientist, E235, Citation bot, Clark89, LilHelpa, Xqbot, Qq19342174, Cristianrodenas, RibotBOT, Kyng, Dpinna85, Dan6hell66,
Jatosado, Black.je, Pinethicket, A8UDI, Serols, , Tbhotch, RjwilmsiBot, MagnInd, Bento00, DASHBot, Hhhippo, Pololei, Confession0791, AManWithNoPlan, Pun, RockMagnetist, Teaktl17, ClueBot NG, Ronaldjo, Gareth Grith-Jones, Satellizer, Ulrich67, Mmarre,
Helpful Pixie Bot, Bibcode Bot, BG19bot, Angry birds fan Club, Dentalplanlisa, Eio, Arc1977, BattyBot, Tmariem, Mahmud Halimi Wardag,
Owoturo tboy, YannLar, Csuino, TwoTwoHello, Hwangrox99, QueenMisha, Reatlas, LukeMcMahon, Katelyn.kitzinger, Alexwho314, Aguner,
Lektio, Prokaryotes, Ginsuloft, Stamptrader, JOb, VolpeCenter, Emaw61, Monkbot, Jkutil18, Mybalonyhasarstname, Trackteur, R-joven,
Richard Hebb, DiscantX, Deepak pandey mj, JenniferBaeuml, Pusith95 and Anonymous: 320
Thermodynamic potential Source: https://en.wikipedia.org/wiki/Thermodynamic_potential?oldid=714210912 Contributors: Xavic69,
Michael Hardy, Cimon Avaro, Trainspotter~enwiki, Terse, Phil Boswell, Aetheling, Giftlite, Karol Langner, Icairns, Edsanville, Willhsmit,
Discospinster, El C, Pearle, Keenan Pepper, PAR, Fawcett5, Count Iblis, V8rik, Rjwilmsi, JillCon, ChrisChiasson, GangofOne, Wavesmikey,
Chaos, Dhollm, Bota47, That Guy, From That Show!, SmackBot, Incnis Mrsi, Pavlovi, Bomac, Kmarinas86, MalafayaBot, Huwmanbeing,
Cybercobra, Drphilharmonic, Sadi Carnot, Eli84, Kareemjee, Ring0, LeBofSportif, Headbomb, JAnDbot, Magioladitis, Joshua Davis, Dorgan,
Lseixas, Sheliak, VolkovBot, Larryisgood, VasilievVV, A4bot, Nightwoof, Fractalizator, Kbrose, Hobojaks, SieBot, Thermodude, Pinkadelica,
EoGuy, Tize, Niceguyedc, Vql, Alexbot, Addbot, DOI bot, Steven0309, Download, Numbo3-bot, Serge Lachinov, Yobot, Fragaria Vesca,
Ptbotgourou, Aboalbiss, Rubinbot, Danno uk, Citation bot, ArthurBot, LilHelpa, Lianglei0304, FrescoBot, DrilBot, EmausBot, WikitanvirBot,
Netheril96, Dcirovic, Shivankmehra, SporkBot, Helpful Pixie Bot, BG19bot, F=q(E+v^B), ArmbrustBot, JOb, Monkbot and Anonymous: 46
Enthalpy Source: https://en.wikipedia.org/wiki/Enthalpy?oldid=717005226 Contributors: Bryan Derksen, Taw, Toby Bartels, Peterlin~enwiki,
Edward, Llywrch, Kku, Gbleem, Looxix~enwiki, Darkwind, Julesd, AugPi, Smack, Ehn, Omegatron, Lumos3, Gentgeen, Robbot, Fredrik, Chris
73, Puckly, Caknuck, Lupo, Diberri, Buster2058, Connelly, Giftlite, Donvinzk, Markus Kuhn, Bensaccount, Luigi30, Glengarry, LucasVB, Gunnar Larsson, Karol Langner, Nek, Icairns, C4~enwiki, Tsemii, Mike Rosoft, Discospinster, Rich Farmbrough, Guanabot, ZeroOne, RoyBoy,
Kedmond, Atraxani, Giraedata, Helix84, Sam Korn, Mdd, Benjah-bmm27, PAR, BernardH, Dagimar, Count Iblis, Drat, Dirac1933, Vuo,
Gene Nygaard, Wesley Moy, StradivariusTV, Isnow, Palica, Mandarax, BD2412, JonathanDursi, Yurik, Eteq, Tlroche, Pasky, Dar-Ape, FlaBot,
Jrtayloriv, TeaDrinker, Don Gosiewski, Srleer, Physchim62, Flying Jazz, YurikBot, Wavelength, TexasAndroid, Jimp, Brandmeister (old),
Dotancohen, Chaos, Salsb, Banes, Dhollm, Tony1, Someones life, Izuko, Cmcfarland, Jrf, RG2, Innity0, Mejor Los Indios, Tom Morris, Itub,
Sardanaphalus, SmackBot, Slashme, Bomac, Edgar181, Kdliss, Betacommand, JSpudeman, Kmarinas86, Bduke, Master of Puppets, Complexica, JoeBlogsDord, Sciyoshi~enwiki, DHN-bot~enwiki, Skatche, Sbharris, Colonies Chris, JohnWheater, TheKMan, Fbianco, Drphilharmonic,
Sadi Carnot, Ohconfucius, Spiritia, SashatoBot, Mgiganteus1, The real bicky, Beetstra, Teeteetee, Spiel496, Willandbeyond, Happy-melon,
Gosolowe, Az1568, Dc3~enwiki, Mikiemike, Robbyduy, WeggeBot, Grj23, Karenjc, Myasuda, Mct mht, Gregbard, Phdrahmed, Yaris678,
Cydebot, Kupirijo, Llort, Christian75, Viridae, Tunheim, Chandni chn, Thijs!bot, Runch, Odyssey1989, Headbomb, John254, F l a n k e r,
Dawnseeker2000, Escarbot, The Obento Musubi, Teentje, Gioto, Seaphoto, Madbehemoth, Ani td, JAnDbot, Hans Mayer, MER-C, Larrybaxter, RebelRobot, JamesBWatson, Dirac66, User A1, DerHexer, JamMan, Gwern, MartinBot, JCraw, Keith D, Pbroks13, Felixbecker2,
Hairchrm, S1dorner, Rlsheehan, Numbo3, Salih, Ohms law, BlGene, Smitjo, DorganBot, Useight, Lseixas, Sheliak, AlnoktaBOT, VasilievVV,
TXiKiBoT, Jomasecu, BertSen, A4bot, Anonymous Dissident, Broadbot, Mezzaluna, Venny85, Nobull67, Andy Dingley, Yk Yk Yk, GauteHope, Riick, AlleborgoBot, Neparis, LOTRrules, Kbrose, SieBot, Spartan, ToePeu.bot, Phe-bot, Matthew Yeager, Conairh, Antonio Lopez,
Evilstudent, WikiLaurent, Dolphin51, Tuntable, ClueBot, Hjlim, Qhudspeth, Wikiste, Jusdafax, P. M. Sakkas, Morekitsch, Pdch, Ngebendi,
Natty sci~enwiki, Thehelpfulone, AC+79 3888, Qwfp, Egmontaz, Crowsnest, Rreagan007, Gonfer, Some jerk on the Internet, Wickey-nl,
EconoPhysicist, Causticorulos, Wakeham, Tide rolls, Lightbot, Gail, Margin1522, Legobot, Yobot, Amirobot, KamikazeBot, KarlHegbloom,
TimeVariant, AnomieBOT, Daniele Pugliesi, Materialscientist, ArthurBot, LilHelpa, Xqbot, Br77rino, GrouchoBot, Omnipaedista, RibotBOT,
Kyng, Vikky2904, Bytbox, FrescoBot, Citation bot 1, Winterst, AMSask, Lesath, Jandalhandler, TobeBot, Tehfu, Begomber, Matlsarefun, Diannaa, Sergius-eu, EmausBot, John of Reading, Faraz shaukat ali, KHamsun, Trinibones, Hhhippo, Grondilu, Shivankmehra, Raggot, Flag cloud,
Jadzia2341, Vacant999, Elaz85, Scientic29, RockMagnetist, DASHBotAV, Xanchester, Mikhail Ryazanov, ClueBot NG, Senthilvel32, Mesoderm, TransportObserver, Helpful Pixie Bot, Calabe1992, Bibcode Bot, BG19bot, Hz.tiang, J991, Kookookook, Bioe205fun, ChrisGualtieri,
Adwaele, Emresulun93, BeaumontTaz, Frosty, Gaurav.gautam17, Mark viking, Coleslime5403, Bruce Chen 0010334, Jianhui67, Stevengus,
Elenceq, AKS.9955, Jim Carter, Voluntas V, Yesufu29, Undened51 and Anonymous: 353
Internal energy Source: https://en.wikipedia.org/wiki/Internal_energy?oldid=717018178 Contributors: Bryan Derksen, Peterlin~enwiki,
Patrick, Michael Hardy, SebastianHelm, Cyan, Andres, J D, Robbot, Hankwang, Fabiform, Giftlite, Andries, Dratman, Bensaccount, Bobblewik, H Padleckas, Icairns, Edsanville, Spiko-carpediem~enwiki, El C, Shanes, Euyyn, Kine, Nhandler, Haham hanuka, Lysdexia, PAR,
Count Iblis, RainbowOfLight, Reaverdrop, GleasSpty, Isnow, BD2412, Qwertyus, Saperaud~enwiki, Rjwilmsi, Thechamelon, HappyCamper,
Margosbot~enwiki, ChrisChiasson, DVdm, Bgwhite, YurikBot, RussBot, Stassats, Dhollm, 2over0, RG2, SmackBot, Oloumi, David Shear,
KocjoBot~enwiki, Ddcampayo, BirdValiant, ThorinMuglindir, Zgyor~enwiki, Persian Poet Gal, MalafayaBot, Complexica, DHN-bot~enwiki,
Sbharris, Rrburke, AFP~enwiki, Henning Makholm, Sadi Carnot, Vina-iwbot~enwiki, Stikonas, Vaughan Pratt, CmdrObot, Xanthoxyl, Cydebot,
Christian75, Omicronpersei8, Barticus88, Headbomb, Bobblehead, Mr pand, Ste4k, Trakesht, JAnDbot, PhilKnight, Davidtwu, Magioladitis,
VoABot II, Cardamon, MartinBot, R'n'B, LedgendGamer, Pdcook, Lseixas, Squids and Chips, Sheliak, VolkovBot, TXiKiBoT, SQL, Riick,
SHL-at-Sv, Kbrose, SieBot, Da Joe, The way, the truth, and the light, Andrewjlockley, Dolphin51, Atif.t2, ClueBot, The Thing That Should Not
Be, Mild Bill Hiccup, SuperHamster, Djr32, CohesionBot, Jusdafax, DeltaQuad, Hans Adler, ChrisHodgesUK, Crowsnest, Avoided, Hess88,
Thatguyint, Addbot, Xp54321, DOI bot, Arcturus87, Aboctok, Morning277, CarsracBot, PV=nRT, Luckas-bot, Yobot, Fraggle81, Becky
Sayles, AnomieBOT, Daniele Pugliesi, Ipatrol, Materialscientist, The High Fin Sperm Whale, Citation bot, LilHelpa, Xqbot, J04n, GrouchoBot,
Mnmngb, MLauba, Vatbey, Chjoaygame, Maghemite, RWG00, Cannolis, Citation bot 1, DrilBot, Pinethicket, Jonesey95, MastiBot, RazielZero,
FoxBot, Derild4921, Gosnap0, Artemis Fowl III, LcawteHuggle, John of Reading, WikitanvirBot, Max139, Dewritech, Faolin42, Sportgirl426,
GoingBatty, Googamooga, Mmeijeri, Hhhippo, JSquish, F, Timmytoddler, Qclijun, Vramasub, ClueBot NG, NuclearEnergy, Mariraja2007,
Cky2250, Aisteco, Acratta, Adwaele, Qsq, Eli4ph, Jamesx12345, Galobtter, Anaekh, SkateTier, Trackteur, The Last Arietta, Scipsycho, LuFangwen, Amangautam1995, Stemwinders, Todyreli and Anonymous: 166
Ideal gas law Source: https://en.wikipedia.org/wiki/Ideal_gas_law?oldid=717248295 Contributors: CYD, Vicki Rosenzweig, Bryan Derksen,
Tarquin, Andre Engels, William Avery, SimonP, FlorianMarquardt, Patrick, JakeVortex, BrianHansen~enwiki, Mark Foskey, Vivin, Tantalate,
Ozuma~enwiki, Robbot, COGDEN, Wereon, Isopropyl, Mattaschen, Enochlau, Alexwcovington, Giftlite, Bensaccount, Alexf, Karol Langner,
Icairns, ELApro, Mike Rosoft, Venu62, Noisy, Discospinster, Hydrox, Vsmith, Femto, Grick, Bobo192, Avathar~enwiki, Larryv, Riana, Lee
S. Svoboda, Shoey, Gene Nygaard, StradivariusTV, Kmg90, Johan Lont, Mandarax, MassGalactusUniversum, Jan van Male, Eteq, Rjwilmsi,
Koavf, Sango123, FlaBot, Intersoa, Jrtayloriv, Fresheneesz, Scroteau96, SteveBaker, Physchim62, Krishnavedala, ARAJ, YurikBot, Huw Powell, Jimp, Quinlan Vos~enwiki, Gaius Cornelius, CambridgeBayWeather, LMSchmitt, Bb3cxv, Dhollm, Ruhrsch, Acit, Zwobot, T, Someones
life, User27091, Smaines, WAS 4.250, 2over0, U.S.Vevek, Nlitement, Junglecat, Bo Jacoby, Bwiki, SmackBot, Mitchan, Incnis Mrsi, Sal.farina,
Pennywisdom2099, Dave19880, Kmarinas86, Bluebot, Kunalmehta, Silly rabbit, Tianxiaozhang~enwiki, CSWarren, DHN-bot~enwiki, Metal
Militia, Berland, Samir.Mesic, Ollien, PiMaster3, G716, Foxhunt king, Just plain Bill, SashatoBot, Esrever, Mbeychok, JorisvS, IronGargoyle,
Ranmoth, Carhas0, Peter Horn, Sifaka, Majora4, Mikiemike, MC10, Astrochemist, Kimtaeil, Christian75, Thijs!bot, Headbomb, Jakirkham,
Electron9, RedWasp, Hmrox, AntiVandalBot, KMossey, Nehahaha, Seaphoto, Prolog, Coolhandscot, Fern Forest, AdamGomaa, Magioladitis, VoABot II, Baccyak4H, Kittyemo, ANONYMOUS COWARD0xC0DE, JaGa, Nirupambits, MartinBot, Rock4p, Mbweissman, J.delanoy,
SimpsonDG, P.wormer, Nwbeeson, Habadasher, Juliancolton, KudzuVine, Nasanbat, VolkovBot, Error9312, Drax Conqueror, Barneca, Philip
Trueman, Rbingama, TXiKiBoT, Malinaccier, Comtraya, Rexeken, LanceBarber, Hanjabba, Riick, Brianga, Hoopssheaer, Givegains, SieBot,
Flyer22 Reborn, Baxter9, Oxymoron83, Smaug123, 123ilikecheese, Lightmouse, JerroldPease-Atlanta, Nskillen, COBot, Adamtester, Thomjakobsen, Pinkadelica, ClueBot, GorillaWarfare, Kharazia, The 888th Avatar, Vql, Jmk, Excirial, Pdch, DumZiBoT, Hseo, TZGreat, Frood,
RP459, QuantumGlow, Dj-dios-del-sol, SkyLined, Dnvrfantj, Addbot, Power.corrupts, LaaknorBot, Eelpop, CarsracBot, LinkFA-Bot, Lightbot, Loupeter, Legobot, Yobot, Ptbotgourou, Daniele Pugliesi, JackieBot, Materialscientist, Nickkid5, Quark1005, Craftyminion, GrouchoBot,
ChristopherKingChemist, RibotBOT, , Dougofborg, Kamran28, Khakiandmauve, StephenWade, EntropyTrap, Lambda(T), Happydude69 yo, Mrahner, Michael93555, D'ohBot, RWG00, Zmcdargh, Citation bot 1, Kishmakov, I dream of horses, RedBot, Pbsouthwood,
Cramyourspam, Orenburg1, Geraldo62, Diblidabliduu, Jade Harley, RjwilmsiBot, MagnInd, Steve Belkins, EmausBot, Tdindorf, Razor2988,
RA0808, Jerry858, Dcirovic, Ssp37097, JSquish, ZroBot, Susfele, MarkclX, Stovl, SporkBot, YnnusOiramo, Donner60, Odysseus1479, Theislikerice, RockMagnetist, George Makepeace, ClueBot NG, BubblyWantedXx, Helloimriley, Wrecker1431, Rezabot, Widr, Bibcode Bot, Mariansavu, MusikAnimal, AvocatoBot, Mark Arsten, Trevayne08, F=q(E+v^B), Klilidiplomus, TechNickL1, Egm4313.s12, NJIT HUMrudyh,
NJIT HUMNV, Waterproof-breathable, AlanParkerFrance, Dexbot, Epicgenius, I am One of Many, Blackbombchu, Zenibus, JustBerry, Ginsuloft, Keojukwu, DudeWithAFeud, Whizzy1999, Fuguangwei, Evanrelf, Monkbot, Nojedi, Trackteur, ChaquiraM, Smanojprabhakar, riugena, CAPTAIN RAJU, Pwags3147, The Master 6969, Qzd, KapteynCook and Anonymous: 400
Fundamental thermodynamic relation Source: https://en.wikipedia.org/wiki/Fundamental_thermodynamic_relation?oldid=704331825 Contributors: PAR, Batmanand, Count Iblis, John Baez, Dhollm, Katieh5584, SmackBot, Betacommand, Sadi Carnot, Dicklyon, Robomojo,
Ahjulsta, Towerman86, Gogobera, BertSen, Kbrose, SieBot, ClueBot, CohesionBot, Addbot, Tnowotny, PV=nRT, LucienBOT, KHamsun,
Netheril96, ZroBot, Makecat, BG19bot, Liquidityinsta, Mela widiawati, Klaus Schmidt-Rohr and Anonymous: 23
Heat engine Source: https://en.wikipedia.org/wiki/Heat_engine?oldid=716542100 Contributors: Mav, The Anome, Stokerm, Mirwin, Roadrunner, Jdpipe, Heron, Icarus~enwiki, Isis~enwiki, Ram-Man, Ubiquity, Kku, Delirium, Ronz, CatherineMunro, Glenn, GCarty, Charles Matthews,
Tantalate, Far neil, Greenrd, Omegatron, Lumos3, Phil Boswell, Robbot, Academic Challenger, Cyrius, Cutler, Buster2058, Ancheta Wis,
Mat-C, Wolfkeeper, Tom harrison, Mcapdevila, Pashute, PlatinumX, LiDaobing, Karol Langner, Oneiros, NathanHurst, Rich Farmbrough,
Vsmith, Liberatus, Femto, Rbj, Jwonder, Jung dalglish, Giraedata, Nk, Exomnium, Alansohn, PAR, Gene Nygaard, Oleg Alexandrov, Garylhewitt, Fingers-of-Pyrex, Peter Beard, WadeSimMiser, Rtdrury, Rjwilmsi, Lionelbrits, Maustrauser, Fresheneesz, Lmatt, Scimitar, Chobot,
DVdm, Triku~enwiki, Siddhant, YurikBot, Wavelength, Borgx, JabberWok, Gaius Cornelius, Wimt, Anomalocaris, Eb Oesch, Dhollm, Scs,
Tony1, Bota47, Nikkimaria, Lio , Back ache, A Doon, ArielGold, RG2, Eric Norby, GrinBot~enwiki, SkerHawx, SmackBot, Gilliam, Bluebot, Exprexxo, Complexica, Mbertsch, SundarBot, Bob Castle, Sadi Carnot, Adsllc, Loodog, Beetstra, Stikonas, Dodo bird, Meld, Hu12,
MFago, GDallimore, IanOfNorwich, Mikiemike, BFD1, CuriousEric, Dwolsten, Chris23~enwiki, Cydebot, Odie5533, Michael C Price, DumbBOT, RottweilerCS, Efranco~enwiki, , Gralo, Headbomb, Paquitotrek, Strongriley, Northumbrian, EdJogg, TimVickers, Aspensti, JAnDbot,
Andrew Swallow, Mauk2, VoABot II, Rich257, JMBryant, Catgut, Animum, Allstarecho, Jtir, Rettetast, Tom Gundtofte-Bruun, Fredrosse,
Nono64, FactsAndFigures, Ignacio Icke, Lbeaumont, Andejons, STBotD, WarFox, Engware, Lseixas, Funandtrvl, VolkovBot, Larryisgood,
TXiKiBoT, NPrice, LeaveSleaves, Abjkf, Senpai71, Why Not A Duck, SieBot, Gerakibot, Viskonsas, Flyer22 Reborn, Oxymoron83, Animagi1981, YinZhang, Robvanbasten, Dolphin51, Martarius, ClueBot, Toy 121, Arunsingh16, AdrianAbel, Thingg, Vilkapi, YouRang?, Gonfer,
Kbdankbot, Klundarr, Addbot, LaaknorBot, CarsracBot, Vyom25, Tide rolls, Lightbot, , Loupeter, Megaman en m, Legobot, Luckas-bot,
Yobot, Pentajism, Typenolies, AnomieBOT, Daniele Pugliesi, Jim1138, Piano non troppo, Theseeker4, Bluerasberry, Citation bot, LovesMacs,
Jeriee, In fact, Shadowjams, GliderMaven, Thayts, , Steve Quinn, HamburgerRadio, Lotje, Antipastor, Jfmantis, Orphan
Wiki, Sheeana, Hhhippo, ZroBot, DavidMCEddy, Matt tuke, Wagino 20100516, Yerocus, Peterh5322, Teapeat, Rememberway, ClueBot
NG, Anagogist, Loopy48, Teep111, Widr, Calabe1992, Bibcode Bot, Lowercase sigmabot, BG19bot, MusikAnimal, Zedshort, O8h7w, BattyBot, Bangjiwoo, TooComplicated, Prokaryotes, Monkbot, Tashi19, IvanZhilin, KasparBot, Valaratar, Klaus Schmidt-Rohr, Azamali1947 and
Anonymous: 205
Thermodynamic cycle Source: https://en.wikipedia.org/wiki/Thermodynamic_cycle?oldid=687246415 Contributors: Glenn, Robbot, Wolfkeeper, Dratman, H Padleckas, CDN99, Kjkolb, Gene Nygaard, Palica, Ttjoseph, Siddhant, YurikBot, Borgx, Dhollm, Troodon~enwiki, Cov-
ington, KnightRider~enwiki, SmackBot, Gilliam, Bluebot, Tsca.bot, Ryan Roos, DMacks, Mion, Sadi Carnot, UberCryxic, Mbeychok, EmreDuran, Mig8tr, Ring0, Teratornis, Zanhsieh, Thijs!bot, Headbomb, MSBOT, JamesBWatson, Akhil999in, Jtir, MartinBot, Sigmundg, Felipebm,
Andy Dingley, Kropotkine 113, Treekids, Ariadacapo, Turbojet, Sylvain.quoilin, Erodium, Cerireid, Skarebo, Addbot, CarsracBot, Yobot,
AnomieBOT, Shadowjams, Samwb123, I dream of horses, Bluest, AXRL, EmausBot, WikitanvirBot, Frostbite sailor, Allforrous, Donner60,
ChuispastonBot, ClueBot NG, Incompetence, Guy vandegrift, Zedshort, APerson, Anushrut93, Faizan, Scie8, Hjd28 and Anonymous: 45

12.2 Images
File:13-07-23-kienbaum-unterdruckkammer-33.jpg Source: https://upload.wikimedia.org/wikipedia/commons/e/eb/13-07-23-kienbaum-unterdruckkammer-33.jpg License: CC BY 3.0 Contributors: Own work Original artist: Ralf Roletschek
File:1D_normal_modes_(280_kB).gif Source: https://upload.wikimedia.org/wikipedia/commons/9/9b/1D_normal_modes_%28280_kB%
29.gif License: CC-BY-SA-3.0 Contributors: This is a compressed version of the Image:1D normal modes.gif phonon animation on Wikipedia
Commons that was originally created by Régis Lachaume and freely licensed. The original was 6,039,343 bytes and required long-duration downloads for any article which included it. This version is 4.7% the size of the original and loads much faster. This version also has an interframe delay of 40 ms (vs. the original's 100 ms). Including processing time for each frame, this version runs at a frame rate of about 20-22.5 Hz on a typical computer, which yields a more fluid motion. Greg L 00:41, 4 October 2006 (UTC). (from http://en.wikipedia.org/wiki/Image:
1D_normal_modes_%28280_kB%29.gif) Original artist: Original Uploader was Greg L (talk) at 00:41, 4 October 2006.
File:Adiabatic.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/49/Adiabatic.svg License: CC-BY-SA-3.0 Contributors:
Image:Adiabatic.png Original artist: User:Stannered
File:Aluminium_cylinder.jpg Source: https://upload.wikimedia.org/wikipedia/commons/8/8a/Aluminium_cylinder.jpg License: CC BY-SA
3.0 Contributors: Own work Original artist:
File:Ambox_important.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b4/Ambox_important.svg License: Public domain
Contributors: Own work, based off of Image:Ambox scales.svg Original artist: Dsmurat (talk contribs)
File:Anders_Celsius.jpg Source: https://upload.wikimedia.org/wikipedia/commons/9/9f/Anders_Celsius.jpg License: Public domain Contributors: This is a cleaned up version of what appears at The Uppsala Astronomical Observatory, which is part of Uppsala University. The full-size
original image of the painting appears here, which can be accessed via this history page at the observatory's Web site.
Original artist: Olof Arenius
File:Barometer_mercury_column_hg.jpg Source: https://upload.wikimedia.org/wikipedia/commons/b/b9/Barometer_mercury_column_hg.
jpg License: CC BY-SA 2.5 Contributors: Own work Original artist: Hannes Grobe 19:02, 3 September 2006 (UTC)
File:Benjamin_Thompson.jpg Source: https://upload.wikimedia.org/wikipedia/commons/3/3c/Benjamin_Thompson.jpg License: Public domain Contributors:
http://www.sil.si.edu/imagegalaxy/imagegalaxy_imageDetail.cfm?id_image=3087
http://www.sil.si.edu/digitalcollections/hst/scientific-identity/CF/by_name_display_results.cfm?scientist=Rumford,%20Benjamin%
20Thompson,%20Count
Original artist: Not specified[1][2]
File:Boltzmann2.jpg Source: https://upload.wikimedia.org/wikipedia/commons/a/ad/Boltzmann2.jpg License: Public domain Contributors:
Uni Frankfurt Original artist:
Unknown
File:Brayton_cycle.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/3c/Brayton_cycle.svg License: CC-BY-SA-3.0 Contributors: ? Original artist: ?
File:Can_T=0_be_reached.jpg Source: https://upload.wikimedia.org/wikipedia/en/c/c7/Can_T%3D0_be_reached.jpg License: CC-BY-SA-3.0 Contributors: Made by SliteWrite Original artist: Adwaele
File:Carl_von_Linné.jpg Source: https://upload.wikimedia.org/wikipedia/commons/6/68/Carl_von_Linn%C3%A9.jpg License: Public domain Contributors: Nationalmuseum press photo, cropped with colors slightly adjusted Original artist: Alexander Roslin
File:Carnot2.jpg Source: https://upload.wikimedia.org/wikipedia/commons/e/ec/Carnot2.jpg License: Public domain Contributors: ? Original
artist: ?
File:Carnot_engine_(hot_body_-_working_body_-_cold_body).jpg Source: https://upload.wikimedia.org/wikipedia/commons/c/c7/Carnot_engine_%28hot_body_-_working_body_-_cold_body%29.jpg License: Public domain Contributors: Own work (Original text: I (Libb Thims (talk)) created this work entirely by myself.) Original artist: Libb Thims (talk)
File:Carnot_heat_engine_2.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/22/Carnot_heat_engine_2.svg License: Public
domain Contributors: Based upon Image:Carnot-engine.png Original artist: Eric Gaba (Sting - fr:Sting)
File:Clausius-1.jpg Source: https://upload.wikimedia.org/wikipedia/commons/3/34/Clausius-1.jpg License: Public domain Contributors:
unknown Original artist: Unknown
File:Clausius.jpg Source: https://upload.wikimedia.org/wikipedia/commons/4/40/Clausius.jpg License: Public domain Contributors: http://
www-history.mcs.st-andrews.ac.uk/history/Posters2/Clausius.html Original artist: Original uploader was user:Sadi Carnot at en.wikipedia
File:Close-packed_spheres.jpg Source: https://upload.wikimedia.org/wikipedia/commons/8/8e/Close-packed_spheres.jpg License: CC-BY-SA-3.0 Contributors: English Wikipedia Original artist: User:Greg L
File:Coefficient_dilatation_lineique_aciers.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b2/Coefficient_dilatation_
lineique_aciers.svg License: CC0 Contributors: Own work, data from OTUA Original artist: Cdang
File:Coefficient_dilatation_volumique_isobare_PP_semicristallin_Tait.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/
6e/Coefficient_dilatation_volumique_isobare_PP_semicristallin_Tait.svg License: CC BY-SA 3.0 Contributors: Own work Original artist:
Cdang
File:Commons-logo.svg Source: https://upload.wikimedia.org/wikipedia/en/4/4a/Commons-logo.svg License: CC-BY-SA-3.0 Contributors: ?
Original artist: ?
File:Crystal_energy.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/14/Crystal_energy.svg License: LGPL Contributors:
Own work conversion of Image:Crystal_128_energy.png Original artist: Dhateld
File:DebyeVSEinstein.jpg Source: https://upload.wikimedia.org/wikipedia/commons/5/54/DebyeVSEinstein.jpg License: Public domain
Contributors: ? Original artist: ?
File:Dehnungsfuge.jpg Source: https://upload.wikimedia.org/wikipedia/commons/d/d6/Dehnungsfuge.jpg License: CC-BY-SA-3.0 Contributors: ? Original artist: ?
File:Deriving_Kelvin_Statement_from_Clausius_Statement.svg Source:
https://upload.wikimedia.org/wikipedia/commons/8/83/
Deriving_Kelvin_Statement_from_Clausius_Statement.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Netheril96
File:DiatomicSpecHeat1.png Source: https://upload.wikimedia.org/wikipedia/commons/0/07/DiatomicSpecHeat1.png License: Public domain Contributors: Own work Original artist: User:PAR
File:DiatomicSpecHeat2.png Source: https://upload.wikimedia.org/wikipedia/commons/6/64/DiatomicSpecHeat2.png License: Public domain Contributors: Own work Original artist: User:PAR
File:Drikkeglas_med_brud-1.JPG Source: https://upload.wikimedia.org/wikipedia/commons/e/eb/Drikkeglas_med_brud-1.JPG License:
CC BY-SA 3.0 Contributors: Own work Original artist: Arc1977
File:Edit-clear.svg Source: https://upload.wikimedia.org/wikipedia/en/f/f2/Edit-clear.svg License: Public domain Contributors: The Tango!
Desktop Project. Original artist:
The people from the Tango! project. And according to the meta-data in the file, specifically: Andreas Nilsson, and Jakub Steiner (although
minimally).
File:Eight_founding_schools.png Source: https://upload.wikimedia.org/wikipedia/commons/8/85/Eight_founding_schools.png License:
Public domain Contributors: Own work Original artist: Libb Thims
File:Energy_thru_phase_changes.png Source: https://upload.wikimedia.org/wikipedia/en/1/18/Energy_thru_phase_changes.png License:
Cc-by-sa-3.0 Contributors: ? Original artist: ?
File:Entropyandtemp.PNG Source: https://upload.wikimedia.org/wikipedia/commons/9/91/Entropyandtemp.PNG License: CC-BY-SA-3.0
Contributors: Transferred from en.wikipedia to Commons. Original artist: AugPi at English Wikipedia
File:First_law_open_system.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/86/First_law_open_system.svg License: Public
domain Contributors:
First_law_open_system.png Original artist:
derivative work: Pbroks13 (talk)
File:Folder_Hexagonal_Icon.svg Source: https://upload.wikimedia.org/wikipedia/en/4/48/Folder_Hexagonal_Icon.svg License: Cc-by-sa-3.0
Contributors: ? Original artist: ?
File:GFImg3.png Source: https://upload.wikimedia.org/wikipedia/commons/3/3d/GFImg3.png License: CC BY 2.5 Contributors: Transferred
from en.wikipedia to Commons by Sreejithk2000 using CommonsHelper. Original artist: Engware at English Wikipedia
File:GFImg4.png Source: https://upload.wikimedia.org/wikipedia/commons/8/86/GFImg4.png License: CC BY 2.5 Contributors: Transferred
from en.wikipedia to Commons by Sreejithk2000 using CommonsHelper. Original artist: Engware at English Wikipedia
File:Gaylussac.jpg Source: https://upload.wikimedia.org/wikipedia/commons/2/2f/Gaylussac.jpg License: Public domain Contributors:
chemistryland.com Original artist: François Séraphin Delpech
File:Green_check.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/03/Green_check.svg License: Public domain Contributors:
Derived from Image:Yes check.svg by Gregory Maxwell Original artist: gmaxwell
File:Guillaume_Amontons.png Source: https://upload.wikimedia.org/wikipedia/commons/c/ca/Guillaume_Amontons.png License: Public
domain Contributors: circa 1870: French physicist Guillaume Amontons (1663 - 1705) demonstrates the semaphore in the Luxembourg Gardens, Paris in 1690. Original Publication: From an illustration published in Paris circa 1870. Close-up approximating bust. Original artist: Unknown
File:Heat_engine.png Source: https://upload.wikimedia.org/wikipedia/en/a/a2/Heat_engine.png License: CC-BY-SA-3.0 Contributors: ?
Original artist: ?
File:Helmet_logo_for_Underwater_Diving_portal.png Source: https://upload.wikimedia.org/wikipedia/commons/5/5e/Helmet_logo_for_Underwater_Diving_portal.png License: Public domain Contributors: This file was derived from Kask-nurka.jpg Original artist: Kask-nurka.jpg: User:Julo
File:Ice-calorimeter.jpg Source: https://upload.wikimedia.org/wikipedia/commons/3/35/Ice-calorimeter.jpg License: Public domain Contributors: originally uploaded http://en.wikipedia.org/wiki/Image:Ice-calorimeter.jpg Original artist: Originally en:User:Sadi Carnot
File:IceBlockNearJoekullsarlon.jpg Source: https://upload.wikimedia.org/wikipedia/commons/7/71/IceBlockNearJoekullsarlon.jpg License: CC BY-SA 4.0 Contributors: Own work Original artist: Andreas Tille
File:Ice_water.jpg Source: https://upload.wikimedia.org/wikipedia/commons/0/0c/Ice_water.jpg License: Public domain Contributors: ? Original artist: ?
File:Ideal_gas_isotherms.png Source: https://upload.wikimedia.org/wikipedia/commons/e/e2/Ideal_gas_isotherms.png License: Public domain Contributors: ? Original artist: ?
File:Ideal_gas_isotherms.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/92/Ideal_gas_isotherms.svg License: CC0 Contributors: Own work Original artist: Krishnavedala
File:Isentropic.jpg Source: https://upload.wikimedia.org/wikipedia/commons/4/4a/Isentropic.jpg License: CC BY-SA 3.0 Contributors: Own
work Original artist: Tyler.neysmith
File:Isobaric_process_plain.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/d0/Isobaric_process_plain.svg License: CC
BY-SA 3.0 Contributors: Own work Original artist: IkamusumeFan
File:Isochoric_process_SVG.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/9d/Isochoric_process_SVG.svg License: CC
BY-SA 3.0 Contributors: Own work Original artist: IkamusumeFan
File:Isothermal_process.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/fa/Isothermal_process.svg License: CC0 Contributors: Own work Original artist: Netheril96
File:JHLambert.jpg Source: https://upload.wikimedia.org/wikipedia/commons/9/9b/JHLambert.jpg License: Public domain Contributors: ?
Original artist: ?
File:Jacques_Alexandre_Csar_Charles.jpg Source: https://upload.wikimedia.org/wikipedia/commons/9/98/Jacques_Alexandre_C%C3%
A9sar_Charles.jpg License: Public domain Contributors: This image is available from the United States Library of Congress's Prints and Photographs division under the digital ID ppmsca.02185.
This tag does not indicate the copyright status of the attached work. A normal copyright tag is still required. See Commons:Licensing for more information.

Original artist: Unknown
File:James-clerk-maxwell3.jpg Source: https://upload.wikimedia.org/wikipedia/commons/6/6f/James-clerk-maxwell3.jpg License: Public
domain Contributors: ? Original artist: ?
File:Joule's_Apparatus_(Harper's_Scan).png Source: https://upload.wikimedia.org/wikipedia/commons/c/c3/Joule%27s_Apparatus_%28Harper%27s_Scan%29.png License: Public domain Contributors: Harper's New Monthly Magazine, No. 231, August, 1869. Original artist: Unknown
File:Linia_dilato.png Source: https://upload.wikimedia.org/wikipedia/commons/d/dc/Linia_dilato.png License: CC BY-SA 3.0 Contributors:
Own work Original artist: Walber
File:Liquid_helium_superfluid_phase.jpg Source: https://upload.wikimedia.org/wikipedia/commons/b/ba/Liquid_helium_superfluid_phase.jpg License: Public domain Contributors: Liquid_helium_superfluid_phase.tif Original artist: Bmatulis
File:Maquina_vapor_Watt_ETSIIM.jpg Source: https://upload.wikimedia.org/wikipedia/commons/9/9e/Maquina_vapor_Watt_ETSIIM.
jpg License: CC-BY-SA-3.0 Contributors: Enciclopedia Libre Original artist: Nicolás Pérez
File:Maxwell_Dist-Inverse_Speed.png Source: https://upload.wikimedia.org/wikipedia/en/d/d0/Maxwell_Dist-Inverse_Speed.png License:
Cc-by-sa-3.0 Contributors: ? Original artist: ?
File:P-v_diagram_of_a_simple_cycle.svg Source:
https://upload.wikimedia.org/wikipedia/commons/4/4c/P-v_diagram_of_a_simple_
cycle.svg License: CC0 Contributors: Own work Original artist: Olivier Cleynen
File:PV_plot_adiab_sim.png Source: https://upload.wikimedia.org/wikipedia/commons/1/10/PV_plot_adiab_sim.png License: Public domain Contributors: Own work Original artist: Mikiemike
File:PV_real1.PNG Source: https://upload.wikimedia.org/wikipedia/commons/8/8d/PV_real1.PNG License: CC-BY-SA-3.0 Contributors:
Own archive Original artist: Pedro Servera (© 2005)
File:Parmenides.jpg Source: https://upload.wikimedia.org/wikipedia/commons/e/ed/Parmenides.jpg License: CC-BY-SA-3.0 Contributors: ?
Original artist: ?
File:PdV_work_cycle.gif Source: https://upload.wikimedia.org/wikipedia/commons/c/c6/PdV_work_cycle.gif License: CC BY-SA 3.0 Contributors: Own work Original artist: Guy vandegrift
File:PlatformHolly.jpg Source: https://upload.wikimedia.org/wikipedia/commons/8/81/PlatformHolly.jpg License: Public domain Contributors: http://www.netl.doe.gov/technologies/oil-gas/Petroleum/projects/EP/ResChar/15127Venoco.htm -- U.S. Department of Energy Original
artist: employee of the U.S. government: public domain
File:Polytropic.gif Source: https://upload.wikimedia.org/wikipedia/commons/a/ad/Polytropic.gif License: CC BY-SA 3.0 Contributors: This
graphic was created with matplotlib. Original artist: IkamusumeFan
File:Portal-puzzle.svg Source: https://upload.wikimedia.org/wikipedia/en/f/fd/Portal-puzzle.svg License: Public domain Contributors: ? Original artist: ?
File:Pressure_exerted_by_collisions.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/94/Pressure_exerted_by_collisions.
svg License: CC BY-SA 3.0 Contributors: Own work, see http://www.becarlson.com/ Original artist: Becarlson
File:Pressure_force_area.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/ff/Pressure_force_area.svg License: CC BY-SA
3.0 Contributors: Own work Original artist: Klaus-Dieter Keller
File:Question_book-new.svg Source: https://upload.wikimedia.org/wikipedia/en/9/99/Question_book-new.svg License: Cc-by-sa-3.0 Contributors:
Created from scratch in Adobe Illustrator. Based on Image:Question book.png created by User:Equazcion Original artist:
Tkgd2007
File:Rail_buckle.jpg Source: https://upload.wikimedia.org/wikipedia/commons/b/b8/Rail_buckle.jpg License: Public domain Contributors:
Transferred from en.wikipedia to Commons. Original artist: The original uploader was Trainwatcher at English Wikipedia
File:Rankine_William_signature.jpg Source: https://upload.wikimedia.org/wikipedia/commons/5/58/Rankine_William_signature.jpg License: Public domain Contributors: Frontispiece of Miscellaneous Scientific Papers Original artist: William Rankine
File:Real_Gas_Isotherms.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/3e/Real_Gas_Isotherms.svg License: CC BY-SA
3.0 Contributors: Own work Original artist: Raoul NK
File:Red_x.svg Source: https://upload.wikimedia.org/wikipedia/en/b/ba/Red_x.svg License: PD Contributors: ? Original artist: ?
File:Robert_Boyle_0001.jpg Source: https://upload.wikimedia.org/wikipedia/commons/b/b3/Robert_Boyle_0001.jpg License: Public domain
Contributors: http://www.bbk.ac.uk/boyle/Issue4.html Original artist: Johann Kerseboom
File:SI_base_unit.svg Source: https://upload.wikimedia.org/wikipedia/commons/c/c8/SI_base_unit.svg License: CC BY-SA 3.0 Contributors:
I (Dono (talk)) created this work entirely by myself. Base on http://www.newscientist.com/data/images/archive/2622/26221501.jpg Original
artist: Dono (talk)
File:Sadi_Carnot.jpeg Source: https://upload.wikimedia.org/wikipedia/commons/8/80/Sadi_Carnot.jpeg License: Public domain Contributors: http://www-history.mcs.st-and.ac.uk/history/PictDisplay/Carnot_Sadi.html Original artist: Louis-Léopold Boilly
File:Savery-engine.jpg Source: https://upload.wikimedia.org/wikipedia/commons/c/cc/Savery-engine.jpg License: Public domain Contributors: Image copy/pasted from http://www.humanthermodynamics.com/HT-history.html Original artist: Institute of Human Thermodynamics
and IoHT Publishing Ltd.
File:Schematic_of_compressor.png Source: https://upload.wikimedia.org/wikipedia/commons/3/38/Schematic_of_compressor.png License:
CC BY-SA 3.0 Contributors: en:File:Schematic of throttling and compressor 01.jpg Original artist: en:User:Adwaele
File:Schematic_of_throttling.png Source: https://upload.wikimedia.org/wikipedia/commons/8/8f/Schematic_of_throttling.png License: CC
BY-SA 3.0 Contributors: en:File:Schematic of throttling and compressor 01.jpg Original artist: en:User:Adwaele
File:Science.jpg Source: https://upload.wikimedia.org/wikipedia/commons/5/54/Science.jpg License: Public domain Contributors: ? Original
artist: ?
File:Speakerlink-new.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/3b/Speakerlink-new.svg License: CC0 Contributors:
Own work Original artist: Kelvinsong
File:SpongeDiver.jpg Source: https://upload.wikimedia.org/wikipedia/commons/8/81/SpongeDiver.jpg License: Public domain Contributors:
Own work Original artist: Bryan Shrode
File:Stirling_Cycle.png Source: https://upload.wikimedia.org/wikipedia/commons/d/dc/Stirling_Cycle.png License: CC-BY-SA-3.0 Contributors: Transferred from en.wikipedia to Commons. Original artist: Zephyris at English Wikipedia
File:Stirling_Cycle.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/25/Stirling_Cycle.svg License: Public domain Contributors: Own work Original artist: Nickez
File:Stirling_Cycle_color.png Source: https://upload.wikimedia.org/wikipedia/commons/a/af/Stirling_Cycle_color.png License: Public domain Contributors: I created this modification of the original image (File:Stirling Cycle.svg) to clarify the temperature change that occurs during
the Stirling cycle Original artist: Kmote at English Wikipedia
File:Stylised_Lithium_Atom.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/e1/Stylised_Lithium_Atom.svg License: CC-BY-SA-3.0 Contributors: based off of Image:Stylised Lithium Atom.png by Halfdan. Original artist: SVG by Indolences. Recoloring and ironing
out some glitches done by Rainer Klute.
File:Symbol_book_class2.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/89/Symbol_book_class2.svg License: CC BY-SA
2.5 Contributors: Made by Lokal_Profil by combining: Original artist: Lokal_Profil
File:Symbol_list_class.svg Source: https://upload.wikimedia.org/wikipedia/en/d/db/Symbol_list_class.svg License: Public domain Contributors: ? Original artist: ?
File:System_boundary.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b6/System_boundary.svg License: Public domain
Contributors: en:Image:System-boundary.jpg Original artist: en:User:Wavesmikey, traced by User:Stannered
File:System_boundary2.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/63/System_boundary2.svg License: CC BY-SA 4.0
Contributors: Own work Original artist: Krauss
File:Temperature-entropy_chart_for_steam,_US_units.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/63/Temperature-entropy_chart_for_steam%2C_US_units.svg License: CC BY-SA 3.0 Contributors: Own work. Data retrieved from: E.W.
Lemmon, M.O. McLinden and D.G. Friend, Thermophysical Properties of Fluid Systems in NIST Chemistry WebBook, NIST Standard
Reference Database Number 69, Eds. P.J. Linstrom and W.G. Mallard, National Institute of Standards and Technology, Gaithersburg MD,
20899, http://webbook.nist.gov, (retrieved November 2, 2010).) Original artist: Emok
File:Thermally_Agitated_Molecule.gif Source: https://upload.wikimedia.org/wikipedia/commons/2/23/Thermally_Agitated_Molecule.gif
License: CC-BY-SA-3.0 Contributors: http://en.wikipedia.org/wiki/Image:Thermally_Agitated_Molecule.gif Original artist: en:User:Greg L
File:Thermodynamics.png Source: https://upload.wikimedia.org/wikipedia/commons/3/3d/Thermodynamics.png License: CC BY-SA 3.0
Contributors: Own work Original artist: Miketwardos
File:Translational_motion.gif Source: https://upload.wikimedia.org/wikipedia/commons/6/6d/Translational_motion.gif License: CC-BY-SA-3.0 Contributors: English Wikipedia Original artist: A.Greg, en:User:Greg L
File:Triple_expansion_engine_cropped.png Source: https://upload.wikimedia.org/wikipedia/commons/3/33/Triple_expansion_engine_
cropped.png License: CC BY 2.5 Contributors: crop of en::Image:Triple_expansion_engine_animation.gif Original artist: Emoscopes
File:Ts_diagram_of_N2_02.jpg Source: https://upload.wikimedia.org/wikipedia/en/0/03/Ts_diagram_of_N2_02.jpg License: CC-BY-SA-3.0 Contributors: made with slitewrite Original artist: Adwaele
File:Wiens_law.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a2/Wiens_law.svg License: CC-BY-SA-3.0 Contributors:
Own work based on JPG version Curva Planck TT.jpg Original artist: 4C
File:Wiki_letter_w_cropped.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/1c/Wiki_letter_w_cropped.svg License: CC-BY-SA-3.0 Contributors: This file was derived from Wiki letter w.svg Original artist: Derivative work by Thumperward
File:Wikiquote-logo.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/fa/Wikiquote-logo.svg License: Public domain Contributors: ? Original artist: ?
File:Wiktionary-logo-en.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/f8/Wiktionary-logo-en.svg License: Public domain
Contributors: Vector version of Image:Wiktionary-logo-en.png. Original artist: Vectorized by Fvasconcellos (talk contribs), based on original
logo tossed together by Brion Vibber
File:Willard_Gibbs.jpg Source: https://upload.wikimedia.org/wikipedia/commons/8/8b/Willard_Gibbs.jpg License: Public domain Contributors: ? Original artist: ?
File:William_Thomson_1st_Baron_Kelvin.jpg Source: https://upload.wikimedia.org/wikipedia/commons/d/de/William_Thomson_1st_
Baron_Kelvin.jpg License: Public domain Contributors: From http://ihm.nlm.nih.gov/images/B16057 (via en.wikipedia as Image:Lord+
Kelvin.jpg/all following user names refer to en.wikipedia): Original artist: Unknown
File:Zero-point_energy_v.s._motion.jpg Source: https://upload.wikimedia.org/wikipedia/commons/7/79/Zero-point_energy_v.s._motion.
jpg License: CC-BY-SA-3.0 Contributors: Transferred from en.wikipedia to Commons by Undead_warrior using CommonsHelper. Original
artist: Greg L at English Wikipedia

12.3 Content license


Creative Commons Attribution-Share Alike 3.0
