
Thermodynamics

Everything you need to know


Contents
Chapter 1. Introduction
  Classical Thermodynamics
  Statistical Thermodynamics
  Chemical Thermodynamics
  Equilibrium Thermodynamics
  Non-equilibrium Thermodynamics

Chapter 2. Laws of Thermodynamics
  Zeroth
  First
  Second
  Third

Chapter 3. History
  History of thermodynamics
  An Experimental Enquiry Concerning the Source of the Heat which is Excited by Friction

Chapter 4. System State
  Control volume
  Ideal gas
  Real gas

Chapter 5. System Processes
  Isobaric process
  Isochoric process
  Isothermal process
  Adiabatic process
  Polytropic process

Chapter 6. System Properties
  Introduction to entropy
  Entropy
  Pressure
  Thermodynamic temperature
  Volume

Chapter 7. Material Properties
  Heat capacity
  Compressibility
  Thermal expansion

Chapter 8. Potentials
  Thermodynamic potential
  Enthalpy
  Internal energy

Chapter 9. Equations
  Ideal gas law

Chapter 10. Fundamentals
  Fundamental thermodynamic relation
  Heat engine
  Carnot cycle

Chapter 11. Philosophy
  Heat death paradox
  Loschmidt's paradox

References
  Article Sources and Contributors
  Image Sources, Licenses and Contributors

Article Licenses
  License

Chapter 1. Introduction
Classical Thermodynamics

Annotated color version of the original 1824 Carnot heat engine diagram, showing the hot body (boiler), working body (system, steam), and cold body (water), with the letters labeled according to the stopping points in the Carnot cycle


Thermodynamics is a branch of natural science concerned with heat and its relation to energy and work. It defines macroscopic variables (such as temperature, internal energy, entropy, and pressure) that characterize materials and radiation, and explains how they are related and by what laws they change with time. Thermodynamics describes the average behavior of very large numbers of microscopic constituents, and its laws can be derived from statistical mechanics.

Thermodynamics applies to a wide variety of topics in science and engineering, such as engines, phase transitions, chemical reactions, transport phenomena, and even black holes. Results of thermodynamic calculations are essential for other fields of physics and for chemistry, chemical engineering, aerospace engineering, mechanical engineering, cell biology, biomedical engineering, and materials science, and are useful in other fields such as economics.

Much of the empirical content of thermodynamics is contained in the four laws. The first law asserts the existence of a quantity called the internal energy of a system, which is distinguishable from the kinetic energy of bulk movement of the system and from its potential energy with respect to its surroundings. The first law distinguishes transfers of energy between closed systems as heat and as work.[2][3][4] The second law concerns two quantities called temperature and entropy. Entropy expresses the limitations, arising from what is known as irreversibility, on the amount of thermodynamic work that can be delivered to an external system by a thermodynamic process. Temperature, whose properties are also partially described by the zeroth law of thermodynamics, quantifies the direction of energy flow as heat between two systems in thermal contact and quantifies the common-sense notions of "hot" and "cold".

Historically, thermodynamics developed out of a desire to increase the efficiency of early steam engines, particularly through the work of the French physicist Nicolas Léonard Sadi Carnot (1824), who believed that the efficiency of heat engines was the key that could help France win the Napoleonic Wars. The Irish-born British physicist Lord Kelvin was the first to formulate a concise definition of thermodynamics, in 1854: "Thermo-dynamics is the subject of the relation of heat to forces acting between contiguous parts of bodies, and the relation of heat to electrical agency."

Initially, the thermodynamics of heat engines concerned mainly the thermal properties of their 'working materials', such as steam. This concern was then linked to the study of energy transfers in chemical processes, for example to the investigation, published in 1840, of the heats of chemical reactions[5] by Germain Hess, which was not originally explicitly concerned with the relation between energy exchanges by heat and work. Chemical thermodynamics studies the role of entropy in chemical reactions.[6][7][8][9] Also, statistical thermodynamics, or statistical mechanics, gave explanations of macroscopic thermodynamics by statistical predictions of the collective motion of particles based on the mechanics of their microscopic behavior.

Introduction
The plain term 'thermodynamics' refers to a macroscopic description of bodies and processes.[10] "Any reference to atomic constitution is foreign to classical thermodynamics."[11] The qualified term 'statistical thermodynamics' refers to descriptions of bodies and processes in terms of the atomic constitution of matter, mainly described by sets of items all alike, so as to have equal probabilities. Thermodynamics arose from the study of energy transfers that can be strictly resolved into two distinct components, heat and work, specified by macroscopic variables.[12][13]

Thermodynamic equilibrium is one of the most important concepts for thermodynamics.[14] The temperature of a system in thermodynamic equilibrium is well defined, and is perhaps the most characteristic quantity of thermodynamics. As the systems and processes of interest are taken further from thermodynamic equilibrium, their exact thermodynamical study becomes more difficult. Relatively simple approximate calculations, however, using the variables of equilibrium thermodynamics, are of much practical value in engineering. In many important practical cases, such as heat engines or refrigerators, the systems consist of many subsystems at different temperatures and pressures. In practice, thermodynamic calculations deal effectively with these complicated dynamic systems provided the equilibrium thermodynamic variables are nearly enough well defined.

Basic for thermodynamics are the concepts of system and surroundings.[15] The surroundings of a thermodynamic system consist of physical devices and of other thermodynamic systems that can interact with it. An example of a thermodynamic surrounding is a heat bath, which is considered to be held at a prescribed temperature, regardless of the interactions it might have with the system.

There are two fundamental kinds of physical entity in thermodynamics: states of a system, and thermodynamic processes of a system. This allows two fundamental approaches to thermodynamic reasoning: that in terms of states of a system, and that in terms of cyclic processes of a system. Also necessary for thermodynamic reasoning are thermodynamic operations.

A thermodynamic system can be defined in terms of its states. In this way, a thermodynamic system is a macroscopic physical object, explicitly specified in terms of macroscopic physical and chemical variables that describe its macroscopic properties. The macroscopic state variables of thermodynamics have been recognized in the course of empirical work in physics and chemistry.

A thermodynamic system can also be defined in terms of the processes it can undergo. Of particular interest are cyclic processes. This was the way of the founders of thermodynamics in the first three quarters of the nineteenth century.

A thermodynamic operation is a conceptual step that changes the definition of a system or its surroundings. For example, the partition between two thermodynamic systems can be removed so as to produce a single system. There is a sense in which Maxwell's demon, if he existed, would be able to violate the laws of thermodynamics, because he is permitted to perform thermodynamic operations, which are permitted to be unnatural.

For thermodynamics and statistical thermodynamics to apply to a process in a body, it is necessary that the atomic mechanisms of the process fall into just two classes: those so rapid that, in the time frame of the process of interest, the atomic states effectively visit all of their accessible range, bringing the system to its state of internal thermodynamic equilibrium; and those so slow that their progress can be neglected in the time frame of the process of interest.[16][17] The rapid atomic mechanisms mediate the macroscopic changes that are of interest for thermodynamics and statistical thermodynamics, because they quickly bring the system near enough to thermodynamic equilibrium. "When intermediate rates are present, thermodynamics and statistical mechanics cannot be applied." Such intermediate-rate atomic processes do not bring the system near enough to thermodynamic equilibrium in the time frame of the macroscopic process of interest. This separation of time scales of atomic processes is a theme that recurs throughout the subject.

For example, classical thermodynamics is characterized by its study of materials that have equations of state or characteristic equations. They express relations between macroscopic mechanical variables and temperature that are reached much more rapidly than the progress of any imposed changes in the surroundings, and are in effect variables of state for thermodynamic equilibrium. They express the constitutive peculiarities of the material of the system. A classical material can usually be described by a function that makes pressure dependent on volume and temperature, the resulting pressure being established much more rapidly than any imposed change of volume or temperature.[18][19][20][21]

The present article takes a gradual approach to the subject, starting with a focus on cyclic processes and thermodynamic equilibrium, and then gradually beginning to further consider non-equilibrium systems.
Thermodynamic facts can often be explained by viewing macroscopic objects as assemblies of very many microscopic or atomic objects that obey Hamiltonian dynamics.[22][23] The microscopic or atomic objects exist in species, the objects of each species being all alike. Because of this likeness, statistical methods can be used to account for the macroscopic properties of the thermodynamic system in terms of the properties of the microscopic species. Such explanation is called statistical thermodynamics; it is also often referred to by the term 'statistical mechanics', though this term can have a wider meaning, referring to 'microscopic objects', such as economic quantities, that do not obey Hamiltonian dynamics.


History
The history of thermodynamics as a scientific discipline generally begins with Otto von Guericke who, in 1650, built and designed the world's first vacuum pump and demonstrated a vacuum using his Magdeburg hemispheres. Guericke was driven to make a vacuum in order to disprove Aristotle's long-held supposition that 'nature abhors a vacuum'. Shortly after Guericke, the physicist and chemist Robert Boyle had learned of Guericke's designs and, in 1656, in coordination with the scientist Robert Hooke, built an air pump. Using this pump, Boyle and Hooke noticed a correlation between pressure, temperature, and volume. In time, Boyle's Law was formulated, stating that for a gas at constant temperature, its pressure and volume are inversely proportional. In 1679, based on these concepts, an associate of Boyle's named Denis Papin built a steam digester, which was a closed vessel with a tightly fitting lid that confined steam until a high pressure was generated. Later designs implemented a steam release valve that kept the machine from exploding. By watching the valve rhythmically move up and down, Papin conceived of the idea of a piston and a cylinder engine. He did not, however, follow through with his design. Nevertheless, in 1697, based on Papin's designs, engineer Thomas Savery built the first engine, followed by Thomas Newcomen in 1712. Although these early engines were crude and inefficient, they attracted the attention of the leading scientists of the time.

The thermodynamicists representative of the original eight founding schools of thermodynamics. The schools with the most-lasting effect in founding the modern versions of thermodynamics are the Berlin school, particularly as established in Rudolf Clausius's 1865 textbook The Mechanical Theory of Heat; the Vienna school, with the statistical mechanics of Ludwig Boltzmann; and the Gibbsian school at Yale University, American engineer Willard Gibbs' 1876 On the Equilibrium of Heterogeneous Substances launching chemical thermodynamics.

The concepts of heat capacity and latent heat, which were necessary for the development of thermodynamics, were developed by Professor Joseph Black at the University of Glasgow, where James Watt worked as an instrument maker. Watt consulted with Black on tests of his steam engine, but it was Watt who conceived the idea of the external condenser, greatly raising the steam engine's efficiency.[24] Drawing on all the previous work led Sadi Carnot, the "father of thermodynamics", to publish Reflections on the Motive Power of Fire (1824), a discourse on heat, power, energy and engine efficiency. The paper outlined the basic energetic relations between the Carnot engine, the Carnot cycle, and motive power. It marked the start of thermodynamics as a modern science.

The first thermodynamic textbook was written in 1859 by William Rankine, originally trained as a physicist and a civil and mechanical engineering professor at the University of Glasgow. The first and second laws of thermodynamics emerged simultaneously in the 1850s, primarily out of the works of William Rankine, Rudolf Clausius, and William Thomson (Lord Kelvin). The foundations of statistical thermodynamics were set out by physicists such as James Clerk Maxwell, Ludwig Boltzmann, Max Planck, Rudolf Clausius and J. Willard Gibbs.

From 1873 to 1876, the American mathematical physicist Josiah Willard Gibbs published a series of three papers, the most famous being "On the equilibrium of heterogeneous substances". Gibbs showed how thermodynamic processes, including chemical reactions, could be graphically analyzed. By studying the energy, entropy, volume, chemical potential, temperature and pressure of the thermodynamic system, one can determine if a process would occur spontaneously. Chemical thermodynamics was further developed by Pierre Duhem, Gilbert N. Lewis, Merle Randall, and E. A. Guggenheim, who applied the mathematical methods of Gibbs.

Etymology
The etymology of thermodynamics has an intricate history. It was first spelled in a hyphenated form as an adjective (thermo-dynamic) and from 1854 to 1868 as the noun thermo-dynamics to represent the science of generalized heat engines. The components of the word thermo-dynamic are derived from the Greek words therme, meaning "heat," and dynamis, meaning "power" (Haynie claims that the word was coined around 1840).[25]

The lifetimes of some of the most important contributors to thermodynamics.

Pierre Perrot claims that the term thermodynamics was coined by James Joule in 1858 to designate the science of relations between heat and power. Joule, however, never used that term, but used instead the term perfect thermo-dynamic engine in reference to Thomson's 1849 phraseology. By 1858, thermo-dynamics, as a functional term, was used in William Thomson's paper An Account of Carnot's Theory of the Motive Power of Heat.

Branches of description
Thermodynamic systems are theoretical constructions used to model physical systems that exchange matter and energy in terms of the laws of thermodynamics. The study of thermodynamical systems has developed into several related branches, each using a different fundamental model as a theoretical or experimental basis, or applying the principles to varying types of systems.

Classical thermodynamics
Classical thermodynamics accounts for the adventures of thermodynamic systems in terms, either of their time-invariant equilibrium states, or else of their continually repeated cyclic processes, but, formally, not both in the same account. It uses only time-invariant, or equilibrium, macroscopic quantities measurable in the laboratory, counting as time-invariant a long-term time-average of a quantity, such as a flow, generated by a continually repetitive process.[26][27] Classical thermodynamics does not admit change over time as a fundamental factor in its account of processes. An equilibrium state stands endlessly without change over time, while a continually repeated cyclic process runs endlessly without change over time. In the account in terms of equilibrium states of a system, a state of thermodynamic equilibrium in a simple system (as defined below in this article), with no externally imposed force field, is spatially homogeneous. In the classical account strictly and purely in terms of cyclic processes, the spatial interior of the 'working body' of a cyclic process is not considered; the 'working body' thus does not have a defined internal thermodynamic state of its own because no assumption is made that it should be in thermodynamic equilibrium; only its inputs and outputs of energy as heat and work are considered.[28] It is of course possible, and indeed common, for the account in terms of equilibrium states of a system to describe cycles composed of indefinitely many equilibrium states.

Classical thermodynamics was originally concerned with the transformation of energy in cyclic processes, and the exchange of energy between closed systems defined only by their equilibrium states. For these, the distinction between transfers of energy as heat and as work was central. As classical thermodynamics developed, the distinction between heat and work became less central. This was because there was more interest in open systems, for which the distinction between heat and work is not simple, and is beyond the scope of the present article. Alongside the amount of heat transferred as a fundamental quantity, entropy, considered below, was gradually found to be a more generally applicable concept, especially when chemical reactions are of interest. Massieu in 1869 considered entropy as the basic dependent thermodynamic variable, with energy potentials and the reciprocal of thermodynamic temperature as fundamental independent variables. Massieu functions can be useful in present-day non-equilibrium thermodynamics. In 1875, in the work of Josiah Willard Gibbs, the basic thermodynamic quantities were energy potentials, such as internal energy, as dependent variables, and entropy, considered as a fundamental independent variable.[29]

All actual physical processes are to some degree irreversible. Classical thermodynamics can consider irreversible processes, but its account in exact terms is restricted to variables that refer only to initial and final states of thermodynamic equilibrium, or to rates of input and output that do not change with time. For example, classical thermodynamics can consider long-time-average rates of flows generated by continually repeated irreversible cyclic processes. Also it can consider irreversible changes between equilibrium states of systems consisting of several phases (as defined below in this article), or with removable or replaceable partitions. But for systems that are described in terms of equilibrium states, it considers neither flows, nor spatial inhomogeneities in simple systems with no externally imposed force fields such as gravity. In the account in terms of equilibrium states of a system, descriptions of irreversible processes refer only to initial and final static equilibrium states; rates of progress are not considered.[30][31]

Local equilibrium thermodynamics


Local equilibrium thermodynamics is concerned with the time courses and rates of progress of irreversible processes in systems that are smoothly spatially inhomogeneous. It admits time as a fundamental quantity, but only in a restricted way. Rather than considering time-invariant flows as long-term-average rates of cyclic processes, local equilibrium thermodynamics considers time-varying flows in systems that are described by states of local thermodynamic equilibrium, as follows. For processes that involve only suitably small and smooth spatial inhomogeneities and suitably small changes with time, a good approximation can be found through the assumption of local thermodynamic equilibrium. Within the large or global region of a process, for a suitably small local region, this approximation assumes that a quantity known as the entropy of the small local region can be defined in a particular way. That particular way of definition of entropy is largely beyond the scope of the present article, but here it may be said that it is entirely derived from the concepts of classical thermodynamics; in particular, neither flow rates nor changes over time are admitted into the definition of the entropy of the small local region. It is assumed without proof that the instantaneous global entropy of a non-equilibrium system can be found by adding up the simultaneous instantaneous entropies of its constituent small local regions. Local equilibrium thermodynamics considers processes that involve the time-dependent production of entropy by dissipative processes, in which kinetic energy of bulk flow and chemical potential energy are converted into internal energy at time-rates that are explicitly accounted for. Time-varying bulk flows and specific diffusional flows are considered, but they are required to be dependent variables, derived only from material properties described only by static macroscopic equilibrium states of small local regions. The independent state variables of a small local region are only those of classical thermodynamics.


Generalized or extended thermodynamics


Like local equilibrium thermodynamics, generalized or extended thermodynamics also is concerned with the time courses and rates of progress of irreversible processes in systems that are smoothly spatially inhomogeneous. It describes time-varying flows in terms of states of suitably small local regions within a global region that is smoothly spatially inhomogeneous, rather than considering flows as time-invariant long-term-average rates of cyclic processes. In its accounts of processes, generalized or extended thermodynamics admits time as a fundamental quantity in a more far-reaching way than does local equilibrium thermodynamics. The states of small local regions are defined by macroscopic quantities that are explicitly allowed to vary with time, including time-varying flows. Generalized thermodynamics might tackle such problems as ultrasound or shock waves, in which there are strong spatial inhomogeneities and changes in time fast enough to outpace a tendency towards local thermodynamic equilibrium. Generalized or extended thermodynamics is a diverse and developing project, rather than a more or less completed subject such as is classical thermodynamics.[32][33] For generalized or extended thermodynamics, the definition of the quantity known as the entropy of a small local region is in terms beyond those of classical thermodynamics; in particular, flow rates are admitted into the definition of the entropy of a small local region. The independent state variables of a small local region include flow rates, which are not admitted as independent variables for the small local regions of local equilibrium thermodynamics. Outside the range of classical thermodynamics, the definition of the entropy of a small local region is no simple matter. For a thermodynamic account of a process in terms of the entropies of small local regions, the definition of entropy should be such as to ensure that the second law of thermodynamics applies in each small local region. It is often assumed without proof that the instantaneous global entropy of a non-equilibrium system can be found by adding up the simultaneous instantaneous entropies of its constituent small local regions. For a given physical process, the selection of suitable independent local non-equilibrium macroscopic state variables for the construction of a thermodynamic description calls for qualitative physical understanding, rather than being a simply mathematical problem concerned with a uniquely determined thermodynamic description. A suitable definition of the entropy of a small local region depends on the physically insightful and judicious selection of the independent local non-equilibrium macroscopic state variables, and different selections provide different generalized or extended thermodynamical accounts of one and the same given physical process. This is one of the several good reasons for considering entropy as an epistemic physical variable, rather than as a simply material quantity. According to a respected author: "There is no compelling reason to believe that the classical thermodynamic entropy is a measurable property of nonequilibrium phenomena, ..."[34]

Statistical thermodynamics
Statistical thermodynamics, also called statistical mechanics, emerged with the development of atomic and molecular theories in the second half of the 19th century and early 20th century. It provides an explanation of classical thermodynamics. It considers the microscopic interactions between individual particles and their collective motions, in terms of classical or of quantum mechanics. Its explanation is in terms of statistics that rest on the fact that the system is composed of several species of particles or collective motions, the members of each species respectively being in some sense all alike.

Thermodynamic equilibrium
Equilibrium thermodynamics studies transformations of matter and energy in systems at or near thermodynamic equilibrium. In thermodynamic equilibrium, a system's properties are, by definition, unchanging in time. In thermodynamic equilibrium no macroscopic change is occurring or can be triggered; within the system, every microscopic process is balanced by its opposite; this is called the principle of detailed balance. A central aim in equilibrium thermodynamics is: given a system in a well-defined initial state, subject to specified constraints, to calculate what the equilibrium state of the system is.[35]

In theoretical studies, it is often convenient to consider the simplest kind of thermodynamic system. This is defined variously by different authors.[36][37][38][39] For the present article, the following definition is convenient, as abstracted from the definitions of various authors. A region of material with all intensive properties continuous in space and time is called a phase. A simple system is for the present article defined as one that consists of a single phase of a pure chemical substance, with no interior partitions. Within a simple isolated thermodynamic system in thermodynamic equilibrium, in the absence of externally imposed force fields, all properties of the material of the system are spatially homogeneous.[40] Much of the basic theory of thermodynamics is concerned with homogeneous systems in thermodynamic equilibrium.[41]

Most systems found in nature or considered in engineering are not in thermodynamic equilibrium, exactly considered. They are changing or can be triggered to change over time, and are continuously and discontinuously subject to flux of matter and energy to and from other systems. For example, according to Callen, "in absolute thermodynamic equilibrium all radioactive materials would have decayed completely and nuclear reactions would have transmuted all nuclei to the most stable isotopes. Such processes, which would take cosmic times to complete, generally can be ignored." Such processes being ignored, many systems in nature are close enough to thermodynamic equilibrium that for many purposes their behaviour can be well approximated by equilibrium calculations.

Quasi-static transfers between simple systems are nearly in thermodynamic equilibrium and are reversible
It very much eases and simplifies theoretical thermodynamical studies to imagine transfers of energy and matter between two simple systems that proceed so slowly that at all times each simple system considered separately is near enough to thermodynamic equilibrium. Such processes are sometimes called quasi-static and are near enough to being reversible.[42][43]

Natural processes are partly described by tendency towards thermodynamic equilibrium and are irreversible
If not initially in thermodynamic equilibrium, simple isolated thermodynamic systems, as time passes, tend to evolve naturally towards thermodynamic equilibrium. In the absence of externally imposed force fields, they become homogeneous in all their local properties. Such homogeneity is an important characteristic of a system in thermodynamic equilibrium in the absence of externally imposed force fields. Many thermodynamic processes can be modeled by compound or composite systems, consisting of several or many contiguous component simple systems, initially not in thermodynamic equilibrium, but allowed to transfer mass and energy between them. Natural thermodynamic processes are described in terms of a tendency towards thermodynamic equilibrium within simple systems and in transfers between contiguous simple systems. Such natural processes are irreversible.[44]

Non-equilibrium thermodynamics
Non-equilibrium thermodynamics[45] is a branch of thermodynamics that deals with systems that are not in thermodynamic equilibrium; it is also called thermodynamics of irreversible processes. Non-equilibrium thermodynamics is concerned with transport processes and with the rates of chemical reactions.[46] Non-equilibrium systems can be in stationary states that are not homogeneous even when there is no externally imposed field of force; in this case, the description of the internal state of the system requires a field theory.[47][48][49] One of the methods of dealing with non-equilibrium systems is to introduce so-called 'internal variables'. These are quantities that express the local state of the system, besides the usual local thermodynamic variables; in a sense such variables might be seen as expressing the 'memory' of the materials. Hysteresis may sometimes be described in this way. In contrast to the usual thermodynamic variables, 'internal variables' cannot be controlled by external manipulations.[50] This approach is usually unnecessary for gases and liquids, but may be useful for solids.[51] Many natural systems still today remain beyond the scope of currently known macroscopic thermodynamic methods.

Laws of thermodynamics
Thermodynamics states a set of four laws that are valid for all systems that fall within the constraints implied by each. In the various theoretical descriptions of thermodynamics these laws may be expressed in seemingly differing forms, but the most prominent formulations are the following:

Zeroth law of thermodynamics: If two systems are each in thermal equilibrium with a third, they are also in thermal equilibrium with each other.

This statement implies that thermal equilibrium is an equivalence relation on the set of thermodynamic systems under consideration. Systems are said to be in thermal equilibrium with each other if spontaneous molecular thermal energy exchanges between them do not lead to a net exchange of energy. This law is tacitly assumed in every measurement of temperature. For two bodies known to be at the same temperature, deciding if they are in thermal equilibrium when put into thermal contact does not require actually bringing them into contact and measuring any changes of their observable properties in time.[52] In traditional statements, the law provides an empirical definition of temperature and a justification for the construction of practical thermometers. In contrast to absolute thermodynamic temperatures, empirical temperatures are measured just by the mechanical properties of bodies, such as their volumes, without reliance on the concepts of energy, entropy or the first, second, or third laws of thermodynamics.[53] Empirical temperatures lead to calorimetry for heat transfer in terms of the mechanical properties of bodies, without reliance on mechanical concepts of energy.

The physical content of the zeroth law has long been recognized. For example, Rankine in 1853 defined temperature as follows: "Two portions of matter are said to have equal temperatures when neither tends to communicate heat to the other."[54] Maxwell in 1872 stated a "Law of Equal Temperatures".[55] He also stated: "All Heat is of the same kind."[56] Planck explicitly assumed and stated it in its customary present-day wording in his formulation of the first two laws.[57] By the time the desire arose to number it as a law, the other three had already been assigned numbers, and so it was designated the zeroth law.

First law of thermodynamics: The increase in internal energy of a closed system is equal to the difference of the heat supplied to the system and the work done by it: ΔU = Q − W.[58][59][60][61][62][63][64][65][66][67]

The first law of thermodynamics asserts the existence of a state variable for a system, the internal energy, and tells how it changes in thermodynamic processes. The law allows a given internal energy of a system to be reached by any combination of heat and work. It is important that internal energy is a variable of state of the system (see Thermodynamic state) whereas heat and work are variables that describe processes or changes of the state of systems. The first law observes that the internal energy of an isolated system obeys the principle of conservation of energy, which states that energy can be transformed (changed from one form to another), but cannot be created or destroyed.[68][69][70][71]

Second law of thermodynamics: Heat cannot spontaneously flow from a colder location to a hotter location.

The second law of thermodynamics is an expression of the universal principle of dissipation of kinetic and potential energy observable in nature.
The second law is an observation of the fact that over time, differences in temperature, pressure, and chemical potential tend to even out in a physical system that is isolated from the outside world. Entropy is a measure of how much this process has progressed. The entropy of an isolated system that is not in equilibrium tends to increase over time, approaching a maximum value at equilibrium. In classical thermodynamics, the second law is a basic postulate applicable to any system involving heat energy transfer; in statistical thermodynamics, the second law is a consequence of the assumed randomness of molecular chaos. There are many versions of the second law, but they all have the same effect, which is to explain the phenomenon of irreversibility in nature.

Third law of thermodynamics: As a system approaches absolute zero the entropy of the system approaches a minimum value.

The third law of thermodynamics is a statistical law of nature regarding entropy and the impossibility of reaching absolute zero of temperature. This law provides an absolute reference point for the determination of entropy. The entropy determined relative to this point is the absolute entropy. Alternate definitions are, "the entropy of all systems and of all states of a system is smallest at absolute zero," or equivalently "it is impossible to reach the absolute zero of temperature by any finite number of processes". Absolute zero is −273.15 °C (degrees Celsius), or −459.67 °F (degrees Fahrenheit), or 0 K (kelvin).
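To make the sign convention of the first law concrete, here is a minimal Python sketch; Q is heat supplied to the system and W is work done by it, as in the statement above, and the numbers are illustrative assumptions.

```python
def internal_energy_change(heat_in, work_out):
    """First law for a closed system: dU = Q - W, with Q the heat supplied
    to the system and W the work done by the system (both in joules)."""
    return heat_in - work_out

# A gas absorbs 800 J of heat and does 300 J of expansion work on its surroundings:
dU = internal_energy_change(heat_in=800.0, work_out=300.0)
print(dU)  # 500.0 -- the remaining energy is stored as internal energy
```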


System models
Types of transfers permitted

type of partition                             | Mass and energy | Work | Heat
permeable to matter                           | +               | 0    | 0
permeable to energy but impermeable to matter | 0               | +    | +
adiabatic                                     | 0               | +    | 0
adynamic and impermeable to matter            | 0               | 0    | +
isolating                                     | 0               | 0    | 0

A diagram of a generic thermodynamic system

An important concept in thermodynamics is the thermodynamic system, a precisely defined region of the universe under study. Everything in the universe except the system is known as the surroundings. A system is separated from the remainder of the universe by a boundary, which may be actual, or merely notional and fictive, but by convention delimits a finite volume. Transfers of work, heat, or matter between the system and the surroundings take place across this boundary. The boundary may or may not have properties that restrict what can be transferred across it. A system may have several distinct boundary sectors or partitions separating it from the surroundings, each characterized by how it restricts transfers, and being permeable to its characteristic transferred quantities.

The volume can be the region surrounding a single atom resonating energy, as Max Planck defined in 1900;[citation needed] it can be a body of steam or air in a steam engine, such as Sadi Carnot defined in 1824; it can be the body of a tropical cyclone, such as Kerry Emanuel theorized in 1986 in the field of atmospheric thermodynamics; it could also be just one nuclide (i.e. a system of quarks) as hypothesized in quantum thermodynamics. Anything that passes across the boundary needs to be accounted for in a proper transfer balance equation. Thermodynamics is largely about such transfers.

Boundary sectors are of various characters: rigid, flexible, fixed, moveable, actually restrictive, and fictive or not actually restrictive. For example, in an engine, a fixed boundary sector means the piston is locked at its position; then no pressure-volume work is done across it. In that same engine, a moveable boundary allows the piston to move in and out, permitting pressure-volume work. There is no restrictive boundary sector for the whole earth including its atmosphere, and so roughly speaking, no pressure-volume work is done on or by the whole earth system. Such a system is sometimes said to be diabatically heated or cooled by radiation.[72][73]

Thermodynamics distinguishes classes of systems by their boundary sectors. An open system has a boundary sector that is permeable to matter; such a sector is usually permeable also to energy, but the energy that passes cannot in general be uniquely sorted into heat and work components. Open system boundaries may be either actually restrictive, or else non-restrictive. A closed system has no boundary sector that is permeable to matter, but in general its boundary is permeable to energy; for closed systems, boundaries are totally prohibitive of matter transfer. An adiabatically isolated system has only adiabatic boundary sectors: energy can be transferred as work, but transfers of matter and of energy as heat are prohibited. A purely diathermically isolated system has only boundary sectors permeable only to heat; it is sometimes said to be adynamically isolated and closed to matter transfer. A process in which no work is transferred is sometimes called adynamic.[74] An isolated system has only isolating boundary sectors; nothing can be transferred into or out of it.

Engineering and natural processes are often described as composites of many different component simple systems, sometimes with unchanging or changing partitions between them. A change of partition is an example of a thermodynamic operation.
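The classification just described can be summarized in a small lookup structure. The sketch below is an illustrative Python encoding of the "Types of transfers permitted" table above, not a standard API; the names are chosen here for the example.

```python
# Permitted transfers across each type of partition, per the table above:
# (matter with its accompanying energy, work, heat).
PERMITTED_TRANSFERS = {
    "permeable to matter":                        (True,  False, False),
    "permeable to energy, impermeable to matter": (False, True,  True),
    "adiabatic":                                  (False, True,  False),
    "adynamic and impermeable to matter":         (False, False, True),
    "isolating":                                  (False, False, False),
}

def can_transfer(partition, quantity):
    """quantity is one of 'matter', 'work', 'heat'."""
    matter, work, heat = PERMITTED_TRANSFERS[partition]
    return {"matter": matter, "work": work, "heat": heat}[quantity]

print(can_transfer("adiabatic", "heat"))  # False: adiabatic walls block heat
print(can_transfer("adiabatic", "work"))  # True: work can still be transferred
```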


States and processes


There are three fundamental kinds of entity in thermodynamics: states of a system, processes of a system, and thermodynamic operations. This allows three fundamental approaches to thermodynamic reasoning: that in terms of states of thermodynamic equilibrium of a system, that in terms of time-invariant processes of a system, and that in terms of cyclic processes of a system.

The approach through states of thermodynamic equilibrium of a system requires a full account of the state of the system as well as a notion of process from one state to another of a system, but may require only an idealized or partial account of the state of the surroundings of the system or of other systems.

The method of description in terms of states of thermodynamic equilibrium has limitations. For example, processes in a region of turbulent flow, or in a burning gas mixture, or in a Knudsen gas may be beyond "the province of thermodynamics".[75][76][77] This problem can sometimes be circumvented through the method of description in terms of cyclic or of time-invariant flow processes. This is part of the reason why the founders of thermodynamics often preferred the cyclic process description.

Approaches through processes of time-invariant flow of a system are used for some studies. Some processes, for example Joule-Thomson expansion, are studied through steady-flow experiments, but can be accounted for by distinguishing the steady bulk flow kinetic energy from the internal energy, and thus can be regarded as within the scope of classical thermodynamics defined in terms of equilibrium states or of cyclic processes.[78] Other flow processes, for example thermoelectric effects, are essentially defined by the presence of differential flows or diffusion, so that they cannot be adequately accounted for in terms of equilibrium states or classical cyclic processes.[79][80]

The notion of a cyclic process does not require a full account of the state of the system, but does require a full account of how the process occasions transfers of matter and energy between the principal system (which is often called the working body) and its surroundings, which must include at least two heat reservoirs at different known and fixed temperatures, one hotter than the principal system and the other colder than it, as well as a reservoir that can receive energy from the system as work and can do work on the system. The reservoirs can alternatively be regarded as auxiliary idealized component systems, alongside the principal system. Thus an account in terms of cyclic processes requires at least four contributory component systems. The independent variables of this account are the amounts of energy that enter and leave the idealized auxiliary systems. In this kind of account, the working body is often regarded as a "black box",[81] and its own state is not specified. In this approach, the notion of a properly numerical scale of empirical temperature is a presupposition of thermodynamics, not a notion constructed by or derived from it.


Account in terms of states of thermodynamic equilibrium


When a system is at thermodynamic equilibrium under a given set of conditions of its surroundings, it is said to be in a definite thermodynamic state, which is fully described by its state variables. If a system is simple as defined above, and is in thermodynamic equilibrium, and is not subject to an externally imposed force field, such as gravity, electricity, or magnetism, then it is homogeneous, that is to say, spatially uniform in all respects.[82] In a sense, a homogeneous system can be regarded as spatially zero-dimensional, because it has no spatial variation.

If a system in thermodynamic equilibrium is homogeneous, then its state can be described by a few physical variables, which are mostly classifiable as intensive variables and extensive variables.[83] An intensive variable is one that is unchanged by the thermodynamic operation of scaling of a system. An extensive variable is one that simply scales with the scaling of a system, without the further requirement, used just below here, of additivity even when there is inhomogeneity of the added systems. Examples of extensive thermodynamic variables are total mass and total volume. Under the above definition, entropy is also regarded as an extensive variable. Examples of intensive thermodynamic variables are temperature, pressure, and chemical concentration; intensive thermodynamic variables are defined at each spatial point and each instant of time in a system. Physical macroscopic variables can be mechanical, material, or thermal. Temperature is a thermal variable; according to Guggenheim, "the most important conception in thermodynamics is temperature."

Intensive variables have the property that if any number of systems, each in its own separate homogeneous thermodynamic equilibrium state, all with the same respective values of all of their intensive variables, regardless of the values of their extensive variables, are laid contiguously with no partition between them, so as to form a new system, then the values of the intensive variables of the new system are the same as those of the separate constituent systems. Such a composite system is in a homogeneous thermodynamic equilibrium. Examples of intensive variables are temperature, chemical concentration, pressure, density of mass, density of internal energy, and, when it can be properly defined, density of entropy.[84] In other words, intensive variables are not altered by the thermodynamic operation of scaling.

For the immediately present account just below, an alternative definition of extensive variables is considered, that requires that if any number of systems, regardless of their possible separate thermodynamic equilibrium or non-equilibrium states or intensive variables, are laid side by side with no partition between them so as to form a new system, then the values of the extensive variables of the new system are the sums of the values of the respective extensive variables of the individual separate constituent systems. Obviously, there is no reason to expect such a composite system to be in a homogeneous thermodynamic equilibrium. Examples of extensive variables in this alternative definition are mass, volume, and internal energy.
They depend on the total quantity of mass in the system.[85] In other words, although extensive variables scale with the system under the thermodynamic operation of scaling, nevertheless the present alternative definition of an extensive variable requires more than this: it requires also its additivity regardless of the inhomogeneity (or equality or inequality of the values of the intensive variables) of the component systems.

Though, when it can be properly defined, density of entropy is an intensive variable, for inhomogeneous systems, entropy itself does not fit into this alternative classification of state variables.[86][87] The reason is that entropy is a property of a system as a whole, and not necessarily related simply to its constituents separately. It is true that for any number of systems each in its own separate homogeneous thermodynamic equilibrium, all with the same values of intensive variables, removal of the partitions between the separate systems results in a composite homogeneous system in thermodynamic equilibrium, with all the values of its intensive variables the same as those of the constituent systems, and it is reservedly or conditionally true that the entropy of such a restrictively defined composite system is the sum of the entropies of the constituent systems. But if the constituent systems do not satisfy these restrictive conditions, the entropy of a composite system cannot be expected to be the sum of the entropies of the constituent systems, because the entropy is a property of the composite system as a whole. Therefore, though under these restrictive reservations, entropy satisfies some requirements for extensivity defined just above, entropy in general does not fit the immediately present definition of an extensive variable. Being neither an intensive variable nor an extensive variable according to the immediately present definition, entropy is thus a stand-out variable, because it is a state variable of a system as a whole. A non-equilibrium system can have a very inhomogeneous dynamical structure. This is one reason for distinguishing the study of equilibrium thermodynamics from the study of non-equilibrium thermodynamics.

The physical reason for the existence of extensive variables is the time-invariance of volume in a given inertial reference frame, and the strictly local conservation of mass, momentum, angular momentum, and energy. As noted by Gibbs, entropy is unlike energy and mass, because it is not locally conserved. The stand-out quantity entropy is never conserved in real physical processes; all real physical processes are irreversible.[88] The motion of planets seems reversible on a short time scale (millions of years), but their motion, according to Newton's laws, is mathematically an example of deterministic chaos. Eventually a planet suffers an unpredictable collision with an object from its surroundings, outer space in this case, and consequently its future course is radically unpredictable. Theoretically this can be expressed by saying that every natural process dissipates some information from the predictable part of its activity into the unpredictable part. The predictable part is expressed in the generalized mechanical variables, and the unpredictable part in heat.

Other state variables can be regarded as conditionally 'extensive', subject to reservation as above, but not extensive as defined above. Examples are the Gibbs free energy, the Helmholtz free energy, and the enthalpy. Consequently, just because for some systems under particular conditions of their surroundings such state variables are conditionally conjugate to intensive variables, such conjugacy does not make such state variables extensive as defined above. This is another reason for distinguishing the study of equilibrium thermodynamics from the study of non-equilibrium thermodynamics. In another way of thinking, this explains why heat is to be regarded as a quantity that refers to a process and not to a state of a system.
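The scaling operation that defines these terms can be checked concretely. Here is a minimal sketch, assuming an ideal gas with illustrative numbers: scaling the system by a factor lam multiplies the extensive variables (amount of substance, volume) by lam, while the intensive pressure is unchanged.

```python
R = 8.314  # gas constant, J/(mol K)

def ideal_gas_pressure(n, V, T):
    """Pressure (Pa) of n moles of ideal gas in volume V (m^3) at T (K)."""
    return n * R * T / V

n, V, T = 1.0, 0.024, 290.0   # 1 mol in 24 L at 290 K (illustrative)
lam = 3.0                     # scale the system threefold

p_before = ideal_gas_pressure(n, V, T)
p_after = ideal_gas_pressure(lam * n, lam * V, T)  # n and V are extensive
assert abs(p_before - p_after) < 1e-9              # p is intensive: unchanged
print(f"p = {p_before:.0f} Pa before and after scaling")
```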
A system with no internal partitions, and in thermodynamic equilibrium, can be inhomogeneous in the following respect: it can consist of several so-called 'phases', each homogeneous in itself, in immediate contiguity with other phases of the system, but distinguishable by their having various respectively different physical characters, with discontinuity of intensive variables at the boundaries between the phases; a mixture of different chemical species is considered homogeneous for this purpose if it is physically homogeneous.[89] For example, a vessel can contain a system consisting of water vapour overlying liquid water; then there is a vapour phase and a liquid phase, each homogeneous in itself, but still in thermodynamic equilibrium with the other phase. For the immediately present account, systems with multiple phases are not considered, though for many thermodynamic questions, multiphase systems are important.


Equation of state

The macroscopic variables of a thermodynamic system in thermodynamic equilibrium, in which temperature is well defined, can be related to one another through equations of state or characteristic equations. They express the constitutive peculiarities of the material of the system. The equation of state must comply with some thermodynamic constraints, but cannot be derived from the general principles of thermodynamics alone.
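As a concrete illustration, the sketch below writes two familiar equations of state as functions giving pressure in terms of volume and temperature, in the spirit of the constitutive relations just described. The van der Waals constants used are rough textbook values for carbon dioxide, assumed here only for illustration.

```python
R = 8.314  # gas constant, J/(mol K)

def p_ideal(n, V, T):
    """Ideal gas equation of state: p V = n R T."""
    return n * R * T / V

def p_van_der_waals(n, V, T, a=0.364, b=4.27e-5):
    """Van der Waals equation of state, solved for p:
    (p + a n^2 / V^2) (V - n b) = n R T.
    a in Pa m^6/mol^2, b in m^3/mol; defaults are rough values for CO2."""
    return n * R * T / (V - n * b) - a * n**2 / V**2

n, V, T = 1.0, 1.0e-3, 300.0  # 1 mol in 1 L at 300 K (illustrative)
print(f"ideal:         {p_ideal(n, V, T):.3e} Pa")
print(f"van der Waals: {p_van_der_waals(n, V, T):.3e} Pa")  # visibly lower
```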


Thermodynamic processes between states of thermodynamic equilibrium


A thermodynamic process is defined by changes of state internal to the system of interest, combined with transfers of matter and energy to and from the surroundings of the system or to and from other systems. A system is demarcated from its surroundings or from other systems by partitions that more or less separate them, and may move as a piston to change the volume of the system and thus transfer work.

Dependent and independent variables for a process

A process is described by changes in values of state variables of systems or by quantities of exchange of matter and energy between systems and surroundings. The change must be specified in terms of prescribed variables. The choice of which variables are to be used is made in advance of consideration of the course of the process, and cannot be changed. Certain of the variables chosen in advance are called the independent variables.[90] From changes in independent variables may be derived changes in other variables called dependent variables. For example, a process may occur at constant pressure, with pressure prescribed as an independent variable and temperature changed as another independent variable; then changes in volume are considered as dependent. Careful attention to this principle is necessary in thermodynamics.[91]

Changes of state of a system

In the approach through equilibrium states of the system, a process can be described in two main ways. In one way, the system is considered to be connected to the surroundings by some kind of more or less separating partition, and allowed to reach equilibrium with the surroundings with that partition in place. Then, while the separative character of the partition is kept unchanged, the conditions of the surroundings are changed, and exert their influence on the system again through the separating partition, or the partition is moved so as to change the volume of the system; and a new equilibrium is reached. For example, a system is allowed to reach equilibrium with a heat bath at one temperature; then the temperature of the heat bath is changed and the system is allowed to reach a new equilibrium; if the partition allows conduction of heat, the new equilibrium is different from the old equilibrium.

In the other way, several systems are connected to one another by various kinds of more or less separating partitions, and allowed to reach equilibrium with each other, with those partitions in place. In this way, one may speak of a 'compound system'. Then one or more partitions is removed or changed in its separative properties or moved, and a new equilibrium is reached. The Joule-Thomson experiment is an example of this; a tube of gas is separated from another tube by a porous partition; the volume available in each of the tubes is determined by respective pistons; equilibrium is established with an initial set of volumes; the volumes are changed and a new equilibrium is established.[92][93][94][95] Another example is in separation and mixing of gases, with use of chemically semi-permeable membranes.[96]

Commonly considered thermodynamic processes

It is often convenient to study a thermodynamic process in which a single variable, such as temperature, pressure, or volume, is held fixed. Furthermore, it is useful to group these processes into pairs, in which each variable held constant is one member of a conjugate pair. Several commonly studied thermodynamic processes are:

Isobaric process: occurs at constant pressure
Isochoric process: occurs at constant volume (also called isometric/isovolumetric)
Isothermal process: occurs at a constant temperature
Adiabatic process: occurs without loss or gain of energy as heat
Isentropic process: a reversible adiabatic process, occurring at constant entropy; it is a fictional idealization. Conceptually it is possible to physically conduct a process that keeps the entropy of the system constant, allowing systematically controlled removal of heat, by conduction to a cooler body, to compensate for entropy produced within the system by irreversible work done on the system. Such isentropic conduct of a process seems called for when the entropy of the system is considered as an independent variable, as for example when the internal energy is considered as a function of the entropy and volume of the system, the natural variables of the internal energy as studied by Gibbs.
Isenthalpic process: occurs at a constant enthalpy
Isolated process: no matter or energy (neither as work nor as heat) is transferred into or out of the system

It is sometimes of interest to study a process in which several variables are controlled, subject to some specified constraint. In a system in which a chemical reaction can occur, for example, in which the pressure and temperature can affect the equilibrium composition, a process might occur in which temperature is held constant but pressure is slowly altered, just so that chemical equilibrium is maintained all the way. There is a corresponding process at constant temperature in which the final pressure is the same but is reached by a rapid jump. Then it can be shown that the volume change resulting from the rapid jump process is smaller than that from the slow equilibrium process. The work transferred differs between the two processes.
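For two of the processes listed above, the pressure-volume work done by the system has a simple closed form for an ideal gas undergoing quasi-static changes. A minimal sketch with illustrative numbers:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def work_isobaric(p, V1, V2):
    """Work done by the system at constant pressure: W = p (V2 - V1)."""
    return p * (V2 - V1)

def work_isothermal(n, T, V1, V2):
    """Work done by n moles of ideal gas expanding at constant temperature:
    W = n R T ln(V2 / V1)."""
    return n * R * T * math.log(V2 / V1)

# Doubling the volume from 1 L to 2 L:
print(work_isobaric(1.0e5, 1.0e-3, 2.0e-3))         # 100.0 J at 1 bar
print(work_isothermal(1.0, 300.0, 1.0e-3, 2.0e-3))  # about 1729 J at 300 K
```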


Account in terms of cyclic processes


A cyclic process[97] is a process that can be repeated indefinitely often without changing the final state of the system in which the process occurs. The only traces of the effects of a cyclic process are to be found in the surroundings of the system or in other systems. This is the kind of process that concerned early thermodynamicists such as Carnot, and in terms of which Kelvin defined absolute temperature,[98][99] before the use of the quantity of entropy by Rankine[100] and its clear identification by Clausius.[101] For some systems, for example with some plastic working substances, cyclic processes are practically unfeasible because the working substance undergoes practically irreversible changes. This is why mechanical devices are lubricated with oil, and one of the reasons why electrical devices are often useful.

A cyclic process of a system requires in its surroundings at least two heat reservoirs at different temperatures: one at a higher temperature that supplies heat to the system, the other at a lower temperature that accepts heat from the system. The early work on thermodynamics tended to use the cyclic process approach, because it was interested in machines that converted some of the heat from the surroundings into mechanical power delivered to the surroundings, without too much concern about the internal workings of the machine. Such a machine, while receiving an amount of heat from a higher temperature reservoir, always needs a lower temperature reservoir that accepts some lesser amount of heat, the difference in amounts of heat being converted to work.[102] Later, the internal workings of a system became of interest, and they are described by the states of the system. Nowadays, instead of arguing in terms of cyclic processes, some writers are inclined to derive the concept of absolute temperature from the concept of entropy, a variable of state.
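In symbols, for a reversible cycle operating between the two reservoirs (a standard result, stated here as a sketch of Kelvin's construction):

$$ \frac{Q_h}{Q_c}=\frac{T_h}{T_c}, \qquad W=Q_h-Q_c, \qquad \eta=\frac{W}{Q_h}=1-\frac{T_c}{T_h}. $$

Kelvin's definition of absolute temperature in effect takes the ratio of the reversibly exchanged heats Q_h/Q_c to define the ratio of the temperatures T_h/T_c.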


Instrumentation
There are two types of thermodynamic instruments: the meter and the reservoir. A thermodynamic meter is any device that measures any parameter of a thermodynamic system. In some cases, the thermodynamic parameter is actually defined in terms of an idealized measuring instrument. For example, the zeroth law states that if two bodies are in thermal equilibrium with a third body, they are also in thermal equilibrium with each other. This principle, as noted by James Maxwell in 1872, asserts that it is possible to measure temperature. An idealized thermometer is a sample of an ideal gas at constant pressure. From the ideal gas law PV = nRT, the volume of such a sample can be used as an indicator of temperature; in this manner it defines temperature. Although pressure is defined mechanically, a pressure-measuring device, called a barometer, may also be constructed from a sample of an ideal gas held at a constant temperature. A calorimeter is a device that measures and defines the internal energy of a system.

A thermodynamic reservoir is a system so large that it does not appreciably alter its state parameters when brought into contact with the test system. It is used to impose a particular value of a state parameter upon the system. For example, a pressure reservoir is a system at a particular pressure, which imposes that pressure upon any test system that it is mechanically connected to. The Earth's atmosphere is often used as a pressure reservoir.
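A minimal sketch of the two idealized instruments just described, assuming the ideal gas law PV = nRT; the function names and numbers are illustrative, not from the text:

```python
# Idealized gas thermometer and gas "barometer": the gas law is used
# to read one state variable off from the others.
R = 8.314  # J/(mol*K), molar gas constant

def temperature_from_volume(V, n, p):
    # Constant-pressure thermometer: the measured volume indicates T.
    return p * V / (n * R)

def pressure_from_volume(V, n, T):
    # Constant-temperature pressure gauge: the measured volume indicates p.
    return n * R * T / V

# 1 mol held at 101325 Pa; a measured volume of 24.8 L reads as ~302 K:
print(temperature_from_volume(0.0248, 1.0, 101325))
```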

Conjugate variables
A central concept of thermodynamics is that of energy. By the First Law, the total energy of a system and its surroundings is conserved. Energy may be transferred into a system by heating, compression, or addition of matter, and extracted from a system by cooling, expansion, or extraction of matter. In mechanics, for example, energy transfer equals the product of the force applied to a body and the resulting displacement.

Conjugate variables are pairs of thermodynamic concepts, with the first being akin to a "force" applied to some thermodynamic system, the second being akin to the resulting "displacement," and the product of the two equalling the amount of energy transferred. The common conjugate variables are:

Pressure-volume (the mechanical parameters)
Temperature-entropy (thermal parameters)
Chemical potential-particle number (material parameters)
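For a simple one-component system, the three conjugate pairs appear together in the standard differential form of the fundamental relation (a textbook result, quoted here for orientation):

$$ dU = T\,dS - p\,dV + \mu\,dN, $$

in which each term is the product of an intensive "force" (T, −p, μ) and the change of its extensive "displacement" (S, V, N).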

Potentials
Thermodynamic potentials are different quantitative measures of the stored energy in a system. Potentials are used to measure energy changes in systems as they evolve from an initial state to a final state. The potential used depends on the constraints of the system, such as constant temperature or pressure. For example, the Helmholtz and Gibbs energies are the energies available in a system to do useful work when the temperature and volume or the pressure and temperature are fixed, respectively. The five most well known potentials are:


Internal energy U: U = ∫ (T dS − p dV + Σ_i μ_i dN_i); natural variables S, V, {N_i}
Helmholtz free energy F: F = U − TS; natural variables T, V, {N_i}
Enthalpy H: H = U + pV; natural variables S, p, {N_i}
Gibbs free energy G: G = U + pV − TS; natural variables T, p, {N_i}
Landau potential (grand potential) Ω: Ω = U − TS − Σ_i μ_i N_i; natural variables T, V, {μ_i}

where T is the temperature, S the entropy, p the pressure, V the volume, μ_i the chemical potential of particle type i, N_i the number of particles of type i in the system, and the index i runs over the types of particles in the system.

Thermodynamic potentials can be derived from the energy balance equation applied to a thermodynamic system. Other thermodynamic potentials can also be obtained through Legendre transformation.
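As a sketch of one such Legendre transformation (standard, not specific to this text): subtracting the product TS from the internal energy gives the Helmholtz free energy,

$$ F = U - TS, \qquad dF = dU - T\,dS - S\,dT = -S\,dT - p\,dV + \mu\,dN, $$

so that the natural variables change from (S, V, N) for U to (T, V, N) for F.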

Axiomatics
Most accounts of thermodynamics presuppose the law of conservation of mass, sometimes with,[103] and sometimes without,[104][105] explicit mention. Particular attention is paid to the law in accounts of non-equilibrium thermodynamics.[106][107] One statement of this law is "The total mass of a closed system remains constant." Another statement of it is "In a chemical reaction, matter is neither created nor destroyed."[108] Implied in this is that matter and energy are not considered to be interconverted in such accounts. The full generality of the law of conservation of energy is thus not used in such accounts.

In 1909, Constantin Carathéodory presented a purely mathematical axiomatic formulation, a description often referred to as geometrical thermodynamics, and sometimes said to take the "mechanical approach" to thermodynamics. The Carathéodory formulation is restricted to equilibrium thermodynamics and does not attempt to deal with non-equilibrium thermodynamics, forces that act at a distance on the system, or surface tension effects.[109] Moreover, Carathéodory's formulation does not deal with materials like water near 4 °C, which have a density extremum as a function of temperature at constant pressure.[110][111] Carathéodory used the law of conservation of energy as an axiom from which, along with the contents of the zeroth law and some other assumptions, including his own version of the second law, he derived the first law of thermodynamics. Consequently, one might also describe Carathéodory's work as lying in the field of energetics,[112] which is broader than thermodynamics. Carathéodory presupposed the law of conservation of mass without explicit mention of it.

Since the time of Carathéodory, other influential axiomatic formulations of thermodynamics have appeared, which, like Carathéodory's, use their own respective axioms, different from the usual statements of the four laws, to derive the four usually stated laws.[113][114][115]

Many axiomatic developments assume the existence of states of thermodynamic equilibrium and of states of thermal equilibrium. States of thermodynamic equilibrium of compound systems allow their component simple systems to exchange heat and matter and to do work on each other on their way to overall joint equilibrium. Thermal equilibrium allows them only to exchange heat. The physical properties of glass depend on its history of being heated and cooled and, strictly speaking, glass is not in thermodynamic equilibrium.

According to Herbert Callen's widely cited 1985 text on thermodynamics: "An essential prerequisite for the measurability of energy is the existence of walls that do not permit transfer of energy in the form of heat."[116] According to Werner Heisenberg's mature and careful examination of the basic concepts of physics, the theory of heat has a self-standing place.[117]

From the viewpoint of the axiomatist, there are several different ways of thinking about heat, temperature, and the second law of thermodynamics. The Clausius way rests on the empirical fact that heat is conducted always down, never up, a temperature gradient. The Kelvin way is to assert the empirical fact that conversion of heat into work by cyclic processes is never perfectly efficient. A more mathematical way is to assert the existence of a function of state called the entropy that tells whether a hypothesized process occurs spontaneously in nature. A more abstract way is that of Carathéodory, which in effect asserts the irreversibility of some adiabatic processes. For these different ways, there are respective corresponding different ways of viewing heat and temperature.

The Clausius–Kelvin–Planck way

This way prefers ideas close to the empirical origins of thermodynamics. It presupposes transfer of energy as heat, and empirical temperature as a scalar function of state. According to Gislason and Craig (2005): "Most thermodynamic data come from calorimetry..."[118] According to Kondepudi (2008): "Calorimetry is widely used in present day laboratories."[119] In this approach, what is often currently called the zeroth law of thermodynamics is deduced as a simple consequence of the presupposition of the nature of heat and empirical temperature, but it is not named as a numbered law of thermodynamics. Planck attributed this point of view to Clausius, Kelvin, and Maxwell. Planck wrote (on page 90 of the seventh edition, dated 1922, of his treatise) that he thought that no proof of the second law of thermodynamics could ever work that was not based on the impossibility of a perpetual motion machine of the second kind. In that treatise, Planck makes no mention of the 1909 Carathéodory way, which was well known by 1922. Planck for himself chose a version of what is just above called the Kelvin way.[120] The development by Truesdell and Bharatha (1977) is so constructed that it can deal naturally with cases like that of water near 4 °C.

The way that assumes the existence of entropy as a function of state

This way also presupposes transfer of energy as heat, and it presupposes the usually stated form of the zeroth law of thermodynamics, and from these two it deduces the existence of empirical temperature. Then from the existence of entropy it deduces the existence of absolute thermodynamic temperature.

The Carathéodory way

This way presupposes that the state of a simple one-phase system is fully specifiable by just one more state variable than the known exhaustive list of mechanical variables of state. It does not explicitly name empirical temperature, but speaks of the one-dimensional "non-deformation coordinate". This satisfies the definition of an empirical temperature that lies on a one-dimensional manifold. The Carathéodory way needs to assume moreover that the one-dimensional manifold has a definite sense, which determines the direction of irreversible adiabatic process, which is effectively assuming that heat is conducted from hot to cold. This way presupposes the often currently stated version of the zeroth law, but does not actually name it as one of its axioms. According to one author, Carathéodory's principle, which is his version of the second law of thermodynamics, does not imply the increase of entropy when work is done under adiabatic conditions (as was noted by Planck[121]). Thus Carathéodory's way leaves unstated a further empirical fact that is needed for a full expression of the second law of thermodynamics.[122]


Scope of thermodynamics
Originally thermodynamics concerned material and radiative phenomena that are experimentally reproducible. For example, a state of thermodynamic equilibrium is a steady state reached after a system has aged so that it no longer changes with the passage of time. But more than that, for thermodynamics, a system, defined by its being prepared in a certain way, must, consequent on every particular occasion of preparation, upon aging, reach one and the same eventual state of thermodynamic equilibrium, entirely determined by the way of preparation. Such reproducibility is because the systems consist of so many molecules that the molecular variations between particular occasions of preparation have negligible or scarcely discernible effects on the macroscopic variables that are used in thermodynamic descriptions. This led to Boltzmann's discovery that entropy had a statistical or probabilistic nature. Probabilistic and statistical explanations arise from the experimental reproducibility of the phenomena.[123]

Gradually, the laws of thermodynamics came to be used to explain phenomena that occur outside the experimental laboratory. For example, phenomena on the scale of the earth's atmosphere cannot be reproduced in a laboratory experiment. But processes in the atmosphere can be modeled by use of thermodynamic ideas, extended well beyond the scope of laboratory equilibrium thermodynamics.[124][125][126] A parcel of air can, near enough for many studies, be considered as a closed thermodynamic system, one that is allowed to move over significant distances. The pressure exerted by the surrounding air on the lower face of a parcel of air may differ from that on its upper face. If this results in rising of the parcel of air, it can be considered to have gained potential energy as a result of work being done on it by the combined surrounding air below and above it. As it rises, such a parcel usually expands because the pressure is lower at the higher altitudes that it reaches. In that way, the rising parcel also does work on the surrounding atmosphere. For many studies, such a parcel can be considered nearly to neither gain nor lose energy by heat conduction to its surrounding atmosphere, and its rise is rapid enough to leave negligible time for it to gain or lose heat by radiation; consequently the rising of the parcel is near enough adiabatic. Thus the adiabatic gas law accounts for its internal state variables, provided that there is no precipitation into water droplets, no evaporation of water droplets, and no sublimation in the process. More precisely, the rising of the parcel is likely to occasion friction and turbulence, so that some potential and some kinetic energy of bulk converts into internal energy of air considered as effectively stationary. Friction and turbulence thus oppose the rising of the parcel.[127][128]
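A minimal worked equation for the rising parcel, assuming it behaves as a reversible adiabatic ideal gas (and ignoring the friction, turbulence, and moisture effects just mentioned):

$$ T\,p^{(1-\gamma)/\gamma} = \text{const}, \qquad T_2 = T_1\left(\frac{p_2}{p_1}\right)^{(\gamma-1)/\gamma}. $$

With the illustrative values γ ≈ 1.4 for dry air and T_1 = 288 K at p_1 = 1000 hPa, a parcel rising to p_2 = 900 hPa cools to T_2 ≈ 288 × 0.9^0.286 ≈ 279 K.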


Applied fields
Atmospheric thermodynamics
Biological thermodynamics
Black hole thermodynamics
Chemical thermodynamics
Equilibrium thermodynamics
Geology
Industrial ecology (re: Exergy)
Maximum entropy thermodynamics
Non-equilibrium thermodynamics
Philosophy of thermal and statistical physics
Psychrometrics
Quantum thermodynamics
Statistical thermodynamics
Thermoeconomics

References
[1] http://en.wikipedia.org/w/index.php?title=Template:Thermodynamics&action=edit [2] Crawford, F.H. (1963). Heat, Thermodynamics, and Statistical Physics, Rupert Hart-Davis, London, Harcourt, Brace & World, Inc., pp. 106–107. [3] Haase, R. (1963/1969). Thermodynamics of Irreversible Processes, translated in English, Addison-Wesley, Reading MA, pp. 10–11. [4] Münster, A. (1970). [5] Hess, H. (1840). Thermochemische Untersuchungen (http://gallica.bnf.fr/ark:/12148/bpt6k151359/f397.image.r=Annalen der Physik (Leipzig) 125.langEN), Annalen der Physik und Chemie (Poggendorff, Leipzig) 126(6): 385–404. [6] Gibbs, Willard, J. (1876). Transactions of the Connecticut Academy, III, pp. 108–248, Oct. 1875 – May 1876, and pp. 343–524, May 1877 – July 1878. [7] Duhem, P.M.M. (1886). Le Potentiel Thermodynamique et ses Applications, Hermann, Paris. [8] Guggenheim, E.A. (1933). Modern Thermodynamics by the Methods of J.W. Gibbs, Methuen, London. [9] Guggenheim, E.A. (1949/1967). [10] Reif, F. (1965). Fundamentals of Statistical and Thermal Physics, McGraw-Hill Book Company, New York, page 122. [11] Fowler, R., Guggenheim, E.A. (1939), p. 3. [12] Bridgman, P.W. (1943). The Nature of Thermodynamics, Harvard University Press, Cambridge MA, p. 48. [13] Partington, J.R. (1949), page 118. [14] Tisza, L. (1966), p. 18. [15] Kondepudi, D. (2008). Introduction to Modern Thermodynamics, Wiley, Chichester, ISBN 978-0-470-01598-8. Includes local equilibrium thermodynamics.

[16] Fowler, R., Guggenheim, E.A. (1939), p. 13. [17] Tisza, L. (1966), pp. 7980. [18] Planck, M. 1923/1926, page 5. [19] Partington, p. 121. [20] Adkins, pp. 1920. [21] Haase, R. (1971), pages 1116. [22] Balescu, R. (1975). Equilibrium and Nonequilibrium Statistical Mechanics, Wiley-Interscience, New York, ISBN 0-471-04600-0. [23] Schrdinger, E. (1946/1967). Statistical Thermodynamics. A Course of Seminar Lectures, Cambridge University Press, Cambridge UK. [24] The Newcomen engine was improved from 1711 until Watt's work, making the efficiency comparison subject to qualification, but the increase from the Newcomen 1765 version was on the order of 100%. [25] Oxford English Dictionary, Oxford University Press, Oxford UK. [26] Pippard, A.B. (1957), p. 70. [27] Partington, J.R. (1949), p. 615621. [28] Serrin, J. (1986). An outline of thermodynamical structure, Chapter 1, pp. 332 in Serrin, J., editor, New Perspectives in Thermodynamics, SpringerVerlag, Berlin, ISBN 3-540-15931-2. [29] Callen, H.B. (1960/1985), Chapter 6, pages 131152. [30] Callen, H.B. (1960/1985), p. 13. [31] Landsberg, P.T. (1978). Thermodynamics and Statistical Mechanics, Oxford University Press, Oxford UK, ISBN 0-19-851142-6, p. 1. [32] Eu, B.C. (2002). [33] Lebon, G., Jou, D., Casas-Vzquez, J. (2008). [34] Grandy, W.T., Jr (2008), passim and p. 123. [35] Callen, H.B. (1985), p. 26. [36] Gibbs J.W. (1875), pp. 115116. [37] Bryan, G.H. (1907), p. 5. [38] Haase, R. (1971), p. 13. [39] Bailyn, M. (1994), p. 145. [40] Bailyn, M. (1994), Section 6.11. [41] Planck, M. (1897/1903), passim. [42] Partington, J.R. (1949), p. 129. [43] Callen, H.B. (1960/1985), Section 42. [44] Guggenheim, E.A. (1949/1967), 1.12. [45] de Groot, S.R., Mazur, P., Non-equilibrium thermodynamics,1969, North-Holland Publishing Company, Amsterdam-London [46] Fowler, R., Guggenheim, E.A. (1939), p. vii. [47] Gyarmati, I. (1967/1970) Non-equilibrium Thermodynamics. Field Theory and Variational Principles, translated by E. Gyarmati and W.F. Heinz, Springer, New York, pp. 414. Includes classical non-equilibrium thermodynamics. [48] Ziegler, H., (1983). An Introduction to Thermomechanics, North-Holland, Amsterdam, ISBN 0-444-86503-9 [49] Balescu, R. (1975). Equilibrium and Non-equilibrium Statistical Mechanics, Wiley-Interscience, New York, ISBN 0-471-04600-0, Section 3.2, pp. 6472. [50] Lebon, G., Jou, D., Casas-Vzquez, J. (2008), Chapter 8. [51] Callen, H.B. (1960/1985), p. 14. [52] Moran, Michael J. and Howard N. Shapiro, 2008. Fundamentals of Engineering Thermodynamics. 6th ed. Wiley and Sons: 16. [53] Planck, M. (1897/1903), p. 1. [54] Rankine, W.J.M. (1953). Proc. Roy. Soc. (Edin.), 20(4). [55] Maxwell, J.C. (1872), page 32. [56] Maxwell, J.C. (1872), page 57. [57] Planck, M. (1897/1903), pp. 12. [58] Clausius, R. (1850). Ueber de bewegende Kraft der Wrme und die Gesetze, welche sich daraus fr de Wrmelehre selbst ableiten lassen, Annalen der Physik und Chemie, 155 (3): 368394. [59] Rankine, W.J.M. (1850). On the mechanical action of heat, especially in gases and vapours. Trans. Roy. Soc. Edinburgh, 20: 147190. (http:/ / www. archive. org/ details/ miscellaneoussci00rank) [60] Helmholtz, H. von. (1897/1903). Vorlesungen ber Theorie der Wrme, edited by F. Richarz, Press of Johann Ambrosius Barth, Leipzig, Section 46, pp. 176182, in German. [61] Planck, M. (1897/1903), p. 43. [62] Guggenheim, E.A. (1949/1967), p. 10. [63] Sommerfeld, A. (1952/1956), Section 4 A, pp. 1316. [64] Lewis, G.N., Randall, M. (1961). Thermodynamics, second edition revised by K.S. 
Pitzer and L. Brewer, McGraw-Hill, New York, p. 35. [65] Bailyn, M. (1994), page 79. [66] Kondepudi, D. (2008). Introduction to Modern Thermodynamics, Wiley, Chichester, ISBN 978-0-470-01598-8, p. 59.

[67] Khanna, F.C., Malbouisson, A.P.C., Malbouisson, J.M.C., Santana, A.E. (2009). Thermal Quantum Field Theory. Algebraic Aspects and Applications, World Scientific, Singapore, ISBN 978-981-281-887-4, p. 6. [68] Helmholtz, H. von, (1847). Ueber die Erhaltung der Kraft, G. Reimer, Berlin. [69] Joule, J.P. (1847). On matter, living force, and heat, Manchester Courier, May 5 and May 12, 1847. [70] Partington, J.R. (1949), page 150. [71] Kondepudi & Prigogine (1998), pages 31-32. [72] Goody, R.M., Yung, Y.L. (1989). Atmospheric Radiation. Theoretical Basis, second edition, Oxford University Press, Oxford UK, ISBN 0-19-505134-3, p. 5 [73] Wallace, J.M., Hobbs, P.V. (2006). Atmospheric Science. An Introductory Survey, second edition, Elsevier, Amsterdam, ISBN 978-0-12-732951-2, p. 292. [74] Partington, J.R. (1913). A Text-book of Thermodynamics (http:/ / www. archive. org/ details/ textbookofthermo00partiala), Van Nostrand, New York, page 37. [75] Glansdorff, P., Prigogine, I., (1971). Thermodynamic Theory of Structure, Stability and Fluctuations, Wiley-Interscience, London, ISBN 0-471-30280-5, page 15. [76] Haase, R., (1971), page 16. [77] Eu, B.C. (2002), p. 13. [78] Adkins, C.J. (1968/1975), pp. 4649. [79] Adkins, C.J. (1968/1975), p. 172. [80] Lebon, G., Jou, D., Casas-Vzquez, J. (2008), pp. 3738. [81] Buchdahl, H.A. (1966). The Concepts of Classical Thermodynamics, Cambridge University Press, London, pp. 117118. [82] Guggenheim, E.A. (1949/1967), p. 6. [83] Lavenda, B.H. (1978). Thermodynamics of Irreversible Processes, Macmillan, London, ISBN 0-333-21616-4, p. 12. [84] Guggenheim, E.A. (1949/1967), p. 19. [85] Guggenheim, E.A. (1949/1967), pp. 1819. [86] Grandy, W.T., Jr (2008), Chapter 5, pp. 5968. [87] Kondepudi & Prigogine (1998), pp. 116118. [88] Guggenheim, E.A. (1949/1967), Section 1.12, pp. 1213. [89] Planck, M. (1897/1903), p. 65. [90] Planck, M. (1923/1926), Section 152A, pp. 121123. [91] Prigogine, I. Defay, R. (1950/1954). Chemical Thermodynamics, Longmans, Green & Co., London, p. 1. [92] Planck, M. (1897/1903), Section 70, pp. 4850. [93] Guggenheim, E.A. (1949/1967), Section 3.11, pp. 9292. [94] Sommerfeld, A. (1952/1956), Section 1.5 C, pp. 2325. [95] Callen, H.B. (1960/1985), Section 6.3. [96] Planck, M. (1897/1903), Section 236, pp. 211212. [97] Serrin, J. (1986). Chapter 1, 'An Outline of Thermodynamical Structure', pp. 332, especially p. 8, in New Perspectives in Thermodynamics, edited by J. Serrin, Springer, Berlin, ISBN 3-540-15931-2. [98] Kondepudi, D. (2008). Introduction to Modern Thermodynamics, Wiley, Chichester, ISBN 978-0-470-01598-8, Section 3.2, pp. 106108. [99] Truesdell, C.A. (1980), Section 11B, pp. 306310. [100] Truesdell, C.A. (1980), Sections 8G,8H, 9A, pp. 207224. [101] Kondepudi, D. (2008). Introduction to Modern Thermodynamics, Wiley, Chichester, ISBN 978-0-470-01598-8, Section 3.3, pp. 108114. [102] Kondepudi, D. (2008). Introduction to Modern Thermodynamics, Wiley, Chichester, ISBN 978-0-470-01598-8, Sections 3.1,3.2, pp. 97108. [103] Ziegler, H. (1977). An Introduction to Thermomechanics, North-Holland, Amsterdam, ISBN 0-7204-0432-0. [104] Planck M. (1922/1927). [105] Guggenheim, E.A. (1949/1967). [106] de Groot, S.R., Mazur, P. (1962). Non-equilibrium Thermodynamics, North Holland, Amsterdam. [107] Gyarmati, I. (1970). Non-equilibrium Thermodynamics, translated into English by E. Gyarmati and W.F. Heinz, Springer, New York. [108] Tro, N.J. (2008). Chemistry. A Molecular Approach, Pearson Prentice-Hall, Upper Saddle River NJ, ISBN 0-13-100065-9. 
[109] Turner, L.A. (1962). Simplification of Carathéodory's treatment of thermodynamics, Am. J. Phys. 30: 781–786. [110] Turner, L.A. (1962). Further remarks on the zeroth law, Am. J. Phys. 30: 804–806. [111] Thomsen, J.S., Hartka, T.J., (1962). Strange Carnot cycles; thermodynamics of a system with a density maximum, Am. J. Phys. 30: 26–33, 30: 388–389. [112] Duhem, P. (1911). Traité d'Énergétique, Gautier-Villars, Paris. [113] Callen, H.B. (1960/1985). [114] Truesdell, C., Bharatha, S. (1977). The Concepts and Logic of Classical Thermodynamics as a Theory of Heat Engines, Rigorously Constructed upon the Foundation Laid by S. Carnot and F. Reech, Springer, New York, ISBN 0-387-07971-8. [115] Wright, P.G. (1980). Conceptually distinct types of thermodynamics, Eur. J. Phys. 1: 81–84. [116] Callen, H.B. (1960/1985), p. 16.

[117] Heisenberg, W. (1958). Physics and Philosophy, Harper & Row, New York, pp. 98–99. [118] Gislason, E.A., Craig, N.C. (2005). Cementing the foundations of thermodynamics: comparison of system-based and surroundings-based definitions of work and heat, J. Chem. Thermodynamics 37: 954–966. [119] Kondepudi, D. (2008). Introduction to Modern Thermodynamics, Wiley, Chichester, ISBN 978-0-470-01598-8, p. 63. [120] Planck, M. (1922/1927). [121] Planck, M. (1926). Über die Begründung des zweiten Hauptsatzes der Thermodynamik, Sitzungsberichte der Preußischen Akademie der Wissenschaften, physikalisch-mathematischen Klasse, pp. 453–463. [122] Münster, A. (1970). Classical Thermodynamics, translated by E.S. Halberstadt, Wiley–Interscience, London, ISBN 0-471-62430-6, p. 41. [123] Grandy, W.T., Jr (2008). Entropy and the Time Evolution of Macroscopic Systems, Oxford University Press, Oxford UK, ISBN 978-0-19-954617-6, p. 49. [124] Iribarne, J.V., Godson, W.L. (1973/1989). Atmospheric thermodynamics, second edition, reprinted 1989, Kluwer Academic Publishers, Dordrecht, ISBN 90-277-1296-4. [125] Peixoto, J.P., Oort, A.H. (1992). Physics of climate, American Institute of Physics, New York, ISBN 0-88318-712-4. [126] North, G.R., Erukhimova, T.L. (2009). Atmospheric Thermodynamics. Elementary Physics and Chemistry, Cambridge University Press, Cambridge UK, ISBN 978-0-521-89963-5. [127] Holton, J.R. (2004). An Introduction to Dynamic Meteorology, fourth edition, Elsevier, Amsterdam, ISBN 978-0-12-354015-7. [128] Mak, M. (2011). Atmospheric Dynamics, Cambridge University Press, Cambridge UK, ISBN 978-0-521-19573-7.


Cited bibliography
Adkins, C.J. (1968/1975). Equilibrium Thermodynamics, second edition, McGraw-Hill, London, ISBN 0-07-084057-1.
Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York, ISBN 0-88318-797-3.
Bryan, G.H. (1907). Thermodynamics. An Introductory Treatise dealing mainly with First Principles and their Direct Applications (http://archive.org/details/ost-physics-thermodynamicsin00bryauoft), B.G. Teubner, Leipzig.
Callen, H.B. (1960/1985). Thermodynamics and an Introduction to Thermostatistics, (1st edition 1960) 2nd edition 1985, Wiley, New York, ISBN 0-471-86256-8.
Eu, B.C. (2002). Generalized Thermodynamics. The Thermodynamics of Irreversible Processes and Generalized Hydrodynamics, Kluwer Academic Publishers, Dordrecht, ISBN 1-4020-0788-4.
Fowler, R., Guggenheim, E.A. (1939). Statistical Thermodynamics, Cambridge University Press, Cambridge UK.
Gibbs, J.W. (1875). On the equilibrium of heterogeneous substances, Transactions of the Connecticut Academy of Arts and Sciences, 3: 108–248.
Grandy, W.T., Jr (2008). Entropy and the Time Evolution of Macroscopic Systems, Oxford University Press, Oxford, ISBN 978-0-19-954617-6.
Guggenheim, E.A. (1949/1967). Thermodynamics. An Advanced Treatment for Chemists and Physicists, (1st edition 1949) 5th edition 1967, North-Holland, Amsterdam.
Haase, R. (1971). Survey of Fundamental Laws, chapter 1 of Thermodynamics, pages 1–97 of volume 1, ed. W. Jost, of Physical Chemistry. An Advanced Treatise, ed. H. Eyring, D. Henderson, W. Jost, Academic Press, New York, lcn 73117081.
Kondepudi, D., Prigogine, I. (1998). Modern Thermodynamics. From Heat Engines to Dissipative Structures, John Wiley & Sons, ISBN 0-471-97393-9.
Lebon, G., Jou, D., Casas-Vázquez, J. (2008). Understanding Non-equilibrium Thermodynamics, Springer, Berlin, ISBN 978-3-540-74251-7.
Partington, J.R. (1949). An Advanced Treatise on Physical Chemistry, volume 1, Fundamental Principles. The Properties of Gases, Longmans, Green and Co., London.
Pippard, A.B. (1957). The Elements of Classical Thermodynamics, Cambridge University Press.
Planck, M. (1897/1903). Treatise on Thermodynamics, translated by A. Ogg, Longmans, Green & Co., London. (https://ia700200.us.archive.org/15/items/treatiseonthermo00planrich/treatiseonthermo00planrich.pdf)

Planck, M. (1923/1926). Treatise on Thermodynamics, third English edition translated by A. Ogg from the seventh German edition, Longmans, Green & Co., London.
Sommerfeld, A. (1952/1956). Thermodynamics and Statistical Mechanics, Academic Press, New York.
Tisza, L. (1966). Generalized Thermodynamics, M.I.T. Press, Cambridge MA.
Truesdell, C.A. (1980). The Tragicomical History of Thermodynamics, 1822–1854, Springer, New York, ISBN 0-387-90403-4.


Further reading
Goldstein, Martin, and Inge F. (1993). The Refrigerator and the Universe. Harvard University Press. ISBN 0-674-75325-9. OCLC 32826343 (http://www.worldcat.org/oclc/32826343). A nontechnical introduction, good on historical and interpretive matters.
Kazakov, Andrei (July–August 2008). "Web Thermo Tables – an On-Line Version of the TRC Thermodynamic Tables" (http://nvl-i.nist.gov/pub/nistpubs/jres/113/4/V113.N04.A03.pdf). Journal of Research of the National Institute of Standards and Technology 113 (4): 209–220.

The following titles are more technical:

Cengel, Yunus A., & Boles, Michael A. (2002). Thermodynamics – an Engineering Approach. McGraw Hill. ISBN 0-07-238332-1. OCLC 45791449 52263994 57548906 (http://www.worldcat.org/oclc/45791449+52263994+57548906).
Fermi, E. (1956). Thermodynamics, Dover, New York.
Kittel, Charles & Kroemer, Herbert (1980). Thermal Physics. W. H. Freeman Company. ISBN 0-7167-1088-9. OCLC 32932988 48236639 5171399 (http://www.worldcat.org/oclc/32932988+48236639+5171399).

External links
Thermodynamics Data & Property Calculation Websites (http://tigger.uic.edu/~mansoori/Thermodynamic.Data.and.Property_html)
Thermodynamics OpenCourseWare (http://ocw.nd.edu/aerospace-and-mechanical-engineering/thermodynamics) from the University of Notre Dame
Thermodynamics at ScienceWorld (http://scienceworld.wolfram.com/physics/topics/Thermodynamics.html)
Biochemistry Thermodynamics (http://www.wiley.com/legacy/college/boyer/0470003790/reviews/thermo/thermo_intro.htm)
Engineering Thermodynamics – A Graphical Approach (http://www.ent.ohiou.edu/~thermo/)

Statistical Thermodynamics

Statistical mechanics is a branch of mathematical physics that studies, using probability theory, the average behaviour of a mechanical system where the state of the system is uncertain.[2]

The present understanding of the universe indicates that its fundamental laws are mechanical in nature, and that all physical systems are therefore governed by mechanical laws at a microscopic level. These laws are precise equations of motion that map any given initial state to a corresponding future state at a later time. There is however a disconnect between these laws and everyday life, as we do not find it necessary (nor easy) to know exactly at a microscopic level the simultaneous positions and velocities of each molecule while carrying out processes at the human scale (for example, when performing a chemical reaction). Statistical mechanics is a collection of mathematical tools that are used to fill this disconnect between the laws of mechanics and the practical experience of incomplete knowledge.

A common use of statistical mechanics is in explaining the thermodynamic behaviour of large systems. Microscopic mechanical laws do not contain concepts such as temperature, heat, or entropy; however, statistical mechanics shows how these concepts arise from the natural uncertainty about the state of a system when that system is prepared in practice. The benefit of using statistical mechanics is that it provides exact methods to connect thermodynamic quantities (such as heat capacity) to microscopic behaviour, whereas in classical thermodynamics the only available option would be to just measure and tabulate such quantities for various materials. Statistical mechanics also makes it possible to extend the laws of thermodynamics to cases which are not considered in classical thermodynamics, for example microscopic systems and other mechanical systems with few degrees of freedom. This branch of statistical mechanics, which treats and extends classical thermodynamics, is known as statistical thermodynamics or equilibrium statistical mechanics.

Statistical mechanics also finds use outside equilibrium. An important subbranch known as non-equilibrium statistical mechanics deals with the issue of microscopically modelling the speed of irreversible processes that are driven by imbalances. Examples of such processes include chemical reactions, or flows of particles and heat. Unlike with equilibrium, there is no exact formalism that applies to non-equilibrium statistical mechanics in general, and so this branch of statistical mechanics remains an active area of theoretical research.


Principles: mechanics and ensembles


In physics there are two types of mechanics usually examined: classical mechanics and quantum mechanics. For both types of mechanics, the standard mathematical approach is to consider two ingredients:

1. The complete state of the mechanical system at a given time, mathematically encoded as a phase point (classical mechanics) or a pure quantum state vector (quantum mechanics).
2. An equation of motion which carries the state forward in time: Hamilton's equations (classical mechanics) or the time-dependent Schrödinger equation (quantum mechanics).

Using these two ingredients, the state at any other time, past or future, can in principle be calculated.

Whereas ordinary mechanics only considers the behaviour of a single state, statistical mechanics introduces the statistical ensemble, which is a large collection of virtual, independent copies of the system in various states. The statistical ensemble is a probability distribution over all possible states of the system. In classical statistical mechanics, the ensemble is a probability distribution over phase points (as opposed to a single phase point in ordinary mechanics), usually represented as a distribution in a phase space with canonical coordinates. In quantum statistical mechanics, the ensemble is a probability distribution over pure states,[3] and can be compactly summarized as a density matrix.

As is usual for probabilities, the ensemble can be interpreted in different ways: an ensemble can be taken to represent the various possible states that a single system could be in (epistemic probability, a form of knowledge), or the members of the ensemble can be understood as the states of the systems in experiments repeated on independent systems which have been prepared in a similar but imperfectly controlled manner (empirical probability), in the limit of an infinite number of trials. These two meanings are equivalent for many purposes, and will be used interchangeably in this article.

However the probability is interpreted, each state in the ensemble evolves over time according to the equation of motion. Thus, the ensemble itself (the probability distribution over states) also evolves, as the virtual systems in the ensemble continually leave one state and enter another. The ensemble evolution is given by the Liouville equation (classical mechanics) or the von Neumann equation (quantum mechanics), shown below. These equations are simply derived by the application of the mechanical equation of motion separately to each virtual system contained in the ensemble, with the probability of the virtual system being conserved over time as it evolves from state to state.

One special class of ensemble is those ensembles that do not evolve over time. These ensembles are known as equilibrium ensembles and their condition is known as statistical equilibrium. Statistical equilibrium occurs if, for each state in the ensemble, the ensemble also contains all of its future and past states with probabilities equal to the probability of being in that state.[4] The study of equilibrium ensembles of isolated systems is the focus of statistical thermodynamics. Non-equilibrium statistical mechanics addresses the more general case of ensembles that change over time, and/or ensembles of non-isolated systems.
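For orientation, the two ensemble evolution equations just mentioned have the standard forms (sign conventions vary between texts):

$$ \frac{\partial\rho}{\partial t} = \{H,\rho\} \quad\text{(Liouville, classical)}, \qquad i\hbar\,\frac{\partial\hat\rho}{\partial t} = [\hat H,\hat\rho] \quad\text{(von Neumann, quantum)}, $$

where ρ is the ensemble density in phase space, ρ̂ the density matrix, H the Hamiltonian, {·,·} the Poisson bracket, and [·,·] the commutator.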


Statistical thermodynamics
The primary goal of statistical thermodynamics (also known as equilibrium statistical mechanics) is to explain the classical thermodynamics of materials in terms of the properties of their constituent particles and the interactions between them. In other words, statistical thermodynamics provides a connection between the macroscopic properties of materials in thermodynamic equilibrium and the microscopic behaviours and motions occurring inside the material.

As an example, one might ask: what is it about a thermodynamic system of NH3 molecules that determines the free energy characteristic of that compound? Classical thermodynamics does not provide the answer. If, for example, we were given spectroscopic data of this body of gas molecules, such as bond length, bond angle, bond rotation, and flexibility of the bonds in NH3, we should see that the free energy could not be other than it is. To prove this true, we need to bridge the gap between the microscopic realm of atoms and molecules and the macroscopic realm of classical thermodynamics. Statistical mechanics demonstrates how the thermodynamic parameters of a system, such as temperature and pressure, are related to the microscopic behaviours of its constituent atoms and molecules.

Although we may understand a system generically, in general we lack information about the state of a specific instance of that system. For this reason the notion of statistical ensemble (a probability distribution over possible states) is necessary. Furthermore, in order to reflect that the material is in a thermodynamic equilibrium, it is necessary to introduce a corresponding statistical mechanical definition of equilibrium. The analogue of thermodynamic equilibrium in statistical thermodynamics is the ensemble property of statistical equilibrium, described in the previous section. An additional assumption in statistical thermodynamics is that the system is isolated (no varying external forces are acting on the system), so that its total energy does not vary over time. A sufficient (but not necessary) condition for statistical equilibrium with an isolated system is that the probability distribution is a function only of conserved properties (total energy, total particle numbers, etc.).

Fundamental postulate
There are many different equilibrium ensembles that can be considered, and only some of them correspond to thermodynamics. An additional postulate is necessary to motivate why the ensemble for a given system should have one form or another. A common approach found in many textbooks is to take the equal a priori probability postulate. This postulate states that

For an isolated system with an exactly known energy and exactly known composition, the system can be found with equal probability in any microstate consistent with that knowledge.

The equal a priori probability postulate therefore provides a motivation for the microcanonical ensemble described below. There are various arguments in favour of the equal a priori probability postulate:

Ergodic hypothesis: An ergodic state is one that evolves over time to explore "all accessible" states: all those with the same energy and composition. In an ergodic system the only equilibrium ensemble at fixed energy is the microcanonical ensemble. (However, most systems are not ergodic.)
Principle of indifference: In the absence of any further information, we can only assign equal probabilities to each compatible situation.
Maximum information entropy: A more elaborate version of the principle of indifference states that the correct ensemble is the ensemble that is compatible with the known information and that has the largest Gibbs entropy (information entropy).

Other fundamental postulates for statistical mechanics have also been proposed.[5]

In any case, the reason for establishing the microcanonical ensemble is mainly axiomatic. The microcanonical ensemble itself is mathematically awkward to use for real calculations, and even very simple finite systems can only be solved approximately. However, it is possible to use the microcanonical ensemble to construct a hypothetical infinite thermodynamic reservoir that has an exactly defined notion of temperature and chemical potential. Once this reservoir has been established, it can be used to justify exactly the canonical ensemble or grand canonical ensemble (see below) for any other system by considering the contact of this system with the reservoir. These other ensembles are those actually used in practical statistical mechanics calculations, as they are mathematically simpler and also correspond to a much more realistic situation (energy not known exactly).
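A toy illustration of the equal a priori probability postulate and the microcanonical ensemble, for a hypothetical isolated system of N two-level units at fixed total energy; this model and the names are ours, introduced only for illustration:

```python
# Toy microcanonical ensemble: N two-level units, each with energy
# 0 or e, isolated at fixed total energy E = k*e. Every microstate
# consistent with that energy is assigned equal probability, so the
# ensemble is characterized by the microstate count alone.
from math import comb, log

k_B = 1.380649e-23  # J/K, Boltzmann constant

def num_microstates(N, k):
    # Number of ways to place k excitations among N units.
    return comb(N, k)

def boltzmann_entropy(N, k):
    # S = k_B ln(Omega), with all microstates equally probable.
    return k_B * log(num_microstates(N, k))

print(num_microstates(100, 30))     # ~2.94e25 microstates
print(boltzmann_entropy(100, 30))   # ~8.1e-22 J/K
```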


Three thermodynamic ensembles


There are three equilibrium ensembles with a simple form that can be defined for any isolated system bounded inside a finite volume. These are the most often discussed ensembles in statistical thermodynamics. In the macroscopic limit (defined below) they all correspond to classical thermodynamics.

The microcanonical ensemble describes a system with a precisely given energy and fixed composition (precise number of particles). The microcanonical ensemble contains with equal probability each possible state that is consistent with that energy and composition.

The canonical ensemble describes a system of fixed composition that is in thermal equilibrium[6] with a heat bath of a precise temperature. The canonical ensemble contains states of varying energy but identical composition; the different states in the ensemble are accorded different probabilities depending on their total energy.

The grand canonical ensemble describes a system with non-fixed composition (uncertain particle numbers) that is in thermal and chemical equilibrium with a thermodynamic reservoir. The reservoir has a precise temperature, and precise chemical potentials for various types of particle. The grand canonical ensemble contains states of varying energy and varying numbers of particles; the different states in the ensemble are accorded different probabilities depending on their total energy and total particle numbers.
The three thermodynamic ensembles may be summarized as follows:

Microcanonical ensemble. Fixed variables: N, V, E. Microscopic feature: the number of microstates W. Macroscopic function: the Boltzmann entropy S = k_B ln W.

Canonical ensemble. Fixed variables: N, V, T. Microscopic feature: the canonical partition function Z = Σ_k e^(−E_k/k_B T). Macroscopic function: the Helmholtz free energy F = −k_B T ln Z.

Grand canonical ensemble. Fixed variables: μ, V, T. Microscopic feature: the grand partition function Ξ = Σ_k e^(−(E_k − μN_k)/k_B T). Macroscopic function: the grand potential Ω = −k_B T ln Ξ.

Statistical fluctuations and the macroscopic limit

The thermodynamic ensembles' most significant difference is that they either admit uncertainty in the variables of energy or particle number, or that those variables are fixed to particular values. While this difference can be observed in some cases, for macroscopic systems the thermodynamic ensembles are usually observationally equivalent.

The limit of large systems in statistical mechanics is known as the thermodynamic limit. In the thermodynamic limit the microcanonical, canonical, and grand canonical ensembles tend to give identical predictions about thermodynamic characteristics. This means that one can specify either total energy or temperature and arrive at the same result; likewise one can specify either total particle number or chemical potential. Given these considerations, the best ensemble to choose for the calculation of the properties of a macroscopic system is usually just the ensemble which allows the result to be derived most easily. Important cases where the thermodynamic ensembles do not give identical results include:

systems at a phase transition,
systems with long-range interactions,
microscopic systems.

In these cases the correct thermodynamic ensemble must be chosen, as there are observable differences between these ensembles not just in the size of fluctuations, but also in average quantities such as the distribution of particles. The correct ensemble is that which corresponds to the way the system has been prepared and characterized; in other words, the ensemble that reflects the knowledge about that system.


Illustrative example (a gas)


The above concepts can be illustrated for the specific case of one litre of ammonia gas at standard conditions. (Note that statistical thermodynamics is not restricted to the study of macroscopic gases, and the example of a gas is given here to illustrate concepts. Statistical mechanics and statistical thermodynamics apply to all mechanical systems (including microscopic systems) and to all phases of matter: liquids, solids, plasmas, gases, nuclear matter, quark matter.)

A simple way to prepare a one-litre sample of ammonia in a standard condition is to take a very large reservoir of ammonia at those standard conditions, and connect it to a previously evacuated one-litre container. After ammonia gas has entered the container and the container has been given time to reach thermodynamic equilibrium with the reservoir, the container is then sealed and isolated. In thermodynamics, this is a repeatable process resulting in a very well defined sample of gas with a precise description. We now consider the corresponding precise description in statistical thermodynamics.

Although this process is well defined and repeatable in a macroscopic sense, we have no information about the exact locations and velocities of each and every molecule in the container of gas. Moreover, we do not even know exactly how many molecules are in the container; even supposing we knew exactly the average density of the ammonia gas in general, we do not know how many molecules of the gas happened to be inside our container at the moment when we sealed it. The sample is in equilibrium and is in equilibrium with the reservoir: we could reconnect it to the reservoir for some time, and then re-seal it, and our knowledge about the state of the gas would not change. In this case, our knowledge about the state of the gas is precisely described by the grand canonical ensemble. Provided we have an accurate microscopic model of the ammonia gas, we could in principle compute all thermodynamic properties of this sample of gas by using the distribution provided by the grand canonical ensemble.

Hypothetically, we could use an extremely sensitive weight scale to measure exactly the mass of the container before and after introducing the ammonia gas, so that we can exactly know the number of ammonia molecules. After we make this measurement, then our knowledge about the gas would correspond to the canonical ensemble. Finally, suppose by some hypothetical apparatus we can measure exactly the number of molecules and also measure exactly the total energy of the system. Supposing furthermore that this apparatus gives us no further information about the molecules' positions and velocities, our knowledge about the system would correspond to the microcanonical ensemble.

Even after making such measurements, however, our expectations about the behaviour of the gas do not change appreciably. This is because the gas sample is macroscopic and approximates very well the thermodynamic limit, so the different ensembles behave similarly. This can be demonstrated by considering how small the actual fluctuations would be. Suppose that we knew the number density of ammonia gas was exactly 3.04×10^22 molecules per litre inside the reservoir of ammonia gas used to fill the one-litre container. In describing the container with the grand canonical ensemble, then, the average number of molecules would be ⟨N⟩ = 3.04×10^22 and the uncertainty (standard deviation) in the number of molecules would be σ = √⟨N⟩ ≈ 1.7×10^11 (assuming a Poisson distribution), which is relatively very small compared to the total number of molecules. Upon measuring the particle number (thus arriving at a canonical ensemble) we should find very nearly 3.04×10^22 molecules. For example, the probability of finding more than 3.040001×10^22 or fewer than 3.039999×10^22 molecules would be about 1 in 10^3,000,000,000.[7]
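The fluctuation estimate above is a one-line calculation (a sketch; the Poisson assumption is the one stated in the text):

```python
# For a Poisson-distributed particle number, the standard deviation
# is the square root of the mean.
from math import sqrt

N = 3.04e22               # mean number of molecules in the 1 litre sample
sigma = sqrt(N)           # ~1.74e11 molecules
print(sigma, sigma / N)   # relative fluctuation ~5.7e-12
```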


Calculation methods
Once the characteristic state function for an ensemble has been calculated for a given system, that system is 'solved' (macroscopic observables can be extracted from the characteristic state function). Calculating the characteristic state function of a thermodynamic ensemble is not necessarily a simple task, however, since it involves considering every possible state of the system. While some hypothetical systems have been exactly solved, the most general (and realistic) case is too complex for exact solution. Various approaches exist to approximate the true ensemble and allow calculation of average quantities.

Exact

There are some cases which allow exact solutions.

For very small microscopic systems, the ensembles can be directly computed by simply enumerating over all possible states of the system (using exact diagonalization in quantum mechanics, or an integral over all of phase space in classical mechanics).
Some large systems consist of many separable microscopic systems, and each of the subsystems can be analysed independently. Notably, idealized gases of non-interacting particles have this property, allowing exact derivations of Maxwell–Boltzmann statistics, Fermi–Dirac statistics, and Bose–Einstein statistics.
A few large systems with interaction have been solved. By the use of subtle mathematical techniques, exact solutions have been found for a few toy models. Some examples include the Bethe ansatz, the square-lattice Ising model in zero field, and the hard hexagon model.

Monte Carlo

One approximate approach that is particularly well suited to computers is the Monte Carlo method, which examines just a few of the possible states of the system, with the states chosen randomly (with a fair weight). As long as these states form a representative sample of the whole set of states of the system, the approximate characteristic function is obtained. As more and more random samples are included, the errors are reduced to an arbitrarily low level.

The Metropolis–Hastings algorithm is a classic Monte Carlo method which was initially used to sample the canonical ensemble (a minimal sketch follows at the end of this section).
Path integral Monte Carlo, also used to sample the canonical ensemble.

Other

Molecular dynamics simulations can be used to calculate microcanonical ensemble averages, in ergodic systems.
Mixed methods involving non-equilibrium statistical mechanical results (see below) may be useful.
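As a concrete instance of the Metropolis–Hastings method mentioned above, here is a minimal sketch that samples the canonical ensemble of a 2D Ising model; the lattice size, temperature, and sweep count are arbitrary illustrative choices:

```python
# Metropolis sampling of the canonical ensemble for a 2D Ising model
# with coupling J = 1 and periodic boundaries.
import math, random

L, T, sweeps = 16, 2.5, 2000   # lattice side, temperature in J/k_B, MC sweeps
spins = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

def local_field(i, j):
    # Sum of the four nearest-neighbour spins (periodic boundaries).
    return (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
            + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])

for _ in range(sweeps * L * L):
    i, j = random.randrange(L), random.randrange(L)
    dE = 2 * spins[i][j] * local_field(i, j)   # energy cost of flipping
    # Metropolis rule: accept with probability min(1, exp(-dE/T)), which
    # makes the chain's stationary distribution the canonical ensemble.
    if dE <= 0 or random.random() < math.exp(-dE / T):
        spins[i][j] = -spins[i][j]

m = abs(sum(map(sum, spins))) / L**2
print("magnetization per spin:", m)
```

Averages of observables over the states visited by the chain approximate canonical ensemble averages, with the error shrinking as more samples are accumulated.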

Non-equilibrium statistical mechanics


There are many physical phenomena of interest that involve quasi-thermodynamic processes out of equilibrium, for example:

heat transport by the internal motions in a material, driven by a temperature imbalance,
electric currents carried by the motion of charges in a conductor, driven by a voltage imbalance,
spontaneous chemical reactions driven by a decrease in free energy,
friction, dissipation, quantum decoherence,
systems being pumped by external forces (optical pumping, etc.),
and irreversible processes in general.

All of these processes occur over time with characteristic rates, and these rates are of importance for engineering. The field of non-equilibrium statistical mechanics is concerned with understanding these non-equilibrium processes at the microscopic level. (Statistical thermodynamics can only be used to calculate the final result, after the external imbalances have been removed and the ensemble has settled back down to equilibrium.)

In principle, non-equilibrium statistical mechanics could be mathematically exact: ensembles for an isolated system evolve over time according to deterministic equations such as Liouville's equation or its quantum equivalent, the von Neumann equation. These equations are the result of applying the mechanical equations of motion independently to each state in the ensemble. Unfortunately, these ensemble evolution equations inherit much of the complexity of the underlying mechanical motion, and so exact solutions are very difficult to obtain. Moreover, the ensemble evolution equations are fully reversible and do not destroy information (the ensemble's Gibbs entropy is preserved). In order to make headway in modelling irreversible processes, it is necessary to add additional ingredients besides probability and reversible mechanics. Non-equilibrium mechanics is therefore an active area of theoretical research as the range of validity of these additional assumptions continues to be explored. A few approaches are described in the following subsections.


Stochastic methods
One approach to non-equilibrium statistical mechanics is to incorporate stochastic (random) behaviour into the system. Stochastic behaviour destroys information contained in the ensemble. While this is technically inaccurate (aside from hypothetical situations involving black holes, a system cannot in itself cause loss of information), the randomness is added to reflect that information of interest becomes converted over time into subtle correlations within the system, or to correlations between the system and environment. These correlations appear as chaotic or pseudorandom influences on the variables of interest. By replacing these correlations with randomness proper, the calculations can be made much easier.

Boltzmann transport equation: An early form of stochastic mechanics appeared even before the term "statistical mechanics" had been coined, in studies of kinetic theory. James Clerk Maxwell had demonstrated that molecular collisions would lead to apparently chaotic motion inside a gas. Ludwig Boltzmann subsequently showed that, by taking this molecular chaos for granted as a complete randomization, the motions of particles in a gas would follow a simple Boltzmann transport equation that would rapidly restore a gas to an equilibrium state (see H-theorem). The Boltzmann transport equation and related approaches are important tools in non-equilibrium statistical mechanics due to their extreme simplicity. These approximations work well in systems where the "interesting" information is immediately (after just one collision) scrambled up into subtle correlations, which essentially restricts them to rarefied gases. The Boltzmann transport equation has been found to be very useful in simulations of electron transport in lightly doped semiconductors (in transistors), where the electrons are indeed analogous to a rarefied gas. A quantum technique related in theme is the random phase approximation.

BBGKY hierarchy: In liquids and dense gases, it is not valid to immediately discard the correlations between particles after one collision. The BBGKY hierarchy (Bogoliubov–Born–Green–Kirkwood–Yvon hierarchy) gives a method for deriving Boltzmann-type equations but also extending them beyond the dilute gas case, to include correlations after a few collisions.

Keldysh formalism (a.k.a. NEGF, non-equilibrium Green functions): A quantum approach to including stochastic dynamics is found in the Keldysh formalism. This approach is often used in electronic quantum transport calculations.
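For reference, the Boltzmann transport equation for the one-particle distribution function f(r, v, t) has the standard schematic form:

$$ \frac{\partial f}{\partial t} + \mathbf{v}\cdot\nabla_{\mathbf r} f + \frac{\mathbf F}{m}\cdot\nabla_{\mathbf v} f = \left(\frac{\partial f}{\partial t}\right)_{\text{coll}}, $$

where the right-hand collision term, evaluated under Boltzmann's molecular-chaos assumption, is what drives f toward the Maxwell–Boltzmann equilibrium distribution.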


Near-equilibrium methods
Another important class of non-equilibrium statistical mechanical models deals with systems that are only very slightly perturbed from equilibrium. With very small perturbations, the response can be analysed in linear response theory. A remarkable result, as formalized by the fluctuation-dissipation theorem, is that the response of a system when near equilibrium is precisely related to the fluctuations that occur when the system is in total equilibrium. This provides an indirect avenue for obtaining numbers such as ohmic conductivity and thermal conductivity by extracting results from equilibrium statistical mechanics. Since equilibrium statistical mechanics is mathematically well defined and (in some cases) more amenable for calculations, the fluctuation-dissipation connection can be a convenient shortcut for calculations in near-equilibrium statistical mechanics. A few of the theoretical tools used to make this connection include:

Fluctuation–dissipation theorem
Onsager reciprocal relations
Green–Kubo relations
Landauer–Büttiker formalism
Mori–Zwanzig formalism
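Schematically, a Green–Kubo relation expresses a transport coefficient as a time integral of an equilibrium current autocorrelation function (the prefactor depends on which coefficient is in question); for example, one standard form for the electrical conductivity is

$$ \sigma = \frac{1}{V k_B T}\int_0^{\infty} \langle J_x(0)\,J_x(t)\rangle_{\mathrm{eq}}\; dt, $$

with J_x the total charge current and the average taken in the equilibrium ensemble, so that an equilibrium calculation yields a non-equilibrium transport property.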

Hybrid methods
An advanced approach uses a combination of stochastic methods and linear response theory. As an example, one approach to compute quantum coherence effects (weak localization, conductance fluctuations) in the conductance of an electronic system is the use of the Green-Kubo relations, with the inclusion of stochastic dephasing by interactions between various electrons by use of the Keldysh method.

Applications outside thermodynamics


The ensemble formalism can also be used to analyze general mechanical systems with uncertainty in knowledge about the state of a system. Ensembles are also used in: propagation of uncertainty over time, regression analysis of gravitational orbits, and ensemble forecasting of weather.
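A toy example of such an ensemble treatment (Python; the damped pendulum and all parameter values are arbitrary illustrative choices) propagates a cloud of uncertain initial conditions forward in time and tracks how the uncertainty about the state evolves:

import numpy as np

rng = np.random.default_rng(1)

# Damped pendulum: theta'' = -sin(theta) - b * theta'
b, dt, n_steps = 0.2, 0.01, 2000

# Ensemble of 500 systems with uncertain initial angle and velocity.
theta = rng.normal(1.0, 0.1, 500)
omega = rng.normal(0.0, 0.1, 500)

for step in range(n_steps):
    # Semi-implicit Euler keeps the integration stable.
    omega += dt * (-np.sin(theta) - b * omega)
    theta += dt * omega
    if step % 500 == 0:
        print(f"t={step*dt:5.1f}  mean(theta)={theta.mean():+.3f}  "
              f"std(theta)={theta.std():.3f}")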

History
In 1738, Swiss physicist and mathematician Daniel Bernoulli published Hydrodynamica which laid the basis for the kinetic theory of gases. In this work, Bernoulli posited the argument, still used to this day, that gases consist of great numbers of molecules moving in all directions, that their impact on a surface causes the gas pressure that we feel, and that what we experience as heat is simply the kinetic energy of their motion. In 1859, after reading a paper on the diffusion of molecules by Rudolf Clausius, Scottish physicist James Clerk Maxwell formulated the Maxwell distribution of molecular velocities, which gave the proportion of molecules having a certain velocity in a specific range. This was the first-ever statistical law in physics. Five years later, in 1864, Ludwig Boltzmann, a young student in Vienna, came across Maxwell's paper and was so inspired by it that he spent much of his life developing the subject further. Statistical mechanics proper was initiated in the 1870s with the work of Ludwig Boltzmann, much of which was collectively published in Boltzmann's 1896 Lectures on Gas Theory.[8] Boltzmann's original papers on the statistical interpretation of thermodynamics, the H-theorem, transport theory, thermal equilibrium, the equation of state of gases, and similar subjects, occupy about 2,000 pages in the proceedings of the Vienna Academy and other societies. Boltzmann introduced the concept of an equilibrium statistical ensemble and also investigated for the first time

non-equilibrium statistical mechanics, with his H-theorem. The term "statistical mechanics" was coined by the American mathematical physicist J. Willard Gibbs in 1902.[9] "Probabilistic mechanics" might today seem a more appropriate term, but "statistical mechanics" is firmly entrenched. Whereas Boltzmann had focussed almost entirely on the case of a macroscopic ideal gas, Gibbs' 1902 book formalized statistical mechanics as a fully general approach to address all mechanical systems, macroscopic or microscopic, gaseous or non-gaseous. Gibbs' methods were initially derived in the framework of classical mechanics; however, they were of such generality that they were found to adapt easily to the later quantum mechanics, and still form the foundation of statistical mechanics to this day.


Notes
[2] The term statistical mechanics is sometimes used to refer to only statistical thermodynamics. This article takes the broader view. By some definitions, statistical physics is an even broader term which statistically studies any type of physical system, but is often taken to be synonymous with statistical mechanics.
[3] The probabilities in quantum statistical mechanics should not be confused with quantum superposition. While a quantum ensemble can contain states with quantum superpositions, a single quantum state cannot be used to represent an ensemble.
[4] Statistical equilibrium should not be confused with mechanical equilibrium. The latter occurs when a mechanical system has completely ceased to evolve even on a microscopic scale, due to being in a state with a perfect balancing of forces. Statistical equilibrium generally involves states that are very far from mechanical equilibrium.
[5] J. Uffink, "Compendium of the foundations of classical statistical physics" (http://philsci-archive.pitt.edu/2691/1/UffinkFinal.pdf) (2006)
[6] The transitive thermal equilibrium (as in, "X is in thermal equilibrium with Y") used here means that the ensemble for the first system is not perturbed when the system is allowed to weakly interact with the second system.
[7] This is so unlikely as to be practically impossible. The statistical physicist Émile Borel noted that, compared to the improbabilities found in statistical mechanics, it would be more likely that monkeys typing randomly on a typewriter would happen to reproduce the books of the world. See infinite monkey theorem.
[8] (section 1.2)
[9] According to Gibbs, the term "statistical", in the context of mechanics, i.e. statistical mechanics, was first used by the Scottish physicist James Clerk Maxwell in 1871.

External links


Philosophy of Statistical Mechanics (http://plato.stanford.edu/entries/statphys-statmech/) article by Lawrence Sklar for the Stanford Encyclopedia of Philosophy.
Sklogwiki - Thermodynamics, statistical mechanics, and the computer simulation of materials (http://www.sklogwiki.org/). SklogWiki is particularly orientated towards liquids and soft condensed matter.
Statistical Thermodynamics (http://history.hyperjeff.net/statmech.html) - Historical Timeline
Thermodynamics and Statistical Mechanics (http://farside.ph.utexas.edu/teaching/sm1/statmech.pdf) by Richard Fitzpatrick
Lecture Notes in Statistical Mechanics and Mesoscopics (http://arxiv.org/abs/1107.0568) by Doron Cohen


Chemical Thermodynamics
Chemical thermodynamics is the study of the interrelation of heat and work with chemical reactions or with physical changes of state within the confines of the laws of thermodynamics. Chemical thermodynamics involves not only laboratory measurements of various thermodynamic properties, but also the application of mathematical methods to the study of chemical questions and the spontaneity of processes. The structure of chemical thermodynamics is based on the first two laws of thermodynamics. Starting from the first and second laws of thermodynamics, four equations called the "fundamental equations of Gibbs" can be derived. From these four, a multitude of equations, relating the thermodynamic properties of the thermodynamic system can be derived using relatively simple mathematics. This outlines the mathematical framework of chemical thermodynamics.
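For a multicomponent system, the four fundamental equations of Gibbs referred to here are usually written as

dU = T dS - P dV + Σi μi dNi
dH = T dS + V dP + Σi μi dNi
dA = -S dT - P dV + Σi μi dNi
dG = -S dT + V dP + Σi μi dNi

where U is the internal energy, H = U + PV the enthalpy, A = U - TS the Helmholtz free energy, G = H - TS the Gibbs free energy, and μi the chemical potential of the i-th chemical species.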

History
In 1865, the German physicist Rudolf Clausius, in his Mechanical Theory of Heat, suggested that the principles of thermochemistry, e.g. the heat evolved in combustion reactions, could be applied to the principles of thermodynamics.[1] Building on the work of Clausius, between the years 1873-76 the American mathematical physicist Willard Gibbs published a series of three papers, the most famous one being the paper On the Equilibrium of Heterogeneous Substances. In these papers, Gibbs showed how the first two laws of thermodynamics could be measured graphically and mathematically to determine both the thermodynamic equilibrium of chemical reactions as well as their tendencies to occur or proceed. Gibbs' collection of papers provided the first unified body of thermodynamic theorems from the principles developed by others, such as Clausius and Sadi Carnot. During the early 20th century, two major publications successfully applied the principles developed by Gibbs to chemical processes, and thus established the foundation of the science of chemical thermodynamics. The first was the 1923 textbook Thermodynamics and the Free Energy of Chemical Substances by Gilbert N. Lewis and Merle Randall. This book was responsible for supplanting the term chemical affinity with the term free energy in the English-speaking world. The second was the 1933 book Modern Thermodynamics by the methods of Willard Gibbs written by E. A. Guggenheim. In this manner, Lewis, Randall, and Guggenheim are considered as the founders of modern chemical thermodynamics because of the major contribution of these two books in unifying the application of thermodynamics to chemistry.


Overview
The primary objective of chemical thermodynamics is the establishment of a criterion for the determination of the feasibility or spontaneity of a given transformation.[2] In this manner, chemical thermodynamics is typically used to predict the energy exchanges that occur in the following processes:
1. Chemical reactions
2. Phase changes
3. The formation of solutions
The following state functions are of primary concern in chemical thermodynamics:
Internal energy (U)
Enthalpy (H)
Entropy (S)
Gibbs free energy (G)

Most identities in chemical thermodynamics arise from application of the first and second laws of thermodynamics, particularly the law of conservation of energy, to these state functions. The three laws of thermodynamics:
1. The energy of the universe is constant.
2. In any spontaneous process, there is always an increase in the entropy of the universe.
3. The entropy of a perfect crystal at 0 kelvin is zero.
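A small worked example (Python; the rounded figures for the melting of ice are standard textbook values) shows how these state functions combine in the spontaneity criterion ΔG = ΔH - TΔS at constant temperature and pressure:

# Spontaneity check via delta_G = delta_H - T * delta_S at constant T, P.
# Rounded values for ice -> water: dH = +6010 J/mol, dS = +22.0 J/(mol K).
delta_H = 6010.0      # J/mol
delta_S = 22.0        # J/(mol K)

for T in (263.15, 273.15, 283.15):   # -10 C, 0 C, +10 C
    delta_G = delta_H - T * delta_S
    verdict = ("spontaneous" if delta_G < 0
               else "equilibrium" if abs(delta_G) < 50
               else "non-spontaneous")
    print(f"T = {T:6.2f} K   dG = {delta_G:+7.1f} J/mol   {verdict}")

As expected, melting is non-spontaneous below 273.15 K, spontaneous above it, and balanced at the melting point itself.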

Chemical energy
Chemical energy is the potential of a chemical substance to undergo a transformation through a chemical reaction or to transform other chemical substances. Breaking or making of chemical bonds involves energy, which may be either absorbed or evolved from a chemical system. Energy that can be released (or absorbed) because of a reaction between a set of chemical substances is equal to the difference between the energy content of the products and the reactants. This change in energy is called the change in internal energy of a chemical reaction:

ΔU = Uf(products) - Uf(reactants)

where Uf(reactants) is the internal energy of formation of the reactant molecules, which can be calculated from the bond energies of the various chemical bonds of the molecules under consideration, and Uf(products) is the internal energy of formation of the product molecules. The internal energy change of a process is equal to the heat change if it is measured under conditions of constant volume, as in a closed rigid container such as a bomb calorimeter. However, under conditions of constant pressure, as in reactions in vessels open to the atmosphere, the measured heat change is not always equal to the internal energy change, because pressure-volume work also releases or absorbs energy. (The heat change at constant pressure is called the enthalpy change; in this case the enthalpy of formation). Another useful term is the heat of combustion, which is the energy released due to a combustion reaction and often applied in the study of fuels. Food is similar to hydrocarbon fuel and carbohydrate fuels, and when it is oxidized, its caloric content is similar (though not assessed in the same way as a hydrocarbon fuel; see food energy). In chemical thermodynamics the term used for the chemical potential energy is chemical potential, and for chemical transformation an equation most often used is the Gibbs-Duhem equation.
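For ideal-gas reactions the two heats differ by the pressure-volume term, ΔH = ΔU + Δn_gas RT, where Δn_gas is the change in the number of moles of gas. A minimal sketch (Python; the propane combustion figures are rounded, assumed values used only for illustration):

# Convert a constant-volume heat (bomb calorimeter, dU) to a
# constant-pressure heat (dH) for an ideal-gas reaction:
#     dH = dU + d(n_gas) * R * T
# Example (values assumed for illustration): combustion of propane,
#   C3H8(g) + 5 O2(g) -> 3 CO2(g) + 4 H2O(l),  d(n_gas) = 3 - 6 = -3.
R = 8.314             # J/(mol K)
T = 298.15            # K
delta_n_gas = 3 - 6
delta_U = -2212.9e3   # J/mol, hypothetical bomb-calorimeter result

delta_H = delta_U + delta_n_gas * R * T
print(f"dH = {delta_H/1000:.1f} kJ/mol")   # about -2220 kJ/mol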


Chemical reactions
In most cases of interest in chemical thermodynamics there are internal degrees of freedom and processes, such as chemical reactions and phase transitions, which always create entropy unless they are at equilibrium, or are maintained at a "running equilibrium" through "quasi-static" changes by being coupled to constraining devices, such as pistons or electrodes, to deliver and receive external work. Even for homogeneous "bulk" materials, the free energy functions depend on the composition, as do all the extensive thermodynamic potentials, including the internal energy. If the quantities {Ni}, the number of chemical species, are omitted from the formulae, it is impossible to describe compositional changes.

Gibbs function
For a "bulk" (unstructured) system they are the last remaining extensive variables. For an unstructured, homogeneous "bulk" system, there are still various extensive compositional variables {Ni} that G depends on, which specify the composition, the amounts of each chemical substance, expressed as the numbers of molecules present or (dividing by Avogadro's number) the numbers of moles:

G = G(T, P, {Ni})

For the case where only PV work is possible,

dG = -S dT + V dP + Σi μi dNi

in which μi is the chemical potential for the i-th component in the system,

μi = (∂G/∂Ni)T,P,Nj≠i

The expression for dG is especially useful at constant T and P, conditions which are easy to achieve experimentally and which approximate the conditions in living creatures:

(dG)T,P = Σi μi dNi

Chemical affinity
While this formulation is mathematically defensible, it is not particularly transparent since one does not simply add or remove molecules from a system. There is always a process involved in changing the composition; e.g., a chemical reaction (or many), or movement of molecules from one phase (liquid) to another (gas or solid). We should find a notation which does not seem to imply that the amounts of the components {Ni} can be changed independently. All real processes obey conservation of mass, and in addition, conservation of the numbers of atoms of each kind. Whatever molecules are transferred to or from should be considered part of the "system". Consequently we introduce an explicit variable to represent the degree of advancement of a process, a progress variable ξ for the extent of reaction (Prigogine & Defay, p.18; Prigogine, pp.4-7; Guggenheim, p.37.62), and to the use of the partial derivative ∂G/∂ξ (in place of the widely used "ΔG", since the quantity at issue is not a finite change). The result is an understandable expression for the dependence of dG on chemical reactions (or other processes). If there is just one reaction,

dG = -S dT + V dP + (∂G/∂ξ)T,P dξ

If we introduce the stoichiometric coefficient for the i-th component in the reaction,

νi = ∂Ni/∂ξ

which tells how many molecules of i are produced or consumed, we obtain an algebraic expression for the partial derivative:

(∂G/∂ξ)T,P = Σi νi μi = -A


where (De Donder; Prigogine & Defay, p.69; Guggenheim, pp.37,240) we introduce a concise and historical name for this quantity, the "affinity", symbolized by A, as introduced by Théophile de Donder in 1923. The minus sign comes from the fact that the affinity was defined to represent the rule that spontaneous changes will ensue only when the change in the Gibbs free energy of the process is negative, meaning that the chemical species have a positive affinity for each other. The differential for G takes on a simple form which displays its dependence on compositional change:

dG = -S dT + V dP - A dξ
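As an illustrative example (any reaction would do), consider ammonia synthesis, N2 + 3 H2 ⇌ 2 NH3, for which ν(N2) = -1, ν(H2) = -3 and ν(NH3) = +2, so that

A = -(∂G/∂ξ)T,P = -(2 μNH3 - μN2 - 3 μH2) = μN2 + 3 μH2 - 2 μNH3

The reaction can advance spontaneously (dξ > 0) only while A > 0, that is, while the combined chemical potential of the reactants exceeds that of the products; at equilibrium A = 0.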

If there are a number of chemical reactions going on simultaneously, as is usually the case,

dG = -S dT + V dP - Σj Aj dξj

with a set of reaction coordinates {ξj}, avoiding the notion that the amounts of the components {Ni} can be changed independently. The expressions above are equal to zero at thermodynamic equilibrium, while in the general case for real systems, they are negative because all chemical reactions proceeding at a finite rate produce entropy. This can be made even more explicit by introducing the reaction rates dξj/dt. For each and every physically independent process (Prigogine & Defay, p.38; Prigogine, p.24),

Aj (dξj/dt) ≥ 0

This is a remarkable result since the chemical potentials are intensive system variables, depending only on the local molecular milieu. They cannot "know" whether the temperature and pressure (or any other system variables) are going to be held constant over time. It is a purely local criterion and must hold regardless of any such constraints. Of course, it could have been obtained by taking partial derivatives of any of the other fundamental state functions, but nonetheless is a general criterion for (T times) the entropy production from that spontaneous process; or at least any part of it that is not captured as external work. (See System constraints below.) We now relax the requirement of a homogeneous bulk system by letting the chemical potentials and the affinity apply to any locality in which a chemical reaction (or any other process) is occurring. By accounting for the entropy production due to irreversible processes, the inequality for dG is now replaced by an equality:

dG = -S dT + V dP - Σj Aj dξj

or, at constant T and P,

(dG)T,P = -Σj Aj dξj
Any decrease in the Gibbs function of a system is the upper limit for any isothermal, isobaric work that can be captured in the surroundings, or it may simply be dissipated, appearing as T times a corresponding increase in the entropy of the system and/or its surroundings. Or it may go partly toward doing external work and partly toward creating entropy. The important point is that the extent of reaction for a chemical reaction may be coupled to the displacement of some external mechanical or electrical quantity in such a way that one can advance only if the other one also does. The coupling may occasionally be rigid, but it is often flexible and variable.

Solutions
In solution chemistry and biochemistry, the Gibbs free energy decrease (∂G/∂ξ, in molar units, denoted cryptically by ΔG) is commonly used as a surrogate for (T times) the entropy produced by spontaneous chemical reactions in situations where there is no work being done; or at least no "useful" work; i.e., other than perhaps some PdV. The assertion that all spontaneous reactions have a negative ΔG is merely a restatement of the fundamental thermodynamic relation, giving it the physical dimensions of energy and somewhat obscuring its significance in terms of entropy. When there is no useful work being done, it would be less misleading to use the Legendre

transforms of the entropy appropriate for constant T, or for constant T and P, the Massieu functions -F/T and -G/T respectively.


Non-equilibrium
Generally the systems treated with conventional chemical thermodynamics are either at equilibrium or near equilibrium. Ilya Prigogine developed the thermodynamic treatment of open systems that are far from equilibrium. In doing so he discovered phenomena and structures of completely new and completely unexpected types. His generalized, nonlinear and irreversible thermodynamics has found surprising applications in a wide variety of fields. Non-equilibrium thermodynamics has been applied to explain how ordered structures, e.g. biological systems, can develop from disorder. Even if Onsager's relations are utilized, the classical principles of equilibrium thermodynamics still show that linear systems close to equilibrium always develop into states of disorder which are stable to perturbations and cannot explain the occurrence of ordered structures. Prigogine called these systems dissipative systems, because they are formed and maintained by the dissipative processes which take place because of the exchange of energy between the system and its environment, and because they disappear if that exchange ceases. They may be said to live in symbiosis with their environment. The method which Prigogine used to study the stability of the dissipative structures to perturbations is of very great general interest. It makes it possible to study the most varied problems, such as city traffic problems, the stability of insect communities, the development of ordered biological structures and the growth of cancer cells, to mention but a few examples.

System constraints
In this regard, it is crucial to understand the role of walls and other constraints, and the distinction between independent processes and coupling. Contrary to the clear implications of many reference sources, the previous analysis is not restricted to homogeneous, isotropic bulk systems which can deliver only PdV work to the outside world, but applies even to the most structured systems. There are complex systems with many chemical "reactions" going on at the same time, some of which are really only parts of the same, overall process. An independent process is one that could proceed even if all others were unaccountably stopped in their tracks. Understanding this is perhaps a thought experiment in chemical kinetics, but actual examples exist. A gas reaction which results in an increase in the number of molecules will lead to an increase in volume at constant external pressure. If it occurs inside a cylinder closed with a piston, the equilibrated reaction can proceed only by doing work against an external force on the piston. The extent variable for the reaction can increase only if the piston moves, and conversely, if the piston is pushed inward, the reaction is driven backwards. Similarly, a redox reaction might occur in an electrochemical cell with the passage of current in wires connecting the electrodes. The half-cell reactions at the electrodes are constrained if no current is allowed to flow. The current might be dissipated as joule heating, or it might in turn run an electrical device like a motor doing mechanical work. An automobile lead-acid battery can be recharged, driving the chemical reaction backwards. In this case as well, the reaction is not an independent process. Some, perhaps most, of the Gibbs free energy of reaction may be delivered as external work. The hydrolysis of ATP to ADP and phosphate can drive the force times distance work delivered by living muscles, and synthesis of ATP is in turn driven by a redox chain in mitochondria and chloroplasts, which involves the transport of ions across the membranes of these cellular organelles. The coupling of processes here, and in the previous examples, is often not complete. Gas can leak slowly past a piston, just as it can slowly leak out of a rubber balloon. Some reaction may occur in a battery even if no external current is flowing. There is usually a coupling coefficient, which may depend on relative rates, which determines what percentage of the driving free energy is turned into external work, or captured as "chemical work"; a misnomer for the free energy of another chemical

process.


References
[1] Clausius, R. (1865). The Mechanical Theory of Heat with its Applications to the Steam Engine and to Physical Properties of Bodies. London: John van Voorst, 1 Paternoster Row. MDCCCLXVII.
[2] Klotz, I. (1950). Chemical Thermodynamics. New York: Prentice-Hall, Inc.

Further reading
Herbert B. Callen (1960). Thermodynamics. Wiley & Sons. The clearest account of the logical foundations of the subject. ISBN 0-471-13035-4. Library of Congress Catalog No. 60-5597
Ilya Prigogine & R. Defay, translated by D.H. Everett; Chapter IV (1954). Chemical Thermodynamics. Longmans, Green & Co. Exceptionally clear on the logical foundations as applied to chemistry; includes non-equilibrium thermodynamics.
Ilya Prigogine (1967). Thermodynamics of Irreversible Processes, 3rd ed. Interscience: John Wiley & Sons. A simple, concise monograph explaining all the basic ideas. Library of Congress Catalog No. 67-29540
E.A. Guggenheim (1967). Thermodynamics: An Advanced Treatment for Chemists and Physicists, 5th ed. North Holland; John Wiley & Sons (Interscience). A remarkably astute treatise. Library of Congress Catalog No. 67-20003
Th. De Donder (1922). Bull. Ac. Roy. Belg. (Cl. Sc.) (5) 7: 197, 205.

External links
Chemical Thermodynamics (http://www.shodor.org/UNChem/advanced/thermo/index.html) - University of North Carolina
Chemical energetics (http://www.chem1.com/acad/webtext/chemeq/) (Introduction to thermodynamics and the First Law)
Thermodynamics of chemical equilibrium (http://www.chem1.com/acad/webtext/thermeq/) (Entropy, Second Law and free energy)


Equilibrium Thermodynamics

Equilibrium thermodynamics is the systematic study of transformations of matter and energy in systems as they approach equilibrium. The word equilibrium implies a state of balance. Equilibrium thermodynamics, in origins, derives from analysis of the Carnot cycle. Here, typically a system, such as a cylinder of gas, is set out of balance via heat input from a combustion reaction. Then, through a series of steps, as the system settles into its final equilibrium state, work is extracted. In an equilibrium state there are no unbalanced potentials, or driving forces, within the system. A central aim in equilibrium thermodynamics is: given a system in a well-defined initial state, subject to accurately specified constraints, to calculate what the state of the system will be once it has reached equilibrium. An equilibrium state is obtained by seeking the extrema of a thermodynamic potential function, whose nature depends on the constraints imposed on the system. For example, a chemical reaction at constant temperature and pressure will reach equilibrium at a minimum of its components' Gibbs free energy and a maximum of their entropy. Equilibrium thermodynamics differs from non-equilibrium thermodynamics in that, with the latter, the state of the system under investigation will typically not be uniform, but will vary locally in quantities such as energy, entropy, and temperature as gradients are imposed by dissipative thermodynamic fluxes. In equilibrium thermodynamics, by contrast, the state of the system is considered uniform throughout, defined macroscopically by such quantities as temperature, pressure, or volume. Here, typically, systems are studied as they change from one state to another.

Ruppeiner geometry is a type of information geometry used to study thermodynamics. It claims that thermodynamic systems can be represented by Riemannian geometry, and that statistical properties can be derived from the model. This geometrical model is based on the idea that there exist equilibrium states which can be represented by points on a two-dimensional surface, and that the distance between these equilibrium states is related to the fluctuation between them.
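The extremum-seeking prescription can be made concrete with a sketch (Python with SciPy; the isomerization reaction A ⇌ B and its standard free energy are assumptions made for the example). Minimizing the Gibbs function of an ideal-gas mixture over the extent of reaction recovers the composition predicted by the equilibrium constant:

import numpy as np
from scipy.optimize import minimize_scalar

R, T = 8.314, 298.15
dG0 = -2000.0   # standard Gibbs free energy of A -> B, J/mol (assumed)

def G(xi):
    """Gibbs energy (per mole, relative to pure A) for A <-> B at a
    total pressure of 1 bar, with extent of reaction xi in (0, 1)."""
    nA, nB = 1.0 - xi, xi
    return nB * dG0 + R * T * (nA * np.log(nA) + nB * np.log(nB))

res = minimize_scalar(G, bounds=(1e-9, 1 - 1e-9), method="bounded")

K = np.exp(-dG0 / (R * T))            # analytic equilibrium constant
print(f"numerical  xi_eq = {res.x:.4f}")
print(f"analytic   xi_eq = {K / (1 + K):.4f}")

With the assumed dG0 = -2000 J/mol, both routes give an equilibrium extent of about 0.69.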



Non-equilibrium Thermodynamics

Non-equilibrium thermodynamics is a branch of thermodynamics that deals with thermodynamic systems that are not in thermodynamic equilibrium. Most systems found in nature are not in thermodynamic equilibrium; for they are changing or can be triggered to change over time, and are continuously and discontinuously subject to flux of matter and energy to and from other systems and to chemical reactions. Non-equilibrium thermodynamics is concerned with transport processes and with the rates of chemical reactions.[1] Many natural systems still today remain beyond the scope of currently known macroscopic thermodynamic methods. The thermodynamic study of non-equilibrium systems requires more general concepts than are dealt with by equilibrium thermodynamics. One fundamental difference between equilibrium thermodynamics and non-equilibrium thermodynamics lies in the behaviour of inhomogeneous systems, which require for their study knowledge of rates of reaction which are not considered in equilibrium thermodynamics of homogeneous systems. This is discussed below. Another fundamental difference is the difficulty in defining entropy in macroscopic terms for systems not in thermodynamic equilibrium.[2][3]


Overview
Non-equilibrium thermodynamics is a work in progress, not an established edifice. This article will try to sketch some approaches to it and some concepts important for it. Some concepts of particular importance for non-equilibrium thermodynamics include time rate of dissipation of energy (Rayleigh 1873, Onsager 1931, also[4][5]), time rate of entropy production (Onsager 1931), thermodynamic fields,[6][7][8] dissipative structure, and non-linear dynamical structure. Of interest is the thermodynamic study of non-equilibrium steady states, in which entropy production and some flows are non-zero, but there is no time variation. One initial approach to non-equilibrium thermodynamics is sometimes called 'classical irreversible thermodynamics'. There are other approaches to non-equilibrium thermodynamics, for example extended irreversible thermodynamics, and generalized thermodynamics,[9] but they are hardly touched on in the present article.

Quasi-radiationless non-equilibrium thermodynamics of matter in laboratory conditions


According to Wildt (see also Essex), current versions of non-equilibrium thermodynamics ignore radiant heat; they can do so because they refer to laboratory quantities of matter under laboratory conditions with temperatures well below those of stars. At laboratory temperatures, in laboratory quantities of matter, thermal radiation is weak and can be practically nearly ignored. But, for example, atmospheric physics is concerned with large amounts of matter, occupying cubic kilometers, that, taken as a whole, are not within the range of laboratory quantities; then thermal radiation cannot be ignored.

Local equilibrium thermodynamics


The terms 'classical irreversible thermodynamics' and 'local equilibrium thermodynamics' are sometimes used to refer to a version of non-equilibrium thermodynamics that demands certain simplifying assumptions, as follows. The assumptions have the effect of making each very small volume element of the system effectively homogeneous, or well-mixed, or without an effective spatial structure, and without kinetic energy of bulk flow or of diffusive flux. Even within the thought-frame of classical irreversible thermodynamics, care is needed in choosing the independent variables[10] for systems. In some writings, it is assumed that the intensive variables of equilibrium thermodynamics are sufficient as the independent variables for the task (such variables are considered to have no 'memory', and do not show hysteresis); in particular, local flow intensive variables are not admitted as independent variables; local flows are considered as dependent on quasi-static local intensive variables. (In other writings, local flow variables are considered; these might be considered as classical by analogy with the time-invariant long-term time-averages of flows produced by endlessly repeated cyclic processes; examples with flows are in the thermoelectric phenomena known as the Seebeck and the Peltier effects, considered by Kelvin in the nineteenth century and by Onsager in the twentieth.[11] These effects occur at metal junctions, which were originally effectively treated as two-dimensional surfaces, with no spatial volume, and no spatial variation.) Also it is assumed that the local entropy density is the same function of the other local intensive variables as in equilibrium; this is called the local thermodynamic equilibrium assumption[][][][12][13][14] (see also Keizer (1987)[15]). Radiation is ignored because it is transfer of energy between regions, which can be remote from one another. In the classical irreversible thermodynamic approach, there is allowed very small spatial variation, from very small volume element to adjacent very small volume element, but it is assumed that the global entropy of the system can be found by simple spatial integration of the local entropy density; this means that spatial structure cannot contribute as it properly should to the global entropy assessment for the system. This approach assumes spatial and temporal continuity and even differentiability of locally defined intensive variables such as temperature and internal energy density. All of these are very stringent demands. Consequently, this approach can deal with only a very limited range of phenomena. This approach is nevertheless valuable because it can deal well with some macroscopically observable phenomena.


Extended irreversible thermodynamics


Extended irreversible thermodynamics is a branch of non-equilibrium thermodynamics that goes outside the restriction to the local equilibrium hypothesis. The space of state variables is enlarged by including the fluxes of mass, momentum and energy and eventually higher order fluxes. The formalism is well-suited for describing high-frequency processes and materials at small length scales.

Basic concepts
There are many examples of stationary non-equilibrium systems, some very simple, like a system confined between two thermostats at different temperatures or the ordinary Couette flow, a fluid enclosed between two flat walls moving in opposite directions and defining non-equilibrium conditions at the walls. Laser action is also a non-equilibrium process, but it depends on departure from local thermodynamic equilibrium and is thus beyond the scope of classical irreversible thermodynamics; here a strong temperature difference is maintained between two molecular degrees of freedom (with molecular laser, vibrational and rotational molecular motion), the requirement for two component 'temperatures' in the one small region of space, precluding local thermodynamic equilibrium, which demands that only one temperature be needed. Damping of acoustic perturbations or shock waves are non-stationary non-equilibrium processes. Driven complex fluids, turbulent systems and glasses are other examples of non-equilibrium systems.

The mechanics of macroscopic systems depends on a number of extensive quantities. It should be stressed that all systems are permanently interacting with their surroundings, thereby causing unavoidable fluctuations of extensive quantities. Equilibrium conditions of thermodynamic systems are related to the maximum property of the entropy. If the only extensive quantity that is allowed to fluctuate is the internal energy, all the other ones being kept strictly constant, the temperature of the system is measurable and meaningful. The system's properties are then most conveniently described using the thermodynamic potential Helmholtz free energy (A = U - TS), a Legendre transformation of the energy. If, next to fluctuations of the energy, the macroscopic dimensions (volume) of the system are left fluctuating, we use the Gibbs free energy (G = U + PV - TS), where the system's properties are determined both by the temperature and by the pressure.

Non-equilibrium systems are much more complex and they may undergo fluctuations of more extensive quantities. The boundary conditions impose on them particular intensive variables, like temperature gradients or distorted collective motions (shear motions, vortices, etc.), often called thermodynamic forces. If free energies are very useful in equilibrium thermodynamics, it must be stressed that there is no general law defining stationary non-equilibrium properties of the energy as is the second law of thermodynamics for the entropy in equilibrium thermodynamics. That is why in such cases a more generalized Legendre transformation should be considered. This is the extended Massieu potential. By definition, the entropy (S) is a function of the collection of extensive quantities Ei. Each extensive quantity Ei has a conjugate intensive variable Ii (a restricted definition of intensive variable is used here, by comparison with the standard definition) so that:

Ii = ∂S/∂Ei

We then define the extended Massieu function as follows:

kB M = S - Σi Ii Ei

where kB is Boltzmann's constant, whence

kB dM = -Σi Ei dIi

The independent variables are the intensities. Intensities are global values, valid for the system as a whole. When boundaries impose different local conditions on the system (e.g. temperature differences), there are intensive variables representing the average value and others

representing gradients or higher moments. The latter are the thermodynamic forces driving fluxes of extensive properties through the system. It may be shown that the Legendre transformation changes the maximum condition of the entropy (valid at equilibrium) into a minimum condition of the extended Massieu function for stationary states, no matter whether at equilibrium or not.


Stationary states, fluctuations, and stability


In thermodynamics one is often interested in a stationary state of a process, allowing that the stationary state include the occurrence of unpredictable and experimentally unreproducible fluctuations in the state of the system. The fluctuations are due to the system's internal sub-processes and to exchange of matter or energy with the system's surroundings that create the constraints that define the process. If the stationary state of the process is stable, then the unreproducible fluctuations involve local transient decreases of entropy. The reproducible response of the system is then to increase the entropy back to its maximum by irreversible processes: the fluctuation cannot be reproduced with a significant level of probability. Fluctuations about stable stationary states are extremely small except near critical points (Kondepudi and Prigogine 1998, page 323).[16] The stable stationary state has a local maximum of entropy and is locally the most reproducible state of the system. There are theorems about the irreversible dissipation of fluctuations. Here 'local' means local with respect to the abstract space of thermodynamic coordinates of state of the system. If the stationary state is unstable, then any fluctuation will almost surely trigger the virtually explosive departure of the system from the unstable stationary state. This can be accompanied by increased export of entropy.

Local thermodynamic equilibrium


The scope of present-day non-equilibrium thermodynamics does not cover all physical processes. A condition for the validity of many studies in non-equilibrium thermodynamics of matter is that they deal with what is known as local thermodynamic equilibrium.

Local thermodynamic equilibrium of ponderable matter


Local thermodynamic equilibrium of matter (see also Keizer (1987)) means that conceptually, for study and analysis, the system can be spatially and temporally divided into 'cells' or 'micro-phases' of small (infinitesimal) size, in which classical thermodynamical equilibrium conditions for matter are fulfilled to good approximation. These conditions are unfulfilled, for example, in very rarefied gases, in which molecular collisions are infrequent; and in the boundary layers of a star, where radiation is passing energy to space; and for interacting fermions at very low temperature, where dissipative processes become ineffective. When these 'cells' are defined, one admits that matter and energy may pass freely between contiguous 'cells', slowly enough to leave the 'cells' in their respective individual local thermodynamic equilibria with respect to intensive variables. One can think here of two 'relaxation times' separated by orders of magnitude.[17] The longer relaxation time is of the order of magnitude of times taken for the macroscopic dynamical structure of the system to change. The shorter is of the order of magnitude of times taken for a single 'cell' to reach local thermodynamic equilibrium. If these two relaxation times are not well separated, then the classical non-equilibrium thermodynamical concept of local thermodynamic equilibrium loses its meaning and other approaches have to be proposed, see for instance Extended irreversible thermodynamics. For example, in the atmosphere, the speed of sound is much greater than the wind speed; this favours the idea of local thermodynamic equilibrium of matter for atmospheric heat transfer studies at altitudes below about 60 km where sound propagates, but not above 100 km, where, because of the paucity of intermolecular collisions, sound does not propagate.


Milne's 1928 definition of local thermodynamic equilibrium in terms of radiative equilibrium


Milne (1928), thinking about stars, gave a definition of 'local thermodynamic equilibrium' in terms of the thermal radiation of the matter in each small local 'cell'. He defined 'local thermodynamic equilibrium' in a 'cell' by requiring that it macroscopically absorb and spontaneously emit radiation as if it were in radiative equilibrium in a cavity at the temperature of the matter of the 'cell'. Then it strictly obeys Kirchhoff's law of equality of radiative emissivity and absorptivity, with a black body source function. The key to local thermodynamic equilibrium here is that the rate of collisions of ponderable matter particles such as molecules should far exceed the rates of creation and annihilation of photons.

Entropy in evolving systems


It is pointed out[18] by W.T. Grandy Jr that entropy, though it may be defined for a non-equilibrium system, is, when strictly considered, only a macroscopic quantity that refers to the whole system, and is not a dynamical variable and in general does not act as a local potential that describes local physical forces. Under special circumstances, however, one can metaphorically think as if the thermal variables behaved like local physical forces. The approximation that constitutes classical irreversible thermodynamics is built on this metaphoric thinking.

Flows and forces


The fundamental relation of classical equilibrium thermodynamics[19]

dS = (1/T) dU + (p/T) dV - Σi (μi/T) dNi

expresses the change in entropy S of a system as a function of the intensive quantities temperature T, pressure p and the i-th chemical potential μi, and of the differentials of the extensive quantities energy U, volume V and the i-th particle number Ni.

Following Onsager (1931, I), let us extend our considerations to thermodynamically non-equilibrium systems. As a basis, we need locally defined versions of the extensive macroscopic quantities U, V and Ni and of the intensive macroscopic quantities T, p and μi. For classical non-equilibrium studies, we will consider some new locally defined intensive macroscopic variables. We can, under suitable conditions, derive these new variables by locally defining the gradients and flux densities of the basic locally defined macroscopic quantities. Such locally defined gradients of intensive macroscopic variables are called 'thermodynamic forces'. They 'drive' flux densities, perhaps misleadingly often called 'fluxes', which are dual to the forces. These quantities are defined in the article on Onsager reciprocal relations. Establishing the relation between such forces and flux densities is a problem in statistical mechanics. Flux densities Ji may be coupled. The article on Onsager reciprocal relations considers the stable near-steady thermodynamically non-equilibrium regime, which has dynamics linear in the forces and flux densities. In stationary conditions, such forces and associated flux densities are by definition time invariant, as also are the system's locally defined entropy and rate of entropy production. Notably, according to Ilya Prigogine and others, when an open system is in conditions that allow it to reach a stable stationary thermodynamically non-equilibrium state, it organizes itself so as to minimize total entropy production defined locally. This is considered further below. One wants to take the analysis to the further stage of describing the behaviour of surface and volume integrals of non-stationary local quantities; these integrals are macroscopic fluxes and production rates. In general the dynamics of these integrals are not adequately described by linear equations, though in special cases they can be so described.


The Onsager relations


Following Section III of Rayleigh (1873), Onsager (1931, I) showed that in the regime where both the flows (Ji) are small and the thermodynamic forces (Fi) vary slowly, the rate of creation of entropy σ is linearly related to the flows:

σ = Σi Ji ∂Fi/∂xi

and the flows are related to the gradient of the forces, parametrized by a matrix of coefficients conventionally denoted L:

Ji = Σj Lij ∂Fj/∂xj

from which it follows that:

σ = Σi,j Lij (∂Fi/∂xi)(∂Fj/∂xj)

The second law of thermodynamics requires that the matrix L be positive definite. Statistical mechanics considerations involving microscopic reversibility of dynamics imply that the matrix L is symmetric. This fact is called the Onsager reciprocal relations.
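These two requirements, reciprocity and positive definiteness, are easy to exhibit numerically. In the sketch below (Python; the coefficient values are arbitrary illustrations), any choice of thermodynamic forces produces fluxes whose entropy production is non-negative:

import numpy as np

# Phenomenological coefficients for two coupled processes, e.g. heat and
# particle flow (values are arbitrary illustrations). Onsager reciprocity
# demands L = L^T; the second law demands that L be positive definite.
L = np.array([[2.0, 0.5],
              [0.5, 1.0]])

assert np.allclose(L, L.T)                  # reciprocal relations
assert np.all(np.linalg.eigvalsh(L) > 0)    # positive definite

rng = np.random.default_rng(2)
for _ in range(3):
    F = rng.normal(size=2)       # thermodynamic forces (gradients)
    J = L @ F                    # linear flux-force relations
    sigma = J @ F                # rate of entropy production
    print(f"forces {F},  entropy production sigma = {sigma:.3f}  (>= 0)")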

Speculated thermodynamic extremum principles for energy dissipation and entropy production
Jou, Casas-Vázquez, Lebon (1993) note that classical non-equilibrium thermodynamics "has seen an extraordinary expansion since the second world war", and they refer to the Nobel prizes for work in the field awarded to Lars Onsager and Ilya Prigogine. Martyushev and Seleznev (2006) note the importance of entropy in the evolution of natural dynamical structures: "Great contribution has been done in this respect by two scientists, namely Clausius, ... , and Prigogine." Prigogine in his 1977 Nobel Lecture[20] said: "... non-equilibrium may be a source of order. Irreversible processes may lead to a new type of dynamic states of matter which I have called dissipative structures." Glansdorff and Prigogine (1971) wrote on page xx: "Such 'symmetry breaking instabilities' are of special interest as they lead to a spontaneous 'self-organization' of the system both from the point of view of its space order and its function." Analyzing the Rayleigh-Bénard convection cell phenomenon, Chandrasekhar (1961)[21] wrote "Instability occurs at the minimum temperature gradient at which a balance can be maintained between the kinetic energy dissipated by viscosity and the internal energy released by the buoyancy force." With a temperature gradient greater than the minimum, viscosity can dissipate kinetic energy as fast as it is released by convection due to buoyancy, and a steady state with convection is stable. The steady state with convection is often a pattern of macroscopically visible hexagonal cells with convection up or down in the middle or at the 'walls' of each cell, depending on the temperature dependence of the quantities; in the atmosphere under various conditions it seems that either is possible. (Some details are discussed by Lebon, Jou, and Casas-Vázquez (2008) on pages 143-158.) With a temperature gradient less than the minimum, viscosity and heat conduction are so effective that convection cannot keep going. Glansdorff and Prigogine (1971) on page xv wrote "Dissipative structures have a quite different [from equilibrium structures] status: they are formed and maintained through the effect of exchange of energy and matter in non-equilibrium conditions." They were referring to the dissipation function of Rayleigh (1873) that was used also by Onsager (1931, I, 1931, II). On pages 78-80 of their book Glansdorff and Prigogine (1971) consider the stability of laminar flow that was pioneered by Helmholtz; they concluded that at a stable steady state of sufficiently slow laminar flow, the dissipation function was minimum.

These advances have led to proposals for various extremal principles for the "self-organized" régimes that are possible for systems governed by classical linear and non-linear non-equilibrium thermodynamical laws, with stable stationary régimes being particularly investigated. Convection introduces effects of momentum which appear as non-linearity in the dynamical equations. In the more restricted case of no convective motion, Prigogine wrote of "dissipative structures". Šilhavý (1997)[22] offers the opinion that "... the extremum principles of [equilibrium] thermodynamics ... do not have any counterpart for [non-equilibrium] steady states (despite many claims in the literature)."


Prigogine's proposed theorem of minimum entropy production


In 1945 Prigogine (see also Prigogine (1947)[23]) proposed a Theorem of Minimum Entropy Production which applies only to the linear regime near a stationary thermodynamically non-equilibrium state. The proof offered by Prigogine is open to serious criticism. A critical and unsupportive discussion of Prigogine's proposal is offered by Grandy (2008). The rate of entropy production has been shown to be a non-monotonic function of time during the approach to steady state heat convection, which contradicts the proposal that it is an extremum in the optimum non-equilibrium state.

Speculated principles of maximum entropy production and minimum energy dissipation


Onsager (1931, I) wrote: "Thus the vector field J of the heat flow is described by the condition that the rate of increase of entropy, less the dissipation function, be a maximum." Careful note needs to be taken of the opposite signs of the rate of entropy production and of the dissipation function, appearing in the left-hand side of Onsager's equation (5.13) on Onsager's page 423. Although largely unnoticed at the time, Ziegler proposed an idea early with his work in the mechanics of plastics in 1961, and later in his book on thermomechanics revised in 1983,[7] and in various papers (e.g., Ziegler (1987)). Ziegler never stated his principle as a universal law but he may have intuited this. He demonstrated his principle using vector space geometry based on an orthogonality condition which only worked in systems where the velocities were defined as a single vector or tensor, and thus, as he wrote at p.347, was impossible to test by means of macroscopic mechanical models, and was, as he pointed out, invalid in compound systems where several elementary processes take place simultaneously. In relation to the earth's atmospheric energy transport process, according to Tuck (2008),[24] "On the macroscopic level, the way has been pioneered by a meteorologist (Paltridge 1975, 2001)." Initially Paltridge (1975) used the terminology "minimum entropy exchange", but after that, for example in Paltridge (1978), and in Paltridge (1979), he used the now current terminology "maximum entropy production" to describe the same thing. The logic of Paltridge's earlier work is open to serious criticism. Nicolis and Nicolis (1980) discuss Paltridge's work, and they comment that the behaviour of the entropy production is far from simple and universal. Later work by Paltridge focuses more on the idea of a dissipation function than on the idea of rate of production of entropy. Sawada (1981),[25] also in relation to the earth's atmospheric energy transport process, postulating a principle of largest amount of entropy increment per unit time, cites work in fluid mechanics by Malkus and Veronis (1958) as having "proven a principle of maximum heat current, which in turn is a maximum entropy production for a given boundary condition", but this inference is not logically valid. Again investigating planetary atmospheric dynamics, Shutts (1981) used an approach to the definition of entropy production, different from Paltridge's, to investigate a more abstract way to check the principle of maximum entropy production, and reported a good fit.


Prospects
Until recently, prospects for useful extremal principles in this area have seemed clouded. C. Nicolis (1999) concludes that one model of atmospheric dynamics has an attractor which is not a regime of maximum or minimum dissipation; she says this seems to rule out the existence of a global organizing principle, and comments that this is to some extent disappointing; she also points to the difficulty of finding a thermodynamically consistent form of entropy production. Another top expert offers an extensive discussion of the possibilities for principles of extrema of entropy production and of dissipation of energy: Chapter 12 of Grandy (2008) is very cautious, and finds difficulty in defining the 'rate of internal entropy production' in many cases, and finds that sometimes for the prediction of the course of a process, an extremum of the quantity called the rate of dissipation of energy may be more useful than that of the rate of entropy production; this quantity appeared in Onsager's 1931 origination of this subject. Other writers have also felt that prospects for general global extremal principles are clouded. Such writers include Glansdorff and Prigogine (1971), Lebon, Jou and Casas-Vsquez (2008), and ilhav (1997), as noted in the Wikipedia article on Extremal principles in non-equilibrium thermodynamics. A recent proposal may perhaps by-pass those clouded prospects.

Applications of non-equilibrium thermodynamics


Non-equilibrium thermodynamics has been successfully applied to describe biological systems such as protein folding/unfolding and transport through membranes.

References
[1] Fowler, R., Guggenheim, E.A. (1939). Statistical Thermodynamics, Cambridge University Press, Cambridge UK, page vii.
[2] Grandy, W.T., Jr (2008). Entropy and the Time Evolution of Macroscopic Systems. Oxford University Press. ISBN 978-0-19-954617-6.
[3] Lebon, G., Jou, D., Casas-Vázquez, J. (2008). Understanding Non-equilibrium Thermodynamics: Foundations, Applications, Frontiers, Springer-Verlag, Berlin, e-ISBN 978-3-540-74252-4.
[4] Gyarmati, I. (1970). Non-equilibrium Thermodynamics. Field Theory and Variational Principles, translated by E. Gyarmati and W.F. Heinz, Springer, Berlin.
[5] Lavenda, B.H. (1978). Thermodynamics of Irreversible Processes, Macmillan, London, ISBN 0-333-21616-4.
[6] Gyarmati, I. (1967/1970). Non-equilibrium Thermodynamics. Field Theory and Variational Principles, translated by E. Gyarmati and W.F. Heinz, Springer, New York, pages 4-14.
[7] Ziegler, H. (1983). An Introduction to Thermomechanics, North-Holland, Amsterdam, ISBN 0-444-86503-9.
[8] Balescu, R. (1975). Equilibrium and Non-equilibrium Statistical Mechanics, Wiley-Interscience, New York, ISBN 0-471-04600-0, Section 3.2, pages 64-72.
[9] Eu, B.C. (2002). Generalized Thermodynamics. The Thermodynamics of Irreversible Processes and Generalized Hydrodynamics, Kluwer Academic Publishers, Dordrecht, ISBN 1-4020-0788-4.
[10] Prigogine, I., Defay, R. (1950/1954). Chemical Thermodynamics, Longmans, Green & Co, London, page 1.
[11] Kondepudi, D. (2008). Introduction to Modern Thermodynamics, Wiley, Chichester UK, ISBN 978-0-470-01598-8, pages 333-338.
[12] Balescu, R. (1975). Equilibrium and Non-equilibrium Statistical Mechanics, John Wiley & Sons, New York, ISBN 0-471-04600-0.
[13] Mihalas, D., Weibel-Mihalas, B. (1984). Foundations of Radiation Hydrodynamics, Oxford University Press, New York, ISBN 0-19-503437-6. (http://www.filestube.com/9c5b2744807c2c3d03e9/details.html)
[14] Schloegl, F. (1989). Probability and Heat: Fundamentals of Thermostatistics, Freidr. Vieweg & Sohn, Braunschweig, ISBN 3-528-06343-2.
[15] Keizer, J. (1987). Statistical Thermodynamics of Nonequilibrium Processes, Springer-Verlag, New York, ISBN 0-387-96501-7.
[16] Kondepudi, D., Prigogine, I. (1998). Modern Thermodynamics. From Heat Engines to Dissipative Structures, Wiley, Chichester, 1998, ISBN 0-471-97394-7.
[17] Zubarev D. N. (1974). Nonequilibrium Statistical Thermodynamics (http://books.google.com/books?id=SQy3AAAAIAAJ&hl=ru&source=gbs_ViewAPI), translated from the Russian by P.J. Shepherd, New York, Consultants Bureau. ISBN 0-306-10895-X; ISBN 978-0-306-10895-2.
[18] Grandy 2004; see also (http://physics.uwyo.edu/~tgrandy/Statistical_Mechanics.html).
[19] W. Greiner, L. Neise, and H. Stöcker (1997). Thermodynamics and Statistical Mechanics (Classical Theoretical Physics), Springer-Verlag, New York, pages 85, 91, 101, 108, 116, ISBN 0-387-94299-8.
[20] Prigogine, I. (1977). Time, Structure and Fluctuations, Nobel Lecture. (http://nobelprize.org/nobel_prizes/chemistry/laureates/1977/prigogine-lecture.pdf)
[21] Chandrasekhar, S. (1961). Hydrodynamic and Hydromagnetic Stability, Clarendon Press, Oxford.

[22] Šilhavý, M. (1997). The Mechanics and Thermodynamics of Continuous Media, Springer, Berlin, ISBN 3-540-58378-5, page 209.
[23] Prigogine, I. (1947). Étude thermodynamique des phénomènes irréversibles, Desoer, Liège.
[24] Tuck, Adrian F. (2008). Atmospheric Turbulence: a molecular dynamics perspective, Oxford University Press. ISBN 978-0-19-923653-4. See page 33.
[25] Sawada, Y. (1981). A thermodynamic variational principle in nonlinear non-equilibrium phenomena, Progress of Theoretical Physics 66: 68-76.


Further reading
Ziegler, Hans (1977): An introduction to Thermomechanics. North Holland, Amsterdam. ISBN 0-444-11080-1. Second edition (1983) ISBN 0-444-86503-9.
Kleidon, A., Lorenz, R.D., editors (2005). Non-equilibrium Thermodynamics and the Production of Entropy, Springer, Berlin. ISBN 3-540-22495-5.
Prigogine, I. (1955/1961/1967). Introduction to Thermodynamics of Irreversible Processes. 3rd edition, Wiley Interscience, New York.
Zubarev D. N. (1974): Nonequilibrium Statistical Thermodynamics (http://books.google.com/books?id=SQy3AAAAIAAJ&hl=ru&source=gbs_ViewAPI). New York, Consultants Bureau. ISBN 0-306-10895-X; ISBN 978-0-306-10895-2.
Keizer, J. (1987). Statistical Thermodynamics of Nonequilibrium Processes, Springer-Verlag, New York, ISBN 0-387-96501-7.
Zubarev D. N., Morozov V., Ropke G. (1996): Statistical Mechanics of Nonequilibrium Processes: Basic Concepts, Kinetic Theory. John Wiley & Sons. ISBN 3-05-501708-0.
Zubarev D. N., Morozov V., Ropke G. (1997): Statistical Mechanics of Nonequilibrium Processes: Relaxation and Hydrodynamic Processes. John Wiley & Sons. ISBN 3-527-40084-2.
Tuck, Adrian F. (2008). Atmospheric turbulence: a molecular dynamics perspective. Oxford University Press. ISBN 978-0-19-923653-4.
Grandy, W.T., Jr (2008). Entropy and the Time Evolution of Macroscopic Systems. Oxford University Press. ISBN 978-0-19-954617-6.
Kondepudi, D., Prigogine, I. (1998). Modern Thermodynamics: From Heat Engines to Dissipative Structures. John Wiley & Sons, Chichester. ISBN 0-471-97393-9.
de Groot S.R., Mazur P. (1984). Non-Equilibrium Thermodynamics (Dover). ISBN 0-486-64741-2.

External links
Stephan Herminghaus' Dynamics of Complex Fluids Department at the Max Planck Institute for Dynamics and Self Organization (http://web.archive.org/web/20110406071945/http://www-dcf.ds.mpg.de/build.php/Titel/Research_english.html?sub=1&ver=en)
Non-equilibrium Statistical Thermodynamics applied to Fluid Dynamics and Laser Physics (http://www.worldscibooks.com/physics/1622.html) - 1992 book by Xavier de Hemptinne.
Nonequilibrium Thermodynamics of Small Systems (http://dx.doi.org/10.1063/1.2012462) - PhysicsToday.org
Into the Cool (http://www.intothecool.com/energetic.php) - 2005 book by Dorion Sagan and Eric D. Schneider, on nonequilibrium thermodynamics and evolutionary theory.
Quantum Thermodynamics (http://www.quantumthermodynamics.org/) - list of good related articles from the quantum thermodynamics point of view
Thermodynamics beyond local equilibrium (http://www.pnas.org/content/98/20/11081.full.pdf)


Chapter 2. Laws of Thermodynamics


Zeroth

The zeroth law of thermodynamics states that if two systems are each in thermal equilibrium with a third system, they are also in thermal equilibrium with each other. Two systems are said to be in the relation of thermal equilibrium if they are linked by a wall permeable only to heat, and do not change over time.[1] As a convenience of language, systems are sometimes also said to be in a relation of thermal equilibrium if they are not linked so as to be able to transfer heat to each other, but would not do so if they were connected by a wall permeable only to heat. The physical meaning of the law was expressed by Maxwell in the words: "All heat is of the same kind".[2] For this reason, another statement of the law is "All diathermal walls are equivalent".[3] The law is important for the mathematical formulation of thermodynamics, which needs the assertion that the relation of thermal equilibrium is an equivalence relation. This information is needed for the mathematical definition of temperature that will agree with the physical existence of valid thermometers.[4]

Zeroth law as equivalence relation


A system is said to be in thermal equilibrium when it experiences no net change of its observable state over time. The most precise statement of the zeroth law is that thermal equilibrium constitutes an equivalence relation on pairs of thermodynamic systems. In other words, the set of all equilibrated thermodynamic systems may be divided into subsets in which every system belongs to one and only one subset, and is in thermal equilibrium with every other member of that subset, and is not in thermal equilibrium with a member of any other subset. This means that a unique "tag" can be assigned to every system, and if the "tags" of two systems are the same, they are in thermal equilibrium with each other, and if they are not, they are not. Ultimately, this property is used to justify the use of thermodynamic temperature as a tagging system. Thermodynamic temperature provides further properties of
thermally equilibrated systems, such as order and continuity with regard to "hotness" or "coldness", but these properties are not implied by the standard statement of the zeroth law. If it is specified that a system is in thermal equilibrium with itself (i.e., thermal equilibrium is reflexive), then the zeroth law may be stated as follows: If a body A, be in thermal equilibrium with two other bodies, B and C, then B and C are in thermal equilibrium with one another.[5] This statement asserts that thermal equilibrium is a Euclidean relation between thermodynamic systems. If we also grant that all thermodynamic systems are in thermal equilibrium with themselves, then thermal equilibrium is also a reflexive relation. Relations that are both reflexive and Euclidean are equivalence relations. One consequence of this reasoning is that thermal equilibrium is a transitive relationship: If A is in thermal equilibrium with B and B is in thermal equilibrium with C, then A is in thermal equilibrium with C. Another consequence is that the equilibrium relationship is symmetric: If A is in thermal equilibrium with B, then B is in thermal equilibrium with A. Thus we may say that two systems are in thermal equilibrium with each other, or that they are in mutual equilibrium. Implicitly assuming both reflexivity and symmetry, the zeroth law is therefore often expressed as:[6] If two systems are in thermal equilibrium with a third system, then they are in thermal equilibrium with each other. Again, implicitly assuming both reflexivity and symmetry, the zeroth law is occasionally expressed as the transitive relationship:[7] If A is in thermal equilibrium with B and if B is in thermal equilibrium with C, then A is in thermal equilibrium with C.
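The partitioning into disjoint subsets can be made concrete in a few lines of code. The following minimal sketch (the systems and their numeric tags are hypothetical, invented only for illustration) groups systems by an empirical temperature tag, so that symmetry and transitivity hold by construction:

```python
from collections import defaultdict

# Hypothetical illustration: model "thermal equilibrium" as sharing a tag.
# Each system carries an empirical temperature label; two systems are in
# mutual equilibrium iff their tags are equal.
systems = {"A": 300.0, "B": 300.0, "C": 300.0, "D": 350.0}  # invented tags

# Partition the set of systems into disjoint equivalence classes.
classes = defaultdict(list)
for name, tag in systems.items():
    classes[tag].append(name)

def in_equilibrium(x: str, y: str) -> bool:
    """True iff x and y carry the same tag (belong to the same class)."""
    return systems[x] == systems[y]

# The structure asserted by the zeroth law holds by construction:
assert in_equilibrium("A", "B") and in_equilibrium("B", "C")
assert in_equilibrium("A", "C")        # transitivity
assert in_equilibrium("B", "A")        # symmetry
assert not in_equilibrium("A", "D")    # different class
print(dict(classes))  # {300.0: ['A', 'B', 'C'], 350.0: ['D']}
```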


Foundation of temperature
The zeroth law establishes thermal equilibrium as an equivalence relationship. An equivalence relationship on a set (such as the set of thermally equilibrated systems) divides that set into a collection of distinct subsets ("disjoint subsets") where any member of the set is a member of one and only one such subset. In the case of the zeroth law, these subsets consist of systems which are in mutual equilibrium. This partitioning allows any member of the subset to be uniquely "tagged" with a label identifying the subset to which it belongs. Although the labeling may be quite arbitrary,[8] temperature is just such a labeling process which uses the real number system for tagging. The zeroth law justifies the use of suitable thermodynamic systems as thermometers to provide such a labeling, which yield any number of possible empirical temperature scales, and justifies the use of the second law of thermodynamics to provide an absolute, or thermodynamic temperature scale. Such temperature scales bring additional continuity and ordering (i.e., "hot" and "cold") properties to the concept of temperature. In the space of thermodynamic parameters, zones of constant temperature form a surface that provides a natural order of nearby surfaces. One may therefore construct a global temperature function that provides a continuous ordering of states. The dimensionality of a surface of constant temperature is one less than the number of thermodynamic parameters; thus, for an ideal gas described with three thermodynamic parameters P, V and n, it is a two-dimensional surface. For example, if two systems of ideal gases are in equilibrium, then P1V1/N1 = P2V2/N2, where Pi is the pressure in the ith system, Vi is the volume, and Ni is the amount (in moles, or simply the number of atoms) of gas. The surfaces PV/N = const are surfaces of equal thermodynamic temperature, and one may label them by defining T so that PV/N = RT, where R is some constant. These systems can now be used as a thermometer to calibrate other systems. Such systems are known as "ideal gas thermometers".
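As a rough numerical sketch of such an ideal gas thermometer (the pressures, volumes and amounts are invented, and ideal gas behaviour is assumed throughout):

```python
R = 8.314  # gas constant, J/(mol*K)

def empirical_temperature(P, V, n):
    """Label a state of an ideal gas by T = PV/(nR); equal labels mean
    the two systems lie on the same constant-temperature surface."""
    return P * V / (n * R)

# Two gas samples with different P, V, n but the same value of PV/N:
T1 = empirical_temperature(P=101325.0, V=0.0244, n=1.0)   # ~297.4 K
T2 = empirical_temperature(P=202650.0, V=0.0244, n=2.0)   # same ratio
print(T1, T2)  # equal labels -> the samples would be in thermal equilibrium
```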


Physical meaning of the usual statement of the zeroth law


The present article states the zeroth law as it is often summarized in textbooks. Nevertheless, this usual statement perhaps does not explicitly convey the full physical meaning that underlies it. The underlying physical meaning was perhaps first clarified by Maxwell in his 1871 textbook. In Carathéodory's (1909) theory, it is postulated that there exist walls "permeable only to heat", though heat is not explicitly defined in that paper. This postulate is a physical postulate of existence. It does not, however, as worded just previously, say that there is only one kind of heat. This paper of Carathéodory states as proviso 4 of its account of such walls: "Whenever each of the systems S1 and S2 is made to reach equilibrium with a third system S3 under identical conditions, systems S1 and S2 are in mutual equilibrium".[9] It is the function of this statement in the paper, not there labeled as the zeroth law, to provide not only for the existence of transfer of energy other than by work or transfer of matter, but further to provide that such transfer is unique in the sense that there is only one kind of such wall, and one kind of such transfer. This is signaled in the postulate of this paper of Carathéodory that precisely one non-deformation variable is needed to complete the specification of a thermodynamic state, beyond the necessary deformation variables, which are not restricted in number. It is therefore not exactly clear what Carathéodory means when in the introduction of this paper he writes "It is possible to develop the whole theory without assuming the existence of heat, that is of a quantity that is of a different nature from the normal mechanical quantities." Maxwell (1871) discusses at some length ideas which he summarizes by the words "All heat is of the same kind". Modern theorists sometimes express this idea by postulating the existence of a unique one-dimensional hotness manifold, into which every proper temperature scale has a monotonic mapping.[10] This may be expressed by the statement that there is only one kind of temperature, regardless of the variety of scales in which it is expressed. Another modern expression of this idea is that "All diathermal walls are equivalent".[11] This might also be expressed by saying that there is precisely one kind of non-mechanical, non-matter-transferring contact equilibrium between thermodynamic systems. These ideas may be regarded as helping to clarify the physical meaning of the usual statement of the zeroth law of thermodynamics. It is the opinion of Lieb and Yngvason (1999) that the derivation from statistical mechanics of the law of entropy increase is a goal that has so far eluded the deepest thinkers.[12] Thus the idea remains open to consideration that the existence of heat and temperature are needed as coherent primitive concepts for thermodynamics, as expressed, for example, by Maxwell and Planck. On the other hand, Planck in 1926 clarified how the second law can be stated without reference to heat or temperature, by referring to the irreversible and universal nature of friction in natural thermodynamic processes.[13]

History
According to Arnold Sommerfeld, Ralph H. Fowler invented the title 'the zeroth law of thermodynamics' when he was discussing the 1935 text of Saha and Srivastava. They write on page 1 that "every physical quantity must be measurable in numerical terms". They presume that temperature is a physical quantity and then deduce the statement "If a body A is in temperature equilibrium with two bodies B and C, then B and C themselves will be in temperature equilibrium with each other". They then in a self-standing paragraph italicize as if to state their basic postulate: "Any of the physical properties of A which change with the application of heat may be observed and utilised for the measurement of temperature." They do not themselves here use the term 'zeroth law of thermodynamics'.[14][15] There are very many statements of these physical ideas in the physics literature long before this text, in very similar language. What was new here was just the label 'zeroth law of thermodynamics'. Fowler, with co-author Edward A. Guggenheim, wrote of the zeroth law as follows: ...we introduce the postulate: If two assemblies are each in thermal equilibrium with a third assembly, they are in thermal equilibrium with each other.

They then proposed that "it may be shown to follow that the condition for thermal equilibrium between several assemblies is the equality of a certain single-valued function of the thermodynamic states of the assemblies, which may be called the temperature t, any one of the assemblies being used as a "thermometer" reading the temperature t on a suitable scale. This postulate of the "Existence of temperature" could with advantage be known as the zeroth law of thermodynamics". The first sentence of this present article is a version of this statement.[16] It is not explicitly evident in the existence statement of Fowler and Guggenheim that temperature refers to a unique attribute of a state of a system, such as is expressed in the idea of the hotness manifold. Also their statement refers explicitly to statistical mechanical assemblies, not explicitly to macroscopic thermodynamically defined systems.


References
[1] Carathéodory, C. (1909).
[2] Maxwell, J.C. (1871), p. 57.
[3] Bailyn, M. (1994), pp. 24, 144.
[4] Lieb, E.H., Yngvason, J. (1999), p. 56.
[5] Planck, M. (1914), p. 2.
[6] Buchdahl, H.A. (1966), p. 73.
[7] Kondepudi, D. (2008), p. 7.
[8] Dugdale, J.S. (1996), p. 35.
[9] Carathéodory, C. (1909), Section 6.
[10] Serrin, J. (1986), p. 6.
[11] Bailyn, M. (1994), p. 23.
[12] Lieb, E.H., Yngvason, J. (1999), p. 5.
[13] Planck, M. (1926).
[14] Sommerfeld, A. (1951/1955), p. 1.
[15] Saha, M.N., Srivastava, B.N. (1935), p. 1.
[16] Fowler, R., Guggenheim, E.A. (1939/1965), p. 56.

Bibliography of cited references


Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York, ISBN 978-0-88318-797-5.
Buchdahl, H.A. (1966). The Concepts of Classical Thermodynamics, Cambridge University Press.
Carathéodory, C. (1909). "Untersuchungen über die Grundlagen der Thermodynamik", Mathematische Annalen (in German) 67: 355–386. doi: 10.1007/BF01450409 (http://dx.doi.org/10.1007/BF01450409). A translation may be found here (http://neo-classical-physics.info/uploads/3/0/6/5/3065888/caratheodory_-_thermodynamics.pdf). A partly reliable translation is to be found at Kestin, J. (1976). The Second Law of Thermodynamics, Dowden, Hutchinson & Ross, Stroudsburg PA.
Dugdale, J.S. (1996). Entropy and its Physical Interpretation, Taylor & Francis, ISBN 0-7484-0569-0.
Fowler, R., Guggenheim, E.A. (1939/1965). Statistical Thermodynamics. A Version of Statistical Mechanics for Students of Physics and Chemistry, first printing 1939, reprinted with corrections 1965, Cambridge University Press, Cambridge UK.
Kondepudi, D. (2008). Introduction to Modern Thermodynamics (http://www.amazon.com/Introduction-Modern-Thermodynamics-Dilip-Kondepudi), Wiley, ISBN 978-0-470-01598-8.
Lieb, E.H., Yngvason, J. (1999). The physics and mathematics of the second law of thermodynamics, Physics Reports 314: 1–96.
Maxwell, J.C. (1871). Theory of Heat, Longmans, Green, and Co., London.
Planck, M. (1914). The Theory of Heat Radiation (http://archive.org/details/theoryofheatradi00planrich), a translation by Masius, M. of the second German edition, P. Blakiston's Son & Co., Philadelphia.
Planck, M. (1926). Über die Begründung des zweiten Hauptsatzes der Thermodynamik, S.B. Preuß. Akad. Wiss., phys. math. Kl.: 453–463.

Saha, M.N., Srivastava, B.N. (1935). A Treatise on Heat. (Including Kinetic Theory of Gases, Thermodynamics and Recent Advances in Statistical Thermodynamics), the second and revised edition of A Text Book of Heat, The Indian Press, Allahabad and Calcutta.
Serrin, J. (1986). Chapter 1, 'An Outline of Thermodynamical Structure', pages 3–32, in New Perspectives in Thermodynamics, edited by J. Serrin, Springer, Berlin, ISBN 3-540-15931-2.
Sommerfeld, A. (1951/1955). Thermodynamics and Statistical Mechanics, vol. 5 of Lectures on Theoretical Physics, edited by F. Bopp, J. Meixner, translated by J. Kestin, Academic Press, New York.


Further reading
Atkins, Peter (2007). Four Laws That Drive the Universe. New York: Oxford University Press. ISBN 978-0-19-923236-9.

First

The first law of thermodynamics is a version of the law of conservation of energy, adapted for thermodynamic systems. The law of conservation of energy states that the total energy of an isolated system is constant; energy can be transformed from one form to another, but cannot be created or destroyed. The first law is often formulated by stating that the change in the internal energy of a closed system is equal to the amount of heat supplied to the system, minus the amount of work done by the system on its surroundings. Equivalently, perpetual motion machines of the first kind are impossible.


History
The first law of thermodynamics developed over a period of about half a century, through many trials and errors of investigation. The first full statements of the law came in 1850 from Rudolf Clausius and from William Rankine; Rankine's statement was perhaps not quite as clear and distinct as Clausius'. A main aspect of the struggle was to deal with the previously proposed caloric theory of heat. Germain Hess in 1840 stated a conservation law for the so-called 'heat of reaction' for chemical reactions.[1] His law was later recognized as a consequence of the first law of thermodynamics, but Hess's statement was not explicitly concerned with the relation between energy exchanges by heat and work. According to Truesdell (1980), Julius Robert von Mayer in 1841 made a statement that meant that "in a process at constant pressure, the heat used to produce expansion is universally interconvertible with work", but this is not a general statement of the first law.[2][3]

Original statements: the "thermodynamic approach"


The original nineteenth century statements of the first law of thermodynamics appeared in a conceptual framework in which transfer of energy as heat was taken as a primitive notion, not defined or constructed by the theoretical development of the framework, but rather presupposed as prior to it and already accepted. The primitive notion of heat was taken as empirically established, especially through calorimetry regarded as a subject in its own right, prior to thermodynamics. Jointly primitive with this notion of heat were the notions of empirical temperature and thermal equilibrium. This framework did not presume a concept of energy in general, but regarded it as derived or synthesized from the prior notions of heat and work. By one author, this framework has been called the "thermodynamic" approach.[4] The first explicit statement of the first law of thermodynamics, by Rudolf Clausius in 1850, referred to cyclic thermodynamic processes. In all cases in which work is produced by the agency of heat, a quantity of heat is consumed which is proportional to the work done; and conversely, by the expenditure of an equal quantity of work an equal quantity of heat is produced.[5] Clausius also stated the law in another form, referring to the existence of a function of state of the system, the internal energy, and expressed it in terms of a differential equation for the increments of a thermodynamic process. This equation may be described as follows: In a thermodynamic process involving a closed system, the increment in the internal energy is equal to the difference between the heat accumulated by the system and the work done by it.[6] Because of its definition in terms of increments, the value of the internal energy of a system is not uniquely defined. It is defined only up to an arbitrary additive constant of integration, which can be adjusted to give arbitrary reference zero levels. This non-uniqueness is in keeping with the abstract mathematical nature of the internal energy. The internal energy is customarily stated relative to a conventionally chosen standard reference state of the system. The concept of internal energy is considered by Bailyn to be of "enormous interest". Its quantity cannot be immediately measured, but can only be inferred, by differencing actual immediate measurements. Bailyn likens it to the energy states of an atom, that were revealed by Bohr's energy relation hν = En′ − En″. In each case, an unmeasurable quantity (the internal energy, the atomic energy level) is revealed by considering the difference of measured quantities (increments of internal energy, quantities of emitted or absorbed radiative energy).[7]


Conceptual revision: the "mechanical approach"


In 1907, G.H. Bryan wrote about systems between which there is no transfer of matter (closed systems): "Definition. When energy flows from one system or part of a system to another otherwise than by the performance of mechanical work, the energy so transferred is called heat."[8] Largely through the influence of Max Born, in the twentieth century, this revised conceptual approach to the definition of heat came to be preferred by many writers, including Constantin Carathéodory. It might be called the "mechanical approach".[9] This approach takes as its primitive notion energy transferred as work defined by mechanics. From this, it derives the notions of transfer of energy as heat, and of temperature, as theoretical developments. It regards calorimetry as a derived theory. It has an early origin in the nineteenth century, for example in the work of Helmholtz,[10] but also in the work of many others. For this approach, it is necessary to be sure that if there is transfer of energy associated with transfer of matter, then the transfer of energy other than by transfer of matter is by a physically separate pathway, and independently defined and measured, from the transfer of energy by transfer of matter.

Conceptually revised statement, according to the mechanical approach


The revised statement of the law takes the notions of adiabatic mechanical work, and of non-adiabatic transfer of energy, as empirically or theoretically established primitive notions. It rests on the primitive notion of walls, especially adiabatic walls, presupposed as physically established. Energy can pass such walls only as adiabatic work, reversibly or irreversibly. If transfer of energy as work is not permitted between them, two systems separated by an adiabatic wall can come to their respective internal mechanical and material thermodynamic equilibrium states completely independently of one another.[11] The revised statement of the law postulates that a change in the internal energy of a system due to an arbitrary process of interest, that takes the system from its specified initial to its specified final state of internal thermodynamic equilibrium, can be determined through the physical existence of a reference process, for those specified states, that occurs purely through stages of adiabatic work. The revised statement is then: For a closed system, in any arbitrary process of interest that takes it from an initial to a final state of internal thermodynamic equilibrium, the change of internal energy is the same as that for a reference adiabatic work process that links those two states. This is so regardless of the path of the process of interest, and regardless of whether it is an adiabatic or a non-adiabatic process. The reference adiabatic work process may be chosen arbitrarily from amongst the class of all such processes. This statement is much less close to the empirical basis than are the original statements, but is often regarded as conceptually parsimonious in that it rests only on the concepts of adiabatic work and of non-adiabatic processes, not on the concepts of transfer of energy as heat and of empirical temperature that are presupposed by the original statements. Largely through the influence of Max Born, it is often regarded as theoretically preferable because of this conceptual parsimony. Born particularly observes that the revised approach avoids thinking in terms of what he calls the "imported engineering" concept of heat engines. Basing his thinking on the mechanical approach, Born in 1921, and again in 1949, proposed to revise the definition of heat. In particular, he referred to the work of Constantin Carathéodory, who had in 1909 stated the first law without defining quantity of heat. Born's definition was specifically for transfers of energy without transfer of matter, and it has been widely followed in textbooks. Born observes that a transfer of matter between two systems is accompanied by a transfer of internal energy that cannot be resolved into heat and work components. There can be pathways to other systems, spatially separate from that of the matter transfer, that allow heat and work transfer independent of and simultaneous with the matter transfer. Energy is conserved in such transfers.


Description
The first law of thermodynamics for a closed system was expressed in two ways by Clausius. One way referred to cyclic processes and the inputs and outputs of the system, but did not refer to increments in the internal state of the system. The other way referred to any incremental change in the internal state of the system, and did not expect the process to be cyclic. A cyclic process is one that can be repeated indefinitely often and still eventually leave the system in its original state. In each repetition of a cyclic process, the work done by the system is proportional to the heat consumed by the system. In a cyclic process in which the system does work on its surroundings, it is necessary that some heat be taken in by the system and some be put out, and the difference is the heat consumed by the system in the process. The constant of proportionality is universal and independent of the system and was measured by James Joule in 1845 and 1847, who described it as the mechanical equivalent of heat. For a closed system, in any process, the change in the internal energy is considered due to a combination of heat added to the system and work done by the system. Taking ΔU as the change in internal energy, one writes

ΔU = Q - W,

where Q and W are quantities of heat supplied to the system by its surroundings and of work done by the system on its surroundings, respectively. This sign convention is implicit in Clausius' statement of the law given above, and is consistent with the use of thermodynamics to study heat engines, which provide useful work that is regarded as positive. In modern style of teaching science, however, it is conventional to use the IUPAC convention, by which the first law is formulated in terms of the work done on the system. With this alternate sign convention for work, the first law for a closed system may be written:

ΔU = Q + W.[12]

This convention follows physicists such as Max Planck,[13] and considers all net energy transfers to the system as positive and all net energy transfers from the system as negative, irrespective of any use for the system as an engine or other device. When a system expands in a fictive quasistatic process, the work done by the system on the environment is the product, PdV, of pressure, P, and volume change, dV, whereas the work done on the system is -PdV. Using either sign convention for work, the change in internal energy of the system in such a process is:

dU = δQ - PdV,

where δQ denotes the infinitesimal increment of heat supplied to the system from its surroundings. Work and heat are expressions of actual physical processes of supply or removal of energy, while the internal energy U is a mathematical abstraction that keeps account of the exchanges of energy that befall the system. Thus the term heat for Q means "that amount of energy added or removed by conduction of heat or by thermal radiation", rather than referring to a form of energy within the system. Likewise, the term work energy for W means "that amount of energy gained or lost as the result of work". Internal energy is a property of the system whereas work done and heat supplied are not. A significant result of this distinction is that a given internal energy change ΔU can be achieved by, in principle, many combinations of heat and work.
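A small numeric sketch (with invented values) shows that the two sign conventions are mere bookkeeping and agree on the change of internal energy:

```python
Q = 100.0      # heat supplied to the system, J (invented)
W_by = 40.0    # work done by the system on its surroundings, J (invented)

# Clausius convention: work done BY the system counts as positive.
dU_clausius = Q - W_by

# IUPAC convention: work done ON the system counts as positive.
W_on = -W_by
dU_iupac = Q + W_on

assert dU_clausius == dU_iupac == 60.0  # same physics, J
```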


Various statements of the law for closed systems


The law is of very great importance and generality and is consequently thought of from several points of view. Most careful textbook statements of the law express it for closed systems. It is stated in several ways, sometimes even by the same author.[14] For the thermodynamics of closed systems, the distinction between transfers of energy as work and as heat is central and is within the scope of the present article. For the thermodynamics of open systems, such a distinction is beyond the scope of the present article, but some limited comments are made on it in the section below headed 'First law of thermodynamics for open systems'. There are two main ways of stating a law of thermodynamics, physically or mathematically. They should be logically coherent and consistent with one another.[15] An example of a physical statement is that of Planck (1897/1903): It is in no way possible, either by mechanical, thermal, chemical, or other devices, to obtain perpetual motion, i.e. it is impossible to construct an engine which will work in a cycle and produce continuous work, or kinetic energy, from nothing.[16] This physical statement is restricted neither to closed systems nor to systems with states that are strictly defined only for thermodynamic equilibrium; it has meaning also for open systems and for systems with states that are not in thermodynamic equilibrium. An example of a mathematical statement is that of Crawford (1963): For a given system we let Ekin = large-scale mechanical energy, Epot = large-scale potential energy, and Etot = total energy. The first two quantities are specifiable in terms of appropriate mechanical variables, and by definition

Etot = Ekin + Epot + U.

For any finite process, whether reversible or irreversible,

ΔEtot = ΔEkin + ΔEpot + ΔU.

The first law in a form that involves the principle of conservation of energy more generally is

ΔEtot = Q + W.

Here Q and W are heat and work added, with no restrictions as to whether the process is reversible, quasistatic, or irreversible.[Warner, Am. J. Phys., 29, 124 (1961)][17] This statement by Crawford, for W, uses the sign convention of IUPAC, not that of Clausius. Though it does not explicitly say so, this statement refers to closed systems, and to internal energy U defined for bodies in states of thermodynamic equilibrium, which possess well-defined temperatures. The history of statements of the law for closed systems has two main periods, before and after the work of Bryan (1907),[18] of Carathéodory (1909),[1] and the approval of Carathéodory's work given by Born (1921). The earlier traditional versions of the law for closed systems are nowadays often considered to be out of date. Carathéodory's celebrated presentation of equilibrium thermodynamics refers to closed systems, which are allowed to contain several phases connected by internal walls of various kinds of impermeability and permeability (explicitly including walls that are permeable only to heat). Carathéodory's 1909 version of the first law of thermodynamics was stated in an axiom which refrained from defining or mentioning temperature or quantity of heat transferred. That axiom stated that the internal energy of a phase in equilibrium is a function of state, that the sum of the internal energies of the phases is the total internal energy of the system, and that the value of the total internal energy of the system is changed by the amount of work done adiabatically on it, considering work as a form of energy. That article considered this statement to be an expression of the law of conservation of energy for such systems. This version is
nowadays widely accepted as authoritative, but is stated in slightly varied ways by different authors. The 1909 Carathéodory statement of the law in axiomatic form does not mention heat or temperature, but the equilibrium states to which it refers are explicitly defined by variable sets that necessarily include "non-deformation variables", such as pressures, which, within reasonable restrictions, can be rightly interpreted as empirical temperatures,[19] and the walls connecting the phases of the system are explicitly defined as possibly impermeable to heat or permeable only to heat. According to Münster (1970), "A somewhat unsatisfactory aspect of Carathéodory's theory is that a consequence of the Second Law must be considered at this point [in the statement of the first law], i.e. that it is not always possible to reach any state 2 from any other state 1 by means of an adiabatic process." Münster instances that no adiabatic process can reduce the internal energy of a system at constant volume. Carathéodory's paper asserts that its statement of the first law corresponds exactly to Joule's experimental arrangement, regarded as an instance of adiabatic work. It does not point out that Joule's experimental arrangement performed essentially irreversible work, through friction of paddles in a liquid, or passage of electric current through a resistance inside the system, driven by motion of a coil and inductive heating, or by an external current source, which can access the system only by the passage of electrons, and so is not strictly adiabatic, because electrons are a form of matter, which cannot penetrate adiabatic walls. The paper goes on to base its main argument on the possibility of quasi-static adiabatic work, which is essentially reversible. The paper asserts that it will avoid reference to Carnot cycles, and then proceeds to base its argument on cycles of forward and backward quasi-static adiabatic stages, with isothermal stages of zero magnitude. Some respected modern statements of the first law for closed systems assert the existence of internal energy as a function of state defined in terms of adiabatic work and accept the idea that heat is not defined in its own right, that is to say calorimetrically or as due to temperature difference; they define heat as a residual difference between change of internal energy and work done on the system, when that work does not account for the whole of the change of internal energy and the system is not adiabatically isolated. Sometimes the concept of internal energy is not made explicit in the statement.[20][21][22] Sometimes the existence of the internal energy is made explicit but work is not explicitly mentioned in the statement of the first postulate of thermodynamics. Heat supplied is then defined as the residual change in internal energy after work has been taken into account, in a non-adiabatic process.[23] A respected modern author states the first law of thermodynamics as "Heat is a form of energy", which explicitly mentions neither internal energy nor adiabatic work. Heat is defined as energy transferred by thermal contact with a reservoir, which has a temperature, and is generally so large that addition and removal of heat do not alter its temperature.[24] A current student text on chemistry defines heat thus: "heat is the exchange of thermal energy between a system and its surroundings caused by a temperature difference."
The author then explains how heat is defined or measured by calorimetry, in terms of heat capacity, specific heat capacity, molar heat capacity, and temperature.[25] A respected text disregards Carathéodory's exclusion of mention of heat from the statement of the first law for closed systems, and admits heat calorimetrically defined along with work and internal energy.[26] Another respected text defines heat exchange as determined by temperature difference, but also mentions that the Born (1921) version is "completely rigorous".[27] These versions follow the traditional approach that is now considered out of date, exemplified by that of Planck (1897/1903).[28]


Evidence for the first law of thermodynamics for closed systems


The first law of thermodynamics for closed systems was originally induced from empirically observed evidence. It is nowadays, however, taken to be the definition of heat via the law of conservation of energy and the definition of work in terms of changes in the external parameters of a system. The original discovery of the law was gradual over a period of perhaps half a century or more, and some early studies were in terms of cyclic processes. The following is an account in terms of changes of state of a closed system through compound processes that are not necessarily cyclic. This account first considers processes for which the first law is easily verified because of their simplicity, namely adiabatic processes (in which there is no transfer as heat) and adynamic processes (in which there is no transfer as work).

Adiabatic processes
In an adiabatic process, there is transfer of energy as work but not as heat. For every adiabatic process that takes a system from a given initial state to a given final state, irrespective of how the work is done, the respective eventual total quantities of energy transferred as work are one and the same, determined just by the given initial and final states. The work done on the system is defined and measured by changes in mechanical or quasi-mechanical variables external to the system. Physically, adiabatic transfer of energy as work requires the existence of adiabatic enclosures. For instance, in Joule's experiment, the initial system is a tank of water with a paddle wheel inside. If we isolate the tank thermally and move the paddle wheel with a pulley and a weight, we can relate the increase in temperature with the height descended by the mass. Now the system is returned to its initial state, isolated again, and the same amount of work is done on the tank using different devices (an electric motor, a chemical battery, a spring, ...). In every case, the amount of work can be measured independently. The return to the initial state is not conducted by doing adiabatic work on the system. The evidence shows that the final state of the water (in particular, its temperature and volume) is the same in every case. It is irrelevant if the work is electrical, mechanical, chemical, ... or if done suddenly or slowly, as long as it is performed in an adiabatic way, that is to say, without heat transfer into or out of the system. Evidence of this kind shows that to increase the temperature of the water in the tank, the qualitative kind of adiabatically performed work does not matter. No qualitative kind of adiabatic work has ever been observed to decrease the temperature of the water in the tank. A change from one state to another, for example an increase of both temperature and volume, may be conducted in several stages, for example by externally supplied electrical work on a resistor in the body, and adiabatic expansion allowing the body to do work on the surroundings. It needs to be shown that the time order of the stages, and their relative magnitudes, does not affect the amount of adiabatic work that needs to be done for the change of state. According to one respected scholar: "Unfortunately, it does not seem that experiments of this kind have ever been carried out carefully. ... We must therefore admit that the statement which we have enunciated here, and which is equivalent to the first law of thermodynamics, is not well founded on direct experimental evidence." This kind of evidence, of independence of sequence of stages, combined with the above-mentioned evidence, of independence of qualitative kind of work, would show the existence of a very important state variable that corresponds with adiabatic work, but not that such a state variable represented a conserved quantity. For the latter, another step of evidence is needed, which may be related to the concept of reversibility, as mentioned below. That very important state variable was first recognized and denoted by Clausius in 1850, but he did not then name it, and he defined it in terms not only of work but also of heat transfer in the same process. It was also independently recognized in 1850 by Rankine, and in 1851 by Kelvin, who then called it "mechanical energy", and later "intrinsic energy". In 1865, after some hesitation, Clausius began calling his state function "energy".
In 1882 it was named as the internal energy by Helmholtz.[29] If only adiabatic processes were of interest, and heat could be ignored, the concept of internal energy would hardly arise or be needed. The relevant physics would be largely covered by the concept of potential energy, as was intended in the 1847 paper of Helmholtz on the principle of conservation of energy, though that did not deal with forces that cannot be described by a potential, and thus did not fully justify the principle. Moreover, that paper was very critical of the early work of Joule that had by then been performed.[30] A great merit of the internal energy concept is that it frees thermodynamics from a restriction to cyclic processes, and allows a treatment in terms of thermodynamic states. In an adiabatic process, adiabatic work takes the system either from a reference state O with internal energy U(O) to an arbitrary state A with internal energy U(A), or from the state A to the state O:

U(A) = U(O) + W(O→A, adiabatic)   or   U(O) = U(A) + W(A→O, adiabatic),

where W denotes the work done adiabatically on the system.

Except under the special, and strictly speaking fictional, condition of reversibility, only one of the processes O→A or A→O is empirically feasible by a simple application of externally supplied work. The reason for this is given as the second law of thermodynamics and is not considered in the present article. The fact of such irreversibility may be dealt with in two main ways, according to different points of view. Since the work of Bryan (1907), the most accepted way to deal with it nowadays, followed by Carathéodory,[31] is to rely on the previously established concept of quasi-static processes,[32][33][34] as follows. Actual physical processes of transfer of energy as work are always at least to some degree irreversible. The irreversibility is often due to mechanisms known as dissipative, that transform bulk kinetic energy into internal energy. Examples are friction and viscosity. If the process is performed more slowly, the frictional or viscous dissipation is less. In the limit of infinitely slow performance, the dissipation tends to zero and then the limiting process, though fictional rather than actual, is notionally reversible, and is called quasi-static. Throughout the course of the fictional limiting quasi-static process, the internal intensive variables of the system are equal to the external intensive variables, those that describe the reactive forces exerted by the surroundings.[35] This can be taken to justify the formula
U(A) − U(O) = W(O→A, adiabatic, quasi-static).    (1)

Another way to deal with it is to allow that experiments with processes of heat transfer to or from the system may be used to justify the formula (1) above. Moreover, it deals to some extent with the problem of lack of direct experimental evidence that the time order of stages of a process does not matter in the determination of internal energy. This way does not provide theoretical purity in terms of adiabatic work processes, but is empirically feasible, and is in accord with experiments actually done, such as the Joule experiments mentioned just above, and with older traditions. Formula (1) above allows that to go by processes of quasi-static adiabatic work from the state A to the state B we can take a path that goes through the reference state O, since the quasi-static adiabatic work is independent of the path. This kind of empirical evidence, coupled with theory of this kind, largely justifies the following statement: For all adiabatic processes between two specified states of a closed system of any nature, the net work done is the same regardless of the details of the process, and determines a state function called internal energy, U.
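As a rough worked illustration of the Joule arrangement described above (a sketch with invented masses and heights, assuming all of the descending weight's potential energy is delivered to the water as adiabatic work):

```python
g = 9.81          # gravitational acceleration, m/s^2
c_water = 4186.0  # specific heat capacity of water, J/(kg*K)

m_weight = 10.0   # mass of the descending weight, kg (invented)
h = 2.0           # height descended, m (invented)
m_water = 1.0     # mass of water in the tank, kg (invented)

W_adiabatic = m_weight * g * h          # work done on the water, J
dT = W_adiabatic / (m_water * c_water)  # resulting temperature rise, K
print(f"{W_adiabatic:.1f} J of adiabatic work -> dT = {dT:.3f} K")
# ~196.2 J -> about 0.047 K; the same rise results however the work is
# delivered (electrically, mechanically, ...), so long as it is adiabatic.
```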

Adynamic processes
A complementary observable aspect of the first law is about heat transfer. Adynamic transfer of energy as heat can be measured empirically by changes in the surroundings of the system of interest by calorimetry. This again requires the existence of adiabatic enclosure of the entire process, system and surroundings, though the separating wall between the surroundings and the system is thermally conductive or radiatively permeable, not adiabatic. A calorimeter can rely on measurement of sensible heat, which requires the existence of thermometers and
measurement of temperature change in bodies of known sensible heat capacity under specified conditions; or it can rely on the measurement of latent heat, through measurement of masses of material that change phase, at temperatures fixed by the occurrence of phase changes under specified conditions in bodies of known latent heat of phase change. The calorimeter can be calibrated by adiabatically doing externally determined work on it. The most accurate method is by passing an electric current from outside through a resistance inside the calorimeter. The calibration allows comparison of calorimetric measurement of quantity of heat transferred with quantity of energy transferred as work. According to one textbook, "The most common device for measuring ΔU is an adiabatic bomb calorimeter."[36] According to another textbook, "Calorimetry is widely used in present day laboratories."[37] According to one opinion, "Most thermodynamic data come from calorimetry..."[38] According to another opinion, "The most common method of measuring heat is with a calorimeter."[39] When the system evolves with transfer of energy as heat, without energy being transferred as work, in an adynamic process, the heat transferred to the system is equal to the increase in its internal energy:

ΔU = Q.

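The electrical calibration described above can be sketched numerically; the following minimal example uses invented values and assumes that all of the dissipated electrical energy I²Rt enters the calorimeter:

```python
# Electrical calibration of a calorimeter: a known quantity of work is
# dissipated in an internal resistor, and the observed response then
# calibrates later measurements of heat.
I = 0.5        # current through the resistor, A (invented)
R_ohm = 100.0  # resistance, ohm (invented)
t = 120.0      # duration of the current, s (invented)

W_electrical = I**2 * R_ohm * t  # energy dissipated as Joule heating, J
# In an adynamic process the internal energy change equals the heat:
dU = W_electrical                # here playing the role of Q
print(f"calibration energy: {W_electrical:.0f} J")  # 3000 J
```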
General case for reversible processes


Heat transfer is practically reversible when it is driven by practically negligibly small temperature gradients. Work transfer is practically reversible when it occurs so slowly that there are no frictional effects within the system; frictional effects outside the system should also be zero if the process is to be globally reversible. For a particular reversible process in general, the work W done reversibly on the system and the heat Q transferred reversibly to the system are not required to occur respectively adiabatically or adynamically, but they must belong to the same particular process defined by its particular reversible path through the space of thermodynamic states. Then the work and heat transfers can occur and be calculated simultaneously. Putting the two complementary aspects together, the first law for a particular reversible process can be written

ΔU = Q + W.

This combined statement is the expression of the first law of thermodynamics for reversible processes for closed systems. In particular, if no work is done on a thermally isolated closed system, we have

ΔU = 0.

This is one aspect of the law of conservation of energy and can be stated: the internal energy of an isolated system remains constant.
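Because the internal energy is a function of state, different processes between the same pair of states may apportion the energy transfer differently between heat and work, yet must agree on the total change. A minimal sketch, with invented numbers and the IUPAC sign convention:

```python
# Two invented processes between the same equilibrium states 1 -> 2.
# Work is counted as done ON the system (IUPAC convention).
path_a = {"Q": 150.0, "W": -50.0}  # more heat in, work done by the system
path_b = {"Q": 80.0,  "W": 20.0}   # less heat in, work done on the system

dU_a = path_a["Q"] + path_a["W"]
dU_b = path_b["Q"] + path_b["W"]
assert dU_a == dU_b == 100.0  # same end states -> same change in U, J
```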

General case for irreversible processes


If, in a process of change of state of a closed system, the energy transfer is not under a practically zero temperature gradient and practically frictionless, then the process is irreversible. Then the heat and work transfers may be difficult to calculate, and irreversible thermodynamics is called for. Nevertheless, the first law still holds and provides a check on the measurements and calculations of the work W done irreversibly on the system and the heat Q transferred irreversibly to the system, which belong to the same particular process defined by its particular irreversible path through the space of thermodynamic states. This means that the internal energy U is a function of state and that the internal energy change ΔU between two states is a function only of the two states.


Overview of the weight of evidence for the law


The first law of thermodynamics is very general and makes so many predictions that they can hardly all be directly tested by experiment. Nevertheless, very many of its predictions have been found empirically accurate. And, very importantly, no accurately and properly conducted experiment has ever detected a violation of the law. Consequently, within its scope of applicability, the law is so reliably established that, nowadays, rather than experiment being considered as testing the accuracy of the law, it is far more practical and realistic to think of the law as testing the accuracy of experiment. An experimental result that seems to violate the law may be assumed to be inaccurate or wrongly conceived, for example due to failure to consider an important physical factor.

State functional formulation for infinitesimal processes


When the heat and work transfers in the equations above are infinitesimal in magnitude, they are often denoted by δ, rather than exact differentials denoted by d, as a reminder that heat and work do not describe the state of any system. The integral of an inexact differential depends upon the particular path taken through the space of thermodynamic parameters while the integral of an exact differential depends only upon the initial and final states. If the initial and final states are the same, then the integral of an inexact differential may or may not be zero, but the integral of an exact differential is always zero. The path taken by a thermodynamic system through a chemical or physical change is known as a thermodynamic process. For a homogeneous system, with a well-defined temperature and pressure, the expression for dU can be written in terms of exact differentials, if the work that the system does is equal to its pressure times the infinitesimal increase in its volume. Here one assumes that the changes are quasistatic, so slow that there is at each instant negligible departure from thermodynamic equilibrium within the system. In other words, δW = -PdV, where P is pressure and V is volume. As such a quasistatic process in a homogeneous system is reversible, the total amount of heat added to a closed system can be expressed as δQ = TdS, where T is the temperature and S the entropy of the system. Therefore, for closed, homogeneous systems:

dU = TdS - PdV.
The above equation is known as the fundamental thermodynamic relation, for which the independent variables are taken as S and V, with respect to which T and P are partial derivatives of U. While this has been derived for quasistatic changes, it is valid in general, as U can be considered as a thermodynamic state function of the independent variables S and V. As an example, one may suppose that the system is initially in a state of thermodynamic equilibrium defined by S and V. Then the system is suddenly perturbed so that thermodynamic equilibrium breaks down and no temperature and pressure can be defined. Eventually the system settles down again to a state of thermodynamic equilibrium, defined by an entropy and a volume that differ infinitesimally from the initial values. The infinitesimal difference in internal energy between the initial and final state satisfies the above equation. But the work done and heat added to the system do not satisfy the above expressions. Rather, they satisfy the inequalities δQ < TdS and δW > -PdV. In the case of a closed system in which the particles of the system are of different types and, because chemical reactions may occur, their respective numbers are not necessarily constant, the expression for dU becomes:

dU = δQ + δW + Σi μi dNi,
where dNi is the (small) increase in amount of type-i particles in the reaction, and μi is known as the chemical potential of the type-i particles in the system. If dNi is expressed in mol then μi is expressed in J/mol. The statement of the first law, using exact differentials, is now:

dU = TdS - PdV + Σi μi dNi.

If the system has more external mechanical variables than just the volume that can change, the fundamental thermodynamic relation generalizes to:

dU = TdS - Σi Xi dxi + Σj μj dNj.
Here the Xi are the generalized forces corresponding to the external variables xi. The parameters Xi are independent of the size of the system and are called intensive parameters, and the xi are proportional to the size and called extensive parameters. For an open system, there can be transfers of particles as well as energy into or out of the system during a process. For this case, the first law of thermodynamics still holds, in the form that the internal energy is a function of state and the change of internal energy in a process is a function only of its initial and final states, as noted in the section below headed First law of thermodynamics for open systems. A useful idea from mechanics is that the energy gained by a particle is equal to the force applied to the particle multiplied by the displacement of the particle while that force is applied. Now consider the first law without the heating term: dU = -PdV. The pressure P can be viewed as a force (and in fact has units of force per unit area) while dV is the displacement (with units of distance times area). We may say, with respect to this work term, that a pressure difference forces a transfer of volume, and that the product of the two (work) is the amount of energy transferred out of the system as a result of the process. If one were to make this term negative then this would be the work done on the system. It is useful to view the TdS term in the same light: here the temperature is known as a "generalized" force (rather than an actual mechanical force) and the entropy is a generalized displacement. Similarly, a difference in chemical potential between groups of particles in the system drives a chemical reaction that changes the numbers of particles, and the corresponding product is the amount of chemical potential energy transformed in the process. For example, consider a system consisting of two phases: liquid water and water vapor. There is a generalized "force" of evaporation that drives water molecules out of the liquid. There is a generalized "force" of condensation that drives vapor molecules out of the vapor. Only when these two "forces" (or chemical potentials) are equal is there equilibrium, and the net rate of transfer zero. The two thermodynamic parameters that form a generalized force-displacement pair are called "conjugate variables". The two most familiar pairs are, of course, pressure-volume, and temperature-entropy.
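As a bookkeeping check on the relation dU = TdS - PdV + Σi μi dNi, each conjugate pair contributes the product of an intensive "force" and an extensive "displacement". A minimal numeric sketch (all increments and the chemical potential value are invented for illustration):

```python
# Intensive "forces" (invented values):
T = 300.0                 # temperature, K
P = 101325.0              # pressure, Pa
mu = {"H2O": -237000.0}   # chemical potential, J/mol (invented value)

# Extensive "displacements" (invented increments):
dS = 0.01                 # entropy increment, J/K
dV = -1e-6                # volume increment, m^3 (slight compression)
dN = {"H2O": 1e-5}        # mole-number increment, mol

dU = T * dS - P * dV + sum(mu[i] * dN[i] for i in mu)
print(f"dU = {dU:.3f} J")  # 3.0 + 0.101 - 2.37 = 0.731 J
```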

Spatially inhomogeneous systems


Classical thermodynamics is initially focused on closed homogeneous systems (e.g. Planck 1897/1903), which might be regarded as 'zero-dimensional' in the sense that they have no spatial variation. But it is desired to study also systems with distinct internal motion and spatial inhomogeneity. For such systems, the principle of conservation of energy is expressed in terms not only of internal energy as defined for homogeneous systems, but also in terms of kinetic energy and potential energies of parts of the inhomogeneous system with respect to each other and with respect to long-range external forces.[40] How the total energy of a system is allocated between these three more specific kinds of energy varies according to the purposes of different writers; this is because these components of energy are to some extent mathematical artefacts rather than actually measured physical quantities. For any closed homogeneous component of an inhomogeneous closed system, if E denotes the total energy of that component system, one may write

E = Ekin + Epot + U,

where Ekin and Epot denote respectively the total kinetic energy and the total potential energy of the component closed homogeneous system, and U denotes its internal energy.[41]

Potential energy can be exchanged with the surroundings of the system when the surroundings impose a force field, such as gravitational or electromagnetic, on the system.

A compound system consisting of two interacting closed homogeneous component subsystems has a potential energy of interaction Epot,12 between the subsystems. Thus, in an obvious notation, one may write

E = Ekin,1 + Epot,1 + U1 + Ekin,2 + Epot,2 + U2 + Epot,12.

The quantity Epot,12 in general lacks an assignment to either subsystem in a way that is not arbitrary, and this stands in the way of a general non-arbitrary definition of transfer of energy as work. On occasions, authors make their various respective arbitrary assignments.[42] The distinction between internal and kinetic energy is hard to make in the presence of turbulent motion within the system, as friction gradually dissipates macroscopic kinetic energy of localised bulk flow into molecular random motion of molecules that is classified as internal energy.[43] The rate of dissipation by friction of kinetic energy of localised bulk flow into internal energy,[44][45][46] whether in turbulent or in streamlined flow, is an important quantity in non-equilibrium thermodynamics. This is a serious difficulty for attempts to define entropy for time-varying spatially inhomogeneous systems.

First law of thermodynamics for open systems


For the first law of thermodynamics, there is no trivial passage of physical conception from the closed system view to an open system view.[47][48] For closed systems, the concepts of an adiabatic enclosure and of an adiabatic wall are fundamental. Matter and internal energy cannot permeate or penetrate such a wall. For an open system, there is a wall that allows penetration by matter. In general, matter in diffusive motion carries with it some internal energy, and some microscopic potential energy changes accompany the motion. An open system is not adiabatically enclosed. There are some cases in which a process for an open system can, for particular purposes, be considered as if it were for a closed system. In an open system, by definition hypothetically or potentially, matter can pass between the system and its surroundings. But when, in a particular case, the process of interest involves only hypothetical or potential but no actual passage of matter, the process can be considered as if it were for a closed system.

Internal energy for an open system


Since the revised and more rigorous definition of the internal energy of a closed system rests upon the possibility of processes by which adiabatic work takes the system from one state to another, this leaves a problem for the definition of internal energy for an open system, for which adiabatic work is not in general possible. According to Max Born, the transfer of matter and energy across an open connection "cannot be reduced to mechanics".[49] In contrast to the case of closed systems, for open systems, in the presence of diffusion, there is no unconstrained and unconditional physical distinction between convective transfer of internal energy by bulk flow of matter, the transfer of internal energy without transfer of matter (usually called heat conduction and work transfer), and change of various potential energies.[50][51][52] The older traditional way and the conceptually revised (Carathodory) way agree that there is no physically unique definition of heat and work transfer processes between open systems.[53][54][55][56][57] In particular, between two otherwise isolated open systems an adiabatic wall is by definition impossible.[58] This problem is solved by recourse to the principle of conservation of energy. This principle allows a composite isolated system to be derived from two other component non-interacting isolated systems, in such a way that the total energy of the composite isolated system is equal to the sum of the total energies of the two component isolated systems. Two previously isolated systems can be subjected to the thermodynamic operation of placement between them of a wall permeable to matter and energy, followed by a time for establishment of a new thermodynamic state of internal equilibrium in the new single unpartitioned system.[59] The internal energies of the initial two systems and of the final new system, considered respectively as closed systems as above, can be measured. Then the law of conservation of energy requires that
$$\Delta U_s + \Delta U_o = 0 \qquad \text{[60][61]}$$

where $\Delta U_s$ and $\Delta U_o$ denote the changes in internal energy of the system and of its surroundings respectively. This is a statement of the first law of thermodynamics for a transfer between two otherwise isolated open systems,[62] that fits well with the conceptually revised and rigorous statement of the law stated above. For the thermodynamic operation of adding two systems with internal energies $U_1$ and $U_2$, to produce a new system with internal energy $U$, one may write $U = U_1 + U_2$; the reference states for $U$, $U_1$ and $U_2$ should be specified accordingly, maintaining also that the internal energy of a system be proportional to its mass, so that the internal energies are extensive variables.[63] There is a sense in which this kind of additivity expresses a fundamental postulate that goes beyond the simplest ideas of classical closed-system thermodynamics; the extensivity of some variables is not obvious, and needs explicit expression; indeed, one author goes so far as to say that it could be recognized as a fourth law of thermodynamics, though this is not repeated by other authors.[64][65] Also of course

$$\Delta N_s + \Delta N_o = 0$$

where $\Delta N_s$ and $\Delta N_o$ denote the changes in mole number of a component substance of the system and of its surroundings respectively. This is a statement of the law of conservation of mass.
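As a concrete bookkeeping illustration of these two balance laws, the following Python sketch (with invented numbers; the `transfer` helper is hypothetical, not from any cited source) checks that $\Delta U_s + \Delta U_o = 0$ and $\Delta N_s + \Delta N_o = 0$ when internal energy and matter pass between an open system and its surroundings, and that the internal energies add extensively.

```python
def transfer(system, surroundings, dU, dN):
    """Move internal energy dU (J) and mole number dN (mol) from
    surroundings to system; a toy model of a wall permeable to both."""
    Us, Ns = system
    Uo, No = surroundings
    return (Us + dU, Ns + dN), (Uo - dU, No - dN)

system0, surroundings0 = (500.0, 2.0), (300.0, 1.0)   # (U in J, N in mol)
system1, surroundings1 = transfer(system0, surroundings0, dU=75.0, dN=0.25)

dUs = system1[0] - system0[0]
dUo = surroundings1[0] - surroundings0[0]
dNs = system1[1] - system0[1]
dNo = surroundings1[1] - surroundings0[1]

assert abs(dUs + dUo) < 1e-12   # conservation of energy
assert abs(dNs + dNo) < 1e-12   # conservation of mass

# Extensivity: the internal energy of the joined system is the sum of parts.
U_joined = system0[0] + surroundings0[0]
assert U_joined == 800.0
```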

Process of transfer of matter between an open system and its surroundings


A system connected to its surroundings only through contact by a single permeable wall, but otherwise isolated, is an open system. If it is initially in a state of contact equilibrium with a surrounding subsystem, a thermodynamic process of transfer of matter can be made to occur between them if the surrounding subsystem is subjected to some thermodynamic operation, for example, removal of a partition between it and some further surrounding subsystem. The removal of the partition in the surroundings initiates a process of exchange between the system and its contiguous surrounding subsystem.

An example is evaporation. One may consider an open system consisting of a collection of liquid, enclosed except where it is allowed to evaporate into or to receive condensate from its vapor above it, which may be considered as its contiguous surrounding subsystem, and subject to control of its volume and temperature. A thermodynamic process might be initiated by a thermodynamic operation in the surroundings that mechanically increases the controlled volume of the vapor. Some mechanical work will be done within the surroundings by the vapor, but also some of the parent liquid will evaporate and enter the vapor collection, which is the contiguous surrounding subsystem. Some internal energy will accompany the vapor that leaves the system, but it will not make sense to try to uniquely identify part of that internal energy as heat and part of it as work. Consequently, the energy transfer that accompanies the transfer of matter between the system and its surrounding subsystem cannot be uniquely split into heat and work transfers to or from the open system. The component of total energy transfer that accompanies the transfer of vapor into the surrounding subsystem is customarily called 'latent heat of evaporation', but this use of the word heat is a quirk of customary historical language, not in strict compliance with the thermodynamic definition of transfer of energy as heat. In this example, kinetic energy of bulk flow and potential energy with respect to long-range external forces such as gravity are both considered to be zero. The first law of thermodynamics refers to the change of internal energy of the open system, between its initial and final states of internal equilibrium.


Open system with multiple contacts


An open system can be in contact equilibrium with several other systems at once.[66][67][68][69][70][71][72] This includes cases in which there is contact equilibrium between the system, and several subsystems in its surroundings, including separate connections with subsystems through walls that are permeable to the transfer of matter and internal energy as heat and allowing friction of passage of the transferred matter, but immovable, and separate connections through adiabatic walls with others, and separate connections through diathermic walls impermeable to matter with yet others. Because there are physically separate connections that are permeable to energy but impermeable to matter, between the system and its surroundings, energy transfers between them can occur with definite heat and work characters. Conceptually essential here is that the internal energy transferred with the transfer of matter is measured by a variable that is mathematically independent of the variables that measure heat and work.[73] With such independence of variables, the total increase of internal energy in the process is then determined as the sum of the internal energy transferred from the surroundings with the transfer of matter through the walls that are permeable to it, and of the internal energy transferred to the system as heat through the diathermic walls, and of the energy transferred to the system as work through the adiabatic walls, including the energy transferred to the system by long-range forces. These simultaneously transferred quantities of energy are defined by events in the surroundings of the system. Because the internal energy transferred with matter is not in general uniquely resolvable into heat and work components, the total energy transfer cannot in general be uniquely resolved into heat and work components.[74] Under these conditions, the following formula can describe the process in terms of externally defined thermodynamic variables, as a statement of the first law of thermodynamics:

$$\Delta U_0 = Q - W - \sum_{i=1}^{m} \Delta U_i \qquad (1)$$

where $\Delta U_0$ denotes the change of internal energy of the system, and $\Delta U_i$ denotes the change of internal energy of the $i$th of the $m$ surrounding subsystems that are in open contact with the system, due to transfer between the system and that $i$th surrounding subsystem, and $Q$ denotes the internal energy transferred as heat from the heat reservoir of the surroundings to the system, and $W$ denotes the energy transferred from the system to the surrounding subsystems that are in adiabatic connection with it. The case of a wall that is permeable to matter and can move so as to allow transfer of energy as work is not considered here.

Combination of first and second laws

If the system is described by the energetic fundamental equation, $U_0 = U_0(S, V, N_j)$, and if the process can be described in the quasi-static formalism, in terms of the internal state variables of the system, then the process can also be described by a combination of the first and second laws of thermodynamics, by the formula

$$\mathrm{d}U_0 = T\,\mathrm{d}S - P\,\mathrm{d}V + \sum_{j=1}^{n} \mu_j\,\mathrm{d}N_j \qquad (2)$$

where there are $n$ chemical constituents of the system and permeably connected surrounding subsystems, and where $T$, $S$, $P$, $V$, $N_j$, and $\mu_j$ are defined as above.[75] For a general natural process, there is no simple termwise correspondence between equations (1) and (2), because they describe the process in different conceptual frames. Nevertheless, for the special fictive case of quasi-static transfers, there is a simple correspondence.[76] For this, it is supposed that the system has multiple areas of contact with its surroundings. There are pistons that allow adiabatic work, purely diathermal walls, and open connections with surrounding subsystems of completely controllable chemical potential (or equivalent controls for charged species). Then, for a suitable fictive quasi-static transfer, one can write

$$\delta Q = T\,\mathrm{d}S \quad \text{and} \quad \delta W = P\,\mathrm{d}V.$$

For fictive quasi-static transfers for which the chemical potentials in the connected surrounding subsystems are suitably controlled, these can be put into equation (2) to yield

$$\mathrm{d}U_0 = \delta Q - \delta W + \sum_{j=1}^{n} \mu_j\,\mathrm{d}N_j \qquad (3)$$

The reference does not actually write equation (3), but what it does write is fully compatible with it. There are several other accounts of this, in apparent mutual conflict.[77][78]
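For a single-component case, the termwise correspondence between equations (1) and (3) can be checked numerically. The sketch below is illustrative only: the increments and the chemical potential are invented, and it assumes the open-contact surrounding subsystem loses exactly the chemical energy the system gains with matter, i.e. $\Delta U_i = -\mu\,\mathrm{d}N$.

```python
# Hypothetical quasi-static increments for a single-component open system.
T, P, mu = 350.0, 1.0e5, -2.0e3       # K, Pa, J/mol (invented values)
dS, dV, dN = 0.02, 1.0e-6, 1.0e-4     # J/K, m^3, mol

# Equation (2): combined first and second laws in the Gibbs form.
dU_eq2 = T * dS - P * dV + mu * dN

# Equation (1): externally defined transfers. For the fictive quasi-static
# transfer we identify Q = T dS (through the diathermal wall), W = P dV
# (through the adiabatic piston), and take the open-contact surrounding
# subsystem to change by dU_i = -mu * dN.
Q, W = T * dS, P * dV
dU_i = -mu * dN
dU_eq1 = Q - W - dU_i

assert abs(dU_eq1 - dU_eq2) < 1e-12
print(dU_eq1)   # 6.7 J for these made-up increments
```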

Non-equilibrium transfers
The transfer of energy between an open system and a single contiguous subsystem of its surroundings is considered also in non-equilibrium thermodynamics. The problem of definition arises also in this case. It may be allowed that the wall between the system and the subsystem is not only permeable to matter and to internal energy, but also may be movable so as to allow work to be done when the two systems have different pressures. In this case, the transfer of energy as heat is not defined.

Methods for study of non-equilibrium processes mostly deal with spatially continuous flow systems. In this case, the open connection between system and surroundings is usually taken to fully surround the system, so that there are no separate connections impermeable to matter but permeable to heat. Except for the special case mentioned above, when there is no actual transfer of matter and the process can be treated as if for a closed system, it follows that, in strictly defined thermodynamic terms, transfer of energy as heat is not defined. In this sense, there is no such thing as 'heat flow' for a continuous-flow open system. Properly, for closed systems, one speaks of transfer of internal energy as heat, but in general, for open systems, one can speak safely only of transfer of internal energy. A factor here is that there are often cross-effects between distinct transfers, for example that transfer of one substance may cause transfer of another even when the latter has zero chemical potential gradient.

Usually transfer between a system and its surroundings applies to transfer of a state variable, and obeys a balance law, that the amount lost by the donor system is equal to the amount gained by the receptor system. Heat is not a state variable. For his 1947 definition of "heat transfer" for discrete open systems, the author Prigogine carefully explains at some length that his definition of it does not obey a balance law. He describes this as paradoxical.[79]

The situation is clarified by Gyarmati, who shows that his definition of "heat transfer", for continuous-flow systems, really refers not specifically to heat, but rather to transfer of internal energy, as follows. He considers a conceptual small cell in a situation of continuous flow as a system defined in the so-called Lagrangian way, moving with the local center of mass. The flow of matter across the boundary is zero when considered as a flow of total mass. Nevertheless, if the material constitution is of several chemically distinct components that can diffuse with respect to one another, the system is considered to be open, the diffusive flows of the components being defined with respect to the center of mass of the system, and balancing one another as to mass transfer. Still there can be a distinction between bulk flow of internal energy and diffusive flow of internal energy in this case, because the internal energy density does not have to be constant per unit mass of material, and because internal energy is not locally conserved, owing to the local conversion of kinetic energy of bulk flow to internal energy by viscosity.
Gyarmati shows that his definition of "the heat flow vector" is strictly speaking a definition of flow of internal energy, not specifically of heat, and so it turns out that his use here of the word heat is contrary to the strict thermodynamic definition of heat, though it is more or less compatible with historical custom, that often enough did not clearly distinguish between heat and internal energy; he writes "that this relation must be considered to be the exact definition of the concept of heat flow, fairly loosely used in experimental physics and heat technics."[80] Apparently in a different frame of thinking from that of the above-mentioned paradoxical usage in the earlier sections of the historic 1947 work by Prigogine, about discrete systems, this usage of Gyarmati is consistent with the later sections of the same 1947 work by Prigogine, about continuous-flow systems, which use the term "heat flux" in just this way. This usage is also followed by Glansdorff and Prigogine in their 1971 text about continuous-flow systems. They write: "Again the flow of internal energy may be split into a convection flow $\rho u \mathbf{v}$ and a conduction flow. This conduction flow is by definition the heat flow $\mathbf{W}$. Therefore: $\mathbf{j}[U] = \rho u \mathbf{v} + \mathbf{W}$ where $u$ denotes the [internal] energy per unit mass. [These authors actually use the symbols E and e to denote internal energy but their notation has been changed here to accord with the notation of the present article. These authors actually use the symbol U to refer to total energy, including kinetic energy of bulk flow.]"[81] This usage is followed also by other writers on non-equilibrium thermodynamics such as Lebon, Jou, and Casas-Vázquez,[82] and de Groot and Mazur.[83] This usage is described by Bailyn as stating the non-convective flow of internal energy, and is listed as his definition number 1, according to the first law of thermodynamics. This usage is also followed by workers in the kinetic theory of gases.[84][85][86] This is not the ad hoc definition of "reduced heat flux" of Haase.[87]

In the case of a flowing system of only one chemical constituent, in the Lagrangian representation, there is no distinction between bulk flow and diffusion of matter. Moreover, the flow of matter is zero into or out of the cell that moves with the local center of mass. In effect, in this description, one is dealing with a system effectively closed to the transfer of matter. But still one can validly talk of a distinction between bulk flow and diffusive flow of internal energy, the latter driven by a temperature gradient within the flowing material, and being defined with respect to the local center of mass of the bulk flow. In this case of a virtually closed system, because of the zero matter transfer, as noted above, one can safely distinguish between transfer of energy as work, and transfer of internal energy as heat.[88]


References
[1] Hess, H. (1840). Thermochemische Untersuchungen, Annalen der Physik und Chemie (Poggendorff, Leipzig) 126(6): 385–404 (http://gallica.bnf.fr/ark:/12148/bpt6k151359/f397.image).
[2] Truesdell, C.A. (1980), pp. 157–158.
[3] Mayer, Robert (1841). Paper: 'Remarks on the Forces of Nature'; as quoted in: Lehninger, A. (1971). Bioenergetics: the Molecular Basis of Biological Energy Transformations, 2nd ed. London: The Benjamin/Cummings Publishing Company.
[4] Bailyn, M. (1994), p. 79.
[5] Clausius, R. (1850). Ueber die bewegende Kraft der Wärme und die Gesetze, welche sich daraus für die Wärmelehre selbst ableiten lassen, Annalen der Physik und Chemie (Poggendorff, Leipzig), 155(3): 368–394, particularly on page 373 (http://gallica.bnf.fr/ark:/12148/bpt6k15164w/f389.image), translation here taken from Truesdell, C.A. (1980), pp. 188–189.
[6] Clausius, R. (1850). Ueber die bewegende Kraft der Wärme und die Gesetze, welche sich daraus für die Wärmelehre selbst ableiten lassen, Annalen der Physik und Chemie (Poggendorff, Leipzig), 155(3): 368–394, page 384 (http://gallica.bnf.fr/ark:/12148/bpt6k15164w/f400.image).
[7] Bailyn, M. (1994), p. 80.
[8] Bryan, G.H. (1907), p. 47. Also Bryan had written about this in the Enzyklopädie der Mathematischen Wissenschaften, volume 3, p. 81. Also in 1906 Jean Baptiste Perrin wrote about it in Bull. de la société française de philosophie, volume 6, p. 81.
[9] Bailyn, M. (1994), pp. 65, 79.
[10] Helmholtz, H. (1847).
[11] Bailyn, M. (1994), p. 82.
[12] Quantities, Units and Symbols in Physical Chemistry (IUPAC Green Book) (http://media.iupac.org/publications/books/gbook/IUPAC-GB3-2ndPrinting-Online-22apr2011.pdf). See Sec. 2.11 Chemical Thermodynamics.
[13] Planck, M. (1897/1903). Treatise on Thermodynamics, translated by A. Ogg, Longmans, Green & Co., London (https://ia700200.us.archive.org/15/items/treatiseonthermo00planrich/treatiseonthermo00planrich.pdf), p. 43.
[14] Münster, A. (1970).
[15] Kirkwood, J.G., Oppenheim, I. (1961), pp. 31–33.
[16] Planck, M. (1897/1903), p. 86.
[17] Crawford, F.H. (1963), pp. 106–107.
[18] Bryan, G.H. (1907), p. 47.
[19] Buchdahl, H.A. (1966), p. 34.
[20] Pippard, A.B. (1957/1966), p. 14.
[21] Reif, F. (1965), p. 82.
[22] Adkins, C.J. (1968/1983), p. 31.
[23] Callen, H.B. (1960/1985), pp. 13, 17.
[24] Kittel, C., Kroemer, H. (1980). Thermal Physics, (first edition by Kittel alone 1969), second edition, W.H. Freeman, San Francisco, ISBN 0-7167-1088-9, pp. 49, 227.
[25] Tro, N.J. (2008). Chemistry. A Molecular Approach, Pearson/Prentice Hall, Upper Saddle River NJ, ISBN 0-13-100065-9, p. 246.

[26] Kirkwood, J.G., Oppenheim, I. (1961), pp. 17–18. Kirkwood & Oppenheim 1961 is recommended by Münster, A. (1970), p. 376. It is also cited by Eu, B.C. (2002), Generalized Thermodynamics, the Thermodynamics of Irreversible Processes and Generalized Hydrodynamics, Kluwer Academic Publishers, Dordrecht, ISBN 1-4020-0788-4, pp. 18, 29, 66.
[27] Guggenheim, E.A. (1949/1967). Thermodynamics. An Advanced Treatment for Chemists and Physicists, (first edition 1949), fifth edition 1967, North-Holland, Amsterdam, pp. 9–10. Guggenheim 1949/1965 is recommended by Buchdahl, H.A. (1966), p. 218. It is also recommended by Münster, A. (1970), p. 376.
[28] Planck, M. (1897/1903).
[29] Cropper, W.H. (1986). Rudolf Clausius and the road to entropy, Am. J. Phys., 54: 1068–1074.
[30] Truesdell, C.A. (1980), pp. 161–162.
[31] Buchdahl, H.A. (1966), p. 43.
[32] Maxwell, J.C. (1871). Theory of Heat, Longmans, Green, and Co., London, p. 150.
[33] Planck, M. (1897/1903), Section 71, p. 52.
[34] Bailyn, M. (1994), p. 95.
[35] Adkins, C.J. (1968/1983), p. 35.
[36] Atkins, P., de Paula, J. (1978/2010). Physical Chemistry, (first edition 1978), ninth edition 2010, Oxford University Press, Oxford UK, ISBN 978-0-19-954337-3, p. 54.
[37] Kondepudi, D. (2008). Introduction to Modern Thermodynamics, Wiley, Chichester, ISBN 978-0-470-01598-8, p. 63.
[38] Gislason, E.A., Craig, N.C. (2005). Cementing the foundations of thermodynamics: comparison of system-based and surroundings-based definitions of work and heat, J. Chem. Thermodynamics 37: 954–966.
[39] Rosenberg, R.M. (2010). From Joule to Caratheodory and Born: A conceptual evolution of the first law of thermodynamics, J. Chem. Edu., 87: 691–693.
[40] Bailyn, M. (1994), pp. 254–256.
[41] Glansdorff, P., Prigogine, I. (1971), page 8.
[42] Tisza, L. (1966), p. 91.
[43] Denbigh, K.G. (1951), p. 50.
[44] Thomson, William (1852 a). "On a Universal Tendency in Nature to the Dissipation of Mechanical Energy" (http://zapatopi.net/kelvin/papers/on_a_universal_tendency.html), Proceedings of the Royal Society of Edinburgh for April 19, 1852 [This version from Mathematical and Physical Papers, vol. i, art. 59, pp. 511.]
[45] Thomson, W. (1852 b). On a universal tendency in nature to the dissipation of mechanical energy, Philosophical Magazine 4: 304–306.
[46] Helmholtz, H. (1869/1871). Zur Theorie der stationären Ströme in reibenden Flüssigkeiten, Verhandlungen des naturhistorisch-medizinischen Vereins zu Heidelberg, Band V: 1–7. Reprinted in Helmholtz, H. (1882), Wissenschaftliche Abhandlungen, volume 1, Johann Ambrosius Barth, Leipzig, pages 223–230 (http://echo.mpiwg-berlin.mpg.de/ECHOdocuViewfull?url=/mpiwg/online/permanent/einstein_exhibition/sources/QWH2FNX8/index.meta&start=231&viewMode=images&pn=237&mode=texttool).
[47] Münster, A. (1970), Sections 14, 15, pp. 45–51.
[48] Landsberg, P.T. (1978), p. 78.
[49] Born, M. (1949), p. 44.
[50] Denbigh, K.G. (1951), p. 56. Denbigh states in a footnote that he is indebted to correspondence with Professor E.A. Guggenheim and with Professor N.K. Adam. From this, Denbigh concludes "It seems, however, that when a system is able to exchange both heat and matter with its environment, it is impossible to make an unambiguous distinction between energy transported as heat and by the migration of matter, without already assuming the existence of the 'heat of transport'."
[51] Fitts, D.D. (1962), p. 28.
[52] Denbigh, K. (1954/1971), pp. 81–82.
[53] Münster, A. (1970), p. 50.
[54] Haase, R. (1963/1969), p. 15.
[55] Haase, R. (1971), p. 20.
[56] Smith, D.A. (1980). Definition of heat in open systems, Aust. J. Phys., 33: 95–105 (http://www.publish.csiro.au/paper/PH800095.htm).
[57] Bailyn, M. (1994), p. 308.
[58] Münster, A. (1970), p. 46.
[59] Tisza, L. (1966), p. 41.
[60] Callen, H.B. (1960/1985), p. 54.
[61] Tisza, L. (1966), p. 110.
[62] Tisza, L. (1966), p. 111.
[63] Prigogine, I. (1955/1967), p. 12.
[64] Landsberg, P.T. (1961), pp. 142, 387.
[65] Landsberg, P.T. (1978), pp. 79, 102.
[66] Prigogine, I. (1947), p. 48.
[67] Born, M. (1949), Appendix 8, pp. 146–149.
[68] Aston, J.G., Fritz, J.J. (1959), Chapter 9.
[69] Kestin, J. (1961).

[70] Landsberg, P.T. (1961), pp. 128–142.
[71] Tisza, L. (1966), p. 108.
[72] Tschoegl, N.W. (2000), p. 201.
[73] Born, M. (1949), pp. 146–147.
[74] Haase, R. (1971), p. 35.
[75] Callen, H.B. (1960/1985), p. 35.
[76] Aston, J.G., Fritz, J.J. (1959), Chapter 9. This is an unusually explicit account of some of the physical meaning of the Gibbs formalism.
[77] Buchdahl, H.A. (1966), Section 66, pp. 121–125.
[78] Callen, J.B. (1960/1985), Section 2-1, pp. 35–37.
[79] Prigogine, I. (1947), pp. 48–49.
[80] Gyarmati, I. (1970), p. 68.
[81] Glansdorff, P., Prigogine, I. (1971), p. 9.
[82] Lebon, G., Jou, D., Casas-Vázquez, J. (2008), p. 45.
[83] de Groot, S.R., Mazur, P. (1962), p. 18.
[84] de Groot, S.R., Mazur, P. (1962), p. 169.
[85] Truesdell, C., Muncaster, R.G. (1980), p. 3.
[86] Balescu, R. (1997), p. 9.
[87] Haase, R. (1963/1969), p. 18.
[88] Eckart, C. (1940).


Cited sources
Adkins, C.J. (1968/1983). Equilibrium Thermodynamics, (first edition 1968), third edition 1983, Cambridge University Press, ISBN 0-521-25445-0.
Aston, J.G., Fritz, J.J. (1959). Thermodynamics and Statistical Thermodynamics, John Wiley & Sons, New York.
Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York, ISBN 0-88318-797-3.
Born, M. (1949). Natural Philosophy of Cause and Chance, Oxford University Press, London.
Bryan, G.H. (1907). Thermodynamics. An Introductory Treatise dealing mainly with First Principles and their Direct Applications, B.G. Teubner, Leipzig (https://ia700208.us.archive.org/6/items/Thermodynamics/Thermodynamics.tif).
Balescu, R. (1997). Statistical Dynamics; Matter out of Equilibrium, Imperial College Press, London, ISBN 978-1-86094-045-3.
Buchdahl, H.A. (1966). The Concepts of Classical Thermodynamics, Cambridge University Press, London.
Callen, H.B. (1960/1985). Thermodynamics and an Introduction to Thermostatistics, (first edition 1960), second edition 1985, John Wiley & Sons, New York, ISBN 0471862568.
Carathéodory, C. (1909). Untersuchungen über die Grundlagen der Thermodynamik, Mathematische Annalen, 67: 355–386, doi:10.1007/BF01450409 (http://dx.doi.org/10.1007/BF01450409). A translation may be found here (http://neo-classical-physics.info/uploads/3/0/6/5/3065888/caratheodory_-_thermodynamics.pdf). Also a mostly reliable translation is to be found (http://books.google.com.au/books?id=xwBRAAAAMAAJ&q=Investigation+into+the+foundations) at Kestin, J. (1976). The Second Law of Thermodynamics, Dowden, Hutchinson & Ross, Stroudsburg PA.
Crawford, F.H. (1963). Heat, Thermodynamics, and Statistical Physics, Rupert Hart-Davis, London, Harcourt, Brace & World, Inc.
de Groot, S.R., Mazur, P. (1962). Non-equilibrium Thermodynamics, North-Holland, Amsterdam. Reprinted (1984), Dover Publications Inc., New York, ISBN 0486647412.
Denbigh, K.G. (1951). The Thermodynamics of the Steady State (http://books.google.com.au/books/about/The_thermodynamics_of_the_steady_state.html?id=uoJGAAAAYAAJ&redir_esc=y), Methuen, London, Wiley, New York.
Denbigh, K. (1954/1971). The Principles of Chemical Equilibrium. With Applications in Chemistry and Chemical Engineering, third edition, Cambridge University Press, Cambridge UK.
Eckart, C. (1940). The thermodynamics of irreversible processes. I. The simple fluid, Phys. Rev. 58: 267–269.

Fitts, D.D. (1962). Nonequilibrium Thermodynamics. Phenomenological Theory of Irreversible Processes in Fluid Systems, McGraw-Hill, New York.
Glansdorff, P., Prigogine, I. (1971). Thermodynamic Theory of Structure, Stability and Fluctuations, Wiley, London, ISBN 0-471-30280-5.
Gyarmati, I. (1967/1970). Non-equilibrium Thermodynamics. Field Theory and Variational Principles, translated from the 1967 Hungarian by E. Gyarmati and W.F. Heinz, Springer-Verlag, New York.
Haase, R. (1963/1969). Thermodynamics of Irreversible Processes, English translation, Addison-Wesley Publishing, Reading MA.
Haase, R. (1971). Survey of Fundamental Laws, chapter 1 of Thermodynamics, pages 1–97 of volume 1, ed. W. Jost, of Physical Chemistry. An Advanced Treatise, ed. H. Eyring, D. Henderson, W. Jost, Academic Press, New York, lcn 73117081.
Helmholtz, H. (1847). Ueber die Erhaltung der Kraft. Eine physikalische Abhandlung, G. Reimer (publisher), Berlin, read on 23 July in a session of the Physikalischen Gesellschaft zu Berlin. Reprinted in Helmholtz, H. von (1882), Wissenschaftliche Abhandlungen (http://archive.org/details/wissenschaftlic00helmgoog), Band 1, J.A. Barth, Leipzig. Translated and edited by J. Tyndall, in Scientific Memoirs, Selected from the Transactions of Foreign Academies of Science and from Foreign Journals. Natural Philosophy (1853), volume 7, edited by J. Tyndall, W. Francis, published by Taylor and Francis, London, pp. 114–162, reprinted as volume 7 of Series 7, The Sources of Science, edited by H. Woolf, (1966), Johnson Reprint Corporation, New York, and again in Brush, S.G., The Kinetic Theory of Gases. An Anthology of Classic Papers with Historical Commentary, volume 1 of History of Modern Physical Sciences, edited by N.S. Hall, Imperial College Press, London, ISBN 1-86094-347-0, pp. 89–110.
Kestin, J. (1961). On intersecting isentropics, Am. J. Phys., 29: 329–331.
Kirkwood, J.G., Oppenheim, I. (1961). Chemical Thermodynamics, McGraw-Hill Book Company, New York.
Landsberg, P.T. (1961). Thermodynamics with Quantum Statistical Illustrations, Interscience, New York.
Landsberg, P.T. (1978). Thermodynamics and Statistical Mechanics, Oxford University Press, Oxford UK, ISBN 0-19-851142-6.
Lebon, G., Jou, D., Casas-Vázquez, J. (2008). Understanding Non-equilibrium Thermodynamics, Springer, Berlin, ISBN 978-3-540-74251-7.
Münster, A. (1970). Classical Thermodynamics, translated by E.S. Halberstadt, Wiley-Interscience, London, ISBN 0-471-62430-6.
Pippard, A.B. (1957/1966). Elements of Classical Thermodynamics for Advanced Students of Physics, original publication 1957, reprint 1966, Cambridge University Press, Cambridge UK.
Planck, M. (1897/1903). Treatise on Thermodynamics, translated by A. Ogg, Longmans, Green & Co., London (https://ia700200.us.archive.org/15/items/treatiseonthermo00planrich/treatiseonthermo00planrich.pdf).
Prigogine, I. (1947). Étude Thermodynamique des Phénomènes irréversibles, Dunod, Paris, and Desoers, Liège.
Prigogine, I. (1955/1967). Introduction to Thermodynamics of Irreversible Processes, third edition, Interscience Publishers, New York.
Reif, F. (1965). Fundamentals of Statistical and Thermal Physics, McGraw-Hill Book Company, New York.
Tisza, L. (1966). Generalized Thermodynamics, M.I.T. Press, Cambridge MA.
Truesdell, C.A. (1980). The Tragicomical History of Thermodynamics, 1822–1854, Springer, New York, ISBN 0-387-90403-4.
Truesdell, C.A., Muncaster, R.G. (1980). Fundamentals of Maxwell's Kinetic Theory of a Simple Monatomic Gas, Treated as a Branch of Rational Mechanics, Academic Press, New York, ISBN 0-12-701350-4.
Tschoegl, N.W. (2000). Fundamentals of Equilibrium and Steady-State Thermodynamics, Elsevier, Amsterdam, ISBN 0-444-50426-5.


Further reading
Goldstein, Martin, and Inge F. (1993). The Refrigerator and the Universe. Harvard University Press. ISBN 0-674-75325-9. OCLC 32826343 (http://www.worldcat.org/oclc/32826343). Chpts. 2 and 3 contain a nontechnical treatment of the first law.
Çengel, Y.A. and Boles, M. (2007). Thermodynamics: an engineering approach. McGraw-Hill Higher Education. ISBN 0-07-125771-3. Chapter 2.
Atkins, P. (2007). Four Laws that Drive the Universe. OUP Oxford. ISBN 0-19-923236-9.

External links
MISN-0-158, The First Law of Thermodynamics (http://35.9.69.219/home/modules/pdf_modules/m158.pdf) (PDF file) by Jerzy Borysowicz for Project PHYSNET (http://www.physnet.org).
First law of thermodynamics (http://web.mit.edu/16.unified/www/FALL/thermodynamics/notes/node8.html) in the MIT Course Unified Thermodynamics and Propulsion (http://web.mit.edu/16.unified/www/FALL/thermodynamics/notes/notes.html) from Prof. Z. S. Spakovszky.

Second
The second law of thermodynamics states that the entropy of an isolated system never decreases, because isolated systems spontaneously evolve toward thermodynamic equilibriumthe state of maximum entropy. Equivalently, perpetual motion machines of the second kind are impossible. The second law is an empirically validated postulate of thermodynamics, but it can be understood and explained using the underlying quantum statistical mechanics. In the language of statistical mechanics, entropy is a measure of the number of microscopic configurations corresponding to a macroscopic state. Because thermodynamic equilibrium corresponds to a vastly greater number of microscopic configurations than any non-equilibrium state, it has the maximum entropy, and the second law follows because random chance alone practically guarantees that the system will evolve towards such thermodynamic equilibrium.

It is an expression of the fact that over time, differences in temperature, pressure, and chemical potential decrease in an isolated non-gravitational physical system, leading eventually to a state of thermodynamic equilibrium. The second law may be expressed in many specific ways, but the first formulation is credited to the French scientist Sadi Carnot in 1824 (see Timeline of thermodynamics). Strictly speaking, the early statements of the Second Law are only correct in a horizontal plane in a gravitational field. The second law has been shown to be equivalent to the internal energy U being a weakly convex function, when written as a function of extensive properties (mass, volume, entropy, ...).


Description
The first law of thermodynamics provides the basic definition of thermodynamic energy, also called internal energy, associated with all thermodynamic systems, but unknown in classical mechanics, and states the rule of conservation of energy in nature. The concept of energy in the first law does not, however, account for the observation that natural processes have a preferred direction of progress. For example, heat always flows spontaneously from regions of higher temperature to regions of lower temperature, and never the reverse, unless external work is performed on the system. The first law is completely symmetrical with respect to the initial and final states of an evolving system. The key concept for the explanation of this phenomenon through the second law of thermodynamics is the definition of a new physical property, the entropy. In a reversible process, an infinitesimal increment in the entropy ($\mathrm{d}S$) of a system results from an infinitesimal transfer of heat ($\delta Q$) to a closed system divided by the common temperature ($T$) of the system and the surroundings which supply the heat:[1]

$$\mathrm{d}S = \frac{\delta Q}{T}.$$
The entropy of an isolated system in its own internal thermodynamic equilibrium does not change with time. An isolated system may consist initially of several subsystems, separated from one another by partitions, but still each in its own internal thermodynamic equilibrium. If the partitions are removed, the former subsystems will in general interact and produce a new common final system in its own internal thermodynamic equilibrium. The sum of the entropies of the initial subsystems is in general less than the entropy of the final common system. If all of the initial subsystems have the same values of their intensive variables, then the sum of the initial entropies will be equal to the final common entropy, and the final common system will have the same values of its intensive variables.

For a body in thermal equilibrium with another, there are indefinitely many empirical temperature scales, in general respectively depending on the properties of a particular reference thermometric body. Thermal equilibrium between two bodies entails that they have equal temperatures. The zeroth law of thermodynamics in its usual short statement allows recognition that two bodies have the same temperature, especially that a test body has the same temperature as a reference thermometric body. The second law allows a distinguished temperature scale, which defines an absolute, thermodynamic temperature, independent of the properties of any particular thermometric body.[2][3]

The second law of thermodynamics may be expressed in many specific ways, the most prominent classical statements[4] being the statement by Rudolf Clausius (1854), the statement by Lord Kelvin (1851), and the statement in axiomatic thermodynamics by Constantin Carathéodory (1909). These statements cast the law in general physical terms citing the impossibility of certain processes. The Clausius and the Kelvin statements have been shown to be equivalent.
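As a worked example of integrating the Clausius relation $\mathrm{d}S = \delta Q/T$ given above, the following Python sketch computes the entropy change of water heated reversibly, both in closed form and by direct numerical summation of $\delta Q/T$; it assumes a constant specific heat, a textbook approximation.

```python
import math

# Entropy change of 1 kg of water heated reversibly from 293.15 K to 353.15 K,
# assuming constant specific heat c = 4186 J/(kg K).
m, c = 1.0, 4186.0
T1, T2 = 293.15, 353.15

# dS = dQ/T with dQ = m*c*dT, integrated: Delta S = m*c*ln(T2/T1)
dS_exact = m * c * math.log(T2 / T1)

# The same integral done numerically, summing dQ/T over small steps.
steps = 100_000
dT = (T2 - T1) / steps
dS_numeric = sum(m * c * dT / (T1 + (k + 0.5) * dT) for k in range(steps))

print(dS_exact, dS_numeric)   # both ~ 780 J/K
```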


Carnot's principle
The historical origin of the second law of thermodynamics was in Carnot's principle. It refers to a cycle of a Carnot engine, fictively operated in the limiting mode of extreme slowness known as quasi-static, so that the heat and work transfers are between subsystems that are always in their own internal states of thermodynamic equilibrium. The Carnot engine is an idealized device of special interest to engineers who are concerned with the efficiency of heat engines. Carnot's principle was recognized by Carnot at a time when the caloric theory of heat was seriously considered, before the recognition of the first law of thermodynamics, and before the mathematical expression of the concept of entropy. Interpreted in the light of the first law, it is physically equivalent to the second law of thermodynamics, and remains valid today. It states:

The efficiency of a quasi-static or reversible Carnot cycle depends only on the temperatures of the two heat reservoirs, and is independent of the working substance. A Carnot engine operated in this way is the most efficient possible heat engine using those two temperatures.[5][6][7][8][9][10][11]

Clausius statement
The German scientist Rudolf Clausius laid the foundation for the second law of thermodynamics in 1850 by examining the relation between heat transfer and work.[12] His formulation of the second law, which was published in German in 1854, is known as the Clausius statement: Heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time. Heat cannot spontaneously flow from cold regions to hot regions without external work being performed on the system, which is evident from ordinary experience of refrigeration, for example. In a refrigerator, heat flows from cold to hot, but only when forced by an external agent, the refrigeration system.

Kelvin statement
Lord Kelvin expressed the second law as It is impossible, by means of inanimate material agency, to derive mechanical effect from any portion of matter by cooling it below the temperature of the coldest of the surrounding objects.[13]

Planck's Principle
In 1926 Max Planck wrote an important paper on the basics of thermodynamics. He indicated the principle:

The internal energy of a closed system is increased by an isochoric adiabatic process.

This formulation does not mention heat and does not mention temperature, nor even entropy, and does not necessarily implicitly rely on those concepts, but it implies the content of the second law. A closely related statement is that "Frictional pressure never does positive work."[14] Using a now obsolete form of words, Planck himself wrote: "The production of heat by friction is irreversible."[15]


Principle of Carathéodory

Constantin Carathéodory formulated thermodynamics on a purely mathematical axiomatic foundation. His statement of the second law is known as the Principle of Carathéodory, which may be formulated as follows:[16]

In every neighborhood of any state S of an adiabatically isolated system there are states inaccessible from S.[17]

With this formulation he described the concept of adiabatic accessibility for the first time and provided the foundation for a new subfield of classical thermodynamics, often called geometrical thermodynamics. It follows from Carathéodory's principle that the quantity of energy quasi-statically transferred as heat is a holonomic process function, in other words, $\delta Q = T\,\mathrm{d}S$. Though it is almost customary in textbooks to say that Carathéodory's principle expresses the second law and to treat it as equivalent to the Clausius or to the Kelvin-Planck statements, such is not the case. To get all the content of the second law, Carathéodory's principle needs to be supplemented by Planck's principle, that isochoric work always increases the internal energy of a closed system that was initially in its own internal thermodynamic equilibrium.[18][19]

Equivalence of the Clausius and the Kelvin statements


Suppose there is an engine violating the Kelvin statement: i.e., one that drains heat and converts it completely into work in a cyclic fashion without any other result. Now pair it with a reversed Carnot engine, as shown by the figure. The net and sole effect of this newly created engine consisting of the two engines mentioned is transferring heat from the cooler reservoir to the hotter one, which violates the Clausius statement. Thus a violation of the Kelvin statement implies a violation of the Clausius statement, i.e. the Clausius statement implies the Kelvin statement. We can prove in a similar manner that the Kelvin statement implies the Clausius statement, and hence the two are equivalent.

(Figure: deriving the Kelvin statement from the Clausius statement.)
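The bookkeeping behind this argument can be made concrete with invented numbers: a hypothetical Kelvin-violating engine is paired with a reversed Carnot engine, and the net effect per cycle is a pure cold-to-hot transfer of heat, which is what the Clausius statement forbids.

```python
Th, Tc = 600.0, 300.0   # hot and cold reservoir temperatures, K

# Hypothetical Kelvin-violating engine: draws q from the hot reservoir and
# converts it entirely into work in a cycle (impossible by the Kelvin statement).
q = 100.0
w = q

# Reversed Carnot engine (a Carnot refrigerator) driven by that work:
# it removes qc from the cold reservoir and rejects qh = qc + w to the hot one.
COP = Tc / (Th - Tc)    # Carnot coefficient of performance
qc = COP * w
qh = qc + w

# Net effect on the reservoirs per cycle of the composite device:
hot_net = qh - q        # heat delivered to the hot reservoir
cold_net = -qc          # heat taken from the cold reservoir
print(hot_net, cold_net)  # (+100, -100): a pure cold-to-hot transfer,
                          # violating the Clausius statement
```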


Gravitational systems
In non-gravitational systems, objects always have positive heat capacity, meaning that the temperature rises with energy. Therefore, when energy flows from a high-temperature object to a low-temperature object, the source temperature is decreased while the sink temperature is increased; hence temperature differences tend to diminish over time. However, this is not always the case for systems in which the gravitational force is important. The most striking examples are black holes, which, according to theory, have negative heat capacity. The larger the black hole, the more energy it contains, but the lower its temperature. Thus, the supermassive black hole in the center of the Milky Way is supposed to have a temperature of about 10^-14 K, much lower than the cosmic microwave background temperature of 2.7 K; but as it absorbs photons of the cosmic microwave background, its mass increases, so that its low temperature further decreases with time. For this reason, gravitational systems tend towards a non-even distribution of mass and energy.
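The quoted temperature can be checked against the standard Hawking temperature formula $T = \hbar c^3 / (8\pi G M k_B)$; the sketch below assumes a mass of about 4 x 10^6 solar masses for the Milky Way's central black hole, a commonly cited estimate that is not taken from this text.

```python
import math

# Hawking temperature T = hbar*c^3 / (8*pi*G*M*kB) of a black hole of mass M.
hbar = 1.0545718e-34   # J s
c    = 2.99792458e8    # m/s
G    = 6.674e-11       # m^3 kg^-1 s^-2
kB   = 1.380649e-23    # J/K
Msun = 1.989e30        # kg

def hawking_temperature(M):
    return hbar * c**3 / (8 * math.pi * G * M * kB)

# Assumed mass of ~4e6 solar masses for the Milky Way's central black hole:
print(hawking_temperature(4e6 * Msun))   # ~1.5e-14 K, far below the 2.7 K CMB
```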

Corollaries
Perpetual motion of the second kind
Before the establishment of the second law, many people who were interested in inventing a perpetual motion machine had tried to circumvent the restrictions of the first law of thermodynamics by extracting the massive internal energy of the environment as the power of the machine. Such a machine is called a "perpetual motion machine of the second kind". The second law declared the impossibility of such machines.

Carnot theorem
Carnot's theorem (1824) is a principle that limits the maximum efficiency for any possible engine. The efficiency solely depends on the temperature difference between the hot and cold thermal reservoirs. Carnot's theorem states:

All irreversible heat engines between two heat reservoirs are less efficient than a Carnot engine operating between the same reservoirs.

All reversible heat engines between two heat reservoirs are equally efficient with a Carnot engine operating between the same reservoirs.

In his ideal model, the heat of caloric converted into work could be reinstated by reversing the motion of the cycle, a concept subsequently known as thermodynamic reversibility. Carnot, however, further postulated that some caloric is lost, not being converted to mechanical work. Hence no real heat engine could realise the Carnot cycle's reversibility and was condemned to be less efficient. Though formulated in terms of caloric (see the obsolete caloric theory), rather than entropy, this was an early insight into the second law.
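A minimal numerical statement of the bound follows; the "real engine" efficiency is a made-up figure standing in for a measurement.

```python
def carnot_efficiency(Th, Tc):
    """Maximum efficiency of any heat engine between reservoirs Th > Tc (in K)."""
    return 1.0 - Tc / Th

Th, Tc = 800.0, 300.0
eta_max = carnot_efficiency(Th, Tc)   # 0.625
eta_real = 0.40                       # hypothetical measured efficiency
assert eta_real <= eta_max            # Carnot's theorem: no engine does better
print(eta_max)
```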

Clausius Inequality
The Clausius theorem (1854) states that in a cyclic process

$$\oint \frac{\delta Q}{T} \leq 0.$$

The equality holds in the reversible case[20] and the strict inequality holds in the irreversible case. The reversible case is used to introduce the state function entropy. This is because around a cycle a state function returns to its initial value, so its variation is zero.


Thermodynamic temperature
For an arbitrary heat engine, the efficiency is:

$$\eta = \frac{A}{q_H} = \frac{q_H - q_C}{q_H} = 1 - \frac{q_C}{q_H}$$

where $A$ is the work done per cycle. Thus the efficiency depends only on $q_C/q_H$.

Carnot's theorem states that all reversible engines operating between the same heat reservoirs are equally efficient. Thus, any reversible heat engine operating between temperatures $T_1$ and $T_2$ must have the same efficiency, that is to say, the efficiency is a function of temperatures only:

$$\frac{q_C}{q_H} = f(T_H, T_C).$$

In addition, a reversible heat engine operating between temperatures $T_1$ and $T_3$ must have the same efficiency as one consisting of two cycles, one between $T_1$ and another (intermediate) temperature $T_2$, and the second between $T_2$ and $T_3$. This can only be the case if

$$f(T_1, T_3) = f(T_1, T_2)\, f(T_2, T_3).$$

Now consider the case where $T_1$ is a fixed reference temperature: the temperature of the triple point of water. Then for any $T_2$ and $T_3$,

$$f(T_2, T_3) = \frac{f(T_1, T_3)}{f(T_1, T_2)} = \frac{273.16 \cdot f(T_1, T_3)}{273.16 \cdot f(T_1, T_2)}.$$

Therefore, if thermodynamic temperature is defined by

$$T = 273.16 \cdot f(T_1, T)$$

then the function $f$, viewed as a function of thermodynamic temperature, is simply

$$f(T_2, T_3) = \frac{T_3}{T_2},$$

and the reference temperature $T_1$ will have the value 273.16. (Of course any reference temperature and any positive numerical value could be used; the choice here corresponds to the Kelvin scale.)
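The construction can be checked numerically: any $f$ of ratio form satisfies the multiplicative constraint, and anchoring the scale at the triple point of water reproduces the Kelvin scale. The sketch below is illustrative only.

```python
def f(Ta, Tb):
    """The Kelvin-scale form f(T2, T3) = T3/T2."""
    return Tb / Ta

T1 = 273.16                 # reference: triple point of water
for T2, T3 in [(300.0, 500.0), (250.0, 1000.0)]:
    # Multiplicative composition of two sub-cycles:
    assert abs(f(T1, T3) - f(T1, T2) * f(T2, T3)) < 1e-12
    # Efficiency of a reversible engine between hot T3 and cold T2:
    eta = 1 - f(T3, T2)     # qC/qH = f(TH, TC) = TC/TH
    print(eta)              # 0.4 and 0.75 for these pairs
```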

Entropy
According to the Clausius equality, for a reversible process

$$\oint \frac{\delta Q}{T} = 0.$$

That means the line integral $\int \frac{\delta Q}{T}$ is path independent. So we can define a state function $S$ called entropy, which satisfies

$$\mathrm{d}S = \frac{\delta Q}{T}.$$

With this we can only obtain the difference of entropy by integrating the above formula. To obtain the absolute value, we need the third law of thermodynamics, which states that $S = 0$ at absolute zero for perfect crystals.

For any irreversible process, since entropy is a state function, we can always connect the initial and terminal states with an imaginary reversible process and integrate on that path to calculate the difference in entropy. Now reverse the reversible process and combine it with the said irreversible process. Applying the Clausius inequality on this loop,

$$-\Delta S + \int \frac{\delta Q}{T} \leq 0.$$

Thus,

$$\Delta S \geq \int \frac{\delta Q}{T}$$

where the equality holds if the transformation is reversible. Notice that if the process is an adiabatic process, then $\delta Q = 0$, so $\Delta S \geq 0$.
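A standard worked example of this recipe is the free expansion of an ideal gas: no heat flows along the actual path, so the Clausius integral is zero, while the entropy change computed along an imaginary reversible isothermal path is strictly positive. The sketch assumes one mole of ideal gas doubling its volume.

```python
import math

# Irreversible free expansion of n moles of ideal gas from V1 to 2*V1.
# No heat flows (delta Q = 0), so the Clausius integral over the actual path
# is 0, yet Delta S > 0, computed along an imaginary reversible isothermal path.
R = 8.314                        # J/(mol K)
n, ratio = 1.0, 2.0

clausius_integral = 0.0          # integral of dQ/T over the actual path
dS = n * R * math.log(ratio)     # ~ +5.76 J/K via the reversible path
assert dS >= clausius_integral   # the Clausius inequality Delta S >= int dQ/T
print(dS)
```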

Exergy, available useful work


An important and revealing idealized special case is to consider applying the Second Law to the scenario of an isolated system (called the total system or universe), made up of two parts: a sub-system of interest, and the sub-system's surroundings. These surroundings are imagined to be so large that they can be considered as an unlimited heat reservoir at temperature $T_R$ and pressure $P_R$, so that no matter how much heat is transferred to (or from) the sub-system, the temperature of the surroundings will remain $T_R$; and no matter how much the volume of the sub-system expands (or contracts), the pressure of the surroundings will remain $P_R$.

Whatever changes to $\mathrm{d}S$ and $\mathrm{d}S_R$ occur in the entropies of the sub-system and the surroundings individually, according to the Second Law the entropy $S_{tot}$ of the isolated total system must not decrease:

$$\mathrm{d}S_{tot} = \mathrm{d}S + \mathrm{d}S_R \geq 0.$$

According to the First Law of Thermodynamics, the change $\mathrm{d}U$ in the internal energy of the sub-system is the sum of the heat $\delta q$ added to the sub-system, less any work $\delta w$ done by the sub-system, plus any net chemical energy entering the sub-system $\sum \mu_{iR}\,\mathrm{d}N_i$, so that:

$$\mathrm{d}U = \delta q - \delta w + \sum \mu_{iR}\,\mathrm{d}N_i$$

where $\mu_{iR}$ are the chemical potentials of chemical species in the external surroundings.

Now the heat leaving the reservoir and entering the sub-system is

$$\delta q = T_R (-\mathrm{d}S_R) \leq T_R\,\mathrm{d}S$$

where we have first used the definition of entropy in classical thermodynamics (alternatively, in statistical thermodynamics, the relation between entropy change, temperature and absorbed heat can be derived); and then the Second Law inequality from above. It therefore follows that any net work $\delta w$ done by the sub-system must obey

$$\delta w \leq -\mathrm{d}U + T_R\,\mathrm{d}S + \sum \mu_{iR}\,\mathrm{d}N_i.$$

It is useful to separate the work $\delta w$ done by the subsystem into the useful work $\delta w_u$ that can be done by the sub-system, over and beyond the work $p_R\,\mathrm{d}V$ done merely by the sub-system expanding against the surrounding external pressure, giving the following relation for the useful work (exergy) that can be done:

$$\delta w_u \leq -\mathrm{d}\left(U - T_R S + p_R V - \sum \mu_{iR} N_i\right).$$

It is convenient to define the right-hand side as the exact derivative of a thermodynamic potential, called the availability or exergy $E$ of the subsystem,

$$E = U - T_R S + p_R V - \sum \mu_{iR} N_i.$$

The Second Law therefore implies that for any process which can be considered as divided simply into a subsystem, and an unlimited temperature and pressure reservoir with which it is in contact,

$$\mathrm{d}E + \delta w_u \leq 0,$$

i.e. the change in the subsystem's exergy plus the useful work done by the subsystem (or, the change in the subsystem's exergy less any work, additional to that done by the pressure reservoir, done on the system) must be less than or equal to zero. In sum, if a proper infinite-reservoir-like reference state is chosen as the system surroundings in the real world, then the Second Law predicts a decrease in $E$ for an irreversible process and no change for a reversible process;

$$\mathrm{d}S_{tot} \geq 0 \quad \text{is equivalent to} \quad \mathrm{d}E + \delta w_u \leq 0.$$

This expression together with the associated reference state permits a design engineer working at the macroscopic scale (above the thermodynamic limit) to utilize the Second Law without directly measuring or considering entropy change in a total isolated system (also, see process engineer). Those changes have already been considered by the assumption that the system under consideration can reach equilibrium with the reference state without altering the reference state. An efficiency for a process or collection of processes that compares it to the reversible ideal may also be found (see second law efficiency). This approach to the Second Law is widely utilized in engineering practice, environmental accounting, systems ecology, and other disciplines.
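A minimal sketch of the availability bookkeeping, with invented state values and the chemical-potential terms omitted for brevity: the decrease in E bounds the useful work obtainable from the process.

```python
# Availability (exergy) E = U - TR*S + pR*V, chemical terms omitted.
TR, pR = 298.15, 101325.0        # reservoir temperature (K) and pressure (Pa)

def exergy(U, S, V):
    """E for a sub-system state (U in J, S in J/K, V in m^3)."""
    return U - TR * S + pR * V

# Sub-system goes from state a to state b (made-up values):
Ea = exergy(U=5000.0, S=10.0, V=0.010)
Eb = exergy(U=4200.0, S=9.5, V=0.009)

# Second law: dE + dw_u <= 0, so the useful work obtainable is at most -Delta E.
max_useful_work = Ea - Eb
print(max_useful_work)           # ~752 J for these numbers
```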


History
The first theory of the conversion of heat into mechanical work is due to Nicolas Léonard Sadi Carnot in 1824. He was the first to realize correctly that the efficiency of this conversion depends on the difference of temperature between an engine and its environment. Recognizing the significance of James Prescott Joule's work on the conservation of energy, Rudolf Clausius was the first to formulate the second law during 1850, in this form: heat does not flow spontaneously from cold to hot bodies. While common knowledge now, this was contrary to the caloric theory of heat popular at the time, which considered heat as a fluid. From there he was able to infer the principle of Sadi Carnot and the definition of entropy (1865). Established during the 19th century, the Kelvin-Planck statement of the Second Law says, "It is impossible for any device that operates on a cycle to receive heat from a single reservoir and produce a net amount of work." This was shown to be equivalent to the statement of Clausius.
Nicolas Léonard Sadi Carnot in the traditional uniform of a student of the École Polytechnique.

The ergodic hypothesis is also important for the Boltzmann approach. It says that, over long periods of time, the time spent in some region of the phase space of microstates with the same energy is proportional to the volume of this region, i.e. that all accessible microstates are equally probable over a long period of time. Equivalently, it says that time average and average over the statistical ensemble are the same. It has been shown that not only classical systems but also quantum mechanical ones tend to maximize their entropy over time. Thus the second law follows, given initial conditions with low entropy. More precisely, it has been shown that the local von Neumann entropy is at its maximum value with a very high probability. The result is valid for a large class of isolated quantum systems (e.g. a gas in a container). While the full system is pure and therefore does not have any entropy, the entanglement between gas and container gives rise to an increase of the local entropy of the gas. This result is one of the most important achievements of quantum thermodynamics. Today, much effort in the field is attempting to understand why the initial conditions early in the universe were those of low entropy, as this is seen as the origin of the second law (see below).


Informal descriptions
The second law can be stated in various succinct ways, including:

It is impossible to produce work in the surroundings using a cyclic process connected to a single heat reservoir (Kelvin, 1851).

It is impossible to carry out a cyclic process using an engine connected to two heat reservoirs that will have as its only effect the transfer of a quantity of heat from the low-temperature reservoir to the high-temperature reservoir (Clausius, 1854).

If thermodynamic work is to be done at a finite rate, free energy must be expended (Stoner, 2000).

Mathematical descriptions
In 1856, the German physicist Rudolf Clausius stated what he called the "second fundamental theorem in the mechanical theory of heat" in the following form:

$$\int \frac{\delta Q}{T} = -N$$

where $Q$ is heat, $T$ is temperature and $N$ is the "equivalence-value" of all uncompensated transformations involved in a cyclical process. Later, in 1865, Clausius would come to define "equivalence-value" as entropy. On the heels of this definition, that same year, the most famous version of the second law was read in a presentation at the Philosophical Society of Zurich on April 24, in which, at the end of his presentation, Clausius concludes:

The entropy of the universe tends to a maximum.


This statement is the best-known phrasing of the second law. Moreover, owing to the general broadness of the terminology used here, e.g. universe, as well as the lack of specific conditions, e.g. open, closed, or isolated, to which this statement applies, many people take this simple statement to mean that the second law of thermodynamics applies virtually to every subject imaginable. This, of course, is not true; this statement is only a simplified version of a more complex description.

In terms of time variation, the mathematical statement of the second law for an isolated system undergoing an arbitrary transformation is:

$$\frac{\mathrm{d}S}{\mathrm{d}t} \geq 0$$

where $S$ is the entropy of the system and $t$ is time. The equality sign holds in the case that only reversible processes take place inside the system. If irreversible processes take place (which is the case in real systems in operation) the >-sign holds. An alternative way of formulating the second law for isolated systems is:

$$\frac{\mathrm{d}S}{\mathrm{d}t} = \dot{S}_i \quad \text{with} \quad \dot{S}_i \geq 0$$

with $\dot{S}_i$ the sum of the rate of entropy production by all processes inside the system. The advantage of this formulation is that it shows the effect of the entropy production. The rate of entropy production is a very important concept since it determines (limits) the efficiency of thermal machines. Multiplied with the ambient temperature $T_a$ it gives the so-called dissipated energy $P_{\mathrm{diss}} = T_a \dot{S}_i$.
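For the simplest non-trivial case, steady heat conduction between two reservoirs, the entropy production rate and the dissipated energy can be computed directly; the numbers below are invented.

```python
# Entropy production for a steady heat flow Qdot from a hot to a cold reservoir.
Th, Tc = 500.0, 300.0
Qdot = 1000.0                       # W, heat flow (made-up value)

S_i_dot = Qdot / Tc - Qdot / Th     # rate of entropy production, W/K
assert S_i_dot >= 0                 # second law: always non-negative

Ta = 293.15                         # ambient temperature, K
P_diss = Ta * S_i_dot               # dissipated energy per unit time, W
print(S_i_dot, P_diss)              # ~1.33 W/K, ~391 W
```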

The expression of the second law for closed systems (so, allowing heat exchange and moving boundaries, but not exchange of matter) is:

$$\frac{\mathrm{d}S}{\mathrm{d}t} = \frac{\dot{Q}}{T} + \dot{S}_i \quad \text{with} \quad \dot{S}_i \geq 0.$$

Here $\dot{Q}$ is the heat flow into the system and $T$ is the temperature at the point where the heat enters the system. If heat is supplied to the system at several places we have to take the algebraic sum of the corresponding terms.

For open systems (also allowing exchange of matter):

$$\frac{\mathrm{d}S}{\mathrm{d}t} = \frac{\dot{Q}}{T} + \dot{S} + \dot{S}_i \quad \text{with} \quad \dot{S}_i \geq 0.$$

Here $\dot{S}$ is the flow of entropy into the system associated with the flow of matter entering the system. It should not be confused with the time derivative of the entropy. If matter is supplied at several places we have to take the algebraic sum of these contributions.

Statistical mechanics gives an explanation for the second law by postulating that a material is composed of atoms and molecules which are in constant motion. A particular set of positions and velocities for each particle in the system is called a microstate of the system and because of the constant motion, the system is constantly changing its microstate. Statistical mechanics postulates that, in equilibrium, each microstate that the system might be in is equally likely to occur, and when this assumption is made, it leads directly to the conclusion that the second law must hold in a statistical sense. That is, the second law will hold on average, with a statistical variation on the order of $1/\sqrt{N}$ where $N$ is the number of particles in the system. For everyday (macroscopic) situations, the probability that the second law will be violated is practically zero. However, for systems with a small number of particles, thermodynamic parameters, including the entropy, may show significant statistical deviations from that predicted by the second law. Classical thermodynamic theory does not deal with these statistical variations.

Derivation from statistical mechanics


Due to Loschmidt's paradox, derivations of the Second Law have to make an assumption regarding the past, namely that the system is uncorrelated at some time in the past; this allows for simple probabilistic treatment. This assumption is usually thought of as a boundary condition, and thus the second law is ultimately a consequence of the initial conditions somewhere in the past, probably at the beginning of the universe (the Big Bang), though other scenarios have also been suggested. Given these assumptions, in statistical mechanics the Second Law is not a postulate; rather, it is a consequence of the fundamental postulate, also known as the equal prior probability postulate, so long as one is clear that simple probability arguments are applied only to the future, while for the past there are auxiliary sources of information which tell us that it was low entropy[citation needed]. The first part of the second law, which states that the entropy of a thermally isolated system can only increase, is a trivial consequence of the equal prior probability postulate, if we restrict the notion of the entropy to systems in thermal equilibrium. The entropy of an isolated system in thermal equilibrium containing an amount of energy $E$ is:

$$S = k_{B} \ln\left[\Omega(E)\right],$$

where $\Omega(E)$ is the number of quantum states in a small interval between $E$ and $E + \delta E$. Here $\delta E$ is a macroscopically small energy interval that is kept fixed. Strictly speaking this means that the entropy depends on the choice of $\delta E$. However, in the thermodynamic limit (i.e. in the limit of infinitely large system size), the specific entropy (entropy per unit volume or per unit mass) does not depend on $\delta E$.
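As an illustration of the definition $S = k_{B}\ln\Omega$, the following sketch (a hypothetical example added to this text) counts the microstates of a toy system of two-level sites and evaluates the entropy; the binomial count Ω is handled in log space to avoid overflow:

import math

K_B = 1.380649e-23  # Boltzmann constant in J/K

def microcanonical_entropy(n_sites, n_excited):
    """S = k_B ln(Omega) for n_excited quanta spread over n_sites two-level
    systems; Omega = C(n_sites, n_excited), computed via log-gamma."""
    log_omega = (math.lgamma(n_sites + 1)
                 - math.lgamma(n_excited + 1)
                 - math.lgamma(n_sites - n_excited + 1))
    return K_B * log_omega

# The entropy is largest at half filling, where the most microstates exist:
for n_exc in (0, 250, 500, 750, 1000):
    print(f"{n_exc:4d} excited -> S = {microcanonical_entropy(1000, n_exc):.3e} J/K")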

Suppose we have an isolated system whose macroscopic state is specified by a number of variables. These macroscopic variables can, e.g., refer to the total volume, the positions of pistons in the system, etc. Then $\Omega$ will depend on the values of these variables. If a variable is not fixed (e.g. we do not clamp a piston in a certain position), then, because all the accessible states are equally likely in equilibrium, the free variable in equilibrium will be such that $\Omega$ is maximized, as that is the most probable situation in equilibrium. If the variable was initially fixed to some value then upon release, when the new equilibrium has been reached, the fact that the variable will adjust itself so that $\Omega$ is maximized implies that the entropy will have increased, or it will have stayed the same (if the value at which the variable was fixed happened to be the equilibrium value). The entropy of a system that is not in equilibrium can be defined as:

$$S = -k_{B} \sum_{j} P_{j} \ln P_{j},$$

where the $P_{j}$ are the probabilities for the system to be found in the states labeled by the subscript j. In thermal equilibrium, the probabilities for states inside the energy interval $\delta E$ are all equal to $1/\Omega$, and in that case the general definition coincides with the previous definition of S that applies to the case of thermal equilibrium.

Suppose we start from an equilibrium situation and we suddenly remove a constraint on a variable. Then right after we do this, there are a number $\Omega$ of accessible microstates, but equilibrium has not yet been reached, so the actual probabilities of the system being in some accessible state are not yet equal to the prior probability of $1/\Omega$. We have already seen that in the final equilibrium state, the entropy will have increased or have stayed the same relative to the previous equilibrium state. Boltzmann's H-theorem, however, proves that the entropy will increase continuously as a function of time during the intermediate out-of-equilibrium state.
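The general definition can be checked numerically. This small sketch (an added illustration, not from the original article) evaluates $S = -k_{B}\sum_{j} P_{j}\ln P_{j}$ and confirms that a uniform distribution over Ω accessible states gives the equilibrium value $k_{B}\ln\Omega$, while a sharply constrained distribution gives less:

import numpy as np

K_B = 1.380649e-23  # J/K

def gibbs_entropy(probabilities):
    """S = -k_B * sum_j P_j ln P_j (zero-probability states contribute 0)."""
    p = np.asarray(probabilities, dtype=float)
    p = p[p > 0]
    return -K_B * np.sum(p * np.log(p))

omega = 1000
uniform = np.full(omega, 1.0 / omega)  # equilibrium: P_j = 1/Omega
peaked = np.zeros(omega)
peaked[0] = 1.0                        # fully constrained: one state certain

print(gibbs_entropy(uniform), K_B * np.log(omega))  # identical values
print(gibbs_entropy(peaked))                        # zero: entropy can only grow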

Derivation of the entropy change for reversible processes


The second part of the Second Law states that the entropy change of a system undergoing a reversible process is given by:

$$dS = \frac{\delta Q}{T},$$

where the temperature is defined as:

$$\frac{1}{k_{B} T} \equiv \beta \equiv \frac{d \ln\left[\Omega(E)\right]}{dE}.$$

Suppose that the system has some external parameter, x, that can be changed. In general, the energy eigenstates of the system will depend on x. According to the adiabatic theorem of quantum mechanics, in the limit of an infinitely slow change of the system's Hamiltonian, the system will stay in the same energy eigenstate and thus change its energy according to the change in energy of the energy eigenstate it is in. The generalized force, X, corresponding to the external variable x is defined such that $X\,dx$ is the work performed by the system if x is increased by an amount dx. E.g., if x is the volume, then X is the pressure. The generalized force for a system known to be in energy eigenstate $E_{r}$ is given by:

$$X = -\frac{dE_{r}}{dx}.$$

Since the system can be in any energy eigenstate within an interval of $\delta E$, we define the generalized force for the system as the expectation value of the above expression:

$$X = -\left\langle \frac{dE_{r}}{dx} \right\rangle.$$

To evaluate the average, we partition the $\Omega(E)$ energy eigenstates by counting how many of them have a value for $\frac{dE_{r}}{dx}$ within a range between $Y$ and $Y + \delta Y$. Calling this number $\Omega_{Y}(E)$, we have:

$$\Omega(E) = \sum_{Y} \Omega_{Y}(E).$$

The average defining the generalized force can now be written:

$$X = -\frac{1}{\Omega(E)} \sum_{Y} Y\, \Omega_{Y}(E).$$

We can relate this to the derivative of the entropy with respect to x at constant energy E as follows. Suppose we change x to x + dx. Then $\Omega(E)$ will change because the energy eigenstates depend on x, causing energy eigenstates to move into or out of the range between $E$ and $E + \delta E$. Let's focus again on the energy eigenstates for which $\frac{dE_{r}}{dx}$ lies within the range between $Y$ and $Y + \delta Y$. Since these energy eigenstates increase in energy by Y dx, all such energy eigenstates that are in the interval ranging from E − Y dx to E move from below E to above E. There are

$$N_{Y}(E) = \frac{\Omega_{Y}(E)}{\delta E}\, Y\, dx$$

such energy eigenstates. If $Y\,dx \leq \delta E$, all these energy eigenstates will move into the range between $E$ and $E + \delta E$ and contribute to an increase in $\Omega$. The number of energy eigenstates that move from below $E + \delta E$ to above $E + \delta E$ is, of course, given by $N_{Y}(E + \delta E)$. The difference

$$N_{Y}(E) - N_{Y}(E + \delta E)$$

is thus the net contribution to the increase in $\Omega$. Note that if Y dx is larger than $\delta E$ there will be energy eigenstates that move from below E to above $E + \delta E$. They are counted in both $N_{Y}(E)$ and $N_{Y}(E + \delta E)$, therefore the above expression is also valid in that case. Expressing the above expression as a derivative with respect to E and summing over Y yields the expression:

$$\left(\frac{\partial \Omega}{\partial x}\right)_{E} = -\sum_{Y} Y \left(\frac{\partial \Omega_{Y}}{\partial E}\right)_{x} = \left(\frac{\partial (\Omega X)}{\partial E}\right)_{x}.$$

The logarithmic derivative of $\Omega$ with respect to x is thus given by:

$$\left(\frac{\partial \ln \Omega}{\partial x}\right)_{E} = \beta X + \left(\frac{\partial X}{\partial E}\right)_{x}.$$

The first term is intensive, i.e. it does not scale with system size. In contrast, the last term scales as the inverse system size and thus vanishes in the thermodynamic limit. We have thus found that:

$$\left(\frac{\partial S}{\partial x}\right)_{E} = \frac{X}{T}.$$

Combining this with

$$\left(\frac{\partial S}{\partial E}\right)_{x} = \frac{1}{T}$$

gives:

$$dS = \left(\frac{\partial S}{\partial E}\right)_{x} dE + \left(\frac{\partial S}{\partial x}\right)_{E} dx = \frac{dE}{T} + \frac{X}{T}\, dx = \frac{\delta Q}{T}.$$


Derivation for systems described by the canonical ensemble


If a system is in thermal contact with a heat bath at some temperature T then, in equilibrium, the probability distribution over the energy eigenvalues is given by the canonical ensemble:

$$P_{j} = \frac{\exp\left(-\frac{E_{j}}{k_{B} T}\right)}{Z}.$$

Here Z is a factor that normalizes the sum of all the probabilities to 1; this function is known as the partition function. We now consider an infinitesimal reversible change in the temperature and in the external parameters on which the energy levels depend. It follows from the general formula for the entropy:

$$S = -k_{B} \sum_{j} P_{j} \ln P_{j}$$

that

$$dS = -k_{B} \sum_{j} \ln\left(P_{j}\right) dP_{j}.$$

Inserting the formula for $P_{j}$ for the canonical ensemble in here gives (using $dE = \sum_{j} E_{j}\, dP_{j} + \sum_{j} P_{j}\, dE_{j}$ and $\delta W = -\sum_{j} P_{j}\, dE_{j}$):

$$dS = \frac{1}{T} \sum_{j} E_{j}\, dP_{j} = \frac{dE + \delta W}{T} = \frac{\delta Q}{T}.$$
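A numerical illustration of the canonical ensemble (added here as a sketch; the two-level spectrum and its energy gap are arbitrary choices, not from the original article): it builds $P_{j} = \exp(-E_{j}/k_{B}T)/Z$ and shows the entropy rising from 0 toward $k_{B}\ln 2$ as T grows:

import numpy as np

K_B = 1.380649e-23  # J/K

def canonical_probabilities(energies, temperature):
    """P_j = exp(-E_j / k_B T) / Z; energies shifted for numerical stability."""
    beta = 1.0 / (K_B * temperature)
    weights = np.exp(-beta * (energies - energies.min()))
    return weights / weights.sum()  # the denominator plays the role of Z

def gibbs_entropy(p):
    return -K_B * np.sum(p * np.log(p))

energies = np.array([0.0, 1.0e-21])  # two-level system, gap chosen arbitrarily
for T in (1.0, 50.0, 1.0e6):
    p = canonical_probabilities(energies, T)
    print(f"T = {T:>9.1f} K -> S / k_B = {gibbs_entropy(p) / K_B:.4f}")
# S/k_B climbs from ~0 toward ln 2 ~ 0.6931 as the two levels equalize.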

General derivation from unitarity of quantum mechanics


The time development operator in quantum theory is unitary, because the Hamiltonian is hermitian. Consequently, the transition probability matrix is doubly stochastic, which implies the Second Law of Thermodynamics.[21][22] This derivation is quite general, based on the Shannon entropy, and does not require any assumptions beyond unitarity, which is universally accepted. It is a consequence of the irreversibility or singular nature of the general transition matrix.
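The key fact used in this derivation, that a doubly stochastic map cannot decrease the Shannon entropy, can be demonstrated directly. The sketch below (an added illustration, not from the cited works) builds a doubly stochastic matrix as an average of permutation matrices (Birkhoff's theorem) and shows that repeatedly applying it never decreases the Shannon entropy of a probability vector:

import numpy as np

rng = np.random.default_rng(1)

def shannon_entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def random_doubly_stochastic(n, n_mix=50):
    """Average of random permutation matrices; by Birkhoff's theorem the
    result is doubly stochastic (rows and columns each sum to 1)."""
    m = np.zeros((n, n))
    for _ in range(n_mix):
        m += np.eye(n)[rng.permutation(n)]
    return m / n_mix

p = rng.dirichlet(np.ones(8))  # an arbitrary initial probability vector
m = random_doubly_stochastic(8)
for step in range(6):
    print(f"step {step}: H = {shannon_entropy(p):.4f}")
    p = m @ p                  # H never decreases under a doubly stochastic map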

Non-equilibrium states
It is only by convention, for the purposes of thermodynamic analysis, that any arbitrary occasion of space-time is said to be in thermodynamic equilibrium. In general, an occasion of space-time found in nature is not in thermodynamic equilibrium, read in the most stringent terms. In looser terms, nothing in the entire universe is or has ever been truly in exact thermodynamic equilibrium.[23][24] If it is assumed, for the purposes of physical analysis, that one is dealing with a system in thermodynamic equilibrium, then statistically it is possible for that system to achieve moments of non-equilibrium. In some statistically unlikely events, hot particles "steal" the energy of cold particles, enough that the cold side gets colder and the hot side gets hotter, for a very brief time. The physics involved in such events is beyond the scope of classical equilibrium thermodynamics, and is the topic of the fluctuation theorem (not to be confused with the fluctuation-dissipation theorem). This was first proved by Bochov and Kuzovlev,[25] and later by Evans and Searles.[26] It gives a numerical estimate of the probability that a system away from equilibrium will have a certain change in entropy over a certain amount of time. The theorem is proved with the exact time reversible dynamical equations of motion but assumes the Axiom of Causality, which is equivalent to assuming uncorrelated initial conditions (namely, uncorrelated past). Such events have been observed at a small enough scale where the likelihood of such a thing happening is significant. Quantitative predictions of this theorem have been confirmed in laboratory experiments by use of optical tweezers apparatus.


Arrow of time
The second law of thermodynamics is a physical law that is not symmetric to reversal of the time direction. The second law has been proposed to supply an explanation of the difference between moving forward and backwards in time, such as why the cause precedes the effect (the causal arrow of time).[27]

Controversies
Maxwell's demon
James Clerk Maxwell imagined one container divided into two parts, A and B. Both parts are filled with the same gas at equal temperatures and placed next to each other. Observing the molecules on both sides, an imaginary demon guards a trapdoor between the two parts. When a faster-than-average molecule from A flies towards the trapdoor, the demon opens it, and the molecule will fly from A to B. The average speed of the molecules in B will have increased while in A they will have slowed down on average. Since average molecular speed corresponds to temperature, the temperature decreases in A and increases in B, contrary to the second law of thermodynamics. One of the most famous responses to this question was suggested in 1929 by Leó Szilárd and later by Léon Brillouin. Szilárd pointed out that a real-life Maxwell's demon would need to have some means of measuring molecular speed, and that the act of acquiring information would require an expenditure of energy. But later exceptions were found.[citation needed]

James Clerk Maxwell

Loschmidt's paradox
Loschmidt's paradox, also known as the reversibility paradox, is the objection that it should not be possible to deduce an irreversible process from time-symmetric dynamics. This puts the time-reversal symmetry of nearly all known low-level fundamental physical processes at odds with any attempt to infer from them the second law of thermodynamics, which describes the behavior of macroscopic systems. Both of these are well-accepted principles in physics, with sound observational and theoretical support, yet they seem to be in conflict; hence the paradox. Due to this paradox, derivations of the Second Law have to make an assumption regarding the past, namely that the system is uncorrelated at some time in the past or, equivalently, that the entropy in the past was lower than in the future. This assumption is usually thought of as a boundary condition, and thus the second law is ultimately derived from the initial conditions of the Big Bang.


Gibbs paradox
In statistical mechanics, a simple derivation of the entropy of an ideal gas based on the Boltzmann distribution yields an expression for the entropy which is not extensive (is not proportional to the amount of gas in question). This leads to an apparent paradox known as the Gibbs paradox, allowing, for instance, the entropy of closed systems to decrease, violating the second law of thermodynamics. The paradox is averted by recognizing that the identity of the particles does not influence the entropy. In the conventional explanation, this is associated with an indistinguishability of the particles associated with quantum mechanics. However, a growing number of papers now take the perspective that it is merely the definition of entropy that is changed to ignore particle permutation (and thereby avert the paradox). The resulting equation for the entropy (of a classical ideal gas) is extensive, and is known as the Sackur-Tetrode equation.

Poincaré recurrence theorem


The Poincaré recurrence theorem states that certain systems will, after a sufficiently long time, return to a state very close to the initial state. The Poincaré recurrence time is the length of time elapsed until the recurrence, which is of the order of $\exp(S/k_{B})$.[28] The result applies to physical systems in which energy is conserved. The recurrence theorem apparently contradicts the second law of thermodynamics, which says that large dynamical systems evolve irreversibly towards the state with higher entropy, so that if one starts with a low-entropy state, the system will never return to it. There are many possible ways to resolve this paradox, but none of them is universally accepted[citation needed]. The most reasonable argument is that for typical thermodynamical systems the recurrence time is so large (many, many times longer than the lifetime of the universe) that, for all practical purposes, one cannot observe the recurrence.

Future of the universe


It has been suggested that, since the entropy in the universe is continuously rising, the amount of free energy diminishes and the universe will arrive at a heat death, in which no work can be done and life cannot exist. An expanding universe, however, is not in thermodynamic equilibrium, and the simple considerations leading to the heat-death scenario are not valid. Taking the current view of the universe into account, it has been proposed that the universe will probably exhibit a future in which all known energy sources (such as stars) will decay. Nevertheless, it may be the case that work at smaller and smaller energy scales will still be possible, so that "interesting things can continue to happen at ... increasingly low levels of energy".[29]

Quotations
The law that entropy always increases holds, I think, the supreme position among the laws of Nature. If someone points out to you that your pet theory of the universe is in disagreement with Maxwell's equations – then so much the worse for Maxwell's equations. If it is found to be contradicted by observation – well, these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.
– Sir Arthur Stanley Eddington, The Nature of the Physical World (1927)

The tendency for entropy to increase in isolated systems is expressed in the second law of thermodynamics – perhaps the most pessimistic and amoral formulation in all human thought.
– Gregory Hill and Kerry Thornley, Principia Discordia (1965)

There have been nearly as many formulations of the second law as there have been discussions of it.
– P.W. Bridgman, philosopher and physicist (1941)

Clausius is the author of the sybillic utterance, "The energy of the universe is constant; the entropy of the universe tends to a maximum." The objectives of continuum thermomechanics stop far short of explaining the "universe", but within that theory we may easily derive an explicit statement in some ways reminiscent of Clausius, but referring only to a modest object: an isolated body of finite size.
– Truesdell, C., Muncaster, R.G. (1980). Fundamentals of Maxwell's Kinetic Theory of a Simple Monatomic Gas, Treated as a Branch of Rational Mechanics, Academic Press, New York, ISBN 0-12-701350-4, p. 17.

The [historically] early appearance of life is certainly an argument in favour of the idea that life is the result of spontaneous self-organization that occurs whenever conditions for it permit. However, we must admit that we remain far from any quantitative theory.
– Prigogine, I., Stengers, I. (1984). Order Out of Chaos. Man's New Dialogue with Nature, Bantam Books, Toronto, ISBN 0-553-34082-4, p. 176.


References
[1] Bailyn, M. (1994), p. 120.
[2] Zemansky, M.W. (1968), pp. 207–209.
[3] Quinn, T.J. (1983), p. 8.
[4] Lieb, E.H., Yngvason, J. (1999).
[5] Carnot, S. (1824/1986).
[6] Truesdell, C. (1980), Chapter 5.
[7] Adkins, C.J. (1968/1983), pp. 56–58.
[8] Münster, A. (1970), p. 11.
[9] Kondepudi, D., Prigogine, I. (1998), pp. 67–75.
[10] Lebon, G., Jou, D., Casas-Vázquez, J. (2008), p. 10.
[11] Eu, B.C. (2002), pp. 32–35.
[12] Clausius, R. (1850).
[13] Thomson, W. (1851).
[14] Truesdell, C., Muncaster, R.G. (1980). Fundamentals of Maxwell's Kinetic Theory of a Simple Monatomic Gas, Treated as a Branch of Rational Mechanics, Academic Press, New York, ISBN 0-12-701350-4, p. 15.
[15] Planck, M. (1926), p. 457, Wikipedia editor's translation.
[16] Carathéodory, C. (1909).
[17] Buchdahl, H.A. (1966), p. 68.
[18] Planck, M. (1926).
[19] Buchdahl, H.A. (1966), p. 69.
[20] Clausius theorem (http://scienceworld.wolfram.com/physics/ClausiusTheorem.html) at Wolfram Research.
[21] Hugh Everett, "Theory of the Universal Wavefunction" (http://www.pbs.org/wgbh/nova/manyworlds/pdf/dissertation.pdf), Thesis, Princeton University (1956, 1973), Appendix I, pp. 121 ff., in particular equation (4.4) at the top of page 127, and the statement on page 29 that "it is known that the [Shannon] entropy [...] is a monotone increasing function of the time."
[22] Bryce Seligman DeWitt, R. Neill Graham, eds, The Many-Worlds Interpretation of Quantum Mechanics, Princeton Series in Physics, Princeton University Press (1973), ISBN 0-691-08131-X. Contains Everett's thesis: The Theory of the Universal Wavefunction, pp. 3–140.
[23] Grandy, W.T., Jr (2008), p. 151.
[24] Callen, H.B. (1960/1985), p. 15.
[25] Bochov, G.N., Kuzovlev, Y.E. (1981). "Nonlinear fluctuation-dissipation relations and stochastic models in nonequilibrium thermodynamics: I. Generalized fluctuation-dissipation theorem", Physica, 106A: 443–479. See also http://arxiv.org/pdf/1106.0589.pdf.
[26] Attard, P. (2012). Non-Equilibrium Thermodynamics and Statistical Mechanics. Foundations and Applications, Oxford University Press, Oxford UK, ISBN 978-0-19-966276-0, p. 288.
[27] chapter 6
[28] L. Dyson, J. Lindesay and L. Susskind, "Is There Really a de Sitter/CFT Duality?", JHEP 0208, 45 (2002).
[29] F.C. Adams and G. Laughlin, "A Dying Universe: The Long Term Fate and Evolution of Astrophysical Objects", Rev. Mod. Phys. 69: 337–372, 1997. astro-ph/9701131v1 (http://arxiv.org/pdf/astro-ph/9701131v1.pdf).


Bibliography of citations
Adkins, C.J. (1968/1983). Equilibrium Thermodynamics, (1st edition 1968), third edition 1983, Cambridge University Press, Cambridge UK, ISBN 0-521-25445-0.
Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics, New York, ISBN 0-88318-797-3.
Buchdahl, H.A. (1966). The Concepts of Classical Thermodynamics, Cambridge University Press, Cambridge UK.
Callen, H.B. (1960/1985). Thermodynamics and an Introduction to Thermostatistics, (1st edition 1960) 2nd edition 1985, Wiley, New York, ISBN 0-471-86256-8.
Carathéodory, C. (1909). "Untersuchungen über die Grundlagen der Thermodynamik" (http://gdz.sub.uni-goettingen.de/index.php?id=11&PPN=PPN235181684_0067&DMDID=DMDLOG_0033&L=1). Mathematische Annalen 67: 355–386. "Axiom II: In jeder beliebigen Umgebung eines willkürlich vorgeschriebenen Anfangszustandes gibt es Zustände, die durch adiabatische Zustandsänderungen nicht beliebig approximiert werden können. (p. 363)". A translation may be found here (http://neo-classical-physics.info/uploads/3/0/6/5/3065888/caratheodory_-_thermodynamics.pdf). Also a mostly reliable translation is to be found at Kestin, J. (1976). The Second Law of Thermodynamics, Dowden, Hutchinson & Ross, Stroudsburg PA.
Carnot, S. (1824/1986). Reflections on the motive power of fire, Manchester University Press, Manchester UK, ISBN 0719017416. Also here (http://www.archive.org/stream/reflectionsonmot00carnrich#page/n7/mode/2up).
Clausius, R. (1850). "Ueber die bewegende Kraft der Wärme und die Gesetze, welche sich daraus für die Wärmelehre selbst ableiten lassen" (http://gallica.bnf.fr/ark:/12148/bpt6k15164w/f518.image). Annalen der Physik 79: 368–397, 500–524. Translated into English: Clausius, R. (July 1851). "On the Moving Force of Heat, and the Laws regarding the Nature of Heat itself which are deducible therefrom" (http://archive.org/details/londonedinburghd02lond). London, Edinburgh and Dublin Philosophical Magazine and Journal of Science, 4th series, 2 (VIII): 1–21; 102–119.
Clausius, R. (1867). The Mechanical Theory of Heat with its Applications to the Steam Engine and to Physical Properties of Bodies (http://books.google.com/books?id=8LIEAAAAYAAJ). London: John van Voorst.
Eu, B.C. (2002). Generalized Thermodynamics. The Thermodynamics of Irreversible Processes and Generalized Hydrodynamics, Kluwer Academic Publishers, Dordrecht, ISBN 1402007884.
Grandy, W.T., Jr (2008). Entropy and the Time Evolution of Macroscopic Systems, Oxford University Press, ISBN 978-0-19-954617-6.
Kondepudi, D., Prigogine, I. (1998). Modern Thermodynamics: From Heat Engines to Dissipative Structures, John Wiley & Sons, Chichester, ISBN 0471973939.
Lebon, G., Jou, D., Casas-Vázquez, J. (2008). Understanding Non-equilibrium Thermodynamics: Foundations, Applications, Frontiers, Springer-Verlag, Berlin, e-ISBN 978-3-540-74252-4.
Lieb, E.H.; Yngvason, J. (1999). "The Physics and Mathematics of the Second Law of Thermodynamics". Physics Reports 310: 1–96. arXiv: cond-mat/9708200 (http://arxiv.org/abs/cond-mat/9708200). doi: 10.1016/S0370-1573(98)00082-9.
Münster, A. (1970). Classical Thermodynamics, translated by E.S. Halberstadt, Wiley–Interscience, London, ISBN 0-471-62430-6.
Planck, M. (1926). "Über die Begründung des zweiten Hauptsatzes der Thermodynamik", S.B. Preuß. Akad. Wiss. phys. math. Kl.: 453–463.
Quinn, T.J. (1983). Temperature, Academic Press, London, ISBN 0-12-569680-9.
Thomson, W. (March 1851). "On the Dynamical Theory of Heat, with numerical results deduced from Mr Joule's equivalent of a Thermal Unit, and M. Regnault's Observations on Steam". Transactions of the Royal Society of Edinburgh XX (part II): 261–268; 289–298. Also published in Philos. Mag., series 4, IV (22): 8–21 (December 1852) (http://archive.org/details/londonedinburghp04maga).
Truesdell, C. (1980). The Tragicomical History of Thermodynamics 1822–1854, Springer, New York, ISBN 0387904034.
Zemansky, M.W. (1968). Heat and Thermodynamics. An Intermediate Textbook, fifth edition, McGraw-Hill Book Company, New York.


Further reading
Goldstein, Martin, and Inge F., 1993. The Refrigerator and the Universe. Harvard Univ. Press. Chpts. 4–9 contain an introduction to the Second Law, one a bit less technical than this entry. ISBN 978-0-674-75324-2
Leff, Harvey S., and Rex, Andrew F. (eds.) 2003. Maxwell's Demon 2: Entropy, classical and quantum information, computing. Bristol UK; Philadelphia PA: Institute of Physics. ISBN 978-0-585-49237-7
Halliwell, J.J. (1994). Physical Origins of Time Asymmetry. Cambridge. ISBN 0-521-56837-4. (technical)
Carnot, Sadi; Thurston, Robert Henry (editor and translator) (1890). Reflections on the Motive Power of Heat and on Machines Fitted to Develop That Power. New York: J. Wiley & Sons. (full text of 1897 ed.: http://books.google.com/books?id=tgdJAAAAIAAJ) (html: http://www.history.rochester.edu/steam/carnot/1943/)
Stephen Jay Kline (1999). The Low-Down on Entropy and Interpretive Thermodynamics, La Cañada, CA: DCW Industries. ISBN 1928729010.
Kostic, M. (2011). Revisiting The Second Law of Energy Degradation and Entropy Generation: From Sadi Carnot's Ingenious Reasoning to Holistic Generalization. AIP Conf. Proc. 1411, pp. 327–350; doi: http://dx.doi.org/10.1063/1.3665247. American Institute of Physics, 2011. ISBN 978-0-7354-0985-9. Abstract at http://adsabs.harvard.edu/abs/2011AIPC.1411..327K.

External links
Stanford Encyclopedia of Philosophy: " Philosophy of Statistical Mechanics (http://plato.stanford.edu/entries/ statphys-statmech/)" by Lawrence Sklar. Second law of thermodynamics (http://web.mit.edu/16.unified/www/FALL/thermodynamics/notes/node30. html) in the MIT Course Unified Thermodynamics and Propulsion (http://web.mit.edu/16.unified/www/ FALL/thermodynamics/notes/notes.html) from Prof. Z. S. Spakovszky E.T. Jaynes, 1988, " The evolution of Carnot's principle, (http://bayes.wustl.edu/etj/articles/ccarnot.pdf)" in G. J. Erickson and C. R. Smith (eds.)Maximum-Entropy and Bayesian Methods in Science and Engineering, Vol 1, p.267. Caratheodory, C., "Examination of the foundations of thermodynamics," trans. by D. H. Delphenich (http:// neo-classical-physics.info/uploads/3/0/6/5/3065888/caratheodory_-_thermodynamics.pdf)


Third
The third law of thermodynamics is sometimes stated as follows: The entropy of a perfect crystal at absolute zero is exactly equal to zero. At absolute zero the system must be in a state with the minimum possible energy, and this statement of the third law holds true if the perfect crystal has only one minimum-energy state. Entropy is related to the number of possible microstates, and with only one microstate available at zero kelvin the entropy is exactly zero.[1] The Nernst–Simon statement of the third law is as follows: The entropy change associated with any condensed system undergoing a reversible isothermal process approaches zero as the temperature approaches 0 K, where condensed system refers to liquids and solids. Another simple formulation of the third law is: It is impossible for any process, no matter how idealized, to reduce the entropy of a system to its zero-point value in a finite number of operations. More generally, the entropy of a system approaches a constant value as its temperature approaches absolute zero; this constant value (not necessarily zero) is called the residual entropy of the system.[2] Physically, the law implies that it is impossible for any procedure to bring a system to the absolute zero of temperature in a finite number of steps.[3]

History
The third law was developed by the chemist Walther Nernst during the years 1906-1912, and is therefore often referred to as Nernst's theorem or Nernst's postulate. The third law of thermodynamics states that the entropy of a system at absolute zero is a well-defined constant. This is because a system at zero temperature exists in its ground state, so that its entropy is determined only by the degeneracy of the ground state. In 1912 Nernst stated the law thus: "It is impossible for any procedure to lead to the isotherm T = 0 in a finite number of steps."[4] An alternative version of the third law of thermodynamics as stated by Gilbert N. Lewis and Merle Randall in 1923:

If the entropy of each element in some (perfect) crystalline state be taken as zero at the absolute zero of temperature, every substance has a finite positive entropy; but at the absolute zero of temperature the entropy may become zero, and does so become in the case of perfect crystalline substances.

This version states that not only does ΔS reach zero at 0 K, but S itself also reaches zero, as long as the crystal has a ground state with only one configuration. Some crystals form defects which cause a residual entropy. This residual entropy disappears when the kinetic barriers to transitioning to one ground state are overcome. With the development of statistical mechanics, the third law of thermodynamics (like the other laws) changed from a fundamental law (justified by experiments) to a derived law (derived from even more basic laws). The basic law from which it is primarily derived is the statistical-mechanics definition of entropy for a large system:

$$S - S_{0} = k_{B} \ln \Omega,$$

where $S$ is entropy, $k_{B}$ is the Boltzmann constant, and $\Omega$ is the number of microstates consistent with the macroscopic configuration. The counting of states is from the reference state of absolute zero, which corresponds to the entropy $S_{0}$.

Explanation
In simple terms, the third law states that the entropy of a perfect crystal approaches zero as the absolute temperature approaches zero. This law provides an absolute reference point for the determination of entropy. The entropy determined relative to this point is the absolute entropy. Mathematically, the absolute entropy of any system at zero temperature is the natural log of the number of ground states times Boltzmann's constant $k_{B}$. The entropy of a perfect crystal lattice as defined by Nernst's theorem is zero provided that its ground state is unique, because ln(1) = 0. An example of a system which does not have a unique ground state is one whose net spin is a half-integer, for which time-reversal symmetry gives two degenerate ground states. For such systems, the entropy at zero temperature is at least $k_{B}\ln 2$ (which is negligible on a macroscopic scale). Some crystalline systems exhibit geometrical frustration, where the structure of the crystal lattice prevents the emergence of a unique ground state. Ground-state helium (unless under pressure) remains liquid. In addition, glasses and solid solutions retain large entropy at 0 K, because they are large collections of nearly degenerate states in which they become trapped out of equilibrium. Another example of a solid with many nearly-degenerate ground states, trapped out of equilibrium, is ice Ih, which has "proton disorder".

For the entropy at absolute zero to be zero, the magnetic moments of a perfectly ordered crystal must themselves be perfectly ordered; indeed, from an entropic perspective, this can be considered to be part of the definition of "perfect crystal". Only ferromagnetic, antiferromagnetic, and diamagnetic materials can satisfy this condition. Materials that remain paramagnetic at 0K, by contrast, may have many nearly-degenerate ground states (for example, in a spin glass), or may retain dynamic disorder (a quantum spin liquid).[citation needed]

Mathematical formulation
Consider a closed system in internal equilibrium. As the system is in equilibrium there are no irreversible processes, so the entropy production is zero. During the heat supply, temperature gradients are generated in the material, but the associated entropy production can be kept low enough if the heat is supplied slowly. The increase in entropy due to the added heat δQ is then given by the second part of the second law of thermodynamics, which states that the entropy change of a system undergoing a reversible process is given by

$$dS = \frac{\delta Q}{T}. \qquad (1)$$

The temperature rise dT due to the heat δQ is determined by the heat capacity C(T,X) according to

$$\delta Q = C(T,X)\, dT. \qquad (2)$$

The parameter X is a symbolic notation for all parameters (such as pressure, magnetic field, liquid/solid fraction, etc.) which are kept constant during the heat supply. E.g. if the volume is constant we get the heat capacity at constant volume $C_{V}$. In the case of a phase transition from liquid to solid, or from gas to liquid, the parameter X can be the fraction of one of the two components. Combining relations (1) and (2) gives

$$dS = \frac{C(T,X)}{T}\, dT. \qquad (3)$$

Integration of Eq. (3) from a reference temperature $T_{0}$ to an arbitrary temperature T gives the entropy at temperature T:

$$S(T,X) = S(T_{0},X) + \int_{T_{0}}^{T} \frac{C(T',X)}{T'}\, dT'. \qquad (4)$$

We now come to the mathematical formulation of the third law. There are three steps:

1: In the limit T0→0 the integral in Eq. (4) is finite, so that we may take T0 = 0 and write

$$S(T,X) = S(0,X) + \int_{0}^{T} \frac{C(T',X)}{T'}\, dT'. \qquad (5)$$

2: The value of S(0,X) is independent of X. In mathematical form:

$$S(0,X) = S_{0}. \qquad (6)$$

So Eq. (5) can be further simplified to

$$S(T,X) = S_{0} + \int_{0}^{T} \frac{C(T',X)}{T'}\, dT'. \qquad (7)$$

Equation (6) can also be formulated as

$$\lim_{T \to 0} \left(\frac{\partial S(T,X)}{\partial X}\right)_{T} = 0. \qquad (8)$$

In words: at absolute zero all isothermal processes are isentropic. Eq. (8) is the mathematical formulation of the third law.

3: As one is free to choose the zero of the entropy, it is convenient to take

$$S_{0} = 0, \qquad (9)$$

so that Eq. (7) reduces to the final form

$$S(T,X) = \int_{0}^{T} \frac{C(T',X)}{T'}\, dT'. \qquad (10)$$

The physical meaning of Eq.(9) is deeper than just a convenient selection of the zero of the entropy. It is due to the perfect order at zero kelvin as explained before.
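Equation (10) lends itself to a direct numerical check. The sketch below (an added illustration; the Debye coefficient is a made-up value) integrates C(T′)/T′ for a low-temperature heat capacity C = aT³ and reproduces the analytic result S = aT³/3:

import numpy as np

def entropy_from_heat_capacity(c_of_t, temperature, n_grid=100_000):
    """Numerically evaluate Eq. (10): S(T) = integral_0^T C(T')/T' dT'."""
    t = np.linspace(1e-9, temperature, n_grid)  # skip the exact T' = 0 endpoint
    return np.trapz(c_of_t(t) / t, t)

a = 1.0e-3                    # J K^-4; an illustrative (made-up) coefficient
debye_c = lambda t: a * t**3  # C = a T^3 holds well below the Debye temperature

for T in (1.0, 5.0, 10.0):
    numeric = entropy_from_heat_capacity(debye_c, T)
    analytic = a * T**3 / 3.0  # closed form of Eq. (10) for C = a T^3
    print(f"T = {T:5.1f} K: numeric S = {numeric:.6e}, analytic S = {analytic:.6e}")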


Consequences of the third law


Can absolute zero be obtained?
The third law is equivalent to the statement that "It is impossible by any procedure, no matter how idealized, to reduce the temperature of any system to zero temperature in a finite number of finite operations".[5]

Fig. 1. Left: absolute zero can be reached in a finite number of steps if S(0,X1) ≠ S(0,X2). Right: an infinite number of steps is needed, since S(0,X1) = S(0,X2).

The reason that T=0 cannot be reached according to the third law is explained as follows: Suppose that the temperature of a substance can be reduced in an isentropic process by changing the parameter X from X2 to X1. One can think of a multistage nuclear demagnetization setup where a magnetic field is switched on and off in a controlled way. [6] If there were an entropy difference at absolute zero, T=0 could be reached in a finite number of steps. However, at T=0 there is no entropy difference so an infinite number of steps would be needed. The process is illustrated in Fig.1.
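This argument can be mimicked numerically. In the sketch below (an added illustration with made-up entropy curves of the form S(T,X) = s0(X) + a(X)·T), alternating isothermal and isentropic steps reach T = 0 in finitely many steps only when S(0,X1) ≠ S(0,X2), i.e. only when the third law is violated:

def cool(s0_1, a1, s0_2, a2, t_start=1.0, max_steps=12):
    """Alternate an isothermal X1 -> X2 step with an isentropic X2 -> X1 step
    on the model curves S(T, X) = s0(X) + a(X) * T (all numbers made up)."""
    t = t_start
    for step in range(1, max_steps + 1):
        s = s0_2 + a2 * t    # isothermal step: entropy after switching to X2
        t = (s - s0_1) / a1  # isentropic step: solve s0_1 + a1 * t_new = s
        if t <= 0:
            return step, 0.0  # absolute zero reached in finitely many steps
    return max_steps, t

# Third law obeyed, S(0,X1) == S(0,X2): T halves each cycle but never hits zero.
print(cool(s0_1=0.0, a1=2.0, s0_2=0.0, a2=1.0))
# Third law violated, S(0,X1) != S(0,X2): T = 0 after a finite number of steps.
print(cool(s0_1=0.0, a1=2.0, s0_2=-0.5, a2=1.0))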

Specific heat
Suppose that the heat capacity of a sample in the low-temperature region can be approximated by $C(T,X) = C_{0} T^{\alpha}$; then

$$S(T,X) = S(0,X) + C_{0} \int_{0}^{T} T'^{\,\alpha - 1}\, dT' = S(0,X) + \frac{C_{0}}{\alpha} T^{\alpha}. \qquad (11)$$

The integral is finite for T0→0 if α > 0. So the heat capacity of all substances must go to zero at absolute zero:

$$\lim_{T \to 0} C(T,X) = 0. \qquad (12)$$

The molar specific heat at constant volume of a monatomic classical ideal gas, such as helium at room temperature, is given by $C_{V} = (3/2)R$, with R the molar ideal gas constant. Substitution in Eq. (4) gives

$$S(T,X) = S(T_{0},X) + \frac{3}{2} R \ln \frac{T}{T_{0}}. \qquad (13)$$

In the limit T0→0 this expression diverges. Clearly a constant heat capacity does not satisfy Eq. (12). This means that a gas with a constant heat capacity all the way to absolute zero violates the third law of thermodynamics. The conflict is resolved as follows: at a certain temperature the quantum nature of matter starts to dominate the behavior. Fermi particles follow Fermi–Dirac statistics and Bose particles follow Bose–Einstein statistics. In both cases the heat capacity at low temperatures is no longer temperature independent, even for ideal gases. For Fermi gases

$$C_{V} = \frac{\pi^{2}}{2} R\, \frac{T}{T_{F}} \qquad (14)$$

with the Fermi temperature $T_{F}$ given by

$$T_{F} = \frac{h^{2} N_{A}}{8 M k_{B}} \left(\frac{3 N_{A}}{\pi V_{m}}\right)^{2/3}. \qquad (15)$$

Here $N_{A}$ is Avogadro's number, $V_{m}$ the molar volume, and M the molar mass. For Bose gases

$$C_{V} = 1.93\, R \left(\frac{T}{T_{B}}\right)^{3/2} \qquad (16)$$

with $T_{B}$ given by

$$T_{B} = \frac{1}{11.9}\, \frac{h^{2} N_{A}}{M k_{B}} \left(\frac{N_{A}}{V_{m}}\right)^{2/3}. \qquad (17)$$

The specific heats given by Eq. (14) and (16) both satisfy Eq. (12).
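The contrast between Eq. (13) and Eq. (14) can be seen numerically. In this sketch (an added illustration; the Fermi temperature of 5000 K is an arbitrary choice), the entropy integral of Eq. (4) diverges as T0 → 0 for a constant heat capacity but converges for the linear Fermi-gas form:

import numpy as np

R = 8.314  # molar gas constant, J mol^-1 K^-1

def entropy_integral(c_of_t, t_low, t_high, n_grid=200_000):
    """Integral of C(T)/T from t_low to t_high, i.e. Eq. (4) with S(T0) = 0."""
    t = np.linspace(t_low, t_high, n_grid)
    return np.trapz(c_of_t(t) / t, t)

classical = lambda t: 1.5 * R * np.ones_like(t)     # Eq. (13): C_V = (3/2) R
t_fermi = 5000.0                                    # arbitrary Fermi temperature
fermi = lambda t: 0.5 * np.pi**2 * R * t / t_fermi  # Eq. (14): C_V linear in T

for t0 in (1e-2, 1e-4, 1e-6):
    print(f"T0 = {t0:.0e} K: classical -> {entropy_integral(classical, t0, 300):9.2f}"
          f"   Fermi -> {entropy_integral(fermi, t0, 300):7.4f}")
# The classical column keeps growing (logarithmic divergence) while the
# Fermi column settles to a finite value, consistent with Eq. (12).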

Vapor pressure
The only liquids near absolute zero are ³He and ⁴He. Their heat of evaporation has a limiting value given by

$$L = L_{0} + C_{p} T \qquad (18)$$

with $L_{0}$ and $C_{p}$ constant. If we consider a container partly filled with liquid and partly with gas, the entropy of the liquid–gas mixture is

$$S = S_{l}(T) + x \frac{L}{T} \qquad (19)$$

where $S_{l}(T)$ is the entropy of the liquid and x is the gas fraction. Clearly the entropy change during the liquid–gas transition (x from 0 to 1) diverges in the limit of T→0. This violates Eq. (8). Nature solves this paradox as follows: at temperatures below about 50 mK the vapor pressure is so low that the gas density is lower than the best vacuum in the universe. In other words: below 50 mK there is simply no gas above the liquid.

Latent heat of melting


The melting curves of ³He and ⁴He both extend down to absolute zero at finite pressure. At the melting pressure, liquid and solid are in equilibrium. The third law demands that the entropies of the solid and liquid are equal at T = 0. As a result, the latent heat of melting is zero, and the slope of the melting curve extrapolates to zero as a result of the Clausius–Clapeyron equation.

Thermal expansion coefficient


The thermal expansion coefficient is defined as

$$\alpha_{V} = \frac{1}{V_{m}} \left(\frac{\partial V_{m}}{\partial T}\right)_{p}. \qquad (20)$$

With the Maxwell relation

$$\left(\frac{\partial V_{m}}{\partial T}\right)_{p} = -\left(\frac{\partial S}{\partial p}\right)_{T} \qquad (21)$$

and Eq. (8) with X = p it is shown that

$$\lim_{T \to 0} \alpha_{V} = 0. \qquad (22)$$

So the thermal expansion coefficient of all materials must go to zero at zero kelvin.

References
[1] J. Wilks The Third Law of Thermodynamics Oxford University Press (1961). [2] Kittel and Kroemer, Thermal Physics (2nd ed.), page 49. [3] Wilks, J. (1971). The Third Law of Thermodynamics, Chapter 6 in Thermodynamics, volume 1, ed. W. Jost, of H. Eyring, D. Henderson, W. Jost, Physical Chemistry. An Advanced Treatise, Academic Press, New York, page 477. [4] Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics, New York, ISBN 0883187973, page 342. [5] Guggenheim, E.A. (1967). Thermodynamics. An Advanced Treatment for Chemists and Physicists, fifth revised edition, North-Holland Publishing Company, Amsterdam, page 157. [6] F. Pobell, Matter and Methods at Low Temperatures, (Springer-Verlag, Berlin, 2007)

Further reading
Goldstein, Martin & Inge F. (1993) The Refrigerator and the Universe. Cambridge MA: Harvard University Press. ISBN 0-674-75324-0. Chpt. 14 is a nontechnical discussion of the Third Law, one including the requisite elementary quantum mechanics. Braun, S.; Ronzheimer, J. P.; Schreiber, M.; Hodgman, S. S.; Rom, T.; Bloch, I.; Schneider, U. (2013). "Negative Absolute Temperature for Motional Degrees of Freedom". Science 339 (6115): 525. arXiv: 1211.0545 (http:// arxiv.org/abs/1211.0545). Bibcode: 2013Sci...339...52B (http://adsabs.harvard.edu/abs/2013Sci...339... 52B). doi: 10.1126/science.1227831 (http://dx.doi.org/10.1126/science.1227831). PMID 23288533 (http:// www.ncbi.nlm.nih.gov/pubmed/23288533). Lay summary (http://www.newscientist.com/article/ dn23042-butt-of-atoms-goes-beyond-absolute-zero.html) New Scientist (January 3, 2013).


Chapter 3. History
History of thermodynamics

The 1698 Savery Engine the world's first commercially-useful steam engine: built by Thomas Savery


The history of thermodynamics is a fundamental strand in the history of physics, the history of chemistry, and the history of science in general. Owing to the relevance of thermodynamics in much of science and technology, its history is finely woven with the developments of classical mechanics, quantum mechanics, magnetism, and chemical kinetics, to more distant applied fields such as meteorology, information theory, and biology (physiology), and to technological developments such as the steam engine, internal combustion engine, cryogenics and electricity generation. The development of thermodynamics both drove and was driven by atomic theory. It also, albeit in a subtle manner, motivated new directions in probability and statistics; see, for example, the timeline of thermodynamics.


History
Contributions from ancient and medieval times
The ancients viewed heat as that related to fire. In 3000 BC, the ancient Egyptians viewed heat as related to origin mythologies. In the Western philosophical tradition, after much debate about the primal element among earlier pre-Socratic philosophers, Empedocles proposed a four-element theory, in which all substances derive from earth, water, air, and fire. The Empedoclean element of fire is perhaps the principal ancestor of later concepts such as phlogiston and caloric. Around 500 BC, the Greek philosopher Heraclitus became famous as the "flux and fire" philosopher for his proverbial utterance: "All things are flowing." Heraclitus argued that the three principal elements in nature were fire, earth, and water. Atomism is a central part of today's relationship between thermodynamics and statistical mechanics. Ancient thinkers such as Leucippus and Democritus, and later the Epicureans, by advancing atomism, laid the foundations for the later atomic theory. Until experimental proof of atoms was provided in the 20th century, the atomic theory was driven largely by philosophical considerations and scientific intuition. Consequently, ancient philosophers used atomic theory to reach conclusions that today may be viewed as immature: for example, Democritus gives a vague atomistic description of the soul, namely that it is "built from thin, smooth, and round atoms, similar to those of fire". The 5th-century BC Greek philosopher Parmenides, in his only known work, a poem conventionally titled On Nature, uses verbal reasoning to postulate that a void, essentially what is now known as a vacuum, could not occur in nature. This view was supported by the arguments of Aristotle, but was criticized by Leucippus and Hero of Alexandria. From antiquity to the Middle Ages various arguments were put forward to prove or disprove the existence of a vacuum, and several attempts were made to construct a vacuum, but all proved unsuccessful.
Heating a body, such as a segment of protein alpha helix (above), tends to cause its atoms to vibrate more, and to expand or change phase, if heating is continued; an axiom of nature noted by Herman Boerhaave in the 1700s.

The European scientists Cornelius Drebbel, Robert Fludd, Galileo Galilei and Santorio Santorio in the 16th and 17th centuries were able to gauge the relative "coldness" or "hotness" of air, using a rudimentary air thermometer (or thermoscope). This may have been influenced by an earlier device which could expand and contract the air constructed by Philo of Byzantium and Hero of Alexandria.

Around 1600, the English philosopher and scientist Francis Bacon surmised: "Heat itself, its essence and quiddity is motion and nothing else." In 1643, Galileo Galilei, while generally accepting the 'sucking' explanation of horror vacui proposed by Aristotle, believed that nature's vacuum-abhorrence is limited. Pumps operating in mines had already proven that nature would only fill a vacuum with water up to a height of ~30 feet. Knowing this curious fact, Galileo encouraged his former pupil Evangelista Torricelli to investigate these supposed limitations. Torricelli did not believe that vacuum-abhorrence (horror vacui), in the sense of Aristotle's 'sucking' perspective, was responsible for raising the water. Rather, he reasoned, it was the result of the pressure exerted on the liquid by the surrounding air. To prove this theory, he filled a long glass tube (sealed at one end) with mercury and upended it into a dish also containing mercury. Only a portion of the tube emptied; ~30 inches of the liquid remained. As the mercury emptied, a vacuum was created at the top of the tube. This, the first man-made vacuum, effectively disproved Aristotle's 'sucking' theory and affirmed the existence of vacuums in nature. The gravitational force on the heavy element mercury prevented it from filling the vacuum. Nature may abhor a vacuum, but gravity does not care.


Transition from chemistry to thermochemistry


The theory of phlogiston arose in the 17th century, late in the period of alchemy. Its replacement by caloric theory in the 18th century is one of the historical markers of the transition from alchemy to chemistry. Phlogiston was a hypothetical substance that was presumed to be liberated from combustible substances during burning, and from metals during the process of rusting. Caloric, like phlogiston, was also presumed to be the "substance" of heat that would flow from a hotter body to a cooler body, thus warming it. The first substantial experimental challenges to caloric theory arose in Rumford's 1798 work, when he showed that boring cast iron cannons produced great amounts of heat which he ascribed to friction, and his work was among the first to undermine the caloric theory. The development of the steam engine also focused attention on calorimetry and the amount of heat produced from different types of coal. The first quantitative research on the heat changes during chemical reactions was initiated by Lavoisier using an ice calorimeter following research by Joseph Black on the latent heat of water.

More quantitative studies by James Prescott Joule in 1843 onwards provided soundly reproducible phenomena, and helped to place the subject of thermodynamics on a solid footing. William Thomson, for example, was still trying to explain Joule's observations within a caloric framework as late as 1850. The utility and explanatory power of kinetic theory, however, soon started to displace caloric and it was largely obsolete by the end of the 19th century. Joseph Black and Lavoisier made important contributions in the precise measurement of heat changes using the calorimeter, a subject which became known as thermochemistry.

The world's first ice-calorimeter, used in the winter of 1782–83 by Antoine Lavoisier and Pierre-Simon Laplace to determine the heat evolved in various chemical changes; calculations which were based on Joseph Black's prior discovery of latent heat. These experiments mark the foundation of thermochemistry.[citation needed]


Phenomenological thermodynamics
Boyle's law (1662)
Charles's law (c. 1787): first published by Joseph Louis Gay-Lussac in 1802, but referencing unpublished work by Jacques Charles from around 1787; the relationship had been anticipated by the work of Guillaume Amontons in 1702.
Gay-Lussac's law (1802)

Birth of thermodynamics as science


At its origins, thermodynamics was the study of engines. A precursor of the engine was designed by the German scientist Otto von Guericke who, in 1650, designed and built the world's first vacuum pump and created the world's first ever vacuum, known as the Magdeburg hemispheres. He was driven to make a vacuum in order to disprove Aristotle's long-held supposition that 'Nature abhors a vacuum'.

Robert Boyle (1627–1691)

Shortly thereafter, the Irish physicist and chemist Robert Boyle had learned of Guericke's designs and in 1656, in coordination with the English scientist Robert Hooke, built an air pump. Using this pump, Boyle and Hooke noticed the pressure–volume correlation: P·V = constant. In that time, air was assumed to be a system of motionless particles, and not interpreted as a system of moving molecules. The concept of thermal motion came two centuries later. Therefore Boyle's publication in 1660 speaks of a mechanical concept: the air spring.[1] Later, after the invention of the thermometer, the property temperature could be quantified. This tool gave Gay-Lussac the opportunity to derive his law, which led shortly afterwards to the ideal gas law. But already before the establishment of the ideal gas law, an associate of Boyle's named Denis Papin built in 1679 a bone digester, which is a closed vessel with a tightly fitting lid that confines steam until a high pressure is generated. Later designs implemented a steam release valve to keep the machine from exploding. By watching the valve rhythmically move up and down, Papin conceived of the idea of a piston and cylinder engine. He did not, however, follow through with his design. Nevertheless, in 1697, based on Papin's designs, engineer Thomas Savery built the first engine. Although these early engines were crude and inefficient, they attracted the attention of the leading scientists of the time. One such scientist was Sadi Carnot, the "father" of thermodynamics, who in 1824 published Reflections on the Motive Power of Fire, a discourse on heat, power, and engine efficiency. This marks the start of thermodynamics as a modern science.


Hence, prior to 1698 and the invention of the Savery Engine, horses were used to power pulleys, attached to buckets, which lifted water out of flooded salt mines in England. In the years to follow, more variations of steam engines were built, such as the Newcomen Engine, and later the Watt Engine. In time, these early engines would eventually be utilized in place of horses. Thus, each engine began to be associated with a certain amount of "horse power" depending upon how many horses it had replaced.

A Watt steam engine, the steam engine that propelled the Industrial Revolution in Britain and the world

The main problem with these first engines was that they were slow and clumsy, converting less than 2% of the input fuel into useful work. In other words, large quantities of coal (or wood) had to be burned to yield only a small fraction of work output. Hence the need for a new science of engine dynamics was born. Most cite Sadi Carnot's 1824 paper [2] Reflections on the Motive Power of Fire as the starting point for thermodynamics as a modern science. Carnot defined "motive power" to be the expression of the useful effect that a motor is capable of producing. Herein, Carnot introduced the first modern-day definition of "work": weight lifted through a height. The desire to understand, via formulation, this useful effect in relation to "work" is at the core of all modern-day thermodynamics. In 1843, James Joule experimentally found the mechanical equivalent of heat. In 1845, Joule reported his best-known experiment, involving the use of a falling weight to spin a paddle-wheel in a barrel of water, which allowed him to estimate a mechanical equivalent of heat of 819 ft·lbf/Btu (4.41 J/cal). This led to the theory of conservation of energy and explained why heat can do work.[3]
Sadi Carnot (1796-1832): the "father" of thermodynamics

The name "thermodynamics," however, did not arrive until 1849, when the British mathematician and physicist William Thomson (Lord Kelvin) coined the term thermodynamics in a paper on the efficiency of steam engines.

In 1850, the famed mathematical physicist Rudolf Clausius defined the term entropy S to be the heat lost or turned into waste, stemming from the Greek word entrepein, meaning to turn. In association with Clausius, in 1871, the Scottish mathematician and physicist James Clerk Maxwell formulated a new branch of thermodynamics called statistical thermodynamics, which functions to analyze large numbers of particles at equilibrium, i.e., systems where no changes are occurring, such that only their average properties, such as temperature T, pressure P, and volume V, become important. Soon thereafter, in 1875, the Austrian physicist Ludwig Boltzmann formulated a precise connection between entropy S and molecular motion:

$$S = k \log W,$$

being defined in terms of the number of possible states W such motion could occupy, where k is Boltzmann's constant.

The following year, 1876, was a seminal point in the development of human thought. During this essential period, chemical engineer Willard Gibbs, the first person in America to be awarded a PhD in engineering (Yale), published an obscure 300-page paper titled On the Equilibrium of Heterogeneous Substances, wherein he formulated one grand equality, the Gibbs free energy equation, which gives a measure of the amount of "useful work" attainable in reacting systems. Gibbs also originated the concept we now know as enthalpy H, calling it "a heat function for constant pressure". The modern word enthalpy would be coined many years later by Heike Kamerlingh Onnes, who based it on the Greek word enthalpein, meaning to warm. Building on these foundations, those such as Lars Onsager, Erwin Schrödinger, and Ilya Prigogine functioned to bring these engine "concepts" into the thoroughfare of almost every modern-day branch of science.


Kinetic theory
The idea that heat is a form of motion is perhaps an ancient one and is certainly discussed by Francis Bacon in 1620 in his Novum Organum. The first written scientific reflection on the microscopic nature of heat is probably to be found in a work by Mikhail Lomonosov, in which he wrote: "(..) movement should not be denied based on the fact it is not seen. Who would deny that the leaves of trees move when rustled by a wind, despite it being unobservable from large distances? Just as in this case motion remains hidden due to perspective, it remains hidden in warm bodies due to the extremely small sizes of the moving particles. In both cases, the viewing angle is so small that neither the object nor their movement can be seen." During the same years, Daniel Bernoulli published his book Hydrodynamics (1738), in which he derived an equation for the pressure of a gas considering the collisions of its atoms with the walls of a container. He proves that this pressure is two thirds the average kinetic energy of the gas in a unit volume. Bernoulli's ideas, however, made little impact on the dominant caloric culture. Bernoulli made a connection with Gottfried Leibniz's vis viva principle, an early formulation of the principle of conservation of energy, and the two theories became intimately entwined throughout their history. Though Benjamin Thompson suggested that heat was a form of motion as a result of his experiments in 1798, no attempt was made to reconcile theoretical and experimental approaches, and it is unlikely that he was thinking of the vis viva principle. John Herapath later independently formulated a kinetic theory in 1820, but mistakenly associated temperature with momentum rather than vis viva or kinetic energy. His work ultimately failed peer review and was neglected. John James Waterston in 1843 provided a largely accurate account, again independently, but his work received the same reception, failing peer review even from someone as well-disposed to the kinetic principle as Davy. Further progress in kinetic theory started only in the middle of the 19th century, with the works of Rudolf Clausius, James Clerk Maxwell, and Ludwig Boltzmann. In his 1857 work On the nature of the motion called heat, Clausius for the first time clearly states that heat is the average kinetic energy of molecules. This interested Maxwell, who in 1859 derived the momentum distribution later named after him. Boltzmann subsequently generalized his distribution for the case of gases in external fields. Boltzmann is perhaps the most significant contributor to kinetic theory, as he introduced many of the fundamental concepts in the theory. Besides the Maxwell-Boltzmann distribution mentioned above, he also associated the kinetic energy of particles with their degrees of freedom. The Boltzmann equation for the distribution function of a gas in non-equilibrium states is still the most effective equation for studying transport phenomena in gases and metals. By introducing the concept of thermodynamic probability as the number of microstates corresponding to the current macrostate, he showed that its logarithm is proportional to entropy.


Branches of thermodynamics
The following list gives a rough outline as to when the major branches of thermodynamics came into inception:
Thermochemistry - 1780s
Classical thermodynamics - 1824
Chemical thermodynamics - 1876
Statistical mechanics - c. 1880s
Equilibrium thermodynamics
Engineering thermodynamics
Chemical engineering thermodynamics - c. 1940s
Non-equilibrium thermodynamics - 1941
Small systems thermodynamics - 1960s
Biological thermodynamics - 1957
Ecosystem thermodynamics - 1959
Relativistic thermodynamics - 1965
Quantum thermodynamics - 1968
Black hole thermodynamics - c. 1970s
Geological thermodynamics - c. 1970s
Biological evolution thermodynamics - 1978
Geochemical thermodynamics - c. 1980s
Atmospheric thermodynamics - c. 1980s
Natural systems thermodynamics - 1990s
Supramolecular thermodynamics - 1990s
Earthquake thermodynamics - 2000
Drug-receptor thermodynamics - 2001
Pharmaceutical systems thermodynamics - 2002

Ideas from thermodynamics have also been applied in other fields, for example: Thermoeconomics - c. 1970s

Entropy and the second law


Even though he was working with the caloric theory, Sadi Carnot in 1824 suggested that some of the caloric available for generating useful work is lost in any real process. In March 1851, while grappling to come to terms with the work of James Prescott Joule, Lord Kelvin started to speculate that there was an inevitable loss of useful heat in all processes. The idea was framed even more dramatically by Hermann von Helmholtz in 1854, giving birth to the spectre of the heat death of the universe. In 1854, William John Macquorn Rankine started to make use in calculation of what he called his thermodynamic function. This has subsequently been shown to be identical to the concept of entropy formulated by Rudolf Clausius in 1865. Clausius used the concept to develop his classic statement of the second law of thermodynamics the same year.


Heat transfer
The phenomenon of heat conduction is immediately grasped in everyday life. In 1701, Sir Isaac Newton published his law of cooling. However, in the 17th century, it came to be believed that all materials had an identical conductivity and that differences in sensation arose from their different heat capacities. Suggestions that this might not be the case came from the new science of electricity, in which it was easily apparent that some materials were good electrical conductors while others were effective insulators. Jan Ingen-Housz in 1785-9 made some of the earliest measurements, as did Benjamin Thompson during the same period. The fact that warm air rises and the importance of the phenomenon to meteorology was first realised by Edmund Halley in 1686. Sir John Leslie observed, in 1804, that the cooling effect of a stream of air increased with its speed. Carl Wilhelm Scheele distinguished heat transfer by thermal radiation (radiant heat) from that by convection and conduction in 1777. In 1791, Pierre Prévost showed that all bodies radiate heat, no matter how hot or cold they are. In 1804, Leslie observed that a matt black surface radiates heat more effectively than a polished surface, suggesting the importance of black-body radiation. Though it had come to be suspected even from Scheele's work, in 1831 Macedonio Melloni demonstrated that black-body radiation could be reflected, refracted and polarised in the same way as light. James Clerk Maxwell's 1862 insight that both light and radiant heat were forms of electromagnetic wave led to the start of the quantitative analysis of thermal radiation. In 1879, Josef Stefan observed that the total radiant flux from a blackbody is proportional to the fourth power of its temperature and stated the Stefan–Boltzmann law. The law was derived theoretically by Ludwig Boltzmann in 1884.

Cryogenics
In 1702 Guillaume Amontons introduced the concept of absolute zero based on observations of gases. In 1810, Sir John Leslie froze water to ice artificially. The idea of absolute zero was generalised in 1848 by Lord Kelvin. In 1906, Walther Nernst stated the third law of thermodynamics.


Further reading
Cardwell, D.S.L. (1971). From Watt to Clausius: The Rise of Thermodynamics in the Early Industrial Age. London: Heinemann. ISBN 0-435-54150-1.
Leff, H.S. & Rex, A.F. (eds) (1990). Maxwell's Demon: Entropy, Information and Computing. Bristol: Adam Hilger. ISBN 0-7503-0057-4.

External links
History of Statistical Mechanics and Thermodynamics (http://history.hyperjeff.net/statmech) - Timeline (1575 to 1980) @ Hyperjeff.net
History of Thermodynamics (http://www.mhtl.uwaterloo.ca/courses/me354/history.html) - University of Waterloo
Thermodynamic History Notes (http://www.wolframscience.com/reference/notes/1019b) - WolframScience.com

Brief History of Thermodynamics (http://www.nuc.berkeley.edu/courses/classes/E-115/Slides/A_Brief_History_of_Thermodynamics.pdf) - Berkeley [PDF]
History of Thermodynamics (http://thermodynamicstudy.net/history.html) - ThermodynamicStudy.net
Historical Background of Thermodynamics (http://che.konyang.ac.kr/COURSE/thermo/history/therm_his.html) - Carnegie-Mellon University
History of Thermodynamics (http://www.nt.ntnu.no/users/haugwarb/Presentations/History of Thermodynamics/) - In Pictures


An Experimental Enquiry Concerning the Source of the Heat which is Excited by Friction
An Experimental Enquiry Concerning the Source of the Heat which is Excited by Friction (1798), Philosophical Transactions of the Royal Society, p. 102, is a scientific paper by Benjamin Thompson, Count Rumford, that provided a substantial challenge to established theories of heat and began the 19th-century revolution in thermodynamics.

Background
Rumford was an opponent of the caloric theory of heat which held that heat was a fluid that could be neither created nor destroyed. He had further developed the view that all gases and liquids were absolute non-conductors of heat. His views were out of step with the accepted science of the time and the latter theory had particularly been attacked by John Dalton[1] and John Leslie[2].

Benjamin Thompson

Rumford was heavily influenced by the theological argument from design[3] and it is likely that he wished to grant water a privileged and providential status in the regulation of human life[4]. Though Rumford was to come to associate heat with motion, there is no evidence that he was committed to the kinetic theory or the principle of vis viva.

Experiments
Rumford had observed the frictional heat generated by boring cannon at the arsenal in Munich. Rumford immersed a cannon barrel in water and arranged for a specially blunted boring tool. He showed that the water could be boiled within roughly two and a half hours and that the supply of frictional heat was seemingly inexhaustible. Rumford confirmed that no physical change had taken place in the material of the cannon by comparing the specific heats of the material machined away and of that remaining, and finding them to be the same. Rumford argued that the seemingly indefinite generation of heat was incompatible with the caloric theory. He contended that the only thing communicated to the barrel was motion. Rumford made no attempt to further quantify the heat generated or to measure the mechanical equivalent of heat.


Reception
Most established scientists, such as William Henry[5] and Thomas Thomson[6], believed that there was enough uncertainty in the caloric theory to allow its adaptation to account for the new results. It had certainly proved robust and adaptable up to that time. Furthermore, Thomson[7], Jns Jakob Berzelius and Antoine Csar Becquerel observed that electricity could be indefinitely generated by friction. No educated scientist of the time was willing to hold that electricity was not a fluid. Ultimately, Rumford's claim of the "inexhaustible" supply of heat was a reckless extrapolation from the study. Charles Haldat made some penetrating criticisms of the reproducibility of Rumford's results[8] and it is possible to see the whole experiment as somewhat tendentious[9].
Joule's apparatus for measuring the mechanical equivalent of heat.

However, the experiment inspired the work of James Prescott Joule in the 1840s. Joule's more exact measurements were pivotal in establishing the kinetic theory at the expense of caloric.

Notes

1. ^ Cardwell (1971) p. 99
2. ^ Leslie, J. (1804). An Experimental Enquiry into the Nature and Propagation of Heat. London.
3. ^ Rumford (1804) "An enquiry concerning the nature of heat and the mode of its communication" (http://rstl.royalsocietypublishing.org/content/94/77.full.pdf+html) Philosophical Transactions of the Royal Society p. 77
4. ^ Cardwell (1971) pp. 99-100
5. ^ Henry, W. (1802) "A review of some experiments which have been supposed to disprove the materiality of heat", Manchester Memoirs v, p. 603
6. ^ Thomson, T. "Caloric", Supplement on Chemistry, Encyclopædia Britannica, 3rd ed.
7. ^ Ibid.
8. ^ Haldat, C.N.A. (1810) "Inquiries concerning the heat produced by friction", Journal de Physique lxv, p. 213
9. ^ Cardwell (1971) p. 102


Bibliography
Cardwell, D.S.L. (1971). From Watt to Clausius: The Rise of Thermodynamics in the Early Industrial Age. London: Heinemann. ISBN 0-435-54150-1.


Chapter 4. System State


Control volume

In fluid mechanics and thermodynamics, a control volume is a mathematical abstraction employed in the process of creating mathematical models of physical processes. In an inertial frame of reference, it is a volume fixed in space or moving with constant velocity through which the fluid (gas or liquid) flows. The surface enclosing the control volume is referred to as the control surface.[1] At steady state, a control volume can be thought of as an arbitrary volume in which the mass of the fluid remains constant. As fluid moves through the control volume, the mass entering the control volume is equal to the mass leaving the control volume. At steady state, and in the absence of work and heat transfer, the energy within the control volume remains constant. It is analogous to the classical mechanics concept of the free body diagram.

Overview
Typically, to understand how a given physical law applies to the system under consideration, one first begins by considering how it applies to a small control volume, or "representative volume". There is nothing special about a particular control volume; it simply represents a small part of the system to which physical laws can be easily applied. This gives rise to what is termed a volumetric, or volume-wise, formulation of the mathematical model. One can then argue that since the physical laws behave in a certain way on a particular control volume, they behave the same way on all such volumes, since that particular control volume was not special in any way. In this way, the corresponding point-wise formulation of the mathematical model can be developed so it can describe the physical behaviour of an entire (and possibly more complex) system. In fluid mechanics the conservation equations (for instance, the Navier-Stokes equations) are in integral form. They therefore apply on volumes. Finding forms of the equation that are independent of the control volumes allows simplification of the integral signs.


Substantive derivative
Computations in fluid mechanics often require that the regular time derivation operator $\partial/\partial t$ is replaced by the substantive derivative operator $D/Dt$. This can be seen as follows.

Consider a bug that is moving through a volume where there is some scalar, e.g. pressure, that varies with time and position: $p = p(t, x, y, z)$.

If the bug during the time interval from $t$ to $t + dt$ moves from $(x, y, z)$ to $(x + dx,\, y + dy,\, z + dz)$, then the bug experiences a change $dp$ in the scalar value,

$$dp = \frac{\partial p}{\partial t}\,dt + \frac{\partial p}{\partial x}\,dx + \frac{\partial p}{\partial y}\,dy + \frac{\partial p}{\partial z}\,dz$$

(the total differential). If the bug is moving with velocity $\mathbf{v} = (v_x, v_y, v_z)$, the change in position is $\mathbf{v}\,dt$ and we may write

$$dp = \frac{\partial p}{\partial t}\,dt + \nabla p \cdot \mathbf{v}\,dt$$

where $\nabla p$ is the gradient of the scalar field p. If the bug is just a fluid particle moving with the fluid's velocity field, the same formula applies, but now the velocity vector is that of the fluid, $\mathbf{u}$. The last parenthesized expression is the substantive derivative of the scalar pressure. Since the pressure p in this computation is an arbitrary scalar field, we may abstract it and write the substantive derivative operator as

$$\frac{D}{Dt} = \frac{\partial}{\partial t} + \mathbf{u}\cdot\nabla$$
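To make the operator concrete, here is a minimal numerical sketch (not from the source): it evaluates $Dp/Dt$ on a one-dimensional grid, where the scalar field p(t, x) and the uniform velocity u are illustrative assumptions.

```python
import numpy as np

# Substantive (material) derivative in 1-D: Dp/Dt = dp/dt + u * dp/dx,
# evaluated on a uniform grid with finite differences.
nx, dx, dt = 200, 0.05, 1e-3
x = np.arange(nx) * dx
u = 2.0                          # uniform fluid velocity (assumed)

def p(t, x):
    """An arbitrary scalar field, e.g. pressure, varying in t and x."""
    return np.sin(x - 0.5 * t) + 0.1 * t

t0 = 1.0
dp_dt = (p(t0 + dt, x) - p(t0 - dt, x)) / (2 * dt)  # local time derivative
dp_dx = np.gradient(p(t0, x), dx)                   # spatial gradient
Dp_Dt = dp_dt + u * dp_dx                           # substantive derivative
print(Dp_Dt[:5])
```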

References
James R. Welty, Charles E. Wicks, Robert E. Wilson & Gregory Rorrer, Fundamentals of Momentum, Heat, and Mass Transfer. ISBN 0-471-38149-7

Notes
[1] G.J. Van Wylen and R.E. Sonntag (1985), Fundamentals of Classical Thermodynamics, Section 2.1 (3rd edition), John Wiley & Sons, Inc., New York ISBN 0-471-82933-1

External links
Integral Approach to the Control Volume analysis of Fluid Flow (http://s6.aeromech.usyd.edu.au/aero/cvanalysis/integral_approach.pdf)


Ideal gas

An ideal gas is a theoretical gas composed of a set of randomly moving, non-interacting point particles. The ideal gas concept is useful because it obeys the ideal gas law, a simplified equation of state, and is amenable to analysis under statistical mechanics. At normal conditions such as standard temperature and pressure, most real gases behave qualitatively like an ideal gas. Many gases such as nitrogen, oxygen, hydrogen, noble gases, and some heavier gases like carbon dioxide can be treated like ideal gases within reasonable tolerances. Generally, a gas behaves more like an ideal gas at higher temperature and lower pressure, as the work performed against intermolecular forces becomes less significant compared with the particles' kinetic energy, and the size of the molecules becomes less significant compared to the empty space between them. The ideal gas model tends to fail at lower temperatures or higher pressures, when intermolecular forces and molecular size become important. It also fails for most heavy gases, such as many refrigerants, and for gases with strong intermolecular forces, notably water vapor. At some point of low temperature and high pressure, real gases undergo a phase transition, such as to a liquid or a solid. The model of an ideal gas, however, does not describe or allow phase transitions. These must be modeled by more complex equations of state. The ideal gas model has been explored in both the Newtonian dynamics (as in "kinetic theory") and in quantum mechanics (as a "gas in a box"). The ideal gas model has also been used to model the behavior of electrons in a metal (in the Drude model and the free electron model), and it is one of the most important models in statistical mechanics.


Types of ideal gas


There are three basic classes of ideal gas: the classical or Maxwell-Boltzmann ideal gas, the ideal quantum Bose gas, composed of bosons, and the ideal quantum Fermi gas, composed of fermions.

The classical ideal gas can be separated into two types: the classical thermodynamic ideal gas and the ideal quantum Boltzmann gas. Both are essentially the same, except that the classical thermodynamic ideal gas is based on classical statistical mechanics, and certain thermodynamic parameters such as the entropy are only specified to within an undetermined additive constant. The ideal quantum Boltzmann gas overcomes this limitation by taking the limit of the quantum Bose gas and quantum Fermi gas in the limit of high temperature to specify these additive constants. The behavior of a quantum Boltzmann gas is the same as that of a classical ideal gas except for the specification of these constants. The results of the quantum Boltzmann gas are used in a number of cases including the Sackur-Tetrode equation for the entropy of an ideal gas and the Saha ionization equation for a weakly ionized plasma.

Classical thermodynamic ideal gas


The thermodynamic properties of an ideal gas can be described by two equations. The equation of state of a classical ideal gas is the ideal gas law

$$PV = nRT.$$

This equation is derived from Boyle's law: $V \propto 1/P$ (at constant T and n); Charles's law: $V \propto T$ (at constant P and n); and Avogadro's law: $V \propto n$ (at constant T and P). By combining the three laws, it would demonstrate that $V \propto nT/P$, which would mean that $PV = nRT$.

The internal energy of an ideal gas is given by:

$$U = \hat{c}_V\,nRT$$

where
P is the pressure
V is the volume
n is the amount of substance of the gas (in moles)
R is the gas constant (8.314 J K⁻¹ mol⁻¹)
T is the absolute temperature
U is the internal energy
$\hat{c}_V$ is the dimensionless specific heat capacity at constant volume, 3/2 for monatomic gas, 5/2 for diatomic gas and 3 for more complex molecules.

In order to switch from macroscopic quantities (left hand side of the following equation) to microscopic ones (right hand side), we use

$$nR = N k_B$$

where
N is the number of gas particles
$k_B$ is the Boltzmann constant (1.381×10⁻²³ J K⁻¹).
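As a quick numerical illustration of the equation of state and of the macroscopic-microscopic link nR = N k_B, here is a minimal sketch; the state values (1 mol at 300 K in 22.4 L) are chosen only for illustration.

```python
# Ideal gas law PV = nRT, and the link nR = N * k_B.
R = 8.314          # gas constant, J K^-1 mol^-1
k_B = 1.381e-23    # Boltzmann constant, J K^-1
N_A = 6.022e23     # Avogadro constant, mol^-1

n, T, V = 1.0, 300.0, 0.0224            # 1 mol at 300 K in 22.4 L (illustrative)
P = n * R * T / V                        # pressure from the equation of state
print(f"P = {P:.0f} Pa")                 # ~111 kPa

# Same state expressed with the particle count: P V = N k_B T
N = n * N_A
print(f"P = {N * k_B * T / V:.0f} Pa")   # agrees with the molar form
```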

The probability distribution of particles by velocity or energy is given by the Maxwell speed distribution. The ideal gas law is an extension of experimentally discovered gas laws. Real fluids at low density and high temperature approximate the behavior of a classical ideal gas. However, at lower temperatures or a higher density, a real fluid deviates strongly from the behavior of an ideal gas, particularly as it condenses from a gas into a liquid or as it deposits from a gas into a solid. This deviation is expressed as a compressibility factor.

The ideal gas model depends on the following assumptions:

The molecules of the gas are indistinguishable, small, hard spheres
All collisions are elastic and all motion is frictionless (no energy loss in motion or collision)
Newton's laws apply
The average distance between molecules is much larger than the size of the molecules
The molecules are constantly moving in random directions with a distribution of speeds
There are no attractive or repulsive forces between the molecules or the surroundings


The assumption of spherical particles is necessary so that there are no rotational modes allowed, unlike in a diatomic gas. The following three assumptions are very related: molecules are hard, collisions are elastic, and there are no inter-molecular forces. The assumption that the space between particles is much larger than the particles themselves is of paramount importance, and explains why the ideal gas approximation fails at high pressures.

Heat capacity
The heat capacity at constant volume of n = 1/R mole of any gas (so that nR = 1 J K⁻¹), including an ideal gas, is:

$$\hat{c}_V = \frac{1}{nR}\,T\left(\frac{\partial S}{\partial T}\right)_V = \frac{1}{nR}\left(\frac{\partial U}{\partial T}\right)_V$$

where S is the entropy. This is the dimensionless heat capacity at constant volume, which is generally a function of temperature due to intermolecular forces. For moderate temperatures, the constant for a monatomic gas is $\hat{c}_V = 3/2$ while for a diatomic gas it is $\hat{c}_V = 5/2$. It is seen that macroscopic measurements on heat capacity provide information on the microscopic structure of the molecules.

The heat capacity at constant pressure of 1/R mole of ideal gas is:

$$\hat{c}_P = \hat{c}_V + 1 = \frac{1}{nR}\left(\frac{\partial H}{\partial T}\right)_P$$

where $H = U + PV$ is the enthalpy of the gas.

Sometimes, a distinction is made between an ideal gas, where $\hat{c}_V$ and $\hat{c}_P$ could vary with temperature, and a perfect gas, for which this is not the case.

Entropy
Using the results of thermodynamics only, we can go a long way in determining the expression for the entropy of an ideal gas. This is an important step since, according to the theory of thermodynamic potentials, if we can express the entropy as a function of U (U is a thermodynamic potential), volume V and the number of particles N, then we will have a complete statement of the thermodynamic behavior of the ideal gas. We will be able to derive both the ideal gas law and the expression for internal energy from it.

Since the entropy is an exact differential, using the chain rule, the change in entropy when going from a reference state 0 to some other state with entropy S may be written as $\Delta S$ where:

$$\Delta S = \int_{S_0}^{S} dS = \int_{T_0}^{T}\left(\frac{\partial S}{\partial T}\right)_V dT + \int_{V_0}^{V}\left(\frac{\partial S}{\partial V}\right)_T dV$$

where the reference variables may be functions of the number of particles N. Using the definition of the heat capacity at constant volume for the first differential and the appropriate Maxwell relation for the second we have:

$$\Delta S = \int_{T_0}^{T}\frac{C_V}{T}\,dT + \int_{V_0}^{V}\left(\frac{\partial P}{\partial T}\right)_V dV$$


Expressing $C_V$ in terms of $\hat{c}_V$ as developed in the above section, differentiating the ideal gas equation of state, and integrating yields:

$$\Delta S = \hat{c}_V\,nR\,\ln\frac{T}{T_0} + nR\,\ln\frac{V}{V_0}$$

which implies that the entropy may be expressed as:

$$S = nR\,\ln\frac{V T^{\hat{c}_V}}{f(N)}$$

where all constants have been incorporated into the logarithm as f(N), which is some function of the particle number N having the same dimensions as $VT^{\hat{c}_V}$ in order that the argument of the logarithm be dimensionless. We now impose the constraint that the entropy be extensive. This will mean that when the extensive parameters (V and N) are multiplied by a constant, the entropy will be multiplied by the same constant. Mathematically:

$$S(T, aV, aN) = a\,S(T, V, N).$$

From this we find an equation for the function f(N)

$$a\,f(N) = f(aN).$$

Differentiating this with respect to a, setting a equal to unity, and then solving the differential equation yields f(N):

$$f(N) = \Phi N$$

where $\Phi$ is some constant which may vary for different gases, but will be independent of the thermodynamic state of the gas. It will have the dimensions of $VT^{\hat{c}_V}/N$. Substituting into the equation for the entropy:

$$S = nR\,\ln\frac{V T^{\hat{c}_V}}{N\Phi}$$

and using the expression for the internal energy of an ideal gas, $U = \hat{c}_V nRT$, the entropy may be written:

$$S = N k_B \ln\!\left[\frac{V}{N}\left(\frac{U}{\hat{c}_V k_B N}\right)^{\hat{c}_V}\frac{1}{\Phi}\right]$$

Since this is an expression for entropy in terms of U, V, and N, it is a fundamental equation from which all other properties of the ideal gas may be derived. This is about as far as we can go using thermodynamics alone. Note that the above equation is flawed as the temperature approaches zero, the entropy approaches negative infinity, in contradiction to the third law of thermodynamics. In the above "ideal" development, there is a critical point, not at absolute zero, at which the argument of the logarithm becomes unity, and the entropy becomes zero. This is unphysical. The above equation is a good approximation only when the argument of the logarithm is much larger than unity the concept of an ideal gas breaks down at low values of V/N. Nevertheless, there will be a "best" value of the constant in the sense that the predicted entropy is as close as possible to the actual entropy, given the flawed assumption of ideality. A quantum-mechanical derivation of this constant is developed in the derivation of the Sackur-Tetrode equation which expresses the entropy of a monatomic ideal gas. In the Sackur-Tetrode theory the constant depends only upon the mass of the gas particle. The Sackur-Tetrode equation also suffers from a divergent entropy at absolute zero, but is a good approximation for the entropy of a monatomic ideal gas for high enough temperatures.
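The $\Delta S$ expression derived above is straightforward to evaluate numerically. A minimal sketch, assuming a monatomic gas ($\hat{c}_V = 3/2$) and illustrative states:

```python
import math

# Entropy change of an ideal gas between two states:
# dS = c_v * n * R * ln(T2/T1) + n * R * ln(V2/V1), c_v dimensionless.
R = 8.314

def delta_S(n, T1, V1, T2, V2, c_v=1.5):   # c_v = 3/2 for a monatomic gas
    return n * R * (c_v * math.log(T2 / T1) + math.log(V2 / V1))

# Doubling the volume isothermally: dS = n R ln 2 > 0, as expected.
print(delta_S(n=1.0, T1=300.0, V1=1.0, T2=300.0, V2=2.0))  # ~5.76 J/K
```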


Thermodynamic potentials
Expressing the entropy as a function of T, V, and N:

$$\frac{S}{N k_B} = \ln\frac{V T^{\hat{c}_V}}{N\Phi}$$

The chemical potential of the ideal gas is calculated from the corresponding equation of state (see thermodynamic potential):

$$\mu = \left(\frac{\partial G}{\partial N}\right)_{T,P}$$

where G is the Gibbs free energy and is equal to $U + PV - TS$, so that:

$$\mu(T, V, N) = k_B T\left(\hat{c}_P - \ln\frac{V T^{\hat{c}_V}}{N\Phi}\right)$$

The thermodynamic potentials for an ideal gas can now be written as functions of T, V, and N as:

$$U = \hat{c}_V N k_B T$$
$$A = U - TS = \mu N - N k_B T$$
$$H = U + PV = \hat{c}_P N k_B T$$
$$G = U + PV - TS = \mu N$$

where, as before, $\hat{c}_P = \hat{c}_V + 1$. The most informative way of writing the potentials is in terms of their natural variables, since each of these equations can be used to derive all of the other thermodynamic variables of the system. In terms of their natural variables, the thermodynamic potentials of a single-species ideal gas are:

$$U(S, V, N) = \hat{c}_V N k_B \left(\frac{N\Phi}{V}\right)^{1/\hat{c}_V} e^{S/(\hat{c}_V N k_B)}$$
$$A(T, V, N) = N k_B T\left(\hat{c}_V - \ln\frac{V T^{\hat{c}_V}}{N\Phi}\right)$$
$$H(S, P, N) = \hat{c}_P N k_B \left(\frac{P\Phi}{k_B}\right)^{1/\hat{c}_P} e^{S/(\hat{c}_P N k_B)}$$
$$G(T, P, N) = N k_B T\left(\hat{c}_P - \ln\frac{k_B T^{\hat{c}_P}}{P\Phi}\right)$$

In statistical mechanics, the relationship between the Helmholtz free energy and the partition function is fundamental, and is used to calculate the thermodynamic properties of matter; see configuration integral [1] for more details.

Speed of sound
The speed of sound in an ideal gas is given by

$$c_\text{sound} = \sqrt{\left(\frac{\partial P}{\partial \rho}\right)_s} = \sqrt{\frac{\gamma P}{\rho}} = \sqrt{\frac{\gamma R T}{M}}$$

where
$\gamma$ is the adiabatic index
$s$ is the entropy per particle of the gas
$\rho$ is the mass density of the gas
$P$ is the pressure of the gas
$R$ is the universal gas constant
$T$ is the temperature
$M$ is the molar mass of the gas.
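A minimal numeric check of the last form of the formula; the values for air (γ = 1.4, M = 0.02897 kg/mol) are assumptions taken from common tables, not from the source.

```python
import math

# Speed of sound in an ideal gas: c = sqrt(gamma * R * T / M).
R = 8.314
gamma_air = 1.4        # diatomic adiabatic index (assumed for air)
M_air = 0.02897        # molar mass of air, kg/mol (standard value)

c = math.sqrt(gamma_air * R * 293.15 / M_air)
print(f"{c:.0f} m/s")  # ~343 m/s at 20 degrees C
```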


Table of ideal gas equations


See Table of thermodynamic equations: Ideal gas.

Ideal quantum gases


In the above mentioned Sackur-Tetrode equation, the best choice of the entropy constant was found to be proportional to the quantum thermal wavelength of a particle, and the point at which the argument of the logarithm becomes zero is roughly equal to the point at which the average distance between particles becomes equal to the thermal wavelength. In fact, quantum theory itself predicts the same thing. Any gas behaves as an ideal gas at high enough temperature and low enough density, but at the point where the Sackur-Tetrode equation begins to break down, the gas will begin to behave as a quantum gas, composed of either bosons or fermions. (See the gas in a box article for a derivation of the ideal quantum gases, including the ideal Boltzmann gas.) Gases tend to behave as an ideal gas over a wider range of pressures when the temperature reaches the Boyle temperature.

Ideal Boltzmann gas


The ideal Boltzmann gas yields the same results as the classical thermodynamic gas, but makes the following identification for the undetermined constant $\Phi$:

$$\Phi = \frac{T^{3/2}\Lambda^3}{g}$$

where $\Lambda$ is the thermal de Broglie wavelength of the gas and g is the degeneracy of states.

Ideal Bose and Fermi gases


An ideal gas of bosons (e.g. a photon gas) will be governed by Bose-Einstein statistics and the distribution of energy will be in the form of a Bose-Einstein distribution. An ideal gas of fermions will be governed by Fermi-Dirac statistics and the distribution of energy will be in the form of a Fermi-Dirac distribution.

References
[1] http://clesm.mae.ufl.edu/wiki.pub/index.php/Configuration_integral_%28statistical_mechanics%29


Real gas

Real gases, as opposed to a perfect or ideal gas, exhibit properties that cannot be explained entirely using the ideal gas law. To understand the behaviour of real gases, the following must be taken into account:

compressibility effects;
variable specific heat capacity;
van der Waals forces;
non-equilibrium thermodynamic effects;
issues with molecular dissociation and elementary reactions with variable composition.

For most applications, such a detailed analysis is unnecessary, and the ideal gas approximation can be used with reasonable accuracy. On the other hand, real-gas models have to be used near the condensation point of gases, near critical points, at very high pressures, to explain the JouleThomson effect and in other less usual cases.


Models
van der Waals model
Real gases are often modeled by taking into account their molar weight and molar volume

$$P = \frac{RT}{V_m - b} - \frac{a}{V_m^2}$$

where P is the pressure, T is the temperature, R the ideal gas constant, and $V_m$ the molar volume. a and b are parameters that are determined empirically for each gas, but are sometimes estimated from their critical temperature ($T_c$) and critical pressure ($P_c$) using these relations:

$$a = \frac{27 R^2 T_c^2}{64 P_c}, \qquad b = \frac{R T_c}{8 P_c}$$
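A minimal sketch comparing the van der Waals prediction with the ideal gas law; the critical constants for CO2 (Tc ≈ 304.13 K, Pc ≈ 7.377 MPa) are assumed values from common tables, with a and b computed from the relations above.

```python
# Van der Waals pressure vs the ideal gas law, with a and b estimated
# from the critical-point relations above. CO2 critical data assumed.
R = 8.314
Tc, Pc = 304.13, 7.377e6
a = 27 * R**2 * Tc**2 / (64 * Pc)     # Pa m^6 mol^-2
b = R * Tc / (8 * Pc)                 # m^3 mol^-1

def p_vdw(T, Vm):
    return R * T / (Vm - b) - a / Vm**2

def p_ideal(T, Vm):
    return R * T / Vm

T, Vm = 300.0, 1e-3                   # 300 K, 1 L/mol (a fairly dense gas)
print(p_ideal(T, Vm), p_vdw(T, Vm))   # the vdW pressure is noticeably lower
```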

Redlich–Kwong model
The Redlich–Kwong equation is another two-parameter equation that is used to model real gases. It is almost always more accurate than the van der Waals equation, and often more accurate than some equations with more than two parameters. The equation is

$$P = \frac{RT}{V_m - b} - \frac{a}{\sqrt{T}\,V_m\left(V_m + b\right)}$$

where a and b are two empirical parameters that are not the same parameters as in the van der Waals equation. These parameters can be determined:

$$a = \frac{0.42748\,R^2 T_c^{5/2}}{P_c}, \qquad b = \frac{0.08664\,R T_c}{P_c}$$

[Figure: Isotherms of a real gas (sketch). Dark blue curves: isotherms below the critical temperature. Green sections: metastable states. The section to the left of point F: normal liquid. Point F: boiling point. Line FG: equilibrium of liquid and gaseous phases. Section FA: superheated liquid. Section F′A: stretched liquid (p<0). Section AC: analytic continuation of isotherm, physically impossible. Section CG: supercooled vapor. Point G: dew point. The plot to the right of point G: normal gas. Areas FAB and GCB are equal. Red curve: critical isotherm. Point K: critical point. Light blue curves: supercritical isotherms.]

Berthelot and modified Berthelot model


The Berthelot equation (named after D. Berthelot)[1] is very rarely used,

$$P = \frac{RT}{V_m - b} - \frac{a}{T V_m^2}$$

but the modified version is somewhat more accurate

$$P = \frac{RT}{V_m}\left[1 + \frac{9 P T_c}{128 P_c T}\left(1 - \frac{6 T_c^2}{T^2}\right)\right]$$


Dieterici model
This model (named after C. Dieterici[2]) fell out of usage in recent years:

$$P = \frac{RT}{V_m - b}\,e^{-a/(V_m R T)}$$

Clausius model
The Clausius equation (named after Rudolf Clausius) is a very simple three-parameter equation used to model gases:

$$P = \frac{RT}{V_m - b} - \frac{a}{T\left(V_m + c\right)^2}$$

where

$$a = \frac{27 R^2 T_c^3}{64 P_c}, \qquad b = V_c - \frac{R T_c}{4 P_c}, \qquad c = \frac{3 R T_c}{8 P_c} - V_c$$

where $V_c$ is the critical volume.

Virial model
The virial equation derives from a perturbative treatment of statistical mechanics:

$$PV_m = A + \frac{B}{V_m} + \frac{C}{V_m^2} + \cdots$$

or alternatively

$$PV_m = A' + B'P + C'P^2 + \cdots$$

where A, B, C, A′, B′, and C′ are temperature dependent constants (with A = A′ = RT in the ideal-gas limit).

Peng–Robinson model
The Peng–Robinson equation of state (named after D.-Y. Peng and D. B. Robinson) has the interesting property of being useful in modeling some liquids as well as real gases:

$$P = \frac{RT}{V_m - b} - \frac{a(T)}{V_m(V_m + b) + b(V_m - b)}$$

Wohl model
The Wohl equation (named after A. Wohl)[3] is formulated in terms of critical values, making it useful when real gas constants are not available:

$$P = \frac{RT}{V_m - b} - \frac{a}{T V_m (V_m - b)} + \frac{c}{T^2 V_m^3}$$

where

$$a = 6 P_c T_c V_c^2, \qquad b = \frac{V_c}{4}, \qquad c = 4 P_c T_c^2 V_c^3$$

Beattie–Bridgman model
This equation is based on five experimentally determined constants.[4] It is expressed as

$$P = \frac{RT}{v^2}\left(1 - \frac{c}{vT^3}\right)(v + B) - \frac{A}{v^2}$$

where

$$A = A_0\left(1 - \frac{a}{v}\right), \qquad B = B_0\left(1 - \frac{b}{v}\right).$$

This equation is known to be reasonably accurate for densities up to about 0.8 ρ_cr, where ρ_cr is the density of the substance at its critical point. The constants appearing in the above equation are available in the following table when P is in kPa, v is in m³/kmol, T is in K and R = 8.314 kPa·m³/(kmol·K):[5]

Gas                   A0        a         B0       b          c
Air                   131.8441  0.01931   0.04611  -0.001101  4.34×10^4
Argon, Ar             130.7802  0.02328   0.03931  0.0        5.99×10^4
Carbon Dioxide, CO2   507.2836  0.07132   0.10476  0.07235    6.60×10^5
Helium, He            2.1886    0.05984   0.01400  0.0        40
Hydrogen, H2          20.0117   -0.00506  0.02096  -0.04359   504
Nitrogen, N2          136.2315  0.02617   0.05046  -0.00691   4.20×10^4
Oxygen, O2            151.0857  0.02562   0.04624  0.004208   4.80×10^4
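A minimal sketch evaluating the Beattie–Bridgman equation for air with the constants tabulated above (units as stated: P in kPa, v in m³/kmol, T in K); the test state is chosen only for illustration.

```python
# Beattie-Bridgman pressure for air, using the constants from the table.
R = 8.314                      # kPa m^3 / (kmol K)
A0, a = 131.8441, 0.01931
B0, b = 0.04611, -0.001101
c = 4.34e4

def p_bb(T, v):
    A = A0 * (1 - a / v)
    B = B0 * (1 - b / v)
    return (R * T / v**2) * (1 - c / (v * T**3)) * (v + B) - A / v**2

# At 300 K and v = 24 m^3/kmol the result stays close to the
# ideal-gas value R*T/v ~ 103.9 kPa, as expected at low density.
print(p_bb(300.0, 24.0))
```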

Benedict–Webb–Rubin model
The BWR equation, sometimes referred to as the BWRS equation, is

$$P = RTd + d^2\left(BRT - A - \frac{C}{T^2}\right) + d^3\left(bRT - a\right) + a\alpha d^6 + \frac{c\,d^3}{T^2}\left(1 + \gamma d^2\right)e^{-\gamma d^2}$$

where d is the molar density and where a, b, c, A, B, C, α, and γ are empirical constants. Note that the γ constant is a derivative of constant α and therefore almost identical to 1.

References
[1] D. Berthelot in Travaux et Mémoires du Bureau international des Poids et Mesures, Tome XIII (Paris: Gauthier-Villars, 1907)
[2] C. Dieterici, Ann. Phys. Chem. Wiedemanns Ann. 69, 685 (1899)
[3] A. Wohl, "Investigation of the condition equation", Zeitschrift für Physikalische Chemie (Leipzig) 87, pp. 1-39 (1914)
[4] Yunus A. Cengel and Michael A. Boles, Thermodynamics: An Engineering Approach, 7th Edition, McGraw-Hill, 2010, ISBN 007-352932-X
[5] Gordan J. Van Wylen and Richard E. Sonntag, Fundamentals of Classical Thermodynamics, 3rd ed., New York, John Wiley & Sons, 1986, p. 46, table 3.3

Dilip Kondepudi, Ilya Prigogine, Modern Thermodynamics, John Wiley & Sons, 1998, ISBN 0-471-97393-9
Hsieh, Jui Sheng, Engineering Thermodynamics, Prentice-Hall Inc., Englewood Cliffs, New Jersey 07632, 1993. ISBN 0-13-275702-8

Stanley M. Walas, Phase Equilibria in Chemical Engineering, Butterworth Publishers, 1985. ISBN 0-409-95162-5
M. Aznar and A. Silva Telles, "A Data Bank of Parameters for the Attractive Coefficient of the Peng-Robinson Equation of State", Braz. J. Chem. Eng. vol. 14 no. 1, São Paulo, Mar. 1997, ISSN 0104-6632
An introduction to thermodynamics by Y. V. C. Rao (http://books.google.com/books?id=iYWiCXziWsEC)
The corresponding-states principle and its practice: thermodynamic, transport and surface properties of fluids by Hong Wei Xiang (http://books.google.gr/books?id=DWRkfjIFdOIC)


External links
http://www.ccl.net/cca/documents/dyoung/topics-orig/eq_state.html


Chapter 5. System Processes


Isobaric process

An isobaric process is a thermodynamic process in which the pressure stays constant: ΔP = 0. The term derives from the Greek iso- (equal) and baros (weight). The heat transferred to the system does work but also changes the internal energy of the system:

According to the first law of thermodynamics,

$$Q = \Delta U + W$$

where W is work done by the system, U is internal energy, and Q is heat. Pressure-volume work by the closed system is defined as:

$$W = \int p\,dV$$

where Δ means change over the whole process, whereas d denotes a differential. Since pressure is constant, this means that

$$W = p\,\Delta V.$$

Applying the ideal gas law, this becomes

$$W = nR\,\Delta T,$$

assuming that the quantity of gas stays constant, e.g., there is no phase transition during a chemical reaction. According to the equipartition theorem, the change in internal energy is related to the temperature of the system by

$$\Delta U = n\,c_V\,\Delta T,$$

The yellow area represents the work done

where $c_V$ is the molar heat capacity at a constant volume.

Substituting the last two equations into the first equation produces:

$$Q = n\,c_V\,\Delta T + nR\,\Delta T = n\,c_P\,\Delta T,$$

where $c_P$ is the molar heat capacity at a constant pressure.
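A minimal numeric sketch of this bookkeeping for a diatomic ideal gas (γ = 7/5 assumed), confirming that the heat input splits into boundary work plus the internal energy change:

```python
# Isobaric heating of a diatomic ideal gas: Q = n*cP*dT splits into
# boundary work W = n*R*dT and internal energy change dU = n*cV*dT.
R = 8.314
gamma = 7 / 5                  # diatomic gas (assumed)
cV = R / (gamma - 1)           # 5/2 R
cP = gamma * R / (gamma - 1)   # 7/2 R

n, dT = 1.0, 100.0             # 1 mol heated by 100 K at constant pressure
Q = n * cP * dT                # heat added
W = n * R * dT                 # work done by the gas on the environment
dU = n * cV * dT               # rise in internal energy
print(f"Q = {Q:.1f} J = W + dU = {W + dU:.1f} J")
```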

Specific heat capacity


To find the molar specific heat capacity of the gas involved, the following equations apply for any general gas that is calorically perfect. The property $\gamma$ is either called the adiabatic index or the heat capacity ratio. Some published sources might use k instead of $\gamma$.

Molar isochoric specific heat:

$$c_V = \frac{R}{\gamma - 1}.$$

Molar isobaric specific heat:

$$c_P = \frac{\gamma R}{\gamma - 1}.$$

The values for $\gamma$ are $\gamma = 7/5$ for diatomic gases like air and its major components, and $\gamma = 5/3$ for monatomic gases like the noble gases. The formulas for specific heats would reduce in these special cases:

Monatomic: $c_V = \tfrac{3}{2}R$ and $c_P = \tfrac{5}{2}R$
Diatomic: $c_V = \tfrac{5}{2}R$ and $c_P = \tfrac{7}{2}R$

An isobaric process is shown on a P-V diagram as a straight horizontal line, connecting the initial and final thermostatic states. If the process moves towards the right, then it is an expansion. If the process moves towards the left, then it is a compression.

Sign convention for work


The motivation for the specific sign conventions of thermodynamics comes from early development of heat engines. When designing a heat engine, the goal is to have the system produce and deliver work output. The source of energy in a heat engine is a heat input. If the volume compresses (ΔV = final volume - initial volume < 0), then W < 0. That is, during isobaric compression the gas does negative work, or the environment does positive work. Restated, the environment does positive work on the gas. If the volume expands (ΔV = final volume - initial volume > 0), then W > 0. That is, during isobaric expansion the gas does positive work, or equivalently, the environment does negative work. Restated, the gas does positive work on the environment. If heat is added to the system, then Q > 0. That is, during isobaric expansion/heating, positive heat is added to the gas, or equivalently, the environment receives negative heat. Restated, the gas receives positive heat from the

environment. If the system rejects heat, then Q < 0. That is, during isobaric compression/cooling, negative heat is added to the gas, or equivalently, the environment receives positive heat. Restated, the environment receives positive heat from the gas.


Defining enthalpy
An isochoric process is described by the equation $Q = \Delta U$. It would be convenient to have a similar equation for isobaric processes. Substituting the second equation into the first yields

$$Q = \Delta U + \Delta(pV) = \Delta(U + pV).$$

The quantity U + pV is a state function so that it can be given a name. It is called enthalpy, and is denoted as H. Therefore an isobaric process can be more succinctly described as

$$Q = \Delta H.$$

Enthalpy and isobaric specific heat capacity are very useful mathematical constructs, since when analyzing a process in an open system, the situation of zero work occurs when the fluid flows at constant pressure. In an open system, enthalpy is the quantity which is useful to use to keep track of energy content of the fluid.

Variable density viewpoint


A given quantity (mass m) of gas in a changing volume produces a change in density ρ. In this context the ideal gas law is written

$$R\,\rho\,T = M\,P$$

where T is thermodynamic temperature and M is molar mass. When R and M are taken as constant, then pressure P can stay constant as the density-temperature quadrant (ρ, T) undergoes a squeeze mapping.[1]

References
[1] Peter Olver (1999), Classical Invariant Theory, p. 217


Isochoric process
An isochoric process, also called a constant-volume process, an isovolumetric process, or an isometric process, is a thermodynamic process during which the volume of the closed system undergoing such a process remains constant. An isochoric process is exemplified by the heating or the cooling of the contents of a sealed, inelastic container: The thermodynamic process is the addition or removal of heat; the isolation of the contents of the container establishes the closed system; and the inability of the container to deform imposes the constant-volume condition.


Formalism
An isochoric thermodynamic process is characterized by constant volume, i.e., $\Delta V = 0$. The process does no pressure-volume work, since such work is defined by

$$W = P\,\Delta V,$$

where P is pressure. The sign convention is such that positive work is performed by the system on the environment.

For a reversible process, the first law of thermodynamics gives the change in the system's internal energy:

$$dU = \delta Q - \delta W.$$

Replacing work with a change in volume gives

$$dU = \delta Q - P\,dV.$$

Since the process is isochoric, $dV = 0$, the previous equation now gives

$$dU = \delta Q.$$

Using the definition of specific heat capacity at constant volume, $c_V = \frac{1}{m}\frac{\delta Q}{dT}$,

$$\delta Q = m\,c_V\,dT.$$

Integrating both sides yields

$$Q = m\,c_V\int_{T_1}^{T_2} dT,$$

where $c_V$ is the specific heat capacity at constant volume, $T_1$ is the initial temperature and $T_2$ is the final temperature. We conclude with:

$$Q = m\,c_V\,(T_2 - T_1).$$

On a pressure volume diagram, an isochoric process appears as a straight vertical line. Its thermodynamic conjugate, an isobaric process would appear as a straight horizontal line.

Ideal gas
If an ideal gas is used in an isochoric process, and the quantity of gas stays constant, then the increase in energy is proportional to an increase in temperature and pressure. Take for example a gas heated in a rigid container: the pressure and temperature of the gas will increase, but the volume will remain the same.
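A minimal sketch of such rigid-container heating, using molar quantities for convenience and assuming a monatomic ideal gas with illustrative initial conditions; the pressure tracks the temperature while all heat goes into internal energy:

```python
# Rigid (isochoric) heating: with V and n fixed, P scales with T,
# and the heat required is Q = n * cV * dT (no boundary work is done).
R = 8.314
cV = 1.5 * R                      # monatomic ideal gas (assumed)

n, T1, P1 = 1.0, 300.0, 1.0e5     # initial state (illustrative)
T2 = 450.0
P2 = P1 * T2 / T1                 # pressure tracks temperature at fixed V
Q = n * cV * (T2 - T1)            # all heat goes into internal energy
print(P2, Q)                      # 1.5e5 Pa, ~1871 J
```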

Ideal Otto cycle


The ideal Otto cycle is an example of an isochoric process when it is assumed that the burning of the gasoline-air mixture in an internal combustion engine car is instantaneous. There is an increase in the temperature and the pressure of the gas inside the cylinder while the volume remains the same.

[Figure: Isochoric process in the pressure-volume diagram. In this diagram, pressure increases, but volume remains constant.]

Etymology
The noun isochor and the adjective isochoric are derived from the Greek words ἴσος (isos) meaning "equal", and χώρος (choros) meaning "space."

External links

http://lorien.ncl.ac.uk/ming/webnotes/Therm1/revers/isocho.htm


Isothermal process

An isothermal process is a change of a system, in which the temperature remains constant: ΔT = 0. This typically occurs when a system is in contact with an outside thermal reservoir (heat bath), and the change occurs slowly enough to allow the system to continually adjust to the temperature of the reservoir through heat exchange. In contrast, an adiabatic process is where a system exchanges no heat with its surroundings (Q = 0). In other words, in an isothermal process, the value ΔT = 0 but Q ≠ 0, while in an adiabatic process, ΔT ≠ 0 but Q = 0.

Details for an ideal gas


For the special case of a gas to which Boyle's law applies, the product pV is a constant if the gas is kept at isothermal conditions. The value of the constant is nRT, where n is the number of moles of gas present and R is the ideal gas constant. In other words, the ideal gas law pV = nRT applies. This means that

$$p = \frac{nRT}{V} = \frac{\text{constant}}{V}$$

holds. The family of curves generated by this equation is shown in the graph presented at the bottom right-hand of the page. Each curve is called an isotherm. Such graphs are termed indicator diagrams and were first used by James Watt and others to monitor the efficiency of engines. The temperature corresponding to each curve in the figure increases from the lower left to the upper right.

Several isotherms of an ideal gas on a p-V diagram


Calculation of work
In thermodynamics, the work involved when a gas changes from state A to state B is simply

$$W_{A\to B} = \int_{V_A}^{V_B} P\,dV.$$

For an isothermal, reversible process, this integral equals the area under the relevant pressure-volume isotherm, and is indicated in purple in the figure (at the bottom right-hand of the page) for an ideal gas. Again, P = nRT/V applies and with T being constant (as this is an isothermal process), we have:

$$W_{A\to B} = \int_{V_A}^{V_B} \frac{nRT}{V}\,dV = nRT\ln\frac{V_B}{V_A}.$$

[Figure: The purple area represents "work" for this isothermal change.]

By convention, work is defined as the work the system does on its environment. If, for example, the system expands by a piston moving in the direction of force applied by the internal pressure of a gas, then the work is counted as positive, and as this work is done by using internal energy of the system, the result is that the internal energy decreases. Conversely, if the environment does work on the system so that its internal energy increases, the work is counted as negative. It is also worth noting that, for many systems, if the temperature is held constant, the internal energy of the system also is constant, and so $\Delta U = 0$. From the First Law of Thermodynamics, $Q = \Delta U + W$, so it follows that $Q = W$ for this same isothermal process. In a free expansion, by contrast, the external pressure is zero, so no work is done by the gas; and since the temperature, and hence the internal energy, of an ideal gas is unchanged, no heat flows either.
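A minimal sketch evaluating the work integral in closed form, W = nRT ln(V_B/V_A), for an illustrative doubling of volume:

```python
import math

# Reversible isothermal work done by an ideal gas: W = n*R*T*ln(V_B/V_A).
R = 8.314

def isothermal_work(n, T, V_A, V_B):
    return n * R * T * math.log(V_B / V_A)

# Doubling the volume of 1 mol at 300 K:
W = isothermal_work(1.0, 300.0, 1.0, 2.0)
print(f"W = {W:.0f} J")   # ~1729 J done by the gas; here Q = W and dU = 0
```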

Applications
Isothermal processes can occur in any kind of system, including highly structured machines, and even living cells. Various parts of the cycles of some heat engines are carried out isothermally and may be approximated by a Carnot cycle. Phase changes, such as melting or evaporation, are also isothermal processes.

In an isothermal non-flow process, the work done by compressing the perfect gas (pure substance) is negative work, as work is done on the system; as a result of the compression, the volume will decrease and the temperature will try to increase. To maintain the temperature at a constant value (as the process is isothermal), heat energy has to leave the system and enter the environment. The amount of heat entering the environment is equal to the work done (by compressing the perfect gas) because the internal energy does not change. Under the thermodynamic sign convention, heat leaving the system is negative, so Q = W, with both negative for a compression. In the equation of work, the term nRT can be replaced by PV of any state of an ideal gas. The product of pressure and volume is, in fact, 'moving boundary work': the system's boundaries are compressed. For expansion the same theory is applied.

As per Joule's law for the perfect gas, internal energy is a function of absolute temperature. In an isothermal process the temperature is constant. Hence, the internal energy is constant, and the net change in internal energy is zero. Within the perfect, or ideal, gas there are no inter-molecular forces and the gas particles are infinitesimal. However, for a real pure substance there is a component of internal energy corresponding to the energy used in overcoming inter-molecular forces. In an isothermal process, when the volume of the gas changes, the average distance between the molecules changes as well. So if a real pure gas undergoes an isothermal process, there is a net change in internal energy consistent with this component of internal energy.[1]


Notes
[1] Adkins 1983, p. 121.

References
Adkins, C. J. (1983). Equilibrium Thermodynamics. Cambridge University Press.

Adiabatic process

An adiabatic process (from the Greek privative ἀ- + διαβατός, "passable") is a process that occurs without the transfer of heat or matter between a system and its surroundings.[1][2] A key concept in thermodynamics, adiabatic transfer provides a rigorous conceptual basis for the theory used to expound the first law of thermodynamics. It is also key in a practical sense: many rapid chemical and physical processes are described using the adiabatic approximation; such processes are usually followed or preceded by events that do involve heat transfer. Adiabatic processes are primarily and exactly defined for a system contained by walls that are completely thermally insulating and impermeable to matter; such walls are said to be adiabatic. An adiabatic transfer is a transfer of energy as work across an adiabatic wall or sector of a boundary. Approximately, a transfer may be regarded as adiabatic if it happens in an extremely short time, so that there is no opportunity for significant heat exchange.[3] The adiabatic flame temperature is a virtual quantity: it is the temperature that would be achieved by a flame in the absence of heat loss to the surroundings.


Etymology
The term adiabatic literally means 'not to be passed'. It is formed from the privative ἀ- ("not") + διαβατός, "able to be passed through", in turn deriving from διά ("through"), and βαῖνειν ("to pass"), thus ἀδιάβατος.[4] According to Maxwell, the term was introduced by Rankine.[5] The etymological origin corresponds here to an impossibility of transfer of energy as heat and of transfer of matter across the wall.

Description
An adiabatic transfer of energy as work may be described by the notation Q = 0, where Q is the quantity of energy transferred as heat across the adiabatic boundary or wall. An ideal or fictive adiabatic transfer of energy as work that occurs without friction or viscous dissipation within the system is said to be isentropic, with ΔS = 0.

For a natural process of transfer of energy as heat, driven by a finite temperature difference, entropy is both transferred with the heat and generated within the system. Such a process is in general neither adiabatic nor isentropic, having Q ≠ 0 and ΔS ≠ 0.

For a general fictive quasi-static transfer of energy as heat, driven by an ideally infinitesimal temperature difference, the second law of thermodynamics provides that δQ = T d_eS, where δQ denotes an infinitesimal element of transfer of energy as heat into the system from its surroundings, T denotes the practically common temperature of system and surroundings at which the transfer takes place, and d_eS denotes the infinitesimal element of entropy transferred into the system from the surroundings with the heat transfer. For an adiabatic fictive quasi-static process, δQ = 0 and d_eS = 0.

For a natural process of transfer of energy as heat, driven by a finite temperature difference, there is generation of entropy within the system, in addition to entropy that is transferred into the system from the surroundings. If the process is fairly slow, so that it can be described near enough by differentials, the second law of thermodynamics observes that δQ < T dS. Here T denotes the temperature of the system to which heat is transferred. Entropy d_iS is thereby generated internally within the system, in addition to the entropy d_eS transferred with the heat. Thus the total entropy increment within the system is given by dS = d_iS + d_eS.[6] A natural adiabatic process is irreversible and is not isentropic.

Adiabatic transfer of energy as work can be analyzed into two extreme component kinds. One extreme kind is without friction or viscous dissipation within the system, and this is usually pressure-volume work, denoted customarily by P dV. This is an ideal case that does not exactly occur in nature. It may be regarded as "reversible". The other extreme kind is isochoric work, for which dV = 0, solely through friction or viscous dissipation within the system. Isochoric work is irreversible.[7] The second law of thermodynamics observes that a natural process of transfer of energy as work, exactly considered, always consists at least of isochoric work and often of both of these extreme kinds of work. Every natural process, exactly considered, is irreversible, however slight may be the friction or viscosity.

Adiabatic heating and cooling


Adiabatic changes in temperature occur due to changes in pressure of a gas while not adding or subtracting any heat. In contrast, free expansion is an isothermal process for an ideal gas. Adiabatic heating occurs when the pressure of a gas is increased from work done on it by its surroundings, e.g., a piston compressing a gas contained within an adiabatic cylinder. This finds practical application in Diesel engines which rely on the lack of quick heat dissipation during their compression stroke to elevate the fuel vapor temperature sufficiently to ignite it.

Adiabatic heating also occurs in the Earth's atmosphere when an air mass descends, for example, in a katabatic wind or Foehn or chinook wind flowing downhill over a mountain range. When a parcel of air descends, the pressure on the parcel increases. Due to this increase in pressure, the parcel's volume decreases and its temperature increases, thus increasing the internal energy.

Adiabatic cooling occurs when the pressure of a substance is decreased as it does work on its surroundings. Adiabatic cooling occurs in the Earth's atmosphere with orographic lifting and lee waves, and this can form pileus or lenticular clouds if the air is cooled below the dew point. When the pressure applied on a parcel of air decreases, the air in the parcel is allowed to expand; as the volume increases, the temperature falls and internal energy decreases.

Adiabatic cooling does not have to involve a fluid. One technique used to reach very low temperatures (thousandths and even millionths of a degree above absolute zero) is adiabatic demagnetisation, where the change in magnetic field on a magnetic material is used to provide adiabatic cooling. Also, the contents of an expanding universe (to first order) can be described as an adiabatically cooling fluid. (See: Heat death of the universe.) Rising magma also undergoes adiabatic cooling before eruption, particularly significant in the case of magmas that rise quickly from great depths such as kimberlites.

Such temperature changes can be quantified using the ideal gas law, or the hydrostatic equation for atmospheric processes. In practice, no process is truly adiabatic. Many processes rely on a large difference in time scales of the process of interest and the rate of heat dissipation across a system boundary, and thus are approximated by using an adiabatic assumption. There is always some heat loss, as no perfect insulators exist.

Ideal gas (reversible process)


The mathematical equation for an ideal gas undergoing a reversible (i.e., no entropy generation) adiabatic process is

$$P V^{\gamma} = \text{constant}$$

where P is pressure, V is volume, and

$$\gamma = \frac{C_P}{C_V} = \frac{f + 2}{f},$$

$C_P$ being the specific heat for constant pressure, $C_V$ being the specific heat for constant volume, $\gamma$ the adiabatic index, and f the number of degrees of freedom (3 for monatomic gas, 5 for diatomic gas and collinear molecules e.g. carbon dioxide). For a monatomic ideal gas, $\gamma = 5/3$, and for a diatomic gas (such as nitrogen and oxygen, the main components of air) $\gamma = 7/5$.[8] Note that the above formula is only applicable to classical ideal gases and not Bose–Einstein or Fermi gases.

[Figure: For a simple substance, during an adiabatic process in which the volume increases, the internal energy of the working substance must decrease.]

For reversible adiabatic processes, it is also true that

$$P^{1-\gamma}\,T^{\gamma} = \text{constant}$$
$$V\,T^{f/2} = \text{constant}$$


where T is an absolute temperature. This can also be written as

$$T\,V^{\gamma - 1} = \text{constant}.$$

Example of adiabatic compression


Let's now look at a common example of adiabatic compression: the compression stroke in a gasoline engine. We will make a few simplifying assumptions: that the uncompressed volume of the cylinder is 1000 cc (one liter), that the gas within is nearly pure nitrogen (thus a diatomic gas with five degrees of freedom and so γ = 7/5), and that the compression ratio of the engine is 10:1 (that is, the 1000 cc volume of uncompressed gas will compress down to 100 cc when the piston goes from bottom to top). The uncompressed gas is at approximately room temperature and pressure (a warm room temperature of ~27 °C or 300 K, and a pressure of 1 bar ~ 100,000 Pa, or about 14.7 PSI, or typical sea-level atmospheric pressure).

$$P_1 V_1^{\gamma} = 100{,}000\ \text{Pa} \times (1000\ \text{cc})^{7/5} \approx 1.58 \times 10^{9}$$

so our adiabatic constant for this experiment is about 1.58 billion. The gas is now compressed to a 100 cc volume (we will assume this happens quickly enough that no heat can enter or leave the gas). The new volume is 100 cc, but the constant for this experiment is still 1.58 billion:

$$P_2 \times (100\ \text{cc})^{7/5} = 1.58 \times 10^{9}$$

so solving for P:

$$P_2 = \frac{1.58 \times 10^{9}}{(100)^{7/5}} \approx 2.50 \times 10^{6}\ \text{Pa}$$

or about 362 PSI or 24.5 atm. Note that this pressure increase is more than a simple 10:1 compression ratio would indicate; this is because the gas is not only compressed, but the work done to compress the gas has also heated the gas and the hotter gas will have a greater pressure even if the volume had not changed.

We can solve for the temperature of the compressed gas in the engine cylinder as well, using the ideal gas law. Our initial conditions are 100,000 Pa of pressure, 1000 cc volume, and 300 K of temperature, so our experimental constant is:

$$\frac{P_1 V_1}{T_1} = \frac{100{,}000 \times 1000}{300} \approx 3.33 \times 10^{5}$$

We know the compressed gas has V = 100 cc and P = 2.50×10⁶ Pa, so we can solve for temperature by simple algebra:

$$T_2 = \frac{P_2 V_2}{P_1 V_1 / T_1} = \frac{2.50 \times 10^{6} \times 100}{3.33 \times 10^{5}} \approx 751\ \text{K}$$

That's a final temperature of 751 K, or 477 °C, or 892 °F, well above the ignition point of many fuels. This is why a high-compression engine requires fuels specially formulated to not self-ignite (which would cause engine knocking when operated under these conditions of temperature and pressure), or why a supercharger combined with an intercooler, providing a lower temperature at the same pressure, would be advantageous. A diesel engine operates under even more extreme conditions, with compression ratios of 20:1 or more being typical, in order to provide a very high gas temperature which ensures immediate ignition of injected fuel.
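The worked example above can be checked in a few lines; this sketch reproduces the arithmetic directly from P V^γ = constant and the ideal gas law (the small differences from the rounded figures in the text are expected).

```python
# Numerical check of the worked example: 10:1 adiabatic compression
# of a diatomic gas (gamma = 7/5) starting from 1 bar and 300 K.
gamma = 7 / 5
P1, V1, T1 = 1.0e5, 1000.0, 300.0    # Pa, cc, K
V2 = 100.0

P2 = P1 * (V1 / V2) ** gamma          # P * V**gamma is conserved
T2 = T1 * (P2 * V2) / (P1 * V1)       # ideal gas law: P*V/T is conserved
print(f"P2 = {P2:.3g} Pa, T2 = {T2:.0f} K")
# ~2.51e6 Pa and ~754 K; the text's 2.50e6 Pa and 751 K reflect
# rounding of the intermediate adiabatic constant.
```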


Adiabatic free expansion of a gas


For an adiabatic free expansion of an ideal gas, the gas is contained in an insulated container and then allowed to expand in a vacuum. Because there is no external pressure for the gas to expand against, the work done by or on the system is zero. Since this process does not involve any heat transfer or work, the First Law of Thermodynamics then implies that the net internal energy change of the system is zero. For an ideal gas, the temperature remains constant because the internal energy only depends on temperature in that case. Since at constant temperature, the entropy is proportional to the volume, the entropy increases in this case, therefore this process is irreversible.

Derivation of continuous formula for adiabatic heating and cooling


The definition of an adiabatic process is that heat transfer to the system is zero, $\delta Q = 0$. Then, according to the first law of thermodynamics,

$$\text{(1)}\qquad dU + \delta W = \delta Q = 0,$$

where dU is the change in the internal energy of the system and δW is work done by the system. Any work (δW) done must be done at the expense of internal energy U, since no heat δQ is being supplied from the surroundings. Pressure-volume work δW done by the system is defined as

$$\text{(2)}\qquad \delta W = P\,dV.$$

However, P does not remain constant during an adiabatic process but instead changes along with V. It is desired to know how the values of dP and dV relate to each other as the adiabatic process proceeds. For an ideal gas the internal energy is given by

$$\text{(3)}\qquad U = \alpha n R T,$$

where α is the number of degrees of freedom divided by two, R is the universal gas constant and n is the number of moles in the system (a constant).

Differentiating Equation (3) and use of the ideal gas law, $PV = nRT$, yields

$$\text{(4)}\qquad dU = \alpha n R\,dT = \alpha\,d(PV) = \alpha(P\,dV + V\,dP).$$

Equation (4) is often expressed as $dU = n C_V\,dT$ because $C_V = \alpha R$.

Now substitute equations (2) and (4) into equation (1) to obtain

$$-P\,dV = \alpha P\,dV + \alpha V\,dP,$$

factorize $-P\,dV$:

$$-(\alpha + 1)\,P\,dV = \alpha V\,dP,$$

and divide both sides by PV:

$$-(\alpha + 1)\,\frac{dV}{V} = \alpha\,\frac{dP}{P}.$$

After integrating the left and right sides from $V_0$ to V and from $P_0$ to P and changing the sides respectively,

$$\ln\frac{P}{P_0} = -\frac{\alpha + 1}{\alpha}\,\ln\frac{V}{V_0}.$$

Exponentiate both sides, and substitute $(\alpha + 1)/\alpha$ with $\gamma$, the heat capacity ratio

$$\frac{P}{P_0} = \left(\frac{V}{V_0}\right)^{-\gamma},$$

and eliminate the negative sign to obtain

$$\frac{P}{P_0} = \left(\frac{V_0}{V}\right)^{\gamma}.$$

Therefore,

$$\frac{P}{P_0}\left(\frac{V}{V_0}\right)^{\gamma} = 1$$

and

$$P V^{\gamma} = P_0 V_0^{\gamma} = \text{constant}.$$

Derivation of discrete formula


The change in internal energy of a system, measured from state 1 to state 2, is equal to

$$\text{(1)}\qquad \Delta U = \alpha n R (T_2 - T_1) = \alpha (P_2 V_2 - P_1 V_1).$$

At the same time, the work done by the pressure-volume changes as a result from this process, is equal to

$$\text{(2)}\qquad W = \int_{V_1}^{V_2} P\,dV.$$

Since we require the process to be adiabatic, the following equation needs to be true

$$\text{(3)}\qquad \Delta U + W = 0.$$

By the previous derivation,

$$\text{(4)}\qquad P V^{\gamma} = \text{constant} = P_1 V_1^{\gamma}.$$

Rearranging (4) gives

$$P = P_1 \left(\frac{V_1}{V}\right)^{\gamma}.$$

Substituting this into (2) gives

$$W = \int_{V_1}^{V_2} P_1 \left(\frac{V_1}{V}\right)^{\gamma} dV.$$

Integrating,

$$W = P_1 V_1^{\gamma}\,\frac{V_2^{1-\gamma} - V_1^{1-\gamma}}{1 - \gamma}.$$

Substituting $\gamma = (\alpha + 1)/\alpha$, so that $1 - \gamma = -1/\alpha$,

$$W = -\alpha P_1 V_1^{\gamma}\left(V_2^{-1/\alpha} - V_1^{-1/\alpha}\right).$$

Rearranging,

$$W = -\alpha P_1 V_1\left[\left(\frac{V_2}{V_1}\right)^{-1/\alpha} - 1\right].$$

Using the ideal gas law and assuming a constant molar quantity (as often happens in practical cases),

$$W = -\alpha n R T_1\left[\left(\frac{V_2}{V_1}\right)^{-1/\alpha} - 1\right].$$

By the continuous formula,

$$\frac{P_2}{P_1} = \left(\frac{V_2}{V_1}\right)^{-\gamma},$$

or

$$\frac{V_2}{V_1} = \left(\frac{P_2}{P_1}\right)^{-1/\gamma}.$$

Substituting into the previous expression for W,

$$W = -\alpha n R T_1\left[\left(\frac{P_2}{P_1}\right)^{(\gamma - 1)/\gamma} - 1\right].$$

Substituting this expression and (1) in (3) gives

$$\alpha n R (T_2 - T_1) = \alpha n R T_1\left[\left(\frac{P_2}{P_1}\right)^{(\gamma - 1)/\gamma} - 1\right].$$

Simplifying,

$$T_2 = T_1 \left(\frac{P_2}{P_1}\right)^{(\gamma - 1)/\gamma}.$$

Graphing adiabats
An adiabat is a curve of constant entropy on the P-V diagram. Properties of adiabats on a P-V diagram are:

1. Every adiabat asymptotically approaches both the V axis and the P axis (just like isotherms).
2. Each adiabat intersects each isotherm exactly once.
3. An adiabat looks similar to an isotherm, except that during an expansion, an adiabat loses more pressure than an isotherm, so it has a steeper inclination (more vertical).
4. If isotherms are concave towards the "north-east" direction (45°), then adiabats are concave towards the "east north-east" (31°).
5. If adiabats and isotherms are graphed severally at regular changes of entropy and temperature, respectively (like altitude on a contour map), then as the eye moves towards the axes (towards the south-west), it sees the density of isotherms stay constant, but it sees the density of adiabats grow. The exception is very near absolute zero, where the density of adiabats drops sharply and they become rare (see Nernst's theorem).

The following diagram is a P-V diagram with a superposition of adiabats and isotherms:


The isotherms are the red curves and the adiabats are the black curves. The adiabats are isentropic. Volume is the horizontal axis and pressure is the vertical axis.

References
[1] Carathéodory, C. (1909). Untersuchungen über die Grundlagen der Thermodynamik, Mathematische Annalen, 67: 355–386. A translation may be found here (http://neo-classical-physics.info/uploads/3/0/6/5/3065888/caratheodory_-_thermodynamics.pdf). Also a mostly reliable translation is to be found (http://books.google.com.au/books?id=xwBRAAAAMAAJ) at Kestin, J. (1976). The Second Law of Thermodynamics, Dowden, Hutchinson & Ross, Stroudsburg PA.
[2] Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York, ISBN 0-88318-797-3, p. 21.
[3] http://buphy.bu.edu/~duffy/semester1/c27_process_adiabatic_sim.html
[4] Liddell, H.G., Scott, R. (1940). A Greek-English Lexicon, Clarendon Press, Oxford UK.
[5] Rankine, W.J.McQ. (1866). On the theory of explosive gas engines, The Engineer, July 27, 1866; at page 467 of the reprint in Miscellaneous Scientific Papers (https://archive.org/details/miscellaneoussci00rank), edited by W.J. Millar, 1881, Charles Griffin, London.
[6] Kondepudi, D., Prigogine, I. (1998). Modern Thermodynamics: From Heat Engines to Dissipative Structures, John Wiley & Sons, Chichester, ISBN 0471973939, p. 88.
[7] Münster, A. (1970), Classical Thermodynamics, translated by E.S. Halberstadt, Wiley-Interscience, London, ISBN 0-471-62430-6, p. 45.
[8] Adiabatic Processes (http://hyperphysics.phy-astr.gsu.edu/hbase/thermo/adiab.html)

Silbey, Robert J.; et al. (2004). Physical chemistry. Hoboken: Wiley. p.55. ISBN978-0-471-21504-2. Broholm, Collin. "Adiabatic free expansion." Physics & Astronomy @ Johns Hopkins University. N.p., 26 Nov. 1997. Web. 14 Apr. *Nave, Carl Rod. "Adiabatic Processes." HyperPhysics. N.p., n.d. Web. 14 Apr. 2011. (http:/ /hyperphysics.phy-astr.gsu.edu/hbase/thermo/adiab.html). Thorngren, Dr. Jane R.. "Adiabatic Processes." Daphne A Palomar College Web Server. N.p., 21 July 1995. Web. 14 Apr. 2011. (http://daphne.palomar.edu/jthorngren/adiabatic_processes.htm).


External links
Article in HyperPhysics Encyclopaedia (http://hyperphysics.phy-astr.gsu.edu/hbase/thermo/adiab.html#c1:)

Polytropic process

A polytropic process is a thermodynamic process that obeys the relation:

p v^n = C

where p is the pressure, v is specific volume, n (the polytropic index) is any real number, and C is a constant. This equation can be used to accurately characterize processes of certain systems, notably the compression or expansion (including with heat transfer) of a gas and, in some cases, liquids and solids.

Derivation
The following derivation is adapted from Christians.[1] Consider a gas in a closed system undergoing an internally reversible process with negligible changes in kinetic and potential energy. The First Law of Thermodynamics is

δq − δw = du

Define the energy transfer ratio, K, as δq/δw. For an internally reversible process the only type of work interaction is moving boundary work, given by P dv. Also assume the gas is calorically perfect (constant specific heat), so du = cv dT. The First Law can then be written

(K − 1) P dv = cv dT

Polytropic processes behave differently for various values of the polytropic index; particular values of the index reproduce the other basic thermodynamic processes.

Consider the ideal gas equation of state with the well-known compressibility factor, Z: Pv = ZRT. Assume the compressibility factor is constant for the process, and that the gas constant is also fixed (i.e. no chemical reactions are occurring). The Pv = ZRT equation of state can be differentiated to give

P dv + v dP = ZR dT

Based on the well-known specific heat relationship arising from the definition of enthalpy, the term ZR can be replaced by cp − cv. With these observations the First Law becomes

dP/P + [1 + (γ − 1)(1 − K)] dv/v = 0

where γ = cp/cv is the ratio of specific heats. This equation will be important for understanding the basis of the polytropic process equation. Now consider the polytropic process equation itself:

P v^n = C

Taking the natural log of both sides (recognizing that the exponent n is constant for a polytropic process) gives

ln P + n ln v = ln C

which can be differentiated and re-arranged to give

dP/P + n dv/v = 0

By comparing this result to the result obtained from the First Law, it is concluded that the polytropic exponent is constant (and therefore the process is polytropic) when the energy transfer ratio K is constant for the process. In fact the polytropic exponent can be expressed in terms of the energy transfer ratio:

n = 1 + (γ − 1)(1 − K)

This derivation can be expanded to include polytropic processes in open systems, including instances where the kinetic energy (i.e. Mach number) is significant. It can also be expanded to include irreversible polytropic processes (see Ref [1]).
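As a sketch of this result (an editorial illustration under the derivation's own assumptions of a calorically perfect gas and constant K; the function name is mine, not from the reference), the limiting values of K recover the familiar processes:

```python
def polytropic_index(K, gamma=1.4):
    """Polytropic exponent n = 1 + (gamma - 1)*(1 - K) for a calorically
    perfect gas with specific heat ratio gamma and energy transfer ratio K = q/w."""
    return 1.0 + (gamma - 1.0) * (1.0 - K)

print(polytropic_index(K=0.0))  # no heat transfer: n = gamma = 1.4 (isentropic)
print(polytropic_index(K=1.0))  # q = w, so du = 0: n = 1 (isothermal ideal gas)
```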

Applicability
The polytropic process equation is usually applicable for reversible or irreversible processes of ideal or near-ideal gases involving heat transfer and/or work interactions when the energy transfer ratio (q/w) is constant for the process. The equation may not be applicable for processes in an open system if the kinetic energy (i.e. Mach Number) is significant. The polytropic process equation may also be applicable in some cases to processes involving liquids, or even solids.

Polytropic Specific Heat Capacity


It is denoted by cn, and it is equal to

cn = cv (γ − n)/(1 − n),  for n ≠ 1

Relationship to ideal processes


For certain values of the polytropic index, the process will be synonymous with other common processes. Some examples of the effects of varying index values are given in the table.

Polytropic process

137

Variation of polytropic index


n < 0: Negative exponents reflect a process where the amount of heat being added is large compared to the amount of work being done (i.e. the energy transfer ratio K > γ/(γ − 1)). Negative exponents can also be meaningful in some special cases not dominated by thermal interactions, such as in the processes of certain plasmas in astrophysics.[2]

n = 0 (P = constant): Equivalent to an isobaric process (constant pressure).

n = 1 (Pv = constant): Equivalent to an isothermal process (constant temperature).

1 < n < γ: A quasi-adiabatic process, such as in an internal combustion engine during expansion, or in vapor-compression refrigeration during compression. Also a "polytropic compression" process, like gas through a centrifugal compressor where heat loss from the compressor (into the environment) is greater than the heat added to the gas through compression.

n = γ (Pv^γ = constant): γ is the isentropic exponent, yielding an isentropic process (no heat and no entropy transferred). It is also widely referred to as the adiabatic index, yielding an adiabatic process (no heat transferred). However, the term adiabatic does not adequately describe this process, since it only implies no heat transfer;[3] a reversible adiabatic process is an isentropic process.

γ < n < ∞: Normally the polytropic index is greater than the specific heat ratio (gamma) within a "polytropic compression" process, like gas through a centrifugal compressor. Here the inefficiencies of centrifugal compression and the heat added to the gas outweigh the loss of heat into the environment.

n = ∞ (v = constant): Equivalent to an isochoric process (constant volume).

When the index n is between any two of the former values (0, 1, γ, or ∞), the polytropic curve will be bounded by the curves of the two corresponding indices. Note that 1 < γ < ∞, since γ = cp/cv > 1.
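The table can also be summarized programmatically. The helper below is a hypothetical illustration (the names classify and c_n are mine, not from the article); it maps an index to its ideal process and evaluates the polytropic specific heat from the previous section:

```python
import math

def classify(n, gamma=1.4):
    """Name the ideal process for a given polytropic index n (illustrative)."""
    if math.isinf(n):           return "isochoric (constant volume)"
    if n < 0:                   return "heat addition large compared to work"
    if n == 0:                  return "isobaric (constant pressure)"
    if n == 1:                  return "isothermal (constant temperature)"
    if math.isclose(n, gamma):  return "isentropic (reversible adiabatic)"
    if n < gamma:               return "quasi-adiabatic (e.g. engine expansion)"
    return "polytropic compression steeper than an adiabat"

def c_n(n, c_v, gamma=1.4):
    """Polytropic specific heat c_n = c_v*(gamma - n)/(1 - n); diverges at n = 1."""
    return c_v * (gamma - n) / (1.0 - n)

print(classify(1.3))        # quasi-adiabatic
print(c_n(0.0, c_v=717.0))  # n = 0 recovers c_p = gamma*c_v, about 1004 J/(kg*K)
```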

Notation
In the case of an isentropic ideal gas, γ is the ratio of specific heats, known as the adiabatic index or adiabatic exponent. An isothermal ideal gas is also a polytropic gas; here the polytropic index is equal to one, and differs from the adiabatic index γ. In order to discriminate between the two gammas, the polytropic gamma is sometimes capitalized, Γ. To confuse matters further, some authors refer to Γ as the polytropic index, rather than n. Note that in the astrophysical convention (as used for the polytropes of the next section), Γ = 1 + 1/n, where n is that convention's polytropic index.


Other
A solution to the Lane-Emden equation using a polytropic fluid is known as a polytrope.

References
[1] Christians, Joseph, "Approach for Teaching Polytropic Processes Based on the Energy Transfer Ratio," International Journal of Mechanical Engineering Education, Volume 40, Number 1 (January 2012), Manchester University Press.
[2] G. P. Horedt, Polytropes: Applications in Astrophysics and Related Fields (http://books.google.dk/books?id=vXhFjGj5PjIC), Springer, 2004, p. 24.
[3] GPSA book, section 13.


Chapter 6. System Properties


Introduction to entropy

The idea of "irreversibility" is central to the understanding of entropy. Everyone has an intuitive understanding of irreversibility - if one watches a movie of everyday life running forward and one of it running in reverse, it is easy to distinguish between the two. The movie running in reverse shows impossible things happening - water jumping out of a glass into a pitcher above it, smoke going down a chimney, water "unmelting" to form ice in a warm room, crashed cars reassembling themselves, and so on. The intuitive meaning of expressions such as "don't cry over spilled milk" or "you can't take the cream out of the coffee" is that these are irreversible processes. There is a direction in time by which spilled milk does not go back into the glass. In thermodynamics, one says that the "forward" processes - pouring water from a pitcher, smoke going up a chimney, etc. are "irreversible" - they cannot happen in reverse, even though, on a microscopic level, no laws of physics are being violated. All real physical processes involving systems in everyday life, with many atoms or molecules, are irreversible. For an irreversible process in an isolated system, the thermodynamic state variable known as entropy is always increasing. The reason that the movie in reverse is so easily recognized is because it shows processes for which entropy is decreasing, which is physically impossible (or, more correctly, statistically improbable). In everyday life, there may be processes in which the increase of entropy is practically unobservable, almost zero. In these cases, a movie of the process run in reverse will not seem unlikely. For example, in a 1-second video of the collision of two billiard balls, it will be hard to distinguish the forward and the backward case, because the increase of entropy during that time is relatively small. In thermodynamics, one says that this process is practically "reversible", with an entropy increase that is practically zero. The statement of the fact that entropy never decreases is found in the second law of thermodynamics. In a physical system, entropy provides a measure of the amount of thermal energy that cannot be used to do work. In some other definitions of entropy, it is a measure of how evenly energy (or some analogous property) is distributed in a system. Work and heat are determined by a process that a system undergoes, and only occur at the

Introduction to entropy boundary of a system. Entropy is a function of the state of a system, and has a value determined by the state variables of the system. The concept of entropy is central to the second law of thermodynamics. The second law determines which physical processes can occur. For example, it predicts that the flow of heat from a region of high temperature to a region of low temperature is a spontaneous process it can proceed along by itself without needing any extra external energy. When this process occurs, the hot region becomes cooler and the cold region becomes warmer. Heat is distributed more evenly throughout the system and the system's ability to do work has decreased because the temperature difference between the hot region and the cold region has decreased. Referring back to our definition of entropy, we can see that the entropy of this system has increased. Thus, the second law of thermodynamics can be stated to say that the entropy of an isolated system always increases, and such processes which increase entropy can occur spontaneously. Since entropy increases as uniformity increases, the second law says qualitatively that uniformity increases. The term entropy was coined in 1865 by the German physicist Rudolf Clausius, from the Greek words en-, "in", and trope "a turning", in analogy with energy.


Explanation
The concept of thermodynamic entropy arises from the second law of thermodynamics. It uses entropy to quantify the capacity of a system for change, namely that heat flows from a region of higher temperature to one with lower temperature, and to determine whether a thermodynamic process may occur. Entropy is defined by two descriptions, first as a macroscopic relationship between heat flow into a system and the system's change in temperature, and second, on a microscopic level, as the natural logarithm of the number of microstates of a system. Following the formalism of Clausius, the first definition can be mathematically stated as:[1]

dS = δqrev / T

where dS is the change in entropy, qrev is the heat added to the system reversibly (the relation holds only during a reversible process), and T is the absolute temperature. If the temperature is allowed to vary, the equation must be integrated over the temperature path. This definition of entropy does not allow the determination of an absolute value, only of differences. In this context, the second law of thermodynamics may be stated: for heat transferred over any valid process for any system, whether isolated or not,

ΔS ≥ ∫ δq / T

The second definition of entropy comes from statistical mechanics. The entropy of a particular macrostate is defined to be Boltzmann's constant times the natural logarithm of the number of microstates corresponding to that macrostate, or mathematically

S = kB ln Ω

where S is the entropy, kB is Boltzmann's constant, and Ω is the number of microstates. The macrostate of a system is what we know about the system, for example the temperature, pressure, and volume of a gas in a box. For each set of values of temperature, pressure, and volume there are many arrangements of molecules which result in those values. The number of arrangements of molecules which could result in the same values for temperature, pressure and volume is the number of microstates.

The concept of energy is related to the first law of thermodynamics, which deals with the conservation of energy and under which the loss in heat will result in a decrease in the internal energy of the thermodynamic system. Thermodynamic entropy provides a comparative measure of the amount of this decrease in internal energy of the system and the corresponding increase in internal energy of the surroundings at a given temperature. A simple and

more concrete visualization of the second law is that energy of all types changes from being localized to becoming dispersed or spread out, if it is not hindered from doing so. Entropy change is the quantitative measure of that kind of a spontaneous process: how much energy has flowed or how widely it has become spread out at a specific temperature. Entropy has been developed to describe any of several phenomena, depending on the field and the context in which it is being used. Information entropy takes the mathematical concepts of statistical thermodynamics into areas of probability theory unconnected with heat and energy.


Example of increasing entropy


Ice melting provides an example in which entropy increases in a small system, a thermodynamic system consisting of the surroundings (the warm room) and the entity of glass container, ice, and water which has been allowed to reach thermodynamic equilibrium at the melting temperature of ice. In this system, some heat (Q) from the warmer surroundings at 298 K (77 °F, 25 °C) transfers to the cooler system of ice and water at its constant temperature (T) of 273 K (32 °F, 0 °C), the melting temperature of ice. The entropy of the system, which is Q/T, increases by Q/273 K. The heat Q for this process is the energy required to change water from the solid state to the liquid state, and is called the enthalpy of fusion, i.e. ΔH for ice fusion. It is important to realize that the entropy of the surrounding room decreases less than the entropy of the ice and water increases: the room temperature of 298 K is larger than 273 K, and therefore the ratio (entropy change) of Q/298 K for the surroundings is smaller than the ratio (entropy change) of Q/273 K for the ice and water system. This is always true in spontaneous events in a thermodynamic system and it shows the predictive importance of entropy: the final net entropy after such an event is always greater than was the initial entropy. As the temperature of the cool water rises to that of the room and the room further cools imperceptibly, the sum of the Q/T over the continuous range, at many increments, in the initially cool to finally warm water can be found by calculus. The entire miniature universe, i.e. this thermodynamic system, has increased in entropy. Energy has spontaneously become more dispersed and spread out in that universe than when the glass of ice + water was introduced and became a 'system' within it.

Origins and uses


Originally, entropy was named to describe the "waste heat," or more accurately, energy losses, from heat engines and other mechanical devices which could never run with 100% efficiency in converting energy into work. Later, the term came to acquire several additional descriptions, as more was understood about the behavior of molecules on the microscopic level. In the late 19th century, the word "disorder" was used by Ludwig Boltzmann in developing statistical views of entropy using probability theory to describe the increased molecular movement on the microscopic level. That was before quantum behavior came to be better understood by Werner Heisenberg and those who followed. Descriptions of thermodynamic (heat) entropy on the microscopic level are found in statistical thermodynamics and statistical mechanics. For most of the 20th century, textbooks tended to describe entropy as "disorder", following Boltzmann's early conceptualisation of the motional energy of molecules. More recently, there has been a trend in chemistry and physics textbooks to describe entropy as energy dispersal.[2] Entropy can also involve the dispersal of particles,

which are themselves energetic. Thus there are instances where both particles and energy disperse at different rates when substances are mixed together. The mathematics developed in statistical thermodynamics were found to be applicable in other disciplines. In particular, information sciences developed the concept of information entropy where a constant replaces the temperature which is inherent in thermodynamic entropy.


Heat and entropy


At a microscopic level, kinetic energy of molecules is responsible for the temperature of a substance or a system. Heat is the kinetic energy of molecules being transferred: when motional energy is transferred from hotter surroundings to a cooler system, faster-moving molecules in the surroundings collide with the walls of the system, which transfers some of their energy to the molecules of the system and makes them move faster. Molecules in a gas like nitrogen at room temperature, at any instant, are moving at an average speed of nearly 500 miles per hour (210 m/s), repeatedly colliding and therefore exchanging energy so that their individual speeds are always changing. Assuming an ideal-gas model, average kinetic energy increases linearly with temperature, so the average speed increases as the square root of temperature.

Thus motional molecular energy (heat energy) from hotter surroundings, like faster-moving molecules in a flame or violently vibrating iron atoms in a hot plate, will melt or boil a substance (the system) at the temperature of its melting or boiling point. That amount of motional energy from the surroundings that is required for melting or boiling is called the phase-change energy, specifically the enthalpy of fusion or of vaporization, respectively. This phase-change energy breaks bonds between the molecules in the system (not chemical bonds inside the molecules that hold the atoms together) rather than contributing to the motional energy and making the molecules move any faster, so it does not raise the temperature, but instead enables the molecules to break free to move as a liquid or as a vapor. In terms of energy, when a solid becomes a liquid or a liquid a vapor, motional energy coming from the surroundings is changed to potential energy in the substance (phase-change energy, which is released back to the surroundings when the surroundings become cooler than the substance's boiling or melting temperature, respectively).

Phase-change energy increases the entropy of a substance or system because it is energy that must be spread out in the system from the surroundings so that the substance can exist as a liquid or vapor at a temperature above its melting or boiling point. When this process occurs in a 'universe' that consists of the surroundings plus the system, the total energy of the 'universe' becomes more dispersed or spread out, as part of the greater energy that was only in the hotter surroundings transfers so that some is in the cooler system. This energy dispersal increases the entropy of the 'universe'. The important overall principle is that energy of all types changes from being localized to becoming dispersed or spread out, if not hindered from doing so. Entropy (or better, entropy change) is the quantitative measure of that kind of a spontaneous process: how much energy has been transferred/T, or how widely it has become spread out at a specific temperature.

Classical calculation of entropy


When entropy was first defined and used in 1865, the very existence of atoms was still controversial and there was no concept that temperature was due to the motional energy of molecules, or that heat was actually the transferring of that motional molecular energy from one place to another. Entropy change, ΔS, was described in macroscopic terms that could be directly measured, such as volume, temperature, or pressure. However, today the classical equation of entropy, ΔS = qrev/T, can be explained, part by part, in modern terms describing how molecules are responsible for what is happening:

ΔS is the change in entropy of a system (some physical substance of interest) after some motional energy ("heat") has been transferred to it by fast-moving molecules. So, ΔS = qrev/T.

Then, ΔS = qrev/T: the quotient of the motional energy ("heat") q that is transferred "reversibly" (rev) to the system from the surroundings (or from another system in contact with the first system), divided by T, the absolute temperature at which the transfer occurs. Reversible or reversibly (rev) simply means that T, the temperature of the system, has to stay (almost) exactly the same while any energy is being transferred to or from it. That's easy in the case of phase changes, where the system absolutely must stay in the solid or liquid form until enough energy is given to it to break bonds between the molecules before it can change to a liquid or a gas. For example, in the melting of ice at 273.15 K, no matter what temperature the surroundings are, from 273.20 K to 500 K or even higher, the temperature of the ice will stay at 273.15 K until the last molecules in the ice are changed to liquid water, i.e., until all the hydrogen bonds between the water molecules in ice are broken and new, less-exactly fixed hydrogen bonds between liquid water molecules are formed. This amount of energy necessary for ice melting per mole has been found to be 6008 joules at 273 K. Therefore, the entropy change per mole is 6008 J / 273 K, or 22 J/K.

When the temperature isn't at the melting or boiling point of a substance, no intermolecular bond-breaking is possible, and so any motional molecular energy (heat) from the surroundings transferred to a system raises its temperature, making its molecules move faster and faster. As the temperature is constantly rising, there is no longer a particular value of T at which energy is transferred. However, a "reversible" energy transfer can be measured at a very small temperature increase, and a cumulative total can be found by adding each of many small temperature intervals or increments. For example, to find the entropy change from 300 K to 310 K, measure the amount of energy transferred at dozens or hundreds of temperature increments, say from 300.00 K to 300.01 K and then 300.01 K to 300.02 K and so on, dividing the q by each T, and finally adding them all. Calculus can be used to make this calculation easier if the effect of energy input to the system is linearly dependent on the temperature change, as in simple heating of a system at moderate to relatively high temperatures. Thus, the energy being transferred "per incremental change in temperature" (the heat capacity, Cp), multiplied by the integral of dT/T from T1 to T2, is directly given by ΔS = Cp ln(T2/T1).
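The incremental procedure described above is easy to mimic numerically. A minimal sketch, assuming a constant heat capacity (the Cp value is an approximate handbook figure for liquid water, used only for illustration):

```python
import math

Cp = 75.3              # approximate molar heat capacity of liquid water, J/(mol*K)
T1, T2 = 300.0, 310.0  # the temperature interval used in the example above
steps = 10000

# Sum q/T over many small increments, as described in the text.
total, dT, T = 0.0, (T2 - T1) / steps, T1
for _ in range(steps):
    q = Cp * dT                    # heat added during this small increment
    total += q / (T + dT / 2.0)    # divide by the (midpoint) temperature
    T += dT

print(total)                   # about 2.469 J/(mol*K)
print(Cp * math.log(T2 / T1))  # the closed form Cp*ln(T2/T1) gives the same value
```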

Introductory descriptions of entropy


Traditionally, 20th-century textbooks have introduced entropy as order and disorder so that it provides "a measurement of the disorder or randomness of a system". It has been argued that ambiguities in the terms used (such as "disorder" and "chaos") contribute to widespread confusion and can hinder comprehension of entropy for most students. A more recent formulation associated with Frank L. Lambert describes entropy as energy dispersal.[3]

References
[1] I. Klotz, R. Rosenberg, Chemical Thermodynamics: Basic Concepts and Methods, 7th ed., Wiley (2008), p. 125.
[2] Welcome to entropysite (http://entropysite.oxy.edu/).
[3] Welcome to entropysite (http://entropysite.oxy.edu/).

Introduction to entropy

144

Further reading
Goldstein, Martin and Inge F. (1993) The Refrigerator and the Universe: Understanding the Laws of Energy. Harvard Univ. Press. Chpts. 4-12 touch on entropy in some way.

Entropy

In thermodynamics, entropy (usual symbol S) is a measure of the number of specific ways in which a thermodynamic system may be arranged, often taken to be a measure of disorder, or a measure of progressing towards thermodynamic equilibrium. The entropy of an isolated system never decreases, because isolated systems spontaneously evolve towards thermodynamic equilibrium, which is the state of maximum entropy. The change in entropy (ΔS) was originally defined for a thermodynamically reversible process as ΔS = ∫ dQrev/T, which is found from the uniform thermodynamic temperature (T) of a closed system dividing an incremental reversible transfer of heat into that system (dQrev). The above definition is sometimes called the macroscopic definition of entropy because it can be used without regard to any microscopic picture of the contents of a system. In thermodynamics, entropy has been found to be more generally useful and it has several other formulations. Entropy was discovered when it was noticed to be a quantity that behaves as a function of state, as a consequence of the second law of thermodynamics. Entropy is an extensive property, but it is often given as an intensive property of specific entropy, as entropy per unit mass or entropy per mole.

The absolute entropy (S rather than ΔS) was defined later, using either statistical mechanics or the third law of thermodynamics. In the modern microscopic interpretation of entropy in statistical mechanics, entropy is the amount of additional information needed to specify the exact physical state of a system, given its thermodynamic specification. The role of thermodynamic entropy in various thermodynamic processes can thus be understood by understanding how and why that information changes as the system evolves from its initial condition. It is often said that entropy is an expression of the disorder, or randomness, of a system, or of our lack of information about it. The second law is now often seen as an expression of the fundamental postulate of statistical mechanics via the modern definition of entropy.


History
The analysis which led to the concept of entropy began with the work of French mathematician Lazare Carnot, who in his 1803 paper Fundamental Principles of Equilibrium and Movement proposed that in any machine the accelerations and shocks of the moving parts represent losses of moment of activity. In other words, in any natural process there exists an inherent tendency towards the dissipation of useful energy. Building on this work, in 1824 Lazare's son Sadi Carnot published Reflections on the Motive Power of Fire, which posited that in all heat-engines, whenever "caloric", or what is now known as heat, falls through a temperature difference, work or motive power can be produced from the actions of the "fall of caloric" between a hot and cold body. He made the analogy with that of how water falls in a water wheel. This was an early insight into the second law of thermodynamics. Carnot based his views of heat partially on the early 18th-century "Newtonian hypothesis" that both heat and light were types of indestructible forms of matter, which are attracted and repelled by other matter, and partially on the contemporary views of Count Rumford, who showed (1789) that heat could be created by friction, as when cannon bores are machined. Carnot reasoned that if the body of the working substance, such as a body of steam, is returned to its original state at the end of a complete engine cycle, "no change occurs in the condition of the working body".

The first law of thermodynamics, formalized based on the heat-friction experiments of James Joule in 1843, deals with the concept of energy, which is conserved in all processes; the first law, however, is unable to quantify the effects of friction and dissipation.

In the 1850s and 1860s, German physicist Rudolf Clausius (1822-1888), the originator of the concept of entropy, objected to the supposition that no change occurs in the working body, and gave this "change" a mathematical interpretation by questioning the nature of the inherent loss of usable heat when work is done, e.g. heat produced by friction. Clausius described entropy as the transformation-content, i.e. dissipative energy use, of a thermodynamic system or working body of chemical species during a change of state. This was in contrast to earlier views, based on the theories of Isaac Newton, that heat was an indestructible particle that had mass.

Later, scientists such as Ludwig Boltzmann, Josiah Willard Gibbs, and James Clerk Maxwell gave entropy a statistical basis. In 1877 Boltzmann visualized a probabilistic way to measure the entropy of an ensemble of ideal gas particles, in which he defined entropy to be proportional to the logarithm of the number of microstates such a gas could occupy. Henceforth, the essential problem in statistical thermodynamics, i.e. according to Erwin Schrödinger, has been to determine the distribution of a given amount of energy E over N identical systems. Carathéodory linked entropy with a mathematical definition of irreversibility, in terms of trajectories and integrability.


Definitions and descriptions


Any method involving the notion of entropy, the very existence of which depends on the second law of thermodynamics, will doubtless seem to many far-fetched, and may repel beginners as obscure and difficult of comprehension. Willard Gibbs, Graphical Methods in the Thermodynamics of Fluids

There are two related definitions of entropy: the thermodynamic definition and the statistical mechanics definition. Historically, the classical thermodynamics definition developed first, and it has more recently been extended in the area of non-equilibrium thermodynamics. Entropy was defined from a classical thermodynamics viewpoint, in which the details of the system's constituents are not directly considered, with their behavior only showing up in macroscopically averaged properties, e.g. heat capacity. Later, thermodynamic entropy was more generally defined from a statistical thermodynamics viewpoint, in which the detailed constituents modeled at first classically, e.g. Newtonian particles constituting a gas, and later quantum-mechanically (photons, phonons, spins, etc.) were explicitly considered.

Function of state
There are many thermodynamic properties that are functions of state. This means that at a particular thermodynamic state (which should not be confused with the microscopic state of a system), these properties have a certain value. Often, if the values of two properties are fixed, then the state is determined and the other properties' values are set. For instance, an ideal gas at a particular temperature and pressure has a particular volume according to the ideal gas equation. As another instance, a pure substance of single phase at a particular uniform temperature and pressure (and thus a particular state) has not only a particular volume but also a particular entropy.[1] That entropy is a function of state is one reason it is useful. In the Carnot cycle, the working fluid returns to the same state at a particular stage of the cycle, hence the line integral of any state function, such as entropy, over the cycle is zero.

Reversible process
Entropy is defined for a reversible process and for a system that, at all times, can be treated as being at a uniform state and thus at a uniform temperature. Reversibility is an ideal that some real processes approximate and that is often presented in study exercises. For a reversible process, entropy behaves as a conserved quantity and no change occurs in total entropy. More specifically, total entropy is conserved in a reversible process and not conserved in an irreversible process.[2] One has to be careful about system boundaries. For example, in the Carnot cycle, while the heat flow from the hot reservoir to the cold reservoir represents an increase in entropy, the work output, if reversibly and perfectly stored in some energy storage mechanism, represents a decrease in entropy that could be used to operate the heat engine in reverse and return to the previous state; thus the total entropy change is still zero at all times if the entire process is reversible. Any process that does not meet the requirements of a reversible process must be treated as an irreversible process, which is usually a complex task. An irreversible process increases entropy.[3] Heat transfer situations require two or more non-isolated systems in thermal contact. In irreversible heat transfer, heat energy is irreversibly transferred from the higher temperature system to the lower temperature system, and the combined entropy of the systems increases. Each system, by definition, must have its own absolute temperature applicable within all areas in each respective system in order to calculate the entropy transfer. Thus, when a system at higher temperature TH transfers heat dQ to a system of lower temperature TC, the former loses entropy dQ/TH and the latter gains entropy dQ/TC. The combined entropy change is dQ/TC − dQ/TH, which is positive, reflecting an increase in the combined entropy. When calculating entropy, the same requirement of having an absolute temperature for each system in thermal contact exchanging heat also applies to the entropy change of an isolated system having no thermal contact.


Carnot cycle
The concept of entropy arose from Rudolf Clausius's study of the Carnot cycle.[4] In a Carnot cycle, heat QH is absorbed isothermally at the higher temperature TH from a 'hot' reservoir, and given up isothermally as heat QC to a 'cold' reservoir at a lower temperature TC. According to Carnot's principle, work can only be done when there is a temperature difference, and the work should be some function of the difference in temperature and the heat absorbed. Carnot did not distinguish between QH and QC, since he was working under the incorrect hypothesis that caloric theory was valid, and hence heat was conserved (the incorrect assumption that QH and QC were equal) when, in fact, QH is greater than QC.[5]

Through the efforts of Clausius and Kelvin, it is now known that the maximum work that can be done is the product of the Carnot efficiency and the heat absorbed at the hot reservoir:

W = (1 − TC/TH) QH     (1)

In order to derive the Carnot efficiency, 1 − TC/TH, Kelvin had to evaluate the ratio of the work done to the heat absorbed in the isothermal expansion with the help of the Carnot-Clapeyron equation, which contained an unknown function known as the Carnot function. The fact that the Carnot function could be the temperature, measured from zero, was suggested by Joule in a letter to Kelvin, and this allowed Kelvin to establish his absolute temperature scale.[6] It is also known that the work is the difference in the heat absorbed at the hot reservoir and rejected at the cold one:

W = QH − QC     (2)

Since the latter is valid over the entire cycle, this gave Clausius the hint that at each stage of the cycle, work and heat would not be equal, but rather their difference would be a state function that would vanish upon completion of the cycle. The state function was called the internal energy and it became the first law of thermodynamics.[7] Now equating the two expressions gives

QH/TH − QC/TC = 0

If we allow Q to incorporate the algebraic sign, this becomes a sum over the cycle and implies that there is a function of state which is conserved over a complete cycle. Clausius called this state function entropy. One can see that entropy was discovered through mathematics rather than through laboratory results. It is a mathematical construct and has no easy physical analogy. This makes the concept somewhat obscure or abstract, akin to how the concept of energy arose.

Then Clausius asked what would happen if there would be less work done than that predicted by Carnot's principle. The right-hand side of the first equation would be the upper bound of the work, which would now be converted into an inequality

W < (1 − TC/TH) QH

When the second equation is used to express the work as a difference in heats, we get

QH − QC < (1 − TC/TH) QH

or

QC > (TC/TH) QH

So more heat is given off to the cold reservoir than in the Carnot cycle. If we denote the entropies by Si = Qi/Ti for the two states, then the above inequality can be written as a decrease in the entropy

SH − SC < 0

The wasted heat implies that irreversible processes must have prevented the cycle from carrying out maximum work.
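The bookkeeping in this argument can be made concrete with numbers (an editorial sketch; the reservoir temperatures and heat input are arbitrary assumed values):

```python
T_H, T_C = 500.0, 300.0
Q_H = 1000.0                      # heat absorbed from the hot reservoir, J

W_max = (1.0 - T_C / T_H) * Q_H   # equation (1): Carnot's maximum work
Q_C = Q_H - W_max                 # equation (2): heat rejected in the ideal cycle
print(Q_H / T_H - Q_C / T_C)      # 0.0: the state function is conserved over the cycle

W_actual = 0.8 * W_max            # a less effective (irreversible) engine
Q_C_irr = Q_H - W_actual          # more heat is given off to the cold reservoir
print(Q_C_irr / T_C - Q_H / T_H)  # positive: net entropy has been produced
```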

Classical thermodynamics
The thermodynamic definition was developed in the early 1850s by Rudolf Clausius and essentially describes how to measure the entropy of an isolated system in thermodynamic equilibrium. Clausius created the term entropy in 1865 as an extensive thermodynamic variable that was shown to be useful in characterizing the Carnot cycle. Heat transfer along the isotherm steps of the Carnot cycle was found to be proportional to the temperature of a system (known as its absolute temperature). This relationship was expressed in increments of entropy equal to the ratio of incremental heat transfer divided by temperature, which was found to vary in the thermodynamic cycle but eventually return to the same value at the end of every cycle. Thus it was found to be a function of state, specifically a thermodynamic state of the system. Clausius wrote that he "intentionally formed the word Entropy as similar as possible to the word Energy", basing the term on the Greek τροπή [tropē], transformation.[8]

While Clausius based his definition on a reversible process, there are also irreversible processes that change entropy. Following the second law of thermodynamics, entropy of an isolated system always increases. The difference between an isolated system and a closed system is that heat may not flow to and from an isolated system, but heat flow to and from a closed system is possible. Nevertheless, for both closed and isolated systems, and indeed, also in open systems, irreversible thermodynamic processes may occur. According to the Clausius equality, for a reversible cyclic process, ∮ δQrev/T = 0. This means the line integral ∫ δQrev/T is path independent. So we can define a state function S called entropy, which satisfies dS = δQrev/T. With this we can only obtain the difference of entropy by integrating the above formula. To obtain the absolute value, we need the third law of thermodynamics, which states that S = 0 at absolute zero for perfect crystals. From a macroscopic perspective, in classical thermodynamics the entropy is interpreted as a state function of a thermodynamic system: that is, a property depending only on the current state of the system, independent of how that state came to be achieved. In any process where the system gives up energy ΔE, and its entropy falls by ΔS, a quantity at least TR ΔS of that energy must be given up to the system's surroundings as unusable heat (TR is the temperature of the system's external surroundings). Otherwise the process will not go forward. In classical thermodynamics, the entropy of a system is defined only if it is in thermodynamic equilibrium.


Statistical mechanics
The statistical definition was developed by Ludwig Boltzmann in the 1870s by analyzing the statistical behavior of the microscopic components of the system. Boltzmann showed that this definition of entropy was equivalent to the thermodynamic entropy to within a constant number which has since been known as Boltzmann's constant. In summary, the thermodynamic definition of entropy provides the experimental definition of entropy, while the statistical definition of entropy extends the concept, providing an explanation and a deeper understanding of its nature. The interpretation of entropy in statistical mechanics is the measure of uncertainty, or mixedupness in the phrase of Gibbs, which remains about a system after its observable macroscopic properties, such as temperature, pressure and volume, have been taken into account. For a given set of macroscopic variables, the entropy measures the degree to which the probability of the system is spread out over different possible microstates. In contrast to the macrostate, which characterizes plainly observable average quantities, a microstate specifies all molecular details about the system, including the position and velocity of every molecule. The more such states available to the system with appreciable probability, the greater the entropy. In statistical mechanics, entropy is a measure of the number of ways in which a system may be arranged, often taken to be a measure of "disorder" (the higher the entropy, the higher the disorder).[9][10] This definition describes the entropy as being proportional to the natural logarithm of the number of possible microscopic configurations of the individual atoms and molecules of the system (microstates) which could give rise to the observed macroscopic state (macrostate) of the system. The constant of proportionality is the Boltzmann constant. Specifically, entropy is a logarithmic measure of the number of states with significant probability of being occupied:

S = −kB Σi pi ln pi

where kB is the Boltzmann constant, equal to 1.38065×10⁻²³ J/K. The summation is over all the possible microstates of the system, and pi is the probability that the system is in the ith microstate.[11] This definition assumes that the basis set of states has been picked so that there is no information on their relative phases. In a different basis set, the more general expression is

S = −kB Tr(ρ ln ρ)

where ρ is the density matrix and ln ρ is the matrix logarithm. This density matrix formulation is not needed in cases of thermal equilibrium so long as the basis states are chosen to be energy eigenstates. For most practical purposes, this can be taken as the fundamental definition of entropy since all other formulas for S can be mathematically derived from it, but not vice versa. In what has been called the fundamental assumption of statistical thermodynamics or the fundamental postulate in statistical mechanics, the occupation of any microstate is assumed to be equally probable (i.e. pi = 1/Ω, where Ω is the number of microstates); this assumption is usually justified for an isolated system in equilibrium.[12] Then the previous equation reduces to

S = kB ln Ω
In thermodynamics, such a system is one in which the volume, number of molecules, and internal energy are fixed (the microcanonical ensemble). The most general interpretation of entropy is as a measure of our uncertainty about a system. The equilibrium state of a system maximizes the entropy because we have lost all information about the initial conditions except for the conserved variables; maximizing the entropy maximizes our ignorance about the details of the system. This uncertainty is not of the everyday subjective kind, but rather the uncertainty inherent to the experimental method and interpretative model. The interpretative model has a central role in determining entropy. The qualifier "for a given set of macroscopic variables" above has deep implications: if two observers use different sets of macroscopic variables, they will observe different entropies. For example, if observer A uses the variables U, V and W, and observer B uses U, V, W, X, then, by changing X, observer B can cause an effect that looks like a violation of the second law of thermodynamics to observer A. In other words: the set of macroscopic variables one chooses must include everything that may change in the experiment, otherwise one might see decreasing entropy! Entropy can be defined for any Markov process with reversible dynamics and the detailed balance property. In Boltzmann's 1896 Lectures on Gas Theory, he showed that this expression gives a measure of entropy for systems of atoms and molecules in the gas phase, thus providing a measure for the entropy of classical thermodynamics.
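A minimal sketch (an editorial addition, not from the text) showing that the Gibbs expression above reduces to the Boltzmann form for equally probable microstates, and that any non-uniform distribution over the same states gives a lower entropy:

```python
import math

kB = 1.380649e-23  # Boltzmann constant, J/K (the exact 2019 SI value)

def gibbs_entropy(probs):
    """S = -kB * sum(p_i * ln p_i), skipping zero-probability states."""
    return -kB * sum(p * math.log(p) for p in probs if p > 0.0)

Omega = 1000
uniform = [1.0 / Omega] * Omega
print(gibbs_entropy(uniform))   # equals kB * ln(Omega) ...
print(kB * math.log(Omega))     # ... the microcanonical (Boltzmann) result

peaked = [0.5] + [0.5 / (Omega - 1)] * (Omega - 1)
print(gibbs_entropy(peaked) < gibbs_entropy(uniform))  # True: uniform maximizes S
```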

Entropy of a system
Entropy is the above-mentioned unexpected and, to some, obscure integral that arises directly from the Carnot cycle. It is reversible heat divided by temperature. It is, remarkably, a function of state and it is fundamental and very useful. In a thermodynamic system, pressure, density, and temperature tend to become uniform over time because this equilibrium state has higher probability (more possible combinations of microstates) than any other; see statistical mechanics. As an example, for a glass of ice water in air at room temperature, the difference in temperature between a warm room (the surroundings) and the cold glass of ice and water (the system, and not part of the room) begins to be equalized as portions of the thermal energy from the warm surroundings spread to the cooler system of ice and water. Over time the temperature of the glass and its contents and the temperature of the room become equal. The entropy of the room has decreased as some of its energy has been dispersed to


the ice and water. However, as calculated in the example, the entropy of the system of ice and water has increased more than the entropy of the surrounding room has decreased. In an isolated system such as the room and ice water taken together, the dispersal of energy from warmer to cooler always results in a net increase in entropy. Thus, when the "universe" of the room and ice water system has reached a temperature equilibrium, the entropy change from the initial state is at a maximum. The entropy of the thermodynamic system is a measure of how far the equalization has progressed. Thermodynamic entropy is a non-conserved state function that is of great importance in the sciences of physics and chemistry.[13] Historically, the concept of entropy evolved in order to explain why some processes (permitted by conservation laws) occur spontaneously while their time reversals (also permitted by conservation laws) do not; systems tend to progress in the direction of increasing entropy.[14] For isolated systems, entropy never decreases. This fact has several important consequences in science: first, it prohibits "perpetual motion" machines; and second, it implies the arrow of entropy has the same direction as the arrow of time. Increases in entropy correspond to irreversible changes in a system, because some energy is expended as waste heat, limiting the amount of work a system can do.[15]

A temperature-entropy diagram for steam. The vertical axis represents uniform temperature, and the horizontal axis represents specific entropy. Each dark line on the graph represents constant pressure, and these form a mesh with light gray lines of constant volume. (Dark-blue is liquid water, light-blue is boiling water, and faint-blue is steam. Grey-blue represents supercritical liquid water.)

Unlike many other functions of state, entropy cannot be directly observed but must be calculated. Entropy can be calculated for a substance as the standard molar entropy from absolute zero (also known as absolute entropy) or as a difference in entropy from some other reference state which is defined as zero entropy. Entropy has the dimension of energy divided by temperature, which has a unit of joules per kelvin (J/K) in the International System of Units. While these are the same units as heat capacity, the two concepts are distinct.[16] Entropy is not a conserved quantity: for example, in an isolated system with non-uniform temperature, heat might irreversibly flow and the temperature become more uniform such that entropy increases. The second law of thermodynamics states that a closed system has entropy that may increase or otherwise remain constant. Chemical reactions cause changes in entropy, and entropy plays an important role in determining in which direction a chemical reaction spontaneously proceeds. One dictionary definition of entropy is that it is "a measure of thermal energy per unit temperature that is not available for useful work". For instance, a substance at uniform temperature is at maximum entropy and cannot drive a heat engine. A substance at non-uniform temperature is at a lower entropy (than if the heat distribution is allowed to even out) and some of the thermal energy can drive a heat engine. A special case of entropy increase, the entropy of mixing, occurs when two or more different substances are mixed. If the substances are at the same temperature and pressure, there will be no net exchange of heat or work; the entropy change will be entirely due to the mixing of the different substances. At a statistical mechanical level, this results from the change in available volume per particle with mixing.[17]

Second law of thermodynamics


The second law of thermodynamics states that in general the total entropy of any system will not decrease other than by increasing the entropy of some other system. Hence, in a system isolated from its environment, the entropy of that system will tend not to decrease. It follows that heat will not flow from a colder body to a hotter body without the application of work (the imposition of order) to the colder body. Secondly, it is impossible for any device operating on a cycle to produce net work from a single temperature reservoir; the production of net work requires flow of heat

from a hotter reservoir to a colder reservoir, or a single expanding reservoir undergoing adiabatic cooling, which performs adiabatic work. As a result, there is no possibility of a perpetual motion system. It follows that a reduction in the increase of entropy in a specified process, such as a chemical reaction, means that it is energetically more efficient. It follows from the second law of thermodynamics that the entropy of a system that is not isolated may decrease. An air conditioner, for example, may cool the air in a room, thus reducing the entropy of the air of that system. The heat expelled from the room (the system), which the air conditioner transports and discharges to the outside air, will always make a bigger contribution to the entropy of the environment than will the decrease of the entropy of the air of that system. Thus, the total of entropy of the room plus the entropy of the environment increases, in agreement with the second law of thermodynamics. In mechanics, the second law in conjunction with the fundamental thermodynamic relation places limits on a system's ability to do useful work. The entropy change of a system at temperature T absorbing an infinitesimal amount of heat δq in a reversible way is given by δq/T. More explicitly, an energy TR ΔS is not available to do useful work, where TR is the temperature of the coldest accessible reservoir or heat sink external to the system. For further discussion, see Exergy. Statistical mechanics demonstrates that entropy is governed by probability, thus allowing for a decrease in disorder even in an isolated system. Although this is possible, such an event has a small probability of occurring, making it unlikely.


Applications
The fundamental thermodynamic relation
The entropy of a system depends on its internal energy and the external parameters, such as the volume. In the thermodynamic limit this fact leads to an equation relating the change in the internal energy to changes in the entropy and the external parameters. This relation is known as the fundamental thermodynamic relation. If the volume is the only external parameter, this relation is: dU = T dS - P dV Since the internal energy is fixed when one specifies the entropy and the volume, this relation is valid even if the change from one state of thermal equilibrium to another with infinitesimally larger entropy and volume happens in a non-quasistatic way (so during this change the system may be very far out of thermal equilibrium and then the entropy, pressure and temperature may not exist). The fundamental thermodynamic relation implies many thermodynamic identities that are valid in general, independent of the microscopic details of the system. Important examples are the Maxwell relations and the relations between heat capacities.

Entropy in chemical thermodynamics


Thermodynamic entropy is central in chemical thermodynamics, enabling changes to be quantified and the outcome of reactions predicted. The second law of thermodynamics states that entropy in an isolated system, the combination of a subsystem under study and its surroundings, increases during all spontaneous chemical and physical processes. The Clausius equation of ΔS = qrev/T introduces the measurement of entropy change, ΔS. Entropy change describes the direction and quantifies the magnitude of simple changes, such as heat transfer between systems, always from hotter to cooler spontaneously. The thermodynamic entropy therefore has the dimension of energy divided by temperature, and the unit joule per kelvin (J/K) in the International System of Units (SI).

Thermodynamic entropy is an extensive property, meaning that it scales with the size or extent of a system. In many processes it is useful to specify the entropy as an intensive property independent of the size, as a specific entropy characteristic of the type of system studied. Specific entropy may be expressed relative to a unit of mass, typically the kilogram (unit: J kg⁻¹ K⁻¹). Alternatively, in chemistry, it is also referred to one mole of substance, in which case it is called the molar entropy with a unit of J mol⁻¹ K⁻¹. Thus, when one mole of substance at about 0 K is warmed by its surroundings to 298 K, the sum of the incremental values of qrev/T constitutes each element's or compound's standard molar entropy, an indicator of the amount of energy stored by a substance at 298 K. Entropy change also measures the mixing of substances as a summation of their relative quantities in the final mixture. Entropy is equally essential in predicting the extent and direction of complex chemical reactions. For such applications, ΔS must be incorporated in an expression that includes both the system and its surroundings, ΔSuniverse = ΔSsurroundings + ΔSsystem. This expression becomes, via some steps, the Gibbs free energy equation for reactants and products in the system: ΔG [the Gibbs free energy change of the system] = ΔH [the enthalpy change] − T ΔS [the entropy change].
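Since the sign of ΔG decides whether a reaction proceeds spontaneously, a one-line numeric check is straightforward; the ΔH and ΔS values below are illustrative assumptions, not data from the text:

```python
dH = -92.2e3   # J/mol: an assumed exothermic enthalpy change
dS = -199.0    # J/(mol*K): an assumed entropy decrease of the system
T = 298.0      # K

dG = dH - T * dS   # Gibbs free energy change of the system
print(dG)          # about -32.9 kJ/mol: negative, so spontaneous at 298 K
```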


Entropy balance equation for open systems


In chemical engineering, the principles of thermodynamics are commonly applied to "open systems", i.e. those in which heat, work, and mass flow across the system boundary. In a system in which there are flows of both heat (Q̇) and work, i.e. Ẇs (shaft work) and P(dV/dt) (pressure-volume work), across the system boundaries, the heat flow, but not the work flow, causes a change in the entropy of the system. This rate of entropy change is Q̇/T, where T is the absolute thermodynamic temperature of the system at the point of the heat flow. If, in addition, there are mass flows across the system boundaries, the total entropy of the system will also change due to this convected flow.

During steady-state continuous operation, an entropy balance applied to an open system accounts for system entropy changes related to heat flow and mass flow across the system boundary.

To derive a generalized entropy balance equation, we start with the general balance equation for the change in any extensive quantity Θ in a thermodynamic system, a quantity that may be either conserved, such as energy, or non-conserved, such as entropy. The basic generic balance expression states that dΘ/dt, i.e. the rate of change of Θ in the system, equals the rate at which Θ enters the system at the boundaries, minus the rate at which Θ leaves the system across the system boundaries, plus the rate at which Θ is generated within the system. Using this generic balance equation, with respect to the rate of change with time of the extensive quantity entropy S, the entropy balance equation for an open thermodynamic system is:

dS/dt = Σk Ṁk ŝk + Q̇/T + Ṡgen

where

Σk Ṁk ŝk = the net rate of entropy flow due to the flows of mass into and out of the system (ŝ is entropy per unit mass),
Q̇/T = the rate of entropy flow due to the flow of heat across the system boundary,
Ṡgen = the rate of entropy production within the system.

Note, also, that if there are multiple heat flows, the term Q̇/T is to be replaced by Σj Q̇j/Tj, where Q̇j is the heat flow and Tj is the temperature at the jth heat flow port into the system.
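At steady state dS/dt = 0, so the balance can be solved for the production term. A minimal sketch with hypothetical stream values (every number below is an assumption for illustration):

```python
M = 2.0                        # kg/s through the device; mass flow is conserved
s_in, s_out = 1.20e3, 1.35e3   # specific entropies of the streams, J/(kg*K)
Q = -50e3                      # heat flow, W (negative: heat leaves the system)
T_b = 400.0                    # boundary temperature where the heat crosses, K

# 0 = M*s_in - M*s_out + Q/T_b + S_gen  =>  solve for S_gen:
S_gen = M * s_out - M * s_in - Q / T_b
print(S_gen)                   # 425.0 W/K, positive as the second law requires
```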

Entropy and other forms of energy beyond work


The fundamental equation of thermodynamics for a system containing n constituent species, with the i-th species having N_i particles, is, with additional terms:

dU = T dS − P dV + Σ_i μ_i dN_i + φ dQ + v · dp

where U is internal energy, T is temperature, P is pressure, V is volume, μ_i and N_i are the chemical potential and number of molecules of the i-th chemical species, φ and Q are electric potential and charge, and v and p are velocity and momentum. Solving for the change in entropy we get:

dS = (1/T) dU + (P/T) dV − Σ_i (μ_i/T) dN_i − (φ/T) dQ − (v/T) · dp

From this relation, a change in entropy at constant volume and composition produces a proportional change in internal energy (dU = T dS, so dS changes by dU/T). In theory, the entropy of a system can be changed without changing its energy, but in practice this is difficult to arrange, and typically the energy of the system will change as well: for example, one can attempt to keep the volume constant, yet some work is usually still done on the system, and work changes the energy. Other potentials, such as the gravitational potential, can also be taken into account.

Entropy change formulas for simple processes


For certain simple transformations in systems of constant composition, the entropy changes are given by simple formulas.

Isothermal expansion or compression of an ideal gas


For the expansion (or compression) of an ideal gas from an initial volume V₀ and pressure P₀ to a final volume V and pressure P at any constant temperature, the change in entropy is given by:

ΔS = nR ln(V/V₀) = −nR ln(P/P₀)

Here n is the number of moles of gas and R is the ideal gas constant. These equations also apply for expansion into a finite vacuum or a throttling process, where the temperature, internal energy and enthalpy of an ideal gas remain constant.
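For instance, a short Python sketch of ΔS = nR·ln(V/V₀), assuming ideal-gas behavior:

import math

R = 8.314  # ideal gas constant, J/(mol·K)

def isothermal_entropy_change(n, v_initial, v_final):
    """ΔS = nR·ln(V/V0) for an ideal gas at constant temperature."""
    return n * R * math.log(v_final / v_initial)

# Doubling the volume of one mole gives ΔS = R·ln 2 ≈ +5.76 J/K,
# the same result as for free expansion into a vacuum of equal volume.
print(f"{isothermal_entropy_change(1.0, 1.0, 2.0):.2f} J/K")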


Cooling and heating


For heating (or cooling) of any system (gas, liquid or solid) at constant pressure from an initial temperature T₀ to a final temperature T, the entropy change is

ΔS = n C_P ln(T/T₀),

provided that the constant-pressure molar heat capacity (or specific heat) C_P is constant and that no phase transition occurs in this temperature interval. Similarly at constant volume, the entropy change is

ΔS = n C_v ln(T/T₀),

where the constant-volume heat capacity C_v is constant and there is no phase change. At low temperatures near absolute zero, heat capacities of solids quickly drop off to near zero, so the assumption of constant heat capacity does not apply.[18]

Since entropy is a state function, the entropy change of any process in which temperature and volume both vary is the same as for a path divided into two steps: heating at constant volume and expansion at constant temperature. For an ideal gas, the total entropy change is

ΔS = n C_v ln(T/T₀) + nR ln(V/V₀).

Similarly, if the temperature and pressure of an ideal gas both vary,

ΔS = n C_P ln(T/T₀) − nR ln(P/P₀).
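The two-step path can be checked numerically; the following Python sketch assumes an ideal monatomic gas with constant C_v = 3R/2:

import math

R = 8.314  # J/(mol·K)

def ideal_gas_entropy_change(n, cv, t0, t1, v0, v1):
    """ΔS = n·Cv·ln(T1/T0) + n·R·ln(V1/V0) for an ideal gas with constant Cv."""
    return n * cv * math.log(t1 / t0) + n * R * math.log(v1 / v0)

# One mole of a monatomic ideal gas (Cv = 3R/2) heated from 300 K to 600 K
# while its volume doubles:
dS = ideal_gas_entropy_change(1.0, 1.5 * R, 300.0, 600.0, 1.0, 2.0)
print(f"ΔS = {dS:.2f} J/K")  # ≈ 8.64 + 5.76 = 14.41 J/K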

Phase transitions
Reversible phase transitions occur at constant temperature and pressure. The reversible heat is the enthalpy change for the transition, and the entropy change is the enthalpy change divided by the thermodynamic temperature. For fusion (melting) of a solid to a liquid at the melting point T_m, the entropy of fusion is

ΔS_fus = ΔH_fus / T_m.

Similarly, for vaporization of a liquid to a gas at the boiling point T_b, the entropy of vaporization is

ΔS_vap = ΔH_vap / T_b.
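As a numerical illustration in Python, using commonly quoted approximate values for water (illustrative, not authoritative data):

# Entropy of fusion and vaporization of water, ΔS = ΔH/T.
dH_fus, T_m = 6010.0, 273.15    # J/mol, K (approximate)
dH_vap, T_b = 40700.0, 373.15   # J/mol, K (approximate)

print(f"ΔS_fus ≈ {dH_fus / T_m:.1f} J/(mol·K)")   # ≈ 22.0
print(f"ΔS_vap ≈ {dH_vap / T_b:.1f} J/(mol·K)")   # ≈ 109.1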

Approaches to understanding entropy


As a fundamental aspect of thermodynamics and physics, several different approaches to entropy beyond that of Clausius and Boltzmann are valid.

Standard textbook definitions


The following is a list of additional definitions of entropy from a collection of textbooks:
- a measure of energy dispersal at a specific temperature.
- a measure of disorder in the universe or of the availability of the energy in a system to do work.[19]
- a measure of a system's thermal energy per unit temperature that is unavailable for doing useful work.[20]

In Boltzmann's definition, entropy is a measure of the number of possible microscopic states (or microstates) of a system in thermodynamic equilibrium. Consistent with the Boltzmann definition, the second law of thermodynamics needs to be re-worded such that entropy increases over time, though the underlying principle remains the same.


Order and disorder


Entropy has often been loosely associated with the amount of order, disorder, and/or chaos in a thermodynamic system. The traditional qualitative description of entropy is that it refers to changes in the status quo of the system and is a measure of "molecular disorder" and the amount of wasted energy in a dynamical energy transformation from one state or form to another. In this direction, several recent authors have derived exact entropy formulas to account for and measure disorder and order in atomic and molecular assemblies.[21][22] One of the simpler entropy order/disorder formulas is that derived in 1984 by thermodynamic physicist Peter Landsberg, based on a combination of thermodynamics and information theory arguments. He argues that when constraints operate on a system, such that it is prevented from entering one or more of its possible or permitted states, as contrasted with its forbidden states, the measure of the total amount of disorder in the system is given by:

Disorder = C_D / C_I

Similarly, the total amount of "order" in the system is given by:

Order = 1 − C_O / C_I

in which C_D is the "disorder" capacity of the system, which is the entropy of the parts contained in the permitted ensemble, C_I is the "information" capacity of the system, an expression similar to Shannon's channel capacity, and C_O is the "order" capacity of the system.

Energy dispersal
The concept of entropy can be described qualitatively as a measure of energy dispersal at a specific temperature.[23] Similar terms have been in use from early in the history of classical thermodynamics, and with the development of statistical thermodynamics and quantum theory, entropy changes have been described in terms of the mixing or "spreading" of the total energy of each constituent of a system over its particular quantized energy levels. Ambiguities in the terms disorder and chaos, which usually have meanings directly opposed to equilibrium, contribute to widespread confusion and hamper comprehension of entropy for most students.[24] As the second law of thermodynamics shows, in an isolated system internal portions at different temperatures will tend to adjust to a single uniform temperature and thus produce equilibrium. A recently developed educational approach avoids ambiguous terms and describes such spreading out of energy as dispersal, which leads to loss of the differentials required for work even though the total energy remains constant in accordance with the first law of thermodynamics[25] (compare discussion in next section). Physical chemist Peter Atkins, for example, who previously wrote of dispersal leading to a disordered state, now writes that "spontaneous changes are always accompanied by a dispersal of energy".

Relating entropy to energy usefulness


Following on from the above, it is possible (in a thermal context) to regard entropy as an indicator or measure of the effectiveness or usefulness of a particular quantity of energy. This is because energy supplied at a high temperature (i.e. with low entropy) tends to be more useful than the same amount of energy available at room temperature. Mixing a hot parcel of a fluid with a cold one produces a parcel of intermediate temperature, in which the overall increase in entropy represents a loss which can never be replaced. Thus, the fact that the entropy of the universe is steadily increasing means that its total energy is becoming less useful: eventually, this will lead to the "heat death of the Universe".


Entropy and adiabatic accessibility


A definition of entropy based entirely on the relation of adiabatic accessibility between equilibrium states was given by E. H. Lieb and J. Yngvason in 1999.[26] This approach has several predecessors, including the pioneering work of Constantin Carathéodory from 1909[27] and the monograph by R. Giles from 1964.[28] In the setting of Lieb and Yngvason, one starts by picking, for a unit amount of the substance under consideration, two reference states X₀ and X₁ such that the latter is adiabatically accessible from the former but not vice versa. Defining the entropies of the reference states to be 0 and 1 respectively, the entropy of a state X is defined as the largest number λ such that X is adiabatically accessible from a composite state consisting of an amount λ in the state X₁ and a complementary amount, (1 − λ), in the state X₀. A simple but important result within this setting is that entropy is uniquely determined, apart from a choice of unit and an additive constant for each chemical element, by the following properties: it is monotonic with respect to the relation of adiabatic accessibility, additive on composite systems, and extensive under scaling.

Entropy in quantum mechanics


In quantum statistical mechanics, the concept of entropy was developed by John von Neumann and is generally referred to as "von Neumann entropy",

S = −k_B Tr(ρ ln ρ)

where ρ is the density matrix and Tr is the trace operator. This upholds the correspondence principle, because in the classical limit, when the phases between the basis states used for the classical probabilities are purely random, this expression is equivalent to the familiar classical definition of entropy,

S = −k_B Σ_i p_i ln p_i,

i.e. in such a basis the density matrix is diagonal. Von Neumann established a rigorous mathematical framework for quantum mechanics with his work Mathematische Grundlagen der Quantenmechanik. He provided in this work a theory of measurement, where the usual notion of wave function collapse is described as an irreversible process (the so-called von Neumann or projective measurement). Using this concept, in conjunction with the density matrix, he extended the classical concept of entropy into the quantum domain.
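A minimal numerical sketch (Python with NumPy, with k_B set to 1 so entropy comes out in natural units) computes the von Neumann entropy from the eigenvalues of ρ:

import numpy as np

def von_neumann_entropy(rho):
    """S = -Tr(ρ ln ρ), computed from the eigenvalues of the density matrix.
    Zero eigenvalues contribute nothing (the limit p·ln p -> 0)."""
    eigvals = np.linalg.eigvalsh(rho)   # ρ is Hermitian
    eigvals = eigvals[eigvals > 1e-12]  # drop numerical zeros
    return float(-np.sum(eigvals * np.log(eigvals)))

pure = np.array([[1.0, 0.0], [0.0, 0.0]])  # pure state: S = 0
mixed = np.eye(2) / 2.0                    # maximally mixed qubit: S = ln 2 ≈ 0.693
print(von_neumann_entropy(pure), von_neumann_entropy(mixed))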

Information theory
I thought of calling it 'information', but the word was overly used, so I decided to call it 'uncertainty'. [...] Von Neumann told me, 'You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, nobody knows what entropy really is, so in a debate you will always have the advantage.' Conversation between Claude Shannon and John von Neumann regarding what name to give to the attenuation in phone-line signals
[29]

When viewed in terms of information theory, the entropy state function is simply the amount of information (in the Shannon sense) that would be needed to specify the full microstate of the system. This is left unspecified by the macroscopic description. In information theory, entropy is the measure of the amount of information that is missing before reception and is sometimes referred to as Shannon entropy.[30] Shannon entropy is a broad and general concept which finds applications in information theory as well as thermodynamics. It was originally devised by Claude Shannon in 1948 to study the amount of information in a transmitted message. The definition of the information entropy is, however, quite general, and is expressed in terms of a discrete set of probabilities p_i so that

H = −Σ_i p_i log p_i


In the case of transmitted messages, these probabilities were the probabilities that a particular message was actually transmitted, and the entropy of the message system was a measure of the average amount of information in a message. For the case of equal probabilities (i.e. each message is equally probable), the Shannon entropy (in bits) is just the number of yes/no questions needed to determine the content of the message.

The question of the link between information entropy and thermodynamic entropy is a debated topic. While most authors argue that there is a link between the two, a few argue that they have nothing to do with each other.[31] The expressions for the two entropies are similar. The information entropy H for equal probabilities p_i = p = 1/n is

H = k ln(1/p) = k ln n

where k is a constant which determines the units of entropy. There are many ways of demonstrating the equivalence of "information entropy" and "physics entropy", that is, the equivalence of "Shannon entropy" and "Boltzmann entropy". Nevertheless, some authors argue for dropping the word entropy for the H function of information theory and using Shannon's other term "uncertainty" instead.[32]
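A short Python sketch of the Shannon formula illustrates the yes/no-question interpretation; base-2 logarithms give the answer in bits:

import math

def shannon_entropy(probabilities, base=2):
    """H = -Σ p·log(p); with base 2 the result is in bits."""
    return -sum(p * math.log(p, base) for p in probabilities if p > 0)

# Eight equally likely messages need log2(8) = 3 yes/no questions (3 bits):
print(shannon_entropy([1/8] * 8))    # 3.0
# A biased source carries less than one bit per symbol:
print(shannon_entropy([0.9, 0.1]))   # ≈ 0.469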

Interdisciplinary applications of entropy


Although the concept of entropy was originally a thermodynamic construct, it has been adapted in other fields of study, including information theory, psychodynamics, thermoeconomics/ecological economics, and evolution.

Thermodynamic and statistical mechanics concepts


- Entropy unit – a non-S.I. unit of thermodynamic entropy, usually denoted "e.u." and equal to one calorie per kelvin per mole, or 4.184 joules per kelvin per mole.
- Gibbs entropy – the usual statistical mechanical entropy of a thermodynamic system.
- Boltzmann entropy – a type of Gibbs entropy, which neglects internal statistical correlations in the overall particle distribution.
- Tsallis entropy – a generalization of the standard Boltzmann–Gibbs entropy.
- Standard molar entropy – the entropy content of one mole of substance, under conditions of standard temperature and pressure.
- Residual entropy – the entropy present after a substance is cooled arbitrarily close to absolute zero.
- Entropy of mixing – the change in the entropy when two different chemical substances or components are mixed.
- Loop entropy – the entropy lost upon bringing together two residues of a polymer within a prescribed distance.
- Conformational entropy – the entropy associated with the physical arrangement of a polymer chain that assumes a compact or globular state in solution.
- Entropic force – a microscopic force or reaction tendency related to system organization changes, molecular frictional considerations, and statistical variations.
- Free entropy – an entropic thermodynamic potential analogous to the free energy.
- Entropic explosion – an explosion in which the reactants undergo a large change in volume without releasing a large amount of heat.
- Entropy change – a change in entropy dS between two equilibrium states is given by the heat transferred dQ_rev divided by the absolute temperature T of the system in this interval.
- Sackur–Tetrode entropy – the entropy of a monatomic classical ideal gas determined via quantum considerations.


The arrow of time


Entropy is the only quantity in the physical sciences that seems to imply a particular direction of progress, sometimes called an arrow of time. As time progresses, the second law of thermodynamics states that the entropy of an isolated system never decreases. Hence, from this perspective, entropy measurement is thought of as a kind of clock.

Cosmology
Since a finite universe is an isolated system, the second law of thermodynamics states that its total entropy is constantly increasing. It has been speculated, since the 19th century, that the universe is fated to a heat death in which all the energy ends up as a homogeneous distribution of thermal energy, so that no more work can be extracted from any source. If the universe can be considered to have generally increasing entropy, then, as Sir Roger Penrose has pointed out, gravity plays an important role in the increase because gravity causes dispersed matter to accumulate into stars, which collapse eventually into black holes. The entropy of a black hole is proportional to the surface area of the black hole's event horizon. Jacob Bekenstein and Stephen Hawking have shown that black holes have the maximum possible entropy of any object of equal size. This makes them likely end points of all entropy-increasing processes, if they are totally effective matter and energy traps. Hawking has, however, recently changed his stance on this aspect.[citation needed]

The role of entropy in cosmology remains a controversial subject. Recent work has cast some doubt on the heat death hypothesis and the applicability of any simple thermodynamic model to the universe in general. Although entropy does increase in the model of an expanding universe, the maximum possible entropy rises much more rapidly, moving the universe further from the heat death with time, not closer. This results in an "entropy gap" pushing the system further away from the posited heat death equilibrium. Other complicating factors, such as the energy density of the vacuum and macroscopic quantum effects, are difficult to reconcile with thermodynamic models, making any predictions of large-scale thermodynamics extremely difficult. The entropy gap is widely believed to have been originally opened up by the early rapid exponential expansion of the universe.

Notes
[1] Entropy (http://theory.phy.umist.ac.uk/~judith/stat_therm/node29.html) JA McGovern
[2] Irreversibility, Entropy Changes, and "Lost Work" (http://web.mit.edu/16.unified/www/FALL/thermodynamics/notes/node48.html) Thermodynamics and Propulsion, Z. S. Spakovszky, 2002
[3] What is entropy? (http://www.chem1.com/acad/webtext/thermeq/TE2.html) Thermodynamics of Chemical Equilibrium by S. Lower, 2007
[4] B. H. Lavenda, "A New Perspective on Thermodynamics", Springer, 2009, Sec. 2.3.4
[5] S. Carnot, "Reflexions on the Motive Power of Fire", translated and annotated by R. Fox, Manchester University Press, 1986, p. 26; C. Truesdell, "The Tragicomical History of Thermodynamics", Springer, 1980, pp. 78–85
[6] J. Clerk-Maxwell, "Theory of Heat", 10th ed., Longmans, Green and Co., 1891, pp. 155–158
[7] R. Clausius, "The Mechanical Theory of Heat", translated by T. Archer Hirst, van Voorst, 1867, p. 28
[8] A machine in this context includes engineered devices as well as biological organisms.
[9] McGraw-Hill Concise Encyclopedia of Chemistry, 2004
[10] Barnes & Noble's Essential Dictionary of Science, 2004
[11] Frigg, R. and Werndl, C. "Entropy – A Guide for the Perplexed" (http://charlottewerndl.net/Entropy_Guide.pdf). In Probabilities in Physics; Beisbart, C. and Hartmann, S., Eds.; Oxford University Press, Oxford, 2010
[12] Schroeder, Daniel V. An Introduction to Thermal Physics. Addison Wesley Longman, 1999, p. 57
[13] Sandler, S. I., Chemical and Engineering Thermodynamics, 3rd Ed., Wiley, New York, 1999, p. 91
[14] McQuarrie, D. A., Simon, J. D., Physical Chemistry: A Molecular Approach, University Science Books, Sausalito, 1997, p. 817
[15] Oxford Dictionary of Science, 2005
[16] Heat Capacities (http://theory.phy.umist.ac.uk/~judith/stat_therm/node50.html) JA McGovern
[17] Ben-Naim, Arieh, On the So-Called Gibbs Paradox, and on the Real Paradox, Entropy, 9, pp. 132–136, 2007. Link (http://www.mdpi.org/entropy/papers/e9030132.pdf)

[18] The Third Law (http://www4.ncsu.edu/~franzen/public_html/CH433/lecture/Third_Law.pdf) Chemistry 433, Stefan Franzen, ncsu.edu
[19] Gribbin's Q Is for Quantum: An Encyclopedia of Particle Physics, Free Press, ISBN 0-684-85578-X, 2000
[20] Entropy (http://www.britannica.com/EBchecked/topic/189035/entropy) Encyclopædia Britannica
[21] Landsberg, P.T. (1984). Is Equilibrium always an Entropy Maximum? J. Stat. Physics 35, pp. 159–169
[22] Landsberg, P.T. (1984). Can Entropy and "Order" Increase Together? Physics Letters 102A, pp. 171–173
[23] Frank L. Lambert, A Student's Approach to the Second Law and Entropy (http://entropysite.oxy.edu/students_approach.html)
[24] Carson, E. M. and J. R. Watson (Department of Educational and Professional Studies, King's College, London), Undergraduate students' understandings of entropy and Gibbs free energy (http://www.rsc.org/pdf/uchemed/papers/2002/p2_carson.pdf), University Chemistry Education – 2002 Papers, Royal Society of Chemistry
[25] Frank L. Lambert, JCE 2002 (79) 187 [Feb] Disorder – A Cracked Crutch for Supporting Entropy Discussions (http://jchemed.chem.wisc.edu/HS/Journal/Issues/2002/Feb/abs187.html)
[26] Elliott H. Lieb, Jakob Yngvason: The Physics and Mathematics of the Second Law of Thermodynamics (http://de.arxiv.org/abs/cond-mat/9708200), Phys. Rep. 310, pp. 1–96 (1999)
[27] Constantin Carathéodory: Untersuchungen über die Grundlagen der Thermodynamik, Math. Ann., 67, pp. 355–386, 1909
[28] Robin Giles: "Mathematical Foundations of Thermodynamics", Pergamon, Oxford, 1964
[29] M. Tribus, E.C. McIrvine, Energy and information (http://math.library.wisc.edu/reserves/proxy/Math801/energy.pdf), Scientific American, 224 (September 1971), pp. 178–184
[30] Balian, Roger (2003). Entropy – Protean Concept (http://www-spht.cea.fr/articles_k2/t03/193/publi.pdf) (PDF). Poincaré Seminar 2: pp. 119–145
[31] Lin, Shu-Kun. (1999). Diversity and Entropy (http://www.mdpi.com/1099-4300/1/1/1). Entropy (Journal), 1[1], pp. 1–3
[32] Schneider, Tom, DELILA system (Deoxyribonucleic acid Library Language), (Information Theory Analysis of binding sites), Laboratory of Mathematical Biology, National Cancer Institute, FCRDC Bldg. 469, Rm 144, P.O. Box B, Frederick, MD 21702-1201, USA


Further reading


Atkins, Peter; Julio De Paula (2006). Physical Chemistry, 8th ed. Oxford University Press. ISBN 0-19-870072-5.
Baierlein, Ralph (2003). Thermal Physics. Cambridge University Press. ISBN 0-521-65838-1.
Ben-Naim, Arieh (2007). Entropy Demystified. World Scientific. ISBN 981-270-055-2.
Callen, Herbert B. (2001). Thermodynamics and an Introduction to Thermostatistics, 2nd Ed. John Wiley and Sons. ISBN 0-471-86256-8.
Chang, Raymond (1998). Chemistry, 6th Ed. New York: McGraw Hill. ISBN 0-07-115221-0.
Cutnell, John D.; Johnson, Kenneth J. (1998). Physics, 4th ed. John Wiley and Sons, Inc. ISBN 0-471-19113-2.
Dugdale, J. S. (1996). Entropy and its Physical Meaning (2nd ed.). Taylor and Francis (UK); CRC (US). ISBN 0-7484-0569-0.
Fermi, Enrico (1937). Thermodynamics. Prentice Hall. ISBN 0-486-60361-X.
Goldstein, Martin; Inge, F. (1993). The Refrigerator and the Universe. Harvard University Press. ISBN 0-674-75325-9.
Gyftopoulos, E.P.; G.P. Beretta (1991, 2005, 2010). Thermodynamics. Foundations and Applications. Dover. ISBN 0-486-43932-1.
Haddad, Wassim M.; Chellaboina, VijaySekhar; Nersesov, Sergey G. (2005). Thermodynamics – A Dynamical Systems Approach. Princeton University Press. ISBN 0-691-12327-6.
Kroemer, Herbert; Charles Kittel (1980). Thermal Physics (2nd ed.). W. H. Freeman Company. ISBN 0-7167-1088-9.
Lambert, Frank L.; entropysite.oxy.edu (http://entropysite.oxy.edu/)
Penrose, Roger (2005). The Road to Reality: A Complete Guide to the Laws of the Universe. New York: A. A. Knopf. ISBN 0-679-45443-8.
Reif, F. (1965). Fundamentals of statistical and thermal physics. McGraw-Hill. ISBN 0-07-051800-9.
Schroeder, Daniel V. (2000). Introduction to Thermal Physics. New York: Addison Wesley Longman. ISBN 0-201-38027-7.
Serway, Raymond A. (1992). Physics for Scientists and Engineers. Saunders Golden Sunburst Series. ISBN 0-03-096026-6.
Spirax-Sarco Limited, Entropy – A Basic Understanding (http://www.spiraxsarco.com/resources/steam-engineering-tutorials/steam-engineering-principles-and-heat-transfer/entropy-a-basic-understanding.asp), a primer on entropy tables for steam engineering
von Baeyer, Hans Christian (1998). Maxwell's Demon: Why Warmth Disperses and Time Passes. Random House. ISBN 0-679-43342-2.
Entropy for beginners – a wikibook
An Intuitive Guide to the Concept of Entropy Arising in Various Sectors of Science – a wikibook


External links
Entropy and the Second Law of Thermodynamics (https://www.youtube.com/watch?v=ER8d_ElMJu0) – an A-level physics lecture with a detailed derivation of entropy based on the Carnot cycle
Khan Academy: entropy lectures, part of Chemistry playlist (https://www.youtube.com/playlist?list=PL1A79AF620ABA411C)
Proof: S (or Entropy) is a valid state variable (https://www.youtube.com/watch?v=sPz5RrFus1Q)
Thermodynamic Entropy Definition Clarification (https://www.youtube.com/watch?v=PFcGiMLwjeY)
Reconciling Thermodynamic and State Definitions of Entropy (https://www.youtube.com/watch?v=WLKEVfLFau4)
Entropy Intuition (https://www.youtube.com/watch?v=xJf6pHqLzs0)
More on Entropy (https://www.youtube.com/watch?v=dFFzAP2OZ3E)
The Second Law of Thermodynamics and Entropy (http://oyc.yale.edu/physics/phys-200/lecture-24) – Yale OYC lecture, part of Fundamentals of Physics I (PHYS 200)
Entropy and the Clausius inequality (http://ocw.mit.edu/courses/chemistry/5-60-thermodynamics-kinetics-spring-2008/video-lectures/lecture-9-entropy-and-the-clausius-inequality/) – MIT OCW lecture, part of 5.60 Thermodynamics & Kinetics, Spring 2008
The Discovery of Entropy (https://www.youtube.com/watch?v=glrwlXRhNsg) by Adam Shulman. Hour-long video, January 2013.
Moriarty, Philip; Merrifield, Michael (2009). "S Entropy" (http://www.sixtysymbols.com/videos/entropy.htm). Sixty Symbols. Brady Haran for the University of Nottingham.


Pressure
Common symbol(s): P
In SI base quantities: 1 kg/(m·s²)
SI unit: pascal (Pa)
Derivations from other quantities: P = F/A

Pressure as exerted by particle collisions inside a closed container.


Pressure (symbol: P or p) is the ratio of force to the area over which that force is distributed. Pressure is force per unit area applied in a direction perpendicular to the surface of an object. Gauge pressure (also spelled gage pressure)[1] is the pressure relative to the local atmospheric or ambient pressure. Pressure is measured in any unit of force divided by any unit of area. The SI unit of pressure is the newton per square metre, which is called

the pascal (Pa) after the seventeenth-century philosopher and scientist Blaise Pascal. A pressure of 1 Pa is small; it approximately equals the pressure exerted by a dollar bill resting flat on a table. Everyday pressures are often stated in kilopascals (1 kPa = 1000 Pa).


Definition
Pressure is the amount of force acting perpendicularly per unit area. The symbol of pressure is p.[2]

Formula
Mathematically:

p = F/A

where:
p is the pressure,
F is the normal force,
A is the area of the surface in contact.

Pressure is a scalar quantity. It relates the vector surface element (a vector normal to the surface) with the normal force acting on it. The pressure is the scalar proportionality constant that relates the two normal vectors:

dF_n = −p dA

The minus sign comes from the fact that the force is considered towards the surface element, while the normal vector points outward. It is incorrect (although rather usual) to say "the pressure is directed in such or such direction". The pressure, as a scalar, has no direction. The force given by the previous relationship to the quantity has a direction, but the pressure does not. If we change the orientation of the surface element, the direction of the normal force changes accordingly, but the pressure remains the same. Pressure is transmitted to solid boundaries or across arbitrary sections of fluid normal to these boundaries or sections at every point. It is a fundamental parameter in thermodynamics, and it is conjugate to volume.
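A tiny Python sketch of p = F/A, with illustrative (assumed) contact areas, shows how the same force produces very different pressures:

finger_area = 1.0e-4  # m² (≈ 1 cm², illustrative)
tack_area = 1.0e-8    # m² (≈ 0.01 mm², illustrative)
force = 10.0          # N, same force in both cases

for name, area in (("fingertip", finger_area), ("thumbtack point", tack_area)):
    print(f"{name}: p = {force / area:.3g} Pa")
# The tack concentrates the same force onto ~10,000 times less area,
# producing ~10,000 times the pressure.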


Units
The SI unit for pressure is the pascal (Pa), equal to one newton per square metre (N/m² or kg·m⁻¹·s⁻²). This special name for the unit was added in 1971; before that, pressure in SI was expressed simply as N/m². Non-SI measures such as pounds per square inch and bars are used in some parts of the world, primarily in the United States of America. The cgs unit of pressure is the barye (ba), equal to 1 dyn/cm² or 0.1 Pa. Pressure is sometimes expressed in grams-force/cm², or as kg/cm² and the like without properly identifying the force units. But using the names kilogram, gram, kilogram-force, or gram-force (or their symbols) as units of force is expressly forbidden in SI. The technical atmosphere (symbol: at) is 1 kgf/cm² (98.0665 kPa or 14.223 psi). Since a system under pressure has the potential to perform work on its surroundings, pressure is a measure of potential energy stored per unit volume, measured in J/m³ and related to energy density.
Mercury column

Some meteorologists prefer the hectopascal (hPa) for atmospheric air pressure, which is equivalent to the older unit millibar (mbar). Similar pressures are given in kilopascals (kPa) in most other fields, where the hecto- prefix is rarely used. The inch of mercury is still used in the United States. Oceanographers usually measure underwater pressure in decibars (dbar) because an increase in pressure of 1 dbar is approximately equal to an increase in depth of 1 meter.

The standard atmosphere (atm) is an established constant. It is approximately equal to typical air pressure at earth mean sea level and is defined as follows: standard atmosphere = 101,325 Pa = 101.325 kPa = 1,013.25 hPa. Because pressure is commonly measured by its ability to displace a column of liquid in a manometer, pressures are often expressed as a depth of a particular fluid (e.g., centimeters of water, mm or inches of mercury). The most common choices are mercury (Hg) and water; water is nontoxic and readily available, while mercury's high density allows a shorter column (and so a smaller manometer) to be used to measure a given pressure. The pressure exerted by a column of liquid of height h and density ρ is given by the hydrostatic pressure equation p = ρgh. Fluid density and local gravity can vary from one reading to another depending on local factors, so the height of a fluid column does not define pressure precisely. When millimeters of mercury or inches of mercury are quoted today, these units are not based on a physical column of mercury; rather, they have been given precise definitions that can be expressed in terms of SI units.[citation needed] One mmHg (millimeter of mercury) is equal to one torr. The water-based units still depend on the density of water, a measured, rather than defined, quantity. These manometric units are still encountered in many fields. Blood pressure is measured in millimeters of mercury in most of the world, and lung pressures in centimeters of water are still common.

Underwater divers use the metre sea water (msw or MSW) and foot sea water (fsw or FSW) units of pressure, and these are the standard units for pressure gauges used to measure pressure exposure in diving chambers and personal decompression computers. A msw is defined as 0.1 bar and is not the same as a linear metre of depth, and 33.066 fsw = 1 atm. Note that the pressure conversion from msw to fsw is different from the length conversion: 10 msw = 32.6336 fsw, while 10 m = 32.8083 ft.

Gauge pressure is often given in units with 'g' appended, e.g. 'kPag' or 'psig', and units for measurements of absolute pressure are sometimes given a suffix of 'a', to avoid confusion, for example 'kPaa', 'psia'. However, the US National Institute of Standards and Technology recommends that, to avoid confusion, any modifiers be instead applied to the quantity being measured rather than the unit of measure. For example, "Pg = 100 psi" rather than "P = 100 psig".

Differential pressure is expressed in units with 'd' appended; this type of measurement is useful when considering sealing performance or whether a valve will open or close.

Presently or formerly popular pressure units include the following:
- atmosphere (atm)
- manometric units:
  - centimeter, inch, and millimeter of mercury (torr)
  - height of equivalent column of water, including millimeter (mm H₂O), centimeter (cm H₂O), meter, inch, and foot of water
- customary units:
  - kip, short ton-force, long ton-force, pound-force, ounce-force, and poundal per square inch
  - short ton-force and long ton-force per square inch
  - fsw (feet sea water) used in underwater diving, particularly in connection with diving pressure exposure and decompression
- non-SI metric units:
  - bar, decibar, millibar
  - msw (metres sea water), used in underwater diving, particularly in connection with diving pressure exposure and decompression
  - kilogram-force, or kilopond, per square centimeter (technical atmosphere)
  - gram-force and tonne-force (metric ton-force) per square centimeter
  - barye (dyne per square centimeter)
  - kilogram-force and tonne-force per square meter
  - sthene per square meter (pieze)


Pressure units
1 Pa ≡ 1 N/m² = 10⁻⁵ bar = 1.0197×10⁻⁵ at = 9.8692×10⁻⁶ atm = 7.5006×10⁻³ Torr = 1.450377×10⁻⁴ psi
1 bar = 10⁵ Pa ≡ 10⁶ dyn/cm² = 1.0197 at = 0.98692 atm = 750.06 Torr = 14.50377 psi
1 at = 0.980665×10⁵ Pa = 0.980665 bar ≡ 1 kp/cm² = 0.9678411 atm = 735.5592 Torr = 14.22334 psi
1 atm = 1.01325×10⁵ Pa = 1.01325 bar = 1.0332 at ≡ p₀ = 760 Torr = 14.69595 psi
1 Torr = 133.3224 Pa = 1.333224×10⁻³ bar = 1.359551×10⁻³ at = 1.315789×10⁻³ atm ≡ 1 mmHg = 1.933678×10⁻² psi
1 psi = 6.8948×10³ Pa = 6.8948×10⁻² bar = 7.03069×10⁻² at = 6.8046×10⁻² atm = 51.71493 Torr ≡ 1 lbF/in²

Examples
As an example of varying pressures, a finger can be pressed against a wall without making any lasting impression; however, the same finger pushing a thumbtack can easily damage the wall. Although the force applied to the surface is the same, the thumbtack applies more pressure because the point concentrates that force into a smaller area. Pressure is transmitted to solid boundaries or across arbitrary sections of fluid normal to these boundaries or sections at every point. Unlike stress, pressure is defined as a scalar quantity. The negative gradient of pressure is called the force density.

Another example is a common knife. If we try to cut fruit with the flat side of the blade, it will not cut; with the thin edge, it cuts smoothly and quickly. The flat side spreads the force over a greater surface area (less pressure), while the thin edge concentrates the same force onto a much smaller area, producing a pressure high enough to cut. This is one example of a practical application of pressure.

For gases, pressure is sometimes measured not as an absolute pressure, but relative to atmospheric pressure; such measurements are called gauge pressure. An example of this is the air pressure in an automobile tire, which might be said to be "220 kPa (32 psi)", but is actually 220 kPa (32 psi) above atmospheric pressure. Since atmospheric pressure at sea level is about 100 kPa (14.7 psi), the absolute pressure in the tire is therefore about 320 kPa (46.7 psi). In technical work, this is written "a gauge pressure of 220 kPa (32 psi)". Where space is limited, such as on pressure gauges, name plates, graph labels, and table headings, the use of a modifier in parentheses, such as "kPa (gauge)" or "kPa (absolute)", is permitted. In non-SI technical work, a gauge pressure of 32 psi is sometimes written as "32 psig" and an absolute pressure as "32 psia", though the other methods explained above that avoid attaching characters to the unit of pressure are preferred.[4]

Gauge pressure is the relevant measure of pressure wherever one is interested in the stress on storage vessels and the plumbing components of fluidics systems. However, whenever equation-of-state properties, such as densities or changes in densities, must be calculated, pressures must be expressed in terms of their absolute values. For instance, if the atmospheric pressure is 100 kPa, a gas (such as helium) at 200 kPa (gauge) (300 kPa [absolute]) is 50% denser than the same gas at 100 kPa (gauge) (200 kPa [absolute]). Focusing on gauge values, one might erroneously conclude the first sample had twice the density of the second one.


Scalar nature
In a static gas, the gas as a whole does not appear to move. The individual molecules of the gas, however, are in constant random motion. Because we are dealing with an extremely large number of molecules and because the motion of the individual molecules is random in every direction, we do not detect any motion. If we enclose the gas within a container, we detect a pressure in the gas from the molecules colliding with the walls of our container. We can put the walls of our container anywhere inside the gas, and the force per unit area (the pressure) is the same. We can shrink the size of our "container" down to a very small point (becoming less true as we approach the atomic scale), and the pressure will still have a single value at that point. Therefore, pressure is a scalar quantity, not a vector quantity. It has magnitude but no direction sense associated with it. Pressure acts in all directions at a point inside a gas. At the surface of a gas, the pressure force acts perpendicular (at right angle) to the surface. A closely related quantity is the stress tensor σ, which relates the vector force F to the vector area A via

F = σA

This tensor may be expressed as the sum of the viscous stress tensor minus the hydrostatic pressure. The negative of the stress tensor is sometimes called the pressure tensor, but in the following, the term "pressure" will refer only to the scalar pressure. According to the theory of general relativity, pressure increases the strength of a gravitational field (see stress-energy tensor) and so adds to the mass-energy cause of gravity. This effect is unnoticeable at everyday pressures but is significant in neutron stars, although it has not been experimentally tested.


Types
Fluid pressure
Fluid pressure is the pressure at some point within a fluid, such as water or air (for more information specifically about liquid pressure, see the section below). Fluid pressure occurs in one of two situations:
1. an open condition, called "open channel flow", e.g. the ocean, a swimming pool, or the atmosphere;
2. a closed condition, called a closed conduit, e.g. a water line or a gas line.

Pressure in open conditions usually can be approximated as the pressure in "static" or non-moving conditions (even in the ocean where there are waves and currents), because the motions create only negligible changes in the pressure. Such conditions conform with principles of fluid statics. The pressure at any given point of a non-moving (static) fluid is called the hydrostatic pressure. Closed bodies of fluid are either "static", when the fluid is not moving, or "dynamic", when the fluid can move as in either a pipe or by compressing an air gap in a closed container. The pressure in closed conditions conforms with the principles of fluid dynamics. The concepts of fluid pressure are predominantly attributed to the discoveries of Blaise Pascal and Daniel Bernoulli. Bernoulli's equation can be used in almost any situation to determine the pressure at any point in a fluid. The equation makes some assumptions about the fluid, such as the fluid being ideal and incompressible. An ideal fluid is a fluid in which there is no friction; it is inviscid (zero viscosity). The equation is written between any two points a and b in a system that contain the same fluid:

p_a/γ + v_a²/(2g) + z_a = p_b/γ + v_b²/(2g) + z_b

where:
p = pressure of the fluid
γ = ρg = density × acceleration of gravity = specific weight of the fluid
v = velocity of the fluid
g = acceleration of gravity
z = elevation
p/γ = pressure head
v²/(2g) = velocity head
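A minimal Python sketch of this relation follows, written in the equivalent form p + ½ρv² + ρgz = constant (obtained by multiplying the head form through by the specific weight); the numbers are illustrative assumptions:

RHO_WATER = 1000.0  # kg/m³
G = 9.81            # m/s²

def bernoulli_pressure(p_a, v_a, z_a, v_b, z_b, rho=RHO_WATER, g=G):
    """Solve Bernoulli's equation between points a and b for p_b,
    assuming an inviscid, incompressible fluid with no losses."""
    return p_a + 0.5 * rho * (v_a**2 - v_b**2) + rho * g * (z_a - z_b)

# Water speeding up from 1 m/s to 3 m/s through a level pipe constriction:
p_b = bernoulli_pressure(p_a=200_000.0, v_a=1.0, z_a=0.0, v_b=3.0, z_b=0.0)
print(f"p_b = {p_b:.0f} Pa")  # pressure drops by ½ρ(v_b² − v_a²) = 4000 Pa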

Applications
- Hydraulic brakes
- Artesian well
- Blood pressure
- Hydraulic head
- Plant cell turgidity
- Pythagorean cup


Explosion or deflagration pressures


Explosion or deflagration pressures are the result of the ignition of explosive gases, mists, dust/air suspensions, in unconfined and confined spaces.

Negative pressures
While pressures are, in general, positive, there are several situations in which negative pressures may be encountered:
- When dealing in relative (gauge) pressures. For instance, an absolute pressure of 80 kPa may be described as a gauge pressure of −21 kPa (i.e., 21 kPa below an atmospheric pressure of 101 kPa).
- When attractive forces (e.g., van der Waals forces) between the particles of a fluid exceed repulsive forces due to thermal motion. These forces explain the ascent of sap in tall plants. Negative pressure must exist at the top of any tree taller than 10 m, which is the pressure head of water that balances the atmospheric pressure. Van der Waals forces maintain cohesion of columns of sap that run continuously in xylem from the roots to the top leaves.
- The Casimir effect can create a small attractive force due to interactions with vacuum energy; this force is sometimes termed "vacuum pressure" (not to be confused with the negative gauge pressure of a vacuum).
- Depending on how the orientation of a surface is chosen, the same distribution of forces may be described either as a positive pressure along one surface normal, or as a negative pressure acting along the opposite surface normal.
- In the cosmological constant.

Negative pressure chamber in Bundesleistungszentrum Kienbaum, Germany

Stagnation pressure
Stagnation pressure is the pressure a fluid exerts when it is forced to stop moving. Consequently, although a fluid moving at higher speed will have a lower static pressure, it may have a higher stagnation pressure when forced to a standstill. Static pressure and stagnation pressure are related by the Mach number of the fluid. In addition, there can be differences in pressure due to differences in the elevation (height) of the fluid. See Bernoulli's equation. The pressure of a moving fluid can be measured using a Pitot tube, or one of its variations such as a Kiel probe or Cobra probe, connected to a manometer. Depending on where the inlet holes are located on the probe, it can measure static pressures or stagnation pressures.


Surface pressure
There is a two-dimensional analog of pressure: the lateral force per unit length applied on a line perpendicular to the force. Surface pressure is denoted by π and shares many similar properties with three-dimensional pressure. Properties of surface chemicals can be investigated by measuring pressure/area isotherms, as the two-dimensional analog of Boyle's law, πA = k, at constant temperature.

Pressure of an ideal gas


In an ideal gas, molecules have no volume and do not interact. Pressure varies linearly with temperature and quantity, and inversely with volume, according to the ideal gas law:

P = nRT/V

where:
P is the absolute pressure of the gas
n is the amount of substance
T is the absolute temperature
V is the volume
R is the ideal gas constant.

Real gases exhibit a more complex dependence on the variables of state.[5]
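For example, a quick Python check that one mole at 273.15 K in 22.4 L gives roughly atmospheric pressure:

R = 8.314  # ideal gas constant, J/(mol·K)

def ideal_gas_pressure(n, temperature, volume):
    """P = nRT/V for an ideal gas (SI units throughout)."""
    return n * R * temperature / volume

print(f"{ideal_gas_pressure(1.0, 273.15, 0.0224):.0f} Pa")  # ≈ 101,000 Pa ≈ 1 atm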

Vapor pressure
Vapor pressure is the pressure of a vapor in thermodynamic equilibrium with its condensed phases in a closed system. All liquids and solids have a tendency to evaporate into a gaseous form, and all gases have a tendency to condense back to their liquid or solid form. The atmospheric pressure boiling point of a liquid (also known as the normal boiling point) is the temperature at which the vapor pressure equals the ambient atmospheric pressure. With any incremental increase in that temperature, the vapor pressure becomes sufficient to overcome atmospheric pressure and lift the liquid to form vapor bubbles inside the bulk of the substance. Bubble formation deeper in the liquid requires a higher pressure, and therefore higher temperature, because the fluid pressure increases above the atmospheric pressure as the depth increases. The vapor pressure that a single component in a mixture contributes to the total pressure in the system is called partial vapor pressure.

Liquid pressure



When a person swims under the water, water pressure is felt acting on the person's eardrums. The deeper that person swims, the greater the pressure. The pressure felt is due to the weight of the water above the person. As someone swims deeper, there is more water above them and therefore greater pressure. The pressure a liquid exerts depends on its depth. Liquid pressure also depends on the density of the liquid. If someone were submerged in a liquid more dense than water, the pressure would be correspondingly greater. The pressure due to a liquid in liquid columns of constant density or at a depth within a substance is represented by the following formula:

p = ρgh

where:
p is liquid pressure,
g is gravity at the surface of the overlaying material,
ρ is the density of the liquid,
h is the height of the liquid column or the depth within a substance.

Another way of saying this same formula is the following:

pressure = weight density × depth

Derivation of this equation

This is derived from the definitions of pressure and weight density. Consider an area at the bottom of a vessel of liquid. The weight of the column of liquid directly above this area produces pressure. From the definition

weight density = weight / volume

we can express this weight of liquid as

weight = weight density × volume

where the volume of the column is simply the area multiplied by the depth. Then we have

pressure = force / area = weight / area = (weight density × volume) / area = (weight density × (area × depth)) / area

With the "area" in the numerator and the "area" in the denominator canceling each other out, we are left with

pressure = weight density × depth.

Written with symbols, this is our original equation:

p = ρgh
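A short Python sketch of p = ρgh makes the depth-not-volume point explicit; the density and depth values are illustrative:

RHO_WATER = 1000.0  # kg/m³
G = 9.81            # m/s²

def liquid_pressure(depth, rho=RHO_WATER, g=G):
    """Gauge pressure at a given depth in a liquid of constant density."""
    return rho * g * depth

# Pressure depends on depth, not on the volume of liquid: 3 m down in a
# large lake or in a narrow standpipe gives the same ≈ 29.4 kPa.
print(f"{liquid_pressure(3.0) / 1000:.1f} kPa")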

The pressure a liquid exerts against the sides and bottom of a container depends on the density and the depth of the liquid. If atmospheric pressure is neglected, liquid pressure against the bottom is twice as great at twice the depth; at three times the depth, the liquid pressure is threefold; etc. Or, if the liquid is two or three times as dense, the liquid pressure is correspondingly two or three times as great for any given depth. Liquids are practically incompressible; that is, their volume can hardly be changed by pressure (water volume decreases by only 50 millionths of its original volume for each atmospheric increase in pressure). Thus, except for small changes produced by temperature, the density of a particular liquid is practically the same at all depths.

Atmospheric pressure pressing on the surface of a liquid must be taken into account when trying to discover the total pressure acting on a liquid. The total pressure of a liquid, then, is ρgh plus the pressure of the atmosphere. When this distinction is important, the term total pressure is used. Otherwise, discussions of liquid pressure refer to pressure without regard to the normally ever-present atmospheric pressure.

It is important to recognize that the pressure does not depend on the amount of liquid present. Volume is not the important factor; depth is. The average water pressure acting against a dam depends on the average depth of the water and not on the volume of water held back. For example, a large, shallow lake with a depth of 3 m exerts only half the average pressure that a small, 6 m-deep pond does. A person will feel the same pressure whether his/her head is dunked a meter beneath the surface of the water in a small pool or to the same depth in the middle of a large lake. If four vases contain different amounts of water but are all filled to equal depths, then a fish with its head dunked a few centimeters under the surface will be acted on by water pressure that is the same in any of the vases. If the fish swims a few centimeters deeper, the pressure on the fish will increase with depth and be the same no matter which vase the fish is in. If the fish swims to the bottom, the pressure will be greater, but it makes no difference what vase it is in. All vases are filled to equal depths, so the water pressure is the same at the bottom of each vase, regardless of its shape or volume. If water pressure at the bottom of a vase were greater than water pressure at the bottom of a neighboring vase, the greater pressure would force water sideways and then up the narrower vase to a higher level until the pressures at the bottom were equalized. Pressure is depth dependent, not volume dependent, so there is a reason that water seeks its own level.


Direction of liquid pressure


An experimentally determined fact about liquid pressure is that it is exerted equally in all directions.[7] If someone is submerged in water, no matter which way that person tilts his/her head, the person will feel the same amount of water pressure on his/her ears. Because a liquid can flow, this pressure isn't only downward. Pressure is seen acting sideways when water spurts sideways from a leak in the side of an upright can. Pressure also acts upward, as demonstrated when someone tries to push a beach ball beneath the surface of the water. The bottom of a boat is pushed upward by water pressure (buoyancy). When a liquid presses against a surface, there is a net force that is perpendicular to the surface. Although pressure doesn't have a specific direction, force does. A submerged triangular block has water forced against each point from many directions, but components of the force that are not perpendicular to the surface cancel each other out, leaving only a net perpendicular force. This is why water spurting from a hole in a bucket initially exits the bucket in a direction at right angles to the surface of the bucket in which the hole is located. Then it curves downward due to gravity. If there are three holes in a bucket (top, bottom, and middle), then the force vectors perpendicular to the inner container surface will increase with increasing depth; that is, a greater pressure at the bottom makes it so that the bottom hole will shoot water out the farthest. The force exerted by a fluid on a smooth surface is always at right angles to the surface. The speed of liquid out of the hole is √(2gh), where h is the depth below the free surface. Interestingly, this is the same speed the water (or anything else) would have if freely falling the same vertical distance h.


Kinematic pressure
P = p/ρ₀

is the kinematic pressure, where p is the pressure and ρ₀ is the constant mass density. The SI unit of P is m²/s². Kinematic pressure is used in the same manner as kinematic viscosity ν in order to compute the Navier–Stokes equation without explicitly showing the density ρ₀.

Navier–Stokes equation with kinematic quantities:

∂u/∂t + (u · ∇)u = −∇P + ν∇²u

Notes
[1] The preferred spelling varies by country and even by industry. Further, both spellings are often used within a particular industry or country. Industries in British English-speaking countries typically use the "gauge" spelling. Many of the largest US manufacturers of pressure transducers and instrumentation use the spelling "gage pressure" in their most formal documentation: sensotec.com (http://www.sensotec.com/pressurefaq.shtml), Honeywell-Sensotec's FAQ page, and fluke.com (http://us.fluke.com/usen/Home/Search.asp?txtSearchBox="gage+pressure"&x=0&y=0), Fluke Corporation's product search page.
[2] The usage of P vs p is context-driven. It depends on the field in which one is working, on the nearby presence of other symbols for quantities such as power and momentum, and on writing style.
[4] NIST, Rules and Style Conventions for Expressing Values of Quantities (http://physics.nist.gov/Pubs/SP811/sec07.html#7.4), Sect. 7.4.
[5] P. Atkins, J. de Paula, Elements of Physical Chemistry, 4th Ed., W.H. Freeman, 2006. ISBN 0-7167-7329-5.
[7] Hewitt 251 (2006)

External links


Introduction to Fluid Statics and Dynamics (http://www.physnet.org/modules/pdf_modules/m48.pdf) on Project PHYSNET (http://www.physnet.org/)
Pressure being a scalar quantity (http://www.grc.nasa.gov/WWW/K-12/airplane/pressure.html)
Online pressure converter for 15 different pressure units (http://www.sengpielaudio.com/calculator-densityunits.htm)
How to convert pressure units (http://www.cressto.cz/unit-converter)
Pressure Exerted by a Solid Iron Cuboid on Sand (http://amrita.olabs.co.in/?sub=1&brch=1&sim=71&cnt=1), instructions for performing classroom experiment


Thermodynamic temperature

Thermodynamic temperature is the absolute measure of temperature and it is one of the principal parameters of thermodynamics. Thermodynamic temperature is defined by the second law of thermodynamics in which the theoretically lowest temperature is the null or zero point. At this point, called absolute zero, the particle constituents of matter have minimal motion and can become no colder.[1][2] In the quantum-mechanical description, matter at absolute zero is in its ground state, which is its state of lowest energy. Thermodynamic temperature is therefore often also called absolute temperature. The International System of Units specifies a particular scale for thermodynamic temperature. It uses the Kelvin scale for measurement and selects the triple point of water at 273.16K as the fundamental fixing point. Other scales have been in use historically. The Rankine scale, using the degree Fahrenheit as its unit interval, is still in use as part of the English Engineering Units in the United States in some engineering fields. ITS-90 gives a practical means of estimating the thermodynamic temperature to a very high degree of accuracy. Roughly, the temperature of a body at rest is a measure of the mean of the energy of the translational, vibrational and rotational motions of matter's particle constituents, such as molecules, atoms, and subatomic particles. The full variety of these kinetic motions, along with potential energies of particles, and also occasionally certain other types of particle energy in equilibrium with these, contribute the total internal energy of a substance. Internal energy is loosely called the heat energy or thermal energy in conditions when no work is done upon the substance by its surroundings, or by the substance upon the surroundings. Internal energy may be stored in a number of ways within a substance, each way constituting a "degree of freedom". At equilibrium, each degree of freedom will have on average the same energy: where is the Boltzmann constant, unless that degree of freedom is in the quantum regime. The internal degrees of freedom (rotation, vibration, etc.) may be in the quantum regime at room temperature, but the translational degrees of freedom will be in the classical regime except at extremely low temperatures (fractions of kelvins) and it may be said that, for most situations, the thermodynamic temperature is specified by the average translational kinetic energy of the particles.


Overview
Temperature is a measure of the random submicroscopic motions and vibrations of the particle constituents of matter. These motions comprise the internal energy of a substance. More specifically, the thermodynamic temperature of any bulk quantity of matter is the measure of the average kinetic energy per classical (i.e., non-quantum) degree of freedom of its constituent particles. "Translational motions" are almost always in the classical regime. Translational motions are ordinary, whole-body movements in three-dimensional space in which particles move about and exchange energy in collisions. Figure 1 below shows translational motion in gases; Figure 4 below shows translational motion in solids. Thermodynamic temperature's null point, absolute zero, is the temperature at which the particle constituents of matter are as close as possible to complete rest; that is, they have minimal motion, retaining only quantum mechanical motion.[3] Zero kinetic energy remains in a substance at absolute zero (see Thermal energy at absolute zero, below).

Throughout the scientific world where measurements are made in SI units, thermodynamic temperature is measured in kelvins (symbol: K). Many engineering fields in the U.S., however, measure thermodynamic temperature using the Rankine scale. By international agreement,[4] the unit kelvin and its scale are defined by two points: absolute zero, and the triple point of Vienna Standard Mean Ocean Water (water with a specified blend of hydrogen and oxygen isotopes). Absolute zero, the lowest possible temperature, is defined as being precisely 0 K and −273.15 °C. The triple point of water is defined as being precisely 273.16 K and 0.01 °C. This definition does three things:
1. It fixes the magnitude of the kelvin unit as being precisely 1 part in 273.16 parts the difference between absolute zero and the triple point of water;
2. It establishes that one kelvin has precisely the same magnitude as a one-degree increment on the Celsius scale; and
3. It establishes the difference between the two scales' null points as being precisely 273.15 kelvins (0 K = −273.15 °C and 273.16 K = 0.01 °C).

Temperatures expressed in kelvins are converted to degrees Rankine simply by multiplying by 1.8 as follows: T_R = 1.8 T_K, where T_K and T_R are temperatures in kelvins and degrees Rankine respectively. Temperatures expressed in degrees Rankine are converted to kelvins by dividing by 1.8 as follows: T_K = T_R / 1.8.
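These conversions are trivial to express in code; a minimal Python sketch:

def kelvin_to_rankine(t_k):
    """T_R = 1.8 · T_K"""
    return 1.8 * t_k

def rankine_to_kelvin(t_r):
    """T_K = T_R / 1.8"""
    return t_r / 1.8

print(kelvin_to_rankine(273.16))   # triple point of water: 491.688 °R
print(rankine_to_kelvin(491.688))  # back to 273.16 K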

Practical realization
Although the Kelvin and Celsius scales are defined using absolute zero (0 K) and the triple point of water (273.16 K and 0.01 °C), it is impractical to use this definition at temperatures that are very different from the triple point of water. ITS-90 is then designed to represent the thermodynamic temperature as closely as possible throughout its range. Many different thermometer designs are required to cover the entire range. These include helium vapor pressure thermometers, helium gas thermometers, standard platinum resistance thermometers (known as SPRTs, PRTs or Platinum RTDs) and monochromatic radiation thermometers. For some types of thermometer the relationship between the property observed (e.g., length of a mercury column) and temperature is close to linear, so for most purposes a linear scale is sufficient, without point-by-point calibration. For others a calibration curve or equation is required. The mercury thermometer, invented before the thermodynamic temperature was understood, originally defined the temperature scale; its linearity made readings correlate well with true temperature, i.e. the "mercury" temperature scale was a close fit to the true scale.


The relationship of temperature, motions, conduction, and thermal energy


The nature of kinetic energy, translational motion, and temperature
The thermodynamic temperature is a measure of the average energy of the translational, vibrational, and rotational motions of matter's particle constituents (molecules, atoms, and subatomic particles). The full variety of these kinetic motions, along with the potential energies of particles, and also occasionally certain other types of particle energy in equilibrium with these, contribute to the total internal energy (loosely, the thermal energy) of a substance. Thus, internal energy may be stored in a number of ways (degrees of freedom) within a substance. When the degrees of freedom are in the classical regime ("unfrozen"), the temperature is very simply related to the average energy of those degrees of freedom at equilibrium. The three translational degrees of freedom are unfrozen except at the very lowest temperatures, and their kinetic energy is simply related to the thermodynamic temperature over the widest range. The heat capacity, which relates heat input and temperature change, is discussed below.

The relationship of kinetic energy, mass, and velocity is given by the formula Ek = (1/2)mv².[5] Accordingly, particles with one unit of mass moving at one unit of velocity have precisely the same kinetic energy, and precisely the same temperature, as those with four times the mass but half the velocity.

Except in the quantum regime at extremely low temperatures, the thermodynamic temperature of any bulk quantity of a substance (a statistically significant quantity of particles) is directly proportional to the average kinetic energy of a specific kind of particle motion known as translational motion. These simple movements in the three x, y, and z axis dimensions of space mean the particles move in the three spatial degrees of freedom. The temperature derived from this translational kinetic energy is sometimes referred to as kinetic temperature and is equal to the thermodynamic temperature over a very wide range of temperatures. Since there are three translational degrees of freedom (e.g., motion along the x, y, and z axes), the translational kinetic energy is related to the kinetic temperature by:

    Ē = (3/2) kB T

where:
    Ē is the mean kinetic energy in joules (J) and is pronounced "E bar"
    kB = 1.3806504(24)×10⁻²³ J/K is the Boltzmann constant and is pronounced "kay sub bee"
    T is the kinetic temperature in kelvins (K) and is pronounced "tee"
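This proportionality is easy to exercise numerically. A minimal Python sketch (constants as given above; the mean-speed relation follows footnote [8]; the function names are illustrative):

import math

K_B = 1.3806504e-23   # Boltzmann constant, J/K (value used in this article)
N_A = 6.02214179e23   # Avogadro constant, 1/mol

def mean_translational_energy(t_kelvin):
    # Mean translational kinetic energy per particle: E = (3/2) * kB * T
    return 1.5 * K_B * t_kelvin

def mean_speed(t_kelvin, molar_mass_g_per_mol):
    # Mean speed as defined in footnote [8]: v = sqrt(kB*T/m) * sqrt(3)
    m = molar_mass_g_per_mol / 1000.0 / N_A   # mass of one particle, kg
    return math.sqrt(K_B * t_kelvin / m) * math.sqrt(3.0)

print(mean_translational_energy(5500))   # ~1.14e-19 J for any gas at 5500 K
print(mean_speed(5500, 4.0026))          # helium at 5500 K: ~5.9 km/s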


While the Boltzmann constant is useful for finding the mean kinetic energy of a particle, it is important to note that even when a substance is isolated and in thermodynamic equilibrium (all parts are at a uniform temperature and no heat is going into or out of it), the translational motions of individual atoms and molecules occur across a wide range of speeds (see animation in Figure 1 above). At any one instant, the proportion of particles moving at a given speed within this range is determined by probability, as described by the Maxwell–Boltzmann distribution. The graph shown here in Fig. 2 shows the speed distribution of 5500 K helium atoms. They have a most probable speed of 4.780 km/s. However, a certain proportion of atoms at any given instant are moving faster, while others are moving relatively slowly; some are momentarily at a virtual standstill (off the x-axis to the right). This graph uses inverse speed for its x-axis so the shape of the curve can easily be compared to the curves in Figure 5 below. In both graphs, zero on the x-axis represents infinite temperature. Additionally, the x- and y-axes of both graphs are scaled proportionally.

Fig. 2: The translational motions of helium atoms occur across a range of speeds. Compare the shape of this curve to that of a Planck curve in Fig. 5 below.
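The most probable speed quoted above can be reproduced from the Maxwell–Boltzmann distribution. A short sketch (the v_p formula is the standard peak of that distribution; constants as commonly tabulated, not taken from the article):

import math

K_B = 1.3806504e-23   # Boltzmann constant, J/K
N_A = 6.02214179e23   # Avogadro constant, 1/mol

def most_probable_speed(t_kelvin, molar_mass_g_per_mol):
    # Peak of the Maxwell-Boltzmann speed distribution: v_p = sqrt(2*kB*T/m)
    m = molar_mass_g_per_mol / 1000.0 / N_A   # mass of one particle, kg
    return math.sqrt(2.0 * K_B * t_kelvin / m)

# Helium-4 (4.0026 g/mol) at 5500 K, as in Fig. 2:
print(most_probable_speed(5500, 4.0026))   # ~4780 m/s, i.e. 4.78 km/s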

The high speeds of translational motion


Although very specialized laboratory equipment is required to directly detect translational motions, the resultant collisions of atoms or molecules with small particles suspended in a fluid produce Brownian motion, which can be seen with an ordinary microscope. The translational motions of elementary particles are very fast,[6] and temperatures close to absolute zero are required to directly observe them. For instance, when scientists at NIST achieved a record-setting cold temperature of 700 nK (billionths of a kelvin) in 1994, they used optical lattice laser equipment to adiabatically cool caesium atoms. They then turned off the entrapment lasers and directly measured atom velocities of 7 mm per second in order to calculate their temperature.[7] Formulas for calculating the velocity and speed of translational motion are given in the following footnote.[8]
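Working back from a measured velocity to a temperature inverts the relation given in footnote [8]. A sketch for the caesium measurement just described (the atomic mass value is supplied for the example; the result should be read as order-of-magnitude):

K_B = 1.3806504e-23                       # Boltzmann constant, J/K
m_cs = 132.905 / 1000.0 / 6.02214179e23   # mass of one caesium-133 atom, kg

v = 7e-3                     # measured vector-isolated velocity, m/s
T = m_cs * v**2 / K_B        # invert v = sqrt(kB*T/m) from footnote [8]
print(T)                     # ~7.8e-7 K, consistent with the ~700 nK reported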


The internal motions of molecules and specific heat


There are other forms of internal energy besides the kinetic energy of translational motion. As can be seen in the animation at right, molecules are complex objects; they are a population of atoms, and thermal agitation can strain their internal chemical bonds in three different ways: via rotation, bond length, and bond angle movements. These are all types of internal degrees of freedom. This makes molecules distinct from monatomic substances (consisting of individual atoms) like the noble gases helium and argon, which have only the three translational degrees of freedom. Kinetic energy is stored in molecules' internal degrees of freedom, which gives them an internal temperature. Even though these motions are called internal, the external portions of molecules still move, rather like the jiggling of a stationary water balloon. This permits the two-way exchange of kinetic energy between internal motions and translational motions with each molecular collision. Accordingly, as energy is removed from molecules, both their kinetic temperature (the temperature derived from the kinetic energy of translational motion) and their internal temperature simultaneously diminish in equal proportions. This phenomenon is described by the equipartition theorem, which states that for any bulk quantity of a substance in equilibrium, the kinetic energy of particle motion is evenly distributed among all the active (i.e. unfrozen) degrees of freedom available to the particles. Since the internal temperature of molecules is usually equal to their kinetic temperature, the distinction is usually of interest only in the detailed study of non-LTE (local thermodynamic equilibrium) phenomena such as combustion, the sublimation of solids, and the diffusion of hot gases in a partial vacuum.

Fig. 3: Because of their internal structure and flexibility, molecules can store kinetic energy in internal degrees of freedom which contribute to the heat capacity.

The kinetic energy stored internally in molecules causes substances to contain more internal energy at any given temperature and to absorb additional internal energy for a given temperature increase. This is because any kinetic energy that is, at a given instant, bound in internal motions is not at that same instant contributing to the molecules' translational motions.[9] This extra thermal energy simply increases the amount of energy a substance absorbs for a given temperature rise. This property is known as a substance's specific heat capacity. Different molecules absorb different amounts of thermal energy for each incremental increase in temperature; that is, they have different specific heat capacities. High specific heat capacity arises, in part, because certain substances' molecules possess more internal degrees of freedom than others do. For instance, nitrogen, which is a diatomic molecule, has five active degrees of freedom at room temperature: the three comprising translational motion plus two rotational degrees of freedom internally. Since the two internal degrees of freedom are essentially unfrozen, in accordance with the equipartition theorem, nitrogen has five-thirds the specific heat capacity per mole (a specific number of molecules) as do the monatomic gases.[10] Another example is gasoline (see table showing its specific heat capacity). Gasoline can absorb a large amount of thermal energy per mole with only a modest temperature change because each molecule comprises an average of 21 atoms and therefore has many internal degrees of freedom.
Even larger, more complex molecules can have dozens of internal degrees of freedom.
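The equipartition bookkeeping described above can be made concrete with a short sketch (assuming the molar gas constant R and counting only fully active degrees of freedom, as the text does; compare the measured values in footnote [10]):

R = 8.314462   # molar gas constant, J/(mol*K)

def molar_cv(active_degrees_of_freedom):
    # Constant-volume molar heat capacity per equipartition: Cv = (f/2) * R
    return active_degrees_of_freedom / 2.0 * R

print(molar_cv(3))                 # monatomic gas (translation only): ~12.47 J/(mol*K)
print(molar_cv(5))                 # diatomic N2 at room temp (3 trans + 2 rot): ~20.79 J/(mol*K)
print(molar_cv(5) / molar_cv(3))   # -> 1.666..., the five-thirds ratio cited above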


The diffusion of thermal energy: Entropy, phonons, and mobile conduction electrons
Heat conduction is the diffusion of thermal energy from hot parts of a system to cold. A system can be either a single bulk entity or a plurality of discrete bulk entities. The term bulk in this context means a statistically significant quantity of particles (which can be a microscopic amount). Whenever thermal energy diffuses within an isolated system, temperature differences within the system decrease (and entropy increases).

One particular heat conduction mechanism occurs when translational motion, the particle motion underlying temperature, transfers momentum from particle to particle in collisions. In gases, these translational motions are of the nature shown above in Fig. 1. As can be seen in that animation, not only does momentum (heat) diffuse throughout the volume of the gas through serial collisions, but entire molecules or atoms can move forward into new territory, bringing their kinetic energy with them. Consequently, temperature differences equalize throughout gases very quickly, especially for light atoms or molecules; convection speeds this process even more.[11]

Translational motion in solids, however, takes the form of phonons (see Fig. 4 at right). Phonons are constrained, quantized wave packets traveling at the speed of sound for a given substance. The manner in which phonons interact within a solid determines a variety of its properties, including its thermal conductivity. In electrically insulating solids, phonon-based heat conduction is usually inefficient[12] and such solids are considered thermal insulators (such as glass, plastic, rubber, ceramic, and rock). This is because in solids, atoms and molecules are locked into place relative to their neighbors and are not free to roam.

Fig. 4: The temperature-induced translational motion of particles in solids takes the form of phonons. Shown here are phonons with identical amplitudes but with wavelengths ranging from 2 to 12 molecules.

Metals, however, are not restricted to only phonon-based heat conduction. Thermal energy conducts through metals extraordinarily quickly because, instead of direct molecule-to-molecule collisions, the vast majority of thermal energy is mediated via very light, mobile conduction electrons. This is why there is a near-perfect correlation between metals' thermal conductivity and their electrical conductivity.[13] Conduction electrons imbue metals with their extraordinary conductivity because they are delocalized (i.e., not tied to a specific atom) and behave rather like a sort of quantum gas due to the effects of zero-point energy (for more on ZPE, see Note 1 below). Furthermore, electrons are relatively light, with a rest mass only 1/1836 that of a proton. This is about the same ratio as a .22 Short bullet (29 grains or 1.88 g) compared to the rifle that shoots it. As Isaac Newton wrote of his third law of motion, "Law #3: All forces occur in pairs, and these two forces are equal in magnitude and opposite in direction." However, a bullet accelerates faster than a rifle given an equal force. Since kinetic energy increases as the square of velocity, nearly all the kinetic energy goes into the bullet, not the rifle, even though both experience the same force from the expanding propellant gases. In the same manner, because they are much less massive, thermal energy is readily borne by mobile conduction electrons. Additionally, because they are delocalized and very fast, kinetic thermal energy conducts extremely quickly through metals with abundant conduction electrons.
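As a rough illustration of the metal conductivity correlation cited in footnote [13], the sketch below scales electrical conductivity by the quoted empirical ratio. The copper conductivity value is an assumption supplied for the example, and the result is an order-of-magnitude estimate, not measured data:

# Estimate a metal's thermal conductivity from its electrical conductivity,
# using the empirical ratio of 752 (W/(m*K)) per (MS/cm) from footnote [13].
RATIO = 752.0   # (W/(m*K)) / (MS/cm), with a quoted standard deviation of 81

def thermal_conductivity_estimate(sigma_ms_per_cm):
    return RATIO * sigma_ms_per_cm

# Copper's electrical conductivity is roughly 0.60 MS/cm (assumed here):
print(thermal_conductivity_estimate(0.60))   # ~450 W/(m*K); measured is ~400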


The diffusion of thermal energy: Black-body radiation


Thermal radiation is a byproduct of the collisions arising from various vibrational motions of atoms. These collisions cause the electrons of the atoms to emit thermal photons (known as black-body radiation). Photons are emitted anytime an electric charge is accelerated (as happens when the electron clouds of two atoms collide). Even individual molecules with internal temperatures greater than absolute zero emit black-body radiation from their atoms. In any bulk quantity of a substance at equilibrium, black-body photons are emitted across a range of wavelengths in a spectrum that has a bell-curve-like shape called a Planck curve (see graph in Fig. 5 at right). The top of a Planck curve (the peak emittance wavelength) is located in a particular part of the electromagnetic spectrum depending on the temperature of the black-body. Substances at extreme cryogenic temperatures emit at long radio wavelengths, whereas extremely hot temperatures produce short gamma rays (see Table of common temperatures).

Fig. 5: The spectrum of black-body radiation has the form of a Planck curve. A 5500 K black-body has a peak emittance wavelength of 527 nm. Compare the shape of this curve to that of a Maxwell distribution in Fig. 2 above.

Black-body radiation diffuses thermal energy throughout a substance as the photons are absorbed by neighboring atoms, transferring momentum in the process. Black-body photons also easily escape from a substance and can be absorbed by the ambient environment; kinetic energy is lost in the process. As established by the Stefan–Boltzmann law, the intensity of black-body radiation increases as the fourth power of absolute temperature. Thus, a black-body at 824 K (just short of glowing dull red) emits 60 times the radiant power it does at 296 K (room temperature). This is why one can so easily feel the radiant heat from hot objects at a distance. At higher temperatures, such as those found in an incandescent lamp, black-body radiation can be the principal mechanism by which thermal energy escapes a system.
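Both quantitative claims in this passage (the 527 nm peak and the 60-fold power ratio) follow from one-line formulas. A minimal sketch using commonly tabulated constants (the code itself is not part of the original article):

WIEN_B = 2.8977685e-3   # Wien displacement constant, m*K

def peak_wavelength_nm(t_kelvin):
    # Wien's displacement law: lambda_max = b / T, converted to nm
    return WIEN_B / t_kelvin * 1e9

def radiant_power_ratio(t_hot, t_cold):
    # Stefan-Boltzmann law: total radiant intensity scales as T^4
    return (t_hot / t_cold) ** 4

print(peak_wavelength_nm(5500))        # ~527 nm, as in Fig. 5
print(radiant_power_ratio(824, 296))   # ~60, as stated above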


Table of thermodynamic temperatures

The full range of the thermodynamic temperature scale, from absolute zero to absolute hot, and some notable points between them are shown in the table below.

Temperature point | kelvin | Peak emittance wavelength of black-body photons [14]
Absolute zero (precisely, by definition) | 0 K | ∞ (peak shifts to infinity)
Coldest measured temperature [15] | 450 pK | 6,400 kilometers
One millikelvin (precisely, by definition) | 0.001 K | 2.89777 meters (radio, FM band) [16]
Cosmic microwave background radiation | 2.72548(57) K | 1.063 mm (peak wavelength)
Water's triple point (precisely, by definition) | 273.16 K | 10,608.3 nm (long-wavelength I.R.)
Incandescent lamp A | 2500 K | 1160 nm (near infrared) B
Sun's visible surface C [17] | 5778 K | 501.5 nm (green light)
Lightning bolt's channel | 28,000 K | 100 nm (far ultraviolet light)
Sun's core | 16 MK | 0.18 nm (X-rays)
Thermonuclear weapon (peak temperature) [18] | 350 MK | 8.3×10⁻³ nm (gamma rays)
Sandia National Labs' Z machine D [19] | 2 GK | 1.4×10⁻³ nm (gamma rays)
Core of a high-mass star on its last day [20] | 3 GK | 1×10⁻³ nm (gamma rays)
Merging binary neutron star system [21] | 350 GK | 8×10⁻⁶ nm (gamma rays)
Gamma-ray burst progenitors [22] | 1 TK | 3×10⁻⁶ nm (gamma rays)
Relativistic Heavy Ion Collider [23] | 1 TK | 3×10⁻⁶ nm (gamma rays)
CERN's proton vs. nucleus collisions [24] | 10 TK | 3×10⁻⁷ nm (gamma rays)
Universe 5.391×10⁻⁴⁴ s after the Big Bang | 1.417×10³² K | 1.616×10⁻²⁶ nm (Planck frequency) [25]

A The 2500 K value is approximate.
B For a true blackbody (which tungsten filaments are not). Tungsten filaments' emissivity is greater at shorter wavelengths, which makes them appear whiter.
C Effective photosphere temperature.
D For a true blackbody (which the plasma was not). The Z machine's dominant emission originated from 40 MK electrons (soft x-ray emissions) within the plasma.

The heat of phase changes

The kinetic energy of particle motion is just one contributor to the total thermal energy in a substance; another is phase transitions, which are the potential energy of molecular bonds that can form in a substance as it cools (such as during condensing and freezing). The thermal energy required for a phase transition is called latent heat. This phenomenon may more easily be grasped by considering it in the reverse direction: latent heat is the energy required to break chemical bonds (such as during evaporation and melting). Almost everyone is familiar with the effects of phase transitions; for instance, steam at 100 °C can cause severe burns much faster than the 100 °C air from a hair dryer. This occurs because a large amount of latent heat is liberated as steam condenses into liquid water on the skin.

Fig. 6: Ice and water: two phases of the same substance.

Even though thermal energy is liberated or absorbed during phase transitions, pure chemical elements, compounds, and eutectic alloys exhibit no temperature change whatsoever while they undergo them (see Fig. 7, below right). Consider one particular type of phase transition: melting. When a solid is melting, crystal lattice chemical bonds are being broken apart; the substance is transitioning from what is known as a more ordered state to a less ordered state. In Fig. 7, the melting of ice is shown within the lower left box heading from blue to green. At one specific thermodynamic point, the melting point (which is 0 °C across a wide pressure range in the case of water), all the atoms or molecules are, on average, at the maximum energy threshold their chemical bonds can withstand without breaking away from the lattice. Chemical bonds are all-or-nothing forces: they either hold fast or break; there is no in-between state. Consequently, when a substance is at its melting point, every joule of added thermal energy only breaks the bonds of a specific quantity of its atoms or molecules,[26] converting them into a liquid of precisely the same temperature; no kinetic energy is added to translational motion (which is what gives substances their temperature). The effect is rather like popcorn: at a certain temperature, additional thermal energy can't make the kernels any hotter until the transition (popping) is complete. If the process is reversed (as in the freezing of a liquid), thermal energy must be removed from a substance.

Fig. 7: Water's temperature does not change during phase transitions as heat flows into or out of it. The total heat capacity of a mole of water in its liquid phase (the green line) is 7.5507 kJ.


As stated above, the thermal energy required for a phase transition is called latent heat. In the specific cases of melting and freezing, it's called enthalpy of fusion or heat of fusion. If the molecular bonds in a crystal lattice are strong, the heat of fusion can be relatively great, typically in the range of 6 to 30 kJ per mole for water and most of the metallic elements.[27] If the substance is one of the monatomic gases (which have little tendency to form molecular bonds), the heat of fusion is more modest, ranging from 0.021 to 2.3 kJ per mole.[28]

Relatively speaking, phase transitions can be truly energetic events. To completely melt ice at 0 °C into water at 0 °C, one must add roughly 80 times the thermal energy as is required to increase the temperature of the same mass of liquid water by one degree Celsius. The metals' ratios are even greater, typically in the range of 400 to 1200 times.[29] And the phase transition of boiling is much more energetic than freezing. For instance, the energy required to completely boil or vaporize water (what is known as enthalpy of vaporization) is roughly 540 times that required for a one-degree increase.[30]

Water's sizable enthalpy of vaporization is why one's skin can be burned so quickly as steam condenses on it (heading from red to green in Fig. 7 above). In the opposite direction, this is why one's skin feels cool as liquid water on it evaporates (a process that occurs at a sub-ambient wet-bulb temperature that is dependent on relative humidity). Water's highly energetic enthalpy of vaporization is also an important factor underlying why solar pool covers (floating, insulated blankets that cover swimming pools when not in use) are so effective at reducing heating costs: they prevent evaporation. For instance, the evaporation of just 20 mm of water from a 1.29-meter-deep pool chills its water 8.4 degrees Celsius (15.1 °F).
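The 8.4 °C figure can be checked against the water properties cited in footnote [30]. A minimal sketch (treating the evaporated layer as drawing its latent heat uniformly from the full 1.29 m depth; this simple model is an assumption of the example):

# Check the solar-pool-cover example using water properties from footnote [30].
L_VAP = 40.657e3 / 18.0153   # enthalpy of vaporization, J per gram (~2257 J/g at 100 C)
CP = 75.327 / 18.0153        # specific heat of liquid water, J/(g*K) (~4.18)

evaporated_mm, depth_mm = 20.0, 1290.0
q_per_gram = L_VAP * evaporated_mm / depth_mm   # heat removed per gram of pool water
print(q_per_gram / CP)                          # temperature drop: ~8.4 deg C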

Internal energy

The total energy of all particle motion, translational and internal, including that of conduction electrons, plus the potential energy of phase changes, plus zero-point energy, comprises the internal energy of a substance.

Internal energy at absolute zero

As a substance cools, different forms of internal energy and their related effects simultaneously decrease in magnitude: the latent heat of available phase transitions is liberated as a substance changes from a less ordered state to a more ordered state; the translational motions of atoms and molecules diminish (their kinetic temperature decreases); the internal motions of molecules diminish (their internal temperature decreases); conduction electrons (if the substance is an electrical conductor) travel somewhat slower;[31] and black-body radiation's peak emittance wavelength increases (the photons' energy decreases). When the particles of a substance are as close as possible to complete rest and retain only ZPE-induced quantum mechanical motion, the substance is at the temperature of absolute zero (T = 0).

Note that whereas absolute zero is the point of zero thermodynamic temperature and is also the point at which the particle constituents of matter have minimal motion, absolute zero is not necessarily the point at which a substance contains zero thermal energy; one must be very precise with what one means by internal energy. Often, all the phase changes that can occur in a substance will have occurred by the time it reaches absolute zero. However, this is not always the case. Notably, T = 0 helium remains liquid at room pressure and must be under a pressure of at least 25 bar (2.5 MPa) to crystallize. This is because helium's heat of fusion (the energy required to melt helium ice) is so low (only 21 joules per mole) that the motion-inducing effect of zero-point energy is sufficient to prevent it from freezing at lower pressures. Only if under at least 25 bar (2.5 MPa) of pressure will this latent thermal energy be liberated as helium freezes while approaching absolute zero.

Fig. 8: When many of the chemical elements, such as the noble gases and platinum-group metals, freeze to a solid (the most ordered state of matter) their crystal structures have a closest-packed arrangement. This yields the greatest possible packing density and the lowest energy state.

A further complication is that many solids change their crystal structure to more compact arrangements at extremely high pressures (up to millions of bars, or hundreds of gigapascals). These are known as solid-solid phase transitions, wherein latent heat is liberated as a crystal lattice changes to a more thermodynamically favorable, compact one.

The above complexities make for rather cumbersome blanket statements regarding the internal energy in T = 0 substances. Regardless of pressure though, what can be said is that at absolute zero, all solids with a lowest-energy crystal lattice, such as those with a closest-packed arrangement (see Fig. 8, above left), contain minimal internal energy, retaining only that due to the ever-present background of zero-point energy.[32] One can also say that for a given substance at constant pressure, absolute zero is the point of lowest enthalpy (a measure of work potential that takes internal energy, pressure, and volume into consideration).[33] Lastly, it is always true to say that all T = 0 substances contain zero kinetic thermal energy.


Practical applications for thermodynamic temperature


Thermodynamic temperature is useful not only for scientists, it can also be useful for lay-people in many disciplines involving gases. By expressing variables in absolute terms and applying Gay–Lussac's law of temperature/pressure proportionality, solutions to everyday problems are straightforward; for instance, calculating how a temperature change affects the pressure inside an automobile tire. If the tire has a relatively cold pressure of 200 kPa-gage, then in absolute terms (relative to a vacuum), its pressure is 300 kPa-absolute.[34][35][36] Room temperature ("cold" in tire terms) is 296 K. If the tire pressure is 20 °C hotter (20 kelvins), the solution is calculated as 316 K / 296 K = 6.8% greater thermodynamic temperature and absolute pressure; that is, a pressure of 320 kPa-absolute, which is 220 kPa-gage.

Helium-4 is a superfluid when its temperature is no more than 2.17 kelvins, i.e. 2.17 "Celsius degrees" above absolute zero, the starting point in the measurement of thermodynamic temperature.
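A minimal sketch of the tire calculation (the 100 kPa ambient pressure is an assumption implied by the example's round numbers):

ATM = 100.0   # kPa; assumed ambient pressure so that 200 kPa-gage = 300 kPa-absolute

def hot_tire_pressure_gage(p_cold_gage_kpa, t_cold_k, t_hot_k):
    # Gay-Lussac's law: absolute pressure scales with absolute temperature.
    p_cold_abs = p_cold_gage_kpa + ATM
    p_hot_abs = p_cold_abs * t_hot_k / t_cold_k
    return p_hot_abs - ATM

print(hot_tire_pressure_gage(200.0, 296.0, 316.0))   # ~220 kPa-gage, as above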


Definition of thermodynamic temperature


The thermodynamic temperature is defined by the second law of thermodynamics and its consequences. The thermodynamic temperature can be shown to have special properties, and in particular can be seen to be uniquely defined (up to some constant multiplicative factor) by considering the efficiency of idealized heat engines. Thus the ratio T2/T1 of two temperatures T1 and T2 is the same in all absolute scales. Strictly speaking, the temperature of a system is well-defined only if it is in thermal equilibrium. From a microscopic viewpoint, its particles (atoms, molecules, electrons, photons) are at equilibrium, so that their energies obey a Boltzmann distribution (or its quantum mechanical counterpart). There are many possible scales of temperature, derived from a variety of observations of physical phenomena. Loosely stated, temperature differences control the flow of heat between two systems, and the universe as a whole, as with any natural system, tends to progress so as to maximize entropy. This suggests that there should be a relationship between temperature and entropy. To elucidate this, consider first the relationship between heat, work and temperature. One way to study this is to analyze a heat engine, which is a device for converting heat into mechanical work, such as the Carnot heat engine. Such a heat engine functions by using a temperature gradient between a high temperature TH and a low temperature TC to generate work, and the work done (per cycle, say) by the heat engine is equal to the difference between the thermal energy qH put into the system at the high temperature and the heat qC ejected at the low temperature (in that cycle). The efficiency of the engine is the work divided by the heat put into the system, or

    efficiency = wcy/qH = (qH − qC)/qH = 1 − qC/qH        (1)

where wcy is the work done per cycle. Thus the efficiency depends only on qC/qH. Carnot's theorem states that all reversible engines operating between the same heat reservoirs are equally efficient. Thus, any reversible heat engine operating between temperatures T1 and T2 must have the same efficiency; that is to say, the efficiency is a function of the temperatures alone:

    qC/qH = f(TH, TC)        (2)

In addition, a reversible heat engine operating between temperatures T1 and T3 must have the same efficiency as one consisting of two cycles, one between T1 and another (intermediate) temperature T2, and the second between T2 and T3. A quick way to see this is that should this not be the case, then energy (in the form of Q) will be wasted or gained, resulting in different overall efficiencies every time a cycle is split into component cycles; clearly a cycle can be composed of any number of smaller cycles. With this understanding of Q1, Q2 and Q3, we note also that, mathematically,

    f(T1, T3) = f(T1, T2) f(T2, T3)

But the first function is NOT a function of T2; therefore the product of the final two functions MUST result in the removal of T2 as a variable. The only way is therefore to define the function f as follows:

    f(T1, T2) = g(T2)/g(T1)

and

    f(T2, T3) = g(T3)/g(T2),

so that

    f(T1, T3) = g(T3)/g(T1).

That is, the ratio of heat exchanged is a function of the respective temperatures at which the exchanges occur. We can choose any monotonic function for our g(T); it is a matter of convenience and convention that we choose g(T) = T. Choosing then one fixed reference temperature (i.e. the triple point of water), we establish the thermodynamic temperature scale. Note that such a definition coincides with that of the ideal gas derivation; also, it is this definition of the thermodynamic temperature that enables us to represent the Carnot efficiency in terms of TH and TC, and hence derive that the (complete) Carnot cycle is isentropic:

    qC/qH = f(TH, TC) = TC/TH        (3)


Substituting this back into our first formula for efficiency yields a relationship in terms of temperature:

    efficiency = 1 − qC/qH = 1 − TC/TH        (4)

Notice that for TC = 0 the efficiency is 100%, and that the efficiency becomes greater than 100% for TC < 0, cases which are unrealistic. Subtracting the right hand side of Equation 4 from the middle portion and rearranging gives

    qH/TH − qC/TC = 0,

where the negative sign indicates heat ejected from the system. The generalization of this equation is Clausius's theorem, which suggests the existence of a state function S (i.e., a function which depends only on the state of the system, not on how it reached that state) defined (up to an additive constant) by

    S = ∫ dqrev/T        (5)

where the subscript indicates heat transfer in a reversible process. The function S corresponds to the entropy of the system, mentioned previously, and the change of S around any cycle is zero (as is necessary for any state function). Equation 5 can be rearranged to get an alternative definition for temperature in terms of entropy and heat (to avoid a logic loop, we should first define entropy through statistical mechanics):

    T = dqrev/dS

For a system in which the entropy S is a function S(E) of its energy E, the thermodynamic temperature T is therefore given by

    1/T = dS/dE,

so that the reciprocal of the thermodynamic temperature is the rate of increase of entropy with energy.
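As a quick numerical illustration of Equation 4, a minimal sketch (the reservoir temperatures are illustrative, not from the article):

def carnot_efficiency(t_hot_k, t_cold_k):
    # Maximum efficiency of a reversible engine between two reservoirs (Eq. 4).
    return 1.0 - t_cold_k / t_hot_k

# An engine running between, say, 800 K and 300 K reservoirs:
print(carnot_efficiency(800.0, 300.0))   # 0.625, i.e. at most 62.5% efficient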

History
Ca. 485 BC: Parmenides in his treatise On Nature postulated the existence of primum frigidum, a hypothetical elementary substance that was the source of all cooling or cold in the world.[37]

1702–1703: Guillaume Amontons (1663–1705) published two papers that may be used to credit him as being the first researcher to deduce the existence of a fundamental (thermodynamic) temperature scale featuring an absolute zero. He made the discovery while endeavoring to improve upon the air thermometers in use at the time. His J-tube thermometers comprised a mercury column that was supported by a fixed mass of air entrapped within the sensing portion of the thermometer. In thermodynamic terms, his thermometers relied upon the volume/temperature relationship of gas under constant pressure. His measurements of the boiling point of water and the melting point of ice showed that regardless of the mass of air trapped inside his thermometers or the weight of mercury the air was supporting, the reduction in air volume at the ice point was always the same ratio. This observation led him to posit that a sufficient reduction in temperature would reduce the air volume to zero. In fact,

his calculations projected that absolute zero was equivalent to −240 °C, only 33.15 degrees short of the true value of −273.15 °C.

1742: Anders Celsius (1701–1744) created a "backwards" version of the modern Celsius temperature scale. In Celsius's original scale, zero represented the boiling point of water and 100 represented the melting point of ice. In his paper Observations of two persistent degrees on a thermometer, he recounted his experiments showing that ice's melting point was effectively unaffected by pressure. He also determined with remarkable precision how water's boiling point varied as a function of atmospheric pressure. He proposed that zero on his temperature scale (water's boiling point) would be calibrated at the mean barometric pressure at mean sea level.

1744:



Coincident with the death of Anders Celsius, the famous botanist Carolus Linnaeus (1707–1778) effectively reversed[38] Celsius's scale upon receipt of his first thermometer featuring a scale where zero represented the melting point of ice and 100 represented water's boiling point. The custom-made Linnaeus thermometer, for use in his greenhouses, was made by Daniel Ekström, Sweden's leading maker of scientific instruments at the time. For the next 204 years, the scientific and thermometry communities worldwide referred to this scale as the centigrade scale. Temperatures on the centigrade scale were often reported simply as degrees or, when greater specificity was desired, degrees centigrade. The symbol for temperature values on this scale was °C (in several formats over the years). Because the term centigrade was also the French-language name for a unit of angular measurement (one-hundredth of a right angle) and had a similar connotation in other languages, the term "centesimal degree" was used when very precise, unambiguous language was required by international standards bodies such as the Bureau international des poids et mesures (BIPM). The 9th CGPM (General Conference on Weights and Measures, Conférence générale des poids et mesures) and the CIPM (International Committee for Weights and Measures, Comité international des poids et mesures) formally adopted[39] degree Celsius (symbol: °C) in 1948.[40]

1777: In his book Pyrometrie (Berlin: Haude & Spener [41], 1779), completed four months before his death, Johann Heinrich Lambert (1728–1777), sometimes incorrectly referred to as Joseph Lambert, proposed an absolute temperature scale based on the pressure/temperature relationship of a fixed volume of gas. This is distinct from the volume/temperature relationship of gas under constant pressure that Guillaume Amontons discovered 75 years earlier. Lambert stated that absolute zero was the point where a simple straight-line extrapolation reached zero gas pressure and was equal to −270 °C.

Circa 1787: Notwithstanding the work of Guillaume Amontons 85 years earlier, Jacques Alexandre César Charles (1746–1823) is often credited with discovering, but not publishing, that the volume of a gas under constant pressure is proportional to its absolute temperature. The formula he created was V1/T1 = V2/T2.

1802: Joseph Louis Gay-Lussac (1778–1850) published work (acknowledging the unpublished lab notes of Jacques Charles fifteen years earlier) describing how the volume of gas under constant pressure changes linearly with its absolute (thermodynamic) temperature. This behavior is called Charles's law and is one of the gas laws. His are the first known formulas to use the number 273 for the expansion coefficient of gas relative to the melting point of ice (indicating that absolute zero was equivalent to −273 °C).

1848: William Thomson (1824–1907), also known as Lord Kelvin, wrote in his paper On an Absolute Thermometric Scale [42] of the need for a scale whereby infinite cold (absolute zero) was the scale's null point, and which used the degree Celsius for its unit increment. Like Gay-Lussac, Thomson calculated that absolute zero was equivalent to −273 °C on the air thermometers of the time. This absolute scale is known today as the Kelvin thermodynamic temperature scale. It's noteworthy that Thomson's value of −273 was actually derived from 0.00366, which was the accepted expansion coefficient of gas per degree Celsius relative to the ice point. The inverse of 0.00366 expressed to five significant digits is −273.22 °C, which is remarkably close to the true value of −273.15 °C.

1859: William John Macquorn Rankine (1820–1872) proposed a thermodynamic temperature scale similar to William Thomson's but which used the degree Fahrenheit for its unit increment. This absolute scale is known today as the Rankine thermodynamic temperature scale.

1877–1884: Ludwig Boltzmann (1844–1906) made major contributions to thermodynamics through an understanding of the role that particle kinetics and black-body radiation played. His name is now attached to several of the formulas used today in thermodynamics.

Circa 1930s: Gas thermometry experiments carefully calibrated to the melting point of ice and boiling point of water showed that absolute zero was equivalent to −273.15 °C.

1948: Resolution 3 [43] of the 9th CGPM (Conférence générale des poids et mesures, also known as the General Conference on Weights and Measures) fixed the triple point of water at precisely 0.01 °C. At this time, the triple point still had no formal definition for its equivalent kelvin value, which the resolution declared "will be fixed at a later date". The implication is that if the value of absolute zero measured in the 1930s was truly −273.15 °C, then the triple point of water (0.01 °C) was equivalent to 273.16 K. Additionally, both the CIPM (Comité international des poids et mesures, also known as the International Committee for Weights and Measures) and the CGPM formally adopted [44] the name Celsius for the degree Celsius and the Celsius temperature scale.

1954: Resolution 3 [45] of the 10th CGPM gave the Kelvin scale its modern definition by choosing the triple point of water as its second defining point and assigned it a temperature of precisely 273.16 kelvin (what was actually written "273.16 degrees Kelvin" at the time). This, in combination with Resolution 3 of the 9th CGPM, had the effect of defining absolute zero as being precisely zero kelvin and −273.15 °C.

1967/1968: Resolution 3 [46] of the 13th CGPM renamed the unit increment of thermodynamic temperature kelvin, symbol K, replacing degree absolute, symbol °K. Further, feeling it useful to more explicitly define the magnitude of the unit increment, the 13th CGPM also decided in Resolution 4 [47] that "The kelvin, unit of thermodynamic temperature, is the fraction 1/273.16 of the thermodynamic temperature of the triple point of water".

2005: The CIPM (Comité international des poids et mesures, also known as the International Committee for Weights and Measures) affirmed [48] that for the purposes of delineating the temperature of the triple point of water, the definition of the Kelvin thermodynamic temperature scale would refer to water having an isotopic composition defined as being precisely equal to the nominal specification of Vienna Standard Mean Ocean Water.
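The extrapolations recounted in the 1777 and 1848 entries amount to one line of arithmetic. A sketch using Thomson's 1848 expansion coefficient (the code is an illustration, not a historical reconstruction of his working method):

# Locate absolute zero by extrapolating the gas pressure-temperature line
# to zero pressure, using the then-accepted expansion coefficient per deg C.
expansion_coefficient = 0.00366   # per deg C, relative to the ice point (1848 value)
absolute_zero_c = -1.0 / expansion_coefficient
print(round(absolute_zero_c, 2))  # -273.22 deg C, vs. the true -273.15 deg C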


Notes
In the following notes, wherever numeric equalities are shown in concise form, such as 1.85487(14)×10⁴³, the two digits between the parentheses denote the uncertainty at 1-σ (1 standard deviation, 68% confidence level) in the two least significant digits of the significand.
[1] Rankine, W.J.M., "A manual of the steam engine and other prime movers", Richard Griffin and Co., London (1859), pp. 306–307.
[2] Kelvin, "Heat", Adam and Charles Black, Edinburgh (1880), p. 39.
[3] Absolute zero's relationship to zero-point energy:

While scientists are achieving temperatures ever closer to absolute zero, they cannot fully achieve a state of zero temperature. However, even if scientists could remove all kinetic thermal energy from matter, quantum mechanical zero-point energy (ZPE) causes particle motion that can never be eliminated. Encyclopædia Britannica Online defines zero-point energy (http://britannica.com/eb/article-9078341) as the "vibrational energy that molecules retain even at the absolute zero of temperature". ZPE is the result of all-pervasive energy fields in the vacuum between the fundamental particles of nature; it is responsible for the Casimir effect and other phenomena. See Zero Point Energy and Zero Point Field (http://calphysics.org/zpe.html). See also Solid Helium (http://www.phys.ualberta.ca/~therman/lowtemp/projects1.htm) by the University of Alberta's Department of Physics to learn more about ZPE's effect on Bose–Einstein condensates of helium.

Although absolute zero (T = 0) is not a state of zero molecular motion, it is the point of zero temperature and, in accordance with the Boltzmann constant, is also the point of zero particle kinetic energy and zero kinetic velocity. To understand how atoms can have zero kinetic velocity and simultaneously be vibrating due to ZPE, consider the following thought experiment: two T = 0 helium atoms in zero gravity are carefully positioned and observed to have an average separation of 620 pm between them (a gap of ten atomic diameters). It's an "average" separation because ZPE causes them to jostle about their fixed positions. Then one atom is given a kinetic kick of precisely 83 yoctokelvins (1 yK = 1×10⁻²⁴ K). This is done in a way that directs this atom's velocity vector at the other atom. With 83 yK of kinetic energy between them, the 620 pm gap through their common barycenter would close at a rate of 719 pm/s and they would collide after 0.862 second. This is the same speed as shown in the Fig. 1 animation above. Before being given the kinetic kick, both T = 0 atoms had zero kinetic energy and zero kinetic velocity because they could persist indefinitely in that state and relative orientation even though both were being jostled by ZPE. At T = 0, no kinetic energy is available for transfer to other systems.

The Boltzmann constant and its related formulas describe the realm of particle kinetics and velocity vectors, whereas ZPE is an energy field that jostles particles in ways described by the mathematics of quantum mechanics. In atomic and molecular collisions in gases, ZPE introduces a degree of chaos, i.e., unpredictability, to rebound kinetics; it is as likely that there will be less ZPE-induced particle motion after a given collision as more. This random nature of ZPE is why it has no net effect upon either the pressure or volume of any bulk quantity (a statistically significant quantity of particles) of T > 0 K gases. However, in T = 0 condensed matter, e.g., solids and liquids, ZPE causes inter-atomic jostling where atoms would otherwise be perfectly stationary. Inasmuch as the real-world effects that ZPE has on substances can vary as one alters a thermodynamic system (for example, due to ZPE, helium won't freeze unless under a pressure of at least 25 bar or 2.5 MPa), ZPE is very much a form of thermal energy and may properly be included when tallying a substance's internal energy.
Note too that absolute zero serves as the baseline atop which thermodynamics and its equations are founded because they deal with the exchange of thermal energy between "systems" (a plurality of particles and fields modeled as an average). Accordingly, one may examine ZPE-induced particle motion within a system that is at absolute zero, but there can never be a net outflow of thermal energy from such a system. Also, the peak emittance wavelength of black-body radiation shifts to infinity at absolute zero; indeed, a peak no longer exists and black-body photons can no longer escape. Because of ZPE, however, virtual photons are still emitted at T = 0. Such photons are called "virtual" because they can't be intercepted and observed. Furthermore, this zero-point radiation has a unique zero-point spectrum. However, even though a T = 0 system emits zero-point radiation, no net heat flow Q out of such a system can occur because if the surrounding environment is at a temperature greater than T = 0, heat will flow inward, and if the surrounding environment is at T = 0, there will be an equal flux of ZP radiation both inward and outward. A similar Q equilibrium exists at T = 0 with the ZPE-induced spontaneous emission of photons (which is more properly called a stimulated emission in this context). The graph at upper right illustrates the relationship of absolute zero to zero-point energy. The graph also helps in the understanding of how zero-point energy got its name: it is the vibrational energy matter retains at the zero kelvin point. Derivation of the classical electromagnetic zero-point radiation spectrum via a classical thermodynamic operation involving van der Waals forces (http://pra.aps.org/abstract/PRA/v42/i4/p1847_1), Daniel C. Cole, Physical Review A, 42 (1990) 1847.
[4] http://www1.bipm.org/en/si/si_brochure/chapter2/2-1/2-1-1/kelvin.html
[5] At non-relativistic temperatures of less than about 30 GK, classical mechanics is sufficient to calculate the velocity of particles. At 30 GK, individual neutrons (the constituent of neutron stars and one of the few materials in the universe with temperatures in this range) have a γ = 1.0042 (gamma, or Lorentz factor). Thus, the classic Newtonian formula for kinetic energy is in error less than half a percent for temperatures less than 30 GK.
[6] Even room-temperature air has an average molecular translational speed (not vector-isolated velocity) of 1822 km/hour. This is relatively fast for something the size of a molecule considering there are roughly 2.5×10¹⁶ of them crowded into a single cubic millimeter. Assumptions: Average molecular weight of wet air = 28.838 g/mol and T = 296.15 K. Assumption's primary variables: An altitude of 194 meters above mean sea level (the worldwide median altitude of human habitation), an indoor temperature of 23 °C, a dewpoint of 9 °C (40.85% relative humidity), and 760 mmHg (101.325 kPa) sea level-corrected barometric pressure.
[7] Adiabatic Cooling of Cesium to 700 nK in an Optical Lattice (http://www.science.uva.nl/research/aplp/eprints/KasPhiRol95.pdf), A. Kastberg et al., Physical Review Letters 74 (1995) 1542. It's noteworthy that a record cold temperature of 450 pK in a Bose–Einstein condensate of sodium atoms (achieved by A. E. Leanhardt et al. of MIT) equates to an average vector-isolated atom velocity of 0.4 mm/s and an average atom speed of 0.7 mm/s.
[8] The rate of translational motion of atoms and molecules is calculated based on thermodynamic temperature as follows:

    ṽ = sqrt(kB T / m)


where:
    ṽ is the vector-isolated mean velocity of translational particle motion in m/s
    kB is the Boltzmann constant = 1.3806504(24)×10⁻²³ J/K
    T is the thermodynamic temperature in kelvins
    m is the molecular mass of substance in kilograms

In the above formula, molecular mass, m, in kilograms per particle is the quotient of a substance's molar mass (also known as atomic weight, atomic mass, relative atomic mass, and unified atomic mass units) in g/mol or daltons divided by 6.02214179(30)×10²⁶ (which is the Avogadro constant times one thousand). For diatomic molecules such as H2, N2, and O2, multiply atomic weight by two before plugging it into the above formula. The mean speed (not vector-isolated velocity) of an atom or molecule along any arbitrary path is calculated as follows:

    v̄ = ṽ √3

where:
    v̄ is the mean speed of translational particle motion in m/s

Note that the mean energy of the translational motions of a substance's constituent particles correlates to their mean speed, not velocity. Thus, substituting v̄ for v in the classic formula for kinetic energy, Ek = (1/2)mv², produces precisely the same value as does Emean = (3/2)kB T (as shown in the section titled The nature of kinetic energy, translational motion, and temperature).

Note too that the Boltzmann constant and its related formulas establish that absolute zero is the point of both zero kinetic energy of particle motion and zero kinetic velocity (see also Note 1 above).
[9] The internal degrees of freedom of molecules cause their external surfaces to vibrate and can also produce overall spinning motions (what can be likened to the jiggling and spinning of an otherwise stationary water balloon). If one examines a single molecule as it impacts a container's wall, some of the kinetic energy borne in the molecule's internal degrees of freedom can constructively add to its translational motion during the instant of the collision, and extra kinetic energy will be transferred into the container's wall. This would induce an extra, localized, impulse-like contribution to the average pressure on the container. However, since the internal motions of molecules are random, they have an equal probability of destructively interfering with translational motion during a collision with a container's walls or another molecule. Averaged across any bulk quantity of a gas, the internal thermal motions of molecules have zero net effect upon the temperature, pressure, or volume of a gas. Molecules' internal degrees of freedom simply provide additional locations where internal energy is stored. This is precisely why molecular-based gases have greater specific heat capacity than monatomic gases (where additional thermal energy must be added to achieve a given temperature rise).
[10] When measured at constant volume, since different amounts of work must be performed if measured at constant pressure. Nitrogen's Cv (100 kPa, 20 °C) equals 20.8 J·mol⁻¹·K⁻¹ vs. the monatomic gases, which equal 12.4717 J·mol⁻¹·K⁻¹. Citations: W.H. Freeman's (http://www.whfreeman.com/) Physical Chemistry, Part 3: Change (422 kB PDF, here (http://www.whfreeman.com/college/pdfs/pchem8e/PC8eC21.pdf)), Exercise 21.20b, p. 787. Also Georgia State University's (http://www.gsu.edu/) Molar Specific Heats of Gases (http://hyperphysics.phy-astr.gsu.edu/hbase/kinetic/shegas.html).
[11] The speed at which thermal energy equalizes throughout the volume of a gas is very rapid. However, since gases have extremely low density relative to solids, the heat flux (the thermal power passing per area) through gases is comparatively low. This is why the dead-air spaces in multi-pane windows have insulating qualities.
[12] Diamond is a notable exception. Highly quantized modes of phonon vibration occur in its rigid crystal lattice. Therefore, not only does diamond have exceptionally poor specific heat capacity, it also has exceptionally high thermal conductivity.
[13] Correlation is 752 (W·m⁻¹·K⁻¹)/(MS·cm⁻¹), σ = 81, through a 7:1 range in conductivity. Value and standard deviation based on data for Ag, Cu, Au, Al, Ca, Be, Mg, Rh, Ir, Zn, Co, Ni, Os, Fe, Pa, Pt, and Sn. Citation: Data from CRC Handbook of Chemistry and Physics, 1st Student Edition and this link (http://www.webelements.com/) to Web Elements' home page.
[16] Thepeak emittance wavelength of 2.89777m is a frequency of 103.456MHz [17] Measurementwas made in 2002 and has an uncertainty of 3 kelvins. A 1989 measurement (http:/ / www. kis. uni-freiburg. de/ ~hw/ astroandsolartitles. html) produced a value of 5777 2.5K. Citation: Overview of the Sun (Chapter 1 lecture notes on Solar Physics by Division of Theoretical Physics, Dept. of Physical Sciences, University of Helsinki). Download paper (252kB PDF (http:/ / theory. physics. helsinki. fi/ ~sol_phys/ Sol0601. pdf)) [18] The350MK value is the maximum peak fusion fuel temperature in a thermonuclear weapon of the TellerUlam configuration (commonly known as a hydrogen bomb). Peak temperatures in Gadget-style fission bomb cores (commonly known as an atomic bomb) are in the range of 50 to 100MK. Citation: Nuclear Weapons Frequently Asked Questions, 3.2.5 Matter At High Temperatures. Link to relevant Web page. (http:/ / nuclearweaponarchive. org/ Nwfaq/ Nfaq3. html#nfaq3. 2) All referenced data was compiled from publicly available sources. [19] Peaktemperature for a bulk quantity of matter was achieved by a pulsed-power machine used in fusion physics experiments. The term bulk quantity draws a distinction from collisions in particle accelerators wherein high temperature applies only to the debris from two subatomic particles or nuclei at any given instant. The >2GK temperature was achieved over a period of about ten nanoseconds during shot Z1137. In fact, the iron and manganese ions in the plasma averaged 3.58 0.41GK (309 35keV) for 3ns (ns 112 through 115). Citation: Ion Viscous Heating in a Magnetohydrodynamically Unstable Z Pinch at Over 2 109 Kelvin, M. G. Haines et al., Physical Review Letters 96, Issue 7, id. 075003. Link to Sandias news release. (http:/ / www. sandia. gov/ news-center/ news-releases/ 2006/ physics-astron/ hottest-z-output. html) [20] Coretemperature of a highmass (>811 solar masses) star after it leaves the main sequence on the HertzsprungRussell diagram and begins the alpha process (which lasts one day) of fusing silicon28 into heavier elements in the following steps: sulfur32 argon36 calcium40 titanium44 chromium48 iron52 nickel56. Within minutes of finishing the sequence, the star explodes as a TypeII supernova. Citation: Stellar Evolution: The Life and Death of Our Luminous Neighbors (by Arthur Holland and Mark Williams of the University of Michigan). Link to Web site (http:/ / www. umich. edu/ ~gs265/ star. htm). More informative links can be found here (http:/ / schools. qps. org/ hermanga/ images/ Astronomy/ chapter_21___stellar_explosions. htm), and here (http:/ / cosserv3. fau. edu/ ~cis/ AST2002/ Lectures/ C13/ Trans/ Trans. html), and a concise treatise on stars by NASA is here (http:/ / www. nasa. gov/ worldbook/ star_worldbook. html). [21] Basedon a computer model that predicted a peak internal temperature of 30 MeV (350GK) during the merger of a binary neutron star system (which produces a gammaray burst). The neutron stars in the model were 1.2 and 1.6 solar masses respectively, were roughly 20km in diameter, and were orbiting around their barycenter (common center of mass) at about 390Hz during the last several milliseconds before they completely merged. The 350GK portion was a small volume located at the pairs developing common core and varied from roughly 1 to 7km across over a time span of around 5ms. Imagine two city-sized objects of unimaginable density orbiting each other at the same frequency as the G4 musical note (the 28th white key on a piano). 
It's also noteworthy that at 350 GK, the average neutron has a vibrational speed of 30% the speed of light and a relativistic mass (m) 5% greater than its rest mass (m0). Citation: Torus Formation in Neutron Star Mergers and Well-Localized Short Gamma-Ray Bursts, R. Oechslin et al. of Max Planck Institute for Astrophysics (http://www.mpa-garching.mpg.de/), arXiv:astro-ph/0507099 v2, 22 Feb. 2006. Download paper (725 kB PDF (http://arxiv.org/pdf/astro-ph/0507099.pdf)) (from Cornell University Library's arXiv.org server). To view a browser-based summary of the research, click here (http://www.mpa-garching.mpg.de/mpa/research/current_research/hl2005-10/hl2005-10-en.html).
[22] NewScientist: Eight extremes: The hottest thing in the universe (http://www.newscientist.com/article/mg20928026.300-eight-extremes-the-hottest-thing-in-the-universe.html), 07 March 2011, which stated "While the details of this process are currently unknown, it must involve a fireball of relativistic particles heated to something in the region of a trillion kelvin."
[23] Results of research by Stefan Bathe using the PHENIX (http://www.phenix.bnl.gov/) detector on the Relativistic Heavy Ion Collider (http://www.bnl.gov/rhic/) at Brookhaven National Laboratory (http://www.bnl.gov/world/) in Upton, New York, U.S.A. Bathe has studied gold-gold, deuteron-gold, and proton-proton collisions to test the theory of quantum chromodynamics, the theory of the strong force that holds atomic nuclei together. Link to news release (http://www.bnl.gov/bnlweb/pubaf/pr/PR_display.asp?prID=06-56).
[24] Citation: How do physicists study particles? (http://public.web.cern.ch/public/Content/Chapters/AboutCERN/HowStudyPrtcles/HowSeePrtcles/HowSeePrtcles-en.html) by CERN (http://public.web.cern.ch/public/Welcome.html).
[25] The Planck frequency equals 1.85487(14)×10⁴³ Hz (which is the reciprocal of one Planck time). Photons at the Planck frequency have a wavelength of one Planck length. The Planck temperature of 1.41679(11)×10³² K equates to a calculated λmax = b/T wavelength of 2.04531(16)×10⁻²⁶ nm. However, the actual peak emittance wavelength quantizes to the Planck length of 1.61624(12)×10⁻²⁶ nm.
[26] Water's enthalpy of fusion (0 °C, 101.325 kPa) equates to about 9.98×10⁻²¹ J per molecule, so adding one joule of thermal energy to 0 °C water ice causes about 1.0×10²⁰ water molecules to break away from the crystal lattice and become liquid.
[27] Water's enthalpy of fusion is 6.0095 kJ·mol⁻¹ (0 °C, 101.325 kPa). Citation: Water Structure and Science, Water Properties, Enthalpy of fusion, (0 °C, 101.325 kPa) (by London South Bank University). Link to Web site (http://www.lsbu.ac.uk/water/data.html). The only metals with enthalpies of fusion not in the range of 6–30 kJ·mol⁻¹ are (on the high side): Ta, W, and Re; and (on the low side) most of the group 1 (alkaline) metals plus Ga, In, Hg, Tl, Pb, and Np. Citation: This link (http://www.webelements.com/) to Web Elements' home page.
[28] Xenon value citation: This link (http://www.webelements.com/webelements/elements/text/Xe/heat.html) to WebElements' xenon data (available values range from 2.3 to 3.1 kJ/mol). It is also noteworthy that helium's heat of fusion of only 0.021 kJ/mol is so weak of a bonding force that zero-point energy prevents helium from freezing unless it is under a pressure of at least 25 atmospheres.
[29] CRC Handbook of Chemistry and Physics, 1st Student Edition and Web Elements (http://www.webelements.com/).
[30] H2O specific heat capacity, Cp = 0.075327 kJ·mol^−1·K^−1 (25 °C); enthalpy of fusion = 6.0095 kJ/mol (0 °C, 101.325 kPa); enthalpy of vaporization (liquid) = 40.657 kJ/mol (100 °C). Citation: Water Structure and Science, Water Properties, by London South Bank University. Link to Web site: http://www.lsbu.ac.uk/water/data.html
[31] Mobile conduction electrons are delocalized, i.e. not tied to a specific atom, and behave rather like a sort of quantum gas due to the effects of zero-point energy. Consequently, even at absolute zero, conduction electrons still move between atoms at the Fermi velocity of about 1.6×10^6 m/s. Kinetic thermal energy adds to this speed and also causes delocalized electrons to travel farther away from the nuclei.
[32] No other crystal structure can exceed the 74.048% packing density of a closest-packed arrangement. The two regular crystal lattices found in nature that have this density are hexagonal close packed (HCP) and face-centered cubic (FCC). These regular lattices are at the lowest possible energy state. Diamond is a closest-packed structure with an FCC crystal lattice. Note too that suitable crystalline chemical compounds, although usually composed of atoms of different sizes, can be considered as closest-packed structures when considered at the molecular level. One such compound is the common mineral known as magnesium aluminum spinel (MgAl2O4). It has a face-centered cubic crystal lattice and no change in pressure can produce a lattice with a lower energy state.
[33] Nearly half of the 92 naturally occurring chemical elements that can freeze under a vacuum also have a closest-packed crystal lattice. This set includes beryllium, osmium, neon, and iridium (but excludes helium), and these therefore have zero latent heat of phase transitions to contribute to internal energy (symbol: U). In the calculation of enthalpy (formula: H = U + pV), internal energy may exclude different sources of thermal energy (particularly zero-point energy) depending on the nature of the analysis. Accordingly, all T = 0 closest-packed matter under a perfect vacuum has either minimal or zero enthalpy, depending on the nature of the analysis. Use Of Legendre Transforms In Chemical Thermodynamics (http://iupac.org/publications/pac/2001/pdf/7308x1349.pdf), Robert A. Alberty, Pure Appl. Chem., 73 (2001) 1349.
[34] Pressure also must be in absolute terms. The air still in a tire at 0 kPa-gage expands too as it gets hotter. It's not uncommon for engineers to overlook that one must work in terms of absolute pressure when compensating for temperature. For instance, a dominant manufacturer of aircraft tires published a document on temperature-compensating tire pressure which used gage pressure in the formula. However, the high gage pressures involved (180 psi; 12.4 bar; 1.24 MPa) mean the error would be quite small. With low-pressure automobile tires, where gage pressures are typically around 2 bar (200 kPa), failing to adjust to absolute pressure results in a significant error. Referenced document: Aircraft Tire Ratings (155 kB PDF, here: http://airmichelin.com/pdfs/05 - Aircraft Tire Ratings.pdf).
[35] Regarding the spelling "gage" vs. "gauge" in the context of pressures measured relative to atmospheric pressure, the preferred spelling varies by country and even by industry. Further, both spellings are often used within a particular industry or country.
Industries in British English-speaking countries typically use the spelling "gauge pressure" to distinguish it from the pressure-measuring instrument, which in the U.K. is spelled pressure gage. For the same reason, many of the largest American manufacturers of pressure transducers and instrumentation use the spelling gage pressure (the convention used here) in their formal documentation to distinguish it from the instrument, which is spelled pressure gauge (see Honeywell-Sensotec's FAQ page, http://sensotec.com/pressurefaq.shtml, and Fluke Corporation's product search page, http://us.fluke.com/usen/Home/Search.asp?txtSearchBox="gage+pressure"&x=0&y=0).

[36] A difference of 100 kPa is used here instead of the 101.325 kPa value of one standard atmosphere. In 1982, the International Union of Pure and Applied Chemistry (IUPAC) recommended that for the purposes of specifying the physical properties of substances, the standard pressure (atmospheric pressure) should be defined as precisely 100 kPa (750.062 Torr). Besides being a round number, this had a very practical effect: relatively few people live and work at precisely sea level; 100 kPa equates to the mean pressure at an altitude of about 112 meters, which is closer to the 194-meter, worldwide median altitude of human habitation. For especially low-pressure or high-accuracy work, true atmospheric pressure must be measured. Citation: IUPAC.org, Gold Book, Standard Pressure (http://goldbook.iupac.org/S05921.html)
[37] Absolute Zero and the Conquest of Cold, Shachtman, Tom, Mariner Books, 1999.
[38] A Brief History of Temperature Measurement (http://thermodynamics-information.net/) and Uppsala University (Sweden), Linnaeus' thermometer (http://www.linnaeus.uu.se/online/life/6_32.html)
[39] bipm.org (http://www.bipm.org/en/committees/cipm/cipm-1948.html)
[40] According to The Oxford English Dictionary (OED), the term "Celsius's thermometer" had been used at least as early as 1797. Further, the term "The Celsius or Centigrade thermometer" was again used in reference to a particular type of thermometer at least as early as 1850. The OED also cites this 1928 reporting of a temperature: "My altitude was about 5,800 metres, the temperature was −28° Celsius". However, dictionaries seek to find the earliest use of a word or term and are not a useful resource as regards the terminology used throughout the history of science. According to several writings of Dr. Terry Quinn CBE FRS, Director of the BIPM (1988–2004), including Temperature Scales from the early days of thermometry to the 21st century (148 kB PDF, here: http://www.imeko.org/publications/tc12-2004/PTC12-2004-PL-001.pdf) as well as Temperature (2nd Edition / 1990 / Academic Press / 0125696817), the term Celsius in connection with the centigrade scale was not used whatsoever by the scientific or thermometry communities until after the CIPM and CGPM adopted the term in 1948. The BIPM wasn't even aware that degree Celsius was in sporadic, non-scientific use before that time. It's also noteworthy that the twelve-volume, 1933 edition of the OED didn't even have a listing for the word Celsius (but did have listings for both centigrade and centesimal in the context of temperature measurement). The 1948 adoption of Celsius accomplished three objectives: 1. All common temperature scales would have their units named after someone closely associated with them; namely, Kelvin, Celsius, Fahrenheit, Réaumur and Rankine. 2. Notwithstanding the important contribution of Linnaeus, who gave the Celsius scale its modern form, Celsius's name was the obvious choice because it began with the letter C. Thus, the symbol °C that for centuries had been used in association with the name centigrade could continue to be used and would simultaneously inherit an intuitive association with the new name. 3. The new name eliminated the ambiguity of the term centigrade, freeing it to refer exclusively to the French-language name for the unit of angular measurement.
[41] http://www.spiess-verlage.de/html/haude___spener.html
[42] http://zapatopi.net/kelvin/papers/on_an_absolute_thermometric_scale.html
[43] http://www.bipm.fr/en/CGPM/db/9/3/
[44] http://www.bipm.org/en/committees/cipm/cipm-1948.html
[45] http://www.bipm.fr/en/CGPM/db/10/3/
[46] http://www.bipm.fr/en/CGPM/db/13/3/
[47] http://www.bipm.fr/en/CGPM/db/13/4/
[48] http://www.bipm.fr/en/si/si_brochure/chapter2/2-1/kelvin.html


External links
Kinetic Molecular Theory of Gases (http://www.chm.davidson.edu/ChemistryApplets/KineticMolecularTheory/index.html). An explanation (with interactive animations) of the kinetic motion of molecules and how it affects matter. By David N. Blauch, Department of Chemistry (http://www.chm.davidson.edu/), Davidson College (http://www2.davidson.edu/index.asp).
Zero Point Energy and Zero Point Field (http://www.calphysics.org/zpe.html). A Web site with in-depth explanations of a variety of quantum effects. By Bernard Haisch, of Calphysics Institute (http://www.calphysics.org/index.html).

Volume
Volume (thermodynamics)
Common symbol(s): V. SI unit: m³.


In thermodynamics, the volume of a system is an important extensive parameter for describing its thermodynamic state. The specific volume, an intensive property, is the system's volume per unit of mass. Volume is a function of state and is interdependent with other thermodynamic properties such as pressure and temperature. For example, volume is related to the pressure and temperature of an ideal gas by the ideal gas law. The physical volume of a system may or may not coincide with a control volume used to analyze the system.

Overview
The volume of a thermodynamic system typically refers to the volume of the working fluid, such as, for example, the fluid within a piston. Changes to this volume may be made through an application of work, or may be used to produce work. An isochoric process, however, operates at constant volume, thus no work can be produced. Many other thermodynamic processes will result in a change in volume. A polytropic process, in particular, causes changes to the system so that the quantity pV^n is constant (where p is pressure, V is volume, and n is the polytropic index, a constant). Note that for specific polytropic indexes, a polytropic process will be equivalent to a constant-property process. For instance, for very large values of n approaching infinity, the process becomes constant-volume. Gases are compressible, thus their volumes (and specific volumes) may be subject to change during thermodynamic processes. Liquids, however, are nearly incompressible, thus their volumes can often be taken as constant. In general, compressibility is defined as the relative volume change of a fluid or solid as a response to a pressure, and may be determined for substances in any phase. Similarly, thermal expansion is the tendency of matter to change in volume in response to a change in temperature.
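As a rough numerical sketch of the polytropic relation (Python; the function name and the initial state of 100 kPa, 0.10 m³ with index n = 1.3 are illustrative assumptions, not values from this article), the product pV^n stays fixed along a polytropic path:

def polytropic_pressure(p1, v1, v2, n):
    """Pressure after a polytropic change from volume v1 to v2 (p * V**n = constant)."""
    return p1 * (v1 / v2) ** n

p1, v1, n = 100e3, 0.10, 1.3  # assumed initial state: 100 kPa, 0.10 m^3, n = 1.3
for v2 in (0.08, 0.06, 0.04):
    p2 = polytropic_pressure(p1, v1, v2, n)
    print(f"V = {v2:.2f} m^3  p = {p2 / 1e3:7.1f} kPa  p*V^n = {p2 * v2 ** n:.1f}")
    # p*V^n prints the same value (about 5011.9) on every line, as it should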

Many thermodynamic cycles are made up of varying processes, some which maintain a constant volume and some which do not. A vapor-compression refrigeration cycle, for example, follows a sequence where the refrigerant fluid transitions between the liquid and vapor states of matter. Typical units for volume are m³ (cubic meters), L (liters), and ft³ (cubic feet).


Heat and work


Mechanical work performed on a working fluid causes a change in the mechanical constraints of the system; in other words, for work to occur, the volume must be altered. Hence volume is an important parameter in characterizing many thermodynamic processes where an exchange of energy in the form of work is involved. Volume is one of a pair of conjugate variables, the other being pressure. As with all conjugate pairs, the product pV is a form of energy. The product pΔV is the energy lost to a system due to mechanical work. This product is one term which makes up the enthalpy H:

H = U + pV

where U is the internal energy of the system.

The second law of thermodynamics describes constraints on the amount of useful work which can be extracted from a thermodynamic system. In thermodynamic systems where the temperature and volume are held constant, the measure of "useful" work attainable is the Helmholtz free energy; in systems where the volume is not held constant, the measure of useful work attainable is the Gibbs free energy. Similarly, the appropriate value of heat capacity to use in a given process depends on whether the process produces a change in volume. The heat capacity is a function of the amount of heat added to a system. In the case of a constant-volume process, all the heat affects the internal energy of the system (i.e., there is no pV-work, and all the heat affects the temperature). However, in a process where the volume is not constant, the heat addition affects both the internal energy and the work (i.e., the enthalpy); thus the temperature changes by a different amount than in the constant-volume case, and a different heat capacity value is required.

Specific volume
Specific volume (ν) is the volume occupied by a unit of mass of a material. In many cases the specific volume is a useful quantity to determine because, as an intensive property, it can be used to determine the complete state of a system in conjunction with another independent intensive variable. The specific volume also allows systems to be studied without reference to an exact operating volume, which may not be known (nor significant) at some stages of analysis. The specific volume of a substance is equal to the reciprocal of its mass density:

ν = V / m = 1 / ρ

where V is the volume, m is the mass, and ρ is the density of the material.

For an ideal gas,

ν = R_specific T / p

where R_specific is the specific gas constant, T is the temperature, and p is the pressure of the gas. Specific volume may be expressed in m³·kg⁻¹, L·kg⁻¹, ft³·lb⁻¹, or mL·g⁻¹. Specific volume may also refer to molar volume.
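A minimal sketch of the ideal-gas form of this relation (Python; the choice of dry air, with specific gas constant 287.05 J/(kg·K), and the 20 °C, 1 atm state are assumptions for illustration, not values from this article):

R_AIR = 287.05  # specific gas constant of dry air, J/(kg*K) (assumed example gas)

def specific_volume(temperature_k, pressure_pa, r_specific=R_AIR):
    """Specific volume nu = R_specific * T / p, in m^3/kg."""
    return r_specific * temperature_k / pressure_pa

nu = specific_volume(293.15, 101_325)  # 20 C, 1 atm
print(f"nu = {nu:.4f} m^3/kg; density rho = 1/nu = {1 / nu:.3f} kg/m^3")
# prints roughly nu = 0.8305 m^3/kg; rho = 1.204 kg/m^3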


Gas volume
Dependence on pressure and temperature
The volume of gas increases proportionally to absolute temperature and decreases inversely proportionally to pressure, approximately according to the ideal gas law:

pV = nRT

where:
p is the pressure
V is the volume
n is the amount of substance of gas (moles)
R is the gas constant, 8.314 J·K⁻¹·mol⁻¹
T is the absolute temperature

To simplify, a volume of gas may be expressed as the volume it would have in standard conditions for temperature and pressure, which are 0 °C and 100 kPa.

Humidity exclusion
In contrast to other gas components, the water content in air, or humidity, depends to a higher degree on vaporization and condensation from or into liquid water, which, in turn, mainly depends on temperature. Therefore, when applying more pressure to a gas saturated with water, all components will initially decrease in volume approximately according to the ideal gas law. However, some of the water will condense until the mixture returns to almost the same humidity as before, so the resulting total volume deviates from what the ideal gas law predicted. Conversely, decreasing temperature would also make some water condense, again making the final volume deviate from the value predicted by the ideal gas law. Therefore, gas volume may alternatively be expressed excluding the humidity content: Vd (volume dry). This fraction more accurately follows the ideal gas law. Conversely, Vs (volume saturated) is the volume a gas mixture would have if humidity were added to it until saturation (or 100% relative humidity).

General conversion
To compare gas volume between two conditions of different temperature or pressure (1 and 2), assuming nR are the same, the following equation uses humidity exclusion in addition to the ideal gas law:

V2 = V1 × [(p1 − pw1) / (p2 − pw2)] × (T2 / T1)

where, in addition to terms used in the ideal gas law, pw is the partial pressure of gaseous water during condition 1 and 2, respectively. For example, calculating how much 1 liter of air (a) at 0 °C, 100 kPa, pw = 0 kPa (known as STPD, see below) would fill when breathed into the lungs where it is mixed with water vapor (l), where it quickly becomes 37 °C, 100 kPa, pw = 6.2 kPa (BTPS):

Vl = 1 L × [(100 − 0) / (100 − 6.2)] × (310 / 273) ≈ 1.21 L
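The sketch below mechanizes this conversion (Python; the function and argument names are ours, and the numbers simply reproduce the STPD-to-BTPS example above):

def convert_gas_volume(v1, p1, pw1, t1, p2, pw2, t2):
    """Convert a gas volume between two conditions, excluding humidity:
    V2 = V1 * (p1 - pw1)/(p2 - pw2) * (T2/T1).
    Pressures in kPa, temperatures in kelvins."""
    return v1 * (p1 - pw1) / (p2 - pw2) * (t2 / t1)

# 1 L of dry air at STPD (0 C, 100 kPa, pw = 0) breathed into the lungs (BTPS):
v_btps = convert_gas_volume(1.0, 100.0, 0.0, 273.0, 100.0, 6.2, 310.0)
print(f"V(BTPS) = {v_btps:.2f} L")  # about 1.21 L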


Common conditions
Some common expressions of gas volume with defined or variable temperature, pressure and humidity inclusion are:
ATPS: Ambient temperature (variable) and pressure (variable), saturated (humidity depends on temperature)
ATPD: Ambient temperature (variable) and pressure (variable), dry (no humidity)
BTPS: Body temperature (37 °C or 310 K) and pressure (generally same as ambient), saturated (47 mmHg or 6.2 kPa)
STPD: Standard temperature (0 °C or 273 K) and pressure (760 mmHg (101.33 kPa) or 100 kPa (750.06 mmHg)), dry (no humidity)

Conversion factors

Conversion factors between expressions of volume of gas:

To convert from | To   | Multiply by
ATPS            | STPD | [(PA − Pwater S) / PS] × [TS / TA]
ATPS            | BTPS | [(PA − Pwater S) / (PA − Pwater B)] × [TB / TA] (online calculator [1])
ATPS            | ATPD | (PA − Pwater S) / PA
ATPD            | STPD | (PA / PS) × (TS / TA) [2]
ATPD            | BTPS | [PA / (PA − Pwater B)] × (TB / TA) [2]
ATPD            | ATPS | PA / (PA − Pwater S)

Legend: PA = ambient pressure; PS = standard pressure (100 kPa or 750 mmHg); Pwater S = partial pressure of water in saturated air (100% relative humidity, dependent on ambient temperature; see dew point and frost point); Pwater B = partial pressure of water in saturated air at 37 °C = 47 mmHg; TS = standard temperature in kelvins (K) = 273 K; TA = ambient temperature in kelvins = 273 + t (where t is ambient temperature in °C); TB = body temperature in kelvins = 310 K. Unless otherwise specified in the table, the reference is:
[3]

Partial volume
The partial volume of a particular gas is the volume which the gas would have if it alone occupied the volume, with unchanged pressure and temperature, and is useful in gas mixtures, e.g. air, to focus on one particular gas component, e.g. oxygen. It can be approximated both from partial pressure and from molar fraction:[4]

Vx = Vtot × (Px / Ptot) = Vtot × (nx / ntot)

where:
Vx is the partial volume of an individual gas component (X)
Vtot is the total volume of the gas mixture
Px is the partial pressure of gas X
Ptot is the total pressure of the gas mixture
nx is the amount of substance of gas X
ntot is the total amount of substance in the gas mixture
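As a small sketch (Python; the 0.209 molar fraction of oxygen in air is a standard textbook figure, not taken from this article):

def partial_volume(v_tot, n_x, n_tot):
    """Partial volume from the molar fraction: Vx = Vtot * nx / ntot."""
    return v_tot * n_x / n_tot

# Oxygen in 1 L of air, assuming a molar fraction of about 0.209:
print(f"V_O2 = {partial_volume(1.0, 0.209, 1.0):.3f} L")  # about 0.209 L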


References
[1] http://www.dynamicmt.com/btpsform.html
[2] http://books.google.com/books?id=1b0iwv8-jGcC&printsec=frontcover#PPA113,M1
[3] Page 113 in: Exercise Physiology: Basis of Human Movement in Health and Disease (http://books.google.com/books?id=1b0iwv8-jGcC&printsec=frontcover#PPA113,M1), by Stanley P. Brown, Wayne C. Miller, Jane M. Eason. Edition: illustrated. Published by Lippincott Williams & Wilkins, 2006. ISBN 0-7817-7730-5, 978-0-7817-7730-8. 672 pages.
[4] Page 200 in: Medical Biophysics. Flemming Cornelius. 6th Edition, 2008.


Chapter 7. Material Properties


Heat capacity

Heat capacity, or thermal capacity, is the measurable physical quantity of heat energy required to change the temperature of an object or body by a given amount. The SI unit of heat capacity is the joule per kelvin, and the dimensional form is M·L²·T⁻²·Θ⁻¹. Heat capacity is an extensive property of matter, meaning it is proportional to the size of the system. When expressing the same phenomenon as an intensive property, the heat capacity is divided by the amount of substance, mass, or volume, so that the quantity is independent of the size or extent of the sample. The molar heat capacity is the heat capacity per mole of a pure substance, and the specific heat capacity, often simply called specific heat, is the heat capacity per unit mass of a material. Occasionally, in engineering contexts, the volumetric heat capacity is used. Temperature reflects the average randomized kinetic energy of particles in matter, while heat is the transfer of thermal energy across a system boundary into the body or from the body to the environment. Translation, rotation, and a combination of the two types of energy in vibration (kinetic and potential) of atoms represent the degrees of freedom of motion which classically contribute to the heat capacity of matter, but loosely bound electrons may also participate. On a microscopic scale, each system particle absorbs thermal energy among the few degrees of freedom available to it, and at sufficient temperatures, this process contributes to the specific heat capacity that classically approaches a value per mole of particles that is set by the Dulong–Petit law. This limit, which is about 25 joules per kelvin for each mole of atoms, is achieved by many solid substances at room temperature. For quantum mechanical reasons, at any given temperature, some of these degrees of freedom may be unavailable, or only partially available, to store thermal energy. In such cases, the specific heat capacity is a fraction of the maximum. As the temperature approaches absolute zero, the specific heat capacity of a system also approaches zero, due to loss of available degrees of freedom. Quantum theory can be used to quantitatively predict the specific heat capacity of simple systems.


Background
Before the development of modern thermodynamics, it was thought that heat was an invisible fluid, known as the caloric. Bodies were capable of holding a certain amount of this fluid, hence the term heat capacity, named and first investigated by Scottish chemist Joseph Black in the 1750s. Today, the notion of the caloric has been replaced by the notion of a system's internal energy. That is, heat is no longer considered a fluid; rather, heat is a transfer of disordered energy. Nevertheless, at least in English, the term "heat capacity" survives. In some other languages, the term thermal capacity is preferred, and it is also sometimes used in English.

Older units and English units


An older unit of heat is the kilogram-calorie (Cal), originally defined as the energy required to raise the temperature of one kilogram of water by one degree Celsius, typically from 15 °C to 16 °C. The specific heat capacity of water on this scale would therefore be exactly 1 Cal/(°C·kg). However, due to the temperature-dependence of the specific heat, a large number of different definitions of the calorie came into being. While it was once very prevalent, especially its smaller cgs variant the gram-calorie (cal), defined so that the specific heat of water would be 1 cal/(K·g), in most fields the use of the calorie is now archaic. In the United States, other units of measure for heat capacity may be quoted in disciplines such as construction, civil engineering, and chemical engineering. A still common system is the English Engineering Units in which the mass reference is pound mass and the temperature is specified in degrees Fahrenheit or Rankine. One (rare) unit of heat is the pound calorie (lb-cal), defined as the amount of heat required to raise the temperature of one pound of water by one degree Celsius. On this scale the specific heat of water would be 1 lb-cal/(K·lb). More common is the British thermal unit, the standard unit of heat in the U.S. construction industry. This is defined such that the specific heat of water is 1 BTU/(°F·lb).

Extensive and intensive quantities


An object's heat capacity (symbol C) is defined as the ratio of the amount of heat energy transferred to an object to the resulting increase in temperature of the object:

C = Q / ΔT

In the International System of Units, heat capacity has the unit joules per kelvin. Heat capacity is an extensive property, meaning it is a physical property that scales with the size of a physical system. A sample containing twice the amount of substance as another sample requires the transfer of twice the amount of heat (Q) to achieve the same change in temperature (ΔT). For many experimental and theoretical purposes it is more convenient to report heat capacity as an intensive property, an intrinsic characteristic of a particular substance. This is most often accomplished by expressing the property in relation to a unit of mass. In science and engineering, such properties are often prefixed with the term specific. International standards now recommend that specific heat capacity always refer to division by mass. The units for the specific heat capacity are J·kg⁻¹·K⁻¹. In chemistry, heat capacity is often specified relative to one mole, the unit of amount of substance, and is called the molar heat capacity. It has the unit J·mol⁻¹·K⁻¹. For some considerations it is useful to specify the volume-specific heat capacity, commonly called volumetric heat capacity, which is the heat capacity per unit volume and has SI units J·m⁻³·K⁻¹. This is used almost exclusively for liquids and solids, since for gases it may be confused with specific heat capacity at constant volume.
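A minimal sketch of the definition and of the extensive/intensive distinction (Python; the sample numbers are illustrative assumptions):

def heat_capacity(q_joules, delta_t_kelvin):
    """Heat capacity C = Q / delta_T, in J/K."""
    return q_joules / delta_t_kelvin

# Roughly 4186 J warms 1 kg of liquid water by 1 K:
C = heat_capacity(4186.0, 1.0)
print(f"C = {C:.0f} J/K for 1 kg")    # extensive: 2 kg would need twice the heat
print(f"c = {C / 1.0:.0f} J/(kg*K)")  # dividing by mass gives the intensive specific heat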


Measurement of heat capacity


The heat capacity of most systems is not a constant. Rather, it depends on the state variables of the thermodynamic system under study. In particular it is dependent on temperature itself, as well as on the pressure and the volume of the system. Different measurements of heat capacity can therefore be performed, most commonly either at constant pressure or at constant volume. The values thus measured are usually subscripted (by p and V, respectively) to indicate the definition. Gases and liquids are typically also measured at constant volume. Measurements under constant pressure produce larger values than those at constant volume because the constant pressure values also include heat energy that is used to do work to expand the substance against the constant pressure as its temperature increases. This difference is particularly notable in gases, where values under constant pressure are typically 30% to 66.7% greater than those at constant volume. The specific heat capacities of substances comprising molecules (as distinct from monatomic gases) are not fixed constants and vary somewhat depending on temperature. Accordingly, the temperature at which the measurement is made is usually also specified. Examples of two common ways to cite the specific heat of a substance are as follows: Water (liquid): cp = 4.1855 J/(g·K) (15 °C, 101.325 kPa), or 1 calorie/(g·°C); Water (liquid): Cv = 74.539 J/(mol·K) (25 °C). For liquids and gases, it is important to know the pressure to which given heat-capacity data refer. Most published data are given for standard pressure. However, quite different standard conditions for temperature and pressure have been defined by different organizations. The International Union of Pure and Applied Chemistry (IUPAC) changed its recommendation from one atmosphere to the round value 100 kPa (750.062 Torr).[1]

Calculation from first principles


The path integral Monte Carlo method is a numerical approach for determining the values of heat capacity, based on quantum dynamical principles. However, good approximations can be made for gases in many states using simpler methods outlined below. For many solids composed of relatively heavy atoms (atomic number greater than that of iron), at non-cryogenic temperatures, the heat capacity at room temperature approaches 3R = 24.94 joules per kelvin per mole of atoms (Dulong–Petit law; R is the gas constant). Low-temperature approximations for both gases and solids at temperatures less than their characteristic Einstein temperatures or Debye temperatures can be made by the methods of Einstein and Debye discussed below.

Thermodynamic relations and definition of heat capacity


The internal energy of a closed system changes either by adding heat to the system or by the system performing work. Written mathematically, we have

ΔE_system = E_in − E_out

or

dU = δQ − δW

For work as a result of an increase of the system volume we may write

dU = δQ − p dV

If the heat is added at constant volume, then the second term of this relation vanishes and one readily obtains

(∂U/∂T)_V = (∂Q/∂T)_V = C_V

This defines the heat capacity at constant volume, C_V, which is also related to changes in internal energy. Another useful quantity is the heat capacity at constant pressure, C_P. This quantity refers to the change in the enthalpy of the system, which is given by

H = U + pV


A small change in the enthalpy can be expressed as

dH = δQ + V dp

and therefore, at constant pressure, we have

(∂H/∂T)_p = (∂Q/∂T)_p = C_P

These two equations:

C_V = (∂U/∂T)_V    and    C_P = (∂H/∂T)_p

are property relations and are therefore independent of the type of process. In other words, they are valid for any substance going through any process. Both the internal energy and enthalpy of a substance can change with the transfer of energy in many forms, i.e., heat.[2]

Relation between heat capacities


Measuring the heat capacity, sometimes referred to as specific heat, at constant volume can be prohibitively difficult for liquids and solids. That is, small temperature changes typically require large pressures to maintain a liquid or solid at constant volume, implying the containing vessel must be nearly rigid or at least very strong (see coefficient of thermal expansion and compressibility). Instead, it is easier to measure the heat capacity at constant pressure (allowing the material to expand or contract freely) and solve for the heat capacity at constant volume using mathematical relationships derived from the basic thermodynamic laws. Starting from the fundamental thermodynamic relation one can show

C_p − C_V = T (∂p/∂T)_{V,N} (∂V/∂T)_{p,N}

where the partial derivatives are taken at constant volume and constant number of particles, and at constant pressure and constant number of particles, respectively. This can also be rewritten

C_p − C_V = V T α² / β_T

where α is the coefficient of thermal expansion and β_T is the isothermal compressibility. The heat capacity ratio, or adiabatic index, is the ratio of the heat capacity at constant pressure to the heat capacity at constant volume. It is sometimes also known as the isentropic expansion factor.
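As a numerical sketch of the rewritten relation (Python; the liquid-water property values at 25 °C are approximate handbook figures, not taken from this article):

# C_p,m - C_V,m = V_m * T * alpha**2 / beta_T   (per-mole form of the relation above)
T      = 298.15    # temperature, K
V_m    = 1.807e-5  # molar volume of liquid water, m^3/mol (approximate)
alpha  = 2.57e-4   # coefficient of thermal expansion, 1/K (approximate)
beta_T = 4.52e-10  # isothermal compressibility, 1/Pa (approximate)

diff = V_m * T * alpha ** 2 / beta_T
print(f"C_p,m - C_V,m = {diff:.2f} J/(mol*K)")  # roughly 0.8 J/(mol*K): small, as expected for a nearly incompressible liquid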

Ideal gas

For an ideal gas,[3] evaluating the partial derivatives above according to the equation of state pV = nRT (where R is the gas constant for an ideal gas):

C_p − C_V = T (∂p/∂T)_V (∂V/∂T)_p = T (nR/V)(nR/p) = n²R²T / (pV)

Substituting pV = nRT, this equation reduces simply to Mayer's relation:

C_P − C_V = nR

Specific heat capacity


The specific heat capacity of a material on a per mass basis is

c = C / m

which, in the absence of phase transitions, is equivalent to

c = C / (ρ V)

where C is the heat capacity of a body made of the material in question, m is the mass of the body, V is the volume of the body, and ρ is the density of the material. For gases, and also for other materials under high pressures, there is need to distinguish between different boundary conditions for the processes under consideration (since values differ significantly between different conditions). Typical processes for which a heat capacity may be defined include isobaric (constant pressure, dp = 0) or isochoric (constant volume, dV = 0) processes. The corresponding specific heat capacities are expressed as

c_p = C_p / m,    c_V = C_V / m

From the results of the previous section, dividing through by the mass gives the relation

c_p − c_V = V T α² / (m β_T) = T α² / (ρ β_T)


A related parameter to c is C/V, the volumetric heat capacity. In engineering practice, c_V for solids or liquids often signifies a volumetric heat capacity rather than a constant-volume one. In such cases, the mass-specific heat capacity (specific heat) is often explicitly written with the subscript m, as c_m. Of course, from the above relationships, for solids one writes

c_m = C / m = c_volumetric / ρ

For pure homogeneous chemical compounds with an established molecular or molar mass, or when a molar quantity is established, heat capacity as an intensive property can be expressed on a per mole basis instead of a per mass basis by the following equations analogous to the per mass equations:

C_p,m = C_p / n = molar heat capacity at constant pressure

C_V,m = C_V / n = molar heat capacity at constant volume

where n is the number of moles in the body or thermodynamic system. One may refer to such a per mole quantity as molar heat capacity to distinguish it from specific heat capacity on a per mass basis.
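A short sketch of the per-mass to per-mole bookkeeping (Python; water's specific heat is taken from the measurement example earlier in this article, and the molar mass of water is a standard value):

M_WATER = 18.015e-3  # molar mass of water, kg/mol

def specific_to_molar(c_specific, molar_mass):
    """Convert a specific heat in J/(kg*K) to a molar heat capacity in J/(mol*K)."""
    return c_specific * molar_mass

c_p = 4185.5  # J/(kg*K), liquid water at 15 C (i.e. 4.1855 J/(g*K))
print(f"C_p,m = {specific_to_molar(c_p, M_WATER):.1f} J/(mol*K)")  # about 75.4 J/(mol*K)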

Polytropic heat capacity


The polytropic heat capacity is calculated for processes in which all the thermodynamic properties (pressure, volume, temperature) change:

C_i,m = molar heat capacity in a polytropic process

The most important polytropic processes run between the adiabatic and the isotherm functions; the polytropic index is between 1 and the adiabatic exponent (γ or κ).

Dimensionless heat capacity


The dimensionless heat capacity of a material is

C* = C / (nR) = C / (Nk)

where
C is the heat capacity of a body made of the material in question (J/K)
n is the amount of substance in the body (mol)
R is the gas constant (J/(K·mol))
N is the number of molecules in the body (dimensionless)
k is Boltzmann's constant (J/(K·molecule))

In the ideal gas article, the dimensionless heat capacity is related directly to half the number of degrees of freedom per particle. This holds true for quadratic degrees of freedom, a consequence of the equipartition theorem. More generally, the dimensionless heat capacity relates the logarithmic increase in temperature to the increase in the dimensionless entropy per particle S* = S/(Nk), measured in nats:

C* = dS* / d(ln T)

Alternatively, using base-2 logarithms, C* relates the base-2 logarithmic increase in temperature to the increase in the dimensionless entropy measured in bits.


Heat capacity at absolute zero


From the definition of entropy

dS = δQ / T

the absolute entropy can be calculated by integrating from zero kelvin up to the final temperature Tf:

S(Tf) = ∫(0 → Tf) δQ/T = ∫(0 → Tf) C(T)/T dT

The heat capacity must be zero at zero temperature in order for the above integral not to yield an infinite absolute entropy, which would violate the third law of thermodynamics. One of the strengths of the Debye model is that (unlike the preceding Einstein model) it predicts the proper mathematical form of the approach of heat capacity toward zero, as absolute zero temperature is approached.
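A numerical sketch of this integral (Python; it assumes the low-temperature Debye T³ form C(T) = aT³ with an arbitrary illustrative coefficient, for which the exact answer aTf³/3 is available as a check):

# S(Tf) = integral from 0 to Tf of C(T)/T dT; with C(T) = a*T**3 the exact value is a*Tf**3 / 3.
a, t_f = 1.0e-4, 20.0  # assumed Debye coefficient (J/K^4) and final temperature (K)

n_steps = 100_000
dt = t_f / n_steps
entropy = 0.0
for i in range(n_steps):
    t = (i + 0.5) * dt           # midpoint rule; C(T)/T = a*T**2 stays finite as T -> 0
    entropy += (a * t ** 3) / t * dt

print(f"numeric  S = {entropy:.6f} J/K")
print(f"analytic S = {a * t_f ** 3 / 3:.6f} J/K")  # 0.266667 J/K; C -> 0 at T = 0 keeps the integral finite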

Negative heat capacity (stars)


Most physical systems exhibit a positive heat capacity. However, even though it can seem paradoxical at first, there are some systems for which the heat capacity is negative. These are inhomogeneous systems which do not meet the strict definition of thermodynamic equilibrium. They include gravitating objects such as stars and galaxies, and also sometimes some nano-scale clusters of a few tens of atoms, close to a phase transition. A negative heat capacity can result in a negative temperature. According to the virial theorem, for a self-gravitating body like a star or an interstellar gas cloud, the average potential energy U_Pot and the average kinetic energy U_Kin are locked together in the relation

U_Pot = −2 U_Kin

The total energy U (= U_Pot + U_Kin) therefore obeys

U = −U_Kin

If the system loses energy, for example by radiating energy away into space, the average kinetic energy actually increases. If a temperature is defined by the average kinetic energy, then the system therefore can be said to have a negative heat capacity.[4] A more extreme version of this occurs with black holes. According to black hole thermodynamics, the more mass and energy a black hole absorbs, the colder it becomes. In contrast, if it is a net emitter of energy, through Hawking radiation, it will become hotter and hotter until it boils away.


Theory of heat capacity


Factors that affect specific heat capacity
For any given substance, the heat capacity of a body is directly proportional to the amount of substance it contains (measured in terms of mass or moles or volume). Doubling the amount of substance in a body doubles its heat capacity, etc. However, when this effect has been corrected for, by dividing the heat capacity by the quantity of substance in a body, the resulting specific heat capacity is a function of the structure of the substance itself. In particular, it depends on the number of degrees of freedom that are available to the particles in the substance, each of which type of freedom allows substance particles to store energy. The translational kinetic energy of substance particles is only one of the many possible degrees of freedom which manifests as temperature change, and thus the larger the number of degrees of freedom available to the particles of a substance other than translational kinetic energy, the larger will be the specific heat capacity for the substance.

(Figure: Molecules undergo many characteristic internal vibrations. Potential energy stored in these internal degrees of freedom contributes to a sample's energy content, but not to its temperature. More internal degrees of freedom tend to increase a substance's specific heat capacity, so long as temperatures are high enough to overcome quantum effects.)

For example, rotational kinetic energy of gas molecules stores heat energy in a way that increases heat capacity, since this energy does not contribute to temperature. In addition, quantum effects require that whenever energy be stored in any mechanism associated with a bound system which confers a degree of freedom, it must be stored in certain minimal-sized deposits (quanta) of energy, or else not stored at all. Such effects limit the full ability of some degrees of freedom to store energy when their lowest energy storage quantum amount is not easily supplied at the average energy of particles at a given temperature. In general, for this reason, specific heat capacities tend to fall at lower temperatures where the average thermal energy available to each particle degree of freedom is smaller, and thermal energy storage begins to be limited by these quantum effects. Due to this process, as temperature falls toward absolute zero, so also does heat capacity.

Degrees of freedom

Molecules are quite different from the monatomic gases like helium and argon. With monatomic gases, thermal energy comprises only translational motions. Translational motions are ordinary, whole-body movements in 3D space whereby particles move about and exchange energy in collisions, like rubber balls in a vigorously shaken container (see animation here [5]). These simple movements in the three dimensions of space mean individual atoms have three translational degrees of freedom. A degree of freedom is any form of energy in which heat transferred into an object can be stored. This can be in translational kinetic energy, rotational kinetic energy, or other forms such as potential energy in vibrational modes. Only three translational degrees of freedom (corresponding to the three independent directions in space) are available for any individual atom, whether it is free, as a monatomic molecule, or bound into a polyatomic molecule. As to rotation about an atom's axis (again, whether the atom is bound or free), its energy of rotation is proportional to the moment of inertia for the atom, which is extremely small compared to moments of inertia of collections of atoms.

This is because almost all of the mass of a single atom is concentrated in its nucleus, which has a radius too small to give a significant moment of inertia. In contrast, the spacing of quantum energy levels for a rotating object is inversely proportional to its moment of inertia, and so this spacing becomes very large for objects with very small moments of inertia. For these reasons, the contribution from rotation of atoms on their axes is essentially zero in monatomic gases, because the energy spacing of the associated quantum levels is too large for significant thermal energy to be stored in rotation of systems with such small moments of inertia. For similar reasons, axial rotation around bonds joining atoms in diatomic gases (or along the linear axis in a linear molecule of any length) can also be neglected as a possible "degree of freedom" as well, since such rotation is similar to rotation of monatomic atoms, and so occurs about an axis with a moment of inertia too small to be able to store significant heat energy. In polyatomic molecules, other rotational modes may become active, due to the much higher moments of inertia about certain axes which do not coincide with the linear axis of a linear molecule. These modes take the place of some translational degrees of freedom for individual atoms, since the atoms are moving in 3-D space, as the molecule rotates. The narrowing of quantum mechanically determined energy spacing between rotational states results from situations where atoms are rotating around an axis that does not connect them, and thus form an assembly that has a large moment of inertia. This small difference between energy states allows the kinetic energy of this type of rotational motion to store heat energy at ambient temperatures. Furthermore, internal vibrational degrees of freedom may also become active (these are also a type of translation, as seen from the view of each atom). In summary, molecules are complex objects with a population of atoms that may move about within the molecule in a number of different ways (see animation at right), and each of these ways of moving is capable of storing energy if the temperature is sufficient. The heat capacity of molecular substances (on a "per-atom" or atom-molar basis) does not exceed the heat capacity of monatomic gases, unless vibrational modes are brought into play. The reason for this is that vibrational modes allow energy to be stored as potential energy in intra-atomic bonds in a molecule, which are not available to atoms in monatomic gases. Up to about twice as much energy (on a per-atom basis) per unit of temperature increase can be stored in a solid as in a monatomic gas, by this mechanism of storing energy in the potentials of interatomic bonds. This gives many solids about twice the atom-molar heat capacity at room temperature of monatomic gases. However, quantum effects heavily affect the actual ratio at lower temperatures (i.e., much lower than the melting temperature of the solid), especially in solids with light and tightly bound atoms (e.g., beryllium metal or diamond). Polyatomic gases store intermediate amounts of energy, giving them a "per-atom" heat capacity that is between that of monatomic gases (3/2 R per mole of atoms, where R is the ideal gas constant), and the maximum of fully excited warmer solids (3 R per mole of atoms).
For gases, heat capacity never falls below the minimum of 3/2 R per mole (of molecules), since the kinetic energy of gas molecules is always available to store at least this much thermal energy. However, at cryogenic temperatures in solids, heat capacity falls toward zero, as temperature approaches absolute zero.

Example of temperature-dependent specific heat capacity, in a diatomic gas

To illustrate the role of various degrees of freedom in storing heat, we may consider nitrogen, a diatomic molecule that has five active degrees of freedom at room temperature: the three comprising translational motion plus two rotational degrees of freedom internally. Although the constant-volume molar heat capacity of nitrogen at this temperature is five-thirds that of monatomic gases, on a per-mole-of-atoms basis, it is five-sixths that of a monatomic gas. The reason for this is the loss of a degree of freedom due to the bond when it does not allow storage of thermal energy. Two separate nitrogen atoms would have a total of six degrees of freedom: the three translational degrees of freedom of each atom. When the atoms are bonded, the molecule will still only have three translational degrees of freedom, as the two atoms in the molecule move as one. However, the molecule cannot be treated as a point object, and the moment of inertia has increased sufficiently about two axes to allow two rotational degrees of freedom to be active at room temperature to give five degrees of freedom. The moment of inertia about the third axis remains small, as this is the axis passing through the centres of the two atoms, and so is similar to the small moment of inertia


for atoms of a monatomic gas. Thus, this degree of freedom does not act to store heat, and does not contribute to the heat capacity of nitrogen. The heat capacity per atom for nitrogen (5/2 R per mole of molecules = 5/4 R per mole of atoms) is therefore less than for a monatomic gas (3/2 R per mole of molecules or atoms), so long as the temperature remains low enough that no vibrational degrees of freedom are activated. At higher temperatures, however, nitrogen gas gains two more degrees of internal freedom, as the molecule is excited into higher vibrational modes that store thermal energy. Now the bond is contributing heat capacity, and is contributing more than if the atoms were not bonded. With full thermal excitation of bond vibration, the heat capacity per volume, or per mole of gas molecules, approaches seven-thirds that of monatomic gases. Significantly, this is seven-sixths of the monatomic gas value on a mole-of-atoms basis, so this is now a higher heat capacity per atom than the monatomic figure, because the vibrational mode enabled for diatomic gases allows an extra degree of potential energy freedom per pair of atoms, which monatomic gases cannot possess.[6] See thermodynamic temperature for more information on translational motions, kinetic (heat) energy, and their relationship to temperature. However, even at these high temperatures where gaseous nitrogen is able to store 7/6ths of the energy per atom of a monatomic gas (making it more efficient at storing energy on an atomic basis), it still only stores 7/12ths of the maximal per-atom heat capacity of a solid, meaning it is not nearly as efficient at storing thermal energy on an atomic basis as solid substances can be. This is typical of gases, and results because many of the potential bonds which might be storing potential energy in gaseous nitrogen (as opposed to solid nitrogen) are lacking, because only one of the spatial dimensions for each nitrogen atom offers a bond into which potential energy can be stored without increasing the kinetic energy of the atom. In general, solids are most efficient, on an atomic basis, at storing thermal energy (that is, they have the highest per-atom or per-mole-of-atoms heat capacity).

Per mole of different units

Per mole of molecules

When the specific heat capacity, c, of a material is measured (lowercase c means the unit quantity is in terms of mass), different values arise because different substances have different molar masses (essentially, the weight of the individual atoms or molecules). In solids, thermal energy arises due to the number of atoms that are vibrating. "Molar" heat capacity per mole of molecules, for both gases and solids, offers figures which are arbitrarily large, since molecules may be arbitrarily large. Such heat capacities are thus not intensive quantities, since the quantity of mass being considered can be increased without limit.

Per mole of atoms

Conversely, for molecular-based substances (which also absorb heat into their internal degrees of freedom), massive, complex molecules with high atomic count, like octane, can store a great deal of energy per mole and yet are quite unremarkable on a mass basis, or on a per-atom basis. This is because, in fully excited systems, heat is stored independently by each atom in a substance, not primarily by the bulk motion of molecules.
Thus, it is the heat capacity per-mole-of-atoms, not per-mole-of-molecules, which is the intensive quantity, and which comes closest to being a constant for all substances at high temperatures. This relationship was noticed empirically in 1819, and is called the Dulong-Petit law, after its two discoverers. Historically, the fact that specific heat capacities are approximately equal when corrected by the presumed weight of the atoms of solids, was an important piece of data in favor of the atomic theory of matter. Because of the connection of heat capacity to the number of atoms, some care should be taken to specify a mole-of-molecules basis vs. a mole-of-atoms basis, when comparing specific heat capacities of molecular solids and gases. Ideal gases have the same numbers of molecules per volume, so increasing molecular complexity adds heat capacity on a per-volume and per-mole-of-molecules basis, but may lower or raise heat capacity on a per-atom basis, depending on whether the temperature is sufficient to store energy as atomic vibration.


In solids, the quantitative limit of heat capacity in general is about 3 R per mole of atoms, where R is the ideal gas constant. This 3 R value is about 24.9 J/(mol·K). Six degrees of freedom (three kinetic and three potential) are available to each atom. Each of these six contributes (1/2)R specific heat capacity per mole of atoms. This limit of 3 R per mole specific heat capacity is approached at room temperature for most solids, with significant departures at this temperature only for solids composed of the lightest atoms which are bound very strongly, such as beryllium (where the value is only 66% of 3 R), or diamond (where it is only 24% of 3 R). These large departures are due to quantum effects which prevent full distribution of heat into all vibrational modes, when the energy difference between vibrational quantum states is very large compared to the average energy available to each atom from the ambient temperature. For monatomic gases, the specific heat is only half of 3 R per mole, i.e. 3/2 R per mole, due to loss of all potential energy degrees of freedom in these gases. For polyatomic gases, the heat capacity will be intermediate between these values on a per-mole-of-atoms basis, and (for heat-stable molecules) would approach the limit of 3 R per mole of atoms for gases composed of complex molecules, at higher temperatures at which all vibrational modes accept excitational energy. This is because very large and complex gas molecules may be thought of as relatively large blocks of solid matter which have lost only a relatively small fraction of degrees of freedom, as compared to a fully integrated solid. For a list of heat capacities per atom-mole of various substances, in terms of R, see the last column of the table of heat capacities below.

Corollaries of these considerations for solids (volume-specific heat capacity)

Since the bulk density of a solid chemical element is strongly related to its molar mass, there exists a noticeable inverse correlation between a solid's density and its specific heat capacity on a per-mass basis. This is due to a very approximate tendency of atoms of most elements to be about the same size, which, coupled with the constancy of the mole-specific heat capacity (usually about 3 R per mole, as noted above), results in a good correlation between the volume of any given solid chemical element and its total heat capacity. Another way of stating this is that the volume-specific heat capacity (volumetric heat capacity) of solid elements is roughly a constant. The molar volume of solid elements is very roughly constant, and (even more reliably) so also is the molar heat capacity for most solid substances. These two factors determine the volumetric heat capacity, which as a bulk property may be striking in consistency. For example, the element uranium is a metal which has a density almost 36 times that of the metal lithium, but uranium's specific heat capacity on a volumetric basis (i.e. per given volume of metal) is only 18% larger than lithium's. Since the volume-specific corollary of the Dulong–Petit specific heat capacity relationship requires that atoms of all elements take up (on average) the same volume in solids, there are many departures from it, with most of these due to variations in atomic size. For instance, arsenic, which is only 14.5% less dense than antimony, has nearly 59% more specific heat capacity on a mass basis. In other words, even though an ingot of arsenic is only about 17% larger than an antimony one of the same mass, it absorbs about 59% more heat for a given temperature rise. The heat capacity ratios of the two substances closely follow the ratios of their molar volumes (the ratios of numbers of atoms in the same volume of each substance); the departure from the correlation to simple volumes in this case is due to lighter arsenic atoms being significantly more closely packed than antimony atoms, instead of similar size. In other words, similar-sized atoms would cause a mole of arsenic to be 63% larger than a mole of antimony, with a correspondingly lower density, allowing its volume to more closely mirror its heat capacity behavior.


Other factors

Hydrogen bonds

Hydrogen-containing polar molecules like ethanol, ammonia, and water have powerful intermolecular hydrogen bonds when in their liquid phase. These bonds provide another place where heat may be stored as potential energy of vibration, even at comparatively low temperatures. Hydrogen bonds account for the fact that liquid water stores nearly the theoretical limit of 3 R per mole of atoms, even at relatively low temperatures (i.e. near the freezing point of water).

Impurities

In the case of alloys, there are several conditions in which small impurity concentrations can greatly affect the specific heat. Alloys may exhibit marked differences in behaviour even in the case of small amounts of impurities being one element of the alloy; for example, impurities in semiconducting ferromagnetic alloys may lead to quite different specific heat properties.


The simple case of the monatomic gas


In the case of a monatomic gas such as helium under constant volume, if it is assumed that no electronic or nuclear quantum excitations occur, each atom in the gas has only 3 degrees of freedom, all of a translational type. No energy dependence is associated with the degrees of freedom which define the position of the atoms. The degrees of freedom corresponding to the momenta of the atoms, however, are quadratic, and thus contribute to the heat capacity. There are N atoms, each of which has 3 components of momentum, which leads to 3N total degrees of freedom. This gives:

C_V = (3/2) N k = (3/2) n R

C_V,m = C_V / n = (3/2) R

where
C_V is the heat capacity at constant volume of the gas
C_V,m is the molar heat capacity at constant volume of the gas
N is the total number of atoms present in the container
n is the number of moles of atoms present in the container (n is the ratio of N and Avogadro's number)
R is the ideal gas constant, 8.3144621(75) J/(mol·K). R is equal to the product of Boltzmann's constant k and Avogadro's number.

The following table shows experimental molar constant-volume heat capacity measurements taken for each noble monatomic gas (at 1 atm and 25 °C):


Monatomic gas | CV,m (J/(mol·K)) | CV,m/R
He            | 12.5             | 1.50
Ne            | 12.5             | 1.50
Ar            | 12.5             | 1.50
Kr            | 12.5             | 1.50
Xe            | 12.5             | 1.50

It is apparent from the table that the experimental heat capacities of the monatomic noble gases agree with this simple application of statistical mechanics to a very high degree. The molar heat capacity of a monatomic gas at constant pressure is then

C_P,m = C_V,m + R = (5/2) R
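The agreement in the table is easy to reproduce (Python; a minimal sketch using only the gas constant):

R = 8.3144621  # gas constant, J/(mol*K)

cv_m = 1.5 * R   # three translational degrees of freedom at R/2 each
cp_m = cv_m + R  # Mayer's relation for an ideal gas

print(f"C_V,m = {cv_m:.1f} J/(mol*K) = {cv_m / R:.2f} R")  # 12.5 J/(mol*K), matching He through Xe above
print(f"C_P,m = {cp_m:.1f} J/(mol*K) = {cp_m / R:.2f} R")  # 20.8 J/(mol*K) = (5/2) R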

Diatomic gas
In the somewhat more complex case of an ideal gas of diatomic molecules, the presence of internal degrees of freedom is apparent. In addition to the three translational degrees of freedom, there are rotational and vibrational degrees of freedom. In general, the number of degrees of freedom, f, in a molecule with n_a atoms is 3n_a:

f = 3n_a

Mathematically, there are a total of three rotational degrees of freedom, one corresponding to rotation about each of the axes of three-dimensional space. However, in practice only the existence of two degrees of rotational freedom for linear molecules will be considered. This approximation is valid because the moment of inertia about the internuclear axis is vanishingly small with respect to other moments of inertia in the molecule (this is due to the very small rotational moments of single atoms, due to the concentration of almost all their mass at their centers; compare also the extremely small radii of the atomic nuclei compared to the distance between them in a diatomic molecule). Quantum mechanically, it can be shown that the interval between successive rotational energy eigenstates is inversely proportional to the moment of inertia about that axis. Because the moment of inertia about the internuclear axis is vanishingly small relative to the other two rotational axes, the energy spacing can be considered so high that no excitations of the rotational state can occur unless the temperature is extremely high. It is easy to calculate the expected number of vibrational degrees of freedom (or vibrational modes). There are three degrees of translational freedom, and two degrees of rotational freedom, therefore

f_vib = f − f_trans − f_rot = 6 − 3 − 2 = 1

(Figure: Constant-volume specific heat capacity of a diatomic gas (idealised). As temperature increases, heat capacity goes from 3/2 R (translation contribution only), to 5/2 R (translation plus rotation), finally to a maximum of 7/2 R (translation + rotation + vibration).)

Each rotational and translational degree of freedom will contribute R/2 to the total molar heat capacity of the gas. Each vibrational mode will contribute R to the total molar heat capacity, however. This is because for each

Heat capacity vibrational mode, there is a potential and kinetic energy component. Both the potential and kinetic components will contribute R/2 to the total molar heat capacity of the gas. Therefore, a diatomic molecule would be expected to have a molar constant-volume heat capacity of


where the terms originate from the translational, rotational, and vibrational degrees of freedom, respectively. The following is a table of some molar constant-volume heat capacities of various diatomic gases at standard temperature (25 °C = 298 K):

[Figure: Constant-volume specific heat capacity of diatomic gases (real gases) between about 200 K and 2000 K. This temperature range is not large enough to include both quantum transitions in all gases. Instead, at 200 K, all but hydrogen are fully rotationally excited, so all have at least 5/2 R heat capacity. (Hydrogen is already below 5/2 R, but it will require cryogenic conditions for even H2 to fall to 3/2 R.) Further, only the heavier gases fully reach 7/2 R at the highest temperature, due to the relatively small vibrational energy spacing of these molecules. HCl and H2 begin to make the transition above 500 K, but have not achieved it by 1000 K, since their vibrational energy-level spacing is too wide to fully participate in heat capacity, even at this temperature.]

Diatomic gas | Cv,m (J/(mol·K)) | Cv,m/R
H2 | 20.18 | 2.427
CO | 20.2 | 2.43
N2 | 19.9 | 2.39
Cl2 | 24.1 | 3.06
Br2 (vapour) | 28.2 | 3.39

From the above table, clearly there is a problem with the above theory. All of the diatomics examined have heat capacities that are lower than those predicted by the equipartition theorem, except Br2. However, as the atoms composing the molecules become heavier, the heat capacities move closer to their expected values. One of the reasons for this phenomenon is the quantization of vibrational, and to a lesser extent, rotational states. In fact, if it is assumed that the molecules remain in their lowest-energy vibrational state because the inter-level energy spacings for vibrational energies are large, the predicted molar constant-volume heat capacity for a diatomic molecule becomes just that from the contributions of translation and rotation:

Cv,m = (3/2)R + R = (5/2)R ≈ 20.8 J/(mol·K)

which is a fairly close approximation of the heat capacities of the lighter molecules in the above table. If the quantum harmonic oscillator approximation is made, it turns out that the quantum vibrational energy level spacings are actually inversely proportional to the square root of the reduced mass of the atoms composing the diatomic molecule. Therefore, in the case of the heavier diatomic molecules such as chlorine or bromine, the quantum vibrational energy level spacings become finer, which allows more excitations into higher vibrational levels at lower temperatures. The limit for storing heat capacity in vibrational modes, as discussed above, becomes 7R/2 = 3.5 R per mole of gas molecules, which is fairly consistent with the measured value for Br2 at room temperature. As temperatures rise, all diatomic gases approach this value.
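The temperature dependence of this vibrational contribution can be estimated from the quantum harmonic oscillator result for a single mode, Cv,vib = R x² e^x/(e^x − 1)², with x = θvib/T. The Python sketch below uses approximate, illustrative vibrational temperatures θvib for each gas (assumptions, not values from this text) and shows why Cl2 and Br2 sit well above (5/2)R at room temperature while H2 and N2 do not:

import math

R = 8.3144621  # J/(mol*K)

def cv_vib(theta_vib, T):
    # Einstein (quantum harmonic oscillator) heat capacity of one vibrational mode
    x = theta_vib / T
    return R * x**2 * math.exp(x) / (math.exp(x) - 1.0)**2

# Approximate vibrational temperatures theta_vib = h*nu/k (illustrative values)
for gas, theta in [("H2", 6330.0), ("N2", 3390.0), ("Cl2", 810.0), ("Br2", 470.0)]:
    cv = 2.5 * R + cv_vib(theta, 298.0)  # translation + rotation + vibration
    print(gas, round(cv / R, 2))  # H2 2.5, N2 2.5, Cl2 3.06, Br2 3.32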


General gas phase


The specific heat of a gas is best conceptualized in terms of the degrees of freedom of an individual molecule. The different degrees of freedom correspond to the different ways in which the molecule may store energy. The molecule may store energy in its translational motion according to the formula

Etrans = (1/2) m (vx² + vy² + vz²)

where m is the mass of the molecule and (vx, vy, vz) is the velocity of the center of mass of the molecule. Each direction of motion constitutes a degree of freedom, so that there are three translational degrees of freedom.

In addition, a molecule may have rotational motion. The kinetic energy of rotational motion is generally expressed as

Erot = (1/2) ω·I·ω

where I is the moment of inertia tensor of the molecule, and ω is the angular velocity pseudo-vector (in a coordinate system aligned with the principal axes of the molecule). In general, then, there will be three additional degrees of freedom corresponding to the rotational motion of the molecule. (For linear molecules one of the inertia tensor terms vanishes and there are only two rotational degrees of freedom.) The degrees of freedom corresponding to translations and rotations are called the rigid degrees of freedom, since they do not involve any deformation of the molecule.

The motions of the atoms in a molecule which are not part of its gross translational motion or rotation may be classified as vibrational motions. It can be shown that if there are n atoms in the molecule, there will be as many as

v = 3n − 3 − nrot

vibrational degrees of freedom, where nrot is the number of rotational degrees of freedom. A vibrational degree of freedom corresponds to a specific way in which all the atoms of a molecule can vibrate. The actual number of possible vibrations may be less than this maximal one, due to various symmetries. For example, triatomic nitrous oxide N2O will have only 2 degrees of rotational freedom (since it is a linear molecule) and contains n = 3 atoms: thus the number of possible vibrational degrees of freedom will be v = (3×3) − 3 − 2 = 4. There are four ways or "modes" in which the three atoms can vibrate, corresponding to 1) a mode in which an atom at each end of the molecule moves away from, or towards, the center atom at the same time, 2) a mode in which either end atom moves asynchronously with regard to the other two, and 3) and 4) two modes in which the molecule bends out of line, from the center, in the two possible planar directions that are orthogonal to its axis. Each vibrational degree of freedom confers TWO total degrees of freedom, since the vibrational energy mode partitions into 1 kinetic and 1 potential mode. This would give nitrous oxide 3 translational, 2 rotational, and 4 vibrational modes (but these last giving 8 vibrational degrees of freedom) for storing energy. This is a total of f = 3 + 2 + 8 = 13 total energy-storing degrees of freedom for N2O. For a bent molecule like water H2O, a similar calculation gives 9 − 3 − 3 = 3 modes of vibration, and 3 (translational) + 3 (rotational) + 6 (vibrational) = 12 degrees of freedom.
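This counting rule is mechanical enough to express as a short Python sketch, reproducing the N2O and H2O tallies above:

def energy_degrees_of_freedom(n_atoms, linear):
    # translations, rotations, and vibrational modes of an n_atoms molecule
    trans = 3
    rot = 2 if linear else 3
    vib_modes = 3 * n_atoms - trans - rot
    # each vibrational mode stores energy in both kinetic and potential form,
    # so it contributes two energy-storing degrees of freedom
    return trans + rot + 2 * vib_modes

print(energy_degrees_of_freedom(3, linear=True))   # N2O: 3 + 2 + 2*4 = 13
print(energy_degrees_of_freedom(3, linear=False))  # H2O: 3 + 3 + 2*3 = 12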


The storage of energy into degrees of freedom


If the molecule could be entirely described using classical mechanics, then the theorem of equipartition of energy could be used to predict that each degree of freedom would have an average energy of (1/2)kT, where k is Boltzmann's constant and T is the temperature. The calculation of the constant-volume heat capacity would then be straightforward. Each molecule would hold, on average, an energy of (f/2)kT, where f is the total number of degrees of freedom in the molecule. The total internal energy of the gas would be (f/2)NkT, where N is the total number of molecules. The heat capacity (at constant volume) would then be a constant (f/2)Nk; the mole-specific heat capacity would be (f/2)R (note that Nk = R if N is Avogadro's number, which is the case when considering the heat capacity of a mole of molecules); the molecule-specific heat capacity would be (f/2)k; and the dimensionless heat capacity would be just f/2. Here again, each vibrational mode contributes two degrees of freedom to f. Thus, a mole of nitrous oxide would have a total constant-volume heat capacity (including vibration) of (13/2)R by this calculation. In summary, the molar heat capacity (mole-specific heat capacity) of an ideal gas with f degrees of freedom is given by

Cv,m = (f/2)R

This equation applies to all polyatomic gases, if the degrees of freedom are known. The constant-pressure heat capacity for any gas exceeds this by an additional R (see Mayer's relation, above). For example, Cp,m would be a total of (15/2)R for nitrous oxide.
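Continuing the sketch above, the classical molar heat capacities follow directly (real room-temperature values are lower because of the quantum freeze-out discussed next):

R = 8.3144621  # J/(mol*K)

f = 13            # energy-storing degrees of freedom of N2O, from the count above
Cv_m = f / 2 * R  # (13/2)R, about 54.0 J/(mol*K), the classical limit
Cp_m = Cv_m + R   # (15/2)R, about 62.4 J/(mol*K), via Mayer's relation
print(round(Cv_m, 1), round(Cp_m, 1))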

The effect of quantum energy levels in storing energy in degrees of freedom


The various degrees of freedom cannot generally be considered to obey classical mechanics, however. Classically, the energy residing in each degree of freedom is assumed to be continuous: it can take on any positive value, depending on the temperature. In reality, the amount of energy that may reside in a particular degree of freedom is quantized: it may only be increased and decreased in finite amounts. A good estimate of the size of this minimum amount is the energy of the first excited state of that degree of freedom above its ground state. For example, the first vibrational state of the hydrogen chloride (HCl) molecule has an energy of about 5.74×10⁻²⁰ joule. If this amount of energy were deposited in a classical degree of freedom, it would correspond to a temperature of about 4156 K. If the temperature of the substance is so low that the equipartition energy of (1/2)kT is much smaller than this excitation energy, then there will be little or no energy in this degree of freedom. This degree of freedom is then said to be "frozen out". As mentioned above, the temperature corresponding to the first excited vibrational state of HCl is about 4156 K. For temperatures well below this value, the vibrational degrees of freedom of the HCl molecule will be frozen out. They will contain little energy and will not contribute to the thermal energy or the heat capacity of HCl gas.
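The quoted temperature is just the excitation energy divided by Boltzmann's constant; a one-line check:

k = 1.3806488e-23    # Boltzmann's constant, J/K
print(5.74e-20 / k)  # ~4157 K: the characteristic "freeze-out" temperature of HCl vibration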

Energy storage mode "freeze-out" temperatures


It can be seen that for each degree of freedom there is a critical temperature at which the degree of freedom "unfreezes" and begins to accept energy in a classical way. In the case of translational degrees of freedom, this is the temperature at which the thermal wavelength of the molecules is roughly equal to the size of the container. For a container of macroscopic size (e.g. 10 cm) this temperature is extremely small and has no significance, since the gas will certainly liquefy or freeze before this low temperature is reached. For any real gas, translational degrees of freedom may be considered to always be classical and contain an average energy of (3/2)kT per molecule.

The rotational degrees of freedom are the next to "unfreeze". In a diatomic gas, for example, the critical temperature for this transition is usually a few tens of kelvins, although with a very light molecule such as hydrogen the rotational energy levels will be spaced so widely that rotational heat capacity may not completely "unfreeze" until considerably higher temperatures are reached. Finally, the vibrational degrees of freedom are generally the last to unfreeze. As an example, for diatomic gases, the critical temperature for the vibrational motion is usually a few thousands of kelvins, and thus for the nitrogen in our example at room temperature, no vibrational modes would be excited, and the constant-volume heat capacity at room temperature is (5/2)R per mole, not (7/2)R per mole. As seen above, with some unusually heavy gases such as iodine gas I2, or bromine gas Br2, some vibrational heat capacity may be observed even at room temperatures.

It should be noted that it has been assumed that atoms have no rotational or internal degrees of freedom. This is in fact untrue. For example, atomic electrons can exist in excited states, and even the atomic nucleus can have excited states as well. Each of these internal degrees of freedom is assumed to be frozen out due to its relatively high excitation energy. Nevertheless, for sufficiently high temperatures, these degrees of freedom cannot be ignored. In a few exceptional cases, such molecular electronic transitions are of sufficiently low energy that they contribute to heat capacity at room temperature, or even at cryogenic temperatures. One example of an electronic transition degree of freedom which contributes heat capacity at standard temperature is that of nitric oxide (NO), in which the single electron in an anti-bonding molecular orbital has energy transitions which contribute to the heat capacity of the gas even at room temperature.

An example of a nuclear magnetic transition degree of freedom which is of importance to heat capacity is the transition which converts the spin isomers of hydrogen gas (H2) into each other. At room temperature, the proton spins of hydrogen gas are aligned 75% of the time, resulting in orthohydrogen when they are. Thus, some thermal energy has been stored in the degree of freedom available when parahydrogen (in which spins are anti-aligned) absorbs energy and is converted to the higher-energy ortho form. However, at the temperature of liquid hydrogen, not enough heat energy is available to produce orthohydrogen (that is, the transition energy between forms is large enough to "freeze out" at this low temperature), and thus the parahydrogen form predominates. The heat capacity of the transition is sufficient to release enough heat, as orthohydrogen converts to the lower-energy parahydrogen, to boil the hydrogen liquid to gas again, if this evolved heat is not removed with a catalyst after the gas has been cooled and condensed.

This example also illustrates the fact that some modes of storage of heat may not be in constant equilibrium with each other in substances, and heat absorbed or released from such phase changes may "catch up" with temperature changes of substances only after a certain time. In other words, the heat evolved and absorbed from the ortho-para isomeric transition contributes to the heat capacity of hydrogen on long time-scales, but not on short time-scales. These time scales may also depend on the presence of a catalyst. Less exotic phase changes may contribute to the heat capacity of substances and systems as well, as (for example) when water is converted back and forth between solid, liquid, and gas form.
Phase changes store heat energy entirely in breaking the bonds of the potential-energy interactions between molecules of a substance. As in the case of hydrogen, it is also possible for phase changes to be hindered as the temperature drops, so that they do not catch up and become apparent without a catalyst. For example, it is possible to supercool liquid water to below the freezing point and not observe the heat evolved when the water changes to ice, so long as the water remains liquid. This heat appears instantly when the water freezes.


Solid phase
For matter in a crystalline solid phase, the Dulong-Petit law, which was discovered empirically, states that the mole-specific heat capacity assumes the value 3 R. Indeed, for solid metallic chemical elements at room temperature, molar heat capacities range from about 2.8 R to 3.4 R. Large exceptions at the lower end involve solids composed of relatively low-mass, tightly bonded atoms, such as beryllium at 2.0 R and diamond at only 0.735 R. The latter conditions create larger quantum vibrational energy spacing, so that many vibrational modes have energies too high to be populated (and thus are "frozen out") at room temperature. At the higher end of possible heat capacities, heat capacity may exceed 3 R by modest amounts, due to contributions from anharmonic vibrations in solids, and sometimes a modest contribution from conduction electrons in metals. These are not degrees of freedom treated in the Einstein or Debye theories.

[Figure: The dimensionless heat capacity divided by three, as a function of temperature, as predicted by the Debye model and by Einstein's earlier model. The horizontal axis is the temperature divided by the Debye temperature. Note that, as expected, the dimensionless heat capacity is zero at absolute zero, and rises toward its classical Dulong-Petit value as the temperature becomes much larger than the Debye temperature. The red line corresponds to the classical limit of the Dulong-Petit law.]

The theoretical maximum heat capacity for multi-atomic gases at higher temperatures, as the molecules become larger, also approaches the Dulong-Petit limit of 3 R, so long as this is calculated per mole of atoms, not molecules. The reason for this behavior is that, in theory, gases with very large molecules have almost the same high-temperature heat capacity as solids, lacking only the (small) heat capacity contribution that comes from potential energy that cannot be stored between separate molecules in a gas.

The Dulong-Petit limit results from the equipartition theorem, and as such is only valid in the classical limit of a microstate continuum, which is a high-temperature limit. For light and non-metallic elements, as well as most of the common molecular solids based on carbon compounds at standard ambient temperature, quantum effects may also play an important role, as they do in multi-atomic gases. These effects usually combine to give heat capacities lower than 3 R per mole of atoms in the solid, although heat capacities calculated per mole of molecules in molecular solids may be more than 3 R. For example, the heat capacity of water ice at the melting point is about 4.6 R per mole of molecules, but only 1.5 R per mole of atoms.

As noted, heat capacity values far lower than 3 R "per atom" (as is the case with diamond and beryllium) result from the "freezing out" of possible vibrational modes for light atoms at suitably low temperatures, just as happens in many low-mass-atom gases at room temperature (where vibrational modes are all frozen out). Because of high crystal binding energies, the effects of vibrational mode freezing are observed in solids more often than in liquids: for example, the heat capacity of liquid water is twice that of ice at near the same temperature, and is again close to the 3 R per mole of atoms of the Dulong-Petit theoretical maximum.
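The Debye theory mentioned above makes this freeze-out quantitative. A minimal Python sketch that integrates the Debye expression numerically (the Debye temperatures used are approximate literature values, assumed here for illustration):

import math

R = 8.3144621  # J/(mol*K)

def debye_cv(T, theta_D, n=2000):
    # Cv = 9R (T/theta_D)^3 * integral_0^{theta_D/T} x^4 e^x / (e^x - 1)^2 dx
    x_max = theta_D / T
    dx = x_max / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * dx  # midpoint rule avoids the x = 0 endpoint
        total += x**4 * math.exp(x) / (math.exp(x) - 1.0)**2 * dx
    return 9.0 * R * (T / theta_D)**3 * total

print(round(debye_cv(298.0, 1860.0) / R, 2))  # diamond (theta_D ~ 1860 K): ~0.74 R
print(round(debye_cv(298.0, 105.0) / R, 2))   # lead (theta_D ~ 105 K): ~2.99 R, near 3 R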


Liquid phase
For a more modern and precise analysis of the heat capacities of solids, especially at low temperatures, it is useful to use the idea of phonons; see the Debye model. Phonons can also be applied to the heat capacity of liquids.

Physicists have revived concepts first put forth in the 1940s to develop a new theory of the heat capacity of liquids. Created by Dmitry Bolmatov and Kostya Trachenko, the new "phonon theory of liquid thermodynamics" has successfully predicted the heat capacity of 21 different liquids, ranging from metals to noble and molecular liquids. The researchers say that the theory covers both the classical and quantum regimes and agrees with experiment over a wide range of temperatures and pressures.

While physicists have a good theoretical understanding of the heat capacity of both solids and gases, a general theory of the heat capacity of liquids has always remained elusive. Apart from being an awkward hole in our knowledge of condensed-matter physics, heat capacity (the amount of heat needed to change a substance's temperature by a certain amount) is a technologically relevant quantity that it would be nice to be able to predict. Physicists had been reluctant to develop a theory because the relevant interactions in a liquid are both strong and specific to that liquid, which, it was felt, would make it tricky to develop a general way of calculating heat capacity for liquids.

Using phonons (quantized lattice vibrations that behave like particles) to develop a theory of specific heat is nothing new in the world of solids. After all, the atoms in a solid oscillate about fixed points in the lattice, which means that the only way that heat, in the form of randomly vibrating atoms, can move through a material is via phonons. Indeed, Albert Einstein and Peter Debye famously developed separate theories early in the 20th century to explain the high-temperature and low-temperature heat capacity of solids, respectively. But, given that the atoms in a liquid are free to move and so can absorb or transfer heat without any need for phonons, it is not at first glance obvious why phonons should be a good way of describing how heat is transferred and absorbed in a liquid.

Anyone who has dunked their head under water knows that sound propagates very well in liquids, in the form of longitudinal phonons. What is not obvious, though, is whether transverse or "shear" phonons, which exist in solids, also occur in liquids. Because each phonon mode contributes to the specific heat, it is very important to know how many modes occur in a liquid of interest. This problem was first tackled in the 1940s by the Russian physicist Yakov Frenkel. He pointed out that for vibrations above a certain frequency (the Frenkel frequency), molecules in a liquid behave like those in a solid and can therefore support shear phonons. His idea was that it takes a characteristic amount of time for an atom or molecule to move from one equilibrium position in the liquid to another. As long as the period of the vibration is shorter than this time, the molecules will vibrate as if they are fixed in a solid.

With this in mind, Bolmatov and colleagues derived an expression for the energy of a liquid in terms of its temperature and three parameters: the liquid's coefficient of expansion, and its Debye and Frenkel frequencies. The Debye frequency is the theoretical maximum frequency that atoms or molecules in the liquid can oscillate at, and can be derived from the speed of sound in the liquid.
The Frenkel frequency puts a lower bound on the oscillation frequency of the atoms or molecules, and can be derived from the viscosity and shear modulus of the liquid. The result is an expression for specific heat as a function of temperature that can be compared with experimental data. In all 21 liquids studied, the theory was able to reproduce the observed drop in heat capacity as temperature increases. The physicists explain this drop in terms of an increase in the Frenkel frequency as a function of temperature: as the material gets hotter, there are fewer shear phonon modes available to transport heat, and therefore the heat capacity drops. The theory was able to describe simple liquids, such as the noble liquids, which consist of atoms, through to complicated molecular liquids such as hydrogen sulphide, methane and water. The physicists say that this broad agreement suggests that Frenkel's original proposal, that the phonon states of the liquid depend upon a characteristic time, applies to a wide range of materials. The result is that physicists should be able to predict the specific heat of many liquids without having to worry about complicated interactions between constituent atoms or molecules.

Bolmatov told Physics World that there are two reasons why it took so long for Frenkel's ideas to be applied to heat capacity. "The first is that it took 50 years to verify Frenkel's prediction," he says. The second reason is that historically the thermodynamic theory of liquids was developed from the theory of gases, not the theory of solids, despite the similarities between liquids and solids. "This development had a certain inertia associated with it and consequently resulted in some delays and more thought was required for proposing that Frenkel's idea can be translated into a consistent phonon theory of liquids."[citation needed]

The specific heat of amorphous materials has characteristic discontinuities at the glass transition temperature due to rearrangements that occur in the distribution of atoms. These discontinuities are frequently used to detect the glass transition temperature, where a supercooled liquid transforms to a glass.


Table of specific heat capacities


Note that the especially high molar values, as for paraffin, gasoline, water and ammonia, result from calculating specific heats in terms of moles of molecules. If specific heat is expressed per mole of atoms for these substances, none of the constant-volume values exceed, to any large extent, the theoretical Dulong-Petit limit of 25 J/(mol·K) = 3 R per mole of atoms (see the last column of this table). Paraffin, for example, has very large molecules and thus a high heat capacity per mole, but as a substance it does not have remarkable heat capacity in terms of volume, mass, or atom-mol (which is just 1.41 R per mole of atoms, or less than half of most solids, in terms of heat capacity per atom).

In the last column, major departures of solids at standard temperatures from the Dulong-Petit law value of 3 R are usually due to low atomic weight plus high bond strength (as in diamond), causing some vibrational modes to have too much energy to be available to store thermal energy at the measured temperature. For gases, departure from 3 R per mole of atoms in this table is generally due to two factors: (1) failure of the higher quantum-energy-spaced vibrational modes in gas molecules to be excited at room temperature, and (2) loss of potential-energy degrees of freedom for small gas molecules, simply because most of their atoms are not bonded maximally in space to other atoms, as happens in many solids.
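The last column can be reproduced from the mass-specific values. For paraffin wax (C25H52, 77 atoms per molecule, molar mass roughly 352 g/mol, both assumed round figures), a quick sketch:

R = 8.3144621       # J/(mol*K)
cp = 2.5            # J/(g*K), mass-specific heat of paraffin from the table
molar_mass = 352.0  # g/mol for C25H52 (approximate)
atoms = 25 + 52     # atoms per molecule

Cp_molar = cp * molar_mass       # ~880 J/(mol*K) per mole of molecules
per_atom = Cp_molar / atoms / R  # ~1.4 R per mole of atoms, well under 3 R
print(round(Cp_molar), round(per_atom, 2))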

Table of specific heat capacities at 25 °C (298 K) unless otherwise noted

Substance | Phase | (mass) specific heat capacity cp or cm, J/(g·K) | Constant-pressure molar heat capacity Cp,m, J/(mol·K) | Constant-volume molar heat capacity Cv,m, J/(mol·K) | Volumetric heat capacity Cv, J/(cm³·K) | Constant-vol. atom-molar heat capacity Cv,m(atom), in units of R per atom-mol
Air (Sea level, dry, 0 °C (273.15 K)) | gas | 1.0035 | 29.07 | 20.7643 | 0.001297 | ~1.25 R
Air (typical room conditions^A) | gas | 1.012 | 29.19 | 20.85 | 0.00121 | ~1.25 R
Aluminium | solid | 0.897 | 24.2 | | 2.422 | 2.91 R
Ammonia | liquid | 4.700 | 80.08 | | 3.263 | 3.21 R
Animal tissue (incl. human)[7] | mixed | 3.5 | | | 3.7* |
Antimony | solid | 0.207 | 25.2 | | 1.386 | 3.03 R
Argon | gas | 0.5203 | 20.7862 | 12.4717 | | 1.50 R
Arsenic | solid | 0.328 | 24.6 | | 1.878 | 2.96 R
Beryllium | solid | 1.82 | 16.4 | | 3.367 | 1.97 R
Bismuth | solid | 0.123 | 25.7 | | 1.20 | 3.09 R
Cadmium | solid | 0.231 | 26.02 | | | 3.13 R
Carbon dioxide CO2 | gas | 0.839* | 36.94 | 28.46 | | 1.14 R
Chromium | solid | 0.449 | 23.35 | | | 2.81 R
Copper | solid | 0.385 | 24.47 | | 3.45 | 2.94 R
Diamond | solid | 0.5091 | 6.115 | | 1.782 | 0.74 R
Ethanol | liquid | 2.44 | 112 | | 1.925 | 1.50 R
Gasoline (octane) | liquid | 2.22 | 228 | | 1.64 | 1.05 R
Glass | solid | 0.84 | | | |
Gold | solid | 0.129 | 25.42 | | 2.492 | 3.05 R
Granite | solid | 0.790 | | | 2.17 |
Graphite | solid | 0.710 | 8.53 | | 1.534 | 1.03 R
Helium | gas | 5.1932 | 20.7862 | 12.4717 | | 1.50 R
Hydrogen | gas | 14.30 | 28.82 | | | 1.23 R
Hydrogen sulfide H2S | gas | 1.015* | 34.60 | | | 1.05 R
Iron | solid | 0.450 | 25.1[citation needed] | | 3.537 | 3.02 R
Lead | solid | 0.129 | 26.4 | | 1.44 | 3.18 R
Lithium | solid | 3.58 | 24.8 | | 1.912 | 2.98 R
Lithium at 181 °C | liquid | 4.379 | 30.33 | | 2.242 | 3.65 R
Magnesium | solid | 1.02 | 24.9 | | 1.773 | 2.99 R
Mercury | liquid | 0.1395 | 27.98 | | 1.888 | 3.36 R
Methane at 2 °C | gas | 2.191 | 35.69 | | | 0.66 R?
Methanol (298 K) | liquid | 2.14 | 68.62 | | | 1.38 R
Nitrogen | gas | 1.040 | 29.12 | 20.8 | | 1.25 R
Neon | gas | 1.0301 | 20.7862 | 12.4717 | | 1.50 R
Oxygen | gas | 0.918 | 29.38 | 21.0 | | 1.26 R
Paraffin wax C25H52 | solid | 2.5 (ave) | 900 | | 2.325 | 1.41 R
Polyethylene (rotomolding grade) | solid | 2.3027 | | | |
Silica (fused) | solid | 0.703 | 42.2 | | 1.547 | 1.69 R
Silver | solid | 0.233 | 24.9 | | 2.44 | 2.99 R
Sodium | solid | 1.230 | 28.23 | | | 3.39 R
Steel | solid | 0.466 | | | |
Tin | solid | 0.227 | 27.112 | | | 3.26 R
Titanium | solid | 0.523 | 26.060 | | | 3.13 R
Tungsten | solid | 0.134 | 24.8 | | 2.58 | 2.98 R
Uranium | solid | 0.116 | 27.7 | | 2.216 | 3.33 R
Water at 100 °C (steam) | gas | 2.080 | 37.47 | 28.03 | | 1.12 R
Water at 25 °C | liquid | 4.1813 | 75.327 | 74.53 | 4.1796 | 3.02 R
Water at 100 °C | liquid | 4.1813 | 75.327 | 74.53 | 4.2160 | 3.02 R
Water at −10 °C (ice) | solid | 2.11 | 38.09 | | 1.938 | 1.53 R
Zinc | solid | 0.387 | 25.2 | | 2.76 | 3.03 R

^A Assuming an altitude of 194 metres above mean sea level (the worldwide median altitude of human habitation), an indoor temperature of 23 °C, a dewpoint of 9 °C (40.85% relative humidity), and 760 mmHg sea-level-corrected barometric pressure (molar water vapor content = 1.16%).
*Derived data by calculation. This is for water-rich tissues such as brain. The whole-body average figure for mammals is approximately 2.9 J/(cm³·K).

Specific heat capacity of building materials


(Usually of interest to builders and solar designers)

Substance | Phase | cp J/(g·K)
Asphalt | solid | 0.920
Brick | solid | 0.840
Concrete | solid | 0.880
Glass, silica | solid | 0.840
Glass, crown | solid | 0.670
Glass, flint | solid | 0.503
Glass, pyrex | solid | 0.753
Granite | solid | 0.790
Gypsum | solid | 1.090
Marble, mica | solid | 0.880
Sand | solid | 0.835
Soil | solid | 0.800
Sulphur hexafluoride | gas | 0.664
Wood | solid | 1.7 (1.2 to 2.3)


Notes
[1] Besides being a round number, this had a very practical effect: relatively few people live and work at precisely sea level; 100 kPa equates to the mean pressure at an altitude of about 112 metres (which is closer to the 194-metre worldwide median altitude of human habitation).
[2] Thermodynamics: An Engineering Approach, by Yunus A. Cengel and Michael A. Boles.
[3] Yunus A. Cengel and Michael A. Boles, Thermodynamics: An Engineering Approach, 7th Edition, McGraw-Hill, 2010, ISBN 007-352932-X.
[4] See e.g., Section 4 and onwards.
[5] Media:Translational motion.gif
[6] The comparison must be made under constant-volume conditions (CvH) so that no work is performed. Nitrogen's CvH (100 kPa, 20 °C) = 20.8 J/(mol·K) vs. the monatomic gases, which equal 12.4717 J/(mol·K).
[7] Page 183 in: (also giving a density of 1.06 kg/L)

External links


Air Specific Heat Capacity Calculator (http://www.enggcyclopedia.com/calculators/physical-properties/air-specific-heat-calculator/)

Compressibility

In thermodynamics and fluid mechanics, compressibility is a measure of the relative volume change of a fluid or solid as a response to a pressure (or mean stress) change:

β = −(1/V) (∂V/∂p)

where V is volume and p is pressure.


Definition
The specification above is incomplete, because for any object or system the magnitude of the compressibility depends strongly on whether the process is adiabatic or isothermal. Accordingly, the isothermal compressibility is defined:

β_T = −(1/V) (∂V/∂p)_T

where the subscript T indicates that the partial differential is to be taken at constant temperature. The isentropic compressibility is defined:

β_S = −(1/V) (∂V/∂p)_S

where S is entropy. For a solid, the distinction between the two is usually negligible.

Relation to speed of sound


Because the speed of sound is defined in classical mechanics as

c² = (∂p/∂ρ)_S

where ρ is the density of the material, it is therefore found, through methods of replacing partial derivatives, that the isentropic compressibility can be expressed as

β_S = 1/(ρ c²)

Relation to bulk modulus


The inverse of the compressibility is called the bulk modulus, often denoted K (sometimes B); the article on the bulk modulus also contains some examples for different materials. The compressibility equation relates the isothermal compressibility (and indirectly the pressure) to the structure of the liquid.

Thermodynamics
The term "compressibility" is also used in thermodynamics to describe the deviance in the thermodynamic properties of a real gas from those expected from an ideal gas. The compressibility factor is defined as

where p is the pressure of the gas, T is its temperature, and

is its molar volume. In the case of an ideal gas, the

compressibility factor Z is equal to unity, and the familiar ideal gas law is recovered:

Z can, in general, be either greater or less than unity for a real gas. The deviation from ideal gas behavior tends to become particularly significant (or, equivalently, the compressibility factor strays far from unity) near the critical point, or in the case of high pressure or low temperature. In these cases, a generalized compressibility chart or an alternative equation of state better suited to the problem must be utilized to produce accurate results.

A related situation occurs in hypersonic aerodynamics, where dissociation causes an increase in the "notional" molar volume, because a mole of oxygen, as O2, becomes 2 moles of monatomic oxygen, and N2 similarly dissociates to 2 N. Since this occurs dynamically as air flows over the aerospace object, it is convenient to alter Z, defined for an initial 30 gram mole of air, rather than track the varying mean molecular weight, millisecond by millisecond. This pressure-dependent transition occurs for atmospheric oxygen in the 2500 K to 4000 K temperature range, and in the 5000 K to 10,000 K range for nitrogen. In transition regions, where this pressure-dependent dissociation is incomplete, both beta (the volume/pressure differential ratio) and the differential, constant-pressure heat capacity will greatly increase. For moderate pressures, above 10,000 K the gas further dissociates into free electrons and ions. Z for the resulting plasma can similarly be computed for a mole of initial air, producing values between 2 and 4 for partially or singly ionized gas. Each dissociation absorbs a great deal of energy in a reversible process, and this greatly reduces the thermodynamic temperature of hypersonic gas decelerated near the aerospace object. Ions or free radicals transported to the object surface by diffusion may release this extra (non-thermal) energy if the surface catalyzes the slower recombination process.

The isothermal compressibility is related to the isentropic (or adiabatic) compressibility by the relation

β_T = β_S + (α² T)/(ρ cp)

via Maxwell's relations, where α is the volumetric coefficient of thermal expansion, ρ is the density, and cp is the specific heat capacity at constant pressure. More simply stated,

β_T / β_S = γ

where γ is the heat capacity ratio. See the relations between specific heats for a derivation.
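For an ideal gas these expressions reduce to β_T = 1/p and β_S = 1/(γp), which, together with the speed-of-sound relation above, give the familiar c = sqrt(γp/ρ). A short Python sketch for room air (illustrative values assumed):

import math

p = 101325.0  # Pa, atmospheric pressure
gamma = 1.4   # heat capacity ratio of air
rho = 1.204   # kg/m^3, density of air near 20 C

beta_T = 1.0 / p         # isothermal compressibility of an ideal gas
beta_S = beta_T / gamma  # isentropic compressibility, since beta_T/beta_S = gamma

c = math.sqrt(1.0 / (rho * beta_S))  # speed of sound
print(beta_T, beta_S, round(c))      # ~9.9e-6 1/Pa, ~7.0e-6 1/Pa, ~343 m/s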

Earth science
Vertical, drained compressibilities
Material | β (m²/N or Pa⁻¹)
Plastic clay | 2×10⁻⁶ to 2.6×10⁻⁷
Stiff clay | 2.6×10⁻⁷ to 1.3×10⁻⁷
Medium-hard clay | 1.3×10⁻⁷ to 6.9×10⁻⁸
Loose sand | 1×10⁻⁷ to 5.2×10⁻⁸
Dense sand | 2×10⁻⁸ to 1.3×10⁻⁸
Dense, sandy gravel | 1×10⁻⁸ to 5.2×10⁻⁹
Rock, fissured | 6.9×10⁻¹⁰ to 3.3×10⁻¹⁰
Rock, sound | <3.3×10⁻¹⁰
Water at 25 °C (undrained) | 4.6×10⁻¹⁰

Compressibility is used in the Earth sciences to quantify the ability of a soil or rock to reduce in volume under applied pressure. This concept is important for specific storage, when estimating groundwater reserves in confined aquifers. Geologic materials are made up of two portions: solids and voids (the void fraction is the porosity). The void space can be full of liquid or gas. Geologic materials reduce in volume only when the void spaces are reduced, which expels the liquid or gas from the voids. This can happen over a period of time, resulting in settlement.

It is an important concept in geotechnical engineering in the design of certain structural foundations. For example, the construction of high-rise structures over underlying layers of highly compressible bay mud poses a considerable design constraint, and often leads to use of driven piles or other innovative techniques.


Fluid dynamics
The degree of compressibility of a fluid has strong implications for its dynamics. Most notably, the propagation of sound is dependent on the compressibility of the medium.

Aeronautical dynamics
Compressibility is an important factor in aerodynamics. At low speeds, the compressibility of air is not significant in relation to aircraft design, but as the airflow nears and exceeds the speed of sound, a host of new aerodynamic effects become important in the design of aircraft. These effects, often several of them at a time, made it very difficult for World War II era aircraft to reach speeds much beyond 800 km/h (500 mph). Many effects are often mentioned in conjunction with the term "compressibility", but regularly have little to do with the compressible nature of air. From a strictly aerodynamic point of view, the term should refer only to those side-effects arising as a result of the changes in airflow from an incompressible fluid (similar in effect to water) to a compressible fluid (acting as a gas) as the speed of sound is approached. There are two effects in particular: wave drag and critical Mach.

Negative compressibility
Under very specific conditions the compressibility can be negative.



Thermal expansion

Thermal expansion is the tendency of matter to change in volume in response to a change in temperature. When a substance is heated, its particles begin moving more and thus usually maintain a greater average separation. Materials which contract with increasing temperature are unusual; this effect is limited in size, and only occurs within limited temperature ranges (see examples below). The degree of expansion divided by the change in temperature is called the material's coefficient of thermal expansion and generally varies with temperature.

Overview
Predicting expansion
If an equation of state is available, it can be used to predict the values of the thermal expansion at all the required temperatures and pressures, along with many other state functions.

Contraction effects
A number of materials contract on heating within certain temperature ranges; this is usually called negative thermal expansion, rather than "thermal contraction". For example, the coefficient of thermal expansion of water drops to zero as it is cooled to 3.983 °C and then becomes negative below this temperature; this means that water has a maximum density at this temperature, and this leads to bodies of water maintaining this temperature at their lower depths during extended periods of sub-zero weather. Also, fairly pure silicon has a negative coefficient of thermal expansion for temperatures between about 18 and 120 K.

Thermal expansion

224

Factors affecting thermal expansion


Unlike gases or liquids, solid materials tend to keep their shape when undergoing thermal expansion. Thermal expansion generally decreases with increasing bond energy, which also has an effect on the melting point of solids, so, high melting point materials are more likely to have lower thermal expansion. In general, liquids expand slightly more than solids. The thermal expansion of glasses is higher compared to that of crystals. At the glass transition temperature, rearrangements that occur in an amorphous material lead to characteristic discontinuities of coefficient of thermal expansion or specific heat. These discontinuities allow detection of the glass transition temperature where a supercooled liquid transforms to a glass. Absorption or desorption of water (or other solvents) can change the size of many common materials; many organic materials change size much more due to this effect than they do to thermal expansion. Common plastics exposed to water can, in the long term, expand by many percent.

Coefficient of thermal expansion


The coefficient of thermal expansion describes how the size of an object changes with a change in temperature. Specifically, it measures the fractional change in size per degree change in temperature at a constant pressure. Several types of coefficients have been developed: volumetric, area, and linear. Which is used depends on the particular application and which dimensions are considered important. For solids, one might only be concerned with the change along a length, or over some area.

The volumetric thermal expansion coefficient is the most basic thermal expansion coefficient. In general, substances expand or contract when their temperature changes, with expansion or contraction occurring in all directions. Substances that expand at the same rate in every direction are called isotropic. For isotropic materials, the area and linear coefficients may be calculated from the volumetric coefficient. Mathematical definitions of these coefficients are given below for solids, liquids, and gases.

General volumetric thermal expansion coefficient


In the general case of a gas, liquid, or solid, the volumetric coefficient of thermal expansion is given by

α_V = (1/V) (∂V/∂T)_p

The subscript p indicates that the pressure is held constant during the expansion, and the subscript V stresses that it is the volumetric (not linear) expansion that enters this general definition. In the case of a gas, the fact that the pressure is held constant is important, because the volume of a gas will vary appreciably with pressure as well as temperature. For a gas of low density this can be seen from the ideal gas law.

Expansion in solids
Materials generally change their size when subjected to a temperature change while the pressure is held constant. In the special case of solid materials, the pressure does not appreciably affect the size of an object, and so, for solids, it's usually not necessary to specify that the pressure be held constant. Common engineering solids usually have coefficients of thermal expansion that do not vary significantly over the range of temperatures where they are designed to be used, so where extremely high accuracy is not required, practical calculations can be based on a constant, average, value of the coefficient of expansion.

Thermal expansion

225

Linear expansion
To a first approximation, the change in length measurements of an object ("linear dimension" as opposed to, e.g., volumetric dimension) due to thermal expansion is related to temperature change by a "linear expansion coefficient". It is the fractional change in length per degree of temperature change. Assuming negligible effect of pressure, we may write:

α_L = (1/L) (dL/dT)

where L is a particular length measurement and dL/dT is the rate of change of that linear dimension per unit change in temperature. The change in the linear dimension can be estimated to be:

ΔL/L = α_L ΔT

This equation works well as long as the linear-expansion coefficient does not change much over the change in temperature ΔT. If it does, the equation must be integrated.

Effects on strain

For solid materials with a significant length, like rods or cables, an estimate of the amount of thermal expansion can be described by the material strain, given by ε_thermal and defined as:

ε_thermal = (L_final − L_initial) / L_initial

where L_initial is the length before the change of temperature and L_final is the length after the change of temperature.

For most solids, thermal expansion is proportional to the change in temperature. Thus, the change in either the strain or temperature can be estimated by:

ε_thermal = α_L ΔT

where ΔT is the difference of the temperature between the two recorded strains, measured in degrees Celsius or kelvin, and α_L is the linear coefficient of thermal expansion in "per degree Celsius" or "per kelvin", denoted by °C⁻¹ or K⁻¹, respectively.
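As a worked example, a minimal Python sketch for a steel rod (the coefficient is a typical assumed value):

alpha_L = 12e-6  # 1/K, typical linear expansion coefficient of steel (assumed)
L0 = 10.0        # m, length at the reference temperature
dT = 50.0        # K, temperature rise

dL = alpha_L * L0 * dT    # change in length
strain = alpha_L * dT     # thermal strain
print(dL * 1000, strain)  # 6.0 mm of growth; strain = 6.0e-4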

Area expansion
The area thermal expansion coefficient relates the change in a material's area dimensions to a change in temperature. It is the fractional change in area per degree of temperature change. Ignoring pressure, we may write:

α_A = (1/A) (dA/dT)

where A is some area of interest on the object, and dA/dT is the rate of change of that area per unit change in temperature. The change in the area can be estimated as:

ΔA/A = α_A ΔT

This equation works well as long as the area expansion coefficient does not change much over the change in temperature ΔT. If it does, the equation must be integrated.


Volumetric expansion
For a solid, we can ignore the effects of pressure on the material, and the volumetric thermal expansion coefficient can be written:

α_V = (1/V) (dV/dT)

where V is the volume of the material, and dV/dT is the rate of change of that volume with temperature.

This means that the volume of a material changes by some fixed fractional amount. For example, a steel block with a volume of 1 cubic metre might expand to 1.002 cubic metres when the temperature is raised by 50 °C. This is an expansion of 0.2%. If we had a block of steel with a volume of 2 cubic metres, then under the same conditions, it would expand to 2.004 cubic metres, again an expansion of 0.2%. The volumetric expansion coefficient would be 0.2% for 50 K, or 0.004%/K.

If we already know the expansion coefficient, then we can calculate the change in volume

ΔV/V = α_V ΔT

where ΔV/V is the fractional change in volume (e.g., 0.002) and ΔT is the change in temperature (50 °C).

The above example assumes that the expansion coefficient did not change as the temperature changed. This is not always true, but for small changes in temperature, it is a good approximation. If the volumetric expansion coefficient does change appreciably with temperature, then the above equation will have to be integrated:

ΔV/V = ∫ (from T_i to T_f) α_V(T) dT

where T_i is the starting temperature and α_V(T) is the volumetric expansion coefficient as a function of temperature T.

Isotropic materials

For exactly isotropic materials, and for small expansions, the volumetric thermal expansion coefficient is three times the linear coefficient:

α_V = 3 α_L

This ratio arises because volume is composed of three mutually orthogonal directions. Thus, in an isotropic material, for small differential changes, one-third of the volumetric expansion is in a single axis. As an example, take a cube of steel that has sides of length L. The original volume will be V = L³ and the new volume, after a temperature increase, will be

V + ΔV = (L + ΔL)³ = L³ + 3L²ΔL + 3L(ΔL)² + (ΔL)³

We can make the substitutions ΔV/V = α_V ΔT and, for isotropic materials, ΔL/L = α_L ΔT. We now have:

α_V ΔT = 3 α_L ΔT + 3 (α_L ΔT)² + (α_L ΔT)³

Since the volumetric and linear coefficients are defined only for extremely small temperature and dimensional changes (that is, when α_L ΔT and ΔT are small), the last two terms can be ignored and we get the above relationship between the two coefficients. If we are trying to go back and forth between volumetric and linear coefficients using larger values of ΔT, then we will need to take into account the third term, and sometimes even the fourth term.

Similarly, the area thermal expansion coefficient is two times the linear coefficient:

α_A = 2 α_L

This ratio can be found in a way similar to that in the linear example above, noting that the area of a face on the cube is just L². Also, the same considerations must be made when dealing with large values of ΔT.
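The size of the neglected terms is easy to check numerically; a brief Python sketch using the aluminium coefficient from the table below:

alpha_L = 23.1e-6  # 1/K, linear coefficient of aluminium
dT = 100.0         # K, a fairly large temperature change

exact = (1 + alpha_L * dT)**3 - 1  # exact fractional volume change of a cube
approx = 3 * alpha_L * dT          # the linearized alpha_V * dT
print(exact, approx)  # they differ by only ~0.2%, so alpha_V = 3*alpha_L is safe here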


Anisotropic materials
Materials with anisotropic structures, such as crystals (with less than cubic symmetry) and many composites, will generally have different linear expansion coefficients in different directions. As a result, the total volumetric expansion is distributed unequally among the three axes. If the crystal symmetry is monoclinic or triclinic, even the angles between these axes are subject to thermal changes. In such cases it is necessary to treat the coefficient of thermal expansion as a tensor with up to six independent elements. A good way to determine the elements of the tensor is to study the expansion by powder diffraction.

Expansion in gases
For an ideal gas, the volumetric thermal expansion (i.e., the relative change in volume due to a temperature change) depends on the type of process in which temperature is changed. Two known cases are isobaric change, where the pressure is held constant, and adiabatic change, where no heat is exchanged and no change in entropy occurs. In an isobaric process, the volumetric thermal expansivity, which we denote α_p, follows from the ideal gas law V = nRT/p:

α_p = (1/V) (∂V/∂T)_p = 1/T

The index p denotes an isobaric process.
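This makes gases far more expansive than typical solids at ordinary temperatures; a one-line check:

T = 293.15            # K, room temperature
alpha_p = 1.0 / T     # isobaric expansivity of an ideal gas
print(alpha_p * 1e6)  # ~3411 x 10^-6 per K, versus ~23 for aluminium (table below)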

Expansion in liquids
Theoretically, the coefficient of linear expansion can be found from the coefficient of volumetric expansion (α_V ≈ 3α_L). For liquids, however, α_L is calculated through the experimental determination of α_V.

Expansion in mixtures and alloys


The expansivities of the components of a mixture can cancel each other, as in invar. The thermal expansivity of a mixture follows from the expansivities of the pure components and their excess expansivities.

Apparent and absolute expansion


When measuring the expansion of a liquid, the measurement must account for the expansion of the container as well. For example, consider a flask constructed with a long narrow stem, filled with enough liquid that the stem itself is partially filled. When placed in a heat bath, it will initially show the column of liquid in the stem drop, followed by an immediate increase of that column until the flask/liquid/heat-bath system has thermalized. The initial observation of the column of liquid dropping is not due to an initial contraction of the liquid, but rather to the expansion of the flask, which contacts the heat bath first. Soon after, the liquid in the flask is heated by the flask itself and begins to expand. Since liquids typically have a greater expansion than solids, the expansion of the liquid in the flask eventually exceeds that of the flask, causing the column of liquid in the flask to rise. A direct measurement of the height of the liquid column is a measurement of the apparent expansion of the liquid. The absolute expansion of the liquid is the apparent expansion corrected for the expansion of the containing vessel.[1]


Examples and applications


The expansion and contraction of materials must be considered when designing large structures, when using tape or chain to measure distances for land surveys, when designing molds for casting hot material, and in other engineering applications when large changes in dimension due to temperature are expected.

[Figure: Thermal expansion of long continuous sections of rail tracks is the driving force for rail buckling. This phenomenon resulted in 190 train derailments during 1998-2002 in the US alone.[2]]

Thermal expansion is also used in mechanical applications to fit parts over one another, e.g. a bushing can be fitted over a shaft by making its inner diameter slightly smaller than the diameter of the shaft, then heating it until it fits over the shaft, and allowing it to cool after it has been pushed over the shaft, thus achieving a 'shrink fit'. Induction shrink fitting is a common industrial method to pre-heat metal components between 150 °C and 300 °C, thereby causing them to expand and allow for the insertion or removal of another component.

There exist some alloys with a very small linear expansion coefficient, used in applications that demand very small changes in physical dimension over a range of temperatures. One of these is Invar 36, with α approximately equal to 0.6×10⁻⁶/K. These alloys are useful in aerospace applications where wide temperature swings may occur.

Pullinger's apparatus is used to determine the linear expansion of a metallic rod in the laboratory. The apparatus consists of a metal cylinder closed at both ends (called a steam jacket). It is provided with an inlet and outlet for the steam. The steam for heating the rod is supplied by a boiler which is connected by a rubber tube to the inlet. The center of the cylinder contains a hole to insert a thermometer. The rod under investigation is enclosed in a steam jacket. One of its ends is free, but the other end is pressed against a fixed screw. The position of the rod is determined by a micrometer screw gauge or spherometer.

The control of thermal expansion in brittle materials is a key concern for a wide range of reasons. For example, both glass and ceramics are brittle, and uneven temperature causes uneven expansion, which in turn causes thermal stress, and this might lead to fracture. Ceramics need to be joined or work in consort with a wide range of materials and therefore their expansion must be matched to the application. Because glazes need to be firmly attached to the underlying porcelain (or other body type), their thermal expansion must be tuned to 'fit' the body so that crazing or shivering do not occur.

[Figure: Drinking glass with fracture due to uneven thermal expansion after pouring of hot liquid into the otherwise cool glass.]

Good examples of products whose thermal expansion is the key to their success are CorningWare and the spark plug. The thermal expansion of ceramic bodies can be controlled by firing to create crystalline species that will influence the overall expansion of the material in the desired direction. In addition, or instead, the formulation of the body can employ materials delivering particles of the desired expansion to the matrix. The thermal expansion of glazes is controlled by their chemical composition and the firing schedule to which they were subjected. In most cases there are complex issues involved in controlling body and glaze expansion, so adjusting for thermal expansion must be done with an eye to other properties that will be affected; generally, trade-offs are required.

Heat-induced expansion has to be taken into account in most areas of engineering. A few examples are:
- Metal-framed windows need rubber spacers.
- Rubber tires.
- Metal hot water heating pipes should not be used in long straight lengths.
- Large structures such as railways and bridges need expansion joints in the structures to avoid sun kink.
- One of the reasons for the poor performance of cold car engines is that parts have inefficiently large spacings until the normal operating temperature is achieved.
- A gridiron pendulum uses an arrangement of different metals to maintain a more temperature-stable pendulum length.
- A power line on a hot day is droopy, but on a cold day it is tight. This is because the metals expand under heat.
- Expansion joints absorb the thermal expansion in a piping system.[3]
- Precision engineering nearly always requires the engineer to pay attention to the thermal expansion of the product. For example, when using a scanning electron microscope, even small changes in temperature, such as 1 degree, can cause a sample to change its position relative to the focus point.


Thermometers are another application of thermal expansion; most contain a liquid (usually mercury or alcohol) which is constrained to flow in only one direction (along the tube) due to changes in volume brought about by changes in temperature. A bi-metal mechanical thermometer uses a bimetallic strip and bends due to the differing thermal expansion of the two metals.

Thermal expansion coefficients for various materials


This section summarizes the coefficients for some common materials. For isotropic materials, the coefficients of linear thermal expansion α_L and volumetric thermal expansion α_V are related by α_V = 3α_L. For liquids, usually the coefficient of volumetric expansion is listed, and linear expansion is calculated here for comparison. In the table below, the range for α_L is from 10⁻⁷/K for hard solids to 10⁻³/K for organic liquids. The coefficient varies with the temperature, and some materials have a very high variation; see for example the variation vs. temperature of the volumetric coefficient for a semicrystalline polypropylene (PP) at different pressures, and the variation of the linear coefficient vs. temperature for some steel grades (from bottom to top: ferritic stainless steel, martensitic stainless steel, carbon steel, duplex stainless steel, austenitic steel). (The formula α_V ≈ 3α_L is usually used for solids.)

Volumetric thermal expansion coefficient for a semicrystalline polypropylene.

Linear thermal expansion coefficient for some steel grades.


Material | Linear coefficient α_L at 20 °C (10⁻⁶/K) | Volumetric coefficient α_V at 20 °C (10⁻⁶/K) | Notes
Aluminium | 23.1 | 69 |
Aluminium nitride | 5.3 | 4.2 |
Benzocyclobutene | 42 | 126 |
Brass | 19 | 57 |
Carbon steel | 10.8 | 32.4 |
Concrete | 12 | 36 |
Copper | 17 | 51 |
Diamond | 1 | 3 |
Ethanol | 250 | 750 |
Gallium(III) arsenide | 5.8 | 17.4 |
Gasoline | 317 | 950 |
Glass | 8.5 | 25.5 |
Glass, borosilicate | 3.3 | 9.9 |
Gold | 14 | 42 |
Indium phosphide | 4.6 | 13.8 |
Invar | 1.2 | 3.6 |
Iron | 11.8 | 33.3 |
Kapton | 20 | 60 | DuPont Kapton 200EN
Lead | 29 | 87 |
Macor | 9.3 | |
Magnesium | 26 | 78 |
Mercury | 61 | 182 |
Molybdenum | 4.8 | 14.4 |
Nickel | 13 | 39 |
Oak | 54 | | Perpendicular to the grain
Douglas-fir | 27 | 75 | radial
Douglas-fir | 45 | 75 | tangential
Douglas-fir | 3.5 | 75 | parallel to grain
Platinum | 9 | 27 |
PP | 150 | 450 |
PVC | 52 | 156 |
Quartz (fused) | 0.59 | 1.77 |
Quartz | 0.33 | 1 |
Rubber | disputed | disputed | see Talk
Sapphire | 5.3 | | Parallel to C axis, or [001]
Silicon carbide | 2.77 | 8.31 |
Silicon | 3 | 9 |
Silver | 18 | 54 |
Sitall | 0±0.15 | 0±0.45 | average for −60 °C to 60 °C
Stainless steel | 17.3 | 51.9 |
Steel | 11.0 ~ 13.0 | 33.0 ~ 39.0 | Depends on composition
Titanium | 8.6 | |
Tungsten | 4.5 | 13.5 |
Water | 69 | 207 |
YbGaGe | 0 | 0 | Refuted
Zerodur | ≈0.02 | | at 0...50 °C

References
[1] Ganot, A., Atkinson, E. (1883). Elementary Treatise on Physics Experimental and Applied for the Use of Colleges and Schools, William Wood & Co, New York, pp. 272-273.
[2] Track Buckling Research (http://www.volpe.dot.gov/infrastructure-systems-engineering/structures-and-dynamics/track-buckling-research). Volpe Center, U.S. Department of Transportation.
[3] Lateral, Angular and Combined Movements (http://www.usbellows.com/expansion-joint-catalog/lateral-angular-combined.htm). U.S. Bellows.

External links
Glass Thermal Expansion (http://glassproperties.com/expansion/ExpansionMeasurement.htm) Thermal expansion measurement, definitions, thermal expansion calculation from the glass composition
Water thermal expansion calculator (http://www.engineeringtoolbox.com/volumetric-temperature-expansion-d_315.html)
DoITPoMS Teaching and Learning Package on Thermal Expansion and the Bi-material Strip (http://www.doitpoms.ac.uk/tlplib/thermal-expansion/simulation.php)
Engineering Toolbox: List of coefficients of Linear Expansion for some common materials (http://www.engineeringtoolbox.com/linear-expansion-coefficients-d_95.html)
Article on how α is determined (http://www.leybold-didactic.com/literatur/hb/e/p2/p2121_e.pdf)
MatWeb: Free database of engineering properties for over 79,000 materials (http://www.matweb.com)
USA NIST Website: Temperature and Dimensional Measurement workshop (http://emtoolbox.nist.gov/Temperature/Slide1.asp#Slide1)
Hyperphysics: Thermal expansion (http://hyperphysics.phy-astr.gsu.edu/hbase/thermo/thexp.html)
Understanding Thermal Expansion in Ceramic Glazes (http://digitalfire.com/4sight/education/understanding_thermal_expansion_in_ceramic_glazes_198.html)


Chapter 8. Potentials
Thermodynamic potential
A thermodynamic potential is a scalar quantity used to represent the thermodynamic state of a system. The concept of thermodynamic potentials was introduced by Pierre Duhem in 1886. Josiah Willard Gibbs in his papers used the term fundamental functions. One main thermodynamic potential that has a physical interpretation is the internal energy U. It is the energy of configuration of a given system of conservative forces (that is why it is a potential) and only has meaning with respect to a defined set of references (or data). Expressions for all other thermodynamic energy potentials are derivable via Legendre transforms from an expression for U.

In thermodynamics, certain forces, such as gravity, are typically disregarded when formulating expressions for potentials. For example, while all the working fluid in a steam engine may have higher energy due to gravity while sitting on top of Mount Everest than it would at the bottom of the Mariana Trench, the gravitational potential energy term in the formula for the internal energy would usually be ignored because changes in gravitational potential within the engine during operation would be negligible.


Description and interpretation


Five common thermodynamic potentials are:[1]

Name | Symbol | Formula | Natural variables
Internal energy | U | ∫(T dS − p dV + Σi μi dNi) | S, V, {Ni}
Helmholtz free energy | F | U − TS | T, V, {Ni}
Enthalpy | H | U + pV | S, p, {Ni}
Gibbs free energy | G | U + pV − TS | T, p, {Ni}
Landau potential (grand potential) | Ω (ΦG) | U − TS − Σi μi Ni | T, V, {μi}

where T = temperature, S = entropy, p = pressure, V = volume. The Helmholtz free energy is often denoted by the symbol F, but the use of A is preferred by IUPAC.[2] Ni is the number of particles of type i in the system and μi is the chemical potential for an i-type particle. For the sake of completeness, the set of all Ni are also included as natural variables, although they are sometimes ignored.

These five common potentials are all energy potentials, but there are also entropy potentials. The thermodynamic square can be used as a tool to recall and derive some of the potentials.

Just as in mechanics, where potential energy is defined as the capacity to do work, different potentials have different meanings. Internal energy (U) is the capacity to do work plus the capacity to release heat. Gibbs energy is the capacity to do non-mechanical work. Enthalpy is the capacity to do non-mechanical work plus the capacity to release heat. Helmholtz free energy is the capacity to do mechanical work (useful work). From these definitions we can say that ΔU is the energy added to the system, ΔF is the total work done on it, ΔG is the non-mechanical work done on it, and ΔH is the sum of non-mechanical work done on the system and the heat given to it.

Thermodynamic potentials are very useful when calculating the equilibrium results of a chemical reaction, or when measuring the properties of materials in a chemical reaction. Chemical reactions usually take place under some simple constraints, such as constant pressure and temperature, or constant entropy and volume, and when this is true, there is a corresponding thermodynamic potential that comes into play. Just as in mechanics, the system will tend towards a lower value of the potential and, at equilibrium under these constraints, the potential will take on an unchanging minimum value. The thermodynamic potentials can also be used to estimate the total amount of energy available from a thermodynamic system under the appropriate constraint.

In particular (see principle of minimum energy for a derivation):[3]

When the entropy (S) and "external parameters" (e.g. volume) of a closed system are held constant, the internal energy (U) decreases and reaches a minimum value at equilibrium. This follows from the first and second laws of thermodynamics and is called the principle of minimum energy. The following three statements are directly derivable from this principle.
When the temperature (T) and external parameters of a closed system are held constant, the Helmholtz free energy (F) decreases and reaches a minimum value at equilibrium.
When the pressure (p) and external parameters of a closed system are held constant, the enthalpy (H) decreases and reaches a minimum value at equilibrium.
When the temperature (T), pressure (p) and external parameters of a closed system are held constant, the Gibbs free energy (G) decreases and reaches a minimum value at equilibrium.
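As a concrete illustration of the defining formulas in the table above, a minimal Python sketch follows (the numeric values are made up for illustration and are not from the source):

def helmholtz(U, T, S):            # F = U - TS
    return U - T * S

def enthalpy(U, p, V):             # H = U + pV
    return U + p * V

def gibbs(U, T, S, p, V):          # G = U + pV - TS
    return U + p * V - T * S

# Example with arbitrary illustrative values (SI units):
U, T, S, p, V = 1.0e3, 300.0, 2.5, 1.0e5, 1.0e-2
print(helmholtz(U, T, S))    # 250.0 J
print(enthalpy(U, p, V))     # 2000.0 J
print(gibbs(U, T, S, p, V))  # 1250.0 J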

Thermodynamic potential

234

Natural variables
The variables that are held constant in this process are termed the natural variables of that potential.[4] The natural variables are important not only for the above-mentioned reason, but also because, if a thermodynamic potential can be determined as a function of its natural variables, all of the thermodynamic properties of the system can be found by taking partial derivatives of that potential with respect to its natural variables, and this is true for no other combination of variables. Conversely, if a thermodynamic potential is not given as a function of its natural variables, it will not, in general, yield all of the thermodynamic properties of the system.

Notice that the set of natural variables for the above four potentials is formed from every combination of the T-S and p-V variables, excluding any pairs of conjugate variables. There is no reason to ignore the Ni-μi conjugate pairs, and in fact we may define four additional potentials for each species.[5] Using IUPAC notation, in which the brackets contain the natural variables (other than the main four), we have:

Formula | Natural variables
U[μj] = U − μjNj | S, V, {Ni≠j}, μj
F[μj] = U − TS − μjNj | T, V, {Ni≠j}, μj
H[μj] = U + pV − μjNj | S, p, {Ni≠j}, μj
G[μj] = U + pV − TS − μjNj | T, p, {Ni≠j}, μj

If there is only one species, then we are done. But, if there are, say, two species, then there will be additional potentials such as U[μ1, μ2] = U − μ1N1 − μ2N2 and so on. If there are D dimensions to the thermodynamic space, then there are 2^D unique thermodynamic potentials. For the simplest case, a single-phase ideal gas, there will be three dimensions, yielding eight thermodynamic potentials.

In statistical mechanics, the relationship between the Helmholtz free energy and the partition function is fundamental, and is used to calculate the thermodynamic properties of matter; see configuration integral for more details.
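The 2^D counting can be made concrete with a short Python sketch (an illustration, not from the source): starting from U, each of the D conjugate pairs may or may not be Legendre-transformed, and each choice of subset yields one potential. For D = 3, with pairs (T, S), (−p, V) and (μ, N), the eight potentials of a single-component gas are enumerated as follows:

from itertools import combinations

# (intensive, extensive) conjugate pairs; the "intensive" string is the
# coefficient that multiplies the extensive variable in the transform.
pairs = [("T", "S"), ("-p", "V"), ("mu", "N")]

for r in range(len(pairs) + 1):
    for subset in combinations(pairs, r):
        # Legendre-transform U with respect to each extensive variable in `subset`
        terms = ["U"] + ["- (%s)*%s" % (i, x) for i, x in subset]
        natural = [x for i, x in pairs if (i, x) not in subset] + [i for i, x in subset]
        print(" ".join(terms), "  natural variables:", ", ".join(natural))

# Prints 2**3 = 8 potentials, e.g. "U - (T)*S" (Helmholtz free energy) and
# "U - (T)*S - (mu)*N" (grand potential); "U - (-p)*V" is U + pV (enthalpy).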

The fundamental equations


The definitions of the thermodynamic potentials may be differentiated and, along with the first and second laws of thermodynamics, a set of differential equations known as the fundamental equations follow.[6] (Actually they are all expressions of the same fundamental thermodynamic relation, but expressed in different variables.) By the first law of thermodynamics, any differential change in the internal energy U of a system can be written as the heat flowing into the system minus the work done by the system on the environment, along with any change due to the addition of new particles to the system:

dU = δQ − δW + Σi μi dNi

where δQ is the infinitesimal heat flow into the system, δW is the infinitesimal work done by the system, μi is the chemical potential of particle type i and Ni is the number of type-i particles. (Note that neither δQ nor δW is an exact differential; small changes in these variables are therefore represented with δ rather than d.) By the second law of thermodynamics, we can express the internal energy change in terms of state functions and their differentials. In the case of reversible changes we have:

δQ = T dS
δW = p dV

where T is temperature, S is entropy, p is pressure, and V is volume, and the equality holds for reversible processes. This leads to the standard differential form of the internal energy in the case of a quasistatic reversible change:

dU = T dS − p dV + Σi μi dNi


Since U, S and V are thermodynamic functions of state, the above relation also holds for arbitrary non-reversible changes. If the system has more external variables than just the volume that can change, the fundamental thermodynamic relation generalizes to:

dU = T dS + Σj Xj dxj + Σi μi dNi

Here the Xj are the generalized forces corresponding to the external variables xj. Applying Legendre transforms repeatedly, the following differential relations hold for the four potentials:

dU = T dS − p dV + Σi μi dNi
dF = −S dT − p dV + Σi μi dNi
dH = T dS + V dp + Σi μi dNi
dG = −S dT + V dp + Σi μi dNi

Note that the infinitesimals on the right-hand side of each of the above equations are of the natural variables of the potential on the left-hand side. Similar equations can be developed for all of the other thermodynamic potentials of the system. There will be one fundamental equation for each thermodynamic potential, resulting in a total of 2^D fundamental equations. The differences between the four thermodynamic potentials can be summarized as follows:

d(pV) = dH − dU = dG − dF
d(TS) = dU − dF = dH − dG

The equations of state


We can use the above equations to derive differential definitions of some thermodynamic parameters. If we define Φ to stand for any of the thermodynamic potentials, then the above equations are of the form:

dΦ = Σi xi dyi

where the xi and yi are conjugate pairs, and the yi are the natural variables of the potential Φ. From the chain rule it follows that:

xj = (∂Φ/∂yj)_{yi≠j}

where {yi≠j} is the set of all natural variables of Φ except yj. This yields expressions for various thermodynamic parameters in terms of the derivatives of the potentials with respect to their natural variables. These equations are known as equations of state, since they specify parameters of the thermodynamic state.[7] If we restrict ourselves to the potentials U, F, H and G, then we have:

+T = (∂U/∂S)_{V, {Ni}} = (∂H/∂S)_{p, {Ni}}
−p = (∂U/∂V)_{S, {Ni}} = (∂F/∂V)_{T, {Ni}}
+V = (∂H/∂p)_{S, {Ni}} = (∂G/∂p)_{T, {Ni}}
−S = (∂G/∂T)_{p, {Ni}} = (∂F/∂T)_{V, {Ni}}
μi = (∂Φ/∂Ni)

where, in the last equation, Φ is any of the thermodynamic potentials U, F, H or G, and the derivative is taken with that potential's remaining natural variables, excluding Ni, held constant. If we use all of the potentials, then we will have more equations of state, such as

−Nj = (∂U[μj]/∂μj)_{S, V, {Ni≠j}}

and so on. In all, there will be D equations for each potential, resulting in a total of D·2^D equations of state. If the D equations of state for a particular potential are known, then the fundamental equation for that potential can be determined. This means that all thermodynamic information about the system will be known, and that the fundamental equations for any other potential can be found, along with the corresponding equations of state.
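The statement that a potential expressed in its natural variables contains all thermodynamic information can be checked symbolically. The following sketch assumes the SymPy library; the Helmholtz free energy used is the standard monatomic-ideal-gas expression, brought in here as an illustration rather than taken from the source. Two equations of state are obtained from F(T, V, N) by differentiation:

import sympy as sp

T, V, N, k, m, h = sp.symbols('T V N k m h', positive=True)
lam = h / sp.sqrt(2 * sp.pi * m * k * T)          # thermal de Broglie wavelength
F = -N * k * T * (sp.log(V / (N * lam**3)) + 1)   # Helmholtz free energy, monatomic ideal gas

p = -sp.diff(F, V)   # equation of state: pressure, -(dF/dV) at constant T, N
S = -sp.diff(F, T)   # equation of state: entropy, -(dF/dT) at constant V, N

print(sp.simplify(p))   # -> N*k*T/V, i.e. pV = NkT, the ideal gas law
print(sp.simplify(S))   # -> the Sackur-Tetrode entropy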

The Maxwell relations


Again, define xi and yi to be conjugate pairs, and the yi to be the natural variables of some potential Φ. We may take the "cross differentials" of the state equations, which obey the following relationship:

∂/∂yj (∂Φ/∂yk) = ∂/∂yk (∂Φ/∂yj)

From these we get the Maxwell relations.[8] There will be D(D − 1)/2 of them for each potential, giving a total of 2^D·D(D − 1)/2 equations in all. If we restrict ourselves to the potentials U, F, H and G:

(∂T/∂V)_S = −(∂p/∂S)_V
(∂T/∂p)_S = +(∂V/∂S)_p
(∂S/∂V)_T = +(∂p/∂T)_V
(∂S/∂p)_T = −(∂V/∂T)_p
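These relations can be verified symbolically for any concrete free energy. Continuing with the ideal-gas F(T, V, N) from the sketch above (again assuming SymPy; an illustration, not part of the source text), the third relation reads (∂S/∂V)_T = (∂p/∂T)_V:

import sympy as sp

T, V, N, k, m, h = sp.symbols('T V N k m h', positive=True)
lam = h / sp.sqrt(2 * sp.pi * m * k * T)
F = -N * k * T * (sp.log(V / (N * lam**3)) + 1)

S = -sp.diff(F, T)
p = -sp.diff(F, V)

# Maxwell relation from F: (dS/dV)_T == (dp/dT)_V, by symmetry of second derivatives
print(sp.simplify(sp.diff(S, V) - sp.diff(p, T)))   # -> 0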

Using the equations of state involving the chemical potential, we get equations such as:

(∂T/∂Nj)_{S, V, {Ni≠j}} = (∂μj/∂S)_{V, {Ni}}

and using the other potentials we can get equations such as:

(∂Nj/∂V)_{S, μj, {Ni≠j}} = (∂p/∂μj)_{S, V, {Ni≠j}}


Euler integrals
Again, define xi and yi to be conjugate pairs, and the yi to be the natural variables of the internal energy. Since all of the natural variables of the internal energy U are extensive quantities,

U(αS, αV, α{Ni}) = α U(S, V, {Ni})

it follows from Euler's homogeneous function theorem that the internal energy can be written as:

U(S, V, {Ni}) = (∂U/∂S) S + (∂U/∂V) V + Σi (∂U/∂Ni) Ni

From the equations of state, we then have:

U = T S − p V + Σi μi Ni

Substituting into the expressions for the other main potentials, we have:

F = −p V + Σi μi Ni
H = T S + Σi μi Ni
G = Σi μi Ni

As in the above sections, this process can be carried out on all of the other thermodynamic potentials. Note that the Euler integrals are sometimes also referred to as fundamental equations.

The Gibbs-Duhem relation


Deriving the Gibbs-Duhem equation from basic thermodynamic state equations is straightforward.[9][10] Equating any thermodynamic potential definition with its Euler integral expression yields:

U = T S − p V + Σi μi Ni

Differentiating, and using the second law:

dU = T dS − p dV + Σi μi dNi

yields:

0 = S dT − V dp + Σi Ni dμi

which is the Gibbs-Duhem relation. The Gibbs-Duhem equation is a relationship among the intensive parameters of the system. It follows that, for a simple system with I components, there will be I + 1 independent parameters, or degrees of freedom. For example, a simple system with a single component will have two degrees of freedom, and may be specified by only two parameters, such as pressure and volume. The law is named after Josiah Willard Gibbs and Pierre Duhem.


Chemical reactions
Changes in these quantities are useful for assessing the degree to which a chemical reaction will proceed. The relevant quantity depends on the reaction conditions, as shown in the following table. Δ denotes the change in the potential, and at equilibrium the change will be zero.

           | Constant V | Constant p
Constant S | ΔU         | ΔH
Constant T | ΔF         | ΔG

Most commonly one considers reactions at constant p and T, so the Gibbs free energy is the most useful potential in studies of chemical reactions.

Notes
[1] Alberty (2001), p. 1353
[2] Alberty (2001), p. 1376
[3] Callen (1985), p. 153
[4] Alberty (2001), p. 1352
[5] Alberty (2001), p. 1355
[6] Alberty (2001), p. 1354
[7] Callen (1985), p. 37
[8] Callen (1985), p. 181
[9] Moran & Shapiro, p. 538
[10] Callen (1985), p. 60

References
Alberty, R. A. (2001). "Use of Legendre transforms in chemical thermodynamics" (http://www.iupac.org/publications/pac/2001/pdf/7308x1349.pdf) (PDF). Pure Appl. Chem. 73 (8): 1349–1380. doi: 10.1351/pac200173081349 (http://dx.doi.org/10.1351/pac200173081349).
Callen, Herbert B. (1985). Thermodynamics and an Introduction to Thermostatistics (2nd ed.). New York: John Wiley & Sons. ISBN 0-471-86256-8.
Moran, Michael J.; Shapiro, Howard N. (1996). Fundamentals of Engineering Thermodynamics (3rd ed.). New York; Toronto: J. Wiley & Sons. ISBN 0-471-07681-3.

Further reading
McGraw Hill Encyclopaedia of Physics (2nd Edition), C. B. Parker, 1994, ISBN 0-07-051400-3
Thermodynamics, From Concepts to Applications (2nd Edition), A. Shavit, C. Gutfinger, CRC Press (Taylor and Francis Group, USA), 2009, ISBN 9781420073683
Chemical Thermodynamics, D. J. G. Ives, University Chemistry, Macdonald Technical and Scientific, 1971, ISBN 0-356-03736-3
Elements of Statistical Thermodynamics (2nd Edition), L. K. Nash, Principles of Chemistry, Addison-Wesley, 1974, ISBN 0-201-05229-6
Statistical Physics (2nd Edition), F. Mandl, Manchester Physics, John Wiley & Sons, 2008, ISBN 9780471566588


External links
Thermodynamic Potentials (http://hyperphysics.phy-astr.gsu.edu/hbase/thermo/thepot.html) - Georgia State University
Chemical Potential Energy: The 'Characteristic' vs the Concentration-Dependent Kind (http://arxiv.org/pdf/physics/0004055.pdf)

Enthalpy

Enthalpy is a measure of the total energy of a thermodynamic system. It includes the system's internal energy and the product of its pressure and volume, the energy required to "make room for it" by displacing its environment. Enthalpy is a thermodynamic potential, a state function, and an extensive quantity. The unit of measurement for enthalpy in the International System of Units (SI) is the joule, but other historical, conventional units are still in use, such as the British thermal unit and the calorie.

The enthalpy is the preferred expression of system energy changes in many chemical, biological, and physical measurements, because it simplifies certain descriptions of energy transfer. Enthalpy change accounts for energy transferred to the environment at constant pressure through expansion or heating.

The total enthalpy, H, of a system cannot be measured directly. The same situation exists in classical mechanics: only a change or difference in energy carries physical meaning. Enthalpy itself is a thermodynamic potential, so in order to measure the enthalpy of a system, we must refer to a defined reference point; therefore what we measure is the change in enthalpy, ΔH. The change ΔH is positive in endothermic reactions, and negative in heat-releasing exothermic processes. ΔH of a system is equal to the sum of non-mechanical work done on it and the heat supplied to it. For processes under constant pressure, ΔH is equal to the change in the internal energy of the system, plus the work that the system has done on its surroundings.[1] This means that the change in enthalpy under such conditions is the heat absorbed (or released) by the material through a chemical reaction or by external heat transfer.

Enthalpies for chemical substances at constant pressure assume standard state: most commonly 1 bar pressure. Standard state does not, strictly speaking, specify a temperature (see standard state), but expressions for enthalpy generally reference the standard heat of formation at 25 °C. The enthalpy of ideal gases and of incompressible solids and liquids does not depend on pressure, unlike entropy and Gibbs energy. Real materials at common temperatures and pressures usually closely approximate this behavior, which greatly simplifies enthalpy calculation and use in practical designs and analyses.

Origins
The word enthalpy is based on the Greek noun enthalpos (ἔνθαλπος), which means heating. It comes from the Classical Greek prefix ἐν-, en-, meaning "to put into", and the verb θάλπειν, thalpein, meaning "to heat". The word enthalpy is often incorrectly attributed to Benoît Paul Émile Clapeyron and Rudolf Clausius through the 1850 publication of their Clausius-Clapeyron relation. This misconception was popularized by the 1927 publication of The Mollier Steam Tables and Diagrams. However, neither the concept, the word, nor the symbol for enthalpy existed until well after Clapeyron's death.

The earliest writings to contain the concept of enthalpy did not appear until 1875, when Josiah Willard Gibbs introduced "a heat function for constant pressure". However, Gibbs did not use the word "enthalpy" in his writings.[2] The actual word first appears in the scientific literature in a 1909 publication by J. P. Dalton. According to that publication, Heike Kamerlingh Onnes (1853–1926) actually coined the word. Over the years, many different symbols were used to denote enthalpy. It was not until 1922 that Alfred W. Porter proposed the symbol "H" as the accepted standard, thus finalizing the terminology still in use today.

Formal definition
The enthalpy of a homogeneous system is defined as:[3]

H = U + pV

where H is the enthalpy of the system, U is the internal energy of the system, p is the pressure of the system, and V is the volume of the system.

The enthalpy is an extensive property. This means that, for homogeneous systems, the enthalpy is proportional to the size of the system. It is convenient to introduce the specific enthalpy h = H/m, where m is the mass of the system, or the molar enthalpy Hm = H/n, where n is the number of moles (h and Hm are intensive properties). For inhomogeneous systems the enthalpy is the sum of the enthalpies of the composing subsystems:

H = Σk Hk

where the label k refers to the various subsystems. In case of continuously varying p, T, and/or composition, the summation becomes an integral:

H = ∫ ρh dV

where ρ is the density. The enthalpy H(S, p) of homogeneous systems can be derived as a characteristic function of the entropy S and the pressure p as follows: we start from the first law of thermodynamics for closed systems for an infinitesimal process:

dU = δQ − δW

Here, δQ is a small amount of heat added to the system and δW a small amount of work performed by the system. In a homogeneous system only reversible processes can take place, so the second law of thermodynamics gives δQ = T dS, with T the absolute temperature of the system. Furthermore, if only pV work is done, δW = p dV. As a result,

dU = T dS − p dV


Adding d(pV) to both sides of this expression gives

dU + d(pV) = T dS − p dV + d(pV)

or

d(U + pV) = T dS + V dp

So

dH(S, p) = T dS + V dp

Other expressions
The expression of dH in terms of entropy and pressure may be unfamiliar to many readers. However, there are expressions in terms of more familiar variables, such as temperature and pressure:[4][5]

dH = Cp dT + V(1 − αT) dp

Here Cp is the heat capacity at constant pressure and α is the coefficient of (cubic) thermal expansion:

α = (1/V)(∂V/∂T)p

With this expression one can, in principle, determine the enthalpy if Cp and V are known as functions of p and T. Notice that for an ideal gas, αT = 1,[6] so that:

dH = Cp dT
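For an ideal gas with a roughly constant Cp, this differential integrates directly to Δh = cp·ΔT per unit mass. A minimal numeric sketch in Python (the cp value for air is a standard textbook figure used here as an assumption, not a number from this text):

cp_air = 1005.0          # J/(kg*K), approximate specific heat of air at constant pressure
T1, T2 = 300.0, 350.0    # K

dh = cp_air * (T2 - T1)  # specific enthalpy change, J/kg
print(dh)                # 50250.0 J/kg, i.e. about 50 kJ/kg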

In a more general form, the first law describes the internal energy with additional terms involving the chemical potential and the number of particles of various types. The differential statement for dH then becomes:

dH = T dS + V dp + Σi μi dNi

where μi is the chemical potential per particle for an i-type particle, and Ni is the number of such particles. The last term can also be written as μi dni (with dni the number of moles of component i added to the system and, in this case, μi the molar chemical potential) or as μi dmi (with dmi the mass of component i added to the system and, in this case, μi the specific chemical potential).

Enthalpy versus internal energy


The U term can be interpreted as the energy required to create the system, and the pV term as the energy that would be required to "make room" for the system if the pressure of the environment remained constant. When a system, for example, n moles of a gas of volume V at pressure p and temperature T, is created or brought to its present state from absolute zero, energy must be supplied equal to its internal energy U plus pV, where pV is the work done in pushing against the ambient (atmospheric) pressure. In basic physics and statistical mechanics it may be more interesting to study the internal properties of the system and therefore the internal energy is used.[7][8] In basic chemistry, experiments are often conducted at atmospheric pressure and H is therefore more useful for reaction energy calculations. Furthermore the enthalpy is the workhorse of engineering thermodynamics as we will see later.


Relationship to heat
In order to discuss the relation between the enthalpy increase and heat supply, we return to the first law for closed systems: dU = δQ − δW. We apply it to the special case in which the pressure at the surface is uniform. In this case the work term can be split into two contributions: the so-called pV work, given by p dV (where here p is the pressure at the surface and dV is the increase of the volume of the system), and other types of work δW′, such as by a shaft or by electromagnetic interaction. So we write δW = p dV + δW′. In this case the first law reads

dU = δQ − p dV − δW′

or

dH = δQ + V dp − δW′

From this relation we see that the increase in enthalpy of a system is equal to the added heat,

dH = δQ,

provided that the system is under constant pressure (dp = 0) and that the only work done by the system is expansion work (δW′ = 0).

Applications
In thermodynamics, one can calculate enthalpy by determining the requirements for creating a system from "nothingness"; the mechanical work required, pV, differs based upon the constancy of conditions present at the creation of the thermodynamic system. Internal energy, U, must be supplied to remove particles from the surroundings in order to make space for the creation of the system, provided that environmental variables, such as pressure (p), remain constant. This internal energy also includes the energy required for activation and the breaking of bonded compounds into gaseous species. This process is calculated within enthalpy calculations as U + pV, to label the amount of energy or work required to "set aside space for" and "create" the system, describing the work done by both the reaction or formation of systems and the surroundings. For systems at constant pressure, the change in enthalpy is the heat received by the system. Therefore, the change in enthalpy can be devised or represented without the need for compressive or expansive mechanics; for a simple system, with a constant number of particles, the difference in enthalpy is the maximum amount of thermal energy derivable from a thermodynamic process in which the pressure is held constant. The term pV is the work required to displace the surrounding atmosphere in order to vacate the space to be occupied by the system.

Heat of reaction
The total enthalpy of a system cannot be measured directly; the enthalpy change of a system is measured instead. Enthalpy change is defined by the following equation:

ΔH = Hf − Hi

where
ΔH is the enthalpy change,
Hf is the final enthalpy of the system (in a chemical reaction, the enthalpy of the products),
Hi is the initial enthalpy of the system (in a chemical reaction, the enthalpy of the reactants).

For an exothermic reaction at constant pressure, the system's change in enthalpy equals the energy released in the reaction, including the energy retained in the system and lost through expansion against its surroundings. In a similar manner, for an endothermic reaction, the system's change in enthalpy is equal to the energy absorbed in the reaction, including the energy lost by the system and gained from compression from its surroundings. A relatively easy way to determine whether a reaction is exothermic or endothermic is to determine the sign of ΔH. If ΔH is positive, the reaction is endothermic: heat is absorbed by the system because the products of the reaction have a greater enthalpy than the reactants. On the other hand, if ΔH is negative, the reaction is exothermic: the overall decrease in enthalpy is achieved by the generation of heat.

Although enthalpy is commonly used in engineering and science, it is impossible to measure directly, as enthalpy has no datum (reference point). Therefore enthalpy can only accurately be used in a closed system. However, few real-world applications exist in closed isolation, and it is for this reason that two or more closed systems cannot correctly be compared using enthalpy as a basis.
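Because enthalpy is a state function, reaction enthalpies can be computed from tabulated standard enthalpies of formation, ΔHrxn = Σ ΔHf(products) − Σ ΔHf(reactants). A minimal Python sketch (the formation enthalpies are standard literature values quoted from memory, an assumption rather than data from this text), for the combustion CH4 + 2 O2 → CO2 + 2 H2O(l):

# standard enthalpies of formation at 25 °C, kJ/mol (textbook values, assumed)
dHf = {"CH4": -74.8, "O2": 0.0, "CO2": -393.5, "H2O(l)": -285.8}

reactants = {"CH4": 1, "O2": 2}
products  = {"CO2": 1, "H2O(l)": 2}

dH_rxn = (sum(n * dHf[s] for s, n in products.items())
          - sum(n * dHf[s] for s, n in reactants.items()))
print(dH_rxn)   # about -890 kJ/mol: negative, so strongly exothermic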


Specific enthalpy
As noted before, the specific enthalpy of a uniform system is defined as h = H/m, where m is the mass of the system. The SI unit for specific enthalpy is joule per kilogram. It can be expressed in other specific quantities as h = u + pv, where u is the specific internal energy, p is the pressure, and v is the specific volume, which is equal to 1/ρ, where ρ is the density.

Enthalpy changes
An enthalpy change describes the change in enthalpy observed in the constituents of a thermodynamic system when undergoing a transformation or chemical reaction. It is the difference between the enthalpy after the process has completed, i.e. the enthalpy of the products, and the initial enthalpy of the system, i.e. of the reactants. These processes are reversible and the enthalpy for the reverse process is the negative value of the forward change.

A common standard enthalpy change is the enthalpy of formation, which has been determined for a large number of substances. Enthalpy changes are routinely measured and compiled in chemical and physical reference works, such as the CRC Handbook of Chemistry and Physics. The following is a selection of enthalpy changes commonly recognized in thermodynamics. When used in these recognized terms the qualifier change is usually dropped and the property is simply termed enthalpy of 'process'. Since these properties are often used as reference values, it is very common to quote them for a standardized set of environmental parameters, or standard conditions, including:

A temperature of 25 °C or 298 K,
A pressure of one atmosphere (1 atm or 101.325 kPa),
A concentration of 1.0 M when the element or compound is present in solution,
Elements or compounds in their normal physical states, i.e. standard state.

For such standardized values the name of the enthalpy is commonly prefixed with the term standard, e.g. standard enthalpy of formation.

Chemical properties:
Enthalpy of reaction, defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of substance reacts completely.
Enthalpy of formation, defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of a compound is formed from its elementary antecedents.
Enthalpy of combustion, defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of a substance burns completely with oxygen.
Enthalpy of hydrogenation, defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of an unsaturated compound reacts completely with an excess of hydrogen to form a saturated compound.
Enthalpy of atomization, defined as the enthalpy change required to atomize one mole of compound completely.
Enthalpy of neutralization, defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of water is formed when an acid and a base react.
Standard enthalpy of solution, defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of a solute is dissolved completely in an excess of solvent, so that the solution is at infinite dilution.
Standard enthalpy of denaturation (biochemistry), defined as the enthalpy change required to denature one mole of compound.
Enthalpy of hydration, defined as the enthalpy change observed when one mole of gaseous ions is completely dissolved in water, forming one mole of aqueous ions.

Physical properties:
Enthalpy of fusion, defined as the enthalpy change required to completely change the state of one mole of substance between solid and liquid states.
Enthalpy of vaporization, defined as the enthalpy change required to completely change the state of one mole of substance between liquid and gaseous states.
Enthalpy of sublimation, defined as the enthalpy change required to completely change the state of one mole of substance between solid and gaseous states.
Lattice enthalpy, defined as the energy required to separate one mole of an ionic compound into separated gaseous ions an infinite distance apart (meaning no force of attraction).


Open systems
In thermodynamic open systems, matter may flow in and out of the system boundaries. The first law of thermodynamics for open systems states: the increase in the internal energy of a system is equal to the amount of energy added to the system by matter flowing in and by heating, minus the amount lost by matter flowing out and in the form of work done by the system. The first law for open systems is given by:

dU = dUin − dUout + δQ − δW

where dUin is the average internal energy entering the system and dUout is the average internal energy leaving the system.

The region of space enclosed by the open system's boundaries is usually called a control volume, and it may or may not correspond to physical walls. If we choose the shape of the control volume such that all flow in or out occurs perpendicular to its surface, then the flow of matter into the system performs work as if it were a piston of fluid pushing mass into the system, and the system performs work on the flow of matter out as if it were driving a piston of fluid. There are then two types of work performed: flow work described above, which is performed on the fluid (this is also often called pV work), and shaft work, which may be performed on some mechanical device. These two types of work are expressed in the equation:

δW = d(pout·Vout) − d(pin·Vin) + δWshaft

Fig.1 During steady, continuous operation, an energy balance applied to an open system equates shaft work performed by the system to heat added plus net enthalpy added

Substitution into the equation above for the control volume (cv) yields:

dUcv = dUin + d(pin·Vin) − dUout − d(pout·Vout) + δQ − δWshaft

The definition of enthalpy, H = U + pV, permits us to use this thermodynamic potential to account for both internal energy and pV work in fluids for open systems:

dUcv = dHin − dHout + δQ − δWshaft

This expression is described by Fig.1. If we also allow the system boundary to move (e.g. due to moving pistons), we get a rather general form of the first law for open systems.[9] In terms of time derivatives it reads

dU/dt = Σk (dQ/dt)k + Σk (dH/dt)k − Σk pk (dVk/dt) − P

where the Σ symbols represent algebraic sums and the indices k refer to the various places where heat is supplied, matter flows into the system, and boundaries are moving. The (dH/dt)k terms represent enthalpy flows, which can be written as

(dH/dt)k = hk·(dm/dt)k = Hm,k·(dn/dt)k

with (dm/dt)k the mass flow and (dn/dt)k the molar flow at position k, respectively. The term dVk/dt represents the rate of change of the system volume at position k, which results in pV power done by the system. The parameter P represents all other forms of power done by the system, such as shaft power, but it can also be, e.g., electric power produced by an electrical power plant.

Note that the previous expression holds true only if the kinetic energy flow rate is conserved between system inlet and outlet. Otherwise, it has to be included in the enthalpy balance. During steady-state operation of a device (see turbine, pump, and engine), the average dU/dt may be set equal to zero. This yields a useful expression for the average power generation for these devices in the absence of chemical reactions:

P = Σk ⟨(dQ/dt)k⟩ + Σk ⟨(dH/dt)k⟩

where the angle brackets denote time averages. The technical importance of the enthalpy is directly related to its presence in the first law for open systems, as formulated above.

Diagrams
Nowadays the enthalpy values of important substances can be obtained via commercial software. Practically all relevant material properties can be obtained either in tabular or in graphical form. There are many types of diagrams, such as h-T diagrams, which give the specific enthalpy as a function of temperature for various pressures, and h-p diagrams, which give h as a function of p for various T. One of the most common diagrams is the temperature-entropy diagram (T-s diagram). An example is Fig.2, which is the T-s diagram of nitrogen.[10] It gives the melting curve and saturated liquid and vapor values together with isobars and isenthalps. These diagrams are powerful tools in the hands of the thermal engineer.

Fig.2 T-s diagram of nitrogen. The red curve at the left is the melting curve. The red dome represents the two-phase region with the low-entropy side the saturated liquid and the high-entropy side the saturated gas. The black curves give the T-s relation along isobars. The pressures are indicated in bar. The blue curves are isenthalps (curves of constant enthalpy). The values are indicated in blue in kJ/kg. The specific points a, b, etc., are treated in the main text.


Some basic applications


The points a through h in Fig.2 play a role in the discussion in this section.

a: T = 300 K, p = 1 bar, s = 6.85 kJ/(kg·K), h = 461 kJ/kg;
b: T = 380 K, p = 2 bar, s = 6.85 kJ/(kg·K), h = 530 kJ/kg;
c: T = 300 K, p = 200 bar, s = 5.16 kJ/(kg·K), h = 430 kJ/kg;
d: T = 270 K, p = 1 bar, s = 6.79 kJ/(kg·K), h = 430 kJ/kg;
e: T = 108 K, p = 13 bar, s = 3.55 kJ/(kg·K), h = 100 kJ/kg (saturated liquid at 13 bar);
f: T = 77.2 K, p = 1 bar, s = 3.75 kJ/(kg·K), h = 100 kJ/kg;
g: T = 77.2 K, p = 1 bar, s = 2.83 kJ/(kg·K), h = 28 kJ/kg (saturated liquid at 1 bar);
h: T = 77.2 K, p = 1 bar, s = 5.41 kJ/(kg·K), h = 230 kJ/kg (saturated gas at 1 bar).

Fig.3 Two open systems in the steady state. Fluid enters the system (dotted rectangle) at point 1 and leaves it at point 2. The mass flow is ṁ. a: schematic diagram of the throttling process. b: schematic diagram of a compressor. A power P is applied and a heat flow Q̇ is released to the surroundings at ambient temperature Ta.

Throttling
One of the simplest applications of the concept of enthalpy is the so-called throttling process, also known as Joule-Thomson expansion. It concerns a steady adiabatic flow of a fluid through a flow resistance (valve, porous plug, or any other type of flow resistance), as shown in Fig.3a. This process is very important, since it is at the heart of domestic refrigerators, where it is responsible for the temperature drop between ambient temperature and the interior of the fridge. It is also the final stage in many types of liquefiers. In the first law for open systems, applied to the system in Fig.3a, all terms are zero except the terms for the enthalpy flow. Hence

0 = ṁh1 − ṁh2.

Since the mass flow is constant, the specific enthalpies at the two sides of the flow resistance are the same:

h1 = h2.

The consequences of this relation can be demonstrated using Fig.2. Point c in Fig.2 is at 200 bar and room temperature (300 K). A Joule-Thomson expansion from 200 to 1 bar follows a curve (not shown in Fig.2) between the 400 and 450 kJ/kg isenthalps and ends in point d, which is at a temperature of about 270 K. Hence the expansion from 200 bar to 1 bar cools nitrogen from 300 K to 270 K. In the valve there is a lot of friction and a lot of entropy is produced, but still the final temperature is below the starting value!

Point e is chosen so that it is on the saturated liquid line with h = 100 kJ/kg. It corresponds roughly with p = 13 bar and T = 108 K. Throttling from this point to a pressure of 1 bar ends in the two-phase region (point f). This means that a mixture of gas and liquid leaves the throttling valve. Since the enthalpy is an extensive parameter, the enthalpy in f (hf) is equal to the enthalpy in g (hg) multiplied by the liquid fraction in f (xf) plus the enthalpy in h (hh) multiplied by the gas fraction in f (1 − xf). So

hf = xf·hg + (1 − xf)·hh


With numbers: 100 = xf·28 + (1 − xf)·230, so xf = 0.64. This means that the mass fraction of the liquid in the liquid-gas mixture that leaves the throttling valve is 64%.
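The liquid fraction follows from the lever rule applied to enthalpy; a minimal Python sketch of the same computation (the variable names are illustrative):

def liquid_fraction(h_mix, h_liq, h_gas):
    """Lever rule: solve h_mix = x*h_liq + (1 - x)*h_gas for the liquid fraction x."""
    return (h_gas - h_mix) / (h_gas - h_liq)

# values read from the T-s diagram of nitrogen (points f, g, h above), in kJ/kg
print(liquid_fraction(100.0, 28.0, 230.0))   # 0.6435..., i.e. about 64% liquid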

Compressors
Fig.3b is a schematic drawing of a compressor. A power P is applied, e.g. as electrical power. If the compression is adiabatic, the gas temperature goes up. In the reversible case it would be at constant entropy, which corresponds with a vertical line in Fig.2. For example, compressing nitrogen from 1 bar (point a) to 2 bar (point b) would result in a temperature increase from 300 K to 380 K. In order to let the compressed gas exit at ambient temperature Ta, heat exchange, e.g. by cooling water, is necessary. In the ideal case the compression is isothermal. The average heat flow to the surroundings is Q̇. Since the system is in the steady state, the first law gives

0 = −Q̇ + ṁh1 − ṁh2 + P.

The minimum power needed for the compression is realized if the compression is reversible. In that case the second law of thermodynamics for open systems gives

0 = −Q̇/Ta + ṁs1 − ṁs2.

Eliminating Q̇ gives for the minimum power

Pmin/ṁ = h2 − h1 − Ta(s2 − s1).

For example, compressing 1 kg of nitrogen from 1 bar to 200 bar costs at least (hc − ha) − Ta(sc − sa). With the data obtained from Fig.2, we find a value of (430 − 461) − 300×(5.16 − 6.85) = 476 kJ/kg. The relation for the power can be further simplified by writing it as

Pmin/ṁ = ∫₁² (dh − Ta ds).

With dh = T ds + v dp, this results in the final relation

Pmin/ṁ = ∫₁² [v dp + (T − Ta) ds].
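The minimum specific compression work quoted above can be checked with a few lines of Python (the enthalpy and entropy values are the diagram readings for points a and c listed earlier):

h = {"a": 461.0, "c": 430.0}   # kJ/kg, from the T-s diagram of nitrogen
s = {"a": 6.85, "c": 5.16}     # kJ/(kg*K)
Ta = 300.0                     # K, ambient temperature

w_min = (h["c"] - h["a"]) - Ta * (s["c"] - s["a"])
print(w_min)   # about 476 kJ/kg of minimum (reversible, isothermal) compression work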

Notes
[1] G. J. Van Wylen and R. E. Sonntag (1985), Fundamentals of Classical Thermodynamics, Section 5.5 (3rd edition), John Wiley & Sons Inc., New York, NY. ISBN 0-471-82933-1
[2] The Collected Works of J. Willard Gibbs, Vol. I, do not contain reference to the word enthalpy, but rather reference the heat function for constant pressure.
[3] E. A. Guggenheim, Thermodynamics, North-Holland Publishing Company, Amsterdam, 1959
[4] Guggenheim, p. 88
[5] M. J. Moran and H. N. Shapiro, Fundamentals of Engineering Thermodynamics, 5th edition (2006), John Wiley & Sons, Inc., p. 511
[6]
[7] F. Reif, Statistical Physics, McGraw-Hill, London (1967)
[8] C. Kittel and H. Kroemer, Thermal Physics, Freeman, London (1980)
[9] M. J. Moran and H. N. Shapiro, Fundamentals of Engineering Thermodynamics, 5th edition (2006), John Wiley & Sons, Inc., p. 129
[10] Figure composed with data obtained with RefProp, NIST Standard Reference Database 23


Bibliography
Haase, R. In Physical Chemistry: An Advanced Treatise; Jost, W., Ed.; Academic: New York, 1971; p. 29.
Gibbs, J. W. In The Collected Works of J. Willard Gibbs, Vol. I; Yale University Press: New Haven, CT, reprinted 1948; p. 88.
Laidler, K. The World of Physical Chemistry; Oxford University Press: Oxford, 1995; p. 110.
Kittel, C.; Kroemer, H. In Thermal Physics; S. R. Furphy and Company, New York, 1980; p. 246.
DeHoff, R. Thermodynamics in Materials Science, 2nd ed.; Taylor and Francis Group, New York, 2006.

External links
Enthalpy (http://scienceworld.wolfram.com/physics/Enthalpy.html) - Eric Weisstein's World of Physics
Enthalpy (http://hyperphysics.phy-astr.gsu.edu/hbase/thermo/firlaw.html) - Georgia State University
Enthalpy example calculations (http://www.chem.tamu.edu/class/majors/tutorialnotefiles/enthalpy.htm) - Texas A&M University Chemistry Department


Internal energy
Common symbol(s): U
SI unit: joule (J)
In SI base quantities: kg·m²/s²


In thermodynamics, the internal energy is the total energy contained by a thermodynamic system. It is the energy needed to create the system, but it excludes the energy required to displace the system's surroundings, any energy associated with the system's motion as a whole, and any energy due to external force fields. Internal energy has two major components: kinetic energy and potential energy. The kinetic energy is due to the motion of the system's particles (translations, rotations, vibrations), and the potential energy is associated with the static rest-mass energy of the constituents of matter, the static electric energy of atoms within molecules or crystals, and the static energy of chemical bonds. The internal energy of a system can be changed by heating the system or by doing work on it; the first law of thermodynamics states that the increase in internal energy is equal to the heat added plus the work done on the system by its surroundings. If the system is isolated from its surroundings, its internal energy cannot change.

For practical considerations in thermodynamics and engineering it is rarely necessary or convenient to consider all of the energies belonging to the total intrinsic energy of a sample system, such as the energy given by the equivalence of mass. Typically, descriptions only include components relevant to the system under study. Thermodynamics is chiefly concerned only with changes in the internal energy.

The internal energy is a state function of a system, because its value depends only on the current state of the system and not on the path taken or process undergone to arrive at this state. It is an extensive quantity. The SI unit of energy is the joule (J). Some authors use a corresponding intensive thermodynamic property called specific internal energy, which is internal energy per unit of mass (kilogram) of the system in question. The SI unit of specific internal energy is J/kg. If the intensive internal energy is expressed relative to units of amount of substance (mol), then it is referred to as the molar internal energy and the unit is J/mol.

From the standpoint of statistical mechanics, the internal energy is equal to the ensemble average of the total energy of the system. It is also called intrinsic energy.

Description and definition


The internal energy (U) is the sum of all forms of energy (Ei) intrinsic to a thermodynamic system:

U = Σi Ei

It is the energy needed to create the system. It may be divided into potential energy (Upot) and kinetic energy (Ukin) components:

U = Upot + Ukin

The kinetic energy of a system arises as the sum of the motions of all the system's particles, whether it be the motion of atoms, molecules, atomic nuclei, electrons, or other particles. The potential energy includes all energies given by the mass of particles, by the chemical composition, i.e. the chemical energy stored in chemical bonds having the potential to undergo chemical reactions, the nuclear energy stored by the configuration of protons, neutrons, and other elementary particles in atomic nuclei, and the physical force fields within the system, such as due to internal induced electric or magnetic dipole moments, as well as the energy of deformation of solids (stress-strain).

Internal energy does not include the energy due to motion of the system as a whole. It further excludes any kinetic or potential energy the body may have because of its location in external gravitational, electrostatic, or electromagnetic fields. It does, however, include the contribution to the energy due to the coupling of the internal degrees of freedom of the object to such a field. In such a case, the field is included in the thermodynamic description of the object in the form of an additional external parameter.

For practical considerations in thermodynamics or engineering, it is rarely necessary, convenient, or even possible to consider all energies belonging to the total intrinsic energy of a sample system, such as the energy given by the equivalence of mass. Typically, descriptions only include components relevant to the system under study. Indeed, in most systems under consideration, especially in thermodynamics, it is impossible to calculate the total internal energy.[1] Therefore, a convenient null reference point may be chosen for the internal energy.

The internal energy is an extensive property: it depends on the size of the system, or on the amount of substance it contains.

At any temperature greater than absolute zero, potential energy and kinetic energy are constantly converted into one another, but the sum remains constant in an isolated system (cf. table). In the classical picture of thermodynamics, kinetic energy vanishes at zero temperature and the internal energy is purely potential energy. However, quantum mechanics has demonstrated that even at zero temperature particles maintain a residual energy of motion, the zero-point energy. A system at absolute zero is merely in its quantum-mechanical ground state, the lowest energy state available. At absolute zero a system has attained its minimum attainable entropy.

The kinetic-energy portion of the internal energy gives rise to the temperature of the system. Statistical mechanics relates the pseudo-random kinetic energy of individual particles to the mean kinetic energy of the entire ensemble of particles comprising a system. Furthermore, it relates the mean kinetic energy to the macroscopically observed empirical property that is expressed as the temperature of the system. This energy is often referred to as the thermal energy of a system,[2] relating this energy, like the temperature, to the human experience of hot and cold.

Statistical mechanics considers any system to be statistically distributed across an ensemble of N microstates. Each microstate has an energy Ei and is associated with a probability pi. The internal energy is the mean value of the system's total energy, i.e., the sum of all microstate energies, each weighted by its probability of occurrence:

U = Σi pi Ei

This is the statistical expression of the first law of thermodynamics.


Internal energy changes

Interactions of thermodynamic systems
Type of system        | Mass flow | Work | Heat
Open                  | yes       | yes  | yes
Closed                | no        | yes  | yes
Thermally isolated    | no        | yes  | no
Mechanically isolated | no        | no   | yes
Isolated              | no        | no   | no

Thermodynamics is chiefly concerned only with the changes, ΔU, in internal energy. The most important parameters in thermodynamics when considering the changes in total energy are the changes due to the flow of heat Q and due to mechanical work, i.e. from changes in volume of the system under an external pressure. Accordingly, the internal energy change ΔU for a process may be written more specifically as

ΔU = Q + Wmech + Wextra

where Q is the heat added to the system and Wmech is the mechanical work performed on the system by the surroundings due to pressure or volume changes in the system.[3] All other perturbations and energies added by other processes, such as an electric current introduced into an electronic circuit, are summarized as the term Wextra.

When a system is heated, it receives energy in the form of heat. This energy increases the internal energy. However, it may be extremely difficult to determine how this extra energy is stored. In general, except in an ideal gas, it is redistributed between kinetic and potential energy. The net increase in kinetic energy is measurable by an increase in the temperature of the system. The equipartition theorem states that an increase in thermal energy is distributed between the available degrees of freedom of the fundamental oscillators in the system. In an ideal gas all of the extra energy results in a temperature increase, as it is stored solely as kinetic energy. The heat introduced to a system while the temperature changes is often called sensible heat.

Another method to change the internal energy of a system is by performing work on the system, either in mechanical form by changing pressure or volume, or by other perturbations, such as directing an electrical current through the system. Finally, the internal energy increases when additional mass is transferred into the system.

If a system undergoes certain phase transformations while being heated, such as melting and vaporization, it may be observed that the temperature of the system does not change until the entire sample has completed the transformation. The energy introduced into the system while the temperature does not change is called latent energy, or latent heat, in contrast to sensible heat. It increases only the potential energy of the system, not its thermal-energy component.

Internal energy of the ideal gas


Thermodynamics often uses the concept of the ideal gas for teaching purposes, and as an approximation for working systems. The ideal gas is a gas of particles considered as point objects that interact only by elastic collisions and fill a volume such that their mean free path between collisions is much larger than their diameter. Such systems are approximated by the monatomic gases: helium and the other noble gases. Here the kinetic energy consists only of the translational energy of the individual atoms. Monatomic particles do not rotate or vibrate, and are not electronically excited to higher energies except at very high temperatures. Therefore practical internal energy changes in an ideal gas may be described solely by changes in its kinetic energy. Kinetic energy is simply the internal energy of the perfect gas and depends entirely on its pressure, volume and thermodynamic temperature.

The internal energy of an ideal gas is proportional to its mass (number of moles) N and to its temperature T:

U = c N T

where c is the heat capacity (at constant volume) of the gas. The internal energy may be written as a function of the three extensive properties S, V, N (entropy, volume, mass) in the following way:

U(S, V, N) = const · e^(S/(cN)) · V^(−R/c) · N^((c+R)/c)

where const is an arbitrary positive constant and R is the universal gas constant. It is easily seen that U is a linearly homogeneous function of the three variables and that it is weakly convex. Knowing temperature and pressure to be the derivatives T = ∂U/∂S and p = −∂U/∂V, the ideal gas law pV = NRT immediately follows.
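This last claim is easy to verify symbolically. A minimal sketch (assuming the SymPy library; C0 stands for the arbitrary constant "const" above):

import sympy as sp

S, V, N, R, c, C0 = sp.symbols('S V N R c C0', positive=True)
U = C0 * sp.exp(S / (c * N)) * V**(-R / c) * N**((c + R) / c)

T = sp.diff(U, S)     # temperature:  T = dU/dS at constant V, N
p = -sp.diff(U, V)    # pressure:    p = -dU/dV at constant S, N

print(sp.simplify(p * V - N * R * T))   # -> 0, i.e. pV = NRT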

Internal energy of a closed thermodynamic system


The above summation of all components of change in internal energy assumes that a positive energy denotes heat added to the system or work done on the system, while a negative energy denotes work done by the system on the environment.

Typically this relationship is expressed in infinitesimal terms using the differentials of each term. Only the internal energy is an exact differential. For a system undergoing only thermodynamic processes, i.e. a closed system that can exchange only heat and work, the change in the internal energy is

dU = δQ − δW

which constitutes the first law of thermodynamics. It may be expressed in terms of other thermodynamic parameters. Each term is composed of an intensive variable (a generalized force) and its conjugate infinitesimal extensive variable (a generalized displacement). For example, for a non-viscous fluid, the mechanical work done on the system may be related to the pressure p and volume V. The pressure is the intensive generalized force, while the volume is the extensive generalized displacement:

δW = p dV.

This defines the direction of work, δW, to be energy flow from the working system to the surroundings, indicated by a negative term in the first law. Taking the direction of heat transfer δQ to be into the working fluid and assuming a reversible process, the heat is

δQ = T dS,

where T is temperature and S is entropy, and the change in internal energy becomes

dU = T dS − p dV


Changes due to temperature and volume


The expression relating changes in internal energy to changes in temperature and volume is

dU = CV dT + [T(∂p/∂T)V − p] dV.

This is useful if the equation of state is known. In the case of an ideal gas, T(∂p/∂T)V − p = 0, so dU = CV dT; i.e., the internal energy of an ideal gas can be written as a function that depends only on the temperature.
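The vanishing of the volume term for an ideal gas can again be checked symbolically (a small sketch assuming SymPy):

import sympy as sp

T, V, N, R = sp.symbols('T V N R', positive=True)
p = N * R * T / V                          # ideal gas equation of state

print(sp.simplify(T * sp.diff(p, T) - p))  # -> 0, so dU = C_V dT for an ideal gas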

Changes due to temperature and pressure


When dealing with fluids or solids, an expression in terms of the temperature and pressure is usually more useful:

dU = (Cp − αpV) dT + (βT p − αT) V dp

where α = (1/V)(∂V/∂T)p is the (cubic) thermal expansion coefficient, βT = −(1/V)(∂V/∂p)T is the isothermal compressibility, and where it is assumed that the heat capacity at constant pressure is related to the heat capacity at constant volume according to:

Cp = CV + V T α²/βT

Changes due to volume at constant pressure


The internal pressure is defined as a partial derivative of the internal energy with respect to the volume at constant temperature:

πT = (∂U/∂V)T

Internal energy of multi-component systems


In addition to including the entropy S and volume V terms in the internal energy, a system is often described also in terms of the number of particles or chemical species it contains:

U = U(S, V, N1, …, Nn)

where Nj are the molar amounts of constituents of type j in the system. The internal energy is an extensive function of the extensive variables S, V, and the amounts Nj; it may be written as a linearly homogeneous function of first degree:

U(αS, αV, αN1, …, αNn) = α U(S, V, N1, …, Nn)

where α is a factor describing the growth of the system. The differential internal energy may be written as

dU = (∂U/∂S) dS + (∂U/∂V) dV + Σi (∂U/∂Ni) dNi = T dS − p dV + Σi μi dNi

which shows temperature T to be the partial derivative of U with respect to entropy S, and pressure p to be the negative of the similar derivative with respect to volume V:

T = (∂U/∂S)_{V, {Ni}}
p = −(∂U/∂V)_{S, {Ni}}

and where the coefficients μi are the chemical potentials for the components of type i in the system. The chemical potentials are defined as the partial derivatives of the energy with respect to variations in composition:

μi = (∂U/∂Ni)_{S, V, {Nj≠i}}

As conjugate variables to the composition {Ni}, the chemical potentials are intensive properties, intrinsically characteristic of the system, and not dependent on its extent. Because of the extensive nature of U and its variables, the differential dU may be integrated and yields an expression for the internal energy:

U = T S − p V + Σi μi Ni.

The sum over the composition of the system is the Gibbs energy:

G = Σi μi Ni

that arises from changing the composition of the system at constant temperature and pressure. For a single-component system, the chemical potential equals the Gibbs energy per amount of substance, i.e. per particle or per mole, according to the original definition of the unit for Ni.

Internal energy in an elastic medium


For an elastic medium the mechanical energy term of the internal energy must be replaced by the more general expression involving the stress σij and strain εij. The infinitesimal statement is:

dU = T dS + V σij dεij

where Einstein notation has been used for the tensors, in which there is a summation over all repeated indices in the product term. The Euler theorem yields for the internal energy:

U = T S + (1/2) V σij εij

For a linearly elastic material, the stress can be related to the strain by:

σij = Cijkl εkl

where Cijkl is an element of the 4th-rank elastic constant tensor of the medium.

Computational methods
The path integral Monte Carlo method is a numerical approach for determining the values of the internal energy, based on quantum dynamical principles.

History
James Joule studied the relationship between heat, work, and temperature. He observed that if he did mechanical work on a fluid, such as water, by agitating the fluid, its temperature increased. He proposed that the mechanical work he was doing on the system was converted to thermal energy. Specifically, he found that 4185.5 joules of energy were needed to raise the temperature of a kilogram of water by one degree Celsius.


Notes
[1] I. Klotz, R. Rosenberg, Chemical Thermodynamics - Basic Concepts and Methods, 7th ed., Wiley (2008), p. 39
[2] Thermal energy (http://hyperphysics.phy-astr.gsu.edu/hbase/kinetic/eqpar.html#c2), Hyperphysics
[3] In this article we choose the sign convention of the mechanical work as typically defined in chemistry, which is different from the convention used in physics. In chemistry, work performed by the system against the environment, e.g., a system expansion, is negative, while in physics this is taken to be positive.

Bibliography
Alberty, R. A. (2001). "Use of Legendre transforms in chemical thermodynamics" (http://www.iupac.org/publications/pac/2001/pdf/7308x1349.pdf) (PDF). Pure Appl. Chem. 73 (8): 1349–1380. doi: 10.1351/pac200173081349 (http://dx.doi.org/10.1351/pac200173081349).
Lewis, Gilbert Newton; Randall, Merle; revised by Pitzer, Kenneth S. & Brewer, Leo (1961). Thermodynamics (2nd ed.). New York, NY: McGraw-Hill Book Co. ISBN 0-07-113809-9.
Landau, L. D.; Lifshitz, E. M. (1986). Theory of Elasticity (Course of Theoretical Physics Volume 7). (Translated from Russian by J. B. Sykes and W. H. Reid) (Third ed.). Boston, MA: Butterworth Heinemann. ISBN 0-7506-2633-X.


Chapter 9. Equations
Ideal gas law

The ideal gas law is the equation of state of a hypothetical ideal gas. It is a good approximation to the behaviour of many gases under many conditions, although it has several limitations. It was first stated by Émile Clapeyron in 1834 as a combination of Boyle's law and Charles's law.[1] The ideal gas law is often introduced in its common form:

PV = nRT

where P is the absolute pressure of the gas, V is the volume of the gas, n is the amount of substance of gas (measured in moles), T is the absolute temperature of the gas, and R is the ideal, or universal, gas constant. It can also be derived from kinetic theory, as was achieved (apparently independently) by August Krönig in 1856[2] and Rudolf Clausius in 1857.[3] The universal gas constant was introduced into the ideal gas law, in place of the large number of specific gas constants, by Dmitri Mendeleev in 1874.[4][5][6]

Isotherms of an ideal gas. The curved lines represent the relationship between pressure (on the vertical, y-axis) and volume (on the horizontal, x-axis) for an ideal gas at different temperatures: lines which are further away from the origin (that is, lines that are nearer to the top right-hand corner of the diagram) represent higher temperatures.

Equation

Ideal gas law The state of an amount of gas is determined by its pressure, volume, and temperature. The modern form of the equation relates these simply in two main forms. The temperature used in the equation of state is an absolute temperature: in the SI system of units, kelvin.


Common form
The most frequently introduced form is

PV = nRT

where P is the pressure of the gas, V is the volume of the gas, n is the amount of substance of gas (also known as number of moles), T is the temperature of the gas and R is the ideal, or universal, gas constant, equal to the product of the Boltzmann constant and the Avogadro constant. In SI units, P is measured in pascals, V is measured in cubic metres, n is measured in moles, and T in kelvins (273.15 K = 0.00 °C). R has the value 8.314 J K⁻¹ mol⁻¹, or 0.08206 L atm mol⁻¹ K⁻¹ if using pressure in standard atmospheres (atm) instead of pascals and volume in litres instead of cubic metres.
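As a minimal sketch of the common form in use (the function name and the numbers below are illustrative, not from the original text):

    # Ideal gas law, PV = nRT, solved for pressure.
    R = 8.314  # J K^-1 mol^-1, universal gas constant

    def pressure(n_mol, volume_m3, temperature_K):
        """Absolute pressure, in pascals, of an ideal gas."""
        return n_mol * R * temperature_K / volume_m3

    # One mole in 22.4 L at 0 degrees Celsius gives roughly one atmosphere:
    print(pressure(1.0, 0.0224, 273.15))  # ~1.01e5 Pa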

Molar form
How much gas is present could be specified by giving the mass instead of the chemical amount of gas. Therefore, an alternative form of the ideal gas law may be useful. The chemical amount n (in moles) is equal to the mass m (in grams) divided by the molar mass M (in grams per mole):

n = m/M

By replacing n with m/M, and subsequently introducing the density ρ = m/V, we get:

PV = (m/M)RT, i.e. P = ρ(R/M)T

Defining the specific gas constant R_specific as the ratio R/M,

P = ρ R_specific T

This form of the ideal gas law is very useful because it links pressure, density, and temperature in a unique formula independent of the quantity of the considered gas. Alternatively, the law may be written in terms of the specific volume v, the reciprocal of density, as

P v = R_specific T

It is common, especially in engineering applications, to represent the specific gas constant by the symbol R. In such cases, the universal gas constant is usually given a different symbol such as R̄ (R-bar) to distinguish it. In any case, the context and/or units of the gas constant should make it clear as to whether the universal or specific gas constant is being referred to.[7]
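A short sketch of the molar form in use (the molar mass of dry air, M ≈ 0.028965 kg/mol, is an assumed input here, giving R_specific ≈ 287 J kg⁻¹ K⁻¹):

    # P = rho * R_specific * T, rearranged for density.
    R = 8.314               # J K^-1 mol^-1
    M_air = 0.028965        # kg mol^-1, approximate molar mass of dry air
    R_specific = R / M_air  # ~287 J kg^-1 K^-1

    def density(pressure_Pa, temperature_K):
        """Density rho = P / (R_specific * T)."""
        return pressure_Pa / (R_specific * temperature_K)

    print(density(101325.0, 288.15))  # sea-level air at 15 C: ~1.225 kg/m^3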

Statistical mechanics
In statistical mechanics the following molecular equation is derived from first principles:

PV = NkT

where P is the absolute pressure of the gas measured in pascals; V is the volume (in this equation the volume is expressed in cubic metres, as pascal times cubic metre equals one joule); N is the number of particles in the gas; k is the Boltzmann constant relating temperature and energy; and T is the absolute temperature. The use of the actual number of molecules contrasts with the other formulation, which uses n, the number of moles. This relation implies that Nk = nR, and the consistency of this result with experiment is a good check on the principles of statistical mechanics.
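As a rough illustration of the molecular form (the room volume and conditions are assumed figures):

    # N = PV / (kT): number of particles in an ideal gas.
    k_B = 1.38e-23  # J K^-1, Boltzmann constant

    def particle_count(P_Pa, V_m3, T_K):
        return P_Pa * V_m3 / (k_B * T_K)

    # Molecules of air in a 30 m^3 room at 1 atm and 20 C:
    print(particle_count(101325.0, 30.0, 293.15))  # ~7.5e26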

From this we can notice that for an average particle mass of μ times the atomic mass constant m_u (i.e., the mass is μ u),

PV = (m/(μ m_u)) kT

and since ρ = m/V, we find that the ideal gas law can be rewritten as:

P = ρkT/(μ m_u)

In SI units, P is measured in pascals; V in cubic metres; N is a dimensionless number; and T in kelvins. k has the value 1.38×10⁻²³ J K⁻¹ in SI units.

Applications to thermodynamic processes


The table below essentially simplifies the ideal gas equation for particular processes, thus making it easier to solve using numerical methods. A thermodynamic process is defined as a system that moves from state 1 to state 2, where the state number is denoted by a subscript. As shown in the first column of the table, basic thermodynamic processes are defined such that one of the gas properties (P, V, T, or S) is constant throughout the process. For a given thermodynamic process, in order to specify the extent of a particular process, one of the property ratios (listed under the column labeled "known ratio") must be specified (either directly or indirectly). Also, the property for which the ratio is known must be distinct from the property held constant in the previous column (otherwise the ratio would be unity, and not enough information would be available to simplify the gas law equation). In the final three columns, the properties (P, V, or T) at state 2 can be calculated from the properties at state 1 using the equations listed.
Isobaric process (constant pressure)
  Known ratio V2/V1:  P2 = P1;  V2 = V1(V2/V1);  T2 = T1(V2/V1)
  Known ratio T2/T1:  P2 = P1;  V2 = V1(T2/T1);  T2 = T1(T2/T1)

Isochoric process (isovolumetric / isometric; constant volume)
  Known ratio P2/P1:  P2 = P1(P2/P1);  V2 = V1;  T2 = T1(P2/P1)
  Known ratio T2/T1:  P2 = P1(T2/T1);  V2 = V1;  T2 = T1(T2/T1)

Isothermal process (constant temperature)
  Known ratio P2/P1:  P2 = P1(P2/P1);  V2 = V1/(P2/P1);  T2 = T1
  Known ratio V2/V1:  P2 = P1/(V2/V1);  V2 = V1(V2/V1);  T2 = T1

Isentropic process (reversible adiabatic; constant entropy[a])
  Known ratio P2/P1:  P2 = P1(P2/P1);  V2 = V1(P2/P1)^(-1/γ);  T2 = T1(P2/P1)^(1 - 1/γ)
  Known ratio V2/V1:  P2 = P1(V2/V1)^(-γ);  V2 = V1(V2/V1);  T2 = T1(V2/V1)^(1 - γ)
  Known ratio T2/T1:  P2 = P1(T2/T1)^(γ/(γ - 1));  V2 = V1(T2/T1)^(1/(1 - γ));  T2 = T1(T2/T1)

Polytropic process (P V^n constant)
  Known ratio P2/P1:  P2 = P1(P2/P1);  V2 = V1(P2/P1)^(-1/n);  T2 = T1(P2/P1)^(1 - 1/n)
  Known ratio V2/V1:  P2 = P1(V2/V1)^(-n);  V2 = V1(V2/V1);  T2 = T1(V2/V1)^(1 - n)
  Known ratio T2/T1:  P2 = P1(T2/T1)^(n/(n - 1));  V2 = V1(T2/T1)^(1/(1 - n));  T2 = T1(T2/T1)

a. In an isentropic process, system entropy (S) is constant. Under these conditions, P1 V1^γ = P2 V2^γ, where γ is defined as the heat capacity ratio, which is constant for an ideal gas. The value used for γ is typically 1.4 for diatomic gases like nitrogen (N2) and oxygen (O2) (and air, which is 99% diatomic). Also γ is typically 1.6 for monatomic gases like the noble gases helium (He) and argon (Ar). In internal combustion engines γ varies between 1.35 and 1.15, depending on the constituent gases and the temperature.
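A hedged numerical sketch of the isentropic rows of the table (the helper name, inputs, and the choice γ = 1.4 are our assumptions, not from the article):

    # Isentropic process for an ideal gas: P1 * V1**gamma = P2 * V2**gamma.
    def isentropic_from_volume_ratio(P1, T1, v_ratio, gamma=1.4):
        """Given the known ratio V2/V1, return (P2, T2) per the table above."""
        P2 = P1 * v_ratio ** (-gamma)
        T2 = T1 * v_ratio ** (1.0 - gamma)
        return P2, T2

    # Reversible adiabatic compression of air to half its volume:
    P2, T2 = isentropic_from_volume_ratio(P1=100e3, T1=300.0, v_ratio=0.5)
    print(P2, T2)  # ~2.64e5 Pa, ~396 K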


Deviations from ideal behavior of real gases


The equation of state given here applies only to an ideal gas, or as an approximation to a real gas that behaves sufficiently like an ideal gas. There are in fact many different forms of the equation of state. Since the ideal gas law neglects both molecular size and intermolecular attractions, it is most accurate for monatomic gases at high temperatures and low pressures. The neglect of molecular size becomes less important for lower densities, i.e. for larger volumes at lower pressures, because the average distance between adjacent molecules becomes much larger than the molecular size. The relative importance of intermolecular attractions diminishes with increasing thermal kinetic energy, i.e., with increasing temperatures. More detailed equations of state, such as the van der Waals equation, account for deviations from ideality caused by molecular size and intermolecular forces. A residual property is defined as the difference between a real gas property and an ideal gas property, both considered at the same pressure, temperature, and composition.

Derivations
Empirical
The ideal gas law can be derived from combining two empirical gas laws: the combined gas law and Avogadro's law. The combined gas law states that

PV/T = C

where C is a constant which is directly proportional to the amount of gas, n (Avogadro's law). The proportionality factor is the universal gas constant, R, i.e. C = nR. Hence the ideal gas law

PV = nRT

Theoretical
Kinetic theory
The ideal gas law can also be derived from first principles using the kinetic theory of gases, in which several simplifying assumptions are made, chief among which are that the molecules, or atoms, of the gas are point masses, possessing mass but no significant volume, and undergo only elastic collisions with each other and the sides of the container in which both linear momentum and kinetic energy are conserved.

Statistical mechanics
Let q = (qx, qy, qz) and p = (px, py, pz) denote the position vector and momentum vector of a particle of an ideal gas, respectively. Let F denote the net force on that particle. Then the time-averaged momentum of the particle is:

⟨F · q⟩ = ⟨(dp/dt) · q⟩ = −⟨q · ∂H/∂q⟩ = −3 k_B T

where the first equality is Newton's second law, and the second line uses Hamilton's equations and the equipartition theorem. Summing over a system of N particles yields

⟨Σ_k F_k · q_k⟩ = −3 N k_B T

By Newton's third law and the ideal gas assumption, the net force on the system is the force applied by the walls of the container, and this force is given by the pressure P of the gas. Hence

⟨Σ_k F_k · q_k⟩ = −P ∮ q · dS

where dS is the infinitesimal area element along the walls of the container. Since the divergence of the position vector q is

∇ · q = 3

the divergence theorem implies that

−P ∮ q · dS = −P ∫ (∇ · q) dV = −3PV

where dV is an infinitesimal volume within the container and V is the total volume of the container. Putting these equalities together yields

3PV = 3 N k_B T

which immediately implies the ideal gas law for N particles:

PV = N k_B T = nRT

where n = N/N_A is the number of moles of gas and R = N_A k_B is the gas constant.

References
[1] Facsimile at the Bibliothèque nationale de France (pp. 153–90). (http://gallica.bnf.fr/ark:/12148/bpt6k4336791/f157.table)
[2] Facsimile at the Bibliothèque nationale de France (pp. 315–22). (http://gallica.bnf.fr/ark:/12148/bpt6k15184h/f327.table)
[3] Facsimile at the Bibliothèque nationale de France (pp. 353–79). (http://gallica.bnf.fr/ark:/12148/bpt6k15185v/f371.table)
[4][5] (From the Laboratory of the University of St. Petersburg). Facsimile at the Bibliothèque nationale de France (http://gallica.bnf.fr/ark:/12148/bpt6k95208b.r=mendeleev.langEN)
[6] doi: 10.1038/015498a0.
[7] Moran and Shapiro, Fundamentals of Engineering Thermodynamics, Wiley, 4th ed., 2000

Further reading
Davis and Masten, Principles of Environmental Engineering and Science, McGraw-Hill Companies, Inc., New York (2002) ISBN 0-07-235053-9
Website giving credit to Benoît Paul Émile Clapeyron (1799–1864) in 1834 (http://www.gearseds.com/curriculum/learn/lesson.php?id=23&chapterid=5)

External links
Ideal Gas Law Calculator (http://www.webqc.org/ideal_gas_law.html)
Configuration integral (statistical mechanics) (http://clesm.mae.ufl.edu/wiki.pub/index.php/Configuration_integral_(statistical_mechanics)), where an alternative statistical mechanics derivation of the ideal-gas law, using the relationship between the Helmholtz free energy and the partition function, but without using the equipartition theorem, is provided.


Chapter 10. Fundamentals


Fundamental thermodynamic relation

In thermodynamics, the fundamental thermodynamic relation is generally expressed as an infinitesimal change in internal energy in terms of infinitesimal changes in entropy and volume for a closed system in thermal equilibrium in the following way:

dU = T dS − P dV

Here, U is internal energy, T is absolute temperature, S is entropy, P is pressure, and V is volume. This is only one expression of the fundamental thermodynamic relation. It may be expressed in other ways, using different variables (e.g. using thermodynamic potentials). For example, the fundamental relation may be expressed in terms of the Helmholtz free energy (F) as:

dF = −S dT − P dV

Derivation from the first and second laws of thermodynamics


The first law of thermodynamics states that:

dU = δQ − δW

where δQ and δW are infinitesimal amounts of heat supplied to the system by its surroundings and work done by the system on its surroundings, respectively. According to the second law of thermodynamics we have for a reversible process:

dS = δQ/T

Hence:

δQ = T dS

By substituting this into the first law, we have:

dU = T dS − δW

Letting δW be reversible pressure-volume work, δW = P dV, we have:

dU = T dS − P dV

This equation has been derived in the case of reversible changes. However, since U, S, and V are thermodynamic functions of state, the above relation holds also for non-reversible changes. If the system has more external parameters than just the volume that can change, and if the numbers of particles in the system can also change, the fundamental thermodynamic relation generalizes to:

dU = T dS − Σ_i X_i dx_i + Σ_j μ_j dN_j

Here the X_i are the generalized forces corresponding to the external parameters x_i. The μ_j are the chemical potentials corresponding to particles of type j.
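As a quick symbolic consistency check (a sketch using SymPy; the ideal-gas expressions for U, S, and P below, written up to additive constants, are standard results assumed here rather than taken from this article), one can verify that T dS − P dV reproduces dU for an ideal gas:

    import sympy as sp

    T, V, n, R, Cv = sp.symbols('T V n R C_v', positive=True)

    U = n * Cv * T                              # ideal-gas internal energy
    S = n * Cv * sp.log(T) + n * R * sp.log(V)  # ideal-gas entropy (+ const)
    P = n * R * T / V                           # ideal gas law

    # Compare the coefficients of dT and dV on both sides of dU = T dS - P dV:
    print(sp.simplify(sp.diff(U, T) - T * sp.diff(S, T)))        # 0
    print(sp.simplify(sp.diff(U, V) - (T * sp.diff(S, V) - P)))  # 0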

Derivation from statistical mechanical principles


The above derivation uses the first and second laws of thermodynamics. The first law of thermodynamics is essentially a definition of heat, i.e. heat is the change in the internal energy of a system that is not caused by a change of the external parameters of the system.

However, the second law of thermodynamics is not a defining relation for the entropy. The fundamental definition of entropy of an isolated system containing an amount of energy E is:

S = k ln Ω(E)

where Ω(E) is the number of quantum states in a small interval between E and E + δE. Here δE is a macroscopically small energy interval that is kept fixed. Strictly speaking this means that the entropy depends on the choice of δE. However, in the thermodynamic limit (i.e. in the limit of infinitely large system size), the specific entropy (entropy per unit volume or per unit mass) does not depend on δE. The entropy is thus a measure of the uncertainty about exactly which quantum state the system is in, given that we know its energy to be in some interval of size δE. Deriving the fundamental thermodynamic relation from first principles thus amounts to proving that the above definition of entropy implies that for reversible processes we have:

dS = δQ/T

The fundamental assumption of statistical mechanics is that all the Ω(E) states are equally likely. This allows us to extract all the thermodynamical quantities of interest. The temperature is defined as:

1/(kT) ≡ β ≡ d ln Ω(E)/dE

This definition can be derived from the microcanonical ensemble, which is a system of a constant number of particles, a constant volume and that does not exchange energy with its environment. Suppose that the system has some external parameter, x, that can be changed. In general, the energy eigenstates of the system will depend on x. According to the adiabatic theorem of quantum mechanics, in the limit of an infinitely slow change of the system's Hamiltonian, the system will stay in the same energy eigenstate and thus change its energy according to the change in energy of the energy eigenstate it is in.

The generalized force, X, corresponding to the external parameter x is defined such that X dx is the work performed by the system if x is increased by an amount dx. E.g., if x is the volume, then X is the pressure. The generalized force for a system known to be in energy eigenstate E_r is given by:

X = −dE_r/dx

Since the system can be in any energy eigenstate within an interval of δE, we define the generalized force for the system as the expectation value of the above expression:

X = −⟨dE_r/dx⟩

To evaluate the average, we partition the Ω(E) energy eigenstates by counting how many of them have a value for dE_r/dx within a range between Y and Y + δY. Calling this number Ω_Y(E), we have:

Ω(E) = Σ_Y Ω_Y(E)

The average defining the generalized force can now be written:

X = −(1/Ω(E)) Σ_Y Y Ω_Y(E)

We can relate this to the derivative of the entropy with respect to x at constant energy E as follows. Suppose we change x to x + dx. Then Ω(E) will change because the energy eigenstates depend on x, causing energy eigenstates to move into or out of the range between E and E + δE. Let's focus again on the energy eigenstates for which dE_r/dx lies within the range between Y and Y + δY. Since these energy eigenstates increase in energy by Y dx, all such energy eigenstates that are in the interval ranging from E − Y dx to E move from below E to above E. There are

N_Y(E) = (Ω_Y(E)/δE) Y dx

such energy eigenstates. If Y dx ≤ δE, all these energy eigenstates will move into the range between E and E + δE and contribute to an increase in Ω. The number of energy eigenstates that move from below E + δE to above E + δE is, of course, given by N_Y(E + δE). The difference

N_Y(E) − N_Y(E + δE)

is thus the net contribution to the increase in Ω. Note that if Y dx is larger than δE there will be energy eigenstates that move from below E to above E + δE. They are counted in both N_Y(E) and N_Y(E + δE), therefore the above expression is also valid in that case.

Expressing the above expression as a derivative with respect to E and summing over Y yields the expression:

(∂Ω/∂x)_E = −Σ_Y Y (∂Ω_Y/∂E)_x = (∂(ΩX)/∂E)_x

The logarithmic derivative of Ω with respect to x is thus given by:

(∂ ln Ω/∂x)_E = βX + (∂X/∂E)_x

The first term is intensive, i.e. it does not scale with system size. In contrast, the last term scales as the inverse system size and thus vanishes in the thermodynamic limit. We have thus found that:

(∂S/∂x)_E = X/T

Combining this with

(∂S/∂E)_x = 1/T

gives:

dS = (∂S/∂E)_x dE + (∂S/∂x)_E dx = dE/T + (X/T) dx

which we can write as:

dE = T dS − X dx

External links
The Fundamental Thermodynamic Relation [1]

References
[1] http://theory.ph.man.ac.uk/~judith/stat_therm/node38.html

Heat engine

In thermodynamics, a heat engine is a system that performs the conversion of heat or thermal energy to mechanical work.[1][2] It does this by bringing a working substance from a higher temperature state to a lower temperature state. A heat "source" generates thermal energy that brings the working substance to the high temperature state. The working substance generates work in the "working body" of the engine while transferring heat to the colder "sink" until it reaches a low temperature state. During this process some of the thermal energy is converted into work by exploiting the properties of the working substance. The working substance can be any system with a non-zero heat capacity, but it usually is a gas or liquid.

In general an engine converts energy to mechanical work. Heat engines distinguish themselves from other types of engines by the fact that their efficiency is fundamentally limited by Carnot's theorem.[3] Although this efficiency limitation can be a drawback, an advantage of heat engines is that most forms of energy can be easily converted to heat by processes like exothermic reactions (such as combustion), absorption of light or energetic particles, friction, dissipation and resistance. Since the heat source that supplies thermal energy to the engine can thus be powered by virtually any kind of energy, heat engines are very versatile and have a wide range of applicability.

Heat engines are often confused with the cycles they attempt to mimic. Typically, when describing the physical device the term 'engine' is used; when describing the model, the term 'cycle' is used.


Overview
In thermodynamics, heat engines are often modeled using a standard engineering model such as the Otto cycle. The theoretical model can be refined and augmented with actual data from an operating engine, using tools such as an indicator diagram. Since very few actual implementations of heat engines exactly match their underlying thermodynamic cycles, one could say that a thermodynamic cycle is an ideal case of a mechanical engine. In any case, fully understanding an engine and its efficiency requires gaining a good understanding of the (possibly simplified or idealized) theoretical model, the practical nuances of an actual mechanical engine, and the discrepancies between the two.

Figure 1: Heat engine diagram

In general terms, the larger the difference in temperature between the hot source and the cold sink, the larger is the potential thermal efficiency of the cycle. On Earth, the cold side of any heat engine is limited to being close to the ambient temperature of the environment, or not much lower than 300 kelvin, so most efforts to improve the thermodynamic efficiencies of various heat engines focus on increasing the temperature of the source, within material limits. The maximum theoretical efficiency of a heat engine (which no engine ever attains) is equal to the temperature difference between the hot and cold ends divided by the temperature at the hot end, all expressed in absolute temperature (kelvins). The efficiency of various heat engines proposed or used today ranges from 3 percent[4] (97 percent waste heat using low quality heat) for the OTEC ocean power proposal, through 25 percent for most automotive engines[citation needed], to 45 percent for a supercritical coal-fired power station, to about 60 percent for a steam-cooled combined cycle gas turbine.[5] All of these processes gain their efficiency (or lack thereof) due to the temperature drop across them.


Power
Heat engines can be characterized by their specific power, which is typically given in kilowatts per litre of engine displacement (in the U.S. also horsepower per cubic inch). The result offers an approximation of the peak power output of an engine. This is not to be confused with fuel efficiency, since high efficiency often requires a lean fuel-air ratio, and thus lower power density. A modern high-performance car engine makes in excess of 75 kW/l (1.65 hp/in³).

Everyday examples
Examples of everyday heat engines include the steam engine, the diesel engine, and the gasoline (petrol) engine in an automobile. A common toy that is also a heat engine is a drinking bird; the Stirling engine is a heat engine as well. All of these familiar heat engines are powered by the expansion of heated gases. The general surroundings are the heat sink, providing relatively cool gases which, when heated, expand rapidly to drive the mechanical motion of the engine.

Examples of heat engines


It is important to note that although some cycles have a typical combustion location (internal or external), they can often be implemented with the other. For example, John Ericsson developed an externally heated engine running on a cycle very much like the earlier Diesel cycle. In addition, externally heated engines can often be implemented in open or closed cycles.

Phase-change cycles
In these cycles and engines, the working fluids are gases and liquids. The engine converts the working fluid from a gas to a liquid, from liquid to gas, or both, generating work from the fluid expansion or compression.
Rankine cycle (classical steam engine)
Regenerative cycle (steam engine more efficient than Rankine cycle)
Organic Rankine cycle (coolant changing phase in temperature ranges of ice and hot liquid water)
Vapor to liquid cycle (drinking bird, injector, Minto wheel)
Liquid to solid cycle (frost heaving; water changing from ice to liquid and back again can lift rock up to 60 cm)
Solid to gas cycle (dry ice cannon; dry ice sublimes to gas)

Gas-only cycles
In these cycles and engines the working fluid is always a gas (i.e., there is no phase change):
Carnot cycle (Carnot heat engine)
Ericsson cycle (Caloric Ship John Ericsson)
Stirling cycle (Stirling engine, thermoacoustic devices)
Internal combustion engine (ICE):
  Otto cycle (e.g. gasoline/petrol engine, high-speed diesel engine)
  Diesel cycle (e.g. low-speed diesel engine)
  Atkinson cycle (Atkinson engine)
  Brayton cycle or Joule cycle, originally Ericsson cycle (gas turbine)
  Lenoir cycle (e.g., pulse jet engine)
  Miller cycle


Liquid only cycle


In these cycles and engines the working fluid is always a liquid:
Stirling cycle (Malone engine)
Heat Regenerative Cyclone

Electron cycles
Johnson thermoelectric energy converter
Thermoelectric (Peltier-Seebeck effect)
Thermionic emission
Thermotunnel cooling

Magnetic cycles
Thermo-magnetic motor (Tesla)

Cycles used for refrigeration


A domestic refrigerator is an example of a heat pump: a heat engine in reverse. Work is used to create a heat differential. Many cycles can run in reverse to move heat from the cold side to the hot side, making the cold side cooler and the hot side hotter. Internal combustion engine versions of these cycles are, by their nature, not reversible. Refrigeration cycles include:
Vapor-compression refrigeration
Stirling cryocoolers
Gas-absorption refrigerator
Air cycle machine
Vuilleumier refrigeration
Magnetic refrigeration

Evaporative heat engines


The Barton evaporation engine is a heat engine based on a cycle producing power and cooled moist air from the evaporation of water into hot dry air.

Mesoscopic heat engines


Mesoscopic heat engines are nanoscale devices that may serve the goal of processing heat fluxes and performing useful work at small scales. Potential applications include e.g. electric cooling devices. In such mesoscopic heat engines, work per cycle of operation fluctuates due to thermal noise. There is an exact equality that relates the average of exponents of work performed by any heat engine to the heat transfer from the hotter heat bath. This relation transforms Carnot's inequality into an exact equality.


Efficiency
The efficiency of a heat engine relates how much useful work is output for a given amount of heat energy input. From the laws of thermodynamics:

W = Q_h + Q_c

where
W is the work extracted from the engine (it is negative since work is done by the engine);
Q_h is the heat energy taken from the high temperature system (it is negative since heat is extracted from the source);
Q_c is the heat energy delivered to the cold temperature system (it is positive since heat is added to the sink).

In other words, a heat engine absorbs heat energy from the high temperature heat source, converting part of it to useful work and delivering the rest to the cold temperature heat sink. In general, the efficiency of a given heat transfer process (whether it be a refrigerator, a heat pump or an engine) is defined informally by the ratio of "what you get out" to "what you put in." In the case of an engine, one desires to extract work and puts in a heat transfer.

The theoretical maximum efficiency of any heat engine depends only on the temperatures it operates between. This efficiency is usually derived using an ideal imaginary heat engine such as the Carnot heat engine, although other engines using different cycles can also attain maximum efficiency. Mathematically, this is because in reversible processes the change in entropy of the cold reservoir is the negative of that of the hot reservoir (i.e., ΔS_c = −ΔS_h), keeping the overall change of entropy zero. Thus:

η_max = 1 − T_c/T_h

where T_h is the absolute temperature of the hot source and T_c that of the cold sink, usually measured in kelvins. Note that ΔS_c is positive while ΔS_h is negative; in any reversible work-extracting process, entropy is overall not increased, but rather is moved from a hot (high-entropy) system to a cold (low-entropy) one, decreasing the entropy of the heat source and increasing that of the heat sink.

The reasoning behind this being the maximal efficiency goes as follows. It is first assumed that if a more efficient heat engine than a Carnot engine is possible, then it could be driven in reverse as a heat pump. Mathematical analysis can be used to show that this assumed combination would result in a net decrease in entropy. Since, by the second law of thermodynamics, this is statistically improbable to the point of exclusion, the Carnot efficiency is a theoretical upper bound on the reliable efficiency of any process. Empirically, no heat engine has ever been shown to run at a greater efficiency than a Carnot cycle heat engine.

Figure 2 and Figure 3 show variations on Carnot cycle efficiency. Figure 2 indicates how efficiency changes with an increase in the heat addition temperature for a constant compressor inlet temperature. Figure 3 indicates how the efficiency changes with an increase in the heat rejection temperature for a constant turbine inlet temperature.
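As a one-line illustration (the temperatures below are assumed values, not from the article):

    def carnot_efficiency(T_hot, T_cold):
        """Maximum (Carnot) efficiency, 1 - Tc/Th, temperatures in kelvins."""
        return 1.0 - T_cold / T_hot

    # An 800 K source rejecting heat to a 300 K ambient sink:
    print(carnot_efficiency(800.0, 300.0))  # 0.625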


Figure 2: Carnot cycle efficiency with changing heat addition temperature.

Figure 3: Carnot cycle efficiency with changing heat rejection temperature.

Endoreversible heat engines


The main drawback of using the Carnot efficiency as a criterion of heat engine performance is the fact that, by its nature, any maximally efficient Carnot cycle must operate at an infinitesimal temperature gradient. This is because any transfer of heat between two bodies at differing temperatures is irreversible, and therefore the Carnot efficiency expression only applies in the infinitesimal limit. The major problem with that is that the object of most heat engines is to output some sort of power, and infinitesimal power is usually not what is being sought. A different measure of ideal heat engine efficiency is given by considerations of endoreversible thermodynamics, where the cycle is identical to the Carnot cycle except in that the two processes of heat transfer are not reversible (Callen 1985):

η = 1 − √(T_c/T_h)   (Note: units K or °R)

This model does a better job of predicting how well real-world heat engines can do (Callen 1985; see also endoreversible thermodynamics):

Efficiencies of power stations[citation needed]


Power station                                T_c (°C)  T_h (°C)  η (Carnot)  η (Endoreversible)  η (Observed)
West Thurrock (UK) coal-fired power station  25        565       0.64        0.40                0.36
CANDU (Canada) nuclear power station         25        300       0.48        0.28                0.30
Larderello (Italy) geothermal power station  80        250       0.33        0.178               0.16

As shown, the endoreversible efficiency much more closely models the observed data.
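The endoreversible column can be reproduced, at least approximately (the table's exact rounding conventions are not stated), from the tabulated temperatures:

    from math import sqrt

    # (name, Tc in deg C, Th in deg C), taken from the table above
    stations = [
        ("West Thurrock (coal)", 25.0, 565.0),
        ("CANDU (nuclear)", 25.0, 300.0),
        ("Larderello (geothermal)", 80.0, 250.0),
    ]
    for name, tc, th in stations:
        Tc, Th = tc + 273.15, th + 273.15
        carnot = 1.0 - Tc / Th      # ideal Carnot limit
        endo = 1.0 - sqrt(Tc / Th)  # endoreversible form given above
        print(f"{name}: Carnot {carnot:.2f}, endoreversible {endo:.2f}")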

Heat engine

270

History
Heat engines have been known since antiquity but were only made into useful devices at the time of the industrial revolution in the 18th century. They continue to be developed today.

Heat engine enhancements


Engineers have studied the various heat engine cycles extensively in an effort to improve the amount of usable work they could extract from a given power source. The Carnot cycle limit cannot be reached with any gas-based cycle, but engineers have worked out at least two ways to possibly go around that limit, and one way to get better efficiency without bending any rules.

1. Increase the temperature difference in the heat engine. The simplest way to do this is to increase the hot side temperature, which is the approach used in modern combined-cycle gas turbines. Unfortunately, physical limits (such as the melting point of the materials from which the engine is constructed) and environmental concerns regarding NOx production restrict the maximum temperature on workable heat engines. Modern gas turbines run at temperatures as high as possible within the range of temperatures necessary to maintain acceptable NOx output[citation needed]. Another way of increasing efficiency is to lower the output temperature. One new method of doing so is to use mixed chemical working fluids, and then exploit the changing behavior of the mixtures. One of the most famous is the so-called Kalina cycle, which uses a 70/30 mix of ammonia and water as its working fluid. This mixture allows the cycle to generate useful power at considerably lower temperatures than most other processes.

2. Exploit the physical properties of the working fluid. The most common such exploitation is the use of water above the so-called critical point, or so-called supercritical steam. The behavior of fluids above their critical point changes radically, and with materials such as water and carbon dioxide it is possible to exploit those changes in behavior to extract greater thermodynamic efficiency from the heat engine, even if it is using a fairly conventional Brayton or Rankine cycle. A newer and very promising material for such applications is CO2. SO2 and xenon have also been considered for such applications, although SO2 is somewhat toxic.

3. Exploit the chemical properties of the working fluid. A fairly new and novel exploit is to use exotic working fluids with advantageous chemical properties. One such is nitrogen dioxide (NO2), a toxic component of smog, which has a natural dimer as di-nitrogen tetraoxide (N2O4). At low temperature, the N2O4 is compressed and then heated. The increasing temperature causes each N2O4 to break apart into two NO2 molecules. This lowers the molecular weight of the working fluid, which drastically increases the efficiency of the cycle. Once the NO2 has expanded through the turbine, it is cooled by the heat sink, which causes it to recombine into N2O4. This is then fed back to the compressor for another cycle. Such species as aluminium bromide (Al2Br6), NOCl, and Ga2I6 have all been investigated for such uses. To date, their drawbacks have not warranted their use, despite the efficiency gains that can be realized.

Heat engine processes


Each cycle is listed as: Process 1-2 (compression); Process 2-3 (heat addition); Process 3-4 (expansion); Process 4-1 (heat rejection); notes.

Power cycles normally with external combustion, or heat pump cycles:
Bell Coleman: adiabatic; isobaric; adiabatic; isobaric. A reversed Brayton cycle.
Carnot: isentropic; isothermal; isentropic; isothermal.
Ericsson: isothermal; isobaric; isothermal; isobaric. The second Ericsson cycle, from 1853.
Rankine: adiabatic; isobaric; adiabatic; isobaric. Steam engine.
Hygroscopic: adiabatic; isobaric; adiabatic; isobaric. Hygroscopic cycle.
Scuderi: adiabatic; variable pressure and volume; adiabatic; isochoric.
Stirling: isothermal; isochoric; isothermal; isochoric.
Stoddard: adiabatic; isobaric; adiabatic; isobaric.

Power cycles normally with internal combustion:
Brayton: adiabatic; isobaric; adiabatic; isobaric. Jet engines; the external combustion version of this cycle is known as the first Ericsson cycle, from 1833.
Diesel: adiabatic; isobaric; adiabatic; isochoric.
Lenoir: isobaric; isochoric; adiabatic. Pulse jets. (Note: process 1-2 accomplishes both the heat rejection and the compression.)
Otto: adiabatic; isochoric; adiabatic; isochoric. Gasoline / petrol engines.

Each process is one of the following:
isothermal (at constant temperature, maintained with heat added or removed from a heat source or sink)
isobaric (at constant pressure)
isometric/isochoric (at constant volume), also referred to as iso-volumetric
adiabatic (no heat is added or removed from the system during an adiabatic process)
isentropic (reversible adiabatic process; no heat is added or removed during an isentropic process)

References
[1] Fundamentals of Classical Thermodynamics, 3rd ed., p. 159 (1985) by G. J. Van Wylen and R. E. Sonntag: "A heat engine may be defined as a device that operates in a thermodynamic cycle and does a certain amount of net positive work as a result of heat transfer from a high-temperature body and to a low-temperature body. Often the term heat engine is used in a broader sense to include all devices that produce work, either through heat transfer or combustion, even though the device does not operate in a thermodynamic cycle. The internal-combustion engine and the gas turbine are examples of such devices, and calling these heat engines is an acceptable use of the term."
[2] Mechanical efficiency of heat engines, p. 1 (2007) by James R. Senft: "Heat engines are made to provide mechanical energy from thermal energy."
[3] Thermal physics: entropy and free energies, by Joon Chang Lee (2002), Appendix A, p. 183: "A heat engine absorbs energy from a heat source and then converts it into work for us." "When the engine absorbs heat energy, the absorbed heat energy comes with entropy." "When the engine performs work, on the other hand, no entropy leaves the engine. This is problematic. We would like the engine to repeat the process again and again to provide us with a steady work source. ... to do so, the working substance inside the engine must return to its initial thermodynamic condition after a cycle, which requires to remove the remaining entropy. The engine can do this only in one way. It must let part of the absorbed heat energy leave without converting it into work. Therefore the engine cannot convert all of the input energy into work!"

[4] M. Emam, Experimental Investigations on a Standing-Wave Thermoacoustic Engine, M.Sc. Thesis, Cairo University, Egypt (2013) (http://www.scribd.com/doc/147785416/Experimental-Investigations-on-a-Standing-Wave-Thermoacoustic-Engine#fullscreen).
[5] "Efficiency by the Numbers" (http://memagazine.asme.org/Web/Efficiency_by_Numbers.cfm) by Lee S. Langston


Notes
Kroemer, Herbert; Kittel, Charles (1980). Thermal Physics (2nd ed.). W. H. Freeman Company. ISBN 0-7167-1088-9.
Callen, Herbert B. (1985). Thermodynamics and an Introduction to Thermostatistics (2nd ed.). John Wiley & Sons, Inc. ISBN 0-471-86256-8.
On line museum of toy steam engines, including a very rare Bing heat engine (http://www.mikes-steam-engines.co.uk/other_engines.htm)

External links
Video of Stirling engine running on dry ice (http://www.icefoundry.org/how-stirling-engine-works.php)
Heat Engine (http://www.taftan.com/thermodynamics/HENGINE.HTM)

Carnot cycle

The Carnot cycle is a theoretical thermodynamic cycle proposed by Nicolas Léonard Sadi Carnot in 1824 and expanded upon by others in the 1830s and 1840s. It can be shown that it is the most efficient cycle for converting a given amount of thermal energy into work, or conversely, creating a temperature difference (e.g. refrigeration) by doing a given amount of work.

Every single thermodynamic system exists in a particular state. When a system is taken through a series of different states and finally returned to its initial state, a thermodynamic cycle is said to have occurred. In the process of going through this cycle, the system may perform work on its surroundings, thereby acting as a heat engine. A system undergoing a Carnot cycle is called a Carnot heat engine, although such a 'perfect' engine is only a theoretical limit and cannot be built in practice.[citation needed]


Stages of the Carnot Cycle


The Carnot cycle when acting as a heat engine consists of the following steps:

1. Reversible isothermal expansion of the gas at the "hot" temperature, T1 (isothermal heat addition or absorption). During this step (1 to 2 on Figure 1, A to B in Figure 2) the gas is allowed to expand and it does work on the surroundings. The temperature of the gas does not change during the process, and thus the expansion is isothermal. The gas expansion is propelled by absorption of heat energy Q1 and of entropy from the high temperature reservoir.

2. Isentropic (reversible adiabatic) expansion of the gas (isentropic work output). For this step (2 to 3 on Figure 1, B to C in Figure 2) the piston and cylinder are assumed to be thermally insulated, thus they neither gain nor lose heat. The gas continues to expand, doing work on the surroundings, and losing an equivalent amount of internal energy. The gas expansion causes it to cool to the "cold" temperature, T2. The entropy remains unchanged.

3. Reversible isothermal compression of the gas at the "cold" temperature, T2 (isothermal heat rejection) (3 to 4 on Figure 1, C to D on Figure 2). Now the surroundings do work on the gas, causing an amount of heat energy Q2 and of entropy to flow out of the gas to the low temperature reservoir. (This is the same amount of entropy absorbed in step 1, as can be seen from the Clausius inequality.)

4. Isentropic compression of the gas (isentropic work input). (4 to 1 on Figure 1, D to A on Figure 2) Once again the piston and cylinder are assumed to be thermally insulated. During this step, the surroundings do work on the gas, increasing its internal energy and compressing it, causing the temperature to rise to T1. The entropy remains unchanged. At this point the gas is in the same state as at the start of step 1.

The pressure-volume graph


When the Carnot cycle is plotted on a pressure-volume diagram, the isothermal stages follow the isotherm lines for the working fluid, the adiabatic stages move between isotherms, and the area bounded by the complete cycle path represents the total work that can be done during one cycle.

Properties and significance

Figure 1: A Carnot cycle illustrated on a PV diagram, showing the work done.


The temperature-entropy diagram


The behaviour of a Carnot engine or refrigerator is best understood by using a temperature-entropy diagram (TS diagram), in which the thermodynamic state is specified by a point on a graph with entropy (S) as the horizontal axis and temperature (T) as the vertical axis. For a simple system with a fixed number of particles, any point on the graph will represent a particular state of the system. A thermodynamic process will consist of a curve connecting an initial state (A) and a final state (B). The area under the curve will be:

Q = ∫(A to B) T dS

which is the amount of thermal energy transferred in the process. If the process moves to greater entropy, the area under the curve will be the amount of heat absorbed by the system in that process. If the process moves towards lesser entropy, it will be the amount of heat removed. For any cyclic process, there will be an upper portion of the cycle and a lower portion. For a clockwise cycle, the area under the upper portion will be the thermal energy absorbed during the cycle, while the area under the lower portion will be the thermal energy removed during the cycle. The area inside the cycle will then be the difference between the two, but since the internal energy of the system must have returned to its initial value, this difference must be the amount of work done by the system over the cycle. Referring to figure 1, mathematically, for a reversible process we may write the amount of work done over a cyclic process as:

W = ∮ P dV = ∮ (T dS − dU)

Figure 2: A Carnot cycle acting as a heat engine, illustrated on a temperature-entropy diagram. The cycle takes place between a hot reservoir at temperature TH and a cold reservoir at temperature TC. The vertical axis is temperature, the horizontal axis is entropy.

A generalized thermodynamic cycle taking place between a hot reservoir at temperature TH and a cold reservoir at temperature TC. By the second law of thermodynamics, the cycle cannot extend outside the temperature band from TC to TH. The area in red QC is the amount of energy exchanged between the system and the cold reservoir. The area in white W is the amount of work energy exchanged by the system with its surroundings. The amount of heat exchanged with the hot reservoir is the sum of the two. If the system is behaving as an engine, the process moves clockwise around the loop, and moves counter-clockwise if it is behaving as a refrigerator. The efficiency of the cycle is the ratio of the white area (work) divided by the sum of the white and red areas (heat absorbed from the hot reservoir).

Since dU is an exact differential, its integral over any closed loop is zero and it follows that the area inside the loop on a T-S diagram is equal to the total work performed if the loop is traversed in a clockwise direction, and is equal to the total work done on the system as the loop is traversed in a counterclockwise direction.


The Carnot cycle


Evaluation of the above integral is particularly simple for the Carnot cycle. The amount of energy transferred as work is

W = ∮ P dV = ∮ (T dS − dU) = (T_H − T_C)(S_B − S_A)

The total amount of thermal energy transferred between the hot reservoir and the system will be

Q_H = T_H (S_B − S_A)

and the total amount of thermal energy transferred between the system and the cold reservoir will be

Q_C = T_C (S_B − S_A)

A Carnot cycle taking place between a hot reservoir at temperature TH and a cold reservoir at temperature TC.

The efficiency η is defined to be:

η = W/Q_H = 1 − T_C/T_H     (3)

where W is the work done by the system (energy exiting the system as work), Q_H is the heat put into the system (heat energy entering the system), T_C is the absolute temperature of the cold reservoir, T_H is the absolute temperature of the hot reservoir, S_B is the maximum system entropy, and S_A is the minimum system entropy. This efficiency makes sense for a heat engine, since it is the fraction of the heat energy extracted from the hot reservoir and converted to mechanical work. A Rankine cycle is usually the practical approximation.
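A small numeric sketch of this energy bookkeeping (the temperatures and entropy swing are assumed values, not from the article):

    # One Carnot cycle between T_H and T_C with entropy swing S_B - S_A.
    T_H, T_C = 500.0, 300.0  # kelvins
    dS = 2.0                 # J/K, entropy absorbed isothermally at T_H

    Q_H = T_H * dS           # heat drawn from the hot reservoir: 1000 J
    Q_C = T_C * dS           # heat rejected to the cold reservoir: 600 J
    W = Q_H - Q_C            # work per cycle: 400 J

    print(W / Q_H, 1 - T_C / T_H)  # both 0.4: efficiency = 1 - T_C/T_H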

The Reversed Carnot cycle


The Carnot heat-engine cycle described is a totally reversible cycle. Therefore, all the processes that comprise it can be reversed, in which case it becomes the Carnot refrigeration cycle. This time, the cycle remains exactly the same, except that the directions of any heat and work interactions are reversed: heat is absorbed from the low-temperature reservoir, heat is rejected to a high-temperature reservoir, and a work input is required to accomplish all this. The P-V diagram of the reversed Carnot cycle is the same as for the Carnot cycle, except that the directions of the processes are reversed.[1]


Carnot's theorem
It can be seen from the above diagram that for any cycle operating between temperatures T_H and T_C, none can exceed the efficiency of a Carnot cycle. Carnot's theorem is a formal statement of this fact: No engine operating between two heat reservoirs can be more efficient than a Carnot engine operating between those same reservoirs. Thus, Equation 3 gives the maximum efficiency possible for any engine using the corresponding temperatures. A corollary to Carnot's theorem states that: All reversible engines operating between the same heat reservoirs are equally efficient.

A real engine (left) compared to the Carnot cycle (right). The entropy of a real material changes with temperature. This change is indicated by the curve on a T-S diagram. For this figure, the curve indicates a vapor-liquid equilibrium (see Rankine cycle). Irreversible systems and losses of heat (for example, due to friction) prevent the ideal from taking place at every step.

Rearranging the right side of the equation gives what may be a more easily understood form of the equation, namely that the theoretical maximum efficiency of a heat engine equals the difference in temperature between the hot and cold reservoir divided by the absolute temperature of the hot reservoir. To find the absolute temperature in kelvins, add 273.15 degrees to the Celsius temperature. Looking at this formula, an interesting fact becomes apparent: lowering the temperature of the cold reservoir will have more effect on the ceiling efficiency of a heat engine than raising the temperature of the hot reservoir by the same amount. In the real world, this may be difficult to achieve since the cold reservoir is often an existing ambient temperature.

In other words, maximum efficiency is achieved if and only if no new entropy is created in the cycle. Otherwise, since entropy is a state function, the required dumping of heat into the environment to dispose of excess entropy leads to a reduction in efficiency. So Equation 3 gives the efficiency of any reversible heat engine.

In mesoscopic heat engines, work per cycle of operation fluctuates due to thermal noise. For the case when work and heat fluctuations are counted, there is an exact equality that relates the average of exponents of work performed by any heat engine to the heat transfer from the hotter heat bath. This relation transforms Carnot's inequality into an exact equality that applies to an arbitrary heat engine coupled to two heat reservoirs and operating at an arbitrary rate.

Efficiency of real heat engines


See also: Heat engine efficiency and other performance criteria

Carnot realized that in reality it is not possible to build a thermodynamically reversible engine, so real heat engines are less efficient than indicated by Equation 3. In addition, real engines that operate along this cycle are rare. Nevertheless, Equation 3 is extremely useful for determining the maximum efficiency that could ever be expected for a given set of thermal reservoirs.

Although Carnot's cycle is an idealisation, the expression of Carnot efficiency is still useful. Consider the average temperatures

⟨T_H⟩ = (1/ΔS) ∫ T dS (over the heat-input portion of the cycle) and ⟨T_C⟩ = (1/ΔS) ∫ T dS (over the heat-output portion)

at which heat is input and output, respectively. Replace T_H and T_C in Equation 3 by ⟨T_H⟩ and ⟨T_C⟩ respectively.

For the Carnot cycle, or its equivalent, ⟨T_H⟩ is the highest temperature available and ⟨T_C⟩ the lowest. For other less efficient cycles, ⟨T_H⟩ will be lower than T_H, and ⟨T_C⟩ will be higher than T_C. This can help illustrate, for example, why a reheater or a regenerator can improve the thermal efficiency of steam power plants, and why the thermal efficiency of combined-cycle power plants (which incorporate gas turbines operating at even higher temperatures) exceeds that of conventional steam plants.


References
[1] Çengel, Yunus A., and Michael A. Boles. "6-7." Thermodynamics: An Engineering Approach. 7th ed. New York: McGraw-Hill, 2011. 299. Print.

Carnot, Sadi, Reflections on the Motive Power of Fire
Feynman, Richard P.; Leighton, Robert B.; Sands, Matthew (1963). The Feynman Lectures on Physics. Addison-Wesley Publishing Company. pp. 44-4f. ISBN 0-201-02116-1.
Halliday, David; Resnick, Robert (1978). Physics (3rd ed.). John Wiley & Sons. pp. 541–548. ISBN 0-471-02456-2.
Kittel, Charles; Kroemer, Herbert (1980). Thermal Physics (2nd ed.). W. H. Freeman Company. ISBN 0-7167-1088-9.
Kostic, M., Revisiting The Second Law of Energy Degradation and Entropy Generation: From Sadi Carnot's Ingenious Reasoning to Holistic Generalization. AIP Conf. Proc. 1411, pp. 327–350; doi: http://dx.doi.org/10.1063/1.3665247. American Institute of Physics, 2011. ISBN 978-0-7354-0985-9. Abstract at: (http://adsabs.harvard.edu/abs/2011AIPC.1411..327K). Full article (24 pages) (http://scitation.aip.org/getpdf/servlet/GetPDFServlet?filetype=pdf&id=APCPCS001411000001000327000001&idtype=cvips&doi=10.1063/1.3665247&prog=normal&bypassSSO=1), also at (http://www.kostic.niu.edu/2ndLaw/Revisiting The Second Law of Energy Degradation and Entropy Generation - From Carnot to Holistic Generalization-4.pdf).

External links
Hyperphysics (http://hyperphysics.phy-astr.gsu.edu/hbase/thermo/carnot.html) article on the Carnot cycle.
Interactive Java applet (http://galileoandeinstein.physics.virginia.edu/more_stuff/flashlets/carnot.htm) showing behavior of a Carnot engine.


Chapter 11. Philosophy


Heat death paradox
The heat death paradox, also known as Clausius' paradox and thermodynamic paradox, is a reductio ad absurdum argument that uses thermodynamics to show the impossibility of an infinitely old universe. Assuming that the universe is eternal, a question arises: How is it that thermodynamic equilibrium has not already been achieved?

Any hot object transfers heat to its cooler surroundings, until everything is at the same temperature. For two objects at the same temperature as much heat flows from one body as flows from the other, and the net effect is no change. If the universe were infinitely old, there must have been enough time for the stars to cool and warm their surroundings. Everywhere should therefore be at the same temperature and there should either be no stars, or everything should be as hot as stars. Since there are stars and the universe is not in thermal equilibrium it cannot be infinitely old.

The paradox does not arise in Big Bang, steady state or chaotic inflationary cosmologies. In Big Bang cosmology, the universe is not old enough to have reached equilibrium. Steady state and chaotic inflation escape the paradox by expanding.[citation needed] Radiation is continually being red-shifted by the expansion, causing background cooling. Thus, they balance entropy production, and are in eternal equilibrium.



Loschmidt's paradox
Loschmidt's paradox, first published by Sir William Thomson, 1st Baron Kelvin, in 1874,[1] also known as the reversibility paradox, irreversibility paradox or German: Umkehreinwand, is the objection that it should not be possible to deduce an irreversible process from time-symmetric dynamics. This puts the time reversal symmetry of (almost) all known low-level fundamental physical processes at odds with any attempt to infer from them the second law of thermodynamics which describes the behaviour of macroscopic systems. Both of these are well-accepted principles in physics, with sound observational and theoretical support, yet they seem to be in conflict; hence the paradox.

Johann Loschmidt's criticism was provoked by the H-theorem of Boltzmann, which was an attempt to explain using kinetic theory the increase of entropy in an ideal gas from a non-equilibrium state, when the molecules of the gas are allowed to collide. In 1876, Loschmidt pointed out that if there is a motion of a system from time t0 to time t1 to time t2 that leads to a steady decrease of H (increase of entropy) with time, then there is another allowed state of motion of the system at t1, found by reversing all the velocities, in which H must increase. This revealed that one of Boltzmann's key assumptions, molecular chaos, or, the Stosszahlansatz, that all particle velocities were completely uncorrelated, did not follow from Newtonian dynamics. One can assert that possible correlations are uninteresting, and therefore decide to ignore them; but if one does so, one has changed the conceptual system, injecting an element of time-asymmetry by that very action.

Reversible laws of motion cannot explain why we experience our world to be in such a comparatively low state of entropy at the moment (compared to the equilibrium entropy of universal heat death); and to have been at even lower entropy in the past.

Arrow of time
Any process that happens regularly in the forward direction of time but rarely or never in the opposite direction, such as entropy increasing in an isolated system, defines what physicists call an arrow of time in nature. This term only refers to an observation of an asymmetry in time, it is not meant to suggest an explanation for such asymmetries. Loschmidt's paradox is equivalent to the question of how it is possible that there could be a thermodynamic arrow of time given time-symmetric fundamental laws, since time-symmetry implies that for any process compatible with these fundamental laws, a reversed version that looked exactly like a film of the first process played backwards would be equally compatible with the same fundamental laws, and would even be equally probable if one were to pick the system's initial state randomly from the phase space of all possible states for that system. Although most of the arrows of time described by physicists are thought to be special cases of the thermodynamic arrow, there are a few that are believed to be unconnected, like the cosmological arrow of time based on the fact that the universe is expanding rather than contracting, and the fact that a few processes in particle physics actually violate time-symmetry, while they respect a related symmetry known as CPT symmetry. In the case of the cosmological arrow, most physicists believe that entropy would continue to increase even if the universe began to contract (although the physicist Thomas Gold once proposed a model in which the thermodynamic arrow would reverse in this phase). In the case of the violations of time-symmetry in particle physics, the situations in which they occur are rare and are only known to involve a few types of meson particles. Furthermore, due to CPT symmetry reversal of time direction is equivalent to renaming particles as antiparticles and vice versa. Therefore this cannot explain Loschmidt's paradox.


Dynamical systems
Current research in dynamical systems offers one possible mechanism for obtaining irreversibility from reversible systems. The central argument is based on the claim that the correct way to study the dynamics of macroscopic systems is to study the transfer operator corresponding to the microscopic equations of motion. It is then argued that the transfer operator is not unitary (i.e. is not reversible) but has eigenvalues whose magnitude is strictly less than one; these eigenvalues corresponding to decaying physical states. This approach is fraught with various difficulties; it works well for only a handful of exactly solvable models.[2] Abstract mathematical tools used in the study of dissipative systems include definitions of mixing, wandering sets, and ergodic theory in general.

Fluctuation theorem
One approach to handling Loschmidt's paradox is the fluctuation theorem, proved by Denis Evans and Debra Searles, which gives a numerical estimate of the probability that a system away from equilibrium will have a certain change in entropy over a certain amount of time. The theorem is proved with the exact time reversible dynamical equations of motion and the Axiom of Causality. The fluctuation theorem is proved using the fact that dynamics is time reversible. Quantitative predictions of this theorem have been confirmed in laboratory experiments at the Australian National University conducted by Edith M. Sevick et al. using optical tweezers apparatus. However, the fluctuation theorem assumes that the system is initially in a non-equilibrium state, so it can be argued that the theorem only verifies the time-asymmetry of the second law of thermodynamics based on an a priori assumption of time-asymmetric boundary conditions.

If no low-entropy boundary conditions in the past are assumed, the fluctuation theorem should give exactly the same predictions in the reverse time direction as it does in the forward direction, meaning that if you observe a system in a nonequilibrium state, you should predict that its entropy was more likely to have been higher at earlier times as well as later times. This prediction appears at odds with everyday experience in systems that are not closed, since if you film a typical nonequilibrium system and play the film in reverse, you typically see the entropy steadily decreasing rather than increasing. Thus we still have no explanation for the arrow of time that is defined by the observation that the fluctuation theorem gives correct predictions in the forward direction but not the backward direction, so the fundamental paradox remains unsolved.

Note, however, that if you were looking at an isolated system which had reached equilibrium long in the past, so that any departures from equilibrium were the result of random fluctuations, then the backwards prediction would be just as accurate as the forward one, because if you happen to see the system in a nonequilibrium state it is overwhelmingly likely that you are looking at the minimum-entropy point of the random fluctuation (if it were truly random, there's no reason to expect it to continue to drop to even lower values of entropy, or to expect it had dropped to even lower levels earlier), meaning that entropy was probably higher in both the past and the future of that state. So, the fact that the time-reversed version of the fluctuation theorem does not ordinarily give accurate predictions in the real world is reason to think that the nonequilibrium state of the universe at the present moment is not simply a result of a random fluctuation, and that there must be some other explanation such as the Big Bang starting the universe off in a low-entropy state (see below).

The Big Bang
Another way of dealing with Loschmidt's paradox is to see the second law as an expression of a set of boundary conditions, in which our universe's time coordinate has a low-entropy starting point: the Big Bang. From this point of view, the arrow of time is determined entirely by the direction that leads away from the Big Bang, and a hypothetical universe with a maximum-entropy Big Bang would have no arrow of time. The theory of cosmic inflation attempts to explain why the early universe had such low entropy.

References
[1] The kinetic theory of the dissipation of energy, as reprinted in S. Brush, ed., Kinetic Theory, vol. 2, Pergamon Press, 1966, pp. 176–187, as quoted in Jan von Plato, Creating Modern Probability, Cambridge Univ. Press, 1994, p. 85.
[2] Dean J. Driebe, Fully Chaotic Maps and Broken Time Symmetry, Kluwer Academic, 1999, ISBN 0-7923-5564-4.

J. Loschmidt, Sitzungsber. Kais. Akad. Wiss. Wien, Math. Naturwiss. Classe 73, 128–142 (1876)

External links
Reversible laws of motion and the arrow of time (http://www.nyu.edu/classes/tuckerman/stat.mech/lectures/lecture_3/node2.html) by Mark Tuckerman
A toy system with time-reversible discrete dynamics showing entropy increase (http://www.scientificblogging.com/hammock_physicist/fibonacci_chaos_and_times_arrow)

Article Sources and Contributors

282

Article Sources and Contributors


Classical Thermodynamics Source: http://en.wikipedia.org/w/index.php?oldid=586019703 Contributors: - ), 10metreh, 209.9.201.xxx, 213.253.39.xxx, 24.93.53.xxx, A Raider Like Indiana, A13ean, A666666, APH, Abb3w, Abqwildcat, Acroterion, Addihockey10, Adwaele, Agro r, Ahoerstemeier, Aim Here, Aitias, Alansohn, Albert.white, Aleksa Lukic, Alyblaith, Ancheta Wis, Andre Engels, Andreas.Persson, Anonymous Dissident, Anonymous editor, Anshul173, Antandrus, Antonio Lopez, Antonrojo, Anubis1975, Appieters, AriasFco, Arj, Arjun024, Arthur Rubin, Asaydjari, Atlant, Aua, Austin Maxwell, Azuris, Baatarchuluun, Babbage, Balajits93, Banus, Barticus88, Bduke, BeRo999, Beans098, Beetstra, Ben Ben, Ben Moore, Bensaccount, Beyond My Ken, Biff Laserfire, Bill Nye the wheelin' guy, Blodslav, Bo Jacoby, Bob, Bob K31416, Bobet, Bobo192, Bomac, Bos7, BozMo, Brews ohare, Brian the Editor, BrianGregory86, Brien Clark, Bryan Derksen, BryanG, Bubba58, C J Cowie, CA387, CALR, CDN99, CRGreathouse, Cableman1112, Calabe1992, Calmer Waters, CambridgeBayWeather, Capricorn42, CaptinJohn, Carcharoth, Casey56, Casper2k3, CatastrophicToad, CatherineMunro, CathySc, Cbdorsett, Cdc, Celtis123, CheddarMan, ChemGardener, Chester Markel, Chjoaygame, Chrisban0314, Chronic21, Coelacan, Complexica, Conversion script, Correogsk, Cortonin, Coverman6, CultureShock582, Curps, CyrilB, Czforest, D.H, DARTH SIDIOUS 2, DBigXray, DEMOLISHOR, DVD R W, DVdm, Daen, Dakkagon, Damorbel, Daniel5127, Dank, Danny, Dar-Ape, DavidLeighEllis, Dbeardsl, Ddsmartie, DeadEyeArrow, Deadmau8****, Deathcrap, Defladamouse, Denello, DennisIsMe, DesertAngel, Dgrant, Dhollm, Diberri, Djr32, Dmr2, Docboat, Domesticenginerd, Donner60, DougsTech, DrTorstenHenning, Dreadstar, Duciswrong1234, Duncanpark, Dycedarg, Dyolf Knip, Dzordzm, Eadric, Edsanville, Eeekster, Eff John Wayne, Ehn, Ejw50, El C, Enigmaman, Enormousdude, Enviroboy, Eric Forste, EricWesBrown, EugeneZelenko, EvelinaB, Falcon8765, Falcorian, Faulcon DeLacy, Fedor Babkin, Femto, Fireballxyz, Flewis, Flyer22, Foobar, FrankLambert, Frankie1969, Freak in the bunnysuit, Fredde 99, Freedumb, Frokor, Funandtrvl, Funnybunny, Furious.baz, Fvw, Galor612, Galorr, Gary King, Gatewayofintrigue, Gene Nygaard, Georgegeorge127, Gerry Ashton, Giftlite, Gilliam, Gioto, Glane23, Glenn, GoingBatty, Graham87, GrapeSmuckers, Grj23, Gtxfrance, H Padleckas, Hadal, Hagiographer, Haham hanuka, Haiti333, Hamiltondaniel, Hammer1980, Happysailor, Harlem Baker Hughes, Hayabusa future, Headbomb, Helix84, Helptry, Heron, Hhhippo, Hknaik1307, Hmains, Hoax user, Hollycliff, IW.HG, Ian.thomson, Icairns, IncidentalPoint, Indon, Ivy mike, Ixfalia, JNW, JabberWok, Jackfork, Jagged 85, James086, Jason Patton, Jbergste, Jclemens, Jdpipe, Jedimike, Jeff Relf, JerrySteal, Jezhotwells, Jheald, Jianni, Jim1138, Jklin, Jnyanydts, Joeljoeljoel12345, John of Lancaster, JohnnoShadbolt, JoseREMY, Joshmt, Jpk, Jrtayloriv, Jtir, Jung dalglish, Jusdafax, Jwanders, Jwulsin, KGasso, Kablammo, Karol Langner, Katalaveno, Kbrose, Kdruhl, Keta, Kku, Klemen Kocjancic, Knakts, Krenair, Kukini, Kzollman, Lagrangian, Landofthedead2, LaosLos, LaurentRDC, LeadSongDog, Leafyplant, Lee J Haywood, Levis ken, Lfh, Libb Thims, Lidnariq, Ligulem, LilHelpa, LittleOldMe old, LizardJr8, Llh, Logger9, Looxix, Loren.wilton, Lottamiata, Louis Labrche, Ludatha, Luk, Lumos3, Luna Santin, Lyctc, Lynxara, MER-C, ML5, MPerel, MaNeMeBasat, MacRusgail, Macedonian, MagDude101, Magog the Ogre, Malo, Mandarax, Mariano Blasi, MarsRover, Masudr, Materialscientist, Matewis1, Matthew Fennell, 
Mattmaccourt, Maurice Carbonaro, Mausy5043, Mayur, McVities, Meisam.fa, Melesse, Menchi, Mermaid from the Baltic Sea, Metricopolus, Miaow Miaow, Michael Devore, Michael Hardy, Mido, Miguel, Mike Rosoft, MikeEagling, Miketwardos, Miterdale, Miyangoo, Mjmcb1, Mls1492, Moink, Monedula, Monn0016, Moocowisi, Morri028, Ms2ger, Mshonle, MuTau, MulderX, Mxn, Mygerardromance, Myleneo, N12345n, NAHID, NAshbery, Nag 08, Nakon, Namasi, Nascar90210, Necatikaval, Negrulio, Neutiquam, Nick, Nny12345, Nomi12892, Nonsuch, Notburnt, NuclearEnergy, NuclearWarfare, Nwbeeson, Obamafan70, Ocee, Okedem, OlEnglish, Olivier, Omicronpersei8, Omnipaedista, Oneileri, Orphan Wiki, Orthologist, PAR, PTJoshua, Pak umrfrq, Paradoctor, Paranoidhuman, Patrosnoopy, Pavel Vozenilek, Pax:Vobiscum, Pearle, Peregrine981, Petr10, Peyre, Pflatau, Philip Trueman, Phmoreno, PhySusie, Phys, Physicist, Piast93, Pinethicket, Pip2andahalf, Pjvpjv, Pkeck, Plek, Pmetzger, Pmronchi, Poccil, Power.corrupts, Prashanthns, Psyche825, Quadalpha, Quadpus, Qxz, R'n'B, R3m0t, Racerx11, RadicalBender, Ram-Man, Raul654, Ravichandar84, Razorflame, Reatlas, Reddi, Rejnej, Rettetast, RexNL, Rhinestone K, Richard001, Rifleman 82, Rjwilmsi, Roadrunner, Rogper, Rumpelstiltskin223, Ruslik0, SMC, Sadi Carnot, Sam Hocevar, Sango123, Sankalpdravid, Saros136, Sasquatch, SchfiftyThree, Schmei, Scog, Scohoust, Seherr, Seth Ilys, Shannon1, Shirulashem, Shoeofdeath, Sholto Maud, Shres58tha, Siddhant, Sidsawsome, Silly rabbit, Sillybilly, Sillygoosemo, Skizzik, Smack, Snapperman2, Sounny, SpeedyGonsales, Spinkysam, Spitfire, Sploonie, Spud Gun, Spudcrazy, Srleffler, Stannered, Stedder, Stephenb, Stokerm, Sundareshan, Sundaryourfriend, Suresh 5, SvNH, Sympleko, THEN WHO WAS PHONE?, TVC 15, TakuyaMurata, Tantalate, Taroaldo, Tasc, That Guy, From That Show!, Thatguyflint, The Rambling Man, The Thing That Should Not Be, The Troll lolololololololol, The1physicist, The_ansible, Thebestofall007, Thecurran, Therealmilton, Thermo771, Thermodynoman, Thomas85127, ThorinMuglindir, Tim Starling, Titoxd, Tolly4bolly, Tomasz Prochownik, Tpot2688, Traxs7, Twested, Tylerni7, UDScott, Ugog Nizdast, Unbitwise, Uncle Dick, Uncle G, User A1, Vanished user fois8fhow3iqf9hsrlgkjw4tus, Vanka5, Victor Gijsbers, Vinodhchennu, Vsmith, Waggers, Waleswatcher, Wallyau, Wariner, Wavesmikey, Weleepoxypoo, Whbstare, Whoever101, Wiki alf, WikiCatalogEdit701, Wikipelli, WilfriedC, Wtmitchell, Wwoods, Xonein, Your Lord and Master, Yunshui, Zion bi as, Zntrip, , 1150 anonymous edits Statistical Thermodynamics Source: http://en.wikipedia.org/w/index.php?oldid=585025284 Contributors: APH, Abtract, Acmedogs, Agricola44, Alefbenedetti, Aleksas, Alison, Amviotd, Ancheta Wis, Andries, Anoko moonlight, Anterior1, Ap, Apuldram, Baz.77.243.99.32, Boardhead, Bogdangiusca, Brews ohare, Brianjd, Bryan Derksen, BryanD, Chandraveer, Charele, Charles Matthews, Charvest, Chris Howard, Chrisch, Christian75, Complexica, Conversion script, Cordell, DMTagatac, Damorbel, Dancter, Daniel5127, Davennmarr, David Woolley, Den fjttrade ankan, Derek Ross, Dhollm, DiceDiceBaby, Djr32, Djus, Doprendek, Drttm, Edgar181, Edkarpov, Edsanville, Elwikipedista, Eman, F=q(E+v^B), Fephisto, Forthommel, Frokor, G716, Gail, GangofOne, Gerrit C. 
Groenenboom, Giftlite, Gurch, HappyCamper, Headbomb, Hlfhjwlrdglsp, Hpubliclibrary, Ht686rg90, IBensone, Illia Connell, Isopropyl, IvanLanin, JKeck, JSquish, JZCL, JabberWok, Jheald, John Palkovic, Jorgenumata, Joyradost, Jyoshimi, Karl-Henner, Karol Langner, Kbrose, KeithFratus, Keulian, Kmarinas86, Koumz, Kyucasio, Kzollman, Lambiam, Landregn, Lantonov, LeadSongDog, Linas, Linuxlad, Locke9k, LokiClock, Looie496, Loupeter, Lyonspen, MK8, Mark viking, Mary blackwell, Mct mht, Melcombe, Mets501, Michael Hardy, Michael L. Kaufman, Michael assis, Miguel, Mikez, Mild Bill Hiccup, Mlys, Mogism, Monedula, Moondarkx, Mpatel, Mxn, Nanite, Netzwerkerin, Nnh, Op47, P99am, PAR, Patrick, Pavlovi, Peabeejay, Perelaar, Peterlin, PhnomPencil, Phudga, Phys, PhysPhD, Pol098, Politepunk, Pullister, Qwfp, Radagast83, RandomP, Rashhypothesis, Razimantv, Rjwilmsi, Robwf, RockMagnetist, Roshan220195, Ryanmcdaniel, SDC, SPat, Sadi Carnot, Samkung, Sanpaz, Sbharris, SchreiberBike, Scorwin, Sheliak, SimpsonDG, Skizzik, Spud Gun, Steve Omohundro, StewartMH, StradivariusTV, Template namespace initialisation script, Teply, That Guy, From That Show!, The Anome, The Cunctator, The Original Wildbear, The.orpheus, Theopolisme, Thingg, ThorinMuglindir, Tim Starling, TravisAF, Truthnlove, Tweenk, Van helsing, Vql, Wavelength, Weiguxp, Wickey-nl, Wikfr, Wiki me, WolfmanSF, Woohookitty, Xp54321, Yevgeny Kats, Yill577, ^musaz, , 194 anonymous edits Chemical Thermodynamics Source: http://en.wikipedia.org/w/index.php?oldid=580854272 Contributors: Alansohn, AndrewHowse, Arthur Rubin, Astrochemist, AtholM, Avitohol, Avoided, Barticus88, Beetstra, Bellerophon, Billyjeanisalive1995, Bwrs, Caltas, Citrusbowler, Conversion script, Count Iblis, D.H, Damorbel, Dhollm, Discospinster, EconoPhysicist, Ectomaniac, Elfer, ErrantX, Fuzzform, Galorr, Gdewilde, Giftlite, Gilderien, Hallenrm, Headbomb, Icairns, IncognitoErgoSum, Itub, J G Campbell, Jdpipe, Jeff3000, Jeffq, JzG, Ketiltrout, LukeSurl, Marek69, Materialscientist, Mn-imhotep, Nk, Notebooktheif, NuclearEnergy, PAR, Philip Trueman, Ronhjones, Russot1, Sadi Carnot, Sanguinity, SchreiberBike, Seb az86556, Selket, Srleffler, StaticVision, SteveLower, StradivariusTV, Stratocracy, The High Fin Sperm Whale, The Original Wildbear, The Thing That Should Not Be, Thermbal, Thisisborin9, Tnxman307, Unara, User A1, Vsmith, Wikipelli, Yuorme, , 94 anonymous edits Equilibrium Thermodynamics Source: http://en.wikipedia.org/w/index.php?oldid=550012277 Contributors: ChrisChiasson, Czforest, Dhollm, Karol Langner, Pjacobi, Quadalpha, Sadi Carnot, Vsmith, Wavesmikey, ZxxZxxZ, , 3 anonymous edits Non-equilibrium Thermodynamics Source: http://en.wikipedia.org/w/index.php?oldid=570408912 Contributors: 7methylguanosine, Adwaele, Aetheling, AgarwalSumeet, Bender235, Bernhlav, Boardhead, Burhan Salay, Campo246, Chjoaygame, Chris Howard, ChrisChiasson, Complexica, D4g0thur, Dhollm, DrProbability, Duncanpark, Ems2715, Eug373, Favonian, Gary Dee, GuidoGer, Gwernol, HappyInGeneral, JarahE, Jbergquist, Juchoy, Karol Langner, Kbrose, Lebon-anthierens, Linas, Loudubewe, Lseixas, Mandarax, Massieu, Mdd, Michael Hardy, Michielsen, Miguel, Mihaiam, Mike Rosoft, Miketwardos, Mn-imhotep, Mythealias, Nathan Johnson, Nerdseeksblonde, NonDucor, Oleg Alexandrov, Ozarfreo, PAR, Pfd1986, Phys, Physchim62, R'n'B, Rjwilmsi, Sadi Carnot, SimonP, Sinusoidal, Tamtamar, The Anome, Thermoworld, Tobias Bergemann, Toby Bartels, Tranh Nguyen, Unauthorised Immunophysicist, Waltpohl, Wavesmikey, WebDrake, William M. 
Connolley, X-men2011, Xdeh, Yardimsever, Yrogirg, Zhenqinli, , 80 anonymous edits Zeroth Source: http://en.wikipedia.org/w/index.php?oldid=583354467 Contributors: ACSE, Achoo5000, Aisteco, Alan Liefting, Alansohn, Amikake3, Anaxial, Anna512, Art and Muscle, Asbestos, Astrochemist, BDD, Bender235, Binadot, Bkell, Brandon5485, Chamal N, Chjoaygame, Christian75, Cinar, Cognitivecarbon, Cutler, Cybercobra, DanielJanzon, Davwillev, Derek Iv, Devper94, Dgyeah, Dhollm, Dicklyon, Dissident, Djr32, Duk, Eddy 1000, Equendil, Fibonacci, Fortesque666, Fresheneesz, Frokor, Funandtrvl, Giftlite, Ginkgo100, Gpetrov, Headbomb, Hernlund, Hyperquantization, InverseHypercube, Jason Quinn, Jeepien, Jehochman, Jojalozzo, K, Kareemjee, Karol Langner, Kbrose, Kdau, Kingpin13, Knowhow, Korech, Krouge, KurtLC, Kwyjibear, Lambiam, Lear's Fool, Llywelyn, M1ss1ontomars2k4, Makecat, Marcika, Markorajendra, Marosszk, Mawfive, Maxair215, Meno25, Metagraph, Miaow Miaow, Michael Hardy, MigFP, Much noise, Mslimix, Nathan Johnson, Neptunius, NetRolller 3D, Nk, Nobleness of Mind, Ntmatter, Odie5533, PAR, Palica, Paul August, Pearle, Pfranson, Pjacobi, Psychokinetic, Pt, Puffin, Quadparty, Raul654, Reddi, Revent, Rgeldard, Richard001, Ring0, Robert Brockway, Rs2360, SCZenz, Sadi Carnot, Savarona1, Seth Ilys, Sheeson, Sholto Maud, Sokane, SoledadKabocha, Spinningspark, Splintercellguy, SubwayEater, Sun Creator, Sundareshan, Svick, Sawomir Biay, THEN WHO WAS PHONE?, Tcncv, The Anome, TheMadBaron, Theda, ThorinMuglindir, Tim Starling, Tombomp, Tresiden, Victor Gijsbers, Vinayak 1995, WLU, Wavesmikey, Wenli, Widefox, Wik, Wikijens, Wikipedialuva, Wikipelli, Wrs1864, Yurik, Zebas, Zmicier P., 155 anonymous edits First Source: http://en.wikipedia.org/w/index.php?oldid=586752699 Contributors: 2T, 478jjjz, 4C, 912doctorwho, A. di M., Abecedare, Acroterion, Adamaja456, Adwaele, Aido2002, Akmunna, Alchemice, Anbu121, Arcandam, Arthena, Askeyca, AstroChemist, Astrochemist, AtomicDragon, BF6-NJITWILL, Belinrahs, Bgwhite, BirdKr, Blurpeace, Bongwarrior, BorgQueen, Calaschysm, Charlieb003, Chessmad1, Chjoaygame, ChrisChiasson, ChrisNoe, Christian75, Cinar, Cloudjunkie, Coldestgecko, CombatCraig, Complexica, Count Iblis, Cremepuff222, DVdm, Dave souza, DeadEyeArrow, Dhollm, Dirac66, Discospinster, Dr mindbender, ESkog, Easchiff, Echinoidea, EdJohnston, Equendil, Erodium, Fatimah M, Fish and karate, Flying Jazz, Fox Wilson, Fraggle81, Fresheneesz, GMTA, Geni, Giftlite, Glenn, Gonfer, Gracenotes, Gutsul, H falcon, Hbent, Hdante, Headbomb, Helix84, Heron, ISTB351, Icairns, Iridescent, Isrl.abel, Itub, IvanLanin, JDspeeder1, JSquish, JWSchmidt, Jay-Sebastos, Jebba, Jheald, Jim1138, Jonathanfu, Jschnur, JuJube, K, KTC, Kareemjee, Karol Langner, Kazvorpal, Kbrose, Keegan, Keitam, Koen Van de moortel, LeBofSportif, LeaveSleaves, Lh389, LilHelpa, Littlecanargie, Llewkcalbyram, Logan, Loodog, Luna Santin, MER-C, Makeemlighter, Mandarax, Marosszk, Martinvl, Marx Gomes, Materialscientist, Matthew Fennell, McDScott, McGeddon, Mcavoys, Mejor Los Indios, Meno25, MichaelHenley, MigFP, Mikiemike, Mild Bill Hiccup, Mn-imhotep, Mogism, Momo

Article Sources and Contributors


san, Moravveji, NZLS11, Nanog, Narssarssuaq, Nathan Johnson, NawlinWiki, Nescio, Netheril96, NewYorkDreams, Ninja-bunny.webs, Ninjamen1234, Nonsuch, Novusuna, NuclearWarfare, Orzetto, PAR, PV=nRT, Pazouzou, Perpetual motion machine, Pflatau, Pgagge, Pharaoh of the Wizards, Philx, PhySusie, Pifvyubjwm, Pjacobi, Pmmanley, Popx3rocks, Qwerty Binary, Reddi, Rex the first, Rhetth, Rishabhgoel, Rs2360, Sadads, Sadi Carnot, Sag010793, Sbyrnes321, Shally87, Shanes, Sharkb, Skk146, Smalljim, SmthManly, Spicemix, Srleffler, Stafo86, Stephenb, TCGrenfell, Tarquin, Tbhotch, Tehfu, The Gnome, The wub, TheBusiness, Therealrockstar007, Thine Antique Pen, TimVickers, Tobby72, TwistOfCain, Usien6, Valthalas, Venny85, Vincenzo Malvestuto, Vojta2, Vsmith, WadeSimMiser, Waleswatcher, Wavesmikey, Webclient101, Wertuose, Widr, Wiki13, Wikidudeman, Wikipelli, Wikiwind, Wikster72, Worm That Turned, XJaM, ZacBowling, Zealander, Zenibus, Zidane tribal, Zmicier P., 409 anonymous edits Second Source: http://en.wikipedia.org/w/index.php?oldid=584405568 Contributors: 2over0, ABF, AC+79 3888, AP Shinobi, AThing, Abb3w, Acroterion, Adwaele, Aeternium, Aeusoes1, Ahoerstemeier, Aircorn, Akamad, Alai, Ale jrb, Alfredwongpuhk, AnandaDaldal, Andyparkins, Anonymous Dissident, Antandrus, Antixt, AppleJuggler, Aquillion, Arbitrarily0, Arjun S Ariyil, Arjun r acharya, ArnoldReinhold, Arthena, Arthur Rubin, Ashenai, Ashley Y, Aspro89, Astrobradley, AugPi, Aunt Entropy, AwesomeMachine, BF6-NJITWILL, Barry Fruitman, Bbanerje, Bduke, Ben Rogers, Bluecheese333, Bmord, Bobby1011, Bobo192, Bonaparte, BorgQueen, Brianhe, Caltas, CambridgeBayWeather, Canadian-Bacon, CarbonCopy, Cdh1001, ChXu, Chemeditor, Chinasaur, Chjoaygame, Chris the speller, ChrisChiasson, ChrisNoe, ChrisO, Christopher Thomas, Cirejcon, ComaVN, Complexica, Count Iblis, Cpcjr, Crash Underride, Crio, Crosbiesmith, Crowsnest, Curps, Cutler, Cyp, CzarB, DJ Clayworth, DMZ, DVD R W, Da500063, Daa89563, Daarznieks, Dan Gluck, DanielCD, Dantecubed, Dav4is, Dave souza, David Shear, David spector, Dawn Bard, Denevans, Desp, Dhollm, Dieseldrinker, Dna-webmaster, Doanison, Dominic, Dreadstar, Duncharris, EPM, EdJohnston, Edcolins, Egbertus, Emilio Juanatey, Ems2715, Enormousdude, Enviroboy, Eroica, Euyyn, Evanh2008, Evil Monkey, Eyu100, Favonian, FeloniousMonk, Fluent aphasia, Flying Jazz, Fredrik, Fresheneesz, Frisettes, Funhistory, FunnyMan3595, GSlicer, Gaius Cornelius, Gandalf61, Gatewayofintrigue, George100, Gerbrant, Giftlite, Gilliam, Gonfer, Grstain, Gkhan, H-J-Niemann, Hadal, Hairy Dude, Hartz, Hbent, Headbomb, Helix84, Henning Makholm, Heqwm, Hjlim, Howzeman, Hubbardaie, Hweimer, Ian.thomson, IceKarma, Ignignot, Ilmari Karonen, Ilyanep, Infinity0, Ivan Bajlo, Ixfd64, J.delanoy, JMS Old Al, JSquish, Jab843, Jacob2718, Jason One, JayEsJay, Jchammel, Jdpipe, Jebba, Jeffq, Jeronimo, Jewk, Jheald, Jim1138, Jim62sch, Jimbomonkey, Jittat, Jncraton, Jochen Burghardt, JodyB, Joeoettinger, John254, Johnstone, JorisvS, Josedanielc, Jossi, Jovianeye, Jtir, Jucati, K, KTC, Kaldari, Karn, Karol Langner, Kbdank71, Kbrose, Kenan82, Keta, KillerChihuahua, Klangenfurt, Kmarinas86, Knowledgeum, Koavf, Kommando797, Larryisgood, Laughitup2, Lbrewer42, LeBofSportif, Lea phys, LeaveSleaves, LeeMcLoughlin1975, LiDaobing, Lilmy13, LoStrangolatore, LonelyBeacon, Lorenzarius, Lsommerer, Lugia2453, MER-C, Madmardigan53, Maebmij, Magus732, Manishearth, Marc Girod, Materialscientist, Mbarbier, Mbweissman, McGeddon, Mdkssner, Meeples, Meno25, Mgiganteus1, Mgrierson, Miaow Miaow, Michael C Price, 
Michael H 34, Michael Hardy, Michaeladenner, Michaelbusch, Miftime, Miguel de Servet, Mikaduki, Mikiemike, Mineralogy, Mormequill, MrArt, MrOllie, Mre env, Musaran, N12345n, Naji Khaleel, Nakitu, Nanog, Narssarssuaq, Neptunius, Netheril96, Neutrality, Nicksola, Nk, Nonsuch, Nrsmith, Number 0, Oleg Alexandrov, Omegatron, PAR, Palinurus, Panzi, PaulLowrance, Pauli133, Pcarbonn, Peyre, Phys, Physical Chemist, PigFlu Oink, Pinethicket, Pjacobi, Popnose, Postdlf, Ppithermo, Profero, Prokaryotes, Ptbptb, Pterodactyloid, QTxVi4bEMRbrNqOorWBV, R'n'B, RWG00, Ragesoss, Ratiocinate, Ravikiran r, Rdsmith4, Reatlas, Reddi, Rhettballew, Rhobite, Rich Farmbrough, Rifleman 82, Ring0, Rjwilmsi, Rklawton, Roadrunner, Robinh, Rocketrod1960, Romanm, Rs2360, SCZenz, Sadi Carnot, Sam Hocevar, San Diablo, Sanyi4, Savarona1, ScienceGuy, Seqsea, Sholto Maud, Sietse Snel, Sin.pecado, Snoid Headly, Snoyes, Speaker to Lampposts, Spicemix, Spk ben, Srleffler, Stephenb, SteveCoast, Stikonas, Stirling Newberry, Stootoon, Subh83, Subversive.sound, Sun Creator, Tantalate, Tb, Tercer, Terse, The Anome, The Gnome, The-vegan-muser, Theresa knott, ThorinMuglindir, Tide rolls, Timc, Time traveller, Tisthammerw, Tls60, Tobby72, Tobias Bergemann, Trevor MacInnis, Tubbyspencer, Ugncreative Usergname, Vanished user qkqknjitkcse45u3, Verrai, Vh mby, WAS 4.250, Wafulz, Waleswatcher, Wavelength, Wavesmikey, WebDrake, Widefox, Widr, WikiPidi, WikiPuppies, Wikimol, Wisemove, Wkussmaul, Wndl42, Wpegden, Wtshymanski, XJaM, Xionbox, Yamamoto Ichiro, Yappy2bhere, Zchenyu, Zebas, Zenosparadox, Zmanish, Zmicier P., , 738 anonymous edits Third Source: http://en.wikipedia.org/w/index.php?oldid=585080293 Contributors: 2T, 8digits, Adwaele, Alan Holyday, Alchemist314, Andrewpmk, Ankitdwivedimi6, Astrochemist, BZegarski, Barticus88, Bender235, Bewporteous, CWenger, Canberra User, CardinalDan, Cesaranieto, Chjoaygame, Chris Capoccia, CityOfSilver, Cup o' Java, Cutler, D6, Dakkagon, Dchristle, Dhollm, Draxtreme, Duk, Edward321, Einstein runner, Erik9, Fredrik, Gene Nygaard, Giftlite, Gogo Dodo, Grendelkhan, Guy Peters, Happysam92, Hb2007, Helix84, Jheald, Jim1138, K, Karol Langner, Kbrose, Keenan Pepper, Khattab01, LAX, MER-C, Majorclanger, Malosse, Marosszk, Masaki K, Mbweissman, McGeddon, Miaow Miaow, MigFP, Mygerardromance, Natox, Ned Morrell, Nexia asx, Nitcho1as12, Nobleness of Mind, Okedem, Oxymoron83, PAR, Pjacobi, Reddi, Richard75, Ring0, Rob Hooft, Rrburke, Rs2360, SCZenz, Sadi Carnot, Salsb, Sandycx, Sanyi4, Sball004, Sbyrnes321, SeventyThree, Shuipzv3, Simeondahl, Smjg, Spicemix, Spitfire, Ssault, The Anome, Time traveller, Tom harrison, Waleswatcher, Wavesmikey, Widefox, WikiLaurent, Wikijens, Wolfrock, XJaM, Zebas, 122 anonymous edits History of thermodynamics Source: http://en.wikipedia.org/w/index.php?oldid=570036594 Contributors: A.R., AdultSwim, Ajh16, Ariconte, Arkuat, Barticus88, Belief action, Benbest, Bfong2828, CambridgeBayWeather, Carcharoth, Cardamon, Collabi, Cutler, D.H, Dhollm, Djr32, Dougweller, EdJogg, Eeekster, ElinorD, Enviroboy, Eric Forste, FilipeS, Fortdj33, Gaius Cornelius, Gandalf61, Geraldo61, Greg L, Gtxfrance, I Love Pi, Inwind, J04n, J8079s, Jagged 85, JorisvS, Jtir, JzG, Karol Langner, Ligulem, Lottamiata, Ludi Romani, Lumos3, M karzarj, MCCRogers, Marianika, Marie Poise, Mion, Moe Epsilon, Myasuda, Natox, PAR, Peterlewis, Pilotguy, Radagast3, Ragesoss, Rayc, Riick, Rjwilmsi, Sadi Carnot, Saeed.Veradi, Special-T, Srleffler, Syncategoremata, TimBentley, Tomasz Prochownik, Tropylium, Wikkidd, 43 anonymous 
edits An Experimental Enquiry Concerning the Source of the Heat which is Excited by Friction Source: http://en.wikipedia.org/w/index.php?oldid=575282958 Contributors: Airplaneman, Bloodshedder, Charles Matthews, Cutler, Dhollm, Dominus, Gioto, Good Olfactory, GregorB, Guillaume2303, Inwind, Itub, Jaraalbe, Jdpipe, Kdruhl, Ligulem, Localzuk, MakeRocketGoNow, Mdd, Mrmrbeaniepiece, PC78, Peterlewis, Qero, Saehry, Tassedethe, Tim!, Vclaw, Wijnand, Wizard191, 2 anonymous edits Control volume Source: http://en.wikipedia.org/w/index.php?oldid=543923246 Contributors: Bjs1234, Cacadril, Chris the speller, Crowsnest, Dhollm, Dolphin51, FelixTheCat85, HydrogenSu, Iwfyita, Jdpipe, Kbdank71, Mairi, Matador, Mdd, Plober, RJFJR, Rich Farmbrough, Sadi Carnot, Salih, Siddhant, Silverfish, Wanstr, Wolfram.Tungsten, 12 anonymous edits Ideal gas Source: http://en.wikipedia.org/w/index.php?oldid=582682289 Contributors: -kkm, 2over0, Aaron Schulz, Adamtester, Aisteco, Aleksas, Altmany, Avathar, Bamos, BeaumontTaz, Ben-Zin, Bender235, Bensaccount, Bigbill2303, Bigjoestalin, BillC, Bo Jacoby, Bongwarrior, BoomerAB, Brad7777, Brianjd, Brona, COGDEN, CWenger, CambridgeBayWeather, CarlosPati o, Cdc, Chewie, Chris the speller, ChrisChiasson, Ckk253, CommonsDelinker, Complexica, Corpeter, CrniBombarder!!!, D. F. Schmidt, DSRH, Davidr222, Davidtwu, Dhollm, E0steven, EconoPhysicist, Edsanville, Enochlau, Fieldday-sunday, FlorianMarquardt, GTBacchus, Gene Nygaard, Giftlite, GregRM, GregorB, H Padleckas, H2g2bob, Hairy Dude, Hankwang, Headbomb, Herbee, JabberWok, Joelholdsworth, Just plain Bill, JustAGal, Karol Langner, Katanada, Keenan Pepper, Kevinjasm, Khunglongcon, Kipoc, Kizor, Kmarinas86, Kpengboy, Kraton, Lambiam, Landarski, Leftynm, Looxix, Louis Labrche, Malinaccier, Man It's So Loud In Here, Margospl, Mean Free Path, Michael Hardy, Michael Ross, Mike666234, Mild Bill Hiccup, Moosesheppy, Mrericsully, Mythealias, Nagoltastic, Nanite, Nickkid5, Nigelj, Nikai, Nk, Nonagonal Spider, Nosbig, Nosferattr, OlexiyO, PAR, Paranoidhuman, Patrick, Paul D. Anderson, Peterlin, Power.corrupts, PranksterTurtle, Prowikipedians, Qmantoast, Razimantv, Rebroad, Rick lightburn, Riick, SD5, Sadi Carnot, Schneelocke, SimonP, Slugger, Soilguy3, SpookyMulder, Stan J Klimas, TBadger, Tarotcards, Thekingofspain, Theosch, Tide rolls, Tsemii, Tsi43318, Turbojet, Uopchem25asdf, User A1, ViceroyInterus, Vql, Whpq, Wolfkeeper, Wshun, YDelta, , 198 anonymous edits Real gas Source: http://en.wikipedia.org/w/index.php?oldid=584361265 Contributors: 84user, Anakata, Avono, Azylber, Boccobrock, Brianjd, Charles Matthews, Closedmouth, Crowsnest, Dcirovic, Dhollm, Download, Fayenatic london, Giftlite, Gogo Dodo, Gumok, Headbomb, Heero Kirashami, Ideal gas equation, J-puppy, Jauhienij, Jianhui67, Jost Riedel, Jwchong, Katanada, LeaveSleaves, Logan, Marco zannotti, Marvin W. 
Hile, MinkeyBuddy, Mn-imhotep, Olaf, Omnipaedista, PAR, Pinethicket, Power.corrupts, Raoul NK, Rjwilmsi, Sarah george mesiha, Sonia, Stan J Klimas, StaticGull, Takuma-sa, Theosch, Tony1, UAwiki, Vanished user 39948282, Velella, Zl1corvette, Zrephel, 75 anonymous edits Isobaric process Source: http://en.wikipedia.org/w/index.php?oldid=578120604 Contributors: Anagogist, AugPi, Auntof6, CarrieVS, Carultch, Damouns, Dance-a-day, Dhollm, Discospinster, Duk, El Belga, Glenn, Gkhan, Hjlim, IkamusumeFan, Insanity Incarnate, JaGa, Jncraton, Karol Langner, Kbrose, Keenan Pepper, Lechatjaune, Loodog, Mike2vil, Orzetto, PV=nRT, Peterlin, Pflatau, Pinethicket, Plober, Pyther, Rgdboer, Sabate, Sanyi4, Simeon89, Wik, , 40 anonymous edits Isochoric process Source: http://en.wikipedia.org/w/index.php?oldid=576490716 Contributors: A.Z., ALittleSlow, ArneBab, AugPi, BenFrantzDale, CDN99, DanielNuyu, David Legrand, Dhollm, Duk, Freddyd945, Gene Nygaard, Ginsuloft, Glenn, IkamusumeFan, Ixfd64, JaGa, Jeffrd10, Karol Langner, Kbrose, Kerotan, Knuckles, Lara bran, LokiClock, Mahlerite, Mejor Los Indios, Nachoj, Nightkhaos, Ortho, PV=nRT, Peterlin, Plober, Pyther, Sanyi4, Shoessss, Skyezx, StuRat, Thi Nhi, Voltaire169, Wikijens, Ydw, 45 anonymous edits Isothermal process Source: http://en.wikipedia.org/w/index.php?oldid=580453073 Contributors: Adamrush, Akriasas, AnkurBargotra, Astrochemist, AugPi, ChrisGualtieri, Coffin, Cyan, David Shear, Dcoetzee, Dhollm, Duk, Dungodung, EoGuy, Forbes72, Gelo71, Glenn, HaeB, JCraw, JaGa, Jim1138, Jncraton, John254, Jwilson75503, Karol Langner, Kathovo, KrDa, LOL, Lambiam, Lechatjaune, Mcduff, Netheril96, OlexiyO, PV=nRT, Pedvi, Peterlin, Pflatau, Plober, Postglock, R'n'B, Roadrunner, Romeoracz, Rtanz, Sanyi4, Shpoffo, Thi Nhi, Tide rolls, Trueravenfan, Uopchem0251, Yuta Aoki, 123 anonymous edits Adiabatic process Source: http://en.wikipedia.org/w/index.php?oldid=585246286 Contributors: AdamW, Aka042, Alkonblanko, Andre Engels, Andycjp, AnnaFrance, Armasd, Artur adib, AstroHurricane001, AuburnPilot, AugPi, AxelBoldt, Balawd, Bannerts, Bauka91 91, Bcheah, BenFrantzDale, Bender235, BernardH, Bgwhite, Bobo192, Breeet, Bryan Derksen, C5st4wr6ch, CYD, Carultch, Chancemill, Chjoaygame, Choihei, ChrisHodgesUK, Ciphers, Clive.gregory, Conversion script, Count Iblis, DVdm, DacodaNelson, Damorbel, Dan100, Darkroll, Dauto, David Straight, Dbrunner, Denisarona, Deor, Destroyer130, Dhaluza, Dhollm, Dolphin51, Donarreiskoffer, Dr. Crash, Duk, E. 
Ripley, Ec5618, Ecomesh, EconoPhysicist, Edward, Eg-T2g, Emilio juanatey, Enochlau, Ettrig, Evand, Fsiler, Gene Nygaard, Gershwinrb, Giftlite, Giraffedata, Glenn, Grendelkhan, Gunnar Larsson, Heathmoor, Hgilbert, Hike395, Icarus, IceUnshattered, InverseHypercube, Ivan tambuk, JLewis98856, JeLuF, Joe Frickin Friday, Jschnur, Jtalledo, KL56-NJITWILL, Kaare, Karol Langner, Kbrose, Kevinbevin9, Kghose, Klemen Kocjancic, KostasG, Linas, Lindert, Loodog, Lynskyder, Magioladitis, MajorHazard, Masegado, Mat-C, Materialscientist, Mboverload, Mejor Los Indios, Mgiganteus1, Michael Hardy, Mikenorton, Mlouns, Moink, Molly-in-md, Mscript, NawlinWiki, NevemTeve, NewEnglandYankee, Oxymoron83, PAR, Palica, Pcmproducts, Peterlin, Pflatau, Pgriffin, Phuzion, Phys, Plenumchamber, Plober, Pvnuffel, Rich Farmbrough, Rjwilmsi, Rm w a vu, Roadrunner, Rogerwillismillsii, Royourboat, Rracecarr, Samgo27, Sanyi4, Sapphirus, Sarasknight, Sbharris, SeventyThree, Sheeson, Shrew, Sirsparksalot, Slashme, Smokefoot, Stan J Klimas, Stannered, Stassats, Steinsky, Sverdrup, Synchronism, TAnthony, TCarey, Tac2z, Tbhotch, The Geologist, The Gnome, Thljcl,

283

Article Sources and Contributors


ThorinMuglindir, Tim Starling, Tohd8BohaithuGh1, Tony1, Toyalima, Triku, Tschwenn, User A1, Venny85, Vsmith, W.F.Galway, Warrenrob50, Wereon, Xtreme219, YumOooze, 268 anonymous edits Polytropic process Source: http://en.wikipedia.org/w/index.php?oldid=583937649 Contributors: Ac44ck, Alain r, AllHailZeppelin, AxelBoldt, CasualJJ, Cky2250, Dhollm, DoorsAjar, Glenn, Gogo Dodo, IkamusumeFan, JamesAM, JeffConrad, Jeffbadge, Jzana, Karol Langner, Lechatjaune, MTessier, Marcello Pas, Mihail Vasiliev, Mikiemike, NellieBly, Nick Mks, PV=nRT, Pegua, Plober, R'n'B, RN1970, SDC, Sanyi4, Tpholm, , 42 anonymous edits Introduction to entropy Source: http://en.wikipedia.org/w/index.php?oldid=574368807 Contributors: 16@r, Ac1201, Adam C C, Army1987, Art LaPella, Bduke, BigrTex, Brisvegas, Carcharoth, ConfuciusOrnis, Crowsnest, DBBabyboydavey, DanP4522874, Daniele Pugliesi, Danno uk, Dave souza, Davidm617617, Davwillev, Dhollm, Dolphin51, Dratman, Drozdyuk, Dylan Lake, Edward, EryZ, FilipeS, Fresheneesz, Gary King, Grafen, Gtxfrance, Headbomb, Hypergeek14, Ipatrol, Jheald, John254, JorisvS, K, Kbrose, Kirbytime, Kissnmakeup, Lazylaces, Light current, LilHelpa, Loom91, Michelino12, Microfrost, Nilock, PAR, PAS, Papparolf, Pharos, Plastikspork, Ray Eston Smith Jr, Retired username, Rodhullandemu, Sadi Carnot, Serendipodous, Sesshomaru, Sixtylarge2000, Sprayer faust, T.J. Crowder, TJKluegel, Tobias Bergemann, User24, Vrenator, Wayne Slam, Xyzzyplugh, Zaereth, Zain Ebrahim111, 79 anonymous edits Entropy Source: http://en.wikipedia.org/w/index.php?oldid=586547591 Contributors: 129.132.139.xxx, 12seda78, 478jjjz, 99of9, AThing, Aarghdvaark, Aaronbrick, Abjad, Abtract, Addihockey10, Adhanali, Adodge, Adwaele, Aetheling, Aintsemic, Ajaxkroon, Akhil 0950, AlecTaylor, Alpt, Alrasheedan, AltheaJ, Amircrypto, Andrewman327, AngelOfSadness, Anonymous Dissident, Anrnusna, Antoni Barau, Anville, Apostolos Margaritis, Arcfrk, Arcturus, Arjun r acharya, Art and Muscle, Arthena, Artur adib, Astavats, Astrobayes, Attilios, Atuin, AugPi, Aunt Entropy, Avjoska, Awaterl, Awickert, BJNartowt, BW95, Ballchef, Ballhausflip, Bargebum, BassBone, Bbanerje, Bduke, BenFrantzDale, Bender235, Benjah-bmm27, Betacommand, Bhny, Billgdiaz, BlckKnght, Blue3snail, Bo Jacoby, Bobby1011, Bobcorn123321, Bobo192, Bojack727, Bongwarrior, Boob12, Bookbuddi, Branxton, Brisvegas, Bryan Derksen, Bths83Cu87Aiu06, C1t1v151on, CYD, Calmer Waters, Camarks, Can't sleep, clown will eat me, Cantanchorus, Capcom1116, Captain panda, Carmichael, Causticorulos, Cbuckley, Chaiken, ChemGardener, Chenyu, Chester Markel, Chjoaygame, ChrisGriswold, ChrisGualtieri, Chrislewis.au, Christofurio, Chuayw2000, Ciacco, Cky2250, Cmbreuel, Cobaltcigs, Compdude47, Complexica, Connelly, Constatin666999, Conversion script, Count Iblis, Craig Pemberton, Crasshopper, Csdidier, Curatrice, Cuzkatzimhut, D.H, DARTH SIDIOUS 2, DGG, DJ Creature, DL5MDA, DLMcN, DVD R W, DVdm, Dartbanks, Darth Panda, DavRosen, Dave souza, David Edgar, David Shear, DavidCary, Ddon, DeadEyeArrow, Debu334, Debzer, Dftb, Dhollm, Dick Chu, Dirac66, Disdero, Djr32, Dlenmn, Doetoe, Dolphin51, Dougbateman, Dougluce, Dr. 
Ebola, Dr.K., Dragonlord Jack, Drat, Dreg743, Drestros power, Drpriver, Dtguelph, Dysprosia, E David Moyer, EdJohnston, Edgar181, Edkarpov, Edsanville, Edzevallos, Egg, El C, Eleassar777, ElectricRay, Electricmic, Electron9, Ellwyz, Emansf, Emote, EnSamulili, Engwar, Enormousdude, Epbr123, Eric Hawthorne, Esowteric, Everyking, Evil saltine, Favonian, FilipeS, Fimbulfamb, Foxj, FrankLambert, Freakofnurture, Fred t hamster, Fredrik, Frelke, Fresheneesz, FutureTrillionaire, G716, GP modernus, GT5162, Gail, Gaius Cornelius, Galor612, Galoubet, Garethb1961, Gary King, Gatewayofintrigue, Gene Nygaard, Geoff Plourde, Geoking66, Georgette2, Geschichte, Gianluigi, Giftlite, Giraffedata, Glen, Glockenklang1, Gmaster108, Gogo Dodo, Gonfer, Gracefool, Graeme Bartlett, Graham87, GregAsche, Gtxfrance, GuidoGer, Gunnar Larsson, Gurch, Gurudev23, H Padleckas, HEL, HGHSTROJAN, Hadal, Haeleth, Hagedis, Haham hanuka, Hai2410, Hamiltondaniel, Hanspi, HappyCamper, HappyVR, Happysailor, Hdt83, Headbomb, Henry Flower, Heoigi, Herbee, Heron, Hkyriazi, Hmains, Homestarmy, Ht686rg90, Hugozam, Humanoid, ILikeMIDI, IRP, IVAN3MAN, Ianml, Icairns, Imaginaryoctopus, Intgr, Intoronto1125, InverseHypercube, Inwind, Iris lorain, Itub, Ixtli, Izno, J'raxis, J.Wolfe@unsw.edu.au, J.delanoy, JHoffmueller, JMD, JRSpriggs, JSquish, Jab843, JabberWok, Jack sherrod, Jacobko, Jacobolus, Jani, Jdpipe, Jeffrey Mall, Jeffscott007, Jheald, Jiang, Jianhui67, Jim62sch, JimWae, Jitse Niesen, Jive Dadson, Jni, JohnBonham69, Jonathan48, Jorgenumata, Joriki, JorisvS, Josemald, Josterhage, Jschissel, Jschnur, Jsd, Juliancolton, Juro2351, Jwanders, K, Kafziel, Kahriman, Kaihsu, Kanags, Karol Langner, Katzmik, Kbh3rd, Kbrose, Keenan Pepper, KillerChihuahua, Kissnmakeup, Kjoonlee, Kmarinas86, Knowledge Seeker, KnowledgeOfSelf, Koen Van de moortel, Kpedersen1, Kurykh, Kwiki, LEBOLTZMANN2, Lakinekaki, Lambiam, Larryisgood, Lateg, Laurascudder, Lbertolotti, LeBofSportif, Lea phys, Leafyplant, Lee J Haywood, Lerdsuwa, Lg king, LidiaFourdraine, Light current, Ligulem, Linas, Linshukun, Locke9k, LokiClock, Lone Isle, Loom91, Looxix, Lordloihi, Lotje, LoveMonkey, Lseixas, Lumos3, M.O.X, MECU, MER-C, Macedonian, Macrakis, Macvienna, Mad540trix, Maghemite, Magioladitis, MagnaMopus, Mani1, Marathoner, Marechal Ney, MarnetteD, Marskell, MartinSpacek, Martinvl, Massieu, Master Jay, Materialscientist, Maurice Carbonaro, Mausy5043, Mbeychok, Mbweissman, Mdd, Mean Free Path, Melaen, Memming, Mennato, Metamagician3000, Mgiganteus1, Michael C Price, Michael Hardy, Miguel de Servet, Mike Christie, MilesTerrex, Mishka.medvezhonok, Mjs, Mkratz, Mogism, Moonriddengirl, MorphismOfDoom, Mouse is back, Mouvement, Moveaway00, Ms2ger, Mschlindwein, Munozdj, Music Sorter, Mwilso24, Mxn, NCDane, NE Ent, Naddy, Nakon, Natasha2006, Natron25, NawlinWiki, Nbarth, Necron909, Neligterink, Netheril96, Nihiltres, Nikhil Sanjay Bapat, Nobleness of Mind, Nonsuch, Nora lives, NotableException, Numbo3, Nwbeeson, Obradovic Goran, Oceans and oceans, Oenus, Oleg Alexandrov, Olivier, Omnichic82, Omnipaedista, Omnist, Oneismany, Opabinia regalis, Ost316, Otheus, PAR, Pachyphytum, Paganpan, Paisley, Parodi, Pasquale.Carelli, Passwordwas1234, Patrickwooldridge, Paul August, Paul venter, Pbroks13, Pedrose, Pentasyllabic, Peter Chastain, Peterlin, Phidus, Phil Boswell, Philip Trueman, Philip2357, PhySusie, Physchim62, Physical Chemist, Physicistjedi, Physis, Piano non troppo, Pinethicket, Piolinfax, Pipifax, Pjacobi, Pkeck, Plastikspork, Pmanderson, Prasadmalladi, Private Pilot, Probeb217, Pt, 
QTCaptain, QuantumMatt101, Quantumechanic, Quidproquo2004, RJHall, Radon210, Raffamaiden, Ramaksoud2000, Random Dude Who Is Cool, Ray Eston Smith Jr, Rayc, Raymondwinn, Reallybored999, Reddi, Reedy, Regancy42, RekishiEJ, ResearchRave, Retired username, Riana, Rich Farmbrough, Rifleman 82, Rising*From*Ashes, Riskdoc, Rize Above, Rjg83, Rjwilmsi, Rkswb, Roadrunner, RobertG, RockMagnetist, Ronburk, Rracecarr, Ruddy9hell, Ruudje, SAE1962, Saad bin zubair, Sadi Carnot, Sajjadha, Sam Staton, Sandwiches, SchreiberBike, Schurasbrat, Schwijker, Sciyoshi, Seaphoto, Serketan, Serpent's Choice, Sesshomaru, SeventyThree, Sfzh, Shanel, Shawn in Montreal, Shotgunlee, Shunpiker, Simetrical, Sitearm, SkyMachine, Slakr, Smack, Smallman12q, Soumya.92, Spetalnick, Srich32977, Srleffler, Stannered, Steevven1, Stephenb, Stevertigo, StradivariusTV, Stroppolo, Subversive.sound, Superkan619, Suz115, Sverdrup, Tantalate, Tcnuk, Teh tennisman, Tennismaniac2112, Terse, Texture, The Anome, The Fish, The Thing That Should Not Be, TheIncredibleEdibleOompaLoompa, Thearchontect, Thechamelon, Theda, Theodolite, Theowoo, Thermbal, Thinkadoodle, ThorinMuglindir, Tide rolls, TigerShark, Tim Shuba, Timwi, Tiogalinha, Tnf37, Tnxman307, Tobias Bergemann, Tobias Hoevekamp, Toni 001, Tonyfaull, Touch Of Light, Tpbradbury, Tritchls, Tschijnmotschau, Tsemii, Tsiehta, Tuuky, Tygar, UffeHThygesen, Ugur Basak, Uopchem2510, Uopchem2517, User A1, Ute in DC, V8rik, VBGFscJUn3, Vadept, Vanished user 82345ijgeke4tg, Vbrayne, Velella, Velho, Vendrov, Versus22, VolatileChemical, Vrenator, WAS 4.250, Wavelength, Webclient101, Wetman, WhyBeNormal, Wijnand, WikiDao, WillowW, Wimt, Wingwongdong, Wisdom89, Wolf.312, Wolfmankurd, Woogee, Woohookitty, XJaM, Xaosflux, XerebZ, Xerxes314, Yath, Yevgeny Kats, Yian, Youandme, Yurko, Zachorious, Zaereth, Zeimusu, Zeno Gantner, ZezzaMTE, Zueignung, Zundark, , 1073 anonymous edits Pressure Source: http://en.wikipedia.org/w/index.php?oldid=585602462 Contributors: 84user, AVand, Aaron Kauppi, Ackerleytng, AdjustShift, Admkushwaha, Adrian147, Adz 619, Ahoerstemeier, Ahunt, Alansohn, Alex Bakharev, Alex43223, AlexCovarrubias, AlonCoret, Analwarrior, AndersFeder, Andonic, Angelo Michael, Anna Lincoln, Anoopm, Antti29, Antzervos, Anujjjj, Arda Xi, Armando, Ashishbhatnagar72, Avengingbandit, AxelBoldt, B7582, BLUE, Becarlson, Belovedfreak, Bensaccount, Betterusername, Bgpaulus, Bkell, Blue520, Bluerasberry, Bobo192, Bongwarrior, Bookgrrl, BoomerAB, Bovineone, Bowlhover, Braincricket, Brosen, Bryan Derksen, Bstepp99, Cable Hills, Calair, Calmer Waters, Calvin 1998, Can't sleep, clown will eat me, CanadianLinuxUser, Cesiumfrog, Charles Matthews, CharlesM, ChemGardener, Chodorkovskiy, Chris 73, Chris G, Christ1013, Cj005257, Cky2250, Cmichael, Codyfinke6, Coffee, Complexica, Conversion script, Cookie90, Credema, Cronian, Crucis, DARTH SIDIOUS 2, DJ Clayworth, Daano15, Dancter, DavidLevinson, Dbeardsl, Dbooksta, Dbtfz, Delirium, Denisarona, Derintelligente, Dhollm, Djr32, Dolphin51, DragonflySixtyseven, Dreadstar, Drmies, Drphilharmonic, Duk, Dungodung, EJF, Easchiff, Egil, Ehn, Ellywa, Emperorbma, Empty Buffer, Energybender, Epbr123, Excirial, FF2010, FactChecker1199, FelisLeo, Felyza, Fieldday-sunday, Finejon, Fir0002, Fiziker, FizykLJF, Flewis, Fluffernutter, Fnfal, Fnlayson, Foobaz, Forcez, Franz99, Freiddie, FreplySpang, From-cary, Fuhghettaboutit, Fuzzie, Fvw, Fzxboy, GB fan, GRAHAMUK, GTBacchus, Gadfium, Gaius Cornelius, Garbagecansrule, Gene Nygaard, Geoff Plourde, Gerhardt m, Giftlite, Giuliopp, Glenn, Gogo 
Dodo, Gonfer, Gotta catch 'em all yo, Greg L, Greg searle, Gurch, Gzkn, Haein45, Halfdan, Hamtechperson, Hankwang, Harp, Hazard-SJ, Headbomb, Hello32020, Hemingrubbish, Herbee, HereToHelp, Heron, Hgrobe, Hooperbloob, Ht686rg90, Icairns, Infrogmation, Isaac Rabinovitch, Ish ishwar, Isnow, Iviney, J.delanoy, JDP90, JNW, JSquish, Jack No1, Jackol, Jamesooders, Jamie C, JamieS93, Jasualcomni, Jay, JayC, Jdpipe, Jeffrey Mall, Jeltz, Jemandwicca, Jessica-NJITWILL, Jetforme, Jimbreed, Jimp, Joa po, Joanjoc, Johndburger, Johnflux, JonHarder, Jonkerz, Jossi, Jpfru2, Jrockley, Jschnur, Jujutacular, Karuna8, Kasamasa, Katana, Kdkeller, Keo Ross Sangster, Keta, Kingpin13, Kntrabssi, Kotasik, Kourd, Krushia, L1f07bscs0035, LAX, La goutte de pluie, Lantonov, LeBofSportif, LedgendGamer, LeonardoGregianin, Librscorp, Liempt, Light current, Lindberg G Williams Jr, Lst27, M bastow, MER-C, MK8, Mac, Magnus Manske, Malinaccier, Malo, Mani1, Mania112, Marcmarroquin, Marco Polo, Marek69, Mark.murphy, MarkS, MarkSutton, Marsey04, Martin Cole, Martin451, Masterbait123, Materialscientist, Mathonius, Matt Hayter, Mav, Mbeychok, Mean as custard, Meggar, Merovingian, Michael Devore, Michael Hardy, Middlec, Midgrid, Mikael Hggstrm, Mike dill, Minesweeper, Mkweise, Moe Epsilon, Moink, Montyv, Morgankevinj, Morning277, Mrt3366, Musiphil, Mysterious Whisper, Mythicism, NCurse, Nakon, Navidh.ahmed, NawlinWiki, Neeraj1997, NeilN, Neparis, Newty23125, Nigelleelee, Nyanhtoo, Odie5533, Oleg Alexandrov, Oli Filth, Omegatron, Opelio, Orange Suede Sofa, Orthoepy, Oxymoron83, PAR, Paikrishnan, Pascaldulieu, Patrick, Paul August, Pbsouthwood, Peak, Peter Horn, Peter bertok, Peterlin, Pflatau, Pheon, Philip Trueman, Piercetheorganist, Pigsonthewing, Pinethicket, Pit, Plasmic Physics, Pmcm, Pol098, Porqin, Profero, Quondum, Qxz, R3m0t, RDBury, RG2, Ralf Roletschek, Ranmamaru, RayC, Reatlas, Rectangle546, Redgolpe, Reinoutr, Resolution3.464, Rich Farmbrough, Rivertorch, Rob0571, RockMagnetist, RodC, Ronhjones, Rracecarr, Rudolf.hellmuth, Rvoorhees, SD5, Sadi Carnot, Salih, Sam Derbyshire, Sam Hocevar, Saurabhbaptista, SchfiftyThree, Scottfisher, Seba5618, Several Times, Sen Travers, Shikhar1089, Shoefly, Shoeofdeath, Shuipzv3, Sietse Snel, Simian, Sir cumalot, SlightlyMad, Smack, Smelialichu, Smichr, Smokefoot, Snowolf, Some jerk on the Internet, Sonett72, Spaully, SpookyMulder, Spoon!, SpuriousQ, Sr4delta, Sriharsh1234, Srleffler, StaticGull, Stephenb, SubstanceDx99, Suicidalhamster, Sun Creator, Superboy112233, Superduck463, TZGreat, Takometer, Talyor Will, Tarquin, Tawker, The Anome, The Anonymouse, The Valid One, TheGreatMango, Thecheesykid, Thierryc, Tide rolls, Tim Starling, Time501, Tiptoety, Tktktk, Tlork Thunderhead, Tls60, Tobias Bergemann, Tom harrison, Tommy2010, Tonyho, Tpbradbury, TraceyR, Trebacz, Tresiden, Trusilver, Tvaughn05, Unused000701, Urhixidur, Uri2, UtherSRG, Uxorion, Vanished user kksudfijekkdfjlrd, Velella, Versus22, Vincent Grosskopf, Vsmith, Waggers, Waninge, Warut, Widr, Wiki alf, Wikianon, Wikipelli, William Shi, Willy turner, Wimt, Wocky, Wolfkeeper, Wolvereness, Wwoods, Wyklety, Xjwiki, YVSREDDY, Yadevol, Yellowing, Yggdrasilsroot, Yidisheryid, Ytrottier, Yyy, Zaidpjd, Zakian49, ZakuSage, Zfr, Zidonuke, Zondi, Zundark, Zven, , , 1018 anonymous edits Thermodynamic temperature Source: http://en.wikipedia.org/w/index.php?oldid=585052600 Contributors: ARTE, AdamW, Aleitner, Ashishbhatnagar72, AxelBoldt, Baffclan, Bauka91 91, Bearycool, Benbest, Blaxthos, Braindrain0000, Breno, Bubba58, 
CambridgeBayWeather, CheesyBiscuit, Chris the speller, Chromaticity, CommonsDelinker, Cutler, Damorbel, Daniele Pugliesi, Dave3457, David Shear, Dhollm, Eequor, Emerson7, Enormousdude, Entton1, Ephraim33, Erebus555, Evolauxia, Frangojohnson, Frokor, Frostus, Gareth Griffith-Jones, Gene Nygaard, Geometry guy, Giftlite, Giraffedata, Glider87, Greg L, Gurch, Headbomb, Henning Makholm, Hqb, JRSpriggs, Jaan513, Jatosado, Jeremy W Powell, JokerXtreme, JoseREMY, KHamsun, Kbrose, Keenan Pepper, Kisokj, Kithira, Koavf, Kylu, LeBofSportif, Lethe, Limtohhan, Loom91, Lumos3, Materialscientist, Mgiganteus1, Mion, Mpk138, MrOllie, Netheril96, Nk, Nonsuch, PAR, Pedrose, Pifvyubjwm, Pinethicket, Pjacobi, Poga, Pol098, RJHall, Rich Farmbrough, Ricky81682, Rifleman 82, Roadrunner, Romanm, Rparson, Sadads, Sadi Carnot, Sbharris, Sbyrnes321,

284

Article Sources and Contributors


Schnazola, Shirik, Sibom, Skatebiker, Smurrayinchester, Spacepotato, Sun Creator, Teply, The Anome, Thumperward, Trovatore, Ugog Nizdast, Velella, Vivekakulharia, Wikifan2744, WikipedianProlific, Woohookitty, , 112 anonymous edits Volume Source: http://en.wikipedia.org/w/index.php?oldid=581915739 Contributors: Acratta, Cky2250, Cobaltcigs, Dbrawner, Dhollm, Dreadstar, Gene Nygaard, Md2perpe, Mikael Hggstrm, Miracleworker5263, Physchim62, , 13 anonymous edits Heat capacity Source: http://en.wikipedia.org/w/index.php?oldid=585970739 Contributors: 8thstar, AManWithNoPlan, Adwaele, Ajraddatz, Akiaterry, Alberisch, Allmightyduck, Alro, Anaxial, Andyjsmith, Anthonymcnug, AresLiam, Auntof6, Aymatth2, Bbanerje, BenB4, BenFrantzDale, Bensaccount, Bo Jacoby, Bobblewik, Boomur, Brien Clark, Brvman, C5st4wr6ch, Calabe1992, CaptainVindaloo, Ccmwiki, Chenopodiaceous, Chris the speller, Christian75, Chthonicdaemon, Complexica, Cwkmail, DVdm, Damorbel, Danim, Demize, Denisarona, Dewritech, Dh78, Dheknesn, Dhollm, Dirac66, Djr32, Dtrx, Edgar181, Edsanville, Edward, ElZarco, Elassint, Ellywa, Engineman, Eumolpo, Flyer22, Gaius Cornelius, Gene Nygaard, Giftlite, Glenn, Gowtham vmj, Grafen, Greg L, Gscshoyru, Hansonrstolaf, Heron, Hpubliclibrary, I-hunter, Ian Moody, Icairns, ImminentFate, J36miles, JDHeinzmann, JPushkarH, Jason Quinn, Jheald, Jimp, Joanjoc, John, John of Reading, Jonathanfu, JorisvS, Jschmalzel, Julesd, Jung dalglish, Karol Langner, Kbrose, Keds0, Kelly Martin, Kernkkk, Khakiandmauve, Krithin, Kwamikagami, KyuubiSeal, LHcheM, Lolm8, Looxix, Luke arnold16, Madkayaker, Magneticmoment, Marek69, Martkat08, MathewTownsend, Mausy5043, Mc6809e, Meumeul, Mfwitten, Michael Hardy, Mikewax, Minimac, Mmww123, Modulatum, Mogren, Mumuwenwu, Mythealias, Nabla, NellieBly, Newestcastleman, Notreallydavid, Ojovan, Onegumas, PAR, Palica, Patrick, Peacheshead, Pearle, Philip Trueman, Physics is all gnomes, Physicsch, Pinethicket, Pol098, Poppy, Ppareit, Pulsfordp, Quarkboard, R'n'B, RAM, Rathemis, RayForma, Reatlas, Riceplaytexas, Rjwilmsi, Romanm, Ronk01, SDS, Samw, Sarah george mesiha, Sbembenek18, Sbharris, School of Stone, Schusch, Sct72, Shorespirit, Skizzik, Smallcog, Soeren.b.c, Spiel496, Ste4k, Sverdrup, Tantalate, Tcep, The Letter J, The Master of Mayhem, TheOtherJesse, ThePhantom, Thermbal, ThorinMuglindir, Thi Nhi, TimothyRias, Tom Morris, Tpudlik, Trovatore, Tsemii, Ulflund, V111P, Vaughan Pratt, VijayGargUA, Vincent88, Voidxor, Vsmith, Wavelength, Wayne Slam, Webclient101, WikHead, Wikipelli, Xezbeth, Yafjj215, Yauran, Yhr, Yrfeloran, Ytic nam, Zmicier P., 313 anonymous edits Compressibility Source: http://en.wikipedia.org/w/index.php?oldid=586418124 Contributors: AMR, AManWithNoPlan, Aarchiba, Agrasa, Alfie66, Algorithms, Andy Dingley, Ankid, Anrnusna, Basar, BenFrantzDale, Binksternet, COMPFUNK2, Chris Roy, Count Iblis, Courcelles, Covalent, Crowsnest, Cryonic Mammoth, Deans-nl, Deklund, Dhollm, EarthPerson, Ehdr, Freireib, Gene Nygaard, GregorB, Headbomb, HeartofaDog, Ibjt4ever, Iepeulas, John, KudzuVine, Lenoxus, Leonard G., Magioladitis, Maury Markowitz, Michael Hardy, Mintleaf, Mn-imhotep, Mogism, Mor, Moriori, Mpfiz, Msd3k, Mwtoews, Novous, PAR, Pacerlaser, Pne, Powerfool, R'n'B, Ra'ike, Red Sunset, Redhanker, Rjwilmsi, Rpspeck, Sam Hocevar, Sandman619, Stwalczyk, TStein, TheTito, Tigga, Twin Bird, Tzm41, Valeriecoffman, Whoop whoop pull up, Wiz9999, 63 anonymous edits Thermal expansion Source: http://en.wikipedia.org/w/index.php?oldid=586560109 Contributors: 1ForTheMoney, A8UDI, ACrush, 
AManWithNoPlan, Adrian dakota, Afluegel, Aguner, Aidanlister, Akamad, Alansohn, Alex Bakharev, Alexf, AllHailZeppelin, Ammm3478, Andrewman327, Angry birds fan Club, AntoniusJ, Arc1977, ArcticFlame, Art LaPella, Asplace, Awickert, BenFrantzDale, Bento00, Bongwarrior, CWenger, Cardamon, Cdang, Charles Gaudette, ChrisRuvolo, Christophe.Finot, Chromaticity, Chzz, Claidheamohmor, Clark89, Confession0791, Courcelles, Cristianrodenas, Csloomis, Csuino, Da2ce7, Dan Gluck, Dan6hell66, Dani setiawan, Davidprior, Deewiant, Dentalplanlisa, Dhollm, E235, EndingPop, Epbr123, Eupedia, Firien, Fred Bauder, Gareth Griffith-Jones, Gene Nygaard, Giftlite, Gilliam, Ginsuloft, Grafen, Grm wnr, Gurch, Gtz, Harryboyles, Hooperbloob, Hqb, Hwangrox99, Ibjt4ever, In Transit, Ironmagma, J.delanoy, JamesBWatson, Jatosado, Jclemens, Jcwf, Jinxinzzi, Karthik3186, Katelyn.kitzinger, Ken l lee, Knuckles, Knucmo2, Kyng, La Pianista, Largedizkool, Leaf of Silver, Lektio, Leonard^Bloom, LilHelpa, Linas, Luk, LukeMcMahon, ML5, MagnInd, Mahmud Halimi Wardag, Mandarax, Masgatotkaca, Materialscientist, Matt Deres, Mcginnly, Mike.lifeguard, Mindmatrix, Mmarre, Moe Epsilon, Mothmolevna, NCdave, Ngebbett, Nick Number, Ojovan, Otisjimmy1, Owoturo tboy, P1415926535, PAR, Paladinwannabe2, Paxsimius, Piano non troppo, Pol098, Porqin, Puffin, Qq19342174, QueenMisha, Quietly, QuiteUnusual, R'n'B, Raggiante, Reatlas, Reza1615, Rjwilmsi, RogueNinja, Satellizer, Sealsrock!, Slashme, Snowolf, Squids and Chips, StuTheSheep, TaintedMustard, Teaktl17, Teles, That Guy, From That Show!, The Thing That Should Not Be, Thorsten1, Tide rolls, Tmariem, TomasBat, Trusilver, TwoTwoHello, Ulrich67, VolpeCenter, Vsmith, WOSlinker, Wizard191, Yvwv, Zachlipton, , 347 anonymous edits Thermodynamic potential Source: http://en.wikipedia.org/w/index.php?oldid=576974308 Contributors: Aboalbiss, Bomac, Chaos, ChrisChiasson, Cimon Avaro, Count Iblis, Cybercobra, Danno uk, Dhollm, Dorgan, Drphilharmonic, Edsanville, El C, Eli84, EoGuy, F=q(E+v^B), Fawcett5, Fractalizator, Fragaria Vesca, GangofOne, Giftlite, Headbomb, Hobojaks, Huwmanbeing, Icairns, Incnis Mrsi, JillCoffin, Joshua Davis, Kareemjee, Karol Langner, Kbrose, Keenan Pepper, Kmarinas86, Larryisgood, LeBofSportif, Lianglei0304, LilHelpa, Lseixas, Michael Hardy, Netheril96, Nightwoof, PAR, Pavlovi, Pearle, Phil Boswell, Ring0, Rjwilmsi, Sadi Carnot, Serge Lachinov, Sheliak, Shivankmehra, Steven0309, Terse, That Guy, From That Show!, Thermodude, Tizeff, Trainspotter, V8rik, VasilievVV, Vql, Wavesmikey, Willhsmit, Xavic69, 47 anonymous edits Enthalpy Source: http://en.wikipedia.org/w/index.php?oldid=586556684 Contributors: 66.156.135.xxx, AC+79 3888, Adwaele, Ani td, Anonymous Dissident, Antonio Lopez, Atraxani, AugPi, Az1568, BD2412, Banes, Bduke, BeaumontTaz, Beetstra, Begomber, Benjah-bmm27, Bensaccount, BernardH, BertSen, Betacommand, Bioe205fun, BlGene, Bomac, Br77rino, Brandmeister (old), Bryan Derksen, Buster2058, Bytbox, C4, Caknuck, Calabe1992, Causticorulos, Chandni chn, Chaos, Chris 73, ChrisGualtieri, Christian75, Cmcfarland, Coleslime5403, Complexica, Conairh, Connelly, Conversion script, Count Iblis, Crowsnest, Dagimar, Daniele Pugliesi, Dar-Ape, Dc3, DerHexer, Dhollm, Diannaa, Diberri, Dirac1933, Discospinster, Dolphin51, Don Gosiewski, Donvinzk, Dotancohen, Drat, Drphilharmonic, EconoPhysicist, Edgar181, Edward, Egmontaz, Ehn, Emresulun93, Eteq, Evilstudent, F l a n k e r, Faraz shaukat ali, Fbianco, Felixbecker2, Flying Jazz, Fredrik, Gail, Gaurav.gautam17, GauteHope, Gbleem, Gene Nygaard, 
Gentgeen, Giftlite, Gioto, Glengarry, Gonfer, Gosolowe, Gregbard, Grj23, Grondilu, Gunnar Larsson, Hairchrm, Hans Mayer, Happy-melon, Headbomb, Helix84, Hhhippo, Hjlim, Icairns, Infinity0, Isnow, Itub, Izuko, J991, JCraw, JSpudeman, JamMan, JamesBWatson, Jandalhandler, Jianhui67, Jimp, JoeBlogsDord, John of Reading, John254, JohnWheater, Jomasecu, JonathanDursi, Jrf, Jrtayloriv, Julesd, Jusdafax, KHamsun, Karenjc, KarlHegbloom, Karol Langner, Kbrose, Kdliss, Kedmond, Keith D, Kku, Kmarinas86, Kookookook, Kupirijo, Kyng, LOTRrules, Larrybaxter, Lesath, LilHelpa, Llort, Llywrch, Looxix, Lseixas, LucasVB, Luigi30, Lumos3, Lupo, MER-C, Madbehemoth, Mandarax, Mark viking, Markus Kuhn, Materialscientist, Matlsarefun, Matthew Yeager, Mct mht, Mdd, Mejor Los Indios, Mesoderm, Mezzaluna, Mgiganteus1, Mike Rosoft, Mikiemike, Morekitsch, Myasuda, Natty sci, Neffk, Neparis, Ngebendi, Nobull67, Numbo3, Odyssey1989, Ohconfucius, Ohms law, Omegatron, Omnipaedista, P. M. Sakkas, PAR, Pasky, Pbroks13, Pdch, Peterlin, Phdrahmed, Physchim62, Puckly, Qwfp, RG2, Raggot, Riick, Rlsheehan, Robbyduffy, RockMagnetist, Rreagan007, Runch, S1dorner, Sadi Carnot, Salih, Salsb, Sam Korn, Sbharris, Scientific29, Sciyoshi, Seaphoto, Senthilvel32, Sheliak, Shivankmehra, Slashme, Smack, Smitjo, Someones life, Spartan, Spiel496, Spiritia, Srleffler, Stevengus, StradivariusTV, Taw, TeaDrinker, Teentje, Teeteetee, Tehfu, TexasAndroid, The Obento Musubi, The real bicky, TheKMan, Thehelpfulone, TimeVariant, Tlroche, Toby Bartels, TransportObserver, Trinibones, Tsemii, Tunheim, Tuntable, Useight, User A1, Vacant999, VasilievVV, Venny85, Vikky2904, Viridae, Vuo, Wakeham, Wavelength, Wesley Moy, Wickey-nl, WikiLaurent, Wikisteff, Willandbeyond, Winterst, Xanchester, Yaris678, Yk Yk Yk, Yurik, ZeroOne, 422 anonymous edits Internal energy Source: http://en.wikipedia.org/w/index.php?oldid=584922004 Contributors: 2over0, AFP, Acratta, Adwaele, Aisteco, Andres, Andrewjlockley, Andries, Arcturus87, Artemis Fowl III, Atif.t2, Avoided, BD2412, Barticus88, Becky Sayles, Bensaccount, Bgwhite, BirdValiant, Bobblehead, Bobblewik, Bryan Derksen, Cardamon, ChrisChiasson, ChrisHodgesUK, Christian75, Cky2250, Complexica, Count Iblis, Crowsnest, Cyan, Da Joe, Daniele Pugliesi, David Shear, Ddcampayo, DeltaQuad, Derild4921, Dhollm, Djr32, Dolphin51, Dratman, Edsanville, El C, Euyyn, Fabiform, F, Giftlite, GleasSpty, GoingBatty, Googamooga, Gosnap0, H Padleckas, Haham hanuka, Hankwang, Hans Adler, HappyCamper, Headbomb, Henning Makholm, Hess88, Icairns, Ipatrol, Isnow, J04n, JSquish, Jamesx12345, John of Reading, Jonathanfu, Jusdafax, Kbrose, Kine, LcawteHuggle, LedgendGamer, LilHelpa, Lseixas, Lysdexia, MLauba, Maghemite, Magioladitis, Mariraja2007, Max139, Michael Hardy, Mild Bill Hiccup, Mnmngb, Morning277, Nhandler, NuclearEnergy, Oloumi, Omicronpersei8, PAR, PV=nRT, Patrick, Pdcook, Persian Poet Gal, Peterlin, PhilKnight, Pinethicket, Qclijun, Qsq, Qwertyus, R'n'B, RG2, RWG00, RainbowOfLight, RazielZero, Reaverdrop, Riick, Rjwilmsi, Rrburke, SHL-at-Sv, SQL, Sadi Carnot, Saperaud, Sbharris, SebastianHelm, Sheliak, Spiko-carpediem, Sportgirl426, Squids and Chips, Stassats, Ste4k, Stikonas, SuperHamster, Thatguyflint, The Thing That Should Not Be, The way, the truth, and the light, Thechamelon, ThorinMuglindir, Timmytoddler, Trakesht, Vatbey, Vaughan Pratt, Vramasub, Xanthoxyl, Xp54321, Zgyorfi, 190 anonymous edits Ideal gas law Source: http://en.wikipedia.org/w/index.php?oldid=585546214 Contributors: 123ilikecheese, 2over0, ANONYMOUS 
COWARD0xC0DE, ARAJ, Acit, AdamGomaa, Adamtester, AlanParkerFrance, Alexf, Alexwcovington, Andre Engels, Astrochemist, Avathar, Baccyak4H, Barneca, Baxter9, Bb3cxv, Bensaccount, Berland, Bo Jacoby, Bobo192, BrianHansen, Brianga, Bryan Derksen, BubblyWantedXx, Bwiki, COGDEN, CSWarren, CYD, CambridgeBayWeather, Carhas0, Christian75, Computergeeksjw, Comtraya, Coolhandscot, Craftyminion, Cramyourspam, Daniele Pugliesi, Dave19880, Dcirovic, Dhollm, Discospinster, Dj-dios-del-sol, Dnvrfantj, Donner60, Dougofborg, Drax Conqueror, ELApro, Eelpop, Electron9, Enochlau, EntropyTrap, Esrever, Eteq, Excirial, F=q(E+v^B), Femto, Fern Forest, FlorianMarquardt, Foxhunt king, Fresheneesz, Frood, Fuguangwei, G716, Gaius Cornelius, Gene Nygaard, Geraldo62, Giftlite, Ginsuloft, Givegains, GorillaWarfare, Grick, Habadasher, Hanjabba, Happydude69 yo, Headbomb, Helloimriley, Hmrox, Hoopssheaffer, Hseo, Huw Powell, Hydrox, I am One of Many, Icairns, Incnis Mrsi, Intersofia, IronGargoyle, Isopropyl, J.delanoy, JSquish, JaGa, Jade Harley, JakeVortex, Jakirkham, Jan van Male, JerroldPease-Atlanta, Jerry858, Jimp, Jmk, Johan Lont, Jrtayloriv, Juliancolton, Just plain Bill, Jwoodger, KMossey, Kamran28, Karol Langner, Khakiandmauve, Kharazia, Kimtaeil, Kishmakov, Kittyemo, Klilidiplomus, Kmarinas86, Kmg90, Krishnavedala, KudzuVine, Kunalmehta, Lambda(T), LanceBarber, Larry V, Lee S. Svoboda, Lightmouse, Loupeter, MC10, Magioladitis, MagnInd, Majora4, Malinaccier, Mandarax, Mariansavu, Mark Arsten, Mark Foskey, MarkclX, Mbeychok, Mbweissman, Metal Militia, Michael93555, Mike Rosoft, Mikiemike, Mitchan, Mrahner, MusikAnimal, NJIT HUMNV, NJIT HUMrudyh, Nasanbat, Nehahaha, Nickkid5, Nirupambits, Nlitement, Nskillen, Ollien, Oxymoron83, Ozuma, P.wormer, Patrick, Pbsouthwood, Pdch, Pennywisdom2099, Peter Horn, Philip Trueman, Physchim62, PiMaster3, Power.corrupts, Prolog, QuantumGlow, Quark1005, Quinlan Vos, RA0808, RP459, RWG00, Ranmoth, Razor2988, Rbingama, RedWasp, Rexeken, Riana, Riick, Rjwilmsi, Rock4p, RockMagnetist, Ruhrfisch, Sal.farina, Samir.Mesic, Sango123, Scroteau96, Seaphoto, Shoefly, Sifaka, Silly rabbit, SimonP, SimpsonDG, SkyLined, Smaines, Smaug123, Someones life, Ssp37097, Steve Belkins, SteveBaker, Stovl, StradivariusTV, Superm401, Susfele, T, TZGreat, Tantalate, Tarquin, TechNickL1, The 888th Avatar, Theislikerice, Thomjakobsen, Tianxiaozhang, U.S.Vevek, User27091, Venu62, Vicki Rosenzweig, Vivin, Vql, Vsmith, WAS 4.250, Waterproof-breathable, Wereon, Widr, William Avery, Wrecker1431, YnnusOiramo, Zenibus, Zmcdargh, , 462 anonymous edits Fundamental thermodynamic relation Source: http://en.wikipedia.org/w/index.php?oldid=558313236 Contributors: Batmanand, BertSen, Betacommand, Count Iblis, Dhollm, Dicklyon, Gogobera, John Baez, KHamsun, Katieh5584, Kbrose, Makecat, Netheril96, PAR, PV=nRT, Robomojo, Sadi Carnot, Tnowotny, Towerman86, 33 anonymous edits

285

License
Creative Commons Attribution-Share Alike 3.0 Unported: https://creativecommons.org/licenses/by-sa/3.0/
