
Proc. Natl. Acad. Sci. USA
Vol. 76, No. 8, pp. 3607-3611, August 1979
Physics

From deterministic dynamics to probabilistic descriptions


(Markov semigroups/Bernoulli systems/ℋ-theorem/internal time and entropy operator)

B. MISRA, I. PRIGOGINE†, AND M. COURBAGE


Faculté des Sciences, Université Libre de Bruxelles, Campus Plaine, Boulevard du Triomphe, 1050 Bruxelles, Belgium

Contributed by I. Prigogine, April 9, 1979

ABSTRACT The present work is devoted to the following question: What is the relationship between the deterministic laws of dynamics and probabilistic description of physical processes? It is generally accepted that probabilistic processes can arise from deterministic dynamics only through a process of "coarse graining" or "contraction of description" that inevitably involves a loss of information. In this work we present an alternative point of view toward the relationship between deterministic dynamics and probabilistic descriptions. Speaking in general terms, we demonstrate the possibility of obtaining (stochastic) Markov processes from deterministic dynamics simply through a "change of representation" that involves no loss of information provided the dynamical system under consideration has a suitably high degree of instability of motion. The fundamental implications of this finding for statistical mechanics and other areas of physics are discussed. From a mathematical point of view, the theory we present is a theory of invertible, positivity-preserving, and necessarily nonunitary similarity transformations that convert the unitary groups associated with deterministic dynamics to contraction semigroups associated with stochastic Markov processes. We explicitly construct such similarity transformations for the so-called Bernoulli systems. This construction illustrates also the construction of the so-called Lyapounov variables and the operator of "internal time," which play an important role in our approach to the problem of irreversibility. The theory we present can also be viewed as a theory of entropy-increasing evolutions and their relationship to deterministic dynamics.

1. Introduction

According to both classical and quantum mechanics, the time evolution of states obeys deterministic laws that are symmetric with respect to inversion of time. Irreversibility of physical processes, on the other hand, is expressed by the second law of thermodynamics. For isolated systems, it affirms the existence of a physical quantity, the entropy, that increases monotonically with time until it reaches its maximum at equilibrium. To elucidate the connection between the dynamical description, with its reversible and deterministic laws, and the thermodynamical description, with its law of monotonic increase of entropy, is a fundamental problem of statistical mechanics.

This problem is closely related to the question of the possible relations that might exist between the deterministic and probabilistic descriptions of physical processes. Indeed, stochastic Markov processes provide the best possible models of irreversible evolution obeying the law of increase of entropy. As is well known, the usual expression for negative entropy

∫_Γ ρ_t ln ρ_t dμ   [1.1]

(and, in fact, any convex function of the distribution functions ρ on the phase space Γ) is a Lyapounov functional ("ℋ-function") for such processes (1). One would thus achieve a dynamical interpretation of the second law if one could establish a satisfactory link between deterministic dynamical evolutions and probabilistic Markov processes.

The interest of the problem of the possible connections between probabilistic and deterministic descriptions is, however, not confined to the domain of statistical mechanics. It concerns the problem of the meaning of probability in natural science. It is generally believed that probabilistic processes can arise from deterministic dynamics only as a result of some form of "coarse-graining" or approximation. The main purpose of this paper is to develop an alternative viewpoint toward the relation between deterministic and probabilistic descriptions. More specifically, we develop a theory of "equivalence," mediated by nonunitary similarity transformations, between deterministic evolutions and stochastic (Markov) processes.

The viewpoint toward the relation between deterministic evolution and stochastic Markov processes developed here is closely related to the theory of irreversibility developed by Prigogine et al. (2). The main feature of this theory is that the problem of reconciling dynamical evolution and irreversible (thermodynamical) evolution is viewed in terms of establishing an "equivalence" between them via a nonunitary similarity transformation. In essence, the approach thus consists in the determination of a suitable nonunitary transformation Λ, acting on the distribution function ρ, so that the original deterministic Liouville equation
i ∂ρ_t/∂t = Lρ_t   [1.2]

is transformed by it to a dissipative evolution equation describing the irreversible approach of the system to equilibrium. The transformation ρ_t → ρ̃_t = Λρ_t converts the Liouville equation into the equation

i ∂ρ̃_t/∂t = Φρ̃_t,   Φ = ΛLΛ^{-1}.   [1.3]

The above-mentioned requirement on Λ is thus the requirement that the functional

∫_Γ |ρ̃_t|² dμ   [1.4]

is a Lyapounov functional (ℋ-function) for the evolution obeying Eq. 1.3. This is the main condition on Λ. Naturally, there are other physically motivated conditions to be fulfilled by Λ, but we need not discuss them here because they are treated in refs. 2 and 3. For our purpose, the main point to retain is the important notion of "equivalence," via a nonunitary Λ, between the dynamical and thermodynamical descriptions. In conformity with this idea, we seek to determine the conditions on dynamics under which it becomes equivalent, via a similarity transformation Λ, to a stochastic Markov process.
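The contrast just described can be checked on a toy finite model. The following sketch is an illustration added to this edition, not part of the original paper: the phase space is a set of N points carrying the uniform measure, the deterministic evolution is a cyclic permutation (under which the functional 1.1 stays constant), and the Markov evolution is an arbitrarily chosen doubly stochastic matrix (under which 1.1 decreases toward its equilibrium value 0).

import numpy as np

N = 8
rng = np.random.default_rng(0)
rho = rng.random(N); rho /= rho.mean()        # density with respect to the uniform measure, normalized so that its mean is 1

def H(r):                                     # discrete analogue of the functional 1.1: (1/N) * sum rho ln rho
    return np.sum(r * np.log(r)) / N

P = np.roll(np.eye(N), 1, axis=0)             # cyclic permutation: deterministic, measure preserving
W = np.full((N, N), 0.1) + 0.2 * np.eye(N)    # doubly stochastic matrix: rows and columns sum to 1

for step in range(5):
    rho = P @ rho
    print(H(rho))                             # stays constant under the permutation

rho = rng.random(N); rho /= rho.mean()
for step in range(5):
    rho = W @ rho
    print(H(rho))                             # decreases monotonically toward 0 (equilibrium rho = 1)

The monotone decrease under the doubly stochastic matrix is the discrete counterpart of the ℋ-theorem property invoked above; the particular matrix and random seed are arbitrary choices for the illustration.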
The publication costs of this article were defrayed in part by page charge payment. This article must therefore be hereby marked "advertisement" in accordance with 18 U. S. C. §1734 solely to indicate this fact.

† Also at the Center for Statistical Mechanics, University of Texas at Austin, Austin, TX.



It is easy to show that the existence of such a transformation connecting the dynamical evolution to a stochastic Markov process entails the existence of an operator M (acting on the distribution functions) with the property that

(ρ_t, Mρ_t)   [1.5]

decreases monotonically with t as ρ_t evolves according to the Liouville equation. Here we have used the inner product notation (ρ, ψ) to denote the integral

∫_Γ ρ* ψ dμ.   [1.6]

The monotonic decrease of the expression 1.5 can be succinctly expressed by the commutation relation

i[L, M] = −D ≤ 0,   [1.7]

with (ρ, Dρ) = 0 only for the equilibrium ensemble. In this way, the possibility of relating the dynamical evolution to a stochastic Markov process turns out to be closely linked with the possibility of introducing a new dynamical variable M satisfying 1.7. These operators, called Lyapounov variables, have been studied by Misra to display the intrinsic irreversibility of appropriate classes of dynamical systems (4). If one could introduce a new observable representing nonequilibrium entropy, it would be given by such a Lyapounov variable M with a change of sign. In this sense, then, the existence of M expresses the intrinsic irreversibility of the dynamical evolution.

Naturally, one does not expect M to exist for all dynamical systems. Moreover, one expects that (even if it exists) it can be defined only in an extended frame of dynamics. In fact, an important observation of Poincaré (5) shows that the operator M cannot correspond to multiplication by a phase function. It can be defined only as acting on the distribution functions and not on individual phase points. One thus expects the existence of M to be associated with some physical mechanism that renders the description of dynamical evolution in terms of phase-space trajectories an unobservable idealization, thus forcing the use of density functions.

Recent developments in classical mechanics show that such a mechanism is the phenomenon of instability of dynamical motion. Many forms of instability have been discovered, and they are found to be more common than originally believed (6). The common feature of dynamical systems having a suitably high degree of instability is that each finite region of phase space, no matter how small, contains phase points that move along rapidly diverging or qualitatively distinct types of trajectories. Obviously, in this situation, the concept of deterministic evolution along phase-space trajectories cannot be defined operationally and, hence, constitutes a physically unrealizable idealization. In dealing with dynamically unstable systems, classical mechanics thus seems to have reached the limit of applicability of some of its own concepts. This limitation on the applicability of the classical concept of phase-space trajectories is, it seems to us, of a fundamental character. It forces upon us the necessity of a new approach to the theory of dynamical evolution of such systems, one that involves the use of distribution functions in an essential manner.

It is shown in ref. 4 that the mixing property is necessary and the condition of K-flow is sufficient for the existence of a Lyapounov variable M. As is well known, mixing flows and (a fortiori) K-flows are unstable to a high degree: arbitrarily close phase points move along widely diverging trajectories. It is also found in ref. 4 that for K-flows there exists an operator time T satisfying the commutation relation

i[L, T] = I.   [1.8]

The existence of an operator of time or "age" satisfying 1.8 seems to express in a compact manner the inherent (but hidden) stochastic and nondeterministic character of the evolution. Once T has been constructed, it is easy to proceed further. Lyapounov variables M can be constructed as monotonically decreasing positive operator functions of T,

M = M(T),   [1.9]

and the nonunitary Λ connecting the given dynamical evolution with a dissipative irreversible evolution can be constructed as a square root of M:

Λ = M^{1/2}.   [1.10]

In this way, we see that, at least for a class of abstract dynamical systems, the K-flows, the dynamical evolution is equivalent to a dissipative irreversible evolution. However, let us keep in view that it might be possible to establish the desired "equivalence" with a Markov process, for a suitably restricted class of initial conditions, even for systems that are not mixing but present other types of instabilities (such as Poincaré's catastrophe). This would correspond to allowing M and Λ to be more singular objects than those considered in this paper (7). Although this situation could be of considerable interest in statistical mechanics, we do not consider it further here.

Obviously, the central question of our approach is: What form of instability ensures the existence of an equivalence between the dynamical evolution and a stochastic Markov process? As explained before, for K-flows both M and Λ can be constructed by a "canonical" procedure as functions of an operator time T. It is tempting to conjecture that in this case the transformation Λ not only converts the dynamical evolution of the K-flow into a dissipative process but also converts it into a stochastic Markov process. At present, we are not able to decide this conjecture in its full generality. However, we show (and this is the main purpose of the rest of the paper) that the conjecture is true for an important subclass of K-systems, the so-called Bernoulli systems (8, 9).

It seems to us that the significance of this result extends far beyond its immediate application in statistical physics. It proves, of course, Boltzmann's ℋ-theorem for the Bernoulli systems without invoking the questionable "molecular chaos" hypothesis or any form of "coarse-graining." But, more importantly, it confirms our view of how probabilities could arise in physics other than as a result of approximation.

From a mathematical point of view, the theory we present is a theory of positivity-preserving similarity transformations that connect unitary groups describing deterministic dynamical evolution with dissipative semigroups associated with Markov processes. This theory is, evidently, in its infancy. In fact, the very possibility of such a connection between deterministic and probabilistic descriptions (which is established here in the specific case of Bernoulli systems) is rather an unexpected result that puts the entire problem of the role of probability in physics in a new perspective. To keep the article brief, we shall omit most of the details of the proofs in what follows. A more complete version of this paper will appear in a forthcoming publication (10).

2. Notion of "equivalence" between deterministic dynamics and probabilistic processes

We now proceed to formulate and discuss in greater detail the notion of "equivalence" between deterministic evolution and probabilistic Markov processes. Let Γ denote the state space (a constant-energy surface of the phase space) of the system. Deterministic evolution of the system is described by a one-parameter group T_t of one-one transformations that map Γ onto itself:


T_t T_s = T_{t+s}.

In the case of Hamiltonian systems, T_t will be a group of canonical transformations generated by the Hamiltonian. We shall suppose that there is a measure μ (defined on a σ-algebra 𝔅 of subsets of Γ) that is invariant under the dynamical group T_t. For Hamiltonian systems, the existence of such an invariant measure is guaranteed by Liouville's theorem. Furthermore, the invariant measure will be supposed to be finite and, for convenience, normalized.

To define a probabilistic process (within the same state space Γ), one needs to specify the transition probabilities P(t, ω, Δ) that the system, starting initially from the state ω, will reach the subset Δ of Γ in time t. It is evident that the function P should satisfy the following conditions: (i) P(t, ω, Δ) ≥ 0; (ii) P(t, ω, Γ) = 1; (iii) for t and ω fixed, the function P(t, ω, ·) defines a probability measure on Γ. [Moreover, one imposes the technical condition that, for fixed t and Δ, P(t, ω, Δ) is a measurable function of ω.] The probabilistic process defined by the transition probabilities is called a Markov process if P satisfies the Chapman-Kolmogorov equation:

P(t + s, ω, Δ) = ∫_Γ P(t, ω, dω′) P(s, ω′, Δ).   [2.2]

As explained in many textbooks, this condition expresses the important property of a Markov process that the future statistical behavior of the system depends solely on the present state, independently of the past history. A measure μ on Γ is said to be an invariant measure for the process [with transition probabilities P(t, ω, Δ)] if

∫_Γ P(t, ω, Δ) dμ(ω) = μ(Δ)   [2.3]

for all t ≥ 0 and Δ ∈ 𝔅.

To formulate the notion of equivalence between deterministic evolutions and stochastic Markov processes, it is necessary to consider how the distribution functions on Γ transform under the respective evolutions. In the case of deterministic evolution, it is easy to see that the initial distribution ρ transforms in time t into the distribution ρ_t defined by

ρ_t(ω) = ρ(T_{-t}ω).   [2.4]

The transformations U_t mapping ρ to ρ_t are unitary operators on L² (= the Hilbert space of square-integrable functions with respect to μ):

(U_t ρ)(ω) = ρ(T_{-t}ω).   [2.4′]

The group property of T_t passes on to U_t. The generator of this unitary group is the Liouvillian operator L:

U_t = e^{-iLt}.   [2.5]

For Hamiltonian systems, L is given by

Lρ = i[H, ρ]_PB,   [2.6]

in which [ , ]_PB denotes the familiar Poisson bracket.

Let us now consider the transformation of distribution functions under the stochastic evolutions corresponding to Markov processes. To this end, let us first note that with every Markov process [with transition probabilities P(t, ω, Δ)] having an invariant measure μ, one can associate a family W_t of operators defined by

(W_t f)(ω) = ∫_Γ f(ω′) P(t, ω, dω′)   for f ∈ L².   [2.7]
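To make formula 2.7 and the Chapman-Kolmogorov equation 2.2 concrete, here is a minimal finite-state sketch (our illustration, with arbitrarily chosen numbers, not a construction from the paper): the state space is a set of N points with the uniform measure, the one-step transition probabilities are given by a doubly stochastic matrix K, and W_t is the induced operator on functions. The script checks 2.2, the resulting semigroup property, and the positivity and invariance properties discussed in the next paragraph.

import numpy as np

N = 6
K = np.full((N, N), 1.0 / (2 * N)) + 0.5 * np.eye(N)     # one-step transition matrix K[i, j] = P(1, omega_i, {omega_j}); doubly stochastic

def W(t, f):                                              # Eq. 2.7: (W_t f)(omega_i) = sum_j f(omega_j) P(t, omega_i, {omega_j})
    return np.linalg.matrix_power(K, t) @ f

t, s = 3, 4
i, j = 0, 4
lhs = np.linalg.matrix_power(K, t + s)[i, j]              # P(t+s, omega_i, {omega_j})
rhs = sum(np.linalg.matrix_power(K, t)[i, k] * np.linalg.matrix_power(K, s)[k, j] for k in range(N))
assert np.isclose(lhs, rhs)                               # Chapman-Kolmogorov equation 2.2

f = np.random.default_rng(1).random(N)                    # a nonnegative test function
assert np.allclose(W(t + s, f), W(t, W(s, f)))            # hence the semigroup property W_{t+s} = W_t W_s
assert np.all(W(t, f) >= 0)                               # W_t preserves positivity
assert np.allclose(W(t, np.ones(N)), np.ones(N))          # W_t rho_equ = rho_equ (rows of K sum to 1)
assert np.allclose(np.ones(N) @ np.linalg.matrix_power(K, t), np.ones(N))   # the uniform measure is invariant (columns sum to 1)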

The Chapman-Kolmogorov equation for P(t, ω, Δ) then implies the semigroup property for W_t:

W_t W_s = W_{t+s}   (for t, s ≥ 0).   [2.8]

Moreover, the previously stated properties (i-iii) of the transition probabilities entail the following properties of W_t: (a) the operation W_t preserves positivity; this means that, if f(ω) ≥ 0 for almost all (a.a.) ω ∈ Γ, then (W_t f)(ω) ≥ 0 for a.a. ω too; (b) W_t ρ_equ = ρ_equ, in which ρ_equ is the uniform distribution ρ_equ ≡ 1. By putting f = φ_Δ, the characteristic function of the set Δ ⊂ Γ, in 2.7, one finds

P(t, ω, Δ) = (W_t φ_Δ)(ω).   [2.9]

By using this relation between W_t and P(t, ω, Δ), the invariance of the measure μ for the process (relation 2.3) is easily seen to be equivalent to the condition: (c) W_t* ρ_equ = ρ_equ.

To sum up, every Markov process with an invariant measure μ defines, through formula 2.7, a semigroup W_t of operators acting on L² and having properties a through c. The converse of this statement is also true: every semigroup W_t of operators on L² with properties a through c defines a Markov process (having μ as invariant measure) whose transition probabilities P(t, ω, Δ) are obtained from W_t by formula 2.9 [by a slight variation of theorem 2.1 given by Dynkin (11)]. It is easy to see that, if ρ denotes the distribution function describing the initial state of the system, the state ρ̃_t to which it evolves in time t under the Markov process is given by

ρ̃_t = W_t* ρ.   [2.10]

It seems natural now to consider the deterministic evolution described by the unitary group U_t (induced from T_t) as "equivalent" to the stochastic evolution associated with the semigroup W_t (see relations 2.7 and 2.9) if there exists a bounded transformation Λ on L² such that (i) ΛU_t ρ = W_t* Λρ (for t ≥ 0); (ii) Λ preserves positivity; (iii) ∫ρ dμ = ∫Λρ dμ; (iv) Λ has a densely defined inverse Λ^{-1}; and (v) Λρ_equ = ρ_equ.

Condition i simply says that the "change of representation" ρ → Λρ converts the deterministic evolution corresponding to U_t into the stochastic evolution associated with W_t. Conditions ii through v are necessary requirements (from a physical point of view) in order that Λ may be interpreted as a "change of representation." In fact, conditions ii and iii simply express the requirement that Λ maps "states" into "states"; v says that the contemplated change of representation leaves the equilibrium state unchanged; and condition iv expresses the important requirement that the passage from deterministic to probabilistic description brought about by Λ involves "no loss of information." Condition i can be rewritten in the form

W_t* = ΛU_tΛ^{-1}   (for t ≥ 0).   [2.11]

The problem for us now is to determine the class of dynamical groups U_t similar to semigroups W_t* that satisfy (in addition to conditions a through c) the following requirement:

||W_t*(ρ − ρ_equ)||² = ||W_t* ρ − 1||² → 0   [2.12]

monotonically with t as t → ∞. Semigroups satisfying conditions a through c and 2.12 will


be referred to as strong Markov semigroups. They are associated (through formulas 2.7 and 2.9) with truly stochastic Markov processes that display the irreversibility expressed in the second law. In fact, condition 2.12 just says that ρ_equ is an "attractor" for the process in question and that the approach to ρ_equ proceeds monotonically. Indeed, not only expression 2.12 but also the usual expression 1.1 for negative entropy (as indeed any convex function of ρ) decreases monotonically for such processes, provided ρ ≠ ρ_equ.

The existence of a similarity transformation Λ connecting (through 2.11) the dynamical group U_t to a strong Markov semigroup W_t* seems to express the inherent stochastic and irreversible character of the original dynamical evolution. One thus expects such a transformation to exist (if at all) only for systems with a suitably high degree of instability of motion. This intuitive idea is confirmed by the following:

PROPOSITION. In order that the dynamical group U_t be similar to a strong Markov semigroup W_t* (for t ≥ 0), it is necessary that the dynamical evolution be mixing in the sense of ergodic theory.

The proof of this statement follows from noting that, if ΛU_tΛ^{-1} = W_t* is a (strong) Markov semigroup (for t ≥ 0), then Λ*Λ = M is a Lyapounov variable for the evolution U_t. As shown in ref. 4, the existence of Lyapounov variables implies that the Liouvillian (restricted to K⊥, the subspace orthogonal to ρ_equ) has absolutely continuous spectrum, which in its turn implies that the system is mixing.

3. Time and entropy operators for Bernoulli systems

This and the following section are devoted to carrying out the program of Section 2 for the class of the so-called Bernoulli systems (8, 9). For simplicity of exposition, however, we shall limit our consideration to the simplest of the Bernoulli systems, the baker's transformation. But we emphasize that all the results found in this and the next section generalize to arbitrary Bernoulli systems.

Let us start with a brief description of the baker's transformation. The phase space Γ is now the unit square in the plane, and the measure μ is the usual Lebesgue measure of the square. The baker's transformation B sends a point ω = (p, q) of the phase space to the point Bω with Bω = (2p, q/2) if 0 ≤ p < 1/2 and Bω = (2p − 1, q/2 + 1/2) if 1/2 ≤ p < 1. The discrete group B^n (n = 0, ±1, ±2, ...) that replaces the continuous-parameter group T_t of the preceding sections may be thought of as describing a discrete deterministic process taking place at regular (unit) intervals of time.

A striking, and indeed the characteristic, property of the baker's transformation B is that the partition P = {Δ₀, Δ₁} of the unit square into the left and right halves is "independent" and "generating" with respect to B. (For the definitions of these two concepts, see refs. 8 and 9.) It is the existence of an independent and generating partition that characterizes a general Bernoulli system. The baker's transformation corresponds to the special case in which the independent and generating partition can be chosen to consist of exactly two sets Δ₀, Δ₁ with μ(Δ₀) = μ(Δ₁) = 1/2. Returning to the baker's transformation, the discrete group B^n now induces a discrete unitary group U^n on L² (cf. Eq. 2.4).
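The baker's transformation is simple enough to be stated directly in code. The sketch below is our illustration (variable names, sample sizes, and the test rectangle are arbitrary): it implements B and B^{-1} as defined above and checks, by Monte Carlo, that B is invertible and that it preserves the Lebesgue measure of a test set.

import numpy as np

def baker(p, q):                        # B(omega) on the unit square, omega = (p, q)
    if p < 0.5:
        return 2 * p, q / 2
    return 2 * p - 1, q / 2 + 0.5

def baker_inv(p, q):                    # B^{-1}: contract horizontally, expand vertically
    if q < 0.5:
        return p / 2, 2 * q
    return p / 2 + 0.5, 2 * q - 1

rng = np.random.default_rng(2)
pts = rng.random((100_000, 2))

# invertibility: B^{-1}(B(omega)) = omega
assert all(np.allclose(baker_inv(*baker(p, q)), (p, q)) for p, q in pts[:1000])

# measure preservation: the fraction of points landing in a fixed test set is,
# up to sampling error, the same before and after applying B
test = lambda p, q: (0.2 < p < 0.7) and (0.1 < q < 0.4)   # an arbitrary rectangle of area 0.15
before = np.mean([test(p, q) for p, q in pts])
after = np.mean([test(*baker(p, q)) for p, q in pts])
print(before, after)                    # both approximately 0.15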
By a Lyapounov variable (or negative entropy operator) for the baker's transformation we mean a bounded operator M on L² such that: (i) M ≥ 0, i.e., (ρ, Mρ) ≥ 0 for all ρ ∈ L²; (ii) Mρ_equ = ρ_equ; and (iii) (U^n f, M U^n f) → 0 monotonically as n → +∞ for all f in K⊥, the orthogonal complement of ρ_equ.

To construct Lyapounov variables M for the discrete group U^n, we follow the general scheme described in ref. 4 and construct first the operator T representing the "internal time" or "age" of the system. In the case of a continuous-parameter group U_t = e^{-iLt}, the operator T is, by definition, a self-adjoint operator that satisfies the canonical commutation relation 1.8 on a suitable dense set of vectors of K⊥. Expressed in terms of the unitary group e^{-iLt}, this relation becomes

e^{iLt} T e^{-iLt} = T + tI.   [3.1]

Thus the operator of "age" for the baker's system is, by definition, a self-adjoint operator T that satisfies

U^{-n} T U^n = T + nI   [3.2]

on K⊥.

Let E_n denote the eigenprojection of T corresponding to the eigenvalue n:

T = Σ_{n=−∞}^{+∞} n E_n.   [3.3]

The E_n's, being eigenprojections of a self-adjoint operator, satisfy

E_n E_m = δ_{nm} E_n   and   Σ_{n=−∞}^{+∞} E_n = I.   [3.4]

Condition 3.2 is now equivalent to the condition

U^{-1} E_m U = E_{m−1}.   [3.5]

Thus the problem before us is to construct a family E_n (n ranging over the integers from −∞ to +∞) satisfying 3.4 and 3.5. To this end, we make use of a special basis of K⊥ constructed below. Let P = {Δ₀, Δ₁} be the partition of the unit square into left and right halves. Define χ₀ = 1 − 2φ_{Δ₀} (with φ_{Δ₀} the characteristic function of Δ₀) and

χ_n = U^n χ₀   (n = ±1, ±2, ...).   [3.6]

For any finite set S = (n₁, n₂, ..., n_r) of (positive or negative) integers, put

χ_S = χ_{n₁} χ_{n₂} ⋯ χ_{n_r}.   [3.7]
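The functions χ_n and χ_S can also be sampled numerically. The following sketch is our addition (the index sets and sample sizes are arbitrary): it evaluates χ_n(ω) for n ≥ 0 by applying B^{-1} n times and reading off the half of the square in which the image lies, builds χ_S as the product 3.7, verifies pointwise the shift relation Uχ_S = χ_{S+1} established in the next paragraph (Eq. 3.8), and estimates a few inner products by Monte Carlo to exhibit the orthonormality of the χ_S.

import numpy as np

def baker_inv(p, q):                   # B^{-1} on the unit square
    if q < 0.5:
        return p / 2, 2 * q
    return p / 2 + 0.5, 2 * q - 1

def chi(n, p, q):                      # chi_n = U^n chi_0 for n >= 0 (negative n would use B instead of B^{-1})
    for _ in range(n):
        p, q = baker_inv(p, q)
    return -1.0 if p < 0.5 else 1.0    # chi_0 = 1 - 2 phi_{Delta_0}: -1 on the left half, +1 on the right half

def chi_S(S, p, q):                    # Eq. 3.7: finite product over the index set S
    out = 1.0
    for n in S:
        out *= chi(n, p, q)
    return out

rng = np.random.default_rng(3)
S = (0, 2, 3)

# shift relation, checked pointwise: (U chi_S)(omega) = chi_S(B^{-1} omega) = chi_{S+1}(omega)
for p, q in rng.random((200, 2)):
    assert chi_S(S, *baker_inv(p, q)) == chi_S(tuple(n + 1 for n in S), p, q)

# orthonormality of {chi_S}: Monte Carlo estimates of the inner products, approximately delta_{S,S'}
pts = rng.random((100_000, 2))
for S2 in [(0, 2, 3), (1,), (0, 2)]:
    est = np.mean([chi_S(S, p, q) * chi_S(S2, p, q) for p, q in pts])
    print(S2, round(est, 3))           # near 1 for S2 == S, near 0 otherwise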

Making use of the independence of the partition P, one can verify that the collection of functions {χ_S}, with S ranging over the finite subsets of the integers, is an orthonormal set, and the generating property of P entails the completeness of this set in K⊥. Moreover, from the definitions of χ_n and χ_S, it follows that the unitary operator U induced by the baker's transformation B acts as a shift on this orthonormal basis:

Uχ_S = χ_{S+1}.   [3.8]

Here S + 1 denotes the subset (n₁ + 1, n₂ + 1, ..., n_r + 1) if S stands for (n₁, ..., n_r). In view of 3.8, the eigenprojection E_n (n = 0, ±1, ±2, ...) of the operator time T can now be defined as the projection onto the subspace spanned by all χ_S corresponding to subsets S such that n ∈ S and all other integers in S are less than n. Property 3.4 of the E_n thus defined follows from the fact that {χ_S} is a complete orthonormal basis of K⊥, and condition 3.5 is a consequence of 3.8.

The construction of Lyapounov variables M now follows directly. They are monotonically decreasing operator functions of T. Corresponding to every (two-sided) sequence λ_n² of nonnegative numbers bounded by 1 and decreasing monotonically to 0 as n → +∞, the operator

M = Σ_{n=−∞}^{+∞} λ_n² E_n + P₀   [3.9]


(with E_n the eigenprojections of T and P₀ the projection onto ρ_equ) is easily verified to be a Lyapounov variable for the discrete group U^n.

To conclude this section, let us mention how the operator time T allows us to associate an "age" or "internal time" with well-defined distribution functions ρ, or rather with the excess distribution functions ρ̃ = ρ − ρ_equ. If ρ̃ is an eigenfunction of T, the corresponding eigenvalue is the "age" associated with the distribution function ρ. For instance, the excess part of the distribution function 1 + χ_n is χ_n, which is an eigenvector of T for the eigenvalue n. Condition 3.5 (or 3.8) makes this association a "consistent" one, in the sense that it ensures that the change in the "internal time" or "age" of the system brought about by the dynamical evolution matches the progress of the external (or observer's) time that serves to label the dynamical group. The existence of a consistent "internal time" operator T in this sense is, of course, not possible for all dynamical systems. If the excess distribution function is not an eigenfunction of T but a combination of eigenfunctions corresponding to two or more distinct eigenvalues, then one cannot associate a well-defined age to ρ. But one can still associate an "average age" T̄(ρ) to ρ by the formula

T̄(ρ) = (1/||ρ̃||²) (ρ̃, Tρ̃),   [3.10]

just as in quantum mechanics.
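As a small worked example (ours, not from the paper): take the excess distribution function ρ̃ = (χ_S + χ_S′)/√2 with max S = 1 and max S′ = 4, so that, by the definition of the E_n, ρ̃ combines eigenvectors of T with eigenvalues 1 and 4. Then ||ρ̃||² = 1 and formula 3.10 gives T̄(ρ) = (1/2)(1) + (1/2)(4) = 5/2. After one application of U the two eigenvalues become 2 and 5, so the average age becomes 7/2; the average age thus advances by exactly one unit per step of the dynamics, in accordance with 3.2.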

4. "Equivalence" of Bernoulli systems with stochastic Markov processes Let 0 < Xn < 1 (n = 0, 1, .. .) be any sequence of positive numbers decreasing monotonically as n increases. If
in which Ens T.

are

A= E XnEn +Po, [4.1] n =-X the eigen projections of the operator time

+0

It can be shown (10) that this transformation preserves the positivity and the normality. However, A-1 is not positivity preserving. Therefore, to make AUnA-1 positivity preserving we require that Xn be such that the sequence vn = Xtn+ / Xn also decreases monotonically as n increases [for instance, take Xn = (1 + en)-1]. With this requirement, it can be shown that AUnA-I is a semigroup of Markov process and that A has all the properties listed in Section 2, for n > 0. The transformed group AUnA-' is of course defined for both positive and negative n. But it is important to note that it is only for n > 0 that AUnA-' preserves positivity. We may of course define another transformation A such that AUnA'- preserves positivity for n < 0. The important point is that the same transformed group AUnA-I cannot correspond to probabilistic process for both positive and negative time n. This breaks the symmetry between the positive and negative direction of time and causes the physical evolution to be described by a semigroup rather than a group. 5. Concluding remarks
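A minimal numerical illustration of this time asymmetry can be given by restricting everything to a single χ-chain (this reduced model and its truncation to a finite window are ours, not a computation from the paper): on the basis {χ_S, χ_{S+1}, χ_{S+2}, ...} the operator U acts as a shift, Λ acts diagonally through the weights λ_n, and ΛUΛ^{-1} becomes a weighted shift with weights λ_{n+1}/λ_n ≤ 1. The sketch shows that the transformed forward evolution contracts while the transformed backward evolution amplifies; it illustrates only this contraction asymmetry, not the positivity statement, which requires the full construction of ref. 10.

import numpy as np

lam = lambda n: 1.0 / (1.0 + np.exp(n))   # the example sequence lambda_n = (1 + e^n)^(-1) mentioned above
idx = np.arange(-10, 11)                  # finite window of T-eigenvalues n (the truncation is ours)
D = len(idx)

U_fwd = np.eye(D, k=-1)                   # U restricted to one chi-chain: chi_n -> chi_{n+1}
U_bwd = np.eye(D, k=+1)                   # U^{-1} on the same chain: chi_n -> chi_{n-1}
L = np.diag(lam(idx))                     # Lambda = sum_n lambda_n E_n restricted to the chain
L_inv = np.diag(1.0 / lam(idx))

W_fwd = L @ U_fwd @ L_inv                 # Lambda U Lambda^{-1}: weighted shift with weights lambda_{n+1}/lambda_n <= 1
W_bwd = L @ U_bwd @ L_inv                 # Lambda U^{-1} Lambda^{-1}: weights lambda_{n-1}/lambda_n >= 1

v = np.zeros(D); v[10] = 1.0              # unit vector localized at the eigenvalue n = 0
for n in range(1, 7):
    fwd = np.linalg.norm(np.linalg.matrix_power(W_fwd, n) @ v)
    bwd = np.linalg.norm(np.linalg.matrix_power(W_bwd, n) @ v)
    print(n, round(fwd, 6), round(bwd, 6))   # fwd decreases toward 0, bwd grows: only one time direction gives a contraction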

5. Concluding remarks

The most striking conclusion to emerge from our discussion is that the deterministic and probabilistic descriptions are not as radically different as has been thought in the past, and that "coarse-graining" or "contraction" is not the only way of relating them. We have demonstrated the possibility of linking probabilistic and deterministic descriptions by simply a "change of representation" that involves "no loss of information."

Looked at from a slightly different point of view, the present work could be considered as part of a theory of entropy-increasing evolutions and their relation to deterministic dynamics. Historically, Boltzmann's kinetic equation represents the first example of an entropy-increasing evolution. To arrive at this equation, Boltzmann had to introduce probability into dynamics from outside. In this work we have demonstrated that entropy-increasing evolutions can arise from deterministic dynamics simply as a result of a "change of representation" brought about by invertible (nonunitary) similarity transformations Λ. This finding is in conformity with, and concretely illustrates, the point of view toward the problem of irreversibility developed in ref. 2.

An especially attractive feature of the theory of irreversibility emerging from the considerations of refs. 2 and 4 and the present work is the close link it establishes between instability (expressed in terms of mixing and other ergodic properties), inherent irreversibility (expressed in terms of the existence of Lyapounov variables M), and intrinsic randomness (expressed in terms of the existence of an "equivalence" with a stochastic Markov process) of dynamical motion. We have shown that the class of systems exhibiting instability contains the class featuring inherent irreversibility (for instance, the K-systems), which in its turn contains the class (e.g., Bernoulli systems) displaying intrinsic randomness of motion. The precise boundaries between these classes are, however, at present not known, and it will be an interesting problem to determine their precise extent. Let us also note that the notion of intrinsic randomness of dynamical systems formulated here differs from, and seems to refer to a more intrinsic form of randomness of dynamical evolution than, that expressed by strict positivity of the Kolmogorov "entropy."

The concepts of instability, inherent irreversibility, and intrinsic randomness are formulated and studied here in the frame of classical mechanics. It obviously will be interesting and important to extend and study these concepts for quantum systems as well as for gravitational systems requiring general relativity for their description. We plan to come back to this question in subsequent communications.

Let us note that, for an arbitrary system, the change in the average ⟨T⟩ of the internal time operator in a state ρ (given by 3.10) in the course of the evolution can easily be shown to be equal to the change in the ordinary time t. The internal time operator (when it exists) contains, however, additional information about the physical system, concerning the fluctuation or dispersion of the "internal age" around its average value. In both classical and quantum dynamics, time appears simply as an external parameter labeling the dynamical group. In contrast, the internal time operators considered here are new physical observables associated with the irreversible evolution of the system. From this point of view, the concept of internal time operator is closer to the concepts of thermodynamic and biological time and may serve as the "microscopic" counterpart of these phenomenological concepts.

1. Yosida, K. (1974) Functional Analysis (Springer, New York).
2. Prigogine, I., George, C., Henin, F. & Rosenfeld, L. (1973) Chem. Scr. 4, 5-32.
3. George, C., Henin, F., Mayné, F. & Prigogine, I. (1978) Hadronic J. 1, 520-573.
4. Misra, B. (1978) Proc. Natl. Acad. Sci. USA 75, 1627-1631.
5. Poincaré, H. (1889) C. R. Hebd. Seances Acad. Sci. 108, 550-553.
6. Moser, J. (1974) Stable and Random Motions in Dynamical Systems (Princeton Univ. Press, Princeton, NJ).
7. Prigogine, I., Mayné, F., George, C. & De Haan, M. (1977) Proc. Natl. Acad. Sci. USA 74, 4152-4156.
8. Ornstein, D. S. (1974) Ergodic Theory, Randomness and Dynamical Systems (Yale Univ. Press, New Haven, CT).
9. Shields, P. (1973) The Theory of Bernoulli Shifts (Univ. of Chicago Press, Chicago).
10. Misra, B., Prigogine, I. & Courbage, M. (1979) Physica, in press.
11. Dynkin, E. B. (1965) Markov Processes (Springer, New York), Vol. 1.
